Recently a colleague from outside of government said to me:
"If only [government agency x] had shown it to us before launching, we could have given some feedback to make it better."
I conceptually agree that such an exercise would have been valuable, but the challenge I see with a statement like this is that it is - to a limited extent - the equivalent of insider trading. My colleague may not have intended his statement that way. As a government of, by, and for the people, though, it's important to avoid playing favorites (setting aside party politics and so on for a moment). Allowing one person or group a "sneak peek" while excluding others isn't reasonable or acceptable.
The underlying premise of my colleague's statement - get something into the hands of outside stakeholders so they can give feedback - is a valid one. Sadly, it bumps up against the systemic aversion to risk that exists broadly across government. (There is a strong argument to be made that one of government's basic functions is to reduce risk for all of society - hence organizations like the police, the SEC, and the FAA - but that's another discussion.) This risk can be broken down into two types: reputation and liability.
Avoiding risks to reputation generally means avoiding the proverbial black eye - newspaper articles arguing that government should have done something differently, or the public complaining about poor use of the taxes they have paid. I think of this as the "get it done right the first time" factor.
Liability is a different beast: direct harm has come to a constituent as a result of an action or inaction for which the government theoretically bears some responsibility. The repercussions can include financial restitution (more tax money spent), dramatic policy changes, invalidation of laws, criminal liability, and more.
A lot of time - and therefore money - is spent addressing risk, and a significant share of the bureaucracy exists for risk management. (I saw a recent example of this, where a small mistake - one posing only a tiny risk to reputation - resulted in the implementation of a governance process just in case it should ever happen again.) Aversion to risk isn't just a government thing; it's a human thing. We teach our children to look both ways before crossing the street; publicly traded corporations rarely make decisions they know will cause their stock to lose value. We value feeling safe.
The question, therefore, is how a government can explore new capabilities intelligently without exposing itself to additional risk - and how it can do so in a way that avoids bias. When it comes to public-facing technology, the answer, at least to me, seems relatively simple: set the correct expectations with the stakeholders. And of course, "stakeholders" doesn't mean just the public; it also means the political leadership, the executive management, and so on.
I did exactly this when I helped launch the NYC Developer Portal beta a few months ago. Internally, we set expectations that we would launch the site without any fanfare (no press release, no big event, etc.). Instead, we would do a small public launch where we could invite people to provide feedback about what they saw. When we did the quiet launch, we also carefully set public expectations: this is a work in progress (it's not perfect and shouldn't be heavily relied upon); we want your feedback (the public gets to help improve it); and more is to come (some planned features aren't ready yet). Both internally and externally, we stated that we planned to have a bigger "official" release at some undefined point in the future. We branded it as a "beta" effort, which it still remains today.
More important, though, is the need to establish a "beta" program - a systemic plan to set internal and public expectations correctly while engaging public stakeholders for feedback. It starts with well-defined criteria for what can enter the program. Clearly, some projects are poor candidates because the risks are significant. But many projects, even ones we often think of as critical, could be candidates for a "beta" release. Once the criteria are defined, it becomes much easier to convince internal stakeholders that the rewards outweigh the risks.
A formalized "beta" program allows us to avoid favoring one individual or group over another. It allows us to gather public stakeholder feedback in a constructive manner. Finally, it allows us to move more freely within a highly risk-averse culture.