Public choice theory, government technology, and management problems all the way down
Thinking out loud about an incomplete (if pervasive) mental model of this stuff
Programming note: I always enjoy replies by email as well as comments. They encourage me to do a bit more of this, so don’t hesitate!
Theory is insufficient and theory is necessary. Without a model highlighting certain elements of the world for us, we do not have the conceptual bandwidth to process the world and make enough sense of it to make decisions.
Something I observe in the world of talk about government technology is an occasional implicit model. This model roughly says: “X is bad because it is serving private interests instead of public interests; therefore by making it publicly operated, it will fix the problem, and public interests will be served.”
The canonical case of this is the varied scandals around TurboTax and the IRS “Free File” program. The narrative goes, roughly:
“TurboTax is a private service (part of publicly traded Intuit, Inc.; a fact which makes “maximizing shareholder value” its explicit, reaffirmed goal)
It is built on top of a public service and need (filing taxes)
TurboTax has done bad things (true!) because it is serving its interests rather than the public’s
Therefore, by having the government build software that replaces TurboTax, we will have something that serves the public interest instead”
I’m picking this case because it is the most egregious. (Intuit did, after all, effectively commit fraud per a recent FTC ruling by even advertising its offerings as free. It also hid from Google Search results the actually-free options the Free File program required it to offer. Not great!)
But the same shape of an analytic model of the world exists around lots of aspects of government technology — more generally, of the form “private technology development vendors serve their own interests rather than government’s, and therefore such development should be done by public sector staff.”
One starts to get to grayer areas as one follows this (should government build its own cloud data center? or rely on well-trodden commodity alternatives operated by Amazon, Google, Microsoft?) but you still do see variations on this.
And overall, the shape of this argument is, to me, “negative” — in the purely definitional sense as defining good in opposition to a concrete bad.
Arbitrating “public interest”: public choice theory
Perhaps because I have a diploma that carries the phrase Political Economy of Industrial Societies on it, I have an interest in the theoretical underpinnings of (and objections to) this line of thinking!
Specifically, the issue I arrive at is one that public choice theory speaks to very well. Per Wikipedia, here are two useful verbatim summaries:
“it is the subset of positive political theory that studies self-interested agents (voters, politicians, bureaucrats) and their interactions”
“[the study of] how elected officials, bureaucrats and other government agents can be influenced by their own perceived self-interest when making decisions in their official roles”
In short, public choice theory says that even the specific individuals tasked with acting in “the public interest” face their own incentives, distinct from those of private sector actors but no less useful to understand.
And we forget that a major part of Government doing XYZ is in fact defining and arbitrating what constitutes the public interest. That then sets up the downstream feedback loops and incentives individual agents face.
It’s also worth noting that public choice theory is broadly positive, which is to say descriptive: it seeks to describe what is going on, rather than what should go on.
And I think that’s immensely valuable, and underpriced in discussion of government technology today. While a given product manager at Intuit faces some structural incentives that will recur as feedback loops from above, so too will a product manager at a federal agency tasked with doing the same. And understanding those incentives is a useful exercise.
What’s more, I’d argue it’s a critical exercise for any practitioner: if you ever find yourself in one of these public sector organizational contexts, it will become overwhelmingly clear very quickly that how, specifically, the notion of a “public interest” is defined and arbitrated very much shapes your options.
A few positive (descriptive) observations:
In general, a feature of public sector technology is that the definition of the public interest is delegated up (to political processes, legislative ones, appointees’ decisions). I have seen much frustration when people realize that acting in government often means executing against a definition of the public interest they personally disagree with. But that is… definitional.
A private sector actor may have financial incentives, and/but public sector actors have both political and budget-process incentives. To assume these align perfectly with any one individual’s notion of the public interest is to be confronted very quickly with a messier reality: the realization that your model was partial at best.
What is legible gets managed. What you tend to see is that the things legible to political and bureaucratic processes are the ones that actually get executed on. I see this as a structural feature, namely that the primary feedback loop is audit and oversight mechanisms. Not having documentation showing a process was followed is something an auditor can ding you on. Not being “user centered” is… not legible in and of itself. An auditor cannot have a finding on it; there is no clear checklist of remediation for it. This structural aspect drives a lot more of what we see than we’d like to acknowledge, I think. It’s why highly legible characteristics of technology — e.g. being mobile responsive — are so much more often the focus, and delivered on well, compared to the messier things.
(Brief aside: I think the work of operationalizing “user centered technology” into legible-to-government characteristics is in some ways half the ballgame. I’ll put it out there that my bucket-list item is this: the technology service makes it easy to submit complaints/feedback, and those complaints/feedback are clearly documented and acted upon in short order.)
Management problems all the way down
The last piece here: if you want to align the work of a public sector-executed technology project to some public interest, you actually face a surprisingly similar problem to that of managing an external private actor (whether a vendor or a TurboTax-like tech service).
That is: operationalizing what “good” is.
In the IRS Free File program, “good” was defined broadly as “the tax provider verifiably has a free option available to lower-income Americans.”
And as we saw with TurboTax, this operationalization was incomplete. It turned out what we really cared about was that lower-income Americans used it. But we didn’t manage it that way.
Similarly, we should ask: how are we operationalizing “good” for any given government technology project?
That turns out to be hard. I have some answers (measuring and reporting task completion, and the complaints/feedback mechanism I mentioned before, for example).
But I do find this to be an analytic gap. Often I think I see projects operationalize “good” as “doing user research.” As with our Free File example, you can readily see how that definition of success might fail in a context of adverse individual incentives set up by the org. User research is a means. What is the end?
Similarly, we can’t just call a project “user centered” and put up a banner saying “Mission Accomplished.”
That gets to another (I think) instructive insight from public choice theory that I’d apply to government technology more broadly: after terms of success have been set, everyone is incentivized to frame the outcome as positively as possible. (This flows really organically from political incentives. It’s actually probably a more generic problem of the political economy of service delivery.)
Now I’ve got to run because I have a newborn and this was timeboxed.
For another time:
What if external advocates did user testing independently?
What if GAO looked at user feedback on a service? Metrics on completion rates?
What other ways could we make quality legible to government processes?