Understanding the Cascade of Rigidity
Lessons from GPS satellites, Medicare, and federal hiring, for AI and the coming disruption
Thomas L. Hutcheson asked in the comments on my last post about the spat over “regulations gone bad”: But why does not each layer, closer to the actual practice, make the reg more flexible?
That is such a good question, and I want to unpack it. What Thomas is referring to is my assertion that part of what’s at play when regulations sound reasonable on paper but end up functioning more restrictively is what I call the cascade of rigidity. I think it’s a useful concept to clarify, because other corners of the comments are caught in unhelpful fights about whether regulation is a good thing or a bad thing, with some folks correctly pointing out that it’s not a binary. Any time you subject an individual or institution to regulation, you are making a tradeoff, with the possible benefit on one side (keeping kids safe from unsanitary conditions in daycare, for example) and the costs of compliance on the other (inspections, or extra sinks to separate handwashing from food prep). The trick is to get the highest benefit for the lowest cost. How you craft the regulation matters, and the best regulators know this. But they don’t always understand how the words they write will be operationalized.
Everyone’s favorite story of the cascade of rigidity isn’t about regulation per se, but rather about how requirements get ever more rigid as they descend through a hierarchy. It’s the story of how an enterprise service bus, believed to be required by law, tanked the software update for the next-generation GPS satellites, told through the eyes of Matthew Weaver of the U.S. Digital Service, who originally wrote about it here. The story comes from Chapter 4 of Recoding America, but I told an abridged version of it in a paper for the Niskanen Center last year, so I’ll borrow from that here.
Weaver had been brought to Raytheon, the company the Air Force had hired to write the software for the next-generation GPS satellites, because the Raytheon team was behind schedule and over budget. This issue of data transmission to the ground stations and back again was one of a few problems holding them back. There is an industry-standard way of doing this, a simple, reliable protocol that is built into almost every operating system in the world. (For the nerds among us, it’s UDP, the User Datagram Protocol.)
But this team wasn’t using this simple protocol on its own. Instead, the team had written a piece of software to receive the message from that protocol, read the data, and then recode it into a different format, so they could feed it into a very complex piece of software called an Enterprise Service Bus, or ESB. The ESB eventually delivered the data to yet another piece of software, at which point the whole process ran in reverse order to deliver it back to the original, simple protocol. Because the data was taking such a roundabout route, it wasn’t arriving quickly enough for the ground stations to make the calculations needed. Using the simple protocol alone would have made the entire job a snap—as easy as nailing a couple of boards together. Instead, they had this massive Rube Goldberg contraption that was never going to work.
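For the nerds, here’s roughly what “just use the simple protocol” looks like. This is a minimal sketch in Python, not the actual ground-station code; the port number and payloads are invented for illustration:

```python
import socket

# A minimal UDP exchange: one system call to send, one to receive.
# The port and payload are hypothetical; the real ground-station software
# is far more involved, but the transport layer really is this simple.

def send_reading(payload: bytes, host: str = "127.0.0.1", port: int = 9999) -> None:
    """Send a single datagram. No broker, no bus, no re-encoding."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

def receive_reading(port: int = 9999) -> bytes:
    """Block until one datagram arrives and hand back its payload."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))
        data, _addr = sock.recvfrom(4096)
        return data
```

Every hop the team was forced to add (decode, re-encode, route through the ESB, then reverse it all on the far side) added latency, and latency was exactly what the ground stations couldn’t afford.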
The people on this project knew quite well that using this ESB was a terrible idea. They’d have been relieved to just throw it out, plug in the simple protocol, and move on. But they couldn’t. It was a requirement in their contract. The contracting officers had required it because a policy document called the Air Force Enterprise Architecture had required it. The Air Force Enterprise Architecture required it because the Department of Defense Enterprise Architecture required it. And the DoD Enterprise Architecture required it because the Federal Enterprise Architecture, written by the Chief Information Officers Council, convened by the White House at the request of Congress, had required it. Was it really possible that this project was delayed indefinitely, racking up cost overruns in the billions, because Congress had ordered the executive branch to specify something as small and technical as an ESB?
The short answer is no. But it’s important to understand why pretty much everyone thought it was yes, and it ties back to Clinger and Cohen’s attempt to get the executive branch to take tech seriously (the Clinger-Cohen Act of 1996). Among other provisions, the law as enacted required each federal agency to have an “Information Technology Architecture.” There was a sense that the technical architectures of the various agencies should be coordinated in some way, so the CIO Council got the job of coming up with that uber-architecture. The result seems to have been a classic example of design by committee. However the document came together, the 434-page Federal Enterprise Architecture, released in 1999, requires that federal technology solutions have a “service-oriented architecture.” And it defines that in terms of an Enterprise Service Bus.
It’s true that many laws and policies fail because they are overly prescriptive and lock implementers into a narrow set of options. But that is not quite what happened here. Neither the Clinger-Cohen Act nor any other law explicitly required an ESB. Nowhere in the Federal Enterprise Architecture does it say “thou shalt always use an enterprise service bus.” There are five mentions of “enterprise service bus” in the document, but all of them are in charts or diagrams listing various application components that could support interoperability. ESBs became mandatory in practice within the Department of Defense through overzealous interpretations of law, policy, and guidance, combined with lack of technical understanding.
This is the cascade of rigidity, which I also talk about in terms of how culture eats policy. When the culture is characterized by risk aversion and incentives for overspecification, whatever lovely policy you’ve written is likely to get “eaten” in implementation – in other words, it will not have the effect you intended it to have. To quote myself again, this time directly from the book:
Even when legislators and policymakers try to give implementers the flexibility to exercise judgment, the words they write take on an entirely different meaning, and have an entirely different effect, as they descend through the hierarchy, becoming more rigid with every step. When rules rarely have their intended effect, more rules are not likely to improve outcomes.
Back to last Friday’s post on MGP. The Current Affairs guy paints MGP as erroneously and misleadingly holding up an example of regulation gone mad. Presumably what they both mean is regulation that was written with too heavy a hand. That may be, but her own description of the confusion surrounding it makes it clear that what’s problematic is the interpretation of the regs by (presumably state-level) enforcement agencies, and the way everyone in the chain of command, from the people writing the regs all the way down to the childcare worker who believes she’s not allowed to peel a banana, ends up pointing fingers at each other, saying “Not my fault! I never said that! Someone is lying!” They are failing to acknowledge the cascade of rigidity.
The cascade of rigidity is the result of an open-loop system, in which implementation teams neither test their programs in the real world nor loop back to the source for adjustments. Agency staff are commonly taught to treat legal language as literal operating instructions, as if a programmer had written code and they were the computer executing it. But as any programmer will tell you, code rarely works as intended on the first try. It works after trying one approach, testing it, adjusting, and repeating that cycle over and over. That cycle of adjustment is very difficult to engineer within policy implementation today. Chapter 10 of Recoding America shows this happening again and again in the implementation of a Medicare law, though in that case the implementers heroically fought to close the loop and break the cascade of rigidity. The longer story is worth reading in the book, but here’s a summary (with a sketch of that test-and-adjust loop after it):
MACRA (Medicare Access and CHIP Reauthorization Act) was designed to pay doctors more for higher-quality care. But an implementation team at CMS knew that doctors were already frustrated with the burdensome and confusing ways they had to report their data under the existing program, and many were so concerned that the new system would be just as bad that they were threatening to stop taking Medicare patients. Thus, a law designed to improve the quality of care threatened to degrade it, especially for patients in rural areas who relied on the small practices that were most affected.
Recognizing how challenging the administrative requirements could be for practices with fewer resources and limited Medicare revenue, one provision in the law exempted doctors who treated a minimal number of Medicare patients. But CMS’s initial interpretation of this provision would have required all providers to collect and submit a full year’s worth of data in order to demonstrate they fell below the exemption threshold. This meant exempt doctors would still have to comply with all the program’s requirements, including updating their systems and reporting data, only to be excused from all this at a later date. It’s not hard to see why this approach, while faithful to the letter of the law, would have worked against the intent of lawmakers. Those doctors would have left the program, hurting the very patients the law meant to help. It took months of negotiation between the delivery team on one side and regulators and lawyers on the other to convince them to exempt doctors based on the prior year’s data.
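If it helps to see the two readings side by side, here’s a toy sketch. The threshold and inputs are invented for illustration; they are not CMS’s actual rules:

```python
# A toy contrast between the two interpretations of the low-volume
# exemption. The threshold and inputs are hypothetical illustrations.

LOW_VOLUME_THRESHOLD = 100  # made-up cutoff for Medicare patients per year

def exempt_under_initial_reading(current_year_reports: list[int]) -> bool:
    """CMS's first interpretation: every provider updates their systems,
    collects, and submits a full year of data (the very burden the
    exemption was meant to lift) just to prove they're below the line."""
    return sum(current_year_reports) <= LOW_VOLUME_THRESHOLD

def exempt_under_delivery_team_reading(prior_year_patient_count: int) -> bool:
    """The reading the delivery team fought for: use the prior year's
    claims data, which CMS already holds, so exempt doctors never have
    to report anything at all."""
    return prior_year_patient_count <= LOW_VOLUME_THRESHOLD
```

The answer comes out the same either way; what differs is who does the work, and when. Under the first reading, the cost of proving you’re exempt is nearly the cost of full compliance.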
Another provision allowed smaller practices to form “virtual groups” to gain advantages enjoyed by larger practices. Staff interpreted this provision as a mandate to create a “Facebook for Doctors,” a platform for doctors to find and connect with each other. A staffer on loan from the USDS doubted that Congress intended for CMS to create a social media platform, especially considering the limited time and resources available. She took the almost unheard-of step of consulting the Office of House Legislative Counsel, and confirmed that Congress simply wanted to make it easier for small practices to report together and had no intention of mandating a “Facebook for Doctors.” As Elon Musk says, “Don’t optimize what shouldn’t exist.” A Facebook for Doctors did not need to exist, and CMS narrowly avoided having to build it.
Under more common circumstances, these and other overly literal interpretations of the law would have resulted in a burdensome, unwieldy, and ultimately unsuccessful implementation. Doctors would have simply opted out, leaving patients with fewer options, and some in rural areas with none. Conflicts like these too rarely resolve in favor of common sense.
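To make the open-loop point concrete, here is the programmer’s cycle in miniature. The names are my own illustration, not anything from CMS or the book:

```python
# The closed loop programmers take for granted: try an approach, test it
# against reality, adjust, repeat. Policy implementation as commonly
# practiced is the open-loop version: ship the first interpretation and stop.

def closed_loop(first_attempt, works_in_practice, adjust, max_rounds: int = 10):
    """Iterate until the implementation survives contact with reality."""
    attempt = first_attempt
    for _ in range(max_rounds):
        if works_in_practice(attempt):
            return attempt
        attempt = adjust(attempt)  # loop back to the source and revise
    raise RuntimeError("The approach itself may be wrong; escalate.")
```

The MACRA team effectively ran this loop by hand: they tested interpretations against doctors’ reality, looped back to the regulators, and in one case went all the way back to the people who wrote the law.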
If the delivery team on MACRA hadn’t gotten way out of their lane and forced the issue of, say, how to determine which doctors were exempt, there would have been totally justified complaints that the regulations were absurdly burdensome. But the law as written was fine. In this case, it was the regulators introducing the unhelpful rigidity, and the people closest to the users (here, the doctors) trying to reject it. In the case of the GPS satellites, the people closest to the software would have done anything to remove the requirement of the ESB. They knew it was the problem. This is more common than you might think. Much like the software itself, with the simple, clear, standard UDP at the top and the bottom and the Rube Goldberg machine of the ESB crammed tragically in the middle, the people at the top and the bottom can have pretty reasonable ideas. What gets stuck in the middle can break it all.
But I don't want to let the people at the top off the hook. Like I said in my last post, their good intentions don’t matter. It’s their job to tame the cascade of rigidity, or at least have strategies to mitigate it, so the magic words of law and policy cast the spell they intended. And the cascade of rigidity is in part a dysfunction of their own creation.
Biden’s AI executive order is an example of how people at the top need to meaningfully understand the cascade of rigidity and account for it when they write policy. I’m a fan of safeguards when it comes to using AI in public sector contexts, but the procedures required by the executive order include public consultation with outside groups, studies to demonstrate the equity impacts of the application of any AI-enabled technology, the creation of a mechanism to appeal the AI’s decision, and a requirement to allow individuals to opt out of any use of AI. It’s easy to imagine uses of AI where it makes sense to allow members of the public to opt out, but there are also those where opting out is entirely impractical, like the postal service’s use of AI-enabled handwriting recognition to read addresses on envelopes. Equity studies can take years; exact averages are hard to calculate because so many studies are still in progress years on, often because the data required to conduct them doesn’t exist or is very hard to access. The data is hard to access in part because of laws like the Privacy Act of 1974, itself an earlier well-intentioned guardrail that has become a barrier to commonsense data sharing. What sound like reasonable constraints necessary for the safe use of AI could in effect stop its use, when the speed of AI development is on such a widely different timescale from the bureaucracy’s. You can read more on how AI might meet the cascade of rigidity here.
One of the most glaring examples of the cascade of rigidity in action is how government assesses candidates for positions. A system supposedly designed around merit, explicitly intended to counter the pre-1883 spoils system, has come to favor insiders and make it very hard for outsiders to join government. But let's pick this up after the Thanksgiving holiday. I have a lot to say there. (And yes, I know Elon Musk retweeted me speaking on this topic with Ezra Klein.)
But I still haven’t answered Thomas’s question about why each step down in the hierarchy makes the rule or reg or requirement more rigid instead of less. It’s a hard question to answer, and what I usually hear in response are “it’s the culture” and/or “it’s the incentives.” But that also doesn’t really answer the question. The best answer I have for why that’s the culture or why those are the incentives is that decisions made in government are frequently contested. We built our government to have a lot of surface area for objection from those outside it, for obvious reasons. If you believe someone is going to object to the decisions you make, you try not to make them. A culture emerges in which the goal is to have decisions be the outcomes of processes, not people. You want to be able to defend any decision from criticism by demonstrating strict adherence to a process in which no judgment can be questioned, because no judgment was used.1
It doesn’t work, of course. Someone will end up being criticized, because many decisions made without the use of judgment end up being bad ones. Who gets that criticism, and how it gets addressed, is a big topic, and the best thing I can say about it is go read The Unaccountability Machine by Dan Davies, which I will write more about soon.
There is a lot of talk of major disruption in federal government. I’ve been advocating for major change for a long time. I also hope that the disruptors dig in and understand the dynamics that take good intentions and turn them into bad outcomes, or we risk repeating the cycles that got us here.
1. Quoting myself here from https://www.digitalistpapers.com/essays/ai-meets-the-cascade-of-rigidity
Here's a question: what can you do when you come into a rigidly planned, doomed-to-fail IT project halfway through?
Love this piece, Jen. The 'cascade of rigidity' is not confined to the public sector; I've seen it in spades across multiple Fortune 500 companies.
This happens because managers at each level see their role as, well, managing. And if you press a typical manager to explain what that means, they'll ultimately arrive at 'control.' They exercise this control in two key ways: first, by reducing ambiguity when passing down guidance, eliminating room for misinterpretation; second, by making supervision more tractable through detailed requirements. In theory, this makes performance more 'legible,' though it often achieves the opposite.
With 3-4 layers, the problem remains manageable (pun intended) since feedback loops are faster and more direct. But it grows exponentially worse as layers multiply. Most large organizations operate with 10+ layers, so the distortions are significant.
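A toy model shows why. If each layer preserves only a fraction of the original intent when it rewrites guidance, fidelity decays geometrically; the 0.9 figure below is made up purely for illustration:

```python
# Toy model of compounding distortion: assume each management layer
# preserves a fixed fraction of the original intent when it rewrites
# guidance downward. The 0.9 fidelity figure is invented for illustration.
PER_LAYER_FIDELITY = 0.9

for layers in (3, 4, 10, 15):
    surviving = PER_LAYER_FIDELITY ** layers
    print(f"{layers:>2} layers: {surviving:.0%} of the original intent survives")

# 3 layers: 73%; 4 layers: 66%; 10 layers: 35%; 15 layers: 21%.
# Shallow hierarchies degrade gracefully; deep ones lose the plot.
```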
There's a compounding effect in hierarchical systems where influence depends on rank rather than value added. These organizations select for and promote people who excel at ensuring compliance and being compliant. Over time, the chain of command fills with risk-averse, micromanaging administrators. The "irregular" people most likely to drive positive change through initiative and off-script ingenuity are marginalized or end up leaving.
So the fundamental issue lies in the "cascade" itself. Exhorting people to be more outcome-oriented or more empowering leaders might work in isolated cases but won't fix the structural problem. The only lasting solution is eliminating multiple layers while finding alternative ways to achieve control and performance: through incentives, direct customer accountability, transparency, and equipping front-line teams with the skills and information to make sound decisions.
Choices we make around an organization's core management systems, processes, and structures have an outsized impact on culture.