The Autopilot Paradox
Why AI and Rigid Scrum Are Failing Developers
In aviation, progress requires finding the right balance: safety depends on a careful mix of innovation and tradition. Lean too far toward novelty and you endanger the industry's excellent safety record; cling only to established methods and you forgo important improvements. This tension offers lessons for software development, where two major trends, generative AI and the Scrum framework, risk upsetting the same balance.
Consider the Robin DR-400, a common training aircraft. A single thrust lever controls the propeller, while a computer called FADEC handles the complex task of adjusting the engine's fuel-to-air mixture. This prevents the pilot from stalling the engine, a real risk when the mixture is set manually. The FADEC is redundant: two independent computers run in parallel, so the engine keeps running even if one fails. The result is a system that genuinely improves safety, made possible by rules that are flexible without being careless. Rules that were either too strict or too permissive would have prevented it.
Air traffic control is another example from aviation. Highly skilled controllers manage the altitude, speed, and heading of aircraft. In theory, automated systems could manage this complex airspace, and German Air Traffic Control has developed prototypes (p. 26) for this purpose. Once again, the key is balancing the proven skill of human controllers against the potential of automation: a highly automated system would still need controllers who can take over flawlessly when it fails, and it would demand equally high-quality work from the software engineers who build it.
Software development involves a similar balancing act between new and established methods, though in a largely unregulated field. The rise of generative AI, in the form of LLMs, changes this dynamic profoundly. As with air traffic control, replacing human work with automation requires the new systems to be of very high quality while the people overseeing them remain highly skilled. That is exactly the problem for software development: skills are built through the hard work of practice, not by watching a system you don't fully understand. And as automation increases, its quality must increase with it. Current LLMs offer no such guarantee; they generate code that can be inefficient or miss important edge cases.
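To make the edge-case point concrete, here is a hypothetical illustration (not output from any particular model): plausible-looking generated code that works on the happy path, next to a reviewed version that handles the case the first one misses.

```python
# Hypothetical example of the kind of code an LLM might produce:
# correct for typical inputs, but silently unsafe at the boundary.
def average_generated(values):
    # Raises ZeroDivisionError on an empty list -- the missed edge case.
    return sum(values) / len(values)

# A reviewed version that makes the edge case explicit.
def average_reviewed(values):
    if not values:
        raise ValueError("average of an empty sequence is undefined")
    return sum(values) / len(values)
```

The fix is trivial once seen, but spotting that it is needed requires exactly the kind of practiced judgment that atrophies when developers only review what a system hands them.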
Moreover, experienced software developers can often infer missing requirements from their knowledge of the project's domain. LLMs cannot: they are limited by a lack of context, a tendency to invent facts, and an inability to reason reliably.
This flawed approach to automation is not limited to coding tools; it also appears in how projects are managed. This leads to the second key issue: the misunderstood role of agile management processes, especially Scrum, whose strict application often harms the software industry. When a process like Scrum leaves little room for interpretation, following its rules exactly becomes very important. This inflexible framework, with all its ceremonies and “agile theater,” makes it so difficult to innovate on the process itself that it is rarely done. Any change from the official rules means a team is no longer considered to be “doing Scrum.”
By setting out a process that can be followed on autopilot, Scrum makes people dependent on the system itself. For that to be safe, two things would need to hold: the process would have to be essentially flawless, and its users would have to keep their project management skills sharp for the moments when it fails. Neither condition is met. The result is developers stuck with a process that doesn't fit their work and who never learned how to manage a project themselves.
The lesson from aviation is clear: real progress requires a careful balance between innovation and proven methods. Both generative AI and rigid management frameworks like Scrum fail this test. They offer a seemingly simple autopilot for the hard work of coding and project management, but the automation is flawed, and, more importantly, relying on it erodes the hands-on skills from which real problem-solving and innovation grow.
If you liked this article, you might also like What Software Engineering can learn from Aviation