When tools like GitHub Copilot first appeared, it was received wisdom that AI would make programming easier. It would be a boon to new programmers at the start of their careers, still learning their first programming languages. Some of that is no doubt true: Large language models can answer questions, whip up a tutorial, turn descriptive comments into code, and even write short programs successfully. And large language models are getting better at the things they can’t yet do well: understanding large codebases and writing code with fewer bugs. On the surface, it looks like things are getting easier for entry-level programmers.
That may be true, but I—and an increasing number of others—have argued that AI broadens the gap between junior and senior developers. As we grow into AI, we’re moving beyond “this makes programming easier” and finding that programming is less about writing clever prompts and more about managing context. Writing about ChatGPT’s memory feature, Simon Willison said, “Using LLMs effectively is entirely about controlling their context—thinking carefully about exactly what information is currently being handled by the model.” Forgive the anthropomorphism, but a conversation with a language model is just that: a conversation, where previous statements from both parties are part of the context. The context also includes the code you’re working on and any other documents or instructions (including sketches and diagrams) that the AI can access. In addition to the context that’s explicit in a chat session, a lot of context is implicit: assumptions, experiences, and other knowledge shared by the humans working on a project. That implicit context is a critical part of software development, and it too has to be made available to the AI. Managing context is an important skill for any developer using AI, but it’s a new one, a skill junior developers have to acquire in addition to basic programming.
Writing more specifically about programming, Steve Yegge makes it clear that chat-oriented programming (CHOP) isn’t the future; it’s the present. “You need to type fast, read fast, use tools well, and have the chops (ahem) to sling large quantities of text and context around manually.” Right now, we need better tools for doing this—and we will eventually have those tools. But they’re not here yet. Still, whether you’re a junior or senior developer, it’s a way of programming that you need to learn if you intend to be competitive. And context is key. Discussing the difference between GPT-4o and o1, Ben Hylak and swyx write that, unlike 4o, “o1 will just take lazy questions at face value and doesn’t try to pull the context from you. Instead, you need to push as much context as you can into o1.” Their point is that today’s most advanced models don’t really want prompts; they want product briefs, as thorough and complete as you can make them. AI can help software developers in many ways, but software developers still have to think through the problems they need to solve and determine how to solve them. Programming with AI requires teaching the AI what you want it to do. And describing how to solve a problem is a far more fundamental skill than being able to spit out Python or JavaScript at scale.
To prepare for AI, we all need to realize that we’re still in charge; we still need to understand and solve the problems we face. Sure, there are other skills involved. AI writes buggy code? So do humans—and AI seems to be getting better at writing correct code. Bruce Schneier and Nathan Sanders argue that AI mistakes are different from human mistakes, if for no other reason than that they’re random rather than focused around a misunderstood concept. But regardless of the source or the reason, bugs need to be fixed, and debugging is a skill that takes years to learn. Debugging code that you didn’t write is even more difficult than debugging your own code. AI-generated bugs may not be a fundamentally bigger problem than human bugs, but for the time being humans will have to find them. (And managers will need to recognize that a job that devolves into bug-fixing, while essential, is likely to be demoralizing.) AI writes insecure code? Again, so do humans. Vulnerabilities are just another kind of bug: AI will get better at writing secure code over time, but we are still responsible for finding and fixing vulnerabilities.
So yes, the industry is changing—perhaps faster than it’s changed at any time in history. It’s no longer just lone programmers bashing away at their keyboards (if it ever was). It’s software developers working with AI at every stage of product development, and with each other. It’s often been said that software development is a team sport. Now there’s another player on the team, and it’s a player that may not follow the same rulebook.
How do we prepare for the change coming our way? First, don’t ignore AI. Steve Yegge reports that he’s seen companies where the senior developers won’t touch AI (“overhyped new-fangled junk”), while the juniors are excited to move forward. He’s also seen companies where the juniors are afraid that AI will “take their jobs,” while the seniors are rapidly adopting it. We need to be clear: If you’re ignoring AI, you’re resigning yourself to failure. If you’re afraid that AI will take your job, learning to use it well is a much better strategy than rejecting it. AI won’t take our jobs, but it will change the way we work.
Second, be realistic about what AI can do. Using AI well will make you more effective, but it’s not a shortcut. It does generate errors, both of the “this won’t compile” kind and the “results look right, but there’s a subtle error in the output” kind. AI has become reasonably good at fixing the “doesn’t compile” bugs, but it’s not good at the subtle errors. Detecting and debugging subtle errors is hard; it’s important to remember Kernighan’s law: Software is twice as hard to debug as it is to write. So if you write code that’s as clever as you can make it, you’re not smart enough to debug it. How does that apply when you need to debug AI-generated code, generated by a system that has seen everything on GitHub, Stack Overflow, and more? Do you understand it well enough to debug it? If you’re responsible for delivering professional-quality code, you won’t succeed by using AI as a shortcut. AI doesn’t mean that you don’t need to know your tools—including the dark corners of your programming languages. You are still responsible for delivering working software.
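To make the distinction concrete, here is a small, hypothetical example (mine, not output from any particular assistant) of the second kind of error: the code runs, the output looks plausible, and the bug only shows up if you understand what the numbers should mean.

```python
# Hypothetical "looks right, but subtly wrong" bug.
# Intent: compute the overall average reading across all days.
def overall_average(readings_by_day: dict[str, list[float]]) -> float:
    daily_means = [sum(day) / len(day) for day in readings_by_day.values() if day]
    return sum(daily_means) / len(daily_means)
    # Subtle problems: a day with one reading counts as much as a day with a
    # thousand (an average of averages, not an overall average), and the
    # function raises ZeroDivisionError if every day's list is empty.
    # Nothing here fails to compile, and a quick glance at the output
    # won't catch it either.
```

Finding that kind of bug takes the same statistical and domain understanding whether a human or an AI wrote the function.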
Third, train yourself to use AI effectively. O’Reilly author Andrew Stellman recommends several exercises for learning to use AI effectively.1 Here are two: Take a program you’ve written, paste it into your favorite AI chat, and ask the AI to generate comments. Then look at the comments: Are they correct? Where is the AI wrong? Where did it misconstrue the intent? Stellman’s point is that you wrote the code; you understand it. You’re not second-guessing the AI. You’re learning that it can make mistakes and seeing the kinds of mistakes that it can make. A good next step is asking an AI assistant to generate unit tests, either for existing code or some new code (which leads to test-driven development). Unit tests are a useful exercise because testing logic is usually simple; it’s easy to see if the generated code is incorrect. And describing the test—describing the function that you’re testing, its arguments, the return type, and the expected results—forces you to think carefully about what you’re designing.
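As a sketch of what that second exercise can look like in practice (the function and its behavior here are hypothetical examples of mine, not taken from Stellman’s exercises), describing a test precisely means spelling out the inputs, the return type, the error cases, and the expected results before any implementation exists:

```python
import unittest

def parse_duration(text: str) -> int:
    """Convert a duration such as '1h30m' or '45m' into a number of seconds.

    Raises ValueError for empty strings or unrecognized units.
    (Implementation to be written—by you or by an AI assistant—from this
    description; the tests below pin down what "correct" means.)
    """
    raise NotImplementedError

class TestParseDuration(unittest.TestCase):
    def test_hours_and_minutes(self):
        self.assertEqual(parse_duration("1h30m"), 5400)

    def test_minutes_only(self):
        self.assertEqual(parse_duration("45m"), 2700)

    def test_rejects_empty_string(self):
        with self.assertRaises(ValueError):
            parse_duration("")

if __name__ == "__main__":
    unittest.main()  # These tests fail until an implementation exists—that's the point.
```

Writing the assertions first forces decisions you might otherwise wave away: Is the result in seconds or minutes? Is whitespace allowed? What happens on bad input? That is exactly the precision the AI needs from you.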
Learning how to describe a test in great detail is an important exercise because using generative AI isn’t about writing a quick prompt that gets it to spit out a function or a short program that’s likely to be correct. The hard part of computing has always been understanding exactly what we want to do. Whether it’s understanding users’ needs or understanding how to transform the data, that act of understanding is the heart of the software development process. And whatever else generative AI is capable of, one thing it can’t do is understand your problem. Using AI successfully requires describing your problem in detail, in a prompt that’s likely to be significantly longer than the code the AI generates. You can’t omit details, because the AI doesn’t know about the implicit assumptions we make all the time—including “I don’t really understand it, but I’m sure I can wing it when I get to that part of the program.” The more explicit you can be, the greater the probability of a correct result. Programming is the act of describing a task in unambiguous detail, regardless of whether the language is English or C++. The ability to understand a problem with all its ramifications, special cases, and potential pitfalls is part of what makes a senior software developer; it’s not something we expect of someone at the start of their career.
We will still want AI-generated source code to be well-structured. Left to itself, generated code tends to accumulate into a mountain of technical debt: badly structured code that nobody really understands and can’t be maintained. I’ve seen arguments that AI code doesn’t need to be well-structured; humans don’t need to understand it, only AI systems that can parse mind-numbingly convoluted logic do. That might be true in some hypothetical future, but at least in the near-term future, we don’t have those systems. It’s overly optimistic at best to assume that AI assistants will be able to work effectively with tangled spaghetti code. I don’t think AI can understand a mess significantly better than a human. It is definitely optimistic to believe that such code can be modified, either to add new features or to fix bugs, whether a human or an AI is doing the modification. One thing we’ve learned in the 70 or so years that software development has been around: Code has a very long lifetime. If you write mission-critical software now, it will probably be in use long after you’ve retired. Future generations of software developers—and AI assistants—will need to fix bugs and add features. A classic problem with badly structured code is that its developers have backed themselves into corners that make modification impossible without triggering a cascade of new problems. So part of understanding what we want to do, and describing it to a computer, is telling it the kind of structure we want: telling it how to organize code into modules, classes, and libraries, telling it how to structure data. The result needs to be maintainable—and, at least right now, that’s something we do better than AI. I don’t mean that you shouldn’t ask AI how to structure your code, or even to do the structuring for you; but in the end, structure and organization are your responsibility. If you simply ask AI how to structure your code and then follow its advice without thinking, then you’ll have as much success as when you simply ask AI to write the code and commit it without testing.
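As a minimal sketch of what “telling it the kind of structure we want” might mean (the names and layering here are illustrative assumptions, not a prescription from this article), you might spell out a separation of concerns like this before asking an AI to fill in the details:

```python
# Illustrative target structure for a small order-processing service—the kind
# of organization you might describe to an AI assistant up front:
#   orders/models.py      -- plain data types, no I/O
#   orders/repository.py  -- persistence hidden behind a small interface
#   orders/service.py     -- business rules, depends only on models + repository
#   orders/api.py         -- transport layer, depends only on service

from dataclasses import dataclass
from typing import Optional, Protocol

@dataclass
class Order:
    order_id: str
    total_cents: int

class OrderRepository(Protocol):
    def save(self, order: Order) -> None: ...
    def get(self, order_id: str) -> Optional[Order]: ...

class OrderService:
    """Business logic kept separate from storage and transport, so future
    maintainers—human or AI—can change one layer without unraveling the rest."""

    def __init__(self, repo: OrderRepository) -> None:
        self.repo = repo

    def place_order(self, order: Order) -> None:
        if order.total_cents <= 0:
            raise ValueError("order total must be positive")
        self.repo.save(order)
```

Whether you or the AI writes the function bodies, the layering is a decision you make and state explicitly; it’s what keeps next year’s bug fix from rippling through the whole codebase.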
I stress understanding what we want to do because it’s been one of the weakest parts of the software development discipline. Understanding the problem looks in both directions: to the user, the customer, the person who wants you to build the software; and to the computer, the compiler, which will deal with whatever code you give it. We shouldn’t separate one from the other. We often say “garbage in, garbage out,” but frequently forget that “garbage in” includes badly thought-out problem descriptions as well as poor data or incorrect algorithms. What do we want the computer to do? I’ve seen many descriptions of what the future of programming might look like, but none of them assume that the AI will determine what we want it to do. What are the problems we need to solve? We need to understand them—thoroughly, in depth, in detail, and not in a single specification written when the project starts. That was one of the most important insights of the Agile movement: to value “individuals and interactions over processes and tools” and “customer collaboration over contract negotiation.” Agile was based on the recognition that you are unlikely to collect all the user’s requirements at the start of a project; instead, start building and use frequent demos as opportunities to collect more insight from the customer, building what they really want through frequent mid-course corrections. Being “agile” when AI is writing the code is a new challenge—but a necessary one. How will programmers manage those mid-course corrections? Through managing the context: giving the AI enough information that it can modify the code that needs changing while keeping the rest stable. Remember that iterations in an Agile process aren’t about fixing bugs; they’re about making sure the resulting software solves the users’ problem.
Understanding what we want to build is especially important right now. We’re at the start of one of the biggest rethinkings of software development that we’ve ever had. We’re talking about building kinds of software that we’ve never seen before: intelligent agents that solve problems for their users. How will we build those agents? We’ll need to understand what customers want in detail—not at the “I want to order groceries from Peapod” level of detail but at a higher, more abstract level: “I want software that can negotiate for me; I want software that can find the best deal; I want software that maximizes the probability of success; I want software that can plan my retirement.” What kinds of specifications will we need to do that correctly? If software is executing actions on behalf of a customer, it needs to ensure that those actions are performed correctly. If finances are involved, errors are close to intolerable. If security or safety is at stake, errors are truly intolerable—but in many cases, we don’t yet know how to specify those requirements.
Which is not to say that we won’t know how to specify those requirements. We already know how to build some kinds of guardrails to keep AI on track. We already know how to build some evaluation suites that test AI’s reliability. But it is to say that all of these requirements will be part of the software developers’ job. And that, all things considered, the job of the software developer may be getting more difficult, not less.
With all of this in mind, let’s return to the so-called “junior developer”: the recent graduate who knows a couple of programming languages (more or less) and has written some relatively short programs and completed some medium-length projects. They may have little experience working on larger teams; they probably have little experience collecting requirements; they are likely to have significant experience using coding assistants like GitHub Copilot or Cursor. They are likely to go down unproductive rabbit holes when trying to solve a problem rather than realize that they’ve hit a dead end and look for another approach. How do they grow from a “junior” developer to a “senior”? Is asking an AI questions sufficient? Let’s also consider a related question: How does a “senior” become senior? Trisha Gee makes a very underappreciated point in “The Rift Between Juniors and Seniors”: Part of what makes a senior software developer senior is mentoring juniors. Mentoring solidifies the senior’s knowledge as much as it helps the junior take the next step. You don’t really know anything well until you can teach it. In turn, seniors need juniors who can be taught.
Whether there’s a formal training program for junior developers or informal mentoring, we clearly need juniors precisely because we need seniors—and where will the next generation of seniors come from if not well-trained juniors? Forrest Brazeal makes the point:
> If we can’t make room in our taxonomy of technical work for someone who still needs human training, we are just doing the same old thing IT has been doing for decades: borrowing from our future to cash in on the current hype.… And every experienced generalist starts out inexperienced. They start as a junior developer. That’s not where software engineering dies: it’s where it’s born.
Yes—that’s where software engineering is born: not in learning programming languages or memorizing APIs but in practice, experience, and mentorship. We need to be reminded that software development isn’t just about generating code. The importance of writing code may diminish in the future, but as Stanford computer science professor Mehran Sahami said in a conversation with Andrew Ng, “We taught you Python, but really we were trying to get you to understand how to take problems and think about them systematically.” Good programmers will have honed their skills in understanding the problem and goals, structuring the solution, providing necessary context to others, and coaching others to build their own skills in these areas. AI doesn’t change these essential skills—and no software developer, senior or junior, will go wrong by investing time in learning them.
As Tim O’Reilly writes, AI may be the end of programming as we know it, but it is not the end of programming. It’s a new beginning. We’ll be designing and building new kinds of software that we couldn’t have imagined a few years ago. Software development is about understanding and solving problems, regardless of whether the programming language is Python or English, regardless of whether or not an AI assistant is used. It will be the software developers’ job to determine what we want, what we really need, and to describe that to our machines of loving grace.
Footnotes
1. From personal communication; we will soon publish an article by Andrew Stellman that goes into more detail.
Thanks to Nat Torkington, Andrew Stellman, Kevlin Henney, Tim O’Reilly, and Mary Treseler for comments, discussion, and even a few paragraphs.