A Farewell to Developers
We've had a good run
The more I think about my last article, "Prompt and Tag," the more I realize there's a much bigger takeaway than I initially focused on. The deeper insight lies in the reason I needed to create that method - the underlying workflow that I've been iterating and improving upon. While many are fixated on tooling - integrating AI into IDEs and terminals - I'm starting to see that these new tools, though awesome, are a distraction from the real revolution.
A Glimpse at my Current Workflow
Currently, I use Perplexity as my initial jumping-off point. Its annotated iterative search is perfect for exploring the vast solution space of possible stacks - languages, frameworks, cloud platforms, IaC providers, deploy pipelines, and component architectures. Then I switch to Claude for implementation-specific details, refining my plan with a divide-and-conquer approach that scales well with projects of any size. Finally, I take my results to Cursor, where my "AI intern" handles the grunt work.
These tools are great, but far more valuable is the formal process I've developed. You could burn these tools to the ground, and my process would remain intact. I can operate at almost full efficiency with access to any frontier model. The real game-changer here is the LLM itself. While developers worldwide are busy scooping up golden eggs, those who will come out ahead are the ones operating at a process level. Tool makers will always be playing catch-up. Relying on these tools will hold you back, while honing your process will propel you forward.
"formally versioning our conversational threads with LLMs will be more important than versioning the code we write as a result of those threads"
Threads vs Diffs
As I navigate this new AI-assisted workflow, it's becoming clear that source code is no longer the ultimate source of truth. As AI improves at reasoning and context windows grow, the importance of code will diminish further. We're approaching a future where committing code will make as much sense as committing compiler-generated assembly for an Unreal Engine game. You work in the high-level, model-driven editor without giving a second thought to the underlying machine code. This realization leads me to believe that formally versioning our conversational threads with LLMs will become more important than versioning the code we write as a result of those threads.
We'll need a way to look back at old threads and understand how certain decisions were made. This requires a method to view the state of the context window at that decision point. When dealing with multiple LLMs, we also need to ensure - or at least measure - context window parity. It's like having a team of brilliant but amnesiac coders locked in separate rooms, working on the same project. You'll quickly tire of explaining the problem repeatedly to each of them, along with what you've done, what you plan to do, and what you're currently working on.
The "Prompt and Tag" method I developed isn't what's important here. What matters is that I envision this emerging as a crucial part of "coding" in this new paradigm. It will become as ubiquitous as Git.
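Whatever form this thread-versioning practice ultimately takes, the core mechanics are simple enough to sketch. Below is a minimal toy illustration, not the "Prompt and Tag" method itself: it assumes a thread is just a list of messages, and "tagging" means taking a content-addressed snapshot of the context window at a decision point, much like a git commit hash. The helper `tag_thread` is hypothetical.

```python
import hashlib
import json

def tag_thread(messages, decision):
    """Snapshot a conversation thread at a decision point.

    Returns a content-addressed tag (analogous to a git commit hash)
    so the exact context behind a decision can be looked up later.
    """
    snapshot = {"context": messages, "decision": decision}
    # Canonical JSON so the same context always hashes the same way.
    blob = json.dumps(snapshot, sort_keys=True).encode("utf-8")
    tag = hashlib.sha256(blob).hexdigest()[:12]
    return tag, snapshot

# Tag the thread state at the moment a stack decision was made.
thread = [
    {"role": "user", "content": "Compare Postgres vs DynamoDB for this app"},
    {"role": "assistant", "content": "For your access patterns, Postgres..."},
]
tag, snapshot = tag_thread(thread, "chose Postgres")
```

Because the tag is derived from the content, two LLMs fed the same snapshot can be checked for context window parity by comparing tags: identical context, identical tag.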
Determinism vs Non-Determinism
There's a significant difference we can't ignore - compilers are deterministic. This is so fundamental that it feels strange to even state it. If it weren't true, the "works on my machine" meme would be less funny and more serious. I recently had Perplexity work out a plan for a project I was ideating. Thanks to spotty internet on a train ride, I rage-clicked submit multiple times and ended up with 10 completely different "best ways to implement my app."
AI skeptics might cite this non-deterministic behavior as a reason to hesitate before incorporating AI into their workflow. At first glance, this makes sense. But upon deeper reflection, you realize this unique behavior is what makes LLMs so magical. After all, I'm non-deterministic, as are all developers. Every employee at a company is non-deterministic, and yet we still have Google.
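The source of that non-determinism isn't mysterious: LLMs typically sample their next token from a probability distribution rather than always taking the single most likely option. Here is a minimal sketch of that mechanism, with made-up logits for illustration. Greedy decoding (temperature 0) behaves like a compiler - same input, same output - while sampling at a higher temperature gives you my "10 completely different plans" on a train ride.

```python
import math
import random

def sample_token(logits, temperature):
    """Pick the next token from a {token: logit} dict.

    temperature == 0 is greedy decoding: always the same answer.
    Higher temperatures sample, so repeated calls can differ.
    """
    if temperature == 0:
        return max(logits, key=logits.get)
    # Softmax with temperature, then draw proportionally to the weights.
    weights = {t: math.exp(l / temperature) for t, l in logits.items()}
    total = sum(weights.values())
    r = random.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # guard against floating-point edge cases

# Toy distribution over possible "best stack" answers.
logits = {"Postgres": 1.2, "DynamoDB": 1.1, "SQLite": 0.4}

greedy = {sample_token(logits, 0) for _ in range(10)}       # one outcome
sampled = {sample_token(logits, 1.0) for _ in range(1000)}  # several outcomes
```

Ten greedy calls collapse to a single answer; a thousand sampled calls almost surely produce several distinct ones. That dial between compiler-like repeatability and human-like variety is the behavior skeptics object to and enthusiasts prize.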
Explaining the Paradox
How do we reconcile these two views: the intrinsic determinism of code versus the non-deterministic nature of LLMs? The key is to recognize the paradigm shift that AI imposes upon developers. Every layer up the stack - from assembly to C to JavaScript to TypeScript - creates a higher-level abstraction. Each layer's purpose is to tell the computer how to do what you want it to do. Now, this has changed, causing cognitive dissonance and disruption.
This is the first time in history we don't care about the "how" - well, not completely, yet, but we're orders of magnitude closer than ever. Each abstraction layer has been slowly removing "hows." With C, we no longer had to worry about "how" bits were manipulated - we just told the compiler to "copy this value." Further up the stack, we no longer had to worry about "how" memory management works - we just let garbage collection do its thing. But all these abstractions have one thing in common - they are all higher-level ways to tell a system "how" to achieve a result. In this new paradigm, we won't be bothered by this "how" nonsense anymore - we'll just have requirements and acceptance criteria. We will no longer be developers slinging code; we will be product designers slinging requirements.
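You can see this shrinking "how" in miniature in code we already write today. The toy example below computes the same result three ways: the first spells out every mechanical step, the second delegates the iteration "how" to the language, and the third (commented out, with a hypothetical `ask_llm` helper) is the endpoint this paradigm points toward - nothing but the requirement.

```python
nums = [3, 8, 1, 6, 4]

# Assembly/C mindset: spell out *how* - indices, bounds, an accumulator.
total = 0
i = 0
while i < len(nums):
    if nums[i] % 2 == 0:
        total += nums[i]
    i += 1

# Higher-level abstraction: the iteration "how" disappears;
# only the *what* (even numbers, summed) remains in the expression.
total2 = sum(n for n in nums if n % 2 == 0)

# The LLM paradigm, as a thought experiment: pure requirement, zero "how".
# total3 = ask_llm("the sum of the even numbers in nums")  # hypothetical

assert total == total2 == 18
```

Each step down the list discards another layer of "how"; the last step discards the code itself, leaving only requirements and acceptance criteria.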
A Future Not-So Far Away
Imagine telling AI to "inventory everything in my house." Within seconds, a containerized environment is deployed, complete with a properly initialized database, an API for adding items, and a React frontend with a basic interface. The AI sends you a push notification with a link to your inventory server's frontend interface. Then you decide to share the link with 10 million devoted fans. The AI, realizing your true intent, modifies the acceptance criteria. A migration plan is generated, a new load-balanced Kubernetes cluster is deployed to handle the high traffic, and everything is migrated and rolled out with nothing more than a few angry fans hitting "refresh" for 5 seconds until the AI sorts it out.
Conclusion
While we're not quite there yet, I'm certain the role of the "developer" as we know it will fade away - quite simply because everyone will be able to develop anything they want, provided they can explain what they want. There will still be people who are better at this than others, creating a new role similar to that of a product designer. In other words, the Steve Jobses of the world will no longer need their Wozniaks. But until that time comes, we are witnessing a hyper-unique slice of time in our civilization - one that will be over as quickly as it fell quietly into our laps. It's a magical time in which the developers of the world have a once-in-a-civilization opportunity to wield apparent superpowers: we have the vocabulary needed to describe the tiny bit of "how" still required to make anything we want with almost zero effort. When the "how" requirement fades away, this asymmetric advantage will be leveled out and we'll all be superheroes. Until then, drop everything, enjoy the ride, and appreciate how lucky you are to be alive in this tiny slice of history.

