Claude Code Is Not an Assistant. It's a Compiler.
Claude Code isn't an AI assistant—it's a compiler from English to code. And the engineers using it today aren't early adopters. They're building the next layer of abstraction in computing, just like Grace Hopper did in 1952 when nobody believed computers could understand anything but arithmetic.

In 1952, Grace Hopper had a working compiler.
"I had a running compiler, and nobody would touch it," she recalled decades later. "They carefully told me computers could only do arithmetic. They could not do programs."
She had built A-0, a system that took mathematical notation and translated it into machine code for the UNIVAC I. Before this, programmers had to manually look up the addresses of subroutines from a library and patch them into their programs—a slow, error-prone process that required deep knowledge of the machine's physical architecture. Hopper's system automated this entirely.
The establishment didn't believe it could work. When she proposed developing a programming language that used English words instead of mathematical symbols, she was told flatly that "computers didn't understand English."
She persisted anyway. "It's much easier for most people to write an English statement than it is to use symbols," she explained. "So I decided data processors ought to be able to write their programs in English, and the computers would translate them into machine code."
It took three years for her idea to be accepted.
Here's what I've come to understand: every major leap in computing follows the same pattern. Someone builds a translation layer that lets humans express intent at a higher level, and the system figures out the lower-level details. Each time, skeptics insist it can't work—that the translation will be too slow, too unreliable, too imprecise. Each time, they're eventually proven wrong.
In 1954, John Backus at IBM proposed building a system that would let programmers write in something resembling mathematical notation instead of machine code. His managers were skeptical. Computer time cost hundreds of dollars per minute back then, and memory was precious. Everyone "knew" that hand-crafted machine code would always be more efficient than anything a translator could produce.
Backus assembled a team anyway. What was supposed to be a six-month project took three years. They worked nights because it was the only time they could access the IBM 704 to test their code. Snowball fights broke out during long winter debugging sessions.
When they finally delivered FORTRAN in 1957, it did something remarkable: the generated code ran nearly as fast as hand-written machine code. The skeptics had been wrong about efficiency. But more importantly, they'd been wrong about what mattered. FORTRAN didn't just match human programmers—it freed scientists and engineers to think about their actual problems instead of wrestling with machine architecture.
"Much of my work has come from being lazy," Backus said later with characteristic modesty. "I didn't like writing programs, so I started work on a system to make them easier to write."
The pattern repeated with databases. In 1970, Edgar Codd at IBM published a paper proposing that data could be organized into tables and queried with a language based on mathematical set theory. Before this, programmers had to navigate complex hierarchical structures to find information—they needed to know exactly where data was physically stored and how to traverse the paths to reach it.
IBM initially refused to implement Codd's relational model. They had a successful product (IMS) built on the old hierarchical approach and didn't want to cannibalize their own revenue. So Codd demonstrated his ideas to IBM's customers instead. The customers pressured IBM. Eventually, the company relented and built System R, which spawned SQL and DB2.
Today, something like 90% of the world's structured data sits in relational databases. We don't pre-compute every possible report. We describe what we want, and the database engine figures out how to retrieve it. The physical storage is abstracted away.
Now look at Claude Code through this lens.
A compiler takes one language and translates it into another. C to assembly. TypeScript to JavaScript. FORTRAN to IBM 704 machine code.
What does Claude Code do? It takes English and translates it to working software.
That's not a metaphor. That's a literal description of the function.
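To make that concrete, here's a deliberately tiny illustration. Everything below is mine, not output from any real Claude Code session: the "source program" is a sentence of English, and the "target" is ordinary TypeScript.

```typescript
// "Source program," expressed as intent rather than syntax:
//
//   "Given a list of orders, return the total revenue in cents,
//    ignoring any order that was refunded."

// One plausible "compiled" output:
interface Order {
  amountCents: number;
  refunded: boolean;
}

function totalRevenueCents(orders: Order[]): number {
  return orders
    .filter((order) => !order.refunded)
    .reduce((sum, order) => sum + order.amountCents, 0);
}
```

The particular output doesn't matter. What matters is the direction of translation: a human writes the top block, and the bottom block becomes a build artifact.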
But here's what makes the current moment interesting: this compiler isn't reliable yet. It makes mistakes. It needs correction and guidance. It requires human oversight.
And the engineers working with it every day—the ones catching errors, teaching it patterns, managing context, building reliable workflows around an unreliable core—they're doing exactly what compiler builders have always done.
When Hopper built A-0, she wasn't just using a tool. She was defining how the translation should work. When Backus's team built FORTRAN, they weren't just programming—they were teaching a system to recognize patterns and produce efficient output. When Donald Chamberlin and Raymond Boyce built SQL at IBM, they were creating a way for humans to express intent that machines could execute.
The engineers doing AI-first development today are compiler builders. They're figuring out what patterns work. They're handling edge cases. They're building the reliability that will eventually make their current work obsolete—which, if you understand the history, is exactly the point.
Let me trace another thread: how we build for the web.
In the beginning, there were static HTML pages. A file sat on a server. A user requested it. The server sent back the exact same content to everyone. Simple, fast, but completely inflexible.
Then came server-side rendering. Technologies like CGI, PHP, and ASP let the server decide what to show each user. The page was generated at request time based on who was asking, what they'd done before, what was in the database. This was more powerful but required round-trips to the server for every interaction.
Then came client-side rendering. JavaScript frameworks like React and Vue moved the logic to the browser. The server sent a minimal shell plus code, and the user's device built the interface. This enabled rich, responsive applications that felt like native software. But it came with costs: massive JavaScript bundles, slow initial loads, SEO problems.
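The difference is easier to see in code than in prose. Here's a minimal, framework-free sketch with hypothetical function names: the first function runs on the server and returns finished HTML; the second ships to the browser along with the data and builds the same view there.

```typescript
// Server-side rendering: the server turns data into finished HTML,
// once per request.
function renderGreetingPage(userName: string): string {
  return `<html><body><h1>Welcome back, ${userName}</h1></body></html>`;
}

// Client-side rendering: the server ships data plus code, and the
// browser builds the interface itself.
interface Greeting {
  userName: string;
}

function renderGreetingInBrowser(greeting: Greeting): void {
  const heading = document.createElement("h1");
  heading.textContent = `Welcome back, ${greeting.userName}`;
  document.body.appendChild(heading);
}
```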
Now we're seeing hybrid approaches—server components, islands architecture, streaming HTML—trying to get the best of both worlds.
The pattern through all of this: as compute becomes cheaper and more distributed, we move from "pre-built artifacts" to "just-in-time generation." We don't store every possible state. We store instructions for generating state on demand.
Databases did this decades ago. Nobody stores pre-computed results of every possible query. You describe what you want in SQL, and the engine generates the answer at request time.
Graphics cards do this sixty times per second. Games don't store every frame as an image. They store scene descriptions—geometry, textures, lighting rules—and the GPU "compiles" visuals in real-time.
MIDI doesn't store sound waves. It stores intent: "play this note at this velocity for this duration." The synthesizer generates the actual audio on demand.
Fonts aren't pixels. They're mathematical descriptions of curves. Your computer renders them at any size, on any screen, in real-time.
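The MIDI case is small enough to write out. Here's a rough sketch in TypeScript, a toy sine-wave synth rather than a real MIDI implementation: the stored part is three numbers of intent per note; the generated part is tens of thousands of samples.

```typescript
// Stored: intent, three numbers per note.
interface NoteEvent {
  frequencyHz: number; // which pitch
  velocity: number;    // how hard, 0 to 1
  durationSec: number; // how long
}

// Generated on demand: tens of thousands of audio samples per note.
function synthesize(note: NoteEvent, sampleRate = 44_100): Float32Array {
  const samples = new Float32Array(Math.floor(note.durationSec * sampleRate));
  for (let i = 0; i < samples.length; i++) {
    const t = i / sampleRate;
    samples[i] = note.velocity * Math.sin(2 * Math.PI * note.frequencyHz * t);
  }
  return samples;
}
```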
What's the next step for software interfaces?
Think about your accountant.
You talk to them in natural language. You explain your situation, your goals, your constraints. They give you advice, produce documents, solve problems.
But beneath the surface, something else is happening. Your accountant builds tools—spreadsheet templates, checklists, calculation models. They learn new tax regulations. They develop internal processes. They might hire staff or partner with specialists. They create systems to serve you better over time.
You don't see any of this. You just see service that improves.
Software today doesn't work like this. It's written once, shipped, and frozen. Developers build for an average user, a persona. The actual humans using the product have to adapt themselves to what was built.
But if code is becoming the output of a compiler—if we can translate intent to implementation on demand—why compile once?
What if the interface regenerated each time you opened the application, based on what you actually need in that moment? Not the same component with different data—a different component entirely, built specifically for your current context.
What if the system beneath the surface was building its own tools? Learning from your interactions? Developing new capabilities to serve you better, without you ever seeing the machinery?
This is what I mean by "living software." Not personalization in the shallow sense of different content. Adaptation in the deep sense of different structure.
If everything is generated on demand, what do we actually store?
Not code. Not rendered output. We store intent.
Think of it as a recipe rather than a meal. We cache:
- What the user wants to accomplish (the goal)
- What they value (the constraints and priorities)
- What agents need to run (the orchestration logic)
- What information sources to consult (the data dependencies)
The actual interface—the thing the user sees and interacts with—is generated at request time by a compiler that takes this intent specification and produces working software.
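Here's one hypothetical shape for that stored specification, written in TypeScript for concreteness. None of the field names come from an existing system; they just mirror the list above.

```typescript
// A hypothetical stored intent specification: the recipe, not the meal.
interface IntentSpec {
  goal: string;                             // what the user wants to accomplish
  constraints: string[];                    // values and priorities to respect
  agents: { name: string; task: string }[]; // orchestration logic to run
  dataSources: string[];                    // information sources to consult
}

// Everything that changes between requests lives in the context,
// not in the stored spec.
interface GenerationContext {
  userId: string;
  device: "phone" | "desktop";
  recentActivity: string[];
}

// The compiler's contract: intent plus current context in, a freshly
// generated interface out. The return type is deliberately vague here.
type InterfaceCompiler = (
  spec: IntentSpec,
  context: GenerationContext
) => Promise<string>;
```

Everything in the spec survives between sessions. Everything downstream of the compiler is disposable.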
This means the technology stack becomes a compiler decision. React or Vue? That's an internal optimization, not a human choice. Design system? That's a constraint language you provide, not a component library you select. The compiler converges on patterns that work, just as FORTRAN converged on efficient code generation.
We won't see consolidation because everyone picks the same framework. We'll see consolidation because the compiler discovers optimal patterns.
If this vision plays out, what happens to the people who build software today?
Engineers become compiler builders (for now). The work is teaching the system—identifying patterns that work, handling failures gracefully, building reliability into an unreliable core. Later, as the compiler matures, engineers shift to defining intent, specifying constraints, designing the rules that govern generation.
Product managers become pattern archaeologists. Instead of defining features, they watch how users reshape their own software. They notice that 80% of users are trying to add a calendar view to this flow, or that most power users turn off certain defaults. They extract these patterns and encode them into better intent specifications.
Designers become constraint authors. Rather than drawing screens, they define principles: "navigation should feel light," "actions should be reversible," "information density adapts to expertise level." The compiler implements these constraints differently for each user, in each context.
QA tests outcomes, not implementations. You can't test every possible generation. You test whether the system achieves the intent. Does it help users accomplish their goals? Does it respect the constraints? Does it produce value?
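What would such a test even look like? Here's a sketch under heavy assumptions; every interface and function name is hypothetical. The point is what the test does not do: it never asserts on markup, components, or layout, only on whether the goal was reachable and the constraints held.

```typescript
// Hypothetical contract for the system under test; none of these names
// come from a real tool.
interface GeneratedUI {
  markup: string; // opaque to the test; we never assert on its contents
}

interface SystemUnderTest {
  compileInterface(goal: string, constraints: string[]): Promise<GeneratedUI>;
  simulateTask(ui: GeneratedUI, task: string): Promise<{ completed: boolean; steps: number }>;
  violatesConstraint(ui: GeneratedUI, constraint: string): boolean;
}

// Outcome-level check: does the generated interface serve the intent and
// respect the constraints? It never looks at specific components or markup.
async function checkOutcome(
  system: SystemUnderTest,
  goal: string,
  constraints: string[]
): Promise<string[]> {
  const failures: string[] = [];
  const ui = await system.compileInterface(goal, constraints);

  const run = await system.simulateTask(ui, goal);
  if (!run.completed) {
    failures.push("goal not achievable in the generated interface");
  } else if (run.steps > 5) {
    failures.push("goal achievable, but with too much friction");
  }

  for (const constraint of constraints) {
    if (system.violatesConstraint(ui, constraint)) {
      failures.push(`constraint violated: ${constraint}`);
    }
  }
  return failures;
}
```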
I need to be honest about where we are versus where I'm describing.
The compiler—Claude Code and systems like it—is not reliable enough yet. It makes mistakes. It loses context. It doesn't maintain memory across sessions. It can't modify itself or create new capabilities. It requires constant human supervision.
These aren't small gaps. They're the entire problem.
But that's exactly what the current generation of AI-first engineers is working on. Every time you catch an error and teach the system the right pattern, you're building the compiler. Every time you create a workflow that handles context well, you're building the compiler. Every time you figure out how to maintain state across sessions, you're building the compiler.
This is the work. Not using a tool—building the next layer of abstraction.
Grace Hopper's colleagues told her computers couldn't understand English. John Backus's managers thought machine-generated code would be too slow. Edgar Codd's employer refused to implement his relational model because it threatened existing products.
Every time, the skeptics focused on the current limitations. Every time, they underestimated how quickly those limitations would be overcome once people started working seriously on the problem.
I'm not predicting timelines. I don't know if living software arrives in two years or twenty. But the direction seems clear. We've been moving toward higher levels of abstraction for seventy years. We've been moving from pre-computed artifacts to just-in-time generation for decades. We've been moving from rigid structures to dynamic adaptation since databases learned to answer queries instead of just retrieve records.
The engineers working AI-first today aren't early adopters of a productivity tool. They're participants in the next phase of that long arc. They're building the compiler that will eventually make the current way of building software feel as antiquated as hand-coding machine instructions.
Hopper was once asked about her proudest accomplishment. She didn't mention the A-0 compiler or COBOL or her pioneering work in programming languages.
"If you ask me what accomplishment I'm most proud of," she said, "the answer would be all the young people I've trained over the years; that's more important than writing the first compiler."
She understood something essential: the technology matters less than the capability it enables. Compilers matter because they free humans to work at higher levels of abstraction. Each layer of abstraction opens possibilities that were unimaginable before.
The question isn't whether AI will change software development. It already has. The question is: what will we build once the translation layer is good enough that we can stop thinking about code and start thinking about what we actually want software to do?
That's the world living software opens up. And the engineers figuring out how to make the current unreliable compiler reliable—they're the ones building the bridge to get there.
This is the first in a series exploring the future of software development. Next: what "intent-native applications" might actually look like in practice.