What AI-First Development Reveals About Our Broken SDLC
Five days. 69,000 lines of code. 160 React components. One developer with AI agents. This is the story of rebuilding a production website from scratch—and what it revealed about the broken state of our development tooling in the age of AI-first workflows.

Last Friday at 2 AM, I stared at my terminal and couldn't believe what I was seeing.
Four windows open. In each one, Claude Code working on something different. One building an admin dashboard component. Another writing an API endpoint. A third—an agent I'd built—scanning Linear for new tasks and enriching them automatically. And in the fourth window, me, clicking through the new website looking for bugs to throw into the queue.
Five days earlier, this website didn't exist.
The old Squid Club website worked. People signed up for the community, read the blog, registered for events. But every time I wanted to change something, it felt like fighting through mud. Legacy React code, decisions made under pressure, tech debt accumulated layer upon layer.
I tried incremental refactoring. "One component today, another tomorrow." It didn't work. Every small change triggered a cascade of problems. And behind what looks like a simple community website hides an entire system—blog management, analytics, event registration, image handling, user permissions.
At some point I made a decision that seemed extreme: start from scratch. A shadow repo, migrating everything from the legacy React codebase to Next.js with shadcn/ui.
In 1957, John Backus and his team at IBM released FORTRAN—the first widely-used high-level programming language. Before FORTRAN, programming meant hand-coding in assembly language, managing operation codes and memory addresses directly. One early pioneer described it as "hand-to-hand combat with the machine."
FORTRAN changed that. A program that might have needed 1,000 assembly instructions could now be written in about 50 FORTRAN statements.
But here's what's interesting: many working programmers were skeptical. They had deep expertise in assembly and were proud of the hard-won skills needed to wring efficiency out of early machines. John Backus later described this culture as a "priesthood of programming"—guardians of arcane knowledge, possessing skills far too complex for ordinary mortals.
The most frequent argument against compilers? That compiled code could never be as efficient as handwritten assembly. And the skeptics had a point: early compilers sometimes produced verbose, suboptimal machine code.
Sound familiar?
Fred Brooks, in his landmark 1986 paper "No Silver Bullet", reflected on this transition:
"Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility."
A factor of five. Not from better algorithms. Not from faster hardware. From changing the abstraction layer.
The first thing I did was kill Docker.
In the old site, every deployment took 15 minutes. Traditional CI/CD, containers, the works. Fifteen minutes doesn't sound like much, until you're working with AI agents that can generate code fast. Suddenly your feedback loop becomes the bottleneck.
I connected Railway directly to GitHub. Every push now triggers an automatic deployment that lands in minutes instead of fifteen.
This sounds like a small technical detail, but it exposed something bigger: our infrastructure wasn't built for this pace. The CI/CD pipelines, testing workflows, deployment flows—all designed for a world where writing code is the bottleneck. When code gets written faster, you see everything else.
At some point I started running two Claude Code agents in parallel on the same repo.
The problem is obvious: conflicts. Two agents writing to the same files, stepping on each other, git merge conflicts everywhere.
The solution was to manage it like a tech lead manages a team.
I moved everything to Linear. Not just a task list—full planning. Every task with clear definitions, documented dependencies, marked milestones. Then I built what I called the "Project Manager Agent"—a sub-agent whose entire job is working with Linear.
It scans for new tasks, enriches them with context, asks me clarifying questions, checks if recent commits already solved the problem, updates priorities. Keeps the backlog ready and enriched.
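The enrichment itself was conversational, but the triage decision at its core can be sketched as a pure function. Everything below (field names, thresholds, the `Task` shape) is illustrative, not Linear's actual API schema:

```typescript
// Toy sketch of the triage step: decide what the Project Manager Agent
// should do with a task before a code-writing agent picks it up.
// Field names and thresholds are illustrative, not Linear's real schema.
type Task = {
  title: string;
  description: string;
  priority: number; // 0 = none, 1 = urgent ... 4 = low
  labels: string[];
};

type TriageAction =
  | { kind: "enrich"; questions: string[] } // missing context: ask the human
  | { kind: "ready" }                       // enough detail for an agent
  | { kind: "skip"; reason: string };       // e.g. likely already fixed

function triage(task: Task, recentCommits: string[]): TriageAction {
  // If a recent commit already mentions the task, flag it for closure
  // instead of handing it to an agent.
  const mentioned = recentCommits.some((msg) =>
    msg.toLowerCase().includes(task.title.toLowerCase())
  );
  if (mentioned) {
    return { kind: "skip", reason: "possibly fixed by a recent commit" };
  }

  // Thin descriptions become clarifying questions for the human.
  const questions: string[] = [];
  if (task.description.trim().length < 40) {
    questions.push("What is the expected behavior, step by step?");
  }
  if (task.priority === 0) {
    questions.push("How urgent is this relative to the current milestone?");
  }
  if (questions.length > 0) return { kind: "enrich", questions };

  return { kind: "ready" };
}
```

In practice the agent fed the "enrich" questions back to me in the terminal and wrote the answers into the Linear issue, so by the time a coding agent picked up a task, the context was already there.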
My setup during those nights:
- Terminal 1: Project Manager Agent updating Linear
- Terminal 2: Claude Code on Task A
- Terminal 3: Claude Code on Task B
- Terminal 4: Me browsing the site, throwing bugs into the queue
When conflicts did happen—mostly in Prisma migrations—I described the situation to Claude and it resolved them. I also directed each agent to commit only to its own files. Prisma helped here: a single central schema file prevents many collisions in the first place.
This isn't magic. It's management. But here's what struck me:
I was doing manually what should be automated.
The enrichment, work distribution, conflict prevention, Linear synchronization—all of this should be infrastructure, not manual work.
In February 2001, seventeen software developers met at a ski resort in Utah. They were frustrated with heavyweight development methodologies—documentation-heavy, plan-driven processes that couldn't adapt to changing requirements.
They wrote the Agile Manifesto. Four values, twelve principles. The core insight: the existing SDLC was designed for a world where requirements could be known upfront and change was expensive. The reality was different.
Seven years later, around 2008, a similar frustration emerged. Agile had transformed development, but operations was still stuck in the old model. Teams could ship code fast, but deployment remained slow, fragile, manual. DevOps emerged to bridge that gap—bringing deployment into the same agile cadence as development.
Each time, the pattern was the same: a step-change in one part of the development process exposed bottlenecks elsewhere. The SDLC had to evolve to catch up.
We're in another such moment.
I'm an engineer, not a designer. And in the old site, it showed.
The UI worked, but it wasn't good. I didn't have the intuition to know why a button should be here instead of there, why these margins and not others.
So I did what I know how to do: I went to read books. But this time, instead of reading myself, I built a tool that lets Claude Code read them.
I called it Candlekeep. A CLI tool that gives Claude access to books like a human reads them. It sees the library, browses tables of contents, reads the relevant chapters for the task at hand.
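I'll go deeper on Candlekeep in a later post, so treat this as a toy sketch of the core idea rather than its actual implementation: expose tables of contents, score chapters by overlap with the task, and read only the few that matter. All names here are hypothetical:

```typescript
// Toy sketch of the Candlekeep idea (not its real implementation):
// instead of dumping whole books into context, rank chapters by keyword
// overlap with the task, so the agent reads a few chapters, not 6,000 pages.
type Chapter = { title: string; pages: number };
type Book = { title: string; toc: Chapter[] };

function relevantChapters(books: Book[], task: string, limit = 3): string[] {
  // Crude keyword extraction: lowercase words longer than three letters.
  const words = task.toLowerCase().split(/\W+/).filter((w) => w.length > 3);
  const scored: { ref: string; score: number }[] = [];
  for (const book of books) {
    for (const ch of book.toc) {
      const title = ch.title.toLowerCase();
      const score = words.filter((w) => title.includes(w)).length;
      if (score > 0) scored.push({ ref: `${book.title} :: ${ch.title}`, score });
    }
  }
  // Highest-overlap chapters first, capped at `limit`.
  return scored
    .sort((x, y) => y.score - x.score)
    .slice(0, limit)
    .map((s) => s.ref);
}
```

The real tool does this interactively—Claude browses the tables of contents itself—but the keyword sketch captures why it scales: retrieval cost grows with the task, not with the library.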
I loaded 21 books:
Core UI/UX:
- Don't Make Me Think (Steve Krug)
- Refactoring UI (Adam Wathan & Steve Schoger)
- Laws of UX (Jon Yablonski)
- 100 Things Every Designer Needs to Know About People (Susan Weinschenk)
Behavioral Design:
- Hooked: How to Build Habit-Forming Products (Nir Eyal)
- Continuous Discovery Habits (Teresa Torres)
Plus 15 more on psychology, game design, and product management. Total: 6,114 pages of design knowledge available as context.
For every design decision, instead of guessing, I sent Claude to the books. It came back with grounded recommendations, offered options. Since I'm visual, I asked it to build demo pages showing each option using my existing design system. Then I chose.
I didn't become a designer. But suddenly I had access to designers' knowledge, and the ability to apply it quickly.
Most websites are built for two types of consumers: humans and search engines.
But there's a third player emerging: language models.
People already ask Claude, ChatGPT, and Perplexity questions. These models need access to information to answer well.
So I built the site with LLM-first design.
I created an endpoint called llms.txt: a route that returns site content in a format language models can digest. Not HTML full of noise, but clean markdown with the important information.
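The llms.txt proposal (llmstxt.org) suggests a simple shape: an H1 title, a one-line summary as a blockquote, then sections of markdown links. A minimal sketch of rendering that shape—the site data here is made up:

```typescript
// Minimal sketch of rendering an llms.txt body following the llmstxt.org
// convention: H1 title, blockquote summary, then sections of markdown links.
// The site data passed in is illustrative.
type Page = { title: string; url: string; summary: string };

function renderLlmsTxt(
  siteName: string,
  tagline: string,
  sections: Record<string, Page[]>
): string {
  const lines: string[] = [`# ${siteName}`, "", `> ${tagline}`, ""];
  for (const [section, pages] of Object.entries(sections)) {
    lines.push(`## ${section}`, "");
    for (const p of pages) {
      // One link per page, with a short summary a model can skim.
      lines.push(`- [${p.title}](${p.url}): ${p.summary}`);
    }
    lines.push("");
  }
  return lines.join("\n");
}
```

In a Next.js App Router project, one way to serve this is a route handler (a `route.ts` inside an `app/llms.txt/` folder) that returns the string with a `text/plain` content type.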
I added a "Share with AI" button that gives users a ready prompt to paste into their preferred model. And I built analytics that track who accesses it—Claude, ChatGPT, Perplexity, Gemini, or a regular browser.
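The analytics side reduces to classifying the user agent of each request. The bot tokens below are the publicly documented crawler names at the time of writing—they change, so treat the list as a starting point, not a spec:

```typescript
// Sketch of classifying requests by user agent so analytics can separate
// AI crawlers from human browsers. Tokens are the publicly documented
// crawler names at the time of writing; they change over time.
type Visitor = "claude" | "chatgpt" | "perplexity" | "gemini" | "browser";

const BOT_TOKENS: [string, Visitor][] = [
  ["claudebot", "claude"],        // Anthropic's crawler
  ["claude-web", "claude"],
  ["gptbot", "chatgpt"],          // OpenAI training crawler
  ["chatgpt-user", "chatgpt"],    // OpenAI on-demand fetches
  ["oai-searchbot", "chatgpt"],   // OpenAI search crawler
  ["perplexitybot", "perplexity"],
  ["google-extended", "gemini"],
];

function classifyVisitor(userAgent: string): Visitor {
  const ua = userAgent.toLowerCase();
  for (const [token, visitor] of BOT_TOKENS) {
    if (ua.includes(token)) return visitor;
  }
  // Anything unrecognized is treated as a regular browser.
  return "browser";
}
```

Logging this classification per request to the llms.txt route is enough to see which models are actually reading the site.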
If someone asks "What is Squid Club?" and Claude has access to current information from the site, the answer will be accurate and up-to-date. A new distribution channel. And almost nobody is building for it yet.
At the end of those five days (December 23-27, 2025), the tally: 69,000 lines of code, 160 React components, one production website rebuilt from scratch.
But the numbers aren't what stayed with me.
What stayed with me was the feeling that I did manually things that should be automatic.
I managed agents like a tech lead instead of having infrastructure do it. I manually connected Linear to Claude instead of having real integration. I built enrichment logic inside a sub-agent instead of it being part of the workflow.
Our SDLC—the entire cycle from planning through development to deployment—was designed for a world where humans write code.
Now we have agents that write code, but the infrastructure around them is still old. Jira, Linear, GitHub Actions—they don't know what an agent is. They don't know how to distribute work, prevent conflicts, do automatic enrichment.
Grace Hopper, when promoting compilers in the 1950s, encountered this: management thought automatic programming would make programmers obsolete. She had to repeatedly demonstrate that these tools would augment programmers' productivity, not replace the need for skilled people.
In fact, far from eliminating jobs, high-level languages led to an explosion in demand for programmers. They opened the door for many more people to become programmers. The pool widened.
The same pattern is unfolding now. The question isn't whether AI replaces developers. The question is: what needs to change in our tooling to support this new way of working?
Product managers will need tools that translate requirements into enriched tasks automatically. QA will need infrastructure that generates and runs tests at agent pace. Team leads will need ways to manage parallel work without manual synchronization. Analysts will need visibility into what happens when five agents work on the same codebase.
What I did in this project was a manual prototype of something that should be a product.
A year ago, a project like this would have taken me months. I would have needed to hire a designer or learn design myself. I would have worked alone, one agent, one task at a time. I would have waited fifteen minutes for every deployment.
Something changed in what an individual can build.
And something hasn't yet changed in the tools that support it.
That gap is an opportunity.
The site is live: www.squid-club.com
In upcoming posts, I'll dive deeper into each piece:
- Multi-agent workflows with Linear: How to build a Project Manager Agent
- Candlekeep: How to give AI access to books and why it changes everything
- LLM-first web design: How to build sites that language models can consume
And if you want to talk about these things—we have an entire community doing exactly that.
References:
- Brooks, Frederick P. "No Silver Bullet—Essence and Accident in Software Engineering." Proceedings of the IFIP Tenth World Computing Conference, 1986.
- Backus, John. "The History of FORTRAN I, II, and III." ACM SIGPLAN Notices, 1978.
- Beck, Kent, et al. "Manifesto for Agile Software Development." 2001.
- Haldar, Vivek. "When Compilers Were the 'AI' That Scared Programmers." 2024.
- Wikipedia contributors. "Assembly language." Wikipedia, 2024.
- Wikipedia contributors. "History of compiler construction." Wikipedia, 2024.
This post is part of an ongoing series about AI-first development practices. Follow along at www.squid-club.com/blog.