The First Book No Human Should Read: Why I Wrote a UI/UX Guide for AI Agents
After 10 years building backend systems, I discovered why AI agents fail at UI design -- and wrote the first book meant only for agents, not humans.

This year will be my tenth year writing software. Eighty percent of that time I spent in the dark. Algorithms, backend methods, systems no user ever sees or needs to know exist. Ten years, and I almost never built an interface for a human being. Which is strange, because the entrepreneurial part of me was always looking to solve real problems, to make people's lives better. I was building the engine, but never the steering wheel.
The first time I seriously built a human-facing interface was about two and a half years ago, when I founded a startup building a visual, no-code trading system: clients could assemble trading methodologies that would normally cost thousands of dollars in custom development. I remember the faces of the clients the moment they understood what they could do. Eyes opening wide. That moment when someone realizes they suddenly can -- there's nothing more addictive.
I took a full-stack course. I was far from the best. I built interfaces that got the job done, but honestly not things a designer would put in their portfolio.
Now it's 2026. Agents build everything for me. But there's one place they still fall short, and it happens to be the place with the biggest impact on what the user actually feels: user interfaces.
The reason is simple. UI is opinionated. Like interior design -- bring two designers to the same space and you'll get two completely different spaces. Like financial markets -- two advisors, two opposing strategies. The beliefs, taste, and experience of the person giving the opinion dramatically change the outcome.
And language models? They were trained on everyone's opinions. They are the ultimate design-by-committee. Anyone who's worked in the industry knows that design by committee produces mediocrity. Apple didn't build its products by majority vote. An agent without clear visual direction will produce the average of everything it has seen, and in design, the average is always mediocre.
Research backs this up. A 2025 study published in Springer's AI & Society found that individual differences in personality traits fundamentally shape UX preferences, and that design needs to treat those differences as a primary consideration. This means there genuinely is no single "correct" design for all users, which is exactly why a model trained on all opinions defaults to the average of none of them.
Meanwhile, the design world in 2026 is actively rebelling against AI-generated sameness. Designers are embracing what's being called the "tactile rebellion" -- friction, texture, imperfection -- as a direct response to the sterile, generic output of AI tools. If agents keep producing design-by-committee interfaces, the gap between "AI-built" and "human-designed" will only widen.
Over the past year I learned a few things that made a real difference.
The first thing I learned from my wife, who's a designer. For her, gathering inspiration isn't optional -- it's mandatory. When I first started a new project I'd try to imagine what the site should look like. A square person like me? No chance. So I started doing what she does: before every new build, I scan the internet for sites whose design I admire. Screenshots, source code, everything goes into the project.
For CandleKeep, for example, I built on design elements from Linear, Entire.io, and Clerk. This way the agent doesn't get "build me a nice website" -- it gets "look at these specific elements from these sites and build something in this style." The difference in output quality is dramatic.
Every time the agent builds a component, I ask for three alternatives. I pick one, then ask for three variations of that one. This way, instead of trying to choose from written markdown descriptions, I see the options visually and choose. Small change in process, enormous change in results.
I combine leading UI/UX books with real data from PostHog on what users actually do on the site. The data tells me what's not working. The books tell me why, and what to do instead.
The books in my stack include Refactoring UI, Don't Make Me Think, Laws of UX, 100 Things Every Designer Needs to Know, Hooked, and UX for Beginners, among others.
These aren't abstract theory books. They're full of concrete, actionable principles. But they were all written for humans.
I'm a book person. When I don't understand something, I read. When I don't know how to do something, I read. That's how I learned to build web apps, and that's how I taught my agents -- I connected them to professional books through CandleKeep.
But after a few months of working this way, I realized something that should have been obvious: agents read differently than humans.
Humans need the "why." We need the story of The Monk Who Sold His Ferrari to internalize a principle. We need motivation, context, emotion. An agent wants to know what to do, what's correct, what's not. It doesn't need to be convinced. It doesn't need motivation.
I understood this while building Skills for agents and working with Anthropic's best practices. The format that works best for them is clear, unambiguous rules with no background stories. The exact opposite of books.
Consider the difference:
How a human book teaches "visual hierarchy":
"Imagine walking into a room where everything is the same size, the same color, the same weight. Your eye has nowhere to land. Now imagine that same room with one piece of art on the wall, lit by a single spotlight. That's visual hierarchy -- guiding the eye to what matters most..."
How an agent book teaches the same concept:
"Establish visual hierarchy through size, color, and weight contrast. Primary actions: large, high-contrast, bold. Secondary actions: medium size, muted color. Tertiary: small, low-contrast. Never give equal visual weight to more than one element per section."
Same knowledge. Completely different packaging. The human version builds understanding through metaphor. The agent version delivers actionable rules the model can apply immediately.
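To make that difference concrete, here is a minimal sketch (my own illustration, not taken from the book) of what "actionable rules an agent can apply immediately" can mean in practice: the hierarchy rules above expressed as data, with a checker a build step or agent could run. All names here are invented for the example.

```python
# Illustrative sketch: visual-hierarchy rules stated as checkable data,
# the way an agent-oriented book phrases them. Names are hypothetical.

HIERARCHY_RULES = {
    "primary":   {"size": "large",  "contrast": "high",  "weight": "bold"},
    "secondary": {"size": "medium", "contrast": "muted", "weight": "normal"},
    "tertiary":  {"size": "small",  "contrast": "low",   "weight": "normal"},
}

def violations(section):
    """Return rule violations for a list of (name, role) elements in one section."""
    problems = []
    primaries = [name for name, role in section if role == "primary"]
    if len(primaries) > 1:
        # "Never give equal visual weight to more than one element per section."
        problems.append(f"multiple primary elements: {primaries}")
    for name, role in section:
        if role not in HIERARCHY_RULES:
            problems.append(f"{name}: unknown role {role!r}")
    return problems

hero = [("Sign up", "primary"), ("Learn more", "secondary"), ("Docs", "tertiary")]
print(violations(hero))  # → []
```

A human book would never need this; a rule that can be checked mechanically is exactly the format an agent can act on without being persuaded first.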
I started searching history for whether anyone had already solved this problem -- how to take the same knowledge and package it differently for a different type of reader.
Nassim Taleb writes in Antifragile that time is the best measure of fragility. Something that has survived 1,000 years will probably survive another 1,000. What I found had survived 1,500 years.
A page of Talmud is a masterpiece of information design. The Mishnah and Gemara sit in the center -- the core text. Rashi's commentary runs along the inner margin, providing plain explanation and accessibility, while the Tosafot commentary runs along the outer margin, offering deep critical analysis. Same knowledge, three layers, three types of readers.
When the rabbis wanted to make the knowledge accessible to a broader audience, they extracted the stories into separate books -- the Ein Yaakov -- because not every reader needs the same format. The late antique scholars had developed sophisticated methods for this: epitomizing, abbreviating, compressing, anthologizing. They understood that knowledge transfer isn't just about what you teach, but how you package it for the specific reader.
The llms.txt movement today is doing something similar at the documentation level -- websites creating AI-optimized versions of their content. Companies report up to 10x token reductions when serving markdown instead of HTML to language models. Anthropic itself serves an llms.txt file for its documentation.
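For readers unfamiliar with the format: an llms.txt file is just a small markdown document at a site's root, with a title, a one-line summary, and sections of links. A minimal, entirely hypothetical example (the URLs and page names below are placeholders, not real CandleKeep pages) might look like:

```markdown
# CandleKeep
> A library of professional books packaged for AI agents.

## Docs
- [Getting started](https://example.com/docs/start.md): connect an agent to a book
- [Rule format](https://example.com/docs/rules.md): how actionable rules are structured

## Optional
- [Blog](https://example.com/blog.md): background and announcements
```

The point of the format is the same as the Talmud's layered page: one source of knowledge, repackaged for a different kind of reader.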
But llms.txt is about documentation and APIs. I'm talking about something deeper: books. Deep professional knowledge. The kind of knowledge that takes a human years to accumulate.
So I did exactly what the Talmud scholars did. I took over 1,400 pages from the leading UI/UX books in the world -- Refactoring UI, Don't Make Me Think, Laws of UX, 100 Things Every Designer Needs to Know, Hooked, UX for Beginners, and more -- and wrote a new book. 36,000 words. Over 170 clear, actionable rules.
This book is not meant for human eyes. It's dry, direct, no stories. Exactly what an agent needs.
The hero section of CandleKeep -- "Turn Any AI Agent Into a Domain Expert" -- was built by an agent that read this book. A one-liner that communicates the value the moment you land on the page.
I think this is a first swallow -- the early sign of something much bigger. Imagine a world where every professional domain has books written for agents -- accounting, investment advisory, medicine, law. Not help docs. Professional knowledge.
And the big difference: a book for agents is a living thing. It evolves, grows, and can grow differently for each user. An agent reads the book and generates its own notes and side documents based on what its specific user needs. That's the vision for CandleKeep -- a knowledge source that grows beyond its base and adapts for each user.
1,500 years ago, scholars packaged knowledge in layers for different types of readers. Now, for the first time, we're doing it for readers who aren't human at all.
The book is available for free on CandleKeep.
- Nassim Nicholas Taleb, Antifragile: Things That Gain from Disorder -- Time as the ultimate measure of robustness
- The Talmud's Genre among Late Antique Genres -- How the Talmud pioneered layered knowledge design
- The llms.txt Standard -- Structuring web content for AI consumption
- Optimizing API Docs for AI Agents -- 10x token reduction with AI-optimized content
- When Technology Meets Personality (Springer, 2025) -- Why individual differences shape UX preferences
- Aesthetics in the AI Era: 2026 Design Trends -- The "tactile rebellion" against AI-generated sameness
- Anthropic: Building Effective Agents -- Best practices for agent skill design
- CandleKeep -- The first library for AI agents