How I Turned Claude Into My Health Guru (And Discovered Why AI Needs to Sleep)
A personal journey into sleep optimization with AI unexpectedly reveals fundamental limitations in how language models handle memory and knowledge. Discover why AI might need its own version of sleep, and how insights from neuroscience could revolutionize how AI systems learn and remember.

Recent personal events led me on a fascinating journey - I started as someone who just wanted to sleep better, and ended up deep-diving into how our brains manage memory. Along the way, I discovered a fundamental problem with how language models "remember" things.
But let's start from the beginning.
At 34, I found myself training 3-4 times a week but carrying the pain and injuries from two workouts prior into every session. It all accumulated: fatigue at unreasonable hours, difficulty losing weight, and chronic pain. Something wasn't right.
I started obsessively tracking my sleep with Sleep Cycle and discovered a troubling pattern - for weeks on end I rarely got more than 7 hours a night. My deep sleep? It barely reached 20% of total sleep time, when it should be closer to 30%.
So I did something interesting: I exported all the data from Sleep Cycle (which wants an extra $10/month just for AI insights) and fed it to Claude.
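For context, here's roughly what that preprocessing looked like before I pasted anything into a chat. This is a minimal sketch: the column names (`night`, `total_sleep_seconds`, `deep_sleep_seconds`) are placeholders, not Sleep Cycle's actual export schema.

```python
import pandas as pd

# Minimal sketch: load a nightly sleep export and compute the deep-sleep share.
# Column names are placeholders, not Sleep Cycle's actual export schema.
df = pd.read_csv("sleepcycle_export.csv", parse_dates=["night"])

df["sleep_hours"] = df["total_sleep_seconds"] / 3600
df["deep_sleep_pct"] = df["deep_sleep_seconds"] / df["total_sleep_seconds"] * 100

# A weekly summary is usually enough context to paste into a conversation.
weekly = df.set_index("night").resample("W")[["sleep_hours", "deep_sleep_pct"]].mean()
print(weekly.round(1))
```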
Here I realized something important: humanity's distilled knowledge passes through written form, and above all through books.
So instead of asking Claude "give me tips for better sleep," we went on a research journey:
- We searched for leading books on sleep (Why We Sleep by Matthew Walker, The Sleep Revolution by Arianna Huffington)
- Read recent papers on sleep architecture and memory consolidation
- Understood the science behind slow-wave sleep and REM
Claude didn't just tell me "sleep more" - it identified specific patterns:
- My deep sleep is impaired on days I train late
- There's a correlation between caffeine intake after 2 PM and sleep quality (see the sketch after this list)
- My fasting schedule (IF) isn't aligned with my training windows
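The caffeine finding is the easiest one to sanity-check yourself. Here's a rough sketch of that kind of check, assuming you've logged which nights followed afternoon caffeine; the numbers below are toy values for illustration only.

```python
import pandas as pd

# Hypothetical merged log: one row per night, a flag for caffeine after 14:00,
# and that night's deep-sleep percentage. Toy values, for illustration only.
log = pd.DataFrame({
    "caffeine_after_14": [1, 0, 1, 0, 0, 1, 0, 1],
    "deep_sleep_pct":    [14, 22, 15, 24, 21, 13, 23, 16],
})

# Correlation between a binary habit and a continuous outcome.
print(log["caffeine_after_14"].corr(log["deep_sleep_pct"]))

# Or simply compare group means, which is easier to reason about.
print(log.groupby("caffeine_after_14")["deep_sleep_pct"].mean())
```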
I connected Claude to my calendar and added:
- Weekly training schedule
- Fasting plan (16:8 intermittent fasting)
- Body weight and fitness goals
- Ongoing sleep data
The result? A personalized plan that accounts for all variables and updates in real-time.
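In practice, "connecting" just meant keeping a structured context blob up to date and including it in the conversation. A sketch of what mine looked like - the field names and values are my own convention and purely illustrative, not any schema Claude expects:

```python
# Illustrative context blob kept up to date and attached to each conversation.
health_context = {
    "training": {"days": ["Mon", "Wed", "Fri"], "usual_start": "19:30"},
    "fasting": {"protocol": "16:8", "eating_window": "12:00-20:00"},
    "goals": {"primary": "raise deep-sleep share", "secondary": "fat loss"},
    "latest_sleep": "see attached weekly export",
}
```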
But here's where it gets interesting from a technical perspective. As an engineer, I started thinking: why does Claude rely only on what it remembers from training on these books?
Think about it this way:
Say I want to add a new feature to the numpy library. I have two options:
Option A: Ask Claude Code to write code based on what it "remembers" about numpy from training time
Option B: Give Claude Code direct access to numpy's code, let it explore the architecture, conventions, and existing API
We all agree that option B will yield significantly better results.
Why are we willing to accept Claude giving us nutritional advice based on a fuzzy memory of a book it "saw" in training, instead of giving it direct access to the book?
This gap is enormous. A book like "Why We Sleep" is 368 pages of detailed research, precise data, specific protocols. But Claude only gets a "general impression" of the book from training time.
It's like relying on someone who read the book a year ago instead of opening the book yourself.
The conclusion: We need a dedicated tool that gives language models dynamic access to literature.
Imagine such a system (see the sketch after this list):
- Claude can open a book in real-time
- Navigate the table of contents
- Read relevant chapters
- Leave notes for itself (!) - so the next time it approaches the book, it builds on previous reading
- Do cross-referencing between different books
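To make the idea concrete, here's what the surface of such a tool might look like. None of this is an existing API; it's just the minimal interface I'd want to hand a model, for example as a set of callable tools:

```python
from dataclasses import dataclass, field

@dataclass
class Book:
    title: str
    chapters: dict[str, str]                         # chapter title -> full text
    notes: list[str] = field(default_factory=list)   # notes the model leaves for itself

class Library:
    """Hypothetical tool surface for giving a model live access to books."""

    def __init__(self, books: list[Book]):
        self.books = {b.title: b for b in books}

    def table_of_contents(self, title: str) -> list[str]:
        return list(self.books[title].chapters)

    def read_chapter(self, title: str, chapter: str) -> str:
        return self.books[title].chapters[chapter]

    def annotate(self, title: str, note: str) -> None:
        # Persisting these notes across sessions is what lets the next
        # reading build on the previous one.
        self.books[title].notes.append(note)

    def search(self, query: str) -> list[tuple[str, str]]:
        # Naive full-text scan; a real version would use an index.
        q = query.lower()
        return [(t, c) for t, b in self.books.items()
                for c, text in b.chapters.items() if q in text.lower()]
```

The annotate method is the interesting part: the persisted notes are what turn a one-off lookup into cumulative reading.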
This isn't just technical - it's a paradigm shift. Currently, we expect models to contain all knowledge in their parameters. But what we really want is for them to have dynamic access to information, just like we do.
Now we enter the most fascinating part - personal memory.
After two months of working with Claude on my health, its "memory" is basically just conversations it can search through. It's like a person with amnesia who has to reread their diary from scratch every time.
What are the existing solutions?
- Claude provides access to conversation history (crude search)
- There are tools that let models write "memories" (they work at only a basic level, and unreliably)
- People manually build memory banks
But no solution really works like actual human memory.
So I investigated: How does our brain do it?
From my research with Claude on neuroscience, I discovered the brain has an amazing two-stage memory system:
During daytime hours, the hippocampus operates like a fast recording device:
- Records experiences in compressed form
- Uses sparse coding - only ~5% of neurons are active for each memory (not dense embeddings!)
- Stores "indexes" to experiences, not all the details
It's like writing "meeting with Danny in the cafeteria - discussed Project X" instead of recording the entire conversation.
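If you want to see what "sparse, not dense" means in code, here's a toy sketch: a dense embedding keeps every dimension a little bit active, while a sparse code keeps only the strongest few. This is a loose analogy to hippocampal coding, not a model of it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify(dense: np.ndarray, active_fraction: float = 0.05) -> np.ndarray:
    """Keep only the top ~5% strongest components, zero the rest."""
    k = max(1, int(len(dense) * active_fraction))
    out = np.zeros_like(dense)
    top = np.argsort(np.abs(dense))[-k:]
    out[top] = dense[top]
    return out

dense_memory = rng.normal(size=512)      # dense embedding: everything a little active
sparse_memory = sparsify(dense_memory)   # sparse code: ~5% of units active
print(np.count_nonzero(dense_memory), np.count_nonzero(sparse_memory))  # 512 vs ~25
```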
The cortex, by contrast, is where our long-term structured knowledge lives:
- Hierarchical organization
- Connections between concepts
- General patterns abstracted from specific experiences
And here's where the magic happens.
During deep sleep, a process called Sharp-Wave Ripples (SWRs) occurs:
- The hippocampus "broadcasts" the day's experiences to the cortex at 5-20x real-time speed
- This isn't passive transmission - the cortex extracts patterns, creates generalizations, and integrates new memory with existing knowledge
- The brain chooses what's important: emotional events, things likely to be useful, information that connects to existing knowledge
This isn't just storage - it's active transformation of information.
When you sleep, your brain:
- Cleans noise - what's unimportant is forgotten
- Extracts principles - if you learned 5 examples of something, sleep helps you understand the general rule
- Creates connections - connects new knowledge to existing knowledge in creative ways
- Checks consistency - ensures new memory doesn't contradict what you already know
This is why "sleeping on it" actually works!
Now you'll understand the problem:
Today's language models are like a person who has never slept.
- Every conversation is isolated
- No consolidation of memories
- No abstraction of patterns
- No learning from interactions
- Everything must be in the context window
It's like trying to work only with RAM and never saving anything to disk - except the disk is also missing here.
Models need a consolidation phase where they:
- Review important interactions
  - Not everything - only what was important or relevant
  - Use "importance tagging" (like dopamine in the brain)
- Extract recurring patterns
  - "The user always asks about Python in the context of data science"
  - "They prefer concrete examples over theoretical explanations"
- Build sparse representations (not dense)
  - Only essential information is stored
  - Saves memory and reduces interference between memories
- Do cross-referencing
  - Connect different conversations
  - Create a holistic understanding of the user
And all of this happens when AI "sleeps" - meaning, during computer idle time.
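Concretely, an AI "sleep cycle" could be as unglamorous as a batch job that runs while the machine is idle. Here's a rough sketch of its shape; the scoring and pattern-extraction steps are the genuinely hard parts, so they're stubbed out here.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    text: str
    importance: float = 0.0   # "dopamine tag": set during the conversation

def score(interaction: Interaction) -> float:
    # Stub. A real system might combine recency, user feedback,
    # emotional salience, and novelty.
    return interaction.importance

def extract_patterns(kept: list[Interaction]) -> list[str]:
    # Stub. In practice this would be an LLM pass: "what recurring
    # preferences and facts appear across these interactions?"
    return [f"pattern derived from {len(kept)} interactions"]

def consolidate(day: list[Interaction], memory: list[str],
                keep_fraction: float = 0.2) -> list[str]:
    """One 'night of sleep': keep the important slice, abstract it, merge it."""
    day_sorted = sorted(day, key=score, reverse=True)
    kept = day_sorted[: max(1, int(len(day) * keep_fraction))]
    new_patterns = extract_patterns(kept)
    # Cross-referencing and consistency checks against existing memory
    # would happen here before merging.
    return memory + new_patterns
```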
There are two major problems waiting for someone to solve:
Problem #1: Models rely on fuzzy memory from training
The Solution: A system that gives models dynamic access to books, papers, and documents
Why It's Critical: The gap between "remembers numpy" and "has access to numpy's code" is enormous
This tool needs:
- Management and organization of digital books
- Ability for the model to read, scan table of contents, search
- Annotation system the model can leave for itself
- Cross-referencing between different sources
Problem #2: Models don't really "remember" you efficiently
The Solution: Brain-inspired consolidation system
Why It's Critical: True personalization requires continuous learning, not just context
The system needs:
- Fast hippocampus stage (recording interactions)
- Consolidation stage (extracting patterns)
- Sparse encoding (efficient storage)
- Importance scoring (what to keep)
- And all of it local, on your computer, for privacy
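On the "local, for privacy" point: none of this needs a cloud service. A sketch of keeping consolidated memories in a plain SQLite file on disk - the table layout is my own invention, not a standard:

```python
import sqlite3

# All consolidation output stays on the user's machine; the table layout
# below is illustrative, not a standard.
conn = sqlite3.connect("memory.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS memories (
        id INTEGER PRIMARY KEY,
        kind TEXT,            -- 'pattern', 'fact', 'preference'
        content TEXT,
        importance REAL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO memories (kind, content, importance) VALUES (?, ?, ?)",
    ("preference", "prefers concrete examples over theory", 0.8),
)
conn.commit()

# Retrieval at conversation start: pull only the highest-importance items.
rows = conn.execute(
    "SELECT content FROM memories ORDER BY importance DESC LIMIT 5"
).fetchall()
print([r[0] for r in rows])
```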
I started with a simple sleep problem and discovered something much deeper:
AI doesn't just need to be "smart". It needs to be:
- Connected to information (access to literature, not just memory)
- Personally adapted (real memory, not just conversations)
- Evolving (consolidation, not static)
And as our brain teaches us: True intelligence doesn't just happen in the moment of processing. It happens in the quiet intervals between - in consolidation, integration, in the slow building of understanding that occurs precisely when conscious processing stops.
Perhaps that's why, no matter how large context windows become, we'll always need a smart memory system. Not because of technical limitations, but because intelligent memory isn't about storage - it's about transformation.
And transformation, as the brain teaches us, requires time, repetition, and most surprisingly of all - sleep.
The ideas in this post grew from a journey that started with a simple question about sleep and led to deep exploration of neuroscience, computer architecture, and AI systems. We'd love to see you in the community and hear what you think about these ideas.