A Strange New World of AI: My Second Brain Setup
How I connected Obsidian, Claude Code, and Telegram into a personal AI system that travels with me everywhere — and what it tells us about where this technology is going.
This article went from concept to published webpage on an afternoon walk. I gave my AI assistant the task from my phone, walked, stopped once on a park bench to review the draft and approve it, and pushed it to GitHub. The website updated automatically. Concept to live page, on a walk. I was not texting with my head down in the middle of the street — mostly I was just walking and thinking while the AI handled the administrative and technical work of getting it online. The thinking and writing — the human value-added part — came from pieces of content I already had on my computer in various forms. The AI assembled, formatted, and published it. That is the kind of work AI is increasingly able to handle, and the pace is accelerating.
This is not science fiction. This is a Tuesday afternoon in 2026.
A necessary disclosure: this article was written with AI assistance. But that sentence is almost meaningless without context. The ideas, observations, and opinions came from eighteen months of my own work — notes in my vault, half-finished drafts, consulting experiences, conversations with colleagues. The AI helped me organize, draft, challenge, and publish those thoughts. It did not generate them from nothing. The old categories — “human-written” or “AI-generated” — do not describe what actually happened here. The line between the two is blurring fast, and pretending it is still sharp does not help anyone understand what is coming.
How I Got Here
About eighteen months ago I started building AI-powered problem-solving applications using Visual Studio and traditional web development tools. My first app was a monstrosity: 50,000 lines of code, full authentication systems, Supabase databases, Retrieval Augmented Generation pipelines, and hundreds of hardcoded prompts. It worked, but it was genuinely difficult to maintain.
Then the models got better. I rebuilt the same functionality with 5,000 lines of code. Then the models got better again. Today I can have a direct conversation with a model and get comparable output without writing a single line of code. No app, no database, no deployment pipeline.
The direction of travel is obvious once you see it. The code is shrinking toward zero. The intelligence is moving into the model and into plain text files that anyone can read and edit.
What I Have Built
My current setup has three components. They sound simple because they are.
Obsidian is my second brain. It is a note-taking application that stores everything as plain markdown files on my local computer. Research notes, project plans, writing drafts, consulting work, session summaries. Eighteen months of accumulated thinking, all searchable, all readable by a human or an AI. The vault syncs to a private GitHub repository so it is backed up and accessible from any device.
Claude Code is an AI coding and reasoning assistant that runs as a command-line tool on my Linux computer. Unlike the standard chat interface at claude.ai, Claude Code can read and write files on my machine, run code, search the web, and take multi-step actions. It reads my vault. It knows my projects. When I start a session, it picks up where we left off because everything important is in the vault.
Telegram is the interface. A Telegram bot connects to Claude Code so that any message I send from my phone goes directly to my AI assistant running on my computer at home. The AI responds in the chat. I stay in Telegram. Everything else happens on my machine.
That is it. Three things. No monthly SaaS fees beyond my Anthropic subscription. No proprietary platform locking up my data. Everything I create is a plain text file I own.
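For the curious, the Telegram plumbing really is thin. Here is a hypothetical sketch, not my exact setup: it assumes Claude Code's non-interactive mode (`claude -p`) for the core call, and leaves the bot wiring as a comment, since the library and token are environment-specific.

```python
import subprocess

def ask_claude(prompt: str, runner=subprocess.run) -> str:
    """Forward one message to Claude Code in non-interactive mode and
    return its reply. `runner` is injectable so the function can be
    exercised without Claude Code installed."""
    result = runner(["claude", "-p", prompt], capture_output=True, text=True)
    return result.stdout.strip()

# Hypothetical Telegram wiring (python-telegram-bot style), shown only
# as a comment sketch because handler details vary by setup:
#
#   from telegram.ext import ApplicationBuilder, MessageHandler, filters
#
#   async def on_message(update, context):
#       await update.message.reply_text(ask_claude(update.message.text))
#
#   app = ApplicationBuilder().token("YOUR_BOT_TOKEN").build()
#   app.add_handler(MessageHandler(filters.TEXT, on_message))
#   app.run_polling()
```

Everything after the message arrives happens on the machine at home; the phone only ever sees chat text.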
What This Looks Like in Practice
This morning I had a conversation about the architecture of AI tools, the work I am doing with a manufacturing client, and the design of a new problem-solving application. By the end of it, the AI had filed notes from the conversation in my vault, created a project brief, and updated a research file with three new articles.
I did not sit down at a computer to do any of this. I was having a conversation, the way you would with a knowledgeable colleague.
When I ask a question, Claude searches my vault first, then the web if needed. When I want to capture something, it creates or updates the appropriate file. When I want to draft an article, it reads my writing style guide from the vault and writes in my voice. When I want to check on a project, it reads the project file and gives me a status update.
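To make the "vault first" step concrete: because the vault is just markdown files on disk, the lookup needs nothing fancier than a text scan. This is a minimal sketch; the function name and signature are mine, not part of any tool.

```python
from pathlib import Path

def search_vault(vault: Path, query: str, max_hits: int = 5) -> list[tuple[str, str]]:
    """Naive vault-first lookup: scan every markdown note for the query
    and return (note name, matching line) pairs, capped at max_hits."""
    hits = []
    for note in sorted(vault.rglob("*.md")):
        for line in note.read_text(encoding="utf-8").splitlines():
            if query.lower() in line.lower():
                hits.append((note.name, line.strip()))
                if len(hits) >= max_hits:
                    return hits
    return hits
```

Real setups do better than substring matching, but the point stands: plain files make the simplest possible retrieval viable.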
The key insight is that my Obsidian vault gives the AI persistent context. Each conversation is fresh, but the vault carries the accumulated memory. The more I put into it, the more useful the AI becomes. The knowledge compounds over time.
The Skills Files Insight
Here is where this connects to something bigger.
The first apps I built eighteen months ago encoded intelligence in code. Thousands of lines of logic, database schemas, API calls. When I wanted to change how the AI behaved, I had to change the code and redeploy.
The new approach encodes intelligence in plain text files — skills files, in the terminology that has emerged around this architecture. A skills file for problem-solving step two might describe what good looks like, what mistakes to watch for, what questions to ask, and what a model response should contain. An AI reads the skills file at runtime and behaves accordingly.
To change how the AI behaves, I edit a text file. No code. No deployment. No developer required.
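To give a flavor of what such a file contains, here is an abbreviated, invented example; the headings and wording are illustrative, not a standard.

```markdown
# Skill: Problem-Solving — Step 2 (Break Down the Problem)

## What good looks like
- The large, vague problem is split into smaller, observable problems.
- Each sub-problem names a specific gap: what, where, when, how much.

## Common mistakes to watch for
- Jumping to causes or countermeasures before the breakdown is done.
- Sub-problems stated as missing solutions ("we lack training").

## Questions to ask the user
- "Where in the process does the gap first appear?"
- "Which sub-problem, if solved, would close most of the gap?"

## A good response contains
- A prioritized list of sub-problems, each with supporting data.
```

Editing that file changes the AI's behavior on the next conversation. That is the entire deployment process.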
For work that integrates AI with human judgment, this is the direction the field is moving. Andrej Karpathy, former director of AI at Tesla and a founding member of OpenAI, described almost exactly this approach recently. His insight: instead of building complex Retrieval Augmented Generation pipelines with vector databases and chunking algorithms, use a language model to actively maintain a structured markdown knowledge base. Raw materials go in, the model compiles them into an organized wiki, and the wiki compounds in value over time. At roughly 100 articles and 400,000 words, you can ask the model complex questions against the entire knowledge base without any of the machinery that enterprise AI vendors are charging a fortune to install.
Karpathy’s conclusion was direct: “I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files.” The whole AI infrastructure industry is building retrieval pipelines. A structured markdown vault with clean summaries outperforms most of them at practical scale.
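To illustrate what "auto-maintaining index files" can mean in practice, here is a deliberately simple sketch: regenerate an `index.md` listing every note under its first heading. In Karpathy's description the model itself maintains the index; this hand-written version just shows how little machinery the idea requires.

```python
from pathlib import Path

def rebuild_index(vault: Path) -> str:
    """Regenerate a flat index of the vault: one line per note,
    linking its first markdown heading (if any) to its path."""
    lines = ["# Index", ""]
    for note in sorted(vault.rglob("*.md")):
        if note.name == "index.md":
            continue  # do not index the index itself
        heading = next(
            (l.lstrip("#").strip()
             for l in note.read_text(encoding="utf-8").splitlines()
             if l.startswith("#")),
            "(no heading)",
        )
        lines.append(f"- [{heading}]({note.relative_to(vault)})")
    index = "\n".join(lines) + "\n"
    (vault / "index.md").write_text(index, encoding="utf-8")
    return index
```

Run after every change to the vault, the index stays current, and the model reads it like any other note.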
Garry Tan, CEO of Y Combinator, built a consulting tool called Gstack that works the same way. Daniel Miessler, a cybersecurity practitioner, built a personal AI assistant around structured text files that define skills for different roles. All of them are converging on the same architecture: model plus structured text files plus a thin layer of plumbing.
The code used to be the bottleneck. Building a personal knowledge system meant months of development, databases, and deployment pipelines. Now the code is increasingly the easiest and least important part of the equation. The quality of your thinking and the structure of your knowledge matter more.
Why This Matters Beyond Personal Productivity
In 1995, finding information on the internet required knowledge of Boolean search operators and an understanding of how indexes worked. By 2000, you just typed a question. The capability was always there. The interface abstraction unlocked mass adoption.
Today, getting expert-quality output from an AI model requires knowing how to prompt it well. That is a skill most people do not have yet, and current tools underserve them.
The gap gets closed two ways. Better models that require less precise prompting. And better skills files that encode expert prompting ability so ordinary users get better output without needing to understand what is happening underneath.
I can generate a high-quality eight-step problem-solving report from this AI setup because I know what good looks like and I know how to ask for it. (Here is an example using the Apollo 13 oxygen tank failure.) A single skills file is not enough to get there. The current version uses a combination of skills files — problem-solving structure, coaching logic, and report writing — working together. Even with that, someone unfamiliar with the domain will still need guidance. The skills files lower the bar significantly, but they do not eliminate it.
This is the direction of what I am building next for the Lean Enterprise Institute — or something close to it. The models are improving fast enough that any specific prediction is probably wrong. But the general architecture — skills files, thin harness, model as the engine — is more future-proof, more portable, and easier to maintain than anything I have built before. The setup exists on my phone today. Figuring out how to make it usable for others is the next problem to solve.
What This Kind of Setup Can Already Do
Beyond writing articles on walks, the practical range of this architecture is wider than most people realize.
A phone photo of a production dashboard, a handwritten A3 report, or a defect on the shop floor can go straight to the AI and come back with detailed observations within seconds. The current generation of vision models handles this well.
A recorded meeting can be sent as an audio file and returned as a structured summary with key decisions, action items, and follow-up emails drafted. No transcription service, no separate summarization tool. One step.
Dimensional quality data can be analyzed for process drift. An SOP can be evaluated for whether it properly distinguishes major steps, key points, and reasons why — or whether it collapses them together the way most procedures do. The turnaround is minutes, not hours.
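As one concrete example of the quality-data case, a basic drift check does not need exotic tooling: one classic signal (a run of consecutive points on the same side of the historical mean, in the spirit of the Western Electric run rules) fits in a few lines. The run length and rule choice here are illustrative.

```python
from statistics import mean

def detect_drift(measurements: list[float], baseline: list[float],
                 run_length: int = 8) -> bool:
    """Flag process drift with a simple run rule: True if the last
    `run_length` measurements all fall on the same side of the
    baseline mean."""
    if len(measurements) < run_length:
        return False  # not enough evidence yet
    center = mean(baseline)
    recent = measurements[-run_length:]
    return all(m > center for m in recent) or all(m < center for m in recent)
```

The AI's advantage is not the arithmetic; it is reading the raw data out of whatever format it arrives in and explaining the signal in plain language.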
None of this is a future capability. This is what the current generation of models can do when connected to the right context and the right skills files.
Where This Likely Goes
This setup did not appear overnight. I started using Obsidian as a personal knowledge base about eighteen months ago. About twelve months ago I connected it to Claude Code so the AI could read and write my vault. The phone connection — Telegram to Claude Code — only became possible in late March 2026. Each layer built on the last, and each layer required a Linux computer, technical knowledge, and time to configure. When the next generation of models drops, predictions made today will probably look naive. So take what follows as a direction, not a forecast.
Science fiction author William Gibson said it best: “The future is already here, it’s just not evenly distributed.” The technology is here. What lags behind is the distribution — the point where most people can access this without needing to be a developer. How long that takes is harder to predict than the technology itself.
The ingredients are already visible: distilled models like Gemma running on edge devices, and voice interfaces on phones good enough to handle 80% of knowledge work. The GUI will fade as the constraint that created it fades.
The end state is not a better app. It is a capable AI that knows your context, lives on your device or close to it, and is reachable from wherever you are. The second brain connects to the AI. The AI travels in your pocket.
Karpathy described the shift as token throughput moving away from code manipulation and toward knowledge manipulation. That is exactly what I am experiencing. Less building, more thinking. Less deploying, more learning. The tool is getting out of the way.
Early versions of this exist today. They require more technical comfort than most people have. But the direction is clear, the pace of change is not slowing down, and the barrier to entry drops with every model generation.
Humans plus AI, connected well, remains greater than the sum of the parts. The connection is getting simpler every quarter. I plan to go on a lot more walks.
Humans + AI > Problems.