Some things get worse when you automate them.
What we automate
Summaries, first-pass drafts, data cleanup, analysis with explicit rules.
I fed our cap table, bank transactions, Stripe data, and analytics to an AI agent with specific instructions. It worked well. The agent could answer questions, generate reports, and trace relationships between data sources.
This works because the problem is clear. The data is structured. The rules are explicit.
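For concreteness, here's a rough sketch of what that kind of setup can look like. This isn't the actual pipeline; the file paths, the instructions, the answer_question helper, and the OpenAI client call are stand-ins for whatever agent and data exports you're working with. The point is the shape: structured exports, explicit rules, a narrow question.

```python
# Hypothetical sketch, not the real pipeline. Assumes CSV exports of the
# cap table, bank transactions, and Stripe payments, plus an OpenAI-style
# chat client. File names and the answer_question helper are placeholders.
from pathlib import Path

from openai import OpenAI

INSTRUCTIONS = """You are a finance analyst.
Answer only from the data provided. Cite the source table for every figure.
If a number cannot be derived from the data, say so instead of guessing."""


def load_table(path: str) -> str:
    """Read a CSV export and return it as plain text for the prompt."""
    return Path(path).read_text()


def answer_question(question: str) -> str:
    # Explicit rules plus structured data: the combination described above.
    context = "\n\n".join(
        f"## {name}\n{load_table(path)}"
        for name, path in [
            ("cap_table", "exports/cap_table.csv"),
            ("bank_transactions", "exports/bank_transactions.csv"),
            ("stripe_payments", "exports/stripe_payments.csv"),
        ]
    )
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


print(answer_question("How much runway do we have at the current burn rate?"))
```

Because the rules are written down and the data is tabular, you can check the agent's answers against the source rows. That's what makes this side of the line safe to automate.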
What we slow down
Product direction, trust and privacy tradeoffs, strategy, narrative.
For these, I use Obsidian, Logseq, or pen and paper. I don't use AI.
When the problem is unclear, AI accelerates confusion. It gives you an answer fast, but the answer is often wrong in ways that are hard to notice. You end up iterating on bad ideas instead of sitting with the problem until you understand it.
Writing by hand or in a tool like Obsidian forces you to think. AI lets you skip that step. Sometimes skipping is good. Sometimes it's destructive.
The automation line
If you can describe the task in clear steps, AI can probably do it. If the task requires understanding context that isn't written down, or noticing that something feels off, AI will miss it.
We use AI for summaries because a summary is a mechanical transformation. Take this long thing and make it shorter. Follow these rules.
We don't use AI for product direction because product direction is about what matters, not what's possible. Those are different questions, and AI is only equipped for the second one.
Why we hired a human for accountability
We hired Harshika Alagh to keep momentum and accountability. AI can remind you to do something. It can't push you when your motivation is gone.
This is the same reason workout apps fail and coaches work. A notification is easy to ignore. A person who shows up and asks "did you do it?" changes your behavior in ways software can't.
You can't have an AI co-founder. Empathy and shared responsibility don't scale like automation. AI can help you execute faster, but it can't replace the person who cares about the outcome as much as you do.
How we think about it
The better question turned out to be "should AI do this?" rather than "can it?"
There are plenty of things AI can do that it shouldn't. Writing the company narrative. Deciding what tradeoffs to make on privacy. Choosing what to build next. These all benefit from slow, deliberate thought, the kind that tools like Obsidian were designed for.
The instinct to automate everything is strong right now. We've gotten more from resisting it selectively than from giving in everywhere.
