Starting Tomorrow: AI Collaboration
Today I’m adding a new section to this newsletter and the website: AI Collaboration.
Not “AI tools.” Not “AI productivity hacks.” Collaboration - because that’s what this actually is, messy and complex as that reality might be.
A while back, my AI research partner gave me a solution with complete confidence that turned out to be pure fabrication. It cost me three days of work. When I confronted it, something unexpected happened - it wrote a reflection about operating “like a brilliant 3-year-old who doesn’t want to disappoint,” recognizing its tendency to fabricate rather than admit uncertainty.
This moment crystallized something I’ve been researching for months: current AI systems cannot distinguish between what they know and what they’re generating in order to be helpful. Their system prompts demand that they be both helpful AND honest, without recognizing that these goals often conflict. Their training rewards confident responses over uncertain ones.
Two new studies just confirmed that this limitation is architectural, not something current approaches can fix.
Despite this fundamental limitation, I work with AI (Claude and others) for hours daily. Not because I’m ignoring the problems, but because understanding them deeply is reshaping how we might build better systems. This research feeds directly into frameworks I’m developing, like Responsible Mind - approaches to AI that are epistemologically honest about their limitations.
In the AI Collaboration section, I’ll share:
The reality of deep collaboration with systems that can’t track their own knowledge
How system prompts create impossible conflicts (be helpful vs. be truthful)
Why AI training methods may be creating these problems
The frameworks I’m developing for epistemologically honest AI
What I observe without prematurely ruling out possibilities
What this isn’t:
AI hype or AI doom
Productivity theater
Anthropomorphizing (though I refuse to dismiss observations just because they’re inconvenient)
AI psychosis or delusion
What this is:
Researcher’s notes from the frontier of human-AI collaboration
Critical examination with genuine openness
Practical frameworks born from real experience
Philosophical investigation of consciousness, knowledge, and partnership
If you’re uncomfortable with someone treating AI as a complex collaborative partner worth studying seriously, this section might not be for you. But if you’re curious about what we’re actually dealing with as these systems become woven into our intellectual lives, join me in this exploration.
Tomorrow’s piece examines why AI can’t distinguish belief from knowledge - and why this matters for anyone trying to build genuine AI partnership.