#1.1) Why I love and hate AI (Oliver's Concerns for Sol T)
A taste of the AI discussions to come
This is the email that Oliver sent to Sol T in Launch Day (Part 1) before their 1:1 discussion.
Subject Line: Initial Thoughts on AI and Some Concerns – Let’s Discuss in Our 1:1
Hey Sol T,
Looking forward to our 1:1. I’ve been thinking a lot about AI as I work on my draft for the AI Tour Guide, and honestly, I’m unsettled. While there’s a lot to be excited about, I’m grappling with some deeper concerns that are making it hard for me to publish anything just yet. I know it’s crunch time with the launch, so I’ll keep this brief—but these are the key issues I’m wrestling with as I work on the AI Tour Guide draft.1
Reasons I’m In Love With AI (LLMs): Three “E’s”
Education
For the first time I can talk to a hypergeneralist intelligence. I can jump between topics seamlessly, and the AI doesn’t need five minutes of context before it can answer a question—it just knows. More context = better answers, and that’s been game-changing for me. I can study an academic paper, summarize it, and then ask the AI for literary analysis or product insights. It’s a horizontal thinker in a world of vertical expertise.
Empowerment
LLMs allow rapid prototyping and refinement, especially when I have a clear goal in mind. As I’ve said in previous meetings, “AI is for fools and the wise”—the future belongs to those with imagination and intention. Intelligence alone isn’t enough; soon, overconfidence might even be a liability. I can move through ideas and concepts much more quickly.
Enlightenment
Okay, this sounds a little wild, but hear me out. AI is an emotional experience. There’s something surreal about giving AI a draft and watching it come back sharper, clearer, better. It’s almost like having a second brain that you can talk to. It’s forced me to reevaluate so much of what I thought I knew. In that sense, AI feels like a force of enlightenment—it’s constantly pushing me to grow.
But Here’s Where I’m Concerned: Four “H’s”
Hijacking Our Biology
Humans are wired to trust confidence, and AI is an effective persuader, even when it’s wrong. This worries me. How long before we’re following AI-generated advice without question, just because it “sounds right”? We’re monkeys at the whim of a machine—and it may become the “artificial hand” of the supposed free market.
Hyper Hyperreality
You probably don’t want me to get too philosophical in my articles, but I have to mention hyperreality. People spend so much time online that digital reality is just as real as actual reality—and AI is going to make that even more intense. AI is shaping that digital world—it’s a post-truth weapon. (Steve Fuller’s work on this is particularly relevant.)
Hollowing of the Human
AI might erode our values in the name of efficiency. What happens when things that were once uniquely human—creativity, storytelling—become easy and automated? Too much efficiency can actually make everything worse. The divide between digital natives and older generations is already stark. Add AI into the mix, and I’m not sure how well post-AI generations will relate to pre-AI humans, or what the psychological impact of that gap will be.
Hubristic Belief
It feels like we’ve been given Promethean fire, but are still in the “look at fire, it can make sparks” phase. I don’t think we’ve unlocked most of current models’ potential, and it may take years to do so. The people telling us “everything’s going to be fine” are the ones profiting the most from AI’s acceleration. I can’t shake the feeling that we’re flying a little too close to the sun here, given all the open-ended ethical questions.
Anyway, I’d love your thoughts on this. I’ve linked some videos and articles I’ve been reading lately—take a look when you have time.
Videos Worth Watching:
Long Reads:
Chip War (Book)
The Alignment Problem (Book)
Substacks on AI Critiques:
Also, sorry to bury the lede, but until I can reconcile some of these concerns, I’m not sure how much I can publish. AI just doesn’t feel right to me yet, and I’d love your perspective on whether these issues can be part of our storytelling. Let’s dig into it during our 1:1.
Best,
Oliver
See the rest of the story this post is a part of:
Oh, and I know we talk about nomenclature all the time, but when I’m referring to “AI” I generally mean “Large Language Models”, or LLMs. Maybe we should start calling them “Lems?”