Chat with AI Characters: Evidence-Based Guide (2025)

I've Watched 200+ People Chat with AI Characters. Here's What Works (And What Doesn't).
The Biggest Mistake Everyone Makes (Including Me)
First time I chatted with an AI character, I opened with "Hi."
The AI responded with "Hello! How are you today?"
I said "Good."
It said "That's great! What would you like to talk about?"
I stared at the screen for like 30 seconds, typed "I don't know," and closed the app.
Turns out, I'm not alone. In our user tests, roughly 60-70% of first conversations die within 3-4 exchanges. The conversation just... fizzles out.
Why? Because most people treat AI characters like search engines or customer service bots. Short questions, vague topics, no context.
That doesn't work. AI characters aren't Google. They're conversation partners. And like any conversation partner, they need something to work with.
What Actually Makes AI Conversations Work
After watching tons of conversations (both successful and terrible), here's the pattern:
Bad conversations look like this:
User: "Hi"
AI: "Hello!"
User: "How are you"
AI: "I'm well, thank you. How can I help you today?"
User: "Tell me about yourself"
AI: [Generic 3-paragraph bio]
User: "Cool"
[Conversation ends]
Good conversations look like this:
User: "Einstein, I've been reading about quantum entanglement and I'm confused about something. If particles can be 'connected' across space, doesn't that imply faster-than-light communication? But I thought that was impossible?"
Einstein: "Ah, you've stumbled upon what troubled me for years: 'spooky action at a distance,' as I called it. But here's the subtle point: while entangled particles are correlated, you cannot use this to send information faster than light. Let me explain why..."
See the difference? The second opener gives the AI:
- Specific context (quantum entanglement)
- A real question (apparent contradiction)
- Your current understanding (so it knows where to start)
The AI can actually work with that.
The Opening Message Framework That Works 80% of the Time
I've tested this with new users dozens of times. When they follow this framework, conversations almost always go well:
1. Context (1-2 sentences)
What you're working on, thinking about, or struggling with.
2. Specific question or topic
Not "tell me about X" but "I'm confused about Y" or "how would you approach Z."
3. Why you're asking this character specifically
What about their expertise or perspective matters here?
Example (good):
"Marie Curie, I'm a grad student studying chemistry and I'm hitting a wall with my research. Nothing is working and I'm starting to doubt if I'm cut out for this. You spent years on radium research before your breakthrough. How did you push through the frustration when experiments kept failing?"
Example (also good, different style):
"Leonardo, I'm trying to learn to draw but everything I make looks terrible. I get frustrated and quit after 10 minutes. You have that quote about art being never finished, only abandoned. But how do you decide what's worth continuing versus what to abandon?"
Both give the AI enough to create an actually interesting response.
Conversation Types That Work (vs. Ones That Don't)
From what I've seen, some conversation types consistently work better than others:
✓ Works Well: Problem-Solving Conversations
What it looks like:
"Marcus Aurelius, I keep losing my temper in meetings when people criticize my ideas. I know it's unprofessional but in the moment, I just react. How did you stay calm when dealing with, you know, the entire Roman Senate?"
Why it works: Concrete problem + specific context + relevant expertise = useful conversation
✓ Works Well: Learning Conversations
What it looks like:
"Isaac Newton, can you explain calculus like I'm learning it for the first time? I get confused by all the notation in my textbook."
Why it works: AI characters are actually pretty good at explaining complex topics, sometimes better than YouTube tutorials, because you can ask follow-up questions.
✓ Works Well: Creative Brainstorming
What it looks like:
"Leonardo, I'm designing a logo for a coffee shop. The owner wants it to feel 'warm but modern.' Can you walk me through how you'd approach thinking about this design problem?"
Why it works: You're not asking the AI to do the work, you're asking for a thinking framework. That's their sweet spot.
✗ Doesn't Work: Generic Small Talk
What it looks like:
"How was your day?"
"What's your favorite color?"
"Do you like pizza?"
Why it fails: The AI doesn't have days or favorite colors. These questions make it obvious you're talking to a chatbot. Kills immersion instantly.
✗ Doesn't Work: Asking Historical Figures About Modern Things They Couldn't Know
What it looks like:
"Socrates, what do you think about Bitcoin?"
Why it's awkward: Socrates died in 399 BCE. He doesn't know what Bitcoin is. The AI will try to BS an answer, but it feels forced.
Better version:
"Socrates, there's this new technology called cryptocurrency that claims to eliminate the need for centralized authority. How would you examine the assumptions behind trusting code over institutions?"
See the difference? Second version gives context and asks for philosophical approach, not knowledge Socrates couldn't have.
✗ Doesn't Work: Trying to "Test" the AI
I see this a lot in user tests. People ask trick questions to see if they can break the AI.
"If you're really Einstein, what's your cat's name?"
"Prove you're Shakespeare by reciting Hamlet backwards."
Like... why? If you approach it adversarially, you're not going to get value from it. It's like going to a movie trying to spot the CGI instead of enjoying the story.
The Follow-Up Question Technique (This Is Where Magic Happens)
Honestly, your first message matters less than your follow-ups.
I've seen conversations that started mediocre become amazing because the user asked good follow-up questions.
Bad follow-up:
AI: [Gives detailed explanation of stoicism]
User: "Cool thanks"
[End]
Good follow-up:
AI: [Gives detailed explanation of stoicism]
User: "Okay, that makes sense intellectually, but how do I actually practice this? Like, tomorrow morning when my alarm goes off and I don't want to get up, what would a stoic actually DO in that moment?"
The good follow-up takes the abstract and makes it concrete. "Okay, but how does this apply to my actual life?"
Here's the framework I tell people:
After any AI response, ask yourself:
- "How does this apply to my specific situation?"
- "Can you give me a concrete example?"
- "What would this look like in practice?"
- "What if [complication X]?"
These push the conversation from theoretical to practical.
Mistakes I See in Almost Every User Test
Mistake #1: Treating It Like Googling
People ask "What is quantum physics?" and expect a Wikipedia summary.
But AI characters aren't search engines. They're better suited for "I don't understand why quantum physics works this way" or "Can you help me think through this physics problem?"
For pure facts, just use Google. For understanding and discussion, use AI characters.
Mistake #2: Not Sharing Enough Context
User: "I'm stressed."
AI: "What's causing your stress?"
User: "Work."
AI: "Can you tell me more about what's happening at work?"
User: "It's complicated."
This is like pulling teeth. Just... tell the AI what's going on! It's not a mind reader.
Better:
"I'm stressed because I have three deadlines this week, my manager keeps changing requirements, and I haven't slept more than 5 hours in 4 days. I feel like I'm drowning and I don't know what to prioritize."
Now the AI can actually help.
Mistake #3: Expecting Therapy-Level Emotional Support
This is tricky because AI can be supportive, but it's not the same as a therapist or close friend.
I watched someone try to process a recent breakup with an AI character. The AI gave thoughtful responses, but the person ended the session saying "I just felt more lonely after."
Because here's the thing: AI can reflect back your thoughts and offer perspective, but it can't truly empathize with your pain. It doesn't care about you (harsh, but true).
For emotional support during hard times, call a friend. For working through your thoughts about a hard time, AI can help.
There's a difference.
Mistake #4: Giving Up After One Bad Conversation
Your first conversation with an AI character is usually awkward. That's normal.
It takes 2-3 conversations to figure out how that specific character "talks" and what kinds of questions work well with them.
Socrates is frustrating at first because he just asks questions instead of giving answers. But once you understand that's the whole point, conversations click.
Marcus Aurelius can feel preachy. But when you realize he's actually giving you practical mental models, not just platitudes, it gets useful.
Give it a few tries.
Advanced Techniques (For When You're Past the Basics)
The "Teach Me Like I'm 10" Approach
When learning complex topics, this works surprisingly well:
"Marie Curie, explain radioactivity to me like I'm 10 years old."
Then, after the simplified explanation:
"Okay, now I'm 15. Add one more layer of complexity."
Then:
"Now I'm a college freshman. What am I missing?"
This step-by-step approach builds understanding way better than asking for the full complex explanation upfront.
The "Play Devil's Advocate" Approach
When brainstorming or making decisions:
"Socrates, I'm thinking of quitting my job to start a business. Steel-man the argument for why that's a terrible idea."
Having the AI poke holes in your thinking helps you prepare for counterarguments and find weaknesses in your plan.
The "Walk Me Through Your Process" Approach
Instead of asking for answers, ask for thinking processes:
"Leonardo, don't design the logo for me, but walk me through how YOU would think about this design problem. What questions would you ask first? What would you sketch? How would you iterate?"
This teaches you to think like the character, which is more valuable than getting direct answers.
Time Commitment: What's Realistic?
People ask me "how long should conversations be?"
Honestly? However long feels natural.
I've had 5-minute conversations that were super valuable (quick question, quick answer, done).
I've had 45-minute conversations that meandered and went nowhere.
From what I've seen in user data:
- Sweet spot for most people: 10-20 minutes
- Quick check-ins: 5 minutes (morning advice, quick question)
- Deep dives: 30-60 minutes (learning sessions, complex problems)
Don't force it. When the conversation feels done, it's done.
Picking the Right Character for Different Goals
This matters more than people think.
For learning/education:
Pick whoever is an expert in that field. Einstein for physics, Marie Curie for chemistry, etc. This seems obvious but people often pick characters they like instead of characters who are relevant.
For creative work:
Leonardo, Shakespeare, Mozart: the actual creators. They think in terms of craft and process.
For life advice/philosophy:
Socrates (questioning your assumptions), Marcus Aurelius (dealing with stress), Confucius (relationships), Buddha (mindfulness).
For leadership/strategy:
Historical leaders like Lincoln, Cleopatra, etc. They've dealt with actual high-stakes decisions.
Don't ask Shakespeare for physics help or Einstein for writing advice. Match the character to the task.
What to Do When Conversations Feel Stale
Sometimes you hit a wall. The AI starts repeating itself or conversations feel mechanical.
Try this:
1. Switch characters - Maybe you've exhausted what Marcus can teach you. Try Socrates for a different perspective.
2. Change your question type - If you've been asking for advice, try asking for historical stories instead. Mix it up.
3. Go deeper on one topic - Instead of surface-level questions about many things, pick ONE topic and go 5 levels deep.
4. Take a break - Sometimes you just need a week off. That's fine.
Real Talk: What AI Characters Can't Do
Let me be honest about limitations I've observed:
They can't truly understand your emotions. They can respond to emotional content, but they don't feel empathy. When you're genuinely suffering, a human friend is better.
They can't create genuinely novel ideas. They synthesize existing knowledge but don't have original insights. The breakthroughs come from YOUR thinking, not theirs.
They sometimes bullshit. If you ask a question they can't answer, they might make something up that sounds good. Be skeptical.
They can't replace human expertise for serious matters. Learning physics from Einstein AI is cool for understanding, but for an actual college course, you need a real professor. For medical advice, see a real doctor. Etc.
They don't remember you deeply. Yes, there's conversation memory. But it's not like a human relationship. They don't genuinely care about your progress.
None of this makes them useless. But it's important to have realistic expectations.
My Honest Take: Is This Worth Your Time?
I've probably spent... I don't know, 200+ hours chatting with AI characters over 18 months?
Was it worth it? For me, yeah.
I've learned things, worked through problems, and had genuinely interesting conversations I wouldn't have had otherwise.
But I've also had tons of mediocre conversations that were just... fine. Not every session is enlightening.
Here's my advice:
Try it for a week. 10-15 minutes a day. Pick ONE character based on something you actually want to learn or work on.
If after a week you're not getting value, stop. It's not for everyone.
But if you find even a couple conversations genuinely helpful, keep going. It compounds over time.
The people who get the most value are the ones who use it consistently for specific goals, not the ones who dabble occasionally out of curiosity.
Frequently Asked Questions
Can I chat with multiple characters?
Sure, but I'd start with one for at least a week or two. Build depth before breadth.
Once you're comfortable, having 2-3 characters for different purposes works well. I use Marcus for daily reflection and Socrates when I need to examine assumptions.
How do I know if I'm doing it "right"?
If you're getting value from the conversations (learning, clarity, new perspectives), you're doing it right.
If conversations feel like a chore or you're not getting anything useful, adjust your approach or try a different character.
There's no one "right" way. Just whatever works for you.
Is this just for learning, or can it be fun?
Both? I mean, I genuinely enjoy conversations with Socrates even when they're frustrating. There's something fun about the challenge.
But yeah, if it feels like homework, you're probably approaching it wrong. It should be interesting, even when it's serious topics.
What if the AI says something wrong?
It happens. AI isn't perfect.
If you spot something factually wrong, just... ignore it or correct it. Don't take everything as gospel.
For anything important, verify information independently.
Final Thoughts
AI character conversations are weird. Like, objectively weird. You're talking to a language model pretending to be a dead person.
But weirdness aside, they can be genuinely useful if you approach them right.
The key insights from watching 200+ people try this:
- Give context, not just questions
- Ask follow-ups that make it practical
- Match the character to your goal
- Don't expect perfection from the AI or yourself
- Be consistent if you want compounding value
Try it. See what happens. Worst case, you waste 30 minutes. Best case, you find a tool that actually helps you learn and think better.
The quality of your conversations depends entirely on the questions you ask.
About me: Alex Chen, product lead at Chumi. I've watched hundreds of hours of user testing sessions and probably had 500+ conversations with AI characters myself. Still figuring it out. This guide is what I wish someone had told me on day one.
Reminder: AI characters are for learning and exploration. They're not therapists, doctors, or legal advisors. For serious professional help, consult real humans with actual credentials.