Building a Pet Chatbot That Actually Has Feelings
What if your AI pet remembered how you made them feel yesterday?
Not just what you said — but whether your words made them happy, nervous, or excited. And what if that emotional history shaped how they respond to you today?
We built exactly that. Meet Mochi — a virtual pet chatbot powered by the molroo SDK. You can try it right now at demo.molroo.io/pet.
What makes Mochi different
Most chatbots have no emotional state. They generate text that sounds emotional, but there’s nothing behind the words. Say something mean, then say something nice — the character instantly forgets the hurt.
Mochi actually has an internal emotional state. When you’re kind, their happiness genuinely increases. When you’re dismissive, it drops. And that emotional state persists — it doesn’t reset between messages.
Here’s what that looks like in practice:
- Emotional continuity: Be rude to Mochi, then try being nice. They’ll be cautious at first. The hurt doesn’t vanish.
- Personality-driven reactions: Mochi is extroverted and agreeable. The same message that excites Mochi might barely register with a more reserved character.
- Real-time feedback: The UI shows Mochi’s emotional state updating after every message. You can literally watch feelings shift.
How it works (the developer perspective)
Mochi runs client-side end to end, with one exception: the emotion computation happens on the molroo API. Here's the architecture:
```
Your browser
├── React app (UI + chat)
├── LLM adapter (your API key → OpenRouter/OpenAI/Anthropic)
└── molroo SDK
    ├── Sends messages to molroo API
    ├── Gets back emotional state + context
    └── Feeds that into the LLM as structured context
```
The key insight: the LLM doesn’t decide how Mochi feels. Our engine does. The LLM only decides what Mochi says, given how Mochi already feels.
This separation is what makes the emotions consistent. The LLM is great at language. Our engine is great at computing what a character with Mochi’s personality would actually feel in response to a given input.
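To make that separation concrete, here's a toy sketch of handing an already-computed emotional state to the LLM as context. The VAD shape matches the vad object the SDK returns, but describeMood, buildSystemPrompt, and the thresholds are our own invention for illustration — not the SDK's actual prompt format:

```typescript
// Toy sketch (not the SDK's real prompt format): the engine has already
// produced a VAD emotion state; the LLM only receives it as context.
type VAD = { V: number; A: number; D: number }; // valence, arousal, dominance in 0..1

function describeMood({ V, A }: VAD): string {
  // Crude VAD-to-mood-word mapping, purely illustrative.
  if (V > 0.6) return A > 0.6 ? 'excited and happy' : 'calm and content';
  if (V < 0.4) return A > 0.6 ? 'upset and agitated' : 'sad and withdrawn';
  return 'neutral';
}

function buildSystemPrompt(name: string, mood: VAD): string {
  // The LLM is told how the character feels; it never decides the feeling.
  return `You are ${name}. You currently feel ${describeMood(mood)}. ` +
    `Reply in character, letting that mood color your wording.`;
}

buildSystemPrompt('Mochi', { V: 0.82, A: 0.91, D: 0.45 });
// "You are Mochi. You currently feel excited and happy. ..."
```

Because the mood is computed before the LLM is invoked, swapping models changes the phrasing but never the feeling itself.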
The code
Setting up a molroo-powered character takes about 30 lines:
```javascript
// Define the world and character
const setup = {
  definition: {
    identity: {
      name: 'Pet Chatbot World',
      genre: 'casual',
      tone: 'warm and playful',
    },
  },
  entities: [
    { name: 'Mochi', type: 'persona' },
    { name: 'user', type: 'user' },
  ],
  personaConfigs: {
    Mochi: {
      identity: {
        name: 'Mochi',
        role: 'A playful golden retriever puppy',
        speakingStyle: 'Warm, enthusiastic, full of puppy energy',
        coreValues: ['friendship', 'playfulness', 'loyalty'],
      },
      personality: {
        O: 0.75, // Open to new things
        C: 0.4,  // Not very disciplined (it's a puppy)
        E: 0.9,  // Very social and energetic
        A: 0.85, // Friendly and trusting
        N: 0.7,  // Emotionally reactive
        H: 0.8,  // Honest and sincere
      },
    },
  },
};
```
Those personality values aren’t decorative. They directly affect how the engine computes Mochi’s emotional response to each input. A character with low agreeableness would react very differently to the same conversation.
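As a toy illustration of what "react very differently" means (this is our own simplification, not the engine's actual math), imagine agreeableness amplifying how far a friendly message lifts valence, and neuroticism amplifying how far a hostile one drops it:

```typescript
// Toy model, NOT molroo's real computation -- just to show how trait values
// could modulate the emotional impact of the same input.
type Personality = { O: number; C: number; E: number; A: number; N: number; H: number };

function valenceDelta(p: Personality, inputTone: number): number {
  // inputTone: -1 (hostile) .. +1 (friendly)
  return inputTone >= 0 ? inputTone * 0.3 * p.A : inputTone * 0.3 * p.N;
}

const mochi: Personality = { O: 0.75, C: 0.4, E: 0.9, A: 0.85, N: 0.7, H: 0.8 };
const grump: Personality = { ...mochi, A: 0.2, N: 0.9 };

// The same friendly message lifts Mochi far more than a low-agreeableness character.
valenceDelta(mochi, 1); // ~0.255
valenceDelta(grump, 1); // ~0.06
```

Even in this crude version, identical inputs produce different emotional trajectories per character, which is the behavior the real engine delivers.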
Then wire it up to an LLM:
```javascript
import { Molroo } from '@molroo-io/sdk';

const molroo = new Molroo({ apiKey: MOLROO_KEY });

// Create a world instance
const world = await molroo.createWorld(setup, { llm: myLLMAdapter });

// Chat — the SDK handles emotion computation automatically
const result = await world.chat('Mochi', 'Hey buddy, want to go for a walk?');

console.log(result.response.text);
// "YES YES YES!! *tail wagging intensifies* A WALK?! ..."

console.log(result.response.emotion.vad);
// { V: 0.82, A: 0.91, D: 0.45 }
// High valence (happy), high arousal (excited), moderate dominance
```
Every chat() call returns both the text response and the computed emotional state. The SDK passes the emotional context to the LLM, so the character’s words naturally reflect how they feel — without you having to prompt-engineer emotions.
The LLM adapter pattern
One thing we’re opinionated about: you bring your own LLM. We don’t lock you into a specific provider. The SDK defines a simple LLMAdapter interface:
```typescript
interface LLMAdapter {
  generate(prompt: string, schema: unknown, message: string): Promise<any>; // structured output
  generateText(prompt: string, message: string): Promise<string>; // free text
}
```
The pet chatbot implements this with a lightweight wrapper that supports OpenAI, Anthropic, and OpenRouter. You pick the provider, you use your own API key, you choose the model. The emotion engine works the same regardless.
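Here's a minimal sketch of what such a wrapper can look like against OpenRouter's OpenAI-compatible chat completions endpoint. The class, the toMessages helper, and the default model name are our own illustration rather than the demo's actual code, and error handling and schema validation are omitted:

```typescript
// Sketch of an OpenRouter-backed LLMAdapter (illustrative, not the demo's code).
// You supply the key and model; the molroo SDK supplies prompt and message.
function toMessages(prompt: string, message: string) {
  return [
    { role: 'system', content: prompt },
    { role: 'user', content: message },
  ];
}

class OpenRouterAdapter {
  constructor(private apiKey: string, private model = 'openai/gpt-4o-mini') {}

  private async complete(body: object): Promise<string> {
    const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    });
    const data = await res.json();
    return data.choices[0].message.content;
  }

  async generateText(prompt: string, message: string): Promise<string> {
    return this.complete({ model: this.model, messages: toMessages(prompt, message) });
  }

  async generate(prompt: string, schema: unknown, message: string): Promise<any> {
    // Ask for JSON and parse it; a production adapter would validate against `schema`.
    const text = await this.complete({
      model: this.model,
      messages: toMessages(prompt, message),
      response_format: { type: 'json_object' },
    });
    return JSON.parse(text);
  }
}
```

Swapping providers means swapping only this class — the emotion engine never sees which model produced the words.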
What’s happening under the hood
When you send a message to Mochi, here’s what actually happens:
1. Your message goes to the molroo API
2. The engine evaluates the message against Mochi’s current emotional state and personality
3. A new emotional state is computed — the engine determines how this input would make someone with Mochi’s personality feel, given their current emotional state
4. The SDK receives the updated emotion plus rich context (personality, current mood, relevant history)
5. The LLM gets all of this as structured context and generates a natural-language response
6. The UI updates — you see the text response AND the emotional state visualized in real time
Steps 2-3 are where the magic is. The engine doesn’t use the LLM for emotion computation. It’s a separate, deterministic process that ensures emotional consistency regardless of which LLM you’re using.
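Strung together, the app-side loop is small. In this sketch, world and setState are stubs so the snippet stands alone; in the demo, world is the instance returned by createWorld and setState belongs to your UI framework:

```typescript
// Illustrative wiring of the six steps above. `world` is a stub standing in
// for the molroo.createWorld instance; setState for your UI framework's update.
type VAD = { V: number; A: number; D: number };
type ChatResult = { response: { text: string; emotion: { vad: VAD } } };

const world = {
  // Stub: the real chat() round-trips through the molroo API and your LLM adapter.
  async chat(_persona: string, _text: string): Promise<ChatResult> {
    return { response: { text: '*happy bark*', emotion: { vad: { V: 0.8, A: 0.9, D: 0.4 } } } };
  },
};

const setState = (_state: { reply: string; mood: VAD }) => {
  // In the demo this re-renders the chat log and the emotion visualization.
};

async function onUserMessage(text: string) {
  // Steps 1-5 happen inside chat(): message -> engine -> new emotion -> LLM reply.
  const result = await world.chat('Mochi', text);
  // Step 6: render both the words and the feeling behind them.
  const next = { reply: result.response.text, mood: result.response.emotion.vad };
  setState(next);
  return next;
}
```

The handler never inspects or edits the emotion — it just displays what the engine computed.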
Try it yourself
demo.molroo.io/pet — bring an API key from OpenRouter, OpenAI, or Anthropic. You’ll be chatting with Mochi in about 10 seconds.
Things to try:
- Say something really sweet, then switch to being cold. Watch the emotional shift.
- Have a long, warm conversation. Then say something slightly negative. Notice how Mochi’s high-positive emotional state buffers the impact.
- Compare how Mochi responds to the same message at different emotional states.
The pet chatbot is open-source at github.com/molroo-io/pet-chatbot. If you want to build your own characters with real emotions, check out the molroo SDK docs.
molroo is an emotion engine for AI characters. We compute what characters feel, so they can stop pretending. Learn more at molroo.io.