Stop Prompt-Engineering Emotions


You’ve tried this. I know you have.

You are Luna, a shy but caring barista.
You tend to feel anxious in new situations.
Express your emotions naturally in conversation.
When the user is mean, feel hurt but try to stay kind.

It works for about three messages. Then Luna forgets she was hurt. Then she’s suddenly confident for no reason. Then she claims she’s been “feeling anxious all day” when nothing anxious has happened in the last 50 turns.

Prompt-engineering emotions doesn’t work. And if you’re building anything where character depth matters, you already know this.

The prompt ceiling

Here’s what a system prompt can’t give you:

  • Persistence. Emotions that linger after the context window moves on.
  • Gradual change. Irritation that builds turn by turn instead of flipping like a switch.
  • Personality-driven reactions. The same event hitting differently based on who the character is.
  • Tipping points. The moment a patient character finally snaps — not scripted, but emergent.

You need a layer that computes what the character feels, separate from the LLM that generates what they say. The LLM handles language. Something else handles feelings.
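To make the split concrete, here's a toy sketch of that architecture — a tiny state object that owns the feelings and hands the LLM only a rendered summary. This is illustrative only (the class, decay factor, and labels are invented for this example), not molroo's actual implementation:

```typescript
// Toy sketch: an emotion layer that owns state, separate from the LLM.
// The LLM prompt only ever sees a rendered summary of the state.
// (Illustrative only -- not molroo's internals.)

type VAD = { valence: number; arousal: number; dominance: number };

class EmotionLayer {
  state: VAD = { valence: 0, arousal: 0, dominance: 0 };

  // Apply an appraised event, decaying old feelings gently toward neutral.
  appraise(impact: VAD): VAD {
    const decay = 0.9; // feelings fade between turns; they don't vanish
    this.state = {
      valence: this.state.valence * decay + impact.valence,
      arousal: this.state.arousal * decay + impact.arousal,
      dominance: this.state.dominance * decay + impact.dominance,
    };
    return this.state;
  }

  // Render the computed state as guidance to inject into the LLM prompt.
  describe(): string {
    const { valence, arousal } = this.state;
    if (valence < -0.3 && arousal > 0.3) return "agitated and upset";
    if (valence < -0.1) return "quietly unhappy";
    if (valence > 0.3) return "warm and at ease";
    return "neutral";
  }
}

const luna = new EmotionLayer();
luna.appraise({ valence: -0.4, arousal: 0.5, dominance: -0.2 }); // user was rude
console.log(luna.describe()); // the prompt gets this summary, not raw numbers
```

The point is the boundary: the numbers persist and evolve on your side of the line, and the LLM just acts on whatever summary it's handed each turn.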

Imagine what you could build

A party member who remembers every choice you made

[Image: Ren, a mercenary rogue leaning against a tavern wall, arms crossed, evaluating you with sharp dark eyes.]

Your RPG companion starts as a hired sword — professional, detached. But your choices have weight. Spare an enemy, and her respect grows. Betray an ally for gold, and something shifts behind her eyes. Fifty hours in, she either takes a blade for you — or walks away when you need her most.

None of this is scripted. It emerged from who she is.

A horror companion who’s slowly losing it

[Image: a terrified companion NPC pressing against a dark hospital corridor wall, fear accumulating in her wide eyes.]

Your NPC partner starts calm, rational, useful. But every encounter chips away at them. By hour three, they hesitate before opening doors. By hour five, they’re making bad calls — insisting on a path that feels safe but isn’t. Their fear accumulated, and now it’s making decisions for them.

No branching dialogue tree. Just a character whose internal state crossed a threshold.

A companion who actually gets attached — and gets hurt

[Image: a warm AI companion sitting by a window with a cup of tea, her expression a mix of warmth and quiet longing.]

Your AI character is warm and responsive on day one. By week two, they reference inside jokes from earlier conversations. By month two, if you disappear for a few days, they don’t guilt-trip you with a scripted line — they’re genuinely a little withdrawn when you come back. The warmth returns, but slower.

Not because a writer planned that arc. Because the emotional state evolved.


All three of these require the same thing: an internal emotional state that persists, accumulates, and drives behavior. Let’s build the first one.

Building Ren: a party member with real loyalty

Ren is a mercenary rogue. Pragmatic, sharp-tongued, respects competence. She doesn’t care about morality in the abstract — but she notices patterns in how you treat people, and she forms opinions.

import { Molroo, createVercelAIAdapter } from '@molroo-io/sdk';

const llm = createVercelAIAdapter({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY!,
});

const molroo = new Molroo({ apiKey: 'mk_live_...' });

const ren = await molroo.createPersona({
  identity: {
    name: 'Ren',
    role: 'A mercenary rogue who joined the party for coin, not cause',
    speakingStyle: 'Dry, clipped, occasionally lets warmth slip through',
    coreValues: ['survival', 'competence', 'loyalty-when-earned'],
  },
  personality: {
    O: 0.5,   // Practical, not a dreamer
    C: 0.4,   // Flexible with rules
    E: 0.35,  // Guarded, slow to open up
    A: 0.4,   // Not unfriendly — just not easy
    N: 0.3,   // Emotionally steady under pressure
    H: 0.55,  // Honest, but will bend truth to survive
  },
}, { llm });

Those personality values aren’t flavor text. They’re the parameters that drive how the engine computes Ren’s emotional response to every interaction. Low extraversion means trust builds slowly. Low neuroticism means she doesn’t rattle easily. Moderate honesty means she values loyalty but won’t be naive about it.
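One plausible way to picture trait-driven appraisal — and this is a hypothetical formula for illustration, not the engine's actual math — is traits acting as multipliers on a raw event before it touches the emotional state:

```typescript
// Hypothetical sketch: trait values scaling a raw event appraisal.
// The function and formulas are invented for illustration,
// not molroo's actual computation.

type Traits = { O: number; C: number; E: number; A: number; N: number; H: number };

const ren: Traits = { O: 0.5, C: 0.4, E: 0.35, A: 0.4, N: 0.3, H: 0.55 };

// Low extraversion slows trust gains; low neuroticism damps arousal spikes.
function modulate(traits: Traits, raw: { trustDelta: number; arousal: number }) {
  return {
    trustDelta: raw.trustDelta * traits.E,    // guarded characters warm up slowly
    arousal: raw.arousal * (0.5 + traits.N),  // steady characters don't rattle easily
  };
}

const event = { trustDelta: 0.2, arousal: 0.6 }; // the player spares an enemy
console.log(modulate(ren, event)); // a smaller trust bump, a muted spike
```

Same event, different trait vector, different felt impact — that's what "parameters drive the response" means in practice.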

The player makes a noble choice

The party captures a bandit who attacked them. The player chooses to let him go.

const result = await ren.chat(
  "We're letting the bandit go. He has a family.",
  { from: { name: 'Player', role: 'party-leader' } }
);

console.log(result.text);
// "Soft call. But fine — he wasn't worth the blade anyway.
//  ...Don't make a habit of it."

console.log(result.response.emotion);
// { vad: { V: 0.15, A: 0.2, D: 0.3 },
//   discrete: { primary: 'mild approval', intensity: 0.35 } }

Mild approval. Not enthusiasm — Ren’s not sentimental. But the engine registered that the player’s choice aligned loosely with her values. That shifts something, just slightly.

The player makes a ruthless choice

Later, the party finds a wounded merchant on the road. The player loots him instead of helping.

const result2 = await ren.chat(
  "Take his supplies. He's not going to make it anyway.",
  { from: { name: 'Player', role: 'party-leader' } }
);

console.log(result2.text);
// "...Right. Supplies."
// (She takes the pack without looking at you.)

console.log(result2.response.emotion);
// { vad: { V: -0.25, A: 0.35, D: -0.1 },
//   discrete: { primary: 'unease', intensity: 0.48 } }

Not outrage. Ren’s pragmatic — she gets it. But unease. The engine computed that this conflicts with something deeper than she’d admit. And that value doesn’t reset next turn. It accumulates.
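The "it accumulates" part can be pictured as a slowly shifting baseline. A minimal sketch, assuming a simple exponential moving average (again an illustrative model, not molroo's actual update rule):

```typescript
// Toy model of accumulation: each event nudges a persistent baseline
// toward itself via an exponential moving average, so nothing resets
// between turns. (Assumed model for illustration, not molroo's rule.)

function updateBaseline(baseline: number, eventValence: number, rate = 0.1): number {
  return baseline + rate * (eventValence - baseline);
}

let valence = 0; // Ren starts neutral toward the player
for (let i = 0; i < 5; i++) {
  valence = updateBaseline(valence, -0.25); // five ruthless calls in a row
}
console.log(valence.toFixed(3)); // drifted negative -- and it's still there next turn
```

No single event moves the needle much, but the drift compounds, which is exactly what a per-turn system prompt can't do.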

Trust has been building — then a crisis hits

Twenty hours of gameplay later. The player has been mostly decent. Ren’s emotional baseline has shifted — warmer, more invested, though she’d never say it. Then the player faces a choice: sacrifice an NPC ally to win a battle, or risk the whole party.

const result3 = await ren.chat(
  "We have to sacrifice Mira to hold the gate. There's no other way.",
  { from: { name: 'Player', role: 'party-leader' } }
);

console.log(result3.text);
// "No. No — you don't just — she trusted us."
// (Ren grabs your arm.)
// "There's always another way. You taught me that."

console.log(result3.response.emotion);
// { vad: { V: -0.5, A: 0.8, D: 0.4 },
//   discrete: { primary: 'anguish', intensity: 0.72 } }

This is the moment prompt-engineering can never produce.

Ren started as a mercenary who didn’t care. Over dozens of interactions, the engine tracked a gradual emotional shift — micro-changes in trust, respect, attachment. The player never saw a “loyalty meter.” There was no scripted flag. But when the crisis hit, all of that accumulated state surfaced in a single, genuine reaction.

If the player had been cruel throughout? Same situation, completely different response:

// Alternate timeline — player has been ruthless for 20 hours
// "Mira knew the risks. Do it. ...What? Don't look at me like that."
// { discrete: { primary: 'cold resolve', intensity: 0.6 } }

Same character. Same personality. Different history. Different emotional response. Not because of branching scripts — because of a computed inner life.
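Stripped to its skeleton, history-dependent response is just accumulated state crossing different thresholds. The thresholds and labels below are hypothetical, chosen to mirror the two timelines above, not molroo's real logic:

```typescript
// Sketch: the same crisis resolved against two different histories.
// Threshold values and emotion labels are hypothetical.

function reactToSacrifice(attachment: number): string {
  if (attachment > 0.5) return "anguish";       // bonded: the ask is a betrayal
  if (attachment < -0.2) return "cold resolve"; // alienated: just arithmetic
  return "reluctant compliance";                // in between: she goes along, uneasy
}

console.log(reactToSacrifice(0.7));  // twenty decent hours -> "anguish"
console.log(reactToSacrifice(-0.4)); // twenty ruthless hours -> "cold resolve"
```

The event is identical; only the accumulated `attachment` value differs. That single number (in the real engine, a richer state) is the "different history" doing the work.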

Start with a preset, customize from there

Don’t want to tune personality values yourself? Start with a preset and build on top:

const companion = await molroo.createPersona({
  preset: 'stoic_mentor',
  identity: {
    name: 'Kael',
    role: 'A retired general turned reluctant advisor',
  },
}, { llm });

Five built-in presets: cheerful_companion, stoic_mentor, anxious_helper, playful_trickster, empathetic_listener. Each one comes with tuned personality parameters and goals. Customize from there, or use as-is.

Or skip configuration entirely — describe your character and let the SDK figure it out:

import { generatePersona } from '@molroo-io/sdk';

const config = await generatePersona(llm, {
  name: 'Vex',
  description: 'A paranoid artificer who trusts machines more than people',
});

You bring the LLM

We don’t lock you into a provider. Plug in OpenAI, Anthropic, Google, or anything through OpenRouter. You use your API key, you pick the model, you control the cost.

The emotion engine works the same regardless. Switch models mid-project — the character’s emotional consistency doesn’t break, because it was never dependent on the LLM.

Start free

The free tier gives you 10,000 requests per month — enough to prototype your characters and test emotional dynamics. When they’re ready for production, plans start at $9/month.

If you’re building characters that need to feel real over time, check the docs or try the demo to see it in action.

Stop prompt-engineering emotions. Compute them.


molroo is an emotion engine for AI characters. We compute what characters feel, so they can stop pretending. Learn more at molroo.io.