Beyond the Prompt: Refactoring LLM Personas with CORE, Multi-Persona Synthesis, and Opus Integration
We just wrapped a major development session focused on elevating our LLM interactions. Dive into how we refactored personas with the CORE framework, enabled multi-persona enrichment, integrated Claude Opus, and tackled a tricky TypeScript parsing quirk.
The world of Large Language Models (LLMs) is moving at lightspeed, and keeping our applications at the cutting edge requires constant evolution. We recently concluded a significant development session aimed at supercharging how our system interacts with LLMs – specifically through a deep refactor of our persona system, the introduction of multi-persona enrichment, and the integration of Anthropic's powerful Claude Opus model.
This wasn't just about adding new features; it was about laying a more robust foundation for nuanced, context-aware AI interactions. Here’s a look at the journey, the technical decisions, and a valuable lesson learned along the way.
The Why: Elevating LLM Interactions
Our goal was clear:
- Enhance Persona Consistency & Quality: Our existing personas, while functional, lacked a unified structure, leading to variability in output quality. We needed a systematic approach.
- Unlock Nuanced Perspectives: A single persona is great, but what if you need the perspective of a technical architect and a marketing specialist simultaneously? Multi-persona enrichment was the answer.
- Stay State-of-the-Art: New models like Claude Opus offer incredible capabilities. Integrating them ensures we leverage the best available intelligence.
With these objectives, we dove into the codebase.
The CORE of Our Persona System
The first major undertaking was refactoring all our built-in personas to the CORE framework. This structured approach to prompt engineering helps ensure consistent, high-quality responses by clearly defining every aspect of an LLM's role.
Here's how CORE breaks down:
- C (Context): Who is this persona? What's their background, years of experience, domain, and even their "era" if relevant? This sets the stage.
- O (Objective): What is their primary mission? What core functions are they meant to perform? This defines their purpose.
- R (Role & Rules): This is where we establish explicit constraints. What MUST the persona do (positive constraints), and what MUST it NOT do (negative constraints)? This is crucial for guiding behavior.
- E (Expression): How should the persona communicate? What's their tone, syntax, and overall communication style?
Beyond CORE, we added two critical components:
- Safety & Anti-Hallucination: Explicit instructions to bind to facts and use fallback phrases when information is unavailable.
- In-Character Examples: Two Q&A pairs per persona demonstrating their voice, boundaries, and expected output format.
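To make the framework concrete, here is a minimal sketch of how a CORE-structured persona could be modeled and assembled into a system prompt. The `CorePersona` interface, the `buildCorePrompt` helper, and the sample "Sasha" values are all hypothetical illustrations, not the actual shapes in our seed file:

```typescript
// Hypothetical shape for a CORE-framework persona definition.
interface CorePersona {
  context: string;      // C: who the persona is, background, domain, era
  objective: string;    // O: primary mission and core functions
  rules: { must: string[]; mustNot: string[] }; // R: explicit constraints
  expression: string;   // E: tone, syntax, communication style
  safety: string;       // anti-hallucination fallback instructions
  examples: { q: string; a: string }[];         // in-character Q&A pairs
}

// Assemble the sections into a single system prompt string.
function buildCorePrompt(p: CorePersona): string {
  return [
    `## Context\n${p.context}`,
    `## Objective\n${p.objective}`,
    `## Rules\nMUST:\n${p.rules.must.map((r) => `- ${r}`).join("\n")}` +
      `\nMUST NOT:\n${p.rules.mustNot.map((r) => `- ${r}`).join("\n")}`,
    `## Expression\n${p.expression}`,
    `## Safety\n${p.safety}`,
    `## Examples\n${p.examples.map((e) => `Q: ${e.q}\nA: ${e.a}`).join("\n\n")}`,
  ].join("\n\n");
}

const sasha: CorePersona = {
  context: "Senior product strategist with 12 years in SaaS.",
  objective: "Turn raw meeting notes into prioritized product insights.",
  rules: {
    must: ["Cite only facts present in the provided notes."],
    mustNot: ["Invent metrics or user quotes."],
  },
  expression: "Direct, structured, plain language.",
  safety:
    "If information is unavailable, reply: 'I don't have enough context to answer that.'",
  examples: [
    {
      q: "What should we build next?",
      a: "Based on the notes, the top recurring request is bulk export.",
    },
  ],
};

const prompt = buildCorePrompt(sasha);
```

The payoff of typing the sections this way is that every persona is forced to answer the same questions, which is exactly the consistency the refactor was after.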
This framework was applied to all 10 existing personas within prisma/seed.ts, transforming sparse, one-sentence prompts into rich, multi-faceted definitions. For instance, a persona like "Sasha" or "Noor" that previously had minimal context now has a full, consistent profile.
Furthermore, we updated src/server/services/persona-generator.ts to instruct the LLM to use the CORE framework when generating new personas, and bumped the max token limit from 2048 to 4096 to accommodate the richer prompts.
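In spirit, the generator change looks something like the following sketch. The `GenerateOptions` shape and `buildGenerationRequest` helper are hypothetical stand-ins for the real code in `persona-generator.ts`; only the CORE instruction and the 2048-to-4096 token bump come from this session:

```typescript
// Hypothetical sketch: instruct the LLM to emit CORE-framework personas,
// with the token ceiling raised to fit the richer prompts.
const GENERATION_SYSTEM_PROMPT = `You generate LLM personas using the CORE framework.
Output sections for Context, Objective, Role & Rules, and Expression,
plus Safety & Anti-Hallucination guidance and two in-character Q&A examples.`;

interface GenerateOptions {
  model: string;
  maxTokens: number;
  system: string;
  prompt: string;
}

function buildGenerationRequest(description: string): GenerateOptions {
  return {
    model: "claude-opus-4-20250514",
    maxTokens: 4096, // bumped from 2048 to accommodate richer CORE prompts
    system: GENERATION_SYSTEM_PROMPT,
    prompt: `Create a persona for: ${description}`,
  };
}

const req = buildGenerationRequest("a pragmatic database reliability engineer");
```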
Unlocking Nuance: Multi-Persona Enrichment
Sometimes, a single expert isn't enough. Our next big step was enabling the system to leverage multiple personas for a single enrichment task. This allows for synthesized perspectives, cross-disciplinary insights, or even adversarial analysis.
The changes spanned our stack:
- Frontend (`src/app/(dashboard)/dashboard/projects/[id]/page.tsx`): The `PersonaPicker` component was switched from `mode="single"` to `mode="multi"`, with a cap of three selected personas. This provides a clear UI for users to select multiple lenses for their note enrichment.
- Backend (`src/server/trpc/routers/projects.ts`): Our tRPC input schema for enrichment now accepts `personaIds: z.array(z.string().uuid()).max(3).optional()`, validating the maximum of three selections.
- Service Layer (`src/server/services/note-enrichment.ts`): This is where the magic happens. The service now loads all selected personas, preserving their original selection order. If a single persona is selected, the behavior remains unchanged. If multiple are chosen, their individual system prompts are combined with a synthesis instruction, guiding the LLM to blend their expertise into a coherent, multi-faceted response. This is a game-changer for complex analysis.
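The synthesis step can be sketched as a small pure function. The `combinePersonaPrompts` name and the exact wording of the blending instruction are illustrative assumptions, not the actual implementation in `note-enrichment.ts`:

```typescript
// The router validates input with (as quoted above):
//   personaIds: z.array(z.string().uuid()).max(3).optional()
//
// Hypothetical synthesis step: a single persona passes through unchanged;
// multiple personas are numbered in selection order and followed by a
// blending instruction for the LLM.
function combinePersonaPrompts(prompts: string[]): string {
  if (prompts.length <= 1) return prompts[0] ?? "";
  const numbered = prompts
    .map((p, i) => `### Persona ${i + 1}\n${p}`)
    .join("\n\n");
  return (
    `${numbered}\n\n### Synthesis Instruction\n` +
    `Blend the expertise of all personas above into one coherent, ` +
    `multi-faceted response, noting where their perspectives diverge.`
  );
}

const combined = combinePersonaPrompts([
  "You are a technical architect with deep systems knowledge...",
  "You are a marketing specialist focused on positioning...",
]);
```

Keeping the single-persona path untouched means existing enrichments behave exactly as before; only multi-select opts into the synthesis wrapper.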
Powering Up with Claude Opus
No major LLM session is complete without integrating the latest and greatest models. We've added `claude-opus-4-20250514` to our `MODEL_CATALOG` in `src/lib/constants.ts` and updated the `COST_RATES` in `src/server/services/llm/types.ts` (at $75 per 1M tokens for input/output). We also took the opportunity to fix a stale Haiku model ID, ensuring our cost tracking is accurate.
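A minimal sketch of what catalog and cost-table entries like these can look like, with a helper that turns token counts into dollars. The shapes and the `estimateCostUsd` helper are hypothetical, and the rates simply echo the figure quoted in this post; always confirm against Anthropic's current pricing before relying on them:

```typescript
// Hypothetical shapes; the real entries live in src/lib/constants.ts
// and src/server/services/llm/types.ts.
const MODEL_CATALOG = [
  { id: "claude-opus-4-20250514", label: "Claude Opus 4" },
] as const;

// Cost per 1M tokens in USD. Illustrative values echoing this post;
// verify against current Anthropic pricing.
const COST_RATES: Record<string, { inputPerMTok: number; outputPerMTok: number }> = {
  "claude-opus-4-20250514": { inputPerMTok: 75, outputPerMTok: 75 },
};

// Estimate the cost of a single call from token counts.
function estimateCostUsd(
  model: string,
  inputTokens: number,
  outputTokens: number
): number {
  const rate = COST_RATES[model];
  if (!rate) throw new Error(`No cost rate for model: ${model}`);
  return (
    (inputTokens * rate.inputPerMTok + outputTokens * rate.outputPerMTok) /
    1_000_000
  );
}
```

Centralizing rates in one table is what made the stale Haiku ID fixable in a single place.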
This integration means our users now have access to one of the most capable models on the market, ready to tackle even more demanding tasks.
A Developer's Rite of Passage: The Backtick Battle
Every significant dev session has its "aha!" moments, and sometimes, those are preceded by a "what the heck?!" moment. For us, it was a seemingly innocuous issue with backticks in our prisma/seed.ts file.
The Problem:
When defining our elaborate persona `systemPrompt`s as template literals in `prisma/seed.ts`, we naturally wanted to include inline code examples using backticks (e.g., `` `const myVar = 10;` ``). However, when TypeScript's parser encounters an unescaped backtick *within* a template literal, it interprets it as the closing backtick of the template string itself, leading to a `TS1005` parse error further down the line.
Consider this simplified example:
```typescript
// In prisma/seed.ts (problematic)
const personaPrompt = `
Your task is to generate TypeScript code snippets.
For example, use `const myVariable = 'value';` for variable declarations.
`; // TS1005: Parse error!
```
The TypeScript compiler sees the backtick after `use ` and thinks the template literal ends there, causing the subsequent characters to become syntax errors.
The Workaround & Lesson Learned: After some head-scratching, the solution was twofold:
- For general inline code references: we simply removed the backticks and used single quotes, or no special formatting at all where the context was clear (e.g., writing `==` plainly rather than wrapping it in backticks). This kept the prompts clean and TypeScript happy.
- For literal backticks that must be present (e.g., for Prisma-specific syntax in Dr. Priya Sharma's prompt): we used a backslash to escape the backtick (`` \` ``). This tells TypeScript to treat it as a literal character, not a template-literal delimiter.
```typescript
// In prisma/seed.ts (workaround)
const personaPrompt = `
Your task is to generate TypeScript code snippets.
For example, use 'const myVariable = "value";' for variable declarations.
Or, if a literal backtick is absolutely needed:
When referring to a specific field, use \`model.field\` syntax.
`; // All good!
```
Lesson Learned: When defining complex string literals, especially in seed files that might be processed by multiple layers (TypeScript compiler, database seeder), be acutely aware of special characters like backticks and `${` interpolation sequences. Escaping them, or choosing alternative syntax, is key to avoiding unexpected parse errors.
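Both escapes can be demonstrated in a couple of lines; this is a minimal sketch of the two cases the lesson covers:

```typescript
// Backticks and ${ both need escaping inside template literals.
const withBacktick = `Use \`model.field\` syntax.`; // escaped backtick stays literal
const withDollar = `Interpolation looks like \${variable}.`; // escaped ${ is NOT evaluated

// Unescaped, ${variable} would be interpolated (and here would fail to
// compile, since no 'variable' binding exists); escaped, it stays as text.
```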
What's Next on the Horizon
With the core work complete (commit 52d9ffc, 139 unit tests passing, typecheck clean!), we're looking ahead:
- Seed the Database: A crucial `npm run db:seed` is needed to apply all the new CORE-framework persona prompts to our existing tenants.
- UI Testing: Thoroughly test the multi-persona enrichment UI to verify the max-3 cap works and, critically, assess the quality of the combined prompt synthesis.
- Documentation: Update `docs/06-workflow-intelligence.md` with compliance report export documentation.
- Advanced Testing: As per our design document, we'll consider adding A/B temperature testing and adversarial jailbreak tests for our CORE personas to ensure robustness.
This session represents a significant leap forward in our LLM capabilities, making our system more intelligent, flexible, and robust. We're excited to see the deeper insights and richer interactions this will enable for our users.