Unlocking Deeper AI Interactions: CORE Personas, Multi-Persona Enrichment, and Claude Opus
Dive into our latest development sprint where we overhauled our AI persona system with the CORE framework, enabled multi-persona note enrichment, and integrated Anthropic's powerful Claude Opus model.
We just wrapped up an exhilarating development sprint, pushing a significant set of enhancements that profoundly impact how our AI interacts and generates insights. Our focus was on three key areas: refining our AI personas for unparalleled consistency, enabling richer multi-perspective analysis, and expanding our arsenal of large language models. The result? A more intelligent, flexible, and powerful platform ready to elevate your workflow.
The Heart of Interaction: Introducing the CORE Persona Framework
One of the foundational elements of effective AI interaction is the persona. A well-defined persona guides the LLM to adopt a specific role, tone, and set of constraints, leading to more relevant and consistent outputs. Historically, some of our built-in personas, while functional, were a bit sparse – perhaps just a single sentence defining their essence. This could sometimes lead to unpredictable behavior, as the LLM had too much room for interpretation.
To combat this, we've introduced and refactored all our built-in personas (and updated our persona generation logic for user-created ones) to adhere to the CORE framework. CORE stands for:
- C (Context): Who is this persona? What's their background, years of experience, domain, and even the era they operate in? This grounds the AI in a specific identity.
- O (Objective): What is their primary mission? What are their core functions or goals when responding?
- R (Role & Rules): This is crucial. We define what the persona MUST do (positive constraints) and what they MUST NOT do (negative constraints). This acts as a powerful guardrail.
- E (Expression): How should they communicate? What's their preferred tone, syntax, and overall communication style?
Beyond CORE, we've also baked in explicit instructions for Safety & Anti-Hallucination (e.g., fact-binding, fallback phrases) and provided In-Character Examples (two Q&A pairs) to demonstrate the persona's voice and boundaries.
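As a rough illustration, a CORE-structured system prompt might look like the sketch below. The persona, wording, and section headers here are invented for illustration; the real prompts live in prisma/seed.ts and are considerably longer.

```typescript
// Hypothetical CORE-structured persona prompt, illustrating the sections
// described above. Everything about "Dr. Maya Lin" is made up for this sketch.
const examplePersonaPrompt = `
# Context
You are Dr. Maya Lin, a data engineer with 12 years of experience building
analytics pipelines in the cloud era of the 2020s.

# Objective
Your mission is to review notes about data workflows and surface risks,
bottlenecks, and quick wins.

# Role & Rules
You MUST ground every claim in the provided note content.
You MUST quote the exact phrase you are reacting to.
You MUST NOT invent metrics, tools, or outcomes not present in the note.

# Expression
Communicate in short, direct sentences with a pragmatic, friendly tone.

# Safety & Anti-Hallucination
If the note lacks the information needed to answer, reply:
"I don't have enough context in this note to say."

# In-Character Examples
Q: Should we shard this table?
A: The note doesn't mention row counts, so I can't recommend sharding yet.
`.trim();

// A quick structural check that every CORE section is present:
const coreSections = ["# Context", "# Objective", "# Role & Rules", "# Expression"];
const hasAllSections = coreSections.every((s) => examplePersonaPrompt.includes(s));
```

The negative constraints under Role & Rules are the guardrail: they give the model an explicit "do not" list rather than leaving the boundary to interpretation.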
This refactor, applied to all 10 built-in personas in prisma/seed.ts (including expanding previously sparse ones like Sasha, Noor, Avery, and Quinn), means our AI agents are now more consistent, reliable, and better equipped to provide high-quality, in-character responses. We also updated src/server/services/persona-generator.ts to ensure all new personas generated by the LLM follow this robust CORE structure, bumping max tokens from 2048 to 4096 to accommodate the richer detail.
Beyond Single Perspectives: Multi-Persona Enrichment
Sometimes, a single expert perspective isn't enough. For complex tasks or nuanced analysis, you might need insights from multiple angles. We're thrilled to announce that our note enrichment feature now supports multi-persona selection, allowing you to choose up to three personas to analyze your content simultaneously!
This required changes across several parts of our stack:
- UI Update: In `src/app/(dashboard)/dashboard/projects/[id]/page.tsx`, the `PersonaPicker` component was switched from `mode="single"` to `mode="multi"` and capped at a maximum of 3 selections.
- API Enhancements: Our `src/server/trpc/routers/projects.ts` router now accepts an optional `personaIds` array (validated to be UUIDs, max 3 items) in the enrichment input.
- Backend Logic: The `src/server/services/note-enrichment.ts` service now loads all selected personas, preserving their selection order. If only one persona is chosen, the behavior remains unchanged. With multiple personas, it combines their CORE-defined system prompts with a synthesis instruction, allowing the LLM to provide a richer, synthesized output drawing on diverse "expert" opinions.
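A minimal sketch of that assembly logic, assuming a simplified Persona shape. The function name `buildEnrichmentPrompt` and the exact synthesis wording are illustrative, not the actual exports of note-enrichment.ts:

```typescript
// Simplified persona shape for this sketch; the real model has more fields.
interface Persona {
  id: string;
  name: string;
  systemPrompt: string; // the CORE-structured prompt
}

const MAX_PERSONAS = 3;

// Illustrative prompt assembly: single persona passes through unchanged;
// multiple personas are concatenated in selection order, followed by a
// synthesis instruction.
function buildEnrichmentPrompt(personas: Persona[]): string {
  if (personas.length === 0) throw new Error("At least one persona required");
  if (personas.length > MAX_PERSONAS) throw new Error(`At most ${MAX_PERSONAS} personas`);

  if (personas.length === 1) return personas[0].systemPrompt;

  const sections = personas.map(
    (p, i) => `## Expert ${i + 1}: ${p.name}\n${p.systemPrompt}`
  );
  return [
    ...sections,
    "## Synthesis Instruction",
    "Analyze the note from each expert perspective above, then produce a " +
      "single synthesized response that reconciles their viewpoints.",
  ].join("\n\n");
}
```

Preserving selection order matters because the first-listed persona tends to dominate the model's framing, so the user's ordering doubles as a soft priority signal.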
This upgrade empowers users to gain a more comprehensive understanding of their notes, leveraging the combined wisdom and unique perspectives of multiple AI agents.
Expanding Our LLM Toolkit: Welcoming Claude Opus
To further enhance the intelligence at our fingertips, we've integrated Anthropic's powerful claude-opus-4-20250514 model into our MODEL_CATALOG (src/lib/constants.ts). Opus is known for its advanced reasoning capabilities and extensive context window, making it an excellent addition for complex analytical tasks.
Alongside this, we've updated our COST_RATES (src/server/services/llm/types.ts) to reflect Opus's pricing ($75 per 1M output tokens). We also took the opportunity to fix a stale model ID for Haiku, ensuring it now correctly references claude-haiku-4-5-20251001 throughout the codebase.
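To show how a rate table like this feeds cost estimation, here is a small sketch. The field names and the `estimateCostUSD` helper are invented for illustration; the real shapes in src/lib/constants.ts and src/server/services/llm/types.ts may differ. The rates use Anthropic's published Opus 4 list pricing ($15 input / $75 output per 1M tokens).

```typescript
// Illustrative cost-rate shape (USD per 1M tokens); not the actual types
// from llm/types.ts.
type CostRate = { inputPerMTokens: number; outputPerMTokens: number };

const COST_RATES: Record<string, CostRate> = {
  "claude-opus-4-20250514": { inputPerMTokens: 15, outputPerMTokens: 75 },
};

// Hypothetical helper: prorate the per-million rates by actual token counts.
function estimateCostUSD(
  modelId: string,
  inputTokens: number,
  outputTokens: number
): number {
  const rate = COST_RATES[modelId];
  if (!rate) throw new Error(`Unknown model: ${modelId}`);
  return (
    (inputTokens / 1e6) * rate.inputPerMTokens +
    (outputTokens / 1e6) * rate.outputPerMTokens
  );
}
```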
A Developer's Diary: Lessons Learned in Template Literals
Even seasoned developers hit snags, and this sprint was no exception! While refactoring the persona system prompts in prisma/seed.ts, which are defined as TypeScript template literal strings, we encountered a peculiar TS1005 parse error.
The culprit? Unescaped backtick characters (`) used for inline code references within our template literal strings. TypeScript's parser, when encountering a backtick inside an existing template literal, interprets it as the closing delimiter, leading to a syntax error. Similarly, unescaped ${ patterns can cause issues.
Our workaround and lesson learned:
- For general inline code examples within the persona prompts (e.g., `==` for equality), we simply removed the backticks, since the surrounding context made it clear the text was a code reference.
- For cases where a backtick was truly necessary (like referencing Prisma-specific syntax in Dr. Priya Sharma's prompt), we escaped it with a backslash (`` \` ``).
The key takeaway: When writing complex strings, especially those that might contain code snippets or variable interpolation patterns, within TypeScript template literals, be mindful of nested backticks and ${ sequences. Escaping them (or rephrasing) is often necessary to keep the parser happy!
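To make the escaping rule concrete, here is a small self-contained example. The rule text is invented, echoing the kind of prompt content in our seed file:

```typescript
// Inside a template literal, a raw backtick closes the string and a raw
// "${" begins interpolation, so both must be escaped with a backslash.
// Hypothetical prompt fragment illustrating the fix described above:
const personaRule = `You MUST wrap Prisma attributes like \`@unique\` in
backticks, and treat literal \${...} sequences as plain text.`;

// The escapes render as ordinary characters in the resulting string:
const hasBacktickedCode = personaRule.includes("`@unique`");
const hasLiteralDollarBrace = personaRule.includes("${...}");
```

Without the backslashes, TypeScript's parser would treat the first inner backtick as the end of the literal and report a TS1005-style syntax error on whatever follows.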
Looking Ahead: What's Next on Our Horizon?
With these significant updates deployed (commit 52d9ffc, 139 unit tests passing, typecheck clean!), our immediate next steps include:
- Database Seeding: Running `npm run db:seed` to apply the new CORE-framework persona prompts to all existing tenants.
- Code Analysis Page Fix: Addressing a missing `projectId` in the `ReportGeneratorModal` for our code analysis feature, which requires adding project data to the `codeAnalysis.get` tRPC query.
- Documentation: Updating `docs/06-workflow-intelligence.md` with comprehensive compliance report export documentation.
- Advanced Persona Testing: Exploring A/B temperature testing and adversarial jailbreak tests for our CORE personas, as outlined in our design document, to further harden their reliability.
- Multi-Persona UI Testing: Thoroughly testing the new multi-persona enrichment UI to verify the max 3 cap works as expected and, crucially, to assess the quality of the combined prompt synthesis.
This sprint marks a substantial leap forward in our platform's AI capabilities. We're excited to see how these enhancements empower you to achieve even greater insights and productivity!