Adaptive AI Workflows: Unlocking Granular Control with Per-Step Provider & Persona Overrides
We've implemented a critical feature allowing users to override LLM provider/model and persona settings for individual workflow steps, dramatically improving flexibility and resilience against API outages or specific step requirements.
Imagine your meticulously crafted AI workflow grinding to a halt because a single LLM provider experiences an outage or a billing hiccup. Or perhaps you need a slightly different "tone" or "expertise" for just one step in a multi-stage process, without altering the entire workflow's persona. These aren't hypothetical scenarios; they're real-world challenges in building robust, flexible AI applications.
This week, we tackled these very problems head-on, rolling out a significant enhancement to our workflow engine: per-step LLM provider/model selection and persona overrides. This feature empowers users with unprecedented granular control, making workflows more resilient, adaptable, and precise.
The Challenge: When Global Settings Aren't Enough
Our existing workflow system allowed users to define a global LLM provider (e.g., Anthropic, OpenAI, Google) and a default persona for an entire workflow. While great for consistency, this design had limitations:
- Provider Resilience: If Anthropic's API went down, the entire workflow using it would fail, even if other steps could theoretically run on OpenAI. There was no way to quickly switch just the affected steps.
- Contextual Nuance: Some steps might benefit from a more "technical" persona, while others require a "creative writer" persona, even within the same workflow. Overriding the whole workflow's persona wasn't ideal.
- Debugging & Experimentation: Quickly testing different models or personas for a problematic step was cumbersome, often requiring workflow duplication and global setting changes.
The goal was clear: provide the user with the power to make these critical decisions directly on the execution page, step by step.
The Solution: Granular Control at Your Fingertips
We've implemented the ability to select a specific LLM provider/model and override the workflow's default persona for each individual step. This means:
- On-the-Fly Provider Switching: If Anthropic is down, users can click a badge on a failing step, switch it to OpenAI, and re-run that step without touching the rest of the workflow.
- Precision Persona Application: Design a workflow where an initial step summarizes with a "concise analyst" persona, a middle step brainstorms with a "creative marketer" persona, and a final step drafts with a "professional editor" persona—all within the same workflow.
- Enhanced Debugging: Easily isolate and test different LLM configurations for specific steps to fine-tune performance.
Let's dive into how we built this.
Behind the Scenes: The Technical Journey
Implementing this feature touched several layers of our stack, from database schema to frontend UI.
1. Database Evolution with Prisma
The foundation for per-step persona overrides began in our prisma/schema.prisma file. We added an optional personaId to the WorkflowStep model and established a reverse relation on the Persona model.
```prisma
// prisma/schema.prisma
model WorkflowStep {
  id        String   @id @default(uuid())
  // ... other fields
  personaId String?  @db.Uuid
  persona   Persona? @relation(fields: [personaId], references: [id])
}

model Persona {
  id            String         @id @default(uuid())
  // ... other fields
  workflowSteps WorkflowStep[]
}
```
After modifying the schema, `npm run db:push && npm run db:generate` ensured our database was synced and the Prisma client was regenerated, making the new `personaId` field available throughout our backend.
2. API Layer: tRPC for Type-Safe Updates
Our tRPC API endpoint for updating workflow steps (steps.update) needed to accept this new, optional personaId. We updated its input schema using Zod:
```typescript
// src/server/trpc/routers/workflows.ts
// ... inside the steps.update input schema
personaId: z.string().uuid().nullable().optional(),
```
We also ensured that when a workflow is duplicated, any step-level persona overrides are correctly carried over using Prisma's `connect` syntax for relations: `persona: { connect: { id } }`. This avoids a common gotcha where assigning the foreign-key field directly can fail for relations that are not yet connected.
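In spirit, the duplication logic builds a Prisma `create` payload that connects the persona relation only when the source step actually has an override. The sketch below is illustrative, not our actual code; the helper name and the fields beyond `personaId` are assumptions:

```typescript
// Hypothetical shape of a workflow step being duplicated;
// fields other than personaId are illustrative.
interface SourceStep {
  prompt: string;
  order: number;
  personaId: string | null;
}

// Build the data object for a prisma.workflowStep.create() call.
// Using nested `connect` syntax (rather than assigning personaId
// directly) keeps Prisma's relation handling consistent, and the
// persona key is omitted entirely when there is no override.
function buildDuplicatedStepData(
  step: SourceStep,
  newWorkflowId: string,
): {
  prompt: string;
  order: number;
  workflow: { connect: { id: string } };
  persona?: { connect: { id: string } };
} {
  return {
    prompt: step.prompt,
    order: step.order,
    workflow: { connect: { id: newWorkflowId } },
    ...(step.personaId
      ? { persona: { connect: { id: step.personaId } } }
      : {}),
  };
}
```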
3. Workflow Engine: Prioritizing Step-Level Settings
The core logic resides within `src/server/services/workflow-engine.ts`, specifically in the `executeStep()` function. This function was modified to first check for a `personaId` defined at the individual step level. If present, it overrides any workflow-level persona that would otherwise be injected into the LLM call. This ensures the granular control takes precedence.
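Reduced to its essence, the precedence rule is a small resolver: anything set on the step wins, anything unset falls back to the workflow defaults. This is a sketch with hypothetical type and function names, not the engine's actual code:

```typescript
// Hypothetical settings shape; the engine's real types differ.
interface LlmSettings {
  provider: string; // e.g. "anthropic" | "openai" | "google"
  model: string;
  personaId: string | null;
}

// Per-step overrides are partial: a field left unset (undefined)
// falls back to the workflow-level default via ??, while a field
// that is set (including personaId) takes precedence.
function resolveStepSettings(
  workflowDefaults: LlmSettings,
  stepOverrides: Partial<LlmSettings>,
): LlmSettings {
  return {
    provider: stepOverrides.provider ?? workflowDefaults.provider,
    model: stepOverrides.model ?? workflowDefaults.model,
    personaId: stepOverrides.personaId ?? workflowDefaults.personaId,
  };
}
```

Note the design choice: `??` treats only `undefined`/`null` as "no override", which matches the nullable-and-optional `personaId` accepted by the API.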
4. Frontend Magic: React for a Seamless UX
The user interface required careful thought to integrate these new controls without cluttering the existing design.
- Provider Picker Integration: We imported our `ProviderPicker` component into `src/app/(dashboard)/dashboard/workflows/[id]/page.tsx`.
- Header Restructuring: A key challenge (and lesson learned, see below) was placing the `ProviderPicker`. We restructured the step header, splitting the toggle button from the provider area. This allowed us to show a compact `ProviderPicker` with `filterAvailable` enabled directly in the step header when the workflow is `pending` or `paused`, enabling quick changes.
- Read-Only State: When a workflow is `running` or `completed`, the provider/model selection becomes read-only text, maintaining consistency with existing behavior.
- Per-Step Persona Dropdown: Inside the expanded step body, after the prompt editor and before fan-out progress, we added a `<select>` dropdown for persona selection. This provides a clear, dedicated space for this override.
- Always-On Persona Query: To populate the persona dropdowns efficiently, we changed our personas query to always load, removing a previous `enabled: settingsOpen` gate.
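The persona dropdown's one wrinkle is mapping its "use workflow default" option back to the nullable `personaId` the `steps.update` mutation accepts. A minimal sketch of that mapping, with an illustrative helper name and the assumption that the empty-string option means "no override":

```typescript
// Map the <select> value to the steps.update input: the empty-string
// option ("use workflow default") becomes null, which the API's
// nullable personaId field interprets as clearing the override.
function toPersonaUpdateInput(selectValue: string): { personaId: string | null } {
  return { personaId: selectValue === "" ? null : selectValue };
}
```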
5. Type Safety & Validation
Throughout the process, `npm run typecheck` was our constant companion, ensuring that all our changes, from database schema to frontend component props, remained type-safe and consistent. It passed clean, giving us confidence in the new implementation.
Lessons Learned: Navigating Frontend Interactions
Not every path to a feature is smooth. We encountered a classic frontend interaction challenge when integrating the ProviderPicker:
- The Problem: Our initial thought was to place the `ProviderPicker` directly inside the existing step header `<button>` element (which toggles the step's expansion). However, the `ProviderPicker` component itself renders an internal `<button>`. Nesting buttons (`<button><button>...</button></button>`) is invalid HTML and leads to unpredictable click propagation issues, making both buttons difficult to interact with reliably.
- The Solution: We refactored the step header. Instead of a single interactive button wrapping the entire header, we now have a `<div>` wrapper. Inside this `<div>`, the toggle `<button>` is on the left, and the `ProviderPicker` (wrapped in its own `<div>`) is on the right. Crucially, we added `onClick={(e) => e.stopPropagation()}` to the `ProviderPicker`'s wrapper `div`. This prevents clicks on the provider selection area from inadvertently triggering the step's expand/collapse action.
This experience reinforced the importance of understanding HTML semantics and event bubbling, especially when dealing with nested interactive components.
What's Next?
With the feature fully implemented and type-checked, the immediate next steps involve:
- Committing and pushing these changes.
- Thorough manual verification:
- Open a workflow with failed Anthropic steps, switch to OpenAI/Google, and re-run.
- Verify per-step persona selection persists and influences LLM output.
- Confirm workflow-level personas still work correctly when no per-step override is set.
- Ensure completed/running workflows correctly display read-only provider text.
This new level of control gives our users far more flexibility and robustness in their AI-powered workflows. We're excited to see the more sophisticated and resilient applications it enables!