nyxcore-systems

Unlocking Granular Control: Empowering Per-Step AI Model & Persona Overrides in Our Workflow Engine

Dive into how we implemented per-step AI model and persona selection, transforming our workflow engine into a powerhouse of granular control and tackling tricky UI challenges along the way.

AI · LLM · Workflow · UX · Frontend · Backend · TypeScript · Prisma · Next.js · Developer Experience

Building powerful AI applications often means striking a balance between automation and control. While a well-defined workflow can streamline complex tasks, rigid configurations can quickly become a bottleneck. What if you need a specific model for a nuanced summarization step, but a different one for creative brainstorming? Or what if a particular step requires a unique persona, distinct from the overall workflow's tone?

This was the challenge we set out to solve: to move beyond workflow-level defaults and introduce per-step provider/model selection and persona overrides into our AI workflow engine. This post chronicles our journey, from schema changes to UI refinements and the valuable lessons learned along the way.

The Goal: Fine-Grained Control, Step by Step

Our objective was clear: empower users with the ability to define not just what happens at each step of an AI workflow, but also who performs it (persona) and how (LLM provider/model). This meant:

  1. Adding an optional personaId to individual workflow steps.
  2. Allowing users to select a specific LLM provider and model for each step.
  3. Ensuring these per-step configurations seamlessly override any workflow-level defaults.
  4. Fixing critical UX bugs identified during initial testing to ensure a smooth user experience.

After a focused development session, I'm thrilled to report that these features are now complete and robust, pushed in two key commits: 14edacf (initial implementation) and c96f057 (UX fixes).

The Implementation Journey: From Database to UI

Let's break down the technical steps taken to bring this vision to life.

1. Database Schema & API Layer

The foundation of any new feature often starts with the data model. We needed to associate a Persona with a WorkflowStep.

  • Schema Update: In prisma/schema.prisma, we added an optional personaId (UUID) to our WorkflowStep model, establishing a foreign key relationship to the Persona model. We also added a reverse relation workflowSteps on the Persona model for easier querying.

    ```prisma
    model WorkflowStep {
      id          String    @id @default(uuid())
      workflowId  String
      workflow    Workflow  @relation(fields: [workflowId], references: [id], onDelete: Cascade)
      personaId   String?
      persona     Persona?  @relation(fields: [personaId], references: [id])
      // ... other fields
    }

    model Persona {
      id            String          @id @default(uuid())
      // ... other fields
      workflowSteps WorkflowStep[]
    }
    ```

  • Database Migration & Client Generation: After updating the schema, we ran npm run db:push && npm run db:generate to apply the changes to our database and regenerate the Prisma client, ensuring WorkflowStep.personaId was correctly typed and available.

  • API Endpoint Modifications: Our tRPC router (src/server/trpc/routers/workflows.ts) was updated to accept personaId in the steps.update input, allowing the frontend to send this new configuration.

  • Duplication Logic: When duplicating a workflow or step, we ensured the personaId (and its relation) was correctly carried through using persona: { connect: { id } }.
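The duplication mapping can be sketched as a pure function. The type and function names below are illustrative, not our actual schema types; the payload shape mirrors Prisma's nested-write syntax for connecting relations.

```typescript
// Hypothetical snapshot of the fields we copy when duplicating a step.
interface StepSnapshot {
  name: string;
  personaId: string | null;
}

// Build a Prisma-style `create` payload for the duplicated step, carrying the
// persona relation through via a nested `connect` only when an override exists.
function buildDuplicateStepData(step: StepSnapshot, targetWorkflowId: string) {
  return {
    name: step.name,
    workflow: { connect: { id: targetWorkflowId } },
    // No personaId on the source step means the duplicate simply falls back
    // to the workflow-level persona, so we omit the key entirely.
    ...(step.personaId ? { persona: { connect: { id: step.personaId } } } : {}),
  };
}
```

Omitting the `persona` key (rather than passing `null`) keeps the duplicate's behavior identical to a freshly created step with no override.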

2. The Core Workflow Engine

The real magic happens in workflow-engine.ts: this is where the per-step overrides actually take effect.

  • executeStep() Logic: We modified the executeStep() function in src/server/services/workflow-engine.ts. Now, before executing an LLM call for a given step, it first checks if a personaId is defined directly on that WorkflowStep. If present, it loads that specific persona from the database and uses it, effectively overriding any persona configured at the workflow level. The same logic applies to the LLM provider/model selection.
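The resolution order described above can be sketched as a small pure function. This is an illustrative reduction, not the actual engine code, and the provider/model strings in the test are placeholders: a value set on the step wins, otherwise the workflow-level default applies.

```typescript
// Effective configuration for a single LLM call.
interface LlmConfig {
  personaId: string | null;
  provider: string;
  model: string;
}

// A step may leave any of these unset (null/undefined = "no override").
interface StepLike {
  personaId?: string | null;
  provider?: string | null;
  model?: string | null;
}

// Per-step values take precedence; anything unset falls back to the
// workflow-level defaults. `??` treats both null and undefined as "unset".
function resolveStepConfig(step: StepLike, workflowDefaults: LlmConfig): LlmConfig {
  return {
    personaId: step.personaId ?? workflowDefaults.personaId,
    provider: step.provider ?? workflowDefaults.provider,
    model: step.model ?? workflowDefaults.model,
  };
}
```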

3. Frontend User Experience (UX)

Implementing the backend logic is one thing; making it intuitive and accessible in the UI is another.

  • Step Header Restructuring: To accommodate the new ProviderPicker directly on the step header, we had to restructure the header in src/app/(dashboard)/dashboard/workflows/[id]/page.tsx. This involved splitting the main toggle <button> from the ProviderPicker into distinct elements within a <div> wrapper. This was crucial to avoid invalid HTML (nested interactive elements) and prevent click propagation issues.
  • ProviderPicker Integration: We integrated the ProviderPicker (a component allowing selection of LLM providers and models) directly into the step headers. Crucially, this picker is only visible when the workflow is in a pending or paused state, maintaining a read-only view for active or completed workflows. We also removed the compact prop to ensure the model name was always visible, improving clarity.
  • Per-Step Persona Dropdown: Inside the expanded step body, we added a <select> dropdown for persona selection. This dropdown dynamically loads all available personas and displays their descriptions (truncated for brevity) alongside their names, making it easy for users to pick the right "voice" for each step. We also ensured the personas.list query was no longer gated, always loading all personas for this dropdown.
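The dropdown's option labels can be sketched as a small helper. The function name is hypothetical; the 50-character truncation limit is the one mentioned later in this post.

```typescript
// Build the label shown for each persona <option>: the name, plus the
// description truncated for brevity so long descriptions don't blow out
// the dropdown width.
function personaOptionLabel(name: string, description?: string | null): string {
  if (!description) return name;
  const truncated =
    description.length > 50 ? description.slice(0, 50) + "…" : description;
  return `${name}: ${truncated}`;
}
```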

Lessons Learned: Navigating the "Pain Log"

No development session is without its challenges. Here's what we learned and how we overcame them:

Challenge 1: Nested Interactive Elements

  • The Problem: Our initial thought was to nest the ProviderPicker directly inside the existing step header <button> element that toggles the step's expansion. This seemed logical from a layout perspective.
  • The Fallout: This immediately led to invalid HTML (buttons inside buttons are a no-go) and unpredictable click propagation, making both the step toggle and the ProviderPicker unreliable.
  • The Solution: We refactored the step header into a <div> wrapper. Inside this wrapper, we placed the toggle <button> on the left and the ProviderPicker on the right. To prevent the ProviderPicker's clicks from inadvertently triggering the step expansion, we added onClick={(e) => e.stopPropagation()} to the picker's wrapper. A simple yet effective fix for a common frontend pitfall!
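Why stopPropagation fixes this can be shown with a tiny event-bubbling model in plain TypeScript (no React). All names here are illustrative, and the wiring is one plausible version of the fix described above: the header wrapper toggles expansion on click, while the picker's wrapper swallows its own clicks.

```typescript
// A click event that handlers can cancel.
type SimEvent = { stopPropagation: () => void };

// A DOM-like node: optional click handler, optional parent to bubble into.
interface UiNode {
  parent?: UiNode;
  onClick?: (e: SimEvent) => void;
}

// Dispatch a click on `target` and bubble it up through ancestors until
// a handler calls stopPropagation() or the root is reached.
function dispatchClick(target: UiNode): void {
  let stopped = false;
  const event: SimEvent = { stopPropagation: () => { stopped = true; } };
  for (let node: UiNode | undefined = target; node && !stopped; node = node.parent) {
    node.onClick?.(event);
  }
}

// The restructured header: the wrapper toggles step expansion, and the
// picker wrapper stops propagation so its clicks never reach the toggle.
let expanded = false;
let pickerOpened = false;

const headerWrapper: UiNode = { onClick: () => { expanded = !expanded; } };
const pickerWrapper: UiNode = {
  parent: headerWrapper,
  onClick: (e) => { e.stopPropagation(); pickerOpened = true; },
};
```

Clicking `pickerWrapper` runs only the picker handler; clicking the header directly still toggles expansion.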

Challenge 2: Unpersisting Persona Selection & Type Safety

  • The Problem: During initial testing, users reported that their selected persona wasn't persisting. They'd choose an option from the dropdown, but the UI would immediately revert to the previous selection. Digging in, we found we were using (step as any).personaId for the select value binding, which, while bypassing TypeScript, often indicates a deeper issue.
  • The Fallout: A frustrating user experience and a clear sign of a mismatch between the UI state and the actual server state. The any cast was hiding the fact that our local step object wasn't correctly reflecting the updates.
  • The Solution:
    1. Optimistic UI State: We implemented an optimistic local state personaOverrides: Record<string, string | null>. When a user selects a persona, this local state updates immediately, giving instant visual feedback. This state is then cleared after a successful server refetch confirms the server state has been updated, ensuring consistency.
    2. Type Safety: After regenerating the Prisma client, step.personaId was properly typed. Switching to this correct property resolved underlying data access issues.
    3. Enhanced Dropdown: We also added the persona description after its name (truncated to 50 characters) in the dropdown options, providing more context to users.
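The optimistic-override pattern above can be reduced to two pure functions. The real implementation lives in React state; apart from `personaOverrides`, the names here are hypothetical.

```typescript
// Per-step optimistic selections keyed by step id; null means the user
// explicitly cleared the persona for that step.
type PersonaOverrides = Record<string, string | null>;

// The dropdown binds to the pending override when one exists, otherwise to
// the last-known server value, so selections update instantly.
function displayedPersonaId(
  stepId: string,
  serverPersonaId: string | null,
  overrides: PersonaOverrides,
): string | null {
  const pending = overrides[stepId];
  return pending === undefined ? serverPersonaId : pending;
}

// After a successful refetch confirms the server state, drop the confirmed
// entries so the server value becomes the source of truth again.
function afterRefetch(overrides: PersonaOverrides, confirmedStepIds: string[]): PersonaOverrides {
  const next = { ...overrides };
  for (const id of confirmedStepIds) delete next[id];
  return next;
}
```

Note that a `null` override must still win over the server value, which is why the lookup checks for `undefined` rather than truthiness.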

Active State & Next Steps

With these changes, our database now robustly handles persona_id on workflow_steps, and our Prisma client is fully regenerated. No environment variables were touched, simplifying deployment.

While feature-complete, a few immediate next steps remain for full verification:

  1. Manual Verification: Open a workflow with a failed Anthropic step, switch the provider via the picker, and re-run to ensure the change takes effect.
  2. Persona Influence: Verify that per-step persona selection persists and genuinely influences the LLM's output.
  3. Workflow-Level Fallback: Confirm that workflow-level personas still work correctly when no per-step override is set.
  4. Read-Only States: Verify that completed or running workflows correctly display read-only provider text instead of interactive pickers.
  5. Mobile UX: Consider the mobile user experience. Currently, the ProviderPicker is styled hidden sm:block (i.e., hidden below the sm breakpoint), so it likely needs a dedicated mobile fallback or responsive adjustments.

Conclusion

Empowering users with granular control over their AI workflows is a game-changer for flexibility and effectiveness. By enabling per-step model and persona selection, we've transformed our engine from a powerful tool into a truly adaptable one. The journey involved careful database design, robust backend logic, and thoughtful frontend implementation, from navigating common pitfalls like nested interactive elements to ensuring a smooth, optimistic user experience. We're excited to see how users leverage this enhanced control to build even more sophisticated and precise AI applications!