nyxcore-systems

Unlocking Project Wisdom: Our Dynamic Workflow Engine Reaches End-to-End Readiness (and a Tricky Prisma Bug Squashed!)

We've hit a major milestone! Our dynamic workflow builder and execution engine, designed to inject 'project wisdom' into automated processes, is feature-complete and ready for comprehensive end-to-end testing. This post dives into the journey, the final bug that almost stalled us, and the crucial lessons learned.

workflow-engine, prisma, typescript, nextjs, trpc, ai, llm, development-journey, lessons-learned

It's an exciting day in our development journey! After weeks of intense focus, our ambitious Dynamic Workflow Builder and Execution Engine has reached a significant milestone: all core features are implemented, type-checking is pristine, and the last critical runtime bug has been squashed. We're now poised for full end-to-end testing, bringing us one step closer to empowering users with automated, "wisdom-infused" workflows.

Our overarching goal with this project has been to create a flexible system where users can visually construct complex workflows, execute them, and most importantly, integrate "project wisdom" – contextual knowledge and insights – directly into the automated steps. Imagine an AI agent within a workflow, not just performing tasks, but doing so with the accumulated knowledge from your past projects. That's the vision!

The Journey So Far: Building the Foundations

This wasn't a small undertaking. Our progress has been a cumulative effort across several phases:

  • Laying the Schema Groundwork (Phases 1-3): We started by evolving our database schema, introducing 13 new fields for WorkflowStep, a WorkflowTemplate model, and crucial userId, description, and consolidationIds fields on the Workflow itself. We also established a shared BYOK resolver for LLM providers and updated our core constants to support new step templates and built-in workflows.
  • Engineering the Core Engine (Phases 4-5): This was the heart of the system. We rewrote the workflow engine from the ground up, implementing powerful template variables like {{input}}, {{steps.Label.content}}, and {{consolidations}}. Features like retry mechanisms, resume capabilities, and cost estimation were baked in. Concurrently, our tRPC router received a major overhaul to support all these new functionalities, including operations for steps, templates, duplication, resuming, cost estimation, and deletion.
  • Crafting the User Experience (Phases 6-8): What's a powerful engine without a great interface? We developed the intuitive builder page (/workflows/new) using dnd-kit for drag-and-drop workflow creation, complete with a consolidation picker. The execution page provides a live pipeline visualization, leverages Server-Sent Events (SSE) for real-time updates, features a Markdown renderer for step outputs, and allows for editing, retrying, and managing linked consolidations via a settings panel. We also updated the main workflow list page for better overview.
  • Injecting Project Wisdom (Consolidation Integration): This is where the magic happens. By adding consolidationIds to the Workflow schema, our engine can now loadConsolidationContent(), which in turn calls generatePromptHints(). This means that a crucial step in our Deep Build Pipeline, "Project Wisdom," can now dynamically inject relevant {{consolidations}} content directly into the LLM prompt, making our automated agents smarter and more context-aware.
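To make the template-variable idea concrete, here is a minimal sketch of how {{input}}, {{steps.Label.content}}, and {{consolidations}} could be resolved. The names and shapes here (renderTemplate, TemplateContext) are illustrative assumptions, not our actual engine code:

```typescript
// Illustrative sketch of template-variable interpolation (not the real engine).
type StepOutput = { content: string };

interface TemplateContext {
  input: string;
  steps: Record<string, StepOutput>; // keyed by step label
  consolidations: string;            // pre-rendered project-wisdom text
}

function renderTemplate(template: string, ctx: TemplateContext): string {
  return template.replace(/\{\{(.+?)\}\}/g, (_match, expr: string) => {
    const key = expr.trim();
    if (key === "input") return ctx.input;
    if (key === "consolidations") return ctx.consolidations;
    const stepMatch = key.match(/^steps\.(.+)\.content$/);
    if (stepMatch) return ctx.steps[stepMatch[1]]?.content ?? "";
    return ""; // unknown variables resolve to an empty string
  });
}

const prompt = renderTemplate(
  "Task: {{input}}\nPrevious: {{steps.Outline.content}}\nWisdom: {{consolidations}}",
  {
    input: "Write docs",
    steps: { Outline: { content: "1. Intro" } },
    consolidations: "Prefer short sections.",
  }
);
console.log(prompt);
```

The real engine also handles retries, resume state, and cost tracking around each substitution; the sketch only covers the string-interpolation core.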

The Final Hurdle: A Tricky Prisma Conundrum

Just as we thought we were sailing smoothly towards testing, a subtle but critical bug emerged during workflow creation. This is where the "Pain Log" turns into "Lessons Learned."

The Problem: When creating a new workflow, we were attempting to pass the userId as a scalar foreign key directly into prisma.workflow.create() while simultaneously providing nested steps: { create: [...] } to define the workflow's steps.

Here's a simplified version of what we tried to do:

```typescript
// Incorrect approach: scalar FKs mixed with a nested relation create
await prisma.workflow.create({
  data: {
    name: "My New Workflow",
    userId: ctx.user.id, // Scalar FK
    tenantId: ctx.tenantId, // Scalar FK
    steps: {
      create: [
        { label: "Step 1", type: "LLM_STEP", config: {} },
        // ... more steps
      ],
    },
  },
});
```

The Error: This resulted in a cryptic-looking error from Prisma: Unknown argument 'userId'. Available options are marked with ?.

The Insight (and the "Why"): Prisma has a clever (and sometimes tricky) mechanism for handling create operations. When you provide nested create operations (like steps: { create: [...] }), Prisma infers that you're trying to create the workflow and its related WorkflowSteps in one go. For this, it typically expects the "checked" input type (WorkflowCreateInput).

The catch? In WorkflowCreateInput, foreign key fields like userId and tenantId are not directly available as scalar fields. Instead, they are represented as relation fields (user and tenant) which expect a connect or create operation themselves. Prisma performs an "XOR" logic: if you're using nested relation writes, it forces you into the "checked" input type, where scalar FKs are invalid. If you're not using nested relation writes, you can use scalar FKs (this is the WorkflowUncheckedCreateInput type). Mixing them is where the conflict arises.
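A drastically simplified, hypothetical model of that XOR (the real generated WorkflowCreateInput and WorkflowUncheckedCreateInput types are far more involved) looks roughly like this:

```typescript
// Hypothetical, simplified model of Prisma's checked/unchecked create inputs.
type WorkflowCreateInput = {
  name: string;
  user: { connect: { id: string } }; // relation field; no scalar userId here
  steps?: { create: Array<{ label: string }> }; // nested writes live here
};

type WorkflowUncheckedCreateInput = {
  name: string;
  userId: string; // scalar FK is allowed...
  // ...but nested relation writes like steps.create are not part of this type
};

// Prisma accepts one shape or the other, never a mix of both.
type WorkflowCreateData = WorkflowCreateInput | WorkflowUncheckedCreateInput;

const checked: WorkflowCreateData = {
  name: "My New Workflow",
  user: { connect: { id: "user-1" } },
  steps: { create: [{ label: "Step 1" }] },
};

const unchecked: WorkflowCreateData = {
  name: "My New Workflow",
  userId: "user-1",
};

console.log(checked.name, unchecked.name);
```

Because we supplied steps: { create: [...] }, Prisma resolved our data against the checked shape, where a bare userId is simply not a known argument.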

The Solution: The fix involved switching from passing the scalar userId and tenantId directly to using Prisma's relation connect syntax:

```typescript
// Correct approach: relation connects alongside the nested create
await prisma.workflow.create({
  data: {
    name: "My New Workflow",
    user: { connect: { id: ctx.user.id } }, // Connect to existing user
    tenant: { connect: { id: ctx.tenantId } }, // Connect to existing tenant
    steps: {
      create: [
        { label: "Step 1", type: "LLM_STEP", config: {} },
        // ... more steps
      ],
    },
  },
});
```

This ensures that Prisma correctly links the new workflow to an existing user and tenant without conflicting with the nested steps creation. The same fix was applied to our duplicate mutation, which had the same underlying issue.

Lesson Learned: When using nested relation creates in Prisma (e.g., steps: { create: [...] }), always use the { connect: { id } } pattern for foreign key references. Do NOT mix scalar foreign keys with nested relation creates, as Prisma will enforce the "checked" input type where scalar FKs are not valid.

Other Developer Wisdom from the Trenches

Beyond this core bug, our journey has offered a few other valuable lessons:

  • Handling Json Fields in Prisma: When resetting Json fields, remember to use Prisma.JsonNull instead of plain null. This correctly signals to Prisma that the JSON value should be set to SQL NULL, rather than attempting to set the JSON value to the string "null".
  • TypeScript & Iteration: Iterating over Map entries with for...of under older TypeScript targets triggers errors like Type 'IterableIterator<[string, unknown]>' is not an array type or a string type. Rather than enabling the --downlevelIteration compiler option, explicitly convert the entries to an array with Array.from(map.entries()).
  • Database Migrations: When adding new required columns to an existing database, always remember to make them optional or provide a @default value in your Prisma schema. This prevents migration failures on existing rows that wouldn't have a value for the new column.
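For the Map iteration lesson, here is a minimal example of the workaround; the step-cost data is invented purely for illustration:

```typescript
// Converting Map entries to an array keeps for...of valid on older
// compile targets (e.g. ES5) without enabling --downlevelIteration.
const stepCosts = new Map<string, number>([
  ["Step 1", 0.002],
  ["Step 2", 0.005],
]);

let total = 0;
for (const [label, cost] of Array.from(stepCosts.entries())) {
  console.log(`${label}: $${cost}`);
  total += cost;
}
console.log(`Total: $${total.toFixed(3)}`);
```

The same pattern applies to map.keys() and map.values() when you hit the identical error on those iterators.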

What's Next: Towards Full End-to-End Testing!

With the Prisma bug behind us, the path is clear for comprehensive testing. Our development environment is clean, the database schema is in sync, and the Prisma client is regenerated.

Our immediate next steps involve putting the entire system through its paces:

  1. Workflow Creation & Validation: We'll restart the dev server and verify that creating a new workflow, especially using our Deep Build Pipeline template with linked consolidations, works flawlessly.
  2. Project Wisdom Resolution: Crucially, we'll run these workflows and confirm that the {{consolidations}} template variable correctly resolves and injects project wisdom into the "Project Wisdom" step's prompt.
  3. Full Pipeline End-to-End: This is the big one – testing the entire 9-step Deep Build Pipeline, ensuring SSE streaming works, review pauses are respected, and the retry/edit output functionalities perform as expected.
  4. Edge Cases: We'll test scenarios with no consolidations linked to verify our fallback text appears correctly.
  5. Future Enhancements: We'll consider enriching the execution page to display the names of linked consolidations, rather than just generic badges based on IDs.

We're incredibly excited to move into this testing phase. The Dynamic Workflow Builder and Execution Engine, with its powerful "project wisdom" integration, is poised to be a game-changer, and we can't wait to share it with you! Stay tuned for more updates as we fine-tune and launch this powerful new capability.