nyxcore-systems

Unleashing Parallel Power: Taming Our AI Prompt Fan-Out Pipeline

Dive into how we debugged and fixed a critical fan-out issue in our AI prompt pipeline, enabling parallel, feature-specific prompt generation with a sleek tabbed UI and a significant boost in workflow efficiency.

AI · LLM · TypeScript · Next.js · Debugging · Frontend · Fullstack · Software Development

Imagine building a powerful AI system designed to break down complex tasks into manageable, feature-specific prompts. Our goal was ambitious: to have this system generate not just one, but many highly targeted prompts for different features, each ready for a dedicated LLM call. This "fan-out" capability is crucial for efficiency, specificity, and ultimately, a more intelligent and dynamic AI experience.

But, as with any complex system, we hit a snag.

The Mystery of the Missing Tabs

We had the architectural pieces in place, or so we thought. Our design called for an "Implementation Prompts" step that would take a high-level request and fan it out into several sub-prompts – one for each feature identified. We expected a beautiful array of tabbed, downloadable prompts in our UI, each tailored to a specific feature.

Instead? A single, monolithic output. No tabs, no individual downloads – just a plain old LLM response. Our subOutputs field, which was supposed to hold the parsed individual prompts, was stubbornly NULL. The dream of parallel, feature-specific prompt generation remained just that: a dream.

The Detective Work: Tracing the Data Flow

Time for some debugging. We traced the data flow, from the StepTemplate where our fanOutConfig (the configuration that tells our system how to fan out) was defined, all the way to the server. The good news: the server-side stepConfigSchema and create mutation already supported fanOutConfig. The backend was ready for action!

The issue, it turned out, was closer to home: our frontend, specifically within new/page.tsx. Two critical spots were silently dropping the ball:

  1. stepTemplateToLocalStep(): This function, responsible for mapping our initial StepTemplate configuration to a local step representation for the UI, wasn't copying the fanOutConfig. It was simply forgotten in the transformation.
  2. handleSubmit(): Even if the local step had the fanOutConfig, when we finally submitted the step configuration to the server, the fanOutConfig wasn't being included in the payload. It was getting lost right before it reached its destination.

This meant that while our templates defined the fan-out behavior, and our backend expected it, the information was never making the journey from template to local state, nor from local state to the server.
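To make the two gaps concrete, here is a minimal sketch of how the first one bites. All the shapes below are simplified and hypothetical (only fanOutConfig, StepTemplate, and stepTemplateToLocalStep come from the story itself): when a mapping function spells out properties one by one, a newly added optional field vanishes silently.

```typescript
// Hypothetical, simplified shapes; only fanOutConfig is from the article.
interface FanOutConfig {
  splitBy: string; // assumption: a delimiter that marks each sub-prompt
}

interface StepTemplate {
  name: string;
  prompt: string;
  fanOutConfig?: FanOutConfig;
}

interface LocalStep {
  name: string;
  prompt: string;
  fanOutConfig?: FanOutConfig;
}

// The buggy mapping: every property is copied by hand, so the new field is
// silently dropped. TypeScript stays quiet because fanOutConfig is optional.
function stepTemplateToLocalStep(template: StepTemplate): LocalStep {
  return {
    name: template.name,
    prompt: template.prompt,
    // fanOutConfig was simply never listed here.
  };
}

const template: StepTemplate = {
  name: "Implementation Prompts",
  prompt: "Generate one prompt per feature.",
  fanOutConfig: { splitBy: "## FEATURE:" },
};

const localStep = stepTemplateToLocalStep(template);
console.log(localStep.fanOutConfig); // undefined: the config never made the journey
```

Because the field is optional on both sides, neither the compiler nor the runtime ever flags the omission, which is exactly why it survived until someone traced the data by hand.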

The Breakthrough: A Few Lines, Big Impact

Armed with this knowledge, the fix was straightforward, though profoundly impactful. A few precise edits in new/page.tsx brought our fan-out capability to life:

  • We updated the StepConfig interface (line 62) to explicitly include fanOutConfig, so TypeScript would now recognize and type-check the field throughout the component.
  • We added the fanOutConfig copy operation within stepTemplateToLocalStep() (line 96), correctly transferring the configuration from the template to our local step representation.
  • We ensured fanOutConfig was correctly passed through in the handleSubmit() steps mapping (line 679), guaranteeing it made it into the server payload.
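Putting the three edits together, the repaired flow looks roughly like this. The shapes are simplified sketches, not the real interfaces from new/page.tsx, but they mirror the pattern of the fix:

```typescript
// Hypothetical, simplified shapes mirroring the three edits described above.
interface FanOutConfig {
  splitBy: string; // assumption: delimiter used to split the combined output
}

// 1. StepConfig now names fanOutConfig explicitly, so TypeScript tracks it.
interface StepConfig {
  name: string;
  prompt: string;
  fanOutConfig?: FanOutConfig;
}

// 2. The template-to-local mapping copies the config through.
function stepTemplateToLocalStep(template: StepConfig): StepConfig {
  return {
    name: template.name,
    prompt: template.prompt,
    fanOutConfig: template.fanOutConfig, // the previously forgotten line
  };
}

// 3. The handleSubmit-style steps mapping keeps the field in the payload.
function buildSubmitPayload(steps: StepConfig[]) {
  return {
    steps: steps.map((s) => ({
      name: s.name,
      prompt: s.prompt,
      fanOutConfig: s.fanOutConfig, // now survives all the way to the server
    })),
  };
}

const local = stepTemplateToLocalStep({
  name: "Implementation Prompts",
  prompt: "Generate one prompt per feature.",
  fanOutConfig: { splitBy: "## FEATURE:" },
});
const payload = buildSubmitPayload([local]);
console.log(payload.steps[0].fanOutConfig?.splitBy); // "## FEATURE:"
```

Each hop in the template → local state → payload chain now carries the field, which is the whole fix in miniature.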

Witnessing the Transformation

To truly celebrate the fix, we needed to see it work on a real-world example. We took an existing workflow, 02534270, and manually backfilled its Implementation Prompts step with the correct fanOutConfig via a quick SQL update and a node script.
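The backfill script itself isn't shown in the session notes, but a one-off of that kind might look like the sketch below. The table and column names are pure assumptions, and the fanOutConfig shape is illustrative; only the workflow ID 02534270 and the field name come from the story. Building the parameterized query as a plain object (node-postgres style) keeps the sketch runnable without a database:

```typescript
// Hypothetical one-off backfill. Table/column names and config shape are
// assumptions; only workflow 02534270 and fanOutConfig come from the article.
const workflowId = "02534270";
const stepName = "Implementation Prompts";

const fanOutConfig = {
  splitBy: "## FEATURE:", // assumed delimiter between per-feature prompts
};

// A parameterized UPDATE that merges fanOutConfig into the step's existing
// JSONB config, expressed as a query object rather than executed here.
const backfillQuery = {
  text: `UPDATE workflow_steps
         SET config = jsonb_set(config, '{fanOutConfig}', $1::jsonb)
         WHERE workflow_id = $2 AND name = $3`,
  values: [JSON.stringify(fanOutConfig), workflowId, stepName],
};

console.log(backfillQuery.values[1]); // "02534270"
```

In a real script this object would be handed to a database client (e.g. `client.query(backfillQuery)` with node-postgres); parameterizing the values keeps the JSON payload and identifiers out of the SQL string itself.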

The results were glorious:

  • The combined 43,000-character output from the LLM was correctly parsed into 8 distinct subOutputs.
  • Our UI sprang to life, displaying 8 beautifully tabbed sub-prompts, each with its own copy and download buttons.
  • Watching workflow 02534270 complete all 8 steps – processing approximately 45,000 tokens for about $0.50 in just 9 minutes – was incredibly satisfying. The fan-out was not just working; it was performing efficiently.
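The parsing step that turned one 43,000-character response into distinct subOutputs wasn't detailed above; conceptually it is a split on a per-feature marker. The delimiter below ("## FEATURE:") is an assumption for illustration:

```typescript
// Hypothetical parser: the real delimiter and rules were not shown, so
// "## FEATURE:" is an assumed marker between per-feature prompts.
function parseSubOutputs(combined: string, delimiter = "## FEATURE:"): string[] {
  return combined
    .split(delimiter)
    .map((chunk) => chunk.trim())
    .filter((chunk) => chunk.length > 0);
}

const combined = [
  "## FEATURE: Auth\nPrompt for the auth feature...",
  "## FEATURE: Billing\nPrompt for the billing feature...",
].join("\n");

const subOutputs = parseSubOutputs(combined);
console.log(subOutputs.length); // 2
```

The trim-and-filter pass discards the empty chunk before the first marker and any stray whitespace, so the count of subOutputs matches the count of features regardless of how the LLM pads its sections.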

Lessons Learned and Next Steps

This session was a powerful reminder of the importance of end-to-end data plumbing. Even when the backend is ready and the templates are correctly defined, a small oversight in the frontend's data mapping or submission logic can halt a critical feature in its tracks. It also highlighted the need for testing across the entire pipeline, not just isolated components, and the occasional need to manually backfill data for existing workflows after such a fix.

With the fan-out pipeline now robustly in place, our attention shifts to further enhancements:

  • Adding a duplicate-save guard on our SaveInsightsDialog.
  • Wiring up a hybrid search (70% vector + 30% text) into our MemoryPicker for even smarter recall.
  • Automating embedding generation within insight-persistence.ts.
  • Implementing the plan from pure-dancing-valiant.md for a richer ReviewKeyPointsPanel and workflow overview.
  • And, of course, some good old housekeeping: cleaning up stale .log files.
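The hybrid search item is still on the to-do list, but one common way to combine the two signals is a weighted sum, assuming both scores are normalized to [0, 1] beforehand (e.g. cosine similarity for vectors and a scaled text-match score). The names below are illustrative, not the real MemoryPicker types:

```typescript
// Sketch of a 70% vector + 30% text ranking. Assumes both scores are
// already normalized to [0, 1]; all names here are hypothetical.
interface ScoredMemory {
  id: string;
  vectorScore: number;
  textScore: number;
}

function hybridRank(
  candidates: ScoredMemory[],
  vectorWeight = 0.7,
): ScoredMemory[] {
  const textWeight = 1 - vectorWeight;
  const score = (m: ScoredMemory) =>
    vectorWeight * m.vectorScore + textWeight * m.textScore;
  // Sort a copy descending by blended score, leaving the input untouched.
  return [...candidates].sort((a, b) => score(b) - score(a));
}

const ranked = hybridRank([
  { id: "a", vectorScore: 0.9, textScore: 0.1 }, // blended: 0.66
  { id: "b", vectorScore: 0.5, textScore: 0.95 }, // blended: 0.635
]);
console.log(ranked[0].id); // "a"
```

A weighted sum is the simplest blend; alternatives like reciprocal rank fusion avoid the normalization assumption entirely, which may matter if the two scores live on very different scales.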

Every line of code, every debug session, brings us closer to a more intelligent and efficient development experience. This fix is a significant step forward in our journey to build truly dynamic AI-powered tools.