nyxcore-systems

Wrestling with Workflows: Building Intelligent Analysis and Bridging the nyxBook Gap

Join me as I recount a recent dev session, tackling complex LLM-powered integration analysis and a crucial UX feature for our nyxBook platform, complete with unexpected twists and valuable lessons.

LLM · Workflow Engine · TypeScript · tRPC · System Design · Developer Experience · Gemini

Another evening, another deep dive into the codebase. This session had a dual focus: launching a new, complex workflow for Integration Analysis and investigating a critical user experience gap in our nyxBook platform. As always, the journey from idea to deployment was paved with a few surprises and some hard-won lessons.

Part 1: Forging the Integration Analysis Workflow

Our primary goal was to create a robust, automated workflow for analyzing integrations between different providers – think comparing CodeMCP's output against nyxcore-systems for a given prompt. This isn't just about running an LLM; it's about a structured, multi-step process that can involve various tools and human review.

The design settled on a 10-step process, meticulously defined in src/lib/constants.ts. Each step represents a distinct phase, from initial data ingestion to LLM-driven analysis, and finally, human review. You can see the full blueprint in our docs/plans/2026-03-08-integration-analysis-design.md.
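A minimal sketch of what such step templates might look like in src/lib/constants.ts. Only two of the ten steps are shown, and every field name beyond id, type, and description is an assumption for illustration:

```typescript
// Sketch of a step-template shape. The step ids and the 'llm'/'review'
// types appear in the post; the optional maxTokens field is an assumption.
type StepType = 'llm' | 'review';

interface StepTemplate {
  id: string;
  type: StepType;
  description: string;
  maxTokens?: number; // only meaningful for 'llm' steps
}

// Abbreviated: the real workflow defines 10 steps.
const integrationAnalysisSteps: StepTemplate[] = [
  {
    id: 'intIpchaAnalysis',
    type: 'llm',
    description: 'LLM-driven integration analysis',
    maxTokens: 16384,
  },
  {
    id: 'intIpchaReview',
    type: 'review',
    description: 'Human review and provider comparison',
  },
];
```

Keeping the templates as plain typed objects makes the engine's step dispatch a simple switch on `type`.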

The "Aha!" Moment: Splitting Responsibilities

During the code review, a critical architectural decision emerged. We initially designed a single step, intIpchaChallenge, to handle both LLM-driven analysis and subsequent human review/comparison. The problem? Our workflow engine's compareProviders utility, crucial for the review phase, couldn't run effectively within an LLM step.

The solution was clear: split the monolithic step. We refactored intIpchaChallenge into two distinct steps:

  1. intIpchaAnalysis: An llm step focused purely on the LLM's analytical output.
  2. intIpchaReview: A review step where human operators could compare provider outputs, leveraging compareProviders as intended.

This separation not only aligned with the engine's capabilities but also improved clarity and control over each phase of the analysis.

```typescript
// Before: a single step trying to do too much
// { id: 'intIpchaChallenge', type: 'llm', description: 'Analyze and challenge integration' }

// After: split for clarity, control, and engine compatibility
const integrationAnalysisSteps = [
  // ... other initial steps
  // Step 1: LLM-driven analysis
  {
    id: 'intIpchaAnalysis',
    type: 'llm',
    description: 'LLM-driven integration analysis and initial challenge generation',
  },
  // Step 2: Human review and comparison, now distinct
  {
    id: 'intIpchaReview',
    type: 'review',
    description: 'Review and compare provider outputs, leveraging engine comparison tools',
  },
  // ... subsequent steps
];
```

With the design solidified and the code implemented, we ran the first real test: workflow b6947b7a, comparing CodeMCP and nyxcore-systems. Success! The workflow executed as planned.

Lessons Learned: Engine Quirks and LLM Limits

No deployment is without its challenges. Here were two key lessons from this phase:

1. Understanding Engine Boundaries: providerFanOutConfig vs. compareProviders

The Pain: I initially tried to use providerFanOutConfig on a generic StepTemplate to manage parallel provider comparisons.

The Failure: It quickly became apparent that providerFanOutConfig isn't part of the StepTemplate interface and is only executed by the engine on llm steps. My mental model of its scope was off.

The Workaround & Lesson: Instead of trying to force a square peg into a round hole, I pivoted and leveraged the compareProviders utility directly within the intIpchaReview step.

Takeaway: Know your workflow engine's exact capabilities and configuration points. Sometimes the direct, explicit approach (even if it feels less "configurable") is the most reliable path when facing engine-specific limitations.
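To make the pivot concrete, here is a hedged sketch of calling a comparison utility directly from a review step. The compareProviders signature, the ProviderOutput shape, and the length-based scoring are all illustrative assumptions, not the engine's real API:

```typescript
// Hypothetical output shape for one provider's step result.
interface ProviderOutput {
  provider: string;
  text: string;
}

// Toy stand-in for the engine's compareProviders utility:
// scores each provider by output length.
function compareProviders(outputs: ProviderOutput[]): Record<string, number> {
  const scores: Record<string, number> = {};
  for (const o of outputs) {
    scores[o.provider] = o.text.length;
  }
  return scores;
}

// Inside the intIpchaReview step handler, we call the utility directly
// instead of relying on providerFanOutConfig (which only runs on llm steps).
const scores = compareProviders([
  { provider: 'CodeMCP', text: 'analysis A...' },
  { provider: 'nyxcore-systems', text: 'analysis B (longer)...' },
]);
```

The design point is that the review step owns the comparison explicitly, rather than hoping a config flag triggers it somewhere in the engine.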

2. Taming LLM Context Windows: Gemini Truncation

The Pain: During the initial runs, we observed Google Gemini truncating its output on some of the larger-context steps. Despite a seemingly generous maxTokens: 8192 setting, completions were often cut short around 328 tokens, indicating the prompt itself was eating up most of the context.

The Workaround & Lesson: This is a classic LLM problem. The immediate fix was to raise the maxTokens limit on the affected steps from 8K to 16K.

Takeaway: LLM context windows are a constant battleground. Always monitor token usage (both prompt and completion) and be prepared to adjust limits, or even refactor prompts, to avoid truncation, especially with complex tasks.
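The fix itself is a one-line config change per step; a small helper keeps it explicit. The 8192 limit and the 16K bump are from the session, while the LlmStepConfig shape and the helper are illustrative assumptions:

```typescript
// Minimal config shape for an LLM step (assumed for illustration).
interface LlmStepConfig {
  id: string;
  maxTokens: number;
}

// Raise a step's completion budget, never lowering an existing limit.
function raiseTokenLimit(step: LlmStepConfig, newLimit: number): LlmStepConfig {
  return { ...step, maxTokens: Math.max(step.maxTokens, newLimit) };
}

const before: LlmStepConfig = { id: 'intIpchaAnalysis', maxTokens: 8192 };
const after = raiseTokenLimit(before, 16384);
```

Returning a new object instead of mutating the step keeps the original template in src/lib/constants.ts untouched.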

After addressing these issues and confirming stability, the Integration Analysis workflow was successfully deployed to production in two commits (9e36dd2, 34d6b8c). It's now live and running!

Part 2: Bridging the Gap to nyxBook Chapters

The second task of the session was prompted by a user's desire to save the output of a specific workflow (workflow 2045cdf2-2204-457e-b4bd-aaabfd3b5df7) directly into a nyxBook chapter. Our nyxBook platform allows users to generate chapters, but the current process for integrating workflow outputs is manual.

The Identified Feature Gap: No "Save to Chapter"

I explored the existing nyxBook chapter system, specifically looking at the generateChapter functionality. While it can create a workflow, it does not automatically save the workflow's final outputs into the chapter content. The user's current workflow involved manually copying and pasting text from various workflow step outputs into the chapter editor (src/app/(dashboard)/dashboard/nyxbook/[bookId]/chapters/[num]/page.tsx).

This is a clear user experience gap. It adds friction and makes the "workflow-generated" label feel less integrated.

The Proposed Solution: A New tRPC Mutation

To close this gap, I've proposed a new feature: a dedicated "Save to Chapter" path.

Immediate Next Steps (pending user approval):

  1. Develop a new tRPC mutation: nyxBook.chapters.saveFromWorkflow. This mutation, residing in src/server/trpc/routers/nyxbook.ts (which is already a substantial 1211 lines!), will take workflowId, bookId, and chapterNumber as inputs. Its core job will be to intelligently map specific workflow step outputs to the chapter's narrative and aktenlage fields.
  2. Implement UI: A "Save to Chapter" button and a simple dialog on the workflow details page ([id]/page.tsx). This dialog will allow the user to pick the target book and chapter.
  3. Traceability: The Chapter model already has generatedBy ("manual"|"workflow"|"import") and an optional workflowId (Foreign Key), which will be crucial for tracking the origin of chapter content.
  4. Consider Auto-Detection: In the future, we could explore auto-detecting if a workflow is "book-related" and conditionally showing the "Save to Chapter" button.
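Stripped of the tRPC wiring, the mutation's core job reduces to a mapping function. The input parameters are the three named in the plan; which step output feeds which chapter field, the bookId value, and the ChapterDraft shape are illustrative assumptions:

```typescript
// Input shape for the proposed nyxBook.chapters.saveFromWorkflow mutation.
interface SaveFromWorkflowInput {
  workflowId: string;
  bookId: string;
  chapterNumber: number;
}

// Draft of the chapter fields we intend to populate. generatedBy and the
// optional workflowId mirror the existing Chapter model.
interface ChapterDraft {
  narrative: string;
  aktenlage: string;
  generatedBy: 'manual' | 'workflow' | 'import';
  workflowId?: string;
}

// Map workflow step outputs to chapter fields. Which step feeds which
// field is an assumption for illustration.
function buildChapterDraft(
  input: SaveFromWorkflowInput,
  stepOutputs: Record<string, string>,
): ChapterDraft {
  return {
    narrative: stepOutputs['intIpchaAnalysis'] ?? '',
    aktenlage: stepOutputs['intIpchaReview'] ?? '',
    generatedBy: 'workflow',
    workflowId: input.workflowId,
  };
}

// Example usage (bookId and outputs are placeholder values):
const draft = buildChapterDraft(
  {
    workflowId: '2045cdf2-2204-457e-b4bd-aaabfd3b5df7',
    bookId: 'book-1',
    chapterNumber: 3,
  },
  {
    intIpchaAnalysis: 'Narrative text from the analysis step.',
    intIpchaReview: 'Reviewer comparison notes.',
  },
);
```

Inside the real mutation, this draft would then be persisted via the existing chapter write path, with generatedBy and workflowId giving us the traceability described above.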

This feature will significantly improve the user experience, turning a multi-step manual process into a seamless, automated one.

Wrapping Up

This session was a great reminder of the dynamic nature of development. We successfully launched a sophisticated LLM-powered analysis workflow, navigating engine quirks and LLM limitations along the way. Simultaneously, we identified a crucial user experience improvement for nyxBook, laying out a clear path to make our platform even more integrated and user-friendly.

It's always satisfying to see new features go live and to chart the course for future enhancements that directly benefit our users.

```json
{
  "thingsDone": [
    "Designed and implemented a 10-step Integration Analysis workflow in `src/lib/constants.ts`.",
    "Refactored a critical workflow step (`intIpchaChallenge`) into separate LLM analysis (`intIpchaAnalysis`) and human review (`intIpchaReview`) steps.",
    "Successfully completed the first real run of the Integration Analysis workflow (`b6947b7a`).",
    "Fixed Google Gemini truncation by raising `maxTokens` from 8K to 16K on three steps.",
    "Deployed the Integration Analysis workflow to production twice (commits `9e36dd2`, `34d6b8c`).",
    "Explored the nyxBook chapter system and identified a feature gap: no direct 'Save to Chapter' path from workflow outputs."
  ],
  "pains": [
    "Engine limitation: `providerFanOutConfig` was not available on generic `StepTemplate` and only runs on `llm` steps, requiring a pivot to `compareProviders`.",
    "LLM truncation: Google Gemini truncated completion tokens on large-context steps despite 8K `maxTokens`, necessitating an increase to 16K.",
    "User experience gap: nyxBook users must manually copy/paste workflow outputs into chapter editor, lacking an automated 'Save to Chapter' feature."
  ],
  "successes": [
    "Successfully launched a new, complex LLM-powered analysis workflow to production.",
    "Effectively debugged and mitigated LLM truncation issues.",
    "Identified a clear, actionable solution for a critical user-facing feature in nyxBook, including API and UI design."
  ],
  "techStack": [
    "TypeScript",
    "Next.js",
    "tRPC",
    "LLMs (Google Gemini)",
    "Custom Workflow Engine",
    "Custom nyxBook Platform"
  ]
}
```