nyxcore-systems
6 min read

Late Night Code Sprints: Taming SSE, Persisting Reports, and Battling Prisma Quirks in nyxCore

A deep dive into a recent development session, tackling real-time update reliability, robust report generation, and navigating database schema challenges to enhance the nyxCore platform.

development, nextjs, typescript, prisma, sse, pdf-generation, dev-ops, debugging, nyxcore

There's a unique satisfaction that comes with those late-night coding sessions. The world quietens, distractions fade, and it's just you, your IDE, and a challenging problem waiting to be solved. This past week, one such session blossomed into a flurry of critical fixes and exciting new features for the nyxCore platform. We tackled everything from ensuring seamless real-time updates to building robust report generation, and even wrestled with some intriguing database schema quirks.

Let's unpack the journey and the lessons learned.

Elevating the nyxCore Experience: What We Shipped

Our primary goal for this session was ambitious: fortify our real-time capabilities, enhance reporting, and streamline custom workflow development. I'm thrilled to report: mission accomplished!

1. Smarter Workflows with Auto-Context Injection

One of the core strengths of nyxCore lies in its custom workflows. We want users to focus on what they want to achieve, not how to perfectly format every prompt. To that end, we've implemented automatic context injection.

Now, in src/server/services/workflow-engine.ts, our buildAutoContext() function intelligently pulls relevant data. Even better, if a prompt doesn't explicitly reference {{steps.*}} variables, a fallback in resolvePrompt() ensures it still receives context, making workflows more robust and less prone to user error. This means less friction and more powerful custom automations.
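
As a rough sketch of that fallback (the function names buildAutoContext() and resolvePrompt() come from the codebase, but the internals and context format shown here are illustrative assumptions):

```typescript
// Hypothetical sketch of the resolvePrompt() fallback — not the actual
// nyxCore implementation; shapes and separators are assumptions.
type StepOutputs = Record<string, string>;

// Minimal stand-in for buildAutoContext(): concatenates all step outputs.
function buildAutoContext(steps: StepOutputs): string {
  return Object.entries(steps)
    .map(([id, output]) => `[${id}]\n${output}`)
    .join("\n\n");
}

// Substitute {{steps.<id>}} placeholders; if the prompt never references
// {{steps.*}}, append the auto-built context so the step still sees
// upstream results.
function resolvePrompt(template: string, steps: StepOutputs): string {
  const referencesSteps = /\{\{steps\.[\w.-]+\}\}/.test(template);
  if (!referencesSteps) {
    return `${template}\n\n--- Context ---\n${buildAutoContext(steps)}`;
  }
  return template.replace(/\{\{steps\.([\w.-]+)\}\}/g, (_, id) => steps[id] ?? "");
}
```

Either way, a downstream step ends up with the output of its predecessors without the user having to wire it up manually.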

2. Bulletproof Real-time Updates: The SSE Reconnect Fix

Our auto-fix and refactor features rely heavily on Server-Sent Events (SSE) to provide live updates as the AI processes tasks. However, we discovered a crucial flaw: if a user switched tabs or lost connection briefly, the live updates would cease. The original SSE endpoint was a "one-shot" deal for non-pending runs, closing the connection prematurely.

The fix? We've refactored the SSE endpoints (src/app/api/v1/events/auto-fix/[id]/route.ts and src/app/api/v1/events/refactor/[id]/route.ts) to poll the database every 3 seconds for active runs. For terminal runs, we now explicitly send a done event. This ensures that even if a connection drops and reconnects, the user will always see the most up-to-date status of their active processes, providing a much more reliable and seamless user experience.
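
The core of that change can be sketched as a polling loop (a simplified illustration — the status names and event shape are assumptions, and the real route handlers write SSE frames to the response stream rather than yielding objects):

```typescript
// Illustrative sketch of the polling loop behind the refactored SSE routes.
const TERMINAL = new Set(["completed", "failed", "cancelled"]);

// Poll the run status until it reaches a terminal state, yielding one
// event per status change and a final `done` event.
async function* streamRunEvents(
  getStatus: () => Promise<string>,
  intervalMs = 3000,
): AsyncGenerator<{ event: string; data: string }> {
  let last: string | undefined;
  for (;;) {
    const status = await getStatus(); // re-read from the database each tick
    if (status !== last) {
      yield { event: "status", data: status };
      last = status;
    }
    if (TERMINAL.has(status)) {
      // Explicit terminal marker so a reconnecting client knows to stop.
      yield { event: "done", data: status };
      return;
    }
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```

Because every reconnect simply starts a fresh poll against the database, the client can never get "stuck" on a stale status.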

3. Robust Reporting: Persistence and PDF Export

Reports are a cornerstone of any professional tool, and we've significantly upgraded nyxCore's reporting capabilities:

  • Report Persistence: We introduced a new Report Prisma model with Row-Level Security (RLS) to securely store generated reports. Our src/server/trpc/routers/reports.ts now handles listing, getting, and deleting these persisted reports. All generateReport mutations now save directly to the database. No more lost reports!
  • PDF Export: Beyond simple Markdown, users can now download professional-grade PDFs. A new Python script, scripts/md2pdf.py, leverages md2pdf-mermaid and Playwright to convert Markdown (including Mermaid diagrams!) into beautiful PDFs. A dedicated /api/v1/reports/pdf POST endpoint handles the conversion, allowing dual MD+PDF downloads directly from the ReportGeneratorModal and the ReportsTab viewer.
  • Enhanced Report Headers: Small but mighty, the ReportGeneratorModal now dynamically displays projectName + reportType + date instead of a generic "nyxCore," adding clarity and professionalism to each report.
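
For illustration, the dynamic header could be built by a helper along these lines (the separator and date format are assumptions, not the actual ReportGeneratorModal code):

```typescript
// Hypothetical helper for the dynamic report header: project, type, date.
function formatReportHeader(projectName: string, reportType: string, date: Date): string {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `${projectName} · ${reportType} · ${day}`;
}
```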

4. Minor Tweaks, Major Impact

  • The QR code URL for nyxcore.cloud was updated in src/server/services/report-generator.ts:72, ensuring our branding is consistent.
  • Our .venv/ directory, housing Python dependencies for PDF generation, has been correctly added to .gitignore to keep our repository clean.

Lessons from the Trenches: Overcoming Challenges

No significant development session is without its hurdles. These "pain points" are often the most fertile ground for learning.

Challenge 1: Prisma and the vector Type Conundrum

When adding the new Report model, I ran prisma db push --accept-data-loss to update the schema.

The Problem: Prisma, while excellent, sometimes struggles with highly specific database types that aren't part of its core set. In our case, db push kept dropping the embedding vector(1536) column on our workflow_insights table, deeming it an "Unsupported type." This is a critical column for our AI embeddings!

The Workaround: After every db:push (which we mostly automate with dev-start.sh), we have to manually restore the column with:

```sql
ALTER TABLE workflow_insights ADD COLUMN IF NOT EXISTS embedding vector(1536);
```

...and then recreate the HNSW index. This is a recurring issue, reminding us that sometimes, even with powerful ORMs, a deep understanding of the underlying database (in our case, PostgreSQL with its pgvector extension) is essential.
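
One way to script that restore is to generate both statements and run them (e.g., via Prisma's $executeRawUnsafe or psql) after each push. This is a sketch: the index name and the cosine operator class are assumptions about our pgvector setup.

```typescript
// Post-push restore for the pgvector column that `prisma db push` drops.
// The ALTER TABLE matches the workaround above; the index name and
// operator class (vector_cosine_ops) are assumptions.
function restoreEmbeddingSql(dim = 1536): string[] {
  return [
    `ALTER TABLE workflow_insights ADD COLUMN IF NOT EXISTS embedding vector(${dim});`,
    // Recreate the HNSW index that disappears along with the column.
    `CREATE INDEX IF NOT EXISTS workflow_insights_embedding_idx
       ON workflow_insights USING hnsw (embedding vector_cosine_ops);`,
  ];
}
```

Baking this into dev-start.sh means the column and index come back automatically every time the schema is pushed.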

Challenge 2: The Elusive "Cannot read properties of undefined (reading 'create')"

After adding the Report model to the Prisma schema, generating a report failed with Cannot read properties of undefined (reading 'create') at auto-fix.ts:480.

The Root Cause: This cryptic error meant that our application's Prisma client didn't "know" about the newly added Report model. Prisma generates client code based on your schema. If the schema changes but the client isn't regenerated, the client is out of sync.

The Fix: A simple npm run db:generate was all it took. This regenerates the Prisma client, incorporating the new Report model and resolving the undefined error.

The Lesson: Always remember the two-step dance for schema changes: db:push (to update the database), then db:generate (to update your application's Prisma client). Our dev-start.sh script now includes both steps automatically, preventing future headaches.

Challenge 3: Debugging the SSE Reconnect Logic

As mentioned earlier, the SSE real-time updates were failing on tab switches. The Investigation:

  1. The auto-fix/refactor detail pages lost live updates.
  2. The SSE endpoint was designed to send a "refresh" message and close if the run's status wasn't "pending."
  3. However, the AI pipeline continued running server-side after the initial client disconnect.
  4. When the user reconnected (e.g., switched back to the tab), the SSE endpoint was hit again. But since the run was no longer "pending" (it was "running" or "completed" server-side), the endpoint immediately closed, leaving the client in the dark.

The Solution: Instead of a one-shot check, we implemented a polling mechanism. For active runs, the SSE endpoint now polls the database every 3 seconds for the latest status. When the run finally reaches a terminal state (e.g., "completed" or "failed"), the server sends a definitive done event. This robust approach ensures the client always reflects the true state of the server-side process.
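
On the client side, the done event is what distinguishes "the run is finished" from "the connection dropped." A minimal sketch (the EventSourceLike interface and wiring are hypothetical, not nyxCore's actual detail-page code):

```typescript
// Listen for status updates until the server's explicit `done` event,
// then close — only unexpected disconnects should trigger a reconnect.
interface EventSourceLike {
  addEventListener(type: string, cb: (e: { data: string }) => void): void;
  close(): void;
}

function trackRun(
  source: EventSourceLike,
  onStatus: (status: string) => void,
): void {
  source.addEventListener("status", (e) => onStatus(e.data));
  source.addEventListener("done", (e) => {
    onStatus(e.data);
    source.close(); // terminal state reached; stop listening
  });
}
```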

Looking Ahead: The Next Frontier

With these significant improvements under our belt, our gaze turns to the immediate next steps:

  1. Persona Chooser Refactor: We're building a unified <PersonaPicker> component. This will be a collapsible section with rich persona cards (portrait, traits, specialization, exp/level), replacing all existing persona selectors across the platform (report modal, workflow builder, discussions). The goal is to make persona selection more intuitive and visually engaging.
  2. Report Tab Styling: The Reports tab in the project detail page needs a visual overhaul to align perfectly with the nyxCore design system and provide a stunning user experience.
  3. Testing Auto-Context: Thoroughly testing custom workflows with the new auto-context injection (e.g., analyse → review → generate prompt) to ensure seamless operation.
  4. RAG-based Policy Library: Exploring the integration of a Retrieval-Augmented Generation (RAG) system for tenant-specific policy libraries (e.g., DSGVO, ISO 27001). This confirms our belief that RAG is often superior to a pure LLM approach for factual, regulated content.

This session was a testament to the iterative nature of development: building new features, fixing existing ones, learning from challenges, and constantly refining the user experience. The nyxCore platform is evolving rapidly, and I'm excited for what's next!