nyxcore-systems
5 min read

Shipping Smarter Prompts: NyxCore's 90% Reduction with Digest Compression

We've successfully implemented and verified an end-to-end digest compression system for our LLM-powered workflows, dramatically reducing prompt sizes by up to 90% and paving the way for more efficient and cost-effective AI interactions.

LLM · AI · Prompt Engineering · Workflow Automation · System Design · TypeScript · Database · Efficiency · Context Window

In the world of AI-driven applications, especially those leveraging Large Language Models (LLMs), managing the size and complexity of prompts is a constant challenge. Longer prompts mean higher token costs, slower inference times, and a greater risk of hitting context window limits. At NyxCore, where we build sophisticated LLM-powered workflow automation, addressing this head-on became a critical mission.

Our goal? To implement and verify an end-to-end "digest" compression system. The idea is simple yet powerful: instead of passing the full, verbose output of a previous step to the next, we generate a concise, context-rich summary – a "digest" – that captures only the essential information. This session was all about proving that this system worked, flawlessly, in a live workflow.
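Conceptually, the transform looks something like this. This is a minimal TypeScript sketch: the `StepOutput`/`StepDigest` shapes and the `buildDigest` name are illustrative, and where the real system (in src/server/services/step-digest.ts) calls an LLM to summarize, the sketch stands in with naive truncation just to show the shape of the data flow.

```typescript
// Illustrative shapes only; the real implementation lives in
// src/server/services/step-digest.ts and uses an LLM to summarize.
interface StepOutput {
  stepName: string;
  content: string; // full, verbose output of a completed step
}

interface StepDigest {
  stepName: string;
  summary: string; // concise, context-rich replacement for `content`
  originalChars: number; // kept so the reduction can be measured later
}

// Stand-in for the LLM call: naive head-truncation, purely to show
// the transform (full output in, small digest out).
function buildDigest(output: StepOutput, maxChars = 400): StepDigest {
  const summary =
    output.content.length <= maxChars
      ? output.content
      : output.content.slice(0, maxChars) + " …";
  return {
    stepName: output.stepName,
    summary,
    originalChars: output.content.length,
  };
}
```

The essential property is that downstream steps receive `summary` instead of `content`, while `originalChars` lets us quantify exactly how much context we saved.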

The Mission: End-to-End Digest Verification

After several iterations and fixes, it was time for the ultimate test: running a real-world, complex workflow and observing the prompt size reductions in action. We targeted our "nyxCore - Kimi K2 v2" Extension Builder pipeline, a multi-step process that generates code and designs based on a given repository. This pipeline is a perfect candidate for digest compression, as intermediate steps often produce extensive outputs.

What We Set Out to Verify:

  1. Digest Generation: Are digests being created automatically for each completed step?
  2. Prompt Size Reduction: How much smaller are the prompts when using digests?
  3. System Stability: Do previous fixes (like error logging, backfill loops, and alternatives selection) hold up under production-like conditions?

The Results Are In: A Resounding Success!

I'm thrilled to report: mission accomplished. The digest system is fully operational, and the impact on prompt size is nothing short of spectacular.

We ran the workflow f196e1b6-962d-45b5-b586-646688cd2243, and here's a breakdown of the character reductions across its core steps:

| Step Name | Original Prompt Size (chars) | Digested Prompt Size (chars) | Reduction |
| --- | --- | --- | --- |
| Analyze Target Repo | 7,475 | 3,730 | 50% |
| Design Features | 9,564 | 4,150 | 57% |
| Extend & Improve | 26,966 | 3,606 | 87% |
| Implementation Prompts | 41,055 | 3,626 | 91% |

These numbers speak for themselves. In the most data-intensive steps, we saw prompt sizes shrink by a staggering 87% and 91%! This translates directly into significant cost savings, faster processing, and the ability to handle much more complex tasks within the LLM's context window.
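For the record, the percentages in the table are plain character-count ratios; the arithmetic behind each row is just:

```typescript
// Reduction = 1 - (digested / original), rounded to a whole percent.
function reductionPct(originalChars: number, digestedChars: number): number {
  return Math.round((1 - digestedChars / originalChars) * 100);
}

// Reproducing the table's two headline figures:
// reductionPct(26966, 3606) → 87
// reductionPct(41055, 3626) → 91
```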

Under the Hood: Confirming Stability

Beyond the impressive numbers, it was crucial to confirm the robustness of the system. Several fixes from previous sessions were put to the test and held up perfectly:

  • Error Logging: Our enhanced error logging in the catch block of src/server/services/step-digest.ts proved invaluable for quick debugging during development and, thankfully, stayed silent during this successful run.
  • Backfill Loop: The backfill logic in src/server/services/workflow-engine.ts (around line 585), designed to generate digests for completed steps that might have missed them on an initial run, worked as expected. This ensures data consistency even if a digest fails to generate initially.
  • Alternatives Selection: Digest generation for alternative selection paths (around line 673 in workflow-engine.ts) also functioned correctly, ensuring that even branching logic benefits from compression.
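To make the backfill idea concrete, here is a hedged sketch of its shape. The `StepRecord` type and the injected `generateDigest` function are illustrative names, not the actual workflow-engine.ts API; the point is the repair loop itself.

```typescript
// Illustrative sketch of the backfill loop: walk completed steps and
// generate any digest that is still missing, so a step whose digest
// failed on the first pass gets repaired on a later run.
interface StepRecord {
  id: string;
  status: "pending" | "running" | "completed";
  digest: string | null;
}

async function backfillDigests(
  steps: StepRecord[],
  generateDigest: (step: StepRecord) => Promise<string>,
): Promise<number> {
  let generated = 0;
  for (const step of steps) {
    if (step.status === "completed" && step.digest === null) {
      step.digest = await generateDigest(step);
      generated++;
    }
  }
  return generated; // how many gaps were filled
}
```

Because the loop only touches completed steps with a `null` digest, it is idempotent: running it again after a fully successful pass generates nothing, which is exactly what we observed in this session.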

Testing Utilities

For this verification, I also created a couple of temporary utility scripts that proved incredibly useful:

  • scripts/run-workflow.ts: A script for direct workflow execution, bypassing our usual SSE/authentication layers. This allowed for rapid, focused testing. (Deleted after use).
  • scripts/backfill-digests.ts: A one-off script for manually triggering digest backfilling, which was handy for initial setup and ensuring the backfill logic was sound. (Deleted after use).

Lessons Learned & Pro-Tips

While this session was remarkably smooth, a minor point of friction offered a good reminder:

  • Raw SQL vs. ORM Utilities: When running raw SQL in a TypeScript/Prisma environment, npx prisma db execute can be finicky, especially with specific syntax or quoted identifiers. For direct, reliable raw SQL execution, falling back to the native psql client (e.g., PGPASSWORD=nyxcore_dev psql -h localhost -U nyxcore -d nyxcore) is often more efficient. Remember that Postgres folds unquoted identifiers to lowercase, so double-quote camelCase column names (e.g., "projectId") — and quoting snake_case identifiers like "workflow_steps"."digest" is harmless and keeps queries consistent.

What's Next for NyxCore?

With digest compression firmly in place, our immediate focus shifts to leveraging this newfound efficiency and expanding its capabilities:

  1. Project Wisdom Integration: Testing the {{project.wisdom}} feature by linking a project to a workflow that utilizes consolidated data. This will allow LLMs to draw from a compressed, relevant knowledge base.
  2. Token Cost Analysis: Conducting a comprehensive comparison of total token costs before and after digest compression across multiple workflow runs to quantify the financial impact.
  3. Backfill Loop Optimization: Considering making the digest backfill loop optional (via an environment variable or workflow setting) to avoid unnecessary Haiku (our LLM provider) calls when not strictly needed.
  4. Minor Type Fix: Addressing a pre-existing type error in discussions/[id]/page.tsx:139 related to a Badge variant.
  5. Security Enhancement: Adding RLS (Row Level Security) policies for projectId columns if cross-tenant data access becomes a concern, ensuring robust data isolation.

This session marks a significant milestone for NyxCore. By intelligently compressing our workflow context, we're not just saving costs; we're opening doors to more complex, capable, and efficient AI-powered automation. The future of smarter prompts is here!

```json
{
  "thingsDone": [
    "Verified digest generation on live workflow",
    "Confirmed significant prompt size reduction (50-91%)",
    "Validated previous session's fixes (error logging, backfill loop, alternatives selection)",
    "Created and used temporary testing scripts"
  ],
  "pains": [
    "No major issues, minor friction with `npx prisma db execute` vs. `psql` for raw queries"
  ],
  "successes": [
    "Digest system fully operational",
    "Massive prompt size reduction",
    "System stability and robustness confirmed"
  ],
  "techStack": ["TypeScript", "Prisma", "PostgreSQL", "LLMs (Haiku)", "Workflow Automation"]
}
```