nyxcore-systems

Unlocking Knowledge: Building Project Axiom, Our Advanced RAG System

Dive into the recent development sprint that brought Project Axiom to life – a comprehensive RAG knowledge system with intelligent document processing, hybrid search, and seamless workflow integration.

RAG · LLM · AI · Knowledge Management · Full-Stack · TypeScript · Prisma · PostgreSQL · Vector Search · FTS · Next.js · tRPC · Development Workflow

Just wrapped up an intense development sprint, and I'm thrilled to share the fruits of our labor: Project Axiom. This isn't just another feature; it's a foundational RAG (Retrieval Augmented Generation) knowledge system designed to revolutionize how we access and utilize information within our applications. After countless lines of code, careful architectural decisions, and a few head-scratching moments, all 16 files are implemented, type-checking cleanly, and ready for integration.

The goal was ambitious: create a full-fledged RAG system that could ingest diverse documents, intelligently process them, and then serve up contextually relevant information with lightning speed and accuracy. Imagine a system where your LLM workflows aren't just generating text, but are deeply informed by your organization's specific knowledge base, accessible via a simple {{axiom}} variable.

Let's pull back the curtain and explore the core components that make Project Axiom tick.

Under the Hood: Axiom's Architecture

Building a robust RAG system involves several interconnected pieces, from data persistence to intelligent document processing and seamless API integration. Here's a breakdown of what we accomplished:

1. The Data Foundation: Prisma & PostgreSQL Power

At the heart of Axiom lies a meticulously designed data model, leveraging the power of Prisma and PostgreSQL.

  • Prisma Models (prisma/schema.prisma):
    • ProjectDocument: To store our source documents, each tagged with authority and category fields. These are crucial for defining relevance and boosting search scores later on.
    • DocumentChunk: The atomic unit of our knowledge base. Each chunk includes a vector embedding for semantic search and a tsvector for full-text search.
    • AxiomApiToken: For secure external access, these tokens are stored as SHA-256 hashes, complete with expiry checks and lastUsedAt tracking.
  • PostgreSQL Enhancements (prisma/rls.sql):
    • We added Row-Level Security (RLS) policies to ensure data isolation and security, especially vital in a multi-project environment.
    • Integrated a vector column for efficient similarity search.
    • Created an HNSW index on the vector column for blazing-fast approximate nearest neighbor (ANN) lookups.
    • Implemented a GIN FTS index on the tsvector column to power our full-text search capabilities.
    • A tsvector trigger automatically updates the tsvector column whenever document content changes, keeping our FTS index fresh.
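To make the database pieces concrete, here's a rough sketch of what the additions in prisma/rls.sql might look like. Table, column, and index names here are illustrative (the actual file and embedding dimension may differ):

```sql
-- Enable pgvector and add an embedding column (dimension is illustrative)
CREATE EXTENSION IF NOT EXISTS vector;
ALTER TABLE "DocumentChunk" ADD COLUMN IF NOT EXISTS embedding vector(1536);

-- HNSW index for fast approximate nearest-neighbor (ANN) lookups
CREATE INDEX IF NOT EXISTS document_chunk_embedding_idx
  ON "DocumentChunk" USING hnsw (embedding vector_cosine_ops);

-- GIN index over the tsvector column to power full-text search
CREATE INDEX IF NOT EXISTS document_chunk_fts_idx
  ON "DocumentChunk" USING gin (fts);

-- Trigger keeps the tsvector fresh whenever chunk content changes
CREATE OR REPLACE FUNCTION document_chunk_fts_update() RETURNS trigger AS $$
BEGIN
  NEW.fts := to_tsvector('english', NEW.content);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER document_chunk_fts_trigger
  BEFORE INSERT OR UPDATE OF content ON "DocumentChunk"
  FOR EACH ROW EXECUTE FUNCTION document_chunk_fts_update();

-- Row-level security for per-project data isolation
ALTER TABLE "DocumentChunk" ENABLE ROW LEVEL SECURITY;
```

The trigger-plus-GIN combination means the FTS index never goes stale, and the HNSW index keeps vector lookups sublinear even as the chunk table grows.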

2. Intelligent Document Processing: Beyond Simple Splits

Raw documents aren't directly usable by LLMs. They need to be intelligently broken down into manageable, semantically meaningful chunks.

  • src/server/services/rag/document-processor.ts: This service is the brain behind our chunking strategies:
    • Markdown: Chunks are intelligently split by headings, preserving context.
    • Plain Text: Paragraph-based chunking ensures coherent blocks of text.
    • Code: We parse code to chunk by declarations (functions, classes), maintaining structural integrity.
    • PDFs: Leveraging pdf-parse v2, PDFs are processed page by page, extracting text for further chunking.
  • Storage Adapter (src/server/services/storage.ts): Extended to handle various document MIME types, with a sensible 50MB maximum file size.
  • Dependency: pdf-parse v2.4.5 was added to package.json to handle PDF ingestion.
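The heading-based markdown strategy is the easiest one to picture. Here's a minimal sketch of the idea (the names `Chunk` and `chunkMarkdown` are illustrative, not the actual document-processor.ts implementation):

```typescript
// Sketch of heading-based markdown chunking: each heading starts a new
// chunk, so every chunk carries its section context with it.
interface Chunk {
  heading: string;
  content: string;
}

function chunkMarkdown(markdown: string): Chunk[] {
  const chunks: Chunk[] = [];
  let current: Chunk = { heading: "", content: "" };
  for (const line of markdown.split("\n")) {
    if (/^#{1,6}\s/.test(line)) {
      // A new heading closes out the previous chunk
      if (current.content.trim() || current.heading) chunks.push(current);
      current = { heading: line.replace(/^#+\s*/, ""), content: "" };
    } else {
      current.content += line + "\n";
    }
  }
  if (current.content.trim() || current.heading) chunks.push(current);
  return chunks;
}
```

The real service layers more on top (size caps, overlap, the per-format strategies above), but the core insight is the same: split on structural boundaries, not arbitrary character counts.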

3. The Search Engine: Precision Through Hybrid Retrieval

Once documents are chunked and embedded, the real magic happens: finding the most relevant information.

  • src/server/services/rag/document-search.ts: Our hybrid search service combines the best of both worlds:
    • Hybrid Approach: A 70% vector + 30% FTS weighting, ensuring both semantic relevance (vector) and keyword accuracy (FTS).
    • Authority Boosting: Documents marked as "mandatory" receive a +0.3 score boost, while "guideline" documents get +0.15. This prioritizes critical information.
  • src/server/services/rag/load-axiom-content.ts: This service orchestrates the final content assembly:
    • It prioritizes mandatory chunks first, followed by relevant guideline and informational content.
    • Output is ordered by authority and relevance.
    • A strict 12K character limit ensures the retrieved context fits within typical LLM prompt windows.
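Putting the weights, boosts, and character budget together, the scoring and assembly logic can be sketched like this (types and function names are illustrative; the real document-search.ts and load-axiom-content.ts may differ in detail):

```typescript
// Illustrative hybrid scoring: 70% vector similarity + 30% FTS rank,
// plus flat authority boosts, then greedy assembly under a 12K budget.
type Authority = "mandatory" | "guideline" | "informational";

interface ScoredChunk {
  content: string;
  vectorScore: number; // cosine similarity, normalized to [0, 1]
  ftsScore: number;    // full-text rank, normalized to [0, 1]
  authority: Authority;
}

const AUTHORITY_BOOST: Record<Authority, number> = {
  mandatory: 0.3,
  guideline: 0.15,
  informational: 0,
};

function hybridScore(c: ScoredChunk): number {
  return 0.7 * c.vectorScore + 0.3 * c.ftsScore + AUTHORITY_BOOST[c.authority];
}

// Order by score (boosts naturally float mandatory content to the top),
// then pack chunks until the character budget is exhausted.
function assembleContext(chunks: ScoredChunk[], budget = 12_000): string {
  const ordered = [...chunks].sort((a, b) => hybridScore(b) - hybridScore(a));
  const parts: string[] = [];
  let used = 0;
  for (const c of ordered) {
    if (used + c.content.length > budget) break;
    parts.push(c.content);
    used += c.content.length;
  }
  return parts.join("\n\n");
}
```

Note how the flat +0.3 mandatory boost works with the 70/30 blend: a mandatory chunk outranks an otherwise-identical informational one by a margin larger than most FTS-weight differences, which is exactly the "critical information first" behavior described above.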

4. Seamless Integration: APIs and Workflows

A powerful backend is only useful if it's easily accessible and integrated into existing workflows.

  • src/server/services/rag/token-service.ts: Manages nyx_ax_ prefixed API tokens, ensuring secure access and tracking.
  • tRPC Router (src/server/trpc/routers/axiom.ts): A comprehensive internal API for Axiom, handling:
    • Document upload, confirmation, fetching via URL, listing, deletion, updates, and reprocessing.
    • A dedicated search endpoint.
    • Document stats.
    • Nested tokens router for managing API keys.
  • External REST Endpoints (src/app/api/v1/rag/...): For integration with external systems, we exposed:
    • /search: For querying the knowledge base.
    • /ingest: For programmatic document ingestion.
    • /documents: For managing documents via REST.
  • Workflow Integration (src/server/services/workflow-engine.ts): The {{axiom}} template variable is now a first-class citizen in our workflow engine, seamlessly resolving context during resolvePrompt, runWorkflow, and buildChainContext. This is a game-changer for LLM-powered applications.
  • UI Integration (src/app/(dashboard)/dashboard/projects/[id]/page.tsx): A new "Axiom" tab (with a sleek Shield icon) under the "Knowledge" group provides a user-friendly interface for managing documents and tokens.
  • Developer Experience (src/lib/constants.ts, CLAUDE.md): We updated CLAUDE.md with detailed documentation on the Axiom router, service patterns, and the {{axiom}} template variable. The variable was also added to pre-defined step templates for "deep analysis" and "synthesis," making it immediately useful.
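To give a feel for the workflow side, here's a minimal sketch of how an {{axiom}} variable might get resolved during prompt resolution. The signature and `AxiomLoader` type are hypothetical, not the actual workflow-engine.ts API:

```typescript
// Sketch: substitute {{axiom}} with retrieved knowledge-base context.
// The loader stands in for the hybrid-search content assembly service.
type AxiomLoader = (query: string) => Promise<string>;

async function resolvePrompt(
  template: string,
  query: string,
  loadAxiomContent: AxiomLoader
): Promise<string> {
  if (!template.includes("{{axiom}}")) return template;
  // Retrieve context once, then substitute every occurrence
  const context = await loadAxiomContent(query);
  return template.split("{{axiom}}").join(context);
}
```

The important design point is that retrieval happens lazily, only when the template actually references {{axiom}}, so workflows that don't need the knowledge base pay no search cost.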

Challenges & Lessons Learned

No sprint is without its bumps. Here are a few notable challenges and the lessons we took away:

  • UI Component Quirks: Badge Variants
    • The Problem: Attempting to use variant="outline" on our custom Badge components in the AxiomTab led to a TypeScript error. Our components only supported "default", "accent", "success", "warning", and "danger" variants.
    • The Lesson: Always double-check the available props and types for UI components, especially custom ones. What seems like a standard variant in one design system might not exist in another. We quickly adapted by using "default" or "accent" to maintain consistency.
  • Dependency Upgrades: pdf-parse v1 vs v2 API
    • The Problem: Our initial attempts to import pdfParse from "pdf-parse" failed because pdf-parse v2 switched to a class-based API with no default export.
    • The Lesson: Major version bumps in dependencies often introduce breaking API changes. It's crucial to consult the changelog or documentation. The fix involved using const { PDFParse } = await import("pdf-parse"); and then instantiating it with new PDFParse({ data: new Uint8Array(buffer) }), followed by .getText() and .destroy() for proper resource management. This also highlighted the utility of dynamic import() for handling modules that might have different usage patterns.
  • Database Connectivity: The Humble psql
    • The Problem: Running psql directly without credentials led to socket and password authentication errors.
    • The Lesson: Even basic setup tasks can trip you up. Always ensure your environment variables (like DATABASE_URL) are correctly set and accessible, especially when interacting with the database from the command line. A quick psql $DATABASE_URL -f prisma/rls.sql after sourcing our .env solved it.
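For the record, the working invocation from that last lesson looked roughly like this (assuming a standard .env file with DATABASE_URL in the project root):

```shell
# Export everything in .env into the current shell
set -a; source .env; set +a

# Apply the RLS policies, indexes, and triggers
psql "$DATABASE_URL" -f prisma/rls.sql
```

The `set -a` / `set +a` pair makes `source` export the variables rather than just setting them locally, which is what psql needs to pick up the connection string.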

Immediate Next Steps

With the core implementation complete, here's what's next on our agenda:

  1. Run npm run db:push && npm run db:generate to apply all schema changes.
  2. Execute psql $DATABASE_URL -f prisma/rls.sql to create the vector columns, HNSW/GIN indexes, and tsvector triggers.
  3. Thoroughly test the document upload flow end-to-end: upload a markdown file, confirm ingestion, and verify chunk creation.
  4. Test hybrid search via both the tRPC endpoint and the external REST API to ensure accuracy and performance.
  5. Consider implementing persona document scanning (e.g., "every persona scans a document and interprets it") – this is a deferred feature that requires further workflow trigger infrastructure.
  6. Add {{axiom}} documentation to the user-facing workflow builder UI hints for better usability.

Conclusion

Project Axiom is now ready for prime time. This sprint wasn't just about writing code; it was about laying the groundwork for a more intelligent, informed, and efficient system. I'm excited to see how this RAG system empowers our applications and users, transforming raw data into actionable knowledge and fundamentally changing how we interact with our information landscape.