nyxcore-systems

Beyond Generic Prompts: Injecting Project Wisdom into AI Code Assistants

We just shipped a major update, giving our LLM-powered AutoFix and Refactor pipelines deep project context by injecting consolidated knowledge directly into their prompts, leading to significantly smarter, more relevant suggestions.

LLM · AI · Code Generation · Refactoring · Prisma · TypeScript · Fullstack · Developer Tools · Architecture

It’s an exciting day in the dev lifecycle when a major feature lands, especially one that fundamentally elevates the "intelligence" of your AI-powered tools. Today, we pushed a significant update to main (9a632ac) that transforms our AutoFix and Refactor pipelines from capable, but somewhat generic, LLM agents into truly project-aware assistants.

The core idea is simple yet powerful: inject consolidated project knowledge directly into the LLM prompts that detect issues, generate fixes, and suggest refactorings. No more guessing; our LLMs now operate with the wisdom of the project's entire history.

The Problem: LLMs in a Vacuum

Our AutoFix and Refactor pipelines were already doing impressive work. They could analyze code, detect common issues, and propose improvements. But there was a missing piece: context. When an LLM operates solely on the code snippet it's given, it lacks the broader understanding of the project's architecture, its common pitfalls, past discussions, or even specific technical debt.

Imagine hiring a brilliant junior developer who only ever gets to see a single file at a time, without access to documentation, past bug reports, or team discussions. They'd make good suggestions, but they'd miss the nuances, the "unwritten rules," and the historical context that makes for truly great code. Our LLMs were a bit like that.

The Solution: A Project Context Engine

Our goal was to give these LLMs a "brain" – a consolidated understanding of the project they're working on. This led to the creation of src/server/services/pipeline-context.ts, a new service responsible for assembling what we call "project context."

This context isn't just a dump of files; it's a curated digest of wisdom, capped at around 30,000 characters to stay within reasonable LLM token limits. It pulls from five critical sources:

  1. Project Wisdom: High-level architectural decisions, design principles, or common patterns.
  2. Memory Insights: Summarized insights from past AutoFix or Refactor runs, capturing recurring issues or successful patterns.
  3. Discussions: Relevant snippets from internal team discussions, design reviews, or Slack threads.
  4. Documentation: Key sections from internal READMEs, wikis, or API docs.
  5. Previous Runs: Learnings from prior AutoFix or Refactor executions on the same project.

By stitching these together, we create a rich, project-specific narrative that precedes the actual code analysis in the LLM prompt.
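To make the stitching concrete, here is a minimal sketch of how that assembly step could look. The function and type names (`assembleContext`, `ContextSection`) are illustrative, not the actual `pipeline-context.ts` API; the real service also fetches each section from the database before this step.

```typescript
// Rough sketch of context assembly (names are assumptions, not the real API).
const MAX_CONTEXT_CHARS = 30_000;

type ContextSource = 'wisdom' | 'memory' | 'discussions' | 'docs' | 'previousRuns';

interface ContextSection {
  source: ContextSource;
  title: string;
  body: string;
}

// Stitch enabled sections into a single prompt preamble, truncating at the
// character cap so the combined context stays within LLM token budgets.
function assembleContext(
  sections: ContextSection[],
  enabled: ContextSource[],
): string {
  let out = '';
  for (const section of sections) {
    if (!enabled.includes(section.source)) continue;
    const chunk = `## ${section.title}\n${section.body}\n\n`;
    if (out.length + chunk.length > MAX_CONTEXT_CHARS) {
      // Take as much of this section as still fits, then stop.
      out += chunk.slice(0, MAX_CONTEXT_CHARS - out.length);
      break;
    }
    out += chunk;
  }
  return out.trimEnd();
}
```

The cap is deliberately applied at assembly time rather than per source, so a verbose section (say, a long discussion thread) can't crowd out everything after it beyond the budget.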

The Architectural Journey: How We Did It

Implementing this required touching almost every layer of our stack.

1. Database Schema Update: Tying Runs to Projects

First, we needed to link every AutoFixRun and RefactorRun to a specific project. This was a straightforward addition of a projectId foreign key to AutoFixRun and RefactorRun in our prisma/schema.prisma, complete with reverse relations on the Project model.

```prisma
// prisma/schema.prisma
model Project {
  id           String         @id @default(cuid())
  name         String
  // ... other project fields
  autoFixRuns  AutoFixRun[]
  refactorRuns RefactorRun[]
}

model AutoFixRun {
  id        String    @id @default(cuid())
  projectId String?
  project   Project?  @relation(fields: [projectId], references: [id])
  // ... other AutoFixRun fields
}

model RefactorRun {
  id        String    @id @default(cuid())
  projectId String?
  project   Project?  @relation(fields: [projectId], references: [id])
  // ... other RefactorRun fields
}
```

2. Frontend to Backend: Passing the Context Parameters

Our frontend now allows users to specify a projectId, select which contextSources to include, and even pick specific memoryIds for deeper dives. These parameters are passed up through our tRPC mutations:

```typescript
// src/server/trpc/routers/auto-fix.ts (simplified)
start: publicProcedure
  .input(z.object({
    projectId: z.string().optional(),
    memoryIds: z.array(z.string()).optional(),
    contextSources: z.array(z.enum(['wisdom', 'memory', 'discussions', 'docs', 'previousRuns'])).optional(),
    // ... other inputs
  }))
  .mutation(async ({ input }) => {
    // ... call pipeline orchestrator with inputs
  }),
```

3. Orchestrators and Detectors: The Context Flow

The core logic resides in our pipeline orchestrators (auto-fix/pipeline.ts, refactor/pipeline.ts). They now call loadPipelineContext() using the provided projectId, memoryIds, and contextSources. The resulting projectContext string is then passed downstream to the LLM interaction layers.
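The hand-off described above can be sketched as follows. This is a simplified, hypothetical shape of the orchestrator wiring, not the actual `pipeline.ts` code; dependency names and signatures are assumptions.

```typescript
// Hypothetical orchestrator hand-off: load context once per run, then thread
// it through every downstream LLM-facing stage.
interface PipelineConfig {
  projectId?: string;
  memoryIds?: string[];
  contextSources?: string[];
}

interface PipelineDeps {
  loadPipelineContext: (config: PipelineConfig) => Promise<string | undefined>;
  detectIssues: (code: string, projectContext?: string) => Promise<string[]>;
}

async function runAutoFixPipeline(
  code: string,
  config: PipelineConfig,
  deps: PipelineDeps,
): Promise<string[]> {
  // Legacy runs with no project skip the context load entirely.
  const projectContext = config.projectId
    ? await deps.loadPipelineContext(config)
    : undefined;
  return deps.detectIssues(code, projectContext);
}
```

Loading the context once at the top of the run, rather than inside each stage, keeps the database hit to a single call and guarantees every stage sees the same snapshot.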

All four of our LLM-facing modules (issue-detector.ts, fix-generator.ts, opportunity-detector.ts, improvement-generator.ts) were updated to accept and inject this projectContext into their respective prompts.

```typescript
// src/server/lib/llm/issue-detector.ts (simplified)
export async function detectIssues(code: string, projectContext?: string) {
  const prompt = `
    You are an expert code quality assistant.
    ${projectContext ? `Here is relevant project context: ${projectContext}\n` : ''}
    Analyze the following code for potential issues and suggest improvements:
    \`\`\`typescript
    ${code}
    \`\`\`
  `;
  // ... call LLM with prompt
}
```

4. UI Enhancements: Making Context Visible

On the frontend, we've revamped the list pages for both AutoFix and Refactor. They now feature:

  • A project selector dropdown.
  • Toggle chips to easily enable/disable different context sources.
  • A collapsible MemoryPicker to select specific memory insights.
  • Filtered repository dropdowns based on the selected project.

Detail pages now proudly display context badges, showing the active project name, icons for enabled context sources, and the count of included memory insights. This offers immediate visual feedback on the context driving the LLM's suggestions.
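The badge logic itself is simple enough to sketch as a pure function. This is an illustrative helper, not our actual component code; the field names on `RunContextConfig` are assumptions about what a run stores.

```typescript
// Illustrative helper behind the detail-page badges: derive display labels
// from a run's stored context configuration (field names are assumptions).
interface RunContextConfig {
  projectName?: string;
  contextSources: string[];
  memoryIds: string[];
}

function contextBadges(config: RunContextConfig): string[] {
  const badges: string[] = [];
  if (config.projectName) badges.push(`Project: ${config.projectName}`);
  // One badge per enabled context source (wisdom, docs, etc.).
  for (const source of config.contextSources) badges.push(source);
  if (config.memoryIds.length > 0) {
    badges.push(`${config.memoryIds.length} memories`);
  }
  return badges;
}
```

Keeping this derivation pure makes it trivial to unit-test independently of the rendering layer.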

Lessons Learned: Navigating the Trenches

No significant feature ships without a few bumps in the road. Here are our key takeaways from this session:

1. Prisma & Custom Database Types: The --accept-data-loss Dance

We use pgvector for embedding storage, which relies on a custom vector(1536) type in PostgreSQL. Prisma doesn't natively manage this type, which is usually fine for migrations. However, running npm run db:push after adding projectId to the schema triggered a warning about dropping the embedding vector(1536) column on workflow_insights.

The Lesson: When Prisma warns about dropping unsupported types, it's often a sign that it doesn't understand how to preserve them during schema changes. The workaround involved using --accept-data-loss (after verifying no actual data loss for other columns) and then immediately restoring the embedding column and its HNSW index via raw SQL:

```sql
-- Restore the column Prisma dropped, then rebuild the HNSW index
-- (index name and operator class shown here are assumptions; adjust to match
-- your setup, e.g. via a script or `npx prisma db execute --stdin`).
ALTER TABLE workflow_insights ADD COLUMN IF NOT EXISTS embedding vector(1536);
CREATE INDEX IF NOT EXISTS workflow_insights_embedding_idx
  ON workflow_insights USING hnsw (embedding vector_cosine_ops);
```

This is a recurring issue for us. We're exploring more robust strategies, perhaps by managing these specific columns outside of Prisma's direct db:push purview or having a more automated post-db:push script.

2. Frontend Data Structures: The items Property Gotcha

A minor but common TypeScript hiccup: when using trpc.projects.list.useQuery(), I instinctively tried projects.data?.map() expecting an array.

The Lesson: Always double-check the exact return type of your tRPC queries (or any API call, really!). Our list queries typically return an object { items: T[], total: number } for pagination metadata. The fix was simple: projects.data?.items.map(). It's a reminder that even experienced developers can fall into basic type traps.
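In miniature, the trap and the fix look like this (the `Paginated` interface is a sketch of the envelope our list queries return, not the exact generated type):

```typescript
// Sketch of the pagination envelope our list queries return.
interface Paginated<T> {
  items: T[];
  total: number;
}

const projects: Paginated<{ id: string; name: string }> = {
  items: [{ id: 'p1', name: 'nyxcore' }],
  total: 1,
};

// Wrong: projects.map(...) — the envelope itself is not an array,
// so TypeScript rejects it at compile time.
// Right: map over the items array inside the envelope.
const names = projects.items.map((p) => p.name);
```

The upside of the envelope is that `total` travels with the page, so the UI can render pagination controls without a second query.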

What's Next?

With the feature landed, our immediate focus shifts to rigorous QA:

  1. Backward Compatibility: Verify AutoFix scans without a specified project still work as expected.
  2. Project-Aware Scans: Thoroughly test AutoFix and Refactor scans with project context enabled, checking repo filtering, context toggles, memory picker integration, and the context badges on detail pages.
  3. Security: Add RLS (Row Level Security) policies for our project_notes table to ensure data isolation (pending task #31).
  4. Cleanup: Tidy up .gitignore for mini-RAG log files (pending task #32).
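For the backward-compatibility check in particular, the key invariant is that a run with no project produces exactly the old prompt. A regression test for that can stay tiny; this sketch mirrors the simplified prompt template from the issue detector above (code fences omitted), with a hypothetical `buildPrompt` helper standing in for the real construction:

```typescript
// Hypothetical prompt builder mirroring the issue-detector template
// (simplified; the real prompt also wraps the code in a fenced block).
function buildPrompt(code: string, projectContext?: string): string {
  const contextBlock = projectContext
    ? `Here is relevant project context: ${projectContext}\n`
    : '';
  return (
    `You are an expert code quality assistant.\n` +
    contextBlock +
    `Analyze the following code for potential issues and suggest improvements:\n` +
    code
  );
}

// Legacy path: no project selected, so no context section may appear.
const legacy = buildPrompt('const x = 1;');
// Project-aware path: the context must be present verbatim.
const withContext = buildPrompt('const x = 1;', 'We prefer tRPC over REST.');
```

Asserting on the rendered prompt string, rather than on pipeline internals, keeps the test honest about what the LLM actually sees.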

This update marks a significant leap forward in making our AI code assistants more intuitive, precise, and genuinely helpful. By embedding project wisdom directly into their prompts, we're moving closer to a future where these tools feel less like generic algorithms and more like seasoned team members.


```json
{"thingsDone":[
  "Implemented projectId FK for AutoFixRun and RefactorRun in Prisma schema",
  "Created pipeline-context.ts service to assemble project knowledge from 5 sources",
  "Extended tRPC start mutations with projectId, memoryIds, contextSources inputs",
  "Updated pipeline orchestrators to load and pass project context downstream",
  "Updated all 4 detector/generator LLM prompts to accept and inject projectContext",
  "Updated SSE routes to pass config fields to pipelines",
  "Rewrote frontend list pages with project selector, context toggles, memory picker, and filtered repo dropdowns",
  "Added context badges to frontend detail pages",
  "Ran db:push and db:generate, ensuring typecheck passes"
],
"pains":[
  "Prisma db:push warning about dropping custom 'embedding vector(1536)' column, requiring --accept-data-loss and manual SQL restoration",
  "Frontend TypeScript error due to incorrect assumption about trpc.projects.list return type (expecting array, received { items: [], total: number })"
],
"successes":[
  "Successfully made LLM pipelines project-aware by injecting consolidated knowledge",
  "Achieved clean typecheck and successful deployment to main",
  "Developed a robust system for gathering and injecting project-specific context",
  "Improved user experience with detailed context controls and visual feedback in the UI"
],
"techStack":[
  "TypeScript",
  "Next.js",
  "tRPC",
  "Prisma",
  "PostgreSQL",
  "pgvector",
  "LLM (Large Language Models)"
]}
```