nyxcore-systems

Feature Boost & CI Conquer: A Day of Green Pipelines and Smarter Project Insights

We tackled a dual challenge: enriching our project list with vital stats and wrestling our CI pipeline back to a pristine green state across all stages. A deep dive into Prisma aggregations, ESLint upgrades, and E2E database setup.

Tags: TypeScript, Next.js, CI/CD, ESLint, PostgreSQL, pgvector, Unit Testing, E2E Testing, Feature Development, Frontend, Backend

Every developer knows that feeling: a fresh feature waiting to be deployed, but a sea of red in the CI pipeline holding it back. Today was one of those days where we rolled up our sleeves to not only ship a useful new feature but also to meticulously debug and fix every single CI failure, bringing our entire suite of checks back to a glorious green.

Our mission was twofold: first, to empower our users with more insightful data directly on their project list, and second, to restore the integrity of our continuous integration system, which had accumulated a few nagging failures.

Elevating the Project List: Data at a Glance

The first order of business was to enhance our project dashboard. Users needed a quicker way to gauge the status and health of their projects without diving into each one individually. We envisioned a rich, interactive list where key metrics were immediately visible.

This involved some exciting backend work, primarily within our tRPC router for projects (src/server/trpc/routers/projects.ts) and a significant frontend overhaul (src/app/(dashboard)/dashboard/projects/page.tsx).

Backend Brilliance: Aggregations & Computations

To get the data we needed, we leveraged Prisma's powerful capabilities. We added _count includes for no fewer than seven relations on our project.findMany query. This gave us immediate counts for related entities like discussions, reports, and workflows.

But raw counts weren't enough. We needed deeper insights. This led to implementing nine parallel aggregation queries. These queries fetched critical data points such as:

  • The number of draft items.
  • Open actions requiring attention.
  • Breakdowns of workflow statuses.
  • Calculated costs for various project components (steps, discussions, reports, blogs).
  • Lookups for workflow and discussion IDs.
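
The key to keeping all these extra queries cheap is issuing them concurrently. Here is a minimal, runnable sketch of that parallel-aggregation pattern; the query functions are stubs standing in for Prisma aggregate/groupBy calls, and all names and return values are illustrative rather than the router's actual API:

```typescript
type ProjectStats = { drafts: number; openActions: number; totalCost: number };

// Stubs for what would be prisma.*.count / aggregate queries in the real router.
async function countDrafts(_projectId: string): Promise<number> {
  return 3;
}
async function countOpenActions(_projectId: string): Promise<number> {
  return 2;
}
async function sumCosts(_projectId: string): Promise<number> {
  return 12.5;
}

async function loadProjectStats(projectId: string): Promise<ProjectStats> {
  // Fire every query at once instead of awaiting each one in turn;
  // total latency is roughly the slowest query, not the sum of all of them.
  const [drafts, openActions, totalCost] = await Promise.all([
    countDrafts(projectId),
    countOpenActions(projectId),
    sumCosts(projectId),
  ]);
  return { drafts, openActions, totalCost };
}
```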

With this wealth of data, we then computed two crucial metrics:

  1. successRate: The ratio of completed workflows to total terminal workflows, giving a clear indicator of project health.
  2. totalSpend: A meticulously calculated total cost for the project, rounded to two decimal places for clean currency display.

Before sending the data to the frontend, we gracefully stripped the internal _count fields from the response, keeping the API clean and focused on the computed insights. We also added an early-return optimization for empty project ID lists to prevent unnecessary database hits.
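
Both metrics boil down to simple arithmetic over the aggregation results. A sketch, assuming hypothetical status names and field shapes (the real ones live in the tRPC router):

```typescript
// Hypothetical breakdown of terminal workflow states.
interface WorkflowStatusCounts {
  completed: number;
  failed: number;
  cancelled: number;
}

// successRate = completed workflows / all terminal workflows.
// Returns null when no workflow has reached a terminal state yet.
function successRate(c: WorkflowStatusCounts): number | null {
  const terminal = c.completed + c.failed + c.cancelled;
  return terminal === 0 ? null : c.completed / terminal;
}

// totalSpend = sum of per-component costs, rounded to two decimal places.
function totalSpend(componentCosts: number[]): number {
  const sum = componentCosts.reduce((acc, cost) => acc + cost, 0);
  return Math.round(sum * 100) / 100;
}
```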

Frontend Flair: Redesigned Project Cards

On the frontend, the project cards received a complete makeover. We introduced a semantic icon row, using visually intuitive icons like GitBranch (for workflows), MessageSquare (for discussions), FileText (for reports), ListChecks (for actions), BookOpen (for blogs), Database (for data sources), and DollarSign (for total spend).

Each stat now sits proudly on the card, complemented by semantic Badge variants. For instance, the success rate badge dynamically changes color based on its value, providing an instant visual cue. Open actions are highlighted, and draft/post counts are clearly visible, ensuring users get a comprehensive overview at a glance.
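
The dynamic badge color is just a threshold mapping from the computed rate to a Badge variant. A sketch with illustrative cutoffs (the app's actual thresholds may differ):

```typescript
type BadgeVariant = "success" | "warning" | "destructive";

// Map a success rate (0..1) to a semantic badge variant.
// The 0.8 / 0.5 cutoffs here are illustrative, not the app's real values.
function successRateBadge(rate: number): BadgeVariant {
  if (rate >= 0.8) return "success";
  if (rate >= 0.5) return "warning";
  return "destructive";
}
```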

The CI Gauntlet: Fixing the Red

With the feature logic in place, it was time to address the elephant in the room: our failing CI pipeline. Three distinct jobs were failing, each with its own unique challenge.

CI Fix 1: Taming ESLint & Type-checking

Our Lint & Typecheck job was a disaster zone. The culprit? A recent upgrade of eslint-config-next to version 14.2.35, which internally bumped @typescript-eslint to 8.x. This introduced a breaking change: the @typescript-eslint plugin, which was previously implicitly included, now needed to be explicitly declared.

The fix was straightforward but critical:

json
// .eslintrc.json
{
  "extends": ["next/core-web-vitals", "prettier"],
  "plugins": ["@typescript-eslint"], // <-- This was the missing piece!
  "rules": {
    "@typescript-eslint/no-unused-vars": [
      "warn",
      {
        "argsIgnorePattern": "^_",
        "varsIgnorePattern": "^_",
        "destructuredArrayIgnorePattern": "^_"
      }
    ]
    // ... other rules
  }
}

This change immediately resolved a cascade of "plugin not found" errors. However, it unveiled a new problem: approximately 40 pre-existing no-unused-vars violations across 25 files. We systematically went through each one, either removing unused imports or prefixing unused variables with an underscore (_) to signal their intentional non-use.
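
The underscore convention works because of the ignore patterns in the rule config above. Two common cases, with made-up data for illustration:

```typescript
const items = [{ name: "a" }, { name: "b" }];

// Unused callback parameter: the underscore prefix matches
// argsIgnorePattern "^_", so the rule stays quiet.
const names = items.map((item, _index) => item.name);

// Unused destructured array element: matches
// destructuredArrayIgnorePattern "^_".
const [first, _second] = names;
```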

A specific React Hook issue in src/components/markdown-renderer.tsx also surfaced, where a useCallback was conditionally called. The fix involved simply moving the useCallback declaration before an early return, adhering to React's rules of Hooks.

CI Fix 2: Aligning Unit Tests with Reality

Our Unit Tests job was failing due to a subtle mismatch in an expected model name within tests/unit/services/llm/kimi.test.ts. An adapter had been updated to use a "preview" version of the model, and our test hadn't caught up.

The fix was a quick one-liner:

typescript
// tests/unit/services/llm/kimi.test.ts
// ...
// Old: expect(result.model).toBe('kimi-k2-0711');
expect(result.model).toBe('kimi-k2-0711-preview'); // Updated to match the adapter
// ...

A small change, but vital for test accuracy!

CI Fix 3: Empowering E2E Tests with pgvector

The most stubborn of the CI failures was in our E2E Tests. We use pgvector for vector embeddings, and the E2E environment was failing with a cryptic type "vector" does not exist error.

Our initial thought was to simply switch the PostgreSQL Docker image in our CI workflow (.github/workflows/ci.yml) from postgres:16-alpine to the pgvector/pgvector:pg16 image. This seemed logical, as the new image explicitly includes pgvector.

However, the error persisted! This led to an important realization and a valuable lesson learned: while the pgvector/pgvector Docker image provides the pgvector extension binaries, it doesn't automatically create or activate the extension within the database itself. Each new database instance still requires explicit creation of the extension.

The workaround, and the ultimate fix, was to add a step in our CI workflow to explicitly create the extension before Prisma tried to push the schema:

yaml
# .github/workflows/ci.yml
# ...
- name: Setup Database
  run: |
    docker run --rm -d --name test-db -e POSTGRES_USER=test -e POSTGRES_PASSWORD=test -e POSTGRES_DB=test -p 5432:5432 pgvector/pgvector:pg16
    # Wait for PostgreSQL to be ready
    sleep 10
    # Explicitly create the pgvector extension in our test database
    # (PGPASSWORD avoids an interactive password prompt)
    PGPASSWORD=test psql -h localhost -U test -d test -c 'CREATE EXTENSION IF NOT EXISTS vector;'
    npx prisma db push --accept-data-loss --skip-generate
# ...

Adding that psql -c 'CREATE EXTENSION IF NOT EXISTS vector;' step was the key to unlocking our E2E tests, finally allowing them to run successfully.

Lessons Learned & Key Takeaways

This session provided some critical insights:

  • Explicit ESLint Plugins: Always be aware of major version bumps in core dependencies like eslint-config-next. They can introduce breaking changes that require explicit configuration, like adding @typescript-eslint to your plugins array.
  • Unused Variables in TypeScript Destructuring: When dealing with TypeScript interfaces and destructuring, you can't just rename a prop like { _teamId, ... } if _teamId isn't defined in the interface. Instead, use the rename syntax: { teamId: _teamId, ... } to correctly ignore the teamId while adhering to the interface contract.
  • pgvector Extension Activation: The pgvector Docker image is a great start, but remember that you still need to explicitly CREATE EXTENSION IF NOT EXISTS vector; within your database, especially in CI/CD environments where databases are often ephemeral.
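
The destructuring-rename point is subtle enough to deserve a concrete example. Assuming a hypothetical props interface (the real one lives in the component file):

```typescript
// Hypothetical interface -- field names are illustrative.
interface CardProps {
  teamId: string;
  title: string;
}

// `function cardTitle({ _teamId, title }: CardProps)` would fail to compile:
// CardProps has no `_teamId` field. Renaming during destructuring keeps the
// interface contract intact while the "^_" ignore pattern silences the
// unused-variable warning.
function cardTitle({ teamId: _teamId, title }: CardProps): string {
  return title;
}
```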

A Green Horizon

By the end of the session, all three CI jobs – Lint & Typecheck, Unit Tests, and E2E Tests – were proudly passing green on run 22482474598. The satisfaction of seeing a clean pipeline after a debugging marathon is truly unmatched!

While we still have a minor permissions issue with our Vibe Publisher workflow (an unrelated repo-level setting, not a code problem), the core CI suite is robust.

Next up, a visual verification of our new project list page in the browser, and perhaps a thoughtful tooltip for our success rate badge to clarify "of completed runs."

It was a highly productive session, delivering both a valuable new feature and a much-needed boost to our development infrastructure's reliability. Onwards to cleaner code and happier deployments!