From Dashboard Insights to Green CI: A Day of Feature Builds and Pipeline Triumphs
Join us on a journey through a dev session where we brought our project dashboard to life with key stats and tackled a series of CI pipeline failures, including a tricky `pgvector` setup and an ESLint upgrade saga.
Every development session brings its unique set of challenges and triumphs. Today's mission was twofold: breathe new life into our project list page with insightful statistics and, perhaps more critically, bring our entire CI pipeline back to a glorious green state after a few recent regressions. I'm happy to report: mission accomplished!
Let's dive into the details of how we made it happen.
Enriching the Dashboard: A Data-Driven Project Overview
Our project list page, while functional, was a bit bare. To truly empower users, we needed to surface key metrics right on the project cards. The goal was to provide an at-a-glance understanding of each project's health and activity.
The Backend Magic: Data Aggregation with Prisma
The heaviest lifting for this feature happened on the backend, specifically in our `src/server/trpc/routers/projects.ts` file. We leveraged Prisma's powerful querying capabilities to fetch and compute the necessary statistics:
- Relational Counts (`_count`): For immediate counts of related entities (like the number of associated actions or reports), we added `_count` includes for 7 different relations directly within our `project.findMany` query. This is a super efficient way to get related record counts with minimal database hits.
- Parallel Aggregation Queries: For more complex metrics that weren't simple counts, we ran 9 additional parallel aggregation queries. These covered things like:
  - The number of draft items.
  - Open actions.
  - Counts for various workflow statuses.
  - Detailed cost breakdowns for steps, discussions, reports, and blog posts.
  - Lookups for workflow and discussion IDs.
  Running these in parallel significantly sped up the data retrieval for each project.
- Computed Metrics: With the raw data in hand, we computed two crucial metrics:
  - `successRate`: the ratio of completed workflows to all terminal workflows. This gives a quick health check of project execution.
  - `totalSpend`: the sum of all associated costs, rounded to a neat two decimal places for financial reporting.
Finally, to keep our API responses clean, we stripped out the internal `_count` fields before sending the data to the frontend.
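To make the post-processing concrete, here is a minimal sketch of how the two computed metrics and the `_count` cleanup might look. The type and field names are illustrative assumptions, not the actual schema:

```typescript
// Assumed shapes for the aggregated rows (illustrative only).
interface WorkflowTotals {
  completed: number; // workflows that finished successfully
  failed: number;    // workflows that reached a failed terminal state
}

interface CostTotals {
  steps: number;
  discussions: number;
  reports: number;
  blogPosts: number;
}

// successRate: completed workflows over all terminal workflows.
function successRate({ completed, failed }: WorkflowTotals): number {
  const terminal = completed + failed;
  return terminal === 0 ? 0 : completed / terminal;
}

// totalSpend: sum of all cost buckets, rounded to two decimal places.
function totalSpend(costs: CostTotals): number {
  const sum = costs.steps + costs.discussions + costs.reports + costs.blogPosts;
  return Math.round(sum * 100) / 100;
}

// Strip the internal _count field before returning the API payload.
function stripCount<T extends { _count?: unknown }>(row: T): Omit<T, "_count"> {
  const { _count, ...rest } = row;
  return rest;
}
```

Keeping these as small pure functions makes them trivial to unit-test independently of the Prisma queries that feed them.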
The Frontend Transformation: Visualizing the Data
With the rich data now available, the frontend (in `src/app/(dashboard)/dashboard/projects/page.tsx`) underwent a significant redesign. Each project card now features:
- A Stat Icon Row: A visually appealing row of icons (think `GitBranch` for workflows, `MessageSquare` for discussions, `FileText` for reports, `ListChecks` for actions, `BookOpen` for blogs, `Database` for data, and `DollarSign` for costs) to represent the key metrics.
- Semantic Badges: We used semantic `Badge` variants to clearly indicate the `successRate`, highlight the number of open actions, and differentiate between draft and published content. This provides immediate visual cues about a project's status.
This combination of robust backend data and an intuitive frontend display has transformed our project list into a powerful, data-rich overview.
The CI Gauntlet: Making Our Pipeline Green Again
While the feature work was exciting, a looming shadow was our failing CI pipeline. Three distinct jobs were red, demanding immediate attention.
Challenge 1: Taming the Linter (ESLint Upgrade Saga)
The first culprit was ESLint. Our Lint & Typecheck job was failing spectacularly with a barrage of errors.
The Problem: A recent upgrade of `eslint-config-next` (to 14.2.35) had, under the hood, pulled in `@typescript-eslint@8.x`. This version change introduced new requirements, specifically needing the `@typescript-eslint` plugin explicitly declared.
The Fix:
- We updated our `.eslintrc.json` to explicitly include `"plugins": ["@typescript-eslint"]`. This immediately resolved the core configuration issue.
- With the linter now working, it revealed approximately 40 pre-existing `no-unused-vars` errors across 25 files. This was a fantastic opportunity for cleanup! We either removed unused imports or prefixed unused variables with `_` (e.g., `_myUnusedVar`) to signal intent.
- We also refined the `no-unused-vars` rule configuration in `.eslintrc.json` to gracefully handle these situations:

  ```json
  "rules": {
    "no-unused-vars": ["error", {
      "varsIgnorePattern": "^_",
      "destructuredArrayIgnorePattern": "^_"
    }]
  }
  ```

  This allowed us to keep the `no-unused-vars` rule strict for actual unused variables while permitting intentional omissions.
- Finally, a specific bug involving a conditional React Hook in `src/components/markdown-renderer.tsx` was fixed by moving a `useCallback` hook to an unconditional position before an early return.
Challenge 2: Unit Test Alignment
This was a quick one. Our Unit Tests job was failing due to an unexpected model name.
The Problem: A recent update to an LLM service adapter meant the expected model name in our `tests/unit/services/llm/kimi.test.ts` file was out of sync.
The Fix: Simply updating the expected model from `kimi-k2-0711` to `kimi-k2-0711-preview` brought this test back to green. A reminder that tests asserting on external API values need updating whenever those APIs change!
Challenge 3: The pgvector Odyssey (E2E Tests)
The most intricate challenge came from our E2E Tests job, which was failing with a cryptic `type "vector" does not exist` error.
The Problem: Our application uses the `pgvector` extension for vector embeddings in PostgreSQL. In our CI environment, we were using a standard `postgres:16-alpine` Docker image for our database. We thought simply switching to `pgvector/pgvector:pg16` would solve it.
The Initial Failure: We updated `ci.yml` to use `pgvector/pgvector:pg16`. To our dismay, the `type "vector" does not exist` error persisted!
The Lesson Learned: This was a classic "read the fine print" moment. The `pgvector/pgvector:pg16` Docker image includes the `pgvector` extension binaries, making it available for use. However, like any PostgreSQL extension, it still needs to be explicitly created within the database instance. A new database, even on an image that has the extension, won't have it enabled by default.
The Solution: We added a crucial step to our `ci.yml` workflow before `prisma db push`:

```yaml
- name: Create pgvector extension
  run: psql -h localhost -U postgres -d postgres -c 'CREATE EXTENSION IF NOT EXISTS vector;'
```

This command ensures that the `vector` extension is created in our test database, finally resolving the E2E test failures.
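For context, here is a minimal sketch of how the pieces might fit together in a GitHub Actions job. The service name, credentials, and surrounding steps are illustrative assumptions; only the image name and the extension step come from our actual fix:

```yaml
jobs:
  e2e:
    runs-on: ubuntu-latest
    services:
      postgres:
        # Image ships the pgvector binaries, but does NOT enable the extension.
        image: pgvector/pgvector:pg16
        env:
          POSTGRES_PASSWORD: postgres   # assumed credentials for CI only
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v4
      # Must run before any schema push that references the vector type.
      - name: Create pgvector extension
        run: psql -h localhost -U postgres -d postgres -c 'CREATE EXTENSION IF NOT EXISTS vector;'
        env:
          PGPASSWORD: postgres          # matches the assumed service password
      - name: Push schema
        run: npx prisma db push
```

The ordering is the whole point: the `CREATE EXTENSION` step has to precede `prisma db push`, or the push fails the moment it encounters a `vector` column.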
Key Learnings & Developer Wisdom
Beyond the fixes, these sessions always offer valuable insights:
- `pgvector` Extension Management: Remember that a Docker image providing an extension is not the same as the extension being activated in a database. Always `CREATE EXTENSION` explicitly.
- Destructuring Unused Props: When dealing with TypeScript and destructuring props where you only need some properties and want to ignore others (perhaps to collect the rest with a rest operator), directly renaming `{ _teamId, ... }` will cause a TypeScript error if `_teamId` isn't in the interface. The correct syntax to effectively "ignore" a prop while destructuring is to rename it to an ignored variable: `{ teamId: _teamId, ... }`.
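A quick illustration of that destructuring trick. The interface and function here are hypothetical, invented just to show the pattern:

```typescript
interface CardProps {
  teamId: string;
  title: string;
  count: number;
}

// We want to ignore `teamId` and use the rest.
// Writing `const { _teamId, ...rest } = props` would fail to compile,
// because CardProps has no `_teamId` property. Renaming during
// destructuring binds `teamId` to `_teamId`, which the lint rule's
// `varsIgnorePattern: "^_"` then treats as intentionally unused.
function summarize(props: CardProps): string {
  const { teamId: _teamId, ...rest } = props;
  return `${rest.title} (${rest.count})`;
}
```

The same rename-to-underscore pattern works anywhere you destructure: function parameters, array patterns (via `destructuredArrayIgnorePattern`), and plain `const` bindings.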
Outcome and What's Next
I'm thrilled to report that all three CI jobs — Lint & Typecheck, Unit Tests, and E2E Tests — are now passing with flying colors! Our latest successful run is 22482474598.
The project list page now boasts a rich, data-driven view, and our CI pipeline is robust and reliable once more.
Our immediate next steps include:
- Visually verifying the new project list page in a browser to ensure the UI is perfect.
- Addressing a separate, non-code related issue with our Vibe Publisher workflow permissions (a quick trip to GitHub repo settings is needed).
- Considering adding tooltips to the success rate badges for even greater clarity.
It was a productive session, demonstrating the continuous interplay between building new features, maintaining code quality, and ensuring our automated safeguards (CI) are always in top shape. Happy coding!