Beyond the Green: Shipping Project Insights and Conquering CI's Triple Threat
Join me as I recount a recent development session, balancing the delivery of a new feature – enriched project dashboards – with the critical task of bringing a stubborn CI pipeline back to a glorious green.
Every now and then, a development session feels like a mini-saga. You start with a clear feature goal, but the universe (or, more accurately, the CI pipeline) has other plans. This past Friday was one of those days. My mission: enrich our project list page with vital statistics, then tackle a stubbornly red CI pipeline, which had three distinct failures.
The good news? Mission accomplished. All three CI jobs (Lint & Typecheck, Unit Tests, E2E Tests) are now passing, and our project cards are looking much smarter.
Let's dive into the journey.
Feature Spotlight: Bringing Projects to Life with Key Stats
Our `/dashboard/projects` page was functional but a bit sparse. Users wanted to see more than just a project name; they needed quick insights into its health and activity. The goal was to transform each project card into a mini-dashboard, displaying things like success rates, open actions, and total spend.
The Backend Hustle (src/server/trpc/routers/projects.ts)
To achieve this, I leveraged Prisma's powerful query capabilities and a bit of custom aggregation:
- `_count` for Relations: For seven key relations (e.g., `workflows`, `discussions`, `reports`), I added `_count: true` to our `project.findMany` query. This gave us immediate counts of related records.
- Parallel Aggregation Queries: For more complex metrics like total spend or specific workflow statuses, I fired off nine parallel `prisma.$queryRaw` or `prisma.<model>.aggregate` queries. This was crucial for performance, fetching all necessary data efficiently.
  - Example: Calculating `totalSpend` involved summing costs across `steps`, `discussions`, `reports`, and `blogs`.
  - Success Rate: This was a neat one – `(completed_workflows / terminal_workflows) * 100`.
- Data Transformation: Once all data was fetched, I rounded `totalSpend` to two decimal places and stripped the internal `_count` objects from the final response to keep the API clean. An early return for empty `projectIds` added robustness. (A rough sketch of the whole query follows below.)
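To make that concrete, here's a minimal sketch of the shape of that query. The model and field names (`step`, `workflow`, `costUsd`, the status values) are assumptions for illustration, and `groupBy` stands in here for the mix of `$queryRaw` and `aggregate` calls the router actually uses:

```typescript
// Hypothetical sketch — model, field, and status names are illustrative, not our real schema.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export async function getProjectsWithStats(projectIds: string[]) {
  // Early return: no point firing nine aggregations against an empty id list.
  if (projectIds.length === 0) return [];

  const [projects, spendByProject, workflowsByStatus] = await Promise.all([
    // Relation counts ride along with each project via _count.
    prisma.project.findMany({
      where: { id: { in: projectIds } },
      include: {
        _count: { select: { workflows: true, discussions: true, reports: true } },
      },
    }),
    // Heavier metrics run as parallel aggregations instead of per-project round trips.
    prisma.step.groupBy({
      by: ["projectId"],
      where: { projectId: { in: projectIds } },
      _sum: { costUsd: true },
    }),
    prisma.workflow.groupBy({
      by: ["projectId", "status"],
      where: { projectId: { in: projectIds } },
      _count: true,
    }),
  ]);

  // Transform: round spend, compute the success rate, and strip the internal _count object.
  return projects.map(({ _count, ...project }) => {
    const spend = spendByProject.find((s) => s.projectId === project.id)?._sum.costUsd ?? 0;
    const statuses = workflowsByStatus.filter((w) => w.projectId === project.id);
    const completed = statuses.find((s) => s.status === "COMPLETED")?._count ?? 0;
    const terminal = statuses
      .filter((s) => s.status === "COMPLETED" || s.status === "FAILED")
      .reduce((sum, s) => sum + s._count, 0);

    return {
      ...project,
      workflowCount: _count.workflows,
      discussionCount: _count.discussions,
      reportCount: _count.reports,
      totalSpend: Math.round(spend * 100) / 100,
      successRate: terminal > 0 ? Math.round((completed / terminal) * 100) : null,
    };
  });
}
```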
The Frontend Facelift (src/app/(dashboard)/dashboard/projects/page.tsx)
With the enriched data flowing in, the frontend got a much-needed redesign:
- Stat Icon Row: Each project card now features a neat row of icons (think `GitBranch`, `MessageSquare`, `FileText`, `DollarSign`), each representing a key metric like workflows, discussions, reports, or total spend.
- Semantic Badges: `successRate` is now displayed with semantic `Badge` variants (e.g., green for high success, amber for moderate). Open actions, drafts, and posts also get their own distinct badges, providing at-a-glance status. (A rough sketch of the card layout follows below.)
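For a feel of what that looks like in the component, here's a rough sketch. The icons come from `lucide-react`; the `Badge` component and the `success`/`warning` variants are assumptions about our shadcn-style UI kit rather than guaranteed names, and the prop shape is illustrative:

```tsx
// Illustrative sketch only — component names, badge variants, and props are assumptions.
import { GitBranch, MessageSquare, FileText, DollarSign } from "lucide-react";
import { Badge } from "@/components/ui/badge";

type ProjectStats = {
  workflowCount: number;
  discussionCount: number;
  reportCount: number;
  totalSpend: number;
  successRate: number | null;
};

// Map the success rate onto a semantic badge variant: green when healthy, amber when middling.
function successVariant(rate: number) {
  if (rate >= 80) return "success";
  if (rate >= 50) return "warning";
  return "destructive";
}

export function ProjectCardStats({ stats }: { stats: ProjectStats }) {
  return (
    <div className="flex items-center gap-4 text-sm text-muted-foreground">
      <span className="flex items-center gap-1"><GitBranch className="h-4 w-4" />{stats.workflowCount}</span>
      <span className="flex items-center gap-1"><MessageSquare className="h-4 w-4" />{stats.discussionCount}</span>
      <span className="flex items-center gap-1"><FileText className="h-4 w-4" />{stats.reportCount}</span>
      <span className="flex items-center gap-1"><DollarSign className="h-4 w-4" />{stats.totalSpend.toFixed(2)}</span>
      {stats.successRate !== null && (
        <Badge variant={successVariant(stats.successRate)}>{Math.round(stats.successRate)}% success</Badge>
      )}
    </div>
  );
}
```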
The result is a far more informative and actionable project list, giving users a quick overview without needing to drill into each project.
The CI Gauntlet: Taming the Triple Threat
While the feature development was satisfying, the real challenge (and learning opportunity) came from bringing our CI pipeline back to green. Three distinct failures meant three different rabbit holes to explore.
Challenge 1: ESLint's Evolving Demands
Our Lint & Typecheck job was failing, spewing dozens of no-unused-vars errors.
The Problem:
It turned out eslint-config-next@14.2.35 had upgraded its underlying @typescript-eslint dependency to v8.x. This major version bump introduced stricter parsing and required explicit plugin configuration.
The Fixes:

- Plugin Declaration: The first step was simple but critical:

  ```json
  // .eslintrc.json
  {
    "plugins": ["@typescript-eslint"],
    // ... rest of your config
  }
  ```

  Without this, ESLint couldn't properly interpret TypeScript-specific rules.

- Taming `no-unused-vars`: Even with the plugin, a flood of `no-unused-vars` remained. Many were legitimate, but some were for variables intentionally prefixed with `_` to signal they were unused (e.g., in destructuring or function parameters).

  ```json
  // .eslintrc.json
  {
    "rules": {
      "@typescript-eslint/no-unused-vars": [
        "warn",
        {
          "argsIgnorePattern": "^_",
          "varsIgnorePattern": "^_",
          "destructuredArrayIgnorePattern": "^_",
          "caughtErrorsIgnorePattern": "^_"
        }
      ]
    }
  }
  ```

  This configuration allowed us to keep our `_` prefix convention for ignored variables.

- Destructuring Gotcha: I initially tried renaming a destructured prop directly, like `{ _teamId, ... }`, but TypeScript correctly flagged this as an invalid property name. The workaround? Use JavaScript's destructuring rename syntax:

  ```typescript
  // Before (TypeScript error)
  // const MyComponent = ({ _teamId, otherProp }) => { ... }

  // After (correct)
  const MyComponent = ({ teamId: _teamId, otherProp }) => { ... }
  ```

  This allowed us to ignore `_teamId` in ESLint while keeping the original `teamId` prop name from the interface.

- React Hook Order: A subtle bug in `src/components/markdown-renderer.tsx` had a `useCallback` hook conditionally placed after an early return for `mermaid` diagrams. React Hooks must always be called unconditionally at the top level of your component. Moving `useCallback` before the early return fixed this. (A simplified sketch follows this list.)
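For reference, the shape of that bug and its fix looked roughly like this (heavily simplified; the real `markdown-renderer.tsx` does far more than this sketch, and the helper names are made up):

```tsx
// Simplified illustration of the hook-order fix — not the actual markdown-renderer.tsx code.
import { useCallback } from "react";

// Stand-in for the real mermaid rendering path.
function MermaidDiagram({ source }: { source: string }) {
  return <div className="mermaid">{source}</div>;
}

type Props = { content: string; isMermaid: boolean };

export function MarkdownRenderer({ content, isMermaid }: Props) {
  // After the fix: the hook runs unconditionally at the top of the component.
  // Previously it sat below the mermaid early return, so it was skipped on some
  // renders — a Rules of Hooks violation that the linter rightly flagged.
  const handleCopy = useCallback(() => {
    void navigator.clipboard.writeText(content);
  }, [content]);

  if (isMermaid) {
    return <MermaidDiagram source={content} />;
  }

  return <pre onClick={handleCopy}>{content}</pre>;
}
```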
Challenge 2: The Elusive Kimi Model
The Unit Test job failed with a straightforward message: a model name mismatch in `tests/unit/services/llm/kimi.test.ts`.
The Fix:
A quick update to the expected model string from `kimi-k2-0711` to `kimi-k2-0711-preview` was all it took. This highlighted the importance of keeping test fixtures and expected values in sync with external API changes.
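For completeness, the change amounted to swapping a single expected string; something along these lines, assuming a Vitest-style test, where the import path and exported constant are guesses at the test's shape rather than the actual file:

```typescript
// Hypothetical shape of tests/unit/services/llm/kimi.test.ts — import path and API are assumptions.
import { describe, expect, it } from "vitest";
import { KIMI_MODEL } from "@/server/services/llm/kimi";

describe("Kimi LLM service", () => {
  it("targets the current Kimi model identifier", () => {
    // Before: expect(KIMI_MODEL).toBe("kimi-k2-0711");
    expect(KIMI_MODEL).toBe("kimi-k2-0711-preview");
  });
});
```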
Challenge 3: pgvector's Hidden Requirement
The E2E tests were the final hurdle, failing with `type "vector" does not exist`.
The Problem:
My initial thought was to simply switch our CI's PostgreSQL Docker image from `postgres:16-alpine` to `pgvector/pgvector:pg16`. This image explicitly bundles the pgvector extension. I assumed that simply using the image would make the `vector` type available.
The Pain Log / Lesson Learned:
I was wrong. The image includes pgvector, but like many PostgreSQL extensions, it still needs to be explicitly created within the database instance. The Docker image makes the extension available for creation, but doesn't automatically create it in every database it manages.
The Fix:
I added a crucial step to our CI workflow (`.github/workflows/ci.yml`) before `prisma db push`:
```yaml
# .github/workflows/ci.yml
jobs:
  e2e:
    steps:
      - name: Create pgvector extension
        run: psql -c 'CREATE EXTENSION IF NOT EXISTS vector;' -U postgres -h localhost -d your_db_name # Replace 'your_db_name'
      - name: Push Prisma schema
        run: prisma db push --skip-generate
      # ... rest of E2E steps
```
This explicit `CREATE EXTENSION` command ensured the `vector` type was available when Prisma tried to push the schema, finally bringing the E2E tests to green.
Lessons Learned
This session was a great reminder of several key development principles:
- CI is Your Best Friend (and Toughest Critic): While a red CI can be frustrating, it's an invaluable feedback loop. Each failure points to an area needing attention, from linter configuration to database setup.
- Read the Fine Print (and Release Notes): Linter updates, especially major version bumps, often come with breaking changes or new configuration requirements. A quick glance at the `@typescript-eslint@8.x` release notes would have pointed directly to the plugin declaration issue.
- Distinguish Between "Available" and "Active": Just because a Docker image includes a dependency (like `pgvector`) doesn't mean it's automatically active or configured within the running service. Always verify activation steps, especially for database extensions.
- Systematic Debugging Pays Off: Tackling three distinct CI failures required breaking them down, isolating the cause of each, and applying targeted solutions.
What's Next?
With the CI pipeline purring happily, the immediate next steps are:
- Visual Verification: Double-check the `/dashboard/projects` page in the browser to ensure the new stats and UI elements render perfectly.
- Vibe Publisher Workflow: The unrelated `vibe_publisher.yml` workflow still fails due to missing `contents: write` permissions. This is a quick fix in repo settings, not code.
- UX Polish: Consider adding a tooltip to the success rate badge, clarifying "of completed runs" for better user understanding.
It was a productive session, delivering a valuable new feature and restoring confidence in our CI. Onwards!