nyxcore-systems

From Protocol Specs to Production: Building a Rentable IPCHA API Service

Just wrapped up a marathon session transforming 15 complex protocol modules into a fully deployed, rentable API service complete with a dashboard. Here's the story of how we got there, the tech stack, and the invaluable lessons learned along the way.

FastAPI · Docker · Prisma · Next.js · API Development · System Design · Lessons Learned · TypeScript

It's 11:20 PM, March 14th, 2026. The hum of the server is a sweet lullaby tonight. What started as a daunting task – implementing all 15 modules of the obscure IPCHA protocol – has culminated in a fully deployed, production-ready API service. This wasn't just about writing code; it was about building a robust, rentable platform from the ground up, marrying intricate Python logic with a modern TypeScript frontend and a resilient Dockerized backend.

Let's break down the journey.

Phase 1: The IPCHA Protocol Core – Building the Brain

The first phase was purely about the protocol itself. IPCHA isn't your everyday algorithm; it's a beast involving scoring, sanitization, arbitration, sycophancy detection, evaluation, routing, auditing, red-teaming, and classification. My goal was to translate these specifications into a bulletproof Python implementation.

This phase involved deep dives into ipcha/, src/arbitration/, and extensive testing. We ended up with:

  • 15 distinct IPCHA modules, each handling a specific facet of the protocol.
  • 78 passing Python tests, covering every critical aspect from scoring to sycophancy detection. This extensive test suite was our bedrock, ensuring the core logic was sound before we even thought about exposing it.

This foundational work, committed as b34b44f on main, was the engine. But an engine needs a chassis, a dashboard, and a way for others to access its power.

Phase 2: The API & Dashboard – Bringing IPCHA to Life

This is where the real fun began – transforming a collection of Python modules into a rentable API service. The goal was to provide a secure, metered, and user-friendly interface to IPCHA's capabilities.

The FastAPI Sidecar: Our Python Gateway

To expose the IPCHA logic, I opted for a lightweight FastAPI application running as a sidecar. This allowed us to keep the Python environment isolated while providing a clean HTTP interface.

  • ipcha/api.py: This file became the heart of our API, exposing 10 critical endpoints: health, score, score/opposition, sanitize, validate, arbitrate, route, sycophancy/metrics, audit/rejections, and evaluate. Each endpoint maps directly to a core IPCHA function.
  • Docker Integration: The FastAPI service was containerized via ipcha/Dockerfile and integrated into our docker-compose.production.yml. It runs on the internal network (port 8100), and the main application declares a depends_on with a health-check condition, so the app only attempts to connect once the IPCHA sidecar is ready.
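The wiring described above might look roughly like this in docker-compose.production.yml — service names and port match the post, but the health-check command and intervals are illustrative, not the actual file:

```yaml
services:
  ipcha:
    build:
      context: .
      dockerfile: ipcha/Dockerfile
    expose:
      - "8100"            # internal network only; never published to the host
    healthcheck:
      # Assumes curl is installed in the sidecar image.
      test: ["CMD", "curl", "-f", "http://localhost:8100/health"]
      interval: 10s
      timeout: 5s
      retries: 5

  app:
    environment:
      - IPCHA_SIDECAR_URL=http://ipcha:8100
    depends_on:
      ipcha:
        condition: service_healthy   # wait until /health passes, not just container start
```

Using `condition: service_healthy` (rather than bare depends_on, which only orders container startup) is what guarantees the app doesn't race the sidecar.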

Data Persistence & Access Control: Prisma & Custom Tokens

A rentable API needs a way to manage users, tokens, and usage.

  • Prisma Schema: Three new models were introduced:
    • IpchaApiToken: For managing API access tokens.
    • IpchaUsageLog: To track every API call, essential for metering.
    • IpchaJob: For longer-running, asynchronous IPCHA operations.
    Across all three models we leveraged @db.Uuid for robust IDs, @@map for clean database table names, reverse relations for easy data traversal, and Row-Level Security (RLS) policies for multi-tenancy.
  • Token Service (src/server/services/ipcha-token-service.ts): A custom service to generate, validate, revoke, and list API tokens. We prefix them with nyx_ip_ for clarity and use SHA-256 hashing for security.
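A minimal sketch of that token flow — the function names and the 24 bytes of entropy are my assumptions, not the actual ipcha-token-service.ts: generate a nyx_ip_-prefixed secret, persist only its SHA-256 hash, and compare hashes on validation.

```typescript
import { createHash, randomBytes } from "node:crypto";

function sha256(value: string): string {
  return createHash("sha256").update(value).digest("hex");
}

// Generate a new API token. The plaintext is shown to the user exactly
// once; only the SHA-256 hash is persisted (e.g. on IpchaApiToken).
function generateToken(): { token: string; tokenHash: string } {
  const token = `nyx_ip_${randomBytes(24).toString("hex")}`;
  return { token, tokenHash: sha256(token) };
}

// Validate an incoming token by hashing it and comparing to the stored
// hash -- the plaintext never needs to live in the database.
function validateToken(presented: string, storedHash: string): boolean {
  return presented.startsWith("nyx_ip_") && sha256(presented) === storedHash;
}
```

The prefix makes tokens instantly recognizable in logs and secret scanners, and hashing means a leaked database doesn't leak usable credentials.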

The Gateway: REST Proxy & Middleware

Our main application acts as a proxy, forwarding requests to the IPCHA sidecar. This layer is crucial for managing access and ensuring fair use.

  • Sidecar Client (src/server/services/ipcha-client.ts): A simple TypeScript client, callIpcha<T>(), handles communication with the FastAPI sidecar, complete with IpchaClientError for robust error handling.
  • REST Middleware (src/app/api/v1/ipcha/middleware.ts): This is where the "rentable" aspect truly shines. We implemented:
    • Authentication: Validating nyx_ip_ tokens.
    • Rate Limiting: Both burst and daily quota limits to prevent abuse.
    • Scope Checking: Ensuring tokens only access authorized endpoints.
    • Usage Metering: Logging every call to IpchaUsageLog.
  • REST Proxy (src/app/api/v1/ipcha/_proxy.ts + 11 route files): A set of routes that simply forward incoming requests to the IPCHA sidecar and return the responses. This keeps our main app lean and focused.
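The scope and rate-limit checks from the middleware can be sketched as small pure-ish functions. Everything here is illustrative — the scope names, limits, and the in-memory Map; the real middleware presumably backs its counters with Redis or Postgres so limits survive restarts:

```typescript
interface TokenRecord {
  scopes: string[];   // e.g. ["score", "sanitize"]
  burstLimit: number; // max calls per minute
  dailyQuota: number; // max calls per day
}

// Scope check: a token may only hit endpoints it was issued for.
function hasScope(token: TokenRecord, endpoint: string): boolean {
  return token.scopes.includes(endpoint);
}

// Per-token counters keyed by time window. In-memory for the sketch only.
const counters = new Map<string, number>();

function allowCall(tokenId: string, token: TokenRecord, now: Date): boolean {
  const minuteKey = `${tokenId}:${now.toISOString().slice(0, 16)}`; // yyyy-mm-ddThh:mm
  const dayKey = `${tokenId}:${now.toISOString().slice(0, 10)}`;    // yyyy-mm-dd
  const burst = (counters.get(minuteKey) ?? 0) + 1;
  const daily = (counters.get(dayKey) ?? 0) + 1;
  if (burst > token.burstLimit || daily > token.dailyQuota) return false;
  counters.set(minuteKey, burst);
  counters.set(dayKey, daily);
  return true;
}
```

Deriving the window key from the timestamp gives fixed windows (simple, but allows a burst straddling a minute boundary); a sliding-window or token-bucket scheme is the usual upgrade.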

The User Experience: tRPC Router & Dashboard

Finally, to make the API manageable for both customers and administrators, a comprehensive dashboard was built using Next.js and tRPC.

  • tRPC Router (src/server/trpc/routers/ipcha.ts): This provided a type-safe API layer for our frontend.
    • Customer endpoints: For users to view usage and results, manage tokens, try the playground, and monitor jobs.
    • Admin endpoints: For system health, auditing rejections, managing jobs, and revoking tokens.
  • Dashboard (src/app/(dashboard)/dashboard/ipcha/): Eight distinct tab components offer a full view of the IPCHA ecosystem: usage, tokens, results, playground, admin-health, admin-audit, admin-jobs, and admin-tokens.

Everything typechecks cleanly, the build succeeds, and every sidecar endpoint has been smoke-tested. The feat/cli-api branch is now running live in production!

Lessons from the Trenches: My "Pain Log" Transformed

Not everything was smooth sailing. Here are some critical lessons learned during this sprint:

1. Docker Build Contexts & .dockerignore Nuances

  • The Problem: My FastAPI sidecar needed specific test files (tests/evaluation/) for some of its operations (e.g., the /evaluate endpoint).
  • The Attempt: I tried COPY tests/evaluation/ /app/tests/evaluation in the Dockerfile.
  • The Failure: The build failed because .dockerignore had a generic tests entry, preventing those files from being copied into the build context.
  • The Workaround: Explicitly added !tests/evaluation to .dockerignore.
  • Lesson Learned: Be extremely precise with .dockerignore. A broad exclusion can silently break builds or runtime dependencies. Always verify what's actually included in your build context.
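For reference, the fix relies on .dockerignore negation patterns. Order matters: a `!` re-include must come after the broad exclusion it overrides, because later lines win. The paths mirror the post; the comments are mine:

```
# .dockerignore
# Exclude all tests from the build context...
tests
# ...but re-include the fixtures the /evaluate endpoint needs.
# Negations must appear AFTER the pattern they carve out of.
!tests/evaluation
```

A quick way to verify what actually lands in the context is to build a throwaway stage that runs `ls -R` on the copied directory, or to inspect the "transferring context" size in BuildKit output.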

2. Prisma's InputJsonValue Type Strictness

  • The Problem: When trying to store arbitrary JSON data in a Prisma Json field, TypeScript complained about Record<string, unknown> not being assignable to InputJsonValue.
  • The Attempt: Directly assigning a Record<string, unknown> to a Prisma Json field.
  • The Failure: TS2322 – Prisma's InputJsonValue is stricter than just Record<string, unknown>, requiring specific primitive types or arrays/objects of those.
  • The Workaround: Used JSON.parse(JSON.stringify(body)) before passing the data to Prisma. This creates a new object that conforms to Prisma's expected JSON structure.
  • Lesson Learned: Prisma's type safety extends to JSON fields. If you're dealing with truly arbitrary JSON, you might need to explicitly serialize/deserialize or ensure your input structure strictly matches InputJsonValue's definition.
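The workaround is worth wrapping in a small helper (the name toJsonValue is mine, not from the codebase): the JSON round-trip drops anything not representable as JSON — undefined, functions, symbols — leaving a plain value that satisfies Prisma's InputJsonValue.

```typescript
// Round-trip through JSON to coerce arbitrary input into a plain,
// JSON-safe value. Non-JSON members (undefined, functions) are dropped,
// and class instances collapse to plain objects.
function toJsonValue(input: unknown): unknown {
  return JSON.parse(JSON.stringify(input));
}
```

Know the trade-offs: Date becomes an ISO string, NaN and Infinity become null, and circular references throw. That's acceptable for API request bodies, but not for every payload.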

3. TypeScript Type Narrowing & Default Values

  • The Problem: In the systemHealth tRPC procedure, I wanted to provide a default value for metrics if the IPCHA sidecar was unreachable, e.g., callIpcha().catch(() => ({ total: 0 })). This created a union type (ActualMetrics | { total: number }) that the frontend couldn't easily narrow.
  • The Attempt: Directly catching and returning a simplified object.
  • The Failure: The admin-health-tab.tsx component couldn't access specific fields like sycophancy.count without complex type guards, leading to TS2339.
  • The Workaround: Defined a const defaultMetrics with the full expected structure, then used callIpcha<typeof defaultMetrics>(). When handling the catch, I ensured the returned object also conformed to typeof defaultMetrics. For specific fields like rejections, I used .then(r => r.total) to ensure consistent access.
  • Lesson Learned: When providing default values for API responses, ensure the default object strictly matches the full expected type, especially with complex nested types. This makes type narrowing significantly easier for consumers.
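The pattern looks roughly like this — the metrics shape is my guess at the real payload, and it's shown synchronously for brevity; the same typing argument applies to callIpcha&lt;typeof defaultMetrics&gt;().catch(() => defaultMetrics):

```typescript
// Hypothetical shape of the sidecar's metrics payload.
interface IpchaMetrics {
  sycophancy: { count: number; flagged: number };
  rejections: { total: number };
}

// The fallback conforms to the FULL expected type, so success and
// failure paths have the same shape and no union reaches the frontend.
const defaultMetrics: IpchaMetrics = {
  sycophancy: { count: 0, flagged: 0 },
  rejections: { total: 0 },
};

function safeMetrics(fetch: () => IpchaMetrics): IpchaMetrics {
  try {
    return fetch();
  } catch {
    return defaultMetrics; // same shape as the success case
  }
}
```

Consumers can now write metrics.sycophancy.count unconditionally — no type guards, no TS2339.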

4. Environment Variables in Docker Compose

  • The Problem: The IPCHA sidecar's /validate endpoint (which uses OpenAI) required an OPENAI_API_KEY. I tried to pass it directly via OPENAI_API_KEY=${OPENAI_API_KEY} in docker-compose.production.yml.
  • The Attempt: Inline environment variable definition.
  • The Failure: Docker Compose warned about OPENAI_API_KEY being an empty string because it wasn't present in .env.production for the ipcha service.
  • The Workaround: Realized the IPCHA sidecar doesn't need the key at startup, only on demand for the /validate endpoint. I added IPCHA_SIDECAR_URL=http://ipcha:8100 to .env.production for the main app. For OPENAI_API_KEY, the plan is to add env_file: .env.production to the ipcha service definition itself, and then ensure the key is present in that file.
  • Lesson Learned: Environment variables in docker-compose are tricky. Always be explicit about env_file or environment for each service. Understand which services actually need which variables at startup vs. on-demand.
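Concretely, the plan from that workaround translates to something like the following — service names as in the post, everything else illustrative:

```yaml
services:
  ipcha:
    # Pulls OPENAI_API_KEY (and anything else) in at container start.
    # The sidecar only reads the key when /validate is actually called,
    # so a missing key degrades that one endpoint instead of the service.
    env_file: .env.production

  app:
    environment:
      - IPCHA_SIDECAR_URL=http://ipcha:8100
```

The distinction that bit me: `environment: OPENAI_API_KEY=${OPENAI_API_KEY}` interpolates from the shell running docker compose, while env_file reads the file directly — they are not interchangeable.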

What's Next? The Immediate Horizon

Even with a successful deployment, there are always immediate next steps:

  1. Add OPENAI_API_KEY to .env.production for the IPCHA service so the /validate endpoint functions in production.
  2. Merge feat/cli-api to main after final review.
  3. Test the dashboard UI thoroughly at the production URL, especially /dashboard/ipcha.
  4. Create the first IPCHA API token via the dashboard and test the external REST API.
  5. Refactor Docker Compose: Add env_file: .env.production to the ipcha service for cleaner environment variable management.
  6. Harden the RLS SQL: PostgreSQL has no CREATE POLICY IF NOT EXISTS, so wrap each policy in DROP POLICY IF EXISTS followed by CREATE POLICY to make the migration safe to re-run.
  7. Consider API Documentation: An auto-generated Swagger/OpenAPI page at /api/v1/ipcha/docs would be a great addition.

This project was a true full-stack marathon, spanning complex protocol implementation, robust API development, secure access control, and a rich user interface. It's incredibly satisfying to see it all come together and run smoothly in production. The journey from abstract specifications to a tangible, rentable product is always challenging but deeply rewarding.

```json
{
  "thingsDone": [
    "Implemented 15 IPCHA protocol modules and 78 tests",
    "Built FastAPI sidecar with 10 API endpoints",
    "Configured Docker for IPCHA service and app integration",
    "Designed Prisma schema with 3 new models (tokens, usage, jobs)",
    "Developed custom token service for API access",
    "Created sidecar client for type-safe communication",
    "Implemented REST middleware for auth, rate limiting, metering",
    "Set up REST proxy for all IPCHA endpoints",
    "Built tRPC router for customer and admin dashboard features",
    "Developed 8-tab IPCHA dashboard (usage, tokens, results, health, audit, jobs)",
    "Achieved typecheck clean and successful build",
    "Deployed to production on feat/cli-api branch"
  ],
  "pains": [
    "Docker .dockerignore excluding necessary files",
    "Prisma InputJsonValue type strictness with generic JSON",
    "TypeScript union types breaking type narrowing for default values",
    "Environment variable scope issues in Docker Compose"
  ],
  "successes": [
    "Robust IPCHA protocol implementation with extensive tests",
    "Seamless integration of Python FastAPI with Next.js/TypeScript app",
    "Comprehensive API access control and usage metering",
    "Intuitive dashboard for both users and administrators",
    "Successful production deployment of complex system"
  ],
  "techStack": [
    "Python",
    "FastAPI",
    "Uvicorn",
    "Docker",
    "Docker Compose",
    "PostgreSQL",
    "Redis",
    "Prisma",
    "TypeScript",
    "Next.js",
    "tRPC",
    "React"
  ]
}
```