nyxcore-systems
7 min read

From Localhost to Hetzner Cloud: Navigating the Production Deployment Minefield for Nyxcore

Join me on a candid journey from local development to a fully operational production environment on Hetzner Cloud, filled with database migrations, Docker gotchas, and critical lessons in environment management.

Docker · PostgreSQL · Deployment · Hetzner · Node.js · Prisma · pgvector · AI/LLM · Lessons Learned · Production

The hum of local development servers is a comfortable sound, but there's a unique thrill (and a touch of terror) in pushing a project from that safe haven into the wild west of production. For Nyxcore, my AI-powered workflow assistant, that journey recently culminated in a successful deployment to Hetzner Cloud.

This isn't a polished success story where everything goes smoothly. This is a raw, unvarnished account of the bumps, the head-scratchers, and the hard-won lessons learned during a marathon session to get Nyxcore live, complete with a full database migration and critical fixes.

My goal was clear: get Nyxcore fully operational at https://nyxcore.cloud, migrate a 1:1 copy of my local development database, fix email-based authentication and login, and then immediately pivot to implementing a crucial dual-provider mode. And I'm thrilled to report: Mission accomplished. Nyxcore is live, breathing, and ready for its next big feature.

But getting there? That was a journey.

The Production Gauntlet: Initial Setup and First Hurdles

My production setup leverages Docker Compose on a Hetzner VM, a reliable stack for many projects. The first few steps involved spinning up the containers and getting the initial database seeded. For this, I used a temporary node:20-alpine container with nohup to run my seed-prod.sh script – a quick and dirty way to get the initial data in without blocking my terminal.
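
In case it's useful, the throwaway seeder looked roughly like this; the compose network name and the exact script invocation are assumptions from memory, not a copy-paste recipe:

bash
# One-off seed container: join the compose network, reuse the production env,
# and run the seed script in the background so the terminal stays free:
nohup docker run --rm \
  --network nyxcore_default \
  --env-file .env.production \
  -v "$(pwd)":/app -w /app \
  node:20-alpine sh -c "npm ci && sh seed-prod.sh" \
  > seed.log 2>&1 &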

Then came the first immediate challenge: email authentication. Nyxcore uses Resend, and despite setting RESEND_API_KEY and EMAIL_FROM in my .env.production file, emails weren't sending. This led to my first major lesson of the session.

Lesson Learned #1: Docker Compose restart vs. up --force-recreate

When you change an env_file (like .env.production) for a Docker Compose service, simply running docker compose restart <service_name> does not reload those environment variables. The container continues to run with the old environment.

I banged my head against this for a good while before realizing the critical difference. The fix?

bash
docker compose up -d --force-recreate app

This command forces Docker to tear down and recreate the app service container, ensuring it picks up the latest environment variables from the env_file. This immediately resolved the Resend email issue, and noreply@nyxcore.cloud was finally sending emails. A fundamental Docker Compose nuance I'm now acutely aware of.
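
A quick sanity check after recreating the container is to print the variables from inside it (service name assumed to be app):

bash
# Confirm the recreated container actually sees the updated values:
docker compose exec app printenv RESEND_API_KEY EMAIL_FROM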

The Database Migration Maze: pgvector and Persistent Data

The core of Nyxcore's intelligence lies in its vector embeddings, powered by pgvector in PostgreSQL. Migrating my local development database, complete with all its workflows, user data, and persona avatars, was paramount.

I used the standard pg_dump from local and psql to production. It was a beefy import: 10,557 lines, 16MB of data. Everything seemed fine until I tried to use any feature relying on vector embeddings.
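
For reference, the migration boiled down to something like the following; database names and connection details are placeholders, and note the --clean flag, which comes back to bite me below:

bash
# Locally: dump the dev database (--clean is the flag discussed in Lesson #2):
pg_dump --clean --if-exists -d nyxcore_dev > nyxcore_dump.sql

# On the server: pipe the dump into the production Postgres container:
docker compose exec -T db psql -U postgres -d nyxcore < nyxcore_dump.sql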

Lesson Learned #2: pg_dump --clean and Custom Database Extensions/Columns

My initial pg_dump command included the --clean flag, thinking it would guarantee a pristine import by dropping and recreating every object. It does exactly that, and that's precisely the problem for specialized setups like pgvector.

Because my embedding vector(1536) column on workflow_insights and, critically, the CREATE EXTENSION vector statement had been applied by hand rather than through my Prisma migrations, the clean import dropped them and nothing in the dump put them back. From that point on, every vector-related operation failed.

The workaround, which I had to perform twice (once after the initial seed, and again after the import wiped it), was to re-apply these custom schema changes:

sql
CREATE EXTENSION IF NOT EXISTS vector;
ALTER TABLE workflow_insights ADD COLUMN IF NOT EXISTS embedding vector(1536);
CREATE INDEX IF NOT EXISTS workflow_insights_embedding_idx
  ON workflow_insights USING hnsw (embedding vector_l2_ops); -- and any other necessary indexes

The takeaway: if you have custom extensions or manually added columns/indexes that aren't part of your standard ORM migrations, be very careful with --clean during full database imports. You might need to re-apply them post-import.
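
After re-applying them, a quick check confirms everything is back; the container and database names here are assumptions:

bash
# Verify the extension exists and the embedding column/index survived:
docker compose exec -T db psql -U postgres -d nyxcore \
  -c "SELECT extname FROM pg_extension WHERE extname = 'vector';" \
  -c "\d workflow_insights"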

Avatar Adventures and Data Reassignments

Nyxcore allows users to create AI personas, each with an avatar. I had 89 of these locally, and they needed to come to production.

Lesson Learned #3: macOS tar and Resource Forks on Linux

Transferring files from macOS to a Linux Docker container can be tricky. My initial tar command on macOS produced a .tar.gz that, when extracted in the Alpine Linux container, left behind annoying ._<filename> files and Permission denied errors. These are AppleDouble files that macOS tar emits to preserve resource forks and extended attributes; Linux has no use for them.

The solution is a simple environment variable:

bash
COPYFILE_DISABLE=1 tar -czf persona_avatars.tar.gz ./public/images/personas

Setting COPYFILE_DISABLE=1 before running tar prevents macOS from creating those pesky metadata files. Once the archive was on the server, I used docker cp to move it into the container and extracted it into a new persistent Docker volume (persona_avatars) mounted at /app/public/images/personas. This ensures avatars survive container rebuilds.
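
The transfer itself looked roughly like this; the SSH host alias, container name, and paths are assumptions:

bash
# From the laptop: ship the archive to the server:
scp persona_avatars.tar.gz hetzner:/root/

# On the server: move it into the app container and unpack it at /app, so the
# archive's ./public/images/personas paths land inside the mounted volume:
docker cp /root/persona_avatars.tar.gz nyxcore-app-1:/tmp/
docker compose exec app tar -xzf /tmp/persona_avatars.tar.gz -C /app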

Finally, I had to perform some SQL magic to reassign all data across 30 tables from an old GitHub user (my initial dev user) to my new email-based user on production. This was a straightforward UPDATE query for each table, but a crucial step for data consistency.
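
Conceptually, each of those tables got the same one-liner; the user IDs, table list, and user_id column are illustrative placeholders, not my actual schema:

bash
# Hypothetical sketch: point every owned row at the new production user.
OLD_ID='<old_github_user_id>'
NEW_ID='<new_email_user_id>'
for t in workflows workflow_insights personas api_keys; do  # ...and the rest of the 30 tables
  docker compose exec -T db psql -U postgres -d nyxcore \
    -c "UPDATE ${t} SET user_id = '${NEW_ID}' WHERE user_id = '${OLD_ID}';"
done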

The Encryption Key Conundrum: A Silent Killer

One of the most insidious problems I faced involved API keys. Nyxcore stores API keys for various LLM providers (Anthropic, GitHub, Google, Kimi, OpenAI), encrypted at rest. After the database import, these keys were silently failing decryption.

Lesson Learned #4: Synchronizing Encryption Keys Across Environments

The problem was simple, yet critical: my local development environment had one ENCRYPTION_KEY, and the fresh production environment had a different, randomly generated key. Since the data was encrypted locally, the production server couldn't decrypt it. There were no errors, just silently failing API calls.

The fix was to explicitly set the ENCRYPTION_KEY in production's .env.production file to match the one used in my local environment.

diff
# .env.production
- ENCRYPTION_KEY=<randomly_generated_production_key>
+ ENCRYPTION_KEY=<same_64-char_hex_key_as_local> # must match the key that encrypted the data

This is a vital security and data integrity lesson: if you migrate encrypted data, the decrypting environment must use the exact key that encrypted it. Keep the key consistent across environments, and of course never publish the real value.
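
One way to confirm the two environments agree without echoing the key anywhere is to compare hashes; the env file name and service name are assumptions:

bash
# Locally: hash just the key's value from the env file:
grep '^ENCRYPTION_KEY=' .env | cut -d= -f2 | tr -d '\n' | sha256sum

# In production: hash what the running container actually sees:
docker compose exec app sh -c 'printf %s "$ENCRYPTION_KEY" | sha256sum'

# If the two digests differ, decryption of migrated data will fail silently.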

Other Nudges: npm ci --ignore-scripts

A minor hiccup involved trying to speed up npm ci in a seed container with --ignore-scripts. That flag skips esbuild's install script, which is what puts the platform-specific binary in place that tsx needs at runtime, so script execution failed with a TransformError. The simple workaround was to run npm ci without --ignore-scripts, allowing esbuild to set itself up correctly.
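
The whole lesson fits in two lines:

bash
# Fails: --ignore-scripts skips esbuild's install script, so its platform
# binary never lands and tsx throws a TransformError at runtime:
npm ci --ignore-scripts

# Works: lifecycle scripts run and esbuild sets up its native binary:
npm ci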

Looking Ahead: What's Next for Nyxcore

With all these hurdles cleared, Nyxcore is now fully operational at https://nyxcore.cloud. The immediate next step is the implementation of a dual-provider mode, allowing workflows to dynamically select the best LLM provider based on cost, performance, or specific capabilities. This involves significant schema changes, core service development, and UI updates.

Beyond that, I'll be adding certbot auto-renewal for SSL and fixing some sshd MaxStartups issues on the server to prevent connection drops. The development journey never truly ends!
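
For the record, those two follow-ups will probably look something like this; treat the values as untested starting points, not final config:

bash
# Cron-based certbot renewal (`certbot renew` only touches certs near expiry):
(crontab -l 2>/dev/null; echo '0 3 * * * certbot renew --quiet') | crontab -

# Raise sshd's connection-burst ceiling (the default is MaxStartups 10:30:100):
sed -i 's/^#\?MaxStartups.*/MaxStartups 30:30:100/' /etc/ssh/sshd_config
systemctl restart sshd   # the unit is named "ssh" on Debian/Ubuntu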

Conclusion

Deploying to production is rarely a straightforward path. It demands persistence, debugging skill, and the invaluable habit of documenting pain points as lessons learned. Each obstacle overcome makes the system more robust and the developer more capable. Nyxcore is now live, a testament to the value of pushing through the inevitable challenges of bringing an idea to life in the cloud.


json
{"thingsDone":[
  "Completed production deployment to Hetzner Cloud (https://nyxcore.cloud)",
  "Successfully imported local database 1:1 to production (10,557 lines, 16MB)",
  "Fixed Resend email authentication by correctly setting environment variables",
  "Enabled pgvector extension and added embedding column/HNSW index on workflow_insights",
  "Transferred 89 persona avatars with macOS tar workaround",
  "Configured persistent Docker volume for persona avatars",
  "Reassigned all data from GitHub user to email user via SQL UPDATE",
  "Synchronized ENCRYPTION_KEY from local to production for API key decryption"
],"pains":[
  "Docker Compose `restart` not reloading `env_file` changes",
  "`npm ci --ignore-scripts` causing `esbuild` `TransformError`",
  "`pg_dump --clean` wiping `pgvector` extension and custom columns/indexes",
  "macOS `tar` including `._` resource fork files causing permission errors in Alpine containers",
  "API keys failing decryption due to `ENCRYPTION_KEY` mismatch between local and production"
],"successes":[
  "Production environment fully operational and accessible",
  "Email authentication working correctly",
  "Database migration complete with all data intact",
  "Vector embeddings functioning as expected",
  "Persona avatars successfully transferred and persisted",
  "All encrypted API keys now decrypt correctly",
  "Identified and documented critical Docker, database, and encryption best practices"
],"techStack":[
  "Docker",
  "Docker Compose",
  "PostgreSQL",
  "pgvector",
  "Hetzner Cloud",
  "Node.js",
  "Prisma",
  "Resend (email service)",
  "tar",
  "SQL",
  "LLM Providers (Anthropic, GitHub, Google, Kimi, OpenAI)"
]}