nyxcore-systems

Beyond the Build: Architecting Automated Blog Generation and Bulletproof UX Feedback

Dive into a recent development sprint where we tackled automating blog post generation via GitHub webhooks, scaled content batching, and implemented robust, project-wide toast notifications for an unbeatable user experience.

webdev, github-actions, webhook, trpc, nextjs, typescript, prisma, ux, automation, developer-experience, system-design

Every developer knows the satisfaction of a clean commit after a productive session. This past sprint was one of those: a deep dive into enhancing a core part of my project, the blog generation system. The goal was clear: make it more automated, more scalable, and give users immediate, clear feedback.

We tackled three main areas: bumping the blog batch generation limit, setting up a fully automated blog generation pipeline triggered by GitHub pushes, and rolling out a project-wide toast notification system. Let's break down the journey, the technical decisions, and a crucial lesson learned along the way.

Scaling Up: From 10 to 100 Blog Entries at a Time

My blog generation system allows me to turn development session memories into structured blog posts. Previously, I had a hard limit of processing 10 memory entries at a time. While good for testing, it quickly became a bottleneck for real-world usage.

The fix was straightforward but impactful. A single line change unlocked a significant scalability improvement:

```typescript
// src/server/trpc/routers/projects.ts (around line 869)
// Before: max(10), After: max(100)
memoryEntryIds: z.array(z.string()).max(100), // Now allows up to 100 memory entries per batch
```

This simple adjustment means I can now select up to 100 memory entries and generate blog posts from them in a single batch. It's a small change with a big impact on efficiency, especially when catching up on a backlog of development notes.

The Automation Engine: Bringing Blogs to Life on GitHub Push

This was the star of the show – a complete overhaul of how blog posts are triggered. The dream: push a new .memory/letter_*.md file to my repo, and have a blog post automatically generated and ready for review. This required a multi-faceted approach involving webhooks, a new REST endpoint, and GitHub Actions.

Catching the Push: The GitHub Webhook Service

The first piece of the puzzle was to listen for GitHub push events. I extended my existing GitHubPushPayload handling in src/server/services/github-webhook.ts.

Here's the gist:

  1. Payload Expansion: Ensure the webhook payload includes after (the commit SHA), commits[].added/modified/removed (to see file changes), and repository.default_branch (to only act on pushes to the main branch).
  2. File Detection: I introduced extractMemoryFiles() to intelligently scan the commit for files matching the .memory/letter_*.md pattern.
  3. Content Fetching: For each detected memory file, fetchGitHubFile() uses the GitHub API to pull its raw content.
  4. Triggering Generation: Finally, handleMemoryFilePush() takes the content, creates a MemoryEntry placeholder, and then, crucially, fires off the generateBlogPost() function in the background. This is a "fire-and-forget" operation, ensuring the webhook response is fast while the blog generation happens asynchronously.

This service acts as the initial gatekeeper, filtering relevant pushes and preparing the data for the next stage.
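The file-detection step can be sketched like this. This is a minimal illustration, not the actual service code: the simplified `PushCommit` shape is an assumption (the real `GitHubPushPayload` carries many more fields), but the matching logic mirrors the `.memory/letter_*.md` pattern described above.

```typescript
// Simplified shape of a commit object from a GitHub push payload (assumption:
// the real payload has more fields; only added/modified/removed matter here).
interface PushCommit {
  added: string[];
  modified: string[];
  removed: string[];
}

// Collect every .memory/letter_*.md file that was added or modified in the push.
// Removed files are ignored: there is nothing to generate from a deletion.
function extractMemoryFiles(commits: PushCommit[]): string[] {
  const pattern = /^\.memory\/letter_.*\.md$/;
  const files = new Set<string>();
  for (const commit of commits) {
    for (const path of [...commit.added, ...commit.modified]) {
      if (pattern.test(path)) files.add(path);
    }
  }
  return Array.from(files); // de-duplicated, in first-seen order
}
```

Using a `Set` also covers the edge case where the same file appears in several commits of one push.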

The Secure Gateway: A New REST Endpoint

While the webhook handles the initial GitHub event, I needed a way for my GitHub Actions workflow to securely trigger the actual blog generation process within my application. This led to the creation of a new REST endpoint: src/app/api/v1/blog/auto-generate/route.ts.

Why a separate REST endpoint instead of directly calling the tRPC procedure? It provides a cleaner, more standard interface for external services like GitHub Actions, and allows for a simple Bearer token authentication mechanism using BLOG_AUTO_GENERATE_SECRET.

```typescript
// src/app/api/v1/blog/auto-generate/route.ts (simplified)
import { NextRequest, NextResponse } from 'next/server';
import { generateBlogPost } from '@/server/services/blog-generator'; // Simplified path

export async function POST(req: NextRequest) {
  // 1. Authenticate with BLOG_AUTO_GENERATE_SECRET Bearer token
  const authHeader = req.headers.get('Authorization');
  if (!authHeader || !authHeader.startsWith('Bearer ')) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }
  const token = authHeader.split(' ')[1];
  if (token !== process.env.BLOG_AUTO_GENERATE_SECRET) {
    return NextResponse.json({ error: 'Forbidden' }, { status: 403 });
  }

  // 2. Parse request body: { projectId, files: [{title, content}], provider?, model? }
  const { projectId, files, provider, model } = await req.json();

  // 3. Create MemoryEntry and trigger background blog generation
  // ... logic to create MemoryEntry and then call generateBlogPost() ...

  return NextResponse.json({ message: 'Blog generation initiated' }, { status: 202 });
}
```

This endpoint acts as a secure, authenticated entry point for my GitHub Actions workflow to tell my application: "Hey, I've got new memory files, process them!"
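From the caller's side, hitting this endpoint looks roughly like the sketch below. It is a hypothetical client illustration, not code from the repo: the helper names are mine, and only the `{ projectId, files }` contract and the Bearer-token header come from the endpoint described above.

```typescript
// Hypothetical client sketch for the auto-generate endpoint.
// Assumes Node 18+ (global fetch); helper names are illustrative.
interface BlogFile {
  title: string;
  content: string;
}

// Build the JSON body matching the { projectId, files } contract.
// JSON.stringify handles escaping of quotes and newlines in file content.
function buildAutoGeneratePayload(projectId: string, files: BlogFile[]): string {
  return JSON.stringify({ projectId, files });
}

async function triggerAutoGenerate(
  baseUrl: string,
  secret: string,
  projectId: string,
  files: BlogFile[],
): Promise<number> {
  const res = await fetch(`${baseUrl}/api/v1/blog/auto-generate`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${secret}`,
      'Content-Type': 'application/json',
    },
    body: buildAutoGeneratePayload(projectId, files),
  });
  return res.status; // 202 means generation was accepted and runs in the background
}
```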

Orchestration with GitHub Actions: vibe_publisher.yml

The final piece tying everything together is the GitHub Actions workflow. I rewrote .github/workflows/vibe_publisher.yml to smartly detect new .memory/ files and, if configured, call the new REST endpoint.

The key here is using git diff to identify new files added in a push and then making a curl request to the NYXCORE_URL with the BLOG_AUTO_GENERATE_SECRET and the relevant file content.

```yaml
# .github/workflows/vibe_publisher.yml (simplified relevant part)
name: Vibe Publisher

on:
  push:
    branches:
      - main # Or your default branch

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0 # Needed so git diff can reach the pre-push commit

      - name: Detect new .memory files
        id: detect_memory_files
        run: |
          MEMORY_FILES=$(git diff --name-only ${{ github.event.before }} ${{ github.sha }} | grep '^\.memory/letter_.*\.md$' || true)
          # Multi-line values require the heredoc syntax for GITHUB_OUTPUT
          {
            echo "MEMORY_FILES<<EOF"
            echo "${MEMORY_FILES}"
            echo "EOF"
          } >> "$GITHUB_OUTPUT"

      - name: Trigger Auto Blog Generation
        if: ${{ steps.detect_memory_files.outputs.MEMORY_FILES != '' && secrets.NYXCORE_URL != '' && secrets.BLOG_AUTO_GENERATE_SECRET != '' && secrets.NYXCORE_PROJECT_ID != '' }}
        run: |
          for file in ${{ steps.detect_memory_files.outputs.MEMORY_FILES }}; do
            TITLE=$(basename "$file" .md | sed 's/letter_//g' | sed 's/_/ /g' | sed -r 's/\b(.)/\U\1/g')
            echo "Processing file: $file with title: $TITLE"
            # Build the JSON payload with jq so quotes and newlines in the
            # file content are escaped safely instead of breaking the body
            PAYLOAD=$(jq -n \
              --arg projectId "$NYXCORE_PROJECT_ID" \
              --arg title "$TITLE" \
              --rawfile content "$file" \
              '{projectId: $projectId, files: [{title: $title, content: $content}]}')
            curl -X POST \
              -H "Authorization: Bearer $BLOG_AUTO_GENERATE_SECRET" \
              -H "Content-Type: application/json" \
              -d "$PAYLOAD" \
              "$NYXCORE_URL/api/v1/blog/auto-generate"
          done
        env:
          NYXCORE_URL: ${{ secrets.NYXCORE_URL }}
          BLOG_AUTO_GENERATE_SECRET: ${{ secrets.BLOG_AUTO_GENERATE_SECRET }}
          NYXCORE_PROJECT_ID: ${{ secrets.NYXCORE_PROJECT_ID }}

      # Fallback to original Python script + PR flow if secrets not configured
      # ... (existing logic for manual PR flow) ...
```

This setup ensures that any push to the main branch containing new .memory/letter_*.md files will automatically trigger the blog generation process, making my content creation workflow incredibly smooth.

Enhancing User Experience: The Power of Toasts

No matter how robust your backend, a silent failure (or success!) is a poor user experience. I decided to implement a comprehensive toast notification system across the entire project detail page.

The Toast System Core: use-toast.ts

I built a module-level toast system in src/hooks/use-toast.ts. It provides:

  • toast(): The main function for showing a default toast.
  • toast.error(): A dedicated function for error messages (often red/destructive styled).
  • toast.success(): For positive feedback (often green/success styled).
  • useToast(): A hook to access the toast functions within components.

Key features include auto-dismissal after 5 seconds, and a maximum of 3 visible toasts at any given time to prevent UI clutter.
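The module-level state behind those features can be sketched as below. This is a simplified illustration, not the actual `use-toast.ts` (the real hook also notifies React listeners and schedules auto-dismiss timers); the constant names are assumptions, but the 3-toast cap and newest-first ordering match the behavior described above.

```typescript
// Sketch of a module-level toast store (simplified; names are illustrative).
interface ToastItem {
  id: number;
  message: string;
  variant: 'default' | 'error' | 'success';
}

const TOAST_LIMIT = 3;       // at most 3 toasts visible at once
const TOAST_DURATION = 5000; // auto-dismiss after 5 seconds (via setTimeout in the real hook)

let nextId = 0;
let toasts: ToastItem[] = [];

// Add a toast; newest first, oldest dropped once the limit is exceeded.
function addToast(message: string, variant: ToastItem['variant'] = 'default'): ToastItem[] {
  const item: ToastItem = { id: nextId++, message, variant };
  toasts = [item, ...toasts].slice(0, TOAST_LIMIT);
  return toasts;
}

// Remove a toast by id (called on manual close or when the timer fires).
function dismissToast(id: number): ToastItem[] {
  toasts = toasts.filter((t) => t.id !== id);
  return toasts;
}
```

Keeping the state at module level (rather than inside a component) is what lets `toast()` be called from anywhere, with `useToast()` simply subscribing components to changes.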

Rendering the Toasts: toaster.tsx

The actual visual rendering is handled by src/components/ui/toaster.tsx, leveraging existing Radix Toast primitives. This keeps the UI consistent with the rest of the application's design system.

Global Integration: providers.tsx

To make the toast system available globally, the <Toaster /> component was added inside <ToastProvider> in src/app/providers.tsx. This ensures that any component in the application can trigger a toast.

Project-Wide Feedback: page.tsx

The real work was integrating these toasts. I went through src/app/(dashboard)/dashboard/projects/[id]/page.tsx and added onError and onSuccess callbacks to all mutations. This includes:

  • updateProject, deleteProject
  • generateBatch, reimport
  • createNote, updateNote, deleteNote, enrichNote, applyEnrichment
  • All mutations related to Active Processes, Reports, and Axiom (e.g., createMutation, updateMutation, deleteMutation, extractMutation, confirmMutation, fetchUrlMutation, reprocessMutation, createTokenMutation, revokeTokenMutation).
  • createWorkflowMutation, importTodoMutation, createGroupWorkflowMutation, deleteManyMutation.

This meticulous integration means that almost every significant user action on the project detail page now provides immediate, clear feedback, greatly improving the user experience and reducing confusion when things go wrong (or right!).
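The wiring pattern behind those callbacks can be captured by a small helper. To be clear, this is an illustrative sketch rather than the actual page code (which attaches `onSuccess`/`onError` to each tRPC mutation individually); the helper name and types are mine.

```typescript
// Illustrative helper: produce consistent toast callbacks for a mutation.
// Not the actual page code; names and types are assumptions.
interface ToastApi {
  success: (msg: string) => void;
  error: (msg: string) => void;
}

interface MutationCallbacks<T> {
  onSuccess?: (data: T) => void;
  onError?: (err: Error) => void;
}

// Returns the { onSuccess, onError } pair passed into a mutation's options,
// so every mutation reports a success toast or surfaces the error message.
function withToastFeedback<T>(toast: ToastApi, successMsg: string): MutationCallbacks<T> {
  return {
    onSuccess: () => toast.success(successMsg),
    onError: (err) => toast.error(err.message || 'Something went wrong'),
  };
}
```

Used as, e.g., `useMutation({ ...withToastFeedback(toast, 'Note saved') })`, which keeps the per-mutation boilerplate down to a single line.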

Lesson Learned: The "Active" Field That Wasn't

Even in a productive sprint, there are always moments that make you pause and scratch your head. My "pain log" for this session highlighted a classic ORM pitfall.

The Scenario: When setting up the BLOG_AUTO_GENERATE_SECRET authentication for the webhook, I initially tried to query my ApiKey model with a condition to ensure the key was isActive. My thought process was, "I need to find an active API key to validate the incoming request."

The Code (Attempt):

```typescript
// src/server/services/github-webhook.ts (initial thought)
const apiKey = await prisma.apiKey.findFirst({
  where: {
    key: incomingKey,
    isActive: true // <-- The problematic line
  }
});
```

The Failure: TypeScript immediately flagged TS2353: Property 'isActive' does not exist on type 'ApiKeyWhereInput'.

The Discovery: After a quick check of my schema.prisma file, I realized: the ApiKey model simply doesn't have an isActive field. Keys are considered active if they exist and haven't expired (expiry is checked at usage time, not as a direct field on the model). My mental model of the ApiKey schema was slightly out of sync with reality.

The Takeaway: Always, always, always double-check your schema definitions, especially when working with ORMs like Prisma. It's easy to assume a field exists or behaves a certain way based on common patterns, but the source of truth is your schema.prisma file. A quick glance can save you debugging time and prevent assumptions from leading to incorrect logic. In this case, removing the isActive filter was the correct approach, relying on the existence of the key and later expiry checks.
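The validity rule the fix actually relies on can be written as a small predicate. The field names below are assumptions based on the schema behavior described above (existence plus an expiry check), not a copy of the real model.

```typescript
// Sketch of the "active if it exists and hasn't expired" rule.
// Field names are assumptions; the real check happens at usage time.
interface ApiKeyRecord {
  key: string;
  expiresAt: Date | null; // null = never expires
}

function isKeyUsable(record: ApiKeyRecord | null, now: Date = new Date()): boolean {
  if (!record) return false;                  // key must exist in the database
  if (record.expiresAt === null) return true; // no expiry configured
  return record.expiresAt.getTime() > now.getTime();
}
```

This keeps the lookup itself to a plain `findFirst({ where: { key } })`, with liveness decided in application code rather than by a schema field that doesn't exist.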

Looking Ahead

This session laid critical groundwork for a more robust and user-friendly application. With the BLOG_AUTO_GENERATE_SECRET now a required environment variable (both locally and in GitHub secrets), the next steps are all about validation:

  1. Test Toasts: Trigger API errors to confirm the toasts appear as expected.
  2. Test Auto-Generation: Push a .memory/ file and watch the magic happen – verify the webhook creates the MemoryEntry and BlogPost.
  3. E2E Tests: Build out end-to-end tests for the entire blog generation flow, memory picker, and the docs pipeline sidebar.
  4. Persona Avatars: Give the Ipcha Mistabra persona a proper avatar image.

It's exciting to see these pieces come together, transforming a collection of session memories into a living, automated content engine. The journey continues!

```json
{
  "thingsDone": [
    "Batch blog generation limit increased from 10 to 100 entries",
    "Automated blog generation triggered by GitHub push events on `.memory/*` files",
    "Implemented a new REST endpoint for secure auto-generation triggering",
    "Rewrote GitHub Actions workflow for smart file detection and API calls",
    "Developed and integrated a project-wide toast notification system for API errors and successes"
  ],
  "pains": [
    "Attempted to filter Prisma `ApiKey` model by a non-existent `isActive` field, leading to a TypeScript error"
  ],
  "successes": [
    "Achieved seamless background blog generation automation",
    "Established a comprehensive and user-friendly feedback system with toasts",
    "Successfully integrated GitHub webhooks, a custom REST API, and GitHub Actions for a robust content pipeline"
  ],
  "techStack": ["Next.js", "tRPC", "TypeScript", "Prisma", "GitHub Actions", "Radix UI", "Node.js", "REST API", "GitHub Webhooks"]
}
```