# Beyond the Defaults: Bringing Explainability and `topP` Control to LLM Workflows
Join me as I walk through a recent development session: adding info tooltips to our LLM workflow settings, introducing the `topP` parameter for fine-grained model control, and sharing the architectural decisions and lessons learned along the way.
Every developer knows the satisfaction of shipping features that genuinely improve a product. Recently, I tackled a session focused on two key enhancements for our LLM workflow builder: making existing model settings more understandable and adding a critical new parameter for finer control. The goal was simple: empower users with more context and more power.
This post dives into the technical journey, the decisions made, and the lessons learned from adding informative tooltips across our UI and integrating the `topP` parameter end-to-end across multiple LLM providers.
## The Problem: When Settings Aren't Self-Explanatory
Our workflow builder allows users to configure various LLM parameters like Temperature, Max Tokens, and Max Retries. While these are standard in the LLM world, they aren't always intuitive for every user. We needed a way to provide quick, in-context explanations without cluttering the UI.
The solution? InfoTips.
## Building a Lean InfoTip Component
Instead of pulling in a heavy UI library like Radix Tooltip for a relatively simple need, I opted to build a custom InfoTip component. This decision was driven by the desire to keep our bundle size lean and maintain full control over styling and behavior.
The InfoTip component is a simple React wrapper that renders a HelpCircle icon. On hover, a popover containing the explanatory text appears. The magic is in the CSS:
```tsx
// src/components/ui/info-tip.tsx
import { HelpCircle } from "lucide-react";
import React, { useState } from "react";

interface InfoTipProps {
  content: React.ReactNode;
  children?: React.ReactNode;
}

export const InfoTip: React.FC<InfoTipProps> = ({ content, children }) => {
  const [position, setPosition] = useState<"top" | "bottom">("bottom");

  const handleMouseEnter = (e: React.MouseEvent<HTMLDivElement>) => {
    const rect = e.currentTarget.getBoundingClientRect();
    const viewportHeight = window.innerHeight || document.documentElement.clientHeight;
    // Check if there's more space above than below
    if (rect.top > viewportHeight - rect.bottom) {
      setPosition("top");
    } else {
      setPosition("bottom");
    }
  };

  return (
    <div className="relative inline-flex items-center" onMouseEnter={handleMouseEnter}>
      {children}
      <div className="group relative ml-1 cursor-help">
        <HelpCircle className="h-4 w-4 text-gray-500 hover:text-gray-700" />
        <div
          className={`
            pointer-events-none absolute z-10 w-64 rounded-md bg-gray-800 px-3 py-2 text-sm text-white opacity-0 shadow-lg transition-opacity duration-200
            group-hover:pointer-events-auto group-hover:opacity-100
            ${position === "bottom" ? "top-full mt-2" : "bottom-full mb-2"}
            left-1/2 -translate-x-1/2
          `}
        >
          {content}
        </div>
      </div>
    </div>
  );
};
```
This component intelligently positions itself above or below the trigger based on available viewport space, ensuring the tooltip is always visible. Once `InfoTip` was ready, wrapping all the relevant settings in the dashboard's `new/page.tsx` was straightforward:
```tsx
// Example usage in src/app/(dashboard)/dashboard/workflows/new/page.tsx
<InfoTip content="Controls the randomness of the output. Higher values (e.g., 0.8) make output more random; lower values (e.g., 0.2) make it more focused and deterministic.">
  <label htmlFor="temperature">Temperature</label>
</InfoTip>
<Slider ... />
```
This significantly enhanced the user experience by providing instant context for each setting.
## The Power of Precision: Introducing `topP`
Beyond clarity, users often need more granular control over LLM behavior. Temperature is a great start, but `topP` (or nucleus sampling) offers an alternative way to manage randomness, often preferred for its more direct impact on the token probability distribution.

Our goal was to expose `topP` as an adjustable parameter, end-to-end, for all supported LLM providers.
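For intuition before the implementation walk-through: nucleus sampling keeps only the smallest set of highest-probability tokens whose cumulative probability reaches `topP`, then samples from that set. Here's a minimal, purely illustrative sketch of the idea (not how any provider implements it internally):

```typescript
// Illustrative nucleus (top-p) sampling over a token distribution.
interface TokenProb {
  token: string;
  prob: number;
}

function sampleTopP(dist: TokenProb[], topP: number): string {
  // Sort tokens by descending probability.
  const sorted = [...dist].sort((a, b) => b.prob - a.prob);

  // Keep the smallest prefix whose cumulative probability reaches topP.
  const nucleus: TokenProb[] = [];
  let cumulative = 0;
  for (const t of sorted) {
    nucleus.push(t);
    cumulative += t.prob;
    if (cumulative >= topP) break;
  }

  // Renormalize implicitly by sampling within the nucleus's total mass.
  let r = Math.random() * cumulative;
  for (const t of nucleus) {
    r -= t.prob;
    if (r <= 0) return t.token;
  }
  return nucleus[nucleus.length - 1].token; // assumes a non-empty distribution
}

// Note: with topP = 1.0 the nucleus is the full distribution -- no filtering at all.
```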
### The Full-Stack Journey of `topP`
1. **Database Schema (`prisma/schema.prisma`):** First, the `WorkflowStep` model needed to store `topP`:

   ```prisma
   // prisma/schema.prisma
   model WorkflowStep {
     // ... existing fields ...
     topP Float @default(1.0)
   }
   ```

   After updating, `npm run db:push` and `npm run db:generate` were crucial to sync the schema and regenerate the Prisma client.

2. **Frontend UI (`src/app/(dashboard)/dashboard/workflows/new/page.tsx`):** A new slider for `topP` (range 0.1-1.0, step 0.05) was added between `Temperature` and `Max Tokens`. The `StepConfig` interface was updated to reflect `topP: number`, and default values were set in `createEmptyStep()`.

3. **tRPC Router (`src/server/trpc/routers/workflows.ts`):** Our tRPC schemas for workflow step creation and updates needed `topP` validation, ensuring type safety and proper data handling (see the zod sketch after this list).

4. **LLM Service Types (`src/server/services/llm/types.ts`):** The `LLMCompletionOptions` interface, which defines the parameters passed to our LLM adapters, was updated:

   ```typescript
   // src/server/services/llm/types.ts
   export interface LLMCompletionOptions {
     temperature?: number;
     topP?: number; // <--- New!
     maxTokens?: number;
     stop?: string[];
   }
   ```

5. **LLM Adapters (`src/server/services/llm/adapters/*.ts`):** This was the most critical part: ensuring `topP` was correctly passed to each provider's specific API. A key decision here was to only send `topP` when its value is less than 1.0. This prevents overriding provider-specific defaults unnecessarily and aligns with common LLM API practice, where `topP = 1.0` typically means "no top-p filtering."

   - Anthropic (`anthropic.ts`): `top_p`
   - OpenAI (`openai.ts`): `top_p`
   - Google (`google.ts`): `topP` within `generationConfig` (a sketch follows the list)
   - Kimi (`kimi.ts`): `top_p`

   Here's a conceptual snippet from an adapter:

   ```typescript
   // src/server/services/llm/adapters/openai.ts (simplified)
   async completion(prompt: string, options: LLMCompletionOptions): Promise<string> {
     const completionParams: OpenAI.Chat.CompletionCreateParams = {
       model: "gpt-4", // Or a dynamic model
       messages: [{ role: "user", content: prompt }],
       temperature: options.temperature,
       max_tokens: options.maxTokens,
       // Only send top_p if it's explicitly set and not the default of 1.0
       ...(options.topP !== undefined && options.topP < 1.0 && { top_p: options.topP }),
     };
     const response = await this.client.chat.completions.create(completionParams);
     return response.choices[0]?.message?.content || "";
   }
   ```

6. **Workflow Engine (`src/server/services/workflow-engine.ts`):** Finally, the engine responsible for orchestrating LLM calls was updated to pass the `topP` value from the workflow step configuration into the `LLMCompletionOptions` (a sketch follows the list).
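For the router change in step 3, the validation looked roughly like this; the exact schema and surrounding field names here are illustrative, not lifted verbatim from our codebase:

```typescript
// src/server/trpc/routers/workflows.ts (sketch -- surrounding schema is illustrative)
import { z } from "zod";

const workflowStepInput = z.object({
  // ... existing fields (prompt, temperature, maxTokens, ...) ...
  // Mirror the UI slider: 0.1-1.0, defaulting to 1.0 ("no filtering").
  topP: z.number().min(0.1).max(1.0).default(1.0),
});
```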
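For the Google adapter called out in step 5, the same "only below 1.0" rule applies, just nested inside `generationConfig`. A sketch assuming the `@google/generative-ai` SDK; our actual adapter may differ in shape:

```typescript
// src/server/services/llm/adapters/google.ts (sketch -- names are illustrative)
import { GoogleGenerativeAI } from "@google/generative-ai";
import type { LLMCompletionOptions } from "../types";

export async function completion(
  prompt: string,
  options: LLMCompletionOptions
): Promise<string> {
  const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
  const model = genAI.getGenerativeModel({ model: "gemini-pro" });

  const result = await model.generateContent({
    contents: [{ role: "user", parts: [{ text: prompt }] }],
    generationConfig: {
      temperature: options.temperature,
      maxOutputTokens: options.maxTokens,
      // Same rule as the other adapters: omit topP at its default of 1.0.
      ...(options.topP !== undefined && options.topP < 1.0 && { topP: options.topP }),
    },
  });
  return result.response.text();
}
```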
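And for step 6, the engine change is essentially a pass-through from the persisted step to the adapter options. A sketch with hypothetical step field names:

```typescript
// src/server/services/workflow-engine.ts (sketch -- field names are hypothetical)
import type { WorkflowStep } from "@prisma/client";
import type { LLMCompletionOptions } from "./llm/types";

function toCompletionOptions(step: WorkflowStep): LLMCompletionOptions {
  return {
    temperature: step.temperature,
    topP: step.topP, // forwarded from the persisted row; adapters decide whether to send it
    maxTokens: step.maxTokens,
  };
}
```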
This full-stack implementation ensures that `topP` is configurable by the user, persisted in the database, validated by the API, and correctly utilized by the underlying LLM providers.
## Lessons from the Trenches
Reflecting on this session, a few key lessons emerged:
1. **Prioritize Universal Support for Core Features**

   - **Pain Point:** Initially, I considered adding both `Top P` and `Frequency Penalty`.
   - **Decision:** I opted only for `Top P` because all our integrated providers (OpenAI, Anthropic, Google, Kimi) support it natively. `Frequency Penalty`, while useful, isn't universally supported (Anthropic, for example, lacks it).
   - **Takeaway:** For new core features, especially those interacting with external APIs, prioritize parameters with universal support. This simplifies the adapter logic, reduces conditional code paths, and ensures a consistent user experience across providers. If a feature isn't universally supported, it may be better handled as a provider-specific override or left out of the core UI.

2. **Build vs. Buy: The Case for Lean UI Components**

   - **Pain Point:** We didn't have a tooltip component in our existing UI library.
   - **Decision:** Instead of pulling in a larger library like Radix UI just for a tooltip, I built a custom `InfoTip` using pure CSS for hover effects and a bit of React state for positioning.
   - **Takeaway:** For simple, isolated UI elements, a lean custom component can often be more efficient than a full-fledged library. It reduces bundle size, eliminates external dependencies, and gives you complete control. Always weigh the complexity of the component against the overhead of a new dependency.

3. **Mind the Defaults: Conditional Parameter Passing**

   - **Pain Point:** It's easy to just always pass `topP: 1.0`.
   - **Decision:** `topP` is only sent to providers when its value is `< 1.0` (see the helper sketch below).
   - **Takeaway:** Many LLM APIs have intelligent default behaviors. Sending a parameter at its default value (e.g., `top_p: 1.0` often means "no nucleus sampling applied") can sometimes override desired provider-specific optimizations or internal defaults. It's usually better to omit parameters that are at their default unless there's an explicit reason to send them. This keeps API calls cleaner and respects provider-side logic.
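One way to make that omission rule harder to forget is to centralize it in a small helper. This is a hypothetical sketch, not something we shipped:

```typescript
// Hypothetical helper (not shipped): include a key only when the value
// is set and differs from the provider's documented default.
function unlessDefault<T>(
  key: string,
  value: T | undefined,
  defaultValue: T
): Record<string, T> {
  return value !== undefined && value !== defaultValue ? { [key]: value } : {};
}

// Usage inside an adapter:
// const params = { temperature: options.temperature, ...unlessDefault("top_p", options.topP, 1.0) };
```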
## Conclusion
This development session was a great example of how small, focused improvements can significantly enhance a product. By adding contextual InfoTips, we made our LLM workflow settings more accessible. By integrating `topP` end-to-end, we gave advanced users a powerful new lever for controlling model output. The journey also reinforced valuable lessons about feature scoping, component design, and thoughtful API integration.
Now it's time for visual QA; then these changes will be committed and pushed, ready for our users to experience a more intuitive and powerful workflow builder.
{"thingsDone":["Created reusable InfoTip component","Added InfoTip tooltips to all workflow step settings","Implemented Top P slider in UI","Updated StepConfig interface with topP","Added topP default to step creation","Modified prisma/schema.prisma to include topP","Updated LLMCompletionOptions with topP","Modified LLM adapters (Anthropic, OpenAI, Google, Kimi) to pass topP conditionally","Updated workflow engine to pass topP to LLM calls","Updated tRPC routers for topP in workflow step schemas","Ran db:push and db:generate"],"pains":["Initially considered adding Frequency Penalty, but dropped due to lack of universal provider support","No existing UI tooltip component, required custom build"],"successes":["Both InfoTip and Top P features completed end-to-end","Typecheck passes","Schema synced","Lean custom InfoTip component built","Conditional topP passing implemented across adapters"],"techStack":["React","Next.js","TypeScript","Prisma","tRPC","PostgreSQL","LLMs (OpenAI, Anthropic, Google, Kimi)","CSS"]}