
Multi-LLM Routing

Intelligent routing across Claude, GPT-4o, and Gemini based on task requirements.

Steadybase doesn't lock you into a single AI provider. Different tasks have different requirements, and the platform routes to the best model for each job.

Supported Models

| Provider  | Model             | Best For                                     | Used In                                    |
|-----------|-------------------|----------------------------------------------|--------------------------------------------|
| Anthropic | claude-sonnet-4-5 | Analysis, planning, research, reasoning      | Drew planning, Lisa research, Brain chat   |
| OpenAI    | GPT-4o            | Creative content, outreach, structured output| Brian content drafts, dashboard generation |
| Google    | Gemini 2.0 Flash  | Fast classification, quick routing, triage   | Lead scoring, ticket classification        |

Routing Logic

The platform routes based on task type, not user preference:

```
Request → Task Classification → Model Selection → Execution
                                       │
                 ┌─────────────────────┼─────────────────────┐
                 ▼                     ▼                     ▼
              Claude                GPT-4o                Gemini
           (reasoning)            (creative)             (speed)
```
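The classification-to-model step can be sketched as a simple lookup. This is an illustrative sketch only: the task-type labels, `TASK_MODEL_MAP`, and `select_model` are hypothetical names, not Steadybase's actual API.

```python
# Hypothetical task-type → model mapping, following the routing table above.
# Task-type labels and function names are illustrative.
TASK_MODEL_MAP = {
    # Claude: reasoning-heavy work
    "planning": "claude-sonnet-4-5",
    "analysis": "claude-sonnet-4-5",
    "research": "claude-sonnet-4-5",
    "chat": "claude-sonnet-4-5",
    # GPT-4o: creative and structured output
    "content": "gpt-4o",
    "document": "gpt-4o",
    "dashboard": "gpt-4o",
    # Gemini Flash: fast, cheap classification
    "scoring": "gemini-2.0-flash",
    "classification": "gemini-2.0-flash",
    "triage": "gemini-2.0-flash",
}

def select_model(task_type: str) -> str:
    """Map a classified task type to a model; unknown types fall back
    to the reasoning model as a safe default."""
    return TASK_MODEL_MAP.get(task_type, "claude-sonnet-4-5")

print(select_model("scoring"))       # gemini-2.0-flash
print(select_model("content"))       # gpt-4o
print(select_model("unknown-task"))  # claude-sonnet-4-5
```

A static map like this keeps routing deterministic and auditable; the classification step itself can still be model-driven.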

When Claude is Used

  • Task decomposition — Breaking complex requests into subtasks
  • Data analysis — Interpreting call transcripts, CRM data, signals
  • Research synthesis — Combining multiple data sources into insights
  • Conversational AI — Brain chat interface

When GPT-4o is Used

  • Content generation — Outreach emails, LinkedIn messages, call scripts
  • Document creation — QBR decks, proposals, executive summaries
  • Dashboard compilation — Structured multi-section outputs

When Gemini is Used

  • Lead scoring — Quick 0-100 score based on enriched data
  • Ticket classification — Route support tickets to the right queue
  • Fast triage — Any task where speed matters more than depth
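As a concrete illustration of the triage case, a classifier constrains output to a fixed set of queues. The queue names and the keyword logic below are hypothetical stand-ins for the actual fast-model call.

```python
# Illustrative ticket-triage sketch. The queue names are hypothetical,
# and the keyword matching stands in for a Gemini Flash classification
# call constrained to return one of these labels.
QUEUES = ("billing", "technical", "sales", "general")

def classify_ticket(subject: str) -> str:
    """Return one queue label for a ticket subject line."""
    text = subject.lower()
    if "invoice" in text or "refund" in text:
        return "billing"
    if "error" in text or "bug" in text:
        return "technical"
    if "pricing" in text or "demo" in text:
        return "sales"
    return "general"

print(classify_ticket("Refund for invoice #1042"))  # billing
print(classify_ticket("Bug: error on login page"))  # technical
```

Constraining the model to a closed label set is what makes a fast model viable here: the task is selection, not generation.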

Cost Optimization

Multi-LLM routing also optimizes costs. Not every task needs the most expensive model:

:::note
Using Gemini Flash for classification tasks is approximately 10x cheaper than using Claude or GPT-4o for the same task, with comparable accuracy for simple routing decisions.
:::
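The savings compound at volume. The per-token prices below are hypothetical placeholders chosen to reflect the roughly 10x gap noted above; real provider pricing varies and changes over time.

```python
# Hypothetical per-1M-input-token prices, for illustration only.
# Chosen so the fast model is ~10x cheaper, per the note above;
# not real provider pricing.
PRICE_PER_1M_INPUT = {
    "claude-sonnet-4-5": 3.00,
    "gpt-4o": 2.50,
    "gemini-2.0-flash": 0.30,
}

def task_cost(model: str, input_tokens: int) -> float:
    """Input-side cost of one task, in dollars."""
    return PRICE_PER_1M_INPUT[model] * input_tokens / 1_000_000

# 10,000 classification calls at ~500 input tokens each:
expensive = task_cost("claude-sonnet-4-5", 500) * 10_000
cheap = task_cost("gemini-2.0-flash", 500) * 10_000
print(f"${expensive:.2f} vs ${cheap:.2f}")  # $15.00 vs $1.50
```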

Adding New Models

The multi-LLM architecture is designed to be extensible. New models can be added by:

  1. Adding the provider SDK or API client
  2. Defining routing rules for the new model's strengths
  3. Updating workflow activities to include the new routing option
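The steps above suggest a registry-style design. The sketch below is a minimal illustration of that idea; `register_model`, `route`, and the model entries are hypothetical names, not Steadybase internals.

```python
# Minimal sketch of an extensible model registry. All names here are
# illustrative; the on-prem entry mirrors the planned local-model case.
from dataclasses import dataclass, field

@dataclass
class ModelConfig:
    provider: str
    model_id: str
    strengths: set = field(default_factory=set)

REGISTRY: dict[str, ModelConfig] = {}

def register_model(name: str, provider: str, model_id: str, strengths) -> None:
    """Step 1-2: add a provider entry and declare its strengths."""
    REGISTRY[name] = ModelConfig(provider, model_id, set(strengths))

def route(task_type: str, default: str = "claude") -> str:
    """Step 3: routing picks the first registered model claiming the task."""
    for name, cfg in REGISTRY.items():
        if task_type in cfg.strengths:
            return name
    return default

register_model("claude", "anthropic", "claude-sonnet-4-5", {"analysis", "planning"})
register_model("local-llm", "on-prem", "llama-3-70b", {"sensitive-data"})

print(route("sensitive-data"))  # local-llm
print(route("planning"))        # claude
```

With this shape, adding a model is a single `register_model` call plus its routing rules; existing workflows pick it up without modification.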

Future planned additions include local/on-premise models for sensitive data processing.
