Open Source SDK v0.0.1

The open-source router
for your LLM stack

Bring your own keys. Smart routing based on task complexity, automated fallbacks, and cost optimization. Drop the LLMora SDK into your app and make your AI resilient in minutes.

$ pip install llmora
main.py

from llmora import LLMora
import os

# Initialize with your API keys
client = LLMora(
    providers={
        "openai": os.getenv("OPENAI_API_KEY"),
        "anthropic": os.getenv("ANTHROPIC_API_KEY"),
    },
)

# Routes to the right model based on task complexity
response = client.chat.completions.create(
    model="router:auto",
    messages=[{"role": "user", "content": "Optimize this query."}],
    fallback=["gpt-5.4", "claude-4.6-opus"],
)
Execution Trace

Right model. Right task.
Every single time.

$ llmora logs --follow
[Trace diagram: request req_9a8f7b flows from Your App into LLMora's router:auto, which dispatches Light tasks to Haiku, Mid tasks to Sonnet, and Heavy tasks to Opus.]

Bring keys for your favorite providers

OpenAI
Anthropic
Mistral
Cohere
Google
Meta
Together
Groq

One SDK. Total Control.

Abstract away the complexity of managing multiple providers and rate limits. Bring your own keys and ship faster.

Smart Task Routing

Simple tasks go to lightweight models. Complex tasks get routed to the most capable ones. Our cost function optimizes for quality and spend automatically.

Task      Best-Fit Model
Simple    Haiku
Medium    Sonnet
Complex   Opus
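The routing idea above can be sketched as a simple scoring function. The heuristic, thresholds, and model names below are illustrative assumptions for the sketch, not LLMora's actual cost function.

```python
# Illustrative complexity-based routing sketch (assumed heuristic,
# not LLMora's real scoring logic).
def estimate_complexity(prompt: str) -> float:
    # Toy heuristic: longer prompts and "hard task" keywords score higher.
    score = min(len(prompt) / 500, 1.0)
    if any(kw in prompt.lower() for kw in ("prove", "optimize", "refactor")):
        score += 0.5
    return min(score, 1.0)

def route(prompt: str) -> str:
    score = estimate_complexity(prompt)
    if score < 0.3:
        return "claude-haiku"   # light tier
    elif score < 0.7:
        return "claude-sonnet"  # mid tier
    return "claude-opus"        # heavy tier

light = route("What is 2 + 2?")                 # short prompt -> lightweight model
heavy = route("Optimize this SQL query " * 20)  # long prompt + keyword -> heavy model
```

A production router would score on richer signals (structure, requested output length, past latency), but the shape is the same: map a complexity estimate onto tiers.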

Automated Fallbacks

If your primary model throws an error, LLMora fails over to a backup instantly.

Primary   OK
Fallback  IDLE

Local Telemetry

Track exact token usage and costs directly in your own logs.
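One minimal way to keep telemetry local is to log per-request usage yourself as structured lines. The usage fields and per-token prices below are illustrative assumptions, not LLMora's schema or real provider pricing.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-telemetry")

# Assumed example prices (USD per 1K tokens); check your provider's rate card.
PRICES = {"haiku": 0.00025, "sonnet": 0.003}

def record_usage(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    # Compute the request cost and emit one structured log line locally.
    cost = (prompt_tokens + completion_tokens) / 1000 * PRICES[model]
    log.info(json.dumps({
        "model": model,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "cost_usd": round(cost, 6),
    }))
    return cost

cost = record_usage("haiku", prompt_tokens=120, completion_tokens=80)
```

Because the log lines are JSON, they slot straight into whatever log pipeline you already run; nothing leaves your infrastructure.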

Cost Optimization

Stop overpaying for simple tasks. The built-in cost function routes cheap prompts to lightweight models and reserves expensive calls for when they actually matter.

Simple tasks    $0.0002
Medium tasks    $0.0043
Complex tasks   $0.0089
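The savings compound quickly at volume. A back-of-the-envelope using the per-request costs above, under the assumption that without routing every simple prompt would hit the heavy model:

```python
# Per-request costs from the table above.
SIMPLE, COMPLEX = 0.0002, 0.0089

requests = 10_000            # hypothetical monthly volume of simple prompts
naive = requests * COMPLEX   # everything sent to the heavy model
routed = requests * SIMPLE   # simple prompts routed to the light model
savings = naive - routed
print(f"${savings:.2f} saved per 10k simple requests")  # $87.00
```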