AI Gateway

LiteLLM

Call 100+ LLMs using the OpenAI Input/Output Format

  • Translate inputs to each provider's completion, embedding, and image_generation endpoints
  • Consistent output: text responses are always available at ['choices'][0]['message']['content']
  • Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) via the Router
  • Track spend & set budgets per project with the LiteLLM Proxy Server
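The consistent output guarantee means downstream code can read a completion the same way no matter which provider served it. A minimal sketch of that access pattern (the response dict below is a hand-written stand-in shaped like the OpenAI output format, not a real API call):

```python
# Stand-in response in the OpenAI output format that litellm
# normalizes every provider to (illustrative data, no network call).
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello from any provider!"}}
    ]
}

# Text is always at the same path, whatever the backing LLM was.
text = response["choices"][0]["message"]["content"]
print(text)  # → Hello from any provider!
```

With the real SDK, `litellm.completion(...)` returns an object that exposes this same `choices[0].message.content` path.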

How to use LiteLLM

You can use litellm through either:

  1. LiteLLM Proxy Server - a server (LLM Gateway) to call 100+ LLMs, with load balancing and cost tracking across projects
  2. LiteLLM Python SDK - a Python client to call 100+ LLMs, with load balancing and cost tracking
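As a sketch of option 1, the proxy is typically driven by a config file that maps one public model name onto several deployments, so the Router can load-balance and fall back between them. The deployment names, endpoint URL, and environment-variable keys below are placeholders, not values from this document:

```yaml
model_list:
  - model_name: gpt-4o                       # name clients request
    litellm_params:
      model: azure/my-gpt4o-deployment       # placeholder Azure deployment
      api_base: https://example.openai.azure.com/
      api_key: os.environ/AZURE_API_KEY
  - model_name: gpt-4o                       # same name -> second deployment for fallback
    litellm_params:
      model: gpt-4o
      api_key: os.environ/OPENAI_API_KEY
```

Because both entries share one model_name, requests for gpt-4o are spread across the two deployments, and a failure on one is retried on the other.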
https://github.com/BerriAI/litellm