📄️ Quick Start
Quick start with the CLI, config file, and Docker
📄️ Proxy Config.yaml
Set the model list, api_base, api_key, temperature, and proxy server settings (master key) in config.yaml.
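A minimal sketch of what such a config.yaml can look like (the model names, URL, and key placeholder here are illustrative, not real credentials):

```yaml
model_list:
  - model_name: gpt-3.5-turbo            # alias clients will request
    litellm_params:
      model: openai/gpt-3.5-turbo        # underlying provider/model
      api_base: https://api.openai.com/v1
      api_key: "os.environ/OPENAI_API_KEY"  # read from environment
      temperature: 0.2

general_settings:
  master_key: sk-1234                    # placeholder; use a strong secret
```

Requests sent to the proxy with the `model_name` alias are routed to the provider configured under `litellm_params`.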
📄️ Load Balancing - Multiple Instances of 1 model
Load balance multiple instances of the same model
📄️ Cost Tracking & Virtual Keys
Track spend and create virtual keys for the proxy
📄️ Caching
Cache LLM responses
📄️ Logging - OpenTelemetry, Langfuse, ElasticSearch
Log proxy input, output, and exceptions to Langfuse, OpenTelemetry, and ElasticSearch
📄️ CLI Arguments
CLI arguments: --host, --port, --num_workers
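As a sketch, the flags named above can be combined in a single launch command (the model name and values here are illustrative):

```shell
# Start the proxy on all interfaces, port 8000, with 4 worker processes
litellm --model gpt-3.5-turbo --host 0.0.0.0 --port 8000 --num_workers 4
```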
📄️ Deploying LiteLLM Proxy
Deploy the proxy on Render (https://render.com/)