Helicone is now in maintenance mode. Here is how to switch to a self-hosted alternative in 5 minutes.
If you have been using Helicone to track LLM costs and traces, you may have noticed it was acquired by Mintlify in March 2026. Development has stopped, the self-hosted version has open issues that are not being fixed, and the team is now focused on Mintlify. If you need a replacement, this post covers how to migrate to Torrix, a self-hosted LLM observability proxy that uses the same proxy-header model as Helicone.

## What Torrix does

Torrix is a single Docker container that sits between your app and any LLM provider. Every call is logged to a local SQLite database with the full prompt, response, token counts, cost, and latency. Nothing leaves your server. It supports OpenAI, Anthropic, Gemini, Groq, Mistral, Ollama, DeepSeek, Azure OpenAI, and any provider with a `/v1/chat/completions` endpoint.

## What changes when migrating from Helicone

Only three things change: the base URL, the auth header name, and where you pass your OpenAI key. Your prompts, models, and messages stay exactly the same.

**Before (Helicone)**

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="https://oai.helicone.ai/v1",
    default_headers={
        "Helicone-Auth": "Bearer hc-your-helicone-key",
    },
)
```

**After (Torrix)**

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="http://localhost:8088/proxy",
    default_headers={
        # Torrix API key
        "Authorization": "Bearer trxk_your-torrix-key",
        # Provider the proxy should forward requests to
        "x-target-url": "https://api.openai.com",
        # Your provider key, passed through to the upstream
        "x-upstream-authorization": "Bearer sk-your-openai-key",
    },
)
```

Your Helicone custom headers are aliased automatically: if you were using `helicone-property-name`, `helicone-session-id`, or `helicone-request-id`, Torrix reads those natively, so there are no code changes beyond the base URL and the key headers shown above.

## Step-by-step migration

### 1. Install Torrix

```bash
curl -o docker-compose.yml https://raw.githubusercontent.com/torrix-ai/install/main/docker-compose.community.yml
docker compose up -d
```

Open http://localhost:8088, create an account, and copy your API key from Settings.

### 2. Update your code

Swap the base URL, move your OpenAI key to `x-upstream-authorization`, and set `x-target-url` to your provider. Everything else stays the same.

### 3. Confirm it works

Send any request (a minimal sketch is included at the end of this post) and check that it appears on the Runs page at http://localhost:8088. You should see the model, tokens, cost, and the full prompt trace.

## Why self-hosted

Torrix runs entirely on your own infrastructure. Your prompts and API keys never pass through a third-party server. It is a single process backed by SQLite, so there is nothing to operate or scale.

Full migration guide with curl examples and an Anthropic walkthrough: https://github.com/torrix-ai/install/blob/main/docs/migrate-from-helicone.md
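If you want a copy-paste sanity check before touching production code, here is a minimal sketch of a test request routed through the local Torrix proxy. It reuses the placeholder keys and header names from the "After (Torrix)" snippet above; the model name is only an example, not something Torrix requires.

```python
from openai import OpenAI

# Minimal end-to-end check against the local Torrix proxy.
# Keys are placeholders; header names follow the "After (Torrix)" example above.
client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="http://localhost:8088/proxy",
    default_headers={
        "Authorization": "Bearer trxk_your-torrix-key",
        "x-target-url": "https://api.openai.com",
        "x-upstream-authorization": "Bearer sk-your-openai-key",
    },
)

# Any model your provider accepts works here; gpt-4o-mini is just an example.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)

# The request should now appear on the Runs page at http://localhost:8088,
# with the model, token counts, cost, and full prompt trace.
```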