Fully compatible with the OpenAI SDK — switch with just one line of code. DeepSeek-optimized,
domestic BGP multi-line acceleration, enterprise-grade security. Making AI calls as easy as electricity.
Focused on technical optimization, delivering the ultimate AI model API experience
Official direct channel for DeepSeek V4-Flash / V4-Pro. Deeply optimized for reasoning and conversation scenarios with only +15% markup.
Fully compatible with OpenAI SDK / Anthropic SDK. Simply change your base_url for a seamless switch — no business code changes needed.
Direct connection via China BGP servers with Telecom/Unicom/Mobile multi-line access. P99 latency <500ms — no more unstable overseas API calls.
TLS 1.3 encrypted transmission, IP whitelist + rate limiting, automatic channel failover, and WAF rules that block attack requests in real time.
Multi-node cluster deployment with automatic health checks and fault removal. Channel switching completed within 60 seconds to ensure service continuity.
A dedicated team on standby around the clock. Ticket system + email multi-channel support with a 5-minute rapid response commitment.
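Channel failover happens on our side, but a thin client-side retry bridges the brief switch-over window. A minimal stdlib-only sketch — the helper names, the 3-retry default, and the backoff schedule are illustrative choices, not part of the service:

```python
import time
import urllib.error
import urllib.request


def backoff_delays(retries, base=1.0):
    """Exponential backoff schedule: 1s, 2s, 4s, ..."""
    return [base * (2 ** i) for i in range(retries)]


def post_with_retries(req: urllib.request.Request, retries: int = 3) -> bytes:
    """POST with simple retries, covering the window while a channel fails over."""
    last_err = None
    for delay in backoff_delays(retries):
        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                return resp.read()
        except urllib.error.URLError as err:
            last_err = err
            time.sleep(delay)
    raise last_err
```

With the 60-second failover guarantee above, three retries at 1s/2s/4s are usually enough to ride out a channel switch without surfacing an error to your users.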
Curated high-performance models covering conversation, reasoning, code, and more
| Model | Type | Price (¥/1M tokens) | Status |
|---|---|---|---|
| deepseek-v4-flash | Chat | ¥0.5 | 🟢 Online |
| deepseek-v4-pro | Reasoning | ¥2 | 🟢 Online |
| deepseek-reasoner | Reasoning | ¥4 | 🟢 Online |
| deepseek-chat | Chat | ¥1 | 🟢 Online |
| gpt-4o-mini | Chat | ¥1.5 | 🟢 Online |
| claude-sonnet-4 | Chat | ¥3 | 🟢 Online |
| gemini-2.5-flash | Chat | ¥0.5 | 🟢 Online |
Use the familiar OpenAI Python SDK — just change base_url and you're set in 3 lines.
Fully compatible with Chat Completions, Embeddings, Images, and all other API interfaces. Zero learning cost.
Supports Python, Node.js, Go, Java, curl, and all major languages and tools.
```python
from openai import OpenAI

# Just change this one line ↓
client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://toenk-api.com/v1",
)

response = client.chat.completions.create(
    model="deepseek-v4-flash",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
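No SDK at all is needed either: the endpoint is a plain HTTPS POST in the OpenAI Chat Completions format, so curl or any HTTP client works. A stdlib-only sketch (the request is built but not sent here; the key is a placeholder):

```python
import json
import urllib.request

payload = {
    "model": "deepseek-v4-flash",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "https://toenk-api.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer sk-your-api-key",
        "Content-Type": "application/json",
    },
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```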
Better prices for the same models. The TOENK advantage at a glance.
| Model | TOENK API | DeepSeek Official | Other Proxies |
|---|---|---|---|
| deepseek-v4-flash | ¥0.5/1M Best Value | ¥1/1M | ¥0.8~2/1M |
| deepseek-v4-pro | ¥2/1M Recommended | ¥4/1M | ¥3~8/1M |
| deepseek-reasoner | ¥4/1M Recommended | ¥8/1M | ¥6~16/1M |
| deepseek-chat | ¥1/1M | ¥2/1M | ¥1.5~5/1M |
| gpt-4o-mini | ¥1.5/1M | ¥2.25/1M | ¥2~5/1M |
| claude-sonnet-4 | ¥3/1M Curated | ¥4.5/1M | ¥3.5~8/1M |
| gemini-2.5-flash | ¥0.5/1M | ¥0.75/1M | ¥0.6~2/1M |
Quick answers to common questions
Register an account → Create an API Key → Set base_url to https://toenk-api.com/v1 in your code. Fully compatible with the OpenAI SDK — no need to learn a new API specification.
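The steps above can also be done without touching business code at all: the OpenAI Python SDK (v1+) reads `OPENAI_API_KEY` and `OPENAI_BASE_URL` from the environment. A sketch with a placeholder key (in practice you would export these in your shell or deployment config rather than set them in code):

```python
import os

# Placeholder key; the SDK picks both values up automatically.
os.environ["OPENAI_API_KEY"] = "sk-your-api-key"
os.environ["OPENAI_BASE_URL"] = "https://toenk-api.com/v1"

# from openai import OpenAI
# client = OpenAI()  # no arguments needed; reads the env vars above
```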
We support DeepSeek V4-Flash/V4-Pro/Reasoner/Chat, GPT-4o/4o-mini/4.1, Claude Sonnet 4/Opus 4, Gemini 2.5 Flash/Pro, and 21+ models covering conversation, reasoning, code, images, and more.
Direct connection via China-based BGP servers with Telecom/Unicom/Mobile triple-network optimization. P99 latency <500ms. Multi-node cluster + smart load balancing delivers 99.9% service availability.
Pay-as-you-go based on actual token usage, with Alipay and WeChat Pay supported. New users receive 500,000 free tokens (enough for on the order of a thousand typical conversations) upon registration, with multiple plan options available.
Yes! We provide comprehensive API documentation and SDK code examples. For any issues, reach our technical support team at support@toenk-api.com.
Register now and get 500,000 free tokens. Experience lightning-fast AI model calls today.
🚀 Sign Up Free