Your app already uses the OpenAI SDK. Now it can hit deAPI through that exact client – just point it at a different URL.
deAPI now supports the OpenAI API format. Swap two parameters in your client initialization, and your existing code connects to image generation, TTS, transcription, and embedding models running on decentralized GPUs.
The switch
from openai import OpenAI
# Before - OpenAI direct
client = OpenAI(api_key="sk-...")
# After - deAPI, same SDK
client = OpenAI(api_key="dpn-sk-...", base_url="<https://api.deapi.ai/v1>")
Two lines changed, everything else stays. Function calls, error handling, response shapes – the interface is identical.
What you get access to
Here’s what the OpenAI-compatible endpoint gives you:
- Image generation – FLUX.2 Klein, Z-Image-Turbo, FLUX.1 Schnell (from $0.00088/image)
- Text-to-Speech – Qwen3 TTS, Chatterbox, Kokoro ($12.86 per 1M characters)
- Video generation – LTX 2.3, up to 22B parameters (from $0.024/second)
- Transcription – Whisper Large V3 with direct YouTube/Twitch/X URL support ($0.021/hour)
- Embeddings – BGE M3 for search, RAG, and similarity ($0.068 per 1M tokens)
Pricing is per-request, no subscription. The $5 free credit you get at signup covers roughly 5,600 generated images or 237 hours of transcription.
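The credit math is simple division against the list prices above. A quick sketch (the article rounds slightly more conservatively):

```python
# Back-of-envelope check on the $5 signup credit, using the
# per-unit list prices quoted above.
FREE_CREDIT = 5.00            # USD
IMAGE_PRICE = 0.00088         # USD per image, cheapest tier listed
TRANSCRIPTION_PRICE = 0.021   # USD per hour of audio

images = FREE_CREDIT / IMAGE_PRICE
hours = FREE_CREDIT / TRANSCRIPTION_PRICE

print(f"~{images:,.0f} images or ~{hours:.0f} hours of transcription")
```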
Whisper: 17x cheaper than OpenAI
OpenAI charges $0.36 per hour of audio transcription; deAPI runs the same Whisper Large V3 model for $0.021 per hour.
If you transcribe 100 hours monthly, OpenAI bills you $36 and deAPI bills you $2.10. The output is the same – the GPUs are cheaper.
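As a sanity check, the 17x figure falls straight out of the two per-hour prices:

```python
OPENAI_PER_HOUR = 0.36   # OpenAI Whisper list price, USD per hour
DEAPI_PER_HOUR = 0.021   # deAPI price quoted above, USD per hour
HOURS_PER_MONTH = 100

openai_bill = OPENAI_PER_HOUR * HOURS_PER_MONTH  # monthly OpenAI bill
deapi_bill = DEAPI_PER_HOUR * HOURS_PER_MONTH    # monthly deAPI bill
ratio = OPENAI_PER_HOUR / DEAPI_PER_HOUR         # ~17.1x

print(f"${openai_bill:.2f} vs ${deapi_bill:.2f} ({ratio:.1f}x cheaper)")
```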
Framework compatibility
LangChain, LlamaIndex, CrewAI, AutoGen, Instructor, Vercel AI SDK – they all accept a base_url parameter. Plug in https://api.deapi.ai/v1 and they route to deAPI without code changes.
OpenAI publishes official SDKs for Python, Node/TypeScript, Go, .NET, and Java. Every one of them works as a deAPI client out of the box.
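One way to exploit that: keep provider endpoints in config and hand the same kwargs to whichever SDK or framework you use. A minimal sketch – the registry layout and env-var names here are illustrative, not part of either API:

```python
import os

# Illustrative provider registry; extend it with any
# OpenAI-compatible endpoint you want to route to.
PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "key_env": "OPENAI_API_KEY"},
    "deapi": {"base_url": "https://api.deapi.ai/v1", "key_env": "DEAPI_API_KEY"},
}

def client_kwargs(provider: str) -> dict:
    """Build the two kwargs every OpenAI-compatible client accepts."""
    cfg = PROVIDERS[provider]
    return {
        "base_url": cfg["base_url"],
        "api_key": os.environ.get(cfg["key_env"], ""),
    }

# Drops straight into OpenAI(**client_kwargs("deapi")), or into any
# framework constructor that takes base_url / api_key.
print(client_kwargs("deapi")["base_url"])
```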
The multi-provider pattern
Say your app uses GPT-4o for chat through OpenAI, but routes image generation and TTS through deAPI to cut costs. Both clients use the same SDK – only the base_url differs:
from openai import OpenAI
# Chat completions via OpenAI
chat_client = OpenAI(api_key="sk-...")
# Image gen + TTS via deAPI
media_client = OpenAI(api_key="dpn-sk-...", base_url="https://api.deapi.ai/v1")
response = media_client.images.generate(
    model="flux-2-klein-4b-bf16",
    prompt="A minimalist logo for a podcast app",
    size="1024x1024"
)
Your team learns one SDK. Adding a third or fourth provider later means another base_url, not another library.
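The routing decision itself can stay trivial. One possible policy, sketched with made-up task names:

```python
# Hypothetical routing policy: chat stays on OpenAI,
# media-shaped work goes to deAPI.
MEDIA_TASKS = {"image", "tts", "video", "transcription", "embedding"}

def provider_for(task: str) -> str:
    return "deapi" if task in MEDIA_TASKS else "openai"

print(provider_for("image"), provider_for("chat"))
# → deapi openai
```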
Why this matters now
Most AI apps lock themselves into a single provider during the prototype phase. Three months later, switching costs real engineering time – custom client code, different response formats, new error handling.
OpenAI SDK compatibility eliminates that lock-in before it starts. Your code stays the same; only the config changes. Testing a cheaper image model takes just a few minutes.
deAPI brings decentralized GPU compute into that workflow. Developers who already use the OpenAI SDK can add it to their stack without writing new client code – and without committing to another monthly subscription.
Get started
- Sign up at deapi.ai – $5 free credit, no card required
- Grab your API key from the dashboard
- Replace base_url in your existing OpenAI client
- Ship
Full docs: docs.deapi.ai