# deAPI — LLMs Full Context > **What it is.** **deAPI** provides a single, unified REST API to run open-source AI models on a decentralized GPU cloud ("thousands of GPUs worldwide"). Current products cover **Text-to-Image**, **Image-to-Image**, **Text-to-Speech**, **Image-to-Text (OCR)**, **Video-to-Text (YouTube transcription & file upload)**, **Audio-to-Text (file upload)**, **Image-to-Video**, **Text-to-Video**, and **Text-to-Embedding**. Marketed benefits include **10–20× cheaper inference** vs. proprietary providers, a **free tier**, and **$20 free credits** for new accounts. Updated: 2025-11-03 ## Base & Auth - **Base host:** `https://api.deapi.ai` - **Auth:** Bearer token in header: `Authorization: Bearer YOUR_API_KEY` - **Polling Pattern:** All async endpoints return `request_id` → poll `GET /api/v1/client/request-status/{request_id}` until status is `COMPLETED` ## Core Workflow Pattern All async endpoints follow this pattern: 1. Submit job → receive `request_id` 2. Poll status endpoint with `request_id` 3. Handle status: `PENDING` (wait), `COMPLETED` (success), `FAILED` (error) 4. Extract result from response when `COMPLETED` **Example Polling Function (Python):** ```python import os, requests, time API_KEY = os.getenv("DEAPI_KEY") BASE_URL = "https://api.deapi.ai/api/v1/client" def poll_job(request_id, max_wait=300, interval=2): """Poll job status until complete or timeout""" headers = {"Authorization": f"Bearer {API_KEY}"} start = time.time() while time.time() - start < max_wait: resp = requests.get(f"{BASE_URL}/request-status/{request_id}", headers=headers) data = resp.json() status = data.get("status") if status == "COMPLETED": return data.get("result") # or data.get("output_url") elif status == "FAILED": raise Exception(f"Job failed: {data.get('error')}") time.sleep(interval) raise TimeoutError(f"Job {request_id} timeout after {max_wait}s") ``` ## Core Endpoints ### Text-to-Image `POST /api/v1/client/txt2img` **Request:** ```json { "prompt": "a sunset over mountains", "negative_prompt": "blurry, low quality", "model": "flux-schnell", "width": 1024, "height": 768, "guidance_scale": 7.5, "num_inference_steps": 30, "seed": 42 } ``` **Response:** ```json {"request_id": "abc123", "status": "PENDING"} ``` **Complete Example:** ```python def generate_image(prompt, model="flux-schnell"): headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"} payload = { "prompt": prompt, "model": model, "width": 1024, "height": 768, "num_inference_steps": 25 } # Submit job resp = requests.post(f"{BASE_URL}/txt2img", json=payload, headers=headers) request_id = resp.json()["request_id"] # Poll for result result = poll_job(request_id) return result["output_url"] # URL to generated image ``` (Docs: https://docs.deapi.ai/api/text-to-image.md) --- ### Image-to-Image `POST /api/v1/client/img2img` (multipart/form-data) **Request Parameters:** - `image` (required): Source image file (binary upload) - `prompt` (required): Text description of desired transformation - `negative_prompt` (optional): Undesired features to avoid - `model` (optional): Model selection for image editing - `loras` (optional): Array of LoRA models for style control - `guidance` (optional): Guidance scale (how closely to follow prompt) - `steps` (optional): Number of inference steps - `seed` (optional): Random seed for reproducibility **Price Calculation:** `POST /api/v1/client/img2img/price-calculation` **Complete Example:** ```python def edit_image(image_path, prompt, negative_prompt="", model="stable-diffusion-xl", guidance=7.5, steps=30): headers = {"Authorization": f"Bearer {API_KEY}"} with open(image_path, "rb") as f: files = {"image": f} data = { "prompt": prompt, "negative_prompt": negative_prompt, "model": model, "guidance": guidance, "steps": steps } resp = requests.post(f"{BASE_URL}/img2img", files=files, data=data, headers=headers) request_id = resp.json()["request_id"] result = poll_job(request_id) return result["output_url"] # URL to edited image # Calculate price before processing def calculate_img2img_price(image_path, model="stable-diffusion-xl", steps=30): headers = {"Authorization": f"Bearer {API_KEY}"} with open(image_path, "rb") as f: files = {"image": f} data = {"model": model, "steps": steps} resp = requests.post(f"{BASE_URL}/img2img/price-calculation", files=files, data=data, headers=headers) return resp.json()["estimated_cost"] # Example with LoRA models for style control def edit_image_with_loras(image_path, prompt, loras=["style-anime", "enhance-details"]): headers = {"Authorization": f"Bearer {API_KEY}"} with open(image_path, "rb") as f: files = {"image": f} data = { "prompt": prompt, "loras": loras, "guidance": 7.5, "steps": 30 } resp = requests.post(f"{BASE_URL}/img2img", files=files, data=data, headers=headers) request_id = resp.json()["request_id"] result = poll_job(request_id) return result["output_url"] ``` (Docs: https://docs.deapi.ai/api/image-to-image.md) --- ### Text-to-Speech (TTS) `POST /api/v1/client/txt2audio` **Request:** ```json { "text": "Hello, this is a test.", "model": "kokoro", "voice": "af_sarah", "language": "en", "speed": 1.0, "format": "mp3", "sample_rate": 24000 } ``` **Complete Example:** ```python def text_to_speech(text, voice="af_sarah"): headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"} payload = { "text": text, "model": "kokoro", "voice": voice, "language": "en", "format": "mp3" } resp = requests.post(f"{BASE_URL}/txt2audio", json=payload, headers=headers) request_id = resp.json()["request_id"] result = poll_job(request_id) return result["audio_url"] # URL to generated audio ``` (Docs: https://docs.deapi.ai/api/text-to-speech-tts.md) --- ### Image-to-Text (OCR) `POST /api/v1/client/img2txt` (multipart/form-data) **Complete Example:** ```python def ocr_image(image_path, language="en"): headers = {"Authorization": f"Bearer {API_KEY}"} with open(image_path, "rb") as f: files = {"image": f} data = {"language": language, "format": "text"} resp = requests.post(f"{BASE_URL}/img2txt", files=files, data=data, headers=headers) request_id = resp.json()["request_id"] result = poll_job(request_id) return result["text"] # Extracted text ``` (Docs: https://docs.deapi.ai/api/image-to-text-ocr.md) --- ### Video-to-Text (YouTube Transcription) `POST /api/v1/client/vid2txt` **Complete Example:** ```python def transcribe_youtube(video_url, include_timestamps=True): headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"} payload = { "video_url": video_url, "include_timestamps": include_timestamps } resp = requests.post(f"{BASE_URL}/vid2txt", json=payload, headers=headers) request_id = resp.json()["request_id"] result = poll_job(request_id, max_wait=600) # Longer timeout for videos return result["transcript"] ``` (Docs: https://docs.deapi.ai/api/video-to-text-transcription.md) --- ### Video-to-Text (File Upload Transcription) `POST /api/v1/client/videofile2txt` (multipart/form-data) **Request Parameters:** - `video` (required): Video file (binary upload) - `include_ts` (optional): Include timestamps in transcription (boolean) - `model` (optional): Model selection (e.g., "whisper-3-large") - `return_result_in_response` (optional): Return result directly vs. async (boolean) **Price Calculation:** `POST /api/v1/client/videofile2txt/price-calculation` **Complete Example:** ```python def transcribe_video_file(video_path, include_timestamps=True, model="whisper-3-large"): headers = {"Authorization": f"Bearer {API_KEY}"} with open(video_path, "rb") as f: files = {"video": f} data = { "include_ts": str(include_timestamps).lower(), "model": model, "return_result_in_response": "false" } resp = requests.post(f"{BASE_URL}/videofile2txt", files=files, data=data, headers=headers) request_id = resp.json()["request_id"] result = poll_job(request_id, max_wait=600) # Longer timeout for video files return result["transcript"] # Calculate price before processing def calculate_video_transcription_price(video_path, model="whisper-3-large"): headers = {"Authorization": f"Bearer {API_KEY}"} with open(video_path, "rb") as f: files = {"video": f} data = {"model": model} resp = requests.post(f"{BASE_URL}/videofile2txt/price-calculation", files=files, data=data, headers=headers) return resp.json()["estimated_cost"] ``` (Docs: https://docs.gamercoin.com/depin-api/api/upload-video-file-video-to-text) --- ### Audio-to-Text (File Upload Transcription) `POST /api/v1/client/audiofile2txt` (multipart/form-data) **Request Parameters:** - `audio` (required): Audio file (binary upload) - `include_ts` (required): Include timestamps in transcription (boolean) - `model` (optional): Model selection (e.g., "whisper-3-large") - `return_result_in_response` (optional): Return result directly vs. async (boolean) **Complete Example:** ```python def transcribe_audio_file(audio_path, include_timestamps=True, model="whisper-3-large"): headers = {"Authorization": f"Bearer {API_KEY}"} with open(audio_path, "rb") as f: files = {"audio": f} data = { "include_ts": str(include_timestamps).lower(), "model": model, "return_result_in_response": "false" } resp = requests.post(f"{BASE_URL}/audiofile2txt", files=files, data=data, headers=headers) request_id = resp.json()["request_id"] result = poll_job(request_id, max_wait=600) # Longer timeout for audio files return result["transcript"] ``` (Docs: https://docs.gamercoin.com/depin-api/api/upload-audio-file-audio-to-text) --- ### Text-to-Embedding `POST /api/v1/client/txt2embedding` **Request:** ```json { "input": "Your text to embed here", "model": "text-embedding-3-small" } ``` **Response (synchronous):** ```json { "embeddings": [0.123, -0.456, 0.789, ...], "model": "text-embedding-3-small", "dimensions": 1536 } ``` **Complete Example:** ```python def generate_embedding(text, model="text-embedding-3-small"): headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"} payload = {"input": text, "model": model} resp = requests.post(f"{BASE_URL}/txt2embedding", json=payload, headers=headers) return resp.json()["embeddings"] # Returns vector directly (synchronous) ``` (Docs: https://docs.deapi.ai/api/text-to-embedding.md) --- ### Get Job Status `GET /api/v1/client/request-status/{request_id}` **Response Formats:** ```json // PENDING {"status": "PENDING", "progress": 45} // COMPLETED {"status": "COMPLETED", "result": {"output_url": "https://...", ...}} // FAILED {"status": "FAILED", "error": "Model timeout"} ``` (Docs: https://docs.deapi.ai/api/get-results.md) --- ### Check Balance `GET /api/v1/client/balance` **Response:** ```json {"balance": 15.75, "currency": "USD"} ``` (Docs: https://docs.deapi.ai/api/check-balance.md) --- ### List Models `GET /api/v1/client/models` Returns live list of available models and their capabilities. (Docs: https://docs.deapi.ai/api/model-selection.md) --- ## Advanced Patterns ### Exponential Backoff Polling For production use, implement exponential backoff to reduce server load: ```python def poll_with_backoff(request_id, max_wait=300): headers = {"Authorization": f"Bearer {API_KEY}"} start = time.time() interval = 1 max_interval = 30 while time.time() - start < max_wait: resp = requests.get(f"{BASE_URL}/request-status/{request_id}", headers=headers) data = resp.json() status = data.get("status") if status == "COMPLETED": return data.get("result") elif status == "FAILED": raise Exception(f"Job failed: {data.get('error')}") time.sleep(interval) interval = min(interval * 1.5, max_interval) # Exponential backoff with cap raise TimeoutError(f"Job {request_id} timeout") ``` ### Batch Processing with Concurrency Process multiple jobs concurrently: ```python from concurrent.futures import ThreadPoolExecutor, as_completed def batch_generate_embeddings(texts, model="text-embedding-3-small", max_workers=10): results = [] with ThreadPoolExecutor(max_workers=max_workers) as executor: futures = {executor.submit(generate_embedding, text, model): text for text in texts} for future in as_completed(futures): try: result = future.result() results.append(result) except Exception as e: print(f"Failed: {e}") results.append(None) return results ``` ### Multi-Step Workflow (Text → Image → Video) ```python def text_to_video_workflow(prompt, max_retries=2): # Step 1: Generate image try: image_url = generate_image(prompt) except Exception as e: raise Exception(f"Image generation failed, aborting: {e}") # Step 2: Generate video from image (with retry) for attempt in range(max_retries + 1): try: headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"} payload = { "prompt": prompt, "image_url": image_url, "model": "stable-video-diffusion", "frames": 25, "fps": 8 } resp = requests.post(f"{BASE_URL}/img2video", json=payload, headers=headers) request_id = resp.json()["request_id"] result = poll_job(request_id, max_wait=600) return result["video_url"] except Exception as e: if attempt < max_retries: print(f"Video generation attempt {attempt + 1} failed, retrying...") time.sleep(5) else: raise Exception(f"Video generation failed after {max_retries + 1} attempts: {e}") ``` ## Error Handling Best Practices 1. **Always check status codes:** Handle 401 (unauthorized), 429 (rate limit), 500 (server error) 2. **Implement retry logic:** For network errors and 429/500 responses 3. **Set appropriate timeouts:** Video jobs may take 5-10min, images 30-60s 4. **Handle FAILED status:** Extract error message from response 5. **Log request_ids:** For debugging and support requests ## Models & Capabilities - **Text-to-Image:** Flux Schnell, Stable Diffusion XL, HiDream, NVIDIA Sana - **TTS:** Kokoro (multi-voice, multi-language) - **Video:** Stable Video Diffusion, CogVideoX - **Embeddings:** text-embedding-3-small, text-embedding-3-large - **OCR:** Florence-2, TrOCR variants Model list evolves—check `GET /api/v1/client/models` for current options. ## Pricing & Free Tier - **Free tier:** Daily limits on select endpoints - **New accounts:** $20 free credits - **Pay-as-you-go:** Costs scale with resolution, steps, duration - **Price calculator:** Available at endpoint-specific `/price-calculation` routes ## Getting Started 1. **Register:** https://deapi.ai/register → get API key 2. **Read docs:** https://docs.deapi.ai/get-started/introduction.md 3. **Test with curl:** ```bash curl -X POST https://api.deapi.ai/api/v1/client/txt2img \ -H "Authorization: Bearer YOUR_API_KEY" \ -H "Content-Type: application/json" \ -d '{"prompt": "a cat", "model": "flux-schnell", "width": 512, "height": 512}' ``` 4. **Monitor balance:** `GET /api/v1/client/balance` ## Canonical Links - Homepage: https://deapi.ai/ - Docs: https://docs.deapi.ai/ - Pricing: https://docs.deapi.ai/pricing.md - API Reference: https://docs.deapi.ai/api/ ## Safety & Compliance - No illegal/explicit content - Keep API keys server-side only - Review content policies before production use