Documentation
Learn how to integrate and use Arcpoint's unified AI gateway
Welcome to Arcpoint — the unified gateway for AI that gives you access to the best models from OpenAI, Anthropic, Google, and more through a single, OpenAI-compatible API.
Why Arcpoint?
- One API, All Models — Access GPT-4o, Claude, Gemini, and 100+ models through a single endpoint
- OpenAI Compatible — Use your existing OpenAI SDK code with zero changes; just update the base URL
- Powerful Guardrails — Rate limiting, cost control, and content moderation, built in
- Full Observability — Track every request with detailed traces, costs, and analytics
- Deterministic Testing — Record production traffic and replay it without calling providers
Quick Start
Get started in under 2 minutes:
1. Get your API key
Sign up and create an API key from your dashboard.
2. Make your first request
```bash
curl https://api.arcpoint.ai/v1/chat/completions \
  -H "Authorization: Bearer $ARCPOINT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
3. Use with your favorite SDK
Arcpoint is fully OpenAI-compatible. Just change the base URL:
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-arcpoint-api-key",
    base_url="https://api.arcpoint.ai/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.ARCPOINT_API_KEY,
  baseURL: 'https://api.arcpoint.ai/v1',
});

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```
When you're signed in, code examples automatically include your API key!
Core Features
Multi-Provider Access
Route requests to any supported provider with a simple model change:
```bash
# OpenAI
"model": "gpt-4o"

# Anthropic
"model": "claude-3-5-sonnet"

# Google
"model": "gemini-pro"
```
Built-in Guardrails
Protect your applications with enterprise-grade controls:
- Rate Limiting — Per-user, per-org, or custom key-based limits
- Cost Control — Set budgets per request, hourly, daily, or monthly
- Content Moderation — Automatic content filtering
- Request Validation — Schema enforcement and input sanitization
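When a guardrail rejects a request (for example, a rate limit responds with HTTP 429), clients typically back off and retry. A minimal exponential-backoff sketch; the status codes and retry policy here are common conventions, not documented Arcpoint behavior:

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter: 2^attempt growth, capped at `cap` seconds."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def should_retry(status_code: int, attempt: int, max_attempts: int = 5) -> bool:
    """Retry rate-limit (429) and transient server errors, up to `max_attempts` tries."""
    return attempt < max_attempts and status_code in (429, 500, 502, 503)

# Usage sketch (pseudo-loop around a hypothetical HTTP call):
#   for attempt in range(5):
#       status = send_request()            # your HTTP client here
#       if not should_retry(status, attempt):
#           break
#       time.sleep(backoff_delay(attempt))
```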
Observability
Every request is tracked with:
- Request and response details
- Token usage and costs
- Latency metrics
- Model selection decisions
- Guardrail outcomes
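OpenAI-compatible responses include a `usage` object with token counts, so per-request cost can also be estimated client-side. A sketch, using hypothetical per-million-token prices (check your dashboard for actual rates):

```python
def request_cost(prompt_tokens: int, completion_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate cost in dollars from token counts and per-million-token prices."""
    return (prompt_tokens * input_price_per_m
            + completion_tokens * output_price_per_m) / 1_000_000

# Example: 1,200 prompt + 300 completion tokens at $2.50 / $10.00 per 1M tokens
# (prices are made up for illustration)
cost = request_cost(1200, 300, 2.50, 10.00)  # 0.006
```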
Documentation Sections
Getting Started
- Authentication — API keys and security
- API Reference — Complete endpoint documentation
- Playground — Test API calls interactively
Configuration
Configure routing, guardrails, and policies with Manifests:
- Manifests — Configuration format and structure
- Pipelines — Request processing stages
- Steps Reference — Complete step documentation
- Expressions — Conditional logic
- Namespaces — Multi-tenant configuration
Common Patterns
- Model Routing — Smart model selection
- Rate Limiting — Quota management
- Cost Control — Budget management
- Retries & Fallbacks — Error handling
- Transcripts & Replay — Request capture and testing
- Experiments — A/B testing
Self Hosting
Run Arcpoint in your own infrastructure:
- Custom Instance — Dedicated managed infrastructure
- Open Source — Build and run from source
- Self Managed — Deploy with Docker, Helm, or Homebrew
- BYOC — Bring Your Own Cloud
Need Help?
- Discord — Join our community for support
- GitHub — Report issues or request features
- Email — Contact support@arcpoint.ai