Built for the OpenClaw ecosystem

OpenClaw has 330,000 stars.
Zero governance.

Add cost caps, PII scanning, rate limits, and an audit trail to every LLM call. One environment variable. Zero code changes.

Free forever · 1K requests/day · No credit card required

terminal
# Before — direct to provider
export OPENAI_BASE_URL=https://api.openai.com/v1
# After — governed in one line
export OPENAI_BASE_URL=https://api.curate-me.ai/v1/openai
export X_CM_API_KEY=cm_sk_xxx
Rate Limiting · Cost Tracking · PII Scanning · Security Scan · Model Allowlist · HITL Approval
330K+ OpenClaw ecosystem
51 LLM providers
6-step governance chain
Works today: signup to governed call

Every request. Governed.

Your agents have no cost caps, no PII scanning, no audit trail. We fix that.

$3,600+ in overnight LLM costs
135,000+ exposed OpenClaw instances
Gartner: "insecure by default"
Rate Limit → Cost Check → PII Scan → Security → Model Allow → Approval

All checks passed · Request forwarded to provider
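A sequential chain like the one above is easy to picture in code. The sketch below is a minimal illustration under assumed semantics; every function name, state field, and heuristic is invented for this example and is not Curate-Me's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Request:
    model: str
    content: str
    est_cost: float  # estimated USD cost of this call

def rate_limit(req, state):
    # Step 1: enforce a daily request quota.
    state["count"] += 1
    return state["count"] <= state["daily_limit"]

def cost_check(req, state):
    # Step 2: reject calls that would blow past the budget cap.
    return state["spent"] + req.est_cost <= state["budget"]

def pii_scan(req, state):
    # Step 3: placeholder heuristic (real scanners use pattern libraries).
    return "@" not in req.content

def security_scan(req, state):
    # Step 4: crude prompt-injection screen.
    return "ignore previous instructions" not in req.content.lower()

def model_allowlist(req, state):
    # Step 5: only org-approved models pass through.
    return req.model in state["allowed_models"]

def approval(req, state):
    # Step 6: HITL gate; auto-approved in this sketch.
    return True

CHAIN = [rate_limit, cost_check, pii_scan, security_scan, model_allowlist, approval]

def govern(req, state):
    # Run every check in order; the first failure short-circuits the call.
    for check in CHAIN:
        if not check(req, state):
            return f"rejected by {check.__name__}"
    state["spent"] += req.est_cost
    return "forwarded to provider"
```

The point of the ordering is that cheap checks (quota, budget) run before expensive ones (content scanning, human approval), so rejected requests cost as little as possible.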

Shipping

Every LLM call governed. Zero code changes.

  • 6-step governance chain
  • 51 LLM providers, one URL
  • PII scanning (EU-aware: IBAN, VAT, passports)
  • Human-in-the-loop approvals
  • Model allowlists per org
Learn more
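EU-aware PII detection for something like an IBAN typically means a regex to find candidates plus the ISO 13616 mod-97 checksum to discard false positives. A hedged sketch; the helper names are invented here:

```python
import re

# Candidate pattern: two letters, two check digits, 11-30 alphanumerics.
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

def iban_checksum_ok(iban: str) -> bool:
    # ISO 13616: move the first four chars to the end, map letters to
    # 10..35, and the resulting integer mod 97 must equal 1.
    rearranged = iban[4:] + iban[:4]
    digits = "".join(str(int(ch, 36)) for ch in rearranged)
    return int(digits) % 97 == 1

def find_ibans(text: str) -> list[str]:
    # The regex narrows candidates; the checksum removes false positives.
    return [m for m in IBAN_RE.findall(text) if iban_checksum_ok(m)]
```

VAT numbers and passport numbers follow the same shape: a country-specific pattern plus, where one exists, a validation rule.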
Early Access

Sandboxed OpenClaw containers. Plain English tasks.

  • BYOVM — use your own hardware
  • 3 security profiles (locked, web, full)
  • Desktop streaming (VNC)
  • Fleet orchestration
Request access
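The three profiles suggest a capability matrix along these lines. This is purely illustrative; the profile names come from the feature list above, but the capabilities attached to each are assumptions, not documented behavior:

```python
# Assumed capability matrix for the three runner security profiles.
PROFILES = {
    "locked": {"network": False, "desktop_stream": False},
    "web":    {"network": True,  "desktop_stream": False},
    "full":   {"network": True,  "desktop_stream": True},
}

def allowed(profile: str, capability: str) -> bool:
    # Look up whether a profile grants a given capability.
    return PROFILES[profile][capability]
```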
Shipping

Track every dollar. Replay any failure.

  • Real-time cost dashboard
  • Per-request cost tracking
  • Immutable audit trail
  • Observer SDK (Python + TypeScript)
Learn more
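Per-request cost tracking reduces to multiplying token counts by per-model rates. A sketch of the arithmetic; the prices below are placeholder assumptions, not Curate-Me's billing table:

```python
# Assumed (input, output) prices in USD per million tokens.
PRICES_PER_1M = {
    "gpt-4o-mini": (0.15, 0.60),
}

def request_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    # Cost = input tokens at the input rate plus output tokens at the
    # output rate, both quoted per million tokens.
    price_in, price_out = PRICES_PER_1M[model]
    return (prompt_tokens * price_in + completion_tokens * price_out) / 1_000_000
```

Summing these per-request figures over time is what makes a real-time cost dashboard and budget enforcement possible.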

Works with your existing code

Change one environment variable. Your SDK, framework, and code stay exactly the same.

example.py
from openai import OpenAI

# Just change the base URL — your code stays the same
client = OpenAI(
    base_url="https://api.curate-me.ai/v1/openai",  # ← only change
    api_key="cm_sk_xxx",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Analyze this report"}],
)

# Response headers include:
# X-CM-Request-Id · X-CM-Governance-Time-Ms · RateLimit-Remaining

Every request through Curate-Me automatically gets rate limiting, cost tracking, PII scanning, security screening, model enforcement, and optional human approval.
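Clients can also act on the advertised RateLimit-Remaining header and back off before the gateway starts rejecting calls. A hedged sketch; the header values and the helper name are illustrative:

```python
def should_throttle(headers: dict[str, str]) -> bool:
    # Back off locally once the gateway reports the quota is exhausted;
    # a missing header means "no signal, keep going".
    return int(headers.get("RateLimit-Remaining", "1")) <= 0
```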

Works with every SDK that makes HTTP calls

OpenAI SDK · Anthropic SDK · LangChain · CrewAI · Vercel AI · OpenAI Agents · Any HTTP client

Why this exists

The governance gap is real

The ecosystem is fragmenting.

Langfuse was acquired by ClickHouse. Helicone was acquired by Mintlify. LiteLLM suffered a supply chain attack. The tools developers used to stitch together observability and governance are disappearing.

Hosting alone isn’t enough.

Clawctl hosts OpenClaw containers. Portkey proxies LLM calls. E2B sandboxes code execution. Nobody does all three. And none of them track what your agents actually cost.

We built the integrated stack.

Gateway proxy + managed runners + observability in one platform. Governed by default. Start with the gateway — it’s free and takes one URL swap. Add runners when you need sandboxed execution.

See how we compare: vs Portkey · vs LiteLLM · vs Helicone · vs NemoClaw

Start governing your AI agents

OPENAI_BASE_URL=https://api.curate-me.ai/v1/openai

Free

Free

  • 1K requests/day
  • PII scanning
  • Cost tracking
  • 1 API key

Starter

$49/mo

  • 100K requests/mo
  • Budget enforcement
  • 3 runners (early access)
  • 5 API keys
Popular

Growth

$199/mo

  • 500K requests/mo
  • HITL approvals
  • Audit trail export
  • Unlimited API keys

Self-hosted option available · BYOK (bring your own keys) · No vendor lock-in