Hush is a text classification API that detects harmful and toxic content in any text. Integrate in minutes with a single API call and moderate at any scale.
Three properties that make Hush different from basic keyword filters.
Classifications are returned in milliseconds, making it suitable for real-time content pipelines at any scale.
Goes beyond keyword matching to understand the intent and tone behind any piece of text.
A single REST API call is all it takes. Works with any language or framework that can send an HTTP request.
Type any text and watch Hush classify it in real time. Paste a comment, a message, or any string to see the API at work.
Tell us what went wrong with this classification.
Pick the plan that fits your traffic today and scale seamlessly as you grow. Same API, consistent latency, better support tiers.
Starter
$0
Kick the tyres with generous limits for prototypes and QA.
Growth
$9.99
Great for pilots and early production workloads.
Scale
$19.99
Unlimited moderation plus priority response times.
Everything you need to integrate Hush into your platform. Our REST API is simple, predictable, and fast, returning classifications in milliseconds.
/models/hush-preview:03-2026/v1/predict
Analyze a text message for toxicity. Returns a boolean classification indicating whether the message is toxic or safe.
/health
Check whether the Hush API is online and running.
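The health check above can be wrapped in a small helper before you start sending moderation traffic. This is a minimal sketch: it assumes the /health endpoint responds with HTTP 200 when the service is up and requires no authentication (the exact response shape is not documented here, so only the status code is inspected). The `is_healthy` and `check_hush_health` names are illustrative, not part of the API.

```python
import requests

# Assumption: GET /health returns HTTP 200 when the service is up
# and needs no Authorization header.
HEALTH_URL = "https://api.openproject.co.zw/health"

def is_healthy(status_code: int) -> bool:
    """Interpret a status code from the /health endpoint."""
    return status_code == 200

def check_hush_health(url: str = HEALTH_URL, timeout: float = 5.0) -> bool:
    """Return True if the Hush API answers its health check."""
    try:
        response = requests.get(url, timeout=timeout)
    except requests.RequestException:
        # Network failure or timeout counts as unhealthy.
        return False
    return is_healthy(response.status_code)
```

Calling `check_hush_health()` at startup, or on a timer, lets you fail over or queue messages rather than dropping them when the API is unreachable.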
import requests

url = "https://api.openproject.co.zw/models/hush-preview:03-2026/v1/predict"
headers = {"Authorization": "Bearer YOUR_API_KEY"}
data = {"text": "This community is amazing!"}

response = requests.post(url, headers=headers, json=data)
result = response.json()

if result["is_toxic"]:
    print("Message flagged.")
else:
    print("Message is safe to send!")
const response = await fetch(
  "https://api.openproject.co.zw/models/hush-preview:03-2026/v1/predict",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR_API_KEY"
    },
    body: JSON.stringify({ text: "This community is amazing!" })
  }
);

const result = await response.json();

if (result.is_toxic) {
  console.log("Message flagged.");
} else {
  console.log("Message is safe to send!");
}
curl -X POST https://api.openproject.co.zw/models/hush-preview:03-2026/v1/predict \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"text": "This community is amazing!"}'