Welcome to the HelpingAI API reference. This comprehensive guide covers all endpoints, parameters, and features available in our emotionally intelligent AI platform.
All API requests should be made to:
```
https://api.helpingai.co/v1
```
HelpingAI uses API keys for authentication. Include your API key in the Authorization header:
```
Authorization: Bearer YOUR_API_KEY
```
Get your API key from the HelpingAI Dashboard.
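For example, here is a minimal Python sketch using the third-party `requests` library; the endpoint and payload mirror the chat completions example shown later in this guide:

```python
import requests

API_KEY = "YOUR_API_KEY"  # obtain from the HelpingAI Dashboard
BASE_URL = "https://api.helpingai.co/v1"

# Every request carries the API key in the Authorization header.
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers=headers,
    json={
        "model": "Dhanishtha-2.0-preview",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(response.status_code)
```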
The primary endpoint for generating conversational responses with emotional intelligence.
`POST /v1/chat/completions` - Generate chat completions with emotional intelligence
Key Features:
- `hideThink` parameter to control reasoning visibility

View Chat Completions Documentation →
Retrieve information about available models and their capabilities.
`GET /v1/models` - List all available models

`GET /v1/models/{model_id}` - Get information about a specific model
Available Models:
- Dhanishtha-2.0-preview - Our flagship model with emotional intelligence
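Both model endpoints can be called with a plain authenticated GET request. A rough sketch using `requests`, assuming the list response follows an OpenAI-style shape with a `data` array (verify against your actual response):

```python
import requests

headers = {"Authorization": "Bearer YOUR_API_KEY"}

# List all available models.
resp = requests.get("https://api.helpingai.co/v1/models", headers=headers)
resp.raise_for_status()
for model in resp.json().get("data", []):  # assumes an OpenAI-style "data" array
    print(model.get("id"))

# Fetch details for a specific model.
detail = requests.get(
    "https://api.helpingai.co/v1/models/Dhanishtha-2.0-preview", headers=headers
)
print(detail.json())
```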
All requests should include:

- Content-Type: `application/json`
- Authorization: `Bearer YOUR_API_KEY`

```bash
curl -X POST https://api.helpingai.co/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "Dhanishtha-2.0-preview",
    "messages": [
      {"role": "user", "content": "I'\''m feeling overwhelmed today"}
    ]
  }'
```

All successful responses return JSON with appropriate HTTP status codes:
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "Dhanishtha-2.0-preview",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I understand you're feeling overwhelmed today..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 45,
    "total_tokens": 60
  }
}
```
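Continuing the `requests` sketch from the authentication section, the assistant's reply and token usage can be read directly from this structure:

```python
# `response` is the requests.Response from the authentication sketch above.
data = response.json()

reply = data["choices"][0]["message"]["content"]
finish_reason = data["choices"][0]["finish_reason"]
total_tokens = data["usage"]["total_tokens"]

print(f"Assistant: {reply}")
print(f"Finish reason: {finish_reason}, tokens used: {total_tokens}")
```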
"error": {
"message": "Invalid API key provided",
"type": "authentication_error",
"code": "invalid_api_key"
}
}Rate limits vary by subscription tier:
| Tier | Requests/minute | Tokens/minute |
|---|---|---|
| Free | 60 | 10,000 |
| Pro | 3,000 | 500,000 |
| Enterprise | Custom | Custom |
Rate limit headers are included in responses:
- `X-RateLimit-Limit`: Request limit per minute
- `X-RateLimit-Remaining`: Remaining requests in current window
- `X-RateLimit-Reset`: Time when rate limit resets
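A hedged sketch of reading these headers and backing off when the limit is hit; the header names come from the list above, while the HTTP 429 status check and retry delays are assumptions for illustration:

```python
import time
import requests

def post_with_backoff(url: str, headers: dict, payload: dict, max_retries: int = 3):
    """POST with a simple retry when the per-minute rate limit is exceeded."""
    resp = None
    for attempt in range(max_retries):
        resp = requests.post(url, headers=headers, json=payload)
        remaining = resp.headers.get("X-RateLimit-Remaining")
        reset = resp.headers.get("X-RateLimit-Reset")
        print(f"remaining={remaining}, reset={reset}")
        if resp.status_code != 429:  # assumes 429 signals a rate-limit error
            return resp
        # Rate limited: wait before retrying (exponential backoff as a fallback).
        time.sleep(2 ** attempt)
    return resp
```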
HelpingAI automatically detects and responds to emotional cues in user messages:

```json
{
  "messages": [
    {"role": "user", "content": "I just got rejected from my dream job and I'm devastated"}
  ]
}
```

The response includes empathetic understanding and supportive language.
Control AI reasoning visibility with the `hideThink` parameter:

```json
{
  "model": "Dhanishtha-2.0-preview",
  "messages": [{"role": "user", "content": "Solve this complex problem"}],
  "hideThink": false
}
```

When `hideThink=false`, responses include `<think>...</think>` tags showing the AI's reasoning process.
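If you need to separate the reasoning from the final answer, here is a small sketch that splits the `<think>...</think>` block out of the message content (assuming the tags appear inline in the content string as described):

```python
import re

def split_reasoning(content: str) -> tuple[str, str]:
    """Return (reasoning, answer) from content containing <think>...</think> tags."""
    match = re.search(r"<think>(.*?)</think>", content, flags=re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", content, flags=re.DOTALL).strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>Break the problem into smaller steps...</think>The result is 42."
)
print(answer)
```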
Execute functions and access external data:
```json
{
  "model": "Dhanishtha-2.0-preview",
  "messages": [{"role": "user", "content": "What's the weather in Tokyo?"}],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {"type": "string"}
          }
        }
      }
    }
  ]
}
```
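Handling the model's tool call can follow the usual OpenAI-compatible pattern: check the response for tool calls, run the function locally, and send the result back. A hedged sketch using the OpenAI-compatible client from the SDK section below; `get_weather` here is a hypothetical local implementation, and the `tool_calls` and `role: "tool"` message shapes assume OpenAI-style behavior:

```python
import json
from openai import OpenAI

# OpenAI-compatible client, configured as shown in the SDK section below.
client = OpenAI(base_url="https://api.helpingai.co/v1", api_key="YOUR_API_KEY")

def get_weather(location: str) -> str:
    # Hypothetical local implementation of the declared function.
    return f"Sunny and 22°C in {location}"

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
            },
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview", messages=messages, tools=tools
)

message = response.choices[0].message
if message.tool_calls:
    messages.append(message)  # keep the assistant's tool-call turn in the history
    for call in message.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_weather(**args)
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    # Send the tool results back so the model can compose its final answer.
    final = client.chat.completions.create(
        model="Dhanishtha-2.0-preview", messages=messages
    )
    print(final.choices[0].message.content)
```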
Get real-time responses as they're generated:

```json
{
  "model": "Dhanishtha-2.0-preview",
  "messages": [{"role": "user", "content": "Tell me a story"}],
  "stream": true
}
```
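Consuming the stream with the OpenAI-compatible client looks roughly like this, assuming OpenAI-style chunks with incremental `delta` content (see the SDK section below for client setup):

```python
from openai import OpenAI

# OpenAI-compatible client, configured as shown in the SDK section below.
client = OpenAI(base_url="https://api.helpingai.co/v1", api_key="YOUR_API_KEY")

stream = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental piece of the reply in choices[0].delta.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```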
Official SDKs can be installed for Python or JavaScript:

```bash
pip install helpingai
npm install helpingai
```

Use existing OpenAI libraries by changing the base URL:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.helpingai.co/v1",
    api_key="YOUR_API_KEY"
)
```
Always implement proper error handling:

```python
try:
    response = client.chat.completions.create(
        model="Dhanishtha-2.0-preview",
        messages=[{"role": "user", "content": "Hello"}]
    )
except Exception as e:
    print(f"Error: {e}")
```

| Code | Description | Solution |
|---|---|---|
| `invalid_api_key` | API key is invalid | Check your API key |
| `insufficient_quota` | Usage quota exceeded | Upgrade plan or wait |
| `model_not_found` | Model doesn't exist | Use valid model name |
| `rate_limit_exceeded` | Too many requests | Implement rate limiting |
| `invalid_request_error` | Request format invalid | Check request structure |
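A hedged sketch of branching on these codes, assuming the error body follows the JSON error object shown earlier and that errors arrive with 4xx status codes:

```python
import time
import requests

def call_helpingai(url: str, headers: dict, payload: dict) -> dict:
    """POST a request and translate HelpingAI error codes into actions."""
    resp = requests.post(url, headers=headers, json=payload)
    if resp.status_code < 400:
        return resp.json()

    error = resp.json().get("error", {})
    code = error.get("code")
    if code == "rate_limit_exceeded":
        time.sleep(5)  # back off, then let the caller retry
        raise RuntimeError("Rate limited: retry after backing off")
    if code == "invalid_api_key":
        raise RuntimeError("Check your API key in the HelpingAI Dashboard")
    if code == "insufficient_quota":
        raise RuntimeError("Usage quota exceeded: upgrade plan or wait")
    raise RuntimeError(f"{error.get('type')}: {error.get('message')}")
```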
Webhook support is available for real-time notifications.
Need help with the API?