HelpingAI provides a comprehensive REST API that's fully compatible with OpenAI's API format while adding unique Chain of Recursive Thoughts capabilities. This overview covers the core concepts, architecture, and features you need to understand to build with HelpingAI.
Our flagship model, Dhanishtha-2.0-preview, is built on a revolutionary architecture that combines Chain of Recursive Thoughts reasoning with built-in emotional intelligence.
HelpingAI's API is designed around familiar OpenAI conventions, so you can keep using the tools and SDKs you already know while gaining access to these capabilities.
All interactions use a message-based format:
```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "How do I solve this complex problem?" },
    {
      "role": "assistant",
      "content": "Let me work through this step by step..."
    }
  ]
}
```

Message Roles:

- `system`: Sets the assistant's behavior and context
- `user`: Input from the end user
- `assistant`: Responses generated by the model
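The examples throughout this page use an OpenAI-style Python client. Here is a minimal setup sketch; the base URL shown is a placeholder, and when using the stock OpenAI SDK, HelpingAI-specific parameters such as `hideThink` may need to be passed via `extra_body`, since that SDK rejects unknown keyword arguments:

```python
from openai import OpenAI

# Placeholder base URL -- substitute the actual HelpingAI endpoint.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.helpingai.co/v1",
)

response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How do I solve this complex problem?"},
    ],
)
print(response.choices[0].message.content)
```

Later snippets assume a `client` constructed like this.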
Currently available models:
| Model | Description | Context Length | Best For |
|---|---|---|---|
| Dhanishtha-2.0-preview | Our flagship model with emotional intelligence | 32,768 tokens | General use, emotional understanding |
Tokens are the pieces of text the model processes. Understanding token usage helps you estimate costs and stay within the model's 32,768-token context window, which your prompt and the completion share. As a rule of thumb, one token corresponds to roughly four characters of English text.
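If you want a quick client-side estimate before sending a request, the four-characters-per-token heuristic is a reasonable first approximation. This is a sketch only; it is not HelpingAI's actual tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token of English text."""
    return max(1, len(text) // 4)

print(estimate_tokens("How do I solve this complex problem?"))  # ~9 tokens
```

For exact counts, use the `usage` field returned with each response (see Monitor Token Usage below).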
HelpingAI automatically detects and responds to emotional cues:
```python
# Input with emotional context
messages = [
    {"role": "user", "content": "I just got rejected from my dream job and I'm devastated."}
]

# HelpingAI responds with empathy:
# "I'm so sorry to hear about the job rejection. That must be incredibly disappointing,
# especially when it was your dream job. It's completely natural to feel devastated right now..."
```

See how the AI thinks with the `hideThink` parameter:
```python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's the best approach to solve climate change?"}],
    hideThink=False  # Shows <think>...</think> content
)
```

When `hideThink=False`, you'll see the AI's reasoning process in `<think>` tags before the final response.
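If you want to display the reasoning and the answer separately, one option is to split the response on the `<think>` tags. A minimal sketch, assuming the reasoning arrives inline in `message.content` as described above:

```python
import re

content = response.choices[0].message.content

# Separate <think>...</think> reasoning from the final answer.
thoughts = re.findall(r"<think>(.*?)</think>", content, flags=re.DOTALL)
answer = re.sub(r"<think>.*?</think>", "", content, flags=re.DOTALL).strip()

for t in thoughts:
    print("Reasoning:", t.strip())
print("Answer:", answer)
```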
Get responses in real time as they're generated:
```python
stream = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Write a poem about hope"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

Execute functions and access external data:
```python
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"}
            },
            "required": ["location"]
        }
    }
}]

response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
    tool_choice="auto"
)
```
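When the model decides to call a tool, the call arrives as structured data on the response message rather than as plain text. A minimal handling loop, assuming the OpenAI-style response shape and a hypothetical `get_weather` implementation of your own:

```python
import json

message = response.choices[0].message

if message.tool_calls:
    for tool_call in message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        result = get_weather(**args)  # get_weather() is your own function -- hypothetical here

        # Send the result back so the model can produce a final answer.
        follow_up = client.chat.completions.create(
            model="Dhanishtha-2.0-preview",
            messages=[
                {"role": "user", "content": "What's the weather in Tokyo?"},
                message,
                {"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(result)},
            ],
        )
        print(follow_up.choices[0].message.content)
```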
The API exposes the following endpoints:

| Endpoint | Method | Description |
|---|---|---|
| /v1/chat/completions | POST | Generate chat completions |
| /v1/models | GET | List available models |
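For example, you can discover the available models at runtime through the OpenAI-compatible client configured earlier (a sketch):

```python
# Query the /v1/models endpoint and print each model ID.
for model in client.models.list():
    print(model.id)  # e.g. Dhanishtha-2.0-preview
```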
All requests require an API key in the Authorization header:
```
Authorization: Bearer YOUR_API_KEY
```
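To illustrate where the header goes, here is a raw HTTP request sketch using the `requests` library; the base URL is a placeholder:

```python
import requests

# Placeholder base URL -- substitute the actual HelpingAI endpoint.
resp = requests.post(
    "https://api.helpingai.co/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "Dhanishtha-2.0-preview",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(resp.status_code, resp.json())
```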
HelpingAI returns standard HTTP status codes and detailed error messages:

```json
{
  "error": {
    "message": "Invalid API key provided",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
```

Common Error Codes:
- 400: Bad Request - Invalid parameters
- 401: Unauthorized - Invalid API key
- 429: Too Many Requests - Rate limit exceeded
- 500: Internal Server Error - Server issue
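For 429 responses in particular, a simple exponential backoff usually suffices. A sketch, assuming the OpenAI Python SDK from earlier, which raises `RateLimitError` on 429 responses:

```python
import time
import openai

def create_with_retry(max_retries: int = 3, **kwargs):
    """Retry chat completions on HTTP 429 with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(**kwargs)
        except openai.RateLimitError:  # 429: Too Many Requests
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...

response = create_with_retry(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Hello!"}],
)
```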
Provide context about the user's emotional state:

```python
# Good: Provides emotional context
messages = [
    {"role": "system", "content": "The user is feeling anxious about an upcoming presentation."},
    {"role": "user", "content": "Can you help me prepare?"}
]

# Better: Let the user express emotions naturally
messages = [
    {"role": "user", "content": "I'm really nervous about my presentation tomorrow. Can you help me prepare?"}
]
```

Set clear context and behavior:
```python
messages = [
    {
        "role": "system",
        "content": "You are a supportive career counselor. Be empathetic and provide practical advice."
    },
    {"role": "user", "content": "I'm thinking about changing careers but I'm scared."}
]
```

Always handle potential errors in streaming:
```python
try:
    stream = client.chat.completions.create(
        model="Dhanishtha-2.0-preview",
        messages=messages,
        stream=True
    )
    for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")
except Exception as e:
    print(f"Streaming error: {e}")
```

Track usage to optimize costs:
```python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=messages
)

usage = response.usage
print(f"Prompt tokens: {usage.prompt_tokens}")
print(f"Completion tokens: {usage.completion_tokens}")
print(f"Total tokens: {usage.total_tokens}")pip install helpingainpm install helpingaiSince HelpingAI is OpenAI-compatible, you can use:
Install the HelpingAI SDK for your language:

```bash
pip install helpingai
```

```bash
npm install helpingai
```

Since HelpingAI is OpenAI-compatible, you can also use the official OpenAI SDKs:

- Python: point `base_url` at the HelpingAI endpoint
- Node.js: point `baseURL` at the HelpingAI endpoint

Ready to dive deeper? Explore the guides and full API reference in the rest of this documentation.
Need help? We're here for you; reach out through our support channels.