Understand the fundamental concepts that make HelpingAI unique and how to leverage them in your applications.
Emotional Intelligence (EI) in HelpingAI refers to the model's ability to:
HelpingAI's emotional intelligence operates through:
```python
messages = [
    {"role": "user", "content": "I just lost my job and I don't know what to do. I'm scared about the future."}
]
```

**Traditional AI response:**

> "I understand you lost your job. Here are some steps you can take: 1. Update your resume 2. Apply to new positions 3. Network with contacts..."

**HelpingAI response:**

> "I can hear how frightening and overwhelming this must feel right now. Losing a job is one of life's most stressful experiences, and it's completely natural to feel scared about what comes next. Your feelings are valid, and you're not alone in this.
>
> Let's take this one step at a time. First, give yourself permission to process this emotionally - it's okay to feel upset. When you're ready, we can work together on practical next steps, but there's no rush. What feels most important to address right now?"

Unlike traditional AI models that finish thinking before responding, HelpingAI uses Chain of Recursive Thoughts - thinking during response generation. This breakthrough approach enables:
Control reasoning visibility with the unique `hideThink` parameter:
`hideThink=True` shows only the final response:
**Python (requests):**

```python
import requests

data = {
    "model": "Dhanishtha-2.0-preview",
    "messages": [{"role": "user", "content": "What's 15 * 24?"}],
    "hideThink": True  # Default behavior
}
response = requests.post(
    "https://api.helpingai.co/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json"},
    json=data
)
# Output: "15 * 24 = 360"
```

**Python (OpenAI SDK):**

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.helpingai.co/v1", api_key="YOUR_API_KEY")
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's 15 * 24?"}],
    hideThink=True
)
# Output: "15 * 24 = 360"
```

**Python (HelpingAI SDK):**

```python
from helpingai import HelpingAI

client = HelpingAI(api_key="YOUR_API_KEY")
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's 15 * 24?"}],
    hideThink=True
)
# Output: "15 * 24 = 360"
```

**JavaScript (axios):**

```javascript
const axios = require('axios');

const response = await axios.post(
  'https://api.helpingai.co/v1/chat/completions',
  {
    model: 'Dhanishtha-2.0-preview',
    messages: [{role: 'user', content: "What's 15 * 24?"}],
    hideThink: true
  },
  {
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY',
      'Content-Type': 'application/json'
    }
  }
);
// Output: "15 * 24 = 360"
```

**JavaScript (OpenAI SDK):**

```javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://api.helpingai.co/v1',
  apiKey: 'YOUR_API_KEY'
});
const response = await openai.chat.completions.create({
  model: 'Dhanishtha-2.0-preview',
  messages: [{role: 'user', content: "What's 15 * 24?"}],
  hideThink: true
});
// Output: "15 * 24 = 360"
```

**JavaScript (HelpingAI SDK):**

```javascript
import { HelpingAI } from 'helpingai';

const client = new HelpingAI({apiKey: 'YOUR_API_KEY'});
const response = await client.chat.completions.create({
  model: 'Dhanishtha-2.0-preview',
  messages: [{role: 'user', content: "What's 15 * 24?"}],
  hideThink: true
});
// Output: "15 * 24 = 360"
```

`hideThink=False` shows the reasoning process in `<think>` tags:
```python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's 15 * 24?"}],
    hideThink=False
)
```

Output:
```
<think>
I need to multiply 15 by 24. Let me break this down:
- 15 × 24
- I can use the standard multiplication method
- 15 × 4 = 60 (ones place)
- 15 × 20 = 300 (tens place)
- 60 + 300 = 360
Let me verify: 15 × 24 = 15 × (20 + 4) = (15 × 20) + (15 × 4) = 300 + 60 = 360
Yes, that's correct.
</think>

15 × 24 = 360

I calculated this by breaking it down:
- 15 × 4 = 60
- 15 × 20 = 300
- 60 + 300 = 360
```

HelpingAI achieves 5x better token efficiency than GPT-4 through:
Task: Generate an empathetic customer support response
| Model | Input Tokens | Output Tokens | Total | Cost |
|---|---|---|---|---|
| HelpingAI | 25 | 95 | 120 | $0.00018 |
| GPT-4 | 25 | 575 | 600 | $0.018 |
| Savings | Same | 6x fewer | 5x fewer | 99% less |
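The ratios in the comparison table can be checked directly from its token counts and costs; a quick sketch:

```python
# Token counts and costs taken from the comparison table above
helpingai_output, gpt4_output = 95, 575
helpingai_total, gpt4_total = 120, 600
helpingai_cost, gpt4_cost = 0.00018, 0.018

print(round(gpt4_output / helpingai_output))          # 6  -> "6x fewer" output tokens
print(gpt4_total / helpingai_total)                   # 5.0 -> "5x fewer" total tokens
print(round((1 - helpingai_cost / gpt4_cost) * 100))  # 99 -> "99% less" cost
```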
HelpingAI is designed as a drop-in replacement for OpenAI's API:
```python
# Before (OpenAI)
from openai import OpenAI
client = OpenAI(api_key="openai-key")

# After (HelpingAI) - just change the base URL and key
from openai import OpenAI
client = OpenAI(
    base_url="https://api.helpingai.co/v1",
    api_key="helpingai-key"
)

# All existing code works the same!
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",  # Just change the model
    messages=[{"role": "user", "content": "Hello!"}]
)
```

Works with existing OpenAI-compatible tools:
Set AI behavior and context:
```python
messages = [
    {
        "role": "system",
        "content": "You are a compassionate mental health support assistant with high emotional intelligence."
    },
    {"role": "user", "content": "I'm feeling overwhelmed"}
]
```

Messages from the human user:

```python
{"role": "user", "content": "I'm struggling with anxiety today"}
```

Messages from the AI:

```python
{"role": "assistant", "content": "I understand you're experiencing anxiety today..."}
```

Results from function calls:

```python
{"role": "tool", "tool_call_id": "call_123", "content": "Weather: 72°F, sunny"}
```

HelpingAI maintains emotional and contextual memory throughout conversations:
```python
# First message
{"role": "user", "content": "I'm nervous about my job interview tomorrow"}

# Later in the conversation - the AI remembers the emotional context
{"role": "user", "content": "How did it go?"}
# The AI will recall the interview context and the user's emotional state
```

Lower temperature for focused, factual responses:

```python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's the capital of France?"}],
    temperature=0.2
)
# Output: "The capital of France is Paris."
```

Higher temperature for creative responses:

```python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Write a haiku about hope"}],
    temperature=0.8
)
# Output: more creative and varied haiku responses
```

Streaming provides responses as they're generated:
```python
stream = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

HelpingAI can intelligently decide when and how to use tools:
```python
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            }
        }
    }
}]

# The AI will automatically decide to call the weather function
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools
)
```

Provide emotional context for better responses:
```python
# Good: includes emotional context
messages = [
    {"role": "user", "content": "I'm excited but nervous about starting my new job tomorrow"}
]

# Better: more specific emotional context
messages = [
    {"role": "system", "content": "The user is starting a new job and feeling mixed emotions of excitement and anxiety"},
    {"role": "user", "content": "Any advice for my first day?"}
]
```

Enable reasoning for complex tasks:
```python
# For complex problems
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Solve this step by step: ..."}],
    hideThink=False  # Show reasoning
)

# For simple responses
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's the weather?"}],
    hideThink=True  # Hide reasoning (default)
)
```

`max_tokens` limits
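When requesting responses with `hideThink=False`, the reasoning arrives inline in `<think>` tags, as shown earlier. A minimal client-side sketch for separating the reasoning from the final answer, assuming the tags appear literally in the response text:

```python
import re

def split_reasoning(text: str):
    """Split a hideThink=False response into (reasoning blocks, final answer)."""
    reasoning = re.findall(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>15 × 20 = 300, 15 × 4 = 60, total 360</think>\n15 × 24 = 360"
)
print(answer)  # -> "15 × 24 = 360"
```

The helper function above is an illustration, not part of any SDK; adapt it to however your application stores response content.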