Core Concepts

Understand the fundamental concepts that make HelpingAI unique and how to leverage them in your applications.

Emotional Intelligence

What is Emotional Intelligence in AI?

Emotional Intelligence (EI) in HelpingAI refers to the model's ability to:

  • Recognize emotions in text and context
  • Understand emotional nuances and subtleties
  • Respond appropriately to different emotional states
  • Maintain emotional consistency throughout conversations
  • Adapt communication style based on emotional context

How It Works

HelpingAI's emotional intelligence operates through:

  1. Emotion Detection: Analyzing word choice, tone, and context
  2. Emotional Mapping: Understanding the emotional journey in conversations
  3. Empathetic Response Generation: Crafting responses that acknowledge and validate emotions
  4. Contextual Adaptation: Adjusting approach based on emotional state

Example: Emotional Intelligence in Action

Input with Emotional Context

python
messages = [
    {"role": "user", "content": "I just lost my job and I don't know what to do. I'm scared about the future."}
]

Traditional AI Response

"I understand you lost your job. Here are some steps you can take: 1. Update your resume 2. Apply to new positions 3. Network with contacts..."

HelpingAI Response

text
"I can hear how frightening and overwhelming this must feel right now. Losing a job is one of life's most stressful experiences, and it's completely natural to feel scared about what comes next. Your feelings are valid, and you're not alone in this.

Let's take this one step at a time. First, give yourself permission to process this emotionally - it's okay to feel upset. When you're ready, we can work together on practical next steps, but there's no rush. What feels most important to address right now?"
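
To try this yourself, pass the message list to the chat completions endpoint. A minimal sketch, assuming an OpenAI-compatible client like the ones configured later in this guide:

python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=messages,  # the emotionally contextual message above
)
print(response.choices[0].message.content)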

Chain of Recursive Thoughts

What is Chain of Recursive Thoughts?

Unlike traditional AI models that think before responding, HelpingAI uses Chain of Recursive Thoughts - thinking during response generation. This breakthrough approach enables:

  • Real-time logic injection: Adapts reasoning as the conversation evolves
  • Substep re-evaluation: Continuously refines understanding and responses
  • Dynamic flow control: Adjusts approach based on context and emotional cues
  • Human-like thought loops: Mirrors natural human reasoning patterns

The hideThink Parameter

Control reasoning visibility with the unique hideThink parameter:

hideThink: true (Default)

Shows only the final response:

Python (using requests)

python
import requests

data = {
    "model": "Dhanishtha-2.0-preview",
    "messages": [{"role": "user", "content": "What's 15 * 24?"}],
    "hideThink": True  # Default behavior
}

response = requests.post(
    "https://api.helpingai.co/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json"},
    json=data
)
# Output: "15 * 24 = 360"

Python (using OpenAI SDK)

python
from openai import OpenAI

client = OpenAI(base_url="https://api.helpingai.co/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's 15 * 24?"}],
    extra_body={"hideThink": True}  # the OpenAI SDK passes non-standard params via extra_body
)
# Output: "15 * 24 = 360"

Python (using HelpingAI SDK)

python
from helpingai import HelpingAI

client = HelpingAI(api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's 15 * 24?"}],
    hideThink=True
)
# Output: "15 * 24 = 360"

JavaScript (using axios)

javascript
const axios = require('axios');

const response = await axios.post(
  'https://api.helpingai.co/v1/chat/completions',
  {
    model: 'Dhanishtha-2.0-preview',
    messages: [{role: 'user', content: "What's 15 * 24?"}],
    hideThink: true
  },
  {
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY',
      'Content-Type': 'application/json'
    }
  }
);
// Output: "15 * 24 = 360"

JavaScript (using OpenAI package)

javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://api.helpingai.co/v1',
  apiKey: 'YOUR_API_KEY'
});

const response = await openai.chat.completions.create({
  model: 'Dhanishtha-2.0-preview',
  messages: [{role: 'user', content: "What's 15 * 24?"}],
  hideThink: true
});
// Output: "15 * 24 = 360"

JavaScript (using HelpingAI SDK)

javascript
import { HelpingAI } from 'helpingai';

const client = new HelpingAI({apiKey: 'YOUR_API_KEY'});

const response = await client.chat.completions.create({
  model: 'Dhanishtha-2.0-preview',
  messages: [{role: 'user', content: "What's 15 * 24?"}],
  hideThink: true
});
// Output: "15 * 24 = 360"

hideThink: false

Shows the reasoning process in <think> tags:

python
# client here is the HelpingAI SDK client; with the OpenAI SDK,
# pass extra_body={"hideThink": False} instead
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's 15 * 24?"}],
    hideThink=False
)

Output:

text
<think>
I need to multiply 15 by 24. Let me break this down:
- 15 × 24
- I can use the standard multiplication method
- 15 × 4 = 60 (ones place)
- 15 × 20 = 300 (tens place)
- 60 + 300 = 360

Let me verify: 15 × 24 = 15 × (20 + 4) = (15 × 20) + (15 × 4) = 300 + 60 = 360
Yes, that's correct.
</think>

15 × 24 = 360

I calculated this by breaking it down:
- 15 × 4 = 60
- 15 × 20 = 300
- 60 + 300 = 360
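
If you surface reasoning with hideThink: false, you may want to separate the reasoning from the final answer before displaying it. A minimal sketch, assuming the reasoning arrives in a single <think>...</think> block (split_reasoning is a hypothetical helper, not part of any SDK):

python
import re

def split_reasoning(text):
    """Split a response into (reasoning, answer), assuming one <think> block."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    return match.group(1).strip(), text[match.end():].strip()

reasoning, answer = split_reasoning(response.choices[0].message.content)
print(answer)  # show only the final answer to the user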

Token Efficiency

What Makes HelpingAI Token-Efficient?

HelpingAI achieves 5x better token efficiency than GPT-4 through:

  1. Optimized Architecture: Purpose-built for efficiency
  2. Intermediate Memory Snapshots: Reuses reasoning patterns
  3. Smart Context Management: Efficient handling of conversation history
  4. Emotional Context Compression: Compact emotional state representation

Token Efficiency Example

Task: Generate an empathetic customer support response

Model       Input Tokens   Output Tokens   Total      Cost
HelpingAI   25             95              120        $0.00018
GPT-4       25             575             600        $0.018
Savings     Same           6x fewer        5x fewer   99% less
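
You can check these numbers for your own workloads: OpenAI-compatible responses include a usage object with token counts. A minimal sketch:

python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Draft an empathetic support reply."}],
)
usage = response.usage
print(f"input={usage.prompt_tokens} output={usage.completion_tokens} total={usage.total_tokens}")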

OpenAI Compatibility

Drop-in Replacement

HelpingAI is designed as a drop-in replacement for OpenAI's API:

python
# Before (OpenAI)
from openai import OpenAI
client = OpenAI(api_key="openai-key")

# After (HelpingAI) - just change the base URL and key
from openai import OpenAI
client = OpenAI(
    base_url="https://api.helpingai.co/v1",
    api_key="helpingai-key"
)

# All existing code works the same!
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",  # Just change the model
    messages=[{"role": "user", "content": "Hello!"}]
)

Compatible Libraries

Works with existing OpenAI-compatible tools:

  • LangChain: Use with OpenAI provider
  • LlamaIndex: Compatible with OpenAI integration
  • Vercel AI SDK: Drop-in replacement
  • OpenAI Python/JS libraries: Just change base URL
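
For example, pointing LangChain's OpenAI chat model at HelpingAI might look like this (a sketch, assuming the langchain-openai package and that it forwards a custom base_url):

python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://api.helpingai.co/v1",
    api_key="YOUR_API_KEY",
    model="Dhanishtha-2.0-preview",
)
print(llm.invoke("Hello!").content)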

Message Roles

System Messages

Set AI behavior and context:

python
messages = [
    {
        "role": "system", 
        "content": "You are a compassionate mental health support assistant with high emotional intelligence."
    },
    {"role": "user", "content": "I'm feeling overwhelmed"}
]

User Messages

Messages from the human user:

python
{"role": "user", "content": "I'm struggling with anxiety today"}

Assistant Messages

Messages from the AI:

python
{"role": "assistant", "content": "I understand you're experiencing anxiety today..."}

Tool Messages

Results from function calls:

python
{"role": "tool", "tool_call_id": "call_123", "content": "Weather: 72°F, sunny"}

Context and Memory

Context Window

  • Size: 32,768 tokens
  • Management: Automatic context trimming
  • Best Practice: Monitor token usage for long conversations
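
A rough trimming sketch for long conversations, assuming ~4 characters per token as a heuristic (use a real tokenizer for accurate counts):

python
def trim_messages(messages, max_tokens=32768, chars_per_token=4):
    """Drop the oldest non-system turns until the estimate fits the window."""
    messages = list(messages)
    def estimate(msgs):
        return sum(len(m["content"]) for m in msgs) // chars_per_token
    while estimate(messages) > max_tokens and len(messages) > 1:
        # preserve the system message at index 0 if present
        messages.pop(1 if messages[0]["role"] == "system" else 0)
    return messages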

Conversation Memory

HelpingAI maintains emotional and contextual memory throughout conversations:

python
# First message
{"role": "user", "content": "I'm nervous about my job interview tomorrow"}

# Later in conversation - AI remembers emotional context
{"role": "user", "content": "How did it go?"}
# AI will remember the interview context and emotional state

Temperature and Creativity

Temperature Scale (0-2)

  • 0.0-0.3: Focused, deterministic responses (good for factual tasks)
  • 0.4-0.7: Balanced creativity and consistency (default range)
  • 0.8-1.0: More creative and varied responses
  • 1.1-2.0: Highly creative but potentially inconsistent

Examples by Temperature

Temperature 0.2 (Factual)

python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's the capital of France?"}],
    temperature=0.2
)
# Output: "The capital of France is Paris." {#output-the-capital-of-france-is-paris}

Temperature 0.8 (Creative)

python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Write a haiku about hope"}],
    temperature=0.8
)
# Output: More creative and varied haiku responses

Streaming

Real-time Response Generation

Streaming provides responses as they're generated:

python
stream = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Benefits of Streaming

  • Better UX: Users see responses immediately
  • Perceived Speed: Feels faster even with same total time
  • Early Termination: Can stop generation early if needed (see the sketch below)
  • Real-time Interaction: Enables more dynamic applications
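
A minimal early-termination sketch: stop consuming the stream once enough text has arrived (the 500-character cutoff is an arbitrary placeholder):

python
collected = []
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        collected.append(delta)
    if sum(len(part) for part in collected) > 500:
        break  # stop reading the stream early
text = "".join(collected)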

Tool Calling

Function Execution During Conversations

HelpingAI can intelligently decide when and how to use tools:

python
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            }
        }
    }
}]

# AI will automatically decide to call the weather function
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools
)
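
To complete the round trip, read the tool call from the response, run the function yourself, and send the result back as a tool message. A sketch, assuming a hypothetical local get_weather implementation:

python
import json

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_weather(args["location"])  # hypothetical local function

    followup = client.chat.completions.create(
        model="Dhanishtha-2.0-preview",
        messages=[
            {"role": "user", "content": "What's the weather in Tokyo?"},
            message,  # assistant message carrying the tool call
            {"role": "tool", "tool_call_id": call.id, "content": result},
        ],
        tools=tools,
    )
    print(followup.choices[0].message.content)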

Best Practices

1. Leverage Emotional Intelligence

Provide emotional context for better responses:

python
# Good: Includes emotional context
messages = [
    {"role": "user", "content": "I'm excited but nervous about starting my new job tomorrow"}
]

# Better: More specific emotional context
messages = [
    {"role": "system", "content": "The user is starting a new job and feeling mixed emotions of excitement and anxiety"},
    {"role": "user", "content": "Any advice for my first day?"}
]

2. Use Chain of Recursive Thoughts Strategically

Enable reasoning for complex tasks:

python
# For complex problems
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Solve this step by step: ..."}],
    hideThink=False  # Show reasoning
)

# For simple responses
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's the weather?"}],
    hideThink=True  # Hide reasoning (default)
)

3. Optimize for Token Efficiency

  • Use clear, concise prompts
  • Set appropriate max_tokens limits
  • Choose optimal temperature for your use case
  • Implement conversation trimming for long chats
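
A compact sketch combining these tips (conversation is a placeholder for your message history; trim_messages is the helper sketched in the Context and Memory section):

python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=trim_messages(conversation),  # keep history inside the window
    max_tokens=150,   # cap output length
    temperature=0.3,  # focused responses for factual or support tasks
)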

Next Steps