Chat Completions API {#chat-completions-api}

The chat completions API is the primary way to interact with HelpingAI. It generates conversational responses with built-in emotional intelligence and Chain of Recursive Thoughts capabilities.

`POST /v1/chat/completions`

Creates a chat completion with emotional intelligence and Chain of Recursive Thoughts.

Request Body {#request-body}

Required Parameters {#required-parameters}

| Parameter | Type | Description |
| --- | --- | --- |
| `model` | string | ID of the model to use. Currently supports `Dhanishtha-2.0-preview` |
| `messages` | array | A list of messages comprising the conversation so far |

Optional Parameters {#optional-parameters}

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| `temperature` | number | Controls randomness (0-2). Higher values make output more random | 0.7 |
| `max_tokens` | integer | Maximum number of tokens to generate (1-4000) | 150 |
| `top_p` | number | Nucleus sampling parameter (0-1) | 1 |
| `frequency_penalty` | number | Penalizes frequent tokens (-2 to 2) | 0 |
| `presence_penalty` | number | Penalizes new tokens (-2 to 2) | 0 |
| `stream` | boolean | Whether to stream back partial progress | false |
| `hideThink` | boolean | Hide the Chain of Recursive Thoughts wrapped in `<think>` tags | true |
| `tools` | array | List of tools the model may call | null |
| `tool_choice` | string/object | Controls which tool is called | "auto" |

Message Object {#message-object}

Each message in the `messages` array should have:

| Field | Type | Description |
| --- | --- | --- |
| `role` | string | The role of the message author (`system`, `user`, `assistant`, or `tool`) |
| `content` | string | The contents of the message |
| `name` | string | (Optional) The name of the author of this message |
| `tool_calls` | array | (Optional) Tool calls generated by the model |
| `tool_call_id` | string | (Optional) The tool call that this message is responding to |
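
For example, a multi-turn conversation with a system prompt could be expressed as the following `messages` array (the contents are purely illustrative):

```json
[
  {"role": "system", "content": "You are a supportive assistant."},
  {"role": "user", "content": "I had a rough day."},
  {"role": "assistant", "content": "I'm sorry to hear that. What happened?"},
  {"role": "user", "content": "My project deadline got moved up."}
]
```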

Examples {#examples}

Basic Chat Completion {#basic-chat-completion}

Python (using requests) {#python-using-requests}

```python
import requests

url = "https://api.helpingai.co/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}
data = {
    "model": "Dhanishtha-2.0-preview",
    "messages": [
        {"role": "user", "content": "I'm feeling overwhelmed with my workload today."}
    ],
    "temperature": 0.7,
    "max_tokens": 200
}

response = requests.post(url, headers=headers, json=data)
print(response.json())
```

Python (using OpenAI SDK) {#python-using-openai-sdk}

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.helpingai.co/v1",
    api_key="YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "user", "content": "I'm feeling overwhelmed with my workload today."}
    ],
    temperature=0.7,
    max_tokens=200
)

print(response.choices[0].message.content)
```

Python (using HelpingAI SDK) {#python-using-helpingai-sdk}

```python
from helpingai import HelpingAI

client = HelpingAI(api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "user", "content": "I'm feeling overwhelmed with my workload today."}
    ],
    temperature=0.7,
    max_tokens=200
)

print(response.choices[0].message.content)
```

JavaScript (using axios) {#javascript-using-axios}

```javascript
const axios = require("axios");

(async () => {
  const response = await axios.post(
    "https://api.helpingai.co/v1/chat/completions",
    {
      model: "Dhanishtha-2.0-preview",
      messages: [
        {
          role: "user",
          content: "I'm feeling overwhelmed with my workload today.",
        },
      ],
      temperature: 0.7,
      max_tokens: 200,
    },
    {
      headers: {
        Authorization: "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
      },
    }
  );

  console.log(response.data.choices[0].message.content);
})();
```

JavaScript (using OpenAI package) {#javascript-using-openai-package}

```javascript
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "https://api.helpingai.co/v1",
  apiKey: "YOUR_API_KEY",
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: "Dhanishtha-2.0-preview",
    messages: [
      {
        role: "user",
        content: "I'm feeling overwhelmed with my workload today.",
      },
    ],
    temperature: 0.7,
    max_tokens: 200,
  });

  console.log(completion.choices[0].message.content);
}

main();
```

JavaScript (using HelpingAI SDK) {#javascript-using-helpingai-sdk}

```javascript
import { HelpingAI } from "helpingai";

const client = new HelpingAI({
  apiKey: "YOUR_API_KEY",
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "Dhanishtha-2.0-preview",
    messages: [
      {
        role: "user",
        content: "I'm feeling overwhelmed with my workload today.",
      },
    ],
    temperature: 0.7,
    max_tokens: 200,
  });

  console.log(completion.choices[0].message.content);
}

main();
```

With System Message {#with-system-message}

Python (using requests) {#python-using-requests}

```python
import requests

url = "https://api.helpingai.co/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}
data = {
    "model": "Dhanishtha-2.0-preview",
    "messages": [
        {"role": "system", "content": "You are a supportive career counselor who provides empathetic guidance."},
        {"role": "user", "content": "I'm thinking about changing careers but I'm scared."}
    ],
    "temperature": 0.8,
    "max_tokens": 300
}

response = requests.post(url, headers=headers, json=data)
print(response.json())
```

Python (using OpenAI SDK) {#python-using-openai-sdk}

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.helpingai.co/v1",
    api_key="YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "system", "content": "You are a supportive career counselor who provides empathetic guidance."},
        {"role": "user", "content": "I'm thinking about changing careers but I'm scared."}
    ],
    temperature=0.8,
    max_tokens=300
)

print(response.choices[0].message.content)
```

Python (using HelpingAI SDK) {#python-using-helpingai-sdk}

```python
from helpingai import HelpingAI

client = HelpingAI(api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "system", "content": "You are a supportive career counselor who provides empathetic guidance."},
        {"role": "user", "content": "I'm thinking about changing careers but I'm scared."}
    ],
    temperature=0.8,
    max_tokens=300
)

print(response.choices[0].message.content)
```

JavaScript (using axios) {#javascript-using-axios}

```javascript
const axios = require("axios");

(async () => {
  const response = await axios.post(
    "https://api.helpingai.co/v1/chat/completions",
    {
      model: "Dhanishtha-2.0-preview",
      messages: [
        {
          role: "system",
          content:
            "You are a supportive career counselor who provides empathetic guidance.",
        },
        {
          role: "user",
          content: "I'm thinking about changing careers but I'm scared.",
        },
      ],
      temperature: 0.8,
      max_tokens: 300,
    },
    {
      headers: {
        Authorization: "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
      },
    }
  );

  console.log(response.data.choices[0].message.content);
})();
```

JavaScript (using OpenAI package) {#javascript-using-openai-package}

```javascript
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "https://api.helpingai.co/v1",
  apiKey: "YOUR_API_KEY",
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: "Dhanishtha-2.0-preview",
    messages: [
      {
        role: "system",
        content:
          "You are a supportive career counselor who provides empathetic guidance.",
      },
      {
        role: "user",
        content: "I'm thinking about changing careers but I'm scared.",
      },
    ],
    temperature: 0.8,
    max_tokens: 300,
  });

  console.log(completion.choices[0].message.content);
}

main();
```

JavaScript (using HelpingAI SDK) {#javascript-using-helpingai-sdk}

```javascript
import { HelpingAI } from "helpingai";

const client = new HelpingAI({
  apiKey: "YOUR_API_KEY",
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "Dhanishtha-2.0-preview",
    messages: [
      {
        role: "system",
        content:
          "You are a supportive career counselor who provides empathetic guidance.",
      },
      {
        role: "user",
        content: "I'm thinking about changing careers but I'm scared.",
      },
    ],
    temperature: 0.8,
    max_tokens: 300,
  });

  console.log(completion.choices[0].message.content);
}

main();
```

With Chain of Recursive Thoughts (hideThink=false) {#with-chain-of-recursive-thoughts-hidethinkfalse}

Python (using requests) {#python-using-requests}

```python
import requests

url = "https://api.helpingai.co/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}
data = {
    "model": "Dhanishtha-2.0-preview",
    "messages": [
        {"role": "user", "content": "What's 15 * 24? Show your work."}
    ],
    "hideThink": False,  # Shows reasoning process
    "temperature": 0.3,
    "max_tokens": 300
}

response = requests.post(url, headers=headers, json=data)
print(response.json())
```

Python (using OpenAI SDK) {#python-using-openai-sdk}

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.helpingai.co/v1",
    api_key="YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "user", "content": "What's 15 * 24? Show your work."}
    ],
    # The OpenAI Python SDK rejects unknown keyword arguments, so
    # non-standard parameters like hideThink go through extra_body
    extra_body={"hideThink": False},  # Shows reasoning process
    temperature=0.3,
    max_tokens=300
)

print(response.choices[0].message.content)
```

Python (using HelpingAI SDK) {#python-using-helpingai-sdk}

```python
from helpingai import HelpingAI

client = HelpingAI(api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "user", "content": "What's 15 * 24? Show your work."}
    ],
    hideThink=False,  # Shows reasoning process
    temperature=0.3,
    max_tokens=300
)

print(response.choices[0].message.content)
```

JavaScript (using axios) {#javascript-using-axios}

```javascript
const axios = require("axios");

(async () => {
  const response = await axios.post(
    "https://api.helpingai.co/v1/chat/completions",
    {
      model: "Dhanishtha-2.0-preview",
      messages: [{ role: "user", content: "What's 15 * 24? Show your work." }],
      hideThink: false, // Shows reasoning process
      temperature: 0.3,
      max_tokens: 300,
    },
    {
      headers: {
        Authorization: "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
      },
    }
  );

  console.log(response.data.choices[0].message.content);
})();
```

JavaScript (using OpenAI package) {#javascript-using-openai-package}

```javascript
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "https://api.helpingai.co/v1",
  apiKey: "YOUR_API_KEY",
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: "Dhanishtha-2.0-preview",
    messages: [{ role: "user", content: "What's 15 * 24? Show your work." }],
    hideThink: false, // Shows reasoning process
    temperature: 0.3,
    max_tokens: 300,
  });

  console.log(completion.choices[0].message.content);
}

main();
```

JavaScript (using HelpingAI SDK) {#javascript-using-helpingai-sdk}

```javascript
import { HelpingAI } from "helpingai";

const client = new HelpingAI({
  apiKey: "YOUR_API_KEY",
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "Dhanishtha-2.0-preview",
    messages: [{ role: "user", content: "What's 15 * 24? Show your work." }],
    hideThink: false, // Shows reasoning process
    temperature: 0.3,
    max_tokens: 300,
  });

  console.log(completion.choices[0].message.content);
}

main();
```

Response Format {#response-format}

A successful response returns a JSON object with the following structure:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "Dhanishtha-2.0-preview",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I understand you're feeling overwhelmed with your workload today. That's a really common experience, and it's completely valid to feel that way when you have a lot on your plate..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 45,
    "total_tokens": 65
  }
}
```

Response Fields {#response-fields}

| Field | Type | Description |
| --- | --- | --- |
| `id` | string | Unique identifier for the chat completion |
| `object` | string | Object type, always "chat.completion" |
| `created` | integer | Unix timestamp of when the completion was created |
| `model` | string | The model used for the completion |
| `choices` | array | List of completion choices |
| `usage` | object | Token usage information |

Choice Object {#choice-object}

| Field | Type | Description |
| --- | --- | --- |
| `index` | integer | The index of the choice in the list |
| `message` | object | The generated message |
| `finish_reason` | string | Reason the model stopped generating tokens |

Message Object (Response) {#message-object-response}

| Field | Type | Description |
| --- | --- | --- |
| `role` | string | Always "assistant" for generated responses |
| `content` | string | The generated response content |
| `tool_calls` | array | (Optional) Tool calls made by the model |

Usage Object {#usage-object}

| Field | Type | Description |
| --- | --- | --- |
| `prompt_tokens` | integer | Number of tokens in the prompt |
| `completion_tokens` | integer | Number of tokens in the completion |
| `total_tokens` | integer | Total tokens used (prompt + completion) |

Streaming {#streaming}

For streaming responses, set `stream: true`. You'll receive Server-Sent Events (SSE) with partial responses:

```json
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1677652288,"model":"Dhanishtha-2.0-preview","choices":[{"index":0,"delta":{"content":"I understand"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1677652288,"model":"Dhanishtha-2.0-preview","choices":[{"index":0,"delta":{"content":" you're"},"finish_reason":null}]}

data: [DONE]
```
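
If you consume the raw stream yourself rather than through an SDK, each event line can be parsed as above. A minimal Python sketch, assuming the chunk format shown (the helper name `parse_sse_line` is ours, not part of the API):

```python
import json

def parse_sse_line(line: str):
    """Extract the content delta from one SSE data line, or None."""
    line = line.strip()
    if not line.startswith("data:"):
        return None          # ignore blank lines and comments
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return None          # end-of-stream sentinel, not JSON
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content")

# Accumulate a streamed reply from raw SSE lines.
lines = [
    'data: {"choices":[{"index":0,"delta":{"content":"I understand"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{"content":" you\'re"},"finish_reason":null}]}',
    "data: [DONE]",
]
reply = "".join(c for c in (parse_sse_line(l) for l in lines) if c)
print(reply)  # I understand you're
```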

Error Responses {#error-responses}

Error responses follow this format:

```json
{
  "error": {
    "message": "Invalid API key provided",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
```

Common Error Codes {#common-error-codes}

| Code | Description |
| --- | --- |
| `invalid_api_key` | The API key is invalid or missing |
| `insufficient_quota` | You've exceeded your usage quota |
| `model_not_found` | The specified model doesn't exist |
| `invalid_request_error` | The request format is invalid |
| `rate_limit_exceeded` | Too many requests in a short time |
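
A minimal Python sketch of acting on these codes (the classification and retry policy are illustrative choices of ours, not prescribed by the API):

```python
import time

def classify_error(body: dict) -> str:
    """Map an error response body to a coarse handling action."""
    code = body.get("error", {}).get("code", "")
    if code == "rate_limit_exceeded":
        return "retry"      # transient: back off and try again
    if code in {"invalid_api_key", "insufficient_quota"}:
        return "give_up"    # needs operator action; retrying won't help
    return "fail"           # surface to the caller

def call_with_retry(send, attempts=3):
    """Call send() (any function returning (ok, body)), retrying on rate limits."""
    for attempt in range(attempts):
        ok, body = send()
        if ok:
            return body
        if classify_error(body) != "retry":
            raise RuntimeError(body["error"]["message"])
        time.sleep(2 ** attempt)  # exponential back-off: 1s, 2s, 4s
    raise RuntimeError("retries exhausted")
```

`send` can wrap any HTTP client, e.g. `lambda: (r.ok, r.json())` around a `requests.post` call.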

Best Practices {#best-practices}

  1. Provide Emotional Context: Include emotional cues in your messages for better responses
  2. Use System Messages: Set clear context and behavior expectations
  3. Monitor Token Usage: Track usage to optimize costs
  4. Handle Errors Gracefully: Always implement proper error handling
  5. Use Appropriate Temperature: Lower for factual tasks, higher for creative tasks
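
Practice #3 can be as simple as accumulating the `usage` object from each response. A sketch (the `UsageTracker` helper is ours, not an SDK feature):

```python
class UsageTracker:
    """Accumulate token usage across chat completion responses."""

    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def record(self, response: dict):
        # Works on the raw JSON body; SDK response objects expose
        # the same fields under response.usage.
        usage = response.get("usage", {})
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

tracker = UsageTracker()
tracker.record({"usage": {"prompt_tokens": 20, "completion_tokens": 45, "total_tokens": 65}})
print(tracker.total_tokens)  # 65
```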

Next Steps {#next-steps}

  • Streaming Guide - Learn about real-time responses

  • Tool Calling - Function calling capabilities

  • Chain of Recursive Thoughts - Understanding AI thoughts

  • Models API - Available models and their capabilities