Chain of Recursive Thoughts Guide

HelpingAI's Chain of Recursive Thoughts engine is a feature that lets you see how the AI thinks during response generation. Unlike traditional models that finish their reasoning before responding, HelpingAI reasons while the response is being generated, making its thought process transparent and adaptable.

What is Chain of Recursive Thoughts?

Chain of Recursive Thoughts is HelpingAI's unique ability to:

  • Think during generation: Reasoning happens in real-time as the response is created
  • Show thought process: Optional visibility into AI's reasoning with <think> tags
  • Adapt dynamically: Reasoning can change based on new information or context
  • Reuse logic blocks: Efficient processing through cached reasoning steps

The hideThink Parameter

Control reasoning visibility with the hideThink parameter:

  • hideThink: true (default): Hides reasoning, shows only final response
  • hideThink: false: Shows reasoning in <think>...</think> tags
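In a raw request, toggling the parameter is a one-field change to the JSON body. A minimal sketch of the two payloads (model name and message taken from the examples below):

```python
# Minimal sketch: the two request bodies differ only in the hideThink field.
base = {
    "model": "Dhanishtha-2.0-preview",
    "messages": [{"role": "user", "content": "What's 15 * 24?"}],
}

hidden = {**base, "hideThink": True}    # final answer only (default)
visible = {**base, "hideThink": False}  # answer wrapped with <think>...</think>

# Everything else in the payload is identical.
assert {k: v for k, v in hidden.items() if k != "hideThink"} == \
       {k: v for k, v in visible.items() if k != "hideThink"}
```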

Basic Examples

Python (using requests)

Python
import requests

url = "https://api.helpingai.co/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

# Show reasoning process
data = {
    "model": "Dhanishtha-2.0-preview",
    "messages": [
        {"role": "user", "content": "What's 15 * 24? Show your work step by step."}
    ],
    "hideThink": False,  # Show reasoning
    "temperature": 0.3,
    "max_tokens": 400
}

response = requests.post(url, headers=headers, json=data)
result = response.json()
print(result['choices'][0]['message']['content'])

Python (using OpenAI SDK)

Python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.helpingai.co/v1",
    api_key="YOUR_API_KEY"
)

# Mathematical reasoning
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "user", "content": "If I have 3 apples and buy 2 more bags with 5 apples each, how many apples do I have total?"}
    ],
    extra_body={"hideThink": False},  # OpenAI SDK: vendor params go via extra_body
    temperature=0.2,
    max_tokens=300
)

print(response.choices[0].message.content)

Python (using HelpingAI SDK)

Python
from helpingai import HelpingAI

client = HelpingAI(api_key="YOUR_API_KEY")

# Complex problem solving
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "user", "content": "A train leaves Station A at 2 PM traveling at 60 mph. Another train leaves Station B at 3 PM traveling at 80 mph toward Station A. If the stations are 280 miles apart, when will they meet?"}
    ],
    hideThink=False,
    temperature=0.1,
    max_tokens=500
)

print(response.choices[0].message.content)

JavaScript (using axios)

JavaScript
const axios = require('axios');

(async () => {
  const response = await axios.post(
    'https://api.helpingai.co/v1/chat/completions',
    {
      model: 'Dhanishtha-2.0-preview',
      messages: [
        {role: 'user', content: 'Explain the logic behind solving this equation: 2x + 5 = 13'}
      ],
      hideThink: false,  // Show reasoning
      temperature: 0.3,
      max_tokens: 400
    },
    {
      headers: {
        'Authorization': 'Bearer YOUR_API_KEY',
        'Content-Type': 'application/json'
      }
    }
  );
  
  console.log(response.data.choices[0].message.content);
})();

JavaScript (using OpenAI package)

JavaScript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://api.helpingai.co/v1',
  apiKey: 'YOUR_API_KEY'
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: 'Dhanishtha-2.0-preview',
    messages: [
      {role: 'user', content: 'What are the pros and cons of renewable energy? Think through each type.'}
    ],
    hideThink: false,  // Show reasoning
    temperature: 0.4,
    max_tokens: 600
  });

  console.log(completion.choices[0].message.content);
}

main();

JavaScript (using HelpingAI SDK)

JavaScript
import { HelpingAI } from 'helpingai';

const client = new HelpingAI({
  apiKey: 'YOUR_API_KEY'
});

async function main() {
  const completion = await client.chat.completions.create({
    model: 'Dhanishtha-2.0-preview',
    messages: [
      {role: 'user', content: 'Design a simple algorithm to find the largest number in an array. Explain your thought process.'}
    ],
    hideThink: false,
    temperature: 0.2,
    max_tokens: 500
  });

  console.log(completion.choices[0].message.content);
}

main();

Example Response with Reasoning

When hideThink=false, you'll see responses like this:

Text
<think>
The user is asking me to multiply 15 by 24 and show my work. I should break this down step by step to make it clear.

I can use the standard multiplication algorithm:
- 15 × 24
- I'll multiply 15 by 4 (the ones digit)
- Then multiply 15 by 20 (the tens digit)
- Then add the results

15 × 4 = 60
15 × 20 = 300
60 + 300 = 360

Let me double-check this by using a different method:
15 × 24 = 15 × (20 + 4) = (15 × 20) + (15 × 4) = 300 + 60 = 360

Yes, that's correct.
</think>

I'll solve 15 × 24 step by step:

**Method 1: Breaking down by place value**
- 15 × 4 (ones digit) = 60
- 15 × 20 (tens digit) = 300
- 60 + 300 = 360

**Method 2: Verification using distributive property**
- 15 × 24 = 15 × (20 + 4)
- = (15 × 20) + (15 × 4)
- = 300 + 60 = 360

Therefore, 15 × 24 = 360.

Advanced Use Cases

Emotional Reasoning

Python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.helpingai.co/v1",
    api_key="YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "user", "content": "I just got rejected from my dream job and I'm feeling devastated. I don't know what to do next."}
    ],
    extra_body={"hideThink": False},  # OpenAI SDK: vendor params go via extra_body
    temperature=0.7,
    max_tokens=500
)

print(response.choices[0].message.content)

Expected reasoning might include:

Text
<think>
The user is expressing deep emotional pain from job rejection. I can sense several emotions here:
1. Devastation - they used the word "devastated" which indicates intense disappointment
2. Confusion - "I don't know what to do next" shows they're feeling lost
3. This was their "dream job" so the stakes were high for them

I should:
- Acknowledge their pain empathetically
- Validate that this feeling is normal and understandable
- Provide gentle, practical guidance without being pushy
- Offer hope while being realistic
- Maybe suggest some concrete next steps when they're ready

I need to be careful not to minimize their feelings or rush them through the grieving process.
</think>

Complex Problem Solving

Python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "user", "content": "How would you design a system to handle 1 million concurrent users for a social media platform?"}
    ],
    extra_body={"hideThink": False},
    temperature=0.3,
    max_tokens=800
)

Creative Writing with Reasoning

Python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "user", "content": "Write a short story about a robot who discovers emotions. Show me how you develop the plot."}
    ],
    extra_body={"hideThink": False},
    temperature=0.8,
    max_tokens=600
)

Reasoning in Streaming

You can also see reasoning in real-time with streaming:

Python Example

Python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.helpingai.co/v1",
    api_key="YOUR_API_KEY"
)

stream = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "user", "content": "Solve this logic puzzle: If all roses are flowers, and some flowers are red, can we conclude that some roses are red?"}
    ],
    extra_body={"hideThink": False},
    stream=True,
    temperature=0.2,
    max_tokens=400
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)
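When streaming with hideThink=false, the <think> block arrives inline with the rest of the tokens, so a client that wants to render reasoning and answer separately must split the accumulated text itself. A minimal sketch of that splitting logic; the simulated chunks below stand in for the `delta.content` values a real stream would deliver:

```python
import re

def split_reasoning(accumulated: str):
    """Separate <think> blocks from the final answer in accumulated stream text."""
    pattern = r'<think>(.*?)</think>'
    reasoning = re.findall(pattern, accumulated, re.DOTALL)
    answer = re.sub(pattern, '', accumulated, flags=re.DOTALL).strip()
    return reasoning, answer

# Simulated chunks standing in for chunk.choices[0].delta.content:
chunks = ["<think>All roses are flowers; ", "some flowers are red. ",
          "The conclusion does not follow.</think>", "No, we cannot ",
          "conclude that some roses are red."]

buffer = ""
for chunk in chunks:
    buffer += chunk

reasoning, answer = split_reasoning(buffer)
print(answer)
```

In a real client you would call split_reasoning on the buffer each time a chunk arrives, updating the two display areas incrementally.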

Comparing Hidden vs Visible Reasoning

Hidden Reasoning (hideThink=true)

Python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "user", "content": "What's the capital of France?"}
    ],
    extra_body={"hideThink": True}  # Default behavior
)
# Output: "The capital of France is Paris."

Visible Reasoning (hideThink=false)

Python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "user", "content": "What's the capital of France?"}
    ],
    extra_body={"hideThink": False}
)
# Output:
# <think>
# This is a straightforward geography question. The capital of France is Paris.
# This is basic knowledge that I'm confident about.
# </think>
# The capital of France is Paris.

Best Practices

1. Use for Complex Tasks

Enable reasoning for tasks that benefit from step-by-step thinking:

  • Mathematical problems
  • Logic puzzles
  • Complex analysis
  • Creative writing
  • Emotional support

2. Adjust Temperature

Lower temperature for logical reasoning:

Python
# For mathematical/logical tasks
temperature=0.1

# For creative reasoning
temperature=0.7

3. Provide Context

Give the AI context about what kind of reasoning you want:

Python
messages = [
    {"role": "system", "content": "Think through problems step by step, showing your reasoning clearly."},
    {"role": "user", "content": "How do I solve this calculus problem?"}
]

4. Parse Reasoning Content

Extract reasoning from responses:

Python
import re

def extract_reasoning(content):
    """Split a response into its <think> reasoning blocks and the final answer."""
    think_pattern = r'<think>(.*?)</think>'
    reasoning = re.findall(think_pattern, content, re.DOTALL)
    final_response = re.sub(think_pattern, '', content, flags=re.DOTALL).strip()
    return reasoning, final_response

reasoning, response = extract_reasoning(result['choices'][0]['message']['content'])

5. Monitor Token Usage

Reasoning increases token usage:

Python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=messages,
    extra_body={"hideThink": False}
)

usage = response.usage
print(f"Reasoning tokens: ~{usage.completion_tokens * 0.3}")  # Rough estimate
print(f"Total tokens: {usage.total_tokens}")

Use Cases

Educational Applications

  • Show students how to approach problems
  • Demonstrate critical thinking processes
  • Explain complex concepts step-by-step

Debugging and Development

  • Understand AI decision-making
  • Debug unexpected responses
  • Improve prompt engineering

Research and Analysis

  • Transparent analytical processes
  • Reproducible reasoning chains
  • Quality assurance for AI outputs

Creative Collaboration

  • See creative thought processes
  • Understand artistic decisions
  • Collaborate on creative projects

Limitations

  1. Token Usage: Reasoning increases token consumption
  2. Response Length: Longer responses with reasoning visible
  3. Processing Time: Slightly longer generation time
  4. Not Always Perfect: Reasoning reflects AI's process, not human logic

Next Steps