
@TonsOfFun
Contributor

Summary

This PR adds support for capturing and tracking extended thinking traces from LLMs with reasoning capabilities, such as Claude (extended thinking) and OpenAI's o1 models.

New Components:

  • SolidAgent::Reasonable - Concern for models to store reasoning data
  • SolidAgent::Reasonable::Reason - Value object for reasoning traces
  • SolidAgent::HasReasons - Concern for agents to capture reasoning
  • Generator for adding reasoning columns to existing models
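As a rough sketch of the value-object idea in plain Ruby (independent of the gem), a reasoning trace might carry content, token counts, and a redaction flag. The field names below are assumptions for illustration, not the PR's actual `SolidAgent::Reasonable::Reason` API:

```ruby
# Hypothetical sketch of a reasoning-trace value object.
# Field names (content, token_count, redacted, thinking_ms) are
# illustrative; SolidAgent::Reasonable::Reason may differ.
Reason = Struct.new(:content, :token_count, :redacted, :thinking_ms, keyword_init: true) do
  # True when the provider redacted the reasoning content.
  def redacted?
    !!redacted
  end

  # Short human-readable summary, hiding redacted traces.
  def summary(limit = 60)
    redacted? ? "[redacted by provider]" : content.to_s[0, limit]
  end
end

reason = Reason.new(content: "Let me analyze this systematically...",
                    token_count: 450, redacted: false)
reason.summary(20)  # => "Let me analyze this "
```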

Features:

  • Capture reasoning content and token counts from LLM responses
  • Track thinking time and metadata
  • Support for redacted reasoning (provider-redacted content)
  • Reasoning chain aggregation for multi-step processes
  • Statistics and summarization helpers
  • Database persistence support via HasContext integration
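The chain-aggregation and statistics features can be sketched in plain Ruby; the helper names here are illustrative, not the PR's actual `HasReasons` API:

```ruby
# Hypothetical sketch of reasoning-chain aggregation; helper names
# are illustrative, not the PR's actual API.
Step = Struct.new(:content, :token_count, keyword_init: true)

chain = [
  Step.new(content: "Step 1: parse the request", token_count: 120),
  Step.new(content: "Step 2: weigh the options", token_count: 330)
]

# Token totals matter because reasoning tokens are often billed separately.
total_tokens = chain.sum(&:token_count)           # => 450
full_trace   = chain.map(&:content).join("\n")    # the multi-step trace as one string
```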

Motivation

Inspired by the AgentFragment pattern in tonsoffun/writebook, which tracks content transformations. This PR extends that concept to track AI reasoning/thinking traces for:

  • Transparency: Understanding why an AI made certain decisions
  • Debugging: Identifying issues in AI logic
  • Auditing: Maintaining records of AI decision-making processes
  • Cost tracking: Reasoning tokens are often billed separately

Usage

```ruby
class AnalysisAgent < ApplicationAgent
  include SolidAgent::HasReasons
  include SolidAgent::HasContext

  has_reasons auto_capture: true, persist: true

  def analyze
    result = prompt(
      messages: analysis_messages,
      extended_thinking: true
    )

    # Access captured reasoning
    last_reasoning.content      #=> "Let me analyze this systematically..."
    total_reasoning_tokens      #=> 450
    reasoning_chain             #=> Full reasoning trace
  end
end
```
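The PR description doesn't show how `auto_capture` hooks into the prompt cycle. As a speculative plain-Ruby sketch (not the PR's actual implementation), a prepended module could wrap `prompt` and record reasoning from each response:

```ruby
# Speculative sketch of auto-capture (NOT the PR's implementation):
# a prepended module wraps prompt and records reasoning content.
module CapturesReasoning
  def reasons
    @reasons ||= []
  end

  def prompt(**kwargs)
    response = super
    reasons << response[:reasoning] if response[:reasoning]
    response
  end
end

# Stand-in agent with a fake prompt call, for illustration only.
class FakeAgent
  prepend CapturesReasoning

  def prompt(**_kwargs)
    { text: "done", reasoning: "Let me analyze this systematically..." }
  end
end

agent = FakeAgent.new
agent.prompt(extended_thinking: true)
agent.reasons.first  # => "Let me analyze this systematically..."
```

`Module#prepend` places the wrapper ahead of the class in the ancestor chain, so every `prompt` call is intercepted without the agent author changing their method.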

Test plan

  • Unit tests for Reason value object
  • Unit tests for HasReasons concern
  • Full test suite passes (189 tests, 441 assertions)

🤖 Generated with Claude Code

TonsOfFun and others added 2 commits January 29, 2026 10:49
- Introduced new agent manifests for CrewAI, Dotprompt, GitHub Prompt, and full-featured agents.
- Created tests for validating agent manifests, including parsing and schema validation.
- Implemented parsers for CrewAI, Dotprompt, and GitHub Prompt formats.
- Added validation tests for agent attributes, including name, version, and tool names.
- Enhanced the Picoschema conversion tests for JSON schema compatibility.
Co-Authored-By: Claude <noreply@anthropic.com>
