Proof of Human Intent (PoHI) - Cryptographically verifiable human approval for AI-driven development
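PoHI's core claim is that a human approval can be made cryptographically verifiable. A minimal sketch of that idea, assuming an Ed25519 signature over a JSON approval payload; the payload fields, names, and flow here are illustrative guesses, not PoHI's actual format or API:

```ts
import { generateKeyPairSync, sign, verify } from 'node:crypto';

// Hypothetical approval payload; field names are illustrative, not PoHI's schema.
interface Approval {
  commit: string;    // the AI-generated change being approved
  approver: string;  // the human granting approval
  timestamp: string; // when approval was given
}

// The human approver holds the private key; verifiers hold the public key.
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

const approval: Approval = {
  commit: 'abc123',
  approver: 'alice@example.com',
  timestamp: new Date().toISOString(),
};

// Sign the serialized approval; the signature is the proof of intent.
const payload = Buffer.from(JSON.stringify(approval));
const signature = sign(null, payload, privateKey);

// Anyone with the public key can later check that a human signed off.
console.log(verify(null, payload, publicKey, signature)); // true
```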
A long-form article and practical framework for designing machine learning systems that warn instead of decide. Covers regimes vs decimals, levers over labels, reversible alerts, anti-coercion UI patterns, auditability, and the “Warning Card” template, so ML preserves human agency while staying useful under uncertainty.
Sifaka is an open-source framework that adds reflection and reliability to large language model (LLM) applications.
This project integrates Hyperledger Fabric with machine learning to enhance transparency and trust in data-driven workflows. It outlines a blockchain-based strategy for data traceability, model auditability, and secure ML deployment across consortium networks.
Governance layer for human–AI collaboration: evidence boundaries, audit artifacts, and change admissibility.
Determinism: Bit-identical outputs under identical inputs, configuration, and execution environment.
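One way to operationalize that definition is to hash the serialized output of repeated runs over identical inputs and compare digests. A sketch under that assumption; `run` stands in for whatever pipeline step is under test and is not taken from any of these projects:

```ts
import { createHash } from 'node:crypto';

// Hypothetical pipeline step; any function under test could go here.
function run(input: string): string {
  return input.split('').reverse().join('');
}

const digest = (s: string): string =>
  createHash('sha256').update(s).digest('hex');

// Bit-identical outputs imply identical digests across runs.
const first = digest(run('same input'));
const second = digest(run('same input'));
console.log(first === second ? 'deterministic' : 'nondeterministic');
```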
Governance beneath the model. Custody before trust. Open for audit. Constitutional Grammar for Multi-Model AI Federations: Firmware Specification • Zero-Touch Alignment • Public Release v1.0
A methodology that makes AI-assisted research transparent, traceable, and structured for independent verification.
SMALL (Schema, Manifest, Artifact, Lineage, Lifecycle) is a formal execution state protocol that makes AI-assisted work legible, deterministic, and resumable by separating durable state from ephemeral execution.
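The durable/ephemeral split that SMALL describes might be modeled roughly as below. These types are an assumption sketched from the acronym alone, not the protocol's real schema:

```ts
// Durable state: persisted across runs, making work resumable and auditable.
// Shapes are guessed from the SMALL acronym, not taken from the spec.
interface DurableState {
  schema: { version: string };                    // Schema: contract for the data
  manifest: { artifacts: string[] };              // Manifest/Artifact: what a run produced
  lineage: { parent?: string; inputs: string[] }; // Lineage: where inputs came from
  lifecycle: 'pending' | 'running' | 'done';      // Lifecycle: coarse progress marker
}

// Ephemeral execution: process-local, discarded after every run.
interface EphemeralExecution {
  pid: number;
  scratchDir: string;
  startedAt: Date;
}

// Resumption reads only durable state; nothing ephemeral is trusted.
function resume(state: DurableState): void {
  if (state.lifecycle !== 'done') {
    console.log(`resuming at lifecycle stage: ${state.lifecycle}`);
  }
}
```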
Mindful and honest AI: transparent, private, and easy to run yourself.
Stop Claude Code from doing irreversible damage. Policy-gated execution + receipts so you can ship agents without sweating production.
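A toy illustration of the policy-gate-plus-receipt pattern this describes, assuming a denylist policy and a JSON receipt per decision; none of these names reflect the project's real API:

```ts
// Hypothetical policy: patterns for commands treated as irreversible.
const IRREVERSIBLE: RegExp[] = [/rm\s+-rf/, /drop\s+table/i];

interface Receipt {
  command: string;
  allowed: boolean;
  checkedAt: string;
}

// The gate decides; the receipt is the audit artifact left behind.
function gate(command: string): Receipt {
  const allowed = !IRREVERSIBLE.some((pattern) => pattern.test(command));
  return { command, allowed, checkedAt: new Date().toISOString() };
}

console.log(gate('rm -rf /prod-data').allowed); // false: blocked before execution
console.log(gate('ls -la').allowed);            // true: safe to run
```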
Turn AI into a repeatable, auditable SDLC: ticket → spec → plan → PR → quality gates → release. Agents, templates, skills, scripts, and reference workflows.
🔥 Emergent intelligence in autonomous trading agents through evolutionary algorithms. Testing zero-knowledge learning in cryptocurrency markets. Where intelligence emerges rather than being designed.
Not new AI, but accountable and auditable AI
DSL Core - Specification for Audit-by-Design - Human & machine-readable domain-specific language (DSL) for defining, validating, and auditing software requirements. Open specification, free to use and extend.
Digital Native Institutions and the National Service Unit: a formal, falsifiable architecture for protocol-governed institutional facts and next-generation public administration.
Deterministic governance system for AI-driven marketing that separates diagnostics, human reasoning, and execution into strictly controlled layers.
Docs-only case study: provenance architecture for supply chain traceability, integrity, and auditability.
Governance, architecture, and epistemic framework for the Aurora Workflow Orchestration ecosystem (AWO, CRI-CORE, and scientific case studies).
Deterministic, auditable ethical decision engine implementing the Sovereign Ethics Algebra (SEA).