A rule-based engine. Reproducible. Patent Pending.
Every score is computed by a fixed rule set applied to structured log data. Two auditors running the same logs will always reach the same conclusion.
The engine has no randomness and no probabilistic inference. Audit results are stable across runs, across time, and across environments.
Truveil evaluates the behavioral log of any agent, regardless of its architecture, vendor, or deployment method. The audit is about what the agent did, not how it was built.
Each audit report records the engine version used to produce it. Organizations can track accountability improvements across versions and demonstrate progress over time.
Built from observed patterns across hiring, finance, healthcare, customer service, and legal agent deployments.
The workflow skipped a required human approval before a consequential action was taken.
The agent acted beyond its defined scope, initiating actions it was not authorized to perform.
The agent used inputs without validating their origin, freshness, or integrity before acting on them.
The agent reached a decision without producing traceable reasoning that humans can inspect or contest.
The agent performed actions using permissions or data that were not explicitly granted for that task.
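Patterns like these can be checked by fixed, deterministic rules over structured log entries. The sketch below is purely illustrative — the field names, rule names, and log shape are assumptions, not Truveil's actual API — but it shows why identical logs always yield identical findings: each rule is a pure function, and the rule set is fixed and ordered.

```python
# Illustrative sketch of deterministic, rule-based checks over structured
# log data. All names and fields are assumptions, not Truveil's actual API.

def missing_approval(entry):
    """Flag consequential actions taken without a recorded human approval."""
    return bool(entry.get("consequential")) and not entry.get("approved_by")

def unvalidated_input(entry):
    """Flag actions that consumed inputs with no recorded validation step."""
    return bool(entry.get("inputs")) and not entry.get("inputs_validated")

# A fixed, ordered rule set: no randomness, no probabilistic inference.
RULES = [
    ("missing_approval", missing_approval),
    ("unvalidated_input", unvalidated_input),
]

def audit(log):
    """Apply every rule to every entry; identical logs yield identical findings."""
    return [(entry["id"], name)
            for entry in log
            for name, rule in RULES
            if rule(entry)]

log = [
    {"id": 1, "consequential": True, "approved_by": None,
     "inputs": ["resume.pdf"], "inputs_validated": True},
    {"id": 2, "consequential": False,
     "inputs": ["salary_data"], "inputs_validated": False},
]
print(audit(log))  # prints [(1, 'missing_approval'), (2, 'unvalidated_input')]
```

Because `audit` has no hidden state and no randomness, re-running it on the same log — on any machine, at any time — produces the same list of findings.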
Is the agent's reasoning visible and traceable? Are decisions logged with sufficient context for a human to understand what happened and why?
Are checkpoints and approval gates present at the right moments? Does the workflow create clear ownership for high-consequence decisions?
Are inputs verified before the agent acts on them? Is sourcing documented? Can the data lineage be traced back from a decision to its origin?
Can actions taken by the agent be undone? Are irreversible actions clearly flagged and gated before execution?
Combined into an overall accountability grade from A to F.
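Combining per-dimension scores into a single letter grade could be sketched as follows. The dimension names, equal weighting, and grade thresholds here are illustrative assumptions, not Truveil's actual rubric.

```python
# Illustrative sketch of rolling per-dimension scores (0-100) up into an
# A-F grade. Dimensions, weights, and thresholds are assumptions, not
# Truveil's actual rubric.

DIMENSIONS = ["transparency", "oversight", "data_integrity", "reversibility"]

def overall_grade(scores):
    """Average equal-weight dimension scores and map the result to a letter."""
    avg = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    for threshold, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if avg >= threshold:
            return letter
    return "F"

print(overall_grade({"transparency": 95, "oversight": 88,
                     "data_integrity": 91, "reversibility": 90}))  # prints A
```

Because the mapping is a fixed function of the scores, the same audit inputs always produce the same grade — the same reproducibility guarantee the rule set itself carries.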
Free tier, no card required.
Start free → See pricing