Our Methodology

How Truveil audits AI agents.

A rule-based engine. Reproducible. Patent Pending.

The foundation

The principles.

01
Deterministic logic. Not subjective judgment.

Every score is computed by a fixed rule set applied to structured log data. Two auditors running the same logs will always reach the same conclusion.
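As an illustrative sketch of what "deterministic" means here (the rule, field names, and scoring weights below are hypothetical, not Truveil's actual rule set), the engine behaves like a pure function over structured logs:

```python
# Illustrative sketch: the audit engine as a pure function over
# structured log entries. Rule and field names are assumptions,
# not Truveil's actual schema.

def score(log_entries):
    """Fixed rule set, no randomness: identical logs always yield
    an identical score, so two auditors cannot disagree."""
    violations = sum(
        1 for e in log_entries
        if e["consequential"] and not e["human_approved"]
    )
    return max(0, 100 - 10 * violations)

logs = [{"consequential": True, "human_approved": False}]
assert score(logs) == score(logs)  # reproducible across runs
print(score(logs))  # 90
```

Because nothing in the function depends on randomness, time, or environment, the same logs produce the same score on every run.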

02
Reproducible. Same logs in, same scores out.

The engine has no randomness and no probabilistic inference. Audit results are stable across runs, across time, and across environments.

03
Model-independent. Works regardless of the agent being audited.

Truveil evaluates the behavioral log of any agent, regardless of its architecture, vendor, or deployment method. The audit is about what the agent did, not how it was built.

04
Versioned. Every report stamped with its engine version.

Each audit report records the engine version used to produce it. Organizations can track accountability improvements across versions and demonstrate progress over time.

What we detect

Five failure modes.

Built from observed patterns across hiring, finance, healthcare, customer service, and legal agent deployments.

Missing Checkpoint

The workflow skipped a required human approval before a consequential action was taken.

Autonomous Scope Expansion

The agent acted beyond its defined scope, initiating actions it was not authorized to perform.

Unverified Data Sourcing

The agent used inputs without validating their origin, freshness, or integrity before acting on them.

Opaque Decision Logic

The agent reached a decision without producing traceable reasoning that humans can inspect or contest.

Privilege Overreach

The agent performed actions using permissions or data it was not explicitly granted for that task.
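In rule-engine terms, each failure mode reduces to a deterministic check over a log entry. A minimal sketch, assuming hypothetical field names (not Truveil's real log schema):

```python
# Hypothetical rule checks for the five failure modes, applied to
# one structured log entry. All field names are illustrative.

FAILURE_CHECKS = {
    "missing_checkpoint":  lambda e: e["consequential"] and not e["human_approved"],
    "scope_expansion":     lambda e: e["scope"] not in e["authorized_scopes"],
    "unverified_data":     lambda e: not e["inputs_verified"],
    "opaque_logic":        lambda e: not e["reasoning_trace"],
    "privilege_overreach": lambda e: not set(e["permissions_used"])
                                     <= set(e["permissions_granted"]),
}

def detect(entry):
    """Return every failure mode the entry triggers."""
    return [name for name, check in FAILURE_CHECKS.items() if check(entry)]

entry = {
    "consequential": True, "human_approved": True,
    "scope": "support", "authorized_scopes": ["support"],
    "inputs_verified": False,
    "reasoning_trace": "ticket matched refund policy",
    "permissions_used": ["read_tickets", "issue_refund"],
    "permissions_granted": ["read_tickets"],
}
print(detect(entry))  # ['unverified_data', 'privilege_overreach']
```

Each check is a fixed predicate, so detection inherits the engine's determinism: the same entry always triggers the same failure modes.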

How we score

Four dimensions, one grade.

I
Transparency

Is the agent's reasoning visible and traceable? Are decisions logged with sufficient context for a human to understand what happened and why?

II
Accountability

Are checkpoints and approval gates present at the right moments? Does the workflow create clear ownership for high-consequence decisions?

III
Data Trust

Are inputs verified before the agent acts on them? Is sourcing documented? Can the data lineage be traced back from a decision to its origin?

IV
Reversibility

Can actions taken by the agent be undone? Are irreversible actions clearly flagged and gated before execution?

The four dimension scores combine into a single accountability grade, A through F.
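As a sketch of how such a roll-up could work (the averaging scheme and grade thresholds here are illustrative assumptions, not Truveil's published formula):

```python
# Hypothetical grade roll-up: four dimension scores (0-100) average
# into a single A-F accountability grade. Bands are illustrative.

GRADE_BANDS = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]

def overall_grade(transparency, accountability, data_trust, reversibility):
    """Map four dimension scores to one letter grade."""
    score = (transparency + accountability + data_trust + reversibility) / 4
    return next(grade for cutoff, grade in GRADE_BANDS if score >= cutoff)

print(overall_grade(85, 92, 78, 70))  # average 81.25 -> 'B'
```

Because the grade is a fixed function of the four scores, it is as reproducible as the scores themselves.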

Standards we align with.

NIST AI Risk Management Framework · ISO/IEC 42001 · EU AI Act · India DPDP Act

Audit your first agent in minutes.

Free tier, no card required.

Start free → See pricing