Building Trustworthy AI in Audit Through Data Traceability

Can we trust the numbers that AI produces?

Chris Ortega speaking at Connect 2025 New York
AI is moving faster than trust
AI adoption across audit and finance is accelerating. Ortega noted that more than 70% of firms are already experimenting with AI tools to automate reviews, summarize documentation, and perform validations. Yet only a small fraction have governance frameworks in place to verify the results.
For auditors, that creates a gap between speed and assurance. AI can deliver answers faster than ever, but without visibility into how those answers were formed, confidence erodes.
Traceability bridges that gap. It gives teams a way to see how AI reaches its conclusions, to check the logic, and to tie every result back to its source.
AI everywhere, governance nowhere
Across the profession, AI is being used to write memos, extract data, reconcile accounts, and summarize standards. Yet few firms have frameworks to trace those results back to source data or validate their accuracy.
The result: faster workflows, but fragile confidence.
As Ortega put it, “AI doesn’t remove professional skepticism. It amplifies it.”
Auditors can’t assume AI outputs are correct. They must verify them, document them, and understand the logic behind them. Without traceability, automation creates opacity, not efficiency.
The missing link: traceability as proof of trust
Traceability anchors trust. It connects every data point, summary, or calculation back to its source. Ortega framed it as the AI trust equation:
Accuracy × Traceability × Governance = AI Value
“Accuracy alone isn’t enough,” he explained. “If you can’t show where the number came from, you can’t defend it.”
He compared it to his early audit days at Ernst & Young. “My senior told me two things would define my career: trust and the ability to verify. That’s the same standard we need for AI.”
Traceability turns AI from a black box into a transparent partner. It gives auditors confidence to rely on results and managers the evidence to stand behind them.
When applied correctly, traceability provides:
- Transparency, by showing how AI reached its conclusion.
- Control, by allowing users to confirm that data wasn’t altered.
- Accountability, by connecting results to authoritative sources.
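These three properties can be made concrete in tooling. As a minimal sketch (the names and fields below are hypothetical illustrations, not anything Ortega described), a team might require every AI-generated figure to carry a provenance record that links it to its source document and stores a hash of the supporting passage, so reviewers can later confirm the evidence wasn't altered:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceRecord:
    """Ties one AI-generated value back to its source evidence."""
    value: str            # the AI output (e.g., a reconciled balance)
    source_id: str        # identifier of the authoritative source document
    source_excerpt: str   # the exact passage the value was derived from
    excerpt_sha256: str   # hash of the excerpt, to detect later alteration

def record_output(value: str, source_id: str, excerpt: str) -> ProvenanceRecord:
    """Create a traceable record for an AI output."""
    digest = hashlib.sha256(excerpt.encode("utf-8")).hexdigest()
    return ProvenanceRecord(value, source_id, excerpt, digest)

def verify(record: ProvenanceRecord, current_excerpt: str) -> bool:
    """Re-hash the source passage to confirm it still matches the record."""
    current = hashlib.sha256(current_excerpt.encode("utf-8")).hexdigest()
    return current == record.excerpt_sha256
```

In this sketch, a reviewer who questions a number can pull up the exact source passage (transparency), re-run `verify` against the live document (control), and follow `source_id` back to the authoritative record (accountability).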
Ortega illustrated this through practical examples. In one client scenario, he used a private AI assistant trained exclusively on IRS documentation to answer a tax question mid-meeting. Because every output referenced official regulations, the client received accurate, compliant advice in real time.
Traceability doesn’t slow AI down. It strengthens its reliability and transforms automation from a novelty into a professional tool.
How to turn experimentation into a reliable framework
AI experimentation is valuable, but it needs structure to scale safely. Ortega encouraged firms to begin with clearly defined use cases, not broad technology rollouts.
“Don’t start with the tool,” he told the audience. “Start with the problem.”
Audit teams are applying this mindset through small pilots that target low-value, repetitive work, such as substantive testing. The focus is on automating manual procedures while keeping auditors in control of review and validation.
This approach creates a balanced model: AI handles the mechanics, and professionals maintain the judgment. It’s a practical way to turn experimentation into governance.
Traceability must be built into AI governance
Trustworthy AI relies on three connected pillars:
- Accuracy, ensuring outputs are correct and consistent.
- Traceability, ensuring every number ties back to evidence.
- Governance, ensuring oversight, policies, and data control.
Together, these elements define AI that can withstand regulatory, ethical, and audit scrutiny. Without them, AI becomes a black box.
Traceability should not be an afterthought added during review. It should be part of how tools are designed and how teams are trained to use them.
When traceability is embedded early, it accelerates both compliance and adoption.
The next era of AI trust in audit
AI isn’t replacing auditors. It’s redefining how they add value. Automation can remove the manual, repetitive work, but the profession’s foundation remains the same: trust.
Traceability is what makes AI auditable. It allows teams to explain results, prove reliability, and maintain the professional skepticism that defines good audit work.
As Ortega said during Connect 2025, “AI isn’t taking our jobs. It’s leveling us up.” AI is only as valuable as the trust behind it. Traceability is how that trust is earned.
What this means for the future of AI in audit
The next stage of AI adoption in audit is about building trust as it scales. Traceability gives firms a path to do that responsibly, turning automation into something verifiable, reviewable, and defensible.