AI is impressive. It learns fast, adapts quickly, and makes decisions faster than any of us can blink. But there’s something it still doesn’t understand: accountability. It can tell you what it did, but not always why. And if there’s one thing QA has mastered over the years, it’s explaining the why.
As testers, we’re trained to prove our work, trace every step, and question what looks right but feels off. “It works” has never been enough for us. We need evidence, context, and a clear path that others can follow. When we document bugs, we don’t just say what failed; we add steps to reproduce so that anyone can see exactly how it happened. The ones we can’t reproduce stay with us like ghosts that haunt our projects, because without a trace, there’s no accountability.
In this session, I’ll walk through how those same QA instincts (documenting what happened, investigating why it failed, and involving humans at every stage) can guide how we build and monitor AI systems. Not through technical deep dives, but through simple principles that keep humans in charge of judgment.
We’ll talk about what the QA mindset looks like when applied to AI:
1. How test logs can inspire decision logs for explainable AI (a small illustrative sketch follows this list).
2. Why root cause analysis matters just as much for bias as it does for bugs.
3. And how keeping the “human in the loop” is still the best quality check we have.
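To make that first topic a little more concrete, here is a minimal sketch of what a decision log entry could look like if it borrowed the shape of a QA test log: what was decided, based on what evidence, for what stated reason, and with which human reviewer. Everything in it (the DecisionRecord structure, the log_decision helper, the loan-scoring example) is hypothetical and purely illustrative, not a prescription.

```python
# A minimal, purely illustrative sketch: a "decision log" entry shaped like a
# QA test log. Every name here (DecisionRecord, log_decision, the example
# fields) is hypothetical, not taken from any particular library or framework.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DecisionRecord:
    model_version: str              # which model (and version) made the call
    inputs: dict                    # the evidence the decision was based on
    output: str                     # what the model decided
    rationale: str                  # the "why": top factors, rule hit, or explanation summary
    reviewer: Optional[str] = None  # the human who reviewed it, if anyone did
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: DecisionRecord) -> None:
    # In a real system this would go to durable, queryable storage so the
    # decision can be traced later; printing keeps the sketch self-contained.
    print(record)


log_decision(DecisionRecord(
    model_version="loan-scorer-1.4",
    inputs={"income": 52000, "credit_history_years": 3},
    output="declined",
    rationale="credit_history_years below policy threshold of 5",
    reviewer="j.doe",
))
```

The point isn’t the code; it’s the habit. A decision without a reproducible trail is like a bug report without steps to reproduce.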
Because at its core, QA isn’t just about preventing defects; it’s about protecting trust. And if AI is going to be part of our future, it needs that same habit of accountability we’ve built our entire profession around.