AI systems don't just fail —
they fail the people using them.
I study the specific ways people misunderstand AI: what those misunderstandings look like, what produces them, and what interventions work. My background is in computer science and machine learning; I did my PhD studying how people use AI systems because that was where the interesting and underserved questions were. I spent six years at Queen's University's EQUIS lab building AI systems, running controlled studies, and producing findings detailed enough to change a design decision.
PhD, Computing · Queen's University, 2024
13 publications · CHI, ASSETS, HAI, CHI PLAY
Users form beliefs about AI from what they need it to do —
and act on those beliefs even when they're wrong
When AI doesn't respond as expected, users don't update their model; they modify their behaviour. They change how they provide input, repeat actions that had no effect, or perform actions that provide no input at all, believing these will change what the AI does. This pattern appears across different users, different AI systems, and different tasks. It is not random user error. It is a predictable consequence of how users form beliefs about AI: from what they need the system to do, not from what they observe it doing. A taxonomy of these errors documents twelve categories, identifies the structure they share, and explains why different beliefs produce the same observable behaviour.
Knowing what the AI intends to do next
helps users decide when to rely on it
Awareness cues, interface elements that communicate what the AI intends to do next, were designed in response to that observed failure pattern. When players knew what the AI intended, they could make an informed decision about whether to defer to it or override it. Trust appropriateness, measured as the Matthews correlation coefficient between players' reliance decisions and a known ground truth, improved without changing how often players relied on the AI: they deferred when the AI was right and overrode it when it wasn't. Understanding the structure of the failure determined what kind of fix to look for.
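To make that metric concrete: each reliance decision can be scored as a binary classification against whether the AI was actually correct. The cell mapping below is an illustrative sketch under my own assumptions, not a detail quoted from the studies; it counts a defer on a correct AI as a true positive (TP), an override on an incorrect AI as a true negative (TN), and the two mismatches as false positives (FP) and false negatives (FN).

% Illustrative mapping (assumed): TP = defer, AI correct; TN = override, AI wrong;
% FP = defer, AI wrong; FN = override, AI correct.
\[
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}
\]
% Ranges from -1 to 1: 1 is reliance perfectly calibrated to AI correctness,
% 0 is chance level, -1 is systematic miscalibration.

One property makes MCC a sensible fit for this measurement: it is insensitive to blanket strategies. A player who always defers scores high on raw agreement whenever the AI is usually right, but produces a degenerate confusion matrix whose MCC is zero by the usual convention, so only genuinely discriminating reliance scores well.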
When AI handles what users cannot,
users can accomplish what they otherwise couldn't
Partial automation, an AI partner handling the inputs a player cannot produce while the player retains control of everything else, was studied with players who had spinal cord injuries. Every participant could play both of the games studied. The ground truth for whether this worked is not a benchmark score. It is what participants said about what it meant: autonomous rather than passive participation, access to feelings of competence, activities they had believed were no longer available to them. One participant said that playing with AI assistance could show patients that "there's still lots to do in life."