Gabriele Cimolino

Awareness Cues and Trust in Shared Control

TRAIT Workshop at CHI 2022  ·  Controlled Study  ·  Cimolino, Gutwin, Graham

Trust in AI is often framed as a quantity to be increased or decreased. This study framed it differently: the relevant variable is not how much users trust, but whether their trust is calibrated — whether they defer to the AI when it is reliable and override it when it is not.

The study tested whether awareness cues — interface elements that inform users of their AI partner's activities — could improve trust calibration in a shared-control game. Two types of cues were compared: action cues, which reported what the AI had just done, and intention cues, which announced what the AI was about to do.

Intention cues significantly improved appropriate trust (p < .05). Action cues did not. The key finding: the improvement in appropriateness was not accompanied by an increase in trust frequency. Players did not trust the AI more overall. They trusted it correctly more often — deferring when the AI's intention was right, overriding when it was not.

The mechanism is local predictability. A system whose next action is visible is predictable in that moment, and momentary predictability is enough for a correct trust decision: the user doesn't need a global model of the AI's reliability, only knowledge of what it is about to do. Intention cues provide exactly this, in task-relevant terms. Action cues report what has already happened, which is less useful for a decision that must be made now.
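The local-predictability argument can be made concrete with a toy simulation. This sketch is not from the paper — the task, the `reliability` parameter, and the user policy are all hypothetical — but it shows the idealized case: if the announced intention can be checked in task terms, a per-step defer/override decision is correct without any global model of the AI's reliability.

```python
import random

random.seed(0)

def correct_action(state):
    # Hypothetical ground-truth task for illustration only
    return state % 3

def ai_action(state, reliability=0.7):
    # A fallible AI partner: right with probability `reliability`
    if random.random() < reliability:
        return correct_action(state)
    return (correct_action(state) + 1) % 3

def decide_with_intention_cue(state, announced):
    # The intention cue makes the AI's next action visible,
    # so the user can evaluate it in task terms and defer
    # exactly when the announced intention looks right.
    return "defer" if announced == correct_action(state) else "override"

steps = [(s, ai_action(s)) for s in range(1000)]
decisions = [decide_with_intention_cue(s, a) for s, a in steps]

# Appropriate trust: defer exactly when the AI is actually right.
appropriate = sum(
    (d == "defer") == (a == correct_action(s))
    for (s, a), d in zip(steps, decisions)
)
print(appropriate / len(steps))  # → 1.0 under this idealized assumption
```

In reality users evaluate intentions imperfectly, so appropriateness falls short of 1.0 — but the point stands that the decision needs only the visible next action, not the AI's long-run accuracy.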

The implication for design: trust calibration is achievable through interface design, independently of model capability. Making a model more accurate is one path to appropriate trust. Designing the interface so that users can evaluate the system's next action is another — and it is available regardless of whether accuracy can be improved.

Read the paper →