Title: Evaluating Effects of User Experience and System Transparency on Trust in Automation
Publication Type: Conference Proceedings
Year of Conference: 2017
Authors: Yang, X. Jessie, V. V. Unhelkar, K. Li, and J. A. Shah
Conference Name: 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017)
Conference Location: Vienna, Austria
Existing research assessing human operators’ trust in automation and robots has primarily examined trust as a steady-state variable, with little emphasis on the evolution of trust over time. To address this research gap, we present a study exploring the dynamic nature of trust. We defined trust of entirety as a measure that accounts for trust across a human’s entire interactive experience with automation, and first identified alternatives for quantifying it using real-time measurements of trust. Second, we proposed a novel model that attempts to explain how trust of entirety evolves as a user interacts repeatedly with automation. Lastly, we investigated the effects of automation transparency on momentary changes of trust. Our results indicated that trust of entirety is better quantified by the average measure of “area under the trust curve” than by the traditional post-experiment trust measure. In addition, we found that trust of entirety evolves and eventually stabilizes as an operator repeatedly interacts with a technology. Finally, we observed that a higher level of automation transparency may mitigate the “cry wolf” effect, wherein human operators begin to reject an automated system due to repeated false alarms.
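The “area under the trust curve” measure mentioned above can be illustrated with a short sketch: given trust ratings sampled over successive interactions, integrate the trust curve with the trapezoidal rule and normalize by total duration so the result stays on the original rating scale. This is an assumption-laden illustration, not the paper’s implementation; the function name, the 1–7 rating scale, and the sample data are all hypothetical.

```python
def trust_of_entirety(times, ratings):
    """Approximate the normalized 'area under the trust curve'.

    times   -- monotonically increasing sample times (e.g., trial indices)
    ratings -- trust ratings at those times (hypothetical 1-7 scale)
    Uses the trapezoidal rule, then divides by total duration so the
    result is an average trust level on the same scale as the ratings.
    """
    area = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        area += dt * (ratings[i] + ratings[i - 1]) / 2.0
    return area / (times[-1] - times[0])

# Hypothetical trust ratings collected after each of five trials
times = [0, 1, 2, 3, 4]
ratings = [4.0, 3.0, 3.5, 5.0, 5.5]
print(trust_of_entirety(times, ratings))  # → 4.0625
```

Unlike a single post-experiment rating, this aggregate reflects the whole trajectory: an operator whose trust dipped early and recovered late scores differently from one whose trust was flat, even if both end at the same final rating.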