Ofir Turel (Melbourne)

Online

More Information

Elizabeth Bowman

bmm-lab@unimelb.edu.au

  • Seminar series

Zoom link: https://unimelb.zoom.us/j/88140787544?pwd=Tyt0aWt2S2hYazNTMWYvUFVOa0Q3QT09

Prejudiced against the “machine”? A decision science view of algorithmic aversion

When Artificial Intelligence (AI) acts as an advisor, many users exhibit at least some "algorithm aversion", a resistance to accepting the AI's recommendations. This problem prevents individuals, firms and societies from harvesting the benefits of AI in advisory roles. I use decision science and psychology theories (evolutionary psychology, the somatic marker hypothesis, and the accessibility-diagnosticity perspective) to explain why people are averse to algorithms, and why this aversion may be transient. I test hypotheses in two studies that use the Implicit Association Test (IAT). Participants were asked to help train an AI that estimates weight from 2D images. They completed four assessments, in each of which they provided an initial estimate, received algorithmic advice, and provided a corrected estimate. The extent to which they shifted the corrected estimate toward the AI's advice served as a behavioural measure of reliance on algorithmic advice. After the first two estimation tasks, they received implied information about the algorithm's performance (a 1–5% deviation from the true value). The findings suggest that people are generally prejudiced against "machines" and hold implicit negative attitudes toward them. They further suggest that movement between aversion and appreciation depends initially on subconscious biases against AI, because there is insufficient information to suppress them, and in later stages of use on access to diagnostic information about the AI's performance, which reduces the weight given to subconscious prejudice in decision making.
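
As a rough illustration (not part of the talk itself), the behavioural reliance measure described above is often operationalised as a "weight of advice" ratio; the sketch below assumes that formulation, and the function and variable names are hypothetical.

# Illustrative sketch only: one common way to quantify how far a corrected
# estimate moves toward algorithmic advice. The abstract does not specify
# the exact formula used in the studies.

def weight_of_advice(initial: float, advice: float, corrected: float) -> float:
    """Fraction of the gap between the initial estimate and the AI's advice
    that the corrected estimate closes (0 = advice ignored, 1 = advice fully adopted)."""
    gap = advice - initial
    if gap == 0:
        return 0.0  # advice matched the initial estimate; no shift is measurable
    return (corrected - initial) / gap

# Example: initial guess 70 kg, AI advises 80 kg, corrected estimate 76 kg
print(weight_of_advice(70, 80, 76))  # 0.6 -> moved 60% of the way toward the advice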