Research

"We ought to try to understand as much as we can about the ways people think about AI, given how quickly everything is moving.”
                                                                               
- Renée Richardson Gosline

Dr. Gosline applies a behavioral science lens to digital transformation and has pioneered the concept of "friction auditing" in AI systems. She leads the Human-First AI Group at MIT's Initiative on the Digital Economy (IDE) and co-founded MIT Sloan's B-Lab (Behavioral Research Lab). She enjoys collaborating with colleagues and doctoral students, as well as firms that seek to partner on rigorous research (e.g., projects with Accenture and BMW).

Dr. Gosline has given talks at academic conferences and seminars at a variety of universities, including Yale, Dartmouth, Wharton, Columbia, and her alma mater, Harvard. Her work has appeared in academic, trade, and popular publications, and her research has been covered in both televised and print media.

She is completing her new book, "In Praise of Friction" (MIT Press), a research-backed plan for human-first AI systems that maximize value and minimize harm.

Select Publications

(Subset with a focus on human-technology interaction)

Nudge Users to Catch Generative AI Errors

MIT Sloan Management Review, 2024 (forthcoming)

(with Yunhao Zhang, Haiwen Li, Arnab D. Chakraborty, Philippe Roussiere, and Patrick Connolly)

When it comes to mitigating the risks of generative AI errors and biases, putting a human in the loop (HITL) is important. But it isn't enough to protect your company and customers: humans are also vulnerable to errors and biases, and may trust AI either too much or not enough. Findings from a field experiment by MIT and Accenture offer a solution: the introduction of "beneficial friction" in the form of tailored tools that help employees second-guess generative AI outputs at the right times. This exploratory study suggests that targeted friction, in the form of labels that flag potential errors and omissions, can direct users' attention to content that deserves closer inspection without sacrificing efficiency. The experiment explores a practical approach to implementing responsible AI and offers one pathway for organizations to navigate the challenges of generative AI adoption.

Who resists algorithmic advice? Cognitive style correlates with algorithmic aversion

Academy of Management Annual Conference Proceedings, 2024

(with Heather Yang, Bocconi University)

Our paper investigates the factors that influence individuals' preference for algorithm-based artificial intelligence, revealing that reflective thinkers are more likely to appreciate algorithmic advisors. This provides insights for organizations that rely on AI-based technologies.

Judgment and Decision Making, Cambridge University Press, 2023
(with Yunhao Zhang)

With the wide availability of large language models and generative AI, there are four primary paradigms for human–AI collaboration: human-only, AI-only (ChatGPT-4), augmented human (where a human makes the final decision with AI output as a reference), and augmented AI (where the AI makes the final decision with human output as a reference). In partnership with one of the world's leading consulting firms, we enlisted professional content creators and ChatGPT-4 to create advertising content for products and persuasive content for campaigns following the aforementioned paradigms. First, we find that, contrary to the expectations of some of the existing algorithm aversion literature on conventional predictive AI, the content generated by generative AI and augmented AI is perceived as being of higher quality than that produced by human experts and augmented human experts. Second, revealing the source of content production reduces, but does not reverse, the perceived quality gap between human- and AI-generated content. This bias in evaluation is predominantly driven by human favoritism rather than AI aversion: knowing that the same content is created by a human expert increases its (reported) perceived quality, but knowing that AI is involved in the creation process does not affect its perceived quality. Further analysis suggests this bias is not due to a 'quality prime': knowing that the content they are about to evaluate comes from competent creators (e.g., industry professionals and state-of-the-art AI), without knowing who created each specific piece, does not increase participants' perceived quality.

Harvard Business Review, 2022

Friction isn’t always a bad thing, especially when companies are looking for responsible ways to use AI. The trick is learning to differentiate good friction from bad, and to understand when and where adding good friction to your customer journey can give customers the agency and autonomy to improve choice, rather than automating the humans out of decision-making. Companies should do three things: 1) when it comes to AI deployment, practice acts of inconvenience; 2) experiment (and fail) a lot to prevent auto-pilot applications of machine learning; and 3) be on the lookout for “dark patterns.”

2023
(with Yunhao Zhang)

When and why do people exhibit AI aversion versus appreciation? We analyze how the entire trajectory of beliefs (prior belief, posterior belief, and belief-updating given agents' performance feedback) affects people's preference for AI versus a human agent across task domains. In a series of pre-registered experiments, we first find that, in both objective and subjective task contexts, algorithm aversion diminishes and appreciation increases when users are informed of smaller errors by AI (or larger errors by human agents). We then identify a cognitive mechanism rooted in people's subjective beliefs that reconciles these fluctuating preferences: task context and performance feedback act as cues that shape beliefs about the relative competence of the AI and human agent, and these beliefs largely determine people's choices. Performance feedback that reveals small AI errors does not lead to algorithmic aversion, because of a subjective belief that humans would perform relatively worse in the task context. Further analysis of belief-updating dynamics reveals that, rather than exhibiting asymmetric or motivated belief-updating, people make non-discriminatory inferences from the same performance feedback on AI and the human agent. In summary, regardless of one's initial stance, the preference for human versus AI is largely malleable: it is driven by beliefs about relative competence, which can be altered by performance feedback in various contexts.

Journal of Consumer Psychology, 2020
(with Sachin Banker and Jeffrey K. Lee)

Products bearing premium brand labels are known to increase perceptions of efficacy and improve objective consumer performance relative to lesser-branded equivalents, in what is traditionally described as a marketing placebo effect. In this paper, we suggest that experiences bearing these highly regarded brand labels can lead to a reverse effect, such that consumer performance actually declines with their use. Our findings demonstrate, across the domains of improving mental acuity, learning a new language, and developing financial analysis skills, that completing performance-branded training experiences impairs objective performance in related tasks, relative to lower-performance-branded or unbranded counterparts. We posit that branded training experiences can evoke a brand-as-master relationship in which consumers take on a subservient role relative to the brand. As a consequence, higher-performance brands may impose greater demands upon consumers, increasing performance anxiety and interfering with an individual's ability to perform effectively. These results document an important ramification of applying branding to learning experiences and identify contexts in which traditionally positive marketing actions can backfire for consumers.

Psychology Today, 2019

A new dawn has risen: the digital world offers unprecedented access to educational and training options that were previously available only to a few. Understanding whether we are bridging the digital divide requires examining the unconscious barriers to gaining the full benefits of these tools. If we fail to understand how our relationships with these brands affect their efficacy, then instead of lifting everyone up, we will in fact be leaving people behind.

MIT Sloan Management Review, 2017
(with Glen Urban and Jeffrey K. Lee)

New research using experiments in digital media finds that sharing consumers’ positive stories about a brand can be a highly effective online marketing strategy.

WBUR Cognoscenti, 2017

In the not-so-distant future, armies of robots using retina recognition software will tailor their sales pitches to your preferences and price point, writes Renée Richardson Gosline.

Strong Brands, Strong Relationships (Routledge, 2015)

Edited by Susan Fournier, Michael J. Breazeale, and Jill Avery

Chapter: "From Stranger to Friend: Shaping Consumer-Brand Relationships with Social Media"
