Author: Chiara Longoni, Associate Professor, Marketing and Social Science, Bocconi University

  • Knowing less about AI makes people more open to having it in their lives – new research


    The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy – those who understand how AI works – who are most eager to adopt it.

    Surprisingly, our new research (published in the Journal of Marketing) finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.

    This link shows up across different groups, settings and even countries. For instance, our analysis of data from market research company Ipsos spanning 27 countries reveals that people in nations with lower average AI literacy are more receptive towards AI adoption than those in nations with higher literacy.

    Similarly, our survey of US undergraduate students finds that those with less understanding of AI are more likely to report using it for tasks such as academic assignments.

    The reason behind this link lies in how AI now performs tasks we once thought only humans could do. When AI creates a piece of art, writes a heartfelt response or plays a musical instrument, it can feel almost magical – like it’s crossing into human territory.

    Of course, AI doesn’t actually possess human qualities. A chatbot might generate an empathetic response, but it doesn’t feel empathy. People with more technical knowledge about AI understand this.

    They know how algorithms (sets of mathematical rules used by computers to carry out particular tasks), training data (used to improve how an AI system works) and computational models operate. This makes the technology less mysterious.

    On the other hand, those with less understanding may see AI as magical and awe-inspiring. We suggest this sense of magic makes them more open to using AI tools.

    Our studies show this lower literacy-higher receptivity link is strongest for using AI tools in areas people associate with human traits, like providing emotional support or counselling. When it comes to tasks that don’t evoke the same sense of human-like qualities – such as analysing test results – the pattern flips. People with higher AI literacy are more receptive to these uses because they focus on AI’s efficiency, rather than any “magical” qualities.

    It’s not about capability, fear or ethics

    Interestingly, this link between lower literacy and higher receptivity persists even though people with lower AI literacy are more likely to view AI as less capable, less ethical, and even a bit scary. Their openness to AI seems to stem from their sense of wonder about what it can do, despite these perceived drawbacks.

    This finding offers new insights into why people respond so differently to emerging technologies. Some studies suggest consumers favour new tech, a phenomenon called “algorithm appreciation”, while others show scepticism, or “algorithm aversion”. Our research points to perceptions of AI’s “magicalness” as a key factor shaping these reactions.

    These insights pose a challenge for policymakers and educators. Efforts to boost AI literacy might unintentionally dampen people’s enthusiasm for using AI by making it seem less magical. This creates a tricky balance between helping people understand AI and keeping them open to its adoption.

    To make the most of AI’s potential, businesses, educators and policymakers need to strike this balance. By understanding how perceptions of “magicalness” shape people’s openness to AI, they can develop and deploy AI-based products and services that take those perceptions into account, while helping people understand the technology’s benefits and risks.

    And ideally, this will happen without causing a loss of the awe that inspires many people to embrace this new technology.

    The Conversation

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.


  • Unfair decisions by AI could make us indifferent to bad behaviour by humans


    Artificial intelligence (AI) makes important decisions that affect our everyday lives. These decisions are implemented by firms and institutions in the name of efficiency. They can help determine who gets into college, who lands a job, who receives medical treatment and who qualifies for government assistance.

    As AI takes on these roles, there is a growing risk of unfair decisions – or of decisions that the people affected perceive as unfair. For example, in college admissions or hiring, these automated decisions can unintentionally favour applicants from certain groups or backgrounds, while equally qualified but underrepresented applicants are overlooked.

    Or, when used by governments in benefit systems, AI may allocate resources in ways that worsen social inequality, leaving some people with less than they deserve and a sense of unfair treatment.

    Together with an international team of researchers, we examined how unfair resource distribution – whether handled by AI or a human – influences people’s willingness to act against unfairness. The results have been published in the journal Cognition.

    With AI becoming more embedded in daily life, governments are stepping in to protect citizens from biased or opaque AI systems. Examples of these efforts include the White House’s AI Bill of Rights and the European Parliament’s AI Act. These reflect a shared concern: people may feel wronged by AI’s decisions.

    So how does experiencing unfairness from an AI system affect how people treat one another afterwards?

    AI-induced indifference

    Our paper in Cognition looked at people’s willingness to act against unfairness after they had experienced unfair treatment from an AI. The behaviour we examined concerned these individuals’ subsequent, unrelated interactions with other people. A willingness to act in such situations, often called “prosocial punishment”, is seen as crucial for upholding social norms.

    For example, whistleblowers may report unethical practices despite the risks, or consumers may boycott companies that they believe are acting in harmful ways. People who engage in these acts of prosocial punishment often do so to address injustices that affect others, which helps reinforce community standards.


    We asked this question: could experiencing unfairness from AI, instead of a person, affect people’s willingness to stand up to human wrongdoers later on? For instance, if an AI unfairly assigns a shift or denies a benefit, does it make people less likely to report unethical behaviour by a co-worker afterwards?

    Across a series of experiments, we found that people treated unfairly by an AI were less likely to punish human wrongdoers afterwards than participants who had been treated unfairly by a human. They showed a kind of desensitisation to others’ bad behaviour. We called this effect AI-induced indifference, to capture the idea that unfair treatment by AI can weaken people’s sense of accountability to others. This makes them less likely to address injustices in their community.

    Reasons for inaction

    This may be because people place less blame on AI for unfair treatment, and so feel less driven to act against injustice. The effect was consistent whether participants encountered only unfair behaviour from others or a mix of fair and unfair behaviour. To check whether the relationship we had uncovered was affected by familiarity with AI, we carried out the same experiments again after the release of ChatGPT in 2022. We got the same results in the later series of tests as in the earlier ones.

    These results suggest that people’s responses to unfairness depend not only on whether they were treated fairly but also on who treated them unfairly – an AI or a human.

    In short, unfair treatment by an AI system can affect how people respond to each other, making them less attentive to each other’s unfair actions. This highlights AI’s potential ripple effects in human society, extending beyond an individual’s experience of a single unfair decision.

    When AI systems act unfairly, the consequences extend to future interactions, influencing how people treat each other, even in situations unrelated to AI. We would suggest that developers of AI systems should focus on minimising biases in AI training data to prevent these important spillover effects.

    Policymakers should also establish standards for transparency, requiring companies to disclose where AI might make unfair decisions. This would help users understand the limitations of AI systems, and how to challenge unfair outcomes. Increased awareness of these effects could also encourage people to stay alert to unfairness, especially after interacting with AI.

    Feelings of outrage and blame for unfair treatment are essential for spotting injustice and holding wrongdoers accountable. By addressing AI’s unintended social effects, leaders can ensure AI supports rather than undermines the ethical and social standards needed for a society built on justice.

    The Conversation

    The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
