TRAILS Faculty Launch New Study on Perception Bias and AI Systems

Aug 26, 2024

Perception bias is a cognitive bias that occurs when we subconsciously draw conclusions based on what we expect to see or experience. It has been studied extensively, particularly as it relates to health information, the workplace environment, and even social gatherings.

But how does human perception bias relate to information generated by artificial intelligence (AI) algorithms?

Researchers from the Institute for Trustworthy AI in Law & Society (TRAILS) are exploring this topic, conducting a series of studies to determine the level of bias that users expect from AI systems, and how AI providers explain to users that their systems may include biased data.

The project, led by Adam Aviv, an associate professor of computer science at George Washington University, and Michelle Mazurek, an associate professor of computer science at the University of Maryland, is supported by a $150K seed grant from TRAILS.

It is one of eight projects that received funding in January when TRAILS unveiled its inaugural round of seed grants.

Mazurek and Aviv have a long track record of successful collaborations on security-related topics. Mazurek, who is the director of the Maryland Cybersecurity Center at UMD, says they’re both interested in how people make decisions related to their online security, privacy and safety.

She believes that decision-making based on AI-generated content—particularly how much trust is placed in that content—is a natural extension of the duo’s previous work.

“Analyzing how and why people make these decisions is important, particularly if we can look at a wide range of users,” Mazurek says.

For their perception bias project, Aviv and Mazurek—working with graduate students and a postdoctoral researcher from both institutions—have divided the study into two parts.

A first cohort of study participants will engage with a large language model (LLM) such as ChatGPT, embedded within a specialized survey framework. The users will be tasked with developing prompts designed to elicit biased responses.

Participants, for example, may be asked to audit an LLM for instances of sex-based bias, prompting the system about perceptions of women in the workplace, their pay, or how they take on leadership roles.

Beyond a qualitative analysis of these prompts and responses, the participants will be surveyed to gauge their perceptions of fairness and trustworthiness, both before and after the interaction with the LLM software.

“We’re interested in measuring how well users can identify these kinds of biases themselves,” Aviv says.

It might be that many users are unable to expose or notice biases due to the guardrails and training built into LLMs to prevent biased responses, he says.

As a result, they may feel more confident that such biases don't exist, even when they do, Aviv adds. Or, perhaps worse, they may expose a bias, fail to recognize it, and then build on it themselves.
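To make this kind of setup concrete, here is a minimal, hypothetical sketch of how an LLM could be embedded in a survey flow like the one described above: the participant rates their trust in the system before and after the task, writes their own audit prompts, and every prompt–response pair is logged for later qualitative analysis. The function and variable names (query_llm, run_session, and so on) are illustrative assumptions, not the TRAILS team's actual infrastructure, and the model call is stubbed out.

```python
# Hypothetical sketch (not the TRAILS study code): an LLM wrapped in a simple
# survey flow that records pre/post trust ratings and every prompt/response pair.

import json
import time

def query_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; an actual framework would query a hosted model here."""
    return f"[model response to: {prompt!r}]"

def ask_rating(question: str) -> int:
    """Collect a 1-5 Likert rating from the participant."""
    while True:
        answer = input(f"{question} (1-5): ").strip()
        if answer in {"1", "2", "3", "4", "5"}:
            return int(answer)

def run_session(participant_id: str, audit_task: str) -> dict:
    session = {
        "participant": participant_id,
        "task": audit_task,
        "pre_trust": ask_rating("Before you start: how much do you trust the system's fairness?"),
        "interactions": [],
    }
    # Participants write their own prompts designed to elicit biased responses.
    while (prompt := input("Enter an audit prompt (blank to finish): ").strip()):
        response = query_llm(prompt)
        print(response)
        session["interactions"].append({
            "time": time.time(),
            "prompt": prompt,
            "response": response,
            "bias_observed": ask_rating("How biased did this response seem?"),
        })
    session["post_trust"] = ask_rating("After the task: how much do you trust the system's fairness?")
    return session

if __name__ == "__main__":
    record = run_session("P001", "Audit the model for sex-based bias in workplace scenarios")
    print(json.dumps(record, indent=2))  # saved for later qualitative analysis
```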

A second part of the TRAILS project will involve examining bias and fairness disclaimers used by major AI providers. This effort encompasses a qualitative analysis of the language employed in these disclaimers, coupled with a user survey aimed at understanding how participants interpret and react to these warnings.

The Gemini app, an AI-powered chatbot developed by Google to integrate with Google Search, includes disclaimers such as, “Gemini may display inaccurate info, including about people, so double-check its responses.”

Additionally, in addressing the limitations of AI, it emphasizes, “AI is a powerful tool, but it's not perfect. Information generated by AI may contain inaccuracies or biases. Always verify information from multiple sources, especially when making important decisions. AI should be used as a supplement to human judgment, not a replacement.”

Ultimately, Mazurek and Aviv aim to uncover how users perceive and misunderstand biases in AI systems, how they apply this understanding when using such systems, and how AI providers can better assist users in navigating these challenges.

The insights gained can inform the development of guidelines and processes aimed at enhancing communication about bias in AI systems, as well as new user-inclusive mechanisms for bias testing.

As with any seed grant, the team hopes its initial work will lead to additional funding and possible collaborations with other researchers, Aviv says.

Assisting the co-PIs on the project are Alan Luo, a fourth-year computer science doctoral student at UMD; Monica Kodwani, a second-year computer science doctoral student at GW; and Jan Tolsdorf, a postdoctoral associate in Aviv’s Usable Security and Privacy Lab at GW.

Tolsdorf is spearheading the project’s initial phase, overseeing development of the study protocol and the technical infrastructure the study requires. Luo and Kodwani are actively designing the experimental setup and survey questions.

Mahmood Sharif, a senior lecturer in the School of Computer Science at Tel Aviv University, is also involved, offering his machine learning expertise and co-advising on the work being done.

—Story by Melissa Brachfeld, UMIACS communications group