Computer scientist Yang Liu wins $1M grant for research on fairness in AI

Liu’s research aims to achieve more equitable outcomes from decision-making tools based on automated machine-learning algorithms

Yang Liu

Yang Liu, assistant professor of computer science and engineering in the Baskin School of Engineering at UC Santa Cruz, has received $1 million in funding from the National Science Foundation (NSF) and Amazon for research on the long-term effects of human interactions with artificial intelligence (AI) systems used to support decision-making.

Liu’s project, called “Fairness in Machine Learning with Human in the Loop,” is funded through the NSF’s Program on Fairness in AI in Collaboration with Amazon. Machine learning is a powerful data-driven approach that is now being used in many ways that affect people’s lives, prompting growing concerns about how to ensure that the technology is fairly and responsibly deployed and leads to equitable outcomes.

“When people started trying to automate decisions using machine learning, they realized that the algorithms can introduce bias to the decision-making process,” Liu explained. “These algorithms can inherit and reinforce pre-existing biases in the datasets that are used to train them, so a lot of studies focus on ways to remove the bias when training the model.”

But he said less attention has been paid to the long-term consequences of deploying machine-learning models as people interact with them and respond to the decisions they generate.

“The focus on static datasets does not address the dynamic interaction between people and AI,” Liu said. “We want to build a framework to capture the interaction between machine learning and human behavior, so we can understand how it affects people in the long run and how machine learning can be used to improve people’s well-being in the future.”

Liu’s team has a broad range of expertise and includes researchers at the University of Michigan, Ohio State University, and Purdue University. The project will include a substantial experimental component, which will generate high-quality data from human subjects to help validate and improve the modeling assumptions used in the analysis and algorithmic design.

“I’m particularly excited about this part of the project, because typically we don’t have good data on how people respond and make sequential decisions over time,” Liu said. “So the first part is very mathematical and analytical, and then the experimental part will give us data to see how people really interact with these models.”

The idea for the experiment is to use an online crowdsourcing platform to create something like an online game that will allow the researchers to observe how people respond to a machine-learning system for making decisions about, for example, loan applications. People recruited for the experiment would be assigned a role with a certain profile, and the system would provide feedback after a decision is made, allowing people to take actions to improve their chances on the next round.

“If I apply for a loan and get denied, I want the model to suggest actions for me. I want to hear that, if you change this feature of your profile, your chance of approval will be higher next time,” Liu said.

When people respond in self-interested ways, however, it can lead to feedback loops with unexpected and unintended consequences. That’s why the project focuses on “the human in the loop,” placing the feedback and strategic nature of the human element front and center to provide a better perspective on the long-term impacts of algorithmic decision-making.
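The kind of feedback loop described above can be illustrated with a toy simulation. The sketch below is purely hypothetical and is not from Liu's project: applicants with a single "score" face a threshold decision rule, denied applicants act on recourse feedback by closing the gap to the threshold, and the decision maker responds to the shifted score distribution by raising the bar (a crude stand-in for retraining on post-response data). All names and numbers here are illustrative assumptions.

```python
import random

random.seed(0)

def decide(score, threshold):
    """Approve the application if the score clears the threshold."""
    return score >= threshold

def suggested_improvement(score, threshold):
    """Recourse feedback: the gap the applicant must close to be approved."""
    return threshold - score

# Hypothetical applicant population with scores drawn uniformly from [0, 1).
applicants = [random.random() for _ in range(1000)]
threshold = 0.60
approval_rates = []

for _ in range(5):
    rate = sum(decide(s, threshold) for s in applicants) / len(applicants)
    approval_rates.append(round(rate, 3))
    # Denied applicants act on the feedback, slightly overshooting the bar.
    applicants = [
        s if decide(s, threshold)
        else s + suggested_improvement(s, threshold) + 0.01
        for s in applicants
    ]
    # The decision maker reacts to the shifted distribution by raising the
    # bar -- so each side's response erodes the other's gains over time.
    threshold += 0.05

print(approval_rates)
```

In this particular toy dynamic the approval rate falls round after round even though every denied applicant dutifully follows the recourse advice, because the stricter threshold outpaces the improvements — one example of the unintended long-term consequences the project aims to model and measure.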

Liu said the project is a great fit for UC Santa Cruz, which has strong programs in data science, human-computer interaction, and ethical algorithms.

“The environment here is great, with a lot of interest and expertise in both the technical and the ethical aspects of this technology,” he said. “The students are interested in this too, and the grant provides an opportunity to engage more undergraduates in this research and to raise awareness of the issue of fairness in AI.”

NSF and Amazon have partnered to jointly support computational research focused on fairness in AI, with the goal of contributing to trustworthy AI systems that are readily accepted and deployed to tackle grand challenges facing society. Liu’s project was awarded $625,000 from NSF and $375,000 from Amazon through this program.

“NSF is delighted to join with Amazon to support this year’s cohort of FAI projects,” said Henry Kautz, director of the NSF’s Division of Information and Intelligent Systems. “Understanding how AI systems can be designed on principles of fairness, transparency and trustworthiness will advance the boundaries of AI applications. And it will help us build a more equitable society in which all citizens can be designers of and benefit from these technologies.”