Yang Liu, assistant professor of computer science and engineering at the University of California, Santa Cruz, has won a Faculty Early Career Development (CAREER) Award from the National Science Foundation (NSF) to fund his study of human-centered machine learning.
Machine learning models, artificial intelligence algorithms that improve themselves through data and experience, are being applied in a variety of industries where they have serious impacts on people's lives, such as screening loan applications in financial services or Medicare applications in healthcare.
Liu's research project will address issues of robustness, fairness, and the dynamics that arise between people and machine learning systems from a data-centric perspective. His team will study how algorithms can become biased by replicating existing biases in the data sets used to train them. More importantly, they will build models to understand and predict how humans behave when interacting with machine learning algorithms, and will account for the data those interactions produce.
“Part of the proposed research will focus on understanding the possibilities of identifying and mitigating the natural bias and noise that exists in data from humans,” Liu said. “But looking one step further, it’s not just about machine learning and how it performs on the data you already have; it’s about the data that it’s going to generate in the future – I think that’s the missing part in most of the ongoing discussions. I care about the data that is going to be generated after a machine learning model and a data collection pipeline are deployed, so that’s one of the main inquiries of this proposal.”
Liu and his team will also use NSF funding to carry out human-subject studies to understand how people respond to various machine learning models in a wide range of applications, from financial services to recommendation systems, and possibly school admissions. They will use these experiments to build theoretical frameworks and computational solutions to ensure that machine learning models are designed and deployed to serve humans without bias.
“Machine learning is not going to be a one-shot or a static problem anymore,” Liu said. “Model accuracy matters, but it will become more about long-term well-being. What are the behaviors, what are the dynamics that the model is going to induce on people? That’s something I am going to really focus on.”
Additionally, Liu wants to make sure that machine-learning models provide people with opportunities for improvement.
Some machine learning models give results without offering any explanation, and even when they do, the explanations often lack constructive suggestions for the user. In financial services, for example, a constructive suggestion might be guidance on how a customer who does not currently qualify for a loan could improve their financial profile and be approved in the future. Liu hopes that building in this level of transparency could increase human trust in machine learning technology, allowing it to become more widely adopted.
“This work emphasizes the importance of centering the actual experiences of people as they interact with machine learning technology, since that technology can have profound effects on their opportunities and well-being,” said Alexander Wolf, dean of the Baskin School of Engineering. “Liu’s ethics-centered approach aligns with the mission of our school to ensure that what we create has a positive impact on our society.”