Technology

Shaping the future of artificial intelligence

UC Santa Cruz experts are working to guide AI in ethical, sustainable, and socially beneficial directions



Assistant Professor of Electrical and Computer Engineering Jason Eshraghian is reimagining how artificial intelligence can operate by taking cues from the human brain, developing strategies to decrease energy consumption while maintaining performance.

Impact Creative for UC Santa Cruz


Of the many issues that loomed large in 2025, the continued rise of artificial intelligence was one of the most inescapable. Time Magazine named “the architects of AI” as its person of the year, featuring tech titans like Sam Altman and Jensen Huang. Meanwhile, Merriam-Webster dictionary’s word of the year was “slop,” in honor of the low-quality, AI-generated content that’s been taking over the internet. Artificial intelligence certainly seems to have invaded all facets of our modern lives this past year. And the results thus far have been a mixed bag.

Companies have increasingly attempted to apply AI in the workplace, simultaneously touting the potential for increased productivity and stoking fears about labor market impacts. Investment has skyrocketed, both buoying the U.S. stock market and triggering concerns about a potential bubble. A massive buildout of data center infrastructure promises jobs and economic development opportunities, but it has also raised power bills and strained water supplies, with attendant climate impacts. The breakneck race toward ever more powerful technology has shaped global geopolitics while raising alarms, and calls for regulation, among those wary of AI's safety.

AI reached an inflection point over the past year. No longer a novelty, the technology is beginning to have serious real-world impact on a global scale. The actions taken now by innovators, policymakers, and the public will shape the future of artificial intelligence, for better or for worse. That's why UC Santa Cruz experts across a wide range of disciplines are providing leadership and perspective to help steer things in the right direction.

If you believe in the importance of this work, please consider showing your support for the University of California as we continue tackling society’s biggest emerging challenges and opportunities. 

UC Santa Cruz experts diving deep on AI 

Sustainable, carbon-aware computing 

As AI adoption accelerates, so does its environmental impact. Recent projections suggest that, by the end of this decade, electricity demand from AI-driven data centers in the U.S. could be comparable to the annual consumption of tens of millions of households, with some analyses warning that costs may ultimately be passed on to consumers through higher electricity bills. Assistant Professor of Computer Science and Engineering Abel Souza applies his expertise in large-scale data centers to develop sustainable, carbon-aware approaches for designing and operating computing infrastructure, with the long-term goal of enabling zero-carbon operations. One recent project focuses on algorithms that forecast the “greenest” times to consume electricity. This can be used to schedule energy-intensive tasks, like AI training and electric vehicle charging. Because many AI tasks, like chatbot queries, can be processed anywhere in the world, this work also explores where computations should be performed. By evaluating factors such as weather, time of day, and renewable energy availability, the system can shift workloads to locations where low-carbon energy sources, such as solar or wind, are most abundant. Coupled with economic models of the power grid, Souza’s work examines how time and location can be leveraged to reduce not only carbon emissions but also electricity costs across U.S. power systems.
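The time-shifting idea behind this work can be illustrated with a minimal sketch: given an hourly forecast of grid carbon intensity, find the contiguous window where a long-running job (such as a model training run) would produce the least emissions. The forecast values and function names below are hypothetical, invented for illustration; real carbon-aware schedulers draw on live grid data and also weigh location, cost, and deadlines.

```python
# Minimal sketch of carbon-aware scheduling: given a (hypothetical)
# hourly forecast of grid carbon intensity (gCO2/kWh), find the
# contiguous window with the lowest average intensity for a job.

def greenest_window(forecast, duration_hours):
    """Return (start_hour, avg_intensity) of the lowest-carbon window."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - duration_hours + 1):
        window = forecast[start:start + duration_hours]
        avg = sum(window) / duration_hours
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

# Illustrative 24-hour forecast: solar generation pushes carbon
# intensity down around midday, so the "greenest" hours fall there.
forecast = [420, 410, 400, 395, 390, 380, 350, 300,
            240, 180, 150, 140, 135, 140, 160, 210,
            280, 340, 390, 410, 420, 430, 430, 425]

start, avg = greenest_window(forecast, 4)  # a 4-hour training job
print(f"Schedule at hour {start}, avg {avg:.0f} gCO2/kWh")
```

The same comparison generalizes across locations: run the search over forecasts for several regions and dispatch the workload wherever the lowest-carbon window appears.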

Addressing AI labor market impacts

Sociology Professor Chris Benner is a leading scholar of how technological change affects work and employment. Benner says AI can either deskill work, intensify surveillance, and hollow out jobs—or it can augment workers, reduce drudgery, and improve job quality. The results will depend on how these tools are developed, applied, and regulated. Looking back at past technological transitions, he believes AI isn’t likely to cause a rapid loss of jobs in any single occupational category. Rather, workers will see some of their tasks automated, resulting in shifting activities and responsibilities within jobs. Greater harm may actually come from algorithmic management and electronic monitoring, which could result in increased work intensity, loss of autonomy, and racialized and gendered bias in scheduling, evaluation, and discipline. To avoid these harms, workers should be involved in AI decision-making, so that AI systems can be designed to make work better and more fulfilling. 

Augmenting AI reasoning 

Yi Zhang, the director of the UC Santa Cruz Generative AI Center and professor of computer science and engineering, is a veteran of the AI field, with many years of experience as both an academic and an industry founder. Her research focuses on enabling AI models to leverage external tools and knowledge to solve complex tasks, making them more useful in information-rich, real-world environments. This encompasses the development of multi-agent AI systems and multi-round Retrieval-Augmented Generation (RAG), a strategy for improving accuracy by connecting chatbots to relevant, up-to-date information before they provide answers. She has worked on AI system capabilities in areas where information is incomplete, misleading, or socially contested, such as politics or healthcare. As the GenAI Center director, Zhang is bringing together scholars across academic divisions at UC Santa Cruz, sparking collaboration between researchers from fields like computer vision, climate and environmental sustainability, healthcare and biomedicine, education, and the humanities to translate foundational AI research into impactful applications.
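The retrieval step at the heart of RAG can be sketched in a few lines: before the model answers, the system finds the most relevant document and prepends it to the prompt. Everything here is illustrative, not Zhang's actual system: the documents are invented, the word-overlap scoring stands in for the embedding-based vector search real systems use, and no actual language model is called.

```python
# Minimal sketch of the retrieval step in Retrieval-Augmented
# Generation (RAG): pull the most relevant document, then build a
# prompt that grounds the model's answer in that context.

def retrieve(query, documents):
    """Rank documents by word overlap with the query; return the best.

    Real RAG systems use embedding similarity, not raw word overlap.
    """
    q_words = set(query.lower().split())
    def score(doc):
        return len(q_words & set(doc.lower().split()))
    return max(documents, key=score)

def build_prompt(query, documents):
    """Prepend retrieved context so the model answers from it."""
    context = retrieve(query, documents)
    return f"Use this context to answer.\nContext: {context}\nQuestion: {query}"

# Hypothetical knowledge base.
docs = [
    "The campus data center runs on 60 percent renewable power.",
    "Fall quarter enrollment opens on August 1.",
]
print(build_prompt("When does fall enrollment open?", docs))
```

"Multi-round" RAG extends this loop: the system retrieves, reads, reformulates the query, and retrieves again until it has enough grounding to answer.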

AI’s influence on the stock market and economy

Chenyue Hu is an Assistant Professor of Economics at UC Santa Cruz who studies macroeconomics, international finance, and international trade. She has been carefully watching international investment trends related to artificial intelligence and says she does not expect the AI "bubble" to burst abruptly, the way markets did in the early 2000s. Unlike the dot-com era, today's AI expansion is driven by real demand and constrained by infrastructure scarcity, she says, with data-center vacancy rates approaching zero. Hu argues that the firms leading this boom, such as the "Magnificent Seven," are financially strong, with robust earnings that support current stock valuations. But looking ahead, the central uncertainty she sees is whether AI adoption can generate sufficiently broad and sustained productivity gains for the economy. At the same time, the potential downsides, including job displacement and environmental costs, will need to be carefully assessed and responsibly managed.

Improving explainability of AI systems

Can chatbots, self-driving cars, and other AI systems explain themselves when they make mistakes? Many of the AI systems we've come to know, like ChatGPT, are famously "black box" models, meaning their internal reasoning and computation is hidden from users and even researchers. This makes it harder to understand why errors or other unexpected outcomes occur, and to hold people responsible when necessary. Assistant Professor of Computer Science and Engineering Leilani Gilpin's AI Explainability and Accountability Lab is developing and evaluating methods for different kinds of AI models, from chatbots to autonomous vehicles, to explain their behavior. This has implications for debugging, risk management, and the overall safe and effective interaction between human and machine intelligence.
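One common family of explainability techniques can be sketched simply: leave-one-out attribution, where each input feature is removed in turn and the change in the model's output is measured. This is a generic illustration of the idea, not a method from Gilpin's lab; the "model" below is a hypothetical keyword counter standing in for a real classifier.

```python
# Minimal sketch of leave-one-out attribution: remove each input
# feature (here, a word) and measure how the model's score changes.
# Features whose removal changes the score most are most influential.

def model_score(words):
    """Toy stand-in for a black-box model: counts positive keywords."""
    positive = {"great", "excellent", "reliable"}
    return sum(1 for w in words if w in positive)

def attributions(words):
    """Map each word to the score drop caused by removing it.

    (Assumes distinct words, for simplicity.)
    """
    base = model_score(words)
    return {
        w: base - model_score(words[:i] + words[i + 1:])
        for i, w in enumerate(words)
    }

words = "the car is great and reliable".split()
print(attributions(words))
```

The appeal of perturbation-based methods like this is that they need no access to the model's internals, which is exactly the constraint black-box systems impose.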

Intellectual property and creativity in the age of AI

Associate Professor of Literature Zac Zimmer is an expert in digital humanities who has studied how human creativity intersects with AI infrastructures, especially the nature of the data used to train AI systems and the intellectual property questions that training raises. He investigates the ethical and aesthetic implications of training AI with human-generated content, including literature, and what this means for authorship and ownership in the digital age. He participated in curating the Exploratorium's Adventures in AI exhibition and teaches LIT 126H: Artificial Intelligence and Human Imagination, a course designed as part of the Responsible Artificial Intelligence Curriculum Design Project from the National Humanities Center. The course uses a humanistic framework to study the promises and perils of artificial intelligence technosystems, the "anatomy" of specific AIs, cultural depictions of AI, and cultural artifacts co-created using AI.

Education impacts, in the classroom and beyond

Roberto de Roock, an associate professor in the Education Department, studies how technology affects learning, with a focus on equity issues. He says it's not surprising to see students lean on AI for schoolwork, especially when it comes to writing, where decades of overly formulaic assignments had already stripped composition of its critical thought. But AI tools can be harmful in ways students may not realize. In addition to concerns about hallucination and misinformation in AI outputs, de Roock notes that AI writing defaults to "pompous, formal White English," which undermines dialects like African American Vernacular English. Students also may not be aware of privacy concerns around AI usage, or that each prompt is used to further train models, providing free labor to multibillion-dollar corporations. As director of the Everett Program for Technology and Social Change, de Roock seeks to advance local, open-source LLMs built by communities, instead of corporate tools.

Facing the next stage in the evolution of writing

Linguistics Professor and Faculty Director of The Humanities Institute Pranav Anand is an expert in how large language models (LLMs) like ChatGPT are reshaping writing, authorship, and learning. Anand explains that writing is itself a form of technology, and the introduction of AI tools could be more of an evolution of this technology than an extinction. He sees a future where co-writing with AI moves humans toward curating language instead of crafting it, reshaping the link between linguistic expression and learning. Anand notes that we don't fully understand which aspects of the writing process best support the development of critical thinking skills; co-writing with AI presents an interesting research opportunity to isolate certain writing processes and study how they reflect and shape our thinking. In educational settings, meta-cognitive writing assignments, where students reflect on what they know and don't know, often contribute most to learning, and AI could actually be used to support that process.


Last modified: Jan 15, 2026