Harnessing Market Forces: UMD Team Incentivizes AI Companies to Prioritize Safety

March 21, 2025
[Image: a futuristic racing scene with neon-lit speed trails and high-performance cars, overlaid with "AI SAFETY" in bold text and a robotic hand holding a security shield.]

Tech companies are racing to build the best artificial intelligence (AI) models, but amid the intense competition, safety issues—like user privacy and biased data—often take a back seat. Ramping up government regulation is one way to address these concerns, but regulators have struggled to keep up with the rapid pace of AI development. 

Recognizing the urgency of the issue, a team of University of Maryland researchers is developing a system that motivates tech companies to compete not only on capability, but on responsibility as well.

The UMD team has proposed the first-ever auction-based AI regulation framework that incentivizes safety. Their innovative solution is based on a fundamental economics principle: companies respond to market incentives. 

“We realized that we need a market-driven regulatory framework, one that aligns safety with AI companies’ business goals,” says Furong Huang, an associate professor of computer science who is leading the UMD team. “Instead of fighting AI companies, we let ‘market forces’ work for us.” 

Here’s how it works: Companies submit AI models to a regulator for approval, along with a monetary bid representing what they have invested in their model’s compliance. The regulator sets a minimum compliance threshold but also rewards higher compliance: it randomly pairs the submitted models and rewards the more compliant model in each pair. Consequently, instead of striving to just clear the bar, AI developers compete to exceed it.
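To make the mechanism concrete, here is a minimal sketch in Python. The threshold, reward value, company names, and compliance scores are illustrative assumptions, not details from the UMD paper:

```python
import random

MIN_COMPLIANCE = 0.5   # regulator's minimum threshold (assumed)
REWARD = 1.0           # prize for winning a pairwise comparison (assumed)

def run_round(submissions):
    """submissions maps company -> compliance level (its bid is already sunk)."""
    # Models below the minimum threshold are rejected outright.
    accepted = {c: v for c, v in submissions.items() if v >= MIN_COMPLIANCE}
    companies = list(accepted)
    random.shuffle(companies)
    rewards = {}
    # Randomly pair the accepted models; the more compliant one in each pair wins.
    for a, b in zip(companies[::2], companies[1::2]):
        winner = a if accepted[a] >= accepted[b] else b
        rewards[winner] = REWARD
    return rewards

# Example: "D" falls below the threshold and is rejected; the rest compete.
print(run_round({"A": 0.55, "B": 0.80, "C": 0.62, "D": 0.49}))
```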

The math proves it works. The UMD team modeled AI regulation as an all-pay auction, in which every participant pays its bid regardless of whether it wins. Their analysis proved that rational AI developers will submit models that exceed the compliance threshold, with results showing a 15% increase in participation rates and a 20% rise in compliance spending compared with simpler regulatory approaches that only set minimum standards.
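As a hedged illustration of why "just clearing the bar" is unstable under an all-pay design (toy numbers, not the team’s actual model): a firm’s compliance spend is sunk either way, so slightly exceeding a rival’s spend wins the pairwise reward at little extra cost.

```python
REWARD = 1.0        # prize for being the more compliant model (assumed)
MIN_SPEND = 0.5     # spend needed to just meet the threshold (assumed)
TIE_WIN_PROB = 0.5  # assumed: ties resolved by coin flip

def expected_payoff(my_spend, rival_spend):
    # All-pay auction: my_spend is paid whether or not this firm wins.
    if my_spend > rival_spend:
        win_prob = 1.0
    elif my_spend == rival_spend:
        win_prob = TIE_WIN_PROB
    else:
        win_prob = 0.0
    return win_prob * REWARD - my_spend

print(expected_payoff(MIN_SPEND, MIN_SPEND))         # 0.00: tie at the bar
print(expected_payoff(MIN_SPEND + 0.01, MIN_SPEND))  # 0.49: exceeding it pays
```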

“For the first time, we prove that responsible AI can be incentivized mathematically,” says Huang, who has an appointment in the University of Maryland Institute for Advanced Computer Studies. “We believe this work will make safe AI a winning strategy in the AI race.”

The team’s innovative proposal has significant implications, but several steps must be taken before it can be implemented in practice. Moving forward, the team aims to consult with tech policy experts to design a practical plan for translating their theoretical framework into a real-world setting.

First, a regulatory body must be established at either the state or federal level, along with the organizational and bureaucratic processes that such a move entails. Second, methods to evaluate model safety must be developed. Research is currently underway on quantifying fairness and other key safety aspects of AI models, but these measures have yet to be codified or standardized.

Huang’s dedicated team of researchers includes Marco Bornstein, a Ph.D. student in applied mathematics; Zora Che, a Ph.D. student in computer science; Suhas Julapalli, an undergraduate senior majoring in computer science; Abdirisak Mohamed, an adjunct lecturer in the College of Information; and Amrit Singh Bedi, a former UMD assistant research scientist who is now an assistant professor of computer science at the University of Central Florida.

—Story by Aleena Haroon, UMIACS communications group
