
Tech Orgs Urge Targeted AI Regulations to Foster Innovation

Coalition of Leading Institutions Pushes for Policymakers to Develop Focused AI Rules
The Data and Trust Alliance called on policymakers to focus on high-risk AI use cases. (Image: Shutterstock)

A coalition of technology giants and industry leaders is urging the federal government to create new regulations for artificial intelligence only where they fill critical policy gaps.


The Data and Trust Alliance, a consortium of businesses and institutions including GM, IBM, Mastercard, Meta and the NFL, released guidance Friday calling on policymakers to improve AI regulatory harmonization while warning that overregulation could stifle innovation.

The alliance recommended that regulators review existing policies before introducing new legislation to govern AI, stating: "New regulation should only be introduced when necessary to address regulatory gaps." The recommendations also encourage lawmakers to "focus on potential harms" posed by AI in specific use cases rather than attempting to broadly regulate the emerging technology.

"Smart regulation that focuses on the highest-risk uses of AI and holds companies accountable for the AI they create and deploy is the best way to protect consumers while unlocking AI's massive upside potential," Rob Thomas, chief commercial officer of IBM, said in a statement.

The recommendations come amid a push from Senate leaders for stronger White House protections against risks associated with AI, such as bias in algorithmic decision-making. Senate Majority Leader Chuck Schumer, D-N.Y., and Sen. Edward Markey, D-Mass., co-wrote a letter to the Office of Management and Budget on Monday, urging stronger protections against "supercharged, AI-powered algorithms [that] risk reinforcing and magnifying the discrimination that marginalized communities already experience due to poorly-trained and -tested algorithms."

"The stakes - and harms - are especially high where entities use algorithms to make 'consequential decisions,' such as an individual's application for a job, their treatment at a hospital, their admission to an educational institution, or their qualification for a mortgage," the letter says.

The recommendations urge policymakers to focus on high-risk AI use cases while avoiding overregulation of beneficial applications. The alliance describes this framework as the "consequential decision approach," which involves narrowly tailoring new rules to fill gaps in existing regulation, prioritizing high-risk use cases and embedding AI requirements into existing policy.

When gaps in AI oversight are identified, the alliance recommends that policymakers concentrate on defining specific AI use cases linked to potential harms.


About the Author

Chris Riotta

Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.



