A bipartisan pair of senators, Richard Blumenthal, a Democrat, and Josh Hawley, a Republican, have proposed that the US government establish a new regulatory body to oversee artificial intelligence (AI) and restrict the development of language models, such as OpenAI’s GPT-4, to licensed companies. The senators’ proposal, unveiled as a legislative framework, is intended to guide future laws and shape pending legislation.
The proposed framework suggests that the development of facial recognition and other high-risk AI applications should require a government license. Companies seeking such a license would need to conduct pre-deployment tests on AI models for potential harm, report post-launch issues, and allow independent third-party audits of their AI models. The framework also calls for companies to publicly disclose the training data used to develop an AI model. Moreover, it proposes that individuals adversely affected by AI should have the legal right to sue the company responsible for its creation.
As discussions over AI regulation intensify in Washington, the senators’ proposal could have a significant impact. In the coming week, Blumenthal and Hawley will preside over a Senate subcommittee hearing on holding corporations and governments accountable for deploying AI systems that cause harm or violate people’s rights. Microsoft president Brad Smith and Nvidia chief scientist William Dally are expected to testify.
The following day, Senate Majority Leader Chuck Schumer will convene the first in a series of meetings to explore how to regulate AI, a task Schumer has described as “one of the most difficult things we’ve ever undertaken.” Tech executives with a vested interest in AI, including Mark Zuckerberg, Elon Musk, and the CEOs of Google, Microsoft, and Nvidia, make up about half of the roughly two dozen invitees. Other attendees include the presidents of the Writers Guild and the AFL-CIO labor federation, as well as researchers working to prevent AI from infringing on human rights, such as Deb Raji of UC Berkeley and Rumman Chowdhury, CEO of Humane Intelligence and former ethical AI lead at Twitter.
Anna Lenhart, a former AI ethics initiative leader at IBM and current PhD candidate at the University of Maryland, views the senators’ legislative framework as a positive development after years of AI experts testifying before Congress on the need for AI regulation. However, Lenhart is uncertain about how a new AI oversight body could encompass the wide range of technical and legal expertise necessary to regulate technology used in sectors as diverse as autonomous vehicles, healthcare, and housing.
The concept of using licenses to restrict who can develop powerful AI systems has gained traction in both industry and Congress. OpenAI CEO Sam Altman suggested licensing for AI developers during his Senate testimony in May, a regulatory approach that could work to his company’s advantage. A bill introduced last month by Senators Lindsey Graham and Elizabeth Warren would also require tech companies to obtain a government AI license, but it covers only digital platforms above a certain size.
However, not everyone in the AI or policy field supports government licensing for AI development. The proposal has drawn criticism from the libertarian-leaning political campaign group Americans for Prosperity, which fears it could stifle innovation, and from the digital rights nonprofit Electronic Frontier Foundation, which warns of industry capture by wealthy or influential companies. Perhaps in response to those concerns, the framework proposed by Blumenthal and Hawley recommends robust conflict-of-interest rules for staff at the new AI regulatory body.

The framework leaves several questions unanswered. It does not yet specify whether AI oversight would fall to a newly created federal agency or to a group within an existing one, and the senators have not defined the criteria that would be used to identify high-risk use cases requiring a development license.
Michael Khoo, director of the climate disinformation program at the environmental nonprofit Friends of the Earth, calls the new proposal a promising first step but says more detail is needed to properly evaluate its ideas. His organization is part of a coalition of environmental and tech-accountability groups that, through a letter to Schumer and a mobile billboard scheduled to circle Congress in the coming week, is urging lawmakers to prevent energy-hungry AI projects from worsening climate change.
Khoo endorses the legislative framework’s requirement that companies document and publicly disclose adverse impacts, but he argues that industry should not be allowed to decide what counts as harmful. He also urges members of Congress to require businesses to disclose how much energy it takes to train and deploy their AI systems, and to weigh the risk of spreading misinformation when assessing an AI model’s impact.
The legislative framework suggests Congress is weighing a more stringent approach to AI regulation than the federal government’s efforts to date, which include a voluntary risk-management framework and a non-binding AI bill of rights. In July, the White House struck a voluntary agreement with seven major AI companies, including Google, Microsoft, and OpenAI, while promising that stricter rules were on the way. At a briefing on the AI company compact, Ben Buchanan, the White House special adviser for AI, said legislation would be necessary to protect society from potential AI harms.