
COMMENTARY: Much Ado About Thinking Machines: Regulatory Paths for Policymakers

Thomas Sarsfield, University of Missouri – Saint Louis

“I propose to consider the question, ‘Can machines think?’” So asked Alan Turing in his landmark 1950 research essay Computing Machinery and Intelligence. Six years later, at a 1956 conference at Dartmouth College, researchers coined the term ‘Artificial Intelligence.’ At the time, many of the researchers believed a ‘thinking machine’ would reach human-level intelligence within a generation, and capital began to pour into research projects aiming to advance AI technology. The rapid development of technology and computers captured widespread interest—and fears. For example, Stanley Kubrick’s 1968 classic 2001: A Space Odyssey features HAL 9000, an artificial general intelligence (AGI) gone rogue.

Fortunately, AGI, an AI system capable of matching human performance across a broad range of cognitive tasks, is not an immediate concern. Nevertheless, the swift pace at which AI is developing demands an urgent and proactive response from regulators. This requires Congress to pass legislation authorizing the regulation of AI, as the executive branch has limited ability to address the issue without a legal framework. In November 2022, OpenAI released ChatGPT to the public. The response was nothing less than record-shattering: ChatGPT reached a million users in five days, a quicker ascent than either Instagram or Twitter. At the same time, experts and laymen alike voiced fears that ChatGPT and its successors would be used to perpetuate misinformation, displace workers, and upend society.

Unfortunately, there are few federal regulatory standards for AI. As Representative Ted Lieu (D-CA) notes, “We can harness and regulate AI to create a more utopian society or risk having an unchecked, unregulated AI push us toward a more dystopian future.” The Biden Administration recently provided some intellectual scaffolding for regulation. In October 2022, the administration released its blueprint for an ‘AI Bill of Rights.’ The blueprint establishes five principles to protect society from the threats posed by AI: protection from unsafe or ineffective systems, prevention of algorithmic discrimination, data privacy, notice of when and how AI systems are being used, and the availability of a human alternative. Additionally, the FTC has warned companies not to overhype what their AI systems can do, cautioned Congress against relying on AI to combat harmful content online, and opened an investigation into how investments and partnerships involving generative AI might affect market competition.

Weighing the pros and cons of different regulatory regimes is a challenging task. AI could bring companies trillions of dollars in revenue every year. Yet, allowing these technologies to develop without constraints poses meaningful risks. Labor market disruptions, rampant misinformation, racial discrimination, and emotional and political manipulation via superpowered algorithms all stand to be exacerbated by AI technologies. 

On the one hand, less-regulated markets may be well-situated to fully exploit emerging AI systems. Whereas US companies like OpenAI, Google, and Meta have rolled out impressive generative AI systems over the past year and a half, their European counterparts have been slow to offer competitive alternatives, partly due to red tape imposed by strict data privacy and technology laws. On the other hand, at a moment when political gridlock and hyperpartisanship are as high as ever, introducing potentially destabilizing technologies carries significant risk. With the 2024 election cycle now in full swing, both Republican and Democratic organizations have leaned on generative AI to develop political advertisements. In New Hampshire, bad-faith actors robocalled voters with a digital clone of President Biden’s voice, urging them not to vote in the state’s presidential primary. Some state legislatures have moved to act where Congress has not. A bill under consideration in the Georgia state legislature would ban AI-generated deepfakes of politicians; to highlight the technology’s dangers, State Representative Brad Thomas made a deepfake video of his colleagues speaking in favor of the legislation. Elsewhere, states have enacted AI-related employment, privacy, and transparency laws.

There are several paths forward for policymakers. One option is to create a federal agency to certify emerging AI technologies. Unlike the Food and Drug Administration (FDA), which has the power to ban products, this ‘Artificial Intelligence Regulatory Agency’ (AIRA) would instead rely on limited tort liability: manufacturers of certified AI systems would face reduced exposure to lawsuits, while manufacturers of uncertified systems would not. Matthew Scherer, a legal scholar, argues that such an agency would protect public safety without smothering innovation. However, this approach gives regulators less power to shape how AI is developed and used in society.

Another option is to impose a stricter regulatory scheme on AI. For example, while the European Union’s draft Artificial Intelligence Act does not ban the technology outright, it explores banning specific applications, such as a government-run social credit system, and regulating others, such as the use of AI to score resumes. AIRA (or one or more existing federal agencies) could be charged with enforcing a much broader set of regulatory criteria established by Congress. This path would also allow the legislature to formally codify the Biden Administration’s AI Bill of Rights into law as a set of criteria by which products and services are regulated and approved for commercial use. Of course, this approach risks hobbling the AI industry if overly cautious legislators and regulators impose unnecessarily stringent controls.

Although the sci-fi future promised by Hollywood screenwriters is not imminent, AI nevertheless offers substantial benefits and risks that must be weighed by policymakers. While many companies and application developers may be keen to reap handsome profits from new technologies, the need for competitiveness must be counterbalanced by an effective regulatory regime that harnesses AI for the benefit of society.
