As Vladimir Ilyich Lenin once noted, “There are decades where nothing happens, and there are weeks where decades happen.” With respect to artificial intelligence (AI) technology, we are now living through weeks when decades are happening. Advancements in AI have arrived with ever-increasing velocity since the November 2022 launch of ChatGPT, an AI chatbot built on a large language model (LLM) and developed by OpenAI. As AI’s thrilling potential and serious risks have become more public and immediate, calls to “regulate” AI have gained momentum. The need for such regulation is clear, as I’ve previously discussed, and many experts have called for a strategic pause on the development of the most powerful AI systems until robust regulation is established.

However, most of the discourse regarding the regulation of AI thus far has been light on specifics. I recently had the opportunity to investigate the likely details of AI regulation as a panelist at the Milken Institute’s 2023 Global Conference session “Governing AI: Ethics, Regulation, and Practical Applications,” which was moderated by NBC News Correspondent Gadi Schwartz and included fellow panelists Paula Goldman (Salesforce’s Chief Ethical and Humane Use Officer), Paul Kedrosky (SK Ventures’ Managing Partner), and Kai-Fu Lee (Sinovation Ventures’ Chairman and CEO). Our discussion centered on four topics pertinent to the future of AI regulation.

First, what is the status of AI regulation? While the EU is in the process of implementing its AI Act, first proposed in April 2021, and the Cyberspace Administration of China is reviewing comments on its Draft Measures for the Management of Generative Artificial Intelligence Services, first circulated in April 2023, regulation has lagged at the national level in the United States. This is not surprising: for practical and political reasons, domestic laws and regulations often take years—if not decades—to be passed. Indeed, despite longstanding worries about the effects of social media and the impact of the internet on privacy, there are still no national laws in the United States materially devoted to either subject. Although Senate Majority Leader Charles Schumer has recently been working on a high-level AI framework and various federal agencies (such as the FTC) are analyzing potential industry-specific AI regulations, most of the movement in the United States has occurred at the state or city level. These “laboratories of democracy” have typically focused on managing specific aspects of AI, such as the use of facial recognition technology or the implementation of AI in employment decisions, rather than attempting to comprehensively tackle all AI technologies. Given the difficulty of passing legislation at the national level, this state- and locality-specific approach to AI is likely to continue in the United States.

Second, does the nature of AI create special regulatory challenges? Here, our discussion focused on the rapid pace of AI’s deployment across our economy and society. Along with its proliferation, the technology itself is developing with unusual—if not unprecedented—speed. Every interaction with generative AI, especially LLMs, generates data that developers can use to further train and improve the software. This “auto-improve” capacity can benefit early market entrants (such as OpenAI), raising potential monopolization concerns: early entrants may gain an insurmountable head start over their competitors. The rapidity with which the technology is evolving also raises the worry that legislators and regulators will be “fighting old fires”—i.e., new rules, once enacted, are likely to be quickly superseded by technological changes, if not already out of date when passed. This is particularly troublesome given the current misalignment in incentives: companies, facing intense competition for market share and revenue, are not motivated to effectively create their own AI guardrails.

Third, why regulate AI in the first place? Our panel emphasized both the economic and social factors driving the need to regulate AI. While AI is expected to boost global economic growth by an estimated $13 trillion over the next decade, that boost will not be evenly distributed. Many panelists were concerned about the potential displacement of jobs, as certain sectors—particularly the software and call center industries—may be uprooted in the coming years. My portion of the discussion focused on the impact of AI on a jurisdiction’s values. AI is not monolithic: much of it is noncontroversial (for instance, AI that optimizes industrial processes, supply chains, and travel routes), and the stringency of AI regulation will almost certainly depend on the characteristics of the specific AI technology in question and the values affected by its operation. Thus, its regulation will likely vary application by application and sector by sector. Moreover, because different nations have distinct values, AI regulation will also materially differ across countries. For instance, China has developed a more sector-focused and vertical approach to regulating AI due to its concern with promoting socialist values and protecting against the unfettered dissemination of information (both real and false). In contrast, the European Union’s focus on values such as autonomy, transparency, fairness, and non-discrimination has pushed it toward a more horizontal approach that will impose uniform requirements across industries.

Fourth, and finally, what kind of AI regulation can be expected? On this topic, the panel noted the practical limits of regulation, which cannot solve all the potential problems associated with AI and must not handicap its benefits. Still, there are a variety of tools within the regulatory toolkit that one should anticipate being deployed with some effectiveness. While some types of AI may be banned outright as too dangerous, other forms will face pre-market filing requirements and regular post-release certifications. Bias audits and human-in-the-loop requirements will also be common safeguards, although the former may be technically challenging to accomplish effectively and the latter may hamper the effectiveness of AI technology if required too frequently. The twin requirements that (1) the use of AI be disclosed (with a potential opt-out by the user) and (2) the results of the AI process be explainable are likely to have populist appeal and thus become common regulatory measures. Similarly, rules circumscribing the collection and use of data are also likely to be prevalent, safeguarding the right to privacy and promoting the value of “data minimization.”

Ultimately, the recent developments in AI technology pose both exciting opportunities and difficult challenges that must be carefully addressed. Some of the issues to be tackled are merely variants of established concerns, amplified by the nature of AI, while other problems are wholly new. Regulators will not be alone in their task, as litigation will push the courts to define the limits of AI accountability and the contours of liability. But that private, piecemeal approach alone is insufficient. The recent momentum behind further government regulation of AI is well-founded and should move forward with speed. Please watch our discussion here.

Written by:

John B. Quinn