Opening Pandora’s Box

“AI warnings have provided Americans with fresh Congressional theater.” – The Lonely Realist

The risks of AI were aired in simultaneous hearings earlier this month by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law and the Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security, and were accompanied by a proposal by Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) to address some of those risks in their joint “comprehensive bipartisan framework.” Senator Blumenthal modestly described the joint proposal as a “milestone – the first tough, comprehensive legislative blueprint for real, enforceable AI protections,” even though it is one of a number of Congressional AI proposals (a summary of which can be found here) … and one that is sorely lacking in specificity. Senator Hawley struck an appropriately cautionary note when he added that the “question is whether Congress has the willingness to see it through.” Congressional willingness to act on any proposal, whether or not “bipartisan,” is questionable.

No one doubts that AI algorithms present real and proximate dangers, both to Americans and to humanity. Witnesses at the September 12th hearings agreed that AI will increase the risks of disinformation, flawed decision-making, privacy invasions, and employment disruptions. They testified that AI could fuel public and voter manipulation, and with near-unanimity they called for tough disclosure requirements that would mandate alerts whenever AI is the author of messages, videos, audiotapes, etc. That was the easy part. A more alarming message came afterward from Elon Musk, who spoke with the media about “civilization risk”: “It’s not like … one group of humans versus another. It’s like, hey, this is something that’s potentially risky for all humans everywhere…. There is some chance that is above zero that AI will kill us all. I think it’s low. But … there’s some chance.”

Musk is a member of the Future of Life Institute, which earlier this year warned about the destructive potential of AI by asking a series of rhetorically dystopian questions: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” Views vary widely on how these dangers ought to be addressed. Although the AI concerns that the Blumenthal-Hawley plan confronts are relatively straightforward, the solutions proposed by their “comprehensive bipartisan framework” are not. They would have complex and far-reaching consequences. Their proposal includes establishing a new AI regulator (that would promulgate regulations to which non-U.S. AI developers would not be subject), creating a licensing regime for those engaging in AI (raising a potential “big brother” concern), providing legal penalties for “harms” (a controversial term), defending national security and international competition (by adding new “export controls, sanctions, and other legal restrictions to limit the transfer of AI”), promoting transparency (an attainable goal), and protecting consumers and kids (a laudable goal akin to endorsing Motherhood). How these goals might be realized without stifling progress, innovation, and entrepreneurship is unclear. But then, the purpose of Senators Blumenthal’s and Hawley’s “framework” and of the Senate hearings was not to examine how best to address AI issues. Industry witnesses were invited to the hearings to assure the American public that Congress is seriously considering AI risks. But is Congress truly serious? As Senator Hawley noted, the likelihood of the “comprehensive bipartisan framework” being written into law is small, similar to the odds that Senator Cruz’s proposal to impose term limits on members of Congress and Senators Graham’s and Warren’s proposal to regulate Big Tech will be enacted. (Commentators already have raised concerns about the potentially adverse impact of the framework.)

The Economist’s lead editorial last week focused on the obverse, “How AI can revolutionize science.” Although an advocate of free markets, The Economist’s editorial board began by acknowledging the dangers of AI, noting that it carries the risk of “algorithmic bias and discrimination, the mass destruction of jobs and even, some say, the extinction of humanity.” The editorial, however, addressed AI’s potential for human progress – it “could help humanity solve some of its biggest and thorniest problems … by radically accelerating the pace of scientific discovery” with “world-changing results.”

There are substantial upsides to AI. There also are substantial downsides, including the possibility of catastrophe … as there is in every revolution. How America’s government might mitigate the downside without stunting progress is a question that the Blumenthal-Hawley framework fails to address. Nevertheless, although the framework is merely an outline, it is specific in calling for the labeling of all AI-generated content so that consumers can distinguish between human output and AI output (“Users should have a right to an affirmative notice that they are interacting with an AI model or system.”). There is no reason why Congress can’t act now to require such labeling. On the other hand, how to address “algorithmic bias and discrimination” is more challenging, and whether to address “the mass destruction of jobs” is highly questionable … though both are politically popular. AI undoubtedly will automate away a variety of human jobs, perhaps quickly. (It is worth noting, however, that the countries with today’s highest rates of automation have the world’s lowest rates of unemployment.) The Industrial Revolution saw the mass replacement of home-workers by factories, which, in the early 19th century, led to Luddite protests against manufacturers whose machines eliminated jobs and drove down wages. The Technology Revolution has resulted in the replacement of massive numbers of toll-takers, elevator operators, secretaries, switchboard operators, cashiers, factory and warehouse workers, data entry clerks, bank tellers, travel agents, etc. … and those replacements continue. Americans nevertheless have experienced an unparalleled increase in their standard of living. Progress necessarily entails disruption.

The survivalist question of how America should address Elon Musk’s “civilization risk” is qualitatively different. It is not a subject for Congressional action. After all, any legislation enacted in America will not affect AI development outside of America. China, Russia and others will pursue AI in their own interests. Without a global standard and international agreement – which today is an impossibility – unilateral American action would be to America’s disadvantage.

TLR Index

An index of TLR titles can be found here.

Finally (from a good friend)
