At the end of 2022, OpenAI released a Transformer-based large language model (LLM) called ChatGPT. Contrary to the expectations of OpenAI's own employees, ChatGPT became the fastest-growing web-based application in history, reaching 100 million active users in two months, a record since surpassed only by Meta's Threads. The first impressions ChatGPT made on the public ranged from the lofty to the apocalyptic. In February 2023, Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher wrote that generative artificial intelligence (AI) heralds an intellectual revolution on par with the printing press, this time consolidating and "distilling" the storehouse of human knowledge. In March 2023, Eliezer Yudkowsky foresaw extinction-level risks, imploring governments and militaries around the world to halt AI projects and "be willing to destroy a rogue datacenter by airstrike."
These first impressions represent opposite ends of a spectrum, but the reasoning that occupies the space between them is common in technology policy analysis: personal impressions of generative AI seep into the background assumptions within which policy analysis is conducted. When assumptions of fundamental importance go unchallenged, it is easy to fall into the trap of extrapolating future technological wonders from present technological conditions. Technology policy analysts of all stripes do admirable work, but now is the time to identify the gaps in our reasoning and set a higher standard, individually and collectively.
An example illustrates the broader trend. In his book Four Battlegrounds (a trove of insights overall), Paul Scharre of the Center for a New American Security hedges about the future of AI, though he leans toward the view that "building larger, more diverse datasets may lead to more robust models. Multimodal datasets may help build models that can relate concepts represented in multiple formats, such as text, images, video, and audio." This expectation rests on the idea that scaling up an AI system (enlarging its internal capacity and its training datasets) will yield new capabilities, with approving reference to Richard Sutton's famous argument to that effect in "The Bitter Lesson."
Soon after, Microsoft researchers published a provocative paper on GPT-4 titled "Sparks of Artificial General Intelligence," setting the tone for a series of overly optimistic claims about the future of LLMs. It is not hard to see how one's personal impression of GPT-4 could produce the feeling that "we are on the verge of something big." But that does not justify letting the assumptions bound up with this sentiment fester in one's analysis.
Extensive research has highlighted the limitations of LLMs and other Transformer-based systems. Hallucinations (authoritative-sounding but factually incorrect statements) continue to plague LLMs, and some researchers believe they are simply an inherent feature of the technology. A recent study shows that voters who use chatbots to get basic information about the 2024 elections are vulnerable to being misled by nonexistent polling places and other false or outdated information. Other studies show that LLMs lag behind humans in their ability to form and generalize abstract concepts; a similar story holds for the reasoning abilities of multimodal systems. OpenAI's latest development, the text-to-video generator Sora, produces strikingly realistic footage, yet it conjures objects and people out of thin air and violates the physical principles of the real world.
So much for the idea that new paradigms like image and video generation will deliver the reliable, robust, and explainable AI systems we crave.
None of this means the tech world produces nothing but hype. Carnegie's Matt O'Shaughnessy rightly points out that talk of "superintelligence" can have negative consequences for policymaking precisely because it obscures the fundamental limitations of machine learning. And the Biden administration's sweeping executive order on AI, issued in October 2023, while dramatically invoking the Defense Production Act to authorize the monitoring of certain computationally powerful AI systems, is more measured in tone than one might imagine.
The problem identified here, however, is not hype as such. Hype is a symptom of analytical assumptions left unexamined amid rapid publication and personal or organizational self-promotion. Lest we mistake this for a trend unique to LLMs, the disappointing performance of AI and autonomous drones on the battlefield in Ukraine should prompt reflection on how quickly fundamental breakthroughs were said to be occurring in 2023. Nuance is easier to find in the field of quantum information science, yet there, too, there seems to be little individual or collective reflection as quantum computing, the field's crown jewel, begins to see its prospects downgraded.
Indeed, today's generative AI is starting to resemble a parody of Mao's continuous revolution: transforming this technology into human-like "general" intelligence, or some other marvel of the technological imagination, will always require just one more model upgrade, and it must never be allowed to succumb to challenges from regulators or popular movements.
The upshot is that policy analysts make choices when evaluating technologies. Choosing certain assumptions over others equips analysts with one set of possible policy options at the expense of others. An individual's first impression of a new technology is inevitable and can be a source of healthy diversity of opinion. Problems for policy analysis arise when practitioners fail to pour their first (or second, or third) impressions into a common melting pot that exposes unstable ideas to intense intellectual criticism, so that they can articulate specific policy challenges and solutions without unduly foreclosing other possibilities.
Policy analysis is often a synthesis of factors such as industry, domestic politics, and international affairs. Yet merely identifying a policy challenge is not a neutral act; the identification arises from an intuitive connection between a society's needs and values and the expected or actual impact of developments within and beyond its borders. This intuition, which we all have, should be the focus of honest and collective scrutiny.
Vincent J. Carchidi is a non-resident scholar in the Strategic Technologies and Cyber Security Program at the Middle East Institute. He is also a member of Foreign Policy for America's NextGen Initiative 2024 cohort.
Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.