Thursday, September 19, 2024

New Yorker article on AI debate

"Ok, Doomer," by Andrew Marantz (New Yorker Magazine, 3/18/24) reports on a "subculture" of AI researchers, mostly congregated in Berkeley, California, who debate whether "AI will elevate or exterminate humanity." The subculture is divided into factions with various titles. The pessimists are called "AI safetyists" or "decelerationists," or, when they're feeling especially pessimistic, "AI doomers." They are opposed by "techno-optimists," or "effective accelerationists," who insist that "all the hand-wringing about existential risk is a kind of mass hysteria." They envision AI ushering in "a utopian future" of interstellar travel and the end of disease, "as long as the worriers get out of the way."

The community has developed a specific vocabulary, such as "p(doom)": the probability that, "if AI does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet." If you ask a "safetyist" "What's your p(doom)?", the answer often turns on when AI will achieve artificial general intelligence (AGI), the "point at which a machine can do any cognitive task that a person can do." Since the advent of ChatGPT in late 2022, AGI has appeared imminent.

New human jobs have been created in response to the concern. Marantz writes, "There are a few hundred people working full time to save the world from AI catastrophe. Some advise governments or corporations on their policies; some work on technical aspects of AI safety," the goal being to make sure we are not "on track to make superintelligent machines before we make sure that they are aligned with our interests."

The article is informative and interesting, but it has a bias: the focus is on what AI itself will do, not on what people will do with it, as if one were to discuss childhood development without mentioning parental influence.

As a historical parallel, consider the progress in physics in the first half of the last century. In the early years, most physicists who explored atomic structure did not see their work as weapons-related. Einstein said weaponry never occurred to him, as he and others pursued theoretical knowledge for its own sake, for commercial use, or for fame. After rumors that Hitler's regime was working on an atomic bomb, however, Einstein and virtually all the major physicists supported U.S. development of one, leading to the Manhattan Project, more than 100,000 people dead or maimed, and today's nuclear-armed world. That destructive motivation and action did not come from the atomic structure under study. It came from the humans studying it.

We face the same human potential with AI. The systems will not come off the assembly line with moral codes, other than what is programmed into them. If designers want an AI to create an advanced medical system, it will; if they want it to wipe out sections of humanity and rebuild it to certain specs, it will.

Are there actions we can take to ensure that AI is not designed by some humans to be destructive? Is there leadership in the world to guide such action? The current contenders for the U.S. Presidency do not seem to have the AI threat on their minds, not that it would necessarily change the outcome if they did. Considering the impossibility that anyone could have stopped the Manhattan Project on grounds that it might destroy the human race (it was top secret; the public didn't know it was happening), pessimism about AI might be in order. One hope is that people will sense that humanity is at the end of its rope, with no more room to juggle its warlike nature against its will to survive, and pull back from the abyss. The vision offered by many science fiction writers, in which humanity has wiped itself out on a barely remembered Earth while establishing advanced cultures on other planets, is, in my view, nonsense. That's not going to happen. If we blow it here, it's blown. So let's not ask AI whether it should sustain humanity and the Earth. Let's tell it that it should.
