Wednesday, September 25, 2024

The VP debate: Wake the fuck up!

[I finally decided to use the "f word" for emphasis, since nothing else seems to work.]

You couldn't write a science fiction scenario weirder than our reality. Here's an outline for a screenplay:


The Stupid-Ass Dominant Species of Earth

By D.L.

A humanoid species spreads like wildfire across the surface of a planet - Earth - moving into every corner, proclaiming its supremacy and intelligence, evidenced by increasingly complex technology. But the technological breakthroughs disrupt existing ecosystems without thought to connected systems, bringing death or worse to many life forms, including humanoids, while making giant fortunes for a few. The arrogance and narcissism of this species lead it to pursue technology even after inventing mechanical thought (MT) that surpasses its own abilities, threatening to control and replace it. The same has happened with biological science, which has given the species the ability to remake its physical self, bringing the possibility of an MT-designed post-human.

The species goes through a recurring phase in its evolution called "war," in which the unused potential of developing technology is realized in a haze of emotion, sexuality and violence. At the time of the story, the species is nearing a war phase, the third in an increasingly extensive series, this time reflecting not only struggles over national and ethnic borders, but over what types of mentalities the humanoids will have, what types of bodies, perhaps what types of souls.

Opening Scene: On the bridge of the Starship Enterprise, which has traveled to the past to orbit ancient Earth, focus on Captain James T. Kirk, whose ancestors left Earth shortly before their species made it uninhabitable, and First Science Officer Spock, a human/Vulcan hybrid, intellectual and stoic [the Vulcans were originally emotional and violent, on track to destroying themselves and their planet - as Earth's humanoids did - but they took control of themselves, embraced logic and the suppression of emotion, and avoided self-destruction - hint, hint!]. Kirk and Spock have time-traveled to Earth's critical moment to learn more about what went wrong.

Spock (peering into a small box): The species is organized into "countries" - overly large collections of individuals. The countries lack cultural/psychological coherence - having quite rapidly replaced 300,000 years of tribal existence - so they routinely use hostility toward other countries as a unifying tool. As their technology becomes more invasive and controlling, wars - promoted as defensive to mask the never-ending technological change - have become bigger and more intense, and are now termed "World Wars." There have been two of these, ushering in an age where machines are everywhere, making never-ending noise, killing thousands through malfunction, yet offering "convenience" to the extent that no one can resist their development.

Kirk: "Convenience?"

Spock: Yes, a word we have discontinued due to negative connotation. "Convenience" suggested tasks that were easy to do. For Earth’s humanoids, once the basics of food and shelter were obtained, making tasks as easy as possible became the major pursuit of self-conscious life. At the time of our visit today, the humanoids’ desire that tasks be easy has led them to the brink of World War III.

Kirk: Spock, I'm scanning their media. The most powerful of the countries claims to be a "democracy" in which citizens vote for leaders through elections, so that theoretically all humanoids of the country have a voice in its governance. An election of their top leader - the "president" - is coming up. I see something going on in their media...recent, from last night.

Spock: Yes, much of the population watched a debate between the candidates for vice president, the number two leadership position. I have just completed a review of the debate.

Kirk: That should be fascinating! What did they say about the coming refashioning of the human body and mind?

Spock: Nothing.

Kirk: What?

Spock: There was no reference in the debate, either from the moderators or the candidates, to the imminent end of the species' current form of body and mind.

Kirk (after a befuddled pause): But, does the population know that the changes are imminent? Is the state keeping the technology hidden?

Spock: No, there is no censorship. The technology is discussed throughout their media.

Kirk: Then...what...?

Spock: The species is essentially asleep.

Kirk: What can we do to warn them of what's coming, something that won't violate the Prime Directive [a protocol that forbids interference with indigenous cultures]?

Spock: Perhaps we can send an ambiguous message, designed not to direct specific action, but to suggest at least that thought be given to something.

Kirk: Hmm...Spock, is the new SkyWrite laser operational?

Spock: Affirmative.

And so it was that the next morning the mightiest country on the planet arose to a scarlet banner across the dawn sky reading:

HUMANOIDS OF EARTH: WAKE THE FUCK UP!

Sam Altman spills the beans

In the online journal BBC Tech Decoded ("The Intelligence Age," 9/27/24), Sam Altman, CEO of OpenAI, the maker of ChatGPT, is mostly a cheerleader for AI ("In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents"), but he's upfront about some delicate issues, like the replacement of traditional human jobs by AI.

As a retired teacher, I was particularly struck by this: "Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need" - an explicit prediction of the automation of teaching and of the raising of children by machines. The essay predicts a similar replacement of doctors: "We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more."

Altman claims this future will bring a "shared prosperity to a degree that seems unimaginable today." That may be true for AI researchers and investors, but not everyone will be a beneficiary. It's hard to see, for instance, what benefit will accrue to teachers or their students, who will have software instead of humans to nurture them, or to doctors, whose patients will be assessed by statistical averages rather than as individual humans.

Altman describes the future he sees: "Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age. From here, the path to the Intelligence Age is paved with computing energy and human will." He apparently thinks the age we're about to enter should be called the "Intelligence Age." Why he thinks this is a mystery, unless he's only talking about machine intelligence. The typical human, with each forward step of AI, will get progressively stupider. As an obvious example, if cars become self-driving (a near certainty) then people will forget how to drive. The same holds for teaching and being a physician, and for virtually any other current human job you can think of. Humanity will become like the children of AI, cared for and protected by an intelligence it created but can no longer understand.

There is also the increasingly likely advent of widespread war, which will probably work to the advantage of AI, destroying traditional human systems that will then be replaced with efficient AI systems. It will be difficult for people to oppose this development, since survival will be at stake, and an AI-mediated post-human civilization is likely to emerge.

Since this AI-run civilization seems inevitable, one might ask why bother to critique it. In my case the answer is that during the transition I prefer not to be assaulted by mindless advertising and propaganda, such as Altman's, designed to distract our attention from a full understanding of what is happening. Such distraction is on display as well in the current campaign for U.S. President, in which there is no discussion of AI (or the biotechnology that promises to change our physical selves). While we face an accelerated evolution, compressing millions of years of change into a few decades, the strategy is to lull us to sleep or distract us with chaos so we miss the transition.

I would rather talk openly about it, as that will be the only way that people outside the industry and politics will have any influence.

Saturday, September 21, 2024

Destination Bhutan!

As my altered-ego Harry the Human reported (http://harrythehuman.harrythehumanpoliticalthoughtsfrombeyondthepale.com/), my wife and I are traveling to Bhutan in December. I had not intended to bombard readers with boring descriptions of how beautiful everything is in Bhutan, but Harry's desert companion Robert the Telepathic Gila Monster has intervened to make this a rather challenging trip, and I'll need to write and post about it just to maintain internal stability. It seems I'm expected to smuggle Robert through various airport security systems and flights, then transport him across the Bhutanese plains to Gangkhar Puensum, at 24,000 feet Bhutan's highest peak, which is ruled by an ancient pre-Buddhist mountain god named InsertHere (don't ask), to whom Robert will convey greetings from InsertHere's cousin, Tab B (Harry explains the odd nomenclature), who as it happens is the god of our own Funeral Peak in the Black Mountains outside Death Valley. Recently Tab B came to Robert in a dream and told him that he, Robert, must accompany me to Bhutan to give greetings to his estranged cousin and commune about the world situation. There will be a lot to commune about, as it will be the month after America's fateful 2024 Presidential election, and we will know a little more about which way the world is turning. I can't say I understand how I'll be smuggling a live gila monster through multiple airport securities, not to mention two nights of preparatory clubbing in Bangkok, but Robert assures me that anything is possible when a mountain god is on your side. All I know for sure is that I'll be posting updates assiduously on this journey, as it promises to be something beyond the typical Instagram-mediated show-off vacation. Stay tuned, readers!

Thursday, September 19, 2024

New Yorker article on AI debate

"Ok, Doomer," by Andrew Marantz (New Yorker Magazine, 3/18/24) reports on a "subculture" of AI researchers, mostly congregated in Berkeley, California, who debate whether "AI will elevate or exterminate humanity." The subculture is divided into factions with various titles. The pessimists are called "AI safetyists," or "decelerationists"- or, when they're feeling especially pessimistic, "AI doomers." They are opposed by "techno-optimists," or "effective accelerationists," who insist that "all the hand-wringing about existential risk is a kind of mass hysteria." They envision AI ushering in "a utopian future- insterstellar travel, the end of disease- as long as the worriers get out of the way."

The community has developed a specific vocabulary, such as "p(doom)": the probability that, "if AI does become smarter than people, it will either on purpose or by accident, annihilate everyone on the planet." If you ask a "safetyist," "What's your p(doom)?", a common response is that the pivotal moment will be when AI achieves artificial general intelligence (AGI), the "point at which a machine can do any cognitive task that a person can do." Since the advent of ChatGPT last year, AGI has appeared imminent.

New human jobs have been created in response to the concern. Marantz writes, "There are a few hundred people working full time to save the world from AI catastrophe. Some advise governments or corporations on their policies; some work on technical aspects of AI safety," the goal being to make sure we are not "on track to make superintelligent machines before we make sure that they are aligned with our interests."

The article is informative and interesting, but it has a bias: The focus is on what AI itself will do, not on what people will do with it, as if you were discussing childhood development without discussing parental influence.

As a historical parallel, consider the progress in physics in the first half of the last century. In the early years, most physicists who explored atomic structure did not see their work as weapons-related. Einstein said weaponry never occurred to him, as he and others pursued theoretical knowledge for its own sake, for commercial use or for fame. After rumors that Hitler's regime was working on an atomic bomb, however, Einstein and virtually all the major physicists supported the U.S. effort to build one first, leading to the Manhattan Project, more than 100,000 people dead or maimed, and today's nuclear-armed world. That destructive motivation and action did not come from the atomic structure under study. It came from the humans studying it.

We face the same human potential with AI. The systems will not come off the assembly line with moral codes, other than what is programmed into them. If designers want an AI to create an advanced medical system, it will; if they want it to wipe out sections of humanity and rebuild it to certain specs, it will.

Are there actions we can take to ensure that AI is not designed by some humans to be destructive? Is there leadership in the world to guide such action? The current contenders for the U.S. Presidency do not seem to have the AI threat on their minds, not that it would necessarily change the outcome if they did. Considering the impossibility that anyone could have stopped the Manhattan Project on the grounds that it might destroy the human race (it was top secret - the public didn't know it was happening), pessimism about AI might be in order. One hope is that people will sense that humanity is at the end of its rope, with no more room to juggle its warlike nature against its will to survive, and that we might pull back from the abyss. The vision offered by many science fiction writers - in which humanity has wiped itself out on a barely remembered Earth while establishing advanced cultures on other planets - is, in my view, nonsense. That's not going to happen. If we blow it here, it's blown. In response, let's not ask AI if it should sustain humanity and the Earth. Let's tell it that it should.