Sam Altman makes it clear
[I first posted this in September 2024 but took it down because most people had not heard of Sam Altman. Now that President Trump has featured Altman in a multi-billion-dollar AI venture ("Stargate"), and Elon Musk has made Altman more famous by attacking Stargate as underfunded, it seems more relevant.]
In a piece in the online journal BBC Tech Decoded ("The Intelligence Age," 9/27/24), Sam Altman, CEO of OpenAI, the company behind ChatGPT, is mostly a cheerleader for AI (e.g., "In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents"), but he's upfront about some delicate issues, like the replacement of traditional human jobs by AI.
As a long-time teacher, I was particularly struck by this: "Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need," a sentence that in effect predicts the automation of teaching and the raising of children by machines. The essay also predicts the replacement of doctors: "We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more."
Altman claims this future will bring "shared prosperity to a degree that seems unimaginable today." Although that may be true for AI researchers and investors, not everyone will be a beneficiary. For instance, it's hard to see what benefit will accrue to teachers replaced by Chromebooks, or to unemployed doctors whose former patients are examined by software.
This is not to say that AI-conducted teaching and health care will be substandard; aspects of those services may become faster and more efficient than they are today. The price, however, will be that humans are no longer nurtured and raised by humans, but by machines.
To be clear, I'm not saying that switching humanity to machine nurturing would be bad in some absolute sense. That's a matter of opinion. I am saying that we should be aware that we are doing it. Why should we be aware? Because awareness has been part of our definition of "human." We have been aware of things that other animals have not been aware of, which is why we've survived. If we choose an evolutionary path that decreases human awareness, leading perhaps to a hive mentality (like bees, so well suited to their jobs that they don't think about anything else), shouldn't we be aware that we are doing so?
Altman describes the future he sees: "Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age. From here, the path to the Intelligence Age is paved with compute, energy, and human will." He apparently thinks the age we're entering should be called the "Intelligence Age." Why he thinks this is a mystery, unless he's only talking about machine intelligence. The typical human, with each forward step of AI, will become progressively less intelligent, in the sense of possessing valuable skills. As an obvious example, if cars become self-driving, people will forget how to drive, and anyone who still knows how will hold a worthless skill. The same holds for teaching and for practicing medicine, and for virtually any other current human job you can think of. Humanity will become like the children of AI, cared for and protected by an intelligence it created but can no longer understand.
There is also the likely advent of widespread war, largely conducted with AI, which will work to the advantage of an AI-based humanity by destroying traditional human systems and replacing them with efficient AI systems. During a state of war it will be even harder for people to oppose or respond to this evolution. Even now, in relative peacetime, we are not organized or encouraged to prioritize discussion of AI. As an example, the United Teachers of Los Angeles, to which I belong and which was effective in the 1970s in raising teacher salaries from $30,000 to $70,000, says not a word about the automation of teaching. What difference does it make how much money we earn if our jobs are gone?
Since an AI-run civilization seems likely, and could possibly be more efficient and less painful than current "civilized" life, one might ask why we should bother to critique it. In my case, the answer is that during the transition I prefer not to be assaulted by advertising and propaganda, such as Altman's, designed to distract us from a full understanding of what is happening. I would rather talk openly about the pros and cons, since that will be the only way for people outside the industry and politics to have any influence on our current evolution.