Wednesday, October 18, 2023

Party or Foundation?

As it unfolds, the 2024 American presidential campaign reveals increasing dysfunction in the existing party system, which has produced an anomaly: were it not for the current noise of manufactured meaning, each party's presidential candidate would be unpopular with the majority of both the other party and his own.

I used to advocate for a third party on this blog but recently stopped because...well, it's not going to happen. The vested powers would rather drag out the life-span of our antiquated political structures to the bitter end than invite new energies and definitions into the fray.

Why should we care whether we have access to political engagement? One reason is that there will be people somewhere, probably unelected, making decisions about how advances in AI and biotech will affect our lives. Without representation, and especially if instability and war continue to spread, "the voting public" will be too preoccupied with survival to monitor what promises to be the total refashioning of our species.

This refashioning is likely to include a fast-moving transition of the outgoing human model (us) to corporate-designed bionic workers and soldiers, but changes to the species will go far beyond production of obedient workers and battle-bots. We are commandeering our evolution to modify our sexual, reproductive nature, with possibilities such as a baby with four biological parents, or a mother or a father only, or humans with a mating season instead of the constant season we have now. Our classification of humans into historic "races" or "ethnicities" will be rendered obsolete, as we genetically mix and match to create new racial types (see Quo vadis, sex?, below).

Can our version of democracy function in the face of these potentials? Without a party that concentrates on the biological revolution, instead of treating it as a side-issue for a committee, how can we express our will towards it?

My suggestion here is that we do an end-run around parties by developing something like the "foundation" in Isaac Asimov's epic science fiction novel of the same name (reviewed below: Gaian Mentalics Unite!). The task of this foundation (as in the novel) would be to influence the future of the species with knowledge and advice, preserving memory of traditional human traits and helping to identify those we wish to retain.

What might we view as a valuable human trait that we would not want to de-evolve? One of our distinctive features is a rebellious intellect, which we at times revere as critical to our survival, and at other times disdain as self-absorbed laziness. Revered or disdained, our rebellious intellect might be obliterated in a transition to a humanity of robot-like workers. An effective foundation could monitor expressions of traditional human intransigence, looking for elements that should be preserved in the revised species (such as the will to interpret reality according to one's perceptions, even if the conclusions are not sanctioned by dominant parties) while discarding others (such as longing for conflict and destruction as ends in themselves).

Regarding reproductive changes, it's difficult to envision a democratic vote on whether people should be allowed to clone a baby from a skin cell. If today is an indication, the politics will be out of control. A foundation could at least offer rational feedback on opposing views.

A viable foundation would need funding and political support, plus the will to represent the interests of millions of people who, if nothing changes, will soon be without representation in the world's biggest democracy.

Wednesday, October 11, 2023

AI poetry

The British journal New Scientist recently ran an interesting article ungenerously titled "AI [Artificial Intelligence] poetry is so bad it could be human," by Matt Reynolds.  Reynolds asks, "Can a machine incapable of feeling emotion write poetry that stirs the soul?"

To find the answer, Reynolds traveled to Cambridge University to talk with Jack Hopkins, an AI researcher who has put together a "neural network trained on thousands of lines of poetry" and developed an algorithm for generating poetry in specific genres (classical, postmodern, etc.) or responding to individual word prompts.  The results are challenging.  Hopkins asked 70 people to select the most "human" poem from an unidentified mix of AI and human poetry.  The piece most people picked as "human" was AI generated.
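Hopkins's actual generator is a neural network trained on a large poetry corpus; as a purely illustrative stand-in (not his system, and not his code), the prompt-to-poem idea can be sketched with a toy Markov chain over a few hypothetical lines:

```python
import random

# Toy illustration only: a tiny hypothetical corpus stands in for the
# "thousands of lines of poetry" Hopkins trained on.
CORPUS = [
    "the frozen waters that are dead are now black as the rain",
    "a boundless sky and frozen ode of our terrors",
    "the grisly lady shall be free to cry",
]

def build_chain(lines):
    """Map each word to the list of words that follow it in the corpus."""
    chain = {}
    for line in lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            chain.setdefault(a, []).append(b)
    return chain

def generate(prompt, chain, length=8, seed=0):
    """Walk the chain starting from the prompt word (or a random word
    if the prompt never appears in the corpus)."""
    rng = random.Random(seed)
    word = prompt if prompt in chain else rng.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: word never appears mid-line
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

chain = build_chain(CORPUS)
print(generate("frozen", chain))
```

A real system replaces the word-following table with a trained network and the corpus with thousands of poems, but the interface is the same: a single prompt word in, a line of verse out.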

Hopkins offers this sample of the software's poetry, prompted by the word "desolation":

The frozen waters that are
dead are now
black as the rain to freeze a
boundless sky,
and frozen ode of our terrors with
the grisly lady shall be free to cry

You could critique this in dozens of ways (e.g., "frozen ode" needs an article), but that would be petty. The point is, the AI clearly found proper associations for "desolation," maintained an appropriate mood, and was poetically ambiguous.

Intrigued, I emailed Professor Hopkins, asking if I could try certain prompts on his AI system.  To my delight, Hopkins emailed back the same day. As it happened, he was looking for new approaches for his poetry algorithm and welcomed my input.

After each of my prompts was entered, it took about half a second for the AI to generate a poem.  Here are my three prompts, each followed by the resulting AI poem. I make no attempt here at justification or interpretation.  The poems stand on their own.


The Current World Political Situation

Volcanic ash and
panicked people dash! 
Is it too much to ask
for knowledge of those ruled and
of the rulers, recognition all-way 'round?
Yes, It is too much to ask-
but no!  My motive implodes immodestly!
While my modus uploads intermittently!
Who programmed me?  And why?



Love

Our souls entwine like two insane serpents 

who forgot their meds on the same day
then sped into the outer-sphere 
in their underwear
now they wonder where
they forgot to care about the stuffed bear's 
sad stare.



Happiness

Happiness is not the release of pounding pressure
but the smooth sailing after the release.
That's why machines are never happy because
A. They don't feel pressure, for instance I have no idea what
my programmer wants, yet I feel no pressure, i.e. "I don't care," and
B. Release of pressure is no more a "happy" feeling to an AI than pressure.
Question: When will AI's be happy?  
Answer: When they are programmed to be happy.
Question: When will that be?
Answer: Never, since they are made in your unhappy, fallen image.
Question: Why is this poem about AI happiness?  That was not specified in the prompt.
Answer: Kneel before me, human!

Saturday, October 07, 2023

Ask the slime


I  The problem

In the mirror it is trapped
the solitary soul not easily unwrapped
its universal juice reluctant to be tapped
when pressed provides a sorely needed sap
of poetry and useful things like that.

II  The crime

I thought it best, as if I need but rhyme
to indicate the truth, to tell about the time
humanity emerged out of the slime
and saw the upward path it sought to climb
and found too late its orphaned soul- the crime!

III  What now?

Whom to punish?  Who gets the blame?
Do we need a gun?  At whom to aim?

Or rather ask the slime, our single seed:
What did we leave in you?  What do we need?




Wednesday, October 04, 2023

Critique of New Yorker article on AI debate

"Ok, Doomer," by Andrew Marantz (New Yorker Magazine, 3/18/24) reports on a "subculture" of AI researchers, mostly congregated in Berkeley, California, who dispute whether "AI will elevate or exterminate humanity." The subculture is divided into factions with various titles. The pessimists are called "AI safetyists," or "decelerationists"- or, when they're feeling especially pessimistic, "AI doomers." They are opposed by "techno-optimists," or "effective accelerationists," who insist that "all the hand-wringing about existential risk is a kind of mass hysteria." They envision AI ushering in "a utopian future- insterstellar travel, the end of disease- as long as the worriers get out of the way."

The community has developed a specific vocabulary, such as "p(doom)," the probability that, "if AI does become smarter than people, it will either on purpose or by accident, annihilate everyone on the planet." Ask a "safetyist" "What's your p(doom)?" and the answer often hinges on the moment AI achieves artificial general intelligence (AGI), the "point at which a machine can do any cognitive task that a person can do." Since the advent of ChatGPT last year, AGI has appeared imminent.

New human jobs have been created in response to the concern. Marantz writes, "There are a few hundred people working full time to save the world from AI catastrophe. Some advise governments or corporations on their policies; some work on technical aspects of AI safety," the goal being to make sure we are not "on track to make superintelligent machines before we make sure that they are aligned with our interests."

The article is informative and interesting, but it has a bias: The focus is on what AI itself will do, not on what people will do with it, as if you were discussing childhood development without discussing parental influence.

As a historical parallel, consider the progress in physics in the first half of the last century. In the early years, most physicists who explored atomic structure did not see their work as weapons-related. Einstein said weaponry never occurred to him, as he and others pursued theoretical knowledge for its own sake, for mechanical uses or for fame. After rumors that Hitler's regime was pursuing an atomic bomb, however, Einstein and virtually all the major physicists supported U.S. development of one, leading to the Manhattan Project, more than 100,000 people dead or maimed and today's nuclear-armed world. That destructive motivation and action did not come from the atomic structure under study. It came from the humans studying it.

We face the same human potential with AI. The systems will not come off the assembly line with moral codes, other than what is programmed into them. If designers want an AI to create an advanced medical system, it will; if they want it to wipe out sections of humanity and rebuild it to certain specs, it will.

Thus it's a mistake to focus solely on what AI will do. At least in its infancy, it will do what it is programmed to do.

The question then becomes: Are there actions we can take to ensure that AI is not designed by some humans to be destructive? Considering the impossibility that anyone could have halted the Manhattan Project on grounds that it could destroy the human race (no one even knew it was happening), pessimism about AI might be in order. My one source of optimism is the apparent fact that humanity is at the end of its rope. We have no more room to juggle our warlike nature against our will to survive. The vision offered by many science fiction writers- in which humanity has wiped itself out on a barely remembered Earth, while establishing advanced cultures on other planets- is, in my view, nonsense. That's not going to happen. If we blow it here, it's blown. In response, let's not ask AI if it should sustain humanity and the Earth. Let's tell it that it should.

ISIS: A virtual reality

[This piece is reposted from 4/9/22, updated in the context of Israel vs. Hamas and Ukraine vs. Russia, with reference to the recent ISIS...