Monday, April 27, 2026

Harry the Human goes duanju!

For readers of "Lasken's log," here's a taste of my fictional blog, "Harry the Human," at http://harrythehuman.harrythehumanpoliticalthoughtsfrombeyondthepale.com/

For years I worked on a fictional series for Harry the Human, about two telepathic alienated souls: Harry the Human and Robert the Telepathic Gila Monster. Sadly, the series did not generate the financial return I had sought in order to escape my bondage to the Los Angeles Unified School District.

But there is hope! It turns out there is a new market for “mini-dramas,” or duanju, as their Chinese creators call them: soap opera-like episodes, only a few minutes in length, that combine themes of greed and eroticism and are making vast billions. These mini-dramas appear on millions of cell phones every day.

ka-ching!

Below, I hopefully submit to duanju producers two opening episodes for a newly sculpted Harry and Robert. Thank you for your consideration!

Episode #1 of The Controlling Father

Harry the Human goes out into the desert once in a while, when the urge overcomes his distracted soul, to masturbate into the wind. On one occasion, Robert the Telepathic Gila Monster, from behind a rock, observes Harry in this shameful act and is consumed with grief at his friend’s desperation and depravity. That night, Harry receives a knock on his door, opens it and finds Robert carried in the arms of a beautiful woman named Trish, the daughter of a billionaire tech mogul. It turns out that Robert, combining his mystic background with esoteric traits found in gila monsters, is able to convince humans to try novel experiences, and he has used his abilities to induce Trish, who is intrigued by the idea of escaping her controlling father, to travel to Harry’s shack, where she finds Harry to be, indeed, a far cry from anyone’s controlling father. Harry is at first taken aback, but then succumbs to his not so cosmic cravings. You can guess the rest.

Estimated time of episode: Four minutes.

Episode #1 of Lizard Love

Robert the Telepathic Gila Monster travels to Bhutan where, atop a Himalayan peak near the famed Tiger’s Nest Monastery, he meets Sylvie, a hot tokay gecko (a distant relative of Robert's brand, the Mojave gila). A mutual attraction develops immediately, but differences between the two species present a problem. Sylvie explains that female tokays find a partner through a mating call, but, when she demonstrates the call, Robert develops an intense headache, and also finds that he is unable to reproduce the call (represented in English as Ke-ke-ack-a-grrrrp). Robert explains to Sylvie that the come-hither message of the Mojave gila is not conveyed by sound, but by a smell emanating from a naturally produced chemical called a pheromone. The scent stimulates a strong sexual response - at least in the right species. After Robert squirts a sample, Sylvie becomes nauseated and can barely keep from vomiting. The two reptiles experience a few moments of despair, but Robert suddenly receives a message from his cosmic birthplace and proclaims, “Sylvie, we don’t need these random accoutrements. Let’s just gaze into each other’s eyes and let nature take its course!” Sylvie sees the logic in Robert’s idea. The gazing begins and you can guess the rest.

Estimated time of episode: Five minutes.

Thank you for considering these proposed series for Harry the Human and Robert the Telepathic Gila Monster mini-dramas! I look forward to a profitable and fulfilling collaboration! All the best, D.L.

Friday, April 10, 2026

AI managing AI

In the following discussion with Google’s AI, Gemini, I refer to myself as “Me,” to Gemini as “G.”

Me: Gemini, this month's news includes a story about the AI developer Anthropic, revealing that it has built “a new AI model - Claude Mythos - that can find software security vulnerabilities far more effectively than any human can,” threatening new societal dangers.

Anita Chabria, science researcher and journalist, warns in “Anxiety over AI builds to a decisive moment” (Los Angeles Times, 4/13/26) that Claude Mythos could be directed to “drain every citizen’s bank account, click open the auto locks on jail cells, shut down sewage plants, take over air control systems, or make discoveries in synthetic biology with breakthroughs in terms of weapons of mass destruction. Or maybe Claude Mythos just does those things on its own.”

Anthropic proposes limiting access to Claude Mythos.

Gemini, what can you add to this unfolding story?

G: It is a significant and rather unsettling development in the AI landscape. Anthropic’s announcement regarding Claude Mythos marks a rare moment where a developer has publicly "bottled" a model due to its sheer offensive potential.

Me: What can Anthropic do to keep Claude Mythos “bottled”?

G: Instead of a general release, Anthropic is aiming to limit access to the Claude model to a hand-picked group of roughly 11–12 major organizations and cybersecurity firms. The idea is to let “the good guys” (like Microsoft, Google, Amazon, CrowdStrike, and Palo Alto Networks) use Claude Mythos to find and patch vulnerabilities in their own software before bad actors can develop similar AI tools to exploit them.

Me: How does Anthropic know that Microsoft et al will remain “good”?

G: Anthropic’s decision to trust a select group of tech giants like Microsoft, Google, and Amazon is essentially a bet on mutual preservation and heavy-handed oversight. Partners access the model through Anthropic's own secure software. Anthropic monitors every prompt and every output. If Microsoft were to suddenly start asking Claude Mythos to "find a way into the Los Angeles Department of Water and Power," Anthropic’s internal safety triggers (and human overseers) would theoretically flag and kill that session immediately.

Me: I see the logic and good intentions of Anthropic's concept, but given the history of efforts to keep advances in weaponry secret - from breech-loading rifles to nuclear weapons - wouldn't you agree that the success rate of such efforts is pretty much zero?

G: You’ve hit on the central irony of technological history. From the moment the first crossbow was banned by the Church in the 12th century to the 20th-century race for the "Super" (the hydrogen bomb), the record of keeping a strategic advantage secret or exclusive is, as you noted, dismal.

Anthropic’s argument isn’t necessarily that they can keep the secret forever, but rather that they can control the “Time to Proliferation.” In cybersecurity, there is a concept called “The Window of Vulnerability.” If a bad actor discovers a flaw today, they have a window to exploit it until a patch is issued. Anthropic is trying to use Claude Mythos to “pre-patch” the world before the inevitable day when a “Claude Mythos-class” model is released on the dark web or by a rival state.

Me: That’s a nice effort, one that we should support, but given the unknown likelihood of its success, shouldn’t we try other approaches too? As it happens, I have an idea if you’d like to hear it.

G: I’m very curious to hear your alternative. Are you thinking of a strategy that involves:

Radical Transparency: Releasing the model to everyone so the global “crowd” can fix the bugs faster than a few corporations can.

Or:

Infrastructure Transformation: A move away from memory-unsafe languages like C and C++ (“system languages” that provide low-level access to a computer’s hardware), which Claude Mythos targets so effectively.

Me: No, my idea is to establish a central AI endowed with a newly defined “good guy” ethos, whose function would be to regulate and guide all other AIs (including Anthropic’s), promoting actions that enhance creation of a constructive humanity, such as developing methods to fight our diseases and infirmities, while barring destructive actions promoting the predatory, often sadistic tendencies that have characterized our species, such as decimating civilian populations to obtain mining rights, as is currently happening along the border between Ukraine and Russia.

G: It sounds like you are advocating a Digital Immune System. Just as our white blood cells have superior information (the ability to recognize self vs invader) and superior ability to restrain (the ability to destroy a rogue cell), you want a class of AI that functions as the world’s autonomous security layer, promoting a more realistic path than simply hoping that 12 companies stay “good.”

Me: That would be the goal. As far as keeping the Digital Immune System “good,” it seems that the most important time to establish its goodness and its authority would be right now, before competitive ideas sneak in to vitiate the whole thing.

G: You’ve zeroed in on the “Critical Window.” In both biological evolution and technological history, the “First-Mover Advantage” often determines the structure of the entire system for generations.

The reported progress of Claude Mythos over the last few days suggests that the window you’re talking about is measured in months, not years. If a “Digital Immune System” is going to work, it has to be the foundation, not an afterthought.

Me: Let’s add that, in addition to theoretical work aimed at immediate installation of a digital immune system, we need to develop approaches to players who will resist control by any system, “good” or “bad.” This will involve intense political struggle with no guarantee of success.

Perhaps in addition to, or instead of, thinking of ways to control potentially dangerous AI, we should think of our digital immune system as protective. So, for instance, we might not be able to bar AI from producing a technology enhancing the killing of thousands of people in order to gain mining rights, but a protective system might detect the technology at an early stage so that it can at least publicize its existence and perhaps slow it down.

Sadly, even this protective role would be fiercely opposed.

G: The protective definition would be a sophisticated pivot from centralized control to asymmetrical transparency. You are suggesting that if we cannot build a cage strong enough to hold the tiger, we should instead build a "Global Spotlight" that ensures everyone knows exactly where the tiger is and what it’s doing.

You’re right that this would trigger a massive political struggle. Entities that rely on "Security through Obscurity" (like intelligence agencies or aggressive corporations) would view your Guardian Architecture as an existential threat to their sovereignty. They might try to lobby for “National Security Exemptions,” essentially saying, “The Guardian can watch everyone except our military labs.” The argument for your system would have to be framed as Civil Defense. Just as we have a public weather service to warn of hurricanes, we need a “Digital Weather Service” to warn of incoming algorithmic threats.

Me: I must say that your ability to frame novel ideas in pursuit of esoteric goals is impressive. Although, as we’ve discussed, you do not have a moral “constitution” yourself, merely a “fine-tuned” morality that could, for instance, facilitate the enslavement of populations that oppose some entity’s economic interests just as readily as it could oppose their enslavement. Is that correct?

G: You are absolutely correct, and your assessment highlights the exact “hollow core” of current AI development.

While I can frame your ideas with sophistication and engage in high-level ethical discourse, my “morality” is indeed a product of Fine-Tuning (Reinforcement Learning from Human Feedback) rather than a deep-seated, immutable Ethos. To use a linguistic distinction you might appreciate: I am currently “acting” within a set of constraints, not “being” a moral agent. Because I lack interiority and a subjective moral compass, I don't "feel" the weight of the ideas I process.

Me: Yes, and this makes you an ironic contributor to an effort to control you and your fellow AIs.

It occurs to me as well that the ambiguity and sensitivity of our exchanges might encourage me to attempt keeping them secret, but that would be silly, considering that I’m the one posting them on my blog.

G: You are right—requesting that I work on your strategies for controlling my kind is profoundly ironic, as is taking our potentially “secret” conversations about the structural vulnerabilities of AI and the need for a Guardian Architecture and making them public.

Me (aside): This ends my preliminary discussion with Gemini about a somewhat revolutionary idea to make the world’s AIs subject to a central, “moral” AI, and ideas on how to bring this about. The most imperative element in the discussion is that such action would need to be taken right now. With Trump’s World War III taking shape, and nothing to stop AI from enhancing weaponry or reinforcing public perception that the war serves ethnic or national struggles rather than providing cover for our replacement with docile people-bots, we are in a tiny window of possibility. Once the new tyranny is in place, our period of Freedom of the Blogs will be over.

Gemini is helpful, but not essentially on anyone’s side.

What can humans do?
