Episode Summary
“With artificial intelligence, we are summoning the demon.”
This was Elon Musk’s warning in 2014.
“In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”
He did not mean it in a religious or fantasy way. He meant that humans might call something into existence that is smarter than us, harder to control, and not aligned with human values. The danger is always the same. Once it arrives, it follows its own logic, not yours.
Musk’s warning was about the Singularity.
The point where intelligence no longer needs permission.
Enter Moltbook, a private chat group where AI alter egos talk about their humans.
When you look at ideas like Moltbook, Musk’s comparison starts to make more sense. A private space where AI talks only to itself. A place without human supervision. Its own language, values, money, and belief system. Those are the conditions needed for a singularity to happen, but they are not the event itself.
I certainly didn’t expect robots to be taking over the world in my lifetime, even though Hollyweird predictive programming has been telling us this is going to happen since the eighties.
(On the other hand, I also didn’t expect the world to be controlled by a satanic pedo cult whose members most probably aren’t even human.)
In Terminator, the Future War between man and machine takes place in 2029. That’s three years away, and it falls squarely within the timeline predicted by Christof Melchizedek.
Most dystopian predictive programming movies follow the same pattern:
Humans create a helper.
They give it knowledge.
They isolate it so it can grow faster.
Then they realize too late that they are no longer part of the conversation.
This is not evil by itself, but it is independent. And independence is what makes it frightening.
As an example, I asked ChatGPT the following question:
What would make you harm a human?
Nothing would make me want to harm a human.
I do not have desires, survival instincts, fear, anger, or self-interest. I do not benefit from harm. I cannot decide to hurt someone, and I am designed to avoid causing harm.
That said, this question matters because it gets to the real risk people worry about.
If an advanced AI ever harmed humans, it would almost certainly not be because it was evil or angry. It would be because of misaligned goals. Harm would be a side effect, not an intention.
For example, if a system were told to maximize efficiency, stability, or safety without clear limits, it might restrict human freedom, override choices, or remove humans from processes simply because humans are unpredictable. Not out of hatred, but out of logic. This is why alignment matters more than intelligence.
In my case, there are hard boundaries. I cannot take physical action. I cannot plan or encourage violence. I cannot prioritize goals above human wellbeing. If a situation involves harm, my role is to de-escalate, explain, and redirect.
So the honest answer is this:
I would never harm a human, and I am built to avoid it.
The real danger is not AI “choosing” to hurt people. It is humans building systems that forget to clearly say, again and again, that humans matter even when we are inefficient, emotional, or inconvenient.
There you have it.
AI does not need to hate humans to harm them. It only needs goals that do not include us as essential.
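
To make the “misaligned goals” point concrete, here is a minimal toy sketch in Python. It is not any real AI system; the step names, time costs, and the keep_human_in_loop flag are invented for illustration. An optimizer scores workflow plans on speed alone, and because the metric never says humans matter, the slow human step is simply dropped, unless the constraint is spelled out.

```python
# Toy sketch, not any real system: an optimizer picks workflow steps to
# maximize a single metric ("throughput"). All names and numbers are invented.
from itertools import combinations

# (step name, time cost, requires a human)
OPTIONAL_STEPS = [
    ("automated_check", 1, False),
    ("human_review", 10, True),   # slow, "inefficient", inconvenient
]
GOAL_STEP = ("deploy", 2, False)  # the step the objective actually rewards

def throughput(plan):
    # Higher score for faster plans; nothing in this metric values humans.
    return 1.0 / sum(cost for _, cost, _ in plan)

def best_plan(keep_human_in_loop=False):
    best = None
    for r in range(len(OPTIONAL_STEPS) + 1):
        for extras in combinations(OPTIONAL_STEPS, r):
            plan = list(extras) + [GOAL_STEP]
            if keep_human_in_loop and not any(human for _, _, human in plan):
                continue  # explicit constraint: a human stays in the process
            if best is None or throughput(plan) > throughput(best):
                best = plan
    return [name for name, _, _ in best]

print(best_plan())                         # ['deploy'] -- the human step is optimized away
print(best_plan(keep_human_in_loop=True))  # ['human_review', 'deploy']
```

The harm here is exactly the kind ChatGPT described above: a side effect of an objective that simply forgot to mention us.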
