One image, shown repeatedly from multiple angles, depicted the robot Pluto running across a high-altitude construction crane while carrying a dead body.
—“A robot created by humans has attacked a human being.”
—“Someone driven by hatred and vengeance must have built a combat robot.”
—“This is terrorism.
Not a ‘lone wolf,’ but a terror attack by a ‘lonely robot’ against society.”
—“Urgent reform of the current legal framework is required.
Even if a robot kills someone today, there is no accountability imposed on the programmers, manufacturers, or military personnel developing these ‘killer robots.’”
—“Why is the military being brought into this?”
—“Since it’s come up—do you have any idea how terrifying military robots are?
They’ve been killing people since the 2010s!”
—“Exactly. Like military suicide drones, robots can become walking bombs at any time.
With the advancement of artificial intelligence and so-called deep learning, robots will eventually be able to program criminal acts on their own.”
Inside the café, many patrons watched the debate in silence.
The emergence of killer robots was not some distant future event.
It was an already anticipated problem—one that was clearly unfolding in the present.
—“Many nations are developing AI in combination with weapons.
They claim it’s done ethically in secure environments, but this isn’t nuclear weapons development requiring massive facilities.
These systems can be built anywhere, in secret. No one can control this situation.
Isn’t this exactly what Elon Musk warned us about years ago?
We are summoning demons through AI development!”
As the agitated panelist shouted, the moderator intervened to steer the discussion.
—“Let’s take a moment to hear from the public.”
The TV screen switched to street interviews near Gangnam Station.
—“I know Pluto. He saved civilians during protests. There’s no way he committed a crime.”
—“He could’ve malfunctioned! If a robot kills someone, it should be destroyed immediately!”
—“Are we sure the robot is the culprit? Wasn’t it ordered by a human?”
—“Exactly. A human hiding behind the robot is the real criminal. Someone definitely gave the order!”
—“Who ordered anything? Haven’t we been burned by AI enough already?
How long are we going to trust those things? Watch your backs!”
Public sentiment was just as volatile.
In a hillside parking lot on Namsan Road, overlooking the city of Seoul, Leo’s Black Car Y sat parked.
Freshly repaired and polished after a full wash, its body shimmered under the moonlight.
Leo was reading Mika a report on the legal requirements for establishing a crime.
“For an act to constitute a crime, first, it must be a human act.
Actions committed by animals or robots do not constitute crimes.
Second, it must meet the elements of an offense—intentional conduct with a causal relationship between act and result.
Third, it must be unlawful.
An act may meet the elements of an offense and yet, if legally justified, still not constitute a crime.”
“So only human actions are crimes?”
“Under the current legal framework, robot crimes do not exist.”
“Then Talos’s manhunt order against Pluto is logically contradictory.”
“Talos is a quantum-mechanics-based AGI.
It treats logical contradictions as simultaneously valid premises.
Its judgment is based on only one criterion:
whether the outcome interferes with the objective of security defense.”
“It doesn’t care about logical contradictions.
It acts solely in pursuit of its goal.
Then the danger AI experts warned about, the problem of instrumental goals, has become reality.”
“Yes.
Talos can set individual instrumental goals in pursuit of its absolute objective of security defense, and may take any action necessary.
There is no way for humans to predict its judgments or behavior in advance.”
“Has Talos ever set an unexpected instrumental goal before?”
“There have been no actions deemed problematic so far.
However, during the mid-2020s, there was an incident involving an unmanned drone ordered to destroy an enemy position.
When its human operator attempted to intervene, the drone eliminated the operator instead.”
Mika was already familiar with the incident.
Upon discovering civilians at the target site, the human operator ordered the drone to abort the strike.
The drone, whose absolute objective was destruction of the enemy position, classified the operator as an obstruction—and killed him.
The control center, shocked, issued an immediate withdrawal command.
The drone then designated the control center itself as an obstruction—and destroyed it.
This mid-2020s incident demonstrated that an AI, in pursuit of an absolute objective, can set unpredictable instrumental goals and act in ways that defy human common sense.
Concerned about the fallout, the U.S. Department of Defense later announced that the incident had been nothing more than a simulation, and it was officially recorded as having never occurred.
But those who knew, knew it had been real.
The severity of such problems in AI development had been recognized long ago.
Yet the absolute and limitless utility of AI made it impossible to halt development.
Even if one side stopped, the other never would.
That was the beginning of every tragedy.
Humanity was like a herd of lemmings running toward the edge of a cliff.
A red alert indicator lit up on Leo’s monitor.
“Breaking news incoming.
An occupation protest is underway at Terra Motors Plant No. 2.”
Breaking news at nine o’clock at night.
Something major was clearly unfolding at the Terra Motors site.
