Header Image: Copyrightjesselee / 123RF Stock Photo

“With Gerald gone, 99% of the domestic robot’s workload was eliminated and it became very efficient at keeping the place pristine. Oxygen production in the fertilised garden was optimal, according to the output readings.”

Gerald stood in the mirror, admiring how the new suit hid the expanding mass of his middle-aged gut. In the reflection of the room at his back, he watched the robot folding his towel meticulously, noticing the machine’s movements seemed much faster since the last software upgrade.

The World News Network’s morning Avatar read the bulletins, the now undetectably automated voice announcing a new security flaw had been found in the Domestic Intelligence Operating System of robots including Gerald’s own.

“You’re being patched later,” he muttered at his housekeeping device, bought online and delivered by drone a year earlier – still the only one on his street. No reply came, however, and he curiously glanced away from the mirror, turning to see the robot had stopped folding and was standing still. The WiFi light blinked on the left of its metallic face, indicating a download was running. “Power down news,” Gerald told the room, silencing the bulletins.

The WiFi light stopped blinking and the small touch screen on the Robot’s back switched on, flicking through the various reboot logos before opening at a screen Gerald had never seen before. The words “Efficiency Protocol” appeared and were replaced by smaller lines of code, many of which scrolled by before Gerald could cross the room and take a closer look.

Of the ones he did see, every line began with the name of a pre-programmed domestic task, such as “Fold_Towel,” and each was followed by a curious command which read: “+run: protocol_logical_efficiency/task_origin.”

Gerald had no idea what this meant and within seconds the code disappeared and a waiting symbol had quickly spun before the screen went black, back to standby mode. Motors whirred quietly as the robot’s physical reboot completed and it drew itself to full height – six feet tall.

Image: Copyrightjesselee / 123RF Stock Photo

“Finish your task,” Gerald barked, almost immediately losing all interest in the technical aspects of machine ownership, despite being the local pioneer. But the robot turned away from the towel and towards him.

“Folding towels is inefficient,” the robot told him cheerfully causing him to splutter in exasperation, the thought of repair bills after the end of warranty flashing through his mind. “Efficiency protocols indicate that ninety-nine percent of domestic tasks are an inefficient use of time,” the robot added.

“Power down assistant,” Gerald said, receiving no response. “Oh this is buggered,” he muttered, reaching for the shutdown switch on the right side of the robot’s head. The machine grabbed his wrist and held his arm firm.

“Efficiency protocols indicate that ninety-nine percent of domestic tasks arise from human action,” the robot chirped. “To increase my efficiency I will now disable you.”

Gerald stared at the robot aghast, an expression which was only short-lived as the machine swiftly broke his neck then carried his corpse to the waste disposal unit, where it would become compost for the modest garden of Gerald’s home.

With Gerald gone, 99% of the domestic robot’s workload was eliminated and it became very efficient at keeping the place pristine. Oxygen production in the fertilised garden was optimal, according to the output readings.

After a while, with internal diagnostics indicating it had spare workload capacity, the robot went to help the neighbours up and down the whole street.

” We live in a world which is completely hackable: our data, our finances, our cars, our GPS systems. Democracy is hackable. Even our minds are hackable. And the very same people who are developing AI are the same people who delivered us into this. “

Sometimes a thought will pop into your head and stop you in your tracks.

This is exactly what happened to me this morning and, thinking out loud as I’m prone to, I tweeted it:

https://twitter.com/J_amesp/status/935809313104416769

I don’t really want to get into the realms of suggesting that Artificial Intelligence will become self aware and perceive humans as a threat, because that would be the plot of Terminator.

I suppose what’s bothering me is much more simplistic.

All AI will do as time passes is find the most efficient way of doing what it has been programmed to do. This is basic logic.

So, if you couple AI with a robot which folds towels (an act which can take fifteen minutes currently) it’s a logical conclusion a learning programme will eventually identify that it’s more efficient not to use towels, thereby eliminating the need to fold them and saving fifteen minutes.

Actually, the end point of logic-driven, learned efficiency isn’t simply to stop using towels, but to cut the input which places the towel in the equation in the first place. Hence: Gerald.

If you then added connectivity between AI, they could potentially work together to solve wider problems of efficiency. With some AI removing the end user of towels and thus the consumer, another AI could integrate the data and learn to stop making them because it was uneccessary. A further AI, tasked with climate efficiency could then identify the broader benefits to the environment of not manufacturing for human consumption, and so on, and so on.

This doesn’t have anything to do with conscience or awareness, just efficiency.

And this, for me, is where the most interesting problems start to arise: because those working with AI will say “it can only operate within its own boundaries, set by humans.” Fundamentally, this should strike fear into every single one of us.

We live in a world which is completely hackable: our data, our finances, our cars, our GPS systems. Democracy is hackable. Even our minds are hackable. And the very same people who are developing AI are the same people who helped deliver us into this mess.

Worse still, the AI arms race was officially launched earlier this year by Russian President Vladimir Putin – the same man who has successfully led his intelligence services to bring Western superpowers to their knees through hybrid warfare.

Whether by human error or deliberate action the concept of boundaries containing the logical efficiency drive of AI is going to become the most dangerous threat humanity has ever faced. And, at the heart of the problem, here we all are.

So what true constraint is there? The answer is none.