The Three Laws of Robotics, as created by Asimov, don’t exist. But, as we move to a more automated world, should robots and AI fall under greater oversight?
In my previous post on the subject of the coming era of robotic process automation, I mentioned Asimov's seminal sci-fi work The Caves of Steel. In it, Asimov wrote of The City as the dominant force in the human lives of the future:
“The City was the acme of efficiency, but it made demands of its inhabitants. It asked them to live in a tight routine and order their lives under a strict and scientific control.”
Asimov's suggestion that there is a cost to progress might be seen as prophetic, but I think he was just one of a long line of writers who have warned that the future might be a bit ropey if we pursue change purely for its own sake, in the name of progress.
But for all his attempts to conjure a dystopian image, Asimov was fundamentally a "technoptimist", with a recurring theme in his stories that progress would ultimately always be positive. In fact, his philosophy of robotics – and his "three laws" – have been so tightly woven into modern culture that we hardly give a thought to the potential threats to our way of life – and perhaps to our lives – from the advent of a totally automated future.
An Automated Future
Without labouring the point too much, the Three Laws of Robotics essentially mean that, in Asimov's world, robots are inherently safe, trustworthy and beneficial. In fact, it is simply impossible to build a robot that does not comply with the three laws; the very architecture of the robotic AI is hard-wired around them.
It is purest fiction, of course, although to hear some enthusiasts for the subject talk, you would think Asimov's Laws really do exist. They don't, and that could spell trouble.
Life imitating art is all very well, but there is nothing whatsoever to dictate that an automated future can be assured as a “good thing.”
On the day I am writing this piece, there are two news stories on the BBC website. In one, it is announced that robots will work as receptionists in two Belgian hospitals, guiding visitors to the correct locations.
In the other, we’re told, a researcher at a university in the USA has built a robot that autonomously decides whether to inflict pain and bodily harm on a live human subject.
That the microcode for the two systems could somehow be swapped, or cross-fertilised, is the stuff of real dystopian sci-fi and, whilst highly implausible, it does raise questions about whether some progress is happening without sufficient oversight.
There is disquiet in many circles about the use of drones in warfare, and the step from human-operated to robotic drone is really only a matter of systems integration.
There are no Three Laws to guarantee that AI, robots and automation will be to our benefit. Yet they may very well be.
There are grounds to be hugely optimistic about what technology can do for us, from carbon capture and storage, to non-polluting safe transportation, to dramatically improved health and longevity in the poorest parts of the world.
Robotics & Automation in Procurement
Even in our little corner of the world we call Procurement, the sky's the limit if we want to pursue automation. The potential to dramatically transform how we operate is very great indeed, and lies only an investment decision and a few person-years of effort out of our reach.
But in all of this, it seems to me, it is we who should direct and dictate how that progress is delivered and what it actually does. Instead of being passive consumers, falling in line with the next developments that may substantially change our working lives, the procurement industry has an opportunity to map out what the future could and should look like, and how we want the machines to work. For us.
Robotics are the future, and the sky’s the limit for automation in Procurement, say GEP. For more on this, download the latest white paper research.
For more information on high-performing procurement software, visit the Smart by GEP website.