When I was a little girl, my greatest aspiration was to live long enough to see manned interplanetary flights. Of course, I was looking forward to other great things as well.
Are the Three Laws of Robotics About to Come True?
- Videophones – Check
- Pocket size TV sets with cartoons in them – Check
- Worldwide information network – Check
- Instant mail – Check
- Magic cameras, displaying snaps the moment they are taken – Check
- Robots helping around the house – Check
- Sentient robots helping around the house – No.
That’s a shame!
Humanlike robots and replicants are all over sci-fi movies and TV shows, yet we're still unlikely to find them any time soon in the electronics store around the corner. Virtual AI, however, is already here – just ask Siri or Cortana. Of course, they can't be called sentient or even semi-sentient, but considering the rate at which robotics is developing, we're bound to face some ethical problems soon.
Right now, if you ask Siri "how to make an atomic bomb," it will give you a list of links to nuclear science or history websites. This is a built-in precaution the engineers have seen to, I guess, plus the limitations set by the Google search engine. But let's imagine that Siri had its own flexible, self-educating intellect. How would it react? Would it be pacifist or misanthropic? Would it take after its master or have its own principles and priorities? And if the latter, what might they be?
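The "built-in precaution" described above can be pictured as a simple deflection filter: dangerous queries get reference links instead of a direct answer. This is a toy sketch under my own assumptions – the function name, the keyword list, and the canned responses are all hypothetical, not Apple's actual implementation.

```python
# Hypothetical sketch of a query-deflection precaution.
# BLOCKED_TOPICS and all strings are illustrative assumptions.
BLOCKED_TOPICS = {"atomic bomb", "nuclear weapon"}

def answer(query: str) -> str:
    """Answer directly, or deflect blocked topics to reference links."""
    lowered = query.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        # Deflect: offer background reading instead of instructions.
        return "Here are some links about nuclear science and history."
    return f"Answering: {query}"

print(answer("How to make an atomic bomb"))
print(answer("What is the weather?"))
```

A real assistant would, of course, rely on far more than keyword matching, which is exactly why the question of a self-educating Siri is unsettling: the filter would have to reason, not just match strings.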
Interestingly enough, Siri's creators don't seem to worry about the future of an AI-powered society – but Microsoft does.
On September 28, 2016, Microsoft and IBM, together with Amazon, DeepMind/Google, and Facebook, announced that they were creating a non-profit organization to "advance public understanding of artificial intelligence technologies (AI) and formulate best practices on the challenges and opportunities within the field. Academics, non-profits, and specialists in policy and ethics will be invited to join."
And this brings us back to the sci-fi stories that fascinated me so much long ago – particularly, to Asimov's Three Laws of Robotics. Really, I wonder: why wouldn't the people at IBM or DeepMind just take the book off a shelf, or download it to their e-readers, and read:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given [sic] it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
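The three laws above form a strict priority ordering: each law yields to the ones before it. As a toy illustration – all names here are my own hypothetical inventions, and the whole point of the video linked at the end is that real safety cannot be reduced to such predicates – the ordering could be sketched as:

```python
# Toy encoding of Asimov's Three Laws as strictly ordered checks.
# Purely illustrative; the Action fields are hypothetical simplifications.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False     # would the action injure a human?
    is_human_order: bool = False  # was the action ordered by a human?
    endangers_robot: bool = False # does the action risk the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human (overrides everything).
    if action.harms_human:
        return False
    # Second Law: obey human orders (already filtered by the First Law).
    if action.is_human_order:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not action.endangers_robot
```

Even this caricature shows the difficulty: every hard question is hidden inside deciding what counts as "harm" or an "order," which no boolean flag can capture.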
When I first read the news about the Partnership on Artificial Intelligence I was so excited that I tried to apply the rules to the existing digital assistants.
Siri and Cortana cannot injure a human being right now. They can, however, deliver potentially harmful information.
Siri and Cortana, of course, obey all of the user's orders – not least because they are incapable of evaluating the moral aspect of an order or its outcome. They do have some built-in limitations, I believe.
Siri and Cortana are protected by Apple and Microsoft engineers: they can be switched off but cannot be deleted by an ordinary user. It takes advanced knowledge of your computer system to delete Cortana, and Siri cannot be removed at all.
In short, modern digital assistants do not follow any of the aforementioned laws. If we can't control assistants that merely imitate intelligent behavior, what shall we do with their progeny? And why has Apple, which owns the most sophisticated digital assistant, taken a back seat? Or is it involved in some undercover, super-secret 'wow' project?
Why the Laws of Robotics Don’t Work [Video]
Three or four laws to make robots and AI safe – it should be simple, right? Rob Miles explains why these simple laws are so complicated. Video by Computerphile, published on November 6, 2015.