Even back in 1942, there were dreamers about what the days of artificial intelligence would look like. Futurists like Isaac Asimov were considering the risks of new autonomous technologies. It was during that year that Asimov wrote a short story entitled “Runaround” in which he unveiled the three laws of robotics.
The key theme of these laws was that a robot could not, through action or inaction, allow harm to come to humans. Over the years, both philosophers and writers have examined these laws in myriad ways, exposing loopholes in the language and the challenges that arise in edge cases. Regardless, the principles seem like the sort of thing we’d want if robots walked among us. They should serve to enhance our lives.
If you’ve ever seen a video of Boston Dynamics’ robots, you understand why the three laws are needed, at least at an emotional level. Boston Dynamics makes all sorts of animal/human-like machines and they seem like something out of a science fiction movie where the robots are not benevolent servants but instead determined to be our overlords. The videos of those robots are evidence to support the need to get those laws right before Atlas walks among us.
But what about the hidden robots, the robots that exist only as lines of code buried on a web server in a cloud hosting facility and don’t look menacing? Should we also be giving thought to guiding principles of design for these engines that are fed our data and are allegedly supposed to make our user experience better?
It seems like a no-brainer. However, anyone can sign up for a cloud-based hosting account, which likely includes a machine learning starter kit. With a little skill and the right data, a journeyman data scientist can create technology that would have seemed magical twenty years ago. In the hands of a more talented operator, far more extraordinary possibilities exist. So what responsibility does each of these developers have to society before they unleash their machines upon us?
I suspect that the European Union is going to lead in this space, much as it did with privacy. I also suspect that the initial laws of robotics/AI are going to be more focused on disclosure than on compliance with behavioral norms. But this is the sort of thing that could get out of hand, not in the Skynet manner, but more in the way that Facebook struggled with privacy. I’ve written previously about whether business models based on personal data will survive. It seems the technology will always be two steps ahead of our understanding of how both it, and the humans who created it, will be using it.
I’m optimistic about the possibilities for AI to have an almost magical ability to improve many aspects of our lives. But as with privacy, I think we have to look ahead to the risk that such technology could have a negative impact. We need to be intentional about ensuring that the machines are learning to work to our benefit.
And if you want to learn more about personalization using behavioral data instead of personal information, check out our GuideBox technology.