Concerns about 'evil' machines have been around for about as long as the science fiction genre itself, but these fears have taken on added urgency in recent years as machines that can think and act have become increasingly commonplace.

Yet there are no inherently 'evil' machines. Instead, the conclusions drawn (and actions taken) by AI-powered systems and devices derive from their 'worldview,' which in turn reflects the intentions of those who build the AI algorithms.

As recent MIT research shows, that worldview is shaped in large part by the data an algorithm or machine is fed. In the MIT study, an algorithm fed death-related images (data) eventually began interpreting every image as death-related, even random Rorschach inkblots. In other words, if you want to build a killer robot, you train it to think and act like a killer robot, and if you want a librarian robot, you teach it the Dewey Decimal System.
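
To make that point concrete, here is a minimal sketch of the idea, not the MIT experiment itself: a toy text classifier trained on a deliberately skewed, made-up set of captions ends up labeling even a neutral, inkblot-like description as morbid, simply because that is nearly all it has ever seen. The captions and labels below are hypothetical and chosen purely for illustration.

```python
# A minimal sketch of how skewed training data shapes a model's "worldview":
# a toy text classifier trained almost entirely on morbid captions labels
# even an ambiguous description as morbid. Data here is purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical, deliberately skewed training data: mostly "morbid" examples.
captions = [
    "a man falls to his death",          # morbid
    "a fatal car crash on the highway",  # morbid
    "a body lies in an open grave",      # morbid
    "a person is struck by lightning",   # morbid
    "a child holds a red balloon",       # neutral (the lone counterexample)
]
labels = ["morbid", "morbid", "morbid", "morbid", "neutral"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(captions, labels)

# An ambiguous, inkblot-like description: the skewed prior pushes the model
# toward the "morbid" label, because that is nearly all it has ever seen.
print(model.predict(["a dark shape against a grey background"]))  # -> ['morbid']
```

The same mechanism scales up: feed a model a one-sided diet of examples and its predictions inherit that bias, whatever the architecture.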

But how can you stop humans from 'training' algorithms, machines, and robots to do bad things? It's no easier than stopping hackers from using networks to spread viruses, hold computers for ransom, or steal data, money, or intellectual property.

The truth is you can't, at least not completely. But you can attempt to highlight the problem in order to mitigate it through greater awareness. That's what people like Elon Musk are trying to do. The billionaire co-founder of Tesla and SpaceX has been expressing concerns since 2014 that technologists are running the risk of creating AI-enhanced machines that will destroy mankind. Microsoft co-founder Bill Gates and the late scientist Stephen Hawking also have raised alarms about the potential dangers of AI.

Musk, though, has gone a step further, co-founding OpenAI, a non-profit research company dedicated to building safe artificial general intelligence (AGI). It's a collaborative effort intended to steer the AI research community toward ethical AI.

Unfortunately, that doesn't mean everyone is going to play along. And beyond the problem of bad actors, there's another issue: how do we arrive at a mutually agreed-upon definition of what constitutes ethical AI? Over at Big Think, contributor Scotty Hendricks explores the 'minefield' of teaching ethics to AI.

'As artificial intelligence gets smarter and our reliance on technology becomes more pronounced the need for a computer ethics becomes more pressing,' Hendricks writes. 'If we can't agree on how humans should act, how will we ever decide on how an intelligent machine should function?'

Good question. And we'll need an answer sooner rather than later, as enterprises, academic researchers, and others continue to develop machines and programs that can think and act. If we're lucky, we'll manage to build smart machines that won't destroy us. And if we're really lucky, we'll build smart machines that will prevent us from destroying ourselves.
