Stephen Hawking Warns Us All to Think Twice Before Advancing AI Technology


If the future of AI technology gives physicist Stephen Hawking pause, maybe it should give us pause, too.

Or at least, you know, make us think about all the paths it could take first.

Hawking, along with physicists Max Tegmark and Frank Wilczek and computer scientist Stuart Russell, wrote in The Independent that AI could absolutely change the world. The question is whether it would eventually get to a point where, you know, there are Terminators everywhere.

Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a “singularity” and Johnny Depp’s movie character calls “transcendence”.

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here – we’ll leave the lights on”? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

[via io9]
