What is AI superintelligence? Could it destroy humanity? And is it really almost here?
Flora Salim, Professor, School of Computer Science and Engineering, inaugural Cisco Chair of Digital Transport & AI, UNSW Sydney.

Maxim Berg / Unsplash

In 2014, the British philosopher Nick Bostrom published a book about the future of artificial intelligence (AI) with the ominous title Superintelligence: Paths, Dangers, Strategies. It proved highly influential in promoting the idea that advanced AI systems – “superintelligences” more capable than humans – might one day take over the world and destroy humanity.

A decade later, OpenAI boss Sam Altman says superintelligence may be only “a few thousand days” away. A year ago, Altman’s OpenAI co-founder Ilya Sutskever set up a team within the company to focus on “safe superintelligence”, but he and his team have now raised a billion dollars to create a startup of their own to pursue this goal.

What exactly are they talking about? Broadly speaking, superintelligence is AI that is more intelligent than humans. But unpacking what that might mean in practice can get a bit tricky.

Different kinds of AI

In my view, the most useful way to think about different levels and kinds of intelligence in AI was developed by US computer scientist Meredith Ringel Morris and her colleagues at Google.

Their framework lists six levels of AI performance: no AI, emerging, competent, expert, virtuoso and superhuman. It also makes an important distinction between narrow systems, which can carry out only a small range of tasks, and more general systems, which can handle a wide range. (A small code sketch of these two axes appears at the end of this section.)

A narrow, no-AI system is something like a calculator: it carries out various mathematical tasks according to a set of explicitly programmed rules.

There are already plenty of very successful narrow AI systems. Morris gives Deep Blue, the chess program that famously defeated world champion Garry Kasparov back in 1997, as an example of a virtuoso-level narrow AI system.

Some narrow systems even have superhuman capabilities. One […]
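To make the framework’s two axes concrete, here is a minimal Python sketch. It is my own illustration, not code from Morris and colleagues, and the names (Performance, Breadth, RatedSystem) are invented for this example. It simply encodes the six performance levels, the narrow/general distinction, and the two example systems discussed above.

```python
from dataclasses import dataclass
from enum import Enum

class Performance(Enum):
    # The six performance levels in the Morris et al. framework.
    NO_AI = 0
    EMERGING = 1
    COMPETENT = 2
    EXPERT = 3
    VIRTUOSO = 4
    SUPERHUMAN = 5

class Breadth(Enum):
    # The framework's other axis: how wide a range of tasks a system handles.
    NARROW = "narrow"
    GENERAL = "general"

@dataclass
class RatedSystem:
    name: str
    performance: Performance
    breadth: Breadth

# Classifications taken from the examples in this article.
examples = [
    RatedSystem("pocket calculator", Performance.NO_AI, Breadth.NARROW),
    RatedSystem("Deep Blue (1997)", Performance.VIRTUOSO, Breadth.NARROW),
]

for system in examples:
    print(f"{system.name}: {system.breadth.value}, {system.performance.name}")
```

Running the sketch prints each example system’s position on both axes, showing how a system can sit at the very top of the performance scale while remaining narrow.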