Some time ago, professional YouTube explainer of things CGP Grey recommended a book in his latest Q&A, and it sounded like exactly the kind of thing I like to read. Superintelligence: Paths, Dangers, Strategies is author Nick Bostrom's addition to the growing body of work making the case that we may not be ready for the consequences of artificial intelligence (AI). A somewhat heavy read, the book covers topics including what actually counts as an AI, the idea of an "intelligence explosion" akin to the agricultural or industrial revolutions, and the different disciplines that may eventually produce a digital mind comparable to or greater than our own. Ultimately, though, it's about whether we as a society and species are prepared for, or even cognizant of, the potential ramifications of such an intelligence. Before we talk about his conclusions, I think it's important to address his leading question: why should we be worried about this right now?
Futurists have long claimed that a generalized digital intelligence is "right around the corner", but in practice it has proven to be far beyond our grasp time and time again. So what's different about this point in time that makes the issue urgent? For his part, Bostrom approaches the question with a measured response. Rather than suggesting we're only a couple of decades away, his conclusion is that our arrival at this technology is more or less inevitable. From that standpoint, it follows that if we know it's overwhelmingly likely to come at some point in the future, we should make every attempt to prepare for it. I understand that Bostrom is trying to shy away from hyperbole here, but could it be that we're a bit closer than he gives us credit for?

Oh, would you look at that. It’s already the 7th of The Future!
Of particular interest to us in assessing how close general AI might be, I think, is the topic of evolutionary algorithms. By the way, from here on out, note that this is complicated material that is definitely outside my field of expertise. I've been reading up on some of the computer theory for the last eight years or so, but before you repeat what you read here to friends, I highly encourage you to read any linked material or do outside research for yourself. This is basically going to be a sort of "crash course" version. With that out of the way: evolutionary algorithms are pretty much what it says on the tin. You start with a model population that has measurable traits, then apply a "fitness function" to that population, which selects the fittest individuals for the next round of the algorithm. The idea is that, given a population of virtual minds, we could apply a fitness function for intelligence, much in the same way natural evolution seems to have arrived at intelligence in us. In fact, because this is artificial selection with a goal in mind, it stands to reason that intelligence developed this way would emerge much faster than it did in nature.
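Since this is the crash-course version, here's roughly what that select-and-mutate loop looks like in practice. Below is a minimal Python sketch I put together purely for illustration; the "traits" are just bits and the fitness function simply counts them, which is obviously nothing like measuring intelligence, but the selection mechanic is the same.

```python
import random

POP_SIZE = 50        # individuals per generation
GENOME_LEN = 20      # measurable "traits" per individual (bits, for simplicity)
GENERATIONS = 100
MUTATION_RATE = 0.02 # chance that any single trait flips during reproduction

def fitness(genome):
    # Toy fitness function: just count the 1-bits. A real application
    # would measure something meaningful about the individual.
    return sum(genome)

def mutate(genome):
    # Copy a parent, flipping each trait with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Start with a completely random population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the fittest half survives to the next round.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Reproduction: survivors produce mutated offspring to refill the pool.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("Best fitness after evolving:", max(fitness(g) for g in population))
```

Run it a few times and the population reliably climbs toward the maximum score, which is artificial selection doing in a hundred generations what random drift would take far longer to stumble into.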
Although, when the bar is set somewhere in the millions-of-years range, "faster than nature" is an almost meaningless metric. So what's the holdup? Well, such an algorithm presents us with what the author calls a "combinatorial explosion". Think of it this way: say I want to plan a road trip. There are a few cities I want to hit, and I want to know the fastest way to hit all of them. Also, I don't ever want to backtrack on a road I've already driven; who wants to see the same scenery twice in one go, right? This means that to find the fastest route, I have to look at every possible order in which I could visit the cities. For just three cities, this isn't too bad: there are only 6 different orders. But what happens when we scale this up to 10 cities? It jumps to an astounding 3,628,800 potential paths, and with every city you add, it gets dramatically worse. This problem has a name, the Traveling Salesman Problem, and I find it's one of the easiest ways to explain problems whose cost explodes combinatorially with input size. YouTube user poprhythm has an excellent video on the subject here if you have two minutes to spare. This is the same type of scaling we see with evolutionary algorithms, and finding a way to get normal computers to solve such problems efficiently is an expensive open question in computer science.

With 40 years of dedication, a formal proof that P=NP, and a little elbow grease, you too can be a rich nerd!
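To make that explosion concrete, here's a quick brute-force version of my road-trip problem in Python. The cities and distances are made up for the example; the real point is the factorial count at the bottom, which is why brute force stops being an option almost immediately.

```python
import itertools
import math

# Hypothetical distance table for four cities (distances are made up).
cities = ["A", "B", "C", "D"]
dist = {
    ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
    ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30,
}

def leg(a, b):
    # The table is symmetric, so look the pair up in either order.
    return dist.get((a, b)) or dist[(b, a)]

def trip_length(order):
    # Total distance driving the cities in this order, never backtracking
    # (and no return trip, to match the road-trip framing above).
    return sum(leg(a, b) for a, b in zip(order, order[1:]))

# Brute force: examine every possible ordering and keep the shortest.
best = min(itertools.permutations(cities), key=trip_length)
print("Best route:", " -> ".join(best), "| total distance:", trip_length(best))

# The combinatorial explosion: the number of orderings grows as n-factorial.
for n in (3, 10, 20):
    print(f"{n} cities -> {math.factorial(n):,} orderings to check")
```

At 20 cities you're already past two quintillion orderings, which is why real solvers lean on heuristics and approximations rather than checking everything.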
Note, however, that I said normal computers there, and not just computers. That's because, in the last few years, we've finally had some public visibility into the progress of quantum computers. Now, I know that as soon as the word "quantum" comes up it sounds like I'm trying to sell you some new-age, pseudoscientific medication, but quantum computers aren't magic. In fact, for general use, they're not significantly better off than the classical computers we're used to. However, the folks over at D-Wave have been hard at work engineering a quantum computer specifically for combinatorial problems like the Traveling Salesman. How much of an improvement are we talking about? According to this paper published by researchers from D-Wave and Google, their machine solves such a problem about 10^8 times faster than a regular computer simulating the same physical processes the quantum computer uses.

Instead of tackling the problem the way a person might approach an optimization puzzle, quantum computers use a process called annealing, which exploits quantum tunneling and some other quirks of quantum physics. I have to admit this is where my understanding of the details breaks down a bit, but basically the problem gets transformed into an energy landscape, with each possible solution represented as an energy level on that landscape. The idea is that once the computation settles into a local minimum, there is a fair chance that some of the entangled bits will flip simultaneously, tunneling through to the absolute minimum, or ground state. That ground state can then be translated back into the answer that was being sought in the first place. Sorry, I know that's technical, and people smarter than I am will probably be counting the million ways I'm wrong, but the takeaway is that annealing lets the machine skip large chunks of computation that would otherwise scale with the size of the problem.
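I obviously can't demo D-Wave's hardware here, but the classical cousin of quantum annealing, simulated annealing, is easy to sketch and captures the same energy-landscape intuition: wander the landscape, and occasionally accept a "worse" move so you can escape a local minimum instead of getting stuck in it. The landscape function below is invented purely for illustration.

```python
import math
import random

def energy(x):
    # A made-up one-dimensional energy landscape with several local
    # minima; the deepest valley (the ground state) sits near x = -0.5.
    return x ** 2 + 3 * math.sin(3 * x)

x = random.uniform(-4, 4)  # start at a random candidate solution
temperature = 5.0

while temperature > 0.001:
    candidate = x + random.uniform(-0.5, 0.5)  # try a small random move
    delta = energy(candidate) - energy(x)
    # Downhill moves are always accepted. Uphill moves are accepted with
    # a probability that shrinks as the system cools; this is the
    # classical stand-in for tunneling out of a local minimum.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.99  # gradually cool the system

print(f"Settled at x = {x:.2f} with energy {energy(x):.2f}")
```

As I understand it, the quantum version replaces that probabilistic hill-climb with actual tunneling through the barriers between valleys, which is where the hoped-for speedup on problems like the Traveling Salesman comes from.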

Or…maybe not. Good effort, though!
The paper linked above seems to indicate that the improved scaling hasn't actually been achieved yet, but they do have a roadmap for future iterations of the technology. So, coming back to artificial intelligence and how worried we should be about its arrival, what does this tell us? In short, it suggests the author may actually have been too reserved in his predictions and warnings. To be sure, we're still not likely to see Skynet in the next 20 years, but the acceleration of quantum computing could be just the solution to the main bottleneck in one of the avenues toward thinking machines. I think that's something we should keep in mind, not just going into the second part of this review, but also as we start to encounter machines in more and more aspects of our lives. I consider myself a futurist and a transhumanist, but bumbling our way into the future without looking ahead could have dire consequences, and those consequences are what we'll look at next week when we return to Nick Bostrom's work. If you'd like to evaluate the book for yourself, or if you're smarter than I am and want to let me know how wildly wrong I am about its contents (entirely possible), you can get it from Amazon or see if your local bookstore has it. Computer science books tend to be fairly niche, though, so online may be your best bet. See you guys in part 2!