Archive | April, 2015

On artificial superintelligence

22 Apr

I’ve seen a lot of discussion about machine “superintelligence” lately, and participated in some. Is superintelligence a threat to mankind? Personally, I’m not worried.

It’s a mistake to think that human intelligence is general enough to be evaluated in terms of logical computation. We tend to think that our view of the world is the only correct one (because from our perspective, it is), and draw the flawed conclusion that any being with enough computational power is destined to arrive at the same way of regarding existence. But the brain is an organ evolved to aid survival and procreation given our place in our version of reality, not a general computation machine. Estimating the number of computations per second (“cps”) a human brain is capable of tells us only what we could theoretically get out of a human brain wired for computer work. That’s not a good measure of brain power, because computer work is not what brains are meant to do. Of all organisms with a brain, humans are the only ones that even try to use it for logical computation. Not surprisingly, brains are quite bad at this, and computers surpassed our actual (if not theoretical) computing capability some time ago.

So you want to create a being that operates like a human? Fine, we know how to do that: have children. You can program them (i.e., raise them) to become more intelligent than you – an important factor in the success of the human species. You want a computer program to function like the nervous system of a human being? Well, computer simulation of a brain should in principle be possible, given that you first crack the hardware operation of the nervous system, but it’s not clear how to program the simulated brain to do anything useful, and counting “cps” doesn’t tell us how difficult that is. It should also be noted that some problems that are easily transferred to a computer model remain practically, or even theoretically, infeasible to compute. Computationally infeasible problems appear in the simulation of some apparently quite simple and well-defined physical processes, and I would be less than surprised if simulating the brain is at least as difficult. Finally, even if you manage to create and program a simulated human brain with huge computational power, it’s not clear that you get superintelligence. Why would an artificial brain with massive computational power be more capable than a human with access to a similarly powerful ordinary computer (plus programming skills)? Would a machine with more simulated neurons than the actual neurons in a human brain necessarily be a superb thinker? Or would the huge simulated brain crumble under neuropsychological scalability problems, winding up psychotic and unusable? The answers to those questions are far from clear, and the arguments for the simulation achieving “superintelligence” are weak at best.
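To give a feel for the kind of blow-up that makes some simulations infeasible, here is a deliberately toy sketch of my own (it is not a model of the brain, and the fully coupled binary units are a hypothetical system chosen only for illustration): exactly tracking the joint state of n mutually interacting two-state units requires 2^n numbers, which outgrows any conceivable memory long before n approaches biological scales.

```python
# Toy illustration (not a brain model): exact simulation of n fully
# coupled two-state units must track 2**n joint configurations.
# Memory alone rules this out long before n reaches biological scales.

def joint_states(n: int) -> int:
    """Number of joint configurations of n binary units."""
    return 2 ** n

def memory_bytes(n: int, bytes_per_state: int = 8) -> int:
    """Bytes needed to store one 8-byte number per joint configuration."""
    return joint_states(n) * bytes_per_state

# A few hundred coupled units already exceed any physically possible memory:
for n in (10, 50, 100, 300):
    print(n, joint_states(n))
```

The point is not that brains work this way, only that problems which are trivial to state as computer models can still be hopeless to compute exactly.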

According to one suggested definition, intelligence involves “the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience”. That’s far too vague a definition to use for building and testing a computer program. You could design tests that purport to measure those traits, and in principle you can always code something that passes the tests, because, as John von Neumann put it, “If you tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!” But for every test you make, you collapse the generality into something very specific: the precise capability that the test measures. Hence you create an artificial narrow intelligence, never a general intelligence, and, returning to my original point: human intelligence is not general, only human. It is a finite set of skills that evolution has taught us, pointless to measure by any parameter other than the capability to imitate human behavior, as Alan Turing realized in 1950 when he came up with the game later known as the Turing test.
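To make the collapse of generality concrete, here is a deliberately silly sketch (the test questions and names are mine, purely illustrative): once a “reasoning test” is pinned down to specific items, a bare lookup table passes it perfectly while understanding nothing.

```python
# Hypothetical, deliberately trivial example: any fully specified test
# can be passed by hard-coding its answers. A perfect score measures
# only the narrow capability the test probes, not general intelligence.

REASONING_TEST = {  # a precisely specified "intelligence test"
    "What is 2 + 2?": "4",
    "If all men are mortal and Socrates is a man, is Socrates mortal?": "yes",
}

def narrow_ai(question: str) -> str:
    """Passes the test above by lookup; generalizes to nothing else."""
    return REASONING_TEST.get(question, "I have no idea.")

def passes_test(agent) -> bool:
    """True if the agent answers every test item correctly."""
    return all(agent(q) == a for q, a in REASONING_TEST.items())

print(passes_test(narrow_ai))  # prints True: the lookup table scores 100%
```

The moment the test is made precise enough to implement, it stops measuring the vague general traits it was meant to capture.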

Building a machine that matches every skill we include in the concept of human intelligence is not necessarily impossible, but it’s unlikely ever to happen. First, it would be difficult: as with a brain simulation, some parts would plausibly involve computationally hard problems and therefore be infeasible. Second, there’s not much point. Computers are much more useful for computational tasks (at which humans are less skilled) than for trying to be human. I don’t think an artificial person is significantly closer now than it was in 1927, when the writers of Metropolis came up with the robot Maria. In those days, mechanics was perceived as the solution; now it’s computers. But neither technology is particularly human-like.

There is certainly much to be both concerned and hopeful about in future technological development, but I don’t believe we need to include machine superintelligence.