The internet’s indifference to power

8 Nov

It’s excellent that Cory Doctorow understands how the internet and computers work (everyone should), but in his ardor for the internet as a revolutionary tool, he ignores what has grown increasingly difficult to ignore in recent years, stuck as he is in the perspective of “us” against evil states.

Many of us have lingered too long in wishful thinking about the internet as a subversive power for Good, for individuals, for the oppressed against those in power. The internet isn’t on “our” side. It’s neutral and indifferent, simply a tool for whoever is capable of exploiting it. Certainly, it can be used for the estimable goals that we saw in the beginning: allowing every human to receive and transmit information, to hear and to be heard. But it can also be used against individuals by those in power: for surveillance, and to shepherd us into acting as compliant subjects of capitalism. And it’s quite possible, which wasn’t so clear at first, for those in power to use their classical means of control to force unwanted elements off the net.

Also, importantly, the internet is used by collectives that were hardly visible, or didn’t even exist, before, both against “us” and against the state as we know it.

I expect that the political changes we see right now, still blurry and surrounded by an enigmatic mist, with no telling where they will lead, will all one day be traced back to the power of the suddenly emerging internet. Just as the developments leading to modern western society can be traced back to how the availability of energy suddenly exploded with the breakthrough of fossil fuels.

Mysterious machine decisions

12 Apr

I got to Will Knight’s MIT Technology Review article about the difficulty of understanding decisions made by artificial intelligence via a link to Monica Anderson’s sour comments on it, which slam the “reductionists at MIT”.

“When a human solves a problem, it would be preposterous to demand to know which neurons they activated in the process”, she exclaims. “Artificial intelligences will operate more like humans than like other machines. […] We need to treat [them] more like humans when it comes to issues of competence, reliability, and explainability. Read their resumes, ask for references, and test them for longer periods of time in many different situations.”

The article is interesting, and so are Anderson’s comments. I am thoroughly unsure what to think about this myself, but I’ll add a few tentative reflections:

I don’t have an issue with computer systems helping us out with things we can’t do ourselves, without explaining exactly how to do it. Letting an artificial neural network sniff out the onset of schizophrenia is like letting a dog sniff out a missing person in the woods. It has a skill we don’t, so we use its services, without worrying about exactly how it does what it does.

But letting computers make, or advise on, decisions that humans would otherwise make is a more delicate issue. When we do that, we project our expectations from human decision-making onto decision-makers with a completely different disposition. That’s especially problematic if we can’t extract the system’s arguments, because if we could, we might find that they are preposterous and immoral. Knight mentions research that has found AI image recognition systems susceptible to certain equivalents of optical illusion, which exploit low-level patterns the system searches for. There may be similar illusions in the case data of legal decisions.

Figure from Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, by A. Nguyen, J. Yosinski, and J. Clune.
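
To make the mechanism concrete, here is a minimal sketch of the gradient-based trick behind many such fooling examples, assuming PyTorch and some differentiable image classifier; the function name and step size are illustrative, not taken from Knight’s article or the paper above.

```python
# Sketch of an adversarial "optical illusion" for a neural network,
# assuming PyTorch and any differentiable image classifier ("model",
# "image", "label" are placeholders, not from the article).
import torch
import torch.nn.functional as F

def fooling_nudge(model, image, label, epsilon=0.01):
    """Shift an image a tiny step in the direction that most increases
    the classifier's loss. The change is invisible to a human eye, but
    it targets exactly the low-level patterns the network searches for,
    and is often enough to flip the prediction with high confidence."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()
```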

(On a tangent, humans are susceptible to analogous illusions, targeted at how our brains interpret information. Art is based on them. A creature that recognized objects in a completely different way might be mystified as to how we can interpret a simple line drawing as, say, a cat, or a smiling face.)

It’s important to realize that even if a computer’s judgements are correct in a statistical sense, we may not find them acceptable. Say, for instance, that it is possible to improve predictions of human behavior or preferences based on a person’s sex or skin color, and that the prediction is useful in deciding how to treat that person. Then using data about people’s sex and skin color could make a system more efficient. More people might be satisfied with how they are treated. But what about the treatment of the others, those who are less satisfied? We have names for that: discrimination, prejudice.
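
A toy experiment makes the tension concrete. The sketch below, assuming NumPy and scikit-learn and using entirely synthetic, illustrative data, trains the same classifier with and without a protected attribute. When the attribute genuinely correlates with the outcome, the blinded model typically scores a bit lower; that gap is the kind of overall performance a moral constraint asks us to give up.

```python
# Toy illustration: how a protected attribute can "help" a classifier,
# and what blinding the model costs. Assumes NumPy and scikit-learn;
# all data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, size=n)   # e.g. sex or skin color
skill = rng.normal(size=n)               # a legitimate feature
# The outcome correlates with both, so the protected attribute is
# statistically informative even though using it is discriminatory.
outcome = (skill + 0.8 * protected + rng.normal(size=n) > 0.5).astype(int)

X_full = np.column_stack([skill, protected])
X_blind = skill.reshape(-1, 1)

for name, X in [("with protected attribute", X_full),
                ("blinded", X_blind)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, outcome, random_state=0)
    acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy {acc:.3f}")
```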

We want to avoid such discrimination for moral reasons, at almost any price in overall performance. And in order to do so when constructing decision systems, we need some insight into how the decisions are made.

On artificial superintelligence

22 Apr

I’ve seen a lot of discussion about machine “superintelligence” lately, and participated in some. Is superintelligence a threat to mankind? Personally, I’m not worried.

It’s a mistake to think that human intelligence is general enough to be evaluated in terms of logical computation. We tend to think that our view of the world is the only correct one (because from our perspective, it is), and draw the flawed conclusion that any being with enough computational power is destined to arrive at the same way of regarding existence. But the brain is an organ evolved to benefit survival and procreation given our place in our version of reality, not a general computation machine. Estimating the number of computations per second (“cps”) a human brain is capable of tells us only what we could theoretically get out of a human brain wired for computer work. That’s not a good measure of brain power, because computer work is not what brains are supposed to do. Of all organisms with a brain, humans are the only ones that even try to use it for logical computation. Not surprisingly, brains are quite bad at it, and computers surpassed our actual (if not theoretical) computing capability long ago.

So you want to create a being that operates like a human? Fine, we know how to do that: have children. You can program them (i.e., raise them) to become more intelligent than you – an important factor in the success of the human species. You want a computer program to function like the nervous system of a human being? Well, computer simulation of a brain should in principle be possible, given that you first crack how the hardware of the nervous system operates, but it’s not clear how to program the simulated brain to do anything useful. Counting “cps” doesn’t tell us how difficult that is. Also, it should be noted that some problems that are easily transferred to a computer model are still practically, or even theoretically, infeasible in a computer setting. Computationally infeasible problems appear in the simulation of some apparently quite simple and well-defined physical processes, and I would be less than surprised if simulating the brain is at least as difficult. Finally, if you manage to create and program a simulated human brain with huge computational power, it’s not clear that you get superintelligence. Why would an artificial brain with massive computational power be more capable than a human with access to a similarly powerful ordinary computer (plus programming skills)? Would a machine with more simulated neurons than the actual neurons in a human brain necessarily be a superb thinker? Or would the simulated huge brain crumble under neuropsychological scalability problems, winding up psychotic and unusable? The answers to those questions are far from clear, and the arguments for the simulation achieving “superintelligence” are weak, at best.

According to one suggested definition, intelligence involves “the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience”. That’s far too vague a definition to use for building and testing a computer program. You could design tests that purport to measure those traits, and in principle you can always code something that passes the tests, because, as John von Neumann put it, “If you tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!” But for every test you make, you collapse the generality into something very specific: the precise capability that the test measures. Hence, you create an artificial narrow intelligence, never a general intelligence. And, returning to my original point: human intelligence is not general, only human. It is a finite set of skills that evolution has taught us, pointless to measure by any other parameter than the capability to imitate human behavior, as Alan Turing realized in 1950 when he came up with the game later known as the Turing test.

Building a machine that matches every skill we include in the concept of human intelligence is not necessarily impossible, but it’s unlikely ever to happen. First, it would be difficult. Just as with a brain simulation, some parts would plausibly involve computationally hard problems, and therefore be infeasible. Second, there’s not much point. Computers are much more useful for computational tasks (at which humans are less skilled) than for trying to be human. I don’t think an artificial person is significantly closer now than it was in 1927, when the writers of Metropolis came up with the robot Maria. In those days, mechanics was perceived as the solution. Now it’s computers. But neither technology is particularly human-like.

There is certainly much to be both concerned and hopeful about in future technological development, but I don’t believe we need to include machine superintelligence.

The truth about facts

15 Jul

It’s easy to fool yourself into thinking that if people just knew the facts properly, they would all switch to your view (because your view is the right one, obviously; otherwise you would have switched yourself), but opinions are more social than rational. The normal approach in politics is to fool yourself, by selective use of facts, into thinking that the opinion you already have is the correct one, as shown by results discussed in Ezra Klein’s text How Politics Makes Us Stupid. “‘What we believe about the facts,’ [Dan Kahan] writes, ‘tells us who we are.’ And the most important psychological imperative most of us have in a given day is protecting our idea of who we are, and our relationships with the people we trust and love.”

The conclusion goes much deeper than getting the public to accept scientific evidence. It means that presenting facts is not the way to make someone willing to change their position. A much more effective way is to make it less threatening for them to change their identity, or to allow them to change their views without changing their identity. Not a revolutionary insight, perhaps; some political strategists, like Tony Blair, have long used it successfully. But for me, Klein’s text gave the insight a depth it didn’t have before. (Everybody should read it, by the way, and draw their own conclusions.)

The path to wisdom lies in not succumbing to the threats posed by changing your views. Threats to your identity, your social comfort, and your livelihood.

When Johannes Kepler stole Tycho Brahe’s astronomical data after Brahe’s death, it was in the hope of confirming the theory to which he had given years, if not decades, of work. But what Kepler found was that the data didn’t quite fit. His reaction wasn’t, as I believe most people’s would have been, to try to explain the mismatches away, or to bury the data. He accepted the facts, threw his theory away, and started anew, which eventually led him to inventing gravity (yes, inventing it). This tenaciously honest rationality is my ideal, in science, politics, and otherwise. I’m afraid I don’t always live up to it, but then, I am only human, and Kepler is a legend.

Surveillance made slightly too simple

27 Nov

The Guardian posted an excellent animation called “The NSA and surveillance… made simple”, which I urge you to watch to put some things in perspective. However, like most of the coverage of the seedy business of government surveillance, it leaves out two important points:

  1. It’s not just about you being under surveillance, and how you feel about that. You probably have nothing to hide, since you are not that interesting, not doing anything that challenges those in power or the law as it currently stands (except possibly the copyright laws). But democracy depends on being constantly examined and challenged. Not very long ago, racial discrimination was the law in the western world’s largest democracy. Not much longer ago, the law against homosexual acts in one of the world’s oldest democracies may have driven the father of computer science to suicide. (Incidentally, he had worked for a predecessor of today’s spying organizations in the war against fascism.) It’s not always the bad guys that need to hide from the authorities. Sometimes it’s their innocent victims. Sometimes it’s people upon whom democracy depends. Reporters, whistleblowers, activists, members of oppressed minorities: they all need room to maneuver. Allowing the authorities to keep tabs on everyone takes democracy away. It’s not just about holding elections; remember, the German Democratic Republic held elections too.
  2. “It’s not just that our present governments shamelessly peep through our windows. They are building an infrastructure for tomorrow’s tyrants.” (Formulation from Beelzebjörn.) You may think that the officials of your government are all fair, just, and competent. For the most part, I agree with you, but some developments scare me. Practices implemented to block criminal content from the internet (instead of charging the criminals) will now be used to block “extremist narrative”. Publishing government secrets is labeled terrorism. And so on. Today’s activists may be tomorrow’s “terrorists”. Across Europe, political parties that reasonable people label fascist are gaining popularity and influence. You may not like the governments of tomorrow, and there won’t be much to stop them if basic democratic rights like privacy and free speech have already been shattered. Remember also that future authorities will have access to some of the records on us that our governments collect today.

Inflation, energy, and magic

18 Nov

On a bicycle tour of Rome, in front of Fontana dei Quattro Fiumi, the guide’s words put me on an interesting train of thought. “Do you see a pile of discs down to the right? Those are coins, symbolizing the first gold imported from America, which was used for building St. Peter’s Basilica and other parts of the Vatican city.” (There might be objections to this historical account, but never mind, I don’t think they dilute my reasoning too much.)

What value did the imported gold have to the construction of some of the most famous buildings in European history? It wasn’t building material. It didn’t contribute any labor per se, nor any knowledge about construction of monumental buildings. The gold wasn’t really used for the buildings – except possibly a small part for decorations – yet it drove the process of creating them.

When the gold was issued as payment to the contractors who built St. Peter’s Basilica, the amount of “money” in Rome increased, and the gold economy was, to some degree, inflated. Like entropy, inflation is the result of energy dispersing into pieces that cannot be put together again. But inflation in itself isn’t the driving force. The force comes from a displacement of value, a local tension that yearns to be relaxed. In the case of the American gold import, the tension lies in the amount of “money” owned by the church: a local amassment of a kind of potential energy, which drives a process of economic activity, just as the displacement of water causes streams to cascade down mountains and drive the turbines of hydropower plants.

The sun throws four million tons of mass-energy into space every second, a tiny fraction of which feeds what we know as life on the surface of a rock 150 million kilometers away. Solar energy is called renewable, but in the large scale of things, it isn’t. The sun shrinks. Mass and energy are smoothed out, and the universe cools.
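
As a back-of-the-envelope check of that figure, assuming only the standard value for the Sun’s luminosity: by \(E = mc^2\), the mass the Sun radiates away per second is

\[
\dot{m} = \frac{L_\odot}{c^2} \approx \frac{3.8 \times 10^{26}\,\mathrm{W}}{(3.0 \times 10^{8}\,\mathrm{m/s})^2} \approx 4.2 \times 10^{9}\,\mathrm{kg/s},
\]

which is indeed about four million metric tons every second.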

The source of economic energy is more mysterious. There is no shrinking fireball of money. Whether money is tied to some perceived value of a rare metal, or whether it’s created as debt when a bank registers a loan, it’s an entity that exists only in our minds. It’s something we agreed on. Made up.

Yet it drives the wheels that power society. It’s magic. Real magic.

Motivations for insight

24 Sep

You won’t be able to understand what the following two statements are about, because the context is missing. But never mind, I just want you to compare their structure. The original:

Note that these concepts are invariant under the hyperbolic group. To see this, note that P is in the minor region (arc) if and only if PX crosses AB.

And a version that paraphrases something I read somewhere:

I once asked the Lord why these concepts are invariant under the hyperbolic group. He said to me, “It’s because P is in the minor region (arc) if and only if PX crosses AB.”

To me, a contemporary scientismist in this respect, bringing “the Lord” into the argument weakens the validity of the statement. Even though the second version contains all the arguments of the first one, and more, I am tempted to stop listening when “the Lord” is brought up, and to throw the second statement away as rubbish.

But looking across history, I am probably in a tiny minority. I have come to think that the difference is less about valid reasoning, and more about a cultural difference in expressing what an insight is. The first version appeals to some intrinsic truth in the statement, which is found by contemplating it in the right way. The second version… does exactly the same thing, actually, with the difference that it’s formulated by and for someone to whom any truth emanates from “the Lord”. That doesn’t mean that there is anything wrong with the reasoning, only that the formulation conforms to a different world view.

I still think the first version is objectively better, because it encourages the reader to execute the reasoning herself, rather than trusting an authority. It’s more difficult to get away with an unreasonable statement if it’s judged from the perspective of science than that of faith. But someone without access to the concept of modern science may still have a working brain, reason correctly (albeit possibly with a different view of what correct reasoning is), and formulate profound truths.

I believe that there is a lot of knowledge out there, whose motivation needs to be reformulated in order to make sense to modern people. Not so much about geometry, but about how people should behave to be able to live in harmony with themselves and others. Rituals, moral ideals, communion, spirituality, these notions should not be automatically discarded just because their old motivations are dated, and can seem crazy to us. They can all be extremely valuable.

We should still question them of course. Evaluate them using the sharper tools of our modern science.

Technology or jobs

23 Jan

The assumption in political discourse that creation of new jobs is an indisputable goal irritates me. Especially when someone states that new technology is the way to achieve it. I have dedicated my professional life to technology, and in my view, the objective of technology is to reduce the need for human work. To get rid of jobs, not create them.

To get some perspective, look back to the time before the monumental improvement in human living conditions (in other words, our environment) that came with using energy from the bodies of organisms long dead, as opposed to the bodies of the living.[1] In those days, people (except for a privileged few) were pretty much constantly occupied with producing food. Unemployment for those able to work was not an issue. If that was the ideal state of society, we should abolish modern technology, not develop it.

But it wasn’t, of course. We don’t really want to work all the time.

Industrialism did create some jobs. Hard, dirty, and monotonous jobs, which we should be especially happy about getting rid of. But that’s a past phase in our part of the world, and eventually (I am still sufficiently stuck in Star Trek utopianism to believe this) it will be past across the globe. There may be a dent in the curve due to reduced use of fossil fuels, but I’ll believe it when I see it.

It’s not like there’s a shortage of rewarding uses for our lives. We can, for instance, increase the overall quality of life by serving each other knowledge, art, healthcare, or caffè latte. That takes some hard work, but why make an effort to create more work than necessary?

I suppose it has to do with the dominant religion of our time, and the supernatural being at its center: the economy. The economy, our priests tell us, must forever grow in order for society to function. And for this growth, we should all work, not primarily to increase the quality of life, but to generate interest on invested capital.

I can’t help feeling increasingly disturbed by the discrepancy between this faith and the reality of the world with which I am presented.

1. In case I was too obscure here: what I mean is the time before humanity started using fossil fuels for energy.

Times change

19 Nov

Not even 25 years had passed since World War II when I was born. A remarkably short period of time, considering that I would place 1945 and 1969 in different historical eras. The sense of a long time isn’t captured by the mere number of years. It’s the changes brought by those years that let us feel the time.

It’s an interesting exercise to compare this with the change over the last 25 years, since 1987. When I was a child, 1987 was the shining future, and there’s a part of me that still hasn’t let go of that image. But from today’s perspective, 1987 doesn’t seem much different from 1969. Until recently, both of those years could even feel like the same historical era as the present. But that has changed.

On the surface, the change from 1987 to the present may look like just more of the same technological development that made 1945 into 1969, even disappointingly little of it. But the important changes are in our mental perception of the world, which is also true when comparing 1945 to 1969. I think it’s reasonable to say that we have, at long last, entered a new historical era.

The most critical change is not the end of the Cold War (which seemed significant at the time) or any of the other shifts in world politics. It’s how we’ve adopted digital communication and, as an effect of that, no longer live in just physical Euclidean space. Life in Scandinavia, just to take one example, felt distinctly provincial in 1987. We were a considerable distance into nowhere. Now, we are living in a fully integrated part of everywhere.

I would much rather live in this era, with its boundless exchange of information and ideas (and even, to a large extent, commodities), than in any other that I know of. But still, I have to admit that if a science fiction writer in 1987 had projected the future we are actually living in, it would have been perceived as a rather bleak vision. More dystopian than utopian.

Luke Skywalker probably can’t code

17 Oct

The article by Ryan Britt about how society in Star Wars seems to have “slipped into a kind of highly functional illiteracy” is interesting, particularly if you catch the unspoken warning that this may be where we are heading as well.

Beyond the absence of passive written information in people’s lives, another major piece of culture seems to have been left entirely to the droids: knowledge of active information, software code. Citizens of the Star Wars universe make use of lots of advanced information technology, but you never see them relate to it in any creative way. It’s all about using fixed interfaces. Very advanced interfaces, certainly, including flawless speech recognition, holographic visuals, and so on. But nothing that taps into the true power of IT by controlling what the tools can do.

Who created the fantastic future replacements for books and iPads that the intergalactic counterpart of Apple Inc. churns out? Droids, no doubt. The only individual I can think of who does something that would require knowledge of programming is R2-D2, hacking into various systems. (Obi-Wan turns off a tractor beam on one occasion, but that’s less impressive.) This is a boring variant of the singularity: machines become smarter than humans and exterminate them not physically, but intellectually. The droids have taken over the tasks that they do better than humans, and humans have become something on the level of pets, cattle, or vermin. They still see the machines as their servants, but the relationship is far from obvious if you try to see it objectively. (Then again, the machines apparently stopped being creative too, because technology doesn’t change in the decades we see over the six films.)

Interestingly, humans still have the basic skills to tinker with hardware. Brat Anakin, for instance, constructs not only a racing pod but also a droid, C-3PO. So didn’t he code C-3PO’s mind? No, Anakin’s droid is standard issue, with the same design in both hardware and software as lots of others. It’s like somebody building a car using only existing ideas and blueprints. Anakin put the pieces together, connected the wires.

A final reflection: I wonder what mental capabilities our ancestors would have missed if they could see us today. After all, technology’s greatest contribution is to permit people to be incompetent at a larger and larger range of things.