Friday, December 5, 2014

Is Artificial Intelligence dangerous - or are its Makers?

Before history moves on and artificial intelligence (AI) erases any trace of this blog as part of the destruction of the human race, here are the figures you need to know.

Humans killed by "independent" artificial intelligence: 0, as of 2014

Humans killed by humans:  see the Wikipedia list of wars and anthropogenic disasters by death toll.

The death toll so far is clear, but the famous warners about AI are looking into the future. This year's rash of headlines may well worry you. The people issuing the warnings are respected figures like Elon Musk and Stephen Hawking.

I don't know about you, but if a guy who knows his way around black holes, or another who built a fully electric sports car in his spare time while leading a team of kids to replace NASA, tells me he is afraid of AI, gosh, I listen.

The timing of the reports is pretty fucked, though. They made us interrupt an experiment designed to figure out the impact of the dumb German "ancillary" copyright law, which requires search firms to pay money if they want to list anything beyond the smallest of text snippets from online content.

For a time, we stuck to just reading headlines, ignoring anything beyond them. A disaster.

The dire warnings of AI exterminating humanity erupted while people who think "yellow pages" denotes a book that hasn't aged well, or who believe a shopping portal without inventory, without warehouses, and without any contract with the online user is a legit online vendor, are trying to break up Google. Bad timing for AI warnings, when we humans are still lacking the I ourselves.

We ended the experiment and read the AI warnings. They were much more balanced than the headlines suggested. Great.

They were also quite clear, though Business Insider is not a publication to recommend as your only source of information: the concern is that robots could grow so intelligent that they could independently decide to exterminate humans.

The blogster's simple mind zoomed in on two things in the discussion. The first one is that AI is not happening by itself. Humans, we the people, are developing it. It's not like rain or foxes, or fungi. If you turn off the power, AI doesn't go anywhere. For the moment at least.
The self-sustaining robot feeding off of grass or solar power is yet to come.

If AI turns dangerous, the first such events will be within human power to stop.
It's called ethics, and it will probably not work too well, say the cynics.

There is another aspect to the warning "AI may exterminate the human race", something not a single one of the many articles we read mentions.

If the threat becomes real, tangible, humans will finally know what it is like to be an elephant hunted without mercy for ivory, or a whale hunted not out of necessity but just because we can.


Wouldn't it be wonderful if the remorseless exploiters of the planet were afforded this ultimate view of the monkey facing the human in a face mask, holding a syringe?

Have a little faith, humans.

Truly intelligent robots won't wipe out the human race completely. Trust them to be good enough scientists. As such, they will either tolerate us or keep us in comfortable, animal-friendly zoos - not some extraordinarily dumb cages out of Planet of the Apes.

Even if not a single human survives, we will have created the new guys. What's so bad about that? Looking around the hood, lots of humans like to play god.

Isn't the fear of AI really the fear that those guys will be just as predatory and unrelenting as us, only even more efficient?

[update 6/12] Just in time, here is an article about the BBC's robot cameras going rogue: http://www.theguardian.com/media/2014/dec/05/bbc-robot-cameras-rogue-presenters-frustrated
