Tuesday, August 19, 2014

Be afraid, be very afraid… (Damn, that technology can be scary sometimes.)

It’s not as if I don’t like technology.

Come on, I’ve spent the last 30+ years making my living in the tech biz.

It’s not as if I don’t embrace technology.

Come on, I’ve got the requisite gear: smartphone, laptop (soon to be replaced by a Surface Pro super-tablet), Kindle, special decoder-looking ring that I use for my Charlie Card (smartcard for Boston’s rapid transpo system), programmable thermostat (okay, so I haven’t actually programmed it yet…). And while I’m not “always on”, I spend an awful lot of time online.

It’s not as if I don’t understand technology.

Come on, I’ve actually ghosted articles that appeared in EE Times.

And yet, deep in my heart – or at least in part of one ventricle – there’s definitely an inner techno-worry wart. Maybe even a bit of a Luddite.

The latest thing to unleash that inner Luddite was an article on Bloomberg a week or so ago that warned that:

From Ancient Greece to Mary Shelley, it's been an abiding human fear: A new invention takes on a life of its own, pursues its own ends, and grows more powerful than its creators ever imagined.

For a number of today's thinkers and scientists, artificial intelligence looks as if it may be on just such a trajectory. This is no cause for horror or a premature backlash against the technology. But AI does present some real long-term risks that should be assessed soberly. (Source: Bloomberg)

AI has been around for quite a while.

Many, many years ago, a bunch of my colleagues fled the company we were all working at to join an AI start-up whose product was supposed to be capable of making the right business investment decisions. I wanted to join them, but wasn’t artificial or intelligent enough to get an offer. (Formal word came back that I didn’t seem “ready” to leave the company where I was working; informal word came back that I’d asked too many questions, which led the hiring folks to determine that, in an environment that required true believers, I would be one of those ye-of-little-faith types.)

Of course, it almost goes without saying that that start-up went out of business.

Since then, of course, AI has become a ton more intelligent.

Yet it still hasn’t gotten to the point where, when it comes to “general” intelligence, it’s as good as one of us actual human beings. I.e., there’s still no such thing as a:

…machine that can independently solve problems and adapt to new circumstances, like a human.

Bravo, us!

Maybe AI can beat a humanoid at chess, but we’re still Number One when it comes to things like figuring out what to do when siblings are squabbling, spouses are aggravating, and the dog needs a scratch behind the ears. When it comes to things that require gut instinct, human experience, and emotional intelligence – qualities that, so far, haven’t lent themselves to artificiality – we rule.

Which is not to say that AI won’t someday surpass us:

Experts in one survey estimate that artificial intelligence may approach the human kind between 2040 and 2050, and exceed it a few decades later.

Hmmmm. Wonder what the survey would have said if they’d asked the AI machines themselves? Something to wonder about…

They also suggest there's a one-in-three probability that such an evolution will be "bad" or "extremely bad" for humanity.

Some pretty big names are worrying about machines going Frankenstein on us:

Elon Musk has warned that it could be "more dangerous than nukes." And Stephen Hawking has called it "potentially the best or worst thing to happen to humanity in history."

They worry that AI could get too big for its smarty-pants britches, too difficult for mere mortals to “understand or predict…more difficult or impossible to control.”

What could happen that’s so bad?

To use a simplified example: A self-learning AI programmed to calculate the decimals of pi might, as it became more intelligent, decide that the most efficient way to meet its goals would be to commandeer all the computing power on earth.

Swell. Satellites will fall from the sky, the grid will go down, heart monitors will stop monitoring, and I won’t be able to get my daily Daily Mail UK fix, all because some jerk of a machine wants to calculate pi to well beyond the number of decimal places where even the most obsessive, nerdly kid would decide that the task was useless and boring.
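
(For the record – and this is just a minimal sketch of my own, not anything from the Bloomberg piece – grinding out pi’s decimals doesn’t actually require commandeering anything. A few lines of Python, here using Gibbons’ unbounded spigot algorithm, will happily stream digits one at a time for as long as you care to let it run:)

from itertools import islice

def pi_digits():
    # Stream the decimal digits of pi, one at a time, forever
    # (Gibbons' unbounded spigot algorithm; Python's arbitrary-precision
    # integers do the heavy lifting as the state values grow).
    q, r, t, j = 1, 180, 60, 2
    while True:
        u = 3 * (3 * j + 1) * (3 * j + 2)
        y = (q * (27 * j - 12) + 5 * r) // (5 * t)  # the next digit
        yield y
        q, r, t, j = (10 * q * j * (2 * j - 1),
                      10 * u * (q * (5 * j - 2) + r - y * t),
                      t * u,
                      j + 1)

print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]

(No world domination needed – though the obsessive, nerdly kid in question would presumably swap that 10 for something much, much bigger.)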

What to do, what to do?

… researchers in the field need to devise commonly accepted guidelines for experimenting with AI and mitigating its risks. Monitoring the technology's advance may also one day call for global political cooperation, perhaps on the model of the International Atomic Energy Agency.

The bottom line?

There may be no immediate danger, but, in the long run, there’s every reason to be afraid, be very afraid.

Technology capable of calculating pi to infinity and beyond could end up doing some pretty awful things.
