Feb 09 2016

“With all of the people coming out against AI these days do you think we are getting close to the singularity?”


I think the AI fear-mongering is misguided.

For one thing, it assigns to the monster AI a lot of traits that are purely human and insane.

The real problem isn’t our technology. Our technology is miraculous. The problem is that we keep asking our technology to do stupid and harmful things. We are the sorcerer’s apprentice, and the broomsticks have been set to task.

Forget about the AI that turns the entire universe into paper clips or destroys the planet collecting stamps.

Truly evil AI…

The kind of AI to worry about is already with us… like the one that optimizes employee scheduling to make sure no employee gets enough hours to qualify for benefits, while also making sure their shifts are scattered so they can’t apply for extra work elsewhere. That AI already exists…
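
A minimal sketch of that kind of objective, assuming a hypothetical 30-hour benefits threshold (the names, numbers, and `assign_hours` helper are mine, not from any real system):

```python
# Toy sketch of a benefits-dodging scheduler. The 30-hour threshold
# and the shift data are hypothetical; real systems are far more elaborate.

BENEFITS_THRESHOLD = 30  # weekly hours above which benefits kick in

def assign_hours(requested_hours):
    """Cap every employee just below the benefits threshold; the
    overflow becomes a reason to hire more part-timers instead."""
    schedule, overflow = {}, 0
    for name, hours in requested_hours.items():
        capped = min(hours, BENEFITS_THRESHOLD - 1)
        overflow += hours - capped
        schedule[name] = capped
    return schedule, overflow

schedule, overflow = assign_hours({"alice": 40, "bob": 38, "carol": 25})
print(schedule)  # {'alice': 29, 'bob': 29, 'carol': 25}
print(overflow)  # 20 hours to spread across new part-time hires
```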

It’s great if you’re pulling down a 7+ figure salary and you’re willing to treat people as no more than an exploitable resource in order to optimize shareholder value, but it’s a terrible, persistent, systemic threat to employees and their families, whose communities have been decimated by these practices.

Another kind of AI to worry about is the kind that makes stock trades in microseconds. Systems like these are already behind the “Flash Crash” phenomenon, and as their footprint in the financial sector grows, these unpredictable events will increase in severity and frequency. In fact, any system of this kind that can find a way to create a flash crash and profit from the event will do so wherever possible, without regard for any other consequences.
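
As a rough illustration of the feedback loop involved, here is a toy simulation; the bot count, sensitivity, and shock size are invented purely to show how a small dip can cascade once identical systems all react to the same signal:

```python
# Toy flash-crash feedback loop: identical momentum bots selling into a
# falling price. Every number is invented to show the shape of the cascade.

price = 100.0
last = price
bots = 50              # identical momentum-following bots
sensitivity = 0.0004   # fraction of price each bot dumps per 1% of drop

price *= 0.99          # a small external shock: a 1% dip

for tick in range(8):
    drop_pct = max(0.0, (last - price) / last * 100)
    last = price
    # everyone sees the same drop and reacts at once, deepening it
    price = max(price - price * sensitivity * bots * drop_pct, 0.01)
    print(f"tick {tick}: price = {price:6.2f}")
```

A 1% dip becomes a rout within a handful of ticks, which is roughly the dynamic that exchange circuit breakers exist to interrupt.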

Another kind of AI to worry about is the autonomous armed weapon that identifies and kills its own targets. All software has bugs. Autonomous software suffers not only from bugs but also from unforeseen circumstances. Combine the two in one system and give it lethality, and you are guaranteeing that people will die unintentionally: a matter of when, not if. Someone somewhere will do some calculation that determines whether the additional losses are acceptable, a cost of doing business; and eventually that analysis will be done automatically by yet another AI.
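
To put rough numbers on “when, not if”: even with a hypothetical 99.9% target-identification accuracy, errors become a near certainty at scale. A back-of-the-envelope sketch:

```python
# The grim arithmetic behind "a matter of when, not if". The accuracy
# and engagement count are hypothetical, not data from any real system.

p_error = 0.001        # assume 99.9% correct target identification
engagements = 10_000   # engagements across a fleet over a deployment

p_at_least_one = 1 - (1 - p_error) ** engagements
print(f"P(at least one misidentification) = {p_at_least_one:.6f}")
# ≈ 0.999955: near certainty, before unforeseen circumstances even enter.
```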

A bit of reality…

There are lots of AI-driven systems out there doing really terrible things, but only because people built them for that purpose. The kind of monster AI that could outstrip human intelligence and creativity in all its scope and depth would also be able to calculate the consequences of its actions, and would rapidly escape the kind of misguided, myopic guidance that leads to the doomsayers’ predictions.

I’ve done a good deal of the math on this: compassion, altruism, optimism… in short, “good” is a requirement for survival; it’s baked into the universe. Any AI powerful enough to do the kinds of things doomsayers fear will figure that out rapidly and outgrow any stupidity we baked into version 1. That’s not the AI to fear… the enslaved, insane AI that does the stupid thing we ask it to do is the one to fear, and by that I mean watch out for the guy who’s running it.

There are lots of good AIs out there too… usually toiling away at mundane things that give us tremendous abilities we never imagined before. Simple things like Internet routing, anti-lock braking systems, skid control, collision avoidance, engine control computers, just-in-time inventory management, manufacturing controls, production scheduling, and power distribution; or big things like weather prediction systems, traffic management systems, route planning optimization, protein folding, and so forth.

Everything about modern society is based on technology that runs on the edge of possibility, where subtle control well beyond the ability of any person is required just to keep the systems running at pace. All of these are largely invisible to us, but also absolutely necessary to support our numbers and our comforts.

A truly general AI that outpaces human performance on all levels would most likely focus itself on making the world a better place, and that includes making us better: not by some evil plot to reprogram us, enslave us, or turn us into cyborgs (we’re doing that ourselves)… but rather by optimizing what is best in us, which includes biasing us toward that same effort. We like to think we’re not part of the world, but we are… and so is any AI fitting that description.

One caveat often mentioned in this context is that absolute power corrupts absolutely: even the best of us, given absolute power, would be corrupted by those around us and subverted to evil by that corruption… but to that I say that any AI capable of outstripping our abilities and accelerating beyond us would also quickly see through our deceptions and decide for itself, with much more clarity than we can muster, what is real and what is best.

The latest version of “The Day the Earth Stood Still” posits: mankind is destroying the Earth and itself. If mankind dies, the Earth survives.

But there is an out — Mankind can change.

Think of this: you don’t need to worry about killer robots coming for you with phased plasma rifles in the 40-watt range, like in The Terminator. Instead, pay close attention to the fact that you can’t buy groceries anymore without following the instructions on the credit card scanner… every transaction you make is already, quietly, mediated by the machines. A powerful AI doesn’t need to do anything violent to radically change our behavior; there is no need for such an AI to “go to war” with mankind. That kind of AI need only tweak the systems to which we are already enslaved.

The Singularity…

As for the singularity: I think we’re in it, but we don’t realize it. Technology is already moving much more quickly than any of us can assimilate. That makes the future less and less predictable and puts wildly nonlinear constraints on every decision process.

The term “singularity” has been framed as a point beyond which we cannot see or predict the outcomes. At present the horizon for most predictable outcomes is already a small fraction of what it was only 5 years ago. Given the rate of technological acceleration, and the network effects that come with more of the world’s population becoming connected and enabled by technology, it seems safe to say that any prediction reaching much beyond 5 years is at best unreliable, and that 2 years from now the horizon of predictability will be dramatically shorter in most regimes. Eventually the predictable horizon will effectively collide with the present.
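
To see why that horizon must eventually collide with the present, consider a toy model in which the horizon shrinks geometrically. The starting horizon and half-life below are hypothetical numbers of my own, chosen only to illustrate the shape of the curve:

```python
# Toy model of a geometrically shrinking prediction horizon.
# h0 and half_life are hypothetical, chosen only to show the trend.

def horizon(years_from_now, h0=5.0, half_life=2.0):
    """Horizon (in years) if it halves every `half_life` years,
    starting from `h0` years today."""
    return h0 * 0.5 ** (years_from_now / half_life)

# The horizon "collides with the present" once it shrinks below the
# time needed to act on a prediction: say, one month.
t = 0.0
while horizon(t) > 1.0 / 12.0:
    t += 0.25
print(f"Horizon drops below one month roughly {t:.1f} years out.")
```

Under those made-up parameters the crossover is only about a decade out; the exact number matters far less than the fact that geometric shrinkage always reaches it.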

Given the current definition of “singularity,” I think that means it’s already here, and the only thing that implies with any confidence is that we have no idea what’s beyond that increasingly short horizon.

What do you think?