Will we ever have sentient AI? Jokes about Skynet and AI road rage aside (incidentally, one AI company is calling itself Skymind, which seems needlessly risky to me)...maybe.
Elon Musk's OpenAI is the AI version of Linux. By open-sourcing everything, the project will accelerate development. In answer, Google and Facebook have now open-sourced some of their AI development.
Wait. Google and Facebook?
We probably don't realize just how much we use AI in our daily lives these days. Deep learning AI - AI that can learn from its own and other people's actions - is now powering much of what we do online.
If I open a tab and type something into the URL bar, it's an AI that autocompletes it, based on what it knows about what I've searched for in the past and what other people are searching for. Every ad you see on the internet? It's picked by an AI, based on what you have been searching for and other factors such as the time of year, time of day, your zip code and what instructions it got from the company placing the ad.
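At its crudest, that kind of autocomplete is just ranking past queries by how often they've been typed. Here's a minimal sketch in Python to make the idea concrete; the search history and prefix are invented for illustration, and real systems use far richer signals and models than raw frequency:

```python
from collections import Counter

def autocomplete(prefix, history, k=3):
    """Return up to k past queries starting with the prefix,
    ranked by how often they appear in the history."""
    counts = Counter(q for q in history if q.startswith(prefix))
    return [query for query, _ in counts.most_common(k)]

# Hypothetical search history, for illustration only.
history = [
    "dancing with the stars",
    "dancing shoes",
    "dancing with the stars",
    "dance classes near me",
]

print(autocomplete("danc", history))
# The most-typed match comes first.
```

Which is also why the suggestions can feel off: if most people typing "danc" wanted "Dancing With The Stars," that's what gets ranked on top, whether you care about the show or not.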
Of course, the AIs don't always get it right. If I type in "dancing," the AI thinks I mean "Dancing With The Stars." Oops. I have no interest in that show. Facebook is currently serving me ads for insurance - with the company I already have insurance with. Oops.
That's because the AIs are only as good as the data they're fed - plus advertisers can overrule the AI and say, for example, they want their ad served to everyone in zip code X. But the underpinnings of how we find stuff on the web are AI.
And, of course, every time you play a computer game against an NPC opponent? That's an AI engine too. Computers have been able to play chess well enough to challenge Grand Masters for years - but chess is relatively simple. The real world is much more complex, hence how long it's taking to develop self-driving cars.
Flown anywhere lately? Early airline autopilots could do nothing but hold the plane at a steady speed and altitude. Nowadays, the autopilot and the pilot work together as basically one system, though it's not actually true that planes fly themselves. A plane can, however, land without a pilot. They generally don't - autoland is used only in extreme circumstances. But one can envision a future in which the pilot becomes only a backup system and then vanishes altogether. It's not as close as some people think, but it's not impossible.
So, what about sentient AI? So far, all of the AIs we've created are still computer systems. They take an input and give us an output. How different, though, is that from us? We have robot pets that can fool us momentarily into thinking they're alive. Robot home aides are being developed in Japan.
If task complexity is really what makes the difference, then sooner or later we will have an AI that turns around and asks us "Who am I?"
What's very important is how we answer that question. Elon Musk is afraid of AIs taking over the world (and thinks the answer is to make sure any developing AI has as many people in communication with it as possible).
I think, and have for a while, that the answer to that question should be "Our child." An AI designed by humans for humans will have an inherent humanity to it - if it is evil, it will reflect our evil. The fear that an AI will be a monster assumes that it will have no soul. I would argue that if one sentient being has a soul then all must.
So, while I think sentient AI is a ways away, I'm not afraid of it, and I'm happy to see some of the code being open sourced.
I don't think our AI children will take over the world if we treat them properly. But then, even if they do, they will still have inside them something of us.