
AI in Action: DeepStack, DeepMind, and Deep Learning Intuition (Part 2)


Welcome back to Mind Over Money. I'm Kevin Cook, your field guide and storyteller for the fascinating arena of Behavioral Economics.

In our last episode from October 17, we explored the powerful new computing technologies known as machine learning and deep learning. These AI “genies” were let out of the bottle by computer scientists and engineers who learned to harness semiconductors that were originally designed for advanced gaming graphics.

What the researchers discovered was that the GPUs (graphics processing units) made by NVIDIA (NVDA - Free Report) and AMD (AMD - Free Report) provided the parallel processing required to teach machines to go beyond replicating simple human intelligence tasks, like filtering your email for junk, and to win at "extensive" -- or extremely complicated -- games like the Chinese game of Go or Texas Hold'em.

With lots of data -- both structured and messy -- a computer can be trained to automatically analyze language, images, faces, and even behavior that seems like it could be financial fraud.

But what about the threats that AI could pose, especially if it is used by hackers or nation-state cyber-terrorists to wreak havoc?

Elon Musk, founder and CEO of Tesla (TSLA - Free Report), has been an outspoken Cassandra on the dangers of AI that is misunderstood or misused. He knows, of course, how it can be used for good and will be applying it in the R&D of his own products and services.

And he is right to use his platform to warn others about the potential for unintended consequences, to say nothing of the odious ones.

But I see the evolution and application of AI as nearly unstoppable, and we'll have to take the bad with the good as it comes at us.

Because no government will be able to keep up with the innovations at the university, corporate, or nation-state levels.

In the Mind Over Money podcast that accompanies this article, I share some recent articles, studies, and resources for managing our own education in this brave, new world.

We either learn all we can about what's unfolding or, in the words of Will Ramey, NVIDIA's Director of Developer Marketing, "it will all seem like magic."

What AI Is, And Isn't

It's important to keep in mind what AI's workhorses, machine learning and deep learning, can and can't do.

Most programs are created and trained to do very specific tasks. If an algorithm was created with a neural network to recognize financial fraud, it's not going to be able to play you in chess, cook you dinner, drive your car, or empty your bank account.

Could a mega-AI be created to do all these things and appear as truly "intelligent" as a human? Probably. But there are natural constraints on both practical computing power (who pays for it, who maintains it, etc.) and undetectable crime.

Parallel processing platforms can create levels of training, "learning" through experience, and what computer scientists call "tuning" that allow inference based on that experience and repetition. From this structure, an AI platform can make predictions, judgments, and recommendations that are probability-based. But it can't act on decisions it was never programmed to make.
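
To make that concrete, here is a minimal sketch in Python (using the NumPy and scikit-learn libraries, with invented features and labels -- not any of the systems described in this article) of that training-then-inference pattern: a tiny neural network learns from labeled examples, then hands back probability-based judgments on transactions it has never seen.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Synthetic, labeled "experience": two made-up features per transaction
    # (say, amount and distance from home) and a 0/1 fraud label.
    X = rng.normal(size=(1000, 2))
    y = (X[:, 0] + X[:, 1] > 1.5).astype(int)  # a toy rule standing in for real labels

    # Training and "tuning": the network adjusts its weights from that experience.
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X, y)

    # Inference: probability-based judgments on transactions it has never seen.
    new_transactions = rng.normal(size=(3, 2))
    print(model.predict_proba(new_transactions))  # e.g. [[0.97, 0.03], ...]

Note that this little model can do exactly one thing: score transactions it was trained to score. It can't play chess, drive a car, or act on its own output.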

In essence, very specific types of tools can be created that are just like task-focused, rules-based software, except that machine and deep learning tools can both adapt and improve their skill in that task.

And that's why AI programs can act as “first filters” for massive amounts of data, automatically, from tailoring an online shopping, entertainment, or research experience to driving your car or detecting skin cancer.

Neural Isn't Neurological

But this is all to augment and amplify the human experience, not to completely take it over or replace it.

Computer scientists and AI practitioners love their work and love comparing it to human brains, because the neural networks and if/then logic circuitry they create -- clean, layered, ordered architectures of nodes -- are easier to work with and "repair."

But despite their clean, efficient, rational world, these experts also know that the messy complexity of neurons, synapses, dendrites, and biochemical messengers is orders of magnitude superior when it comes to broad and general intelligence -- even if it took millions of years of chaotic trial and error to get here.

Of course, in just a few sentences, I run the risk of greatly oversimplifying the evolution of deep learning from machine learning.

There are many different paths that researchers at universities and corporations have taken using these technologies. And the best way to talk about them and learn from them is on a case by case, problem by problem basis.

I do that in this episode of the Mind Over Money podcast by walking through several examples, including IBM's Watson, Google's DeepMind AlphaGo, and our main topic last week, the team at the University of Alberta who "solved" Texas Hold'em poker with DeepStack.

AlphaGo Zero Takes AI Intuition Further

Just last week, an Alphabet (GOOGL - Free Report) AI division known as DeepMind conquered its next challenge in the Chinese game of Go by creating a deep learning platform that completely taught itself to play without following human examples.

In other words, while the AlphaGo program that beat the world human champ in 2016 had relied on analyzing thousands of games involving good players, the new AlphaGo Zero began with a blank Go board and no data apart from the rules.

Then it "simply" played against itself, just like DeepStack did with no-limit Texas Hold'em poker.

"Within 72 hours it was good enough to beat the original program by 100 games to zero" according to Rory Cellan-Jones in his BBC News article Google DeepMind: AI becomes more alien.

I haven't seen the DeepMind paper, but I'm going to go out on a limb and guess that, for a game with as many possible outcomes as Go, Zero had to play tens of millions of games in 72 hours to teach itself to become that good.

I would also assume that it uses some form of CFR (counterfactual regret minimization) training and that the result is a form of "intuition," just like DeepStack. I'll be looking for the scientific paper associated with DeepMind's stunning achievement.
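
For the curious, here is a toy Python sketch of regret matching -- the building block underneath CFR -- applied to rock-paper-scissors rather than poker or Go. It's my own illustration, not DeepStack's or DeepMind's code: two copies of the same learner play each other, track how much they "regret" not having chosen each action, and their average strategies drift toward the game's equilibrium.

    import numpy as np

    ACTIONS = 3  # rock, paper, scissors
    # PAYOFF[a][b] = payoff to the player choosing a when the opponent chooses b
    PAYOFF = np.array([[ 0, -1,  1],
                       [ 1,  0, -1],
                       [-1,  1,  0]])

    def strategy_from_regrets(regrets):
        # Regret matching: play each action in proportion to its positive regret.
        positive = np.maximum(regrets, 0)
        total = positive.sum()
        return positive / total if total > 0 else np.ones(ACTIONS) / ACTIONS

    rng = np.random.default_rng(0)
    regrets = [np.zeros(ACTIONS), np.zeros(ACTIONS)]
    strategy_sums = [np.zeros(ACTIONS), np.zeros(ACTIONS)]

    for _ in range(100_000):  # self-play: the learner is its own opponent
        strategies = [strategy_from_regrets(r) for r in regrets]
        choices = [rng.choice(ACTIONS, p=s) for s in strategies]
        for me, opp in ((0, 1), (1, 0)):
            # What each action would have earned vs. what the chosen action earned.
            action_payoffs = PAYOFF[:, choices[opp]]
            regrets[me] += action_payoffs - action_payoffs[choices[me]]
            strategy_sums[me] += strategies[me]

    # The averaged strategy approaches (1/3, 1/3, 1/3), rock-paper-scissors' equilibrium.
    print(strategy_sums[0] / strategy_sums[0].sum())

Scale that idea up to the astronomically larger decision trees of poker or Go, add deep neural networks to generalize across situations the program has never seen, and you get something that behaves a lot like "intuition."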

The Intelligent (and Invisible) Robots Are Coming

I ended last week's episode (Part 1) on the positive note of "Let's learn all we can 'cause AI is coming!"

This past week I found several articles and resources that got me excited about how to do just that, from MIT to Coursera.

One article that caught my attention, because it seemed almost absurd, was from the UK's Guardian, citing a British business group calling for the government to launch an AI commission to predict the technology's impact on jobs and productivity.

If you caught my May 9, 2017 podcast What to Do Before the Machines Take Over, on the "threat of automation" and Yuval Noah Harari's book Homo Deus, you know that I've covered some of the dark side of the coming technology revolutions.

But as I stated earlier in this article, I see the evolution and application of AI as nearly unstoppable, and no government will be able to keep up with the innovations.

In other words, different aspects of business and society will already be in the middle of a "next-transformation" by the time some commission has figured out the last transformation.

As MIT recently titled a piece on their blog, corporations need to Keep Calm and... Massively Increase Investment in Artificial Intelligence.

To wit, companies like Amazon and Alibaba (BABA - Free Report) are doing just that. Wells Fargo analysts, setting a $225 price target on BABA shares in a research report last month, noted that, according to PwC, AI will deliver a $7 trillion impact on China's economy within ten years.

That should make the UK government analysts, who predict an $814 billion impact on their economy by 2035, go back to the drawing board.

The New Electricity

Be sure to listen in to this podcast and catch all the gems of insight that I don't have time to write down here, including those of Andrew Ng, VP & Chief Scientist of Baidu, Co-Chairman and Co-Founder of Coursera, and an Adjunct Professor at Stanford University.

Professor Ng calls AI "the new electricity" because it will soon be as powerful, ubiquitous, and ordinary for society as that technology was a century ago.

I also share a story about veteran AI theorist, cognitive psychologist, entrepreneur and educational reformer Roger Schank and his war of ideas (and words) against IBM's (IBM - Free Report) Watson.

Disclosure: I own shares of NVDA, AMD, and BABA for the Zacks TAZR Trader portfolio.

Kevin Cook is a Senior Stock Strategist for Zacks Investment Research where he runs the TAZR Trader service.