AI Advancements
Resources
Video Script
So let’s take a look at some early attempts at artificial intelligence. Allen Newell and Herbert Simon were a couple of early researchers in AI. In 1955, they wrote a program designed to mimic the problem-solving skills of a human being and called it the Logic Theorist. This program is now widely considered to be the first AI program. It was used to prove theorems from Principia Mathematica, and it actually produced a more elegant proof for some of those theorems than the book’s authors had. At the time, interest was swirling around the growing idea of AI, and the experts decided to come together and discuss the topic at length. That gathering was the 1956 Dartmouth conference, which John McCarthy pushed for and successfully organized. There are a lot of different types of AI that have been developed over the years, and a lot of different methodologies behind what makes them artificially intelligent. But a lot of this deals with how we represent knowledge and information for the AI. There’s a ton of information in the world; even something as simple as a smartwatch detecting your heart rate and oxygen levels produces a lot of data over a short period of time. So how do we represent that knowledge so that our AI can actually consume it and make good, rational decisions from it?
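To make that concrete, here’s a minimal sketch, with made-up smartwatch readings and made-up thresholds, of what representing that knowledge might look like: the raw sensor samples get summarized into a handful of features that a simple decision rule (or a more sophisticated AI) could actually reason over.

```python
from statistics import mean

# Hypothetical raw sensor stream: (seconds since start, heart rate in bpm, blood oxygen %)
readings = [
    (0, 72, 98), (10, 75, 97), (20, 110, 96), (30, 118, 95), (40, 80, 98),
]

# Represent the knowledge as a few summary features instead of raw samples.
features = {
    "avg_heart_rate": mean(hr for _, hr, _ in readings),
    "max_heart_rate": max(hr for _, hr, _ in readings),
    "min_oxygen": min(ox for _, _, ox in readings),
}

# A simple rule-based decision over that representation (thresholds are invented).
if features["max_heart_rate"] > 115 and features["min_oxygen"] < 96:
    decision = "flag an unusually elevated heart rate"
else:
    decision = "no action needed"

print(features, decision)
```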
That also includes searching that information: how do we find and dig our way through all of that data? It includes expert systems. A really good example of an expert system is Amazon: it learns your shopping habits and recommends items to you, and that recommender system is what tries to learn your likes and dislikes and what you might need or want to buy next. There’s also the ability to plan, that is, to plot a course between two points; reasoning; machine learning, which we’ll talk about in a little bit with neural networks; and special topics like natural language processing, AI that can understand human speech. For us that’s not especially difficult, but for a machine, understanding human speech and producing human-like speech is an incredibly difficult problem.
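As a rough sketch of that recommender idea (an illustration of the general technique over made-up purchase data, not Amazon’s actual system), the more another shopper’s habits overlap with yours, the more weight their other purchases get when suggesting what you might want next:

```python
from collections import Counter

# Hypothetical purchase history: user -> set of items bought.
purchases = {
    "alice": {"tent", "sleeping bag", "lantern"},
    "bob": {"tent", "sleeping bag", "camp stove"},
    "carol": {"lantern", "camp stove", "water filter"},
}

def recommend(user, purchases, top_n=2):
    owned = purchases[user]
    scores = Counter()
    for other, items in purchases.items():
        if other == user:
            continue
        overlap = len(owned & items)   # how similar their shopping habits are
        for item in items - owned:     # items this user doesn't own yet
            scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("alice", purchases))  # ['camp stove', 'water filter']
```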
So to dive a bit deeper into the topic of AI, we’re going to look at the last tool mentioned: neural networks. In 1969, Marvin Minsky, one of the founders of MIT’s AI lab, wrote a book called Perceptrons that examined an early building block of this idea of a neural network. Now, what are neural networks? The idea behind a neural network lies in the power of individual neurons and the connections between them. Each neuron is capable of doing a certain task, and its output is passed on to other neurons. The strength of a neural network comes from the connections between neurons: if one neuron tends to give correct answers to a problem, other neurons will be more likely to use its output, based on the strength of the connection between them. This process of strengthening good connections and weakening bad ones is how neural networks learn to do particular tasks. This is really how we’re trying to simulate the human brain: our brains have lots of neurons and synapses, and those connections are a representation of the knowledge and skills we learn throughout our lives. So how do we get a computer to imitate that idea? Neural networks have been around for quite some time, but more recently this has been the idea behind deep learning. Deep learning, at a very basic level, expands the network to have numerous layers to learn from. A deep learning model is very much like an artificial neural network, but with lots and lots of layers, and each layer does its own piece of the learning that produces the final output.
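Here’s a minimal sketch of that “strengthen good connections, weaken bad ones” idea: a tiny two-layer network trained on the XOR problem with plain NumPy. The connection strengths are just the entries of the weight matrices W1 and W2, and each training step nudges them up or down based on the error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 4))   # connections: inputs -> hidden neurons
W2 = rng.normal(size=(4, 1))   # connections: hidden neurons -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each neuron combines its inputs and passes its output onward.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: nudge each connection up or down based on the error it caused.
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

# After training, the outputs are typically close to the XOR targets [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```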
An example of a neural network in action is the story of classifying camouflaged tanks. In this experiment, the researchers wanted to create a neural network that would distinguish pictures of tanks hiding in trees from pictures of just trees. This was a really interesting problem for the government at the time, and it worked pretty well on the original photos. But when the researchers brought in a new set of photos to test it on, the results were no better than random. The reason for this behavior is that in the original photos the AI was trained on, all the pictures of tanks were taken on sunny days, and all the pictures of just trees were taken on cloudy days. So what they had really built was a machine that determined whether it was sunny or cloudy. It’s a funny result: the AI did a good job with the information it was given, but based on the pictures themselves, the classification of sunny versus not sunny was much easier to pick up than tank versus no tank. This is a really good example of how AI is, currently, only as smart as how we program it or what we tell it to do, and sometimes it ends up finding things or doing things that we totally didn’t expect. And sometimes that turns out for the better.
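To see how that can happen, here’s a small sketch with synthetic data (not the original tank photos): a brightness feature is set up to line up with the label in the training set but not in the test set, and a simple classifier happily learns the brightness shortcut instead of the weak “tank” signal.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_photos(n, tanks_on_sunny_days):
    has_tank = rng.integers(0, 2, n)                         # label: tank present?
    if tanks_on_sunny_days:
        brightness = 2.0 * has_tank - 1.0 + rng.normal(0, 0.1, n)  # sunny iff tank
    else:
        brightness = rng.uniform(-1, 1, n)                   # brightness is unrelated
    tank_texture = 0.05 * has_tank + rng.normal(0, 0.5, n)   # weak "real" signal
    return np.column_stack([brightness, tank_texture]), has_tank

X_train, y_train = make_photos(200, tanks_on_sunny_days=True)
X_test, y_test = make_photos(200, tanks_on_sunny_days=False)

# Tiny logistic-regression classifier trained with gradient descent.
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X_train @ w)))
    w -= 0.1 * X_train.T @ (p - y_train) / len(y_train)

def accuracy(X, y):
    return np.mean(((X @ w) > 0) == y)

print("train accuracy:", accuracy(X_train, y_train))  # near 1.0
print("test accuracy:", accuracy(X_test, y_test))     # not much better than chance
```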
So let’s take a look at some other AI that is a little more modern, or a little more recent. In 1997, which is still pretty old, a little over 20 years ago now, Deep Blue, an AI developed by IBM, beat Garry Kasparov at chess. Garry Kasparov was a world-class chess player, and this was a very big achievement at the time, because chess, like many other games, ends up being far more complex than you might think. IBM’s next AI venture was Watson. Watson was a research project that started in 2006, and its goal was to be able to learn from the internet: basically, to answer lots of questions based on the information it could scrape from the internet, so Wikipedia-type information, things you’d search on Google, that sort of thing. A really huge achievement for IBM’s Watson came in 2011, when it beat Ken Jennings at Jeopardy!. If you’ve never watched Jeopardy!, or haven’t watched it in a while, Ken Jennings was one of the best players at the time. After Watson won at Jeopardy!, IBM repurposed the AI to start targeting things like the medical field, so a computer that is able to intelligently answer medical questions, and also things like industrial questions. It has basically been a continuous innovation project: an AI that is essentially better than your Google search, because it not only searches the internet for information but actually comes up with a specific answer to the question.
Now, even more impressive, Google’s DeepMind project had an AI called AlphaGo, and in 2015 this was the first AI to ever beat a professional human player at Go. Go is a really ancient game that originated in China, played with black and white stones on a grid-lined board, and the goal is to end up controlling more of the board with your stones than your opponent does. It’s a little like Reversi, if you’ve ever played that, but a lot more complex. AlphaGo continued its winning streak by defeating the world champion at Go in 2016. This is a huge achievement because, again, this was the first AI to ever beat a professional human player at Go, but the real achievement is that Go is an extremely complex game: it has around 10 to the power of 170 possible board configurations. So an AI that can play the game better than a human professional is really quite an achievement, because computationally it’s practically impossible to look at every board configuration for every possible move. Instead, AlphaGo really trained itself: it played variations of itself millions upon millions of times to learn different techniques and strategies for playing Go. AlphaGo was later expanded into an algorithm called AlphaZero, which also learned games like chess and shogi. This is a little different from most algorithms or AIs that play chess, which just generate the possible game trees and choose the best move; AlphaZero is deep learning based, so it plays with a bit more strategy instead of just looking five or six moves ahead.
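For contrast, here’s what that classical game-tree approach looks like in miniature: an exhaustive minimax-style search on the toy game Nim (take one to three stones; whoever takes the last stone wins). Real chess engines add evaluation functions, depth limits, and pruning, but the core “generate the tree of possible moves and pick the best one” idea is the same, and it’s this style of search that AlphaZero’s learned strategy departs from.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win with `stones` remaining."""
    if stones == 0:
        return False  # no move left: the previous player took the last stone and won
    # Explore every branch of the game tree; a move is winning if it leaves
    # the opponent in a position from which they cannot win.
    return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Search the tree and pick a move that leaves the opponent losing, if one exists."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # every move loses; take one stone and hope the opponent slips up

print(best_move(10))  # 2, leaving 8 stones (a losing position) for the opponent
```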
Now, we could talk on and on about the uses of AI and machine learning, because they’re pretty much ubiquitous in our current lives. Things like the Microsoft Kinect were a huge innovation: it can track and map out a human skeleton so you can move and interact, which was a big push into areas like AI-driven virtual and augmented reality. Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana are all natural language processing AI applications, AI that is able to understand and answer spoken questions, which is really impressive and improving on a daily basis. There are tools like Wolfram Alpha as well. AI is pretty much everywhere; it has so many interesting capabilities and applications and has really started to become integrated and ingrained in our daily lives. ASIMO, from Honda, is another example of an attempt at artificial intelligence, this time in a very human-like form. It has many interesting capabilities, and the hope is that one day it can be used to assist humans in everyday life. Just like a lot of the smart home devices that have started to integrate into our daily routines, the hope here is that we’ll have robotic agents and AI that can further assist us in our daily tasks.