Summary of AI
So in review, we’ve talked about Alan Turing and the Turing test, along with John Searle and the Chinese room argument, which highlights some of the problems and issues the Turing test actually has. Then, we talked about Newell and Simon’s Logic Theorist, one of the first AI programs ever created. Then, we talked a lot about the Dartmouth research project, the conference organized to start building out the idea of AI, and the vast range of different subtopics, tools, projects, and applications of AI, along with things like neural networks with Marvin Minsky, and a bit about the current state of AI. So a lot of things like Deep Blue, AlphaGo, and AI in the medical field. There’s a vast range of different possibilities and applications that we’re currently experiencing, including things like self-driving cars. So what’s missing from this discussion? Because we could talk for a very long time about AI, and just purely about the applications of AI alone.
So, things like the philosophical and ethical implications of AI. This is a really big one right now with self-driving cars. If you’ve ever heard of the Moral Machine, it’s a project I would encourage you to go check out. It asks: if a self-driving car is presented with a situation where, regardless of the decision it makes, it’s going to kill something, what should it kill? Do you swerve to miss grandma crossing the street, knowing that when you swerve, you’re going to hit the little girl playing hopscotch on the sidewalk? Or do you swerve the other way and hit the dog walker walking down the street with a pack of cute little puppies? Or do you just speed on through the intersection and hit grandma? There are a lot of moral implications here. Because, you know, when it’s a human making that decision, generally speaking, the human is going to be at fault, but when an AI makes that kind of decision, one which involves human life or other forms of life, who’s at fault? Is it the programmer who told the AI to behave that way? Do we put the car in jail? Right? There are a lot of ethical implications there around AI and the choice of life or death.
Solvability, right? Are there things that a human can do but an AI can’t, or vice versa? The singularity, aka the idea that we will reach a point where technology is growing faster than we can keep up with, so it basically grows out of control. This is kind of like the robot-overlords situation. If you’ve ever watched the movie I, Robot with Will Smith, that’s a really good example of the singularity and human-like intelligence in robotics, like robots or AIs that have emotion and things like that. It relates to consciousness as well, right? We regard humans as being conscious, so how do we make an AI that has consciousness, or that human-like emotion and human-level decision making? But again, these are some really deep topics that we could talk about at length. These are just some points to remember as you go about learning about AI, watching the videos we have for this particular module, and moving forward into a world where AI is ubiquitous and ingrained in our daily routines, and it’s going to continue that way even more so as we go farther into the future.