Artificial Intelligence for Beginners

Artificial intelligence is an interesting term. For some reason, most people hear the word “intelligence” but somehow miss the word “artificial.” It’s an over-hyped term; in fact, there is a name for this kind of hype: technology bias, the automatic belief that the next new thing will solve more problems than it possibly can. Part of the problem is that movies have portrayed “artificial intelligence” as being actually intelligent, and sadly, some people take their information from fictional movies. Artificial intelligence isn’t intelligent, but it does hold potential, and that potential is determined by us and the datasets we use to teach it.

First, artificial intelligence is just a fancy correlation engine that learns from the correlations you’ve provided. A correlation means that two things tend to occur together, but just because two things coincide, it doesn’t mean that one causes the other. Ionica Smeets presents a great example. There is a correlation between ice cream sales and drowning: as ice cream sales go up, so do the incidences of drowning. So, does ice cream cause people to drown? Obviously not. Both simply rise in warm weather, when more people buy ice cream and more people swim. A problem can occur when people want to believe that one thing causes another. For example, people wanted to believe that boosting a student’s self-esteem would improve their grades, because it’s so much easier than actually studying, but it proved to be false. Raising self-esteem doesn’t make people smarter; it just makes them think that they are smarter. It’s really the worst-case scenario. Imagine a society of dumb people who think that they’re smart. Oh wait, you don’t have to imagine (TEDx Talks, 2012).
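The ice cream example is easy to reproduce. The monthly figures below are invented purely for illustration: both series follow the same seasonal shape because both depend on warm weather, so the correlation between them comes out near +1 even though neither causes the other.

```python
# Hypothetical monthly figures (Jan-Dec), invented for illustration.
# Both series rise and fall with the seasons, not with each other.
ice_cream_sales = [120, 135, 178, 221, 260, 305, 330, 318, 270, 214, 160, 128]
drownings       = [2, 3, 5, 7, 9, 12, 13, 12, 10, 6, 4, 2]

def pearson(xs, ys):
    """Pearson correlation: covariance divided by the product of std devs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(pearson(ice_cream_sales, drownings))  # close to +1
```

A coefficient near +1 looks impressive, but it says nothing about causation; the hidden variable (temperature) is doing all the work.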

Here is how AI works. Let’s say that you’re a realtor, and you take a photo of every potential customer. You want to know if artificial intelligence can predict, from a photo of a potential customer, the odds of that person buying a home. So, you feed in a photo, the program turns the photo into numbers, and you then tell the software to label that photo as either a buyer or a non-buyer. You push thousands of past potential customers into it. This is called learning. Your software is learning who is a buyer and who is not according to the features presented in a photograph. This is important. The software doesn’t actually know your emotional state from the expression on your face. It’s simply correlating the expression with the person buying or not buying the home. And you don’t have to stick to just the photograph. You could add demographic data to the inputs, such as the person’s age, race, marital status, etc. So, the software isn’t really learning like a person does, and it really doesn’t understand. It’s just correlating.
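To make the “turn the photo into numbers and correlate” idea concrete, here is a minimal sketch of that kind of learner (logistic regression trained by gradient descent). Every name and number is hypothetical: `smile_score` stands in for whatever numbers a photo becomes, the age column stands in for the demographic data, and the six rows stand in for the thousands of past customers.

```python
import math

# Hypothetical training data. Each row is one past customer reduced to
# numbers: [smile_score, age / 100]. The labels record who bought a
# home (1) and who did not (0).
X = [[0.9, 0.35], [0.8, 0.42], [0.7, 0.30],
     [0.2, 0.55], [0.3, 0.60], [0.1, 0.48]]
y = [1, 1, 1, 0, 0, 0]

# "Learning" here is nothing mystical: repeatedly nudge two weights and
# a bias so the predicted probability lines up with the labels.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for xi, yi in zip(X, y):
        z = w[0] * xi[0] + w[1] * xi[1] + b
        p = 1 / (1 + math.exp(-z))   # predicted probability of "buyer"
        err = p - yi                 # how far off the prediction was
        w[0] -= lr * err * xi[0]
        w[1] -= lr * err * xi[1]
        b -= lr * err

def buy_probability(x):
    """Score a new customer with the weights learned above."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

print(buy_probability([0.85, 0.40]))  # high: resembles past buyers
print(buy_probability([0.15, 0.50]))  # low: resembles past non-buyers
```

Note what the model never does: it never understands a smile. It only discovers that a high first feature co-occurred with the “buyer” label in the training data, which is exactly the correlating-not-understanding point above.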

And correlations can be wrong. Very wrong. The psychologist Jean Piaget asked children where the wind came from, and they responded that it comes from the trees bending, because they had associated the wind with bending trees. Baseball players can come up with some pretty fascinating rituals, mostly involving kissing religious symbols (Brennan, 2010). There are many examples of this type of logical error causing harm, and much of it relates to bogus medical treatments.

So, remember to be careful. AI isn’t intelligent. It will never be intelligent. It doesn’t think. It doesn’t love. It doesn’t care if it’s right or wrong. It just correlates.

Brennan, J. (2010). Major League Baseball’s top superstitions and rituals. Bleacher Report.

TEDx Talks. (2012). The danger of mixing up causality and correlation: Ionica Smeets at TEDxDelft [Video]. YouTube.