How artificial intelligence came to be almost everywhere


If you spend any amount of time on the internet, or even just dabble in the tech world, you've no doubt seen mentions of "AI," or artificial intelligence, here and there. In the past few years it's become something of a buzzword in the industry, and it seems like every website, app, and game is using it.

There's a website that uses AI to rate how "cool" your selfies are. People "feed" AI the scripts of popular movies and TV shows so the software can spit out a new script based on what it "learned" from the source material. Recently, you may have even seen that an AI "mastered" the video game StarCraft II and now performs better than 99.8% of human players.

But what do people typically mean when they say something is powered by AI? What does it encompass, and how do you interact with it on a daily basis? If you're unsure how all of these pieces fit together, here's a guide to the world of artificial intelligence and its various applications.

What is artificial intelligence (AI), and what do people typically mean when they say they're using it?

In the 1950s, AI pioneers Marvin Minsky, who built one of the first neural network simulators, and John McCarthy, who coined the term "artificial intelligence," described AI as tasks performed by a program or a machine that you couldn't reasonably distinguish from the same work done by a human.

To put it in even simpler terms: "It’s just another way of writing software. Relax!" Michael Capps, PhD, former president of Epic Games and cofounder and CEO of AI startup Diveplane, tells Mic. "Company.com from the '90s became eCompany and then iCompany and then CloudCompany and is now Company.ai. Who cares?"

So we have two definitions: artificial intelligence is both the latest "trendy" way to write software and a way of automating tasks humans could do, in a manner that makes it hard to tell whether a human or a computer performed them. But the most important core component of AI programs, given that they're modeled after human intelligence, is that they learn. In fact, they'll display some of the same behaviors humans do when they're beginning to understand something. Learning, along with problem-solving and knowledge, is a central tenet of any artificial intelligence project.

"There’s really two terms that matter," says Capps. "One is machine learning, or 'ML' — when a machine learns how to do something. And the other is AI — which basically means, 'I have no idea how the computer did that, whoa.'"

The bottom line for determining whether an app or online service you're using actually involves AI, according to Kishore Rajgopal, founder and CEO of intelligent, adaptive pricing software firm NextOrbit, is "the fact that your decisions and your actions change with time, and it [the AI] learns." Learning is the key point there.

"AI is the hot thing, so just about everybody says AI. 'Oh, my product is AI enabled.' But when you look deeper, what makes it AI? It's all about the rules engine. It's a decision engine. It's a decision tree," Rajgopal explains. "The question really is: does the program learn with time? You must ask this question of something purported to use AI. Does it get better with time? Probably not. The only way it can get better with time is if I, as a programmer, go back and change it."

How does the average person interact with AI on a daily basis?

You're likely using AI daily without even realizing it; that's how ubiquitous the technology has become, even though most people aren't quite sure what it actually means. Capps says the average person interacts with AI on a "nonstop" basis, "from automated fraud detection on your credit cards, to air conditioning in large buildings, to airplane scheduling, to the sale prices of gold in your Clash of Clans game."

If you unlocked your phone with Face ID before making a call or sending a text, you relied on machine learning. If you then took a photo in near-total darkness and found that your phone automatically brightened it, you can thank AI for that as well. And that DoorDash customer service agent who made sure you got a discount for your missing milkshake? That was probably a bot powered by AI.

"It’s getting harder and harder to discern who is real and who isn’t," says Capps. "Right now it’s chat agents, but that will be voice and then fully generated video soon — it will get harder and harder to tell the real from the virtual, and it may not matter!"

How has AI changed over the past few years and what does the future look like?

AI has certainly become more ubiquitous over the last few years. It's more than just a buzzword. It's now the core focus of several businesses and a simple fact of life when it comes to innovation.

"Ten years ago, nobody could really afford AI. It was too expensive. It's only in the last two years that it's come out into the open, like using it in commercial software. That's just going to increase. You're going to find that line of programming will become prevalent. It'll no longer be a luxury. It'll just be part of the way [you program.]," says Rajgopal of the burgeoning AI revolution that's already beginning to take hold.

"In the next five years the exponential curve of technology will continue. You’ll have insanely powerful virtual assistants, automated cars, genetically targeted drugs, and the like. In 10 years, we’ll be like cute house pets for our AI overlords," concludes Capps.

What is the key to retaining the human element of AI to avoid any unforeseen mishaps?

With machines and programs that are literally created to learn, adapt, and continually improve themselves over time, there are obvious concerns that arise. For instance, what happens if an AI becomes "too smart," or if it "decides" that it knows best?

"Today, we’re building “black box” systems, where we don’t know what the system is actually doing," Capps says of some of the potential dangers AI has brought forth. "We’re trusting them at scale, and only later do we realize they might have been trained with biased data. Imagine a residential loan approval algorithm that’s been trained on 60 years of data in the Deep South — what are the chances it might incorporate some racial bias? And we’ve already seen issues with parole determination systems, etc."

We've already seen these pitfalls play out in tasks as simple as camera tracking: a smart camera from Facebook's AR/VR team that was meant to stay focused on a female person of color telling a story instead kept centering on her white, male colleague. With these potential biases in mind, how do we ensure the programs we're building still retain their human elements?

"Being able to look inside the ‘black box’ is crucial," says Capps. "It’s how we build software, with a debugger so we can see how it’s operating. AI and machine learning need the same ability to look inside, before we can trust it to make decisions with human lives on the line, or even just decisions that can be impacted by biased data."

The bottom line is that if we manage to create software that can truly learn and perform at the same capacity as a human, there's a very real possibility it won't stay at that level for long.

"Basically, if we can build a machine that learns like a human, it won’t be long before it’s smarter than a human..." Capps posited. "And then who’s in charge?"