A Beneficial Consequence of Artificial Intelligence
Why embracing AI tools is essential, but also why you should never outsource your own thinking.
I arrive home from work to find my roommates in a frenzy. I have never seen them this animated before. When I ask what's happening, they mention "ChatGPT". I tell them I've never heard of it. Their eyes widen in disbelief. "Come over here, you won't believe this."
I sit down at the chat interface they're pointing at. They explain it's a chatbot powered by artificial intelligence, one that apparently outperforms any AI they've seen before. It can answer any question. I decide to test it and ask it to write a three-hundred-word essay on Napoleon's life. There's no way it can do that.
As I type in the prompt, I wear the kind of arrogant smile that makes a face very punchable. I hit enter. Within seconds, the chatbot digitally uppercuts me by producing a near-perfect essay. I can't believe it.
Once my initial shock subsides, excitement takes over. I realize I could delegate much of my work to this chatbot. No more writing emails, revising scripts, or summarizing business intelligence. But then a horrible dread washes over me. My excitement wavers, and one question terrifies me to the core: if this machine can perform my tasks, am I even needed anymore? I carried that thought with me for the next few weeks until I could no longer bear it. I once read that when something scares you, you should study it. As the saying goes, you must face your fears to overcome them. That's why I set out to learn as much about artificial intelligence as possible.
First and foremost, it's crucial to understand what artificial intelligence actually is. There are numerous definitions, but here is one that I like:
Artificial intelligence, or AI, is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. - IBM
But how do these machines mimic human intelligence?
To understand the technology fully, we can break it down into its most fundamental parts. This approach is known as "first principles thinking". Aristotle described it in his writings roughly 2,400 years ago. Today, investors and entrepreneurs like Charlie Munger, Peter Thiel, and Elon Musk advocate for it as well – and if it's good enough to build billion-dollar companies and shoot rockets into space (and recover the booster afterward), it's good enough for me. Let's break down artificial intelligence into first principles using an example from the real world.
A father and his daughter stand next to each other, the air filled with the smell of wilderness. The daughter's eyes couldn't be wider, and her mouth hangs open in wonder. The father smiles as he looks first at his daughter and then back through the thick glass that separates them from the creature. He understands her disbelief: even though he has seen it many times, the sense of wonder is difficult to explain the first time you lay eyes on a tiger. "What's that?" Julia asks her father, still in disbelief. Her father tells her that this particular animal is called a tiger, and that you can distinguish it from other large cats by its stripes.

Later that day, the pair passes by the lion enclosure. "Look, Dad, another tiger!" Julia exclaims. Her father laughs and says that even though both animals might seem similar, they are wildly different. This particular large cat is a lion, not a tiger.

After spending the whole day in admiration, Julia and her father return home. Her mother asks if they saw a lion. Julia tells her that they saw both. "Tigers have stripes," she states. Still in awe of what she has experienced, Julia draws a tiger before she goes to bed.
What does this story have to do with AI? Let's look at it through that lens.
When creating an AI application (or any application, really), it's important to define the specific problem the application should solve. In Julia's case, she wants to get better at telling a tiger from a lion. In other words, Julia classifies each animal as either a lion or a tiger. Computers can do the same thing, but they use classification algorithms to do so.
Now we know the goal of the application, but something is missing. Would Julia even know that a tiger or a lion exists without seeing or hearing about them? Probably not. She wouldn't know there was a distinction to be made in the first place, because she wouldn't even know these animals existed. For computers, it's similar, though we wouldn't call it knowledge but rather data. The basis for every AI application is data - raw information. But just like Julia, the model doesn't know what to do with the information if there is no explanation. Only after her father tells her that the creature she sees is a tiger does Julia understand. Training an AI model works in a similar way. This type of model training is called supervised learning: humans label the data fed to the system as "tiger" or "no tiger", and that's how the system can ultimately decide whether something is a tiger or not. Compared to humans, however, computers need far more data. Julia only needs to see a tiger once or twice to grasp the concept; computers need thousands of pictures to do the same thing. Supervised learning also requires humans to perform the mundane task of labeling those thousands of pictures. This is not only resource-intensive, but also mind-numbingly boring for the people involved. If only there were a way for the computer to recognize patterns on its own.
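To make this a bit more concrete, here is a tiny sketch of supervised learning in Python. Everything in it is made up for illustration: instead of thousands of photos, each animal is reduced to two hand-picked features, and the labels play the role of Julia's father telling her what she is looking at.

```python
# A toy example of supervised learning with scikit-learn.
# The features and labels are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features per animal: [has_stripes, has_mane] (1 = yes, 0 = no)
features = [
    [1, 0],  # a tiger
    [1, 0],  # another tiger
    [0, 1],  # a lion with a mane
    [0, 0],  # a lioness (no mane, no stripes)
]
# Human-provided labels - the "father" telling the system what each animal is
labels = ["tiger", "tiger", "lion", "lion"]

model = DecisionTreeClassifier()
model.fit(features, labels)  # learn the relationship between features and labels

# Ask the trained model about a new, striped animal
print(model.predict([[1, 0]]))  # -> ['tiger']
```

A real system would of course learn from raw pixels rather than two hand-picked features, which is exactly why it needs so many labeled pictures in the first place.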
Of course there is – it’s called unsupervised learning.
Let's imagine that this time Julia's father doesn't tell her that what she sees is a tiger. There's no explanation whatsoever. But Julia walks past two large cats, a tiger and a lion, on the same day. At first, she could be forgiven for thinking that both animals are the same. But by inspecting them more closely, she would realize the differences between the two. Even though their builds are similar, the color of the fur is different. Then she might notice that their tails don't look the same. One by one, Julia lists the differences between these two animals. Computers go through a similar process when they use an algorithm specialized for unsupervised learning. These algorithms filter through the dataset and look for hidden patterns or intrinsic structures. It's a great technique for finding similarities that weren't known before. It also doesn't require labeled data, which saves time and money.
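Again, a small sketch might help. Below, a clustering algorithm (k-means, one common unsupervised technique) groups the same made-up animal features as before, but this time without any labels. It never learns the words "tiger" or "lion"; it only discovers that there seem to be two kinds of animal in the data.

```python
# A toy example of unsupervised learning: clustering without any labels.
import numpy as np
from sklearn.cluster import KMeans

# The same invented features as before, but with no labels at all
animals = np.array([
    [1.0, 0.0],  # striped, no mane
    [0.9, 0.1],  # striped, no mane
    [0.0, 1.0],  # no stripes, big mane
    [0.1, 0.9],  # no stripes, big mane
])

# Ask k-means to find two groups in the data
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(animals)
print(kmeans.labels_)  # e.g. [1 1 0 0]: two clusters, but no names attached
```

The algorithm finds the two groups on its own; calling one of them "tiger" and the other "lion" is still up to us.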
There are many more techniques, but supervised and unsupervised learning are the most common ones. So far, though, these techniques have only allowed computers to classify data that already exists. Data made by humans. It would be neat if a computer could generate data on its own, right?
They can.
So-called "generative AI" systems can not only classify data, but also create new data. They are exceptional at creating text, audio, and video. The best-known of these systems are based on the transformer architecture. ChatGPT, a transformer-based model built by OpenAI, is the sole reason I became fascinated by artificial intelligence in the first place!
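You don't need ChatGPT itself to see the idea in action. The sketch below uses GPT-2, a much smaller but freely available transformer, through the Hugging Face transformers library; the prompt and settings are just illustrative.

```python
# A small sketch of generative text with a transformer (GPT-2, not ChatGPT).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with newly generated text
result = generator(
    "Napoleon Bonaparte was",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

Don't expect a near-perfect essay from a model this small, but the principle is the same: given a prompt, it generates text that wasn't in its training data.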
In our real-life example, Julia draws a tiger at the end of the day to commemorate it. She has seen a tiger, so she has knowledge of the animal's appearance. In other words, she can grasp the concept of a tiger. Amazingly, you can also train a computer to do just that. One example of such a generative AI model is Stable Diffusion, created by Stability AI. Stable Diffusion generates images from a text prompt provided by the user. In our case, we can enter a text prompt that goes something like this:
“Depict a realistic tiger standing in a lush green jungle, with a vibrant orange coat and black stripes, bathed in dappled sunlight.”
This prompt tells the system what we want depicted in the generated image. By the way, the picture of Julia and her father above was also created by this model.
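For the curious, here is roughly what that looks like in code, using the open-source diffusers library. This is only a sketch: it assumes you have a CUDA-capable GPU and access to the Stable Diffusion v1.5 weights, and the exact model identifier may differ in your setup.

```python
# A sketch of text-to-image generation with Stable Diffusion via diffusers.
# Assumes a CUDA-capable GPU and access to the v1.5 model weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # model identifier may differ for you
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = (
    "Depict a realistic tiger standing in a lush green jungle, "
    "with a vibrant orange coat and black stripes, bathed in dappled sunlight."
)
image = pipe(prompt).images[0]
image.save("tiger.png")
```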
Only time will tell what limits this technology has, if it has any at all. Right now, it seems that only two camps exist: either artificial intelligence is one of the biggest bubbles ever created and will stagnate, or it's the last technology we will ever create as a species because it will lead to a digital god. For now, the most rational course of action, in my opinion, is to stay up to date with artificial intelligence and learn to use these tools. But most importantly, don't outsource your thinking to machines. Just like a muscle, your ability to think will atrophy if you use it less. Use artificial intelligence, but not at the cost of your biological intelligence.