Human + AI > AI. ChatGPT Is Useful Because It "Thinks" Differently Than Humans Do.
By Troy Lowry
Anthropomorphism is defined as "the attribution of human traits to things, such as animals, plants, or inanimate objects." This concept is commonly used in literature, art, and religion to make non-human characters or things more relatable to humans.
In this blog, I will talk about the tendency people have to attribute human qualities to objects, including AI. I will argue that instead of treating AI as human, we should recognize that it is not, and celebrate the difference.
I have a couple of deep philosophical ideas that I mask as jokes.1 One of them is: "It's awful the way we anthropomorphize our babies."2 Take a second and really think about that. It's okay; I'll wait. I have time… Thinking deeply on this statement forces you to ask, "What does it mean to be human?"
Obviously and undeniably, babies are human. They may not speak or walk, but they are human. I have an adult autistic son who is barely verbal and struggles greatly to communicate, but he's as human as they come. So, although many humans speak and walk, doing so is not part of what makes us human (to believe otherwise would be to believe that newborns are not human).
On the other hand, most everyone would agree that ChatGPT is not human. Most AI experts say AI cannot think because it is, at its core, a bunch of statistical routines. You heard that right. At the heart of ChatGPT's great logical reasoning and writing is statistics.3 This is no doubt a shock for all of you who went into English or law to avoid math. We should stop giving AI human characteristics.
There's been a lot of talk recently about AI "hallucinations." Put simply, AI hallucinations are when an AI produces erroneous or unexpected output. This is in stark contrast to humans, for whom hallucinations are defined as sensory experiences that appear real but are created by the mind, without any basis in external stimuli.
In other words, simply being wrong is not a hallucination, unless you are an AI. This is why I brought up anthropomorphism. Calling AI errors "hallucinations" gives AI human characteristics and implies that it is human. Anthropomorphizing AI this way overlooks AI's greatest strength: that it comes to decisions differently than humans do. As studies show, diverse groups outperform homogeneous ones, and this goes for groups with an AI member as well. More on that later.
As I said, AI doesn't have hallucinations, but it does have errors. These errors have several major sources.
First and foremost, since AI is built on statistics, or in a word, probability, any time you are dealing with probability there is a chance, however small, for wildly unlikely outcomes to happen. For instance, it is wildly unlikely that you will win the big jackpot in the Powerball lottery (a 1 in 292.2 million chance), but it is possible. There are actual winners.4
In that same vein, probability is used both in the "training" of the AI, which is the way the AI "learns," and in the output to any query. In training, billions of random numbers are used to build the model. All told, trillions of probabilities were used, and with so many numbers, strange things, often not easily reproducible, occur. If you bought a trillion Powerball tickets, you'd win the jackpot thousands of times (on average about 3,422 times). Given enough chances, even the wildly improbable happens.
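The expected-wins figure is easy to sanity-check. A quick back-of-the-envelope sketch in Python, assuming the 1-in-292.2-million Powerball jackpot odds cited above:

```python
# Expected jackpot wins = tickets bought * per-ticket probability of winning.
JACKPOT_ODDS = 1 / 292_200_000   # roughly 1 in 292.2 million, as cited above
tickets = 1_000_000_000_000      # one trillion tickets

expected_wins = tickets * JACKPOT_ODDS
print(f"Expected jackpot wins: {expected_wins:,.0f}")  # about 3,422
```

The same expected-value logic explains AI's rare oddities: multiply a tiny per-event probability by trillions of events, and improbable outcomes become routine.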
Another major factor is that AI is trained on vast amounts of text from the internet. It doesn't take much experience with the internet to know that it is far from a font of perfect knowledge. Given the challenge of learning from such an imperfect teacher, ChatGPT and other AIs do a remarkable job of being accurate most of the time.
Lastly, ChatGPT is programmed to give an answer. Like a child who is so eager to please they sometimes speak too quickly, ChatGPT can be so eager that it gives incomplete or downright inaccurate answers.
An example: I once asked ChatGPT about the meaning of the song "Lassie Come Home" by the band A-Ha. This is one of those songs that makes so little sense it is either extremely deep or entirely meaningless; I strongly suspect the latter. ChatGPT responded with an amazingly rich answer, using specific lyrics from the song to support several specific themes about longing and homecoming.

Unfortunately, I had made a mistake. The song I was thinking of was by the band Alphaville. A-Ha never recorded a song by that name. A quick Google search didn't find the lyrics ChatGPT quoted in any song by any artist. ChatGPT made them all up. It invented a very convincing set of fake lyrics to support non-existent themes for a fictional song.
Highly concerning, to be sure! However, my feeling is that the alarm over these mistakes has been taken too far. We are used to computers always giving us exactly the same results from the same inputs, so we are not well equipped to deal with varying results, some small percentage of which are not factually correct.
I would counter that human writers and editors also make frequent mistakes. While they don鈥檛 usually make up song lyrics from whole cloth (I hope!), nonetheless they often get facts wrong or make edits I disagree with.
In either case, I must be vigilant and make sure that I agree with the suggestions made.
Human + AI > AI
My blog posts all share a common theme: humans and AI working together do a better job than AI alone. Because AI works differently from human thought, it can add a new element that makes your work stronger.
I take all my blog posts and run them through a private version of ChatGPT (for data privacy reasons), asking it how I could make my writing better.
It usually gives me between five and seven recommendations. I find myself only acting on about one in three recommendations.
This may seem like a poor hit rate, but it's incredibly useful, in part because the recommendations come back fast, within a minute of being requested. If I asked a human editor for recommendations, they would likely have a better hit rate, but in the best case it would still take an hour or so to get results. Immediate results of good quality are often more useful to me than a more expert opinion later.
I still use human editors, but only after I use the AI editor. The results? The human editors consistently tell me that my writing quality is improving.
I, for one, am glad that AI "thought" isn't the same as human thought. That difference allows us to work together to create a better end product than either of us5 could alone. Whether ChatGPT actually thinks or not, it is an incredibly helpful tool.
AI may "think" differently, but vive la différence!
1. There is a rumor that I started this blog as an outlet for dad jokes. I will neither confirm nor deny whether this rumor is correct.
2. One of my favorites is: "There are two types of people in the world: those who create false dichotomies, and those of us who don't."
3. Since we don't fully understand how people "think," it seems to me that human thought might be based on statistics as well.
4. Another one of my philosophical statements masquerading as a cerebral joke is when I say I hope I win the lottery. Surprisingly often, people will ask, "Do you play?" To which I respond, "No. But that really doesn't change my odds." If someone presses the issue, I will say, "I find tickets on the ground from time to time. Finding a ticket and winning the jackpot with it is just about as likely as hitting the jackpot if I buy one."
5. Here I go anthropomorphizing AI by saying "us"! I don't say "us" when I use a calculator. At some basic level, AI just feels different, even if it doesn't actually think.