Why You Shouldn’t Always Trust ChatGPT
Chatbots answer incorrectly around 60% of the time. That means relying on one to win a petty argument makes you more likely to be wrong than right.
Chatbots have quickly become the new way to “Google it”, and many Internet users have come to treat them as all-knowing and all-powerful. We have already begun relying on them as our sole source of information. In arguments, disagreements, and discussions, chatbots have become the go-to adjudicator, confidently handing us answers as if they were unequivocally correct.
But their answers are, in fact, equivocal: chatbots have been found to give incorrect answers to roughly 60% of search queries. So don’t taunt your adversary over your petty victory just yet. With an error rate above 50%, citing ChatGPT makes you statistically more likely to be wrong than right.
So, if you’ve found yourself saying, “ChatGPT, something went wrong”, here are five reasons why you were saying the right thing:
1. ChatGPT has unjustified arrogance
ChatGPT is like the drunken guy at the pub who claims to know everything about relationships - he is usually confidently wrong. He blabbers on, telling you everything you want to hear about your situation. But what he says isn’t based on verified facts; he only appears to be right because he says it with such confidence.
ChatGPT is notorious for presenting misinformation with unwavering confidence, which leads users to accept everything it says as absolute truth. If ChatGPT is the robotic incarnation of the pub drunk, then we have to be his more sober mate telling everyone, “don’t listen to him - he’s full of sh*t”.
2. ChatGPT learns from us (& we’re often wrong)
ChatGPT’s training data comes from human sources. Yes, it is essentially processing huge amounts of human-written information and relaying it to your gullible eyes. But we both know how often we humans are wrong about life’s questions, which means our chatbots inherit those same errors and present them back to us as answers.
What’s more, ChatGPT can even be intentionally manipulated into spreading misinformation - a few sneaky questions are enough to recruit it as an accomplice in your own nefarious deeds. Whether we’re supplying the flawed information it relays to others or tricking it into spreading new falsehoods, we humans are a negative influence on ChatGPT and feed its cycle of lies.
3. ChatGPT doesn’t really get it
If any of your recent queries have left you saying, “ChatGPT, something went wrong”, then it’s more than likely the chatbot didn’t understand your question - which is exactly why people get so frustrated with it. This is even more likely to happen when your request lacks context.
For example, when you ask a friend to move a cooking utensil to another part of the bench, you’ll probably just say, “Put that there”, and your friend knows exactly what to grab and where to put it - the shared context fills in the gaps. Now think about the vague questions you ask your chatbot and how often they leave you frustrated. That’s because it relies on patterns, not reasoning, to produce its answers.
4. It’s no substitute for the experts
ChatGPT is not exactly number one when it comes to comprehensive discussion of a topic. It’s designed to give you a general overview, but it can’t do critical thinking of its own (only recycle the critical thinking of others). That immediately disqualifies it from expert status and means it can’t be relied on to win any debate where nuance is a critical factor.
5. It also erodes our own critical thinking
Over-reliance on ChatGPT is eroding our capacity for critical thought. When we let the chatbot hand us all the answers, we stop cross-checking, questioning, and researching topics ourselves - and that erosion could have a serious impact on the way we communicate. So, the next time you find yourself saying, “ChatGPT, something went wrong”, remember that the technology is still very much in its infancy and will take some time to reach a state of trustworthiness.