Why is ChatGPT Multi-talented but Easily Tricked?

This blog post was written by Tonye Harry as part of the Deepchecks Community Blog. If you would like to contribute your own blog post, feel free to reach out to us via blog@deepchecks.com. We typically pay a symbolic fee for content that's accepted by our reviewers.


Have you ever interacted with a chatbot and been amazed at how human-like its responses were? Chances are you were using ChatGPT, a chatbot built by OpenAI on top of its GPT-3.5 series of language models. The launch of ChatGPT caused quite a stir on the internet, with the chatbot gaining over one million users in just five days!

ChatGPT is designed to process and generate human-like responses to prompts, based on the input it has been trained on. This makes it a multi-talented tool, capable of performing a wide range of tasks, including question answering, language translation, summarization, code generation, and even creative writing. It is useful for various applications like customer service, language learning, debugging code, content creation, and much more.

However, despite its impressive abilities, ChatGPT is not without its flaws. For one, it can be easily deceived by certain types of inputs.

In this article, we explore the reasons behind ChatGPT’s multi-talented yet credulous nature and discuss ways to use it effectively while avoiding common pitfalls. Let’s go over the good, the bad, and the ugly of ChatGPT.


Fig. 1: ChatGPT Interface. Source: Wikipedia

How does ChatGPT work?

ChatGPT is a language model AI, which means it is specifically designed to process and generate human language. To accomplish this, it learns structures and patterns in language by analyzing large amounts of text data such as books, articles, and conversations. Once it has been trained on this data, ChatGPT generates responses to prompts based on the input it receives. If you ask ChatGPT a question, it will try to generate a relevant and coherent answer using the patterns and structures it learned from its training data. This process is known as “natural language generation” (NLG), and it allows ChatGPT to complete a wide range of tasks like language translation and summarization.
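To make "learning patterns from text" concrete, here is a toy sketch. This is not ChatGPT's actual architecture (which is a large transformer neural network), and all names below are illustrative: a simple bigram model counts which word follows which in a training corpus, then generates text by sampling from those counts.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """'Learn' the corpus: record, for each word, the words that followed it."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=5):
    """Generate text by repeatedly sampling a likely next word."""
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:  # never seen this word followed by anything
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ran"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Real language models replace the word-pair counts with billions of learned neural-network parameters, but the principle is the same: predict a plausible continuation given what came before.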

Note that you can add follow-up prompts and ChatGPT understands their context, owing to its ability to refer back to earlier parts of the conversation (up to approximately 3,000 words from the current conversation). This feature allows ChatGPT to maintain coherence and relevance in the conversation, even as it progresses. The variety of tasks ChatGPT can perform is remarkable: it can generate music and poetry, write and debug code, summarize text, translate languages, do creative writing (such as movie scripts), analyze data, and much more.
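The rolling context window described above can be sketched as follows. This is a simplification, not OpenAI's implementation: the helper name and the 3,000-word budget are illustrative, and real models measure context in tokens rather than words.

```python
def trim_history(history, word_budget=3000):
    """Keep only the most recent turns that fit in the word budget,
    mimicking how ChatGPT only 'sees' a limited slice of the chat."""
    kept, used = [], 0
    for turn in reversed(history):  # walk from the newest turn backwards
        words = len(turn.split())
        if used + words > word_budget:
            break                   # older turns no longer fit
        kept.append(turn)
        used += words
    return list(reversed(kept))     # restore chronological order

history = ["user: hi", "bot: hello there", "user: " + "word " * 2999]
context = trim_history(history)
# only the newest turn fits; the earlier turns are dropped
```

This is also why very long conversations eventually "forget" their beginning: anything outside the budget simply never reaches the model.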


Fig. 2: Sitcom Script Generated by ChatGPT. Source: somethingawful



Understanding ChatGPT Limitations: A Closer Look

Despite its impressive skills, ChatGPT has several limitations that can impact the accuracy and efficiency of its responses.

A significant limitation is its heavy reliance on the data it’s been trained on. As a Machine Learning model, ChatGPT’s responses are only as accurate as the data it was fed during training. If the training data contains errors, biases, or other inconsistencies, ChatGPT can generate inaccurate or untrustworthy information.

The images below demonstrate this: the first shows ChatGPT generating an incorrect response, and the second shows a biased one.


Fig. 3: ChatGPT confidently generates a wrong answer. Animals whose scientific (genus and species) names match their common English names include “Gorilla gorilla” and “Boa constrictor”, and none of these were mentioned in the response. Source: Author


Fig. 4: ChatGPT generates biased output. Source: Spiantado

Moreover, ChatGPT tends to interpret input literally, which makes it poor at recognizing irony or sarcasm. The model cannot reason about or understand concepts, and it lacks the common sense and logic humans apply naturally. As a result, it may sometimes generate responses that miss the prompt’s intended tone.

ChatGPT’s reliance on its training data gives way to another limitation: any prompt outside the scope of that data will likely result in irrelevant or inaccurate responses, because certain topics or concepts are simply not represented in it. It is also worth noting that ChatGPT’s knowledge ends in 2021, so its responses will fall short whenever a question requires current information.

Another drawback is that ChatGPT doesn’t generalize well to new inputs or prompts (overfitting), especially where its training data was small, highly specific, or non-diverse. This can make the generated responses overly specific, repetitive, or lacking in diversity.


Fig. 5: ChatGPT doesn’t provide real-time information. Source: OpenAI

In addition, ChatGPT relies on the context and structure of the input it receives to generate relevant and coherent responses. It may struggle to respond if the prompt is not specific enough, or it may produce out-of-context responses because it bases them on its understanding of individual prompts rather than the overall conversation. For the same reason, ChatGPT tends to generate incomplete or nonsensical responses when given an incomplete, ambiguous, or confusing prompt. When such a prompt is crafted deliberately to mislead or confuse the model, it is known as an “adversarial example”.


Fig. 6: ChatGPT tripped up by a confusing prompt. Source: Andrew Ng

Improving ChatGPT Responses

Both the creators of the service and its users can improve ChatGPT’s responses in several ways:

  • One solution available to the creators is to carefully curate the training data used to build the model, making sure it is as accurate, consistent, and representative as possible. This minimizes the risk of errors or biases being introduced into the model and so reduces ChatGPT’s mistakes.
  • Additionally, users should avoid ambiguous or confusing language when interacting with ChatGPT, as well as intentionally provocative or humorous prompts (trolling inputs), since ChatGPT’s ability to generate a relevant, coherent response depends on the context and structure of the input it receives.
  • Another potential solution might lie with ChatGPT itself. Large Language Models (LLMs) like the GPT-3 family behind ChatGPT can self-improve: they can generate their own training data and produce high-confidence answers through chain-of-thought (CoT) prompting and self-consistency, improving the correctness of their responses. This is based on research by Jiaxin Huang et al. at Google. Simply put, the model repeatedly reflects on itself, poses questions about different topics, and practices answering them.
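The self-consistency idea can be sketched in a few lines: sample several independent reasoning paths for the same question and keep the answer that appears most often. The model stub below is purely illustrative (a stand-in that answers correctly about 70% of the time), not a real LLM call.

```python
import random
from collections import Counter

def self_consistency(sample_answer, n_samples=9):
    """Self-consistency: sample several reasoning paths and keep the
    final answer the model arrives at most often (majority vote)."""
    answers = [sample_answer() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

def noisy_model():
    """Hypothetical stand-in for a model whose chain-of-thought
    reaches the right answer ~70% of the time."""
    return "42" if random.random() < 0.7 else "41"

random.seed(0)
print(self_consistency(noisy_model))
```

Because the vote aggregates many samples, an individually unreliable reasoner can still yield a reliable majority answer, which is the core observation behind the self-improvement research cited above.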

A more convenient but expensive way of improving LLMs might be regular model retraining with new data collected by the creators, plus self-training with augmented data. Relying on users to help improve the system is less logistically demanding, but may not work well enough given the variability in end-user prompts in terms of ambiguity and coherence. LLMs like GPT-3, LaMDA, and PaLM are powerful tools that can benefit the end user because they are well-trained conversational AI technologies, and the creators of such models have ample resources to maintain and improve them. So in this case, cost might not be an issue.


It is important to remember that ChatGPT is a Machine Learning model and cannot be expected to understand or process human emotions and intentions the way a human would. Consequently, it may generate inappropriate or offensive responses after misreading the tone or intent of a prompt. ChatGPT is also not the only language model vulnerable to adversarial examples or trolling inputs; models like GPT-3 and BERT are susceptible to the same kinds of input. Even with these limitations, such models can still be used effectively and produce good results if you understand their weaknesses and take steps to mitigate the risks. With GPT-4 scheduled for release in 2023, the current limitations of ChatGPT may also prove short-lived: the new model is expected to have significantly more parameters than its predecessor, potentially allowing it to produce more accurate responses faster and to address ethical issues, lack of diversity, limited knowledge, and more.

While ChatGPT can be a useful tool, it is not a replacement for human judgment and expertise. Make sure to use ChatGPT in combination with other resources, and be prepared to verify and validate its responses before relying on them.

