In the picture above, ChatGPT recommended a book that was a completely wrong answer, then corrected its previous response with an apology.
Did you notice the mistake? When we pointed out the error, ChatGPT quickly corrected itself. Unfortunately, even though it admitted the mistake, the exchange shows how confidently AI can get a question wrong that is easily answered on any number of websites.
AI chatbots have limited information but are programmed to respond anyway. They rely on their training data and can also refine their behavior through machine learning and interactions with users. If a chatbot refused to respond at all, it would have no opportunity to be corrected. This is part of why AI sometimes makes mistakes and then learns from them.
While this is simply the nature of AI, you can see how it could become a big problem. Most people don't fact-check their Google searches, and the same goes for chatbots like ChatGPT. Users can end up absorbing false information, with consequences that may not surface until much later.
2. Artificial intelligence can easily be used to manipulate information
It's no secret that AI can be unreliable and prone to error, but one of its most troubling tendencies is distorting information. The problem is that the AI does not have an accurate understanding of the context of your question, which can lead it to bend the facts to fit the answer it is constructing.
This is exactly what happened with Microsoft Bing Chat. A user on Twitter asked for the release date of the new Avatar movie, but the chatbot refused to provide the information, claiming that the movie had not been released yet.
Of course, you could easily dismiss this answer as a bug. Even so, it doesn't change the fact that AI tools are imperfect, and we should proceed with caution.