New Algorithm to Help Manage Misinformation by AI Chatbots

An Australian politician, Brian Hood, recently found himself in a legal dispute with OpenAI, the maker of the AI chatbot ChatGPT. Hood discovered that the chatbot was spreading false information about him, claiming he was a convicted criminal. The incident raised concerns about the harm AI programs can cause when they generate inaccurate or misleading results. The core problem with these chatbots is that retraining them to correct their mistakes is both expensive and time-consuming, so scientists in the field are exploring more targeted solutions.

Hood attempted to resolve the issue by contacting OpenAI directly, but he found their response to be unhelpful. Eventually, a new version of the software was released, which no longer provided the false information. However, Hood never received an explanation for the initial misinformation. Despite the lack of assistance from OpenAI, Hood expressed his gratitude for the significant media coverage his case received, as it helped correct the public’s perception of him.

OpenAI has not commented on the matter, leaving many to speculate on the potential impact of inaccurate AI-generated information. This issue may become more prevalent as companies like Google and Microsoft incorporate AI technology into their search engines. With AI models, the process of removing specific information is not as straightforward as deleting individual entries from a search engine index. Consequently, a group of scientists has begun developing a new field called machine unlearning. This field aims to train algorithms to forget particular data segments that may be offensive or incorrect.
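To make the idea concrete, below is a minimal, purely illustrative sketch of one well-known unlearning baseline: fine-tuning a language model with gradient ascent on a small "forget set" (text it should stop reproducing) while keeping the loss low on a "retain set" so general ability is preserved. This is not the specific algorithm developed by the researchers mentioned in this article; the model name, texts, and hyperparameters are placeholders chosen for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: raise the model's loss on the "forget set" while keeping
# it low on a "retain set". All names and values below are placeholders.
model_name = "gpt2"  # stand-in; production chatbots are far larger
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

forget_texts = ["<false statement the model should no longer generate>"]
retain_texts = ["<ordinary text whose behaviour should stay unchanged>"]

def lm_loss(text):
    """Standard next-token prediction loss on a single piece of text."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    return model(input_ids=ids, labels=ids).loss

for step in range(10):  # a handful of targeted updates, not full retraining
    optimizer.zero_grad()
    forget_loss = sum(lm_loss(t) for t in forget_texts)
    retain_loss = sum(lm_loss(t) for t in retain_texts)
    # Negating the forget loss turns gradient descent into ascent on that
    # data, nudging the model away from the unwanted content.
    loss = -forget_loss + retain_loss
    loss.backward()
    optimizer.step()
```

The appeal of this style of approach, compared with retraining from scratch, is that it touches the model for only a few steps on a small amount of data; the open research question is how to forget the targeted content without degrading everything else the model knows.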

Meghdad Kurmanji, an expert from Warwick University, stated that machine unlearning has gained significant attention in recent years. In fact, Google DeepMind, a prominent AI research laboratory, co-authored a paper with Kurmanji introducing an algorithm to remove selected data from large language models. This can have implications for chatbots like ChatGPT and Google’s own Bard chatbot. Google has also launched a competition to further refine unlearning methods, with over 1,000 participants already involved.

Kurmanji believes that machine unlearning could be an effective tool for search engines to address takedown requests related to data privacy laws. Additionally, his algorithm has performed well in tests focusing on removing copyrighted material and addressing biases. However, not all AI experts are equally enthusiastic about machine unlearning. Yann LeCun, the AI chief at Meta (the parent company of Facebook), expressed that while the concept is interesting, it does not currently rank high on their list of priorities. LeCun emphasized the importance of making algorithms learn faster and improve their fact retrieval capabilities.

Despite varying opinions within the AI community, it is widely acknowledged in academia that AI companies must be able to remove information from their models to comply with data protection regulations such as the European Union's General Data Protection Regulation (GDPR). Lisa Given from RMIT University said this capability will be critical to the future of AI, but she also pointed out how little is known about how AI models work and what data they were trained on, which means a solution to these issues is still a long way off.

Michael Rovatsos of Edinburgh University echoed Given’s concerns, particularly in the context of responding to numerous takedown requests. Additionally, Rovatsos emphasized that machine unlearning does not address broader questions about the AI industry, such as data gathering practices, accountability for harmful algorithms, or profit distribution. He stated that a technical solution alone cannot resolve these complex issues.

While the research on machine unlearning is still in its early stages and regulations surrounding the field are limited, skeptics and proponents alike agree that users should double-check information generated by chatbots to ensure accuracy. Brian Hood, despite his negative experience with ChatGPT, remains a supporter of AI but emphasizes the need for users to verify the information provided by these chatbots.
