Got It AI Develops AI to Identify and Address ChatGPT Hallucinations for Enterprise Applications
Company says conversational AI chatbots for enterprise knowledge bases cannot afford to be wrong 15% to 20% of the time

Got It AI, the Autonomous Conversational AI company, announced an innovative new “Truth Checker” AI that can identify when ChatGPT is hallucinating (generating fabricated answers) while answering user questions over a large set of articles or a knowledge base. This innovation makes it possible to deploy ChatGPT-like experiences without the risk of presenting incorrect responses to users or employees. Enterprises can now confidently deploy generative conversational AIs that leverage large-scale knowledge bases, such as those used for external customer support or internal user support queries.

The Truth Checker AI uses a separate, advanced Large Language Model (LLM)-based AI system and a target domain of content (e.g., a large knowledge base or a collection of articles) to train itself autonomously for one task: truth checking. ChatGPT, or an underlying LLM such as a GPT-3.5 model, provided with the same content, can then be used to answer questions in a contextual, multi-turn chat dialog. Each response is evaluated for truthfulness before being presented to the user. Whenever an inaccurate response is detected, it is withheld, and a reference to the relevant articles that contain the answer is provided instead.
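
For illustration, the following is a minimal Python sketch of the response flow described above, not Got It AI's proprietary system: the helper names (find_supporting_articles, generate_answer, check_truthfulness) are hypothetical placeholders standing in for a retriever, a generative LLM, and the truth-checking model.

```python
# Sketch of the generate-then-verify flow described in the article.
# All helper callables are assumed placeholders, not a real vendor API.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ChatTurn:
    question: str
    answer: str                           # response shown to the user
    references: List[str] = field(default_factory=list)  # fallback article links


def answer_with_truth_check(
    question: str,
    knowledge_base: List[str],
    find_supporting_articles: Callable[[str, List[str]], List[str]],
    generate_answer: Callable[[str, List[str]], str],
    check_truthfulness: Callable[[str, str, List[str]], bool],
) -> ChatTurn:
    """Draft an answer, verify it against the source content, and fall back
    to article references if the verifier flags a hallucination."""
    # 1. Retrieve the articles relevant to the question.
    articles = find_supporting_articles(question, knowledge_base)

    # 2. Draft a response with the generative LLM (e.g. a GPT-3.5 model),
    #    grounding it in the retrieved articles via the prompt.
    candidate = generate_answer(question, articles)

    # 3. Ask the separate truth-checking model whether the candidate answer
    #    is actually supported by the provided content.
    if check_truthfulness(question, candidate, articles):
        return ChatTurn(question, candidate)

    # 4. Hallucination detected: withhold the generated answer and point
    #    the user at the relevant articles instead.
    return ChatTurn(
        question,
        answer="I couldn't verify an answer to that; please see these articles.",
        references=articles,
    )
```
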


“We tested our technology with a dataset of 1000+ articles across multiple knowledge bases using multi-turn conversations with complex linguistic structures such as co-reference, context, and topic switches,” said Chandra Khatri, former Alexa Prize team leader and co-founder of Got It AI. “ChatGPT produced incorrect responses for about 20% of the queries when given all the relevant content for the query in its prompt space. Our Truth Checker AI was able to detect 90% of the inaccurate responses, without human help. We will also provide the customer with a simple user interface to the Truth Checking AI, to further optimize it, identify the remaining inaccuracies and eliminate virtually all inaccurate responses.”

“Our technology is a major breakthrough in autonomous conversational AI for ‘known’ domains of content, such as enterprise knowledge bases, versus ‘open domain’ content such as the entire world wide web,” said Amol Kelkar, formerly an architect for Microsoft Office Online and co-founder of Got It AI. “It goes beyond prompt engineering, fine-tuning, or just a UI layer. It is a proprietary model that enables us to deliver scalable, accurate, and fluid conversational AI for customers planning to leverage generative LLMs. Truth checking the generated responses cost-effectively is the key capability that closes the gap between an R&D system and an enterprise-ready system.”

Globe Newswire

GlobeNewswire is one of the world's largest newswire distribution networks, specializing in the delivery of corporate press releases, financial disclosures, and multimedia content to the media, the investment community, individual investors, and the general public.