AI Can Now Detect Its Own Hallucinations in Real-Time, Study Shows

Researchers have developed a system that can detect factual inaccuracies, or ‘hallucinations,’ in long-form text generated by artificial intelligence models as the text is being produced. This real-time capability comes from a ‘streaming hallucination detector’ trained on a dataset of over 40,000 annotated text samples produced by various open-source AI models. The system identifies hallucinations at the entity level – specifically, incorrect or fabricated details about named entities. Interestingly, the training data itself was annotated with the help of closed-source AI models. The paper is being discussed on Reddit: [https://old.reddit.com/r/artificial/comments/1nxhva9/most_interestinguseful_paper_to_come_out_of/]
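To make the idea of entity-level streaming detection concrete, here is a minimal sketch. It is not the paper's method: the actual system uses a trained classifier, while this toy stands in with a lookup against a small hand-made trusted reference set, flagging unverified named entities as text chunks arrive.

```python
import re

# Hypothetical trusted reference set; the real detector is a trained model,
# not a lookup table.
TRUSTED = {"Marie Curie", "Warsaw", "Nobel Prize"}

def find_entities(text):
    # Naive named-entity spotting: runs of capitalized words.
    return re.findall(r"(?:[A-Z][a-z]+)(?: [A-Z][a-z]+)*", text)

def streaming_flags(chunks):
    """Consume text chunk by chunk, flagging unverified entities as they appear."""
    seen, flagged, buf = set(), [], ""
    for chunk in chunks:
        buf += chunk  # accumulate the stream so far
        for ent in find_entities(buf):
            if ent not in TRUSTED and ent not in seen:
                seen.add(ent)
                flagged.append(ent)  # flag in real time, mid-generation
    return flagged

chunks = ["Marie Curie was born in ", "Berlin and won the ", "Nobel Prize."]
print(streaming_flags(chunks))  # flags "Berlin" as an unverified entity
```

The point of the streaming design is that "Berlin" is flagged as soon as the second chunk arrives, rather than after the full passage is complete.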