A Reddit user has shared an intriguing discovery on the r/artificial forum, claiming to have found a technique, dubbed ‘Nexus,’ that reduces AI hallucinations when using high temperature settings in Large Language Models (LLMs) such as Llama. The user reports that ‘Nexus’ permits higher temperature settings – which typically yield more creative but often incoherent output – without the results becoming unreadable. While encouraging others to experiment with the ‘Nexus’ effect, the user cautions that results may vary. The original post is here: https://old.reddit.com/r/artificial/comments/1nh8vpo/nexus_how_i_went_from_max_temp_130_before/
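The post does not describe how ‘Nexus’ itself works, but the temperature parameter it tunes is standard: logits are divided by the temperature before the softmax, so higher values flatten the distribution and make unlikely (often nonsensical) tokens more probable. The sketch below illustrates that mechanism only; the token list, logits, and function names are illustrative assumptions, not anything from the original post.

```python
import math
import random

def softmax_with_temperature(logits, temperature=1.0):
    # Scale logits by 1/temperature before the softmax:
    # T < 1 sharpens the distribution toward the top token,
    # T > 1 flattens it, boosting low-probability tokens.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng=random):
    # Draw one token from the temperature-adjusted distribution.
    probs = softmax_with_temperature(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token candidates and scores, for illustration.
tokens = ["the", "cat", "xylophone"]
logits = [4.0, 2.0, 0.5]
for t in (0.7, 1.3):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: {[round(p, 3) for p in probs]}")
```

At T=0.7 the top-scoring token dominates; at T=1.3 the mass spreads toward the unlikely tokens, which is why very high temperatures tend to produce unreadable text unless something like the claimed ‘Nexus’ effect counteracts it.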