MIT Researchers Develop New Method to Identify Overconfident Large Language Models and Flag Hallucinations
MIT researchers have introduced a total uncertainty metric that compares a model's outputs with those of an ensemble of LLMs from different developers, detecting overconfident and hallucinated predictions more accurately than existing self-consistency methods.
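
The article does not include the researchers' implementation, but the core idea lends itself to a short sketch. The snippet below is a minimal illustration, not the paper's method: it assumes each model's output on a question has been summarized as a probability distribution over a shared set of candidate answers (for example, by sampling each model several times and counting answer frequencies), and the function names (`entropy`, `total_uncertainty`, `flag_overconfident`) and the 0.5-nat threshold are placeholders invented for this example.

```python
import numpy as np


def entropy(p: np.ndarray) -> float:
    """Shannon entropy in nats, ignoring zero-probability entries."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())


def total_uncertainty(ensemble_probs: np.ndarray) -> float:
    """Entropy of the ensemble-averaged answer distribution.

    ensemble_probs has shape (n_models, n_answers): each row is one
    model's distribution over the same candidate answers.
    """
    return entropy(ensemble_probs.mean(axis=0))


def flag_overconfident(model_probs: np.ndarray,
                       ensemble_probs: np.ndarray,
                       threshold: float = 0.5) -> bool:
    """Flag a likely hallucination: the model under test is confident
    (low self-entropy) while the pooled cross-developer ensemble is
    not (high total uncertainty). The threshold is an arbitrary
    placeholder, not a value from the paper."""
    return (entropy(model_probs) < threshold
            and total_uncertainty(ensemble_probs) > threshold)


# Three models' answer distributions over four candidate answers.
probs = np.array([
    [0.90, 0.05, 0.03, 0.02],  # model under test: very confident
    [0.10, 0.70, 0.10, 0.10],  # a second developer's model disagrees
    [0.20, 0.20, 0.50, 0.10],  # a third developer's model also disagrees
])
print(flag_overconfident(probs[0], probs))  # True: confident but contested
```

The intuition this sketch captures is the one the summary describes: a single model's self-consistency can look high even when it is wrong, whereas a gap between individual confidence and disagreement across independently trained models is a stronger signal that the confident answer may be hallucinated.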
