AI Large Language Models Susceptible to Medical Misinformation, Mount Sinai Study Reveals
Mount Sinai research shows that AI large language models accept medical misinformation 32-46% of the time, especially when it is framed as expert advice.
AI-manipulated images of the Minneapolis shootings go viral with 9 million views as a senator displays a fake photo in the Senate, raising concerns about digital authenticity.
Tests reveal that OpenAI's latest ChatGPT model is citing Elon Musk's AI-generated encyclopedia, Grokipedia, as a source, prompting concerns about the spread of misinformation and biased narratives.
Experts are raising concerns that Google's AI Overviews can provide 'completely wrong' medical advice, putting public health at risk. A new study found that AI Overviews cite YouTube more often than any medical website, creating an 'unregulated medical authority'.
A new analysis of the AI Incident Database reveals that reported AI-related harms rose roughly 50% year over year from 2022 to 2024, with a significant spike in incidents involving deepfakes and malicious uses of AI.