Large Language Models (LLMs), such as those used in chatbots, have an alarming tendency to hallucinate: that is, to generate false content and present it as accurate. These AI hallucinations pose, among other risks, a direct threat to science and scientific truth, researchers at the Oxford Internet Institute warn. According to their paper, published in Nature Human Behaviour, "LLMs are designed to produce helpful and convincing responses without any overriding guarantees regarding their accuracy or alignment with fact." LLMs are currently treated as knowledge sources and generate information in response to questions or prompts. But the data they're…