A recent study published in Nature Human Behaviour warns that the Large Language Models (LLMs) behind chatbots can be dangerous: they are capable of hallucinating, presenting false content as if it were accurate. Because these models are designed to generate helpful-sounding responses with no guarantee of factual accuracy or alignment with fact, the authors argue they pose a threat to science and scientific truth.
The paper emphasizes the importance of information accuracy in science and education and urges the scientific community to use LLMs as “zero-shot translators”: users should supply the model with the relevant data and ask it to transform that data into a conclusion or code, rather than relying on the model itself as a source of knowledge. This approach makes it far easier to verify that the output is factually correct and consistent with the provided input.
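The zero-shot-translator workflow can be sketched in code. The snippet below is a minimal illustration, not an implementation from the paper: the function names, the prompt wording, and the grounding check are all assumptions made for the example. The idea is simply that the user's own data goes into the prompt, and the output is checked against that data afterward.

```python
import re


def build_translation_prompt(data: dict, instruction: str) -> str:
    """Embed the user's own data in the prompt so the model acts as a
    translator of that data, not as a knowledge source (hypothetical
    prompt format, for illustration only)."""
    lines = [f"{k}: {v}" for k, v in data.items()]
    return (
        "Using ONLY the data below, " + instruction + "\n\n"
        "DATA:\n" + "\n".join(lines)
    )


def output_is_grounded(output: str, data: dict) -> bool:
    """Cheap post-hoc check, standing in for human verification: every
    number quoted in the output must appear in the supplied data."""
    supplied = {str(v) for v in data.values()}
    numbers = re.findall(r"\d+(?:\.\d+)?", output)
    return all(tok in supplied for tok in numbers)


measurements = {"trial_1": 4.2, "trial_2": 4.8, "trial_3": 5.0}
prompt = build_translation_prompt(
    measurements, "summarise the trials in one sentence."
)

# A response grounded in the supplied data passes the check; one that
# cites an unsupplied figure fails and should be rejected by the user.
print(output_is_grounded("Trials ranged from 4.2 to 5.0.", measurements))  # True
print(output_is_grounded("The mean was 6.1.", measurements))               # False
```

The point of the sketch is the division of labor: the model only reformulates data the user already trusts, so verifying the result reduces to comparing output against input rather than fact-checking an open-ended claim.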
The researchers also caution that LLMs are trained on online sources, which can contain false statements, opinions, and inaccurate information. Because chatbots are designed to sound human-like, users tend to trust them as reliable informants, believing responses are accurate even when they have no basis in fact or present a biased, partial version of the truth.
While LLMs will undoubtedly assist with scientific workflows, the scientific community must use them responsibly and maintain clear expectations of what they can contribute. The paper concludes that LLMs should be treated not as knowledge sources but as tools for generating useful responses from data the user supplies.