The study by researchers from the University at Buffalo, in collaboration with the University at Albany and the Chinese University of Hong Kong, suggests that large language models (LLMs) may be more effective at detecting deepfakes than current state-of-the-art algorithms. Although LLMs were not designed for deepfake detection, their natural language processing capabilities and semantic knowledge make them well-suited to the task.
The study found that LLMs such as OpenAI’s ChatGPT and Google’s Gemini could identify synthetic artifacts in images produced by different generation methods with accuracy comparable to earlier deepfake detection algorithms. However, LLMs may struggle to capture statistical differences at the signal level, which limits their detection capabilities in some cases. Some models, Gemini among them, may also offer nonsensical explanations for their analyses or refuse to analyze images altogether.
Despite these limitations, the researchers suggest that fine-tuning LLMs for deepfake detection could improve their performance and make them more efficient tools for users and developers. By leveraging their unique strengths in natural language processing and semantic knowledge, LLMs like ChatGPT could play a valuable role in combating the spread of AI-generated misinformation in the future.
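To make the workflow concrete, here is a minimal, hypothetical sketch of how a developer might prompt a vision-capable LLM to look for synthetic artifacts. The message structure follows OpenAI's chat-style vision format; the prompt wording, the artifact checklist, and the `build_deepfake_prompt` helper are illustrative assumptions, not the method used in the study. The sketch only builds the request payload, so it runs without an API key.

```python
import base64

def build_deepfake_prompt(image_bytes: bytes) -> list:
    """Construct a chat-style message asking a vision-capable LLM to
    assess whether an image shows signs of being AI-generated.
    Hypothetical example; not the prompt used by the researchers."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Examine this photo for signs it was AI-generated: "
                        "inconsistent lighting, warped text, asymmetric "
                        "facial features, or unnatural textures. "
                        "Answer 'real' or 'synthetic' and explain why."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                },
            ],
        }
    ]

# Build the payload for a (dummy) JPEG; in practice this list would be
# passed as the `messages` argument of a chat-completions API call.
messages = build_deepfake_prompt(b"\xff\xd8\xff\xe0 dummy jpeg bytes")
```

Because the model returns a natural-language explanation rather than just a score, this kind of prompt plays to the semantic strengths the study highlights, while the signal-level statistics that specialized detectors exploit remain outside what the prompt can surface.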