How Microsoft’s AI chatbot ‘hallucinates’ election information


The coming year will be a busy one for democracy, with major elections due in the US, the EU, and Taiwan, among others. It arrives just as the rollout of generative AI gathers pace in earnest, something some fear could prove detrimental to the democratic process. One of the main concerns about generative AI is that it could be used to spread disinformation maliciously, or that models can fabricate false statements and present them as fact, so-called hallucinations. A study from two European NGOs has found that Microsoft's chatbot, Bing AI (running on OpenAI's GPT-4)…

