Why does AI often "hallucinate" or make mistakes?

According to Microsoft CFO Shereen Chalak, the root cause is often bad or biased data. Speaking at the New Age Finance and Accounting Summit in Dubai, she explained that AI models depend heavily on clean, well-structured, and well-governed data—without it, even the most advanced systems can produce inaccurate or discriminatory results. She cited a real-world example where a global bank's AI lending model unfairly favored men in a certain age and income group due to flawed data. Chalak urged companies to prioritize data quality, build transparent systems, and create a culture that rewards innovation and responsible AI use. What do you think—should organizations be held accountable for AI bias caused by poor data? Share your thoughts below.
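To make the lending example concrete, here is a minimal, purely hypothetical sketch in Python (synthetic data, scikit-learn's LogisticRegression; the groups, thresholds, and numbers are invented for illustration and are not taken from the bank case Chalak described). A model trained on approval records that historically favoured men in a certain age and income band simply reproduces that skew in its own predictions:

```python
# Minimal, hypothetical sketch: a lending model trained on historically biased
# approvals reproduces that bias. All groups, thresholds, and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicants: gender (1 = male), age in years, income in thousands.
gender = rng.integers(0, 2, n)
age = rng.integers(21, 70, n)
income = rng.normal(60, 15, n)

# Flawed historical labels: men aged 30-45 above a modest income were approved
# far more often, independent of anything resembling creditworthiness.
base = (income > 50).astype(float)
favoured = (gender == 1) & (age >= 30) & (age <= 45) & (income > 40)
approved = (rng.random(n) < 0.5 * base + 0.4 * favoured).astype(int)

# Train on the biased history, then audit predicted approval rates by gender.
X = np.column_stack([gender, age, income])
model = LogisticRegression(max_iter=1000).fit(X, approved)
proba = model.predict_proba(X)[:, 1]

for g, label in [(1, "men"), (0, "women")]:
    print(f"Mean predicted approval probability for {label}: {proba[gender == g].mean():.2%}")
```

Running this typically prints a noticeably higher predicted approval rate for men than for women, even though gender says nothing about repayment ability in the synthetic data, which is exactly the kind of outcome Chalak warns poor data quality and governance can produce.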
