Google delays launch of Bard chatbot in EU due to privacy concerns

The tech giant Google LLC (NASDAQ: GOOGL) has delayed the launch of its Bard chatbot in the European Union after intervention from the Irish Data Protection Commission (DPC). The DPC cited privacy concerns as the reason for its decision: according to the regulator, Google has not provided sufficient information about Bard, and specifically about how the generative artificial intelligence (AI) tool protects the privacy of Europeans. In short, Google has not yet justified launching Bard in the EU.

Deputy Commissioner Graham Doyle commented:

“Google recently informed the Data Protection Commission of its intention to launch Bard in the EU this week. The DPC had not had any detailed briefing nor sight of a DPIA [data protection impact assessment] or any supporting documentation at this point. It has since sought this information as a matter of urgency and has raised a number of additional data protection questions with Google to which it awaits a response and Bard will not now launch this week.”

Meanwhile, a Google representative stated that the company had been in talks with the Irish Data Protection Commission and had made Bard’s planned EU launch date clear to regulators.

A Google spokesperson explained:

“We said in May that we wanted to make Bard more widely available, including in the European Union, and that we would do so responsibly, after engagement with experts, regulators, and policymakers. As part of that process, we’ve been talking with privacy regulators to address their questions and hear feedback.”

Before Bard can be approved to go live in Europe, Google must provide detailed answers to the Irish Data Protection Commission’s list of questions.

Google announced Bard back in February and opened it to US users the following month. The chatbot quickly gained traction, expanding to the United Kingdom, Australia, India, Argentina, and other markets. It is now available in over 180 countries but has not yet reached the EU.

EU’s Regulation of AI

The EU’s approach to regulating the development and use of artificial intelligence is currently the strictest in the world. The European AI Strategy aims to make the EU a world-class hub for AI and to ensure that AI is human-centric and trustworthy. In pursuit of this goal, EU regulators have drawn up a legal framework that includes the AI Act, an AI Liability Directive, and a revised Product Liability Directive.

Under the AI Act, AI systems are categorized into four levels of risk: minimal, limited, high, and unacceptable, with specific obligations attached to each level. While minimal-risk AI faces only light requirements, applications posing unacceptable risk are banned outright. The latter include systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring.
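To make the tiering concrete, here is a minimal sketch of the four-level scheme as a simple lookup. The tier names come from the AI Act itself, but the example systems, the mapping, and every identifier below are illustrative assumptions for this article, not definitions from the legislation.

```python
from enum import Enum

class RiskLevel(Enum):
    # The four risk tiers named in the AI Act, least to most restricted.
    MINIMAL = "minimal"            # largely unregulated
    LIMITED = "limited"            # transparency obligations
    HIGH = "high"                  # strict requirements before deployment
    UNACCEPTABLE = "unacceptable"  # prohibited outright

# Hypothetical example systems mapped to tiers, purely for illustration;
# the Act assigns categories through detailed legal criteria, not code.
EXAMPLE_SYSTEMS = {
    "spam_filter": RiskLevel.MINIMAL,
    "customer_service_chatbot": RiskLevel.LIMITED,
    "cv_screening_tool": RiskLevel.HIGH,
    "social_scoring_system": RiskLevel.UNACCEPTABLE,
}

def is_permitted(system: str) -> bool:
    """An unacceptable-risk system is banned; anything else may operate
    subject to the obligations of its tier."""
    return EXAMPLE_SYSTEMS[system] is not RiskLevel.UNACCEPTABLE

print(is_permitted("customer_service_chatbot"))  # True
print(is_permitted("social_scoring_system"))     # False
```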

The proposed rules have raised concerns in the tech industry. Some experts believe that if the scope of the AI Act is drawn too broadly, it could sweep in harmless forms of AI.