Europe joins the AI fray

Scarcely a day goes by without artificial intelligence making news in some form or other. Much of it is about the new applications for AI, which have been used to create new beers, diagnose depression, detect cardiac arrests, and even write poetry. But there are also ominous warnings about the dangers of AI, with Google co-founder Sergey Brin last month joining Tesla’s Elon Musk, Microsoft founder Bill Gates and the late Stephen Hawking worrying about the technology’s threat to humanity.

AI may still be in its infancy, but it’s moving fast. And regulators are starting to sit up and notice. On April 26, the European Commission published a strategy paper calling on Europe to spend at least €20 billion a year from 2020 on AI if it is to catch up with its rivals. “Just as the steam engine and electricity did in the past, AI is transforming our world,” said European Commission Vice-President Andrus Ansip.

The Commission paper is a response to the many socio-economic challenges and opportunities of AI – and the increasing public uncertainty about the technology. While we are some way from the moment when robots might take over the world, there have been incidents raising concerns, from the revelations about Facebook’s role in swaying voters in the 2016 US elections to the road accidents involving self-driving cars.

Our own survey, the BrAInstorm, confirms this unease: people want to know more about how and when AI is being used. The questions whirling around AI show how tech policy is stepping out of its previously niche corner: now, we are all stakeholders in these issues.

Big opportunities

Yet the opportunities are substantial. And Europe is producing some exciting AI ventures, including SwissCognitive’s AI information hub; British AI start-up Cortexica’s ‘visual search engine’ for retailers; German-based robotics company Kuka; the Chat Yourself messenger service helping Alzheimer’s sufferers; and the German Research Centre for AI (DFKI), which is collaborating on Easy2Go, a mobile app that uses machine learning to simplify journeys across Europe.

AI has even become an academic field, with Fujitsu setting up the AI Center of Excellence (CoE) in the Université Paris-Saclay, and the European Institute of Innovation and Technology (EIT) funding the Ai-Move health programme. In April, a group of leading scientists unveiled plans for a vast multinational European Lab for Learning and Intelligent Systems (ELLIS) devoted to world-class AI research, echoing the CERN particle physics lab near Geneva.

The Commission’s AI strategy tries to address the opportunities, as well as the fears such as loss of control, job losses, and Europe becoming a buyer of AI rather than a producer. It sets out three priorities:

  • raise investment by both the EU and the private sector in AI;
  • forge a ‘Charter on AI Ethics’ to deal with issues like product liability and potentially biased decision-making;
  • use social funding to modernise education and training, while improving labour market transitions.

It’s an ambitious scheme. It looks at key sectors like education, health and transport. It examines how to increase the availability of data in the EU, from public utilities and the environment as well as research and health data. It says the EU should set up a European AI Alliance with stakeholders to shape AI ethics guidelines on issues like safety, privacy, and consumer protection. And more than 60 different directives and regulations will be analysed by the Commission’s Expert Group on liability, including those covering electronics, energy, transport, plastics, medical devices and even toy safety.

Money gap

But it won’t be easy to roll out. Just on the money side, the Commission calls on businesses to fill much of the gap in AI, yet if past European investments in tech innovation are any guide, this might fall on deaf ears.

It also calls for a unified European stance. But many countries are already forging ahead with their own national plans. In March, France released its AI strategy, while Germany, Italy and Finland are developing theirs.

There is a deeper theme in the strategy: this is Europe’s opportunity to set the norms and ethical standards. For example, it calls for algorithmic transparency to be addressed in the AI ethics guidelines to be developed by the end of the year. There is a chance for the EU to become a reference point on AI, perhaps in the same way it did with the GDPR and data privacy.

Vague so far

Understandably, much of the strategy is vague at this stage. It talks about what the EU is already doing, but says less about what businesses, researchers or institutions dealing with AI can expect. The Commission recognises the need to tread carefully.

Trade associations and think-tanks have weighed in too: the American Chamber of Commerce to the EU (AmCham EU) released a position paper on how to foster AI in Europe, while the Centre for European Policy Studies (CEPS) has set up an AI task force with business representatives, politicians, regulators and academics. Our own AI task force at Burson Cohn & Wolfe (BCW) includes experts in the digital single market, covering copyright, online platforms, telecommunications, algorithmic transparency, cybersecurity and the GDPR.

But at least the EU is addressing the issue. It can see how AI is reaching into almost every corner of our lives, from self-driving cars to robot-powered factories, from better health-care to personalised entertainment. And while it does not see it as a sinister force, it also recognises that AI will need some sort of regulatory framework to ensure it stays safe. But with AI developing at such a fast pace, it will be hard for the EU, or any other regulator, to keep up.


For more insights, see our recent paper ‘The Brainstorm: Burson Cohn & Wolfe Report on Artificial Intelligence’.


Authors: John Higgins & Tom Korman

John Higgins is Burson Cohn & Wolfe’s Senior Advisor on digital technology

Tom Korman is a Manager at Burson Cohn & Wolfe, covering projects in copyright, transport, employment and online platforms
