Two years after a code of practice to fight online disinformation, the European Commission has concluded the self-regulatory mechanism fails to guarantee enough transparency and accountability from the tech platforms and advertisers that signed up to it.
“The code of practice has shown that online platforms and the advertising sector can do a lot to counter disinformation, but the time has come to go beyond self-regulatory measures,” EU commissioner for values and transparency, Věra Jourová, said on Thursday (10 September).
“Platforms need to be more accountable, responsible [and] transparent,” she said, adding that Europe is ready to lead the way and propose instruments for a more resilient and fair democracy.
The main platforms used in the EU (Facebook, Google, Twitter, Microsoft, Mozilla and TikTok), together with several advertisers' groups, all signed up to the code, seeking to avoid stronger regulation in an ongoing tug of war.
On a positive note, the commission’s report states that the code prompted “concrete actions and policy changes by the platforms aimed at countering disinformation”.
However, the EU executive also identified several shortcomings, including: the inconsistent application of the code across platforms and member states, the lack of shared definitions, gaps and vague language in the code commitments, as well as limitations linked to the self-regulatory nature of this instrument.
The lack of transparency related to paid political ads, especially political micro-targeting, and to users' engagement with detected disinformation campaigns is also recognised as an obstacle to holding tech platforms accountable.
Earlier this year, MEPs, journalists, publishers and broadcasters warned that Europe is too dependent on the goodwill of online platforms on crucial issues.
Additionally, the European Regulators Group for Audiovisual Media Services (ERGA), which supports the commission's assessment of the code, also criticised tech companies for tending to provide data aggregated for the whole EU, which makes it difficult to evaluate differences among member states.
Previously, the ERGA warned that the code's implementation reveals "significant weaknesses" linked to its lack of transparency and voluntary approach, and proposed shifting from the current flexible self-regulatory approach to a co-regulatory one.
Meanwhile, the commission also published on Thursday the first of a series of reports on action taken by Facebook, Google, Microsoft, TikTok, Twitter and Mozilla to fight false and misleading coronavirus-related information.
It comes after the commission called on tech companies to promote content from scientific sources, raise users’ awareness and counter the so-called ‘infodemic’ linked to Covid-19 – such as fake news, false health claims or conspiracy theories.
According to the commission, platforms have stepped up their efforts to detect cases of social media manipulation and malign influence operations – while increasing the visibility of Covid-19 information from the World Health Organization and national health organisations.
In a second phase, the commission will assess how effective the actions taken by tech platforms to fight the Covid-19 'infodemic' have been.