The U.K. is planning to create a new, independent tech regulatory body that will rule on harmful content and hand down penalties potentially in the billions of dollars to companies that don’t act fast to remove offending posts, according to U.K. digital minister Margot James in an interview with Business Insider.
James said more details would come in a policy paper on internet safety due next month. The sanctions regime, she suggested, would not be all that different from the powers the Information Commissioner’s Office (ICO) already has. Under Europe’s General Data Protection Regulation (GDPR), which took effect last year to enforce privacy rules, the ICO can levy fines of up to 4% of a company’s global annual revenue for significant data breaches.
The British government, James said, wants tech firms to eradicate illegal hate speech, subtler forms of abuse such as child grooming (befriending children and establishing emotional connections with them), and problematic content around suicide and self-harm. “One of the guiding principles would be that what is illegal and unacceptable offline should be illegal and unacceptable online,” she told the publication.
Earlier this month, a British Parliamentary Committee released a highly critical report on Facebook, accusing it of violating data privacy and competition laws, and calling for new regulations on the tech industry.
“The era of self-regulation for tech companies should come to an end,” said Damian Collins, the chairman of the Digital, Culture, Media and Sport Committee, which published the report. It proposed making tech companies legally responsible for user-posted content identified as harmful. “Platforms do have responsibility, even if they are not the content generator, for what they host on their platforms and what they advertise,” Sharon White, CEO of U.K. communications regulator Ofcom, said in the report.
In the U.S., a 1996 law passed when the internet was in its infancy, Section 230 of the Communications Decency Act, says internet platforms cannot be held liable for user-generated or third-party content. A clause in the same law, known as the Good Samaritan provision, also shields companies from penalties for removing content they deem objectionable. The law’s authors believed that protection would encourage companies to police their platforms more vigorously. It did not work out that way.
So lawmakers around the world, including in the U.S., are growing more proactive. Facebook’s Cambridge Analytica scandal launched a global debate about how social media giants use and protect consumer information. That was followed by alarming revelations of election tampering, fake accounts, and misinformation, as well as lawsuits over breaches of civil liberties.
In Washington, D.C., yesterday, Randall Rothenberg and Dave Grimaldi, respectively CEO and EVP of the IAB (Interactive Advertising Bureau), testified before both houses of Congress on the need for federal privacy legislation to head off what they called a patchwork of new state laws that would become a major headache for the industry and, they claimed, hurt consumers.