Technology giants, such as Google, Facebook and Twitter, could face fines of billions of pounds if they fail to remove and limit the spread of harmful online content under U.K. government proposals unveiled on Tuesday.
The government of Boris Johnson announced details of its proposed Online Harms Bill, first set in motion by then-Prime Minister Theresa May in the spring of 2019, which aims to tackle child abuse and sexual abuse imagery, terrorist materials, misinformation, and other digital content.
Secretary of State for Culture, Media and Sport Oliver Dowden is planning to bring the bill before parliament next year, but it may not come into force until 2022 or later, according to the BBC.
Britain’s media and communications regulator, Ofcom, will get powers to enforce the regulation, including the ability to block access to online services that it decides are not doing enough to protect children and others.
Policing online misinformation will be part of its remit in cases when content is legal but could cause “significant” physical or psychological harm to adults.
Tech giants will get a chance to draw up their own standards and rules under the government proposals. But if they fail to stick to them, the government said it could introduce secondary legislation to hit companies with fines of up to £18 million ($24 million), or 10 percent of global revenue, whichever is higher. The Guardian calculated that this could see Facebook, for example, potentially get hit with a £5 billion ($6.66 billion) fine based on current financials.
The government proposals do not include plans to bring criminal cases against individual executives, as some had suggested in earlier stages of discussion on the legislation. Online scams and other types of internet fraud are also not part of the new regulation.
The legislation would, however, allow Ofcom to demand that tech firms take action against child abuse imagery shared via such encrypted messaging apps as Facebook’s WhatsApp, Apple’s iMessage and Google’s Messages, the BBC reported, quoting a government representative as saying this would be a “last resort” against “persistent offenses.”
“Today, Britain is setting the global standard for safety online with the most comprehensive approach yet to online regulation,” said Dowden.
“Today marks a major step forward in laws that will see powerful tech companies held to account,” said Julian Knight, chair of the Digital, Culture, Media and Sport Committee of the U.K. parliament’s House of Commons. “The government listened to the case for tech companies to be given clear legal liabilities to act against harmful or illegal content, covered by a compulsory code of ethics and policed by an independent regulator.”
He added: “A duty of care with the threat of substantial fines levied on companies that breach it is to be welcomed. However, we’ve long argued that even hefty fines can be small change to tech giants and it’s concerning that the prospect of criminal liability would be held as a last resort. We warn against too narrow a definition of online harms that is unable to respond to new dangers, and question how such harms will be proactively monitored.”
Melanie Dawes, the CEO of Ofcom, said: “Being online brings huge benefits, but four in five people have concerns about it. That shows the need for sensible, balanced rules that protect users from serious harm, but also recognize the great things about online, including free expression.”