Euro 2020: What could social media companies actually do about racist abuse – and would it work? (independent.co.uk)
LONDON: The UK government announced plans on Wednesday to introduce age verification for users accessing social media platforms as part of efforts to protect children online.
A new online safety bill will require social media platforms to set out and enforce minimum age thresholds in their terms and conditions, and tech giants such as Facebook, Google and Twitter will face hefty fines if they allow underage children to access their services.
Ofcom, the government-approved regulator for broadcasting and telecommunications, will be responsible for enforcing the new bill.
Currently, children under 13 are not allowed to sign up to Facebook, Twitter, Instagram and YouTube, while those under 12 are prohibited from creating a Google account. Meanwhile, the Facebook-owned chat service WhatsApp has a minimum age of 16.
Online safety bill risks being as shallow as a puddle, warns campaigner
The report highlights the government’s commitment to ‘promote a free, open, peaceful and secure cyberspace’.
However, it does not reflect the amount of time people currently spend online, nor does it offer any solutions to the problems and issues it analyses, an online harms campaigner told City A.M.
According to Tom Ascott, policy manager at the London-based Online Harms Foundation, the review stops short of detailing how the government plans to tackle a number of fundamental issues.
“We don’t want a situation where the internet is being blamed for criminal activity. Internet platforms shouldn’t be solely held responsible for this. Tech is part of the solution. But they are not a scapegoat for better laws,” he said.
Government Threatens Tech Firms with Fines of 10% of Turnover
Phil Muncaster, UK/EMEA News Reporter, Infosecurity Magazine
The UK government will introduce an Online Safety Bill next year which could result in fines higher than those under the GDPR for companies that allow illegal content to be posted on their platforms.
The plans are nominally aimed at protecting children online by banning things like terrorist content, child sexual abuse material, and anything promoting suicide. Misinformation is also included, if it is deemed to cause major physical or psychological harm.
Regulator Ofcom will be given the power to fine companies up to 10% of global annual turnover or £18 million, whichever is higher, for serious transgressions. It will also be empowered to block such services if they choose not to comply, although it’s unclear exactly how.
Online harms bill: firms face up to £18m fine for illegal content (theguardian.com)