YouTube has been condemned by an influential committee of MPs for "pursuing and promoting radicalisation" instead of cleaning up its platform in recent years.
Representatives from Google-owned YouTube, Facebook and Twitter today appeared before the Home Affairs Select Committee, where they faced a barrage of frustrated questions about the spread of hate crimes and extremist content on their platforms.
Yvette Cooper, committee chairman and former cabinet minister, took aim at YouTube in particular.
"You will have heard the concerns from every member of the committee, and we have taken evidence from your representatives several times over several years and we feel like we are raising the same issues again and again," she said.
"We recognise you have done some additional work, but we are coming up time and again with so many examples of where you are failing, where you [are] maybe being gamed by extremists or where you are effectively providing a platform for extremism – you are enabling extremism on your platforms."
Cooper went on: "Frankly, in the case of YouTube, you are continuing to promote it – you are continuing to pursue and promote radicalisation that, in the end, has hugely damaging consequences to families’ lives and to communities right across the country.
"Particularly for YouTube, I am just appalled that the answers you have given us are no better than the answers that your predecessors gave us in every previous evidence session. As far as it seems from your organisation in particular, very little has changed.
"I think you can see why parliaments across the world are despairing at your ability to do what you need to do to keep people safe. We hugely value the work that social media companies do, but we need you to keep us safe and you’re not doing so."
Cooper’s forceful speech came at the end of a two-hour evidence session in which the platforms were repeatedly grilled over specific terror incidents, such as last month’s massacre in Christchurch, New Zealand, and how video of the attacks spread on their platforms.
"This is an issue we have raised with all of you many times before… surely this is not new, surely you have been aware of this as a problem when it comes to dealing with child abuse images, jihadist propaganda… I can’t believe that Christchurch was the first time you had people attempt to game your system," Cooper said.
Facebook’s public policy director, Neil Potts, explained how the Christchurch case was unique because the terrorist had used a GoPro camera to film the attack in the style of a first-person shooter video game.
Potts said: "This time, what was unique was the use of live stream and people sharing mini-copies of this and people filming it with their phone, which created its own issues."
Facebook has said it employs 30,000 moderators worldwide (as of the end of 2018) and is trying to improve its machine-learning capabilities to detect and flag extremist content.
However, Potts warned that "machine learning is not infallible", adding: "It takes a machine multiple times of seeing [content] so it can learn from itself to enact the right policies."
Meanwhile, Marco Pancini, YouTube’s director of public policy, insisted its automated filters are "working and allowing us to take down a lot of content in a fast way".
When challenged by Cooper over why YouTube has been seen issuing content warnings on extreme videos instead of removing them completely, Pancini said there are "different levels" of implementation and enforcement of its safety policies.
"If it’s about the same video, the same video is blocked," he said. "If it's about pieces of the same video, this is sent for review. In the case of something like Christchurch, we change our policy so it’s not even going up online; it has to be reviewed by a human. If there’s a doubt, it’s escalated. In the comprehensive approach that we are taking to this type of content, there can always be issues… but I think we have a robust approach."