Facebook, Google, and Twitter have been screening and filtering extremist content for years, but on Wednesday the gatekeepers of the internet confirmed to Congress that they are accelerating their efforts and will target users who may be exposed to extremist or terrorist content, redirecting them instead to “positive and moderate” posts.
Representatives of the three companies testified before the Senate Committee on Commerce, Science, and Transportation to outline specific ways they are trying to combat extremism online. Facebook, Google, and Twitter aren’t just tinkering with their algorithms to restrict certain kinds of violent content and messaging. They’re also using machine learning and artificial intelligence (AI) to manufacture what they call “counterspeech,” which has a hauntingly Orwellian ring to it. Essentially, their goal is to catch burgeoning extremists, or people being radicalized online, and re-engineer them via targeted propagandistic advertisements.
Monika Bickert, Facebook’s head of global policy management, stated:
“We believe that a key part of combating extremism is preventing recruitment by disrupting the underlying ideologies that drive people to commit acts of violence. That’s why we support a variety of counterspeech efforts.”
Meanwhile, Google’s YouTube has deployed something called the “Redirect Method,” developed by Google’s Jigsaw research group. With this protocol, YouTube taps search history metrics to identify users who may be interested in extremist content and then uses targeted advertising to counter “hateful” content with “positive” content. YouTube has also invested in a program called “Creators for Change,” a group of users who make videos opposing hate speech and violence. Additionally, the video platform has tweaked its algorithm to reduce the reach of borderline content.
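As described, the core mechanic is simple: match search queries against a curated list of flagged terms and, on a hit, surface counter-content instead of (or alongside) organic results. A minimal sketch of that idea follows; the keywords, playlist names, and function are all illustrative assumptions, not Google's or Jigsaw's actual implementation.

```python
# Hypothetical sketch of a "Redirect Method"-style lookup. All keywords,
# playlist entries, and names below are invented for illustration.

FLAGGED_KEYWORDS = {"join the caliphate", "martyrdom videos"}  # illustrative
COUNTER_PLAYLIST = ["testimony_of_former_recruit", "cleric_debunks_ideology"]

def redirect_if_flagged(query: str) -> dict:
    """Return curated counter-content if the query matches a flagged term."""
    normalized = query.lower().strip()
    if any(keyword in normalized for keyword in FLAGGED_KEYWORDS):
        return {"redirected": True, "results": COUNTER_PLAYLIST}
    return {"redirected": False, "results": None}

print(redirect_if_flagged("how to join the caliphate"))
```

In practice the reported system layers machine learning and ad-targeting infrastructure on top of this kind of matching, but the query-to-counter-content mapping is the essential step.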
In her testimony, Juniper Downs, YouTube’s head of public policy, said, “Our advances in machine learning let us now take down nearly 70% of violent extremism content within 8 hours of upload and nearly half of it in 2 hours.”
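Figures like “70% within 8 hours” are takedown-latency metrics: for each removed video, compare its removal time to its upload time and count how many fall inside the window. The sketch below shows one plausible way to compute such a figure; the data and helper function are invented for the example, not YouTube's reporting pipeline.

```python
from datetime import datetime, timedelta

# Invented (upload_time, removal_time) pairs for illustration only.
removals = [
    (datetime(2018, 1, 1, 0, 0), datetime(2018, 1, 1, 1, 30)),   # 1.5 hours
    (datetime(2018, 1, 1, 0, 0), datetime(2018, 1, 1, 6, 0)),    # 6 hours
    (datetime(2018, 1, 1, 0, 0), datetime(2018, 1, 2, 0, 0)),    # 24 hours
]

def fraction_within(removals, window: timedelta) -> float:
    """Fraction of items whose removal came within `window` of upload."""
    hits = sum(1 for up, down in removals if down - up <= window)
    return hits / len(removals)

print(fraction_within(removals, timedelta(hours=8)))  # 2 of 3 removals qualify
```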
On the official YouTube blog, the company discussed how it plans to disrupt the “radicalization funnel” and change minds. The four steps are:
- “Expanding the new YouTube product functionality to a wider set of search queries in other languages beyond English.
- Using machine learning to dynamically update the search query terms.
- Working with expert NGOs on developing new video content designed to counter violent extremist messaging at different parts of the radicalization funnel.
- Collaborating with Jigsaw to expand the ‘Redirect Method’ in Europe.”
Starting at the end of last year, the company began altering its algorithm, ultimately demonetizing roughly 30% of its videos. YouTube explained that it wanted the platform to be a safer place for brands to advertise, but the move has angered many content producers who generate income from their video channels.
The effort to use machine learning and AI as part of a social engineering funnel is probably not new, but we’ve never seen it openly wielded on a vast scale by a government-influenced corporate consortium. To say the least, it is unsettling for many. One user commented underneath the post, “So if you have an opinion that’s not there [sic] agenda You are a terrorist. Free speech is dead on YouTube.”
For its part, Twitter’s representative told Congress that since 2015 the company has taken part in over 100 training events focused on how to reduce the impact of extremist content on the platform.
In a post called “Introducing Hard Questions” on its blog, Facebook discussed rethinking the “meaning of free expression.” The post posed a number of hypothetical questions, including:
- How aggressively should social media companies monitor and remove controversial posts and images from their platforms? Who gets to decide what’s controversial, especially in a global community with a multitude of cultural norms?
- Who gets to define what’s false news — and what’s simply controversial political speech?
The three tech giants have been under intense scrutiny from lawmakers who feel the platforms have been used to sow division online and even recruit homegrown terrorists. While the idea of using an algorithm to fight extremism online is not new, never before has a unified front of Facebook, Google, and Twitter collectively produced original online propaganda, and the specifics and scope of that effort remain vague despite the companies’ attempts at transparency.
Only recently, in the 2012 National Defense Authorization Act (NDAA), was the use of propaganda on the American people by the government formally legalized. Then-President Barack Obama continued strengthening government propaganda at the end of his administration with the dystopic Countering Disinformation and Propaganda Act of 2017, which created a kind of Ministry of Truth for the creation of so-called “fact-based narratives.”
It appears that while the government continues to strengthen its potential to conduct psychological operations (psyops), it is also joining forces with internet gatekeepers that can use their algorithms to shape billions of minds online. While one may applaud the ostensible goal of curbing terrorist recruitment, the use of psyops for social engineering and manufacturing consent could extend far beyond the original intent.