How Google plans to fight the spread of terrorist-related YouTube videos

Google is unveiling four new policies to combat extremist videos on YouTube, the company announced in a blog post Sunday.

“Extremists and terrorists seek to attack and erode not just our security, but also our values; the very things that make our societies open and free. We must not let them,” wrote Kent Walker, general counsel for Google.

More engineers, automated systems to identify extremist content

According to Walker, the tech giant will increase its use of technology to help identify extremist and terrorism-related content on YouTube. Over the past six months, video analysis models built by its engineers have found more than 50 percent of the terrorism-related content the company has removed.

Now, the company will apply its most advanced machine learning research and dedicate more engineering resources to training “content classifiers” that find and remove such content.
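
Google has not published details of these classifiers, but the general approach is standard supervised learning: train a model on examples of violating and non-violating content, then score new uploads and route high-scoring ones to human reviewers. The sketch below illustrates the idea with a simple text classifier over made-up video titles; the company's actual systems analyze the videos themselves and are far more sophisticated.

```python
# Illustrative sketch of a "content classifier," assuming a simple
# text model over video titles. All data here is invented for the demo;
# Google's real systems are proprietary video-analysis models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = policy-violating, 0 = benign.
titles = [
    "recruitment footage glorifying violent attacks",
    "propaganda video calling for martyrdom",
    "how to bake sourdough bread at home",
    "daily vlog: our trip to the lake",
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(titles, labels)

# Score new uploads; anything above the threshold goes to human review.
for title in ["new propaganda video glorifying violence", "bread baking tips"]:
    score = classifier.predict_proba([title])[0, 1]
    action = "route to human review" if score > 0.5 else "no action"
    print(f"{score:.2f}  {action}  - {title}")
```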

More experts in YouTube’s Flagger program

Next, Google plans to increase the number of independent experts in YouTube’s Trusted Flagger program, whose vetted members flag problematic content for investigation.

According to Walker, Trusted Flagger reports created by experts are accurate more than 90 percent of the time.

Google will add 50 expert NGOs to the 63 organizations already in the program and support them with operational grants. The company also plans to begin working closely with counter-extremist groups to identify content being used to radicalize and recruit extremists.

How the company will tackle extremist content that doesn’t directly violate policies 

Videos that don’t directly violate Google’s policies, but still contain “inflammatory religious or supremacist content,” will come with a warning.

On top of that, these videos will not be monetized, recommended or eligible for user endorsements or comments, making them harder to find.

“We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints,” Walker said.

Changing minds of potential extremist recruits

Lastly, YouTube will expand its counter-radicalization efforts by promoting voices that speak out against hate and extremism.

As part of its Creators for Change program, YouTube is working with Jigsaw on the “Redirect Method,” which uses targeted online ads to reach potential ISIS recruits and redirect them to watch anti-terrorist videos.

In the past, potential extremist recruits have clicked on the ads and watched more than half a million minutes of anti-terrorist content.
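
The core mechanism is keyword-targeted advertising: searches associated with radicalization trigger ads that point to curated counter-narrative playlists. The sketch below shows that matching logic in miniature; the keywords and playlist URL are hypothetical placeholders, not actual campaign data.

```python
# Minimal sketch of the Redirect Method's targeting idea. The keyword
# set and playlist URL are hypothetical placeholders for illustration.
from typing import Optional

TARGETED_KEYWORDS = {"join the caliphate", "foreign fighter travel"}  # hypothetical
COUNTER_NARRATIVE_PLAYLIST = "https://youtube.com/playlist?list=EXAMPLE"  # placeholder

def ad_for_query(query: str) -> Optional[str]:
    """Serve a redirect ad if the search query matches a targeted keyword."""
    normalized = query.lower().strip()
    if any(keyword in normalized for keyword in TARGETED_KEYWORDS):
        return COUNTER_NARRATIVE_PLAYLIST
    return None

print(ad_for_query("how to join the caliphate"))  # counter-narrative playlist URL
print(ad_for_query("cat videos"))                 # None: no ad served
```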

The plan is to expand the program more broadly across Europe, Walker said.

In addition, YouTube will work more closely with other tech giants such as Facebook, Twitter and Microsoft to share knowledge, develop technology and support smaller companies in the fight against online terrorism and radicalization.

The outlined steps were announced following the recent deadly terror attack in London, after which British Prime Minister Theresa May called for new regulations on internet companies to combat online terrorism.

According to The Verge, Germany is considering a law that would levy huge fines against social media companies that don't remove extremist content quickly. And the European Union recently approved a new set of proposals to require companies to block such content.

In 2016, Facebook, Microsoft, Google and Twitter teamed up to create a shared industry database of unique digital fingerprints for extremist images and videos, helping each company identify and remove such content.
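
The companies have not published their fingerprinting scheme, which reportedly uses perceptual hashes that survive re-encoding. The sketch below substitutes an exact SHA-256 hash to show the sharing mechanic: one company reports a file, and the others can match future uploads against the pooled fingerprints.

```python
# Minimal sketch of a shared fingerprint database, assuming exact
# SHA-256 hashes for simplicity. The real database reportedly uses
# perceptual fingerprints that match re-encoded copies; SHA-256 only
# matches byte-identical files.
import hashlib

shared_fingerprints: set[str] = set()  # pooled across participating companies

def fingerprint(data: bytes) -> str:
    """Compute a hex digest serving as the file's unique fingerprint."""
    return hashlib.sha256(data).hexdigest()

def report_extremist_file(data: bytes) -> None:
    """One company flags a file; its fingerprint joins the shared pool."""
    shared_fingerprints.add(fingerprint(data))

def matches_known_content(data: bytes) -> bool:
    """Another company checks a new upload against the shared pool."""
    return fingerprint(data) in shared_fingerprints

# Company A reports a known extremist video; Company B catches a re-upload.
known_video = b"...bytes of a known extremist video..."
report_extremist_file(known_video)
print(matches_known_content(known_video))          # True: block or remove
print(matches_known_content(b"unrelated video"))   # False: no match
```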

According to the Associated Press, Twitter says it suspended a total of 376,890 accounts in the second half of 2016 for violations related to the promotion of extremism.

And Facebook says it alerts law enforcement if it identifies a threat of an imminent attack or harm to someone.

Read the full Google news release.