The legal complaint is here.
No wonder Dennis Prager (Prager University) is suing YouTube for censorship.
YouTube Trusted Flagger program
The YouTube Trusted Flagger program was developed by YouTube to help provide robust tools for government agencies and non-governmental organizations (NGOs) that are particularly effective at notifying YouTube of content that violates our Community Guidelines.
Individuals who are part of the YouTube Contributors program also frequently report videos that may violate YouTube’s Community Guidelines.
The YouTube Trusted Flagger program includes:
- A bulk-flagging tool that allows for reporting multiple videos at one time
- Private forum support for questions about the policy enforcement process
- Visibility into decisions on flagged content
- Prioritized flag reviews for increased actionability
Program eligibility
Government agencies and NGOs are eligible for participation in the YouTube Trusted Flagger program. Ideal candidates flag frequently and with a high rate of accuracy.
Before becoming deputized for participation, applicants must attend a YouTube training to learn about our Community Guidelines and enforcement processes. These trainings are led by YouTube’s Trust & Safety and Public Policy teams.
Flag review process
Videos flagged by trusted flaggers are reviewed by YouTube content moderators according to YouTube’s Community Guidelines. Content flagged by trusted flaggers is not automatically removed or subject to any differential policy treatment; the same standards apply to flags received from other users. However, because of their high degree of accuracy, flags from trusted flaggers are prioritized for review by our teams.
The Trusted Flagger program exists exclusively for the reporting of possible Community Guideline violations. It is not a flow for reporting content that may violate local law. Requests based on local law can be filed through our content removal form.
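For readers curious what the “prioritized flag reviews” just described might look like mechanically, here is a minimal Python sketch of a review queue that surfaces trusted-flagger reports first. The ReviewQueue class, the priority values, and the flag fields are all illustrative assumptions, not YouTube’s actual system.

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Hypothetical priority levels: lower numbers sort first in the heap.
TRUSTED_FLAGGER = 0
REGULAR_USER = 1

@dataclass(order=True)
class Flag:
    priority: int
    seq: int                        # arrival order, breaks ties
    video_id: str = field(compare=False)
    reason: str = field(compare=False)

class ReviewQueue:
    """One queue for all flags. Trusted-flagger reports are reviewed
    first, but every flag is judged by the same Community Guidelines."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def submit(self, video_id, reason, trusted=False):
        priority = TRUSTED_FLAGGER if trusted else REGULAR_USER
        heapq.heappush(self._heap,
                       Flag(priority, next(self._counter), video_id, reason))

    def next_for_review(self):
        # Pop the highest-priority (lowest-numbered) flag, if any remain.
        return heapq.heappop(self._heap) if self._heap else None

queue = ReviewQueue()
queue.submit("abc123", "possible spam")
queue.submit("def456", "possible hate speech", trusted=True)
print(queue.next_for_review().video_id)  # "def456": the trusted flag jumps ahead
```

Note the design point this illustrates: prioritization changes only the order of human review, not the standard applied once a reviewer looks at the video.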
Google, really?
YouTube Trusted Flaggers help Google fight terrorism online, along with better automated detection, content warnings, and counter-radicalization content
Google and YouTube are working with Facebook, Microsoft, and Twitter to help fight terrorism online.
Google has pledged a four-pronged strategy:
1. Improving automated systems that detect problematic videos [a toy classifier sketch follows this list]
We will now devote more engineering resources to apply our most advanced machine learning research to train new “content classifiers” to help us more quickly identify and remove extremist and terrorism-related content.
2. Expanding the Trusted Flagger program
Trusted flaggers – both individuals and organizations – flag content correctly more than 90% of the time.
We will expand this programme by adding 50 expert NGOs to the 63 organisations who are already part of the programme, and we will support them with operational grants. This allows us to benefit from the expertise of specialised organisations working on issues like hate speech, self-harm, and terrorism. We will also expand our work with counter-extremist groups to help identify content that may be being used to radicalise and recruit extremists.
3. Making inflammatory content harder to find and endorse
In future [videos that do not clearly violate policy, but contain inflammatory religious or supremacist content] will appear behind an interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements.
4. Using the “Redirect Method” for counter-radicalization efforts
… we are working with Jigsaw to implement the “Redirect Method” more broadly across Europe. This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.
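As a rough illustration of what a “content classifier” (point 1 above) does, here is a minimal Python sketch using scikit-learn. The training snippets, labels, and threshold are toy assumptions; Google’s production systems are far larger and unpublished.

```python
# A toy "content classifier" in the spirit of point 1 above. Everything
# here (training examples, labels, threshold) is an illustrative
# assumption, not Google's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled transcripts: 1 = likely violates policy, 0 = benign.
texts = [
    "join our cause and take up arms against them",
    "how to bake sourdough bread at home",
    "recruitment message urging viewers to commit violence",
    "travel vlog from my summer holiday",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a standard text-classification
# baseline, standing in for the "advanced machine learning" the quote cites.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new upload; anything above a chosen threshold would be queued
# for human review rather than removed automatically.
score = model.predict_proba(["a video urging viewers to take up arms"])[0][1]
if score > 0.5:  # threshold is an assumption
    print(f"queued for review (score {score:.2f})")
```

The point of the sketch is the division of labor the passage describes: the classifier only ranks and routes content; the judgment calls still come from the people who choose the training data, labels, and threshold.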
*** So… they use the Southern Poverty Law Center as a trusted flagger?
The Southern Poverty Law Center is assisting YouTube in policing content on their platform, The Daily Caller has learned.
The left-wing nonprofit — which has more recently come under fire for labeling legitimate conservative organizations as “hate groups” — is one of the more than 100 nongovernment organizations (NGOs) and government agencies in YouTube’s “Trusted Flaggers” program, a source with knowledge of the arrangement told TheDC.
The SPLC and other program members help police YouTube for extremist content, ranging from so-called hate speech to terrorist recruiting videos.
All of the groups in the program have confidentiality agreements, a spokesperson for Google, YouTube’s parent company, previously told TheDC. A handful of YouTube’s “Trusted Flaggers,” including the Anti-Defamation League and No Hate Speech — a European organization focused on combatting intolerance — have gone public with their participation in the program. The vast majority of the groups in the program have remained hidden behind their confidentiality agreements.
The SPLC’s close involvement in policing content on YouTube is likely to cause consternation among conservatives who worry that they may not be treated fairly. The left-wing group has consistently labeled pedestrian conservative organizations as “hate groups” and has been directly tied to violence against conservatives in the past. Floyd Lee Corkins, who opened fire at the Family Research Council in 2012, said he chose the FRC for his act of violence because the SPLC listed them as a “hate group.”
It’s unclear when the SPLC joined YouTube’s “Trusted Flaggers” program. The program goes back to 2012 but exploded in size in recent years amid a Google push to increase regulation of the content on its platforms, which followed pressure from advertisers. Fifty of the 113 program members joined in 2017 as YouTube stepped up its content policing, YouTube public policy director Juniper Downs told a Senate committee in January.
Downs said the third-party groups work closely with YouTube’s employees to crack down on extremist content in two ways, both of which a Google spokesperson previously confirmed to TheDC.
First, the flaggers are equipped with digital tools allowing them to mass flag content for review by YouTube personnel. Second, the partner groups act as guides for YouTube’s content monitors and the engineers designing the algorithms policing the video platform, who may lack the expertise needed to tackle a given subject.
“We work with over 100 organizations as part of our Trusted Flagger program and we value the expertise these organizations bring to flagging content for review. All trusted flaggers attend a YouTube training to learn about our policies and enforcement processes. Videos flagged by trusted flaggers are reviewed by YouTube content moderators according to YouTube’s Community Guidelines. Content flagged by trusted flaggers is not automatically removed or subject to any differential policies than content flagged from other users,” said a YouTube spokesperson, who would not specifically comment on the SPLC’s participation in the program.
The SPLC did not return multiple voicemails and emails seeking comment.
The overwhelming majority of the content policing on Google and YouTube is carried out by algorithms. The algorithms make for an easy rebuttal against charges of political bias: it’s not us, it’s the algorithm. But actual people with actual biases write, test and monitor the algorithms.
As noted above, Google’s anonymous outside partners (such as the SPLC) work closely with the internal experts designing the algorithms. This close collaboration has upsides, Google’s representatives have said, such as in combatting terrorist propaganda on the platform.
But it also provides little transparency, forcing users to take Google’s word that they’re being treated fairly.
The SPLC has faced criticism for its cavalier definitions of “hate group” and “extremist.” The organization stoked controversy in 2015 by labeling Dr. Ben Carson, now the Secretary of Housing and Urban Development (HUD), an anti-gay “extremist.” After a backlash, the SPLC reversed its ruling and apologized to Carson.
The organization faced a similarly intense backlash in 2016 for labeling Maajid Nawaz, a respected counter-extremism activist, an “anti-Muslim extremist.”
The Washington Examiner’s Emily Jashinsky noted last year that “the SPLC’s claim to objectivity is nothing less than fraudulent, a reality that informed observers of its practices from both the Left and Right accept.”
“The routine of debunking their supposedly objective classifications occurs like clockwork each time a major outlet makes the mistake of turning to them when reporting on the many conservative thinkers and nonprofits the group absurdly designates as hateful.”
The SPLC has faced tough criticisms not just from conservatives but from the mainstream press as well.
“At a time when the line between ‘hate group’ and mainstream politics is getting thinner and the need for productive civil discourse is growing more serious, fanning liberal fears, while a great opportunity for the SPLC, might be a problem for the nation,” Politico Magazine’s Ben Schreckinger wrote last year.
Bloomberg columnist Megan McArdle similarly noted last year that the SPLC commonly lumps in principled conservatives alongside actual racists and extremists and warned of the possibility that tech companies could rely on the SPLC’s misleading definitions.
“Given the increasing tendency of powerful tech companies to flex their muscle against hate groups,” she wrote, “we may see more and more institutions unwittingly turned into critics or censors, not just of Nazi propaganda, but also of fairly mainstream ideas.”