Twitter, YouTube ignore takedown requests by Ukrainian Government
18 Jul 2022
In the wake of Russia’s full-scale invasion of Ukraine, Big Tech companies announced they would intensify their efforts to cooperate with the Ukrainian Government to mitigate Russia’s information warfare. While the substance and effect of specific measures are unknown to the public, the overall efforts usually include the creation of so-called escalation channels, by which the platforms prioritize the moderation of content flagged by designated partners, such as the Ukrainian Center for Strategic Communications and Information Security (UCSCIS), an official Ukrainian agency established under the Ministry of Culture. Since the outbreak of the war, the Center has regularly sent Big Tech companies datasets with content and profiles that, in the Center’s opinion, violate platform terms of service and pose acute threats to the security of individuals or the public through their dissemination of Russian war propaganda, hate speech, or inciting language.
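The report does not publish the structure of these datasets. Purely as an illustration, a single flag record passed through such an escalation channel might look something like the sketch below; every field name here is a hypothetical assumption, not a disclosed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record for one item flagged through an escalation channel.
# The report does not disclose the actual schema; every field is assumed.
@dataclass
class FlagRecord:
    platform: str            # e.g. "twitter", "youtube"
    url: str                 # link to the flagged post, account, or ad
    category: str            # e.g. "war_propaganda", "hate_speech", "impersonation"
    rationale: str           # why the flagger believes platform ToS are violated
    flagged_by: str = "UCSCIS"
    flagged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# One illustrative (invented) entry such a dataset might hold:
record = FlagRecord(
    platform="twitter",
    url="https://twitter.com/example_account/status/123",
    category="hate_speech",
    rationale="Derogatory term targeting Ukrainians in reply thread",
)
print(record)
```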

Analysis of the available data suggests that Meta has responded fairly effectively to content flagged by the Ukrainian Government, though the majority of accounts disseminating such content have been permitted to remain on the platform. By contrast, the response rate to flagged content by YouTube, Twitter, and LinkedIn is significantly lower. In addition, staffers working for the Ukrainian Government report that at times it takes up to several weeks for platforms to respond to individual flags, and often there is no response whatsoever. More than three months into the war, officials similarly note that Big Tech companies are still not engaging in structured dialogue with the Ukrainian Government or civil society, and all relevant content and integrity policy decisions continue to be taken by US-based teams, which lack insight into local context in Ukraine and are therefore insufficiently responsive to emerging threats. As a result of these shortcomings, the report offers a list of eight recommended actions Big Tech companies could take to improve their effectiveness in mitigating the threat and impact of Russia’s information warfare going forward.

Accounts propagating Kremlin war propaganda and hate speech

Researchers analyzed the availability status of accounts that posted content flagged as Kremlin war propaganda and hate speech in order to evaluate the degree to which platforms took action against the violators. The data show that the majority of the accounts responsible for reported content remain active as of the date of the report’s publication. On a proportional basis, platforms removed more accounts responsible for distributing Kremlin war propaganda than accounts responsible for propagating hate speech.
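The report does not describe the researchers’ tooling. One plausible way to measure availability status at scale is to re-request each flagged account URL and classify the response; the following is a minimal sketch under that assumption, using plain HTTP checks rather than official platform APIs.

```python
import requests

def check_availability(urls: list[str]) -> dict[str, str]:
    """Classify each flagged account URL as 'active', 'removed', or 'unknown'.

    A rough heuristic: platforms typically return 404 for deleted accounts.
    Real platforms vary (some return 200 with an interstitial page), so a
    production check would need per-platform logic or official APIs.
    """
    status: dict[str, str] = {}
    for url in urls:
        try:
            resp = requests.get(url, timeout=10, allow_redirects=True)
            if resp.status_code == 404:
                status[url] = "removed"
            elif resp.status_code == 200:
                status[url] = "active"
            else:
                status[url] = "unknown"
        except requests.RequestException:
            status[url] = "unknown"
    return status

# Example: proportion of flagged accounts still active.
results = check_availability(["https://twitter.com/example_account"])
active = sum(1 for v in results.values() if v == "active")
print(f"{active}/{len(results)} flagged accounts still active")
```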

Account impersonations on Facebook and Instagram

The impersonation samples consist of accounts purporting to be selected Ukrainian government officials and agencies, such as President Zelensky, Kyiv Mayor Klitschko, Minister of Foreign Affairs Kuleba, the Ministry of Defense, and the Security Service. Imposter accounts can pose a direct threat to the audience, especially in a country with an active conflict, as users may mistake content from an imposter account for official, trustworthy information. The analysis found that Facebook removed significantly more of the reported accounts than Instagram, by a margin of 40%. The raw numbers also indicate that account impersonation occurs more often on Instagram, where 89% more imposter accounts were detected.

Further analysis of the subset of still-available accounts revealed that some of them show no activity and contain only one indicator of possible impersonation, such as the name of the page, while others appear to be “fan pages” or individuals who share the public figure’s name. Other accounts, however, clearly impersonate officials. The report’s primary concern with these ambiguous accounts is that even if they are dormant or seemingly harmless, they can be mobilized at any time. Notably, the researchers also observed that some of the Facebook impersonation accounts are unavailable in Ukraine but remain available in the EU. This variance raises questions about how consistently impersonation policy is applied, and about the justification for any such rule: if an account impersonates someone, why should its availability depend on the viewer’s location?
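The report does not say how imposter candidates were identified. A common first pass is fuzzy matching of account display names against a list of protected officials; the sketch below uses Python’s standard library, with the name list and threshold as illustrative assumptions.

```python
from difflib import SequenceMatcher

# Officials whose identities are frequently impersonated, per the report.
PROTECTED_NAMES = [
    "Volodymyr Zelensky",
    "Vitali Klitschko",
    "Dmytro Kuleba",
    "Ministry of Defense of Ukraine",
]

def impersonation_candidates(display_name: str, threshold: float = 0.85) -> list[str]:
    """Return protected names this display name closely resembles.

    Name similarity alone is only one indicator, as the report notes:
    fan pages and genuine namesakes also match, so human review is needed.
    """
    name = display_name.strip().lower()
    return [
        official
        for official in PROTECTED_NAMES
        if SequenceMatcher(None, name, official.lower()).ratio() >= threshold
    ]

print(impersonation_candidates("Volodymyr Zelenskiy"))  # likely candidate
print(impersonation_candidates("Kyiv Travel Blog"))     # no match
```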

Kremlin war propaganda on LinkedIn

The analysis revealed that 34 of the 65 LinkedIn posts flagged for review by the Ukrainian Government are still accessible on the platform. Many of these posts spread false or misleading content in an attempt to justify Russia’s military aggression. One such post that remains on the platform – despite having been flagged to LinkedIn by the Ukrainian Government – claims that the country’s independence has led to a massive rise in fascism, chauvinism, and Russophobia. Another post promotes a disinformation narrative forged by the Russian Ministry of Defense, according to which secret US biolabs in Ukraine were used to cultivate dangerous pathogens. Notably, that post contains a link to a YouTube video which has already been removed.

Hate speech on Facebook, YouTube, and Twitter

The samples of hate speech reviewed in the analysis focused predominantly on derogatory terms referring to Ukrainians (such as “ukronazis” or “kh0khols”). Notably, words that express hate in one context may be used satirically in another, so automated processes do not always flag comments accurately. Per the analysis, Facebook removed the majority of the reported content, taking down all reported posts and 83% of reported comments. By contrast, YouTube and Twitter left approximately two-thirds of the reported content up on their platforms. Twitter removed less than one-third of the reported content, even though the overall volume of problematic content was by far the largest on its platform: there were more instances of reported content on Twitter than on YouTube and Facebook combined.
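The report does not disclose how the platforms’ classifiers work, but the context problem it describes shows up even in the simplest keyword-based approach. A minimal sketch follows; the terms come from the article, while the flagging logic is entirely illustrative.

```python
# Naive keyword flagging: matches the slur regardless of context.
SLUR_TERMS = ["ukronazis", "kh0khols"]

def flag_comment(text: str) -> bool:
    """Flag a comment if it contains a listed derogatory term."""
    lowered = text.lower()
    return any(term in lowered for term in SLUR_TERMS)

# Genuine hate speech: correctly flagged.
print(flag_comment("They are all ukronazis."))                      # True

# Counter-speech quoting the slur to condemn it: flagged all the same,
# illustrating why automated processes mis-flag satire and quotation.
print(flag_comment('Calling people "ukronazis" is dehumanizing.'))  # True
```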

Ads that constitute war propaganda on Meta products

The preliminary analysis determined that Meta removed all of the ads that were flagged and submitted to it. Upon further review, however, one of the nineteen reported ads turned out to be visible in the ads library again. The ad is an Instagram post by a Russian female blogger and psychologist who frames the events surrounding the Russian invasion as NATO’s fault and an act of Russian self-defense. According to Meta, the ad initially ran without a disclaimer and was taken down after Meta determined that its content was about the war. Although the post is no longer sponsored, it can still be found on Instagram and Facebook despite the nature of its content.

Recommendations for Big Tech companies

In light of the shortcomings identified in this report, Big Tech companies could take the following actions to improve the effectiveness of their efforts to mitigate the threat and impact of Russia’s information warfare as it relates to the war in Ukraine:

1. Disrupt Russian hybrid attacks, false flag operations, and coordinated trolling attacks within one hour of receiving notifications from competent Ukrainian authorities or civil society organizations. Accounts that share Kremlin propaganda content (other than for journalistic purposes), threaten Ukrainians, or justify the war on false pretenses should be disabled.

2. Enforce mitigation measures – such as post removal, account suspension, and algorithmic deprioritization – in a proactive manner across all content that is identical or very similar in type, form, or origin to content that has been mitigated in the past (a minimal matching sketch follows this list).

3. Report suspected or proven breaches of Ukrainian law, human rights violations, or coordinated disinformation attacks to Ukrainian authorities in real time.

4. Preserve content and accounts removed in relation to the war, including any evidence of war crimes and Kremlin-backed information operations, for later use by appropriate Ukrainian authorities.

5. Protect Ukrainian users, journalists, politicians, and civil society by fast-tracking notifications from accredited accounts; verifying new accounts or pages containing the names of Ukrainian politicians or institutions before allowing them to go live; and monitoring the accounts of Russian soldiers in Ukraine as well as dormant pages and accounts that were likely created as part of Russia’s information operation.

6. Establish an early warning system to alert particular groups and individuals exposed to online attacks. This should include a streamlined process for the Ukrainian Government and civil society, as well as international partner organizations, to flag offending content or nascent channels.

7. Secure the flow of reliable information in Ukraine by calibrating newsfeed algorithms and recommender systems to prioritize engagement signals in favor of verified Ukrainian sources. Verified Ukrainian sources should be exempt from automated bans and suspensions (triggered by malicious user-flagging) and users in areas of active conflict should be directed to authoritative information provided by the Ukrainian Government.

8. Consult the Ukrainian Government and civil society, as well as their international partner organizations, in a structured format regarding the formulation and execution of policies related to the war (and share attribution and enforcement standards with them). Transparency and access to aggregated data on the views of, and engagement with, high-reach public accounts should be expanded for independent researchers.
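Recommendation 2 presupposes some way of matching newly posted content against content already mitigated. As a minimal sketch of one such approach (not how any platform actually implements it), exact hashing can catch verbatim reposts, while shingle-based similarity catches lightly edited ones; the normalization and threshold here are illustrative assumptions.

```python
import hashlib
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so trivial edits don't defeat matching."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(cleaned.split())

def exact_fingerprint(text: str) -> str:
    """Stable hash for reposts that are identical after normalization."""
    return hashlib.sha256(normalize(text).encode()).hexdigest()

def shingles(text: str, k: int = 3) -> set[str]:
    """Set of k-word shingles used for near-duplicate comparison."""
    words = normalize(text).split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the shingle sets of two texts."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

removed = "Secret US biolabs in Ukraine cultivated dangerous pathogens."
repost = "SECRET us biolabs in ukraine cultivated dangerous pathogens"
edited = "Secret US biolabs in Ukraine cultivated dangerous pathogens!!! Share widely!"

assert exact_fingerprint(removed) == exact_fingerprint(repost)  # identical after normalization
print(round(jaccard(removed, edited), 2))  # 0.75: near-duplicate, candidate for the same action
```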