As the midterms draw closer, Facebook is taking measures to curb the spread of misinformation from Russian propagandists alleged to have been the perpetrators behind false news spread during the 2016 U.S. presidential election.
Now, the social media company has announced that it will begin fact-checking not just articles and links but also photos and videos, in a bid to stem the spread of false news to the masses.
In a statement from the company’s product manager, Tessa Lyons, it was revealed that Facebook plans to use both technology and human reviewers to eliminate mass misinformation on their platform with a specific focus on verifying visual formats.
Lyons said the company intends to check for misinformation not only in articles and links but also in visual media such as photos and videos; in addition, reviewers will be on the lookout for subliminal messages hidden in both video and audio files.
The fact-checking team will target videos and photos that appear to be used out of context, digitally manipulated, or plainly misinforming.
However, as Lyons explained, this will not be an easy task for the social media giant: there is no definitive set of guidelines to streamline the process.
Manipulated content is not necessarily bad information, she said, adding that Facebook intends to use technology to identify manipulated content but will ultimately rely on fact-checkers to determine whether a post has indeed been doctored to spread false information.
Russian Group in Question
It is alleged that a group with ties to Russia known as the Internet Research Agency is behind the creation and spreading of doctored photo and video content that’s been altered to spread misinformation.
Ahead of the upcoming midterm elections, Facebook is acting to stop the spread of false digital content, following a string of false media posted on the site since the 2016 presidential election.
Facebook will also be counting on its users to flag content that they deem misinforming or that appears to have been manipulated to spread false information. The company intends to analyze the comments sections of suspicious posts as well, as a means of weeding out illegitimate accounts.
Many anticipate that this will lead to mass-reporting problems, which have already been witnessed on the platform and on other social networks like Twitter.
The Internet Research Agency
Earlier this year, 13 Russian citizens and three Russian companies were implicated in an indictment spearheaded by U.S. Department of Justice Special Counsel Robert Mueller.
According to Mueller, the Internet Research Agency, an independent group believed to have strong affiliations with Russia, allegedly helped create videos and other graphical content in a bid to spread misinformation on social networks during the 2016 presidential election.
Facebook is expected to encounter hurdles when designing algorithms that detect manipulated content, not to mention un-doctored graphical material that has been posted out of context.
The company seeks to shut down an avenue for propaganda that suspected Russian propagandists reportedly exploited relentlessly to spread misinformation during the last presidential election.
Fact-Checking Information Expected to Be an Extensive Process
In the statement, Facebook admitted that the task ahead is a difficult one, though not insurmountable. Propagandists are known to push the same agenda repeatedly, and sometimes simultaneously across various forms of media.
As such, it is possible for doctored information to appear in a video, a photo, an article or even an audio file tucked behind a harmless video.
Fighting misinformation, according to Lyons, will rely heavily on the social media company’s ability to fact-check information across different types of media.
That, plus the analysis of user comments posted under content suspected of being altered, will guide the company toward weeding out users with a history of posting false or misleading content.