Meta, the parent company of WhatsApp, Facebook and Instagram, is discontinuing its third-party fact-checking program and turning to a community-based approach to combating disinformation.
In a statement on Tuesday announcing the changes, Joel Kaplan, Meta's chief global affairs officer, said the company would move to a community notes program, an approach used on X.
Community notes allow social media users to attach fact-checking labels or added context to potentially misleading posts.
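X has open-sourced the ranking algorithm behind its community notes, which surfaces a note as helpful only when raters who usually disagree both rate it positively, the so-called "bridging" approach. The toy sketch below illustrates that idea with a heavily simplified matrix factorisation; the ratings data, threshold and hyperparameters are hypothetical, the real system (github.com/twitter/communitynotes) is far more involved, and Meta has not detailed its own implementation.

```python
# Toy sketch of the "bridging" idea behind X's open-sourced Community Notes
# ranker (simplified; see github.com/twitter/communitynotes). Each rating is
# modelled as  r = mu + b_rater + b_note + f_rater * f_note,  where the f_*
# terms capture a viewpoint axis. A note surfaces only if its intercept
# b_note is high, meaning raters on both sides of the factor axis found it
# helpful. All data and hyperparameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Rows = raters, cols = notes; 1 = rated "helpful", 0 = "not helpful",
# NaN = not rated. Raters 0-1 and 2-3 form two opposing camps; note 2 is
# rated helpful by a rater from each camp.
R = np.array([
    [1.0, 0.0, 1.0, np.nan],
    [1.0, 0.0, np.nan, 1.0],
    [0.0, 1.0, 1.0, np.nan],
    [0.0, 1.0, np.nan, 0.0],
])

n_raters, n_notes = R.shape
mask = ~np.isnan(R)

mu = 0.0
b_r = np.zeros(n_raters)              # rater leniency intercepts
b_n = np.zeros(n_notes)               # note helpfulness intercepts
f_r = rng.normal(0, 0.1, n_raters)    # rater viewpoint factors
f_n = rng.normal(0, 0.1, n_notes)     # note viewpoint factors

lr, reg = 0.02, 0.1
for _ in range(5000):                 # gradient descent on squared error
    pred = mu + b_r[:, None] + b_n[None, :] + np.outer(f_r, f_n)
    err = np.where(mask, R - pred, 0.0)
    mu += lr * err.sum() / mask.sum()
    b_r += lr * (err.sum(axis=1) - reg * b_r)
    b_n += lr * (err.sum(axis=0) - reg * b_n)
    f_r += lr * (err @ f_n - reg * f_r)
    f_n += lr * (err.T @ f_r - reg * f_n)

# Note 2 is liked across the viewpoint split, so its intercept stays high;
# notes 0 and 1 split along the factor axis and fall below the bar.
print("note intercepts:", b_n.round(2))
print("shown as helpful:", b_n > 0.3)   # hypothetical threshold
```

The key design choice is that polarising notes are explained away by the viewpoint factors rather than the intercept, so raw vote counts alone cannot push a one-sided note onto the platform.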
Kaplan said the third-party fact-checking program did not fulfil the intent for which it was created.
“Experts, like everyone else, have their own biases and perspectives,” Kaplan said.
“This showed up in the choices some made about what to fact check and how. Over time we ended up with too much content being fact checked that people would understand to be legitimate political speech and debate.
“Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended to inform too often became a tool to censor.”
A Meta spokesperson said the company had seen the community notes program work on X, which informed its decision to switch approaches.
“We think this could be a better way of achieving our original intention of providing people with information about what they’re seeing – and one that’s less prone to bias,” Kaplan said.
Meta is the latest tech company to make changes to the way social media users consume information.
Google’s YouTube has also begun experimenting with community notes.
Last June, the company started a pilot program to allow a group of users to place community notes on videos, adding more context and information.
The platform is still relying on third-party evaluators, however, to judge whether the notes are helpful.
The changes also come as US President-elect Donald Trump prepares to take office for a second time.
A POSSIBLE SURGE IN THE SPREAD OF DISINFORMATION
Community-based content moderation, an approach that utilises user-generated knowledge to shape how online content is ranked and displayed, is contested among experts as a tool for combating disinformation.
While some argue that the method is cost-effective and fosters a sense of responsibility among users, fears have emerged that platforms may struggle to maintain consistent quality control, as community members may prioritise different issues based on personal biases or experiences.
X, formerly Twitter, has faced persistent criticism over its approach to curbing disinformation.
Among Elon Musk’s first actions on buying the company was a drastic reduction in online moderation, accompanied by the relaxation of previous safeguarding rules.
Coupled with the sale of verification ticks and adjustment of algorithms, these actions facilitated the spread of disinformation.
Musk, who played a significant role in Trump’s recent electoral victory, has been accused of using propaganda to increase the president-elect’s popularity.
In its 2024 Global Risks Report, the World Economic Forum (WEF) said that as technological risks persist, accurate information will come under growing pressure.
The report ranked misinformation and disinformation as the most severe global risk anticipated over the next two years, with foreign and domestic actors expected to use them to further widen societal and political divides, especially by undermining the legitimacy of elected governments.
A recent survey named Nigeria as one of the 10 countries in the world where false information poses the biggest threat.