
In recent weeks, public attention has once again focused on the actions of Meta, one of the world’s largest tech companies. The trigger was an incident involving a video spread on Facebook in which real footage of protests in Serbia was artificially altered: false subtitles and audio were added to create the impression that the clip showed support for former Philippine President Rodrigo Duterte in the Netherlands. Despite the obvious forgery, the video quickly gained traction, reaching more than 100,000 users before platform algorithms limited its distribution outside the United States. Human moderators intervened only after the case was brought before the independent Oversight Board.
This incident once again cast doubt on the effectiveness of Meta’s efforts to combat disinformation. Although the company has repeatedly declared its commitment to transparency and accountability, its actions in practice often prove inconsistent. In this case, the Oversight Board concluded that, despite formal compliance with internal rules, the video should have been flagged as high-risk and clearly manipulated. Meta nevertheless chose to keep it online, taking advantage of the vagueness of its own standards.
The board recommended that the company improve the labeling of such materials, implement separate protocols for cases of clear disinformation, and strengthen automatic detection mechanisms to catch fakes before they spread widely. None of these measures was promptly implemented: the video remains accessible, and Meta’s stance has not changed. As a result, the board’s recommendations stayed on paper and had no impact on the platform’s actual policy.
A Systemic Problem or an Isolated Failure?
Experts point out that such incidents are not the exception, but the result of the company’s consistent approach. Meta does not so much fight falsehoods as coexist with them—balancing between user interests and profits from engagement. As long as algorithms drive up activity, and legal loopholes allow the rules to be interpreted in the company’s favor, the spread of manipulative content becomes part of the business model.
In a world where millions of people get their news and form opinions through social media every day, such a policy takes on particular significance. Moderation decisions become not a means of protecting the truth, but a rhetorical tool to justify inaction. In essence, Meta does not aim to stop the spread of fakes, but merely seeks ways to explain their presence on the platform.
Public Response and the Consequences for the Digital Space
Public reaction to the latest scandal was predictably intense. Users and human rights advocates are demanding greater transparency and accountability from Meta, warning of the dangers of legitimizing disinformation. As the lines between truth and fiction blur, each such episode undermines trust in digital platforms and heightens anxiety over their influence on public opinion.
While Meta responds with formal statements and is slow to introduce real changes, experts warn that unless the situation improves, similar incidents will keep happening. This risks further eroding trust in social networks and will increase pressure from regulators and civil society.
Internal Mechanisms and the Role of the Oversight Board
Meta’s Oversight Board was established as an independent body to review controversial moderation cases. However, its recommendations are often left unimplemented. In the case of the fake video, the Board explicitly called for stricter policies on manipulative content, but the company responded only with formal statements.
This approach raises questions about the true independence and effectiveness of oversight structures within major tech corporations. As long as the Board’s decisions remain non-binding, its impact on company policy will remain limited.
By the Way: What Is Known About Meta
For reference, Meta is an American technology corporation formerly known as Facebook Inc. The company was founded by Mark Zuckerberg in 2004, originally centered on the Facebook social network. In 2021, it changed its name to Meta to highlight its shift toward developing the metaverse and new digital platforms. The company’s portfolio also includes Instagram, WhatsApp, and Oculus. Meta has repeatedly faced criticism over its privacy policies, handling of personal data, and content moderation practices. Nevertheless, the company remains one of the most influential in the world, connecting billions of users across the globe. In recent years, Meta has invested heavily in artificial intelligence and virtual reality, but ethical and accountability concerns remain at the forefront of public attention.
