
June 27, 2025

On Thursday, Brazil’s Supreme Court ruled that digital platforms are responsible for users’ content — a major shift in a country where millions rely on apps like WhatsApp, Instagram, and YouTube every day.
The ruling, which goes into effect within weeks, requires tech giants including Google, X, and Meta to monitor and remove content involving hate speech, racism, and incitement to violence. If the companies can show they took steps to remove such content expeditiously, they will not be held liable, the justices said.
Brazil has long clashed with Big Tech platforms. In 2017, then-congresswoman Maria do Rosário sued Google over YouTube videos that wrongly accused her of defending crimes. Google did not remove the clips right away, kicking off a legal debate over whether companies should be punished only if they defy a court order.
In 2023, following violent protests largely organized online by supporters of former President Jair Bolsonaro, authorities began pushing harder to stop what they saw as dangerous behavior spreading through social networks.
Several countries, including India and Indonesia, already have laws enabling the quick removal of content deemed illegal or inappropriate — in contrast to Section 230 in the U.S., which shields online platforms from liability for content posted by users. With this week’s order, Brazil will have one of the strictest regulatory regimes in the world for online platforms.
Brazil is one of the largest markets for social media companies, with about 144 million users on YouTube, some 112 million on Facebook, and around 140 million on Instagram. It also has more than 145 million active users on WhatsApp, the most popular social media and communication platform in the country.
The court’s decision could lead to a standoff between the U.S. and Brazil. Last month, the Trump administration said the U.S. would punish foreign leaders who go after American tech firms.
Here’s a roundup of expert comments:
What is the news?
Patricia Peck Pinheiro, specialist in digital law and member of the National Data Protection Commission of Brazil’s National Council of Justice:
Previously, platforms were liable for user-generated content only if it wasn’t removed after a court order; now they can be held liable upon mere notification. If a platform becomes aware of illegal content and fails to act with due diligence, especially in serious cases like hate speech, deepfakes, or fraud, it will be subject to a fine.
The court’s decision also introduced the concept of systemic failure, which holds providers liable when they fail to adopt preventive measures or remove illegal content. Now, platforms will be expected to establish self-regulation policies, ensure transparency in their procedures, and adopt standardized practices.
Why is this significant?
Ronaldo Lemos, co-founder and chief science officer at the Institute for Technology and Society of Rio de Janeiro:
The impact of this ruling will be very broad. The most affected content will be related to political criticism, reports of corruption, and sensitive discussions involving human rights. This type of content may end up in a gray area where providers feel pressured to take it down out of fear of being held jointly liable.
There will also be an increase in obligations for prior moderation. This will push platforms, and even small internet services or forums, into a state of fear and caution, leading them to proactively remove legitimate content just to avoid lawsuits or financial penalties. This creates a regime of fear and doubt in Brazil.
The Brazilian judiciary, rather than being spared, will become even more of an arbiter in all types of disputes, with an added problem: Today, disputes happen while the content is still online. This leaves Brazil in a very dark place.
What happens next?
Paloma Rocillo, director of the Brazilian Institute for Research on Internet and Society (IRIS):
Digital platforms will now be subject to liability for a much wider range of online content. Before the ruling, platforms were held liable only if they failed to comply with a court order. Now, there is a significantly larger set of normative elements that lower courts must consider when issuing their decisions.
IRIS conducted a study analyzing over 300 court rulings on content moderation this year and found that 28% of them had no legal basis at all. In other words, judges were deciding based on their personal judgment. With the new understanding established by the Supreme Court, the judiciary now has a wide range of parameters for interpreting the law.
On the side of application providers, the decision is also likely to significantly affect business models. That’s because it introduces new layers of liability, requiring providers to be more rigorous in moderating content, precisely because they can now be held accountable for omissions.
What is expected of Big Tech companies?
Alessandra Borelli, specialist in digital law and data protection, and CEO at Opice Blum Academy, Brazil:
The new legal landscape requires proactive action from Big Tech companies in cases posing high risk to public safety, fundamental rights, or democracy; rapid identification of content boosted or promoted by bots and artificial networks; and the adoption of appropriate measures to prevent the widespread circulation of unlawful content.
Big Tech companies will also be required to comply with new structural obligations, including the creation of clear self-regulation rules with due process, user support channels, and mechanisms to appeal content removals. They will now need to publish annual transparency reports, particularly regarding removed content, received complaints, and paid promotions, and establish legal representation in Brazil with full authority to act in judicial and administrative matters.
If these measures are not implemented, especially in the cases outlined in the Supreme Court’s ruling, platforms may be held liable for systemic failure — a new category of civil liability that reinforces the duty of care within Brazil’s digital ecosystem.
How have Big Tech companies responded?
The shift “imposes strict liability regardless of prior notice to providers, moving away from the principle of content moderation to force platforms into active monitoring, which would constitute a serious violation of fundamental rights,” industry lobby groups representing Big Tech companies in Brazil said in a statement in December.
“Over the past few months, Google has expressed concerns about changes that could affect freedom of expression and the digital economy,” the company said in a statement Thursday. “We remain open to dialogue.”
Meta is “concerned about the implications of the ruling on speech and the millions of businesses” that rely on its apps in Brazil, a spokesperson for the company told Rest of World.
By Jorge C. Carrasco, Rest of World.