Censorship in the digital age: can humans be manipulated by algorithms?
- norakwr
- June 8
- 4 min read
Last updated: June 30
In the digital age, platforms like TikTok and Instagram have become more than just sources of entertainment - they are where millions get their news, form opinions and engage with the world. To us, the general public, social media is marketed as a place for self-expression, for connecting with people who share our interests, and for letting different cultures and religions interact freely - but has it become a weapon against the very things it promised to promote? Algorithmic censorship on social media platforms is not just a safety mechanism - it may be subtly shaping political discourse and numbing collective outrage.
In his novel 1984, George Orwell introduces Newspeak, a language designed to limit thought and expression. This might sound dystopian and fictional, but the rise in popularity of words such as “unalive” when referring to death and “grape” when referring to rape shows that censorship is subtly invading our havens of self-expression. It is impossible to pinpoint the exact origin of these words, but it is undeniable that they have spread like the plague, and the chances of hearing words like “gun” (gvn) or “lesbian” (le$bean) said plainly on TikTok are near zero.
The question is, why have we acceded to this TikTok version of Newspeak, and how does it relate to censorship? By masking negative and uncomfortable words with these “family-friendly” substitutes, creators ensure their videos won't be taken down by the algorithm and that they still have their chance at the “15 minutes of fame” phenomenon. As creator @i_am_a_kar pointed out, the use of these words has become so widely accepted that it could even be considered disrespectful. Picture this: your friend is brutally murdered by a racist police officer and the story goes viral. Suddenly everyone is discussing the issue, and you decide to watch some videos about it, only to be faced with creators novelizing the death and saying your friend was “unalived by an ungood r4ci$t cop.” How does that accurately paint the picture of what happened? How does that phrase convey the brutality and inhumanity of the situation? It doesn't. Online communities have become so averse to being confronted with negativity that we all hide behind the algorithm, letting it censor negative words so that we can be told someone was “sa'd” instead of sexually assaulted.
Here comes another question, though: is this a psychological phenomenon, or a way for tech giants and governments to distract us from the current state of our society? All of these words are, in a way, euphemisms for horrible events, so is TikTok Newspeak a way for us to become insensitive to the world around us, to not react and not resist? One could argue it is just a way to avoid being shadowbanned, but in a world of war and control, social phenomena like this one shouldn't be ignored.
Moreover, since the Trump administration took office in January 2025, American social media audiences have been increasingly deprived of important news that may oppose the values of the White House. It is no secret that the White House is trying to ban certain books - as reported by the American Civil Liberties Union - and limiting the media outlets and reporters present at press conferences. But there are other, more discreet ways censorship is being imposed. A recent example is a video by American creator @longlivejudah expressing his feelings about the looming dictatorship in the USA. In it he says, “it's not pre-dictatorship, it's not up and coming, it's here, real time.” He discusses the fact that extremists now feel free to voice their oppressive views, and he criticizes Trump. Constitutionally, this is all within the creator's rights: he is simply exercising his freedom of speech. Yet when a user tries to share the video, a warning appears, saying “check your sources before sharing.” This warning only comes up for American users and has become a topic of discussion within the community, with people voicing their opinions and encouraging others to check whether they too have been affected.
In the end, algorithmic censorship is not just about protecting users from harm - it’s about shaping how we communicate, how we feel, and how we respond to reality. By forcing us to self-censor through euphemisms and sanitizing politically sensitive content, platforms risk producing a society that is emotionally detached, politically passive, and incapable of naming injustice for what it is. The digital world may appear free and open, but if algorithms are quietly deciding what we can say, see, and share, then freedom itself is an illusion.
We must begin to question who benefits from this silence and who suffers because of it. Otherwise, we risk becoming fluent in a language that no longer allows us to resist.
Helena Valansi
Rio de Janeiro, Brazil





