In the twentieth century, totalitarian rule was built on simple things: paper lists, fear, and backrooms with wires in the walls. People disappeared into archives, interrogations, and camps. But even then, it all worked thanks to the technologies of the time — typewriters, intercepted letters, shortwave radio. Technology didn’t save anyone from dictatorship. It merely served it.
In the twenty-first century, things became more convenient. The user leaves the traces himself: he clicks “like” unprompted, turns on his camera and location voluntarily. No one has to search for him anymore — he is everywhere. A government that knows how to use those traces no longer needs a schoolboy informant or a paper questionnaire. It has analytics, algorithms, and machine learning.
Today, digital tools are not just auxiliary instruments. They are part of the political machine. In countries like China, Iran, and Russia, this is no exaggeration. There, technology is the vertical of power: not just a means of control, but the very environment in which control is exercised.
Russia followed this path in its own way. First, as a state that entered the digital age late. Then, as one that learned to wield it with a precision and ruthlessness worthy of its Soviet past. From the Yarovaya surveillance laws to VPN bans, from the development of a “Sovereign Internet” to Telegram bots operated by the Ministry of Internal Affairs — everything serves a common goal: to keep the information space under lock and key.
This is no anomaly. It’s a trend. And it deserves a closer look — how exactly technology has become one of the central instruments of a new kind of totalitarianism.
From Typewriters to Traffic Cameras
The idea that dictatorship requires brute force and shouting is outdated. The more efficient regimes have always preferred paperwork. Control is clearest when it’s documented.
In Nazi Germany, IBM punch cards helped track train schedules, census records, and — eventually — populations deemed undesirable. A machine, a database, and a bureaucrat with a pencil could do what a thousand soldiers could not: organize extermination with clockwork precision.
The Soviet Union, for its part, was less technologically advanced but no less systematic. The KGB didn’t need facial recognition — it had neighbors. The system worked because it didn’t trust individuals; it trusted structure. Files were kept by hand. Telephones were tapped manually. But even analog repression was still technical repression. Devices amplified control.
East Germany’s Stasi pushed this to near perfection. Their archive of handwritten reports, surveillance photographs, and magnetic tape recordings is now a museum — but only in form. The spirit of the system survives, reborn in digital infrastructure.
The transition to digital began quietly. In the 1990s, with the collapse of old systems and the explosion of the internet, many assumed repression would weaken. But what really happened was an upgrade. Where wiretaps once required a court order (or at least a technician), now metadata does the work. Where informants once had to observe and report, now user behavior is tracked in real time.
In the early 2000s, authoritarian governments began to understand: data is loyalty. Or, more precisely, data is obedience. And those who control the network don’t need to shout — they can simply disconnect.
Russia, again, came late to this realization. But once it arrived, it moved quickly.
Building the Digital Panopticon
The genius of Jeremy Bentham’s Panopticon was not in its walls but in its uncertainty. Prisoners never knew when they were being watched — only that they could be. The result was obedience, self-regulated.
Authoritarian regimes in the 21st century understood this concept intuitively. But unlike Bentham, they no longer needed a central tower. They had something better — servers, algorithms, and user consent.
In China, the system is perhaps most refined. Hundreds of millions of surveillance cameras, many with facial recognition, cover streets, metro stations, classrooms, and homes. Artificial intelligence scores behavior. Cross the road improperly, and your face appears on a public screen. Speak ill of the Party online, and your social credit plummets — no loan, no flight, no job interview. It is not repression by force. It is control by architecture.
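To make the mechanism concrete, here is a deliberately toy sketch of the threshold-based gating such a scoring system implies. Every name, penalty, and cutoff below is invented for illustration; the real systems are opaque and vastly more complex.

```python
# A toy sketch of score-based gating, as social-credit reporting implies it.
# All rules, penalties, and thresholds are invented for illustration.

INFRACTION_PENALTIES = {
    "jaywalking": 5,
    "online_dissent": 50,
    "late_loan_payment": 20,
}

SERVICE_THRESHOLDS = {
    "bank_loan": 600,
    "air_travel": 550,
    "job_screening": 580,
}

def apply_infractions(score: int, infractions: list[str]) -> int:
    """Subtract a penalty for each recorded infraction."""
    for kind in infractions:
        score -= INFRACTION_PENALTIES.get(kind, 0)
    return score

def allowed(score: int, service: str) -> bool:
    """A service is granted only if the score clears its threshold."""
    return score >= SERVICE_THRESHOLDS[service]

score = apply_infractions(650, ["jaywalking", "online_dissent"])
print(score, allowed(score, "air_travel"), allowed(score, "bank_loan"))
# 595 True False: still free to fly, silently refused the loan
```

The architecture is the point: the punishment is not an arrest but a silent threshold check, executed at every point of service.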
In Iran, censorship works through blunt instruments — internet throttling, blackouts, mandatory data localization. But it’s not crude. It is calibrated. During protests, mobile networks are “accidentally” disabled in targeted neighborhoods. VPNs are outlawed. Platforms are forced to cooperate or blocked entirely. The message is clear: the state is the gatekeeper.
And in Russia, the model is hybrid. Technocratic without being fully efficient, digital without being truly advanced. But effective nonetheless.
The “Yarovaya Law” mandates that telecom providers store the content of user communications — calls, messages, traffic — for six months, and metadata for up to three years. Operators must give the FSB direct access. The “Sovereign Internet” project goes further: it gives the government the infrastructure to isolate the Russian segment of the internet from the global web entirely. Not just censorship, but autarky.
Facial recognition systems, deployed under the guise of public safety, are now used to identify protesters in Moscow. Metro turnstiles read faces. Police receive instant alerts. No court orders needed — just a database and a camera.
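In outline, this kind of identification reduces to comparing vectors. The sketch below shows only the matching step, with made-up embeddings and an illustrative threshold; in deployed systems, a trained neural network produces the vectors from camera frames.

```python
# A minimal sketch of camera-to-watchlist matching: faces become embedding
# vectors, and a "hit" is a similarity check against a database. Embeddings
# and threshold here are invented stand-ins.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical watchlist: person ID -> face embedding (random stand-ins).
rng = np.random.default_rng(0)
watchlist = {f"person_{i}": rng.normal(size=128) for i in range(1000)}

MATCH_THRESHOLD = 0.6  # illustrative; real thresholds are tuned per model

def check_face(embedding: np.ndarray) -> str | None:
    """Return the watchlist ID of the best match above threshold, else None."""
    best_id, best_sim = None, MATCH_THRESHOLD
    for person_id, stored in watchlist.items():
        sim = cosine_similarity(embedding, stored)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id  # a hit triggers the alert; no human review is required

# A camera frame that happens to encode person_42's face, plus sensor noise:
hit = check_face(watchlist["person_42"] + rng.normal(scale=0.05, size=128))
print(hit)  # person_42
```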
Citizens walk through the system unaware. They unlock their phones with their faces. They order food through apps linked to their ID numbers. They leave a trail. And the state, quietly, follows it.
The Panopticon no longer needs walls. It is inside the device, inside the feed, inside the code.
When Censorship Becomes Automated
In the past, censorship was a job. Someone had to read the manuscript, watch the film, listen to the recording, and decide what would reach the public. Mistakes were possible. Delays were inevitable. But at least it was human.
Today, censorship is code. Automated systems scan millions of messages, images, and videos in real time. There is no waiting. No appeal. A keyword triggers a flag; an image disappears; a user is suspended. No explanation is given — and none is required.
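At its simplest, such a pipeline is a pattern match wired directly to an action. A toy sketch with invented banned terms; production systems layer machine-learning classifiers, image hashing, and fuzzy matching on top of lists like this:

```python
# A minimal sketch of keyword-triggered moderation. The banned patterns and
# actions are invented for illustration.
import re

BANNED_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"\bprotest\b",
    r"\bstrike\b",
)]

def moderate(message: str) -> str:
    """Return the action for a message: no appeal, no explanation."""
    for pattern in BANNED_PATTERNS:
        if pattern.search(message):
            return "suppress"  # the message silently never appears
    return "deliver"

print(moderate("see you at the protest tomorrow"))  # suppress
print(moderate("see you tomorrow"))                 # deliver
```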
China leads in this field, as in many others. Content moderation on platforms like WeChat or Weibo operates at a pace no human could match. Images from protests, quotes from dissidents, even obscure references to banned topics are scrubbed within seconds. Entire conversations vanish mid-sentence. AI models are trained not only to detect dissent, but to anticipate it.
Russia adopted a more chaotic but no less effective approach. Roskomnadzor, formally the Federal Service for Supervision of Communications, Information Technology and Mass Media, maintains a constantly expanding blacklist of URLs, services, and platforms. Telegram was blocked from 2018 to 2020 — ineffectively — and the attempt marked a turning point. Since then, the state has refined its methods.
Now, content is filtered through automated DPI systems — deep packet inspection — deployed at the provider level. Social networks face demands to delete “extremist” material within 24 hours. The definition of “extremism,” naturally, remains fluid. Navalny’s videos? Extremist. Articles about war crimes? Extremist. Posts about the war in Ukraine? Blocked, if not criminalized.
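Conceptually, DPI filtering of this kind reduces to inspecting raw bytes for identifying strings, such as the server name that a standard TLS handshake still sends in plaintext. A simplified sketch with an invented blocklist; real DPI hardware parses the protocols properly and at line speed:

```python
# A simplified sketch of provider-level DPI: scan each packet's bytes for
# identifying strings and drop the connection on a match. Hostnames invented.
BLOCKED_HOSTNAMES = [b"blocked-news.example", b"banned-messenger.example"]

def inspect_packet(payload: bytes) -> str:
    """Decide a packet's fate from its raw bytes; no court order involved."""
    for hostname in BLOCKED_HOSTNAMES:
        if hostname in payload:
            return "drop"  # the user sees only a connection timeout
    return "forward"

# A standard TLS handshake leaks the destination hostname (the SNI field):
client_hello = b"\x16\x03\x01...blocked-news.example..."
print(inspect_packet(client_hello))  # drop
```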
Even memes are not safe. In 2023, a Russian schoolteacher was fined for sharing an image of Putin with a clown nose. The detection didn’t come from a neighbor’s denunciation, as in Soviet times. It came from a bot trained to scan social media for disrespectful imagery.
At the technical level, this is impressive. At the human level, it is chilling.
Iran, Vietnam, Turkey, and others now buy or copy these systems. Israeli and Western firms sell the surveillance software; local regimes integrate it into their repressive toolkits. What used to be manual — reading, watching, judging — is now mechanical. Efficient. Scalable.
And the user? The user learns to censor himself. Avoids certain words. Switches to code. Stops speaking. The system doesn’t need to block everything. It only needs people to believe that it might.
Tech Companies as Enablers — or Victims?
No totalitarian system can build a digital fortress without external help. Someone writes the code. Someone sells the servers. Someone licenses the software. And more often than not, that “someone” operates in California, Tel Aviv, or Berlin.
The modern surveillance state is not homegrown. It is assembled — from Western parts.
In the early 2000s, Yahoo! handed over email data to Chinese authorities. The result: a journalist imprisoned. Cisco sold networking gear that helped construct the Great Firewall. Nokia Siemens Networks supplied Iran with monitoring centers capable of deep interception. These were not rogue sales. They were contracts. Deals. Partnerships.
The companies didn’t need to support the ideology. They just had to follow the money.
Today, the pattern continues — more discreet, more complex. Western firms claim they sell “dual-use” technology: for marketing, for customer analytics, for network optimization. What the buyer does with it is “not our responsibility.”
In Russia, this ambiguity worked for years. Microsoft and Huawei operated openly, alongside homegrown players like Kaspersky. Cloud services were licensed. SDKs were embedded. Tools intended for business optimization were repurposed for surveillance. After 2014, some firms left. Others stayed — until 2022 forced a more explicit reckoning.
But not all collaborators are foreign. Russia developed its own ecosystem: VK, Yandex, Sber AI, Gosuslugi. Convenient, fast, and thoroughly integrated into the state. The boundaries between private company and government actor dissolved. An app used to order food can share data with the police. A search engine can rank state propaganda above all else.
China’s model is more honest — or perhaps more direct. There is no pretense of separation between tech and state. Companies like Alibaba, Tencent, and ByteDance are required by law to share data with authorities. Internal Communist Party committees guide company policy. The illusion of independence is minimal. Efficiency is maximized.
Even platforms that claim neutrality — like Apple, Google, or Meta — make quiet compromises. Apps are removed from stores. Maps blur borders. Search results are geo-tailored. In authoritarian markets, neutrality is just good business.
So who is responsible? The regime that censors — or the vendor that sells the tools? The engineer who builds the facial recognition model — or the state that points the camera?
The truth is, most repression today is not built by ideology. It is built by procurement.
Resistance Through Tech — The Flip Side
Total control, like total obedience, is a myth. Every system leaks. Every fortress has a blind spot. And in the same way that technology has strengthened authoritarian regimes, it has also created tools to undermine them.
The same VPN that hides a corporate IP address hides a dissident’s location. The same encrypted messenger that secures business deals secures protest logistics. The difference is not in the tool, but in who is using it — and for what.
In Iran, each wave of protest sees the same cycle: the internet slows, platforms go dark, access vanishes. But within hours, users switch to proxies, mesh networks, and satellite links. Information still flows — slower, riskier, but unstoppable. Citizen journalists upload videos through mirror sites. Developers build local alternatives. The more the regime tightens its grip, the more creative the response becomes.
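The circumvention loop itself is simple, which is part of why it is so hard to stamp out. A sketch of the fallback logic with placeholder proxy addresses; tools like Tor bridges or Psiphon automate exactly this pattern with far more sophistication:

```python
# A sketch of route fallback: try the direct connection, then cycle through
# proxies until one works. Addresses are placeholders; the SOCKS route
# requires the requests[socks] extra.
import requests

PROXY_CANDIDATES = [
    None,                                    # direct connection first
    {"https": "socks5h://127.0.0.1:9050"},   # e.g. a local Tor client
    {"https": "https://proxy1.example:443"}, # a volunteer-run proxy/mirror
]

def fetch(url: str) -> str | None:
    """Return page text via the first route that still works, else None."""
    for proxies in PROXY_CANDIDATES:
        try:
            resp = requests.get(url, proxies=proxies, timeout=5)
            resp.raise_for_status()
            return resp.text
        except requests.RequestException:
            continue  # this route is blocked or down; try the next
    return None
```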
In Russia, the internet once promised openness. But as it hardened into a zone of surveillance, users adapted. Telegram became a lifeline, despite the government’s attempts to block it. Anti-war campaigns emerged from pseudonymous accounts. Navalny’s team livestreamed investigations that millions watched — before they were censored. The footage remains online, mirrored a thousand times, immune to deletion.
Encryption has become a quiet revolution. Signal, ProtonMail, Tor — these are not protest tools in the traditional sense. They are protocols. Invisible defenses. A new kind of underground. No leaflets, no samizdat — just disappearing messages and minimized metadata.
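The primitive underneath these tools is ordinary public-key encryption. A minimal sketch using the PyNaCl library (not what Signal literally runs, which adds ratcheting and forward secrecy, but the same core guarantee): an intercepted ciphertext reveals nothing without the recipient's private key.

```python
# A minimal sketch of authenticated public-key encryption with PyNaCl
# (pip install pynacl). Only the intended recipient can open the message.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only public keys are ever exchanged.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place")

# The recipient decrypts with their private key and the sender's public key.
receiving_box = Box(recipient_key, sender_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at the usual place'
```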
Even hardware is being repurposed. In Belarus, hackers from the group “Cyber Partisans” breached state databases and disrupted the systems of the Ministry of Internal Affairs. In Myanmar, activists disabled military drones using spoofed signals. The state may have the infrastructure — but infrastructure has bugs.
Still, the balance is precarious. Many of these tools are difficult to use, risky to operate, or easily criminalized. Possession of a VPN app can be grounds for arrest. A shared tweet can lead to interrogation. The digital underground is fragile — and heavily surveilled.
And yet, it endures. Because even in systems built on visibility, there is always a margin of error. A window. A backdoor. And in authoritarian regimes, that’s all resistance needs to begin.
The System Is Watching
Modern authoritarianism is not about mass rallies or cults of personality. It no longer requires mobs in the streets or iron statues in public squares. It is quieter now. More technical. Less visible. And in many ways, more effective.
The real power lies not in the symbols but in the systems.
A camera does not question orders. A server does not forget. A content filter does not ask for context. In the past, repression needed people — censors, informants, executioners. Today, it needs only a machine trained to recognize patterns, and a law vague enough to punish whatever the machine flags.
The terrifying thing about digital authoritarianism is not that it is so advanced, but that it is so ordinary. Users scroll, swipe, post, without noticing the infrastructure around them. The control is ambient. Routine. Embedded in default settings and updated terms of service.
Russia’s model is illustrative. It is not the most efficient, nor the most innovative. But it is scalable. It combines legal ambiguity with technical flexibility. It leaves just enough freedom to maintain the illusion of normality — and enough punishment to make testing the limits risky.
China’s model is more complete. Iran’s more brutal. But the principle is the same: technology as the new terrain of power. Not just a tool of repression — the very shape it takes.
And yet the question remains: who builds these systems? Who maintains them? Who profits from them? Because the algorithms do not write themselves. The cameras do not install themselves. Somewhere along the chain, there are hands — often foreign — that tighten the screws.
This is not just about dictatorships. It is about the infrastructure of the 21st century. About the idea that neutrality in technology is possible. About whether the same tools that track consumer behavior should be trusted to police thought.
The system is watching. But someone built the system. And someone, eventually, will be asked to answer for it.
When Free Societies Build Tools for Tyrants
Totalitarianism is not a fixed geography. It is a possibility — latent in any state that mistakes efficiency for virtue and control for order. The countries we call “free” today are only a few legal amendments, a few crises, or a few elections away from drifting into something else. The infrastructure is already there: surveillance cameras on every corner, predictive algorithms in law enforcement, biometric databases “for convenience.” What’s missing is only political will — or a pretext.
That is why the lesson is not just for China or Russia. It is for everyone. A system that monitors everything, remembers everything, automates everything, can be taken over by anyone. And once it is taken over, it rarely lets go.
Technology should not be allowed to govern every corner of human life. Not because technology is evil — but because it is neutral, and neutrality obeys the strongest force in the room. A society that builds systems without limits may soon find itself living under one.