by Brian Hioe
Photo credit: Guerilla Publishing
New Bloom editor Brian Hioe spoke to James Griffiths, the author of The Great Firewall of China: How to Build and Control an Alternative Version of the Internet, on May 28th.
The Chinese edition of The Great Firewall of China will be released in Taiwan on June 4th, featuring recommendations from Puma Shen, Freddy Lim, Lai Pin-yu, Chen Bo-wei, and others.
Brian Hioe: First, I wanted to ask about the parallel relationship between western tech companies and China. You point out in your book that both sides have contributed to the development of surveillance and digital censorship technologies, pushing each other to do so, and that many key moments trace back to western tech companies seeking to enter the Chinese market.
James Griffiths: Yes, I took a look at it historically. There was this idea in the early stages of the Internet that it would serve as an engine of free speech or an engine of democracy, that this technology was inherently liberalizing or anti-censorship. There was a famous quote that used to be bandied about, that the Internet routes around censorship.
What this missed was that the benefits of these technologies didn’t only go to one side. They didn’t just go to activists and users, or to people who tried to organize online; they also went to the people who tried to stop that. And that was one of the big successes China had in the early stages of the Great Firewall.
Cover of the Chinese-language edition of The Great Firewall of China. Photo credit: Guerilla Publishing
It was a relatively small investment in technology at the time, though it has since become a vast one. It gave China a large degree of control over what people could access and provided basic early surveillance capabilities.
As it developed, the Chinese government embraced the technology it could initially buy from American companies, which were the industry leaders in the early stages of the Internet. As Chinese companies became more competitive, they outstripped some of their rivals, the technology began to be developed within China itself, and China has emerged in some spheres as an exporter of certain Internet control technologies. That obviously includes censorship, but also surveillance: Chinese technology companies are very adept at online surveillance.
BH: You mentioned that in many contexts around the world, the Internet was seen as a tool of empire or of western countries, precisely because people pointed to the democratic potential of the Internet. The Internet is then framed by these countries as something used for efforts at regime change.
What anxieties do you think the Chinese political leadership had about the Internet during its introduction into China that led to the development of the Great Firewall in the form it took?
JG: When the Internet first came to China, I don’t think a lot of these geopolitical concerns were as widespread or at the forefront of people’s minds. At the time, the Internet was still very much a scientific technology being used by researchers at universities. When it first rolled out, it was limited to universities, and its usage was fairly specific even within the academic sphere.
But as the Internet spread and developed, the authorities very quickly recognized it as a challenge: as a medium for organizing, and also just as a way of spreading anti-regime information or anti-regime literature. The Communist Party has always been very conscious of the potential for any medium or any form of publishing to be used against it, and it has always exercised a degree of control. That goes back to the fall of the Soviet Union and the lessons the Communist Party took from that. As we get to the point where mass media and mass communication take hold, everyone is using this, and it becomes much closer to the modern Internet we know today, there is a greater degree of concern in China.
That’s not helped by a kind of narrative which is very common in Washington and promoted by a lot of American officials. Thomas Friedman called the Internet a nutcracker to open closed societies.
It’s based on a very self-serving concept: for example, looking at what caused the collapse of the Soviet Union and focusing only on the work of projects such as Radio Free Asia, while ignoring every other factor that put the USSR in dire straits. From that they extrapolate: “Well, if Radio Free Asia brought down the Soviet Union, then the Internet can bring democracy to anyone.”
Maybe there is a tiny element of truth to that, in the way the Internet could be used as a tool for spreading solidarity and for organizing outside of government structures. But a certain element within the US latched onto this claim very heavily and began promoting it, claiming credit for the Internet for events such as the Green Revolution in Iran or the Arab Spring, even though when you look back at it in the cold light of day, the actual influence of online networks on those kinds of mass movements was very limited.
In response to that, you get Chinese officials who are already fairly suspicious of the Internet. Americans didn’t cause them to have these attitudes, they were already fairly suspicious. They already saw the Internet as something of an American technology, because of its history, and the fact of who a lot of the dominant companies are. But to see US officials and leading US commentators start talking about the Internet as a potential way to take down China or open up authoritarian regimes just fuels that kind of paranoia. That provides a justification for further action.
As I recount in the book, there was a poignant moment during Google’s last year or so in China, when the government was really cracking down on it and the American government was speaking out on Google’s behalf.
There was a message revealed in leaked WikiLeaks cables. A Chinese expert went in to talk to the US embassy in Beijing and basically said, “Look, I get what you guys are doing. But the way you’re talking at present is making the Internet a battlefield for the US-China relationship. That is only going to further drive suspicion and hostility towards the Internet.” And that is exactly what we’ve seen in the years since.
BH: On those lines, why do you think it is such a priority for China to surveil overseas dissident communities, the overseas Tibetan community, and so forth? Is it because of this anxiety that the Internet could be used as a way to sow dissent in China? You discuss in the book the surveillance of the overseas Tibetan community, and that its members have to take precautions because they are so heavily surveilled. Even children are taught digital security at a very early age.
JG: I think that comes from two intertwined attitudes within Chinese governance. There is this deep paranoia and distrust of any organizing or any kind of system which exists outside of the Communist Party sphere, whether it is political or not. We see this in China both with openly Marxist groups that get cracked down on despite conforming to the official ideology, and with dumb meme groups that get banned or restricted because they’re getting too popular and an identity is developing that is associated with them.
If you combine that with the broader idea that the Chinese diaspora, in the widest sense, belongs to Beijing, or should at least owe a certain degree of loyalty to Beijing, then groups such as Tibetans or Uighurs, or people in Hong Kong and Taiwan, come to be seen as part of the Chinese diaspora. That makes them part of China’s responsibility, as it were.
They are obviously organized outside of the official system, and a consequence of that is a very strong desire to surveil them and see what’s going on. Tibet has been one of the most studied examples, both because the Tibetan diaspora, as it exists in Dharamsala at least, is pretty small and easy to study, and because it has connections with a lot of Western human rights groups. So researchers and reporters have been able to see, through the Dharamsala community, the experience of constantly being targeted with hacking, phishing attempts, and open surveillance, and the degree to which this goes on.
We’ve found that plenty of Uighur activists overseas have spoken about similar things. There isn’t a headquarters of the Uighur diaspora the way there is for Tibet, but Uighurs in Germany and the US have talked a lot about how they have been subject to constant surveillance.
This is mainly motivated by the potential for these groups and these networks to be used for organizing resistance or even just organizing communities, organizing politically back in China in a way that is outside of the Communist Party’s remit.
Dharamsala, India, where many members of the Tibetan exile community reside. Photo credit: Wojciech Kocot/WikiCommons/CC
BH: That brings us to our next question. You discuss in the book that the Chinese government places a priority on preventing forms of collective organization, even, sometimes, organization that is actually pro-government. This is viewed as creating networks outside of government control, and as a result, there are attempts to crack down on it.
Do you think that’s the reason for, for example, the increasing use of automated technologies for censorship, using algorithms and big data, or just collecting data? There’s a lot of panic regarding the issue of social credit. I think it’s very interesting, because the general perception may be that the aim is to keep certain ideas out of circulation. But what you talk about in the book is attempts to prevent any form of association that is outside the control or purview of the state.
JG: There are a couple of things here. Absolutely, there is this desire to prevent any kind of group or organization from springing up outside of the official party system. When it comes to automation, some of that is, I think, a natural trend towards more automation in a lot of spheres. It’s just the way the Internet is developing.
But it’s also part of a pattern which has existed since at least the 2000s, if not the 1990s, of almost privatizing the censorship system within China, which was once run almost directly by the party. Originally, it was state security employees and police employees who were actively monitoring things.
While those people still exist, the vast majority of day-to-day censorship, as opposed to surveillance-type activity, that is, mass censorship, is actually carried out by private Internet companies. It’s done by employees of Weibo and WeChat, and it’s done to avoid running afoul of various regulations. By putting the responsibility on these companies, China has saved itself a lot of money, as well as created incentives for innovation in the sphere of censorship, especially regarding automated censorship and control.
There is the potential for this to evolve. When we talk about things like social credit, it should always be emphasized that it’s still a long way off and still in the early stages of development. Something is supposed to launch this year, but even before the coronavirus, it looked nowhere near ready to actually launch.
Still, talking about it speculatively, as a proposal: if you read the proposals that have been put forward in various white papers, the interesting thing it does from a censorship perspective is this. If the first transition was from top-down, government-run censorship to intermediary, private-company censorship, then social credit and AI-driven censorship and surveillance push it down another level, putting the onus on the user to self-censor even more, to be very conscious and wary of their behavior online.
In the current system, if you’re an activist or a politically involved young person, you may be willing to risk your personal safety or your political freedoms or future job prospects. You may be willing to risk that to be politically active in China and to organize online and to do political shitposting and things like that.
If so, you’re probably the only person who gets punished. Even then, most of the time, you’re probably not going to get punished; you’re just going to get censored and blocked. You don’t get the kind of major repercussions proposed in the social credit system.
Hypothetically, we get to a position where you’re automatically punished under the system, and not only you: your actions could have repercussions for your family or your friends. So the potential costs of stepping out of line suddenly become much greater. It’s not just your employment you’re risking; it’s also your mom’s shop that you might drive out of business because suddenly her social credit score has gone through the floor. It pushes self-censorship onto people at a level that has never been seen before.
BH: Returning to the concerns of your book in the present, then, perhaps we could discuss contemporary Chinese tech companies. There was a ruling in the Meng Wanzhou case today, so it seems quite timely.
Based on the history you outline in your book, what do you see as the continued relation between private companies and the Chinese state in terms of advancing surveillance technology and its interface with the West? There is concern about China propagating its platforms overseas, through Chinese tech companies such as Huawei, and so forth.
And as you discussed earlier, there’s still this desire among Western tech companies to enter China. But it also occurs the other way around, with many Chinese companies seeking to enter the US. Things such as social credit are brought up as a way to attack these companies, though this can sometimes be a little misleading. It’s always this sort of dystopian vision when, as you mentioned, it’s quite early for this kind of thing to happen.
JG: Well, yeah. It’s especially ironic when we see certain Western companies that have a history of cooperating with US government surveillance attack some of their Chinese competitors for doing the same.
But this is something that has changed quite rapidly since the book first came out. Even compared to just a year or so ago, the level of discomfort and suspicion around Chinese companies coming into various markets has increased dramatically.
That goes beyond Huawei, where you can see why there are security concerns, because of the very type of technology it creates: technology with great surveillance capabilities, which could cause a certain level of concern regardless of where the company is based.
It’s not just Huawei that now faces this kind of scrutiny. It’s also TikTok, which is having to go a very long way to distance itself from its Chinese ownership, moving its official parent company to the Caribbean and taking steps to distinguish itself from its parent company…
BH: Or even Zoom.
JG: Yeah, exactly. As a result, many Chinese companies have to deal with attention, scrutiny, and even hostility that they’ve never had to deal with before. What’s unfortunate sometimes is that this is moving more in the direction of just blocking Chinese companies from engaging with the rest of the world entirely, rather than using this as an opportunity to try to influence those companies to change how they behave internationally.
One example I would use there: it has been shown multiple times now, with Citizen Lab in Toronto having done a lot of research in this area, that WeChat, that is, Tencent, is surveilling and censoring people outside of China. They can use Chinese laws as a justification for doing that within the Great Firewall, but it’s difficult to see what legal justification or responsibility they have for doing the same outside of it.
This is an opportunity for large economies to say, “Look, you can come into our market, but you have to behave a certain way and meet a certain level of standards,” rather than taking a hands-off approach and then going, “Oh no, they are behaving in this way,” which, in the end, really shouldn’t be that surprising.
BH: Along those lines, what do you think are the lessons that can be taken from the history you outline regarding how to pressure tech companies over their practices? Are there ways of doing that? Or are we just stuck in this bind of great power competition between the US and China, in which surveillance and censorship technologies are developed on both sides, and both sides push each other to new heights in propagating and developing such technologies?
JG: Yeah, I think we have an opportunity when it comes to regulating technology companies and changing the conversation around the expectations we put on these companies. One of the motivations for writing this book, and one of my overriding concerns, was that as the hyper-libertarian, privatized model of the Internet that grew out of Silicon Valley in the 1990s and 2000s has failed, and as the cost of not regulating these companies, the massive monopolies they built up, and the huge powers they gained over users has become more obvious, there would be a tilt towards the other major model of technological and Internet development, which is China’s, as embodied by the Great Firewall.
That is taking place in a lot of countries, particularly in the developing world. But I also think we are at an in-between stage right now where there is an opportunity to push for a more user-driven model and to force companies to take more responsibility towards users, so they cannot just handwave concerns away by pointing to incredibly legalistic terms and conditions that only a tiny percentage of their users could realistically have looked at.
You are seeing some developments in this regard. I don’t think it goes anywhere near far enough, but Facebook’s idea of introducing a semi-independent oversight board is getting at the right idea: user participation and user ownership as a way of pressuring a lot of these companies, which would not exist without their users. Tech companies should be answerable to their users beyond the idea that those users could simply go somewhere else.
For many of the biggest tech companies, that isn’t possible. Most of these companies do not really have a serious competitor, and so the market cannot hold them to account. If we could get to a point where users can hold them to account, I think that could be a really positive development.
BH: Is there anything you’d like to say in conclusion to readers both Taiwanese and international? Particularly to a Chinese-language audience, since your book is being translated and released in Chinese. Are there takeaways that you hope readers will get from your book?
JG: Taiwan is in maybe a slightly enviable position compared to a lot of places, given how conscious both its people and its government are of the contrast across the straits. That creates a pretty high level of suspicion and hostility toward anything that looks like surveillance and censorship. Taiwan’s own experience under past dictatorship plays into this as well.
But the big takeaway of this book, I think, is that censorship and control of the Internet is not just a Chinese thing. China has gone a long way towards perfecting and building this model, but it’s attractive for democratic governments as well. Plenty of democratic governments have expressed the desire to control what people say and do online, even if only in certain spheres.
We’ve seen, since the Snowden revelations and even before, the degree to which plenty of democratic governments engage in huge amounts of surveillance online. The big takeaway should be not to be complacent about the risk of this amazing, liberatory technology being undermined, whether by local governments or by big technology firms acting in concert with governments, or aiming to get into massive markets like China.
BH: Thank you.