China’s 5G technology has now been banned in many countries, including Australia, New Zealand, the US, and many in the European Union. In 2019, a NATO Cyber Defense Center report identified Huawei’s 5G technology as a security risk.
Since September, telecommunications providers in the US have been able to apply for compensation through a $1.9-billion program designed to “rip and replace” Huawei and ZTE equipment, due to perceived risks to national security.
But fears over China’s attempts to export its digital and surveillance technologies go far beyond just Huawei and 5G. China has been accused of exporting “digital authoritarianism” and spreading “techno-authoritarianism globally.” It’s been declared a danger to the rest of the world.
In my research, I argue the story of digital authoritarianism is not that straightforward.
Technologies that help authoritarian leaders collect information and control their populations have been exported with few restrictions for decades. Although China does export ready-made surveillance systems to governments deemed oppressive, countries in Europe and North America have also done so, albeit more covertly.
China falls directly in the line of fire for criticism on this front, for two main reasons.
First, the country follows an authoritarian system. In a compilation of speeches by President Xi Jinping from 2012-18, he critiqued western political systems and called for greater “South-South collaboration” between China and countries in the developing world.
These views have since been incorporated as part of a new national ideology and China’s influential Belt and Road Initiative.
Second, both Chinese companies and the Chinese government have firmly maintained that countries are free to decide what they want to do with the technologies they purchase from China. In their telling, they are neutral actors selling neutral technologies to other countries.
China is the largest exporter of telecommunications equipment, computers, and telephones in the world, with the US as its biggest destination. It has also exported digital infrastructure to more than 60 mostly developing countries through its Belt and Road Initiative.
Some of the most problematic exports of Chinese surveillance technologies include:
- CloudWalk’s facial recognition database in Zimbabwe, which opponents say may be used to monitor government critics;
- technicians from Huawei engaging in political espionage in Uganda and Zambia;
- the development of a controversial new “fatherland card” to monitor civilian activities in Venezuela;
- the sale of smart video surveillance technologies to the previous authoritarian government of Ecuador.
However, Chinese companies are not the only actors in the global trade arena that benefit from the argument of “technological neutrality.”
Companies from Europe and North America jumped at the first chance they got to sell surveillance systems to China in the early 2000s. Many of those technologies strengthened China’s online censorship system.
In a watershed report in 2001, an independent researcher, Greg Walton, showed that international companies started marketing their products to Chinese public security agencies as early as 2000 during a large security expo in Beijing. The same expo continued to attract international companies until the COVID-19 travel disruptions in 2020.
In 2006, Cisco was investigated by a US House subcommittee for selling surveillance technologies to China. The company defended itself by stressing its right to international trade and technological neutrality.
A couple of years later, Cisco again defended its right to sell to China in a meeting with the US Senate Judiciary Subcommittee on Human Rights. A representative of the company argued: “One thing tech companies cannot do, in my opinion, is involve themselves in politics of a country.”
Earlier this year, investigative journalist Mara Hvistendahl also reported that Oracle (the same company that won the bid to co-host TikTok’s data in the US) had pitched its predictive policing analytics to public security agencies in China.
And in 2019, the UK was found to have exported telecommunications interception equipment to multiple countries, including Saudi Arabia and the United Arab Emirates.
A political science researcher at the University of Cape Town, Mandira Bagwandeen, argues it’s easy to point fingers at China, diverting attention from other countries:
“Let’s face it, if the US was really serious about restricting the spread of so-called ‘authoritarian technology,’ then it should also impose comprehensive measures and restrictions on both democratic and autocratic producers.”
The fact is that surveillance technologies capable of gathering and analyzing information about people are inherently political.
Princeton University Professor Xu Xu argues that digital surveillance resolves the “information problem” in authoritarian countries by allowing dictators to more easily identify those with anti-regime beliefs.
But regulating new technologies is difficult even in democratic countries. Australia is seeing this play out with the unregulated use of number plate recognition technologies by the police to monitor lockdown compliance.
The police have also tried to use COVID QR code check-in data numerous times as part of criminal investigations.
Unlike other electronic goods, surveillance technologies have the capability to shape and restrict people’s lives, rights and freedoms. This is why it is important they are regulated.
While it may be difficult to enact a unified set of rules internationally given the current tensions between China and the west, better monitoring and regulations at the domestic level could be the way forward.
One large initiative is a multi-year project run by the Australian Strategic Policy Institute to map the international expansion of Chinese technology companies.
This is helping to monitor the activities of Chinese surveillance tech companies and providing data for government policy briefs. When iFlytek, a Chinese artificial intelligence company tied to surveillance of Uyghurs in Xinjiang, marketed its products in New Zealand, the media relied on ASPI’s findings to pressure a New Zealand company into ceasing its collaboration with iFlytek.
And the European Parliament commissioned and published an extensive report on artificial intelligence in June 2021, which recommended establishing a security commission and new research center devoted to AI issues. It remains to be seen whether the report has any teeth, but it is the kind of start we need.
Ausma Bernot is a PhD candidate at Griffith University.