Gateway to Think Tanks
Source Type | Paper |
Document Type | Working Paper |
Title | The Global Expansion of AI Surveillance |
Author | Steven Feldstein |
Publication Date | 2019-09-17 |
Publication Year | 2019 |
Language | English |
Overview | A growing number of states are deploying advanced AI surveillance tools to monitor, track, and surveil citizens. Carnegie’s new index explores how different countries are going about this. |
Abstract | Executive Summary

Artificial intelligence (AI) technology is rapidly proliferating around the world. Startling developments keep emerging, from the onset of deepfake videos that blur the line between truth and falsehood, to advanced algorithms that can beat the best players in the world in multiplayer poker. Businesses harness AI capabilities to improve analytic processing; city officials tap AI to monitor traffic congestion and oversee smart energy metering. Yet a growing number of states are deploying advanced AI surveillance tools to monitor, track, and surveil citizens to accomplish a range of policy objectives—some lawful, others that violate human rights, and many of which fall into a murky middle ground.

In order to appropriately address the effects of this technology, it is important to first understand where these tools are being deployed and how they are being used. Unfortunately, such information is scarce. To provide greater clarity, this paper presents an AI Global Surveillance (AIGS) Index—representing one of the first research efforts of its kind. The index compiles empirical data on AI surveillance use for 176 countries around the world. It does not distinguish between legitimate and unlawful uses of AI surveillance. Rather, the purpose of the research is to show how new surveillance capabilities are transforming the ability of governments to monitor and track individuals or systems. It specifically asks:
Key Findings
Notes
A full version of the index can be accessed online here: https://carnegieendowment.org/files/AI_Global_Surveillance_Index1.pdf. An interactive map keyed to the index that visually depicts the global spread of AI surveillance technology can be accessed here: https://carnegieendowment.org/publications/interactive/ai-surveillance. All reference source material used to build the index has been compiled into an open Zotero library. It is available here: https://www.zotero.org/groups/2347403/global_ai_surveillance/items.

Introducing the AI Global Surveillance (AIGS) Index

AI technology was once relegated to the world of science fiction, but today it surrounds us. It powers our smartphones, curates our music preferences, and guides our social media feeds. Perhaps the most notable aspect of AI is its sudden ubiquity. In general terms, the goal of artificial intelligence is to “make machines intelligent” by automating or replicating behavior that “enables an entity to function appropriately and with foresight in its environment,” according to computer scientist Nils Nilsson.5 AI is not one specific technology. Instead, it is more accurate to think of AI as an integrated system that incorporates information acquisition objectives, logical reasoning principles, and self-correction capacities. An important AI subfield is machine learning, which is a statistical process that analyzes a large amount of information in order to discern a pattern to explain the current data and predict future uses.6 Several breakthroughs are making new achievements in the field possible: the maturation of machine learning and the onset of deep learning; cloud computing and online data gathering; a new generation of advanced microchips and computer hardware; improved performance of complex algorithms; and market-driven incentives for new uses of AI technology.7

Unsurprisingly, AI’s impact extends well beyond individual consumer choices. It is starting to transform basic patterns of governance, not only by providing governments with unprecedented capabilities to monitor their citizens and shape their choices but also by giving them new capacity to disrupt elections, elevate false information, and delegitimize democratic discourse across borders. The focus of this paper is on AI surveillance and the specific ways governments are harnessing a multitude of tools—from facial recognition systems and big data platforms to predictive policing algorithms—to advance their political goals. Crucially, the index does not distinguish between AI surveillance used for legitimate purposes and unlawful digital surveillance. Rather, the purpose of the research is to shine a light on new surveillance capabilities that are transforming the ability of states—from autocracies to advanced democracies—to keep watch on individuals.

AIGS Index—Methodology

The AIGS Index provides a detailed empirical picture of global AI surveillance trends and describes how governments worldwide are using this technology. It addresses three primary questions:
The AIGS Index is contained in Appendix 1. It includes detailed information for seventy-five countries where research indicates governments are deploying AI surveillance technology. The index breaks down AI surveillance tools into the following subcategories: 1) smart city/safe city, 2) facial recognition systems, and 3) smart policing. A full version of the index can be accessed online at https://carnegieendowment.org/files/AI_Global_Surveillance_Index1.pdf. An interactive map keyed to the index that visually depicts the global spread of AI surveillance technology can be accessed at https://carnegieendowment.org/publications/interactive/ai-surveillance. All reference source material used to build the index has been compiled into an open Zotero library. It is available at https://www.zotero.org/groups/2347403/global_ai_surveillance/items.

The majority of sources referenced by the index date from between 2017 and 2019; a small number go back as far as 2012. The index uses the same list of countries found in the Varieties of Democracy (V-Dem) project, with two minor exceptions.8 The V-Dem country list includes all independent polities worldwide but excludes microstates with populations below 250,000. The research collection effort combed through open-source material, country by country, in English and other languages, including news articles, websites, corporate documents, academic articles, NGO reports, expert submissions, and other public sources. It relied on systematic content analysis for each country, incorporating multiple sources to determine the presence of relevant AI surveillance technology and corresponding companies. Sources were categorized into tiered levels of reliability and accuracy. First-tier sources include major print and news magazine outlets (such as the New York Times, Economist, Financial Times, and Wall Street Journal). Second-tier sources include major national media outlets. Third-tier sources include web articles, blog posts, and other less substantiated sourcing; these were included only after corroboration from multiple sources.

Given limited resources and staffing constraints (one full-time researcher plus volunteer research assistance), the index can offer only a snapshot of AI surveillance levels in a given country. It does not provide a comprehensive assessment of all relevant technology, government surveillance uses, and applicable companies. Because the research relied primarily on content analysis and literature reviews to derive its findings, there are certain built-in limitations. Some companies, such as Huawei, may have an incentive to highlight new capabilities in this field. Other companies may wish to downplay links to surveillance technology and purposely keep documents out of the public domain. Field-based research involving on-the-ground information collection and verification would be useful to undertake. A number of countries—such as Angola, Azerbaijan, Belarus, Hungary, Peru, Sri Lanka, Tunisia, and Turkmenistan—provided circumstantial or anecdotal evidence of AI surveillance, but not enough verifiable data to warrant inclusion in the index.

A major difficulty was determining which AI technologies should be included in the index. AI technologies that directly support surveillance objectives—smart city/safe city platforms, facial recognition systems, smart policing systems—are included in the index. Enabling technologies that are critical to AI functioning but not directly responsible for surveillance programs are not included.
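The coding scheme described above (a documented-presence check across three subcategories, backed by tiered and corroborated sources) can be pictured with a minimal sketch. The Python snippet below is purely illustrative: the record fields, the IndexRecord name, and the corroboration helper are assumptions for exposition, not the authors' actual coding scheme or tooling.

# Illustrative sketch only: a hypothetical representation of one index entry.
from dataclasses import dataclass, field
from typing import List

@dataclass
class IndexRecord:
    country: str
    smart_city: bool = False          # smart city / safe city platform documented
    facial_recognition: bool = False  # facial recognition system documented
    smart_policing: bool = False      # smart policing system documented
    source_tiers: List[int] = field(default_factory=list)  # 1 = major international outlet, 2 = national outlet, 3 = web/blog

    def documented_presence(self) -> bool:
        """A country enters the index only if at least one subcategory is documented."""
        return self.smart_city or self.facial_recognition or self.smart_policing

    def sufficiently_corroborated(self) -> bool:
        """Hypothetical mirror of the sourcing rule: third-tier material alone is
        not enough; it must be corroborated by additional sources."""
        return any(t < 3 for t in self.source_tiers) or len(self.source_tiers) >= 2

# Hypothetical example entry
example = IndexRecord("Exampleland", facial_recognition=True, source_tiers=[1, 3])
print(example.documented_presence(), example.sufficiently_corroborated())  # True True

Counting entries where documented_presence() holds, grouped by region or regime type, would yield the kinds of country tallies reported in the findings below (for example, seventy-five of 176 countries).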
Another data collection challenge is that governments (and many companies) purposely hide their surveillance capabilities. As such, it is difficult to precisely determine the extent to which states are deploying algorithms to support their surveillance objectives, or whether AI use is more speculative than real. The index does not differentiate between governments that expansively deploy AI surveillance techniques and those that use AI surveillance to a much lesser degree (for example, the index does not include a standardized interval scale correlating to levels of AI surveillance). This is by design. Because this is a nascent field and there is scant information about how different countries are using AI surveillance techniques, attempting to score a country’s relative use of AI surveillance would introduce a significant level of researcher bias. Instead, a basic variable was used: is there documented presence of AI surveillance in a given country? If so, what types of AI surveillance technology is the state deploying? Future research may be able to assess and analyze levels of AI surveillance on a cross-comparative basis.

Finally, instances of AI surveillance documented in the index are not specifically tied to harmful outcomes. The index does not differentiate between unlawful and legitimate surveillance. In part, this is because it is exceedingly difficult to determine what specifically governments are doing in the surveillance realm and what the associated impacts are; there is too much that is unknown and hidden.

Findings and Three Key Insights

The findings indicate that at least seventy-five out of 176 countries globally are actively using AI technologies for surveillance purposes. This includes smart city/safe city platforms (fifty-six countries), facial recognition systems (sixty-four countries), and smart policing (fifty-two countries). Three key insights emerge from the AIGS Index’s findings.

First, adoption of AI surveillance is increasing at a rapid pace around the world. Seventy-five countries, representing 43 percent of the countries assessed, are deploying AI-powered surveillance in both lawful and unlawful ways. The pool of countries is heterogeneous—they come from all regions, and their political systems range from closed autocracies to advanced democracies. The “Freedom on the Net 2018” report raised eyebrows when it reported that eighteen out of sixty-five assessed countries were using AI surveillance technology from Chinese companies.9 The report’s assessment period ran from June 1, 2017 to May 31, 2018. One year later, the AIGS Index finds that forty-seven countries out of that same group are now deploying AI surveillance technology from China. Unsurprisingly, countries with authoritarian systems and low levels of political rights are investing heavily in AI surveillance techniques. Many governments in the Gulf, East Asia, and South/Central Asia are procuring advanced analytic systems, facial recognition cameras, and sophisticated monitoring capabilities. But liberal democracies in Europe are also racing ahead to install automated border controls, predictive policing, safe cities, and facial recognition systems. In fact, it is striking how many safe city surveillance case studies posted on Huawei’s website relate to municipalities in Germany, Italy, the Netherlands, and Spain. Regionally, there are clear disparities. The East Asia/Pacific and the Middle East/North Africa regions are robust adopters of these tools.
South and Central Asia and the Americas also demonstrate sizable take-up of AI surveillance instruments. Sub-Saharan Africa is a laggard—less than one-quarter of its countries have invested in AI surveillance. Most likely this is due to technological underdevelopment (African countries are struggling to extend broadband access to their populations; the region contains eighteen of the twenty countries with the lowest levels of internet penetration).10 Given how aggressively Chinese companies are working to penetrate African markets via the Belt and Road Initiative (BRI), these numbers will likely rise in the coming years. Figure 1 shows the percentage breakdown by region of countries adopting AI surveillance.

Second, China is a major supplier of AI surveillance. Technology linked to Chinese companies is found in at least sixty-three countries worldwide. Huawei alone is responsible for providing AI surveillance technology to at least fifty countries. There is also considerable overlap between BRI and AI surveillance—thirty-six out of eighty-six BRI countries also contain significant AI surveillance technology. However, China is not the only country supplying advanced surveillance technology. France, Germany, Japan, and the United States are also major players in this sector. U.S. companies, for example, have an active presence in thirty-two countries. Figure 2 breaks down the leading companies in the sector.

Third, liberal democracies are major users of AI surveillance. The index shows that 51 percent of advanced democracies deploy AI surveillance systems. In contrast, 37 percent of closed autocratic states, 41 percent of electoral autocratic/competitive autocratic states, and 41 percent of electoral democracies/illiberal democracies deploy AI surveillance technology. Liberal democratic governments are aggressively using AI tools to police borders, apprehend potential criminals, monitor citizens for bad behavior, and pick suspected terrorists out of crowds. This does not necessarily mean that democracies are using this technology unlawfully. The most important factor determining whether governments will exploit this technology for repressive purposes is the quality of their governance—is there an existing pattern of human rights violations? Are there strong rule of law traditions and independent institutions of accountability? That should provide a measure of reassurance for citizens residing in democratic states. But advanced democracies are struggling to balance security interests with civil liberties protections.

In the United States, increasing numbers of cities have adopted advanced surveillance systems. A 2016 investigation by Axios’s Kim Hart revealed, for example, that the Baltimore police had secretly deployed aerial drones to carry out daily surveillance over the city’s residents: “From a plane flying overhead, powerful cameras capture aerial images of the entire city. Photos are snapped every second, and the plane can be circling the city for up to 10 hours a day.”11 Baltimore’s police also deployed facial recognition cameras to monitor and arrest protesters, particularly during 2018 riots in the city.12 The ACLU condemned these techniques as the “technological equivalent of putting an ankle GPS [Global Positioning System] monitor on every person in Baltimore.”13 On the U.S.-Mexico border, an array of hi-tech companies also purvey advanced surveillance equipment.
Israeli defense contractor Elbit Systems has built “dozens of towers in Arizona to spot people as far as 7.5 miles away,” writes the Guardian’s Olivia Solon. Its technology was first perfected in Israel through a contract to build a “smart fence” to separate Jerusalem from the West Bank. Another company, Anduril Industries, “has developed towers that feature a laser-enhanced camera, radar and a communications system” that scans a two-mile radius to detect motion. Captured images “are analysed using artificial intelligence to pick out humans from wildlife and other moving objects.”14 It is unclear to what extent these surveillance deployments are covered in U.S. law, let alone whether they meet the necessity and proportionality standard.

The United States is not the only democracy embracing AI surveillance. In France, the port city of Marseille initiated a partnership with ZTE in 2016 to establish the Big Data of Public Tranquility project. The goal of the program is to reduce crime by establishing a vast public surveillance network featuring an intelligence operations center and nearly one thousand intelligent closed-circuit television (CCTV) cameras (the number will double by 2020). Local authorities trumpet that this system will make Marseille “the first ‘safe city’ of France and Europe.”15 Similarly, in 2017, Huawei “gifted” a showcase surveillance system to the northern French town of Valenciennes to demonstrate its safe city model. The package included upgraded high-definition CCTV surveillance and an intelligent command center powered by algorithms to detect unusual movements and crowd formations.16

The fact that so many democracies—as well as autocracies—are taking up this technology means that regime type is a poor predictor of which countries will adopt AI surveillance. A better predictor of whether a government will procure this technology is its level of military spending. A breakdown of military expenditures in 2018 shows that forty of the top fifty military spending countries also have AI surveillance technology.17 These countries span from full democracies to dictatorial regimes (and everything in between). They include leading economies like France, Germany, Japan, and South Korea, and poorer states like Pakistan and Oman. This finding is not altogether unexpected; countries with substantial investments in their militaries tend to have higher economic and technological capacities as well as specific threats of concern. If a country takes its security seriously and is willing to invest considerable resources in maintaining robust military-security capabilities, then it should come as little surprise that it will seek the latest AI tools. The motivations for why European democracies acquire AI surveillance (controlling migration, tracking terrorist threats) may differ from Egypt’s or Kazakhstan’s interests (keeping a lid on internal dissent, cracking down on activist movements before they reach critical mass), but the instruments are remarkably similar. Future research might examine country-level internal security figures and compare them to levels of AI surveillance.

Distinguishing Between Legitimate and Unlawful Surveillance

State surveillance is not inherently unlawful. Governments have legitimate reasons to undertake surveillance that are not rooted in a desire to enforce political repression and limit individual freedoms. For example, tracking tools play a vital role in preventing terrorism.
They help security forces deter bad acts and resolve problematic cases. They give authorities the ability to monitor critical threats and react accordingly. But technology has changed the nature of how governments carry out surveillance and what they choose to monitor. The internet has vastly expanded the amount of transactional data, or “metadata,” available about individuals, such as information about sent and received emails, location identification, web-tracking, and other online activities. As former UN special rapporteur Frank La Rue noted in a milestone 2013 surveillance report:

“Communications data are storable, accessible and searchable, and their disclosure to and use by State authorities are largely unregulated. Analysis of this data can be both highly revelatory and invasive, particularly when data is combined and aggregated. As such, States are increasingly drawing on communications data to support law enforcement or national security investigations. States are also compelling the preservation and retention of communication data to enable them to conduct historical surveillance.”18

It goes without saying that such intrusions profoundly affect an individual’s right to privacy—to not be subjected to what the Office of the UN High Commissioner for Human Rights (OHCHR) called “arbitrary or unlawful interference with his or her privacy, family, home or correspondence.”19 Surveillance likewise may infringe upon an individual’s right to freedom of association and expression.

Under international human rights law, three principles are critical to assessing the lawfulness of a particular surveillance action. First, does domestic law allow for surveillance? La Rue’s successor, David Kaye, issued a report in 2019 affirming that legal regulations should be “formulated with sufficient precision to enable an individual to regulate his or her conduct accordingly and it must be made accessible to the public.” Legal requirements should not be “vague or overbroad,” which would allow unconstrained discretion to government officials. The legal framework itself should be “publicly accessible, clear, precise, comprehensive and non-discriminatory.”20 Second, does the surveillance action meet the “necessity and proportionality” international legal standard, which restricts surveillance to situations that are “strictly and demonstrably necessary to achieve a legitimate aim”?21 Third, are the interests justifying the surveillance action legitimate?

Disagreements abound when it comes to determining what constitutes legitimate surveillance and what is an abuse of power. While governments commonly justify surveillance on national security or public order grounds, the OHCHR warns that such restrictions may “unjustifiably or arbitrarily” restrict citizens’ rights to freedom of opinion and expression. It contends that legitimate surveillance requires states to “demonstrate the risk that specific expression poses to a definite interest in national security or public order,” and that a “robust, independent oversight system” that entrusts judiciaries to authorize relevant surveillance measures and provide remedies in cases of abuse is required.22 Kaye adds that legitimate surveillance should apply only when the interest of a “whole nation is at stake,” and should exclude surveillance carried out “in the sole interest of a Government, regime or power group.”23 The legal standards required to legitimately carry out surveillance are high, and governments struggle to meet them.
Even democracies with strong rule of law traditions and robust oversight institutions frequently fail to adequately protect individual rights in their surveillance programs. Countries with weak legal enforcement or authoritarian systems “routinely shirk these obligations.”24 As the OHCHR’s inaugural report on privacy in the digital age concludes, states with “a lack of adequate national legislation and/or enforcement, weak procedural safeguards and ineffective oversight” face reduced accountability and heightened conditions for unlawful digital surveillance.25

AI surveillance exacerbates these conditions and makes it likelier that democratic and authoritarian governments alike will carry out surveillance that contravenes international human rights standards. Frank La Rue explains: “Technological advancements mean that the State’s effectiveness in conducting surveillance is no longer limited by scale or duration. Declining costs of technology and data storage have eradicated financial or practical disincentives to conducting surveillance. As such, the State now has a greater capability to conduct simultaneous, invasive, targeted and broad-scale surveillance than ever before.”26 AI surveillance in particular offers governments two major capabilities. One, AI surveillance allows regimes to automate many tracking and monitoring functions formerly |
Subjects | Democracy and Governance ; Political Reform ; Society and Culture ; Technology |
URL | https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847 |
Source Think Tank | Carnegie Endowment for International Peace (United States) |
Resource Type | Think Tank Publication |
Item Identifier | http://119.78.100.153/handle/2XGU8XDN/418002 |
Recommended Citation (GB/T 7714) | Steven Feldstein. The Global Expansion of AI Surveillance. 2019. |
Files in This Item |
File Name/Size | Resource Type | Version | Access | License |
WP-Feldstein-AISurve(2819KB) | Think Tank Publication | | Restricted Access | CC BY-NC-SA |
AI_Global_Surveillan(230KB) | Think Tank Publication | | Restricted Access | CC BY-NC-SA |