By Aram Aharonian
We live in times when algorithms facilitate electoral fraud, track people's movements, and can analyze micro-expressions to anticipate criminal behavior.
We have reached a state of affairs where corporate/state surveillance is the new normal, and privacy a thing of the past.
This article examines the introduction of transnational technologies in the South American context, and how they are raising human rights and privacy concerns.
In 2019, the Argentine computing specialist, Ariel Garbarz, shared with the Public Prosecutor’s Office of Argentina his concern about the possibility of electoral fraud in the upcoming Argentinian presidential and legislative elections.
Garbarz also challenged the supposed infallibility of Election-360, an election management software acquired by the country's Ministry of the Interior from the transnational company Smartmatic. Election-360 has been in the news over fraud allegations in several countries, including Venezuela. Built on proprietary software with closed source code, it cannot be audited by the computer inspectors assigned to monitor elections.
The mainstream media in Argentina insist that Smartmatic is a Venezuelan company and that there is no risk of foreign interference. But the company, run by Lord Mark Malloch-Brown, is in fact susceptible to external influence.
According to Smartmatic's official website, Lord Malloch-Brown, president of the SGO group with which Smartmatic has partnered, has worked with George Soros (the promoter of the 'color revolutions' in the former Soviet Union, known for his controversial involvement in Macedonia) as vice-chairman of his Investment Funds and of Soros' Open Society Institute. He has also been a Vice-President at the World Bank and the lead international partner at Sawyer Miller, a political consulting firm. Lord Malloch-Brown is therefore far from politically neutral.
In addition to the doubts generated by Smartmatic's electronic voting system regarding the violation of secrecy and possible rigging of results, there are risks with the company's biometric fingerprint identification system. The registration of each voter is updated in real time to a computer center, and the government can access this information to 'optimize' decisions about voting. This implies that decisions such as where to extend voting hours can be politically motivated.
Surveillance cameras are increasingly in use by security forces in various countries, including in South America. But the images captured by these surveillance cameras are analyzed by transnational technology companies working on AI applications. While official narratives assert that the use of these applications is restricted to crime detection, the process is not transparent and can easily impinge on citizen rights.
Cortica, an Israeli lab specializing in autonomous AI, creates algorithms to analyze and isolate facial patterns captured on camera. The company has developed software that analyzes images from security cameras to detect movements and behavior associated with violent crime or theft. Its claimed ability to anticipate crime is based on "micro-expressions" said to betray the alleged criminal. The company claims its systems learn from the collected data and 'predict' future events.
Both Chile and Argentina are advancing the implementation of AI facial recognition systems, supposedly to detect people who are on wanted lists. But this technology has the potential to jeopardize the privacy of millions of innocent citizens.
Previously, Argentina had also announced the purchase of aerostatic surveillance balloons with cameras capable of recording a 360-degree view, equipped with day and night vision, real-time video, and the ability to identify and track targets over several kilometers. These balloons are primarily used to cover large events with mass attendance, from political demonstrations to soccer games. As with Election-360, these surveillance processes are not open to public scrutiny.
As governments increasingly adopt such surveillance tools, many questions arise. How are the images recorded in public or private spaces treated? Who utilizes these images? What custody and guardianship processes are in place, and for how long is the data stored? The truth is that no protocols have been established, and no debate about the protection of personal data has explicitly addressed the lack of transparency and accountability that accompanies these processes.
Social network analysis
The analysis of images from CCTV cameras is only one of many AI applications deployed in the name of public safety. The US Department of Justice has funded a program at Cardiff University to develop software that analyzes social networks to detect areas where incidents of crime may occur.
The method analyzes Twitter data to identify outbreaks of verbal violence, which are then mapped against historical hate crime data from the Los Angeles Police Department and compared to incidents of violence in the city. An algorithm then learns from past correlations to predict future incidents, so that resources can be allocated to cover potentially dangerous areas.
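The mechanics of such a system can be illustrated with a deliberately simplified sketch. Everything here is hypothetical: the lexicon, the scoring, the grid cells, and the blending weight are illustrative stand-ins, not the Cardiff University program's actual method (which would rely on trained language models and far richer geospatial data).

```python
# Toy sketch of predictive mapping: blend a verbal-aggression signal
# derived from tweets with historical incident counts, per city grid cell.
from collections import Counter

# Hypothetical lexicon of aggressive terms (a real system would use a
# trained classifier, not a keyword list).
AGGRESSION_TERMS = {"hate", "attack", "destroy"}

def aggression_score(text: str) -> int:
    """Count aggressive terms in a tweet (toy proxy for verbal violence)."""
    return sum(1 for word in text.lower().split() if word in AGGRESSION_TERMS)

def rank_cells(tweets, historical_incidents, weight=0.5):
    """Rank grid cells by a blend of tweet signal and past incidents.

    tweets: list of (cell_id, text) pairs
    historical_incidents: dict mapping cell_id -> past incident count
    Returns cell ids sorted by descending blended risk score.
    """
    signal = Counter()
    for cell, text in tweets:
        signal[cell] += aggression_score(text)
    cells = set(signal) | set(historical_incidents)
    scores = {
        c: weight * signal.get(c, 0)
           + (1 - weight) * historical_incidents.get(c, 0)
        for c in cells
    }
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: cell "A" has aggressive tweets and prior incidents.
tweets = [("A", "they will attack tonight"),
          ("A", "so much hate here"),
          ("B", "lovely weather today")]
history = {"A": 4, "B": 1}
print(rank_cells(tweets, history))  # "A" ranks above "B"
```

Even this toy version makes the article's concern concrete: the ranking is only as sound as the lexicon, the weighting, and the historical data, all of which can encode bias while remaining invisible to the public.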
Fake news has spread rapidly through the internet, and particularly through social networks. To curb the spread of misinformation, large tech corporations such as Google and Facebook are seeking to impose their own standards of censorship. By using its algorithms to conceal certain media outlets or censor certain photographs and videos, Google can manipulate its search engine results, essentially appropriating the internet and its information avenues.
The lack of transparency, regulatory mechanisms, and citizen/state oversight makes it impossible for the public to assess the success of these methods or to hold these companies accountable.
We are in the midst of a next-generation war, in which collective imaginaries are shaped by the bombardment of people's perceptions through AI, big data, and digital networks; and in which the data collected, supposedly in the service of security and crime detection, leads to the definitive loss of individual privacy and poses a threat to democratic processes.
In the absence of a people's internet that can guarantee neutrality and citizen sovereignty and curb the manipulation of information by mega-companies, the social control exercised by the big transnational tech corporations, in alliance with a few states (especially the US, UK, and Israel), will continue to promote societies under vigilance through cyber-espionage, fake news, electoral fraud, and biometric data.