The buzz around artificial intelligence (AI) continues to grow, but there is a need to cut through the hype and emphasize the responsible use of AI. Artificial intelligence is a constellation of technologies created to mimic human behavior. It is not just a technology but an ideology, stemming from the possibility of creating machines that possess human qualities. AI, which thrives on vast swathes of data, is a central concern for everyone, as the collection, processing, storage, and transfer of data remain at the heart of a contentious struggle between governments, corporations, and citizens. AI is being applied across domains including precision agriculture, predictive policing, consumer lending, and healthcare, and most recently to help predict the spread of the coronavirus. Forecasting the spread of a pandemic requires large volumes of historical data, which we lack for Covid-19, but agent-based modelling could be used for scenario simulation or to detect patterns that indicate where more resources are needed. Across many sectors, AI can describe, predict, or prescribe using big data, training, and learning models.
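To make 'agent-based modelling' concrete, here is a minimal sketch of a scenario simulation in Python. Every parameter below (population size, contact rate, transmission probability, infectious period) is an illustrative assumption, not a Covid-19 estimate; real epidemiological models are calibrated far more carefully.

```python
import random

# Minimal agent-based susceptible-infected-recovered (SIR) sketch.
# All parameters are illustrative assumptions, not Covid-19 estimates.
POPULATION = 1000
CONTACTS_PER_DAY = 8        # assumed average daily contacts per agent
TRANSMISSION_PROB = 0.05    # assumed chance of infection per contact
RECOVERY_DAYS = 14          # assumed infectious period

def simulate(days=120, initial_infected=5, seed=42):
    rng = random.Random(seed)
    # Agent state: 'S' susceptible, an int (days of infection left), or 'R' recovered.
    state = ['S'] * POPULATION
    for i in rng.sample(range(POPULATION), initial_infected):
        state[i] = RECOVERY_DAYS
    history = []
    for _ in range(days):
        infected = [i for i, s in enumerate(state) if isinstance(s, int)]
        # Each infected agent meets a random set of other agents.
        for i in infected:
            for j in rng.sample(range(POPULATION), CONTACTS_PER_DAY):
                if state[j] == 'S' and rng.random() < TRANSMISSION_PROB:
                    state[j] = RECOVERY_DAYS
        # Advance infection clocks; agents recover when the clock runs out.
        for i in infected:
            state[i] = 'R' if state[i] <= 1 else state[i] - 1
        history.append(sum(1 for s in state if isinstance(s, int)))
    return history  # daily count of infectious agents

if __name__ == '__main__':
    curve = simulate()
    print('peak infections:', max(curve), 'on day', curve.index(max(curve)))
```

Varying the assumed contact rate or transmission probability is what turns a toy like this into a scenario tool: planners compare the resulting curves to see where resources might be needed, rather than trusting any single forecast.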
Geoffrey Hinton, one of the founding fathers of deep learning, a notable technique driving AI, recently tweeted, ‘Suppose you have cancer and you have to choose between a black box AI surgeon that cannot explain how [treatment] works but has a 90 percent cure rate and a human surgeon with an 80 percent cure rate. Do you want the AI surgeon to be illegal?’
AI is a general-purpose technology with the potential to transform economies and societies at large, but responsible use is crucial. Because AI can be implemented across diverse sectors, demands for transparency should be consistent, especially in high-stakes areas like healthcare. Demanding interpretable models in criminal justice to reduce recidivism, for instance, while ignoring the issue when tackling health problems such as coronavirus or cancer is short-sighted. Although the rules of engagement guiding the two domains differ, the taxonomy of software used is similar. In reference to Hinton’s tweet: if an AI surgeon has a 90 percent success rate when treating cancer, what happens to the remaining 10 percent of patients? If their lives are lost, is this approach fair? Transparency equips us with the knowledge to deliver more, even though the mix of technical and non-technical approaches involved makes the transparent use of AI a herculean task. One approach for healthcare is to make patients aware of the implications and error rates, and for companies to allocate research and development resources to interpretable models and tools across sectors. While there is a push to continuously democratize AI tools and models, scaling AI in sectors like healthcare still relies heavily on access to sub-population data and domain expertise, which can be both unreliable and expensive.
AI’s Authoritarian Model Is Gaining Momentum
The need to digitize every part of our lives has grown rapidly in recent years. Cognitive scientist Abeba Birhane correctly explained that ‘data and AI seem to provide quick solutions to complex social problems and this is exactly where the problem arises.’ There is an underlying risk that deployed AI could be misused by irresponsible corporations or governments for oppression. China’s government is a notable example: it has deployed AI algorithms for large-scale surveillance of its citizens.
Governments’ desire to control the flow of information has become a wider concern. For example, in response to the coronavirus outbreak, the government of South Africa announced plans to tap phone data to prepare for spikes in the number of cases. The coronavirus crisis has exacerbated the issue of AI transparency because it has forced a trade-off between privacy and public health. This troubling trend leaves democratic values in the shadows, as countries use AI algorithms to track people’s biometrics and other personal data. There will be a wider impact in the long term. As Yuval Noah Harari noted in a Financial Times article:
‘You could, of course, make the case for biometric surveillance as a temporary measure taken during a state of emergency. It would go away once the emergency is over. But temporary measures have a nasty habit of outlasting emergencies.’
A Darker Future?
Despite humanity’s current panic over the coronavirus pandemic, allowing corporations and governments to infer, analyze, and process data about us without our consent points to a darker future. The digital space operates like a feudal state. We have yet to reckon with the reality that the ideological underpinnings of our public institutions are misaligned with the private sector’s business and operating models. As it stands, we are like ‘digital serfs’, ready to sacrifice human autonomy to catch up on our favourite movies, litter our images across social networks, or unconsciously submit our political values to a recommender engine (an internet-based system that suggests content to a user based on certain data points).
Last year on Christmas Eve, after surviving a complex work schedule, I decided to ‘Netflix and chill.’ But while browsing the content curated by the online media service, I stumbled upon a film containing violent images. Within an hour, I had received two recommendations for movies similar to content I had already angrily opted out of. Such targeted recommendations and adverts are amplified by cookie-consent tools, recommender engines, and machine learning algorithms, and can sometimes act in a predatory and intrusive manner. I personally felt that I was under surveillance. Determining such preferences should be easy for an AI system, but this example shows otherwise. These AI tools are meant to improve customer experience by helping digital media firms detect patterns in consumers’ behavioral data so that content is personalized for end users. Similarly, YouTube has been repeatedly flagged for driving end users towards polarizing content. As this becomes normal practice for media outfits, danger lies in complacency about the unintended consequences of these algorithms.
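A toy sketch of why a recommender can misread such a signal: systems trained on implicit feedback often treat any watch or click as a positive vote, with no channel for ‘I watched this and hated it.’ The co-watch counts and titles below are entirely hypothetical.

```python
from collections import Counter

# Toy implicit-feedback recommendation: similarity is computed from
# co-watch counts alone, so the system has no notion of *why* a title
# was watched. All titles and counts here are hypothetical.
CO_WATCH = {
    ('violent_thriller', 'violent_sequel'): 120,
    ('violent_thriller', 'gritty_crime'): 95,
    ('violent_thriller', 'light_comedy'): 4,
}

def recommend(watched_title, top_n=2):
    """Rank titles by how often they were co-watched with watched_title."""
    scores = Counter()
    for (a, b), count in CO_WATCH.items():
        if a == watched_title:
            scores[b] += count
        elif b == watched_title:
            scores[a] += count
    return [title for title, _ in scores.most_common(top_n)]

# One accidental viewing of a violent film reads as a positive signal:
print(recommend('violent_thriller'))  # ['violent_sequel', 'gritty_crime']
```

Nothing in such data distinguishes an enthusiastic viewing from an angry one, so the opt-out never reaches the ranking.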
Africa Needs to Choose Wisely
For Africa, adopting democratic principles to improve the responsible use of AI should be at the heart of implementation, despite the continent’s slow adoption of AI. This will rely on governments at all levels adopting a data transparency strategy for better use of AI. Although uptake is in its initial stages, there are already signs of AI misuse. In Johannesburg, for example, facial recognition systems are serving as surveillance weapons to threaten and discriminate against minorities. In Uganda, the deployment of unregulated facial recognition technologies threatens privacy. In the US, San Francisco and jurisdictions in Massachusetts and California have banned or restricted the use of facial recognition systems because of the potential for discriminatory use and their ability to amplify algorithmic bias. As explained in my article, algorithmic bias can emanate from representation, history, evaluation, optimization, aggregation, or measurement. When optimizing machine learning models, a developer or researcher can consciously or unconsciously rely on proxies for sensitive attributes, which can amplify certain biases or generate unintended consequences, as the sketch below illustrates. Importantly, this gives further credence to arguments for transparency within any social structure for AI implementation.
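A minimal illustration of proxy bias, on synthetic data: even when the protected attribute is withheld from a model, a correlated proxy such as a postcode can reconstruct it almost perfectly. The groups, postcodes, and correlation strength below are all hypothetical.

```python
import random

# Toy illustration of proxy bias: the protected attribute is dropped from
# the training data, but a correlated proxy (here, a postcode) remains.
# The data is synthetic and purely illustrative.
rng = random.Random(0)

def make_person():
    group = rng.choice(['A', 'B'])  # protected attribute
    # Assumed correlation: group A mostly lives in postcode 1, group B in postcode 2.
    if group == 'A':
        postcode = 1 if rng.random() < 0.9 else 2
    else:
        postcode = 2 if rng.random() < 0.9 else 1
    return group, postcode

people = [make_person() for _ in range(10_000)]

# A model that never sees 'group' can still recover it from the proxy:
def predict_group(postcode):
    return 'A' if postcode == 1 else 'B'

accuracy = sum(predict_group(p) == g for g, p in people) / len(people)
print(f'group recovered from postcode alone: {accuracy:.0%}')  # roughly 90%
```

A model optimized on such data can discriminate by group without ever seeing the group label, which is why simply excluding protected attributes is not, by itself, a safeguard.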
To achieve transparency within AI, it is critical to follow a rights-based approach, either by making models technically interpretable or by defining clear rules of engagement for AI governance. This would help to build trust in societies where public trust is low. Singapore has done this through a privacy-preserving smartphone application that aggressively traces coronavirus infections. While it is unclear whether this application uses an AI tool, the example could serve as motivation for better use of AI.
The Road Ahead
Often, the root cause of the problems with AI systems is not a failure to implement the systems correctly, but rather that the definition of correct implementation is not in the best interest of citizens. Even so-called content personalization is not really optimized to serve the content that you are most interested in, but rather the content that will maximize revenue for the platform provider. For platforms that make their money from advertising, those two optimization targets can be very different. Platform governance, therefore, needs to be reconsidered with the best interest of citizens in mind.
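A toy contrast between the two objectives, with hypothetical scores: ranking by estimated user interest and ranking by expected advertising revenue can surface very different content.

```python
# Toy contrast between two ranking objectives. Scores are hypothetical:
# 'interest' is the platform's estimate of how much the user values an item,
# 'ad_revenue' is what the platform earns if the item is shown.
catalog = [
    {'title': 'documentary',  'interest': 0.9, 'ad_revenue': 0.10},
    {'title': 'outrage_clip', 'interest': 0.4, 'ad_revenue': 0.80},
    {'title': 'tutorial',     'interest': 0.7, 'ad_revenue': 0.05},
]

by_interest = max(catalog, key=lambda item: item['interest'])
by_revenue = max(catalog, key=lambda item: item['interest'] * item['ad_revenue'])

print('optimized for the user:    ', by_interest['title'])  # documentary
print('optimized for the platform:', by_revenue['title'])   # outrage_clip
```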
The usefulness of AI in addressing social issues, from criminal justice to coronavirus mitigation tools, is often greatly exaggerated. Machine learning works well where we can directly measure the relevant features that predictions or classifications are supposed to depend on, e.g. image features for image recognition. When it comes to social issues, however, we often cannot directly measure the relevant features, because we frequently do not know what they are. As a result, AI systems are doomed to become prejudice reinforcement machines. Even image features for image recognition come with flaws: Abeba Birhane and Vinay Prabhu, in their research on ImageNet, discovered questionable ways in which images were being sourced and labelled, finding that such vision datasets could be highly misogynistic. Birhane and Prabhu’s research led MIT to withdraw a huge dataset that had taught AI systems to use racist and misogynistic slurs. Meanwhile, the effectiveness of contact-tracing apps in fighting the pandemic can be overestimated. These applications embed an exposure notification feature that tends to generate false positives and can reinforce social prejudices, such as those carried by proxies prevalent in the datasets, even after people’s identities are anonymized or de-identified.
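A short base-rate calculation shows why exposure notifications can be dominated by false positives when actual prevalence is low. All the rates below are illustrative assumptions, not measurements of any real app.

```python
# Base-rate sketch: why exposure alerts can be dominated by false positives
# at low prevalence. All rates are illustrative assumptions.
population = 1_000_000
prevalence = 0.001     # assumed: 0.1% of users are actually infectious
sensitivity = 0.90     # assumed: 90% of true exposures trigger an alert
false_alarm = 0.05     # assumed: 5% of non-exposures also trigger an alert

true_positives = population * prevalence * sensitivity          # 900
false_positives = population * (1 - prevalence) * false_alarm   # ~49,950

precision = true_positives / (true_positives + false_positives)
print(f'share of alerts that are real exposures: {precision:.1%}')  # ~1.8%
```

Even with these generous assumptions, fewer than one alert in fifty corresponds to a real exposure.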
The future of AI transparency is becoming increasingly unclear due to the ongoing coronavirus pandemic. At a crucial time like this, one of the most important areas requiring strong leadership is digital governance. The European Commission released a white paper in February entitled ‘Artificial Intelligence: A European Approach to Excellence and Trust’, which outlines the distributive and regulatory policy frameworks it expects to be executed. Europe tends to use a rights-based approach in response to socio-technical issues, and this will continue to offer hope for solidifying trust when applying AI across different domains and sectors.
Although Africa lags behind in institutional capacity, countries on the continent can begin to create AI strategies that align with their values. Incorporating responsible methods of implementing AI, such as developing algorithmic accountability assessments, should be the aim. Creating and promoting ethics boards responsible for validating and approving the use of certain AI technologies in high-stakes environments would also be advisable, and would help to improve public information on the uses of AI technologies. Continuing the top-down approach to implementing technologies will only generate discontent between people and technology.
An earlier version of this article was published in The Republic.