What would you do if your phone were able to predict an imminent mood change in you? Or if your friend gifted you vouchers for therapy sessions on your birthday? Or if your partner got notified by an app every time you took your medication for depression? These are not hypothetical scenarios; many of these technologies already exist. TypingDNA claims to give you an hour-by-hour prediction of your mood; TalkSpace offers therapy gift cards, which were a holiday trend last year; and in 2017, the US Food and Drug Administration approved the first digital pill, the ingestion of which can be tracked both by the patient and by others.
Long before the Covid-19 pandemic, proponents of digital mental health care were fervently pushing digital tools and technologies for providing care. However, since 2020, with the world going into lockdowns and amid the sudden and rampant digitization of our lifeworlds, that push has been slowly turning into a shove. In the face of social isolation, employment uncertainties, the grief of losing loved ones, and limited access to offline mental health services, people turned to digital alternatives. The first few months of the pandemic saw a surge in downloads of mental health and wellness apps. In the two years since, digital mental health technologies have had a field day, with an increase in investments, mergers, and big players entering the field. While these technologies have marginally increased access to mental health care services for certain sections of the population – namely smartphone users – they have also raised issues of privacy, data protection, and patient autonomy that need to be urgently addressed. These invasive technologies use design elements that can make them seem like viable substitutes for certified therapists or psychiatrists, which can have far-reaching consequences. More importantly, what is the impact of virtualizing mental health services, particularly on vulnerable populations such as women? How have states responded to the mental health crisis? And is it possible to uphold privacy and patient autonomy while thinking through ways of using digital technologies to further public mental health services? Before delving into these questions, we need to better understand digital mental health technologies.
What is Digital Mental Health Care?
Digital mental health includes a wide array of technologies that provide on-demand mental health care in varying capacities. These include apps, chatbots, and platforms specializing in meditation, mindfulness, and mood journaling. While some apps offer guided meditations and bedtime stories read by celebrities, others provide access to professional therapists and cognitive behavioral therapy (CBT)-based interventions. Chatbots can automate therapy using artificial intelligence (AI) and natural language processing (NLP). These technologies were used particularly during the pandemic, when in-person therapy had to be replaced by virtual care that was either synchronous (video or audio therapy sessions) or asynchronous (therapy through text messaging).
However, some applications go beyond these more common forms of intervention and claim to be able to predict and diagnose mental distress using AI and machine learning. In order to obtain objective measurements in the largely subjective field of mental health care, recent research has focused on developing predictive technologies such as digital phenotyping – tracking an individual’s location, speech patterns, and ‘keyboarding’ behavior through sensors. The data, collected ecologically through the apps’ continuous passive presence, is aggregated and analyzed to produce digital “biomarkers”. According to Dr. Thomas Insel, co-founder of an app called Mindstrong, digital phenotyping can help create “a digital smoke alarm” which, he claims, can be used to design preemptive care.
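To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of how passively logged keystroke timings might be reduced to a single daily “biomarker” and checked against a person’s own baseline. Every name, value, and threshold here is hypothetical; it is not how Mindstrong or any other product actually works.

```python
from dataclasses import dataclass
from statistics import median


@dataclass
class KeystrokeEvent:
    timestamp: float  # seconds since epoch, logged passively while the user types


def inter_key_intervals(events: list[KeystrokeEvent]) -> list[float]:
    """Gaps (in seconds) between consecutive keystrokes."""
    times = sorted(e.timestamp for e in events)
    return [later - earlier for earlier, later in zip(times, times[1:])]


def daily_biomarker(events: list[KeystrokeEvent]) -> float:
    """Reduce a day's typing to a single number: the median inter-keystroke interval."""
    intervals = inter_key_intervals(events)
    return median(intervals) if intervals else float("nan")


def smoke_alarm(today: float, baseline: list[float], threshold: float = 1.5) -> bool:
    """Flag the day if the biomarker drifts from the person's own baseline
    by more than `threshold` times the baseline median (an arbitrary rule)."""
    base = median(baseline)
    return abs(today - base) > threshold * base


# A week of baseline values versus one noticeably slower typing day
baseline_days = [0.21, 0.19, 0.22, 0.20, 0.23, 0.21, 0.20]
today_events = [KeystrokeEvent(t) for t in (0.0, 0.6, 1.3, 2.1, 2.9)]
print(smoke_alarm(daily_biomarker(today_events), baseline_days))  # True
```

Even this toy version shows how much interpretive weight a single derived number is made to carry, and how it can be computed continuously without the user ever actively reporting anything.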
While the use of digital mental health care may have democratized access to medical information and helped demystify medicine for some, the credibility of said information is up for debate. The adoption of digital services during the pandemic has ensured continued access to mental health services for many individuals; however, their indiscriminate adoption should be scrutinized.
Capitalizing on Vulnerabilities
Many apps claim to democratize access to mental health care and empower users by removing barriers such as location and/or cost, offering information on mental health, and providing access to mental health care professionals as well as tools to record and quantify moods, feelings, and emotions. However, the ‘free’ services provided by most apps and platforms are automated, while human therapists are sequestered behind paywalls. Furthermore, there is little evidence that these services are as effective as they claim to be. For instance, Mindstrong’s claims of being able to predict mental illnesses have not been backed by peer-reviewed research, and studies on the app Headspace have shown that it provides little more than a placebo effect. Designed to make money, apps and platforms can trivialize mental health issues with their use of animated characters and bots that act as automated therapeutic tools. A couple of years ago, chatbot apps such as Woebot and Wysa were found incapable of identifying the sensitive nature of certain mental health issues (related to child sexual abuse, drug abuse, eating disorders, etc.) and failed to flag them for professional human intervention.
The extractivist design of digital mental health technologies aims not merely to track our behavior but also to shape it. These technologies feed what professor Shoshana Zuboff terms “behavioral futures markets”, where human experiences are turned into behavioral data within the commercial project of surveillance capitalism. Big Tech players are forever searching for newer sources of predictive behavioral data to sustain this model, and digital mental health technologies are one such source, in which people in their most vulnerable state are appropriated into the supply chain for this market. Apart from sharing mental health data, apps also collect all kinds of ancillary data to feed these behavioral surplus flows.
Furthermore, the terms ‘personal’ and ‘precision’ are often used by proponents of digital health care to justify its data-guzzling design. The underlying claim is that the use of computational models makes these technologies neutral and scientifically objective, offering bias-free, precise forms of psychotherapy. However, apps rely on this perception of the credibility and efficacy of data to capitalize on vulnerabilities. In fact, the process of big datafication removes nuance, ignores subjective experience, and seeks patterns in large data sets. The ‘data body’ computationally evacuates the physical body and, in the process, disregards the unique experiences of individuals and digitally marginalized communities. Moreover, the use of mental health data and its algorithmic analysis exacerbates the risk of psychological and emotional distress faced by users, particularly vulnerable ones such as children and teenagers. For example, in Australia, Facebook was found to have shown advertisers its capacity to determine when young people feel “overwhelmed”, “anxious”, and “stressed”.
Additionally, apps and devices that are part of the Internet of Things (IoT) contribute to a subtler, more pervasive medicalization of our society, where algorithmic technologies offer simplistic medical explanations for complex health issues through surveillance. App users are mediated and reified through data, machine learning, and algorithms, and this process acts as a substitute for in-person professional care. This extends the “medical gaze”, a term used by French philosopher Michel Foucault to describe the objectification and subsequent manipulation of the body through a fractured gaze that views the human being in parts which can be medically analyzed and treated for disease. The advent of digital health care has resulted in people turning the medical gaze upon themselves, diluting the demarcation between patients and health practitioners. Patient-customers are co-opted into the act of gazing, encouraged to collect extensive amounts of data through wearables, apps, IoT devices, etc., and to ‘willingly’ send it to app companies. Furthermore, since the patient is no longer a scientific entity to be studied but “a unit of financial value”, there is no longer a unidirectional gaze; rather, there are multiple gazes, all trained to fragment and disembody the person. These unceasing gazes, which extract data from your keystrokes, social media posts, mood apps, and the information you share with an animated chatbot, feed infinite marketing possibilities for health care, pharmaceutical, and insurance companies, among others. Thus, putting up a social media post about anxiety or depression can result in you being stalked by advertisements for meditation or counselling sessions, complete with the illustration of a weeping woman on a bed.
Individualizing Mental Health Care
The creation of the ‘mental health futures market’ is driven by the commodification of mental health and the subsumption of the social, economic, and political aspects of life within the neoliberal economic paradigm. This results in the normalization of market demands and the pathologization of behavior that deviates from the dominant paradigm. As academics Lisa Cosgrove and Justin M. Karter argue, under medical neoliberalism, “human suffering is all too easily recast in a disease framework and understood in economic terms”. Defined by neoliberal logic, ‘wellness’ is equated with the ability to work. People are expected to use mental health services to be their best productive selves, and the responsibility for mental well-being is therefore placed on individuals themselves. In fact, during the early days of the pandemic, along with stipends offered by companies to enable the shift to work-from-home, many employers also offered tele-therapy and subscriptions to meditation and mindfulness apps like Calm and Headspace. However, this was not the result of some newfound benevolence of corporate employers; companies wanted their employees to be physically and mentally healthy in order to maintain or even increase their productivity.
The larger psychological discourse and the increasing awareness around mental health issues in the post-pandemic context have been used by proponents of digital mental health services to maximize their profits. However, relying only on apps for an already stigmatized area of public health can further sweep mental health under the carpet. These apps are also particularly harmful because they attempt to replace trained psychologists and/or psychiatrists despite there being little evidence of their effectiveness.
It is also pertinent to acknowledge the impact of social, political, economic, and environmental factors on mental illness. Since the ‘personal is political’, mental health, too, is political. The neoliberal market has a significant bearing on the mental well-being of people, and psychotherapy that does not acknowledge the external factors affecting mental well-being can become a means of social control. Most digital health technologies in their current form are designed to sidestep the structural issues that cause psychosocial distress, offering cosmetic fixes that outsource the burden of mental health care onto individuals.
Digital Mental Health through a Feminist Lens
While a relative increase in mental health issues has been observed across populations and countries, mental well-being has declined most sharply among vulnerable groups such as women, young people, and the working class. Marginalized groups most often face a higher risk of being exposed to trauma, resulting in feelings of social isolation, and these factors have worsened since the pandemic. For instance, research has shown that the pandemic has impacted men and women differently; while the number of Covid-19-related deaths has been higher among men, mental and physical stressors have disproportionately impacted women. In 2020, Japan saw a rise in suicide rates for the first time in more than a decade, and while the suicide rate among men fell, the rate among women rose by nearly 15 percent. Globally, women make up 70 percent of the health care workforce and have therefore been at the frontline of the fight against the virus. The industries most severely hit by the pandemic, such as food, retail, and tourism, predominantly employ women. Women have also had to face an increased incidence of domestic violence, the burden of paid and unpaid care and domestic work, social isolation due to multiple lockdowns, and job losses. Adding to their woes was the rampant and almost overnight digitization of work, education, and health care, which further marginalized women, who were more likely to have low or no access to information and communications technologies (ICTs). In this context, the states’ use of digital mental health services as ‘the’ public health solution is likely to further marginalize women and other vulnerable populations.
Besides, even for those who have access to ICTs, most of these apps do not seem to account for race and gender in their therapeutic tools. For example, mental health apps that advise users to spend time with family as stress relief might, instead of alleviating distress, spark guilt in working women with children about ‘failing’ at parenting. Additionally, certain features of digital mental health are especially gendered; mood trackers are a case in point. Mood tracking is available both in apps specifically designed to collect mood data and as a feature in most femtech apps, such as period and pregnancy trackers, along with some mental health care apps. Users input their moods and emotions into the app, which generates graphs and charts based on the information received. However, the process seems to stop there: aside from providing some reading material and suggesting that users share their app-generated mood analysis with friends on social media, studies have found that mood trackers do not offer much else. Meanwhile, the data collected by mood trackers can and does get sold to third parties. With many companies partnering with wellness, meditation, and mood apps, there is a high risk of employers gaining access to employees’ emotional data. This data, interpreted within corporate cultures rife with gender and racial biases, may be used to deny someone a job, a promotion, or a scholarship.
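A rough sketch of what a mood tracker does with its data may help illustrate how thin this feedback loop is. The example below is hypothetical and not modeled on any specific app: it stores mood entries, produces the kind of frequency chart users typically see, and serializes the very same records in a form that could just as easily be handed to a third party.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date


@dataclass
class MoodEntry:
    day: date
    mood: str       # e.g. "anxious", "calm", "low"
    note: str = ""  # free-text journal entry supplied by the user


def mood_summary(entries: list[MoodEntry]) -> Counter:
    """Aggregate entries into the simple frequency chart shown back to the user."""
    return Counter(e.mood for e in entries)


def export_records(entries: list[MoodEntry]) -> list[dict]:
    """The same records serialized for an external party; nothing in the
    data structure itself prevents this hand-off."""
    return [{"day": e.day.isoformat(), "mood": e.mood, "note": e.note} for e in entries]


log = [
    MoodEntry(date(2022, 3, 1), "anxious", "deadline stress"),
    MoodEntry(date(2022, 3, 2), "anxious"),
    MoodEntry(date(2022, 3, 3), "calm"),
]
print(mood_summary(log))    # Counter({'anxious': 2, 'calm': 1})
print(export_records(log))  # identical data, ready to leave the app
```

The point of the sketch is simply that the value flowing back to the user (a chart) is far smaller than the value of the raw records themselves once they leave the app.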
With the increasing adoption of ‘free’ mental health apps and in-person therapy being monetized, states and corporations can use the argument of accessibility to push substandard digital technologies, while actually restricting the reach of quality services.
Surveilling Mental Health
Just like other apps on the market, mental health apps tattle – to third parties, to the government, to our significant others, etc. While human therapists are bound by the same rules of confidentiality both online and offline, a mood diary or a chatbot may share the information inputted into it with third parties like Facebook. In fact, even digitized mental health records are not safe. In 2020, Vastaamo, the largest network of private mental health providers in Finland, suffered a major security breach in which patient information, including notes made by therapists, was stolen. The hackers demanded ransom from Vastaamo’s patient-customers, threatening to publish information shared during private sessions on a Tor file server. There is a very real danger of something similar happening with mental health apps, online therapy sessions, or journaling apps. Though most apps claim not to share personal and sensitive health data with third parties, all data can be health data when aggregated with data from other public and non-public sources. Additionally, mergers, acquisitions, and the entry of tech giants such as Google and Apple into the health sector further contribute to Big Data and compound the privacy risks for users. For instance, Megan Jones Bell, the former Chief Strategy and Science Officer at Headspace, recently joined Google, pointing to the tech giant’s foray into health care.
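The claim that all data can become health data when aggregated is easy to demonstrate. In the hypothetical Python sketch below, neither dataset is sensitive on its own – one logs app usage against an advertising ID, the other is a data broker’s profile list – yet a trivial join infers who is likely using a mental health app. All names and fields are invented for illustration.

```python
# Two hypothetical datasets, neither of which is 'health data' on its own.
app_sessions = [
    {"ad_id": "A-17", "app": "calm_mind", "minutes": 42},
    {"ad_id": "B-03", "app": "news_reader", "minutes": 12},
]
broker_profiles = [
    {"ad_id": "A-17", "name": "J. Doe", "employer": "Acme Corp"},
    {"ad_id": "B-03", "name": "R. Roe", "employer": "Initech"},
]

# A simple join over the shared advertising ID is enough to infer
# which named individuals are likely using a mental health app.
profiles_by_id = {p["ad_id"]: p for p in broker_profiles}
inferred = [
    {**profiles_by_id[s["ad_id"]], "uses_mental_health_app": True}
    for s in app_sessions
    if s["app"] == "calm_mind" and s["ad_id"] in profiles_by_id
]
print(inferred)
# [{'ad_id': 'A-17', 'name': 'J. Doe', 'employer': 'Acme Corp', 'uses_mental_health_app': True}]
```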
To make matters worse, the pandemic provided the right conditions for disaster capitalism to flourish, with tech and pharmaceutical companies, along with medical and public policy experts, advocating for a “digitally transformed health care system” that extensively incorporates AI surveillance and screening programs. Indicative of a state-capital nexus, privacy and regulatory safeguards such as the Health Insurance Portability and Accountability Act (HIPAA) were relaxed in the first few months of the pandemic. Similarly, the State of New York partnered with Headspace to provide free access to some of the app’s services. Even before the pandemic, the National Health Service (NHS) of the UK released a report envisioning a totalitarian future where “the workforce may become a sensor network”. The authors of the report were convinced that, with complete and constant monitoring through apps and wearables, predictive algorithms could be used to flag any “high-risk or high cost events in inpatient or community settings”. With political leaders like Donald Trump blaming the mentally ill for mass shootings and calling for their involuntary confinement, the threat of profiling based on mental health data is very real. Technology such as the ‘digital pill’ – a drug embedded with an ingestible sensor – could be used as a disciplinary apparatus, undermining patients’ autonomy and their right to make decisions for themselves, thereby impacting their road to recovery. Authoritarian governments can use these AI- and data-driven systems to persecute marginalized and minoritized communities, as has been demonstrated by China, which is known to have tested emotion-detection software on Uyghurs. In this context, the “digital smoke alarm” propounded by Dr. Insel seems to be a dumpster fire in the making.
Towards Public Mental Health Services
Mental health, until recently, had been largely ignored by most public health policies and infrastructures. The use of technology and digital aids for mental health services is still nascent in most parts of the world, but the recent stimulus from the pandemic has propelled the industry forward. While it is important to rethink traditional bio-ethical frameworks to account for both the ubiquity and the variety of health technologies, regulatory policies should be mindful that the private capture and ownership of sensitive mental health data is the bigger problem. It is important to acknowledge and address the systemic issues that cause racial, gender, and ethnic minorities, as well as working-class communities, to disproportionately experience socio-economic and psychological stressors. Private entities and public health systems must recognize that surveillance and self-tracking cannot and should not substitute for care provided by trained mental health professionals.
We are heading towards a future where changes in our mood, anxiety, and stress levels will be detected using facial recognition, our social media posts will be analyzed by a host of ‘smart’ devices and IoT systems, and our conversations at home and work will be parsed using NLP to predict and promote mental well-being. However, if issues of transparency and accountability are resolved, robust data protection systems are instituted, and the technologies are scientifically tested for efficacy, digital health has the potential to offer continuous, quantitative, and affordable mental health care to many. Now more than ever, it is important to ethically frame this new frontier in digital health to avoid creating an Orwellian dystopia where ‘thoughtcrime’ is gauged through the multiple screens people encounter every day.