Artificial Intelligence and Psychotherapy, Part 1: What is Possible?
- Jordan Conrad
- Apr 10

A recent article in the New Yorker entitled “Can AI Treat Mental Illness?” explored the question of whether artificial intelligence (AI) therapists could ever replace human therapists. Although it might sound as though the magazine is just jumping on the AI bandwagon, this is a serious question that public health officials, bioethicists, and software developers are trying to answer.
The problem has three horns. First, there are performance limitations, even for artificial intelligence. When OpenAI's ChatGPT was first unveiled, people were amazed at its remarkable abilities, and for the first time the prospect of an AI assistant you would actually want to have seemed real. That excitement has blinded many to the very real limitations of the current state of AI, as well as to the ontological limits of digital programs in general.
Second, there are administrative and professional obstacles that have to be considered. As with any other significant technological development, we simply do not have the infrastructure - legally or professionally - to regulate these programs. This may seem boring, but it has very real consequences.
Finally, there are social and ethical issues that must be considered as we roll out a novel treatment to a vulnerable population. Introducing a brand-new technology requires careful planning and attention to the effects that integrating it into our lives will have. This is particularly true for AI in a psychotherapeutic context, where it is meant to engage with our mental wellbeing.
In the following few posts, I articulate these concerns and my reasons for them. I have written about this subject academically and have been interviewed on the topic for media outlets, so some of this will be familiar. However, academic work can be dry and interviews typically do not go deep enough, so in this series I hope to correct the shortcomings of both mediums.
This post will focus exclusively on the first horn: what limitations in AI's functionality prevent it from replacing human psychotherapists?
Is There Even a Problem?
Before we dive right into the question of whether AI can replace human therapists, we have to understand why we would even want it to. The New Yorker article does a fairly good job unpacking this.
“Roughly one in five American adults has a mental illness. An estimated one in twenty has what’s considered a serious mental illness—major depression, bipolar disorder, schizophrenia—that profoundly impairs the ability to live, work, or relate to others. Decades-old drugs such as Prozac and Xanax, once billed as revolutionary antidotes to depression and anxiety, have proved less effective than many had hoped; care remains fragmented, belated, and inadequate; and the over-all burden of mental illness in the U.S., as measured by years lost to disability, seems to have increased. Suicide rates have fallen around the world since the nineteen-nineties, but in America they’ve risen by about a third.”
Actually, the situation is a bit worse than this: Over half of the counties in the U.S. are without a single psychiatrist,[1] 37% are without a psychologist, and 67% are without a psychiatric nurse practitioner.[2] The Health Resources and Services Administration estimates that roughly 163 million Americans (nearly half!) live in federally designated mental health professional shortage areas.[3] Even controlling for stigma, cost, and other personal barriers, there are just not enough mental health providers to meet the demand.
This problem is not isolated to the United States, either. Nearly 13% of the global population - that's 970 million people - was living with a mental disorder in 2019.[4] Depression and anxiety disorders rank among the top 25 leading causes of disease burden globally,[5] and in just one year during the pandemic, the prevalence of depression rose roughly 28% and that of anxiety disorders roughly 26%.[6-7] Troublingly, there is a significant treatment gap: Even in high-income countries, only 33% of people received treatment for depressive disorders (in low- or lower-middle-income locations, this number drops to 8%).[8]
This is where AI psychotherapists could make a very significant change in the treatment landscape. The majority of the global population owns a smartphone and has access to the internet,[9-10] and the same is true in the U.S.,[11] meaning that accessing web- or app-based mental health services is technologically within reach for most people.
If ameliorating the geographic and wait-time barriers to mental health care were the only contribution AI psychotherapists made, they would be revolutionary. But, in addition, developers often boast of significant advantages digital programs have over human practitioners. Certain apps employing machine learning can track user location and device usage (how quickly you move from one app to the next, your typing speed, which apps you use instead of others), and these signals have shown some promise in predicting mood shifts, identifying disorders, and predicting suicide risk.[12-14] Crucially, having a psychotherapist in your pocket also allows patients to access psychotherapeutic services when they are most in need.
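To make that concrete, here is a minimal sketch of how such passive "digital phenotyping" signals might feed a predictive model. Every feature name, data point, and modeling choice below is an invented assumption for illustration, not a description of any actual app:

```python
# Minimal sketch of digital phenotyping: passive device signals -> risk score.
# All feature names, data, and model choices are illustrative assumptions,
# not a description of any real app's pipeline.
from dataclasses import dataclass

import numpy as np
from sklearn.linear_model import LogisticRegression

@dataclass
class DaySignals:
    typing_speed_cpm: float       # characters per minute, from keyboard telemetry
    app_switches_per_hr: float    # how fast the user moves between apps
    hours_away_from_home: float   # coarse location mobility
    night_screen_minutes: float   # late-night device usage

def to_features(day: DaySignals) -> np.ndarray:
    return np.array([day.typing_speed_cpm, day.app_switches_per_hr,
                     day.hours_away_from_home, day.night_screen_minutes])

# Hypothetical labeled history: each row is one day's signals,
# label 1 = a clinician-confirmed low-mood episode followed.
X = np.array([[180, 30, 5.0, 10], [120, 12, 0.5, 95],
              [200, 28, 6.5, 5], [110, 10, 0.2, 120]])
y = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

today = DaySignals(115, 11, 0.4, 110)
risk = model.predict_proba(to_features(today).reshape(1, -1))[0, 1]
print(f"Estimated probability of a mood dip: {risk:.2f}")
```

The research cited above uses far richer sensor streams and models, but the pipeline shape - passive signals in, risk estimate out - is the same.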
Why Can't AI Help?
The short answer is that AI can, and likely will, help reduce the burden on a strained mental health care system. But there are several qualifications to that statement. The first, which we should get out of the way right now, is that there is intense over-optimism about the current state of AI. The reality is that, as impressive as the technology is, the majority of apps currently on the market offer nothing close to the innovative advantages discussed just above.[15] I have little doubt that in time these advantages will, or could, be achieved, but it is not obvious when. Developers and technologists love to tell us that in just a few years our technological dreams will come true, but we have good reason to be cautious.
Biases: Ramified and Reified
A bigger problem, however, is bias. Humans, of course, are biased in all sorts of interesting ways, and therapists, being human, can be biased as well. It is no surprise that when those biases enter treatment, they can negatively impact outcomes.[16] Similarly, programmers may inadvertently build their own biases into an AI. So far, this seems about equal - we cannot eliminate biases in humans, so we should not expect to eliminate them in machines, right?
Well, the problem is much broader, unfortunately. Human clinicians are limited by licensure to the jurisdictions where they are permitted to practice. While that does not come close to eliminating practitioner bias, it does ensure that they are at least familiar with, and have typically lived for several years in, the place where they work. In contrast, one of the most important advantages of digital programs over human psychotherapists is their ability to treat people all over the world at once. That means that a single programmer's provincial biases might affect someone a continent away. In fact, it means that a single programmer's biases, if unknowingly integrated into the programming, could affect millions of potential users.
Even if we remove programmer bias, AI is trained on current data sets which might reflect biased or prejudiced systems. If a particular demographic is unfairly disadvantaged, for instance, any program trained on that data will project that historical disadvantage into the future, potentially perpetuating mental health treatment disparities that we hope to end.
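A tiny, fabricated example makes the mechanism concrete. Everything below - the groups, the referral thresholds, the data - is invented for illustration; the point is only that a model fit to biased historical decisions reproduces those decisions:

```python
# Minimal sketch of how historical bias is reified by training.
# The data are fabricated: group "B" was historically under-referred
# for care at the same symptom severity as group "A".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
severity = rng.uniform(0, 10, n)   # true symptom severity
group = rng.integers(0, 2, n)      # 0 = "A", 1 = "B"

# Historical labels: referral depended on severity, but group B needed
# much worse symptoms to be referred (the embedded bias).
threshold = np.where(group == 1, 8.0, 5.0)
referred = (severity > threshold).astype(int)

X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, referred)

# Identical severity, different group -> different predicted referral.
same_case = np.array([[6.0, 0], [6.0, 1]])
print(model.predict_proba(same_case)[:, 1])  # group B scores markedly lower
```

No programmer here intended any prejudice; the disparity lives entirely in the historical labels, and the model faithfully projects it forward.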
Operationalized Treatment vs Effective Treatment
Right now, by far the most common treatment modality on mental health apps is cognitive behavioral therapy (CBT).[17-18] That is not surprising, nor is it a particularly bad thing. It is not surprising because CBT has been endorsed by several professional institutions,[19-20] and it is not bad because CBT has decades of empirical support behind it.[21]
But CBT might be disproportionately represented for another reason. CBT is an operationalized treatment, and the purpose of operationalization is to ensure that a treatment is delivered faithfully regardless of the particulars of the person delivering it. With highly operationalized treatments, capable of being broken down into discrete techniques, the assessment and interventions should, in theory, remain the same no matter who - or what - delivers them.
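As a toy illustration of what operationalization buys, consider a CBT thought record reduced to a fixed script. The step wording below paraphrases the standard technique; the structure and names are my own, invented for the example:

```python
# Toy sketch: a CBT "thought record" as a fixed, deliverer-independent script.
# The prompt wording and structure are illustrative, not from any real app.
THOUGHT_RECORD_STEPS = [
    "Describe the situation that triggered the feeling.",
    "What automatic thought went through your mind?",
    "How strongly do you believe that thought (0-100)?",
    "What evidence supports the thought?",
    "What evidence does not support it?",
    "Write a more balanced alternative thought.",
    "Re-rate your belief in the original thought (0-100).",
]

def run_thought_record() -> dict[str, str]:
    """Walk the user through the same steps, in the same order, every time.
    Because the procedure is fully specified, it does not matter who
    (or what) administers it -- that is what operationalization buys."""
    answers = {}
    for step in THOUGHT_RECORD_STEPS:
        answers[step] = input(step + "\n> ")
    return answers

if __name__ == "__main__":
    record = run_thought_record()
    print(f"Completed thought record with {len(record)} entries.")
```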

Note that this is less true for other psychotherapeutic modalities, such as psychodynamic psychotherapy, play therapy, art therapy, interpersonal therapy, etc. To take just one example: psychodynamic psychotherapy has been shown to be clinically effective[22] at rates similar to CBT[23-24] and is argued to work along different pathways, potentially addressing conditions unresponsive to CBT.[25] However, psychodynamic psychotherapies are less operationalized and often examine the displacement of thoughts and feelings from formative relationships onto others in a way that relies on the therapist's presence in the room, making them harder to program.
The trouble is that if programmers prioritize programmability over efficacy when selecting which psychotherapies to digitize, they undermine the legitimacy of digital mental health interventions. Anything less than a focus on efficacious treatment renders the digital mental health marketplace a bogus site for medical treatments.
Ontological Impoverishments and the Problem of Foreknowledge
There are two aspects to the problem of foreknowledge. Let's look at them separately.
The Hard Problem. Within philosophy and cognitive neuroscience, a single issue has been dubbed “The Hard Problem.” Simply put, the hard problem is figuring out how experiences arise from physical systems. When we think, perceive, and act there is, in addition to a complex integration of multiple information-processing systems, an experience of doing so. There is something it is like to be a conscious organism – to read this sentence, to see a color, to fall in love, to be you. The problem is that physical things do not have experiences; mental things do. Put another way, more relevant to humans, your brain does not have experiences; your mind does. There is nothing it is like to be brain matter, electricity, neurons, or chemicals, any more than there is something it is like to be a table, and yet the former somehow give rise to experiences.
This point is helpfully illustrated by a famous thought experiment by Frank Jackson known as “Mary and the Black and White Room.”
Mary is a brilliant scientist who specializes in the neurophysiology of vision. Throughout her years researching the subject, she has acquired all the physical information there is about how perceptual processes operate in the brain; she understands, for example, “just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that result in the uttering of the sentence ‘the sky is blue’.”[26] However, Mary herself has never seen a color: she lives in a black and white room, receives information from the outside world via a black and white television, and for good measure her skin has been dyed black and white. One day Mary decides to leave the room and, upon opening the door, first sees a cherry-red fire engine driving past.
The question Jackson asks is: did Mary learn anything new by seeing a color? People have different intuitions about this, but I believe Mary does learn something: she learns what it is like to see the color red.
Mary and the Black and White Room highlights what is at stake in the hard problem. There seems to be something mental that is not reducible to the physical system. As Jackson summarizes: “It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false.”[26]
The Second Hardest Problem. The second problem is similarly intractable. The remarkable, but often unremarked upon, fact is that our mental states represent something about the world; your memory of breakfast, nervousness about a test, and thoughts about this article represent breakfast, a test, and this article. How it is that our thoughts are about or represent properties or states of affairs – what philosophers refer to as their intentional content – is as mysterious a problem as any other. As the cognitive scientist and philosopher Zenon Pylyshyn explains, “what meaning is, or how it gets into my head” is “probably the second hardest puzzle in philosophy of mind.”[27]
So, when I say that AI not only lacks emotions but does not “know” anything, what do I mean? AI can act as though it knows something, just as a Roomba can act as though it is “thinking” about where to go, but both are physical systems merely behaving as though they had inner experiences. This is helpfully illustrated by another famous thought experiment, this time by John Searle,[28] entitled “The Chinese Room.”

Imagine you are seated at a desk in a small room. On the left side of the room is a small opening through which questions written in Chinese are passed. Your job is to (1) receive these questions, (2) consult a book of instructions such as “when you receive the card with X symbols on it, write Y symbols on a different card,” (3) follow the instructions and write the new symbols on a new card, and (4) pass the new card through a small opening on the right side of the room. Importantly, you neither speak Chinese nor are aware that the symbols passed through are written in Chinese; you are merely receiving notes in an unknown language, consulting a book, writing notes in an unknown language, and outputting them.
It is obvious that the person in this room does not know Chinese – they do not understand the input questions, what they are writing, or the effect it will have upon the person who receives the answer – though an external observer might assume that whoever is in that room has an expert understanding of the language. This is precisely what a computer does when you ask it to perform even a rudimentary task. A calculator, for instance, does not know arithmetic: it receives an input (say, 1+1), consults a program that states “when condition 1+1 obtains, produce the symbol ‘2’,” and then outputs 2. The calculator understands arithmetic just as well as the human in the Chinese Room understands the Chinese language.
Searle explains: “The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.”[28]
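The point can be made vivid in a few lines of code. The "rule book" below is a bare lookup table - my own toy construction, not Searle's - and it behaves exactly like the man in the room:

```python
# The Chinese Room in miniature: pure symbol manipulation, no understanding.
# The rule book maps input symbols to output symbols; nothing in the system
# "knows" that these symbols have anything to do with arithmetic.
RULE_BOOK = {
    "1+1": "2",
    "2+2": "4",
    "3+5": "8",
}

def room(card: str) -> str:
    """Receive a card, look it up in the book, pass back what the book says.
    The lookup would work just as well if the symbols were Chinese
    characters or meaningless squiggles."""
    return RULE_BOOK.get(card, "no rule for this card")

print(room("1+1"))  # "2" -- correct output, zero arithmetic understanding
print(room("9+9"))  # no rule: the "calculator" has no concept to fall back on
```

A real calculator, or a large language model, implements a vastly more sophisticated mapping from input symbols to output symbols, but on Searle's argument the sophistication changes nothing: it is lookup and transformation all the way down.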
The Problem of Foreknowledge. Layperson and expert alike know that who your therapist is matters. Patients google therapists and report feeling differently about them depending on their political views, religion, highest degree, and so on. It stands to reason that many patients will care that the therapist they are considering does not - indeed, cannot - feel or know. How would you feel going to group therapy and finding out that everyone else in the room was a computer program? Would it matter that the stories they told about their own lives were believable? Or that they expressed them with what appeared to be real sadness?

For many, the fact that AI programs don’t actually “know” anything (just as a calculator doesn’t “know” arithmetic, though it acts as though it does) won’t be a problem. As the author of the New Yorker piece muses: “I knew that I was talking to a computer, but in a way I didn’t mind. The app became a vehicle for me to articulate and examine my own thoughts. I was talking to myself.”
However, for many others, “talking to myself” is not what they want or need from psychotherapy. Aside from those suffering from loneliness, for whom AI therapies might only underscore their isolation, the vast majority of patients – in particular teens and those seeking couples therapy – will benefit from the difficult process of learning to relate to another person who they know is devoted to their wellbeing.
[1] University of Michigan Behavioral Health Workforce Research Center (2018). Estimating the Distribution of the U.S. Psychiatric Subspecialist Workforce. Ann Arbor, MI: UMSPH. Available from: https://behavioralhealthworkforce.org/wp-content/uploads/2019/02/Y3-FA2-P2-Psych-Sub_Full-Report-FINAL2.19.2019.pdf
[2] Andrilla, C. H. A., Patterson, D. G., Garberson, L. A., Coulthard, C., & Larson, E. H. (2018). Geographic variation in the supply of selected behavioral health providers. American Journal of Preventive Medicine, 54(6), S199–S207. https://doi.org/10.1016/j.amepre.2018.01.004
[3] Health Resources & Services Administration (2023). Health workforce shortage areas. Available from: https://data.hrsa.gov/topics/health-workforce/shortage-areas
[4] Institute for Health Metrics and Evaluation. (2024). Global Health Data Exchange (GHDx). Retrieved from https://vizhub.healthdata.org/gbd-results/, accessed 22 Feb 2024.
[5] GBD Mental Disorders Collaborators. (2022). Global, regional, and national burden of 12 mental disorders in 204 countries and territories, 1990-2019: A systematic analysis for the Global Burden of Disease Study 2019. The Lancet Psychiatry, 9(2), 137-150.
[6] COVID-19 Mental Disorders Collaborators (2021). Global prevalence and burden of depressive and anxiety disorders in 204 countries and territories in 2020 due to the COVID-19 pandemic. Lancet (London, England), 398(10312), 1700–1712. https://doi.org/10.1016/S0140-6736(21)02143-7
[7] WHO. (2022). Mental health and COVID-19: Early evidence of the pandemic’s impact. Geneva: World Health Organization.
[8] Moitra, M., Santomauro, D., Collins, P. Y., Vos, T., Whitford, H., Saxena, S., & Ferrari, A. J. (2022). The global gap in treatment coverage for major depressive disorder in 84 countries from 2000-2019: A systematic review and Bayesian meta-regression analysis. PLoS Medicine, 19(2), e1003901. https://doi.org/10.1371/journal.pmed.1003901
[9] Petrosyan, A. (2024). Worldwide digital population. Statista. Retrieved from: https://www.statista.com/statistics/617136/digital-population-worldwide/
[10] Global System for Mobile Communications. (2023). The state of mobile internet connectivity report 2023. GSMA. Retrieved from: https://www.gsma.com/r/somic/
[11] Perrin, A. (2021). Mobile technology and home broadband 2021. Washington, D.C.: Pew Research Center. Retrieved from: https://www.pewresearch.org/internet/2021/06/03/mobile-technology-and-home-broadband-2021/
[12] Mendes, J. P. M., Moura, I. R., Van de Ven, P., Viana, D., Silva, F. J. S., Coutinho, L. R., … & Teles, A. S. (2022). Sensing apps and public data sets for digital phenotyping of mental health: Systematic review. Journal of Medical Internet Research, 24(2), e28735.
[13] Torous, J., Bucci, S., Bell, I. H., Kessing, L. V., Faurholt-Jepsen, M., Whelan, P., … & Firth, J. (2021). The growing field of digital psychiatry: Current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry, 20(3), 318–335.
[14] Haines-Delmont, A., Chahal, G., Bruen, A. J., Wall, A., Khan, C. T., Sadashiv, R., & Fearnley, D. (2020) Testing Suicide Risk Prediction Algorithms Using Phone Measurements With Patients in Acute Mental Health Settings: Feasibility Study. JMIR Mhealth Uhealth, 8(6), e15901. DOI: 10.2196/15901
[15] Parrish, E. M., Filip, T. F., Torous, J., Nebeker, C., Moore, R. C., & Depp, C. A. (2022). Are Mental Health Apps Adequately Equipped to Handle Users in Crisis? Crisis, 43(4), 289-298. https://doi.org/10.1027/0227-5910/a000785
[16] Saposnik, G., Redelmeier, D., Ruff, C. C., et al. (2016). Cognitive biases associated with medical decisions: a systematic review. BMC Medical Informatics and Decision Making, 16, 138.
[17] Andersson, G., Titov, N., Dear, B. F., Rozental, A., & Carlbring, P. (2019). Internet-delivered psychological treatments: From innovation to implementation. World Psychiatry, 18(1), 20–28. https://doi.org/10.1002/wps.20610
[18] Bry, L. J., Chou, T., Miguel, E., & Comer, J. S. (2018). Consumer smartphone apps marketed for child and adolescent anxiety: A systematic review and content analysis. Behavior Therapy, 49, 249–261. DOI: 10.1016/j.beth.2017.07.008
[19] Society of Clinical Psychology. (2022). Psychological treatments. Division 12: American Psychological Association. Retrieved from https://div12.org/treatments/
[20] The National Institute for Health and Care Excellence (2011). Common mental health problems: Identification and pathways to care. Retrieved from: https://www.nice.org.uk/guidance/cg123/chapter/Recommendations
[21] Öst, L.-G. (2008). Cognitive behavior therapy for anxiety disorders: 40 years of progress. Nordic Journal of Psychiatry, 62(sup47), 5-10.
[22] Shedler, J. (2010). The efficacy of psychodynamic psychotherapy. American Psychologist, 65(2), 98–109. https://doi.org/10.1037/a0018378
[23] Fonagy, P. (2015). The effectiveness of psychodynamic psychotherapies: An update. World Psychiatry, 14(2), 137–150.
[24] Steinert, C., Munder, T., Rabung, S., Hoyer, J., & Leichsenring, F. (2017). Psychodynamic therapy: As efficacious as other empirically supported treatments? A meta-analysis testing equivalence of outcomes. American Journal of Psychiatry, 174, 943-953.
[25] Wakefield, J. C., Baer, J. C., & Conrad, J. A. (2020). Levels of meaning, and the need for psychotherapy integration. Clinical Social Work Journal, 48, 236-256. doi: 10.1007/s10615-020-00769-6
[26] Jackson, F. (1982). Epiphenomenal Qualia. Philosophical Quarterly, 32(127), 127-136.
[27] Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science. MIT Press.
[28] Searle, J. (1999). The Chinese Room. In R. A. Wilson and F. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.