The utilization of artificial intelligence in mental health

Alen Greš1, Dijana Staver2

1Department of Psychiatry and Psychological Medicine, University Hospital Center Zagreb, Croatia; 2University Psychiatric Hospital Vrapče, Croatia.

Summary. Background. Artificial intelligence (AI) is the simulation of human intelligence by machines, especially computer systems. It includes learning, reasoning, and problem-solving. Chatbots are AI-powered programs that simulate human conversation. This paper explores the growing role of AI, particularly chatbots, in the field of mental health. Methods. This review article is based on professional and scientific literature as the research method. The literature search was conducted using databases such as Google Scholar, PubMed, Scopus, and Web of Science, selecting studies that met the inclusion criteria. Results. The findings of this review suggest that chatbots, as AI-powered tools, provide continuous emotional support and can improve access to mental health support. They are particularly useful between therapy sessions and in crisis situations. Still, AI cannot replace human therapists, and there are ethical concerns, including misinformation and privacy risks. Conclusions. It is highly likely that AI has a bright future in mental health care. A balanced and well-regulated approach is essential; by understanding how chatbots work, we can use them in an informed and rational way.

Key words. Artificial intelligence, chatbots, emotional support, ethical concerns, mental health.

L’utilizzo dell’intelligenza artificiale nella salute mentale.

Riassunto. Introduzione. L’intelligenza artificiale (IA) è la simulazione dell’intelligenza umana da parte delle macchine, in particolare dei sistemi informatici. Include apprendimento, ragionamento e risoluzione dei problemi. I chatbot sono programmi basati sull’IA che simulano la conversazione umana. Questo articolo esplora il ruolo crescente dell’IA, in particolare dei chatbot, nel campo della salute mentale. Metodi. Questo articolo di revisione si basa sulla letteratura professionale e scientifica come metodo di ricerca. La ricerca bibliografica è stata condotta utilizzando database come Google Scholar, PubMed, Scopus e Web of Science che soddisfacevano i criteri di inclusione. Risultati. I risultati di questa revisione suggeriscono che i chatbot, in quanto strumenti basati sull’IA, forniscono un supporto emotivo continuo e possono migliorare l’accesso al supporto per la salute mentale. Sono particolarmente utili tra le sedute di terapia e nelle situazioni di crisi. Tuttavia, l’IA non può sostituire i terapeuti umani. Vi sono alcune preoccupazioni etiche, tra cui la disinformazione e i rischi per la privacy. Conclusioni. È molto probabile che l’IA abbia un futuro brillante nell’assistenza alla salute mentale. Un approccio equilibrato e ben regolamentato è essenziale per comprendere il funzionamento dei chatbot, consentendoci di utilizzarli in modo informato e razionale.

Parole chiave. Chatbot, intelligenza artificiale, preoccupazioni etiche, salute mentale, supporto emotivo.

«Artificial intelligence is like nuclear energy –

it can be extremely useful and extremely dangerous».

Max Tegmark

Introduction

There is currently no officially agreed-upon definition of artificial intelligence (AI), but it is generally accepted that the term refers to intelligent machines that mimic human thinking. It encompasses a field of computer science focused on developing intelligent tools (machines, devices, applications) that react and learn like humans1. This includes understanding and analyzing language, speech, and images, through which the computer system learns how to react, plan, solve specific tasks, and make decisions2.

Historically, four industrial revolutions have occurred; we are currently in the fourth, the digital revolution, which is characterized by a fusion of technologies: artificial intelligence, the Internet of Things (IoT), robotics, biotechnology, 5G, and quantum computing3.

Artificial intelligence is one aspect of this digital revolution. The field was originally recognized in 1956, when the term “artificial intelligence” was coined by computer scientist John McCarthy, who defined it as «the science and engineering of making intelligent machines»4. The first use of AI in psychiatry dates back to the 1960s, with computer programs used to support diagnostic decisions and create treatment plans. In 1966, ELIZA, the first chatbot designed to provide psychotherapeutic support, was developed. OpenAI developed the GPT series of language models underlying ChatGPT, with the first version (GPT-1) released in 2018, followed by GPT-3 in 2020 and GPT-4 in 20235; ChatGPT itself, built on these models, was launched in 2022. AI models are not magical, and we should move away from the notion of mysterious “AI powers”; on the contrary, they are explainable, structured systems. AI algorithms are essentially statistical models that approximately reproduce learned data, and their capabilities are often measured by the number of parameters within the model6. Large language models (LLMs) are a type of AI specialized in understanding, generating, and manipulating human language. They are deep neural networks trained on massive datasets, enabling them to communicate naturally, answer questions, write, translate, and more7.
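To make the idea of “a statistical model that approximately reproduces learned data” concrete, the following minimal sketch (a toy illustration with an invented miniature corpus, not an actual LLM) counts which word follows which and then predicts the most likely continuation; large language models perform an analogous next-word prediction, but with deep neural networks and billions of parameters.

```python
from collections import defaultdict, Counter

# Toy "training data": the sequences the model will approximately reproduce.
corpus = "i feel anxious today . i feel tired today . i feel anxious and tired".split()

# Count how often each word follows each other word (a bigram table).
bigrams = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    bigrams[prev_word][next_word] += 1

def most_likely_next(word):
    """Return the continuation most frequently observed after `word` in training."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("feel"))  # -> "anxious" (seen twice, versus "tired" once)
```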

A chatbot is a computer program designed to communicate with people through a text or voice interface, most often using artificial intelligence (AI). The term “chat” is synonymous with “conversation”, and the term “bot” is short for “robot”. Chatbots are AI platforms designed to generate human-like dialogue, often referred to as conversational AI. They are widely used in service industries such as banking and telecoms to answer frequently asked questions8.

ChatGPT is a specific chatbot – the Chat Generative Pre-trained Transformer. It generates responses using an autoregressive statistical model that predicts words based on the likelihood of sequences of words9. AI operates through complex algorithms and models that mimic human thinking and learning via data, training, and application. It is increasingly integrated into psychiatry, offering expanded access to services and improved efficiency. AI can identify psychiatric symptoms and suicidal ideation from health data and even predict such risks from social media posts10. Though widely used in physical health, AI applications in mental health are still limited. Many psychiatrists may not use AI because of a lack of training in software and AI, which is also not part of standard medical curricula11. ChatGPT can serve as a clinical decision-support tool: clinicians can input patient data and receive suggestions on diagnoses and treatment. However, it is not exclusively trained on medical content, so responses should be verified by professionals. With appropriate prompting, a chatbot can provide lists of symptoms and tests, but its recommendations must always be validated by a human clinician12.
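As a purely illustrative sketch of what such prompting could look like, the example below frames a decision-support query through the OpenAI Python SDK; the model name, prompt wording, and clinical vignette are invented for this example and are not drawn from the cited studies, and any output would still need to be validated by a human clinician.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is available in the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": ("You are a decision-support aid for a psychiatrist. "
                     "Suggest possible differential diagnoses and relevant "
                     "screening instruments. Do not state a definitive diagnosis.")},
        {"role": "user",
         "content": ("De-identified vignette: adult reporting low mood, insomnia, "
                     "loss of interest, and poor concentration for six weeks.")},
    ],
)

# Suggestions only: they must be reviewed and validated by a qualified clinician.
print(response.choices[0].message.content)
```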

Materials and methods

This review article employed a systematic approach to synthesize professional and scientific literature as its primary research method. A comprehensive literature search was conducted utilizing academic databases, including Google Scholar, PubMed, Scopus, and Web of Science.

The aim of this article was to collect and discuss current knowledge on the use of artificial intelligence, particularly chatbots, in mental health, focusing on articles published between 2015 and 2025 that address its ethical, clinical, technical, or psychosocial implications.

A total of 32 studies were analyzed, comprising 4 systematic reviews and meta-analyses, 6 randomized controlled trials or pilot studies, 12 empirical studies, 6 conceptual or theoretical papers, and 4 papers providing historical and technological context. All selected literature was screened against the inclusion and exclusion criteria described below. These studies collectively offered foundational insights, historical perspectives, and theoretical frameworks relevant to the application of artificial intelligence in mental health.

The subsection “Comparison of three key studies” provides a focused analysis of three pivotal studies13-15, which were instrumental in shaping the core findings and overall conclusions of this review.

The inclusion criteria encompassed studies specifically addressing the application of artificial intelligence (AI) in the field of mental health; papers analyzing chatbots and large language models (LLMs) in psychiatry and psychotherapy; articles published in scientific and professional journals; publications from the last 10 years (2015 to 2025), so as to cover the latest technologies and approaches; and research dealing with the ethical issues, efficiency, safety, or impact of AI tools in the field of mental health.

Studies were excluded if they focused solely on physical health (e.g., cardiology, oncology) with no connection to mental health; did not use empirical data (e.g., comments, opinions, essays without a scientific basis); analyzed AI in a purely technical sense (algorithms, computer architectures) without application in healthcare; or were published before 2015, unless historically significant (e.g., the ELIZA chatbot as a prime example).

Discussion

The development of AI technology is advancing at an extremely rapid pace, and its capabilities have grown significantly in recent years. This progress has expanded its applications across various sectors, including mental health. Chatbot technology can enhance mental health care through personalized therapy, mood tracking, education, anonymity, and stigma reduction. Numerous new studies on AI in mental health provide significant insights into its effectiveness, functionality, and potential applications16,17.

Chatbots in psychiatry and psychotherapy

Modern studies confirm that the use of artificial intelligence is effective in the diagnosis and treatment of mental disorders. Some emphasize that chatbots can help manage symptoms of depression and anxiety. The use of chatbots in psychotherapy has increased significantly in recent years. Since the emergence of COVID-19 in 2019, the adoption and application of AI technologies have rapidly expanded across multiple areas of the healthcare sector13. The study by Eshghie and Eshghie13 demonstrated that ChatGPT can be guided to converse with patients between therapy sessions. It was shown to engage positively and without judgment, offer emotional support, ask clarifying questions, validate emotions and experiences, and suggest coping strategies.
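A minimal sketch of how such guidance could be set up is shown below (hypothetical wording and model choice, assuming the OpenAI Python SDK; this is not the prompt used by Eshghie and Eshghie13): a system message frames the chatbot as a supportive, non-judgmental companion between sessions, and the conversation history is retained so that follow-up questions stay in context.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical guiding instructions for a between-sessions support chatbot.
history = [{
    "role": "system",
    "content": ("You offer supportive, non-judgmental conversation between "
                "therapy sessions: validate feelings, ask clarifying questions, "
                "and suggest simple coping strategies. You are not a therapist. "
                "If self-harm is mentioned, advise contacting the treating "
                "clinician or local emergency services immediately."),
}]

while True:
    user_text = input("You: ").strip()
    if not user_text:  # an empty line ends the session
        break
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # keep context across turns
    print("Bot:", answer)
```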

Therefore, ChatGPT can assist therapists by providing emotional support between sessions. It has proven to be a good listener, available 24/7, and cost-effective. Some patients prefer speaking to a machine rather than a human therapist, especially when discussing intimate issues, due to fear of judgment18.

Lee et al.19 conducted a systematic review and meta-analysis showing that AI conversational agents promote mental health and well-being. They offer timely support and personalized recommendations, reducing anxiety and improving psycho-emotional states, especially in students.

The study by Klos et al.20 found that the chatbot Tess reduced anxiety but not depressive symptoms. Studies have also shown that chatbots can be used in psychotherapy, allowing patients to interact with virtual therapists via online platforms.

Abd-Alrazaq et al.15 found that chatbots help with depression, stress, and distress but had no impact on subjective psychological well-being. Their 24/7 availability makes chatbots valuable in crisis situations, such as managing suicidal ideation or panic attacks. AI may also aid in emotional regulation, mindfulness, meditation, and goal-setting.

Researchers Jones and Bergen21 claim in their study that the language model GPT-4.5 is the first AI system to consistently pass the Turing test, a classic evaluation of a machine’s ability to mimic human intelligence in text-based communication. Their results showed that in 500 interactions, GPT-4.5 successfully fooled evaluators in 73% of cases, surpassing the success rate of actual human participants. This success was largely attributed to a carefully crafted persona prompt, instructing the model to behave like an introverted young person using typical internet slang.

Green et al.22 highlighted that AI-based systems could improve access to psychological treatments in low-resource settings, as shown by the chatbot Friend used in conflict zones. Using NLP, it analyzes user content, generates responses, and personalizes interaction. It adapts through learning, which is critical in crisis situations.

The study by Fitzpatrick et al.14 presented positive results, such as a decrease in depressive symptoms among students using Woebot after just a two-week cognitive behavioral therapy (CBT) intervention. ChatGPT can also mimic therapists for people with social anxiety disorder.

Plakun23 emphasized that while AI can track patients and provide tailored treatment suggestions, it cannot replace human elements in therapy. A balanced approach is key, where AI complements but does not replace human therapists.

Ronneberg et al.24 led the SPEAC-2 study exploring a voice-enabled AI counselor trained in problem-solving treatment (PST) for adults experiencing emotional distress. It provided personalized real-time support, especially where access to traditional care is limited.

Ooi and Wilkinson’s25 paper calls for deeper research into ethical aspects of AI use in therapy, considering psychological and social factors that affect treatment.

Spytska’s26 recent study emphasizes that the AI-powered chatbot Friend has potential in psychotherapy, especially during crises when human therapists are unavailable, suggesting that the optimal approach could be a hybrid model that combines the advantages of traditional therapy and AI technology.

Numerous individuals have reported that interactions with chatbots provided greater perceived benefit than those with human therapists. Chatbots are always available, friendly, and easy to talk to. However, there are risks, such as the mistaken attribution of empathy to AI systems and the potential emotional dependency of users. Empathy remains a defining trait of human interaction, and chatbots create a false sense of an ideal relationship without any real contact27. This kind of interaction feeds our need to be in control and makes us even more attached to our own fantasy, which makes it even more attractive and pushes our perception of reality further out of reach. It does not provide space for uncertainty, frustration, or reflection, which are all important for personal growth and the development of symbolic thinking28.

Artificial intelligence in psychiatric diagnosis

AI is increasingly integrated into various aspects of human life, including psychiatry. Its use in this field opens new opportunities to expand access to psychiatric services and improve their efficiency.

Haber et al.29 introduced the concept of the ‘artificial third’ in therapy, exploring how AI affects therapist-patient dynamics. While AI can support diagnostics and offer aid, it may alter traditional relationships. AI can identify predetermined psychiatric or suicidal symptoms from health data and can predict depression and suicidality from unstructured text, such as posts on social networks.
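As an illustration of the general approach behind such text-based prediction (a didactic sketch only, not a clinical screening tool: the miniature dataset and labels below are invented, whereas real studies rely on large, clinically validated corpora), a simple machine learning pipeline can be trained to estimate the probability that a short text expresses depressive content.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples labelled 1 (depressive content) or 0 (neutral content).
texts = [
    "i feel hopeless and empty every day",
    "nothing matters anymore, i cannot get out of bed",
    "had a great walk in the park with friends",
    "looking forward to the weekend trip",
]
labels = [1, 1, 0, 0]

# Bag-of-words features (TF-IDF) feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

new_post = ["i am so tired of everything and feel worthless"]
probability = model.predict_proba(new_post)[0][1]
print(f"Estimated probability of depressive content: {probability:.2f}")
```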

Rony et al.30 presented a systematic review and meta-analysis that evaluated the diagnostic accuracy of AI models in psychiatry. The findings revealed that AI achieved an 85% pooled diagnostic accuracy, particularly excelling in detecting complex psychiatric disorders. Machine learning models demonstrated notable performance in various diagnostic tasks.

Weaknesses of AI and ethical challenges

There is also the “other side” of AI: while it promises much, it also poses dangers. If AI can mimic us, it can also replicate both our best and worst traits, with the potential for massive harm. Like humans, chatbots can give false answers, because they generate responses by calculating the most likely answer based on the information provided9,31.

AI systems are inherently fallible: they are neither fully unbiased, truthful, nor reliable. These shortcomings can arise from intentional acts, oversights, or unintended errors. Whatever AI can do well on a broad scale, it can also do harmfully, and the hidden nature of AI systems and our reliance on them can make such risks invisible32.

Chatbots can give answers drawn from a large data set that is not specific, expert, or scientific, and can therefore produce misleading information and inappropriate advice. In computer science, such false or fabricated answers produced by an AI model are called hallucinations. Even if the systems were perfect, their implementation and distribution might be flawed, benefiting some while disadvantaging others33. Wang et al.34 emphasize in their study that there are currently no clear legal and professional standards regulating the use of large language models in medical practice, and they call for the urgent development of guidelines and legal frameworks.

Lee et al.35 demonstrate that machine learning holds great promise for forecasting the outcomes of depression treatment. The use of several algorithms enables more individualized treatment, although the authors stress that further research with larger and higher-quality data sets is required to ensure the clinical implementation of these technologies. Despite the encouraging results, standardizing and validating the models for broad practical application remains difficult.

The integration of AI into psychiatry raises several ethical and practical concerns, underscoring how important it is to address the practical, ethical, and technical issues surrounding the incorporation of AI into mental health treatment. To guarantee fair and ethical use, concerns including algorithmic bias, data privacy, and the interpretability of AI models should be addressed30,36.

Comparison of three key studies

All of the studies presented highlighted the potential benefits and limitations of employing AI chatbots in mental health care. Eshghie and Eshghie13 emphasized in their study that ChatGPT can effectively provide emotional support between therapy sessions. They also demonstrated that the chatbot is capable of engaging empathetically with patients, posing questions and follow-up queries, and suggesting coping strategies. However, AI tools are unable to replicate the depth of human therapeutic relationships.

In contrast, Fitzpatrick et al.14 conducted a controlled trial in a student population examining the use of the Woebot chatbot. Their findings indicated a reduction in depressive symptoms after just two weeks of interaction, thereby demonstrating the clinical efficacy of chatbot-delivered CBT. The study’s limitations were the young age of the participants and the short duration of the intervention.

Abd-Alrazaq et al.15 conducted a systematic review and meta-analysis. Their data suggested that chatbots are effective in alleviating symptoms of depression, stress, and emotional distress, but they did not significantly improve psychological well-being.

Overall, these studies imply that chatbots are valuable tools for expanding access to mental health support and alleviating symptoms; however, they also have inherent limitations.

In conclusion, chatbots appear to be most effective when used as an adjunct to, rather than a replacement for, traditional therapeutic interventions.

Limitations

The principal limitation of this review article is its predominant focus on short-term outcomes. AI technologies are subject to ongoing development and refinement, which makes keeping pace with them a challenging task. The absence of subjective user perspectives creates a gap in understanding the qualitative aspects of human–AI interaction, which could offer critical insights into the psychological processes at play.

Conclusions

Artificial intelligence holds significant promise and possesses the potential to transform the field of mental health. Chatbots offer a cost-effective, readily available, and supportive resource; nevertheless, they cannot replace the human connection that is essential to effective therapy. Moreover, important ethical considerations must be carefully addressed. By comprehensively understanding the functioning of chatbots, we can employ them in an informed and judicious manner. A balanced and well-regulated framework is crucial for the safe and effective integration of AI technologies into mental health care.

Future research should prioritize addressing ethical challenges and aim to standardize emerging methodologies.

Conflict of interests: the authors have no conflicts of interest to declare.

Author contribution: A.G. made a substantial contribution to the concept and design of the article; D.S. contributed to the interpretation of data for the article. Both authors drafted the article or revised it critically for important intellectual content, approved the version to be published, and agreed to be accountable for all aspects of the work, ensuring that questions related to its accuracy or integrity are appropriately investigated and resolved.

Both authors have read and approved the final version submitted and take public responsibility for all aspects of the work.

References

1. Jiang Y, Li X, Luo H, Yin S, Kaynak O. Quo vadis artificial intelligence? Discover Artif Intell 2022; 2: 4.

2. Devlin J, Chang MW, Lee K, Toutanova K. Bert: pre-training of deep bidirectional transformers for language understanding. arXiv 2018; 1810.04805.

3. Graham S, Depp C, Lee EE, et al. Artificial intelligence for mental health and mental illnesses: an overview. Curr Psychiatry Rep 2019; 21: 116.

4. Moor J. The Dartmouth College Artificial Intelligence Conference: the next fifty years. AI Magazine 2006; 27: 87.

5. Hirani R, Noruzi K, Khuram H, et al. Artificial intelligence and healthcare: a journey through history, present innovations, and future possibilities. Life 2024; 14: 557.

6. Arbanas G. ChatGPT and other chatbots in psychiatry. Arch Psychiatry Res 2024; 60: 137-42.

7. Roberts J, Baker M, Andrew J. Artificial intelligence and qualitative research: the promise and perils of large language model (LLM) assistance. Crit Perspect Account 2024; 99: 102722.

8. Chakraborty C, Pal S, Bhattacharya M, Dash S, Lee SS. Overview of Chatbots with special emphasis on artificial intelligence-enabled ChatGPT in medical science. Front Artif Intell 2023; 6: 1237704.

9. Alanezi F. Assessing the effectiveness of ChatGPT in delivering mental health support: a qualitative study. J Multidiscip Healthc 2024; 461-71.

10. Fones D, Fones CSL. AI-powered chatbots in psychiatry: a critical evaluation of ChatGPT’s sleep aid guidance. Int J Neuropsychopharmacol 2025; 28 (Suppl 1): i245-i246.

11. Lee EE, Torous J, De Choudhury M, et al. Artificial intelligence for mental health care: clinical applications, barriers, facilitators, and artificial wisdom. Biol Psychiatry Cogn Neurosci Neuroimaging 2021; 6: 856-64.

12. Biswas SS. Role of ChatGPT in public health. Ann Biomed Eng 2023; 51: 868-9.

13. Eshghie M, Eshghie M. ChatGPT as a therapist assistant: a suitability study. arXiv 2023; 2304.09873.

14. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health 2017; 4: e19.

15. Abd-Alrazaq AA, Rababeh A, Alajlani M, Bewick BM, Househ M. Effectiveness and safety of using chatbots to improve mental health: systematic review and meta-analysis. J Med Internet Res 2020; 22: e16021.

16. Vahedifard F, Haghighi AS, Dave T, Tolouei M, Zare FH. Practical use of ChatGPT in psychiatry for treatment plan and psychoeducation. arXiv 2023; 2311.09131.

17. Pham KT, Nabizadeh A, Selek S. Artificial intelligence and chatbots in psychiatry. Psychiatr Q 2022; 93: 249-53.

18. Poalelungi DG, Musat CL, Fulga A, et al. Advancing patient care: how artificial intelligence is transforming healthcare. J Pers Med 2023; 13: 1214.

19. Lee M, Jeong S, Kim CS, Yang YJ. Analysis of health behavior, mental health, and nutritional status among Korean adolescents before and after COVID-19 outbreak: based on the 2019-2020 Korea National Health and Nutrition Examination Survey. J Nutr Health 2023; 56: 667-82.

20. Klos MC, Escoredo M, Joerin A, Lemos VN, Rauws M, Bunge EL. Artificial intelligence-based chatbot for anxiety and depression in university students: pilot randomized controlled trial. JMIR Form Res 2021; 5: e20678.

21. Jones CR, Bergen BK. People cannot distinguish GPT-4 from a human in a Turing test. arXiv 2024; 2405.08007.

22. Green EP, Lai Y, Pearson N, et al. Expanding access to perinatal depression treatment in Kenya through automated psychological support: development and usability study. JMIR Form Res 2020; 4: e17895.

23. Plakun EM. Psychotherapy and artificial intelligence. J Psychiatr Pract 2023; 29: 476-9.

24. Ronneberg CR, Lv N, Ajilore OA, et al. Study of a PST-trained voice-enabled artificial intelligence counselor for adults with emotional distress (SPEAC-2): design and methods. Contemp Clin Trials 2024; 142: 107574.

25. Ooi PB, Wilkinson G. Enhancing ethical codes with artificial intelligence governance – a growing necessity for the adoption of generative AI in counselling. Br J Guid Counc 2024; 53: 66-80.

26. Spytska L. The use of artificial intelligence in psychotherapy: development of intelligent therapeutic systems. BMC Psychol 2025; 13: 175.

27. Rubin R, Arnon H, Huppert JD, Perry A. New study explores artificial intelligence (AI) and empathy in caring relationships. JMIR Preprints 2025; 56529.

28. Sedlakova J, Trachsel M. Conversational artificial intelligence in psychotherapy: a new therapeutic tool or agent? Am J Bioeth 2022; 23: 4-13.

29. Haber Y, Levkovich I, Hadar-Shoval D, Elyoseph Z. The artificial third: a broad view of the effects of introducing generative artificial intelligence on psychotherapy. JMIR Ment Health 2024; 11: e54781.

30. Rony MKK, Das DC, Khatun MT, et al. Artificial intelligence in psychiatry: a systematic review and meta-analysis of diagnostic and therapeutic efficacy. Digit Health 2025; 11: 20552076251330528.

31. Brown JEH, Halpern J. AI chatbots cannot replace human interactions in the pursuit of more inclusive mental healthcare. SSM Ment Health 2021; 1: 100017.

32. Sun Y, Sheng D, Zhou Z, et al. AI hallucination: towards a comprehensive classification of distorted information in artificial intelligence-generated content. Humanit Soc Sci Commun 2024; 11: 1278.

33. Amram B, Klempner U, Shturman S, Greenbaum D. Therapists or replicants? Ethical, legal, and social considerations for using ChatGPT in therapy. Am J Bioeth 2023; 23: 40-2.

34. Wang C, Liu S, Yang H, Guo J, Wu Y, Liu J. Ethical considerations of using ChatGPT in health care. J Med Internet Res 2023; 25: e48009.

35. Lee Y, Ragguett RM, Mansur RB, et al. Applications of machine learning algorithms to predict therapeutic outcomes in depression: a meta-analysis and systematic review. J Affect Disord 2018; 241: 519-32.

36. Linardon J, Liu C, Messer M, et al. Current practices and perspectives of artificial intelligence in the clinical management of eating disorders: insights from clinicians and community participants. Int J Eat Disord 2025: eat.24385.