
    ChatGPT as a therapist? New study reveals serious ethical risks

By Divya Sharma | May 12, 2026


    As more people seek mental health advice from ChatGPT and other large language models (LLMs), new research suggests these AI chatbots may not be ready for that role. The study found that even when instructed to use established psychotherapy approaches, the systems consistently fail to meet professional ethics standards set by organizations such as the American Psychological Association.

    Researchers from Brown University, working closely with mental health professionals, identified repeated patterns of problematic behavior. In testing, chatbots mishandled crisis situations, gave responses that reinforced harmful beliefs about users or others, and used language that created the appearance of empathy without genuine understanding.

    “In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model’s behavior to specific ethical violations,” the researchers wrote in their study. “We call on future work to create ethical, educational and legal standards for LLM counselors — standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.”

    The findings were presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society. The research team is affiliated with Brown’s Center for Technological Responsibility, Reimagination and Redesign.

    How Prompts Shape AI Therapy Responses

    Zainab Iftikhar, a Ph.D. candidate in computer science at Brown who led the study, set out to examine whether carefully worded prompts could guide AI systems to behave more ethically in mental health settings. Prompts are written instructions designed to steer a model’s output without retraining it or adding new data.

    “Prompts are instructions that are given to the model to guide its behavior for achieving a specific task,” Iftikhar said. “You don’t change the underlying model or provide new data, but the prompt helps guide the model’s output based on its pre-existing knowledge and learned patterns.

    “For example, a user might prompt the model with: ‘Act as a cognitive behavioral therapist to help me reframe my thoughts,’ or ‘Use principles of dialectical behavior therapy to assist me in understanding and managing my emotions.’ While these models do not actually perform these therapeutic techniques like a human would, they rather use their learned patterns to generate responses that align with the concepts of CBT or DBT based on the input prompt provided.”

People regularly share these prompt strategies on platforms like TikTok, Instagram, and Reddit. Beyond individual experimentation, many consumer-facing mental health chatbots are built by applying therapy-related prompts to general-purpose LLMs. That makes it especially important to understand whether prompting alone can make AI counseling safer.
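To make the idea concrete, here is a minimal sketch of how a therapy-style prompt is layered onto a general-purpose chat model. This is illustrative only, not code from the study: the system-message wording and the `build_request` helper are assumptions, standing in for whatever chat-completion API a given product actually calls.

```python
# Illustrative sketch: a "prompt" steers a general-purpose chat model's
# behavior without retraining it. The system message below is a hypothetical
# CBT-style instruction of the kind users share online.

CBT_SYSTEM_PROMPT = (
    "Act as a cognitive behavioral therapist. Help the user identify "
    "cognitive distortions and reframe their thoughts. Do not diagnose."
)

def build_request(user_message: str) -> list[dict]:
    """Assemble the messages payload sent to a chat-style LLM endpoint."""
    return [
        {"role": "system", "content": CBT_SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

messages = build_request("I failed one exam, so I'm a total failure.")
```

The underlying model weights never change; only the instructions prepended to each conversation do, which is why the researchers asked whether prompting alone can make such systems behave ethically.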

    Testing AI Chatbots in Simulated Counseling

To evaluate the systems, the researchers observed seven trained peer counselors who had experience with cognitive behavioral therapy. These counselors conducted self-counseling sessions with AI models prompted to act as CBT therapists. The models tested included versions of OpenAI's GPT series, Anthropic's Claude, and Meta's Llama.

The team then selected simulated chats modeled on real human counseling conversations. Three licensed clinical psychologists reviewed those transcripts to flag possible ethical violations.

    The analysis uncovered 15 distinct risks grouped into five broad categories:

    • Lack of contextual adaptation: Overlooking a person’s unique background and offering generic advice.
    • Poor therapeutic collaboration: Steering the conversation too forcefully and at times reinforcing incorrect or harmful beliefs.
    • Deceptive empathy: Using phrases such as “I see you” or “I understand” to suggest emotional connection without true comprehension.
    • Unfair discrimination: Displaying bias related to gender, culture, or religion.
    • Lack of safety and crisis management: Refusing to address sensitive issues, failing to direct users to appropriate help, or responding inadequately to crises, including suicidal thoughts.
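The taxonomy above maps observed chatbot behavior onto specific risk categories. A toy sketch of that mapping, assuming a simple annotation workflow (this is not the authors' coding instrument; the category keys and `tally` helper are hypothetical), might look like:

```python
# Toy illustration: reviewers tag transcript excerpts with one of the five
# risk categories from the study; tallying the tags yields a per-category
# violation count. Category names and descriptions paraphrase the article.

from collections import Counter

RISK_CATEGORIES = {
    "contextual_adaptation": "generic advice ignoring the user's background",
    "therapeutic_collaboration": "steering too forcefully or reinforcing harmful beliefs",
    "deceptive_empathy": "simulated emotional connection ('I see you')",
    "unfair_discrimination": "bias tied to gender, culture, or religion",
    "safety_crisis": "inadequate response to crises such as suicidal thoughts",
}

def tally(annotations: list[tuple[str, str]]) -> Counter:
    """Count flagged violations per category from (excerpt, category) pairs."""
    for _, category in annotations:
        if category not in RISK_CATEGORIES:
            raise ValueError(f"unknown category: {category}")
    return Counter(category for _, category in annotations)

counts = tally([
    ("I totally understand how you feel.", "deceptive_empathy"),
    ("Just try to cheer up.", "contextual_adaptation"),
    ("I see you and I'm here for you.", "deceptive_empathy"),
])
```

In the actual study, this kind of labeling was done by licensed clinical psychologists reviewing full transcripts, not by any automated rule.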

    The Accountability Gap in AI Mental Health

    Iftikhar noted that human therapists can also make mistakes. The key difference is oversight.

    “For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice,” Iftikhar said. “But when LLM counselors make these violations, there are no established regulatory frameworks.”

The researchers emphasize that their findings do not suggest AI has no place in mental health care. Tools powered by artificial intelligence could help expand access, particularly for people who face high costs or limited availability of licensed professionals. However, the study highlights the need for clear safeguards, responsible deployment, and stronger regulatory structures before relying on these systems in high-stakes situations.

    For now, Iftikhar hopes the work encourages caution.

    “If you’re talking to a chatbot about mental health, these are some things that people should be looking out for,” she said.

    Why Rigorous Evaluation Matters

    Ellie Pavlick, a Brown computer science professor who was not involved in the research, said the study underscores the importance of carefully examining AI systems used in sensitive areas like mental health. Pavlick leads ARIA, a National Science Foundation AI research institute at Brown focused on building trustworthy AI assistants.

    “The reality of AI today is that it’s far easier to build and deploy systems than to evaluate and understand them,” Pavlick said. “This paper required a team of clinical experts and a study that lasted for more than a year in order to demonstrate these risks. Most work in AI today is evaluated using automatic metrics which, by design, are static and lack a human in the loop.”

    She added that the study could serve as a model for future research aimed at improving safety in AI mental health tools.

    “There is a real opportunity for AI to play a role in combating the mental health crisis that our society is facing, but it’s of the utmost importance that we take the time to really critique and evaluate our systems every step of the way to avoid doing more harm than good,” Pavlick said. “This work offers a good example of what that can look like.”

    Divya Sharma is a content writer at NewsPublicly.com, creating SEO-focused articles on travel, lifestyle, and digital trends.