OpenAI Restricts ChatGPT From Providing Professional Medical and Legal Advice

OpenAI has updated its ChatGPT usage policy, prohibiting use of the artificial intelligence (AI) system to provide medical, legal, or any other advice that requires a professional license. The changes are detailed in the company’s official Usage Policies and took effect on October 29.

Under the new rules, users are forbidden from using ChatGPT for consultations that require professional certification, including medical or legal advice; facial or personal recognition without a person’s consent; making critical decisions in areas such as finance, education, housing, migration, or employment without human oversight; and academic misconduct or manipulation of evaluation results.

OpenAI states that the updated policy aims to enhance user safety and prevent potential harm that could result from using the system beyond its intended capabilities. As reported by NEXTA, the bot will no longer give specific medical, legal, or financial advice and is now officially an educational tool, not a consultant.

The change has been attributed to regulatory pressure and liability fears, with Big Tech companies seeking to avoid lawsuits. Instead of providing direct advice, ChatGPT will now only explain principles, outline general mechanisms, and direct users to a doctor, lawyer, or financial professional.

Under the new explicit rules, the bot will no longer name medications or give dosages, draft lawsuit templates, or offer investment tips or buy-and-sell suggestions. Users have noted that common workarounds, such as framing a request as a hypothetical situation, are no longer effective, with updated safety filters consistently preventing the model from offering specific advice.

The update follows growing public debate over the increasing number of people turning to AI chatbots for expert guidance, particularly in the medical field. Conversations with ChatGPT are not protected by doctor-patient or attorney-client privilege, and a court could subpoena conversation records for use as evidence.

OpenAI also introduced changes to its default model this week aimed at better recognizing and supporting people in moments of distress, with safety improvements focusing on mental health concerns such as psychosis or mania, self-harm and suicide, and emotional reliance on AI.
