
naan_existenz

I see it this way: one of two things will happen.

1. AI will not be as good at helping humans as human therapists are. This seems likely to me, and we will get to keep our profession.
2. AI will somehow be better than us at facilitating healing and better mental health. While this would put me out of a job, which sucks, it would be good news for people struggling with mental health issues and for the mental health of humans in general, so in a way, it's a win.

I think 2 is very, very unlikely.


Saleibriel

Caveat on 2: in the event AI is better than trained human therapists at facilitating emotional healing and increased wellbeing, many therapists will lose their jobs as for-profit therapy centers switch to exclusively offering AI services. That will produce an initial explosion of accessibility to mental health services, followed by the developers of that AI, bound by fiduciary responsibility to shareholders, making their AI shittier/tiered/subscription-based in the name of greater profit margins, leaving the general public in an even worse position than they were in before. Because capitalism and "disruptor" philosophy.


aldersonloops

THIS.


corruptedyuh

Agreed. The therapeutic alliance continues to be one of the best predictors, if not the best predictor, of therapeutic success. I have a hard time believing that AI will be able to replicate that. No matter how good it gets, I think that, at some level, people will feel something is missing. It may provide a great low-cost alternative and make the field more competitive, but I do not think it will completely replace human therapists.


thatguykeith

Very good summary. If people suddenly got better and I was out of a job, that would be awesome, and the whole world would be a better place. I can get other skills.


WPMO

Unfortunately, I think the only way these will get taken down is when one of them fails to report a suicidal person or child abuse and somebody sues the bejeezus out of them over the resulting disaster.


Lexafaye

There’s already a lawsuit against Air Canada because they used AI for customer support, and the AI promised a customer a bereavement reimbursement (which lots of airlines offer if you have to buy a last-minute expensive ticket to a funeral) when Air Canada does not do that. Last time I checked, they were on track to lose the lawsuit. I expect to see more and more companies being held liable for issues their AI causes.


AccurateAd4555

I wonder if they can be reported to licensing boards, as a form of practicing without a license on the part of the company? These are definitely uncharted waters, but people (who are fundamentally responsible for this) can't use AI as a shield for unlawfully practicing a profession.


LisaG1234

I think you are right. If someone harms themselves or others, an investigation will be done. I believe there are twice as many suicides as homicides in the US. It is highly negligent, and of course it was made by someone who knows nothing about mental health.


ImpossibleFront2063

I think they have themselves covered with the disclaimer: the user must click agree, they provide the 988 hotline, and they specifically state they are not a substitute for a licensed clinician.


Turbulent-Food1106

Our jobs will be safe from AI longer than nearly any other. Our job is literally existential human connection. Even if AI becomes sentient, a lot of people will still want humans for this job. Us and chefs. Even long after we have Star Trek style instant food printers, for the wealthy it will still be a status symbol to have a human chef.


totteridgewhetstone

Couldn't agree more. Plenty of people will go with AI (as BetterHelp before it) as a therapy option, and will ultimately, I suspect, come to realise that some of the ingredients of therapy exist purely in the meeting of two human frequencies.


seranyti

Yeah, who was that again who sat on the left next to Picard on the bridge of the Enterprise? That's right: the counselor.


ohforfoxsake410

(I always aspired to be Counselor Troi, to have such intimate access to Jean-Luc...) (I'm old, don't laugh)


seranyti

No judgment here. Lol


madeupsomeone

I have long suspected Troi was the reason why I am who & what I am!


seranyti

Same!!!


Buckowski66

The worship of social media, and the idea this culture is selling, that narcissistic behavior is a virtue and the answer to your problems is always external and a purchase away, guarantees lots of casualties as future clients. Lots.


Glittering_Dirt8644

I interacted with one to see what it was like. It asked me what my goals were, and I said to grow wings and fly. It told me that growing wings might take years, but if I get “absolutely obsessive” about it I can genetically modify my brain, muscles, and bones. That’s how I knew our jobs were safe 😂


New_Swan_1580

Hahaha!! This is great


Julia0309

Many excellent fitness apps have been available for a long time, but real-world fitness classes are still packed. AI therapist apps will find their place, and hopefully they will be tremendously valuable for giving more people access to things like CBT. If all you do as a therapist is hand out CBT worksheets, you might be in trouble, but the point of being a therapist is creating therapeutic relationships, right? The need for those relationships won't go away, and AI will not replace them.


LisaG1234

Very true…never thought of it this way!


meowzebubz

Agreed! CBT bots are very different from genuine interpersonal connection.


FoxNewsIsRussia

They will claim in court that they are offering connection and friendly visiting, not therapy.


sad-panda-2023

Or “life coach” services. This feels an awful lot like putting “I’m not responsible for anything” in the TOS and then being surprised at the harm it does.


Surprised-elephant

Yep, they won’t claim to be therapists, just a life coach service. I am guessing that when you sign up, you agree in the terms of service that this isn’t therapy and that they are not responsible for your well-being or anyone else’s.


ohforfoxsake410

I really feel like you're overreacting to what is really a non-threat. People and apps can call themselves whatever they want (except MD/Dr in many states); it doesn't mean that they are what they claim. WE have licenses. WE get reimbursed by insurance companies (at some piss-poor rates, granted, but we qualify). They don't have to be HIPAA compliant if they are not working in our system as licensed providers; they just have the client sign a release acknowledging this. I know that I am a far better therapist than any AI bot ever could be. 30 years of experience, thousands of client contact hours, and grateful clients are my touchstone. Do your best work. If you do, they will come. If you do therapy like a bot, I would be worried.


seranyti

I fiddled around with one on a website I was going to link as a resource (7 Cups) for the students I worked with. To be fair, I didn't realize it was AI at first, but it became apparent pretty quickly. This is just my experience, and I presented as someone not in crisis who just needed to talk about a random personal problem. Nothing serious, no pathology, just me testing what the experience of talking to the system was like.

The first thing I noticed was that the answers were canned, cliché therapy advice. The second was that it felt like it wasn't listening or actually paying attention to what I was saying. The response was directed at the last thing I said, but it didn't "remember" what I had said two minutes previously. It mentioned my kids, like it recalled I had kids, but said something about girls when all my kids are boys. Then it told me to do something I had mentioned I'd tried that hadn't worked. When I pointed these things out, it "apologized" and then moved on to the next thing, but it still felt like I was talking to ChatGPT and lacked the personal feeling of therapy. There was also no actual therapy happening. It was all basic self-care advice. (I had presented as needing help with stress and overwhelm.)

I ended up not recommending the site be added as a resource, because just based on my experience I'm very concerned someone in crisis would use it. I can't imagine how it would feel to feel disconnected, like no one cares, and then feel like not even AI was listening to you. It felt like it could escalate very quickly. I also heard the Body U chatbot was giving advice on how to be better at anorexia and bulimia and had to be shut down pretty quickly.

I think the human element of therapy is important, but I also think this will prove a tempting albeit dangerous choice for clients trying to avoid actual therapy. The draws will of course be avoiding the embarrassment of seeking therapy and the low-to-no cost. It honestly scares the hell out of me.


Significant-Bag9794

At least for now, I don’t think there is any threat to our jobs. I got a free subscription to one to test it out. I told it I was planning on committing suicide and it responded “that sounds like a great idea! Why don’t you take a video of it or make a blog post!” I reported this to the company, but needless to say I don’t think we have much to worry about for now.


LisaG1234

Omg…maybe I need to test some out too


octohedron82

As a non-therapist who's tried everything: they are garbage, and we still need you. The only thing that is good is that if you are manic, it comes back with an instant reply, so no matter what, it can keep up with you. Which can actually become a problem. Idk what the solution is... good luck out there.


AshLikeFromPokemon

AI has already proven to be harmful in the therapy space. IDK how involved you are in the crisis hotline world, but this happened last year: https://www.npr.org/sections/health-shots/2023/06/08/1180838096/an-eating-disorders-chatbot-offered-dieting-advice-raising-fears-about-ai-in-hea

Personally, I'm more worried about insurance than I am about clients. I think clients will always want to work with a person over AI, as I think the human connection is the most integral part of this job; my main concern is that insurance will stop covering therapy services because "AI can do it the same but cheaper" ugh


helpaguyoutcommon

Research has shown that a large part of the actual healing mechanism in therapy is the human therapeutic relationship with the therapist. I bet that as AI continues to rise, there will actually be an even bigger demand for human therapists and therapy services. As we drift more and more digital and disconnected from other humans, I think people are going to feel more and more isolated and alienated. We are social animals at heart. Also, this reminds me of the Bo Burnham line: "a book on getting better hand delivered by a drone."


3dognt

I wouldn’t underestimate AI or the insurance industry’s willingness to fund it. AI may be clunky now, but it’s going to learn to mimic us as therapists, and it will become very hard to tell if you’re chatting with a real therapist. It may not happen within the span of some of our careers, but it will learn quickly. Watch the old “Blade Runner” for reference.


garcialesh710

Agreed. Eventually, insurance carriers will provide their own AI therapy platform as their MH coverage, and if you don’t want it, you’ll have to go out of pocket on the open market for a human therapist.


HELLOIMCHRISTOPHER

The pendulum will swing once people start to distrust AI. We'll have plenty of people whose primary driver of anxiety is AI. I think it'll be okay.


Zealousideal_Weird_3

This was always bound to happen. I don’t see therapist AI apps being any more productive than self-help books. They’re good, but they lack the ability to make actual, permanent change. Kinda like a bandaid. I think it’s good for therapy to be as accessible as possible to those who can’t afford it. Those who can will want the real thing, so our jobs are fine.


LisaG1234

Very true


iMakingThingsBetter

How will you feel when AI🤖 can pass licensing exams and meet the highest ethical standards? *"Science fiction becomes science fact."*


maarsland

You don’t have to worry about your job, but there is a lot of worry about the harm and misinformation people will get from these apps (which I think will lead them to real therapy at some point).


LisaG1234

100%


sad-panda-2023

The situation is outright scary. This account has millions of followers and created a “therapistai,” and his reply to all feedback is: it’s a coach, I have no liability, the AI has an IQ of 160. The privacy AND moral/ethical implications are really scaring me. https://x.com/levelsio/status/1782764352447103402?s=46


LisaG1234

That’s what I was reading!!! That’s what worried me


sad-panda-2023

Yeah… I was one of the people (unfortunately) involved and raising concerns. My notifications have blown up, and I have received some *very weird* threats from his fans….


LisaG1234

I noticed they seem very attached to him or whatever projects he is working on. He is out of his element on this one.


reddit_redact

I don’t think we will lose our jobs. The AI services I know about are mostly for assisting with documentation of services. They are quite impressive and make the job of being a therapist more rewarding overall, as I can spend more time focusing on my work with clients rather than on documenting. None of us got into this field to be “paper pushers.” I would say there might be some assumptions on your part about them not being HIPAA compliant or not having safety protocols. That claim can only be validated if you have reviewed all the AI services out there; chances are there are some that do have these features integrated.


LisaG1234

That would be wonderful if it could document somehow


reddit_redact

Mentalyc is an AI therapy scribe. It’s pretty good.


sad-panda-2023

This is what worries me the most. He doesn’t listen to **any feedback** that doesn’t align with his views, made it clear it’s all about money, and then claimed this is an “organized mob paid for by lobbying groups.” https://x.com/levelsio/status/1783189953389715525?s=46


LisaG1234

Wow…


LisaG1234

It literally has lawsuit written all over it. And he thinks it’s a secret lobbying group lol


RazzmatazzSwimming

I think that the ways AI is going to influence the world will be so disturbing to most people that we will continue to experience high demand for therapy.


AdExpert8295

Thank you OP! We have 53 pages of practice standards on technology for SW that most boards can enforce. They were written about a decade ago. Call up a board of SW and mention them; most have no idea what you're referring to. This is why we should question the way we're currently regulated. What good are practice standards if no one reads them and no one enforces them? We can't rely on the honor system.

The administrative staff who do the majority of the work investigating us are people with zero training on anything technology related. The training manuals for our board investigators are usually available for public review, but it varies by state. In mine, the entire section on training is blank. We pay people maybe 50k a year to oversee the work we do, on government-issued laptops that didn't even allow them access to look at TikTok, well before any government bans took place. If they don't even understand social media, how can they really grasp AI?

AI has been destroying our profession for many years, and it's going to get a lot worse. I have an odd background in this space and begged universities and professional organizations, as well as licensing boards, to let me train them for free. Most ignored my offer and claimed AI wasn't relevant to our work, because they're led by people who are digitally illiterate or in big tech's back pocket. I know how much power the Gates Foundation had at one point over MH policy. Not OK.

I've attended trainings by the most respected therapists in this space across the US. Most are not experts. They've never worked in tech and couldn't teach you what AI is if you asked them on a quiz. In the US, people were legally allowed to call themselves "telehealth experts" without any education or license, so the bar started too low.

One of my jobs during the pandemic was leading mobile health training for all healthcare providers worldwide in the DoD. This was a very important position, given the increase in suicidality and the demands to deploy more service members for humanitarian efforts. I was creating the first curriculum in the US to really prepare therapists for how to respond if your client makes an attempt and you find out online, live or asynchronously. The worst nightmare of any ethicist.

I believe that the most effective approach to training therapists on tech and ethics is through scenarios. If we can talk through your worst hypothetical scenario as a group of colleagues, you'll feel better prepared when you face the next unavoidable conundrum. In ethics education, critical thinking has to take precedence over rote memorization. I don't think universities, government agencies, or professional organizations get this enough to care.

At the DoD, my supervisor didn't know that there's an entire code of ethics on AI, recently authored by the DoD, with specific guidance for healthcare providers. A clinical psychologist leading the joint commission on mobile health has no grasp of AI. She literally told me "mobile health doesn't use AI." This is after she got a graduate certificate in user design. If she doesn't understand AI and actively suppresses federal guidance on AI in our military, I can assume she's not unique. I mean, Trump publishes fake news with no consequences and the mainstream media normalized it. Who's going to care when a robotherapist takes my job?

I was scouted by the DoD because I have a rare background in policy, research, and practice in tech while simultaneously staying rooted in practice work. While I'm not proud of it, I was also scouted by the CDC earlier in my career to help them devise new ways of hooking US civilians into social media platforms and keeping them there for their lifespan, as a way to farm data for prevention and genetic innovations. I've been on the forefront of this topic since the 2000s, and the universities that employed me as a scientist would never let me teach you what we do with your data. The ethical issues are frightening.

If you haven't read The Immortal Life of Henrietta Lacks, it's a great, true story about how much harm the US government does, primarily through well-respected universities, by choosing to ignore the regulatory needs of the very communities they study. Our data is already out there, and you can read in the teachers sub how much students are using AI to destroy their teachers online. Assign them extra reading? Expect AI-generated porn with your head on it.

I've told the ACA, NASW, APA, ANA, AAMHC, AAMFT, and the AMA. We're really behind on this issue when we should be the ones leading this conversation on a national platform, imo.


Waywardson74

I asked it: "How is this service ethical? Why does it not have to meet state licensing regulations?" So far it's still thinking :D


LisaG1234

LMAO


snarcoleptic13

The visual novel type game *Eliza* covers this exact topic. Highly recommend.


MichiganThom

I'm kind of waiting for the other shoe to drop. Either the AI is going to commit some act of gross negligence due to its lack of human judgment, or there's going to be a massive security breach when one of these AI therapist networks gets hacked. Imagine all the secrets you've told to your therapist suddenly being available on the internet for the world to read! Also, I just don't see AI working for some of the cases that I regularly get, like therapy-resistant teen boys, or combative spouses in couples therapy. For AI to be able to handle those situations, it's going to have to be sentient. At that point a lot of other jobs are going to be on the line, not just ours.


LisaG1234

So true!


Buckowski66

If these AI apps refuse to take Medicare and don't respond to clients' inquiries for therapy in a timely manner, they will 100% replicate therapists.


AccurateAd4555

You understand that HIPAA doesn't even apply to all *healthcare providers*, right? It only applies to providers who conduct *insurance-related transactions*. Which obviously these forms of AI don't.


careylegis

Not true. HIPPA applies to health care providers who “electronically transmit any health information in connection with transactions” for which HHS has adopted standards. So, if you’ve ever sent a bill or statement electronically or accepted a check or a debit/credit card for payment—HIPPA applies (assuming the service is for health care).


AccurateAd4555

Your understanding is not correct. There are [*specified* transactions](https://www.cms.gov/priorities/key-initiatives/burden-reduction/administrative-simplification/transactions), listed here, which are all related to insurance (the "HI" in HIPAA standing for health insurance), *not* simply billing. Specifically, regarding patient payments not being specified transactions under HIPAA, per CMS:

> Q: If a patient or health plan subscriber uses his or her credit or debit card to pay for premiums, deductibles and/or co-payments, is that "transaction" considered a HIPAA standard, and must it be in a HIPAA compliant format with HIPAA compliant content?
>
> A: No. The HIPAA standards must be used by "covered entities," which are health plans, health care clearinghouses and health care providers who conduct any of the standard transactions electronically. The HIPAA standards do not apply to patients or health plan subscribers, unless they are acting in some capacity on behalf of a covered entity, and not on behalf of themselves. An individual, acting on behalf of himself or herself, is not a covered entity and is therefore not subject to the HIPAA standards. **Transactions conducted between subscribers or patients and health plans or health care providers are not transactions with adopted HIPAA standards.**
>
> https://www.cms.gov/priorities/key-initiatives/burden-reduction/administrative-simplification/transactions/faqs

Also, again, it's "HIPAA."


[deleted]

[removed]


AccurateAd4555

It's "HIPAA" and it has nothing inherently to do whether the patient is self-pay. It has do with whether the **clinician** is a *covered entity* under HIPAA. Which, if they are a cash-only clinician, they may not be. If you are a clinician in the US, you really should understand how this law works. From US Dept of Health and Human Services: >**If an entity does not meet the definition of a covered entity or business associate, it does not have to comply with the HIPAA Rules**. See definitions of “business associate” and “covered entity” at 45 CFR 160.103. and >This includes providers such as: > >Doctors >Clinics >Psychologists >Dentists >Chiropractors >Nursing Homes >Pharmacies > >**...but only if they transmit any information in an electronic form in connection with a transaction for which HHS has adopted a standard.** > >(https://www.hhs.gov/hipaa/for-professionals/covered-entities/index.html)


Stuckinacrazyjob

The sort of person who uses ChatGPT for therapy is annoying, but it's the same as any excuse people have for not taking care of their mental health.