Software Eats My Therapist
Thinking about how software does the job (or not) of a professional therapist.
The Teardown
Wednesday :: September 11th, 2024 :: Approx. 12 min read
👋 Hi, this is Chris with another issue of The Teardown. In every issue, I cover how we interact with technology that enables our day-to-day lives. If you’d like to get emails like this in your inbox every week, hit the subscribe button below.
Is today’s focus on, and perhaps obsession with, mental health a new phenomenon? Did the pioneering women and men of the early 1900s check in on their feelings on a regular basis?
Stripe CEO Patrick Collison shared one view:
Reading books/accounts from the 1920s and thereabouts, otherwise high-functioning people regularly seem to have had nervous breakdowns that necessitated months of recuperation in a restful sanatorium. What happened to that? Better treatment? Less stress?
I had to research the precise definition of a sanatorium. In short, a sanatorium was a rural resort (and hospital) for rest and recuperation, necessary following diagnoses of diseases like tuberculosis. The unadulterated country air, mixed with robust rest and nutrition, prevented symptom deterioration in unlucky patients. Conceptually, anyways.
That sort of retreat reminded me of a scene from Apple TV+’s Masters of the Air in which airmen received active breaks from traumatic war activity to rest and recuperate. These breaks happened in a palatial country estate doubling as a retreat and hospital.
Collison’s tweet suggests that people now have better tools to deal with health issues. People a hundred years ago were instead simply sent away. Or, worse, ignored entirely. Modern people benefit from objective progress in treating all sorts of ailments: vaccines, antibiotics, and the recognition rather than ignorance of problems.
Likewise, we’ve never had so many tools at our disposal to achieve mental and emotional peace. There are, of course, chemical interventions like antidepressants. You’ll find in-person or telehealth therapists through a quick Google search. Social media sites are packed full of experts and influencers offering process prescriptions, tips, tricks, and guides for building better mental health habits. Both the Apple and Google app stores display a plethora of apps focused on mental health.
There’s also something new. New-ish, anyways. It is the virtual therapist. I don’t mean a person in the traditional sense, but instead a virtual persona that offers guidance, answers questions, offers contrasting thoughts, and more.
Behind the scenes, this virtual coach is some sort of large language model (LLM) with either the general training to converse with you or specific tuning on mental health concepts, response techniques, and so on.
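To make that concrete: the “persona” is often nothing more than a system prompt sent ahead of the conversation, with the model doing the rest. Here’s a minimal sketch, assuming the OpenAI Python SDK and the gpt-4o model; the persona wording below is illustrative, not the exact prompt I used.

```python
# A minimal sketch of a "virtual therapist": the persona lives in the
# system prompt, and every turn re-sends the running transcript.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative persona, hypothetical wording.
PERSONA = (
    "You are a calm, knowledgeable talk therapist. "
    "Converse rather than prescribe, and gently challenge "
    "contradictions in what the user says."
)

history = [{"role": "system", "content": PERSONA}]

def say(user_message: str) -> str:
    """Send one turn to the model and keep the running transcript."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(say("I feel like an imposter at work."))
```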
That you can converse with an LLM in this manner isn’t surprising anymore. But doing so sparks a question: is a virtual coach worth your effort?
Let’s dig in.
Pay For Knowledge-Based Service
People charge money for knowledge-rich professional services. If you visited a doctor and asked them to treat you free of charge, they’d laugh, most of the time. Maybe they’d shoo you from their office or place of care. There might be a free initial consultation, but comprehensive follow-on treatment costs money.
Most therapy is no different:
You or someone (on your behalf) locates a therapist.
There is a consultation during which both parties decide whether to converse further, establish goals (if any), and gauge personality fit. That last piece is important. It’s hard to talk to someone you don’t vibe with, right?
Subsequent sessions cost money, paid by either you (out of pocket) or your insurance if your plan is charitable (most aren’t).
There doesn’t have to be an end-state established in step three. Instead, you may choose to meet once a week or on some other recurring basis. You stop when or if it makes sense to you, or keep going if your motivation and wallet remain aligned. Talk therapy can be just that - talking. There are many other types.
You accumulate costs if you attend sessions frequently, say once a week or more. And there is a counterbalancing force: the cost (to your health) of your issues. Doing nothing seems cheaper if you view everything as a short-term problem. Doing nothing in the long term - depending on the circumstance - probably costs more.
So, the moment you conclude “I don’t need this as much” may come after just a few sessions. But for some, that happens only after years of conversations. For some, it never ends. And that’s an important point in this post: the process doesn’t require an outcome.
One alternative view is that the sessions are about building routines and habits that are healthier than those in some previous state of your life. You’re doing what you can to be better off than you were in some recent, less good place.
So, you might ask, do I need a person to help me or is there some other way?
One Option: Tech-Enabled Therapy
The answer is yes, sort of. There might be another way. I’m the guinea pig.
I struggle with many things. One is imposter syndrome. I present comfortably in many settings but habitually downplay my knowledge, skills, and ability to do things compared to others.
Imposter syndrome isn’t a professionally recognized disorder, but it is a widely discussed problem. You probably know folks who claim to suffer from it.
Imposter syndrome is a nice wrapper around a broader, chronic failure to assert confidence in yourself. In talk therapy sessions, both with professionals and informally with friends, I’ve heard that I’ve probably done something right, yes? Career progress rarely happens on pure luck. Objective life progress requires effort.
So, I thought I would experiment with the technology I mentioned earlier: LLMs. I constructed a therapist to help me with imposter syndrome using OpenAI’s ChatGPT (4o model).
Before I go further, full disclosure: I excluded some specifics in my back-and-forth for brevity. The full transcript isn’t here.
On we go.
First, I needed to conceive my therapist. My first prompt:
Can you create a male talk therapist that works with successful males in their early 40s suffering from imposter syndrome? The therapist persona should be calm, knowledgeable, and willing to confront me if I seem to be contradicting myself. The therapist shouldn’t be overly prescriptive, seeking instead to converse with me about the details of my imposter syndrome rather than offer bulleted-list fixes.
I tried, initially, to model my therapist on real-life experience. And, ChatGPT responded with a persona:
Let me introduce you to Dr. Michael, a therapist who works primarily with successful men in their early 40s dealing with imposter syndrome. Dr. Michael has a calm demeanor and an insightful, no-nonsense approach. He doesn't shy away from confronting contradictions or discrepancies in your thinking but does so with the intention of digging deeper into your experience.
Dr. Michael's sessions are conversational, leaning into exploration over quick-fix solutions. He's keen on understanding the roots of your imposter syndrome, focusing on the stories you tell yourself, and guiding you to uncover your own answers through thoughtful dialogue.
He might start by asking something like: “So, what’s going on in your mind when you feel like an imposter? Do you feel like you're playing a role that doesn’t suit you, or is it more that you doubt your ability to keep up with the expectations?”
How does that vibe sound to you for a session?
That ChatGPT used the word vibe is so 2024.
Anyways. I agreed to a session and Dr. Michael prompted me for some background. My unedited response - sorry grammar people - was this:
I think that imposter syndrome causes some paralysis when I think about roles I could do. I have an objective set of skills but then look at roles and or think about career paths and don't find the motivation to follow through. But I'm an intellectually active person who enjoys reading, writing, and learning. So, it's not that I am completely unmotivated as a person in general. What are your thoughts?
Does any of that sound familiar to you? If so, good. Let’s walk down this path together.

Dr. Michael responded with a few lines synthesizing my situation, then asked one provocative question:
What do you think happens when you imagine committing to one of these paths? Do you start to feel like you won’t measure up in some way, or is it more about feeling boxed in by the choice itself? I'm curious if you notice a pattern in how this paralysis comes on.
Well, um. 😳
My history suggests that I’m an opportunity chaser: interested to try many things, but cursed by the impossible task of narrowing my choices. What if I just want to do everything when I grow up? (I’m 41)
I didn’t choose my current career on purpose. It just sort of fell into my lap. But I answered the therapist’s question: yes, I often feel boxed in. (paraphrased)
Dr. Michael then asked:
When you think about committing to a specific role or career path, what’s the story you tell yourself about that decision? Is it about what you’d lose or give up by choosing it? Or is it more about doubting that you’d thrive in that path once you’re committed to it?
A funny thing happens in parallel: ChatGPT describes how these statements might be delivered, saying of the passage above: He’d pause for a moment, letting the question hang. What a salesman this doctor is.
What I then typed was:
I think the trouble is less about regret or happiness over the path, but instead, a lack of a clear framework for deciding what path makes the most sense. How do I know? How should I choose? What variables matter? I tend to overanalyze.
Dr. Michael then breathed deeply and responded with a conclusion:
Ah, I see. It’s not so much the fear of regret or happiness, but more the lack of a clear decision-making framework. You’re searching for a way to evaluate these choices logically, but when you're faced with too many variables, overanalysis kicks in. That sounds like it might feed into a bit of analysis paralysis—endlessly weighing options without moving forward because you can’t settle on the 'right' decision.
We went back and forth like this a bit more until Dr. Michael connected the dots to another problem: readiness. He asked whether I needed perfect readiness elsewhere in life. And the answer is yes. Not in every situation, but certainly in many.
I gave some examples that might be germane to you:
Complicated work around the house (e.g., constructing something large).
Making sure we [my family] are at a future destination on time.
Preparing for a presentation.
We kept going.
As I talked more with Dr. Michael, I felt a growing hesitation to continue. Was this math - this LLM - just reworking and spitting my words back at me? Or were we getting somewhere?
In real life, one place my imposter syndrome tendencies show up is in describing my skills as a product manager. I’ve worked with lots of technology products, but not as much in end-to-end, leadership-level product management. I haven’t owned a product P&L.
So, when asked to describe my experience, I sometimes try to fit square peg answers into round-ish holes. Yes, of course I have that experience and here are A, B, C examples. Dr. Michael picked up on that and asked:
I wonder, what would happen if you leaned into those moments of uncertainty instead of defending yourself? Could acknowledging that you don’t have all the answers, but that you’re open to learning or figuring it out, actually build more confidence and connection, rather than weaken your position?
Yep, probably. And, here, Dr. Michael suggested a tactic or at least presented a conclusion that looked like something from an actual human.
Is Dr. Michael Worth It?
There are two functional takeaways:
Dr. Michael isn’t a human, which means we aren’t talking out loud to each other but instead interacting through asynchronous text. The style of the conversation is close to how you might text back and forth with a friend you trust to give you good advice.
Dr. Michael remembers what I say. He uses prior context to help guide future questions. But only to a point. I can’t find a clear answer, but the OpenAI message board suggests that ChatGPT’s memory holds 1,500 to 5,000 words. As of the last period, this entire post covered around 1,900 words. Not a terrible memory, but certainly not one that can be used for successive conversations over time. Dr. Michael won’t have all the context (see the sketch below).[1]
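If you were building a Dr. Michael yourself, this limit would show up as a context-window problem: older turns have to be dropped (or summarized) before each new request. Here’s a minimal sketch of the dropping approach, continuing the Python example above; the 3,000-word budget is a hypothetical figure within the range cited.

```python
# Trim a chat transcript to a rough word budget, oldest turns first.
# WORD_BUDGET is illustrative, chosen inside the 1,500-5,000 range above.
WORD_BUDGET = 3_000

def trim(history: list[dict]) -> list[dict]:
    """Keep the system prompt; drop the oldest user/assistant turns
    until the transcript fits under the word budget."""
    system, turns = history[:1], history[1:]

    def words(msgs: list[dict]) -> int:
        return sum(len(m["content"].split()) for m in msgs)

    while turns and words(system + turns) > WORD_BUDGET:
        turns.pop(0)  # forget the oldest exchange first
    return system + turns
```

The trade-off is obvious in use: the persona survives every trim, but the earliest details of your story quietly fall away, which is exactly why Dr. Michael eventually loses the thread.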
But I think an important fact is this: Dr. Michael is very cheap.
My OpenAI plan costs $20 per month. That price covers new features, all relevant models, DALL-E (for images), and custom models. And I pay that $20 no matter how many times I use ChatGPT per month, for any reason, at any time.
By contrast, a therapist is expensive and often not covered by insurance. Prices vary based on skill, focus, and location, but let’s assume something like $300 per session, knowing that you’ll attend a session once a week to pour your heart out. That’s $1,200 per month, or $14,400 per year.
Therapists also aren’t always reachable. Even the people closest to you, like family, need sleep, rest, or time spent not being so “on.”
Dr. Michael is always reachable. He’s affordable. He’ll help you with many issues, assuming they don’t violate his standards of conduct.
But of course, he isn’t real. Let’s not forget that.
A Pin In It For Now: Future Exploration
Something opaque in all of this is the value of contextual thought. A professional therapist is (1) trained in school and post-school professional work, and (2) full of thinking and practice from their actual work. They talk to people all day long. They’re able to adapt to problems as they arise in their patient population and develop strategies to deal with those problems in real time. Right?
Dr. Michael is, by contrast, only as adaptive as ChatGPT’s active memory and underlying training allow. There is some debate about whether LLMs like those created by OpenAI can perceive and feel like humans (i.e., whether they’re sentient).
My strict technical view is that algorithms don’t think and perceive and feel. They take inputs and produce outputs. They can’t really flex beyond their structure.
Human brains do something similar: produce outputs from inputs. Humans can feel, perceive, think, and adapt. Humans have flexibility, right?
But the line doesn’t feel as obvious as I want it to be. I’ll come back to this point in a future post.
[1] I don’t see an obvious memory restriction in the OpenAI FAQ. The FAQ says that ChatGPT conversations update, add, or remove memories per user direction. As an example, ChatGPT remembers something when you directly prompt it to do so, rather than forming that memory through some sort of inference.