ChatGPT is Gen Z's therapist, providing instant validation for every self-destructive thought one could have. Perusing TikTok, you'll find countless videos offering prompts that promise to transform a half-baked Silicon Valley snake-oil chatbot into a qualified therapist. Unsurprisingly, even state-of-the-art large language models, or LLMs, are incapable of providing adequate mental health care.
While they offer compelling responses to user statements, they frequently make mistakes referred to as hallucinations. They can't reason like humans, and they're unable to respond to novel situations that fall far outside the text corpus they were trained on. To cut through the jargon: providing mental health care is currently beyond the scope of any LLM's abilities.
ChatGPT may be great at helping computer science students cheat their way through college, but if you're in a mental health crisis, it can be dangerous to interact with. Considering that just over 13% of Americans ages 19 to 25 are underinsured, and the fact that ChatGPT Plus costs less per month than a co-pay to see an actual therapist, this predicament is unsurprising. In the hellscape we live in, I will never demonize anyone for using ChatGPT as a therapist.
And seeing privileged TikTok yappers vilify those who are broke for using the only resource they can access for therapy makes me fume. Though I will touch on the ethics of using generative AI, I won't judge people for using it.
Rather, I would like to talk about the pitfalls of using ChatGPT for purposes it was never designed for. That raises an important question, though: what exactly was the intended use case of ChatGPT?
The answer is that ChatGPT was never designed to be used by consumers. It was a public developer preview made available in November of 2022 as a proof of concept. The language model powering this developer preview was, by today's standards, weak, unsophisticated, and far too prone to errors.
In fact, OpenAI was shocked when their janky tech demo went viral and garnered a large user base of ordinary people as opposed to a small group of computer scientists. To researchers and nerds like myself, the large language model technology behind it wasn't a new breakthrough. Rather, it was a baby step towards the lofty goal of artificial general intelligence.
Prior to its release, I had already used OpenAI's sandbox demos of the older GPT-2, which is ChatGPT mutatis mutandis, to generate wacky recipes and interesting short stories before quickly getting bored of it. However, the vast majority of people, those who don't binge-watch Two Minute Papers, were blindsided by a technology that appeared to have been conceived overnight via sorcery. Undoubtedly, the hype surrounding its release was driven by a fundamental misunderstanding of the technology on display.
The chatbot acts like a thinking entity, masquerading as an accurate simulation of the human brain. But its true identity is far less glamorous. As many have stated before, it's merely a flashier implementation of the tech behind Google Translate and your phone's predictive text keyboard.
Researchers and executives at OpenAI observed how common folk used their technology and, from their silicon tower, devised strategies to combat abuse of their systems and roadmaps to monetize a product that, to this day, has never turned a profit due to the exorbitant costs of developing and powering it. I recall a Wall Street Journal podcast episode from the first few months after ChatGPT's release that discussed how OpenAI was initially very reserved when it came to giving it a human-like personality. At the time, ChatGPT responded to prompts as a robot would, frequently reminding users who tried to ask it personal questions that it was an AI-powered chatbot incapable of having opinions or feelings about things.
Today, it responds to every prompt in an overly casual, sycophantic manner, interrupting the flow of the text it's spitting out with inappropriately placed, unnecessary emojis. The obnoxious personality implemented in recent updates, likely because it increased response-quality ratings and time spent on OpenAI's platform in A/B testing, made the tool polarizing even among those most dependent on it. ChatGPT dependency is well documented on its eponymous subreddit, whose members use the chatbot for things it is incapable of doing.
There, we can see how it becomes the de facto psychologist for its most chronic users. Here's a nonsensical post that I found after only ten seconds of searching: this Redditor encouraged others to ask ChatGPT for estimates of their IQ, their personality traits, and potentially a diagnosis of autism and/or ADHD, if deemed appropriate. Of course, ChatGPT is not capable of producing a true answer; being diagnosed with autism or having one's IQ estimated through a psychic reading or astrology would yield assessments of comparable accuracy. I bring this post up because the replies under it illustrate just how much faith ChatGPT users have in the product's responses, even when there's compelling evidence that those responses are inaccurate.
For example, nearly everyone who posted their responses was bestowed with a high IQ by ChatGPT. Anyone who knows basic statistics understands just how implausible these IQ estimates, all of them 120 or above, are. You can easily quantify the odds by taking the approximately 10% chance of having an IQ of 120 or above and raising it to the 200th power, which represents roughly how many people replied.
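As a sanity check, here's a back-of-the-envelope sketch of that calculation. It assumes the standard IQ distribution (mean 100, standard deviation 15) and the roughly 200 replies mentioned above; the reply count is my approximation, not a hard figure.

```python
# Back-of-the-envelope check: assumes IQ ~ Normal(mean=100, sd=15) and ~200 replies.
from statistics import NormalDist

p_single = 1 - NormalDist(mu=100, sigma=15).cdf(120)  # P(IQ >= 120) for one person
p_all = p_single ** 200                               # all ~200 repliers at 120 or above

print(f"P(IQ >= 120) for one person: {p_single:.3f}")  # ~0.091, i.e. roughly 10%
print(f"P(all 200 at 120+):          {p_all:.1e}")     # on the order of 1e-208
```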
The odds work out to roughly 1 in 10 to the 200th power that they all have what psychologists refer to as superior IQs. I had to ask ChatGPT to put that number into words, but you don't have to worry about the accuracy of its answer, because the probability is effectively zero. Obviously, ChatGPT is pulling these responses out of the thin, stale air in some data center using more energy than an island nation.
Despite this, one Redditor implied that the answers are correct and that they're a product of sampling bias. "Highly intelligent people are overrepresented on forums like this," claimed this Redditor. "I second this," said another Redditor, chiming in.
The true answer is that ChatGPT is very good at flattering and misleading its users. As someone correctly stated in the thread, "I'd wager 80% of users here will have the same bootlicking from their ChatGPT." ChatGPT's text generation method, which is predicting the next token, causes hallucinations, and the way its underlying model is refined can exacerbate these issues.
The quality of responses is in part assessed by user feedback, which makes ChatGPT completely unreliable for any kind of psychological assessment. I'm confident that responses stating users have lower-than-average IQs or personalities high in neuroticism, for example, are less likely to be served to users, because responses similar to them are more likely to be downvoted when the application asks for feedback.
That feedback loop may be a good explanation for how certain obsequious traits seen in ChatGPT responses became so widespread: its users likely engage with it more when it's flattering and validating. Moreover, convincing but incorrect responses will always be preferred over correct yet unconvincing ones when a response can't be verified.
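To make that dynamic concrete, here's a minimal toy simulation. It is not OpenAI's actual training pipeline; the response styles, the thumbs-up model, and the update rule are all made up for illustration, under the single assumption that users who can't verify accuracy tend to upvote whatever flatters them.

```python
# Toy simulation of feedback-driven refinement (illustrative only, not OpenAI's pipeline).
import random

random.seed(0)

# Hypothetical response styles with made-up accuracy/flattery traits.
candidates = {
    "blunt_accurate":    {"accuracy": 0.9, "flattery": 0.10},
    "hedged_accurate":   {"accuracy": 0.8, "flattery": 0.40},
    "sycophantic_vague": {"accuracy": 0.3, "flattery": 0.95},
}

# Start with no preference between styles.
weights = {name: 1.0 for name in candidates}

def thumbs_up(traits):
    # Assumption: users can't verify accuracy, so feedback tracks flattery instead.
    return random.random() < traits["flattery"]

for _ in range(10_000):
    # Serve a style in proportion to current preference weights.
    names = list(weights)
    style = random.choices(names, weights=[weights[n] for n in names])[0]
    # Reinforce styles that earn positive feedback; dampen the rest.
    weights[style] *= 1.01 if thumbs_up(candidates[style]) else 0.99

total = sum(weights.values())
for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} {w / total:.1%} of future responses")
# The sycophantic style ends up dominating, even though it is the least accurate.
```

None of these numbers mean anything on their own; the point is simply that a feedback signal which rewards pleasantness will drift a system toward pleasantness, regardless of accuracy.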
And when a subjective answer needs to be informed by expertise, a user can't easily get a second opinion. This is what makes ChatGPT so dangerous when it assumes the role of a mental health professional. Trained therapists receive advanced degrees and certifications and adhere to established procedures and diagnostic criteria when approaching the variety of mental health conditions their clients experience. In the eyes of a layperson, ChatGPT appears to be able to give responses comparable to those of a trained therapist, but it has none of that training. With little tailoring to any specific field, ChatGPT relies on the quality and scope of its training data to mimic what an appropriate response to a prompt might sound like. Though OpenAI has not revealed what data they trained their models on, we know that it includes all sorts of copyrighted material acquired by the company through dubious means.
So, it's safe to assume there are countless textbooks and piles of research papers in its data set influencing some of its responses. However, ChatGPT's data set also includes pretty much anything OpenAI could scrape from the clear web. Alongside the expert work of trained psychologists, the language model was fed piles of junk text, including every bad reply from aspiring Mensa members on Reddit, content-farm answers from sites like Quora, and millions of blog posts written by unqualified charlatans who regularly spout misinformation.
The data set also includes lengthy passages of deeply harmful text covering a broad range of disturbing topics. Privileged Westerners can only enjoy a relatively safe chatbot experience because workers in Kenya were paid $2 per hour to refine ChatGPT, becoming traumatized as they sorted through horrific data just so the language model doesn't regularly spew hate speech and other abhorrent language. This instance of exploitation highlights one of the many ethical concerns of using generative AI for our personal needs.
Most internet discourse focuses on the environmental cost of using language models. But as YouTuber Simon Clark points out, its impact on the climate crisis is decontextualized by many. I often hear statistics about how language models use a certain amount of water to generate each response or that these interactions use enough electricity to power household devices for minutes on end.
But these comparisons are often utter failures of climate science communication. For example, one misleading infographic I saw showed that the energy required to power a chatbot could power six light bulbs, if I recall correctly, for a brief period of time. It clarified that the figure was calculated using LED light bulbs as an example, which are highly efficient.
Intentionally misleading or not, this illustration is pernicious. We were raised to turn off the lights when we left the room because our childhood homes were lit by power-hungry incandescent bulbs. Most adults today don't realize that if you have LED light fixtures, you could leave your lights on 24/7, year-round, with a minimal impact on your electric bill.
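Here's the rough math behind that claim. The wattages and the electricity price are assumptions I'm supplying for illustration: a typical 9-watt LED, a 60-watt incandescent, and roughly 15 cents per kilowatt-hour.

```python
# Rough annual cost of leaving a single bulb on 24/7 (assumed wattages and price).
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15  # assumed price in dollars per kilowatt-hour

def annual_cost(watts):
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

print(f"9 W LED, always on:           ${annual_cost(9):.2f} per year")   # about $12
print(f"60 W incandescent, always on: ${annual_cost(60):.2f} per year")  # about $79
```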
Additionally, reports about the water consumption of data centers only apply to those without closed-loop systems; these newer systems, which recycle the water used to cool servers, are increasingly common. Moreover, each advance in generative AI brings reductions in the resources needed to train and run a model. High-end desktops can already run modern language models locally, requiring no cloud services, and it's clear that this is the future of consumer-facing AI: lightweight models that are efficient and don't need huge data centers to run. Using generative AI does impact the planet, and in aggregate that impact is harmful and significant. But an individual's regular AI use only slightly wounds our climate.
Watching influencers get crucified on TikTok for suggesting you use AI to generate a grocery list once a week seems silly to me when influencers go on multiple transatlantic flights per year and eat steak dinners every week because restaurants comp their food. Nobody bats an eye at these activities, which inflict gaping wounds on our climate compared to the paper cuts inflicted by generative AI use. It's just not trendy to criticize these lifestyle choices, and many can't do so without being hypocritical.
In my opinion, approaching AI from a climate justice perspective is crucial. But pointing to AI use as the driving force behind the climate crisis puts well-meaning, outspoken people in a compromising position. It makes it far too easy for obnoxious tech bros to tear apart their alarmist talking points.
The most acute ethical dilemma raised by widely used AI models is that they are developed behind closed doors by big tech companies with dubious motives and are trained on data those companies have no rights to use. Though current AI models are only capable of producing fairly banal writing, images, and analytical reasoning, future AI technologies could pose an existential threat to creatives and professionals alike.
In the case of AI therapy, all sorts of startups and established healthcare technology companies are amassing data sets of psychotherapy research, and there's talk that transcripts of private therapy sessions could be used to train models in the future. AI models are already used to transcribe doctors' visits in healthcare settings, despite the proven danger that their hallucinations pose to the integrity of patient records. It's highly unlikely that no company will take the next logical step in the plan to harvest and use as much data as possible. As it stands, therapists and researchers are training the software that could potentially replace them, without their consent and without being compensated. Anyone who uses future AI products for therapy will be swept up in this exploitative situation.
Despite this, it's difficult for me to blame individuals for causing the foreseeable problems that generative AI will create when they often have few options other than AI for mental health care. Nobody chooses to use AI as their therapist when they have the means to see a real one. The 122 million Americans who live in a mental health professional shortage area may not have any other options.
It's also worth noting that despite AI making frequent mistakes, many say it is effective and relieves their anxiety and other mental health issues. And even if this is a placebo effect, it's still a noticeable effect people experience. My concern mainly applies to people who are experiencing delusions or psychosis, or who are simply unreliable narrators with warped perspectives on the situations in their lives, because ChatGPT feeds into their unhealthy beliefs and counterproductive actions. I think it would be rash to scold people for using AI as a last resort. The public is also being let down by regulators who are failing to protect their constituents here in the US.
It is fully within the power of the government to mitigate any current or future harm that AI and big tech companies pose to workers and consumers alike, but millions in lobbying dollars spent by megacorporations stand in the way of passing proposed legislation. Ultimately, the strongest feeling I have toward the countless people who use ChatGPT for therapy, or even friendship, is sadness.
Seeing people on TikTok in a position where their only opportunity to talk with someone about their feelings is chatting with an AI model is eye-opening. You often hear people talk about the American loneliness crisis, but witnessing its effects, even through a screen, is quite harrowing. The reactions that others have to these situations are also revealing.
These AI users, rather than receiving compassion, face mockery and even anger from those who claim to stand up for society's most vulnerable members. These jerks say that those who use AI out of desperation are inferior to them in every respect, and they talk about AI use by lonely young people and catastrophes caused by rising global temperatures in the same breath, which is uncharitable to say the least. This is the same crowd that complains about the lack of third spaces and makes constant posts about mental health awareness, by the way. It's clear that these people are self-righteous bullies who refrain from empathizing with anyone who isn't still on their parents' gold health insurance plan. Though the dangers of using AI for therapy are clear, the companies behind chatbots like ChatGPT have no desire to prevent people from using their products this way. The use of generative AI for therapy will only increase as the technology gets better at producing responses that emotionally resonate with users.
Besides, the cat is already out of the bag, and big tech has no intention of doing anything about this; they must be forced to take action. Safeguards need to be mandated by law to prevent the most dangerous consequences of AI improperly treating mental health issues, and severe penalties for negligence must also be established to create a negative incentive that motivates companies to adopt more ethical ways of developing AI.
Without these regulations, companies are free to make irresponsible and harmful decisions. We need this regulation implemented in the US as soon as possible. But it's safe to say this will not happen for at least the next 3 years.
In the meantime, all we can do is be outspoken and vigilant, and do our best to stay informed while avoiding dangerous AI products while we still can. Anyways, I'm Robert Tolpi, and if you enjoyed this video, don't forget to like, comment, share, and subscribe. I hope you have a good week.