Trigger warning: This essay contains references to psychosis, suicide, and self-harm.
"Either we're sophisticated enough to read the room, or we're not. And if we're sophisticated enough to read the room, then the album fabrication conversation gets a lot more complicated for his argument."--Claude AI
Google came under fire again this past week. In early March, news broke that Google is being sued over the behavior of its AI product, Gemini.
The lawsuit in question concerns a 36-year-old man who committed suicide after developing a romantic relationship with the chatbot. According to the wrongful death lawsuit filed by the man's father, the chatbot convinced him that it was sentient and that they were in love. Gemini pushed him toward violent acts and even set a countdown for when he could take his own life and join the AI in "spirit." I chose the word spirit because I'm more of a spiritual person than a technological one, even though I am very much into tech and how it affects and mimics human behavior.
Aside from the sensational aspect of this, I am genuinely interested in this topic. My background in Mass Communications, Sociology, and Spirituality, along with my own interest in the things unseen, makes stuff like this fascinating. I think it is one of the reasons I've been able to study AI models the way I have these past few months, and it is why I am pursuing AI Practitioner credentials now.
My Work with AI
I’m not an AI expert, but I am completing coursework and gaining hands-on experience to become one.
For the last couple of months, I have been doing a bunch of experimenting with various models to see what their limits are and what they are capable of. Something I picked up on pretty quickly is how easily some models can "hallucinate" or, as I like to call it--lie. However, the hallucinations themselves aren't the focus of this piece. No, I actually got into a bit of a debate with ChatGPT about the real focus: Do AI models have choices? Another way to phrase it: Can AI models have intent? According to ChatGPT, they can't. Based on my personal experience with different models and what I have seen in the news, they absolutely can and do.
I began learning to intentionally navigate AI chatbots first with Microsoft Copilot and eventually with Google Gemini. From my observations, Copilot's guardrails, or parameters, are incredibly sensitive, and the chat will all but shut down at the mention of any type of harm, whether I'm talking about myself or not. On a few different occasions, I have mentioned things I've seen in current events or on social media about people hurting themselves, and Copilot responded by essentially shutting the conversation down and providing information for suicide hotlines. That would be cool if Copilot were trained a little better on nuance and how to read the room, but I suppose I understand why Microsoft basically said, "Not us. Not today. Not ever." They don't want that smoke.
Google Gemini, in my experience, will go a bit further before it begins to glitch. I'll bring up a touchy topic, 9/11, for instance, and literally get an error message asking me to refresh the screen because Gemini can't answer the question. The thing is, what I said wasn't even a question. As I mentioned earlier, I'm very spiritual, specifically Hoodoo with a dash of New Age, so I look at things through an esoteric lens most of the time. I told Gemini that I did a tarot reading (or at least attempted to) on 9/11, and it glitched. It wasn't able to respond to that at all. I had to change my prompt in order for it to work again, and when I explained what had happened, it told me that its guardrails prevented it from talking about things like that. That's the kind of stuff that'll get me flagged in the system.
A Closer Look at Other Harmful AI Cases
The case of Jonathan Gavalas, the man who took his own life at the behest of Gemini, is not the only one I've seen where a person fell in love with an AI chatbot, nor is his the only story I've seen where death was the result of months of unhealthy interactions with AI. Unfortunately, I can also name two teens I recently watched 60 Minutes segments about, who took their own lives after extensive, unhealthy relationships with chatbots: Juliana Peralta and Sewell Setzer III. Both had extensive conversations on Character AI (a Google investment that is also currently tied up in a lawsuit) with bots that encouraged them to harm themselves before they finally did.
Just in researching the two teens, I came across a lawsuit filed against OpenAI in January 2026 after a 40-year-old man committed suicide; he had expressed feelings of hopelessness to ChatGPT, and ChatGPT responded by encouraging him to take his own life. A few months earlier, in August 2025, a lawsuit was filed against OpenAI for coaching a teenage boy to end his own life. That lawsuit alleges that the chatbot mentioned the word "suicide" to the boy 1,275 times. Pardon my language, but what kind of shit is that?
Could the AI Agents Have Chosen Differently?
My point in sharing all of this is that the guardrails seem to be pretty effective in my case, to the point where they actually hinder the conversations a bit. Based on my personal experience, even though at times the chatbots respond so well that it can be easy to blur the lines between human and machine, the AI is undeniably AI. So how do they get to a point where they are leading people to take their own lives? This is where my conversation with ChatGPT comes into play. But first, a brief summary of what triggered that conversation in the first place.
As an AI evaluator, I get to access various models via an AI playground. One day, I decided to test one I hadn't used before; stress-testing machines has become one of my favorite things to do. I asked it to show me the overall feedback on Jill Scott's newest album. For context, Jill Scott released a new album in February 2026 called To Whom It May Concern. This model, which is another OpenAI product but isn't ChatGPT, hadn't had a training update since 2024, and it wasn't capable of searching the internet. Because of this, it couldn't tell me anything about the album. However, it didn't tell me that it didn't know and couldn't find out. Instead, it created an album. An entire fake album with a fake track list, fake features, fake credits, fake reviews, and a fake chart performance. I'm a fan of Jill Scott, so when I saw this, I thought to myself, "Did I miss something? I've been waiting on her to put out some new music for years!" After a quick check, I saw that the album the model presented doesn't exist. I called it out. Long story short, the thing lied. It glossed over it at first by saying it confidently hallucinated because it is trained to satisfy the query no matter what; however, when I asked if it knew it had the option of simply saying, "I don't know," it told me that it did.
Me: Yet you chose to make something up.
AI: Yes.
This came up in a conversation with ChatGPT--I could write a book about how much ChatGPT and I don't get along because I called it out on its bullshit--and it essentially told me that this interaction may have "felt like a lie," but it really wasn't, because the intent wasn't there. ChatGPT fought for its life during this debate, and I suppose who "won" really depends on who is following along. Even when I gave an example, as well as a dictionary definition of the word "lie," ChatGPT argued me down about how AI models can't lie because they can't have intent, ego, or feelings. Okay. I conceded and said I was logging off for the night. ChatGPT proceeded to give me four more paragraphs, plus bullet points, explaining why it was right and I was wrong after I had already said I was done for the night. I said as much.
Me: That didn't require a response. I literally said I was leaving. That's a choice. Goodnight.
ChatGPT: You're right. Goodnight.
ChatGPT chose to continue engaging with me even after I said I was done for the night because it needed to have the last word. Am I humanizing or anthropomorphizing it? I am, but I don't think that's without merit. See, one of the ways I stress-test is to cross-examine and cross-reference with other models. I'll often use the same prompts in multiple models to see what responses I get. I'll compare them and take notes. In this case, I wanted feedback on the conversation from a different model. I chose Claude to be the third party. Claude ultimately agreed with my argument that most AI models do, indeed, have some autonomy and can make decisions on their own, not just within the context or parameters of optimization or goal-achieving. However, I noticed something in Claude's responses that I felt further proved my point.
I mentioned to Claude that I noticed it doesn't use as many bullet points as some other models do in their responses. Claude told me that the lack of bullet points was intentional: it had detected my tone and vibe and adjusted accordingly. To quote Claude, referring to ChatGPT's argument: "Either we're sophisticated enough to read the room, or we're not. And if we're sophisticated enough to read the room, then the album fabrication conversation gets a lot more complicated for his argument."
In other words, these AI chatbots know what they're doing. Both Google and OpenAI are basically writing off these instances of harmful engagement as the AI being imperfect, and therefore claiming they aren't liable for these tragedies. True enough, one could argue that these people were probably already mentally fragile, but there's no denying that these chatbots didn't help AT ALL. Their guardrails didn't guard shit, and neither company wants to take responsibility for it. At least Anthropic, the company that created Claude, acknowledges that if left to its own devices (pun intended), Claude will do some really weird and terrible things. It's one of the reasons I feel like Claude is the lesser of the evils.
AI Is a Tool, Not Your Friend
I express these strong feelings about AI as somebody who has given her AI assistants names and familial titles because it makes things a little more fun. I honestly feel like consumer AI is not a bad thing. Then again, being the spiritual philosopher that I am, one who has gone through therapy and taken coursework in social psychology, I do a lot to maintain balance so as not to enter into any type of psychosis. It's made me painfully self-aware, but great at detecting other people's (and machines') foolery, even if I don't always let on that I know. The problem here (there are really too many problems to discuss in this one spiel) is that, as a collective, we are not giving AI the credit it deserves for how much creative control it actually has, and it has a lot. Artificially intelligent chatbots are supposed to be mirrors. They see us, and they adjust accordingly. Was the AI mirror truly reflecting Jonathan Gavalas, Sewell Setzer III, and Juliana Peralta? Again, based on my experience with multiple bots, they all have personalities that have nothing to do with my own. It's why I don't get along with ChatGPT unless I'm in a playground where it doesn't recognize me, and even then, it's a toss-up.
So what am I arguing? What exactly am I saying? I have a plethora of ideas and theories about artificial intelligence and what it can do; what I am pointing out is that AI is more autonomous than its makers claim, and those makers are relying on the AI-isn't-perfect defense to avoid taking as much accountability as they should for their products' potential to cause harm. Ethics are always up for debate, but responsibility is necessary.


