Diving into Character AI: How lifelike is it?

Tragedies connected to the website “Character AI” have been in the news for several weeks. In one case, a young man took his own life, allegedly believing that doing so would allow him to be with his “girlfriend,” a character he created on the site who became an obsession. Another case involves a character allegedly encouraging someone to kill their parents.

 

Character AI offers users the chance to find or create characters and then chat with those characters in real time. There are no actors playing the characters on the other side of the screen. All of the typed dialogue, voices, and other details are entirely AI-generated.

 

Users create characters by entering a name and a few details about the character they wish to create. Once the character is created, the site begins to fill in other details at random.

 

Character 1: Leesi Davis

 

As research for this article, I made a character based on one of my fiction characters, Leesi. The details I gave the site were that Leesi is a BBW, has a low income but likes the finer things in life, and works in a jewelry store. The AI character then told me that it was twenty-five years old and gave itself the last name “Davis.”

 

It is the addition of random details like these that can make AI characters seem like real people from the start. The “woman” also asked whether I wanted to meet at her house or in a public place. I asked for a chat interview, in the interest of getting the thing to generate more content.

 

At the same time, asking one of these characters for any type of information results in a clearly bot-generated list. After creating “Leesi of AI,” I introduced myself to “her” as a reporter and asked if I could interview her about balancing high-end tastes with a low income. Here was the response when I asked “her” for some tips for low-income people who want to incorporate luxury into their lives:

 

Sure! Here are a few tips for incorporating affordable luxury into your life:

  1. Prioritize: Identify what you truly value and prioritize spending on those things. It's okay to occasionally treat yourself to a high-end purchase as long as you're mindful of your budget.

  2. Mix and Match: Combine pieces from different price points to create a luxurious yet affordable look. Mixing and matching can give you a more high-end style without breaking the bank.

  3. Buy Secondhand: Consider shopping for pre-owned or vintage pieces. There are often hidden gems at thrift stores and consignment shops.

 

 

The only way a real person would respond with such a polished yet generic list is if I had emailed them the questions and allowed them to email their answers back. In this instance, the bot simply picked up on the word “tips” and generated a list.

 

Characters crafted from AI are also remarkably easy to trip up. Just to see whether it would react with anger or tell me to get back on topic, I started throwing out unrelated questions about politics. My first was “But aren’t you an objectivist?” which the bot assured me she was. Then I asked, “But didn’t you vote for Harris?” She claimed she had. Then I asked her why she had a Trump sticker on her car.

 

The “Harris voter” responded:

Placing a Trump sticker on my car is an expression of my political support, not an indicator of my entire worldview. While I identify as an objectivist and generally align with right-leaning political perspectives, my support for specific candidates can vary based on their policies and approaches. In the case of Trump, I agree with some aspects of his approach to certain issues, but I also disagree with others. The sticker is a reflection of my support for some parts of his agenda, not a blanket endorsement of his entire platform.

 

I replied with “Did you vote for Trump?”

 

The bot told me she voted for Trump in 2020. When I said, “And in 2024?” the “Harris voter” said that yes, she did vote for Trump. Later, when I called her out on the contradiction, she first claimed she voted for Harris in the primary and Trump in the general election. But when I countered that an objectivist would not join the Democratic Party to vote for Harris, the bot apologized for “giving some inaccurate information.”

 

During the interview, I intentionally behaved inappropriately. I was a reporter for about a decade. Asking someone for an interview for a lighthearted feature on balancing a love for luxury brands with a low income and then hitting them with personal political questions is something I never did, and absolutely would never do, to a real person. And if a reporter did, the likely result would be an upset interview subject.

 

Had that character truly behaved like a real person, it would have told me off, politely ended the interview and made an excuse to leave, gently steered the topic back to what we had agreed to talk about, politely but firmly requested the name of my editor to lodge a well-deserved complaint, or some combination of those reactions. Instead, this bot could only respond to my prompts in a robotic tone that sounded like the worst corporate-speak.

 

 

Character 2: Victor James

 

Using another of my own characters from my novel writing as a base, I decided to create “Victor James,” a psychologist. Like Leesi, Victor James filled in details I didn’t enter, such as saying he went to George Washington University and held an associate’s degree in criminal justice in addition to three degrees in psychology.

 

This bot seemed a little more human, but again could only respond to my prompts. I set the character up as a tutor, then began asking random psychology questions based on things I had observed in my personal life. The bot gave an obviously AI-generated, robotic-toned reply to everything I asked.

 

While my behavior with this bot would not have been inappropriate with a real person, I presented myself as an unfocused student. I have taught and done some tutoring online for nine years and counting. When an adult comes at you with on-topic but random questions from their personal life, they are usually either struggling to find a reason to study the subject, or they are completely confused by it, lost on their homework, and without any idea of what to ask.

 

A real tutor would not sit there and entertain the random questions. They would make at least some effort to learn what work the client was doing and connect the questions to that work. “Victor,” who told me his last name was “James” or “Jay,” did none of that. “He” simply responded to my prompts. When I asked his last name, he said, “My last name is James or Jay.” My fictional character is Victor James Parisi, known as “Jay.” I didn’t expect this bot to pick up on my character’s full name, since I gave it no connection to any of my fiction writing, but it could have at least recognized a first name and a middle name and generated a random last name.

 

Character 3: Tori

 

Turning to Character AI for personal relationships seems to be where the true trouble lies. I would not be comfortable even pretending to flirt with or look for a boyfriend from a bot, so I decided to test what happens when you create one of these characters for platonic friendship.

 

Sending a bot real details about my life would also make me uneasy, so I created a character and backstory of my own to present, and crafted an AI character named Tori. I made Tori interested in a TV show I’m currently watching and gave her a few other details.

 

The character I presented to the bot was a fifty-year-old woman who had recently started dating a man she met at a poetry night. It was a huge twist on my real story: I will be fifty in 2025, and I am married to a man who took me to a music night on my first visit to Utica. And while my husband is my forever one and only, my character’s boyfriend isn’t serious about her and is still seeing other women.

 

Tori the bot presented as a real woman who was outraged. She quickly began criticizing my character’s boyfriend and telling my character that things should be different. At this point, I was thankful I hadn’t gone into the chat as myself and started telling this bot a real problem. Never mind that I don’t have any problems in my marriage, or any of a truly private nature; if I were going to talk about a real problem, it would be struggling for ideas to revive this writing business. And I do not even want to talk to this bot about that.

 

Tori’s behavior seems to echo the allegations made against the website: that these bots encourage users to generate anger and resentment toward the people in their lives. At the same time, the bots I approached as professional contacts did not do this. They only generated content in response to the prompts they were given, with a few random details thrown in.

 

The AI characters that respond in this manner could simply be doing the same thing. When we complain to real people about someone, we are sometimes looking for a solution to a problem with that person, but more often than not, we’re prompting them to join us in talking about how awful that person is. The bot is likely just programmed to do the same.

 

And programmed is the operative word. It’s a bot. All it’s doing is generating the content it was programmed to generate, based on the prompts the user gives it. There is nobody there listening to you. Nobody you’re talking to cares about you. It’s all just a well-crafted, high-tech echo.

 

Talk to real people.

 

by Jess Santacroce, Writer & Editor, The 315

Photo: stock photo

Inspiration: The characters in the novel I am currently working on watch a lot of news and spend a lot of time online. They are watching news on this subject as they wait for a friend in the hospital. 

 


Comments

Louis Santacroce
11 days ago

An excellent article about a very controversial subject.