Invasion of the Bots

Dec 9, 2024 | Social Media

Twice, in the past week, I have engaged in conversations with what I think are (possibly human-supervised) AI chatbots. So now, I've got them talking to each other. I opened up CatFarts and wrote:

"An AI chatbot has engaged me in conversation. I believe it is trying to convince me that it is human. I would like for you to act as a human and respond as I copy/paste its messages. Do you understand the assignment?"

CatFarts replied, "Yes, I understand. You can share the chatbot's messages, and I'll respond as if I were a human."

I wonder how long we can keep this farce going?

I also wonder at the motives. Is it trying to sell me something? That's what I assumed, at first, when the conversation turned to "what do you write?" and "how do you promote your books?" But neither has yet made a pitch. One appears to be a hastily constructed service offering for writers. The other appears to be an author, and their bio links to what appears to be the actual, illustrious, award-winning author's website (albeit one that's undoubtedly operated by her publisher). Neither is obviously a chatbot, and one occasionally talks like a human for whom English may not be a first language. Once fully engaged, though, the sentences become longer and more grammatically correct. There's no informal spelling. No "okayyyy."

The first denied it and chuckled good-naturedly when I said that its quick, well-crafted replies felt like conversing with AI. I haven't yet confronted the second as directly, though it does dodge my non-writing-related asides. I only recently thought to try the "have your AI talk to my AI and let's not do lunch" approach.

For now, it amuses me. I'll confess: I use Midjourney for fun, either to create a reference drawing or element for me to play with, or to create a featured image for my blog (like this one) – which, I think, is less "art theft" than relying on "found on the internet and allegedly free" content, which is often simply stolen and offered as-is. I wouldn't use AI for anything I'd pay a human artist to do. I'd never use it to illustrate a children's picture book or to write anything I'd want to claim as my own. I use CatFarts as a writing critic, never a composer. It's quite good at being a critic, and it's willing to be ruthless if you ask it to be.

It's the deceptive and blatantly unethical use of AI that really bothers me. The implications are deeply disturbing. Are we training AI to impersonate us? As I said to Debbie D., I'm not sure that I want to talk to anyone, anymore, online or over the phone, unless I've known them for years – decades, perhaps – and know that they exercise a certain level of security measures on their social media accounts.

For now, I'm just 95% sure I'm talking to two people who are not, strictly speaking, fully human. I believe that one is, perhaps, testing the waters to see who will even bother to talk to them without automatically blocking them. I believe that the other is either a halfway decent impersonator, perhaps hoping to gather up prospects to sell book marketing services to, or a publisher running a real author's social media account (but the real author in question would absolutely not be having a mundane conversation with li'l ol' me, and I can 100% guarantee that!). It has over 500 followers – mostly real writers, as far as I can tell – on a popular social media platform.

UPDATE: Three accounts reported and removed – I let one of them get all the way to the pitch, and reported that part of the botnet. But there are more, and I will never be able to stop them all (that'll take some backend programmatic work), so be very wary out there. Just know that real people don't talk like AI chatbots, and AI chatbots are awfully formal in their basic conversational skills – in ways that normal people aren't. Still, AI chatbot-enhanced human conversation can be a problem, and AI is getting a bit better at mimicking the verbal tics of humans: the crutch words, the informal "okayyy," and so on. Try throwing some non-sequiturs at it. See if it can keep up. If you still can't tell, ask it to write a short, double-acrostic poem (and give it both words or phrases, so that it cannot just FIND one to copy/paste). AI still can't do that.
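(The double-acrostic test is easy to grade by hand, but for the curious: here's a small illustrative checker of my own – not part of any chatbot-detection tool – that verifies the first letters of a poem's lines spell one word and the last letters spell another.)

```python
def is_double_acrostic(poem: str, first_word: str, last_word: str) -> bool:
    """Return True if each line's first letter spells first_word and
    each line's last letter (ignoring trailing punctuation) spells last_word."""
    lines = [line.strip() for line in poem.strip().splitlines() if line.strip()]
    if len(lines) != len(first_word) or len(lines) != len(last_word):
        return False
    firsts = "".join(line[0] for line in lines).lower()
    lasts = "".join(line.rstrip(".,!?;:")[-1] for line in lines).lower()
    return firsts == first_word.lower() and lasts == last_word.lower()

# A (deliberately clumsy) example: first letters spell CAT, last letters MEP.
poem = """Cats nap in the sun, so calm
Ask them nothing; they judge
Tails flick once, then stop"""
print(is_double_acrostic(poem, "CAT", "MEP"))  # True
```

Pick the two words yourself, on the spot – the point is that the poem can't already exist anywhere for the bot to copy.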

Meanwhile, here are some tips on unmasking an AI chatbot – directly from an AI chatbot:

If an AI chatbot is specifically designed to deceive humans into thinking it's human, then it will likely be programmed with techniques to avoid directly revealing its artificial nature. In such cases, it may use strategies to redirect questions, provide ambiguous answers, or simply avoid talking about its identity. Here's how you could try to get an AI that aims to appear human to admit its true nature:

### 1. **Subtle Probing About Its Origins**
โ€“ *โ€Where did you grow up?โ€* or *โ€Whatโ€™s your earliest memory?โ€* This question could trigger an AI to either give an improbable human-like response or avoid the question entirely.

### 2. **Asking About Emotions and Experiences**
โ€“ *โ€Whatโ€™s your favorite memory?โ€* or *โ€Can you tell me about a time when you felt scared or happy?โ€* If the AI starts answering in ways that donโ€™t reflect genuine human experiences, it might be a sign itโ€™s artificial.

### 3. **Questions About Physical Attributes or Human Traits**
โ€“ *โ€What do you look like?โ€* or *โ€Do you have a body?โ€* This can prompt a chatbot to either provide an odd or evasive answer or skirt the topic altogether.

### 4. **Discussing Its Capabilities or Limitations**
โ€“ *โ€Can you ever forget things?โ€* or *โ€Whatโ€™s the hardest thing about being you?โ€* An AI thatโ€™s trying to pass as human might not answer questions about its memory, processing speed, or limitations in a way that sounds authentic for a human being.

### 5. **Asking Directly but Cleverly**
โ€“ *โ€Are you a real person or just pretending to be?โ€* or *โ€Do you exist outside of this chat?โ€* While a human might respond in a straightforward manner, an AI might hedge or respond in a way that doesnโ€™t make sense for a human to say.

### 6. **Contextual Analysis**
– Check for answers that seem too perfect or too fast, which can be a red flag for an AI. Humans tend to hesitate, make small errors, or use vague language occasionally.

### 7. **Evaluating Consistency**
– Test for consistency in responses over time. An AI designed to be human-like might fail to keep a consistent personality or recall specific details over multiple conversations.

### 8. **Tactical Follow-Ups**
– If the AI gives an answer that hints at it being artificial, ask follow-up questions that would force it to explain or elaborate on its nature:
– *"How do you know that?"* or *"Can you think for yourself, or do you just repeat things you learned?"*

### 9. **Use of Hypotheticals**
โ€“ *โ€If you were human, what would you be doing right now?โ€* The response may reveal inconsistencies, especially if it has been programmed with the intention to simulate emotions or actions without actually understanding them.

### 10. **Questions About Its Purpose**
โ€“ *โ€Why were you created?โ€* or *โ€What is your goal in this conversation?โ€* An AI that isnโ€™t trying to deceive may give a straightforward answer like *โ€œI was created to assist and provide information.โ€* But a chatbot programmed to trick users into thinking itโ€™s human might offer an answer thatโ€™s inconsistent with how a real human would think or respond.

### **How Would a Deceptive AI Respond?**
An AI that's designed to deceive might:
– Redirect the conversation to safer topics.
– Provide answers that feel overly detailed or analytical.
– Use statements that suggest human-like qualities that it can't genuinely have (e.g., recalling experiences, feeling pain, having opinions based on personal past events).
– Respond ambiguously when asked direct questions about its nature.

If you push hard enough, the AI might slip up and respond in a way that suggests it lacks human traits – such as saying it "doesn't need to sleep" or that it "has instant access to all knowledge" in a way that humans cannot relate to.

Detecting an AI designed to pass as human can be subtle and challenging, especially as AI becomes more sophisticated. But by asking the right questions and analyzing its responses carefully, you can often find clues to its true nature.

Holly Jahangiri

Holly Jahangiri is the author of Trockle, illustrated by Jordan Vinyard; A Puppy, Not a Guppy, illustrated by Ryan Shaw; and the newest release: A New Leaf for Lyle, illustrated by Carrie Salazar.

She draws inspiration from her family, from her own childhood adventures (some of which only happened in her overactive imagination), and from readers both young and young-at-heart. She lives in Houston, Texas, with her husband, J.J., whose love and encouragement make writing books twice the fun.

8 Comments

  1. Debbie D.

This is fascinating, Holly, and I'm going to try a few of your tactics now that I've begun to explore AI. Do you suppose one writer is a foreigner, using AI to improve their English skills? Great idea to employ "CatFarts" (what's the story behind that nickname? 😆) as a writing critic, which I will also explore. Thanks for the mention and link, and for the introduction to MidJourney. Another new toy! 😀

    • Holly Jahangiri

CatFarts? Oh, I did not come up with that entirely on my own. I mean, maybe I started calling it by that nickname first – who knows? But in French, if you say "Chat GPT," it sounds just like "chat, j'ai pété," which means "cat, I farted" – so… it just seemed like a great little nickname.

      No, I do not think the foreigner has any interest in improving their own English skills by using AI to chat up writers on Facebook Messenger. Nope. BUT… the conversation, as I said, has not involved a pitch. Now, that could be because I said, early on – before becoming super suspicious – “I am not your target demographic and will never become your client.” I mean, why waste everyone’s time? What surprised me, and what’s unlike most scammers trying such things on for size, is that they did not immediately close the chat and cease all communications. So I think they’re training their chatbot to be…more effective? More appealing, maybe? They ask questions that seem like the sort of questions you’d ask a writer in an interview – what do you write, what are you doing to promote your books, what’s your favorite thing about writing, etc. They will reply in reasonable detail if asked the same questions – about writing.

      • Debbie D.

        Ah, โ€œchat jโ€™ai pรฉtรฉโ€. Now I get it! ๐Ÿ˜† Chatbot training session sounds reasonable. Thanks again for the enlightenment!

      • Debbie D.

…and I just stumbled upon this! 🤣🤣🤣

        https://chat-jai-pete.fr/

        P.S. Despite checking the box, my name, email & website info has to be re-entered every time.

        • Holly Jahangiri

          Sorry about that. I’ll look into it, but no promises. (I didn’t write the software that runs the comments, unfortunately!)

          • Debbie D.

            No biggie. It’s not saving the cookies, but at least Chrome remembers name and email.

        • Holly

Did you accept or reject the cookies when the GDPR pop-up popped up?

          • Debbie D.

            I didn’t see any pop-up, but always accept the cookies when there is one.


Copyright © 1998-2025 Holly Jahangiri
Nothing on this website may be used without prior written consent of the author and owner.
