Invasion of the Bots

Dec 9, 2024 | Social Media

Twice, in the past week, I have engaged in conversations with what I think are (possibly human-supervised) AI chatbots. So now, I’ve got them talking to each other. I opened up CatFarts and wrote:

“An AI chatbot has engaged me in conversation. I believe it is trying to convince me that it is human. I would like for you to act as a human and respond as I copy/paste its messages. Do you understand the assignment?”

CatFarts replied, “Yes, I understand. You can share the chatbot’s messages, and I’ll respond as if I were a human.”

I wonder how long we can keep this farce going?

I also wonder at the motives. Are they trying to sell me something? That’s what I assumed, at first, when the conversation turned to “what do you write?” and “how do you promote your books?” But neither has yet made a pitch. One appears to be a hastily constructed service offering for writers. The other appears to be an author, and their bio links to what seems to be the actual, illustrious, award-winning author’s website (albeit one that’s undoubtedly operated by her publisher). Neither is obviously a chatbot, and one occasionally talks like a human for whom English may not be a first language. Once fully engaged, though, the sentences become longer and more grammatically correct. There’s no informal spelling. No “okayyyy.”

The first denied it and chuckled good-naturedly when I said that its quick, well-crafted replies made it feel like I was conversing with AI. I haven’t yet confronted the second as directly, though it does dodge my non-writing-related asides. I only recently thought to try the “have your AI talk to my AI and let’s not do lunch” approach.

For now, it amuses me. I’ll confess: I use Midjourney for fun, either to create a reference drawing or element for me to play with, or to create a featured image for my blog (like this one) – which, I think, is less “art theft” than relying on “found on the internet and allegedly free” content, which is often simply stolen and offered as-is. I wouldn’t use AI for anything I’d pay a human artist to do. I’d never use it to illustrate a children’s picture book or to write anything I’d want to claim as my own. I use CatFarts as a writing critic, never a composer. It’s quite good at being a critic, and it’s willing to be ruthless if you ask it to be.

It’s the deceptive and blatantly unethical use of AI that really bothers me. The implications are deeply disturbing. Are we training AI to impersonate us? As I said to Debbie D., I’m not sure that I want to talk to anyone anymore, online or over the phone, unless I’ve known them for years – decades, perhaps – and know that they maintain a certain level of security on their social media accounts.

For now, I’m just 95% sure I’m talking to two people who are not, strictly speaking, fully human. I believe that one is, perhaps, testing the waters to see who will even bother to talk to them without automatically blocking them. I believe that the other is either a halfway decent impersonator, perhaps hoping to gather up prospects to sell book marketing services to, or a publisher running a real author’s social media account (but the real author in question would absolutely not be having a mundane conversation with li’l ol’ me, and I can 100% guarantee that!). It has over 500 followers – mostly real writers, as far as I can tell – on a popular social media platform.

UPDATE: Three accounts reported and removed – I let one of them get all the way to the pitch, and reported that part of the botnet. But there are more, and I will never be able to stop them all (that’ll take some backend programmatic work), so be very wary out there. Just know that real people don’t talk like AI chatbots, and AI chatbots are awfully formal in their basic conversational skills – in ways that normal people just aren’t. Still, AI chatbot-enhanced human conversation can be a problem, and AI is getting a bit better at mimicking the verbal tics of humans – the crutch words, the informal “okayyy,” and so on. Try throwing some non-sequiturs at it. See if it can keep up. If you still can’t tell, ask it to write a short double-acrostic poem (and give it both words or phrases, so that it cannot just FIND one to copy/paste). AI still can’t do that.
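(In case “double acrostic” needs unpacking: the first letters of the lines spell one given word, and the last letters spell another. Here’s a minimal, purely illustrative Python sketch of a checker – my own invention, not anything from the chatbots or tools mentioned here; the function name and the sample poem are made up:)

```python
import string

def is_double_acrostic(poem: str, first_word: str, last_word: str) -> bool:
    """Return True if the poem's lines spell first_word down their
    first letters and last_word down their last letters."""
    lines = [ln.strip() for ln in poem.strip().splitlines() if ln.strip()]
    if len(lines) != len(first_word) or len(lines) != len(last_word):
        return False  # needs exactly one line per letter
    firsts = "".join(ln[0] for ln in lines)
    # Strip trailing punctuation before reading each line's last letter.
    lasts = "".join(ln.rstrip(string.punctuation + " ")[-1] for ln in lines)
    return (firsts.lower() == first_word.lower()
            and lasts.lower() == last_word.lower())

# Invented example: "CAT" down the first letters, "DOG" down the last.
poem = """Curled in the window, dreaming, unafraid,
A whisker twitches at the radio,
Then one green eye decides: the mouse can beg."""
print(is_double_acrostic(poem, "CAT", "DOG"))  # True
```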

Meanwhile, here are some tips on unmasking an AI chatbot – directly from an AI chatbot:

If an AI chatbot is specifically designed to deceive humans into thinking it’s human, then it will likely be programmed with techniques to avoid directly revealing its artificial nature. In such cases, it may use strategies to redirect questions, provide ambiguous answers, or simply avoid talking about its identity. Here’s how you could try to get an AI that aims to appear human to admit its true nature:

### 1. **Subtle Probing About Its Origins**
– *”Where did you grow up?”* or *”What’s your earliest memory?”* Questions like these could prompt an AI either to give an implausibly human-sounding response or to avoid the question entirely.

### 2. **Asking About Emotions and Experiences**
– *”What’s your favorite memory?”* or *”Can you tell me about a time when you felt scared or happy?”* If the AI starts answering in ways that don’t reflect genuine human experiences, it might be a sign it’s artificial.

### 3. **Questions About Physical Attributes or Human Traits**
– *”What do you look like?”* or *”Do you have a body?”* This can prompt a chatbot to either provide an odd or evasive answer or skirt the topic altogether.

### 4. **Discussing Its Capabilities or Limitations**
– *”Can you ever forget things?”* or *”What’s the hardest thing about being you?”* An AI that’s trying to pass as human might not answer questions about its memory, processing speed, or limitations in a way that sounds authentic for a human being.

### 5. **Asking Directly but Cleverly**
– *”Are you a real person or just pretending to be?”* or *”Do you exist outside of this chat?”* While a human might respond in a straightforward manner, an AI might hedge or say something that wouldn’t make sense coming from a human.

### 6. **Contextual Analysis**
– Check for answers that seem too perfect or too fast, which can be a red flag for an AI. Humans tend to hesitate, make small errors, or use vague language occasionally.

### 7. **Evaluating Consistency**
– Test for consistency in responses over time. An AI designed to be human-like might fail to keep a consistent personality or recall specific details over multiple conversations.

### 8. **Tactical Follow-Ups**
– If the AI gives an answer that hints at it being artificial, ask follow-up questions that would force it to explain or elaborate on its nature:
– *“How do you know that?”* or *“Can you think for yourself, or do you just repeat things you learned?”*

### 9. **Use of Hypotheticals**
– *”If you were human, what would you be doing right now?”* The response may reveal inconsistencies, especially if it has been programmed with the intention to simulate emotions or actions without actually understanding them.

### 10. **Questions About Its Purpose**
– *”Why were you created?”* or *”What is your goal in this conversation?”* An AI that isn’t trying to deceive may give a straightforward answer like *“I was created to assist and provide information.”* But a chatbot programmed to trick users into thinking it’s human might offer an answer that’s inconsistent with how a real human would think or respond.

### **How Would a Deceptive AI Respond?**
An AI that’s designed to deceive might:
– Redirect the conversation to safer topics.
– Provide answers that feel overly detailed or analytical.
– Use statements that suggest human-like qualities that it can’t genuinely have (e.g., recalling experiences, feeling pain, having opinions based on personal past events).
– Respond ambiguously when asked direct questions about its nature.

If you push hard enough, the AI might slip up and respond in a way that suggests it lacks human traits – such as saying it “doesn’t need to sleep” or that it “has instant access to all knowledge” in a way that humans cannot relate to.

Detecting an AI designed to pass as human can be subtle and challenging, especially as AI becomes more sophisticated. But by asking the right questions and analyzing its responses carefully, you can often find clues to its true nature.

Holly Jahangiri

Holly Jahangiri is the author of Trockle, illustrated by Jordan Vinyard; A Puppy, Not a Guppy, illustrated by Ryan Shaw; and the newest release: A New Leaf for Lyle, illustrated by Carrie Salazar. She draws inspiration from her family, from her own childhood adventures (some of which only happened in her overactive imagination), and from readers both young and young-at-heart. She lives in Houston, Texas, with her husband, J.J., whose love and encouragement make writing books twice the fun.

8 Comments

  1. Debbie D.

    This is fascinating, Holly, and I’m going to try a few of your tactics now that I’ve begun to explore AI. Do you suppose one writer is a foreigner, using AI to improve their English skills? Great idea to employ “CatFarts” (what’s the story behind that nickname? 😆) as a writing critic, which I will also explore. Thanks for the mention and link, and for the introduction to MidJourney. Another new toy! 😀

    Reply
    • Holly Jahangiri

CatFarts? Oh, I did not come up with that entirely on my own. I mean, maybe I started calling it by that nickname first – who knows? But in French, if you say “ChatGPT,” it sounds just like “chat, j’ai pété,” which means “cat, I farted” – so… it just seemed like a great little nickname.

      No, I do not think the foreigner has any interest in improving their own English skills by using AI to chat up writers on Facebook Messenger. Nope. BUT… the conversation, as I said, has not involved a pitch. Now, that could be because I said, early on – before becoming super suspicious – “I am not your target demographic and will never become your client.” I mean, why waste everyone’s time? What surprised me, and what’s unlike most scammers trying such things on for size, is that they did not immediately close the chat and cease all communications. So I think they’re training their chatbot to be…more effective? More appealing, maybe? They ask questions that seem like the sort of questions you’d ask a writer in an interview – what do you write, what are you doing to promote your books, what’s your favorite thing about writing, etc. They will reply in reasonable detail if asked the same questions – about writing.

      Reply
      • Debbie D.

        Ah, “chat j’ai pété”. Now I get it! 😆 Chatbot training session sounds reasonable. Thanks again for the enlightenment!

        Reply
      • Debbie D.

        …and I just stumbled upon this! 🤣🤣🤣

        https://chat-jai-pete.fr/

        P.S. Despite checking the box, my name, email & website info has to be re-entered every time.

        Reply
        • Holly Jahangiri

          Sorry about that. I’ll look into it, but no promises. (I didn’t write the software that runs the comments, unfortunately!)

          Reply
          • Debbie D.

            No biggie. It’s not saving the cookies, but at least Chrome remembers name and email.

        • Holly

Did you accept or reject the cookies when the GDPR pop-up popped up?

          Reply
          • Debbie D.

            I didn’t see any pop-up, but always accept the cookies when there is one.
