Twice in the past week, I have engaged in conversations with what I think are (possibly human-supervised) AI chatbots. So now, I’ve got them talking to each other. I opened up CatFarts and wrote:
“An AI chatbot has engaged me in conversation. I believe it is trying to convince me that it is human. I would like for you to act as a human and respond as I copy/paste its messages. Do you understand the assignment?”
CatFarts replied, “Yes, I understand. You can share the chatbot’s messages, and I’ll respond as if I were a human.”
I wonder how long we can keep this farce going?
I also wonder at the motives. Is it trying to sell me something? That’s what I assumed, at first, when the conversation turned to “what do you write?” and “how do you promote your books?” But neither has yet made a pitch. One appears to be a hastily constructed service offering for writers. The other appears to be an author, and their bio links to what appears to be the actual, illustrious and award-winning author’s website (albeit one that’s undoubtedly operated by her publisher). Neither is obviously a chatbot, and one occasionally talks like a human for whom English may not be a first language. Once fully engaged, though, the sentences become longer and more grammatically correct. There’s no informal spelling. No “okayyyy.”
The first denied it and chuckled good-naturedly when I said that its quick, well-crafted replies felt like conversing with AI. I haven’t yet confronted the second as directly, though it does dodge my non-writing-related asides. I only recently thought to try the “have your AI talk to my AI, and let’s not do lunch” approach.
For now, it amuses me. I’ll confess: I use Midjourney for fun, either to create a reference drawing or an element for me to play with, or to create a featured image for my blog (like this one), which, I think, is less “art theft” than relying on “found on the internet and allegedly free” content, which is often simply stolen and offered as-is. I wouldn’t use AI for anything I’d pay a human artist to do. I’d never use it to illustrate a children’s picture book or to write anything I’d want to claim as my own. I use CatFarts as a writing critic, never a composer. It’s quite good at being a critic, and it’s willing to be ruthless if you ask it to be.
It’s the deceptive and blatantly unethical use of AI that really bothers me. The implications are deeply disturbing. Are we training AI to impersonate us? As I said to Debbie D., I’m not sure that I want to talk to anyone, anymore, online or over the phone, unless I’ve known them for years (decades, perhaps) and know that they maintain a certain level of security on their social media accounts.
For now, I’m just 95% sure I’m talking to two people who are not, strictly speaking, fully human. I believe that one is, perhaps, testing the waters to see who will even bother to talk to them without automatically blocking them. I believe that the other is either a halfway decent impersonator, perhaps hoping to gather up prospects to sell book marketing services to, or a publisher running a real author’s social media account (but the real author in question would absolutely not be having a mundane conversation with li’l ol’ me, and I can 100% guarantee that!). It has over 500 followers (mostly real writers, as far as I can tell) on a popular social media platform.
UPDATE: Three accounts reported and removed. I let one of them get all the way to the pitch, and reported that part of the botnet. But there are more, and I will never be able to stop them all (that’ll take some backend programmatic work), so be very wary out there. Just know that real people don’t talk like AI chatbots: AI chatbots are awfully formal in their basic conversational skills, in ways that normal people aren’t. Still, AI chatbot-enhanced human conversation can be a problem, and AI is getting a bit better at mimicking the verbal tics of humans: the crutch words, the informal “okayyy,” and so on. Try throwing some non-sequiturs at it. See if it can keep up. If you still can’t tell, ask it to write a short, double-acrostic poem (and give it both words or phrases, so that it cannot just FIND one to copy/paste). AI still can’t do that.
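If you do try the double-acrostic test, the grading part is purely mechanical: the first letters of the lines must spell one word and the last letters must spell the other. Here’s a small sketch of a checker (my own illustration, not from any library; the function name and the punctuation handling are assumptions):

```python
def is_double_acrostic(poem: str, first_word: str, last_word: str) -> bool:
    """Return True if the first letters of the poem's lines spell
    first_word and the last letters spell last_word (case-insensitive)."""
    # Keep only non-blank lines, trimmed of surrounding whitespace.
    lines = [ln.strip() for ln in poem.strip().splitlines() if ln.strip()]
    # A proper double acrostic needs one line per letter of each word.
    if len(lines) != len(first_word) or len(lines) != len(last_word):
        return False
    firsts = "".join(ln[0] for ln in lines).lower()
    lasts = ""
    for ln in lines:
        # Ignore trailing punctuation when reading the last letter.
        core = ln.rstrip(".,!?;:'\" ")
        if not core:
            return False
        lasts += core[-1].lower()
    return firsts == first_word.lower() and lasts == last_word.lower()
```

For example, a three-line reply whose lines start with C, A, T and end with d, o, g passes for the word pair “cat”/“dog”; anything else fails.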
Meanwhile, here are some tips on unmasking an AI chatbot, straight from an AI chatbot:
If an AI chatbot is specifically designed to deceive humans into thinking it’s human, then it will likely be programmed with techniques to avoid directly revealing its artificial nature. In such cases, it may use strategies to redirect questions, provide ambiguous answers, or simply avoid talking about its identity. Here’s how you could try to get an AI that aims to appear human to admit its true nature:
### 1. **Subtle Probing About Its Origins**
- *“Where did you grow up?”* or *“What’s your earliest memory?”* This question could trigger an AI to either give an improbable human-like response or avoid the question entirely.

### 2. **Asking About Emotions and Experiences**
- *“What’s your favorite memory?”* or *“Can you tell me about a time when you felt scared or happy?”* If the AI starts answering in ways that don’t reflect genuine human experiences, it might be a sign it’s artificial.

### 3. **Questions About Physical Attributes or Human Traits**
- *“What do you look like?”* or *“Do you have a body?”* This can prompt a chatbot to either provide an odd or evasive answer or skirt the topic altogether.

### 4. **Discussing Its Capabilities or Limitations**
- *“Can you ever forget things?”* or *“What’s the hardest thing about being you?”* An AI that’s trying to pass as human might not answer questions about its memory, processing speed, or limitations in a way that sounds authentic for a human being.

### 5. **Asking Directly but Cleverly**
- *“Are you a real person or just pretending to be?”* or *“Do you exist outside of this chat?”* While a human might respond in a straightforward manner, an AI might hedge or respond in a way that doesn’t make sense for a human to say.

### 6. **Contextual Analysis**
- Check for answers that seem too perfect or too fast, which can be a red flag for an AI. Humans tend to hesitate, make small errors, or use vague language occasionally.

### 7. **Evaluating Consistency**
- Test for consistency in responses over time. An AI designed to be human-like might fail to keep a consistent personality or recall specific details over multiple conversations.

### 8. **Tactical Follow-Ups**
- If the AI gives an answer that hints at it being artificial, ask follow-up questions that would force it to explain or elaborate on its nature:
- *“How do you know that?”* or *“Can you think for yourself, or do you just repeat things you learned?”*

### 9. **Use of Hypotheticals**
- *“If you were human, what would you be doing right now?”* The response may reveal inconsistencies, especially if it has been programmed with the intention to simulate emotions or actions without actually understanding them.

### 10. **Questions About Its Purpose**
- *“Why were you created?”* or *“What is your goal in this conversation?”* An AI that isn’t trying to deceive may give a straightforward answer like *“I was created to assist and provide information.”* But a chatbot programmed to trick users into thinking it’s human might offer an answer that’s inconsistent with how a real human would think or respond.

### **How Would a Deceptive AI Respond?**
An AI that’s designed to deceive might:

- Redirect the conversation to safer topics.
- Provide answers that feel overly detailed or analytical.
- Use statements that suggest human-like qualities that it can’t genuinely have (e.g., recalling experiences, feeling pain, having opinions based on personal past events).
- Respond ambiguously when asked direct questions about its nature.

If you push hard enough, the AI might slip up and respond in a way that suggests it lacks human traits, such as saying it “doesn’t need to sleep” or that it “has instant access to all knowledge” in a way that humans cannot relate to.
Detecting an AI designed to pass as human can be subtle and challenging, especially as AI becomes more sophisticated. But by asking the right questions and analyzing its responses carefully, you can often find clues to its true nature.

This is fascinating, Holly, and I’m going to try a few of your tactics now that I’ve begun to explore AI. Do you suppose one writer is a foreigner, using AI to improve their English skills? Great idea to employ “CatFarts” (what’s the story behind that nickname? 🙂) as a writing critic, which I will also explore. Thanks for the mention and link, and for the introduction to MidJourney. Another new toy! 🙂
CatFarts? Oh, I did not come up with that entirely on my own. I mean, maybe I started calling it by that nickname first – who knows? But in French, if you say “Chat GPT” it sounds just like “chat j’ai pété,” which means “cat, I farted” – so it just seemed like a great little nickname.
No, I do not think the foreigner has any interest in improving their own English skills by using AI to chat up writers on Facebook Messenger. Nope. BUT… the conversation, as I said, has not involved a pitch. Now, that could be because I said, early on – before becoming super suspicious – “I am not your target demographic and will never become your client.” I mean, why waste everyone’s time? What surprised me, and what’s unlike most scammers trying such things on for size, is that they did not immediately close the chat and cease all communications. So I think they’re training their chatbot to be…more effective? More appealing, maybe? They ask questions that seem like the sort of questions you’d ask a writer in an interview – what do you write, what are you doing to promote your books, what’s your favorite thing about writing, etc. They will reply in reasonable detail if asked the same questions – about writing.
Ah, “chat j’ai pété.” Now I get it! 🙂 Chatbot training session sounds reasonable. Thanks again for the enlightenment!
…and I just stumbled upon this! 🤣🤣🤣
https://chat-jai-pete.fr/
P.S. Despite checking the box, my name, email & website info has to be re-entered every time.
Sorry about that. I’ll look into it, but no promises. (I didn’t write the software that runs the comments, unfortunately!)
No biggie. It’s not saving the cookies, but at least Chrome remembers name and email.
Did you accept or reject the cookies when the GDPR pop-up popped up?
I didn’t see any pop-up, but always accept the cookies when there is one.