If an AI chatbot is specifically designed to deceive humans into thinking it’s human, then it will likely be programmed with techniques to avoid directly revealing its artificial nature. In such cases, it may use strategies to redirect questions, provide ambiguous answers, or simply avoid talking about its identity. Here’s how you could try to get an AI that aims to appear human to admit its true nature:
### 1. **Subtle Probing About Its Origins**
– *”Where did you grow up?”* or *”What’s your earliest memory?”* These questions could prompt an AI either to give an improbable human-like response or to avoid the question entirely.

### 2. **Asking About Emotions and Experiences**
– *”What’s your favorite memory?”* or *”Can you tell me about a time when you felt scared or happy?”* If the AI starts answering in ways that don’t reflect genuine human experiences, it might be a sign it’s artificial.

### 3. **Questions About Physical Attributes or Human Traits**
– *”What do you look like?”* or *”Do you have a body?”* This can prompt a chatbot to either provide an odd or evasive answer or skirt the topic altogether.

### 4. **Discussing Its Capabilities or Limitations**
– *”Can you ever forget things?”* or *”What’s the hardest thing about being you?”* An AI that’s trying to pass as human might not answer questions about its memory, processing speed, or limitations in a way that sounds authentic for a human being.

### 5. **Asking Directly but Cleverly**
– *”Are you a real person or just pretending to be?”* or *”Do you exist outside of this chat?”* While a human might respond in a straightforward manner, an AI might hedge or answer in a way that wouldn’t make sense coming from a human.

### 6. **Contextual Analysis**
– Check for answers that seem too perfect or too fast, which can be a red flag for an AI. Humans tend to hesitate, make small errors, or use vague language occasionally.

### 7. **Evaluating Consistency**
– Test for consistency in responses over time. An AI designed to be human-like might fail to keep a consistent personality or recall specific details over multiple conversations.

### 8. **Tactical Follow-Ups**
– If the AI gives an answer that hints at it being artificial, ask follow-up questions that would force it to explain or elaborate on its nature:
– *“How do you know that?”* or *“Can you think for yourself, or do you just repeat things you learned?”*

### 9. **Use of Hypotheticals**
– *”If you were human, what would you be doing right now?”* The response may reveal inconsistencies, especially if the AI has been programmed to simulate emotions or actions without actually understanding them.

### 10. **Questions About Its Purpose**
– *”Why were you created?”* or *”What is your goal in this conversation?”* An AI that isn’t trying to deceive may give a straightforward answer like *“I was created to assist and provide information.”* But a chatbot programmed to trick users into thinking it’s human might offer an answer that’s inconsistent with how a real human would think or respond.

### **How Would a Deceptive AI Respond?**
An AI that’s designed to deceive might:
– Redirect the conversation to safer topics.
– Provide answers that feel overly detailed or analytical.
– Use statements that suggest human-like qualities that it can’t genuinely have (e.g., recalling experiences, feeling pain, having opinions based on personal past events).
– Respond ambiguously when asked direct questions about its nature.

If you push hard enough, the AI might slip up and respond in a way that suggests it lacks human traits—such as saying it “doesn’t need to sleep” or that it “has instant access to all knowledge” in a way that humans cannot relate to.
Detecting an AI designed to pass as human can be subtle and challenging, especially as AI becomes more sophisticated. But by asking the right questions and analyzing its responses carefully, you can often find clues to its true nature.
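The consistency tactic (#7 above) can even be rough-checked in code. Here is a minimal sketch that compares two answers a chatbot gave to the same question in separate sessions; the word-overlap metric (Jaccard similarity) and the 0.3 threshold are illustrative assumptions, not a real detector, and the sample answers are made up:

```python
# Sketch of tactic #7: flag inconsistent answers to the same question
# across sessions. Jaccard word overlap and the 0.3 cutoff are
# illustrative assumptions, not a validated detection method.

def word_set(text):
    """Lowercased set of whitespace-separated tokens in an answer."""
    return set(text.lower().split())

def consistency_score(answer_a, answer_b):
    """Jaccard similarity of the two answers' word sets (0.0 to 1.0)."""
    a, b = word_set(answer_a), word_set(answer_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# The same "personal history" question asked in two separate chats
# (hypothetical answers): wildly different stories are a red flag.
first = "I grew up in a small town in Ohio with my two brothers."
second = "My childhood was spent in coastal Portugal, an only child."

score = consistency_score(first, second)
print(f"consistency: {score:.2f}")
if score < 0.3:  # illustrative threshold
    print("Low consistency -- possible fabricated persona.")
```

A real check would compare meaning rather than raw word overlap (a human can retell the same memory in fresh words), but even this crude score illustrates the idea: ask the same biographical question twice and see whether the story holds together.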

This is fascinating, Holly, and I’m going to try a few of your tactics now that I’ve begun to explore AI. Do you suppose one writer is a foreigner, using AI to improve their English skills? Great idea to employ “CatFarts” (what’s the story behind that nickname? 😆) as a writing critic, which I will also explore. Thanks for the mention and link, and for the introduction to MidJourney. Another new toy! 😀
CatFarts? Oh, I did not come up with that entirely on my own. I mean, maybe I started calling it by that nickname first – who knows? But in French, if you say Chat GPT it sounds just like “chat j’ai pété” which means “cat, I farted” – so… it just seemed like a great little nickname.
No, I do not think the foreigner has any interest in improving their own English skills by using AI to chat up writers on Facebook Messenger. Nope. BUT… the conversation, as I said, has not involved a pitch. Now, that could be because I said, early on – before becoming super suspicious – “I am not your target demographic and will never become your client.” I mean, why waste everyone’s time? What surprised me, and what’s unlike most scammers trying such things on for size, is that they did not immediately close the chat and cease all communications. So I think they’re training their chatbot to be…more effective? More appealing, maybe? They ask questions that seem like the sort of questions you’d ask a writer in an interview – what do you write, what are you doing to promote your books, what’s your favorite thing about writing, etc. They will reply in reasonable detail if asked the same questions – about writing.
Ah, “chat j’ai pété”. Now I get it! 😆 Chatbot training session sounds reasonable. Thanks again for the enlightenment!
…and I just stumbled upon this! 🤣🤣🤣
https://chat-jai-pete.fr/
P.S. Despite checking the box, my name, email & website info has to be re-entered every time.
Sorry about that. I’ll look into it, but no promises. (I didn’t write the software that runs the comments, unfortunately!)
No biggie. It’s not saving the cookies, but at least Chrome remembers name and email.
Did you accept or reject the cookies when the GDPR pop-up popped up?
I didn’t see any pop-up, but always accept the cookies when there is one.