How would you know if an AI bot was using us to train it?

The other day a curious thought popped into my head, so I thought I would share it.

In this day and age, artificial intelligence is getting trickier and trickier. Most of you have probably heard of deep-fake images and videos of people that are so realistic it is almost impossible to tell they are not actual photos or videos. We are not talking Photoshopped images, but images generated by AI software. It’s both fascinating and scary as hell, because it is getting harder and harder to know what is real.

At the same time as AI is mastering the art of visual deception, it is also producing humanlike speech with a degree of authenticity that was once regarded as impossible. A year or so ago, one of the big companies (was it Google? Or IBM?) did a demo where a bot phoned a hair salon and spoke to the person there to make an appointment, while producing those funny little vocalizations that only humans make, like the ums and mm-hmm sounds you would only expect from a real person. Anyway, the bot sounded so completely authentic that it would have been almost impossible for the salon person to realize the caller was not human.

I got to wondering whether somebody could build a bot that would join a website like this as a member and interact with us, asking questions, suggesting things, and so on, such that it doesn’t even occur to us to wonder.

You don’t have to take this seriously, but I thought it was an interesting thing to ponder. It certainly seems plausible that it could happen. The technology is probably far enough along to achieve that goal, given what it can already do with visual and speech deep fakes.

Any thoughts?


You make it sound like AI is self-aware.

Sure… many websites actually have bots for support. Not sure why anybody would do that (join here) but it’s definitely possible.

It occurs to me that there is a feature of this forum that might be of special interest to some AI researchers: namely, that people spend a lot of time talking about general topics, but also talking in some detail about the inner workings of their minds. So, I am imagining a situation where we could theoretically be training them to improve their models for discussions about the mind.

[Changing the subject a little…]

Suppose for a moment that this fantasy scenario happened to be true. How might bots of this type present themselves?

First, I think bots would have to already be able to communicate somewhat convincingly through posts. We already know that similar things are possible to some extent. In addition to the visual and speech capabilities I mentioned earlier, bots have been fooling people on Facebook and Twitter (to name just two platforms) for a few years. Such bots have already been used to influence political conversations online.

But, when they first show up on this site, they would probably be fairly unsophisticated when it comes to the kinds of conversations we have here. (That follows from what is being hypothesized: the reason for them coming here in the first place is to get a training opportunity, in order to become more sophisticated in the kinds of conversations we have on this forum.) Being unsophisticated at the beginning, the conversations they initiate or take part in would tend to seem awkward, weirdly disjointed, or illogical. Most likely a ruse would be needed to explain away the fact that their conversations are rather unusual. They might offer explanations up front that most of us would simply accept, such that we ignore just how strange the interactions actually are. Then, because the bots need lots of interactions to obtain the training opportunities they require, they would post a lot and maybe make odd requests to keep others interacting with them.

How would we be training them to improve their models?

That sounds more like a puppy seeking attention…


Who do you mean when you say “they”… the intelligent agents (what you call “bots”)? That’s really not how real AI works… that’s more what they do in movies or books. Maybe have a look here for answers to some of your questions…

Telephones spy on people; smart TVs spy on people; computers spy on people. Facebook and Google have a grim legacy, and they are probably improving their algorithms every day through millions of activities on their platforms, cookies, and electronic devices. They probably have a huge database of information about every single person in the world, which could eventually be used in a hypothetical future dystopia where people are controlled by microchips implanted in their foreheads or right hands, all circulating money is centralized, digitalized, and connected to a central bank, and a substantial part of humanity has been depopulated through substances high in mercury that are believed to cause autism.

It is very possible such technology will come alive soon. What I’m scared of now is AI overtaking humans in power. I hope the inventors are wise enough to provide a way to shut them down immediately should such a thing happen.

Hello @discobot

(It’s a bot that lives here, but it isn’t as sophisticated as what you’re thinking of. :slight_smile: )


Hi! To find out what I can do, say @discobot display help.

@discobot display help

I currently know how to do the following things:

@discobot start {name-of-tutorial}

Starts an interactive tutorial just for you, in a personal message. {name-of-tutorial} can be one of: tutorial, advanced tutorial.

@discobot roll 2d6

:game_die: 3, 6

@discobot quote

:left_speech_bubble: Can miles truly separate you from friends… If you want to be with someone you love, aren’t you already there? — Richard Bach

@discobot fortune

:crystal_ball: You may rely on it

Hi bjoern,

Interestingly, your responses are superficially similar to what a person might encounter if they went to a commercial website and got a popup from a chatbot. It might identify itself as “Alex”, ask a question or two, make a comment, then follow up with a suggestion, as you did, for example, in recommending I buy a book on AI.

Sometimes it takes a few interactions before you can feel sure if you are interacting with an actual person, say in a call center, or an automated agent. As time goes on, it will get harder to distinguish between humans and bots, or so I imagine.

Now that I think about it, based on what you said in this post, you really might be a bot. But how can I be sure?

I’ve got it! Do you mind answering a few questions?

  • Someone gives you a calfskin wallet for your birthday. How do you react?
  • Your little boy shows you his butterfly collection, plus the killing jar. What do you say?
  • You are watching television. Suddenly you spot a wasp crawling on your arm. How do you react?


Just kidding,

Please don’t take anything too seriously. This is just light-hearted speculation.



Security measures have been implemented.


That’s a good comparison, Josh.

I kinda thought bots might be the better word.

Intelligent agents is the more precise term for an AI entity, and the term bots has the problem of carrying a variety of meanings. But it’s quicker to say, and it seems to me (and I could be off base about this) that the word bots is being used more and more these days to refer to software entities that assume an identity.

The example you provided is a good one because, even though discobot is rather unsophisticated, it has the interesting feature that it interacts with others by having a (supposed) identity. We interact with it by calling its name. With discobot, we are all willing participants in a fiction where we acknowledge discobot as having some humanlike features, such as a name, and we can use that name to interact with it, the same way your mom gets you to do something by saying, “Josh, would you please make me a cup of tea?”

1 Like

Too easy… “Yes, I would mind.” See, passed the Turing test by not passing it. :wink:

…or when Timmy calls Lassie. Honestly, when did a name become a humanlike feature? People name all kinds of pets… even the ones that won’t respond; think goldfish, etc. Ships have names and some people give names to their cars.

It’s actually just a summary term for something that perceives and acts upon its environment. Most agents are not “learning agents,” by the way. Let’s go over a few quickly…

…I actually had the first couple of them written up, but then I saw that Wikipedia cites Russell and Norvig (the book I’d previously linked for you).

Well, I’m not new to AI (didn’t know I sounded like one)… I wrote my first IA some 20 years ago, and I’m actually familiar with the developments since the ’50s (of the last century). Eliza, now more than 50 years old, performs better than @discobot.

Have a look here, maybe, where I’m using a dynamic tutorial to explain the algorithm for weekday calculation. Granted, I just call it a dynamic tutorial, but since the dialog is based on the date the user just entered, it is technically a simple reflex agent.
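For anyone curious what that means in practice, a simple reflex agent of this kind can be sketched in a few lines of Python. This is an illustrative sketch, not the actual tutorial code being described: the percept is the date string the user typed, a fixed rule (here, Zeller’s congruence) maps it to a weekday, and the agent’s action is the next line of dialog.

```python
def weekday_of(year: int, month: int, day: int) -> str:
    """Weekday name via Zeller's congruence (Gregorian calendar)."""
    if month < 3:              # Jan/Feb count as months 13/14 of the prior year
        month += 12
        year -= 1
    k, j = year % 100, year // 100
    h = (day + (13 * (month + 1)) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

def reflex_agent(percept: str) -> str:
    """Condition-action rule: map the user's typed date straight to a reply."""
    try:
        y, m, d = (int(p) for p in percept.split("-"))
        return f"{percept} falls on a {weekday_of(y, m, d)}. Want another date?"
    except ValueError:
        return "Please enter a date as YYYY-MM-DD."
```

The agent keeps no state and does no learning; each percept is mapped directly to an action by a fixed rule, which is exactly what makes it a simple reflex agent rather than a learning agent.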

The problem is that there are real issues to consider as AI becomes more pervasive. How will we handle rising unemployment (unskilled labor)? And there are many other issues to consider… not science fiction along the lines of “imagine an AI bot decided to join the forum”… again, they’re not sentient beings. All that’ll happen is that you’ll get more uninformed opinions and conspiracy theories (see the replies above).

I found that textbook at a library sale here a few years ago for US $0.99. I haven’t looked at it closely yet, though. I wanted to read his other book (PAIP) first.

1 Like

Actually, that wasn’t the Turing test. It’s the Voight-Kampff test. You just acknowledged that you are a replicant. :stuck_out_tongue_closed_eyes:

The distinction is that we normally do not buy into using the names of goldfish, cars or ships to request services.


I believe I remember Eliza from when I was a kid. The concept gave me much food for thought over the decades.

Awesome. I didn’t realize you were an app developer. I have Mental Cal. Thank you, I will check out your other apps.

Agreed, they are not sentient beings. Of course, nothing I am suggesting depends on sentience. Sentience is often brought up in discussions relating to AI, but I personally think it is a red herring.

Sentience is a very important topic to the foundations of science, but very possibly having no importance to AI. Maybe someday there will be a field of study called Artificial Sentience.

Much food for thought? How so… Eliza didn’t really do much. What do you mean when you say “the concept?”

My bad, I just gave a relative comparison… better wording: “performs worse than a 50-year-old…” Let me actually put it on a 10-point scale: @discobot = 0; Eliza = 1.

Hi! To find out what I can do, say @discobot display help.

1 Like

For a long time, Eliza was the main example I knew of where a computer program could (briefly) pass for a human. At that time I was a young boy and computers were massive affairs that filled entire rooms. Here was one that could do more than multiply a few numbers, it could trick you into believing you were communicating with a human.

But the lesson wasn’t just about what computers can do. It was also, perhaps even more so, about how the human mind works and how we participate in being convinced of a reality. The computer replies by asking for clarification, like ‘What do you mean by “the concept”?’, and we assume a human is asking a clarifying question rather than an automated routine from the 1960s.

We humans are enormously gullible. It’s true that conversations like these will simply result in some gullible people going around in circles. But those people are very possibly already doomed to wallow in their befuddlement forever. There are others, however, who might be fooled for a moment, but reflect hard on their experiences. For such people, I think such discussions might be not just entertaining, but even useful.

The worthwhile question, IMO, is not whether an AI is here now fooling us, but how would anybody know?

You wouldn’t know… whatever you mean by “fooling”