Is this anthropomorphic view of AI really helpful?

The article concludes with “[…] if used without care, it is all too easy for AI to entrench existing disparities and discriminate against already-disadvantaged groups.”

It’s 2020; could we maybe move away from this Skynet/Terminator narrative? Take Formula 1: you have the driver and the car. The “car”, though, really means the team behind it. The car itself doesn’t care whether it wins or loses, and neither does AI.

In fact, AI has to discriminate… inside that little black box that people don’t seem to understand. It’s simply part of pattern recognition; your brain does the same thing all day long (shame on you). Now, that is obviously not the kind of discrimination the article is talking about, but then don’t use phrases such as “it is all too easy for AI to” when AI is really not the culprit in any of the scenarios the article describes.

I don’t mind if we simplify things, and there is no need for managers to know what alpha-beta pruning or simulated annealing is. But you need to be aware that AI will answer your question to the best of its ability… if there is a bias there, then it’s something that you introduced, and you can’t blame AI for it afterwards.
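To make the “bias in, bias out” point concrete, here is a minimal sketch using made-up, deliberately skewed “historical hiring” data (the groups, numbers, and helper names are all hypothetical, purely for illustration). The “model” is nothing but frequency counting, the most basic form of pattern recognition, and it faithfully reproduces whatever bias the data carries:

```python
from collections import Counter

# Hypothetical, deliberately biased historical data:
# (group, hired) pairs in which group "A" was favoured by past human decisions.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """Estimate P(hired | group) by simple frequency counting."""
    counts = {}
    for group, hired in records:
        counts.setdefault(group, Counter())[hired] += 1
    return {g: c[True] / sum(c.values()) for g, c in counts.items()}

model = train(history)
print(model)  # -> {'A': 0.8, 'B': 0.3}: the model mirrors the bias in its data
```

The algorithm did exactly what it was asked to do; the disparity in its output came entirely from the data it was given.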


Let’s wait for John Connor. Until then I’m with Musk on this.

(completely useless post, just couldn’t help myself!)

Actually, that depends on what you mean by…

Punctuation mistake… I was typing on my phone and accidentally double spaced.

Let’s wait for John Connor, until then I’m with Musk on this.

I added the part at the bottom to point out that I didn’t have anything to contribute to the discussion but I just couldn’t stop myself from throwing in the reference to John Connor and Elon Musk.

To answer your question: if I am not mistaken, Elon Musk is very outspoken against the idea of a fully autonomous, artificially intelligent entity.

He regards individuals as incapable of regulating themselves, which is why he proposes oversight of AI development.

Hope that clarifies some of the previous issues with my post.

Just an aside, I find it really funny that Musk says that he’s against AI when he’s funding the Neuralink project.

I have a lay interest in the development of AI so I might be really wrong, but I’m under the impression that Musk is against AI as an autonomous entity but he is 100% behind body augmentation or human enhancement.

He is referring to AGI, because AGI could create super-AGI. If that happens, the world could end at the press of a button. His proposal is to give every human AI capabilities to counter AGI, so that even if it does happen, humans remain in control and can stop that switch from being pressed.

He believes AGI could happen accidentally, which is why he is against it. He is not really against AI itself, just the chance of an end switch.

His views on AI are amorphous, ultimately. He’s afraid of AIs “taking over the world”, but doesn’t express concern over having a chip in your brain that will be able to predict and manage almost everything you do. Sounds pretty similar to me.

Ultimately, if you want to get interested in AI, don’t listen to Musk. He’s not respected by the broader AI community. Also, be careful from whom you learn about AI, since AI is in its buzzword phase, and every huckster in the world has their own “informed” opinion on AI. If you want to truly understand AI, you should probably talk to an actual computer scientist and read some of the works of John McCarthy.

Awesome! Thanks for the great reply!

Unfortunately, I’m not interested in getting into the field of AI in any capacity other than as a casual reader of news about it. Yet, while I might not understand the technological aspects of AI, I do understand the philosophical and ethical issues involved, and now I feel that my initial reply is more warranted.

Let’s wait for John Connor, until then I’m with Musk on this.


There is a difference between the “how” and the “what/why”. That is why I made the distinction.

The ethical system of western society is one that evolved over time. It is neither suitable for, nor adapting quickly enough to, the world we live in now: a world driven by technological innovation.

And then on to one of my favourite philosophical questions (I’ll leave the others for a later date)… This one, @bjoern.gumboldt, is one I’d love to hear your thoughts on: is it possible for a flawed being to create a perfect being?

What does that have to do with AI…?

From a philosophical perspective… Everything.

I have great respect for you; whenever you post something, I read it and walk away knowing that I learned something. Yet this is a very controversial topic. We are looking at AI differently, and I am pretty sure we disagree on whether AI is just a box or Pandora’s box.