The article concludes with “[…] if used without care, it is all too easy for AI to entrench existing disparities and discriminate against already-disadvantaged groups.”
It’s 2020; could we maybe move away from this Skynet/Terminator narrative… Take Formula 1: you’ve got the driver and the car. The “car”, though, really means the team behind it. The car itself doesn’t care whether it wins or loses; much like AI doesn’t either.
In fact, AI has to discriminate… inside that little black box people don’t seem to understand. It’s simply part of pattern recognition; your brain does the same thing all day long (shame on you). Now, that is obviously not the kind of discrimination the article is talking about, but then don’t use phrases like “it is all too easy for AI to” when AI isn’t really the culprit in any of the scenarios the article describes.
I don’t mind if we simplify things… there is no need for managers to know what alpha-beta pruning or simulated annealing is. But you do need to be aware that AI will answer your question to the best of its ability… if there is a bias there, it’s something that you’ve introduced, and you can’t blame AI for it afterwards.
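To make that last point concrete, here is a minimal sketch (hypothetical data, stdlib only) of the simplest possible “pattern recognizer”: a per-group majority-vote predictor trained on historically skewed records. The model has no opinion of its own; it just mirrors whatever skew you fed it.

```python
from collections import Counter, defaultdict

# Hypothetical toy data: past decisions with a skew baked in.
# Each record is (group, outcome). Group "B" was historically rejected
# far more often -- that skew is in the data, not in the algorithm.
history = (
    [("A", "hire")] * 80 + [("A", "reject")] * 20
    + [("B", "hire")] * 20 + [("B", "reject")] * 80
)

# "Training": count outcomes per group.
counts = defaultdict(Counter)
for group, outcome in history:
    counts[group][outcome] += 1

def predict(group):
    """Return the most common historical outcome for this group."""
    return counts[group].most_common(1)[0][0]

# The model faithfully reproduces the bias it was given:
print(predict("A"))  # -> hire
print(predict("B"))  # -> reject
```

A real model is vastly more complex than a frequency table, but the principle holds: if the training data encodes a disparity, the output will too, and that is not the algorithm deciding to discriminate.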