Lots of algorithms go bad unintentionally. Some of them, however, are criminal by design. Algorithms are formal rules, usually written in computer code, that make predictions about future events based on historical patterns. To train an algorithm you need to provide historical data as well as a definition of success. […]
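That recipe, historical data plus a definition of success, can be made concrete with a toy sketch. Everything below is invented for illustration (the hypothetical data, the threshold rule, and accuracy as the success measure); it is not any real scoring system, only a minimal picture of what "training" means:

```python
# Hypothetical historical data: (past_due_payments, defaulted?) pairs.
history = [(0, False), (1, False), (2, False), (3, True), (5, True), (4, True)]

def predict(threshold, past_due):
    """The 'algorithm': predict a default if past-due count meets the threshold."""
    return past_due >= threshold

def accuracy(threshold):
    """The 'definition of success': fraction of history predicted correctly."""
    correct = sum(predict(threshold, x) == y for x, y in history)
    return correct / len(history)

# 'Training' = picking the rule that scores best on the historical data.
best = max(range(7), key=accuracy)
```

The point of the sketch is that both ingredients are choices: a different data set or a different success measure yields a different rule, which is exactly where bias, intentional or not, enters.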
Algorithms today are typically secret, proprietary code, protected as the "secret sauce" of corporations. They're so secret that most online scoring systems aren't even apparent to the people targeted by them. That means those people also don't know the score they've been given, nor can they complain about or contest those scores. Most important, they typically won't know if something unfair has happened to them.
Given all of this, it's difficult to imagine oversight for algorithms, even when they've gone wrong and are actively harming people. For that matter, not all kinds of harm are readily measurable in the first place. One can make the argument that, what with all the fake news floating around, our democracy has been harmed. But how do you measure democracy?
That's not to say there is no hope. After all, by definition, an illegal algorithm is breaking an actual law that we can point to. There is, ultimately, someone who should be held accountable. The problem remains: how will such laws be enforced?