Wednesday, April 8, 2015

Give me Algorithmic Accountability Or

Give me Algorithmic Accountability or give me… ah, what is the alternative again?

I thought Steve Lohr's article in yesterday's New York Times was worth pointing out, as it boils up a larger issue from the flotsam and jetsam of the big data analytics parade. Online ads, the killer app (to date) for big data and machine learning, are but a Petri dish, he says. After all, if the wrong ad is served up, the penalty is mild. But, he writes, the stakes are rising. Companies and governments will churn big data to prevent crime, diagnose illness, and more. Why, just the other day JP Morgan said it could spot a rogue trader before he or she went rogue.

The algorithms that make the decisions may need more human oversight, the writer and others tend to suggest. Civil rights organizations are among those suggesting. Another is Rajeev Date, formerly of the Consumer Financial Protection Bureau. The story focuses on the notion of Algorithmic Accountability (meeting tonight in the church basement, no smoking please) as an antidote to brewing mayhem.

IBM Watson appears in the story. It is hard to get a handle on Watson, but one thing is crystalline: the mountains of documents are growing beyond managers' capacity to understand, and even Google is paling under the weight. Watson is meant to do the first cut on finding a gem in, for example, the medical literature – reading 'many thousands of documents per second.' Along the way, a few researchers may lose their jobs, but the remaining managers will still need coffee, and servers will be wanted.

Haven't heard of Danny Hillis for a while – he coined the Thinking Machine back in the day. The original cognitive computer? Or was that the old Ratiocinator (but I digress). Hillis says data storytelling is key – to, like old man Chaucer, find narrative in the confused data stream. If the storyteller had a moral compass, that would be an additional positive factor, if you take Louis Berry's word for it. He is cofounder of Earnest, a company that has staff to keep an eye on the predictor engine output.

Less opacity would be good, Lohr concludes, as Gary King, director of Harvard's Institute for Quantitative Social Science, joins the narrative. The Learning Machines should learn to err on the side of the individual in the data pool – if that happened, you would get that bank loan that might be a little iffy, rather than have a fairly innocuous money request rejected. George Bailey would be the patron saint of the Moralistic Data Story Telling Engineer.

I am trying to think of a case where the owners of the machines programmed them that way… but parted-lipped Jennifer Lawrence is in a Dior ad contiguous with Lohr's 'Maintaining a Human Touch As the Algorithms Get to Work' (NYT, Apr 7, 2015, p. A3) and my train of thought has left the station.

Data science should not happen in the dark. We have, in fact, a classic humanization-computerization dilemma aborning. Academia and associations, mobilize! – Jack Vaughan, Futurist


[Imagine Betty Crocker working a conveyor belt where algorithms are conveyed. I do.]
