Wednesday, December 24, 2014

Talking Data: Protecting online data privacy was the big 2014 trend


The last Talking Data podcast of 2014 is a bit of a walk down Twitter Lane through the data privacy issues of 2014. Ed Burns and I discuss a Pew poll that looks at Americans' attitudes toward online privacy. They aren't comfortable with how their data is shared, but they sure do like those social infrastructure services offered gratis. We also cover Uber and its loping missteps on the way to killing off the hackney cab as we know it, and what that means for data professionals. There is more, including some discussion of the HortonWorks IPO. On Christmas Eve my version was truncated; I might wait until the Christmas smoke clears to sort through that. - Jack Vaughan

Tuesday, December 23, 2014

That was the year that was: big data a la Hadoop, NoSQL


The year 2014 saw progress in big data architecture development and deployment, as users gained more experience with NoSQL alternatives to relational databases and Hadoop 2 gained traction for operational analytics uses beyond the distributed processing framework's original batch processing role. Those trends were detailed in a variety of SearchDataManagement pieces. Big data in review: 2014



The roots of machine learning

Neural networks and artificial intelligence have been on my mind for many years, even while I spent most days studying middleware, something quite different. At the heart of neural nets were backward propagation and feedback loops, all somewhat related to cybernetics, which flowered from the 1940s into the early 1970s. One of the first implementations of cybernetics was the thermostat.

In early 2013 I started working at SearchDataManagement, writing about big data. At the end of this year I have devoted some time to book learning about machine learning. A lot happened while Rip Van Vaughan was catching some z's. So something told me to go back to one of my old blogs and see where I left off with feedback. If you pick through it you will find Honeywell and the thermostat, the automatic pilot, etc. My research told me the first autopilot arose from a combination of the thermostat (Honeywell) and advanced gyroscopes (Sperry).

I spent hours looking at the thermostat, its mercury, its coil. It had an alchemical effect. I remember wondering if the thermostat could be a surveillance bug. Now we have Nest, which uses the thermostat as a starting point for collecting data for machine learning processes. - Jack Vaughan
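Since the thermostat keeps coming up here as the canonical feedback loop, a minimal sketch of the idea in Python follows. This is a hypothetical illustration only, not anything Nest or Honeywell actually runs: a controller reads a temperature, compares it to a setpoint, feeds the error back into an on/off decision, and logs the readings much the way a "learning" thermostat might collect data.

```python
# A minimal negative-feedback loop, thermostat-style.
# Hypothetical illustration only -- not Nest's or Honeywell's actual logic.

def room_model(temp, heater_on, outside=10.0, dt=1.0):
    """Toy physics: the room drifts toward the outside temperature; the heater adds heat."""
    drift = 0.1 * (outside - temp) * dt
    heat = 0.8 * dt if heater_on else 0.0
    return temp + drift + heat

def thermostat_loop(setpoint=20.0, start_temp=15.0, steps=30, hysteresis=0.5):
    temp, heater_on = start_temp, False
    log = []  # the readings a "learning" thermostat might collect
    for t in range(steps):
        error = setpoint - temp          # feedback: compare measurement to goal
        if error > hysteresis:           # too cold -> switch heater on
            heater_on = True
        elif error < -hysteresis:        # too warm -> switch heater off
            heater_on = False
        log.append((t, round(temp, 2), heater_on))
        temp = room_model(temp, heater_on)
    return log

if __name__ == "__main__":
    for t, temp, on in thermostat_loop():
        print(f"t={t:2d}  temp={temp:5.2f}  heater={'ON' if on else 'off'}")
```

The hysteresis band is the same trick the old mercury-and-coil units used mechanically: it keeps the loop from chattering on and off around the setpoint.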

[It is funny how the old arguments about the autopilot appeared as memes in this year of machine learning. This link, which includes Tom Wolfe's mirthful take on the autopilot in The Right Stuff, is here mostly as a place-marker for background: The Secret Museum of Cybernetics - JackVaughan's Radio Weblog, March 2004 (also reposted on Moon Traveller with a slew of feedback errata). It is probably valuable to cite Nicholas Carr's The Glass Cage, published this year, which takes as its premise society's growing inabilities, many brought on by automation. Several serious airplane crashes in which pilots' skills seemed overly lulled by automation form a showcase in The Glass Cage.]

From Wolfe’s The Right Stuff:

“Engineers were ... devising systems for guiding rockets into space, through the use of computers built into the engines and connected to accelerometers for monitoring the temperature, pressure, oxygen supply, and other vital conditions of the Mercury capsule and for triggering safety procedures automatically -- meaning they were creating with computers, systems in which machines could communicate with one another, make decisions, take action, all with tremendous speed and accuracy ... Oh, genius engineers!”

Wednesday, December 17, 2014

AI re-emergence: Study to Examine Effects of Artificial Intelligence


Able New York Times technology writer John Markoff (he has been far and away the star of my RJ-11 blog) had two of three (count 'em, three) AI articles in the Dec. 16 Times. One discusses Paul Allen's AI2 institute work; the other discusses a study being launched at Stanford with the goal of looking at how the technology reshapes the roles of humans. Dr. Eric Horvitz of Microsoft Research will lead a committee with Russ Altman, a Stanford professor of bioengineering and computer science. The committee will include Barbara J. Grosz, a Harvard University computer scientist; Yoav Shoham, a professor of computer science at Stanford; Tom Mitchell, the chairman of the machine learning department at Carnegie Mellon University; Alan Mackworth, a professor of computer science at the University of British Columbia; and Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley. The last, Mulligan, is the only one who, on some cursory Googling, appears ready to accept that there are potential downsides to the AI re-emergence. It looks like Horvitz has an initial thesis formed ahead of the committee work: based on a TED presentation ("Making friends with AI"), while he understands some people's issues with AI, he holds that the methods of AI will come to support people's decisions in a nurturing way. The theme is borne out further if we look at the conclusion of an earlier Horvitz-organized study on AI's ramifications (that advances were largely positive and progress relatively graceful). Let's hope the filters the group implements tone down the rose-colored learning machine that reflects academics' best hopes. – Jack Vaughan