Tuesday, December 25, 2018

For the unforeseeable future we have no paradigm


James Burke came up in conversation the other day. You know, the British science reporter who hosted the late 1970s Connections TV series?

On the show, he globe-hopped from one scene to another, always wearing the same white leisure suit, weaving a tale of technological invention that would span disparate events - showing, for example, how the Jacquard loom or the Napoleonic semaphore system led to the mainframe or the fax machine.

It's hard to pick up a popular science book these days that doesn't owe something to Connections. Burke of Connections had cosmic charisma - in his hands, Everything is connected to Everything. You'll hear that again.

Today I picked up Connections (the book that accompanied the series), looking, this being Christmas, for illuminations. Not just the connections - but how the connections are connected. Because it's been a search for me over many years - I've stumbled and bumbled, but I have never been knocked on my heels more than this year, 2018.

And Burke delivered: It's not just about the connected, but also about the unconnected. How things happen: "The triggering factor is more often than not operating in an area entirely unconnected with the situation which is about to undergo change," he writes. [Connections, p.289]

This seems pertinent to me today. Because the year just past was one where some among my interests (horse race handicapping and predictive analytics; Facebook, feedback, news and agitprop, and the mystic history of technology) seemed to defy understanding.

You see, you look closely, and you analyze, but there is a cue ball just outside your frame of reference that will break up the balls. It is a dose of nature - a dose of reality - a dose of chaos. In horse racing it can be quite visible when a favorite bobbles at the start, or a hefty horse takes a wide turn and thus impels another horse a significant number of paths (and, ultimately, lengths) wider. We (journalists, handicappers, stock market analysts) generally predict by looking in the rear view mirror, because we don't have a future-ready time machine.

I had the good fortune to cover events that Burke keynoted. There was OOPSLA in Tampa in 2001 (less than a month after the 9/11 terror attacks). And there was O'Reilly Strata West in Santa Clara (?) in about 2013 (?), at which the O'Reilly folks kindly set up a small press conference with Burke for media after his keynote.

Burke is adamant that inventors do not understand all the ramifications their inventions will have in society in practice. One thing I tried to press him on was the role the social structure (in our case, the capitalist system) has in technology's development. He'd just gotten off a transatlantic, cross-continental flight and delivered a startling keynote before sitting down with press (he was asked would he like some coffee, and he said that in his time zone it was time for wine), and Jack's questions did not much resonate.

My notes thereof are a bit of a jumble... Everything is connected to everything. He said of Descartes... and his fledgling scientific methods... that he "froze the world" with reductionism - which may have value but which, as forecasters, pundits, and handicappers have found, "doesn't tell you how all the parts work together."

"For the future we have no paradigm."

Screaming out from the conversation with Burke was a quote, actually from Mark Twain:

“In the real world, the right thing never happens in the right place and the right time. It is the job of journalists and historians to make it appear that it has.”

Tuesday, December 4, 2018

NIPS is NeurIPS



It's a big day for regeneration, for non-neural cognition and bioelectric mechanisms. The lowly flatworm has had its day, at NeurIPS 2018, up Montreal way.





Saturday, November 24, 2018

Detecting glaucoma from raw OCT with deep learning framework

"A team of scientists from IBM and New York University is looking at new ways AI could be used to help ophthalmologists and optometrists further utilize eye images, and potentially help to speed the process for detecting glaucoma in images. In a recent paper, they detail a new deep learning framework that detects glaucoma directly from raw optical coherence tomographic (OCT) imaging."

"Logistic regression was found to be the best performing classical machine learning technique with an AUC* of 0.89. In direct comparison, the deep learning approach achieved AUC of 0.94 with the additional advantage of providing insight into which regions of an OCT volume are important for glaucoma detection."

Read more at: https://phys.org/news/2018-10-deep-glaucoma.html#jCp
Also https://arxiv.org/abs/1807.04855v1
* "Area under the ROC Curve."

Friday, November 16, 2018

GPUs speed computation

The Science for Life Lab uses GROMACS on NVIDIA GPUs to accelerate drug design. The research group is studying the mechanisms behind various molecular phenomena that occur at human cellular membranes. GROMACS is a molecular dynamics application designed to simulate Newtonian equations of motion for systems with hundreds to millions of particles. The researchers write: "The highly iterative nature of fitting the parameters of the kinetic models used to simulate the electrical current curves and running compute-heavy simulations for each consumes both time and resources. Slower simulations mean fewer iterations."
Adding GPU acceleration provides a significant performance boost.
Read more. Shown at left: Voltage sensing protein domain.
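To give a feel for the "highly iterative" parameter fitting the researchers describe, here is a generic, hypothetical sketch of fitting a simple kinetic model to a simulated current curve with SciPy. It is not the group's actual workflow; the exponential model, units, and parameters are all assumptions.

```python
# Hypothetical sketch of iterative kinetic-model fitting (not the lab's code).
# A two-parameter exponential relaxation stands in for an electrical current curve.
import numpy as np
from scipy.optimize import curve_fit

def current_model(t, i_max, tau):
    """Toy kinetic model: current relaxing toward i_max with time constant tau."""
    return i_max * (1.0 - np.exp(-t / tau))

# Synthetic "measured" curve with noise, standing in for simulation output.
t = np.linspace(0, 50e-3, 200)                 # 50 ms window
true_params = (1.2e-9, 8e-3)                   # 1.2 nA, 8 ms
rng = np.random.default_rng(0)
observed = current_model(t, *true_params) + rng.normal(0, 2e-11, t.size)

# Each fit is itself iterative; in practice every iteration may also require a
# fresh, compute-heavy simulation, which is where GPU acceleration buys back time.
fitted, _ = curve_fit(current_model, t, observed, p0=(1e-9, 5e-3))
print(f"fitted i_max = {fitted[0]:.3e} A, tau = {fitted[1]:.3e} s")
```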

Thursday, November 8, 2018

Platform for Terror

Sunday, September 30, 2018

They don't call it the Web for nothing

Was remembering when the Web first caught on: There have been a lot of changes in system and data architecture since then. One thing I remember back then is people saying “yeah, it is pretty cool, but, you know, it is stateless.” As most of what I heard on this issue was from enterprise software vendors, with all the bias that could entail, I should have taken what I was told with a grain of salt. The first big problem these folks saw with the Web was its statelessness, which made it far different from the synchronously connected clients and servers (at that time, Java servers) they were used to. Wrote this up for a page related to a podcast ...

 Podcast Page
https://itknowledgeexchange.techtarget.com/talking-data/web-what-have-you-wrought-on-strata-microservices-and-more/
Podcast https://cdn.ttgtmedia.com/Editorial/2016/PodcastTechTarget/Talking_Data_Podcast_092418_withmusic.mp3

Friday, September 28, 2018

Name that tune, Now Playing!



A recent note on the Google AI blog discusses the company’s use of a deep neural network for music recognition on mobile devices. As it brings extreme-scale noodling (convolution) to bandwidth-limited devices (smartphones), it could be a breakthrough on par with MPEG and JPEG, compression standards that dramatically transformed media distribution beginning in the 1990s. It’s known as Now Playing, and it uses a sequence of embeddings that run your music against its network and recognize the song, while conserving energy on the device. Each embedding has 96 to 128 dimensions. An embedding threshold is raised for obscure songs – which is the town where I live. I guess when you look at what Google has done with Search, it shouldn’t be that surprising – but the idea that so much of the work occurs on the Thing (device) is pretty astounding. I asked it ‘what’s that song’ and it got it right. Slam dunk. “Ride Your Pony” by Lee Dorsey. Now, Shoot! Shoot! Shoot! Shoot! - Jack Vaughan
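As a rough sketch of the idea - not Google's implementation - matching works something like nearest-neighbor search over a database of song-fingerprint embeddings, accepted only past a similarity threshold that can be tightened for obscure tracks. The dimensions, threshold, and data below are invented for illustration.

```python
# Hypothetical sketch of on-device song matching via embeddings (not Google's code).
# A query clip is embedded, compared to a small fingerprint database by cosine
# similarity, and accepted only if the best match clears a threshold.
import numpy as np

EMBED_DIM = 96                 # Now Playing embeddings are roughly 96-128 dimensions
THRESHOLD = 0.85               # assumed cutoff; raised for obscure songs

rng = np.random.default_rng(42)

# Fake fingerprint database: one embedding per song (real systems store sequences).
song_db = {
    "Ride Your Pony - Lee Dorsey": rng.normal(size=EMBED_DIM),
    "Some Other Song": rng.normal(size=EMBED_DIM),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query_embedding, threshold=THRESHOLD):
    best_title, best_score = None, -1.0
    for title, fingerprint in song_db.items():
        score = cosine(query_embedding, fingerprint)
        if score > best_score:
            best_title, best_score = title, score
    return best_title if best_score >= threshold else None

# Simulate a noisy clip of the Lee Dorsey track.
query = song_db["Ride Your Pony - Lee Dorsey"] + rng.normal(scale=0.1, size=EMBED_DIM)
print(identify(query))         # expected: "Ride Your Pony - Lee Dorsey"
```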

RELATED 


Speaking of Name That Tune – why not a little vignette from the time when Humans Walked the Earth?

Thursday, August 30, 2018

Opaque algorithms with singular purpose

Today's article by Renee DiResta:

"Opaque algorithms with their singular purpose—“keep watching”—coupled with billions of users is a dangerous recipe

...

Most RT viewers don’t set out in search of Russian propaganda. The videos that rack up the views are RT’s clickbait-y, gateway content: videos of towering tsunamis, meteors striking buildings, shark attacks, amusement park accidents, some that are years old but have comments from within an hour ago. This disaster porn is highly engaging; the videos have been viewed tens of millions of times and are likely watched until the end. As a result, YouTube’s algorithm likely believes other RT content is worth suggesting to the viewers of that content—and so, quickly, an American YouTube user looking for news finds themselves watching Russia’s take on Hillary Clinton, immigration, and current events. These videos are served up in autoplay playlists alongside content from legitimate news organizations, giving RT itself increased legitimacy by association.

...
The social internet is mediated by algorithms: recommendation engines, search, trending, autocomplete, and other mechanisms that predict what we want to see next."


https://www.wired.com/story/free-speech-is-not-the-same-as-free-reach/

Wednesday, August 29, 2018

The Cavalcade of Falsehood


Notes - The Fog falls on Facebook - When deception becomes the norm - Manipulation of public opinion is nothing new. But assorted characteristics of the social media platform Facebook make the manipulation of public opinion a very new mixer. But for context let's not go back as far as the Lusitania - let's go to Ukraine.

The downing of a Malaysian jetliner in July 2014 may have pretty immediately been laid at Russia's door, but it did not catch the Bear without the wherewithal to respond in ingrained fashion. A disinformation campaign was soon launched, with cascading and contradictory alternative stories of lies, half-truths and some truths.

The formula is becoming familiar now in America, but in 2014 it was already very familiar to countries bordering Putin's dark lair. Reading the report "Fog of Falsehood: Russian Strategy of Deception and the Conflict in Ukraine" bears this out. It is in fact an analysis of the Kremlin kiddos' strategic deception, which has had incredible influence in America, and which is at the point of shaking the foundations of friends and families.

Like others, the Fog of Falsehood editors, Katri Pynnöniemi and András Rácz, point to Putin's pre-history, the Soviet era, to gain a historical view on deep trickeration as a tactic. - Jack Vaughan

Sunday, August 26, 2018

Demolition Derby World of Data


Weapons of Math Destruction by Cathy O’Neil cuts to the chase when it comes to big data and its very dark side, which she saw firsthand working as a quant in the run-up to 2008, and thereafter as a data scientist in e-commerce.

What she saw was the housing crisis, the collapse of major financial institutions ... all had been aided and abetted by mathematicians wielding magic formulas.

That was 2008. But there was no letup thereafter.
   
New mathematical techniques were used to churn through petabytes of information, much of it created on social media or e-commerce websites. Mathematicians studied desires, movements and spending power; they were predicting trustworthiness and calculating potential.

But, as O’Neil, author of the Mathbabe blog, documents ably: The models encoded human prejudice.

She enumerates the differences between a study of a small classroom and the big data they work on at Google. This is something I see regularly, as I cover big data as it pertains to business enterprises. People see what Google does and mistakenly extrapolate the company’s proven success to their own potential outcome. They feel good, because they think they are doing something akin to what the great disruptor of advertising did.

Systems like that can be improved via feedback, but systems like the one she discusses in the Washington school system - which she says is similar to other weapons of math destruction she considers in her book - generally lag in terms of feedback. They also create fail-safe false premises for their syllogisms.

The author writes:
"You cannot appeal to a WMD. That's part of their fearsome power. They do not listen. Nor do they bend. They're deaf not only to charm, threats, and cajoling but also to logic.
...they define their own reality and use it to justify their results. This type of model is self-perpetuating, highly destructive, and very common." [p.10]

A great example of unfairness is the use of credit scores to decide who gets a job. That has a way of enforcing failure that would cause Horatio Alger a troubled sleep. For me it rather recalls the great moloch of Search Engine Optimization, a dark cottage industry that sells “Google know-how” but which is an amazing, indisputable black box of Oz.

But the story really begins with the economic crisis of 2008, and the creation of math models that packaged assorted mortgages (as buckets of risk called securities) in ways that proved lethal, complex and resistant to unraveling. An underlying assumption was as familiar as any disaster that had come before:

The risk models were assuming that the future would be no different than the past. [p.41]
Subsequently, O'Neil becomes a data scientist for Intent Media, working on algorithms to predict the better prospects among websites' visitors. The leap from math models for futures to math models on websites put her firmly in the realm of big data, which is where Weapons of Math Destruction really begins.
- Vaughan


https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815


Thursday, August 23, 2018

Gaming platforms



With Facebook we see algorithms have replaced editorial boards... a lot of people welcome that... but they may not have entirely thought through the implications. The Facebook and Twitter platforms have been gamed/amplified by clever/nefarious state-backed programmers. A lot of positive work done to engineer the Internet has, like the snake eating its tail, begun to devour itself. Her work is not "light reading," but Renee DiResta is someone who I find has really thought through this stuff, and is thinking several steps ahead of the bad guys. - Vaughan

Related

Tuesday, August 21, 2018

Facebook fights broadcasts of confusion

Facebook continues to be used as a vehicle for disinformation. This is done by publishing provocative news (not always fake, but certainly presented with nefarious gusto) under false pretenses to widen divisions in a large population. Facebook said on Tuesday that it had identified several new Iranian and Russian influence campaigns on its platform designed to mislead people in different countries and regions. The able Renee DiResta of the New Knowledge research group said "malicious narratives are spreading to mislead people around the world." The news comes on the same day as reports that Microsoft has found Russian government-affiliated websites masquerading as the websites of prominent American conservative think tanks. The saw of confusion cuts in all directions. Know your links, or you may be sharing falsehoods that have suspicious origin and negatively disruptive intention. - Vaughan

Monday, August 20, 2018

How well can neurals generalize across hospitals?



Which features, and in what quantity, influence a convolutional neural network’s (CNN’s) decision? To find the answer in radiology, work is needed, writes researcher John Zech on Medium. The matter gains increased importance as researchers look to ‘go big’ with their data, and to create models based on X-rays obtained from different hospitals.

Before tools are used to crunch big data for actual diagnosis "we must verify their ability to generalize across a variety of hospital systems" writes Zech.

Among findings:

that pneumonia screening CNNs trained with data from a single hospital system did generalize to other hospitals, though in 2 / 4 cases their performance was significantly worse than their performance on new data from the hospital where they were trained.

He goes further:

CNNs appear to exploit information beyond specific disease-related imaging findings on x-rays to calibrate their disease predictions. They look at parts of the image that shouldn’t matter (outside the heart for cardiomegaly, outside the lungs for pneumonia). Initial data exploration suggests they appear to rely on these more for certain diagnoses (pneumonia) than others (cardiomegaly), likely because the disease-specific imaging findings are harder for them to identify.

These findings come against a backdrop: An early target for IBM’s Watson cognitive software has been radiology diagnostics. Recent reports question the efficacy thereof. Zech and collaborators’ work shows another wrinkle on the issue, and the complexity that may test estimates of early success for deep learning in this domain. - Vaughan
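A minimal sketch of the kind of cross-site check Zech argues for might look like the following. The model, synthetic data, and "hospital" confound are invented for illustration; this is not the paper's code, but it shows how a model can look great internally by leaning on a site-specific cue that vanishes at another hospital.

```python
# Hypothetical sketch: train on "Hospital A" data, then compare AUC on held-out
# Hospital A data versus "Hospital B" data to probe generalization. All synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_hospital(n, confound_strength):
    """Twenty weak 'imaging' features plus one site-policy 'marker' feature.
    confound_strength controls how strongly the marker tracks the disease label."""
    y = rng.integers(0, 2, size=n)
    imaging = rng.normal(size=(n, 20)) + y[:, None] * 0.4
    marker = y * confound_strength + rng.normal(scale=0.5, size=n)
    return np.column_stack([imaging, marker]), y

X_a, y_a = make_hospital(3000, confound_strength=2.0)  # marker leaks the label here
X_b, y_b = make_hospital(1000, confound_strength=0.0)  # but not at the external site

split = 2000
model = LogisticRegression(max_iter=1000).fit(X_a[:split], y_a[:split])

auc_internal = roc_auc_score(y_a[split:], model.predict_proba(X_a[split:])[:, 1])
auc_external = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])
print(f"internal AUC: {auc_internal:.3f}   external AUC: {auc_external:.3f}")
```

The gap between the two AUC numbers is the point: performance measured only where the model was trained can overstate how well it will travel.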

Related
https://arxiv.org/abs/1807.00431
https://medium.com/@jrzech/what-are-radiological-deep-learning-models-actually-learning-f97a546c5b98
https://en.wikipedia.org/wiki/Convolutional_neural_network
https://www.clinical-innovation.com/topics/artificial-intelligence/new-report-questions-watsons-cancer-treatment-recommendations

Sunday, August 19, 2018

DeepMind AI eyes ophthalmological test breakthrough

Eyeball to eyeball with DeepMind.

DeepMind, the brainy bunch of British boffins whom Google picked up to carry forward the AI torch, has reported in a scientific journal that it succeeded in employing a common ophthalmological test to screen for many health disorders.

So reports Bloomberg.

DeepMind’s software used two separate neural networks, a kind of machine learning loosely based on how the human brain works. One neural network labels features in OCT images associated with eye diseases, while the other diagnoses eye conditions based on these features.

Splitting the task means that - unlike an individual network that makes diagnoses directly from medical imagery - DeepMind’s AI isn’t a black box whose decision-making rationale is completely opaque to human doctors, [a principal said].

The group, which encountered controversy over its use of patient data in the past, said it has cleared important hurdles and hopes to move to clinical tests in 2019.
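The two-network split described above can be sketched structurally in a few lines of PyTorch. This is purely an illustrative outline under assumed shapes and class counts, not DeepMind's architecture: the point is that the second network reads only the intermediate tissue map, which keeps the hand-off between the two stages inspectable.

```python
# Hypothetical structural sketch of a two-stage pipeline (not DeepMind's model):
# stage one labels tissue features in an OCT slice, stage two diagnoses from
# that intermediate feature map rather than from the raw scan.
import torch
import torch.nn as nn

N_TISSUE_CLASSES = 5      # assumed number of segmentation labels
N_DIAGNOSES = 4           # assumed number of diagnosis/referral classes

# Stage 1: maps a single-channel OCT slice to a per-pixel tissue-class map.
segmenter = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, N_TISSUE_CLASSES, kernel_size=1),
)

# Stage 2: reads only the intermediate tissue map, not the raw scan.
classifier = nn.Sequential(
    nn.Conv2d(N_TISSUE_CLASSES, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, N_DIAGNOSES),
)

oct_slice = torch.randn(1, 1, 128, 128)           # fake OCT slice
tissue_map = segmenter(oct_slice).softmax(dim=1)  # inspectable intermediate output
diagnosis_logits = classifier(tissue_map)
print(tissue_map.shape, diagnosis_logits.shape)
```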

Related
https://www.bloomberg.com/news/articles/2018-08-13/google-s-deepmind-to-create-product-to-spot-sight-threatening-disease


Monday, June 4, 2018

these deep neural nets just sort of keep getting deeper and bigger


hard to open up those many layered neurals.

to wrap your head around a hundred million weights.

that's harder to understand compared to linear regression.

these deep neural nets just sort of keep getting deeper and bigger.

cc: ummings

Sunday, May 20, 2018

ODSC placekeeper

Sorry I missed the Open Data Science Conference & Expo in Boston earlier this month. I could even have taken that there bus on the left. It was one of those things. This year has a scattered plot. I would have liked to accelerate my data science knowledge, get some training, and do some networking. ODSC East 2018 is one of the largest applied data science conferences in the world. But let's think of this as a bookmark, a placekeeper, a mnemonic jigger to pick up where we left off. Find out more: https://odsc.com/boston

Tuesday, April 3, 2018

Recalling good old Obama days



The NYTimes had an editorial about Facebook data privacy yesterday. In it they recall Obama's efforts in this regard, which we saw firsthand at an MIT event back in 2014. I got to cover it as part of my job.

I remember thinking at the time that Obama’s Data Privacy Fact Finding committee was likely to be sidetracked (and co-opted by advertising giants Facebook and Google and telecoms like Verizon and Comcast and their soldiers among the MIT high tech intelligentsia).

That feeling emerged as the conference events ensued, which revolved around encryption and differential privacy and the other hemming and hawing that characterizes the corridors of technology power.

A colleague and I agreed the theme that emerged most prominently was that data was the "new gold" or the "new oil" - it seems overblown (why not the "new tulips"?), until you see a room full of policy and commerce people discussing how much data is going to change the world as we know it. Ad nauseam.

Whether they were right or wrong, we more or less settled, was less important than the palpable sense that something akin to gold or oil "fever" was in the air. Which brings us back to Facebook, seen in a new light, given the way its data (your data) ended up in the hands of Cambridge Analytica.

The Times's recent editorial avows there is no reason to start from scratch when it comes to data privacy today - that Obama's privacy proposals of 2012 and thereafter form a basis for data rights. I am not so sure there was much in the way of real change at work there. I don't want to sound relativistic like the Trump cracker contingent, but there wasn't much difference between the left and right when push came to shove on privacy back in 2004. - Jack Ignatius Vaughan

Related
https://www.nytimes.com/2018/04/01/opinion/facebook-lax-privacy-rules.html
https://itsthedatatalking.blogspot.com/2014/03/encryption-and-differential-privacy.html 



Orwell's Bad Dream Lives

Provided uninterrupted


Tuesday, March 27, 2018

False news travels faster

Steve Lohr's "Why we are easily seduced by false news" recalls an old adage: It takes two to tango.

Yes, the IRA attacked America in the soft underbelly known as the Facebook newsfeed, but what made that tummy so flaccid? It was not just the broadcaster - the broadcaster found receivers, many of them. Oafs, retired and semiretired; students, part-time and less; nightwatchmen and nightwatchwomen, clicking on their smartphones.

They danced with the Ruskie night riders. And they danced on the winds of false news, which, Lohr reports, follows a unique trajectory. He focuses on the MIT study that found false news travels faster than true news - that false claims were 70% more likely than the truth to be shared on Twitter.

It took true stories about six times longer than false ones to reach 1,500 people, the MIT study disclosed.

The research was published in Science magazine. It examined stories posted to Twitter from 2006 until 2017, tracking 126,000 stories tweeted by roughly 3 million people more than 4.5 million times. News was defined broadly.

What is it about people that makes them more likely to share false news? It's said here that true news inspired more anticipation, sadness and joy - while false claims elicited greater surprise and disgust. I guess you can say what is false is more visceral.

Should journalism classes be required of citizens in a 21st-century democracy? As I recall, the 20th century journalism teachers told us - first day of class - that you did not have to go to journalism school to be a journalist. Were people different then? Was the environment different than today's? - Jack Vaughan


===
I remember, in the run-up to the election, losing my temper with all the false things I was seeing - can't say I really understood what was going on, but I really wailed away on Facebook. Visceral, one night. Yes, yes. Take this, y'all who reposteth Breitbart, I railed too.

https://www.nytimes.com/2018/03/08/technology/twitter-fake-news-research.html

Thursday, March 22, 2018

Facebook faces breach



Bannon at the controls of the Cambridge Analytica voter vaporizer.
Gonna tell you a little story that'll make The Man From Uncle sound like Howdy Doody. Bear with me.
There are two trains coming down the track. One is Cambridge Analytica - a big data operation HQ'd in Britain. The other is the IRA, the Internet Research Agency, a Russian social media hack.
Cambridge Analytica comprises a bunch of statisticians and programmers who found some warm fuzzy US political venture money and joined forces with an impish devil.
They set up a data gathering project, “thisisyourdigitallife,” that offered a personality prediction, and billed itself on Facebook as “a research app used by psychologists.” (I'd add a bit more on the brains and funding of thisisyourdigitallife if I get the chance.) The test could go something like: Do you like Manfred Mann AND Joni Mitchell? You are a precious introvert. What about Ted Nugent AND Deep Purple? You are an outgoing extrovert. I digress.
thisisyourdigitallife paid users small sums to take a personality quiz and download an app, which would scrape some private information from their profiles and those of their friends - activity that Facebook more or less kinda permitted at the time.
That profile helped them to figure out if you were a conspiracy buff, and if so you could be pitched posts that fed that inclination, which you could have shared, and so on.
This resulted in 50 million raw profiles that were forwarded to Cambridge Analytica... A principal officer in Cambridge Analytica was Steve "The Imp of the Perverse" Bannon. (It should be noted that their VC backers originally sought to help Ted Cruz - it took a while to find the right potion or carrier.)
Here comes the second train: The Internet Research Agency, aka Glavset, the Trolls from Olgino or kremlebots. It has been charged by the US DoJ with criminal interference in the 2016 election. These trolls thrived on hacked data such as that drawn from innocuous personality tests you might take online.
As far as I am aware, a link between the IRA and Cambridge Analytica has not been established - I stand before you today to submit that it seems like a distinct possibility. (It is all dark and complicated - not like the good old days where the president had a tape recorder rolling while he plotted nefariously, and there was a fully functioning Congress and opposition party, also, by the way.)
If you read the attached Facebook press release you get some of the gist of what is afoot in the convoluted James Bond scenario called Cambridge Analytica.
Since the first release there has been an amendment. One press account described what happened as a hack or a hijack, so Facebook responded. What Facebook asks you to do is to not think of all this as a hack of your data, but to instead understand that their policies were insufficient 2-4 years ago but have been updated. Democracy in America at Facebook HQ today is about covering its hinder.

See Facebook release March 16, 2018 - Suspending Cambridge Analytica and SCL Group from Facebook
https://newsroom.fb.com/news/2018/03/suspending-cambridge-analytica/



Facebook spurred GDPR, if only in small part. Let's tune into a recent podcast I did on that topic.

Monday, February 19, 2018

Cybernetic Sutra

I'd had an opportunity in college days to study comparative world press under professor Lawrence Martin Bittman, who introduced BU journalism students to the world of disinformation, a discipline he'd learned firsthand in the 1960s, before his defection to the West, as a head of Czech intelligence. We got a view into the information wars within the Cold War. This gave me a more nuanced view of the news than I might otherwise have known. Here I am going to make a jump.

I'd begun a life-long dance with the news. 

I'd also begun a life-long study of cybernetics. 

And lately the two interests have begun oddly to blend. 

It was all on the back of Really Simple Syndication - RSS - and its ability to feed humongous quantities of online content in computer-ready form. It made me a publisher, as able as Gutenberg, and my brother a publisher, and my brother-in-law a publisher, and on ...

Cybernetics was a promising field of science that seemed ultimately to fizzle. After World War II, led by M.I.T.'s Norbert Wiener and others, cybernetics arose as, in Wiener's words, "the scientific study of control and communication in the animal and the machine."

It burst rather as a movement upon the mass consciousness at a time when fear of technology and the dehumanization of science were a growing concern. As the shroud of wartime secrecy dispersed, Wiener penned Cybernetics in 1948, which was followed by a popularization.

Control, communication, feedback, regulation. It took its name from the Greek root kybernetes, the steersman. Wiener - Brownian motion - artillery tables - development of the thermostat, autopilot, differential analyzer, radar, neural networks, back propagation.
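To make the feedback idea concrete, here is a toy, hypothetical thermostat loop in Python - sense, compare against a goal, feed a correction back into the system - which is the regulation pattern Wiener was describing. All the numbers are invented.

```python
# Toy illustration of a cybernetic feedback loop: an on/off thermostat that
# senses the gap to a goal temperature and feeds a correction back each hour.
def simulate_thermostat(target=20.0, outside=5.0, hours=12, heater_power=6.0):
    temperature = outside
    readings = []
    for _ in range(hours):
        error = target - temperature                     # communication: sense the gap
        heating = heater_power if error > 0 else 0.0     # control: on/off decision
        # regulation: heat added, minus leakage toward the outside temperature
        temperature += heating - 0.3 * (temperature - outside)
        readings.append(round(temperature, 1))
    return readings

# Temperature climbs toward the 20-degree goal, then cycles around it.
print(simulate_thermostat())
```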

Cybernetics flamed out in a few years, though it made a peculiar reentry in the era of the WWW. Flamed out but, somewhat oddly, continued as an operational style in the USSR for quite some time more. Control, communication, feedback, regulation played out there somewhat differently.

A proposal for a Soviet Institute of Cybernetics included "the subjects of logic, control, statistics, information theory, semiotics, machine translation, economics, game theory, biology, and computer programming."1 It came back to mate with cybernetics on the web in the combination of agitprop and social media, known as Russian meddling, that slightly tipped the scales, arguably, of American politics.

1 http://web.mit.edu/slava/homepage/reviews/review-control.pdf

Sunday, February 4, 2018

Pixie dust of technology





Back in the day, the Obama campaign got good press for its efforts to employ technology and then-new social media platforms to organize a large political base. Part of the effort was Dipayan Ghosh, who served in the Obama White House. Like others, Ghosh is having second - or deeper - thoughts on the subject. In a report on "#DigitalDeceit" he and a coauthor ruminate on the Internet giants' (Google's and Facebook's) alignment with advertising motivations - and the resultant penchant for misinformation. Comment: Technology always exists within a larger context, and will eventually be subsumed by it. What it will do is cast a haze of pixie dust over ethos, established mores, institutional memory. The haze gradually recedes. -- Jack Vaughan

Sunday, January 21, 2018

AI drive spawns new takes on chip design


As soon as we solve machine learning we will fix the printer.
It has been interesting to see a re-emergence of interest in new chip architectures. Just when you think it has all been done and there's nothing new. Bam! For sure.

The driver these days is AI, but more particularly the machine learning aspect of AI. GPUs jumped out of the gamer console and into the Google and Facebook data centers. But there was more in the way of hardware tricks to come. The effort is to get around the tableau I have repeatedly cited here: the scene is the data scientist sitting there thumb-twiddling while the neural machine slowly does its learning.

I know when I saw that Google had created a custom ASIC for TensorFlow processing, I was taken aback. If new chips are what is needed to succeed in this racket, it will be a rich man's game.

Turns out a slew of startups are on the case. This article by Cade Metz suggests that at least 45 startups are working on chips for AI-type applications such as speech recognition and self-driving cars. It seems the Nvidia GPU that has gotten us to where we are may not be enough going forward. Coprocessors for coprocessors - chips that shuttle data about in I/O roles for GPUs - may be the next frontier.

Metz names a number of AI chip startups: Cerebras, Graphcore, Wave Computing, Mythic, Nervana (now part of Intel). - Jack Vaughan

Related
https://www.nytimes.com/2018/01/14/technology/artificial-intelligence-chip-start-ups.html

Monday, January 8, 2018

What is the risk of AI?

Happy New Year from all of us at the DataDataData blog. Let's start the year out with a look at Artificial Intelligence - actually a story thereof. That is, "Leave Artificial Intelligence Alone" by Andrew Burt, appearing in last Friday's NYTimes' Op-Ed section.

Would that people could leave AI alone! You can't pick up a supermarket sales flyer without hearing someone's bit on the subject. As Burt points out, a lot of the discussion is unnecessarily - and unhelpfully - doomy and gloomy. Burt points out that AI lacks definition. You can see the effect in much of the criticism, which lashes out with haymakers at a phantom - one that really comprises very many tributary technologies, quite various ones at that.

Some definition, some narrowing of the problem scope is in order.

If you study the history of consumer data privacy you discover, as Burt reminds us, the Equal Credit Opportunity Act of 1974. Consider it as a pathway for data privacy that still can be followed.

Burt also points to SR 11-7 regulations that are intended to provide breadcrumbs back to how and why trading models were constructed, so that there is good understanding of risk involved in the automated pits of Wall Street.

Within the United States’ vast framework of laws and regulatory agencies already lie answers to some of the most vexing challenges created by A.I. In the financial sector, for example, the Federal Reserve enforces a regulation called SR 11-7, which addresses the risks created by the complex algorithms used by today’s banks. SR 11-7’s solution to those challenges is called “effective challenge,” which seeks to embed critical analysis into every stage of an algorithm’s life cycle — from thoroughly examining the data used to train the algorithm to explicitly outlining the assumptions underlying the model, and more. While SR 11-7 is among the most detailed attempts at governing the challenges of complex algorithms, it’s also one of the most overlooked.

Burt sees such staged algorithm analysis as a path for understanding AI and machine learning going forward.

It is good to see there may be previous experience that can be tapped when looking at how to handle AI decision making - as opposed to jumping up and down and yelling 'the sky is falling.'

As he says, it is better to distinguish the elements of AI application according to use cases, and look at regulation specifically in verticals - where needed. 

Spoke with Andrew Burt last year as part of my work for SearchDataManagement - linked to here: Machine learning meets Data Governance. - Jack Vaughan