Tuesday, April 30, 2019

Google improves its cloud database lineup

In the early days of cloud, data was second only to security among reasons not to migrate. Today, data may have overtaken security as the chief migration barrier – but cloud vendors have worked determinedly to fix that.

Having a single database for a business is an idea whose time came and went. Of course, you can argue that there never was a time when a single database type would suffice. But, today, fielding a selection of databases seems to be key to most plans for cloud computing.

While Amazon and to a slightly lesser extent Microsoft furnished their clouds with a potpourri of databases, Google stood outside the fray.

It’s not hard to imagine that a tech house like Google, busy inventing away, might fall into a classic syndrome where it dismisses databases that it hadn’t itself invented. Its engineers are rightly proud of homebrewed DBs such as BigQuery, Spanner and Bigtable. But having watched Microsoft and Amazon gain in the cloud, the company seems more resolute now in embracing diverse databases.

The trend was manifest earlier this month at Google Cloud Next 2019. This was the first Google Cloud confab under the leadership of Thomas Kurian, formerly of Oracle.

Kurian appears to be leading Google to a more open view on a new generation of databases that are fit for special purposes. This is seen in deals with DataStax, InfluxDB, MongoDB, Neo4j, Redis Labs and others. It is also seen in deeper support for familiar general-purpose engines like PostgreSQL and Microsoft SQL Server, taking the form of Cloud SQL for PostgreSQL and Cloud SQL for Microsoft SQL Server, respectively.

In a call from the Google Cloud Next showfloor, Kartick Sekar told us openness to a variety of databases is a key factor in the cloud decisions that enterprises are now making. Sekar, who is a Google Cloud solutions architect with consultancy and managed services provider Pythian, said built-in security and management features distinguish cloud vendors' latest offerings.

When databases like PostgreSQL, MySQL and SQL Server become managed services on the cloud, he said, users don’t have to change their basic existing database technology.
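Sekar's point can be sketched in code. In this minimal, hypothetical example, the application logic that targets a self-managed PostgreSQL server and a managed Cloud SQL for PostgreSQL instance is identical – only the connection details change. The host names, database name and user below are invented placeholders, not real endpoints.

```python
def build_dsn(host: str, dbname: str, user: str, port: int = 5432) -> str:
    """Build a standard libpq-style PostgreSQL connection string."""
    return f"host={host} port={port} dbname={dbname} user={user}"

# Self-managed, on-premises instance (hypothetical host name).
on_prem = build_dsn("db01.internal.example.com", "inventory", "app_user")

# Managed Cloud SQL for PostgreSQL instance (hypothetical address).
# Same driver, same SQL, same schema -- the cloud service takes over
# patching, backups and failover, but the database technology is unchanged.
cloud_sql = build_dsn("35.184.0.1", "inventory", "app_user")

print(on_prem)
print(cloud_sql)
```

The only delta between the two deployments is the connection string, which is the practical meaning of "users don't have to change their basic existing database technology."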

This is not to say migrations occur without some changes. “There will always be a need for some level of due diligence to see if everything can be moved to the cloud,” Sekar said.

The view here is that plentiful options are becoming par for the course in cloud. Google seems determined that no database will be left behind. Its expanded SQL Server support, in particular, bears watching, as that database's ubiquity is beyond dispute. – Jack Vaughan

Read: Google takes a run at enterprise cloud data management – SearchDataManagement.com

Saturday, April 20, 2019

DataOps, where is thy sting?


I had reason to look at the topic of DataOps the other day. It is like DevOps, with jimmies on top. When we talk techy, several things are going on, it occurred to me. That is: DataOps is to DevOps as DevOps is to last year's Agile Programming. Terms have a limited lifespan (witness the replacement of BPM with RPA). You may be saying "DataOps" today because "dataflow automation" did not elicit an iota of resonance last year; I may write a story about dataflow automation without realizing I am writing about DataOps, or vice versa. Below are technologies and use cases related to DataOps, along with stories I or colleagues wrote on the related topics.

Dataflow automation, Workflow management

Jan 15, 2019 - Another planned integration will link CDH to Hortonworks DataFlow, a real-time data streaming and analytics platform that can pull in data from a variety of ...
Sep 7, 2018 - At the same time as it advanced the Kafka-related capabilities in SMM, Hortonworks released Hortonworks DataFlow 3.2, with improved performance for ...
Aug 2, 2018 - ... or on the Hadoop platform. Data is supplied to the ODS using data integration and data ingestion tools, such as Attunity Replicate or Hortonworks DataFlow.
5 days ago - Focusing on data flow, event processing is a big change in both computer and data architecture. It is often enabled by Kafka, the messaging system created and ...


Containers and orchestration
Mar 29, 2019 - Docker containers in Kubernetes clusters give IT teams a new framework for deploying big data systems, making it easier to spin up additional infrastructure for ...

Sep 13, 2018 - Hortonworks is joining with Red Hat and IBM to work together on a hybrid big data architecture format that will run using containers both in the cloud and on ...

Jan 15, 2019 - Containers and the Kubernetes open source container orchestration system will also play a significant role in Cloudera's development strategy, Reilly said.

Performance and application monitoring

Ingest new data sources more rapidly

May 11, 2018 - The GPUs can ingest a lot of data -- they can swallow it and process it whole. People can leverage these GPUs with certain queries. For example, they do ...
5 days ago - Streaming and near-real-time data ingestion should also be a standard feature of integration software, along with time-based and event-based data acquisition; ...
Feb 11, 2019 - Landers said StoryFit has built machine learning models that understand story elements. StoryFit ingests and maps whole texts of books and scripts, and finds ...