Week of July 3rd

“Hanging in the cosmos 🌌 like a space ornament”

Happy Birthday 🎂🎁🎉🎈 America 🇺🇸!

“Now let me welcome everybody to the Wild, Wild West 🤠. A state database that’s untouchable like Eliot Ness.” So, after spending a good concentrated week in the “humongous” document database world better known as the popular MongoDB🍃, it only made sense to continue our Jack Kerouac-like adventures through the universe 🌌 of “Not only SQL” databases.  

“So many roads, so many detours. So many choices, so many mistakes.” -Carrie Bradshaw

But with so many Document databases, Table and Key-value stores, Columnar and Graph databases to choose from in the NoSQL universe, where shall we go? Well, after a brief deliberation, we turned to the one place that empowers every person and every organization on the planet to achieve more. That’s right, Microsoft! Besides, we haven’t been giving Mr. Softy enough love ❤️ in our travels. So, we figured we would take a stab and see what MSFT had to offer. Oh boy, did we hit eureka with Microsoft’s Cosmos DB🪐!

For those not familiar with Microsoft’s Cosmos DB🪐, it was released for GA in 2017. The solution had morphed out of Azure DocumentDB (the “Un-cola”🥤of document databases of its day), which was initially released in 2014. At the time of its inception, Azure DocumentDB was the only NoSQL Cloud☁️ solution (MongoDB🍃 Atlas☁️ was released two years later in 2016), but its popularity was still limited. Fortunately, MSFT saw the “forest 🌲🌲🌲 through the trees🌳” or, shall I say, the planets🪐 through the stars ✨ and knew there was a lot more to NoSQL than just some JSON and a bunch of curly braces. So, they “pimped up” Azure DocumentDB and gave us the Swiss🇨🇭 Army knife of all NoSQL solutions through their rebranded offering, Cosmos DB🪐.

Cosmos DB 🪐 is a multi-model NoSQL Database as a Service (NDaaS) that manages data at planetary 🌎 scale ⚖️! Huh? In other words, Cosmos DB🪐 supports 6 different NoSQL solutions through the beauty of APIs (Application Programming Interfaces). Yes, you read that correctly. Six! Cosmos DB🪐 supports the SQL API, which was originally intended for the aforementioned Azure DocumentDB and uses the friendly SQL query language; the MongoDB🍃 API (for all the JSON fans); Cassandra (columnar database); Azure Table Storage (table); etcd (key-value store); and last but certainly not least the Gremlin👹 API (graph database).

Cosmos DB🪐 provides virtually unlimited scale ⚖️ through both storage and throughput and it automatically manages the growth of the data with server-side horizontal partitioning.

So, no worrying about adding more nodes or shards. …And that’s not all! Cosmos DB🪐 does all the heavy lifting 🏋🏻‍♀️ with automatic global distribution and server-side partitioning for painless management over the scale and growth of your database. Not to mention, it offers a 99.999% SLA when data is distributed across multiple regions 🌎 (only a mere four 9s when you stick to a single region).

Yes, you read that right, too. 99.999% guarantee! Not just on availability… No, No, No… but five 9s on latency, throughput, and consistency as well!

Ok, so now I sound like an MSFT fanboy. Perhaps? Now that we were fully percolating ☕️ with excitement, who would guide us through such amazing innovation? Well, we found just the right tour guide in native New Yorker Lenni Lobel. Through his melodious 🎶 voice and over 5 decades of experience in IT, Lenni takes us on an amazing journey through Cosmos DB🪐 with his Pluralsight course Learning Azure Cosmos DB🪐.

In the introduction, Lenni gives us his interpretation of NoSQL, which answers the common problem of the 3Vs of data, and the roots of Cosmos DB🪐, which we touched on earlier. Lenni then explains how the Cosmos DB🪐 engine is an atom-record-sequence (ARS) based system. In other words, the database engine of Cosmos DB🪐 is capable of efficiently translating and projecting multiple data models by leveraging ARS. Still confused? Don’t be. In simpler terms, under the covers Cosmos DB🪐 leverages the ARS framework to support multiple NoSQL technologies. It does this through APIs, placing each data model in a separate schema-agnostic container, which is super cool 😎! Next, he discusses one of the cooler 😎 features of Cosmos DB🪐: automatic indexing. If you recall from our MongoDB🍃 travels, one of the main takeaways was a strong emphasis on the need for indexes in MongoDB🍃. Well, in Cosmos DB🪐 you need not worry. Cosmos DB🪐 does this for you automatically. The only main concern is choosing the right partition key🔑 for your container, and you must choose wisely, otherwise performance and cost will suffer.

Lenni further explains how one quantifies performance for data through latency and throughput. In the world 🌎 of data, latency is how long the data consumer waits for the data to be received from end to end, whereas throughput is the performance of the database itself. First, Mr. Lobel demonstrates how to provision throughput in Cosmos DB🪐, which provides predictable throughput to the database through a serverless-style approach measured in Request Units (RUs). RUs are a blended measure of computational cost: CPU, memory, disk I/O, and network I/O.

So, like most serverless approaches, you don’t need to worry about provisioning hardware to scale ⚖️ your workloads. You just need to allocate the right amount of RUs to a given container. The good news on RUs is that this setting is flexible, so it can be easily dialed up and down through the portal, or even specified at the individual query level.

Please note: data writes are generally more expensive than data reads. The beauty of the RU approach is that you are guaranteed throughput and you can predict cost. You will even be notified through a friendly error message when your workloads exceed a certain threshold. There is an option to run your workloads in an “auto-pilot ✈️ mode,” in which Cosmos DB🪐 adjusts the RUs to a given workload, but beware: this option could be quite costly, so proceed with caution and discuss it with MSFT before considering using it.

In an effort to be fully transparent, unlike some of its competitors, Microsoft offers a Capacity Calculator so you can figure out exactly how much it will cost to run your workloads (reserved RUs run $0.008 per 100 RU/sec, per hour). The next important consideration in regards to throughput is horizontal partitioning. Some might regard horizontal partitioning as strictly a storage concern, but in fact it also massively impacts throughput.
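
To make the math concrete, here’s a quick back-of-the-envelope sketch in JavaScript (Cosmos DB🪐’s own server-side language) using the $0.008 per 100 RU/sec per hour figure above. The rate and the 730-hour month are illustrative assumptions; check the Capacity Calculator for real numbers.

```javascript
// Sketch: estimate provisioned-throughput cost from the rate quoted above
// ($0.008 per 100 RU/sec, per hour). Rates vary by region and offer type,
// so treat this as a sanity check, not a billing tool.
function estimateMonthlyCost(ruPerSec, ratePer100RuHour = 0.008, hoursPerMonth = 730) {
  const hourly = (ruPerSec / 100) * ratePer100RuHour;
  return { hourly, monthly: hourly * hoursPerMonth };
}

// 400 RU/s is the typical minimum for a provisioned container
const est = estimateMonthlyCost(400);
console.log(est.hourly);  // ≈ 0.032 ($/hour)
console.log(est.monthly); // ≈ 23.36 ($/month)
```

Handy for eyeballing whether a throughput bump will sting before you commit to it in the portal.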

“Yes, it’s understood that partitioning and throughput are distinct concepts, but they’re symbiotic in terms of scale-out.”

Anyway, no need to fret… We simply create a container and let Cosmos DB🪐 automatically manage these partitions for us behind the scenes (including the distribution of partitions within a given data center). However, keep in mind that we must choose a proper partition key🔑, otherwise we can have a rather unpleasant😞 and costly🤑 experience with Cosmos DB🪐. Luckily, there are several best practices around choosing the right partition key🔑. Personally, I stick to the rule of thumb 👍 of always choosing a key🔑 with many distinct values, like in the hundreds or thousands. This helps avoid the dreaded Hot🔥 Partition.

Please note: partition keys 🔑 are immutable, but there are documented workarounds for changing them in case you find yourself in this scenario.

Now that we have a good grasp on how Cosmos DB🪐 handles throughput and latency through RUs and horizontal partitioning, what if your application is global 🌎 and your primary data is located halfway around the world 🌍? Performance could suffer tremendously… 😢

Cosmos DB🪐 handles such challenges with one of the solution’s most compelling features: global distribution of data. Microsoft intuitively leverages the ubiquity of its global data centers and offers turnkey, “point-and-click” global distribution so your data can seamlessly be geo-replicated across regions.

In cases where you have multiple masters, or data writers, Cosmos DB🪐 offers three options to handle write conflicts:

  • Option 1: Last Writer Wins (default) – based on the highest _ts property, or any other numeric property you designate as the conflict resolver property. The write with the higher value wins; if the property is left blank, the write with the higher _ts wins.
  • Option 2: Merge Procedure (custom) – based on the result of a stored procedure
  • Option 3: Conflict Feed (offline resolution) – conflicts are written to a conflicts feed to be resolved after the fact
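
As a sketch of how Option 1 behaves, here’s a toy Last Writer Wins resolver. The _ts field mirrors Cosmos DB🪐’s system timestamp property, but the function itself is just our illustration, not the service’s code.

```javascript
// Sketch of Option 1 (Last Writer Wins): the write whose conflict-resolver
// property is highest wins, falling back to the system _ts timestamp when
// no custom property is set.
function lastWriterWins(a, b, resolverProp = '_ts') {
  const va = a[resolverProp] ?? a._ts;
  const vb = b[resolverProp] ?? b._ts;
  return vb > va ? b : a;
}

// Two regions update the same document concurrently:
const westWrite = { id: 'doc1', qty: 5, _ts: 1625097600 };
const eastWrite = { id: 'doc1', qty: 7, _ts: 1625097605 };
console.log(lastWriterWins(westWrite, eastWrite).qty); // 7 – the later _ts wins
```

Swap in a custom numeric property (say, a version counter) as resolverProp and the same rule applies to it instead of _ts.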

Whew 😅… But what about data consistency? How do we ensure our data is consistent in all of our locations? Well, once again Cosmos DB🪐 does not disappoint, supporting five different options. Of course, like life itself, there are always tradeoffs. Depending on your application’s needs, you will have to determine which matters more: latency or availability? Below are the options, ordered from highest latency (strongest consistency) to highest availability (weakest consistency):

  1. Strong – no dirty reads. Higher latency on writes, which wait to be committed to the Cosmos DB quorum. Higher RU costs.
  2. Bounded Staleness – dirty reads possible, bounded by time and number of updates, which is kind of like “Skunked🦨 beer🍺”: you decide the level of freshness you can tolerate.
  3. Session – (default) no dirty reads for writers (read your own writes); dirty reads are possible for other users.
  4. Consistent Prefix – dirty reads possible, but reads never see out-of-order writes.
  5. Eventual – stale reads possible, no guaranteed order. Fastest.
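
To make option 2 a bit less abstract, here’s a toy Bounded Staleness check: you pick a freshness window (max versions and/or seconds behind the primary) and a replica read only counts as acceptable inside it. The thresholds and field names are invented for the example.

```javascript
// Toy Bounded Staleness: a replica may serve reads only while it lags the
// primary by at most maxVersions updates AND maxLagSeconds of time.
function withinStalenessBound(replica, primary, maxVersions = 10, maxLagSeconds = 5) {
  const versionLag = primary.version - replica.version;
  const timeLag = primary.timestamp - replica.timestamp;
  return versionLag <= maxVersions && timeLag <= maxLagSeconds;
}

// 2 versions / 3 seconds behind: still "fresh enough"
console.log(withinStalenessBound({ version: 98, timestamp: 100 }, { version: 100, timestamp: 103 })); // true
// 20 versions behind: the beer has skunked
console.log(withinStalenessBound({ version: 80, timestamp: 90 }, { version: 100, timestamp: 103 })); // false
```

Tighten the window and you drift toward Strong; loosen it and you drift toward Eventual.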

So, after focusing on these core concepts within Cosmos DB🪐, we were ready to dig our heels 👠👠 right in and get this bad boy up and running 🏃🏻. After waiting about 15 minutes or so… we had our Cosmos DB🪐 fired up 🔥 and running in Azure… Not bad for such a complex piece of infrastructure. 😊

Next, we created a Database and then a Container and started our travels with the SQL API. Through the portal, we were easily able to manually write some JSON documents and add them to our collection.

In addition, through Lenni’s brilliantly written .NET Core code samples, we were able to automate writing, querying, and reading data in bulk. Further, we were able to easily adjust throughput and latency through the portal by tweaking the RUs and enabling multi-region replication, which we demonstrated by re-running Lenni’s code after the changes.

Getting Lenni’s code to work did take a bit of troubleshooting with Visual Studio 2019 and a bit of understanding of how to fix the .NET SDK errors and some compilation errors from NuGet packages, all of which was out of our purview. But needless to say, we figured out how to troubleshoot the NuGet packages and modify some of the parameters in the code, like the _ID field, the Cosmos DB🪐 server, and the Cosmos DB master key 🔑.

We were able to enjoy the full experience of the SQL API, including the power⚡️ of using the familiar SQL query language and not having to worry about all the

db.collection.insertOne()

this, and

db.collection.find(),

db.collection.updateOne(),

db.collection.deleteOne()

that…

We also got to play with server-side programming in Cosmos DB🪐, like the familiar concepts of stored procedures, triggers, and user-defined functions, which in Cosmos DB🪐 are basically self-contained JavaScript functions deployed to the database for execution. But one can always pretend we are in the relational database world. 😊
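
As a sketch of what one of these server-side JavaScript functions looks like, here’s a minimal stored procedure built on the getContext() API, plus a tiny local stand-in for that API so the example runs outside Cosmos DB🪐 (only the sproc itself would actually be deployed; the greeting logic is our invention).

```javascript
// Sketch: a Cosmos DB stored procedure is just a self-contained JavaScript
// function that talks to the database through the server-side getContext()
// API. createGreeting is the part you'd deploy to the container.
function createGreeting(name) {
  const response = getContext().getResponse();
  if (!name) throw new Error('name is required');
  response.setBody('Hello, ' + name + '!');
}

// --- local stand-in for the server-side context (NOT part of the sproc) ---
let body;
function getContext() {
  return { getResponse: () => ({ setBody: (b) => { body = b; } }) };
}

createGreeting('Cosmos');
console.log(body); // "Hello, Cosmos!"
```

Inside the service, a real sproc would use getContext().getCollection() to read and write documents transactionally within its partition.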

Next, we got to test drive 🚙 the Data Migration Tool 🛠, which allows you to import data from existing data sources into Cosmos DB🪐.

From our past experiences, we have found Microsoft has gotten quite good at creating these types of tools 🧰. The Cosmos DB🪐 Data Migration Tool offers great support for many data sources like SQL Server, JSON files, CSV files, MongoDB, Azure Table storage, and others.

First, we used the UI to move data from Microsoft SQL Server 2016, using the popular AdventureWorks sample database, to Cosmos DB🪐, and then later through the CLI (AzCopy) from Azure Table storage.

Notably, Azure Table storage is on the road map to be deprecated and automatically migrated to Cosmos DB🪐, but this was a good exercise for those who can’t wait and want to take advantage of such an awesome platform today!

As a grand finale, we got to play with graph databases through the Gremlin 👹 API. As many of you might be aware, graph databases are becoming increasingly popular these days, mostly because data in the real world is naturally connected through relationships, and graph databases do a better job than our traditional RDBMSes at managing data with many complex relationships.

Again, it’s worth noting that in the case of Cosmos DB🪐 it doesn’t really matter what data model you’re implementing, because, as we mentioned earlier, it leverages the ARS framework. So as far as Cosmos DB🪐 is concerned, it’s just another container to manage, and we get all the horizontal partitioning, provisioned throughput, global distribution, and indexing goodness 😊.

We were new to the whole concept of graph databases, so we were very excited to get some exposure here, which looks to be a precursor for further explorations. The most important highlight of graph databases is understanding Vertex and Edge objects. These are basically just fancy-schmancy words for entities and relationships: a Vertex is an entity, and an Edge is a relationship between any two vertices. Both can hold arbitrary key-value pairs 🔑🔑 and are the building blocks of a graph database.

Cosmos DB🪐 utilizes the Apache TinkerPop standard, which uses Gremlin as a functional, step-by-step language to create vertices and edges, and stores the data as GraphSON, or “Graphical JSON.”

In addition, Gremlin 👹 allows you to query the graph database using simple traversals through a myriad of relationships, or Edges. The more edges you add, the more relationships you define, and the more questions you can answer by running Gremlin👹 queries. 😊

To further our learning, Lenni once again gave us some nice demos using a fictitious company, “Acme,” and its relationships between employees, airport terminals, and restaurants, plus another example using comic book heroes, which made playing along fun.

Below are some examples of Gremlin 👹 syntax from our voyage.

g.addV('person').property('id','John').property('age',25).property('likes','pizza').property('city','NY')

g.addV('person').property('id','Alan').property('age',22).property('likes','seafood').property('city','NY')

g.addV('company').property('id','Acme').property('founded',2001).property('city','NY')

g.V().has('id','John').addE('worksAt').property('weekends', true).to(g.V().has('id','Acme'))

g.V().has('id','Alan').addE('worksAt').property('weekends', true).to(g.V().has('id','Acme'))

g.V().has('id','Alan').addE('manages').to(g.V().has('id','John'))
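
To demystify what those traversals are doing, here’s the same little graph modeled as plain JavaScript objects, with a filter standing in for roughly g.V().has('id','Acme').in('worksAt') (“who works at Acme?”). A real graph store answers this with indexes; this is just the mental model.

```javascript
// The vertices and edges created by the Gremlin statements above, as data.
const vertices = [
  { id: 'John', label: 'person', age: 25, likes: 'pizza', city: 'NY' },
  { id: 'Alan', label: 'person', age: 22, likes: 'seafood', city: 'NY' },
  { id: 'Acme', label: 'company', founded: 2001, city: 'NY' },
];
const edges = [
  { label: 'worksAt', from: 'John', to: 'Acme', weekends: true },
  { label: 'worksAt', from: 'Alan', to: 'Acme', weekends: true },
  { label: 'manages', from: 'Alan', to: 'John' },
];

// Traverse the 'worksAt' edges pointing into Acme: who works there?
const employees = edges
  .filter((e) => e.label === 'worksAt' && e.to === 'Acme')
  .map((e) => e.from);

console.log(employees); // ['John', 'Alan']
```

Every new edge label ('manages', 'livesIn', …) is another question the graph can answer without reshaping the data.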

When it comes to graph databases, the possibilities are endless. Some good use cases for a graph database would be:

  • Complex Relationships – Many “many-to-many” relationships
  • Excessive JOINS
  • Analyze interconnected data relationships
  • Typical graph applications
    • Social networks 
    • Recommendation Engines

In Cosmos DB🪐, it’s clear to see how a graph database is no different from any other data model. A graph database gets provisioned throughput and is fully indexed, partitioned, and globally distributed, just like a document collection in the SQL API or a table in the Table API.

Cosmos DB🪐 will one day allow you to switch freely between different APIs and data models within the same account, and even over the same data set. By adding this graph functionality to Cosmos DB🪐, Microsoft really hit ⚾️ this one out of the park 🏟!

“Closing time… Every new beginning comes from some other beginning’s end.”

Below are some topics I am considering for my wonderings next week:

  • Neo4J and Graph DB
  • More on Cosmos DB
  • More on MongoDB
  • More with Google Cloud Path
  • Working with Parquet files 
  • JDBC Drivers
  • More on Machine Learning
  • ONTAP Cluster Fundamentals
  • Data Visualization Tools (i.e. Looker)
  • Additional ETL Solutions (Stitch, FiveTran) 
  • Process and Transforming data/Explore data through ML (i.e. Databricks)

Stay safe and Be well –

–MCS
