Week of September 25th

“Dynamite🧨 with a laser beam💥…Guaranteed to blow💨 your mind🧠”

Happy National Lobster 🦞 Day!

“And here I go again on my own… Goin’ down the only road I’ve ever known”

This week we picked up where we left off last week, continuing to explore the depths of SQL Server 2019. Last week we merely scratched 💅 the surface of SQL Server 2019 as we dove🤿 into IQP and the improvements made to TempDB. This week we tackled Microsoft’s most ambitious SQL Server offering to date: SQL Server 2019 Big Data Clusters (BDC). When I first thought of BDCs, the first thing that came to mind 🤔 was a Twix Bar 🍫. Yes, we all know Twix is the “only candy with the cookie crunch,” but what makes the Twix bar so delicious 😋 is the perfect melding of smooth chocolate, chewy caramel, and of course crisp cookie🍪! Well, that’s exactly what a Big Data Cluster is like… You’re probably thinking, Huh?

Big Data Clusters (BDC) is MSFT’s groundbreaking new Big Data/Data Lake architecture that unifies virtualized business data and operational data stored in relational databases with IoT data for true real-time BI and embedded Artificial Intelligence (AI) and Machine Learning (ML). BDC combines the power⚡️of SQL Server, Spark 🔥, and the Hadoop Distributed File System (HDFS) 🐘 into a unified data platform. But that’s not all! Since BDC runs natively on Linux🐧, it’s able to embrace modern architectures for deploying applications, like Linux-based Docker🐧- 🐳 containers on Kubernetes ☸︎.

By leveraging K8s ☸︎ for orchestration, deployments of BDCs are predictable, fast 🏃🏻, elastic🧘‍♀️, and scalable ⚖️. Seeing that Big Data Clusters can run in any Kubernetes ☸︎ environment, whether on-premises 🏠 (i.e. Red Hat OpenShift) or in the cloud☁️ (i.e. Amazon EKS), BDC is a perfect fit to be hosted on Azure Kubernetes Service (AKS).

Another great feature that BDC makes use of is data virtualization, also known as “PolyBase.” PolyBase made its original debut with SQL Server 2016. It had seemed like Microsoft had gone to sleep😴 on it, but now with SQL Server 2019 BDC, Microsoft has broken out big time! BDC takes advantage of data virtualization as its “data hub,” so you don’t need to spend the time 🕰 and expense💰 of traditional extract, transform, and load (ETL) 🚜 and data pipelines. In addition, it lets organizations leverage existing SQL Server expertise and extract value from third-party data sources such as NoSQL, Oracle, Teradata, and HDFS🐘.
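To give a flavor of what that looks like, here is a minimal T-SQL sketch of exposing an Oracle table through PolyBase. The server name, credential, schema, and table names below are all hypothetical, and it assumes a database master key already exists; the wizard-generated examples later in this post show the real thing for SQL Server and HDFS🐘 sources.

        -- Hypothetical Oracle source (illustrative names); assumes a database master key exists
        CREATE DATABASE SCOPED CREDENTIAL OracleCredential
            WITH IDENTITY = 'oracle_user', SECRET = 'oracle_password';

        CREATE EXTERNAL DATA SOURCE OracleSales
            WITH (LOCATION = 'oracle://oracle-host:1521', CREDENTIAL = OracleCredential);

        CREATE EXTERNAL TABLE [virt].[Orders]
        (
            [OrderID] INT NOT NULL,
            [Amount]  DECIMAL(10, 2) NOT NULL
        )
        WITH (LOCATION = '[XE].[SALES].[ORDERS]', DATA_SOURCE = OracleSales);

        -- Queried like any local table, with no ETL pipeline in between
        SELECT TOP (10) * FROM [virt].[Orders];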

Lastly, BDC takes advantage of Azure Data Studio (ADS) for both deployment and administration of BDC. For those who are not familiar with ADS, it’s a really cool 😎 tool 🛠 that you can benefit from by acquainting yourself with. Of course, SSMS isn’t going anywhere, but with ADS you get a lightweight, cross-platform database tool 🛠 that uses Jupyter notebooks📒 and Python🐍 scripting, making deployment and administration of BDCs a breeze. OK, I am ready to rock and roll 🎶!

“I love❤️ rock n’ roll🎸… So put another dime in the jukebox 📻, baby”

Before we could jump right into the deep end of the pool 🏊‍♂️ with Big Data Clusters, we felt the need for a little primer on virtualization, Kubernetes ☸︎, and containers. Fortunately, we knew just who could deliver the perfect overview: none other than one of the prominent members of the Mount Rushmore of SQL Server, Buck Woody, through his guest appearance with longtime product team member and Technology Evangelist Sanjay Soni in the Introduction to Big Data Cluster on SQL Server 2019 | Virtualization, Kubernetes ☸︎, and Containers YouTube video. Buck has a unique style and an amazing skill for taking complex technologies and making them seem simple. His supermarket 🛒 analogy to explain Big Data, Hadoop and Spark 🔥, virtualization, containers, and Kubernetes ☸︎ is pure brilliance! Now, armed 🛡🗡 with Buck’s knowledge bombs 💣, we were ready to light 🔥 this candle 🕯!

“Let’s get it started (ha), let’s get it started in here”

Taking us through BDC architecture and deployments were newly minted Microsoft Data Platform MVP Mohammad Darab, who put together a series of super exciting videos as well as excellent, detailed blog posts on BDC, and longtime SQL Server veteran and Microsoft Data Platform MVP Ben Weissman, through his awesome Building Your First Microsoft SQL Server Big Data Cluster Pluralsight course. Ben’s spectacular course covers not only architecture and deployment but also how to get data in and out of BDCs, how to make the most out of them, and how to monitor, maintain, and troubleshoot them.

“If you have built castles 🏰 in the air, your work need not be lost; that is where they should be. Now put the foundations 🧱 under them.” – Henry David Thoreau

A big data cluster consists of several major components:

  • Controller (control plane)
  • Master Instance – manages connectivity (endpoints and communication with the other Pools 🏊‍♂️), scale-out ⚖️ queries, metadata and user databases (the target for database restores), and Machine Learning Services
  • Compute Pool 🏊‍♂️
  • Storage Pool 🏊‍♂️
  • Data Pool 🏊‍♂️
  • Application Pool 🏊‍♂️

Data not residing in your master instance is exposed through the concept of external tables. Examples are CSV files on an HDFS store or data on another RDBMS. External tables can be queried the same as local tables on the SQL Server, with several caveats (a small example follows the list below):

  • You are unable to modify the table structure or content
  • No indexes can be applied (only statistics are kept; SQL Server houses that metadata)
  • The data source might require you to provide credentials 🔐
  • The data source may also need a format definition like text qualifiers or separators for CSV files 🗃
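As a minimal sketch of those caveats in action: the [virt].[PersonPhone] external table below is the one created later in this post, while the local [Person].[Person] table and the statistics name are just stand-ins for illustration. An external table is joined like a local table, and since it can’t be indexed, you create statistics on it instead:

        -- External tables join with local tables just like regular tables
        SELECT TOP (100) p.FirstName, p.LastName, pp.PhoneNumber
        FROM   [Person].[Person] AS p
        JOIN   [virt].[PersonPhone] AS pp
               ON pp.BusinessEntityID = p.BusinessEntityID;

        -- No indexes are allowed on the external table, but statistics can be created
        CREATE STATISTICS stat_PersonPhone_BusinessEntityID
            ON [virt].[PersonPhone] (BusinessEntityID);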

The controller provides management and security for the cluster and acts as the control plane for the cluster. It takes care of all the interactions with K8s ☸︎, the SQL Server instances that are part of the cluster, and other components like HDFS🐘, Spark🔥, Kibana, Grafana, and Elasticsearch.

The controller manages:

  • Cluster lifecycle: bootstrap, delete, update, etc.
  • Master SQL Server Instance
  • Compute, data, and Storage Pools 🏊‍♂️
  • Cluster Security 🔐

The Compute Pool 🏊‍♂️ is a set of stateless SQL Server 2019 instances that work together. The Compute Pool 🏊‍♂️ leverages data virtualization (PolyBase) to scale out⚖️ queries across partitions. The Compute Pool 🏊‍♂️ is automatically provisioned as part of BDC. Management and patching of Compute Pools 🏊‍♂️ is easy because they run as Docker 🐳 containers on K8s ☸️ pods.

Please note: Queries on BDC can also function without the Compute Pool 🏊‍♂️.  

The Compute Pool 🏊‍♂️ is responsible for:

  • Joining of two or more directories 📂 in HDFS🐘 with 100+ files
  • Joining of two or more data sources
  • Joining multiple tables with different partitioning or distribution schemes
  • Data stored in Blob Storage

The Storage Pool 🏊‍♂️ consists of pods comprised of SQL Server on Linux🐧 and Spark🔥 on HDFS 🐘 (deployed automatically). All the nodes of the BDC are members of an HDFS🐘 cluster. The Storage Pool 🏊‍♂️ stores file‑based 🗂 data like CSVs, which can be queried directly through external tables and T‑SQL, or you can use Python🐍 and Spark🔥. If you already have HDFS🐘 data on either Azure Data Lake Storage (ADLS) or AWS S3 buckets🗑, you can easily mount your existing storage into your BDC without the need to shift all your data around.

The Storage Pool 🏊‍♂️ is responsible for:

  • Data ingestion through Spark🔥
  • Data storage in HDFS🐘 (Parquet format). HDFS🐘 data is spread across all storage nodes in the BDC for persistence
  • Data access through HDFS🐘 and SQL Server Endpoints

The Data Pool 🏊‍♂️ is used for data persistence and caching. Under the covers, the Data Pool 🏊‍♂️ is a set of SQL Server instances (defined at deployment) that use columnstore indexes and sharding. In other words, SQL Server will create physical tables with the same structure and evenly distribute the data across the total number of servers. Queries will also be distributed across the servers, but to the user it is transparent, as all the magic✨🎩 happens behind the scenes. The data stored in the Data Pool 🏊‍♂️ does not support transactions, as its sole purpose is caching. It is used to ingest data from SQL queries or Spark🔥 jobs, and BDC data marts are persisted in the Data Pool 🏊‍♂️ (a small sketch follows the list below).

The Data Pool 🏊‍♂️ is used for:

  • Complex Query Joins
  • Machine Learning
  • Reporting
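As a rough T-SQL sketch of how that ingestion works (the table and column names are made up for illustration; SqlDataPool is the built-in data source that points at the Data Pool 🏊‍♂️), you create an external table against the Data Pool and then load it with a plain INSERT…SELECT:

        -- Create the data source for the Data Pool if it doesn't exist yet
        IF NOT EXISTS (SELECT * FROM sys.external_data_sources WHERE name = 'SqlDataPool')
            CREATE EXTERNAL DATA SOURCE SqlDataPool
                WITH (LOCATION = 'sqldatapool://controller-svc/default');

        -- A sharded, columnstore-backed table spread across the Data Pool instances (illustrative names)
        CREATE EXTERNAL TABLE [web_clickstream_data_pool]
        (
            [click_date] DATE          NOT NULL,
            [user_id]    BIGINT        NOT NULL,
            [url]        NVARCHAR(400) NOT NULL
        )
        WITH (DATA_SOURCE = SqlDataPool, DISTRIBUTION = ROUND_ROBIN);

        -- Ingest the results of a query; the rows are distributed across the pool
        INSERT INTO [web_clickstream_data_pool]
        SELECT click_date, user_id, url
        FROM   [staging].[web_clickstream];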

The Application Pool 🏊‍♂️ is used to run jobs like SSIS packages, store and execute ML models, and host all kinds of other applications, which are generally exposed through a web service.

“This is 10% luck🍀… 20% skill… 15% concentrated power🔌 of will… 5% pleasure😆… 50% pain🤕… And a 100% reason to remember the name”

After both great overviews of the BDC architecture by Mo and Ben, we were eager to build this spaceship 🚀. First, we needed to download the required tools 🛠🧰 so we could happily😊 have our base machine from which to deploy our first BDC. I chose the Mac 💻 as my base machine, just to spice 🥵 things up.

Below are the Required Tools:

  • Azure Data Studio (ADS)
  • ADS Extension for Data Virtualization
  • Python 🐍
  • Kubectl ☸️
  • azdata
  • Azure CLI (only required if using AKS)

Optional Tools:

  • Text Editor 🗒
  • PowerShell Extension for ADS
  • Zip utility Software 🗜
  • SQL Server Utilities
  • SSH

Now that we had all our prerequisites downloaded, we next needed to determine where we should deploy our BDC. The most natural choice seemed to be AKS.

To help walk🚶‍♂️us through the installation of our base machine and the deployment of BDC using ADS on AKS, we once again turned to Mo Darab, who put together an excellent and easy-to-follow series of videos, Deploying Big Data Clusters, on his YouTube channel.

Base Machine

In his video “How to Set Up a Base Machine”, Mo used a Windows machine, as opposed to us, who went with the Mac 💻. But for all intents and purposes the steps are pretty much the same. The only difference is the package📦 manager that we need to use: on Windows the recommended package manager is Chocolatey 🍫, while on the Mac 💻 it’s Homebrew 🍺.

Here were the basic steps:

  • Install kubectl ☸️
    • brew install kubernetes-cli
  • Install Azure CLI
    • brew update && brew install azure-cli
  • Install Python 🐍
    • brew install python3
  • Install azdata
    • brew tap microsoft/azdata-cli-release
    • brew update
    • brew install azdata-cli
  • Install ADS Extension for Data Virtualization
  • Install Pandas (Manage Packages option in ADS / Add New / Pandas / click Install)

“Countdown commencing, fire one”

Now that we had our base machine up and running 🏃‍♂️, it was time ⏱ to deploy. To guide us through the deployment process, we once again went to Mo and followed along with his How to Deploy Big Data Cluster on AKS using Azure Data Studio. Mo walked us through the wizard 🧙‍♂️ in ADS, which basically creates a notebook 📒 that builds our BDC. We created our deployment Jupyter notebook 📒, clicked Run 🏃‍♂️, and let it rip☄️. Everything seemed to be humming 🎶 along, except that two hours later our Jupyter notebook was still running 🏃‍♂️.

Obviously, something wasn’t right 🤔. Unfortunately, we didn’t have much visibility into what was happening with the install through the notebook, but hey, that’s OK, we have a terminal prompt in ADS, so we can just run some K8s☸️ commands to see what’s happening under the covers. After running a few kubectl commands, we noticed an “ImagePullBackOff” error with our SQL Server images. After a little bit of research, we determined someone forgot 🙄 to update the Microsoft repo with the latest CU image.

“But there is no joy in Mudville—mighty Casey has struck out.”

So we filed a bug 🐞 on GitHub, re-ran the BDC wizard 🧙‍♂️ with the Docker🐳 settings changed to point to the next latest package available in the Docker🐳 registry, and we were back in business until… Bam!🥊🥊

Operation failed with status: 'Bad Request'. Details: Provisioning of resource(s) for container service mssql-20200923030840 in resource group mssql-20200923030840 failed. Message: Operation could not be completed as it results in exceeding approved Total Regional Cores quota.

What shall we do? What does our Azure Fundamentals training tell us to do? That’s right, we go to the Azure Portal, submit a support ticket, and beg the good people at Microsoft to increase our cores quota. So, we did that and almost instantaneously MSFT obliged. 😊

Great, we were back in business (well, sort of). After several more attempts we ran into more quota issues with all three types (SKU, Static, and Basic) of public IP addresses. So, three more support tickets later, and there was finally joy 😊 to the world🌎.

Next, we turned back to Ben, who provided some great demos in his Pluralsight course on how to set up data virtualization with both a SQL Server and HDFS 🐘 files as data sources on BDC.

Data Virtualization (SQL Server)

  1. Right‑mouse click -> Virtualize Data
  2. The Data Virtualization wizard 🧙‍♂️ will launch
  3. In the next step, choose the data source
  4. Create a master key
  5. Create the connection and specify the username and password
  6. Next, the wizard🧙‍♂️ will connect to the source and list the tables and views
  7. Choose the script option
  8. Click on a specific table (or tables)

The external table inherits a schema from its source, and transformation would happen in your queries.

You can choose between just having them created or to generate a script.

CREATE EXTERNAL TABLE [virt].[PersonPhone]
        (
            [BusinessEntityID] INT NOT NULL,
            [PhoneNumber] NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
            [PhoneNumberTypeID] INT NOT NULL,
            [ModifiedDate] SMALLDATETIME NOT NULL
        )
        WITH (LOCATION = N'[AdventureWorks2015].[Person].[PersonPhone]', DATA_SOURCE = [SQLServer]);
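Once created, the external table can be queried just like a local table, for example:

        SELECT TOP (10) [BusinessEntityID], [PhoneNumber], [ModifiedDate]
        FROM   [virt].[PersonPhone]
        WHERE  [PhoneNumberTypeID] = 1
        ORDER BY [ModifiedDate] DESC;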

Data Virtualization (HDFS🐘)

  1. Right‑mouse‑click your HDFS 🐘 and create a new directory 📁
  2. Upload the flight delay dataset to it
  3. Expand the directory 📁 to see the files 🗂
  4. Right‑mouse‑clicking a file launches a wizard 🧙‍♂️ to virtualize data from CSV files
  5. The wizard 🧙‍♂️ will ask you for the database in which you want to create the external table and a name for the data source
  6. The next step shows a preview of our data
  7. In the next step, the wizard 🧙‍♂️ recommends a column type for each column

  CREATE EXTERNAL DATA SOURCE [SqlStoragePool]
            WITH (LOCATION = N'sqlhdfs://controller-svc/default');

        CREATE EXTERNAL FILE FORMAT [CSV]
            WITH (FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS (FIELD_TERMINATOR = N',', STRING_DELIMITER = N'"', FIRST_ROW = 2));

        CREATE EXTERNAL TABLE [csv].[airlines]
        (
            [IATA_CODE] nvarchar(50) NOT NULL,
            [AIRLINE] nvarchar(50) NOT NULL
        )
        WITH (LOCATION = N'/FlightDelays/airlines.csv', DATA_SOURCE = [SqlStoragePool], FILE_FORMAT = [CSV]);
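And once again, the CSV file sitting in HDFS 🐘 can now be queried through its external table like any other table, for example:

        SELECT TOP (10) [IATA_CODE], [AIRLINE]
        FROM   [csv].[airlines]
        ORDER BY [AIRLINE];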

Monitoring 🎛 Big Data Clusters through Azure Data Studio

BDC comes with a pre-deployed Grafana container for SQL Server and system metrics. This server collects those metrics from every single node, container, and pod and provides individual dashboards 📊 for them.

The Kibana dashboard 📊 is part of the Elastic Stack and provides a looking glass🔎 into all of your log files in BDC.

Troubleshooting Big Data Clusters through Azure Data Studio

In ADS, on the main dashboard 📊 there is a Troubleshoot button. This provides a library📚 of notebooks 📒 that you can use to troubleshoot and analyze the cluster. The notebooks 📒 are categorized and cover all kinds of different aspects of monitoring 🎛 and diagnosing to help you repair 🛠 an issue within your cluster.

In addition, the azdata utility can be used for monitoring 🎛, running queries and notebooks 📒, and retrieving a cluster’s endpoints, namespace, and username as well.

We really enjoyed spending time learning SQL Server 2019 Big Data Cluster. 😊

“And I feel like it’s all been done Somebody’s tryin’ to make me stay You know I’ve got to be movin’ on”

Below are some of the destinations I am considering for my travels for next week:

  • Google Cloud Certified Associate Cloud Engineer Path

Thanks –

–MCS

3 comments

  1. Hari · September 27, 2020

    Great post Mark! I was reading a lot about SQL 2019 last few weeks and this blog gives quite a nice summary and examples. Given the fact that BDC is a completely new technology, I was wondering why Microsoft didn’t release BDC as a separate product outside of the main SQL product. anyways…

    Cheers!

    • Mark Shay · September 27, 2020

Hari, Thanks for the kind words 😊. I suspect at some point MSFT might release BDC as a separate product offering. But since it leverages the SQL Server engine for the master instance and the various pools, it makes sense from a cost standpoint (for every 1 EE core license you get 8 cores covered in the pools). Also, by being offered under the SQL Server umbrella, MSFT probably hopes to get more adoption.

  2. Pingback: Week of October 9th | SQL Squirrels
