Week of November 20th

“Climb🧗‍♀️ in the back with your head👤 in the Clouds☁️☁️… And you’re gone

Hi All –

Happy Name Your PC💻 Day!

Forward yesterday makes me wanna stay…”

“Welcome back, to that same old place that you laughed 😂 about”. So, after a short recess we made our splendiferous return this week. To where else? …But to no other than Google Cloud Platform a.k.a. GCP☁️, of course! 😊 After completing our three-part Cloud Journey, we were feeling the need for a little refresher… Also, we still had a few loose ends we needed to sew🧵 up. The wonderful folks at Google Cloud☁️ put together an amazing compilation on GCP☁️ through their Google Cloud Certified Associate Cloud Engineer Path, but we were feeling the need for a little more coverage of the GCP CLI, i.e. “gcloud”, “gsutil”, and “bq”. In addition, we had a great zest to learn a little more about some of the service offerings like GCP Development Services and APIs. Fortunately, we knew exactly who could deliver tremendous content on GCP☁️ as well as hit the sweet spot on some of the areas where we felt we were lacking a bit. That would be, of course, one of our favorite Canucks 🇨🇦: Mattias Andersson!

For those who are not familiar with Mattias, he is one of the legendary instructors on A Cloud Guru. Mattias is especially well-known for his critically acclaimed Google Certified Associate Cloud Engineer 2020 course.

In this brilliantly produced course Mattias delivers the goods and then some! The goal of the course is to prepare those interested in Google’s Associate Cloud Engineer (ACE) Certification exam, but it’s structured to efficiently give you the skills to troubleshoot GCP through a better understanding of “Data Flows”. Throughout the course Mattias emphasizes the “see one, do one, teach one” technique in order to get the best ROI out of the tutorial.

So, after some warm salutations and a great overview of the ACE Exam, Mattias takes us right into an introduction of all the Google Cloud products and services. He accentuates the importance of Data Flow in fully understanding how all GCP solutions work. “Data Flow is taking data or information and moving it around, processing it, and remembering it.”

Data flows – are the foundation of every system

  • Moving, Processing, Remembering
    • Not just Network, Compute, Storage
  • Build mental models
    • Helps you make predictions
  • Identify and think through data flows
    • Highlights potential issues
  • Requirements and options not always clear
    • Especially in the real world🌎
  • Critical skills for both real world🌎 and exam📝 questions

“Let’s get it started, in here…And the bass keeps runnin’ 🏃‍♂️ runnin’ 🏃‍♂️, and runnin’ 🏃‍♂️ runnin’ 🏃‍♂️, and runnin’ 🏃‍♂️ runnin’ 🏃‍♂️, and…”

After walking🚶‍♀️ us through how to create a free account, it was time ⏰ to kick 🦵 us off with a little Billing and Billing Export.

“Share it fairly, but don’t take a slice of my pie 🥧”

Billing Export – enables you to export your daily usage and cost estimates automatically throughout the day to a BigQuery dataset.

  • Export must be set up per billing account
  • Resources should be placed into appropriate projects
  • Resources should be tagged with labels🏷
  • Billing export is not real-time
    • Delay is hours
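Once the export lands in BigQuery, you can query it like any other table. A hedged bq sketch — the dataset name is made up, and the export table name follows the gcp_billing_export_v1_<BILLING_ACCOUNT_ID> pattern your export creates:

```
bq query --use_legacy_sql=false '
  SELECT service.description AS service, ROUND(SUM(cost), 2) AS total_cost
  FROM `my_billing_dataset.gcp_billing_export_v1_XXXXXX`
  WHERE usage_start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
  GROUP BY service
  ORDER BY total_cost DESC'
```

Remember the export is not real-time, so “the last 24 hours” here really means “whatever has been exported so far.”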

Billing IAM – Role: Billing Account User

  • Link🔗 projects to billing accounts
  • Restrictive permissions
  • Along with Project Creator, allows a user to create new projects linked to billing

Budgets – Help with project planning and controlling costs

  • Setting a budget lets you track spend
  • Apply budget to billing account or a Project

Alerts 🔔 – notify billing administrators when spending exceeds a percentage of your budget

Google Cloud Shell 🐚 – provides CLI access to Cloud☁️ Resources directly from your browser.

  • Command-line tools🔧 to interact with GCP☁️
  • Basic Syntax
 gcloud --project=myprojid compute instances list
 gcloud compute instances create myvm
 gcloud services list --available
 gsutil ls
 gsutil mb -l northamerica-northeast1 gs://storage-lab-cli
 gsutil label set bucketlabels.json gs://storage-lab-cli

GCS via gsutil in Command Line

 gcloud config list
 gcloud config set project igneous-visitor-293922
 gsutil ls
 gsutil ls gs://storage-lab-console-088/
 gsutil ls gs://storage-lab-console-088/**
 gsutil mb --help
 gsutil mb -l northamerica-northeast1 gs://storage-lab-cli-088
 gsutil label get gs://storage-lab-console-088/
 gsutil label get gs://storage-lab-console-088/ > bucketlabels.json
 cat bucketlabels.json
 gsutil label get gs://storage-lab-cli-088
 gsutil label set bucketlabels.json gs://storage-lab-cli-088
 gsutil label ch -l "extralabel:extravalue" gs://storage-lab-cli-088
 gsutil versioning get gs://storage-lab-cli-088
 gsutil versioning set on gs://storage-lab-cli-088
 gsutil versioning get gs://storage-lab-cli-088
 gsutil cp README-Cloudshell.txt gs://storage-lab-cli-088
 gsutil ls -a gs://storage-lab-cli-088
 gsutil rm gs://storage-lab-cli-088/README-Cloudshell.txt
 gsutil cp gs://storage-lab-console-088/** gs://storage-lab-cli-088/
 gsutil acl ch -u AllUsers:R gs://storage-lab-cli-088/shutterstock.jpg 

Create a VM via gcloud in Command Line

 gcloud config get-value project
 gcloud compute instances list
 gcloud services list
 gcloud services list --enabled
 gcloud services list --help
 gcloud services list --available
 gcloud services list --available |grep compute
 gcloud services -h
 gcloud compute instances create myvm
 gcloud compute instances delete myvm 

Security🔒 Concepts

Confidentiality, Integrity, and Availability (CIA)

  • You cannot view data you shouldn’t
  • You cannot change data you shouldn’t
  • You can access data you should

Authentication, Authorization, Accounting (AAA)

  • Authentication – Who are you?
  • Authorization – What are you allowed to do?
  • Accounting – What did you do?
  • Resiliency – Keep it running 🏃‍♂️
  • Security🔒 Products
  • Security🔒 Features
  • Security🔒 Mindset
    • Includes Availability Mindset

Key🔑 Security🔒 Mindset (Principles)

  • Least privilege
  • Defense in depth
  • Fail Securely

Key🔑 Security🔒 Products/Features

  • Identity hierarchy👑 (Google Groups)
  • Resource⚙️ hierarchy👑 (Organization, Folders📂, Projects)
  • Identity and Access Management (IAM)
    • Permissions
    • Roles
    • Bindings
  • GCS ACLs
  • Billing management
  • Networking structure & restrictions
  • Audit / Activity Logs (provided by Stackdriver)
  • Billing export
    • To BigQuery
    • To file (in GCS bucket🗑)
      • Can be JSON or CSV format
  • GCS object Lifecycle Management

IAM – Resource Hierarchy👑

  • Resource⚙️
    • Something you create in GCP☁️
  • Project
    • Container for a set of related resources
  • Folder📂
    • Contains any number of Projects and Subfolders📂
  • Organization
    • Tied to G Suite or Cloud☁️ Identity domain

IAM – Permissions & Roles

Permissions – allow you to perform a certain action

  • Each one follows the form Service.Resource.Verb
  • Usually correspond to REST API methods
    • pubsub.subscriptions.consume
    • pubsub.topics.publish
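Since every permission follows the Service.Resource.Verb form, you can pull the three pieces apart with plain shell parameter expansion — a toy sketch using one of the example permissions above:

```shell
# Split an IAM permission of the form service.resource.verb
perm="pubsub.topics.publish"
service=${perm%%.*}        # strip longest ".*" suffix  -> "pubsub"
verb=${perm##*.}           # strip longest "*." prefix  -> "publish"
rest=${perm#*.}            # strip shortest "*." prefix -> "topics.publish"
resource=${rest%.*}        # strip shortest ".*" suffix -> "topics"
echo "service=$service resource=$resource verb=$verb"
```

Reading permissions this way makes it easier to guess which REST API method they gate.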

Roles – collections of permissions to use or manage GCP☁️ resources

  • Primitive Roles – Project-level and often too broad
    • Viewer is read-only
    • Editor can view and change things
    • Owner can also control access & billing
  • Predefined Roles
    • roles/bigquery.dataEditor, roles/pubsub.subscriber
    • Read through the list of roles for each product! Think about why each exists
  • Custom Role – Project or Org-Level collection you define of granular permissions

IAM – Members & Groups

Members – some Google-known identity

  • Each member is identified by a unique email📧 address
  • Can be:
    • user: Specific Google account
      • G Suite, Cloud☁️ Identity, gmail, or validated email
    • serviceAccount: Service account for apps/services
    • group: Google group of users and services accounts
    • domain: whole domain managed by G Suite or Cloud☁️ Identity
    • allAuthenticatedUsers: Any Google account or service account
    • allUsers: Anyone on the internet (Public)

Groups – a collection of Google accounts and service accounts

  • Every group has a unique email📧 address that is associated with the group
  • You never act as the group
    • But membership in a group can grant capabilities to individuals
  • Use them for everything
  • Can be used for owner when within an organization
  • Can nest groups in an organization
    • e.g. one group for each department, and all of those groups in a group for all staff

IAM – Policies

Policies – bind members to roles for some scope of resources

  • Enforce who can do what to which thing(s)
  • Attached to some level in the Resource⚙️ hierarchy👑
    • Organization
    • Folder📂
    • Project Resource⚙️
  • Roles and Members listed in policy, but Resources identified by attachment
  • Always additive (Allow) and never subtractive (no Deny)
  • One policy per Resource⚙️
  • Max 1,500 member bindings per policy
 gcloud [GROUP] add-iam-policy-binding [RESOURCE-NAME] \
   --role [ROLE-ID-TO-GRANT] --member user:[USER-EMAIL]
 gcloud [GROUP] remove-iam-policy-binding [RESOURCE-NAME] \
   --role [ROLE-ID-TO-REVOKE] --member user:[USER-EMAIL]
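As a concrete (hypothetical) example — the project name and email are made up — granting a user the BigQuery Data Editor role on a project:

```
 gcloud projects add-iam-policy-binding my-project \
   --role roles/bigquery.dataEditor \
   --member user:jane@example.com
```

The same pattern with remove-iam-policy-binding revokes the binding again.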

Billing Accounts – represent some way to pay for GCP☁️ service usage

  • Type of Resource⚙️ that lives outside of Projects
  • Can belong to an Organization
    • Inherits Org-level IAM policies
  • Can be linked to projects
    • Not the Owner
      • No impact on project IAM
  • Billing Account Creator – Create new self-service billing accounts (Org)
  • Billing Account Administrator – Manage billing accounts (Billing Account)
  • Billing Account User – Link🔗 Projects to billing accounts (Billing Account)
  • Billing Account Viewer – View billing account cost information and transactions (Billing Account)
  • Project Billing Manager – Link/unlink the project to/from a billing account (Project)

Monthly Invoiced Billing – Billed monthly and pay by invoice due date

  • Pay via check or wire transfer
  • Increase project and quota limits
  • Billing administrator of org’s current billing account contacts Cloud☁️ Billing Support
    • To Determine eligibility
    • To apply to switch to monthly invoicing
  • Eligibility depends on
    • Account age
    • Typical monthly spend
    • Country


            Choose the right solution to get data to the right Resource⚙️

  • Latency reduction – Use Servers physically close to clients
  • Load Balancing – Separate from auto-scaling
  • System design – Different servers may handle different parts of the system
  • Cross-Region Load Balancing – with Global🌎 Anycast IPs
  • Cloud☁️ Load Balancer 🏋️‍♀️ – all types; internal and external
  • HTTP(S) Load Balancer 🏋️‍♀️ (With URL Map)

Unicast vs Anycast

Unicast – There is only one unique device in the world that can handle this; send it there.

Anycast – There are multiple devices that could handle this; send it to anyone – but ideally the closest.

            Load Balancing – Layer 4 vs Layer 7

  • TCP is usually called Layer 4 (L4)
  • HTTP and HTTPS work at Layer 7 (L7)
  • Each layer is built on the one below it
    • To route based on URL paths, routing needs to understand L7
    • L4 cannot route based on the URL paths defined in L7

DNS – Name resolution (via the Domain Name System) can be the first step in routing

  • Some known issues with DNS
    • Layer 4 – Cannot route L4 based on L7’s URL paths
    • Chunky – DNS queries often cached and reused for huge client sets
    • Sticky – DNS lookup “locks on” and refreshing per request has high cost
      • Extra latency because each request includes another round-trip
      • More money for additional DNS request processing
    • Not Robust – Relies on the client always doing the right thing
  • Premium tier routing with Global🌎 anycast IPs avoids these problems

Options for Data from one Resource to another

  • VPC (Global🌎) Virtual Private Cloud☁️ – Private SDN space in GCP☁️
    • Not just Resource-to-Resource – also manages the doors to outside & peers
  • Subnets (regional) – create logical spaces to contain resources
    • All Subnets can reach all others – Globally without any need for VPNs
  • Routes (Global🌎) define “next hop” for traffic🚦 based on destination IP
    • Routes are Global🌎 and apply by Instance-level Tags, not by Subnet
    • No route to the Internet gateway means no such data can flow
  • Firewall🔥 Rules (Global🌎) further filter data flow that would otherwise route
    • All FW Rules are Global🌎 and apply by Instance-level Tags or Service Acct.
    • Default Firewall🔥 Rules are restrictive inbound and permissive outbound


  • IP Address is a “dotted quad” where each piece is 0-255
  • CIDR block is a group of IP addresses specified in <IP>/xy notation
    • Turn the IP address into a 32-bit binary number
      • -> 00001010 00001010 00000000 11111110
    • /xy in CIDR notation locks the highest (leftmost) bits in the IP address (0-32)
    • abc.efg.hij.klm/32 is a single IP address (because all 32 bits are locked)
    • abc.efg.hij.klm/24 is 256 ( IP addresses because the last 8 bits can vary
    • means “any IP address” because no bits are locked
  • RFC 1918 defines private (i.e. non-internet) address ranges you can use:
    •,, and
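The size of a CIDR block is just 2^(32 − xy) — a quick bit of shell arithmetic to sanity-check the examples above:

```shell
# Number of IP addresses in a /xy CIDR block: 2^(32 - xy)
cidr_size() { echo $(( 1 << (32 - $1) )); }

cidr_size 32   # single IP; all 32 bits locked
cidr_size 24   # last 8 bits can vary -> 256 addresses
cidr_size 0    # no bits locked -> every IPv4 address
```

Handy when planning subnet ranges so you don’t over- or under-provision private IP space.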

Subnet CIDR Ranges

  • You can edit a subnet to increase its CIDR range
  • No need to recreate subnet or instances
  • New range must contain old range (i.e. old range must be a subset)

Shared VPC

  • In an Organization, you can share VPCs among multiple projects
    • Host Project: One project owns the Shared VPC
    • Service Projects: Other projects granted access to use all/part of Shared VPC
  • Lets multiple projects coexist on same local network (private IP space)
  • Lets a centralized team manage network security🔒

“Ride, captain👨🏿‍✈️ ride upon your mystery ship⛵️…”


A Kubernetes ☸️ cluster is a set of nodes that run containerized applications. Containerizing applications packages an app with its dependencies and some necessary services.

In K8s ☸️, the control plane consists of the kube-apiserver, kube-scheduler, kube-controller-manager, and an etcd datastore.

Deploy and manage clusters on-prem

Step 1: The container runtime

Step 2: Installing kubeadm

Step 3: Starting the Kubernetes cluster ☸️

Step 4: Joining a node to the Kubernetes cluster ☸️
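The four steps above might look roughly like this on the command line (a hedged sketch — package installation is distro-specific, and the CIDR, IP, token, and hash are placeholders):

```
# Steps 1-2: install a container runtime (e.g. containerd) and kubeadm (distro-specific)

# Step 3: on the control-plane node
sudo kubeadm init --pod-network-cidr=

# Step 4: on each worker node, using the token printed by kubeadm init
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```

kubeadm init prints the exact join command for you, so you rarely type Step 4 by hand.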

Deploy and manage clusters in the Cloud☁️

To deploy and manage your containerized applications and other workloads on your Google Kubernetes Engine (GKE) cluster, you use the K8s ☸️ system to create K8s ☸️  controller objects. These controller objects represent the applications, daemons, and batch jobs running 🏃‍♂️ on your clusters.

            Cloud Native Application Properties

  • Use Cloud☁️ platform services.
  • Scale horizontally.
  • Scale automatically, using proactive and reactive actions.
  • Handle node and transient failures without degrading.
  • Feature non-blocking asynchronous communication in a loosely coupled architecture.

Kubernetes fits into the Cloud-native ecosystem

K8s ☸️ native technologies (tools/systems/interfaces) are those that are primarily designed and built for Kubernetes ☸️.

  • They don’t support any other container or infrastructure orchestration systems
  • K8s ☸️ accommodative technologies are those that embrace multiple orchestration mechanisms, K8s ☸️ being one of them.
  • They generally existed in pre-Kubernetes☸️ era and then added support for K8s ☸️ in their design.
  • Non-Kubernetes ☸️ technologies are Cloud☁️ native but don’t support K8s ☸️.

Deploy and manage applications on Kubernetes ☸️

K8s ☸️ deployments can be managed via the Kubernetes ☸️ command-line interface, kubectl, which uses the Kubernetes ☸️ API to interact with the cluster.

When creating a deployment, you will need to specify the container image for your application and the number of replicas that you need in your cluster.

  • Create Application
    • create the application we will be deploying to our cluster
  • Create a Docker🐳 container image
    • create an image that will contain the built app.
  • Create a K8s ☸️ Deployment
    • K8s ☸️ deployments are responsible for creating and managing pods
    • K8s ☸️ pod is a group of one or more containers, tied together for the purpose of administration and networking. 
    • K8s ☸️ Deployments can be created in two ways
      •  kubectl run command
      •  YAML configuration
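For the YAML route, a minimal Deployment manifest might look like this (the app name, image path, and replica count are placeholders, not from the course):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3                    # number of pod replicas you need
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: gcr.io/my-project/hello-app:v1   # your container image
        ports:
        - containerPort: 8080
```

Apply it with kubectl apply -f deployment.yaml and the Deployment takes over creating and managing the pods.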

Declarative Management of Kubernetes☸️ Objects Using Configuration Files

K8s ☸️ objects can be created, updated, and deleted by storing multiple object configuration files in a directory and using kubectl apply to recursively create and update those objects as needed.

This method retains writes made to live objects without merging the changes back into the object configuration files. kubectl diff also gives you a preview of what changes apply will make.
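In command form, that declarative workflow is roughly (a sketch — configs/ is a hypothetical directory of manifest files):

```
kubectl diff -f configs/ -R     # preview what apply would change
kubectl apply -f configs/ -R    # create/update all objects in the directory
```

The -R flag recurses into subdirectories, so a whole tree of object configuration files can be managed as one unit.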


A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.

Some typical uses of a DaemonSet are:

  • Running 🏃‍♂️ a cluster storage🗄 daemon on every node
  • Running 🏃‍♂️ a logs collection daemon on every node
  • Running 🏃‍♂️ a node monitoring🎛 daemon on every node

Cloud Load Balancer 🏋️‍♀️ that GKE created

Google Kubernetes☸️ Engine (GKE) offers integrated support for two types of Cloud☁️ Load Balancing for a publicly accessible application:

When you specify type:LoadBalancer 🏋️‍♀️ in the Resource⚙️ manifest:

  1. GKE creates a Service of type LoadBalancer 🏋️‍♀️. GKE makes appropriate Google Cloud API calls to create either an external network load balancer 🏋️‍♀️ or an internal TCP/UDP load balancer 🏋️‍♀️.
  • GKE creates an internal TCP/UDP load balancer 🏋️‍♀️ when you add the cloud.google.com/load-balancer-type: “Internal” annotation; otherwise, GKE creates an external network load balancer 🏋️‍♀️.

Although you can use either of these types of load balancers 🏋️‍♀️ for HTTP(S) traffic🚦, they operate in OSI layers 3/4 and are not aware of HTTP connections or individual HTTP requests and responses.

Imagine all the people👥 sharing all the world🌎

GCP Services


Compute Engine (GCE) – (Zonal) (IaaS) – Fast-booting Virtual Machines (VMs) for rent/demand

  • Pick a set machine type – standard, high memory, high CPU – or custom CPU/RAM
  • Pay by the second (60 second min.) for CPUs, RAM
  • Automatically cheaper if you keep running 🏃‍♂️ it (“sustained use discount”)
  • Even cheaper for “preemptible” or long-term use commitment in a region
  • Can add GPUs and paid OSes for extra cost*
  • Live Migration: Google seamlessly moves instance across hosts, as needed

Kubernetes Engine (GKE) – (Regional) (IaaS/PaaS) – Managed Kubernetes ☸️ cluster for running 🏃‍♂️ Docker🐳 containers (with autoscaling)

  • Kubernetes☸️ DNS on by default for service discovery
  • NO IAM integration (unlike AWS ECS)
  • Integrates with Persistent Disk for storage
  • Pay for underlying GCE instances
    • Production cluster should have 3+ nodes*
  • No GKE management fee, no matter how many nodes in cluster

  App Engine (GAE) – (Regional) (PaaS) that takes your code and runs it

  • Much more than just compute – Integrates storage, queues, NoSQL
  • Flex mode (“App Engine Flex”) can run any container & access VPC
  • Auto-Scales⚖️ based on load
    • Standard (non-Flex) mode can turn off last instance when no traffic🚦
  • Effectively pay for underlying GCE instances and other services

Cloud Functions – (Regional) (FaaS, “Serverless”) – Runs your event-driven code (with autoscaling)

  • Runs code in response to an event – Node.js, Python🐍, Java☕️, Go🟢
  • Pay for CPU and RAM assigned to function, per 100ms (min. 100ms)
  • Each function automatically gets an HTTP endpoint
  • Can be triggered by GCS objects, Pub/Sub messages, etc.
  • Massively Scalable⚖️ (horizontally) – Runs🏃‍♂️ many copies when needed


Persistent Disk (PD) – (Zonal) Flexible🧘‍♀️, block-based🧱 network-attached storage; boot disk for every GCE instance

  • Perf Scales⚖️ with volume size; max way below Local SSD, but still plenty fast🏃‍♂️
  • Persistent disks persist, and are replicated (zone or region) for durability
  • Can resize while in use (up to 64TB), but will need file system update with VM
  • Snapshots (and machine images🖼 ) add even more capability and flexibility
  • Not file based NAS, but can mount to multiple instances if all are read-only
  • Pay for GB/mo provisioned depending on perf. class; plus snapshot GB/mo used

  Cloud Filestore – (Zonal) Fully managed file-based storage

  • “Predictably fast🏃‍♂️ performance for your file-based workloads”
  • Accessible to GCE and GKE through your VPC, via NFSv3 protocol
  • Primary use case is application migration to Cloud☁️ (“lift and shift”)🚜
  • Fully manages file serving, but not backups
  • Pay for provisioned TBs in “Standard” (slow) or “Premium” (fast🏃‍♂️) mode
  • Minimum provisioned capacity of 1TB (Standard) or 2.5TB (Premium)

Cloud Storage (GCS) – (Regional, Multi-Regional) Infinitely Scalable⚖️, fully managed, versioned, and highly durable object storage

  • Designed for 99.999999999% (11 9’s) durability
  • Strongly consistent💪 (even for overwrite PUTs and DELETEs)
  • Integrated site hosting and CDN functionality
  • Lifecycle♻️ transitions across classes: Multi-Regional, Regional, Nearline, Coldline🥶
    • Diffs in cost & availability (99.5%, 99.9%, 99%, 99%), not latency (no thaw delay)
  • All Classes have same API, so can use gsutil and gcsfuse


            Cloud SQL – (Regional) Fully managed and reliable MySQL and PostgreSQL databases

  • Supports automatic replication, backup, failover, etc.
  • Scaling is manual (both vertically and horizontally)
  • Effectively pay for underlying GCE instances and PDs
    • Plus, some baked-in service fees

Cloud Spanner – (Regional, Multi-Regional, Global🌎) Horizontally Scalable⚖️, strongly consistent 💪, relational database service

  • “From 1 to 100s or 1000s of nodes”
    • “A minimum of 3 nodes is recommended for production environments.”
  • Chooses Consistency and Partition-Tolerance (CP in the CAP theorem)
  • But still high Availability: SLA has 99.999% SLO (five nines) for multi-region
    • Nothing is actually 100%, really
    • Not based on fail-over
  • Pay for provisioned node time (by region/multi-region) plus used storage-time

BigQuery (BQ) – (Multi-Regional) Serverless column-store data warehouse for analytics using SQL

  • Scales⚖️ internally (TB in seconds and PB in minutes)
  • Pay for GBs actually considered (scanned) during queries
    • Attempts to reuse cached results, which are free
  • Pay for data stored (GB-months)
    • Relatively inexpensive
    • Even cheaper when table not modified for 90 days (reading still fine)
  • Pay for GBs added via streaming inserts

Cloud Datastore – (Regional, Multi-Regional) Managed & autoscaling⚖️ NoSQL DB with indexes, queries, and ACID transaction support

  • No joins or aggregates and must line up with indexes
  • NOT, OR, and NOT EQUALS (<>,!=) operations not natively supported
  • Automatic “built-in” indexes for simple filtering and sorting (ASC, DESC)
  • Manual “composite” indexes for more complicated, but beware them “exploding”
  • Pay for GB-months of storage🗄 used (including indexes)
  • Pay for IO operations (deletes, reads, writes) performed (i.e. no pre-provisioning)

Cloud Bigtable – (Zonal) Low latency & high throughput NoSQL DB for large operational & analytical apps

  • Supports open-source HBase API
  • Integrates with Hadoop, Dataflow, Dataproc
  • Scales⚖️ seamlessly and unlimitedly
    • Storage🗄 autoscales⚖️
    • Processing nodes must be scaled manually
  • Pay for processing node hours
  • GB-hours used for storage 🗄 (cheap HDD or fast🏃‍♂️ SSD)

Firebase Realtime DB & Cloud Firestore 🔥 – (Regional, Multi-Regional) NoSQL document📃 stores with ~real-time client updates via managed WebSockets

  • Firebase DB is a single (potentially huge) JSON doc, located only in central US
  • Cloud☁️ Firestore has collection, documents📃, and contained data
  • Free tier (Spark⚡️), flat tier (Flame🔥), or usage-based pricing (Blaze)
    • Realtime DB: Pay for GB/month stored and GB downloaded
    • Firestore: Pay for operations and much less for storage🗄 and transfer

Data Transfer ↔️

Data Transfer Appliance – Rackable, high-capacity storage 🗄 server to physically ship data to GCS

  • Ingest only; not a way to avoid egress charges
  • 100 TB or 480 TB/week is faster than a saturated 6 Gbps link🔗

Storage Transfer Service – (Global) Copies objects for you, so you don’t need to set up a machine to do it

  • Destination is always GCS bucket 🗑
  • Source can be S3, HTTP/HTTPS endpoint, or another GCS bucket 🗑
  • One-time or scheduled recurring transfers
  • Free to use, but you pay for its actions

External Networking

Google Domains – (Global) Google’s registrar for domain names

  • Private Whois records
  • Built-in DNS or custom nameservers
  • Support DNSSEC
  •  email📧 forwarding with automatic setup of SPF and DKIM (for built-in DNS)

Cloud DNS – (Global) Scalable⚖️, reliable, & managed authoritative Domain Name System (DNS) service

  • 100% uptime guarantee
  • Public and private managed zones
  • Low latency Globally
  • Supports DNSSEC
  • Manage via UI, CLI, or API
  • Pay fixed fee per managed zone to store and distribute DNS records
  • Pay for DNS lookups (i.e. usage)

Static IP Addresses – (Regional, Global🌎) Reserve static IP addresses in projects and assign them to resources

  • Regional IPs used for GCE instances & Network Load Balancers🏋️‍♀️
  • Global🌎 IPs used for Global🌎 load balancers🏋️‍♀️:
    • HTTP(S), SSL proxy, and TCP proxy
  • Pay for reserved IPs that are not in use, to discourage wasting them

Cloud Load Balancing (CLB) – (Regional, Global🌎) High-perf, Scalable ⚖️ traffic🚦 distribution integrated with autoscaling & Cloud☁️ CDN

  • SDN naturally handles spikes without any prewarming, no instances or devices
  • Regional Network Load Balancer 🏋️‍♀️: health checks, round robin, session affinity
    • Forwarding rules based on IP, protocol (e.g. TCP, UDP), and (optionally) port
  • Global load balancers 🏋️‍♀️ w/ multi-region failover for HTTP(S), SSL proxy, & TCP proxy
    • Prioritize low-latency connection to region near user, then gently fail over in bits
    • Reacts quickly (unlike DNS) to changes in users, traffic🚦, network, health, etc.
  • Pay by making ingress traffic🚦 billable (Cheaper than egress) plus hourly per rule

Cloud CDN – (Global) Low-latency content delivery based on HTTP(S) CLB integrated w/ GCE & GCS

  • Supports HTTP/2 and HTTPS, but no custom origins (GCP☁️ only)
  • Simple checkbox ✅ on HTTP(S) Load Balancer 🏋️‍♀️ config turns this on
  • On cache miss, pay origin-> POP “cache fill” egress charges (cheaper for in-region)
  • Always pay POP->client egress charges, depending on location
  • Pay for HTTP(S) request volume
  • Pay per cache invalidation request (not per Resource⚙️ invalidated)
  • Origin costs (e.g. CLB, GCS) can be much lower because cache hits reduce load

Virtual Private Cloud (VPC) – (Regional, Global🌎) Global🌎 IPv4 unicast Software-Defined Network (SDN) for GCP☁️ resources

  • Automatic mode is easy; custom mode gives control
  • Configure subnets (each with a private IP range), routes, firewalls🔥, VPNs, BGP, etc.
  • VPC is Global🌎 and subnets are regional (not zonal)
  • Can be shared across multiple projects in same org and peered with other VPCs
  • Can enable private (internal IP) access to some GCP☁️ services (e.g. BQ, GCS)
  • Free to configure VPC (container)
  • Pay to use certain services (e.g. VPN) and for network egress

Cloud Interconnect – (Regional, Multi-Regional) Options for connecting external networks to Google’s network

  • Private connections to VPC via Cloud VPN or Dedicated/Partner Interconnect
  • Public Google services (incl. GCP) accessible via External Peering (no SLAs)
    • Direct Peering for high volume
    • Carrier Peering via a partner for lower volume
  • Significantly lower egress fees
    • Except Cloud VPN, which remains unchanged

Internal Networking

Cloud Virtual Private Network (VPN) – (Regional) IPsec VPN to connect to VPC via public internet for low-volume data connections

  • For persistent, static connections between gateways (i.e. not for a dynamic client)
    • Peer VPN gateway must have static (unchanging) IP
  • Encrypted 🔐 link🔗 to VPC (as opposed to Dedicated interconnect), into one subnet
  • Supports both static and dynamic routing
  • 99.9% availability SLA
  • Pay per tunnel-hour
  • Normal traffic🚦 charges apply

Dedicated Interconnect – (Regional, Multi-Regional) Direct physical link 🔗 between VPC and on-prem for high-volume data connections

  • VLAN attachment is private connection to VPC in one region: no public GCP☁️ APIs
    • Region chosen from those supported by particular Interconnect Location
  • Links are private but not Encrypted 🔐; can layer your own encryption 🔐
    • Redundancy achieves 99.99% availability: otherwise, 99.9% SLA
  • Pay fee per 10 Gbps link🔗, plus (relatively small) fee per VLAN attachment
  • Pay reduced egress rates from VPC through Dedicated Interconnect

Cloud Router 👮‍♀️ – (Regional) Dynamic routing (BGP) for hybrid networks linking GCP VPCs to external networks

  • Works with Cloud VPN and Dedicated Interconnect
  • Automatically learns subnets in VPC and announces them to on-prem network
  • Without Cloud Router👮‍♀️ you must manage static routes for VPN
    • Changing the IP addresses on either side of VPN requires recreating it
  • Free to set up
  • Pay for usual VPC egress

CDN Interconnect – (Regional, Multi-Regional) Direct, low-latency connectivity to certain CDN providers, with cheaper egress

  • For external CDNs, not Google’s Cloud CDN service
    • Supports Akamai, Cloudflare, Fastly, and more
  • Works for both pull and push cache fills
    • Because it’s for all traffic🚦 with that CDN
  • Contact CDN provider to set up for GCP☁️ project and which regions
  • Free to enable, then pay less for the egress you configured

Machine Learning/AI 🧠

Cloud Machine Learning (ML) Engine – (Regional) Massively Scalable ⚖️ managed service for training ML models & making predictions

  • Enables apps/devs to use TensorFlow on datasets of any size, endless use cases
  • Integrates: GCS/BQ (storage), Cloud Datalab (dev), Cloud Dataflow (preprocessing)
  • Supports online & batch predictions, anywhere: desktop, mobile, own servers
  • HyperTune🎶 automatically tunes 🎶model hyperparameters to avoid manual tweaking
  • Training: Pay per hour depending on chosen cluster capabilities (ML training units)
  • Prediction: Pay per provisioned node-hour plus by prediction request volume made

Cloud Vision API👓 – (Global) Classifies images🖼 into categories, detects objects/faces, & finds/reads printed text

  • Pre-trained ML model to analyze images🖼 and discover their contents
  • Classifies into thousands of categories (e.g., “sailboat”, “lion”, “Eiffel Tower”)
  • Upload images🖼 or point to ones stored in GCS
  • Pay per image, based on detection features requested
    • Higher price for OCR of Full documents📃 and finding similar images🖼 on the web🕸
    • Some features are priced together: Labels🏷 + SafeSearch, ImgProps + Cropping
    • Other features priced individually: Text, Faces, Landmarks, Logos

Cloud Speech API🗣 – (Global) Automatic Speech Recognition (ASR) to turn spoken word audio files into text

  • Pre-trained ML model for recognizing speech in 110+ languages/variants
  • Accepts pre-recorded or real-time audio, & can stream results back in real-time
  • Enables voice command-and-control and transcribing user microphone dictations
  • Handles noisy source audio
  • Optionally filters inappropriate content in some languages
  • Accepts contextual hints: words and names that will likely be spoken
  • Pay per 15 seconds of audio processed

Cloud Natural Language API 💬 – (Global) Analyzes text for sentiment, intent, & content classification, and extracts info

  • Pre-trained ML model for understanding what text means, so you can act on it
  • Excellent with Speech API (audio), Vision API (OCR), & Translation API (or built-ins)
  • Syntax analysis extracts tokens/sentences, parts of speech & dependency trees
  • Entity analysis finds people, places, things, etc., labels🏷 them & links🔗 to Wikipedia
  • Analysis for sentiment (overall) and entity sentiment detect +/- feelings & strength
  • Content classification puts each document📃 into one of 700+ predefined categories
  • Charged per request of 1000 characters, depending on analysis types requested
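Since it's charged per request of 1,000 characters per analysis type, the billing math above can be sketched like this (a toy calculation with made-up unit prices, not Google's actual rates):

```python
import math

def billable_units(text: str) -> int:
    # Each request is billed in 1,000-character units;
    # even a 1-character request counts as one full unit
    return max(1, math.ceil(len(text) / 1000))

def request_cost(text: str, analyses: dict) -> float:
    # analyses maps analysis type -> assumed price per 1,000-char unit
    return billable_units(text) * sum(analyses.values())

doc = "x" * 2500          # 2,500 characters -> 3 billable units
print(billable_units(doc))  # 3
print(request_cost(doc, {"sentiment": 0.001, "entities": 0.001}))
```

So requesting both sentiment and entity analysis on the same text bills you 3 units of each, not 3 units total.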

Cloud Translation API – (Global) Translates text among 100+ languages; optionally auto-detects source language

  • Pre-trained ML model for recognizing and translating semantics, not just syntax
  • Lets people support multi-regional clients in non-native languages, 2-way
  • Combine with Speech, Vision, & Natural Language APIs for powerful workflows
  • Send plain text or HTML and receive translation in kind
  • Pay per character processed for translation
  • Also pay per character for language auto-detection

Dialogflow – (Global) Build conversational interfaces for websites, mobile apps, messaging, IoT devices

  • Pre-trained ML model and service for accepting, parsing, lexing input & responding
  • Enables useful chatbot and other natural user interactions with your custom code
  • Train it to identify custom entity types by providing a small dataset of examples
  • Or choose from 30+ pre-built agents (e.g. car🚙, currency฿, dates) as starting templates
  • Supports many different languages and platforms/devices
  • Free plan has unlimited text interactions and capped voice interactions
  • Paid plan is unlimited but charges per request: more for voice, less for text

Cloud Video Intelligence API 📹 – (Regional, Global) Annotates videos in GCS (or directly uploaded) with info about what they contain

  • Pre-trained ML model for video scene analysis and subject identification
  • Enables you to search a video catalog the same way you search text documents📃
  • “Specify a region where processing will take place (for regulatory compliance)”
  • Label Detection: Detect entities within the video, such as “dog” 🐕, “flower” 🌷 or “car”🚙
  • Shot Change Detection: Detect scene changes within the video🎞
  • SafeSearch Detection: Detect adult content within the video🎞
  • Pay per minute of video🎞 processed, depending on requested detection modes

Cloud Job Discovery– (Global) Helps career sites, company job boards, etc. to improve engagement & conversion

  • Pre-trained ML model to help job seekers search job posting databases
  • Most job sites rely on keyword search to retrieve content, which often omits relevant jobs and overwhelms the job seeker with irrelevant ones. For example, a keyword search with any spelling error returns no results, and a keyword search for “dental assistant” returns any “assistant” role that offers dental benefits.
  • Integrates with many job/hiring systems
  • Lots of features, such as commute distance and recognizing abbreviations/jargon
  • “Show me jobs with a 30-minute commute on public transportation from my home”

Big Data and IoT

            Four Different Stages:

  1. Ingest – Pull in all the raw data
  2. Store – Store data without data loss and with easy retrieval
  3. Process – Transform the raw data into actionable information
  4. Explore & Visualize – Turn the results of that analysis into something valuable for your business
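The four stages above can be sketched end-to-end as a toy pipeline (pure Python, no GCP calls — the event shape and function names here are mine, just to make the flow concrete):

```python
def ingest():
    # 1. Ingest – pull in the raw data (here, fake page-view events)
    return [{"page": "/home"}, {"page": "/buy"}, {"page": "/home"}]

def store(events, warehouse):
    # 2. Store – keep the raw data retrievable without loss
    warehouse.extend(events)

def process(warehouse):
    # 3. Process – transform raw data into actionable information
    counts = {}
    for e in warehouse:
        counts[e["page"]] = counts.get(e["page"], 0) + 1
    return counts

def visualize(counts):
    # 4. Explore & Visualize – turn the analysis into something valuable
    return [f"{page}: {'#' * n}" for page, n in sorted(counts.items())]

warehouse = []
store(ingest(), warehouse)
report = visualize(process(warehouse))
print(report)  # ['/buy: #', '/home: ##']
```

In GCP terms, think Pub/Sub for ingest, GCS/BigQuery for store, Dataflow/Dataproc for process, and Data Studio/Datalab for explore & visualize.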

Cloud Internet of Things (IoT) Core– (Global) Fully managed service to connect, manage, and ingest data from devices globally

  • Device Manager handles device identity, authentication, config & control
  • Protocol Bridge publishes incoming telemetry to Cloud☁️ Pub/Sub for processing
  • Connect securely using IoT industry standard MQTT or HTTPS protocols
  • CA signed certificates can be used to verify device ownership on first connect
  • Two-way device communication enables configuration & firmware updates
  • Device shadows enable querying & making control changes while devices offline
  • Pay per MB of data exchanged with devices; no per-device charge

Cloud Pub/Sub– (Global) Infinitely Scalable⚖️ at-least-once messaging for ingestion, decoupling, etc.

  • “Global🌎 by default: Publish… and consume from anywhere, with consistent latency”.
  • Messages can be up to 10 MB and undelivered ones stored for 7 days – but no DLQ
  • Push mode delivers to HTTPS endpoints & succeeds on HTTP success status code
    • “Slow-start” algorithm ramps up on success and backs off & retries, on failures
  • Pull mode delivers messages to requesting clients and waits for ACK to delete
    • Lets clients set rate of consumption, and supports batching and long-polling
  • Pay for data volume
    • Min 1 KB per Publish/Push/Pull request (not per message)
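The "at-least-once" bit above is worth making concrete: a message stays queued until it's ACKed, so an un-ACKed pull just redelivers it. A minimal sketch (class and method names are mine, not the real google-cloud-pubsub API):

```python
import collections

class Topic:
    """Toy at-least-once pull delivery: delete only on ACK."""
    def __init__(self):
        self._queue = collections.deque()
        self._next_id = 0

    def publish(self, data):
        self._queue.append((self._next_id, data))
        self._next_id += 1

    def pull(self):
        # Delivers without deleting; deletion happens only in ack()
        return self._queue[0] if self._queue else None

    def ack(self, msg_id):
        if self._queue and self._queue[0][0] == msg_id:
            self._queue.popleft()

topic = Topic()
topic.publish("hello")
first = topic.pull()
second = topic.pull()      # not ACKed yet -> same message delivered again
assert first == second     # at-least-once: duplicates are possible
topic.ack(first[0])
print(topic.pull())        # None – message deleted only after ACK
```

This is why consumers of Pub/Sub (real or toy) should be idempotent: duplicate delivery is part of the contract.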

Cloud Dataprep– (Global) Visually explore, clean, and prepare data for analysis without running 🏃‍♂️ servers

  • “Data Wrangling” (i.e. “ad-hoc ETL”) for business analysts, not IT pros
    • Who might otherwise spend 80% of their time cleaning data
  • Managed version of Trifacta Wrangler – and managed by Trifacta, not Google
  • Source data from GCS, BQ, or file upload – formatted in CSV, JSON, or relational
  • Automatically detects schemas, datatypes, possible joins, and various anomalies
  • Pay for underlying Dataflow job, plus management overhead charge
  • Pay for other accessed services (e.g. GCS, BQ)

Cloud Dataproc– (Zonal) Batch MapReduce processing via configurable, managed Spark & Hadoop clusters

  • Handles being told to scale (adding or removing nodes) even while running 🏃‍♂️ jobs
  • Integrated with Cloud☁️ Storage, BigQuery, Bigtable, and some Stackdriver services
  • “Image versioning” switches between versions of Spark, Hadoop, & other tools
  • Pay directly for underlying GCE servers used in the cluster – optionally preemptible
  • Pay a Cloud Dataproc management fee per vCPU-hour in the cluster
  • Best for moving existing Spark/Hadoop setups to GCP☁️
    • Prefer Cloud Dataflow for new data processing pipelines – “Go with the flow”

Cloud Datalab 🧪– (Regional) Interactive tool 🔧 for data exploration🔎, analysis, visualization📊 and machine learning

  • Uses Jupyter Notebook📒
    • “[A]n open-source web🕸 application that allows you to create and share documents📃 that contain live code, equations, visualizations and narrative text. Uses include data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.”
  • Supports iterative development of data analysis algorithms in Python🐍/ SQL/~JS
  • Pay for GCE/GAE instance hosting and storing (on PD) your notebook📒
  • Pay for any other resources accessed (e.g. BigQuery)

Cloud Data Studio– (Global) Big Data Visualization📊 tool 🔧 for dashboards and reporting

  • Meaningful data stories/presentations enable better business decision making
  • Data sources include BigQuery, Cloud SQL, other MySQL, Google Sheets, Google Analytics, Analytics 360, AdWords, DoubleClick, & YouTube channels
  • Visualizations include time series, bar charts, pie charts, tables, heat maps, geo maps, scorecards, scatter charts, bullet charts, & area charts
  • Templates for quick start; customization options for impactful finish
  • Familiar G Suite sharing and real-time collaboration

Cloud Genomics 🧬– (Global) Store and process genomes🧬 and related experiments

  • Query complete genomic🧬 information of large research projects in seconds
  • Process many genomes🧬 and experiments in parallel
  • Open Industry standards (e.g. From Global🌎 Alliance for Genomics🧬 and Health)
  • Supports “Requester Pays” sharing

Identity and Access – Core Security🔒

Roles– (Global) collections of Permissions to use or manage GCP☁️ resources

  • Permissions allow you to perform certain actions: Service.Resource.Verb
  • Primitive Roles: Owner, Editor, Viewer
    • Viewer is read-only; Editor can change things; Owner can control access & billing
    • Pre-date IAM service, may still be useful (e.g., dev/test envs), but often too broad
  • Predefined Roles: Give granular access to specific GCP☁️ resources (IAM)
    • E.g.: roles/bigquery.dataEditor, roles/pubsub.subscriber
  • Custom Roles: Project- or Org-level collections you define of granular permissions
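Since permissions follow the Service.Resource.Verb pattern and roles are just named bundles of them, a role check can be sketched like this (the two permission sets below are tiny illustrative subsets, not the full predefined role definitions):

```python
# Toy IAM-style lookup: roles bundle Service.Resource.Verb permissions
ROLES = {
    "roles/bigquery.dataEditor": {
        "bigquery.tables.create",
        "bigquery.tables.updateData",
    },
    "roles/pubsub.subscriber": {
        "pubsub.subscriptions.consume",
    },
}

def has_permission(member_roles, permission):
    # A member may hold many roles; any one granting the permission suffices
    return any(permission in ROLES.get(r, set()) for r in member_roles)

alice = ["roles/pubsub.subscriber"]
print(has_permission(alice, "pubsub.subscriptions.consume"))  # True
print(has_permission(alice, "bigquery.tables.create"))        # False
```

The real IAM check also walks the resource hierarchy👑 (Org → Folder📂 → Project → Resource⚙️), so a role bound higher up is inherited below.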

Cloud Identity and Access Management (IAM)– (Global) Control access to GCP☁️ resources: authorization, not really authentication/identity

  • Member is user, group, domain, service account, or the public (e.g. “allUsers”)
    • Individual Google account, Google group, G Suite/ Cloud Identity domain
    • Service account belongs to application/instance, not individual end user
    • Every identity has a unique e-mail address, including service accounts
  • Policies bind Members to Roles at a hierarchy👑 level: Org, Folder📂, Project, Resource⚙️
    • Answer: Who can do what to which thing(s)?
  • IAM is free; pay for authorized GCP☁️ service usage

Service Accounts– (Global) Special types of Google account that represents an application, not an end user

  • Can be “assumed” by applications or individual users (when so authorized)
  • “Important: For almost all cases, whether you are developing locally or in a production application, you should use service accounts, rather than user accounts or API keys🔑.”
  • Consider resources and permissions required by application; use least privilege
  • Can generate and download private keys🔑 (user-managed keys🔑), for non-GCP☁️
  • Cloud-Platform-managed keys🔑 are better, for GCP☁️ (i.e. GCF, GAE, GCE, and GKE)
    • No direct downloading: Google manages private keys🔑 & rotates them once a day

Cloud Identity– (Global) Identity as a Service (IDaaS, not DaaS) to provision and manage users and groups

  • Free Google Accounts for non-G-Suite users, tied to a verified domain
  • Centrally manage all users in Google Admin console; supports compliance
  • 2-Step verification (2SV/MFA) and enforcement, including security🔒 keys🔑
  • Sync from Active Directory and LDAP directories via Google Cloud☁️ Directory Sync
  • Identities work with other Google services (e.g. Chrome)
  • Identities can be used to SSO with other apps via OIDC, SAML, OAuth2
  • Cloud Identity is free; pay for authorized GCP☁️ service usage

Security Key Enforcement– (Global) USB or Bluetooth 2-step verification device that prevents phishing🎣

  • Not like just getting a code via email📧 or text message…
  • Eliminates man-in-the-middle (MITM) attacks against GCP☁️ credentials

Cloud Resource Manager– (Global) Centrally manage & secure organization’s projects with custom Folder📂 hierarchy👑

  • Organization Resource⚙️ is root node in hierarchy👑, folders📂 per your business needs
  • Tied 1:1 to a Cloud Identity / G Suite domain, then owns all newly created projects
  • Without this organization, specific identities (people) must own GCP☁️ projects
  • “Recycle bin” allows undeleting projects
  • Define custom IAM policies at organization, Folder📂, or project levels
  • No charge for this service

Cloud Identity-Aware Proxy (IAP)– (Global) Guards apps running 🏃‍♂️ on GCP☁️ via identity verification, not VPN access

  • Based on CLB & IAM, and only passes authed requests through
  • Grant access to any IAM identities, incl. group & service accounts
  • Relatively straightforward to set up
  • Pay for load balancing🏋️‍♀️ / protocol forwarding rules and traffic🚦

Cloud Audit Logging– (Global) “Who did what, where and when?” within GCP☁️ projects

  • Maintains non-tamperable audit logs for each project and organization:
    • Admin Activity and System Events (400-day retention)
    • Access Transparency (400-day retention)
      • Shows actions by Google support staff
    • Data Access (30-day retention)
      • For GCP-visible services (e.g. can’t see into MySQL DB on GCE)
  • Data Access logs priced through Stackdriver Logging; rest are free

Security Management – Monitoring🎛 and Response

Cloud Armor🛡 – (Global) Edge-level protection from DDoS & other attacks on Global🌎 HTTP(S) LB🏋️‍♀️

  • Offload work: Blocked attacks never reach your systems
  • Monitor: Detailed request-level logs available in Stackdriver Logging
  • Manage IPs with CIDR-based allow/block lists (aka whitelist/blacklist)
  • More intelligent rules forthcoming (e.g. XSS, SQLi, geo-based🌎, custom)
  • Preview effect of changes before making them live
  • Pay per policy and rule configured, plus for incoming request volume
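The CIDR-based allow/block lists above are easy to picture with the stdlib `ipaddress` module (the rule ordering and names here are my own simplification of Cloud Armor's prioritized rules):

```python
import ipaddress

# Toy edge policy: block listed ranges first, then fall through to allow
BLOCKED = [ipaddress.ip_network("203.0.113.0/24")]
ALLOWED = [ipaddress.ip_network("0.0.0.0/0")]  # default rule: allow the rest

def check(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in BLOCKED):
        return "deny(403)"   # blocked attacks never reach your backends
    if any(addr in net for net in ALLOWED):
        return "allow"
    return "deny(403)"

print(check("203.0.113.7"))   # deny(403) – inside the blocked range
print(check("198.51.100.1"))  # allow
```

In the real service these rules sit at the edge in front of the Global HTTP(S) load balancer🏋️‍♀️, so denied traffic🚦 is dropped before it ever reaches your instances.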

Cloud Security Scanner– (Global) Free but limited GAE app vulnerability scanner with “very low false positive rates”

  • “After you set up a scan, Cloud☁️ Security🔒 Scanner automatically crawls your application, following all links🔗 within the scope of your starting URLs, and attempts to exercise as many user inputs and event handlers as possible.”
  • Can identify:
    • Cross-site-scripting (XSS)
    • Flash🔦 injection💉
    • Mixed content (HTTP in HTTPS)
    • Outdated/insecure libraries📚

Cloud Data Loss Prevention API (DLP) – (Global) Finds and optionally redacts sensitive info in unstructured data streams

  • Helps you minimize what you collect, expose, or copy to other systems
  • 50+ sensitive data detectors, including credit card numbers, names, social security🔒 numbers, passport numbers, driver’s license numbers (US and some other jurisdictions), phone numbers, and other personally identifiable information (PII)
  • Data can be sent directly, or API can be pointed at GCS, BQ, or Cloud☁️ DataStore
  • Can scan both text and images🖼
  • Pay for amount of data processed (per GB) – and it gets cheaper at large volumes
    • Pricing for storage 🗄now very simple (June 2019), but for streaming is still a mess

Event Threat Detection (ETD)– (Global) Automatically scans your Stackdriver logs for suspicious activity

  • Uses industry-leading threat intelligence, including Google Safe Browsing
  • Quickly detects many possible threats, including:
    • Malware, crypto-mining, outgoing DDoS attacks, port scanning, brute-force SSH
    • Also: Unauthorized access to GCP☁️ resources via abusive IAM access
  • Can export parsed logs to BigQuery for forensic analysis
  • Integrates with SIEMs like Google’s Cloud☁️ SCC or via Cloud Pub/Sub
  • No charge for ETD, but charged for its usage of other GCP☁️ services (like SD Logging)

Cloud Security Command Center (SCC) – (Global)

  • “Comprehensive security🔒 management and data risk platform for GCP☁️”
  • Security🔒 Information and Event Management (SIEM) software
  • “Helps you prevent, detect & respond to threats from a single pane of glass”
  • Use “Security🔒 Marks” (aka “marks”) to group, track, and manage resources
  • Integrate ETD, Cloud☁️ Scanner, DLP, & many external security🔒 finding sources
  • Can alert 🔔 to humans & systems; can export data to external SIEM
  • Free! But charged for services used (e.g. DLP API, if configured)
  • Could also be charged for excessive uploads of external findings

Encryption Key Management 🔐

Cloud Key Management Services (KMS)– (Regional, Multi-Regional, Global) Low-latency service to manage and use cryptographic keys🔑

  • Supports symmetric (e.g. AES) and asymmetric (e.g. RSA, EC) algorithms
  • Move secrets out of code (and the like) and into the environment, in a secure way
  • Integrated with IAM & Cloud☁️ Audit Logging to authorize & track key🔑 usage
  • Rotate keys🔑 used for new encryption 🔐 either automatically or on demand
    • Still keeps old active key🔑 versions, to allow decrypting
  • Key🔑 deletion has 24-hour delay, “to prevent accidental or malicious data loss”
  • Pay for active key🔑 versions stored over time
  • Pay for key🔑 use operations (i.e. encrypt/decrypt; admin operations are free)
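The rotation behavior above (new encryptions use the newest key version, but old versions stay active so earlier ciphertexts still decrypt) can be modeled in a few lines. This is purely illustrative — XOR with a random pad stands in for real AES, so never use it for actual crypto:

```python
import os

class CryptoKey:
    """Toy KMS key: a list of versions; the newest is the primary."""
    def __init__(self):
        self.versions = [os.urandom(32)]   # version 0

    def rotate(self):
        self.versions.append(os.urandom(32))

    def encrypt(self, plaintext: bytes):
        v = len(self.versions) - 1         # always encrypt with the primary
        pad = self.versions[v]
        ct = bytes(b ^ pad[i % 32] for i, b in enumerate(plaintext))
        return v, ct                       # ciphertext remembers its version

    def decrypt(self, version: int, ct: bytes) -> bytes:
        pad = self.versions[version]       # old versions kept for decryption
        return bytes(b ^ pad[i % 32] for i, b in enumerate(ct))

key = CryptoKey()
old = key.encrypt(b"secret")
key.rotate()                               # rotation adds a new primary
new = key.encrypt(b"secret")
assert new[0] != old[0]                    # different key versions used
print(key.decrypt(*old))                   # b'secret' – old version still works
print(key.decrypt(*new))                   # b'secret'
```

That's why "rotate" in KMS is cheap and safe: nothing gets re-encrypted, the old versions simply stop being used for new encryptions.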

Cloud Hardware Security Module (HSM)– (Regional, Multi-Regional, Global) Cloud KMS keys🔑 managed by FIPS 140-2 Level 3 certified HSMs

  • Device hosts encryption 🔐 keys🔑 and performs cryptographic operations
  • Enables you to meet compliance that mandates hardware environment
  • Fully integrated with Cloud☁️ KMS
    • Same API, features, IAM integration
  • Priced like Cloud KMS: Active key🔑 versions stored & key🔑 operations
    • But some key🔑 types more expensive: RSA, EC, Long AES

Operations and Management

Google Stackdriver– (Global) Family of services for monitoring, logging & diagnosing apps on GCP/AWS/hybrid

  • Service integrations add lots of value – among Stackdriver and with GCP☁️
  • One Stackdriver account can track multiple:
    • GCP☁️ projects
    • AWS☁️ accounts
    • Other resources
  • Simple usage-based pricing
    • No longer previous system of tiers, allotments, and overages

Stackdriver Monitoring– (Global) Gives visibility into perf, uptime, & overall health of Cloud☁️ apps (based on collectd)

  • Includes built-in/custom metrics, dashboards, Global🌎 uptime monitoring, & alerts
  • Follow the trail: Links🔗 from alerts to dashboards/charts to logs to traces
  • Cross-Cloud☁️: GCP☁️, of course, but monitoring🎛 agent also supports AWS
  • Alerting policy config includes multi-condition rules & Resource⚙️ organization
  • Alert 🔔 via email, GCP☁️ Mobile App, SMS, Slack, PagerDuty, AWS SNS, webhook, etc.
  • Automatic GCP☁️/Anthos metrics always free
  • Pay for API calls & per MB for custom or AWS metrics

Stackdriver Logging – (Global) Store, search🔎, analyze, monitor, and alert 🔔 on log data & events (based on Fluentd)

  • Collection built into some GCP☁️, AWS support with agent, or custom send via API
  • Debug issues via integration with Stackdriver Monitoring, Trace & Error Reporting
  • Create real-time metrics from log data, then alert 🔔 or chart them on dashboards
  • Send real-time log data to BigQuery for advanced analytics and SQL-like querying
  • Powerful interface to browse, search, and slice log data
  • Export log data to GCS to cost-effectively store log archives
  • Pay per GB ingested & stored for one month, but first 50GB/project free

Stackdriver Error Reporting– (Global) Counts, analyzes, aggregates, & tracks crashes in helpful centralized interface

  • Smartly aggregates errors into meaningful groups tailored to language/framework
  • Instantly alerts when a new app error cannot be grouped with existing ones
  • Link🔗 directly from notifications to error details:
    • Time chart, occurrences, affected user count, first/last seen dates, cleaned stack
  • Exception stack trace parser knows Java☕️, Python🐍, JavaScript, Ruby💎, C#, PHP, & Go🟢
  • Jump from stack frames to source to start debugging
  • No direct charge; pay for source data in Stackdriver Logging

Stackdriver Trace– (Global) Tracks and displays call tree 🌳 & timings across distributed systems, to debug perf

  • Automatically captures traces from Google App Engine
  • Trace API and SDKs for Java, Node.js, Ruby, and Go capture traces from anywhere
  • Zipkin collector allows Zipkin tracers to submit data to Stackdriver Trace
  • View aggregate app latency info or dig into individual traces to debug problems
  • Generate reports on demand and get daily auto reports per traced app
  • Detects app latency shift (degradation) over time by evaluating perf reports
  • Pay for ingesting and retrieving trace spans

Stackdriver Debugger– (Global) Grabs program state (callstack, variables, expressions) in live deploys, low impact

  • Logpoints repeat for up to 24h; fuller snapshots run once but can be conditional
  • Source view supports Cloud Source Repository, Github, Bitbucket, local, & upload
  • Java☕️ and Python🐍 supported on GCE, GKE, and GAE (Standard and Flex)
  • Node.js and Ruby💎supported on GCE, GKE, and GAE Flex; Go only on GCE & GKE
  • Automatically enabled for Google App Engine apps, agents available for others
  • Share debugging sessions with others (just send URL)

Stackdriver Profiler– (Global) Continuous CPU and memory profiling to improve perf & reduce cost

  • Low overhead (typical: 0.5%; Max: 5%) – so use it in prod, too!
  • Supports Go, Java, Node.js, and Python (3.2+)
  • Agent-based
  • Saves profiles for 30 days
  • Can download profiles for longer-term storage

Cloud Deployment Manager– (Global) Create/manage resources via declarative templates: “Infrastructure as Code”

  • Declarative allows automatic parallelization
  • Templates written in YAML, Python🐍, or Jinja2
  • Supports input and output parameters, with JSON schema
  • Create and update of deployments both support preview
  • Free service: Just pay for resources involved in deployments

Cloud Billing API 🧾– (Global) Programmatically manage billing for GCP☁️ projects and get GCP☁️ pricing

  • Billing 🧾 Config
    • List billing🧾 accounts; get details and associated projects for each
    • Enable (associate), disable (disassociate), or change a project’s billing account
  • Pricing
    • List billable SKUs; get public pricing (including tiers) for each
    • Get SKU metadata like regional availability
  • Export of current bill to GCS or BQ is possible – but configured via console, not API

Development and APIs

Cloud Source Repositories – (Global) Hosted private Git repositories, with integrations to GCP☁️ and other hosted repos

  • Support standard Git functionality
  • No enhanced workflow support like pull requests
  • Can set up automatic sync from GitHub or Bitbucket
  • Natural integration with Stackdriver debugger for live-debugging deployed apps
  • Pay per project-user active each month (not prorated)
  • Pay per GB-month of data storage 🗄(prorated), Pay per GB of Data egress

Cloud Build 🏗 – (Global) Continuously takes source code and builds, tests and deploys it – CI/CD service

  • Trigger from Cloud Source Repository (by branch, tag or commit) or zip🤐 in GCS
    • Can trigger from GitHub and Bitbucket via Cloud☁️ Source Repositories RepoSync
  • Runs many builds in parallel (currently 10 at a time)
  • Dockerfile: super-simple build+push – plus scans for package vulnerabilities
  • JSON/YAML file: Flexible🧘‍♀️ & Parallel Steps
  • Push to GCR & export artifacts to GCS – or anywhere your build steps write
  • Maintains build logs and build history
  • Pay per minute of build time – but free tier is 120 minutes per day

Container Registry (GCR) 📦– (Regional, Multi-Regional) Fast🏃‍♂️, private Docker🐳 image storage 🗄 (based on GCS) with Docker🐳 V2 Registry API

  • Creates & manages a multi-regional GCS bucket 🗑, then translates GCR calls to GCS
  • IAM integration simplifies builds and deployments within GCP☁️
  • Quick deploys because of GCP☁️ networking to GCS
  • Directly compatible with standard Docker🐳 CLI; native Docker🐳 Login Support
  • UX integrated with Cloud☁️ Build & Stackdriver Logs
  • UI to manage tags and search for images🖼
  • Pay directly for storage 🗄and egress of underlying GCS (no overhead)

Cloud Endpoints – (Global) Handles authorization, monitoring, logging, & API keys🔑 for APIs backed by GCP☁️

  • Proxy instances are distributed and hook into Cloud Load Balancer 🏋️‍♀️
  • Super-fast🏃‍♂️ Extensible Service Proxy (ESP) container based on nginx: <1 ms /call
  • Uses JWTs and integrates with Firebase 🔥, Auth0, & Google Auth
  • Integrates with Stackdriver Logging and Stackdriver Trace
  • Extensible Service Proxy (ESP) can transcode HTTP/JSON to gRPC
    • But API needs to be Resource⚙️-oriented (i.e RESTful)        
  • Pay per call to your API

Apigee API Platform – (Global) Full-featured & enterprise-scale API management platform for whole API lifecycle

  • Transform calls between different protocols: SOAP, REST, XML, binary, custom
  • Authenticate via OAuth/SAML/LDAP: authorize via Role-Based Access Control
  • Throttle traffic🚦 with quotas, manage API versions, etc.
  • Apigee Sense identifies and alerts administrators to suspicious API behaviors
  • Apigee API Monetization supports various revenue models / rate plans
  • Team and Business tiers are flat monthly rate with API call quotas & feature sets
  • “Enterprise” tier and special feature pricing are “Contact Sales”

Test Lab for Android – (Global) Cloud☁️ infrastructure for running 🏃‍♂️ test matrix across variety of real Android devices

  • Production-grade devices flashed with Android version and locale you specify
  • Robo🤖 test captures log files, saves annotated screenshots & video to show steps
    • Default completely automatic but still deterministic, so can show regressions
    • Can record custom script
  • Can also run Espresso and UI Automator 2.0 instrumentation tests
  • Firebase Spark and Flame plans have daily allotment of physical and virtual tests
  • Blaze (PAYG) plan charges per device-hour – much less for virtual devices

“Well, we all shine☀️ on… Like the moon🌙 and the stars🌟 and the sun🌞

Thanks –


Week of October 23rd

Part III of a Cloud️ Journey

“I’m learning to fly ✈️, around the clouds ☁️

Hi All –

Happy National Mole Day! 👨‍🔬👩🏾‍🔬

“Open your eyes 👀, look up to the skies🌤 and see 👓

As we have all learned, Cloud computing ☁️ empowers us all to focus our time 🕰 on dreaming 😴 up and creating the next great scalable ⚖️ applications. In addition, Cloud computing☁️ lets us spend less time 🕰 worrying😨 about infrastructure, managing and maintaining deployment environments, or agonizing😰 over security🔒. Google evangelizes these principles more strongly💪 than any other company in the world 🌎.

Google’s strategy for cloud computing☁️ is differentiated by providing open source runtime systems and a high-quality developer experience where organizations could easily move workloads from one cloud☁️ provider to another.

Once again, this past week we continued our exploration with GCP by finishing the last two courses as part of Google Cloud Certified Associate Cloud Engineer Path on Pluralsight. Our ultimate goal was to have a better understanding of the various GCP services and features and be able to apply this knowledge, to better analyze requirements and evaluate numerous options available in GCP. Fortunately, we gained this knowledge and a whole lot more! 😊

Guiding us through another great introduction on Elastic Google Cloud Infrastructure: Scaling and Automation were well-known friends Phillip Maier and Mylene Biddle. Then taking us through the rest of the way through this amazing course was the always passionate Priyanka Vergadia.

Then finally taking us down the home stretch 🏇 with Architecting with Google Kubernetes Engine – Foundations (which was the last of this amazing series of Google Goodness 😊) were famous Googlers Evan Jones and Brice Rice.  …And just to put the finishing touches 👐 on this magical 🎩💫 mystery tour was Eoin Carrol, who gave us an in-depth look at Google’s game changer in modernizing existing applications and building cloud-native☁️ apps anywhere with Anthos.

After a familiar introduction by Philip and Mylene we began delving into the comprehensive and flexible 🧘‍♀️ infrastructure and platform services provided by GCP.

“Across the clouds☁️ I see my shadow fly✈️… Out of the corner of my watering💦 eye👁

Interconnecting Networks – There are 5 ways of connecting your infrastructure to GCP:

  1. Cloud VPN
  2. Dedicated interconnect
  3. Partner interconnect
  4. Direct peering
  5. Carrier peering

Cloud VPN – securely connects your on-premises network to your GCP VPC network. To connect your on-premises network via Cloud VPN, you configure the Cloud VPN gateway, the peer (on-premises) VPN gateway, and two VPN tunnels.

  • Useful for low-volume connections
  • 99.9% SLA
  • Supports:
    • Site-to-site VPN
    • Static routes
    • Dynamic routes (Cloud Router)
    • IKEv1 and IKEv2 ciphers

Please note: The maximum transmission unit or MTU for your on-premises VPN gateway cannot be greater than 1460 bytes.

$ gcloud compute --project "qwiklabs-gcp-02-9474b560327d" target-vpn-gateways create "vpn-1" --region "us-central1" --network "vpn-network-1"

$ gcloud compute --project "qwiklabs-gcp-02-9474b560327d" forwarding-rules create "vpn-1-rule-esp" --region "us-central1" --address "" --ip-protocol "ESP" --target-vpn-gateway "vpn-1"

$ gcloud compute --project "qwiklabs-gcp-02-9474b560327d" forwarding-rules create "vpn-1-rule-udp500" --region "us-central1" --address "" --ip-protocol "UDP" --ports "500" --target-vpn-gateway "vpn-1"

$ gcloud compute --project "qwiklabs-gcp-02-9474b560327d" forwarding-rules create "vpn-1-rule-udp4500" --region "us-central1" --address "" --ip-protocol "UDP" --ports "4500" --target-vpn-gateway "vpn-1"

$ gcloud compute --project "qwiklabs-gcp-02-9474b560327d" vpn-tunnels create "vpn-1-tunnel-2" --region "us-central1" --peer-address "" --shared-secret "GCP rocks" --ike-version "2" --local-traffic-selector "" --target-vpn-gateway "vpn-1"

Cloud Interconnect and Peering

Dedicated connections provide a direct connection to Google’s network while shared connections provide a connection to Google’s network through a partner

Comparison of Interconnect Options

  • IPsec VPN Tunnel – Encrypted tunnel to VPC networks through the public internet
    • Capacity 1.5-3 Gbps/Tunnel
    • Requirements VPN Gateway
    • Access Type – Internal IP Addresses
  • Cloud Interconnect – Dedicated interconnect provides direct physical connections
    • Capacity 10 Gbps/Link -100 Gbps/ link
    • Requirements – connection in colocation facility
    • Access Type– Internal IP Addresses
  • Partner Interconnect– provides connectivity through a supported service provider
    • 50 Mbps -10 Gbps/connection
    • Requirements – Service provider
    • Access Type– Internal IP Addresses


Direct Peering provides a direct connection between your business network and Google.

  • Broad-reaching edge network locations
  • Capacity 10 Gbps/link
  • Exchange BGP routes
  • Reach all of Google’s services
  • Peering requirement (Connection in GCP PoPs)
  • Access Type: Public IP Addresses

Carrier Peering provides connectivity through a supported partner

  • Carrier Peering partner
  • Capacity varies based on parent offering
  • Reach all of Google’s services
  • Partner requirements
  • No SLA
  • Access Type: Public IP Addresses

Choosing the right connection – Decision Tree 🌳

Shared VPC and VPC Peering

Shared VPC allows an organization to connect resources from multiple projects to a common VPC network.

VPC Peering is a decentralized or distributed approach to multi project networking because each VPC network may remain under the control of separate administrator groups and maintains its own global firewall and routing tables.

Load Balancing 🏋️‍♂️ and Autoscaling

Cloud Load Balancing 🏋️‍♂️ – distributes user traffic 🚦across multiple instances of your applications. By spreading the load, load balancing 🏋️‍♂️ reduces the risk that your applications experience performance issues. There are 2 basic categories of Load balancers:  Global load balancing 🏋️‍♂️ and Regional load balancing 🏋️‍♂️.

Global load balancers – used when workloads are distributed across the world 🌎. Global load balancers route traffic🚦 to a backend service in the region closest to the user, to reduce latency.

They are software-defined, distributed systems using Google Front Ends (GFEs), which reside in Google’s PoPs and are distributed globally.

Types of Global Load Balancers

  • External HTTP and HTTPS (Layer 7)
  • SSL Proxy (Layer 4)
  • TCP Proxy (Layer 4)

Regional load balancers – when all workloads are in the same region

  • Regional load balancing 🏋️‍♂️ routes traffic🚦 within a given region.
  • Regional Load balancing 🏋️‍♂️ uses internal and network load balancers.

Internal load balancers are software-defined distributed systems (using Andromeda), while network load balancers use the Maglev distributed system.

Types of Regional Load Balancers

  • Internal TCP/UDP (Layer 4)
  • Internal HTTP and HTTPS (Layer 7)
  • TCP/UDP Network (Layer 4)

Managed instance groups – a collection of identical virtual machine instances that you control as a single entity. (Same as creating a VM but applying specific rules to an instance group)

  • Deploy identical instances based on instance template
  • Instance group can be resized
  • Manager ensures all instances are Running
  • Typically used with autoscaling ⚖️
  • Can be single zone or regional

Regional managed instance groups are usually recommended over zonal managed instance groups because they allow you to spread the application’s load across multiple zones through replication and protect against zonal failures.

         Steps to create a Managed Instance Group:

  1. Decide the location and whether the instance group will be single- or multi-zone
  2. Choose the ports you are going to allow load balancing🏋️‍♂️ across
  3. Select an instance template
  4. Decide on autoscaling ⚖️ and the criteria for its use
  5. Create a health ⛑ check to determine instance health and how traffic🚦 should route
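The steps above map to a handful of gcloud commands. A minimal sketch, assuming hypothetical resource names (my-template, my-mig, my-health-check) and an already-authenticated Cloud SDK:

```shell
# 1–3. Create an instance template, then a regional managed instance group from it
gcloud compute instance-templates create my-template \
    --machine-type=e2-medium --image-family=debian-11 --image-project=debian-cloud
gcloud compute instance-groups managed create my-mig \
    --region=us-central1 --template=my-template --size=3

# 4. Attach an autoscaling policy (CPU utilization criteria)
gcloud compute instance-groups managed set-autoscaling my-mig \
    --region=us-central1 --max-num-replicas=10 --target-cpu-utilization=0.6

# 5. Create a health check and apply it for autohealing
gcloud compute health-checks create http my-health-check --port=80
gcloud compute instance-groups managed update my-mig \
    --region=us-central1 --health-check=my-health-check --initial-delay=300
```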

Autoscaling and health checks

Managed instance groups offer autoscaling ⚖️ capabilities

  • Dynamically add/remove instances
    • Increases in load
    • Decrease in load
  • Autoscaling policy
    • CPU Utilization
    • Load balancing 🏋️‍♂️capacity
    • Monitoring 🎛 metrics
    • Queue-based workload

HTTP/HTTPs load balancing

  • Target HTTP/HTTPS proxy
  • One signed SSL certificate installed (minimum)
  • Client SSL session terminates at the load balancer 🏋️‍♂️
  • Supports the QUIC transport layer protocol
  • Global load balancing🏋️‍♂️
  • Anycast IP address
  • HTTP on port 80 or 8080
  • HTTPS on port 443
  • IPv4 or IPv6
  • Autoscaling⚖️
  • URL maps 🗺

Backend Services

  • Health ⛑ check
  • Session affinity (Optional)
  • Time out setting (30-sec default)
  • One or more backends
    • An instance group (managed or unmanaged)
    • A balancing mode (CPU utilization or RPS)
    • A capacity scaler ⚖️ (ceiling % of CPU/Rate targets)

SSL certificates

  • Required for HTTP/HTTPS load balancing 🏋️‍♂️
  • Up to 10 SSL certificates /target proxy
  • Create an SSL certificate resource 

SSL proxy load balancing – a global 🌎 load balancing service for encrypted non-HTTP traffic.

  • Global 🌎 load balancing for encrypted non-HTTP traffic 🚦
  • Terminate SSL session at load balancing🏋️‍♂️ Layer
  • IPv4 or IPv6 clients
  • Benefits:
    • Intelligent routing
    • Certificate management
    • Security🔒 patching
    • SSL policies

TCP proxy load balancing – a global load balancing service for unencrypted non-HTTP traffic.

  • Global load balancing for unencrypted non-HTTP traffic 🚦
  • Terminates TCP session at load balancing🏋️‍♂️ Layer
  • IPv4 or IPv6 clients
  • Benefits:
    • Intelligent routing
    • Security🔒 patching

Network load balancing – is a regional, non-proxied load balancing service.

  • Regional, non-proxied load balancer
  • Forwarding rules (IP protocol data)
  • Traffic:
    • UDP
    • TCP/SSL ports
  • Backends:
    • Instance group
    • Target pool🏊‍♂️ resources – define a group of instances that receive incoming traffic 🚦from forwarding rules:
      • Forwarding rules (TCP and UDP)
      • Up to 50 per project
      • One health check
      • Instances must be in the same region

Internal load balancing – is a regional private load balancing service for TCP and UDP based traffic🚦

  • Regional, private load balancing 🏋️‍♂️
    • VM instances in same region
    • RFC 1918 IP address
  • TCP/UDP traffic 🚦
    • Reduced latency, simpler configuration
    • Software-defined, fully distributed load balancing (is not based on a device or a virtual machine.)

“..clouds☁️ roll by reeling is what they say … or is it just my way?”

Infrastructure Automation – Infrastructure as code (IaC)

Automate repeatable tasks like provisioning, configuration, and deployments for one machine or millions.

Deployment Manager – is an infrastructure deployment service that automates the creation and management of GCP resources. By defining templates, you only have to specify the resources once and then you can reuse them whenever you want.

  • Deployment Manager is an infrastructure automation tool🛠
  • Declarative Language (allows you to specify what the configuration should be and let the system figure out the steps to take)
  • Focus on the application
  • Parallel deployment
  • Template-driven

Deployment manager creates all the resources in parallel.
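A Deployment Manager configuration is a declarative YAML file listing the resources to create. A minimal sketch, where the deployment and resource names (my-vm, config.yaml) are illustrative:

```yaml
# config.yaml – declares WHAT should exist; Deployment Manager figures out the steps
resources:
- name: my-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/e2-medium
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default
```

It would then be deployed with something like `gcloud deployment-manager deployments create my-deployment --config config.yaml`.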

Additional infrastructure as code tools🛠 for GCP

  • Terraform 🌱
  • Chef 👨‍🍳
  • Puppet
  • Ansible
  • Packer 📦

It’s recommended that you provision and manage resources on GCP with the tools🛠 you are already familiar with.

GCP Marketplace 🛒 – lets you quickly deploy functional software packages that run 🏃‍♂️ on GCP.

  • Deploy production-grade solutions
  • Single bill for GCP and third-party services
  • Manage solutions using Deployment Manager
  • Notification 📩 when a security update is available
  • Direct access to partner support

“… Must be the clouds☁️☁️ in my eyes 👀

Managed Services – automates common activities, such as change requests, monitoring 🎛, patch management, security🔒, and backup services, and provides full lifecycle 🔄 services to provision, run🏃‍♂️, and support your infrastructure.

BigQuery is GCP serverless, highly scalable⚖️, and cost-effective cloud Data warehouse

  • Fully managed
  • Petabyte scale
  • SQL interface
  • Very fast🏃‍♂️
  • Free usage tier
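The SQL interface is also reachable from the command line through the bq tool. A hedged example against one of Google’s public datasets (the dataset path below is from the BigQuery public data program):

```shell
# Run a standard SQL query from the CLI against a BigQuery public dataset
bq query --use_legacy_sql=false \
  'SELECT name, SUM(number) AS total
   FROM `bigquery-public-data.usa_names.usa_1910_2013`
   GROUP BY name ORDER BY total DESC LIMIT 5'
```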

Cloud Dataflow🚰 executes a wide variety of data processing patterns

  • Server-less, fully managed data processing
  • Batch and stream processing with autoscale ⚖️
  • Open source programming using Apache beam 🎇
  • Intelligently scale to millions of QPS

Cloud Dataprep – visually explore, clean, and prepare data for analysis and machine learning

  • Serverless, works at any scale⚖️
  • Suggest ideal analysis
  • Integrated partner service operated by Trifacta

Cloud Dataproc – is a service for running Apache Spark 💥 and Apache Hadoop 🐘 clusters

  • Low cost (per-second, preemptible)
  • Super-fast🏃‍♂️ to start, scale ⚖️, and shut down
  • Integrated with GCP
  • Managed Service
  • Simple and familiar

“Captain Jack will get you by tonight 🌃… Just a little push, and you’ll be smilin’😊 “

Architecting with Google Kubernetes Engine – Foundations

After a stellar🌟 job by Priyanka taking us through the load balancing 🏋️‍♂️ options, infrastructure as code, and some of the managed service options in GCP, it was time to take the helm⛵️ and get our K8s ☸️ hat 🧢 on.

Cloud Computing and Google Cloud

Just to whet our appetite 😋 for Cloud Computing ☁️, Evan takes us through 5 fundamental attributes:

  1. On-demand self-services (No🚫 human intervention needed to get resources)
  2. Broad network access (Access from anywhere)
  3. Resource Pooling🏊‍♂️ (Provider shares resources to customers)
  4. Rapid Elasticity 🧘‍♀️ (Get more resources quickly as needed)
  5. Measured 📏Service (Pay only for what you consume)

Next, Evan introduced some of the GCP services under Compute, like Compute Engine, Google Kubernetes Engine (GKE), App Engine, and Cloud Functions. He then discussed some of Google’s managed services for Storage, Big Data, and Machine Learning.

Resource Management


  • GCP provides resources in multi-regions, regions, and zones.
  • GCP divides the world 🌎 up into 3 multi-regional areas: the Americas, Europe, and Asia Pacific.
  • These 3 multi-regional areas are divided into regions, which are independent geographic areas on the same continent.
  • Regions are divided into zones (like a data center) which are deployment areas for GCP resources.

The network interconnects with the public internet at more than 90 internet exchanges and more than 100 points of presence worldwide🌎 (and growing)

  • Zonal resources operate exclusively in a single zone
  • Regional resources span multiple zones
  • Global resources could be managed across multiple regions.
  • Resources have hierarchy
    • Organization is the root node of a GCP resource hierarchy
    • Folders reflect the hierarchy of your enterprise
    • Projects are identified by unique Project ID and Project Number
  • Cloud Identity Access Management (IAM) allows you to fine tune access controls on all resources in GCP.


  • Billing accounts pay for project resources
  • A billing account is linked to one or more projects
  • Charged automatically or invoiced every month or at threshold limit
  • Subaccounts can be used for separate billing for projects

How to keep billing under control

  1. Budgets and alerts 🔔
  2. Billing export 🧾
  3. Reports 📊
    • Quotas are helpful limits
      • Quotas apply at the level of GCP Project
      • There are two types of quotas
        • Rate quotas reset after a specific time 🕰
        • Allocation quotas govern the number of resources in projects

GCP implements quotas which limit unforeseen extra billing charges. Quotas are designed to prevent the over-consumption of resources because of an error or a malicious attack 👿.

Interacting with GCP -There are 4 ways to interact with GCP:

  1. Cloud Console
  • Web-based GUI to manage all Google Cloud resources
  • Executes common tasks using simple mouse clicks
  • Provides visibility into GCP projects and resources

2. SDK

3. Cloud Shell

  • gcloud
  • kubectl
  • gsutil
  • bq
  • bt
  • Temporary Compute Engine VM
  • Command-line access to the instance through a browser
  • 5 GB of persistent disk storage ($HOME dir)
  • Preinstalled Cloud SDK and other tools🛠
    • gcloud: for working with Compute Engine, Google Kubernetes Engine (GKE) and many Google Cloud services
    • gsutil: for working with Cloud Storage
    • kubectl: for working with GKE and Kubernetes
    • bq: for working with BigQuery
  • Language support for Java ☕️Go, Python 🐍 , Node.js, PHP, and Ruby♦️
  • Web 🕸preview functionality
  • Built-in authorization for access to resources and instances

4. Console mobile App
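As a quick taste of those preinstalled Cloud Shell tools, here are a few representative commands (the project, bucket, and file names are placeholders):

```shell
gcloud config set project my-project        # set the active project
gcloud compute instances list               # list Compute Engine VMs
gsutil mb gs://my-unique-bucket-name        # make a Cloud Storage bucket
gsutil cp report.csv gs://my-unique-bucket-name/   # copy a file into it
bq ls                                       # list BigQuery datasets
kubectl get pods                            # list Pods in the current GKE context
```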

“Cloud☁️ hands🤲 reaching from a rainbow🌈 tapping at the window, touch your hair”

Introduction to Containers

Next Evan took us through a history of computing, first starting with deploying applications on physical servers. This solution wasted resources and took a lot of time to deploy, maintain, and scale. It also wasn’t very portable, as applications were built for a specific operating system, and sometimes even for specific hardware as well.

Next transitioning to Virtualization. Virtualization made it possible to run multiple virtual servers and operating systems on the same physical computer. A hypervisor is the software layer that removes the dependencies of an operating system with its underlying hardware. It allows several virtual machines to share that same hardware.

  • A hypervisor creates and manages virtual machines
  • Running multiple apps on a single VM
  • VM-centric way to solve this problem (run a dedicated virtual machine for each application.)

Finally, Evan introduced us to containers as they solve a lot of the short comings of Virtualization like:

  • Applications that share dependencies are not isolated from each other
  • Resource requirements from one application can starve out another application
  • A dependency upgrade for one application might cause another to simply stop working

Containers are isolated user spaces for running application code. Containers are lightweight as they don’t carry a full operating system. They could be scheduled or packed 📦tightly onto the underlying system, which makes them very efficient.

Containerization is the next step in the evolution of managing code.

Benefits of Containers:

  • Containers appeal to developers 👩🏽‍💻
  • Deliver high performing and scalable ⚖️ applications.
  • Containers run the same anywhere
  • Containers make it easier to build applications that use Microservices design pattern
    • Microservices
      • Highly maintainable and testable.
      • Loosely coupled. Independently deployable. Organized around business capabilities.

Containers and Container Images

An Image is an application and its dependencies

A container is simply a running 🏃‍♂️ instance of an image.

Docker 🐳 is an open-source technology that allows you to create and run 🏃‍♂️ applications in containers, but it doesn’t offer a way to orchestrate those applications at scale ⚖️

  • Containers use a varied set of Linux technologies
    • Containers use Linux namespaces to control what an application can see
    • Containers use Linux cgroups to control what an application can use.
    • Containers use union file systems to efficiently encapsulate applications and their dependencies into a set of clean, minimal layers.
  • Containers are structured in layers
    • A container manifest is the file of instructions that the image-building tool🛠 reads
    • A Dockerfile 🐳 is a formatted container manifest
      • Each instruction in the Dockerfile specifies a read-only layer inside the container image.
      • A writable, ephemeral top layer is added when the container runs
  • Containers promote smaller shared images
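An illustrative Dockerfile sketch (the file names and base image are placeholders); each instruction below would become one read-only layer, with the running container adding a writable ephemeral layer on top:

```dockerfile
# Base image: these layers are read-only and shared with other images
FROM python:3.11-slim
# Each of the following instructions adds one read-only layer
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
# CMD adds metadata only; the running container gets the writable ephemeral top layer
CMD ["python", "app.py"]
```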

How to get containers?

  • Download containerized software from a container registry gcr.io
  • Docker 🐳 – Build your own container using the open-source docker 🐳 command
  • Build your own container using Cloud Build
    • Cloud Build is a service that executes your builds on GCP.
    • Cloud Build can import source code from
      • Google Cloud Storage
      • Cloud Source Repositories
      • GitHub,
      • Bitbucket

Introduction to Kubernetes ☸️

Kubernetes ☸️ is an open source platform that helps you orchestrate and manage your container infrastructure on premises or in the cloud☁️.

It’s a container centric management environment. Google originated it and then donated it to the open source community.

K8s ☸️ automates the deployment, scaling ⚖️, load balancing🏋️‍♂️, logging, monitoring 🎛 and other management features of containerized applications.

  • Facilitates the features of an infrastructure as a service
  • Supports Declarative configurations.
  • Allows Imperative configuration
  • Open Source

K8s ☸️ features:

  • Supports both stateful and stateless applications
  • Autoscaling⚖️
  • Resource limits
  • Extensibility

Kubernetes also supports workload portability across on premises or multiple cloud service providers. This allows Kubernetes to be deployed anywhere. You could move Kubernetes ☸️ workloads freely without vendor lock🔒 in

Google Kubernetes Engine (GKE)

GKE easily deploys, manages and scales⚖️ Kubernetes environments for your containerized applications on GCP.

GKE Features:

  • Fully managed
    • Clusters – your containerized applications all run on top of a cluster in GKE
    • Nodes are the virtual machines that host your containers inside of a GKE cluster
  • Container-optimized OS
  • Auto upgrade
  • Auto repair🛠
  • Cluster Scaling⚖️
  • Seamless Integration
  • Identity and access management (IAM)
  • Integrated logging and monitoring (Stackdriver)
  • Integrated networking
  • Cloud Console

Compute Options Detail

 Compute Engine

  • Fully customizable virtual machines
  • Persistent disks and optional local SSDs
  • Global load balancing and autoscaling⚖️
  • Per-second billing

                  Use Cases

  • Complete control over the OS and virtual hardware
    • Well suited for lift-and-shift migrations to the cloud
    • Most flexible 🧘‍♀️ compute solution, often used when a managed solution is too restrictive

  App Engine

  • Provides a fully managed code-first platform
  • Streamlines application deployment and scalability⚖️
  • Provides support for popular programming language and application runtimes
  • Supports integrated monitoring 🎛, logging and diagnostics.

Use Cases

  • Websites
    • Mobile app📱 and gaming backends
    • Restful APIs

Google Kubernetes Engine

  • Fully managed Kubernetes Platform
  • Supports cluster scaling ⚖️, persistent disk, automated upgrades, and auto node repairs
  • Built-in integration with GCP
  • Portability across multiple environments
    • Hybrid computing
    • Multi-cloud computing

Use Cases

  • Containerized applications
    • Cloud-native distributed systems
    • Hybrid applications

Cloud Run

  • Enable stateless containers
  • Abstracts away infrastructure management
  • Automatically scales ⚖️ up⬆️ and down⬇️
  • Open API and runtime environment

Use Cases

  • Deploy stateless containers that listen for requests or events
    • Build applications in any language using any frameworks and tool🛠

Cloud Functions

  • Event-driven, serverless compute services
  • Automatic scaling with highly available and fault-tolerant design
  • Charges apply only when your code runs 🏃‍♂️
  • Triggered based on events in GCP, HTTP endpoints, and Firebase

Use Cases

  • Supporting microservice architecture
    • Serverless application backends
      • Mobile and IoT backends
      • Integrate with third-party services and APIs
    • Intelligent applications
      • Virtual assistant and chat bots
      • Video and image analysis

Kubernetes ☸️ Architecture

There are two related concepts for understanding how K8s ☸️ works: the object model and the principle of declarative management.

Pods – the basic building block of K8s ☸️

  • Smallest deployable object.
  • Containers in a Pod share resources
  • Pods are not self-healing
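A minimal Pod manifest illustrating the object model (the Pod name, label, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: web        # label used later by selectors and Services
spec:
  containers:       # containers in this Pod share network and storage resources
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
```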

Principle of declarative management – declare objects that represent the desired state of your containers.

  • K8s ☸️ creates and maintains one or more objects.
  • K8s ☸️ compares the desired state to the current state.

The Kubernetes ☸️ Control Plane ✈️ continuously monitors the state of the cluster, endlessly comparing reality to what has been declared and remedying the state as needed.

K8s ☸️ Cluster consists of a Master and Nodes

The Master coordinates the entire cluster.

  • View or change the state of the cluster including launching pods.
  • kube-API server – the single component that interacts with the Cluster
    • The kube-API server interacts with the database (etcd) on behalf of the rest of the system
  • etcd – key-value store for the most critical data of a distributed system
  • kube-scheduler – assigns Pods to Nodes
  • cloud-controller-manager – embeds cloud-specific control logic.
  • Kube-controller-manager- daemon that embeds the core control loops

Nodes run Pods.

  • kubelet is the primary “node agent” that runs on each node.
  • kube-proxy is a network proxy that runs on each node in your cluster

Google Kubernetes ☸️ Engine Concepts

GKE makes administration of K8s ☸️ much simpler

  • Master
    • GKE manages all the control plane components
    • GKE provisions and manages all the master infrastructure
  • Nodes
    • GKE manages this by deploying and registering Compute Engine instances as Nodes
    • Use node pools to manage different kinds of nodes
      • A node pool is a subset of nodes within a cluster that share a configuration, such as their amount of memory or their CPU generation.
      • Node pools are a GKE-specific feature that lets you:
        • enable automatic node upgrades
        • automatic node repairs 🛠
        • cluster auto scaling ⚖️

Zonal Cluster – has a single control plane in a single zone.

  • single-zone cluster has a single control plane running in one zone
  • multi-zonal cluster has a single replica of the control plane running in a single zone, and has nodes running in multiple zones.

Regional Cluster – has multiple replicas of the control plane, running in multiple zones within a given region.

Private Cluster – provides the ability to isolate nodes from having inbound and outbound connectivity to the public internet.

Kubernetes ☸️ Object Management – identified by a unique name and a unique identifier.

  • Objects are defined in a YAML file
  • Objects are identified by a name
  • Objects are assigned a unique identifier (UID) by K8s ☸️
  • Labels 🏷 are key value pairs that tag your objects during or after their creation.
  • Labels 🏷 help you identify and organize objects and subsets of objects.
  • Labels 🏷 can be matched by label selectors

Pods and Controller Objects

Pods have a life cycle🔄

  • Controller Object types
    • Deployment – ensure that sets of Pods are running
      • To perform the upgrade, the Deployment object will create a second ReplicaSet object, and then increase the number of (upgraded) Pods in the second ReplicaSet while it decreases the number in the first ReplicaSet
    • StatefulSet
    • DaemonSet
    • Job
  • Allocating resource quotas
  • Namespaces – provide scope for naming resources (pods, deployments and controllers.)

There are 3 initial namespaces in the cluster.

  1. default – the namespace for objects with no other namespace defined.
  2. kube-system – the namespace for objects created by the Kubernetes system itself.
  3. kube-public – the namespace for objects that are publicly readable to all users.

Best practice tip: namespace neutral YAML

  • Apply namespaces at the command-line level, which makes YAML files more flexible🧘‍♀️.
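For example, the same namespace-neutral manifest can be applied to different namespaces at the command line (the file and namespace names are illustrative):

```shell
kubectl create namespace staging
kubectl create namespace production
# One YAML file, reused across environments by passing the namespace at apply time
kubectl apply -f deployment.yaml --namespace=staging
kubectl apply -f deployment.yaml --namespace=production
```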

Advanced K8s ☸️ Objects


  • Services – define a logical set of Pods and assign a policy by which you can access those Pods
    • Services provide load-balanced 🏋️‍♂️ access to specified Pods. There are three primary types of Services:
      • ClusterIP: Exposes the service on an IP address that is only accessible from within this cluster. This is the default type.
      • NodePort: Exposes the service on the IP address of each node in the cluster, at a specific port number.
      • LoadBalancer 🏋️‍♂️: Exposes the service externally, using a load balancing 🏋️‍♂️ service provided by a cloud☁️ provider.
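A sketch of a Service manifest selecting Pods by label; the type field picks one of the three options above (the names, label, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer   # or ClusterIP (the default) / NodePort
  selector:
    app: web           # load-balances across Pods carrying this label
  ports:
  - port: 80           # port the Service exposes
    targetPort: 8080   # port the container listens on
```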


  • Volumes – a directory that is accessible to all containers in a Pod
    • Requirements of the volume can be specified using Pod specification
    • You must mount these volumes specifically on each container within a Pod
    • Set up Volumes using external storage outside of your Pods to provide durable storage

Controller Objects

  • ReplicaSets – ensures that a specified number of Pod replicas are running
    • Deployments
      • Provides declarative updates to ReplicaSets and Pods
      • Create, update, roll back, and scale⚖️ Pods, using ReplicaSets
      • Replication Controllers – perform a similar role to the combination of ReplicaSets and Deployments, but their use is no longer recommended.
    • StatefulSets – similar to a Deployment, Pods use the same container specs
    • DaemonSets – ensures that a Pod is running on all or some subset of the nodes.
    • Jobs – creates one or more Pods required to run a task
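A minimal Deployment manifest tying these controllers together; the Deployment manages a ReplicaSet that keeps three Pods running (the names and image path are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3              # the underlying ReplicaSet maintains 3 Pods
  selector:
    matchLabels:
      app: web
  template:                # Pod template stamped out by the ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: gcr.io/my-project/web:v1
```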

“Can I get an encore; do you want more?”

Migrate for Anthos – tool🛠 for getting workloads into containerized deployments on GCP

  • Automated process that moves your existing applications into a K8s ☸️ environment.

Migrate for Anthos moves VMs to containers

  • Move and convert workloads into containers
  • Workloads can start as physical servers or VMs
  • Moves workload compute to container immediately (<10 min)
  • Data can be migrated all at once or “streamed” to the cloud☁️ until the app is live in the cloud☁️

Migrate for Anthos Architecture

  • A migration requires an architecture to be built
  • A migration is a multi-step process
  • Configure processing cluster
  • Add migration source
  • Generate and review plan
  • Generate artifacts
  • Test
  • Deploy

Migrate for Anthos Installation -requires a processing cluster

         Installing Migrate for Anthos uses migctl

$migctl setup install

         Adding a source enables migration from a specific environment

$migctl source create ce my-ce-src --project my-project --zone zone

         Creating a migration generates a migration plan

$migctl migration create test-migration --source my-ce-src --vm-id my-id --intent image

         Executing a migration generates resource and artifacts

$migctl migration generate-artifacts my-migration

         Deployment files typically need modification

$migctl migration get-artifacts test-migration

Apply the configuration to deploy the workload

$kubectl apply -f deployment_spec.yaml

“And we’ll bask 🌞 in the shadow of yesterday’s triumph🏆 And sail⛵️ on the steel breeze🌬

Below are some of the destinations I am considering for my travels for next week:

Thanks –


Week of October 16th

Part II of a Cloud☁️ Journey

“Cause I don’t want to come back… Down from this Cloud☁️”

Hi All –

Happy Global 🌎 Cat 😺 Day!

Last week, we started our continuous Cloud☁️ journey exploring Google Cloud☁️ to help us better understand the core services and the full value proposition that GCP can offer.

It has been said “For modern enterprise, that Cloud☁️ is the closest thing to magic🎩🐰 that we have.” Cloud☁️ enables companies large🏢 and small🏠 to be more agile and nimble. It also empowers employees of these companies🏭 to focus on being more creative and innovative and not being bogged down in the minutiae and rigors of managing IT infrastructure. In addition, customers of these companies benefit from an overall better Customer experience as applications are more available and scalable ⚖️.

As you might know, “Google’s mission is and has as always been to organize the world’s🌎 information and make it universally accessible and useful and as result playing a meaningful role in the daily lives of billions of people” Google has been able to hold true to this mission statement through its unprecedented success from products and platforms like Search 🔎, Maps 🗺, Gmail 📧, Android📱, Google Play, Chrome and YouTube 📺. Google continues to strive for the same kind of success with their Cloud Computing☁️ offering with GCP.

This week we continued our journey with GCP through Google Cloud☁️ Infrastructure Essentials (Essential Google Cloud☁️ Infrastructure: Foundation & Essential Google Cloud☁️ Infrastructure: Core Services), a combination of lectures and Qwiklabs led by esteemed Googlers Phillip Maier, who seems to have had more cameos in the Google Cloud☁️ training videos than Stan Lee has made in all of the Marvel Movies combined, and the very inspirational Mylene Biddle, who exemplifies transformation both in the digital and real world.

Phillip and Mylene begin the course discussing Google Cloud☁️ which is a much larger ecosystem than just GCP. This ecosystem consists of open source software providers, partners, developers, third party software and other Cloud☁️ providers.

GCP uses state-of-the-art software-defined networking and distributed systems technologies to host and deliver services around the world🌎. GCP offers over 90 products and services that continue to expand. GCP spans from infrastructure as a service (IaaS) to software as a service (SaaS).

Next, Philip presented an excellent analogy comparing IT infrastructure to one of a city’s 🏙 infrastructure. “Infrastructure is the basic underlying framework of fundamental facilities and systems such as transport, 🚆communications 📞, power🔌, water🚰, fuel ⛽️ and other essential services. The people 👨‍👩‍👧‍👦 in the city 🏙 are like users 👥, and the cars 🚙🚗 and bikes 🚴‍♀️🚴‍♂️ buildings🏬 in the city 🏙 are like applications. Everything that goes into creating and supporting those applications for the users is the infrastructure.

GCP offers wide range of compute services including:

  • Compute Engine – (IaaS) runs virtual machines on demand
  • Google Kubernetes ☸️ Engine (IaaS/PaaS) – run containerized applications on a Cloud☁️ environment that Google manages under your administrative control.
  • App Engine (PaaS) is fully managed platform as a service framework. Run code in the Cloud☁️ without having to worry about infrastructure.
  • Cloud Functions (Serverless) – executes your code in response to events, whether those events occur once a day or many times ⏳

There are Four ways to interact with GCP

  1. Google Cloud Platform Console or GCP Console
  2. CloudShell and the Cloud SDK
  3. API
  4. Cloud Mobile 📱 App

CloudShell provides the following:

  • Temporary Compute Engine VM
  • Command-line access to the instance via a browser
  • 5 GB of persistent disk storage 🗄 ($HOME dir)
  • Pre-installed Cloud☁️ SDK and other tools 🛠
  • gcloud: for working with Compute Engine and many Google Cloud☁️ services
  • gsutil: for working with Cloud Storage 🗄
  • kubectl: for working with Google Container Engine and Kubernetes☸️
  • bq: for working with BigQuery
  • bt: for working with BigTable
  • Language support for Java☕️, Go, Python🐍, Node.js, PHP, and Ruby♦️
  • Web 🕸 preview functionality
  • Built-in authorization for access to resources and instances

“Virtual insanity is what we’re living in”

Virtual Networks

GCP uses a software defined network that is built on a Global 🌎 fiber infrastructure. This infrastructure makes GCP, one of the world’s 🌎 largest and fastest 🏃‍♂️networks.

Virtual Private Cloud☁️

Virtual Private Cloud☁️ (VPC) provides networking functionality to Compute Engine virtual machine (VM) instances, Google Kubernetes Engine (GKE) ☸️ containers, and the App Engine standard and flexible🧘‍♀️ environment. VPC provides networking for your Cloud-based☁️ services that is Global 🌎, scalable⚖️, and flexible🧘‍♀️.

VPC is a comprehensive set of Google managed networking objects:

  • Projects are used to encompass the network service as well as all other services in GCP
  • Networks come in three different flavors 🍨:
    • Default
    • Auto mode
    • Custom mode
  • Subnets allow for division or segregation of the environment.
  • Regions and zones (GCP DCs) provide continuous data protection and high availability.
  • IP addresses provided are internal or external
  • Virtual machines – instances from a networking perspective.
  • Routes and Firewall🔥 rules allow or deny connections to or from VMs based on a specified configuration

Projects, Networks, and Subnets

A Project is the key organizer of infrastructure resources.

  • Associates objects and services with billing🧾
  • Contains networks (up to 5) that can be shared/peered

Networks are Global 🌎 and span all available regions.

  • Has no IP address range
  • Contains subnetworks
  • Has three different options:
    • Default
      • Every Project
      • One subnet per region
      • Default firewall rules🔥
    • Auto mode
      • Default network
      • One subnet per region
      • Regional IP allocation
      • Fixed /20 subnetwork per region
      • Expandable up to /16
    • Custom mode
      • No default subnets created
      • Full control of IP ranges
      • Regional IP allocation
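Creating a custom-mode network with full control of IP ranges can be sketched with gcloud (the network, subnet names, region, and range are illustrative):

```shell
# Custom mode: no default subnets are created automatically
gcloud compute networks create my-network --subnet-mode=custom
# Define a subnet explicitly, choosing the region and IP range yourself
gcloud compute networks subnets create my-subnet \
    --network=my-network --region=us-central1 --range=10.0.0.0/24
```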

VMs, despite being in different geographic locations 🌎, take advantage of Google’s Global 🌎 fiber network. VMs appear as though they’re sitting in the same rack when it comes to the network configuration.

  • VMs can be on the same subnet but in different zones
  • A single firewall rule 🔥 can apply to both VMs

A subnet is a range of IP addresses.

  • Every subnet has 4 reserved IP addresses in its primary IP Range.
  • Subnets can be expanded without re-creating instances or any downtime ⏳
    • Cannot overlap with other subnets
    • Must be inside the RFC 1918 address spaces
    • Can be expanded but cannot be shrunk
    • Auto mode can be expanded from /20 to /16
    • Avoid large subnets (don’t scale ⚖️ beyond what is actually needed)
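Since every subnet reserves 4 addresses in its primary range, the usable VM capacity of a given range can be computed directly. A small Python sketch:

```python
import ipaddress

def usable_vm_addresses(cidr: str) -> int:
    """Usable VM addresses in a GCP subnet's primary range.

    GCP reserves 4 addresses in every primary subnet range:
    network, default gateway, second-to-last, and broadcast.
    """
    return ipaddress.ip_network(cidr).num_addresses - 4

print(usable_vm_addresses("10.0.0.0/24"))  # 252
print(usable_vm_addresses("10.0.0.0/20"))  # 4092 (auto mode's default /20)
```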

IP addresses

VMs can have internal and external IP addresses

You can also assign a range of IP addresses as aliases to a VM’s network interface using alias IP ranges

Internal IP

  • Allocated from a subnet range to VMs by DHCP
  • DHCP lease is renewed every 24 hours
  • VM name + IP is registered with network-scoped DNS

External IP

  • Assigned from pool (ephemeral)
  • Reserved (static)
  • VMs are unaware of their external IP
  • The external IP is mapped to the VM’s internal IP

Mapping 🗺 IP addresses

DNS resolution for internal addresses

  • Each instance has a hostname that can be resolved to an internal IP address:
    • The hostname is the same as the instance name
    • FQDN is [hostname].[zone].c.[project-id].internal
  • Name resolution is handled by internal DNS resolver
  • Configured for use on instance via DHCP
  • Provides answer for internal and external addresses
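The FQDN pattern above can be sketched in a couple of lines of Python (the hostname, zone, and project ID here are hypothetical example values):

```python
# Construct the internal FQDN for an instance per the pattern
# [hostname].[zone].c.[project-id].internal (all values are made up).
hostname, zone, project_id = "web-1", "us-central1-a", "my-project"
fqdn = f"{hostname}.{zone}.c.{project_id}.internal"
print(fqdn)  # web-1.us-central1-a.c.my-project.internal
```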

DNS resolution for external address

  • Instances with external IP addresses can allow connections from hosts outside the project
    • Users connect directly using external IP address
    • Admins can also publish public DNS records pointing to the instance
    • Public DNS records are not published automatically
  • DNS records for external addresses can be published using existing DNS servers (outside of GCP)
  • DNS zones can be hosted using Cloud DNS.

Host DNS zones using Cloud DNS

  • Google DNS service
  • Translate domain name into IP address
  • Low latency
  • High availability (100% uptime SLA)
  • Create and update millions of DNS records

Routes and firewall rules 🔥

Every network has:

  • Routes that let instances in a network send traffic🚦 directly to each other.
  • A default route that directs traffic🚦 to destinations outside the network.

Routes map🗺 traffic🚦 to destination networks

  • Apply to traffic🚦 egress to a VM
  • Forward traffic🚦 to most specific route
  • Created when a subnet is created
  • Enable VMs on same network to communicate
  • Destination is in CIDR notation
  • Traffic🚦 is delivered only if it also matches a firewall rule 🔥

Firewall rules🔥 protect your VM instances from unapproved connections

  • VPC network functions as a distributed firewall. 🔥
  • Firewall rules🔥 are applied to the network as a whole
  • Connections are allowed or denied at the instance level.
  • Firewall rules🔥 are stateful
  • Implied deny all ingress and allow all egress

Create Network

$gcloud compute networks create privatenet --subnet-mode=custom
$gcloud compute networks subnets create privatesubnet-us --network=privatenet --region=us-central1 --range=
$gcloud compute networks subnets create privatesubnet-eu --network=privatenet --region=europe-west1 --range=
$gcloud compute networks list
$gcloud compute networks subnets list --sort-by=NETWORK

Create firewall Rules 🔥

$gcloud compute firewall-rules create privatenet-allow-icmp-ssh-rdp --direction=INGRESS --priority=1000 --network=privatenet --action=ALLOW --rules=icmp,tcp:22,tcp:3389 --source-ranges=

Common network designs

  • Increased availability with multiple zones
    • A regional managed instance group contains instances from multiple zones across the same region, which provides increased availability.
  • Globalization 🌎 with multiple regions
    • Putting resources in different regions provides an even higher degree of failure independence by spreading resources across different failure domains
  • Cloud NAT provides internet access to private instances
    • Cloud NAT is Google’s managed network address translation service. It lets you provision application instances without public IP addresses, while also allowing them to access the Internet in a controlled and efficient manner.
  • Private Google Access to Google APIs and services
    • Private Google Access allows VM instances that only have internal IP addresses to reach the external IP addresses of Google APIs and services.
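A Cloud NAT gateway hangs off a Cloud Router in the same region as the private instances. The sketch below uses placeholder names and a placeholder region, so treat it as an outline rather than a verified recipe:

```shell
# Create a Cloud Router in the region of the private instances (names are examples)
gcloud compute routers create nat-router \
    --network=privatenet --region=us-central1

# Attach a Cloud NAT gateway so instances without external IPs can reach the internet
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
```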

“Know you’re nobody’s fool… So welcome to the machine”

Compute Engine (IaaS)

Predefined or custom machine types:

  • vCPUs (cores) and memory (RAM)
  • Persistent disks: HDD, SSD, and Local SSD
  • Networking
  • Linux or Windows


Several machine types

  • Network throughput scales⚖️ 2 Gbps per vCPU (small exceptions)
  • Theoretical max of 32 Gbps with 16 vCPU or 100 Gbps with T4 or V100 GPUs

A vCPU is equal to 1 hardware hyper-thread.

Storage 🗄


  • Standard, SSD, or Local SSD
  • Standard and SSD persistent disks scale⚖️ in performance for each GB of space allocated

Resize disks or migrate instances with no downtime ⏳

Local SSDs have even higher throughput and lower latency than SSD persistent disks because they are attached to the physical hardware. However, the data that you store on local SSDs persists only until you stop 🛑 or delete the instance.


Robust network features:

  • Default, custom networks
  • Inbound/outbound firewall rules🔥
    • IP based
    • Instance/group tags
  • Regional HTTPS load balancing
  • Network load balancing
    • Does not require pre-warming
  • Global 🌎 and multi-regional subnetworks

VM access

Linux🐧 SSH (requires firewall to allow tcp:22)

  • SSH from the Cloud Console, from Cloud Shell via the Cloud SDK, or from another computer

Windows RDP (requires firewall to allow tcp:3389)

  • RDP clients, PowerShell terminal

VM Lifecycle

Compute Engine offers live migration to keep your virtual machine instances running even when a host system event, such as a software or hardware update, occurs. Live migration keeps your instances running during the following events:

  • Regular infrastructure maintenance and upgrades.
  • Network and power 🔌 grid maintenance in the data centers.
  • Failed hardware such as memory, CPU, network interface cards, disks, power, and so on. This is done on a best-effort basis; if hardware fails completely or otherwise prevents live migration, the VM crashes and restarts automatically and a hostError is logged.
  • Host OS and BIOS upgrades.
  • Security-related updates, with the need to respond quickly.
  • System configuration changes, including changing the size of the host root partition, for storage 🗄 of the host image and packages.

Compute Options

Machine Types

Predefined machine types have a fixed ratio of GB of memory per vCPU:

  • Standard machine types
  • High-memory machine types
  • High-CPU machine types
  • Memory-optimized machine types
  • Compute-optimized machine types
  • Shared core machine types

Custom machine types:

  • You specify the amount of memory and number of vCPUs
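A custom machine type is encoded as custom-&lt;vCPU count&gt;-&lt;memory in MB&gt; in the machine type name; the instance name and zone below are placeholders:

```shell
# Create a VM with a custom machine type: 4 vCPUs and 5 GB (5120 MB) of memory
# (instance name and zone are examples)
gcloud compute instances create my-custom-vm \
    --zone=us-central1-a \
    --machine-type=custom-4-5120
```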

Special compute configurations

Preemptible (ideal for running batch processing jobs)

  • Lower price for interruptible service (up to 80%)
  • VM might be terminated at any time ⏳
    • No charge if terminated in the first 10 minutes
    • 24 hours max
    • 30-second terminate warning, but not guaranteed
      • Time⏳ for a shutdown script
  • No live migrate; no auto restart
  • You can request that CPU quota for a region be split between regular and preemptible VMs
    • Default: preemptible VMs count against region CPU quota
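Creating a preemptible VM is a one-flag affair; registering a shutdown script lets you use the 30-second termination warning for cleanup. The instance name, zone, and script path below are hypothetical examples:

```shell
# Create a preemptible VM with a shutdown script that runs (best effort)
# on the 30-second termination warning
gcloud compute instances create batch-worker-1 \
    --zone=us-central1-a \
    --preemptible \
    --metadata-from-file=shutdown-script=cleanup.sh
```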

Sole-tenant nodes physically isolate workloads (ideal for workloads that require physical isolation)

  • A sole-tenant node is a physical Compute Engine server that is dedicated to hosting VM instances
  • If you have existing operating system licenses, you can bring them to Compute Engine using sole-tenant nodes while minimizing physical core usage with the in-place restart feature

Shielded VMs🛡offer verifiable integrity

  • Secure🔒 Boot
  • Virtual trusted platform module (vTPM)
  • Integrity Monitoring 🎛


An image includes:

  • Boot loader
  • Operating system
  • File System Structure 🗂
  • Software
  • Customizations

Disk options

Boot disk

  • VM comes with a single root persistent disk
  • Image is loaded onto root disk during first boot:
    • Bootable: you can attach to a VM and boot from it
    • Durable: can survive VM termination
  • Some OS images are customized for Compute Engine
  • Can survive VM deletion if “Delete boot disk when instance is deleted” is disabled.

Persistent disks

  • Network storage 🗄 appearing as a block device 🧱
    • Attached to a VM through the network interface
    • Durable storage 🗄: can survive VM termination
    • Bootable: you can attach to a VM and boot from it
    • Snapshots: incremental backups
    • Performance: Scales ⚖️ with Size
  • Features
    • HDD or SSD
    • Disk resizing
    • Attached in read-only mode to multiple VMs
    • Encryption keys 🔑

Local SSD disks are physically attached to a VM

  • More IOPS, lower latency and higher throughput
  • 375-GB up to 8 disks (3TB)
  • Data survives a reset but not a VM stop 🛑 or terminate
  • VM-specific: cannot be reattached to a different VM

RAM disk

  • tmpfs
  • Faster than local disk, slower than memory
    • Use when your application expects a file system structure and cannot directly store its data in memory
    • Fast scratch disk, or fast cache
  • Very volatile: erase on stop 🛑 or reset
  • May require a larger machine type if RAM is sized for the application
  • Consider using persistent disk to back up RAM disk data

Common Compute Engine actions

  • Metadata and scripts (every VM instance stores its metadata on a metadata server)
  • Move an instance to a new zone
    • Automated process (moving within a region):
      • gcloud compute instances move
      • Updating references to the VM is not automatic
    • Manual process (moving between regions):
      • Snapshot all persistent disks
      • Create new persistent disks in the destination zone, restored from the snapshots
      • Create a new VM in the destination zone and attach the new persistent disks
      • Assign static IP to new VM
      • Update references to VM
      • Delete the snapshots, original disks and original VM
  • Snapshot: Back up critical data
  • Snapshot: Migrate data between zones
  • Snapshot: Transfer to SSD to improve performance
  • Persistent disk snapshots
    • Snapshot is not available for local SSD
    • Creates an incremental backup to Cloud Storage 🗄
      • Not visible in your buckets; managed by the snapshot service
      • Consider cron jobs for periodic incremental backup
    • Snapshots can be restored to a new persistent disk
      • New disk can be in another region or zone in the same project
      • Basis of VM migration: “moving” a VM to a new zone
        • Snapshot doesn’t back up VM metadata, tags, etc.
      • Resize persistent disk

You can grow disks, but never shrink them!
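The manual cross-zone/region move described above might be sketched with gcloud as follows; all disk, snapshot, instance, and zone names are made up:

```shell
# 1. Snapshot the persistent disk in the source zone
gcloud compute disks snapshot my-disk --zone=us-central1-a \
    --snapshot-names=my-disk-snap

# 2. Create a new disk in the destination zone, restored from the snapshot
gcloud compute disks create my-disk-eu --zone=europe-west1-b \
    --source-snapshot=my-disk-snap

# 3. Create a new VM in the destination zone and attach the new disk
gcloud compute instances create my-vm-eu --zone=europe-west1-b \
    --disk=name=my-disk-eu,boot=yes

# 4. Clean up: delete the snapshot, the original disk, and the original VM
gcloud compute snapshots delete my-disk-snap
gcloud compute instances delete my-vm --zone=us-central1-a
```

Remember the caveat from the notes: the snapshot doesn’t carry VM metadata or tags, so those must be re-applied by hand.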

“If you had my love❤️ and I gave you all my trust 🤝. Would you comfort🤗 me?”

Identity Access Management (IAM)

IAM is a sophisticated system built on top of email-like address names, job-type roles, and granular permissions.

Who, Which, What

It is a way of identifying who can do what on which resource. The “who” can be a person, group, or application; the “what” refers to specific privileges or actions; and the resource can be any GCP service.

  • Google Cloud☁️ Platform Resource is organized hierarchically
  • If you change the resource hierarchy, the policy hierarchy also changes.
  • A best practice is to follow the “principle of least privilege”.
  • Organization node is the root node in this hierarchy. Represents your company.
  • Folders 📂 are the children of the organization. A folder📂 could represent a department
  • Projects are the children of the folders📂. Projects provide a trust boundary for a company
  • Resources are the children of projects. Each resource has exactly one parent.


  • An organization node is a root node for Google Cloud☁️ resources
  • Organization roles:
  • Organization Admin: Control over all Cloud☁️ resources: useful for auditing
  • Project Creator: Control over who can create projects
  • When a Workspace or Cloud Identity account creates a GCP organization, two roles are assigned to users or groups:
  • Super administrator:
    • Assign the Organization Admin role to some users
    • Be the point of contact in case of recovery issues
    • Control the lifecycle 🔄 of the Workspace or Cloud☁️ Identity account and Organization
  • Organization admin:
    • Define IAM policies
    • Determine the structure of the resource hierarchy
    • Delegate responsibility over critical components such as Network, Billing, and Resource Hierarchy through IAM roles

Folders 📂

  • Additional grouping mechanism and isolation boundaries between projects:
    • Different legal entities
    • Departments
    • Teams
  • Folders 📂 allow delegation of administration rights.


  • There are three types of roles in GCP:
  1. Primitive roles apply across all GCP services in a project
    • Primitive roles offer fixed, coarse-grained levels of access
      • Owner – Full privileges
      • Editor – Deploy, modify & configure
      • Viewer 👓 – Read-only access
      • *Billing Administrator – Manage billing, add administrators
  2. Predefined roles apply to a particular service in a project
    • Predefined roles offer more fine-grained permissions on a particular service
    • Example: Compute Engine IAM roles:
      • Compute Admin – Full control of Compute Engine
      • Network Admin – Create, modify, delete network resources (except firewall rules and SSL certs)
      • Storage Admin – Create, modify, delete disks, images, and snapshots
  3. Custom roles define a precise set of permissions

Members define the “who” part of who can do what on which resource.

There are five different types of members:

  1. Google account represents a developer, an administrator or any other person who interacts with GCP. Any email address can be associated with a Google account
  2. Service account is an account that belongs to your application instead of to an individual end user.
  3. Google group is a named collection of Google accounts and service accounts.
  4. Workspace domains represent your organization’s Internet domain name
  5. Cloud Identity domains manage users and groups using the Google Admin console, but you do not pay for or receive Workspace collaboration products

Google Cloud Directory Sync ↔️ synchronizes ↔️ users and groups from your existing Active Directory or LDAP system with the users and groups in your Cloud Identity domain. Synchronization ↔️ is one-way only.

Single Sign-On (SSO)

  • Use Cloud Identity to configure SAML SSO
  • If SAML2 isn’t supported, use a third-party solution

Service Accounts

  • Provide an identity for carrying out server-to-server interactions
    • Programs running within Compute Engine instances can automatically acquire access tokens with credentials
    • Tokens are used to access any service API or services in your project granted access to a service account
    • Service accounts are convenient when you’re not accessing user data
  • Service accounts are identified by an email address
    • Three types of service accounts:
      • User-created (custom)
      • Built-in
        • Compute Engine and App Engine default service accounts
      • Google APIs service account
        • Runs internal Google processes on your behalf
    • Default Compute Engine service account:
      • Automatically created per project with an auto-generated name and email address:
        • Name has a -compute suffix: [email protected]
      • Automatically added as a project Editor
      • By default, enabled on all instances created using gcloud or the GCP Console
  • Service account permissions
    • Default service accounts: primitive and predefined roles
    • User-created service accounts: predefined roles
    • Roles for Service accounts can be assigned to groups or users
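A sketch of creating a custom service account with a descriptive display name and granting it a predefined role; the account name, project ID, and role are example values:

```shell
# Create a user-managed service account with a display name that
# clearly identifies its purpose (a best practice noted below)
gcloud iam service-accounts create my-app-sa \
    --display-name="Service account for the my-app backend"

# Grant it a predefined (not primitive) role
gcloud projects add-iam-policy-binding my-project \
    --member=serviceAccount:[email protected] \
    --role=roles/storage.objectViewer
```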

Authorization is the process of determining what permissions an authenticated identity has on a specified set of resources

Scopes are used to determine whether an authenticated identity is authorized.

Customizing Scopes for a VM

  • Scopes can be changed after an instance is created
  • For user-created service accounts, use IAM roles instead.

IAM Best Practices

  1. Leverage and understand the resource hierarchy
    • Use projects to group resources that share the same trust boundary
    • Check the policy granted on each resource and make sure you understand inheritance
    • Use the “principle of least privilege” when granting roles
    • Audit policies in Cloud☁️ audit logs: SetIamPolicy
    • Audit membership of groups used in policies
  2. Grant roles to Google groups instead of individuals
    • Update group membership instead of changing the IAM policy
    • Audit membership of groups used in policies
    • Control the ownership of the Google group used in IAM policies
  3. Service accounts
    • Be very careful granting the serviceAccountUser role
    • When you create a service account, give it a display name that clearly identifies its purpose
    • Establish a naming convention for service accounts
    • Establish key 🔑 rotation policies and methods
    • Audit with the serviceAccount.keys.list() method

Cloud Identity-Aware Proxy (Cloud IAP)

Enforce access control policies for applications and resources:

  • Identity-based access control
  • Central authorization layer for applications accessed by HTTPS

IAM policy is applied after authentication

“Never gonna give you up… Never gonna say goodbye.”

Storage 🗄 and Database Services 🛢

Cloud Storage 🗄 (Object Storage 🗄) – It allows worldwide🌎 storage 🗄 and retrieval of any amount of data at any Time ⏳.

  • Scalable ⚖️ to exabytes
  • Time⏳ to first byte in milliseconds
  • Very high availability across all storage 🗄 classes
  • Single API across storage 🗄 classes

Use Cases:

  • Website content
  • Storing data for archiving and disaster recovery
  • Distributing large data objects to users via direct download

Cloud Storage 🗄 has four storage 🗄 classes:

  1. Regional storage 🗄 enables you to store data at lower cost with the tradeoff of data being stored in a specific regional location.
  2. Multi-Regional storage 🗄 is geo-redundant: Cloud Storage 🗄 stores your data redundantly in at least two geographic locations separated by at least 100 miles within the multi-regional location of the bucket 🗑.
  3. Nearline storage 🗄 is a low-cost, highly durable storage 🗄 service for storing infrequently accessed data.
  4. Coldline storage 🗄 is a very low-cost, highly durable storage 🗄 service for data archival, online backup, and disaster recovery. Data is available within milliseconds, not hours or days.


Buckets 🗑:

  • Naming requirements
  • Cannot be nested
  • Regional & Multi-Regional buckets 🗑 cannot be changed into one another
  • Objects can be moved from bucket 🗑 to bucket 🗑


Objects:

  • Inherit the storage 🗄 class of the bucket 🗑 when created
  • No minimum size; unlimited storage 🗄


  • gsutil command
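The gsutil tool is the workhorse for bucket 🗑 and object operations; a few hedged examples (bucket names are made up and must be globally unique in reality):

```shell
# Create a Nearline bucket in a region, upload an object, list contents,
# and move the object to another bucket (all names are examples)
gsutil mb -c nearline -l us-central1 gs://my-example-bucket-1234
gsutil cp backup.tar.gz gs://my-example-bucket-1234/
gsutil ls gs://my-example-bucket-1234
gsutil mv gs://my-example-bucket-1234/backup.tar.gz gs://my-other-bucket/
```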

Access control lists (ACLs)

For some applications, it is easier and more efficient to grant limited-time ⏳ access tokens that can be used by any user, instead of using account-based authentication for controlling resource access.

Signed URLs

“Valet key” access to buckets 🗑 and objects via a ticket:

  • Ticket is a cryptographically signed URL
  • Time-limited
  • Operations specified in ticket: HTTP GET, PUT, DELETE (not POST)
  • Any user with URL can invoke permitted operations
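A signed URL might be minted like this with gsutil; the key file and object path are placeholders, and the signing service account needs access to the object:

```shell
# Generate a "valet key" URL valid for 10 minutes, signed with a
# service account's private key file (path and bucket are examples)
gsutil signurl -d 10m sa-key.json gs://my-example-bucket/report.pdf
```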

Cloud Storage 🗄 Features

  • Customer-supplied encryption key (CSEK)
    • Use your own key instead of Google-managed keys 🔑
  • Object Lifecycle 🔄Management
    • Automatically delete or archive objects
  • Object Versioning
    • Maintain multiple versions of objects
      • Objects are immutable
      • Object Versioning:
        • Maintain a history of modification of objects
        • List archived versions of an object, restore an object to an older state, or delete a version
  • Directory synchronization ↔️
    • Synchronizes a VM directory with a bucket 🗑
  • Object change notification
  • Data import
  • Strong 💪 consistency

Object Lifecycle 🔄 Management policies specify actions to be performed on objects that meet certain rules

  • Examples:
    • Downgrade storage 🗄 class on objects older than a year.
    • Delete objects created before a specific date.
    • Keep only the 3 most recent versions of an object
  • Object inspection occurs in asynchronous batches
  • Changes can take 24 hours to apply
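A lifecycle 🔄 policy covering the first and third examples above might be sketched as follows; the rule values and bucket name are illustrative:

```shell
# Example lifecycle policy: downgrade objects older than a year to
# Nearline, and delete versions once 3 newer ones exist
cat > lifecycle.json <<'EOF'
{
  "lifecycle": {
    "rule": [
      {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
       "condition": {"age": 365}},
      {"action": {"type": "Delete"},
       "condition": {"numNewerVersions": 3}}
    ]
  }
}
EOF
gsutil lifecycle set lifecycle.json gs://my-example-bucket
```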

Object change notification can be used to notify an application when an object is updated or added to a bucket 🗑

Recommended: Cloud Pub/Sub Notifications for Cloud Storage 🗄

Data import services

  • Transfer Appliance: Rack, capture, and then ship your data to GCP
  • Storage Transfer Service: Import online data (another bucket 🗑, S3 bucket 🗑, Web Service)
  • Offline Media Import: Third-party provider uploads the data from physical media

Cloud Storage 🗄 provides Strong 💪 global consistency

Cloud SQL is a fully managed database 🛢 service (MySQL or PostgreSQL)

  • Patches and updates automatically applied
  • You administer MySQL users
  • Cloud SQL supports many clients
    • gcloud sql
    • App Engine, Workspace scripts
    • Applications and tools 🛠
      • SQL Workbench, Toad
      • External applications using standard MySQL drivers
  • Cloud SQL delivers high performance and scalability ⚖️ with up to 30 TB of storage 🗄 capacity, 40,000 IOPS, and 416 GB of RAM
  • Replica service that can replicate data between multiple zones
  • Cloud SQL also provides automated and on demand backups.
  • Cloud SQL scales ⚖️ up (require a restart)
  • Cloud SQL scales ⚖️ out using read replicas.

Cloud Spanner is a service built for the Cloud☁️ specifically to combine the benefits of relational database 🛢 structure with non-relational horizontal scale⚖️

Data replication is synchronized across zones using Google’s global fiber network

  • Scale⚖️ to petabytes
  • Strong 💪 consistency
  • High Availability
  • Used for financial and inventory applications
  • Monthly uptime ⏳ SLA:
    • Multi-regional: 99.999%
    • Regional: 99.99%

Cloud Firestore is a fast, fully managed, serverless, Cloud☁️-native NoSQL document database 🛢 that simplifies storing, syncing ↔️, and querying data for your mobile 📱, web 🕸, and IoT applications at global scale⚖️.

  • Simplifies storing, syncing ↔️, and querying data
  • Mobile📱, web🕸, and IoT apps at global scale⚖️
  • Live synchronization ↔️ and offline support
  • Security🔒 features
  • ACID transactions
  • Multi-region replication
  • Powerful query engine

Datastore mode (new server projects):

  • Compatible with Datastore applications
  • Strong 💪 consistency
  • No entity group limits

Native mode (new Mobile 📱 and web🕸 apps):

  • Strongly 💪 consistent storage 🗄 layer
  • Collection and document 📄 data model
  • Real-time updates
  • Mobile📱 and Web🕸 Client libraries📚

Cloud Bigtable (wide-column DB) is a fully managed NoSQL database 🛢 with petabyte scale⚖️ and very low latency.

  • Petabyte-scale⚖️
  • Consistent sub-10ms latency
  • Seamless scalability⚖️ for throughput
  • Learns and adjusts to access patterns
  • Ideal for Ad Tech, FinTech, and IoT
  • Storage 🗄 engine for ML applications
  • Easy integration with open source big data tools 🛠

Cloud Memorystore is a fully managed Redis service built on scalable ⚖️, secure🔒, and highly available infrastructure managed by Google.

  • In-memory data store service
  • Focus on building great apps
  • High availability, failover, patching and Monitoring 🎛
  • Sub-millisecond latency
  • Instances up to 300 GB
  • Network throughput of 12 Gbps
  • “Easy Lift-and-Shift”

Resource Management lets you hierarchically manage resources

  • Resources can be categorized by Project, Folder📂, and Organization
  • Resources are global🌎, regional, or zonal
  • Resource belongs to only one project
  • Resources inherit policies from their parents
  • Resource consumption is measured in quantities like rate of use or Time⏳
  • Policies contain a set of roles and members; policies are set on resources
  • If a parent policy is less restrictive, it overrides a more restrictive resource policy.
  • Organization node is root node for GCP resources
  • Organization contains all billing accounts.
  • Project accumulates the consumption of all its resources
    • Track resource and quota usage
    • Enable billing
    • Manage permissions and credentials
    • Enable services and APIs
  • Projects use 3 identifying attributes:
    • Project Name
    • Project Number (unique)
    • Project ID (unique)


All resources are subject to project quotas or limits


  • Total resources you can create per project: 5 VPC networks/project
  • Rate at which you can make API requests in a project: 5 admin actions/second (Cloud☁️ Spanner)
  • Total resources you can create per region: 24 CPUs/region/project

Increase: Quotas page in GCP Console or a support ticket

As your use of GCP expands over time ⏳, your quotas may increase accordingly.

Project Quotas:

  • Prevent runaway consumption in case of an error or malicious attack
    • Prevent billing spikes or surprises
    • Forces sizing consideration and periodic review

Labels and names

Labels 🏷 are a utility for organizing GCP resources

  • Attached to resources: VM, disk, snapshot, image
    • GCP Console, gcloud, or API
  • Example uses of labels 🏷 :
    • Inventory
    • Filter resources
    • In scripts
      • Help analyze costs
      • Run bulk operations

Comparing labels and tags

  • Labels 🏷 are a way to organize resources across GCP
  • Disks, image, snapshots
  • User-defined strings in key-value format
  • Propagated through billing
  • Tags are applied to instances only
  • User-defined strings
  • Tags are primarily used for networking (applying firewall rules🔥)
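Putting labels 🏷 and tags side by side in gcloud; all instance, label, and rule names are examples:

```shell
# Labels: organize and analyze cost across resources
gcloud compute instances add-labels web-1 --zone=us-central1-a \
    --labels=env=prod,team=frontend

# Tags: target the instance with a firewall rule
gcloud compute instances add-tags web-1 --zone=us-central1-a \
    --tags=http-server
gcloud compute firewall-rules create allow-http --network=default \
    --allow=tcp:80 --target-tags=http-server
```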


  • Setting a budget lets you track how you spend
  • Budget alerts 🔔 send alerts 🔔 and emails📧 to the Billing Admin
  • Use Cloud☁️ Pub/Sub notifications to programmatically receive spend updates about the budget.
  • Optimize your GCP spend by using labels 🏷
  • Visualize GCP spend with Data Studio

It’s recommended to label 🏷 all your resources and export billing data to BigQuery to analyze spend.

“Every single day. Every word you say… Every game you play. Every night you stay. I’ll be watching you”

Resource Monitoring 🎛


Stackdriver:

  • Integrated Monitoring 🎛, Logging, Error Reporting, Tracing, and Debugging
  • Manages across platforms
    • GCP and AWS
    • Dynamic discovery of GCP with smart defaults
    • Open-source agents and integrations
  • Access to powerful data and analytics tools 🛠
  • Collaboration with third-party software

Monitoring 🎛 is important to Google because it is at the base of Site Reliability Engineering (SRE).

  • Dynamic config and intelligent defaults
  • Platform, system, and application metrics
    • Ingests data: Metrics, events, metadata
    • Generates insights through dashboards, charts, alerts
  • Uptime /health checks⛑
  • Dashboards
  • Alerts 🔔

A Workspace is the root entity that holds Monitoring 🎛 and configuration information

  • “Single pane of glass 🍸”
    • Determine your Monitoring 🎛 needs up front
    • Consider using separate Workspace for data and control isolation

To monitor an AWS account, you must configure a project in GCP to hold the AWS connector, because Workspaces can monitor all of your GCP projects (and AWS accounts) in a single place.

Stackdriver Monitoring 🎛 allows you to create custom dashboards that contain charts 📊 of the metrics that you want to monitor.

Uptime checks test the availability of your public services

Stackdriver Monitoring 🎛 can access some metrics without the Monitoring 🎛 agent, including CPU utilization, some disk traffic 🚥 metrics, network traffic🚦, and uptime ⏳ information.

Stackdriver Logging provides logging, error reporting, tracing, and debugging.

  • Platform, systems, and application logs
    • API to write to logs
    • 30-day retention
  • Log search/view/filter
  • Log-based metrics
  • Monitoring 🎛 alerts 🔔can be set on log events
  • Data can be exported to Cloud Storage 🗄, BigQuery, and Cloud Pub/Sub
  • Analyze logs in BigQuery and visualize in Data Studio

Stackdriver Error Reporting counts, analyzes, and aggregates the errors in your running Cloud☁️ services

  • Error notifications
  • Error dashboard
  • Go, Java☕️, .NET, Node.js, PHP, Python 🐍, and Ruby♦️

Stackdriver Trace is a distributed tracing system that collects latency data from your applications and displays it in the GCP Console.

  • Displays data in near real-time ⏳
  • Latency reporting
  • Per-URL latency sampling
  • Collects latency data
    • App Engine
    • Google HTTP(S) load balancers🏋️‍♀️
    • Applications instrumented with the Stackdriver Trace SDKs


Stackdriver Debugger:

  • Inspect an application without stopping 🛑 it or slowing it down significantly
  • Debug snapshots:
    • Capture call stack and local variables of a running application
  • Debug logpoints:
    • Inject logging into a service without stopping 🛑 it
  • Java☕️, Python🐍, Go, Node.js and Ruby♦️

“At last the sun☀️ is shining, the clouds☁️ of blue roll by”

We will continue next week with Part III of this series….

Thanks –


Week of October 9th

Part I of a Cloud☁️ Journey

“On a cloud☁️ of sound 🎶, I drift in the night🌙”

Hi All –

Ok Happy Hump 🐫 Day!

The last few weeks we spent some quality time ⏰ visiting with Microsoft SQL Server 2019. A few weeks back, we kicked 🦿 the tires 🚗 with IQP and the improvements made to TempDB. Then the week after, we were fully immersed in Microsoft’s most ambitious offering in SQL Server 2019 with Big Data Clusters (BDC).

This week we make our triumphant return back to the cloud☁️. If you have been following our travels this past summer☀️ we focused on the core concepts of AWS and then we concentrated on the fundamentals of Microsoft Azure. So, the obvious natural progression of our continuous cloud☁️ journey✈️ would be to further explore the Google Cloud Platform, more affectionately known as GCP. We had spent a considerable amount of time 🕰 before learning many of the exciting offerings in GCP, but our long-awaited return has been long overdue. Besides, we felt the need to give some love ❤️ and oxytocin 🤗 to our friends from Mountain View.

“It starts with one☝️ thing…I don’t know why?”

Actually, Google has 10 things when it comes to their philosophy but more on that later. 😊

One of Google’s strong 💪 beliefs is that “in the future every company will be a data company, because making the fastest and best use of data is a critical source of a competitive advantage.”

GCP is Google’s Cloud Computing☁️ solution that provides a wide variety of services such as Compute, Storage🗄, Big data, and Machine Learning for managing and getting value from data and doing that at infinite scale⚖️. GCP offers over 90 products and Services.

“If you know your history… Then you would know where you coming from”

In 2006, AWS began offering cloud computing☁️ to the masses; several years later Microsoft Azure followed suit, and shortly after GCP joined the Flexible🧘‍♀️, Agile, Elastic, Highly Available, and Scalable⚖️ party 🥳. Although Google was a late arrival to the cloud computing☁️ shindig🎉, their approach and strategy to Cloud☁️ Computing is far from a “Johnny-come-lately” 🤠

“Google Infrastructure for Everyone” 😀

Google does not view cloud computing☁️ as a “commodity” cloud☁️. Google’s methodology to cloud computing☁️ is of a “premier💎 cloud☁️”, one that provides the same innovative, high-quality, deluxe set of services, and rich development environment with the advanced hardware that Google has been running🏃‍♂️ internally for years but made available for everyone through GCP.

“No vendor lock-in 🔒. Guaranteed.” 👍

In addition, Google who is certainly no stranger to Open Source software promotes a vision🕶 of the “open cloud☁️”. A cloud☁️ environment where companies🏢🏭 large and small 🏠can seamlessly move workloads from one cloud☁️ provider to another. Google wants customers to have the ability to run 🏃‍♂️their applications anywhere not just in Google.

“Get outta my dreams😴… Get into my car 🚙”

Now that I have extolled the virtues of Google’s vision 🕶 and strategy for Cloud computing☁️, it’s time to take this car 🚙 out for a spin. Fortunately, the team at Google Cloud☁️ have put together one of the best compilations since the Zeppelin Box Set 🎸 in their Google Cloud Certified Associate Cloud Engineer Path on Pluralsight.

Since there is so much to unpack📦, we will need to break our learnings down into multiple parts. So, helping us put our best foot 🦶 forward through the first part of our journey ✈️ will be Googler Brian Rice and former Googler Catherine Gamboa, through their Google Cloud Platform Fundamentals – Core Infrastructure course.

In a great introduction, Brian expounds on the definition of Cloud Computing☁️ and a brief history on Google’s transformation from the virtualization model to a container‑based architecture, an automated, elastic, third‑wave cloud☁️ built from automated services.

Next, Brian reviews GCP computing architectures:

Infrastructure as a Service (IaaS) – provides raw compute, storage🗄, and network organized in ways that are familiar from data centers. You pay 💰 for what you allocate.

Platform as a Service (PaaS) – binds application code you write to libraries📚 that give access to the infrastructure your application needs. You pay 💰 for what you use.

Software as a Service (SaaS) – applications consumed directly over the internet by end users. Popular examples: Search 🔎, Gmail 📧, Docs 📄, and Drive 💽

Then we had an overview of Google’s network, which according to some estimates carries as much as 40% of the world’s 🌎 internet traffic🚦. The network interconnects at more than 90 internet exchanges and more than 100 points of presence worldwide 🌎 (and growing). One of the benefits of GCP is that it leverages Google’s robust network, allowing GCP resources to be hosted in multiple locations worldwide 🌎. At a granular level, these locations are organized into regions and zones. A region is a specific geographical 🌎 location where you can host your resources. Each region has one or more zones (most regions have three or more zones).

All of the zones within a region have fast⚡️ network connectivity among them. A zone is a single failure domain within a region. A best practice in building a fault‑tolerant application is to deploy resources across multiple zones in a given region to protect against unexpected failures.
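The region/zone layout can be explored straight from the gcloud CLI. A quick sketch (the region name is just an example; your available list may differ):

```shell
# List all regions 🌎 GCP currently offers
gcloud compute regions list

# Zones carry the region name plus a letter suffix (e.g. us-central1-a)
gcloud compute zones list --filter="region:us-central1"
```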

Next, we had a summary of Google’s multi-layered approach to security🔒.


  • Server boards and the networking equipment in Google data centers are custom‑designed by Google.
  • Google also designs custom chips, including a hardware security🔒 chip (Titan) deployed on both servers and peripherals.
  • Google Server machines use cryptographic signatures✍️ to make sure they are booting the correct software.
  • Google designs and builds its own data centers (eco friendly), which incorporate multiple layers of physical security🔒 protections. (Access to these data centers is limited to only a few authorized Google Employees)
  • Google’s infrastructure provides cryptographic🔐 privacy🤫 and integrity for remote procedure call (RPC) data on the network, which is how Google Services communicate with each other.
  • Google has multitier, multilayer denial‑of‑service protections that further reduce the risk of any denial‑of‑service 🛡 impact.

Rounding out the introduction was a sneak peek 👀 into Budgets and Billing 💰. Google offers customer-friendly 😊 pricing with per‑second billing for its IaaS compute offering. Fine‑grained billing is a big cost‑savings for workloads that are bursting. GCP provides four tools 🛠 to help with billing:

  • Budgets and alerts 🔔
  • Billing export🧾
  • Reports 📊
  • Quotas

Budgets can be a fixed limit, or you can tie it to another metric, for example a percentage of the previous month’s spend.

Alerts 🔔 are generally set at 50%, 90%, and 100%, but are customizable.

Billing export🧾 stores detailed billing information in places where it’s easy to retrieve for more detailed analysis.

Reports📊 is a visual tool in the GCP console that allows you to monitor your expenditure. GCP also implements quotas, which protect both account owners and the GCP community as a whole 😊.

Quotas are designed to prevent the overconsumption of resources, whether because of error or malicious attack 👿. There are two types of quotas, rate quotas and allocation quotas. Both get applied at the level of the GCP project.
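As a sketch of the budgeting tools above, a budget with the usual 50%/90%/100% alert 🔔 thresholds can be created from the CLI. (The billing account ID is a placeholder, and this command surface was still evolving around the time of the course, so treat the exact flags as illustrative.)

```shell
# Create a $500 monthly budget 💰 with alerts 🔔 at 50%, 90%, and 100% of spend
gcloud billing budgets create \
  --billing-account=0X0X0X-0X0X0X-0X0X0X \
  --display-name="monthly-gcp-budget" \
  --budget-amount=500USD \
  --threshold-rule=percent=0.5 \
  --threshold-rule=percent=0.9 \
  --threshold-rule=percent=1.0
```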

After a great intro, next Catherine kick starts🦵 us with GCP. She begins with a discussion around resource hierarchy 👑 and trust🤝 boundaries.

Projects are the main way you organize the resources (all resources belong to a project) you use in GCP. Projects are used to group together related resources, usually because they have a common business objective. A project consists of a set of users, a set of APIs, billing, quotas, authentication, and monitoring 🎛 settings for those APIs. Projects have 3 identifying attributes:

  1. Project ID (Globally 🌎 unique)
  2. Project Name
  3. Project Number (Globally 🌎 Unique)

Projects may be organized into folders 🗂. Folders🗂 can contain other folders 🗂. All the folders 🗂 and projects used by an organization can be placed under an organization node.

Please Note: If you use folders 🗂, you need to have an organization node at the top of the hierarchy👑.
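The hierarchy👑 above maps directly onto gcloud commands. A minimal sketch (the project IDs and folder number are made-up placeholders):

```shell
# Project ID must be globally 🌎 unique; the display name does not
gcloud projects create acme-prod-128986 --name="ACME Production"

# Parent a project under a folder 🗂 (requires an organization node at the top)
gcloud projects create acme-dev-128987 --folder=658965178946
```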

Projects, folders🗂, and organization nodes are all places where the policies can be defined.

A policy is set on a resource. Each policy contains a set of roles and members👥.

  • A role is a collection of permissions. Permissions determine what operations are allowed on a resource. There are three kinds of primitive roles:
  1. Owner
  2. Editor
  3. Viewer

Another role made available in IAM is the billing administrator role, which allows control of the billing for a project without the right to change the resources in the project.

Please note IAM provides finer‑grained types of roles for a project that contains sensitive data, where primitive roles are too generic.

A service account is a special type of Google account intended to represent a non-human user ⚙️ that needs to authenticate and be authorized to access data in Google APIs.

  • A member👥 can be a Google Account, a service account, a Google group, or a Google Workspace or Cloud☁️ Identity domain that can access a resource.

Resources inherit policies from the parent.

Identity and Access Management (IAM) allows administrators to manage who (i.e. a Google account, a Google group, a service account, or an entire Google Workspace domain) can do what (role) on specific resources. There are four ways to interact with IAM and the other GCP management layers:

  1. Web‑based console 🕸
  2. SDK and Cloud shell (CLI)
  3. APIs
    1. Cloud Client Libraries 📚
    2. Google API Client Library 📚
  4. Mobile app 📱

When it comes to entitlements “The principle of least privilege” should be followed. This principle says that each user should have only those privileges needed to do their jobs. In a least privilege environment, people are protected from an entire class of errors.  GCP customers use IAM to implement least privilege, and it makes everybody happier 😊.

For example, you can designate an organization policy administrator so that only people with that privilege can change policies. You can also assign a project creator role, which controls who can spend money 💵.
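Granting a role with gcloud follows the same who/what pattern described above. A hedged sketch (the project ID and users are placeholders):

```shell
# Give one user 👥 only the Viewer primitive role on a project (least privilege)
gcloud projects add-iam-policy-binding acme-prod-128986 \
  --member="user:jane@example.com" \
  --role="roles/viewer"

# A finer-grained, predefined role for billing-only 💰 duties on a project
gcloud projects add-iam-policy-binding acme-prod-128986 \
  --member="user:bob@example.com" \
  --role="roles/billing.projectManager"
```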

Finally, we checked into Marketplace 🛒 which provides an easy way to launch common software packages in GCP. Many common web 🕸 frameworks, databases🛢, CMSs, and CRMs are supported. Some Marketplace 🛒 images charge usage fees, like third parties with commercially licensed software. But they all show estimates of their monthly charges before you launch them.

Please Note: GCP updates the base images for these software packages to fix critical issues 🪲and vulnerabilities, but it doesn’t update the software after it’s been deployed. However, you’ll have access to the deployed system so you can maintain them.

“Look at this stuff🤩 Isn’t it neat? Wouldn’t you think my collection’s complete 🤷‍♂️?

Now with the basics of GCP covered, it was time 🕰 to explore 🧭 some of the computing architectures made available within GCP.

Google Compute Engine

Virtual Private Cloud (VPC) – manages networking functionality for your GCP resources. Unlike AWS VPCs (which are natively regional), a GCP VPC is global 🌎 in scope. VPCs can have subnets in any GCP region worldwide 🌎, and subnets can span the zones that make up a region.

  • Provides flexibility🧘‍♀️ to scale️ and control how workloads connect regionally and globally🌎
  • Access VPCs without needing to replicate connectivity or administrative policies in each region
  • Bring your own IP addresses to Google’s network infrastructure across all regions

Much like physical networks, VPCs have built-in routing tables👮‍♂️ and Firewall🔥 Rules.

  • Routing tables👮‍♂️ forward traffic🚦from one instance to another instance
  • Firewalls🔥 allow you to restrict access to instances, for both incoming (ingress) and outgoing (egress) traffic🚦.
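The VPC and firewall🔥 concepts above boil down to a couple of gcloud commands. A rough sketch (the network, subnet, and IP range are made up for illustration):

```shell
# Create a custom-mode VPC with one subnet in us-central1
gcloud compute networks create my-vpc --subnet-mode=custom
gcloud compute networks subnets create web-subnet \
  --network=my-vpc --region=us-central1 --range=10.0.1.0/24

# Firewall 🔥 rule: allow ingress HTTP and SSH traffic 🚦 into the VPC
gcloud compute firewall-rules create allow-web \
  --network=my-vpc --allow=tcp:80,tcp:22 --direction=INGRESS
```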

Cloud DNS is a managed, low-latency, high-availability DNS service running on the same infrastructure as Google.

Cloud VPN securely connects a peer network to a Virtual Private Cloud (VPC) network through an IPsec VPN connection.

Cloud Router lets other networks and Google VPC exchange route information over the VPN using the Border Gateway Protocol.

VPC Network Peering enables you to connect VPC networks so that workloads in different VPC networks can communicate internally. Traffic🚦stays within Google’s network and doesn’t traverse the public internet.

  • Private connection between you and Google for your hybrid cloud☁️
  • Connection through the largest partner network of service providers

Dedicated Interconnect allows direct private connections, providing the highest uptime (99.99% SLA) for interconnection with GCP.

Google Compute Engine (IaaS) delivers Linux or Windows virtual machines (VMs) running in Google’s innovative data centers and worldwide fiber network. Compute Engine offers scale ⚖️, performance, and value that lets you easily launch large compute clusters on Google’s infrastructure. There are no upfront investments, and you can run thousands of virtual CPUs on a system that offers quick, consistent performance. VMs can be created via Web 🕸 console or the gcloud command line tool🔧.

For Compute Engine VMs there are two kinds of persistent storage🗄 options:

  • Standard
  • SSD

If your application needs a high‑performance disk, you can attach a local SSD. ⚠️ Be sure to store data of permanent value somewhere else, because a local SSD’s content doesn’t last past VM termination.

Compute Engine offers innovative pricing:

  • Per second billing
  • Preemptible instances
  • High throughput to storage🗄 at no additional cost
  • Only pay for hardware you need.

At the time of this post, N2D standard and high-CPU machine types have up to 224 vCPUs and 128 GB of memory which seems like enough horsepower 🐎💥 but GCP keeps upping 🃏🃏 the ante 💶 on maximum instance type, vCPU, memory and persistent disk. 😃

Sample Syntax creating a VM:

$ gcloud compute zones list | grep us-central1

$ gcloud config set compute/zone us-central1-c
$ gcloud compute instances create my-vm-2 --machine-type n1-standard-1 --image-project debian-cloud --image debian-9-stretch-v20170918 --subnet default

Compute Engine also offers autoscaling ⚖️, which adds and removes VMs from applications based on load metrics. In addition, Compute Engine offers load balancing 🏋️‍♀️ across VMs. VPC supports several different kinds of load balancing 🏋️‍♀️:

  • Layer 7 load balancing 🏋️‍♀️ of HTTP(S) traffic🚦 based on load
  • Layer 4 load balancing 🏋️‍♀️ of non-HTTP SSL traffic🚦 based on load
  • Layer 4 load balancing 🏋️‍♀️ of non-SSL TCP traffic🚦
  • Any Traffic🚦 (TCP, UDP)
  • Traffic🚦 inside a VPC

Cloud CDN – accelerates💥 content delivery 🚚 in your application, allowing users to experience lower network latency. The origins of your content will experience reduced load and cost savings. Once you’ve set up HTTPS load balancing 🏋️‍♀️, simply enable Cloud CDN with a single checkbox.

Next on our plate 🍽 was to investigate the storage🗄 options that are available in GCP

Cloud Storage 🗄 is a fully managed, highly durable, highly available, scalable ⚖️ service. Cloud Storage 🗄 can be used for lots of use cases like serving website content, storing data for archival and disaster recovery, or distributing large data objects.

Cloud Storage🗄 offers 4 different types of storage 🗄 classes:

  • Regional
  • Multi‑regional
  • Nearline 😰
  • Coldline 🥶

Cloud Storage🗄 is organized into buckets 🗑, which you create and configure and use to hold your storage🗄 objects.

Buckets 🗑 are:

  • Globally 🌎 Unique
  • Different storage🗄 classes
  • Regional or multi-regional
  • Versioning enabled (Objects are immutable)
  • Lifecycle 🔄 management Rules

Cloud Storage🗄supports several ways to bring data into Cloud Storage🗄.

  • Use gsutil from the Cloud SDK.
  • Drag‑and‑drop in the GCP console (with Google Chrome browser).
  • Integrated with many of the GCP products and services:
    • Import and export tables from and to BigQuery and Cloud SQL
    • Store app engine logs
    • Cloud data store backups
    • Objects used by app engine
    • Compute Engine Images
  • Online storage🗄 transfer service (>TB) (HTTPS endpoint)
  • Offline transfer appliance (>PB) (rack-able, high capacity storage🗄 server that you lease from Google)
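A few of the bucket 🗑 features above, sketched with gsutil (the bucket name is a placeholder and must be globally 🌎 unique):

```shell
# Make a Nearline 😰 bucket in a single region, then enable versioning
gsutil mb -c nearline -l us-central1 gs://my-example-bucket-42
gsutil versioning set on gs://my-example-bucket-42

# Copy an object 📦 in, then list what landed
gsutil cp ./backup.tar.gz gs://my-example-bucket-42/
gsutil ls -l gs://my-example-bucket-42/
```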

“Big wheels 𐃏 keep on turning”

Cloud Bigtable is a fully managed, scalable⚖️ NoSQL database🛢 service for large analytical and operational workloads. The databases🛢 in Bigtable are sparsely populated tables that can scale to billions of rows and thousands of columns, allowing you to store petabytes of data. Data encryption in flight and at rest is automatic.

GCP fully manages the service, so you don’t have to configure and tune it. It’s ideal for data that has a single lookup key🔑 and for storing large amounts of data with very low latency.

Cloud Bigtable is offered through the same open source API as HBase, which is the native database🛢 for the Apache Hadoop 🐘 project.

Cloud SQL is a fully managed relational database🛢 service for MySQL, PostgreSQL, and MS SQL Server which provides:

  • Automatic replication
  • Managed backups
  • Vertical scaling ⚖️ (Read and Write)
  • Horizontal scaling ⚖️ (Read)
  • Google integrated Security 🔒

Cloud Spanner is a fully managed relational database🛢 with unlimited scale⚖️ (horizontal), strong consistency & up to 99.999% high availability.

It offers transactional consistency at a global🌎 scale ⚖️, schemas, SQL, and automatic synchronous replication for high availability, and it can provide petabytes of capacity.

Cloud Datastore is a highly scalable ⚖️ (Horizontal) NoSQL database🛢 for your web 🕸 and mobile 📱 applications.

  • Designed for application backends
  • Supports transactions
  • Includes a free daily quota

Comparing Storage🗄 Options

Cloud Datastore is the best for semi‑structured application data that is used in App Engine applications.

Bigtable is best for analytical data with heavy read/write events like AdTech, Financial 🏦, or IoT📲 data.

Cloud Storage🗄 is best for structured and unstructured binary or object data, like images🖼, large media files🎞, and backups.

Cloud SQL is best for web 🕸 frameworks and existing applications, like storing user credentials and customer orders.

Cloud Spanner is best for large‑scale⚖️ database🛢 applications that are larger than 2 TB, for example, for financial trading and e‑commerce use cases.

“Everybody, listen to me… And return me my ship⛵️… I’m your captain👩🏾️, I’m your captain👩🏾‍✈️”

Containers, Kubernetes ☸️, and Kubernetes Engine

Containers provide the independent, scalable ⚖️ workloads that you would get in a PaaS environment, and an abstraction layer of the operating system and hardware, like you get in an IaaS environment. Containers virtualize the operating system rather than the hardware. The environment scales⚖️ like PaaS but gives you nearly the same flexibility as Infrastructure as a Service.

Kubernetes ☸️ is an open source orchestrator for containers. K8s ☸️ makes it easy to orchestrate many containers on many hosts, scale ⚖️ them, roll out new versions of them, and even roll back to the old version if things go wrong 🙁. K8s ☸️ lets you deploy containers on a set of nodes called a cluster.

A cluster is a set of master components that control the system as a whole, and a set of nodes that run containers.

When K8s ☸️ deploys a container or a set of related containers, it does so inside an abstraction called a pod.

A pod is the smallest deployable unit in Kubernetes.

kubectl starts a deployment with a container running in a pod. A deployment represents a group of replicas of the same pod. It keeps your pods running 🏃‍♂️ even if a node that some of them run on fails.

Google Kubernetes Engine (GKE) ☸️ is a secured and managed Kubernetes ☸️ service with four-way auto scaling ⚖️ and multi-cluster support.

  • Leverage a high-availability control plane ✈️including multi-zonal and regional clusters
  • Eliminate operational overhead with auto-repair 🧰, auto-upgrade, and release channels
  • Secure🔐 by default, including vulnerability scanning of container images and data encryption
  • Integrated Cloud Monitoring 🎛 with infrastructure, application, and Kubernetes-specific ☸️ views

GKE is like an IaaS offering in that it saves you infrastructure chores and it’s like a PaaS offering in that it was built with the needs of developers 👩‍💻 in mind.

Sample Syntax building a K8s ☸️ cluster:

gcloud container clusters create k1

In GKE, to make the pods in your deployment publicly available, you can connect a load balancer🏋️‍♀️ to it by running the kubectl expose command. K8s ☸️ then creates a service with a fixed IP address for your pods.

A service is the fundamental way K8s ☸️ represents load balancing 🏋️‍♀️. K8s ☸️ attaches an external load balancer🏋️‍♀️ with a public IP address to your service so that others outside the cluster can access it.

In GKE, this kind of load balancer🏋️‍♀️ is created as a network load balancer🏋️‍♀️. This is one of the managed load balancing 🏋️‍♀️ services that Compute Engine makes available to virtual machines. GKE makes it easy to use it with containers.

A service groups a set of pods together and provides a stable endpoint for them.

Imperative commands

kubectl get services – shows your service’s public IP address

kubectl scale – scales ⚖️ a deployment

kubectl expose – creates a service

kubectl get pods – watches the pods come online

The real strength 💪 of K8s ☸️ comes when you work in a declarative way. Instead of issuing commands, you provide a configuration file (YAML) that tells K8s ☸️ what you want your desired state to look like, and Kubernetes ☸️ figures out how to do it.

When you choose a rolling update for a deployment and then give it a new version of the software it manages, Kubernetes will create pods of the new version one by one, waiting for each new version pod to become available before destroying one of the old version pods. Rolling updates are a quick way to push out a new version of your application while still sparing your users from experiencing downtime.
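The declarative style plus rolling update described above might look like this minimal manifest (names, labels, and the image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3                 # K8s ☸️ keeps three pods running 🏃‍♂️
  selector:
    matchLabels:
      app: hello-web
  strategy:
    type: RollingUpdate       # replace old-version pods one by one
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: gcr.io/my-project/hello-web:v2
        ports:
        - containerPort: 8080
```

Apply it with kubectl apply -f deployment.yaml; bumping the image tag and re-applying is what triggers the rolling update.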

“Going where the wind 🌬 goes… Blooming like a red rose🌹

Introduction to Hybrid and Multi-Cloud Computing (Anthos)

Modern hybrid or multi‑cloud☁️ architectures allow you to keep parts of your system’s infrastructure on‑premises while moving other parts to the cloud☁️, creating an environment that is uniquely suited to many companies’ needs.

Modern distributed systems allow a more agile approach to managing your compute resources

  • Move only some of your compute workloads to the cloud ☁️
  • Move at your own pace
  • Take advantage of cloud’s☁️ scalability️ and lower costs 💰
  • Add specialized services to compute resources stack

Anthos is Google’s modern solution for hybrid and multi-cloud☁️ systems and services management.

The Anthos framework rests on K8s ☸️ and GKE deployed on‑prem, which provides the foundation for an architecture that is fully integrated with centralized management through a central control plane that supports policy‑based application life cycle🔄 delivery across hybrid and multi‑cloud☁️ environments.

Anthos also provides a rich set of tools🛠 for monitoring 🎛 and maintaining the consistency of your applications across all of your network, whether on‑premises, in the cloud☁️ K8s ☸️, or in multiple clouds☁️☁️.

Anthos Configuration Management provides a single source of truth for your cluster’s configuration. That source of truth is kept in the policy repository, which is actually a Git repository.

“And I discovered🕵️‍♀️ that my castles 🏰 stand…Upon pillars of salt🧂 and pillars of sand 🏖

App Engine (PaaS) lets you build highly scalable ⚖️ applications on a fully managed serverless platform.

App Engine makes deployment, maintenance, and autoscaling ⚖️ of workloads easy, allowing developers 👨‍💻 to focus on innovation.

GCP provides an App Engine SDK in several languages so developers 👩‍💻 can test applications locally before uploading them to the real App Engine service.

App Engine’s standard environment provides runtimes for specific versions of Java☕️, Python🐍, PHP, and Go. The standard environment also enforces restrictions🚫 on your code by making it run in a so‑called sandbox. That’s a software construct that’s independent of the hardware, operating system, or physical location of the server it runs🏃‍♂️ on.
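For the standard environment, a deployment revolves around a small app.yaml descriptor plus gcloud app deploy. A sketch (the runtime version is illustrative for the era of this course):

```yaml
# app.yaml – App Engine standard environment descriptor
runtime: python37     # one of the sandboxed standard runtimes 🐍
handlers:
- url: /.*
  script: auto        # route all requests to the app
```

Running gcloud app deploy app.yaml then pushes the app onto the fully managed service.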

If these constraints don’t work for a given application, that would be a reason to choose the flexible environment.

App Engine flexible environment:

  • Builds and deploys containerized apps with a click
  • No sandbox constraints
  • Can access App Engine resources

App Engine flexible environment apps use standard runtimes, and can access App Engine services such as:

  • Datastore
  • Memcache
  • Task Queues

Cloud Endpoints – Develop, deploy, and manage APIs on any Google Cloud☁️ back end.

Cloud Endpoints helps you create and maintain APIs

  • Distributed API management through an API console
  • Expose your API using a RESTful interface

Apigee Edge is also a platform for developing and managing API proxies.

Apigee Edge focuses on business problems like rate limiting, quotas, and analytics.

  • A platform for making APIs available to your customers and partners
  • Contains analytics, monetization, and a developer portal

Developing in the Cloud ☁️

Cloud Source Repositories – Fully featured Git repositories hosted on GCP

Cloud Functions – Scalable ⚖️ pay-as-you-go functions as a service (FaaS) to run your code with zero server management.

  • No servers to provision, manage, or upgrade
  • Automatically scale⚖️ based on the load
  • Integrated monitoring 🎛, logging, and debugging capability
  • Built-in security🔒 at role and per function level based on the principle of least privilege
  • Key🔑 networking capabilities for hybrid and multi-cloud☁️☁️ scenarios
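Deploying a function is a one-liner once the code exists. A hedged sketch for an HTTP-triggered Python 🐍 function (the function name and entry point are placeholders):

```shell
# Deploy hello_http from the current directory as a public HTTP endpoint
gcloud functions deploy hello-http \
  --runtime=python37 \
  --trigger-http \
  --entry-point=hello_http \
  --allow-unauthenticated
```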

Deployment: Infrastructure as code

Deployment Manager – creates and manages cloud☁️ resources with simple templates

  • Provides repeatable deployments
  • Create a .yaml template describing your environment and use Deployment Manager to create resources
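A minimal Deployment Manager template describing a single VM might look like this (names, zone, and image family are illustrative):

```yaml
# config.yaml – declare the environment; Deployment Manager creates it
resources:
- name: dm-demo-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: global/networks/default
```

Then gcloud deployment-manager deployments create my-deploy --config config.yaml creates the resources repeatably.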

“Follow my lead, oh, how I need… Someone to watch over me”

Monitoring 🎛: Proactive instrumentation

Stackdriver is GCP’s tool for monitoring 🎛, logging, and diagnostics. Stackdriver provides access to many different kinds of signals from your infrastructure platforms, virtual machines, containers, middleware, and application tier: logs, metrics, and traces. It provides insight into your application’s health ⛑, performance, and availability, so if issues occur, you can fix them faster.

Here are the core components of Stackdriver:

  • Monitoring 🎛
  • Logging
  • Trace
  • Error Reporting
  • Debugging
  • Profiler

Stackdriver Monitoring 🎛 checks the endpoints of web 🕸 applications and other Internet‑accessible services running in your cloud☁️ environment.

Stackdriver Logging lets you view logs from your applications, and filter and search on them.

Stackdriver Error Reporting tracks and groups the errors in your cloud☁️ applications, and notifies you when new errors are detected.

Stackdriver Trace samples the latency of App Engine applications and reports per-URL statistics.

Stackdriver Debugger connects your application’s production data to your source code, so you can inspect the state of your application at any code location in production.

“Whoa oh oh oh oh… Something big I feel it happening”

GCP Big Data Platform – services are fully managed, scalable ⚖️, and serverless

Cloud Dataproc is a fast, easy, managed way to run🏃‍♂️ the Hadoop 🐘 MapReduce 🗺, Spark 🔥, Pig 🐷, and Hive 🐝 services

  • Create clusters in 90 seconds or less on average
  • Scale⚖️ clusters up and down even when jobs are running 🏃‍♂️
  • Easily migrate on-premises Hadoop 🐘 jobs to the cloud☁️
  • Uses Spark🔥’s Machine Learning Library📚 (MLlib) to run classification algorithms
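The create/run/tear-down cycle for Dataproc is short enough to sketch in three commands (the cluster name is a placeholder; the example jar ships with Dataproc images):

```shell
# Spin up a small Dataproc cluster, run a Spark 🔥 job, tear it down
gcloud dataproc clusters create demo-cluster \
  --region=us-central1 --num-workers=2

gcloud dataproc jobs submit spark --cluster=demo-cluster \
  --region=us-central1 \
  --class=org.apache.spark.examples.SparkPi \
  --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar -- 1000

gcloud dataproc clusters delete demo-cluster --region=us-central1
```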

Cloud Dataflow🚰 – Stream⛲️ and Batch processing; unified and simplified pipelines

  • Processes data using Compute Engine instances.
  • Clusters are sized for you
  • Automated scaling ⚖️, no instance provisioning required
  • Managed expressive data Pipelines
  • Write code once and get batch and streaming⛲️.
  • Transform-based programming model
  • ETL pipelines to move, filter, enrich, shape data
  • Data analysis: batch computation or continuous computation using streaming
  • Orchestration: create pipelines that coordinate services, including external services
  • Integrates with GCP services like Cloud Storage🗄, Cloud Pub/Sub, BigQuery and BigTable
  • Open source Java☕️ and Python 🐍 SDKs

BigQuery🔎 is a fully‑managed, petabyte scale⚖️, low‑cost analytics data warehouse

  • Analytics database🛢; streams data at 100,000 rows/sec
  • Provides near real-time interactive analysis of massive datasets (hundreds of TBs) using SQL syntax (SQL 2011)
  • Compute and storage 🗄 are separated with a terabit network in between
  • Only pay for storage 🗄 and processing used
  • Automatic discount for long-term data storage 🗄
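The bq CLI mentioned at the top of this post is the quickest way to poke at BigQuery🔎. A sketch against one of Google’s public datasets:

```shell
# Interactive query with standard SQL against a public dataset
bq query --use_legacy_sql=false \
  'SELECT name, SUM(number) AS total
   FROM `bigquery-public-data.usa_names.usa_1910_2013`
   GROUP BY name ORDER BY total DESC LIMIT 5'
```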

Cloud Pub/Sub – Scalable ⚖️, flexible🧘‍♀️ and reliable enterprise messaging 📨

“Pub” in Pub/Sub is short for publishers, and “Sub” is short for subscribers.

  • Supports many-to-many asynchronous messaging📨
  • Application components make push/pull subscriptions to topics
  • Includes support for offline consumers
  • Simple, reliable, scalable ⚖️ foundation for stream analytics
  • Building block🧱 for data ingestion in Dataflow, IoT📲, Marketing Analytics
  • Foundation for Dataflow streaming⛲️
  • Push notifications for cloud-based☁️ applications
  • Connect applications across GCP (push/pull between Compute Engine and App Engine)
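The publish/subscribe 📨 flow above can be walked through end to end from the CLI (topic and subscription names are placeholders):

```shell
# Create a topic, attach a pull subscription, publish 📨, then pull
gcloud pubsub topics create orders
gcloud pubsub subscriptions create orders-sub --topic=orders
gcloud pubsub topics publish orders --message='{"id": 42}'
gcloud pubsub subscriptions pull orders-sub --auto-ack --limit=1
```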

Cloud Datalab🧪 is a powerful interactive tool created to explore, analyze, transform and visualize data and build machine learning models on GCP.

  • Interactive tool for large-scale⚖️ data exploration, transformation, analysis, and visualization
  • Integrated, open source
    • Built on Jupyter

“Domo arigato misuta Robotto” 🤖

Cloud Machine Learning Platform🤖

Cloud☁️ machine‑learning platform🤖 provides modern machine‑learning services🤖 with pre‑trained models and a platform to generate your own tailored models.

TensorFlow 🧮 is an open‑source software library 📚 that’s exceptionally well suited for machine‑learning applications🤖 like neural networks🧠.

TensorFlow 🧮 can also take advantage of Tensor 🧮 processing units (TPU), which are hardware devices designed to accelerate machine‑learning 🤖 workloads with TensorFlow 🧮. GCP makes them available in the cloud☁️ with Compute Engine virtual machines.

Generally, applications that use machine‑learning platform🤖 fall into two categories, depending on whether the data worked on is structured or unstructured.

For structured data, ML 🤖 can be used for various kinds of classification and regression tasks, like customer churn analysis, product diagnostics, and forecasting. In addition, ML 🤖 can detect anomalies, as in fraud detection, sensor diagnostics, or log metrics.

For unstructured data, ML 🤖 can be used for image analytics, such as identifying damaged shipment, identifying styles, and flagging🚩content. In addition, ML🤖 can be used for text analytics like a call 📞 center log analysis, language identification, topic classifications, and sentiment analysis.

Cloud Vision API 👓 derives insights from your images in the cloud☁️ or at the edge; use AutoML Vision👓 or pre-trained Vision API👓 models to detect emotion, understand text, and more.

  • Analyze images with a simple REST API
  • Logo detection, label detection
  • Gain insights from images
  • Detect inappropriate content
  • Analyze sentiment
  • Extract text

Cloud Natural Language API 🗣extracts information about people, places, events, (and more) mentioned in text documents, news articles, or blog posts

  • Uses machine learning🤖 models to reveal structure and meaning of text
  • Extract information about items mentioned in text documents, news articles, and blog posts

Cloud Speech API 💬 enables developers 👩‍💻 to convert audio to text.

  • Transcribe your content in real time or from stored files
  • Deliver a better user experience in products through voice 🎤 commands
  • Gain insights from customer interactions to improve your service

Cloud Translation API🈴 provides a simple programmatic interface for translating an arbitrary string into a supported language.

  • Translate arbitrary strings between thousands of language pairs
  • Programmatically detect a document’s language
  • Support for dozens of languages

Cloud Video Intelligence API📹 enables powerful content discovery and engaging video experiences.

  • Annotate the contents of videos
  • Detect scene changes
  • Flag inappropriate content
  • Support for a variety of video formats

“Fly away, high away, bye bye…” 🦋

We will continue next week with Part II of this series….

Thanks –


Week of September 25th

“Dynamite🧨 with a laser beam💥…Guaranteed to blow💨 your mind🧠

Happy National Lobster 🦞 Day!

“And here I go again on my own… Goin’ down the only road I’ve ever known”

This week we continued where we last left off the previous week as we continued exploring the depths of SQL Server 2019. Last week, we just merely scratched 💅 the surface of SQL Server 2019 as we dove🤿 into IQP and the improvements made to TempDB. This week we tackled Microsoft’s most ambitious SQL Server offering to date in SQL Server 2019 Big Data Clusters (BDC). When I first thought of BDCs the first thing that came to mind 🤔 was a Twix Bar 🍫. Yes, we all know Twix is the “only candy with the cookie crunch,” but what makes the Twix bar so delicious 😋 is the perfect melding of smooth chocolate, chewy caramel, and of course a crisp cookie🍪! Well, that’s exactly what Big Data Clusters are like… You’re probably thinking Huh?

Big Data Clusters (BDC) is MSFT’s groundbreaking new Big Data/Data Lake architecture that unifies virtual business data and operational data stored in relational databases with IoT for true real-time BI and embedded Artificial Intelligence (AI) and Machine Learning (ML). BDC combines the power⚡️ of SQL Server, Spark 🔥, and the Hadoop Distributed File System (HDFS) 🐘 into a unified data platform. But that’s not all! Since BDC runs natively on Linux🐧, it’s able to embrace modern architectures for deploying applications, like Linux-based🐧 Docker 🐳 containers on Kubernetes ☸︎.

By leveraging K8s ☸︎ for orchestration, deployments of BDCs are predictable, fast 🏃🏻, elastic🧘‍♀️, and scalable ⚖️. Seeing that Big Data Clusters can run in any Kubernetes ☸︎ environment, whether on-premises 🏠 (i.e. Red Hat OpenShift) or in the cloud☁️ (i.e. Amazon EKS), BDC is a perfect fit to be hosted on Azure Kubernetes Service (AKS).

Another great feature that BDC makes use of is data virtualization, also known as “Polybase“. Polybase made its original debut with SQL Server 2016. It had seemed like Microsoft had gone to sleep😴 on it, but now with SQL Server 2019 BDC, Microsoft has broken out big time! BDC takes advantage of Data Virtualization as its “data hub”. So, you don’t need to spend the time 🕰 and expense💰 of traditional extract, transform, and load (ETL) 🚜 and data pipelines. In addition, it lets organizations leverage existing SQL Server expertise and extract value from third-party data sources such as NoSQL, Oracle, Teradata, and HDFS🐘.

Lastly, BDC takes advantage of Azure Data Studio (ADS) for both deployments and administration of BDC. For those who are not familiar with ADS, it’s a really cool 😎 tool 🛠 that you can benefit from by acquainting yourself with. Of course, SSMS isn’t going anywhere, but with ADS you get a lightweight cross-platform database tool 🛠 that uses Jupyter notebooks📒 and Python🐍 scripting, making deployments and administration of BDCs a breeze. OK, I am ready to rock and roll 🎶!

“I love❤️ rock n’ roll🎸… So put another dime in the jukebox 📻, baby”

Before we could jump right into the deep end of the Pool 🏊‍♂️ with Big Data Clusters, we felt the need for a little primer on Virtualization, Kubernetes ☸︎, and Containers. Fortunately, we knew just who could deliver the perfect overview: none other than one of the prominent members of the Mount Rushmore of SQL Server, Buck Woody, through his guest appearance with longtime product team member and Technology Evangelist Sanjay Soni in the Introduction to Big Data Cluster on SQL Server 2019 | Virtualization, Kubernetes ☸︎, and Containers YouTube video. Buck has a very unique style and an amazing skill for taking complex technologies and making them seem simple. His supermarket 🛒 analogy to explain Big Data, Hadoop and Spark 🔥, virtualization, containers, and Kubernetes ☸︎ is pure brilliance! Now, armed 🛡🗡 with Buck’s knowledge bombs 💣, we were ready to light 🔥 this candle 🕯!

“Let’s get it started (ha), let’s get it started in here”

Taking us through BDC architecture and deployments were newly minted Microsoft Data Platform MVP Mohammad Darab, who put together a series of super exciting videos as well as excellent, detailed blog posts on BDC, and longtime SQL Server veteran and Microsoft Data Platform MVP Ben Weissman, through his awesome Building Your First Microsoft SQL Server Big Data Cluster Pluralsight course. Ben’s spectacular course covers not only architecture and deployment but how to get data in and out of clusters, how to make the most out of them, and how to monitor, maintain, and troubleshoot BDC.

“If you have built castles 🏰 in the air, your work need not be lost; that is where they should be. Now put the foundations 🧱 under them.” – Henry David Thoreau

A big data cluster consists of several major components:

  • Controller (Control plane)
  • Master Instance – manages connectivity (endpoints and communication with the other Pools 🏊‍♂️), scale-out ⚖️ queries, metadata and user databases (the target for database restores), and machine learning services.
  • Compute Pool 🏊‍♂️
  • Storage Pool 🏊‍♂️
  • Data Pool 🏊‍♂️
  • Application Pool 🏊‍♂️

Data not residing in your master instance is exposed through the concept of external tables. Examples are CSV files on an HDFS store or data on another RDBMS. External tables can be queried the same as local tables on the SQL Server, with several caveats:

  • You are unable to modify the table structure or content
  • No indexes can be applied (only statistics are kept; SQL Server houses that metadata)
  • The data source might require you to provide credentials 🔐
  • The data source may also need a format definition, like text qualifiers or separators for CSV files 🗃
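Once a source is virtualized, an external table is queried with plain T‑SQL just like a local table. Here is a minimal sketch (the table and column names are hypothetical, purely for illustration):

```sql
-- Join a local table to an external (virtualized) table.
-- virt.RemoteCustomers is a hypothetical external table pointing at another RDBMS;
-- its rows are fetched from the remote source at query time (no local copy).
SELECT TOP (10)
       o.OrderID,
       c.CustomerName
FROM   dbo.Orders AS o                -- local SQL Server table
JOIN   virt.RemoteCustomers AS c      -- external table (read-only)
       ON c.CustomerID = o.CustomerID;
```

Keep the caveats in mind: you can read the external table, but you can’t modify its structure or content from this side.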

The controller provides management and security for the cluster and acts as its control plane. It takes care of all the interactions with K8s ☸︎, the SQL Server instances that are part of the cluster, and other components like HDFS🐘, Spark🔥, Kibana, Grafana, and Elasticsearch.

The controller manages:

  • Cluster lifecycle: bootstrap, delete, update, etc.
  • Master SQL Server Instance
  • Compute, data, and Storage Pools 🏊‍♂️
  • Cluster Security 🔐

The Compute Pool 🏊‍♂️ is a set of stateless SQL Server 2019 instances that work together. The Compute Pool 🏊‍♂️ leverages Data Virtualization, or Polybase, to scale out ⚖️ queries across partitions. The Compute Pool 🏊‍♂️ is automatically provisioned as part of BDC. Management and patching of Compute Pools 🏊‍♂️ is easy because they run as Docker 🐳 containers on K8s☸️ pods.

Please note: Queries on BDC can also function without the Compute Pool 🏊‍♂️.  

The Compute Pool 🏊‍♂️ is responsible for:

  • Joining of two or more directories 📂 in HDFS🐘 with 100+ files
  • Joining of two or more data sources
  • Joining multiple tables with different partitioning or distribution schemes
  • Data stored in Blob Storage

The Storage Pool 🏊‍♂️ consists of pods comprised of SQL Server on Linux🐧 and Spark🔥 on HDFS 🐘 (deployed automatically). All the nodes of the BDC are members of an HDFS🐘 cluster. The Storage Pool 🏊‍♂️ stores file‑based 🗂 data like CSVs, which can be queried directly through external tables and T‑SQL, or with Python🐍 and Spark🔥. If you already have HDFS🐘 data in either Azure Data Lake Storage (ADLS) or AWS S3 buckets🗑, you can easily mount your existing storage into your BDC without the need to shift all your data around.

The Storage Pool 🏊‍♂️ is responsible for:

  • Data ingestion through Spark🔥
  • Data storage in HDFS🐘 (Parquet format). HDFS🐘 data is spread across all storage nodes in the BDC for persistence
  • Data access through HDFS🐘 and SQL Server Endpoints

The Data Pool 🏊‍♂️ is used for data persistence and caching. Under the covers, the Data Pool 🏊‍♂️ is a set of SQL Servers (defined at deployment) that use Columnstore indexes and sharding. In other words, SQL Server will create physical tables with the same structure and evenly distribute the data across the total number of servers. Queries will also be distributed across servers, but to the user it is transparent, as all the magic✨🎩 happens behind the scenes. The data stored in the Data Pool 🏊‍♂️ does not support transactions, as its sole purpose is caching. It is used to ingest data from SQL queries or Spark🔥 jobs. BDC data marts are persisted in the Data Pool 🏊‍♂️.

Data Pool 🏊‍♂️ is used for:

  • Complex Query Joins
  • Machine Learning
  • Reporting

The Application Pool 🏊‍♂️ is used to run jobs like SSIS, store and execute ML models, and host all kinds of other applications, which are generally exposed through a web service.

“This is 10% luck🍀… 20% skill… 15% concentrated power🔌 of will… 5% pleasure😆… 50% pain🤕… And a 100% reason to remember the name”

After both great overviews of the BDC architecture from Mo and Ben, we were eager to build this spaceship 🚀. First, we needed to download the required tools 🛠🧰 so we could happily😊 have a base machine from which to deploy our first BDC. Just to spice 🥵 things up, I chose the Mac 💻 as my base machine.

Below are the Required Tools:

  • Azure Data Studio (ADS)
  • ADS Extension for Data Virtualization
  • Python 🐍
  • Kubectl ☸️
  • azdata
  • Azure CLI (only required if using AKS)

Optional Tools:

  • Text Editor 🗒
  • PowerShell Extension for ADS
  • Zip utility Software 🗜
  • SQL Server Utilities
  • SSH

Now that we had all our prerequisites downloaded, we next needed to determine where we should deploy our BDC. The most natural choice seemed to be AKS.

To help walk🚶‍♂️ us through the installation of our base machine and the deployment of BDC using ADS on AKS, we once again turned to Mo Darab, who put together an excellent and easy-to-follow series of videos, Deploying Big Data Clusters, on his YouTube channel.

Base Machine

In his video “How to Set Up a Base Machine”, Mo used a Windows machine, as opposed to us, who went with the Mac 💻. But for all intents and purposes the steps are pretty much the same. The only difference is the package📦 manager that we need to use. On Windows the recommended package manager is Chocolatey 🍫, while on the Mac 💻 it’s Brew 🍺.

Here were the basic steps:

  • Install kubectl ☸️
    • brew install kubernetes-cli
  • Install the Azure CLI
    • brew update && brew install azure-cli
  • Install Python
    • brew install python3
  • Install azdata
    • brew tap microsoft/azdata-cli-release
    • brew update
    • brew install azdata-cli
  • Install the ADS Extension for Data Virtualization
  • Install Pandas (Manage Packages option in ADS / Add New / Pandas / click Install)

“Countdown commencing, fire one”

Now that we had our base machine up and running 🏃‍♂️, it was time ⏱ to deploy. To guide us through the deployment process, we once again went to Mo and followed along with his How to Deploy Big Data Cluster on AKS using Azure Data Studio. Mo walked us through the wizard 🧙‍♂️ in ADS, which basically creates a Notebook 📒 that builds our BDC. We created our deployment Jupyter notebook 📒, clicked Run 🏃‍♂️, and let it rip☄️. Everything seemed to be humming 🎶 along, except that 2 hours later our Jupyter notebook was still running 🏃‍♂️.

Obviously, something wasn’t right 🤔. Unfortunately, we didn’t have much visibility into what was happening with the install through the notebook, but hey, that’s OK, we have a terminal prompt in ADS, so we could just run some K8s☸️ commands to see what was happening under the covers. After running a few commands, we noticed an “ImagePullBackOff” error with our SQL Server images. After a little research, we determined someone forgot 🙄 to update the Microsoft repo with the latest CU image.

“But there is no joy in Mudville—mighty Casey has struck out.”

So we filed a bug 🐞 on GitHub, re-ran the BDC wizard 🧙‍♂️ with the Docker🐳 settings pointing to the next latest package available on the Docker🐳 Registry, and we were back in business until… Bam!🥊🥊

Operation failed with status: 'Bad Request'. Details: Provisioning of resource(s) for container service mssql-20200923030840 in resource group mssql-20200923030840 failed. Message: Operation could not be completed as it results in exceeding approved Total Regional Cores quota.

What shall we do? What does our Azure Fundamentals training tell us to do? That’s right: we go to the Azure Portal, submit a support ticket, and beg the good people at Microsoft to increase our Cores Quota. So we did that, and almost instantaneously MSFT obliged. 😊

Great, we were back in business (well, sort of)… After several more attempts we ran into more quota issues with all three types (SKU, Static, and Basic) of Public IP Addresses. So, three more support tickets later, and there was finally joy 😊 to the world🌎.

Next, we turned back to Ben, who provided some great demos in his Pluralsight course on how to set up Data Virtualization with both a SQL Server and HDFS 🐘 files as data sources on BDC.

Data Virtualization (SQL Server)

  1. Right‑mouse click -> Virtualize Data
  2. The Data Virtualization wizard 🧙‍♂️ will launch.
  3. In the next step, choose the data source:
    • Create a Master Key
    • Create a connection and specify the username and password
    • Next, the wizard🧙‍♂️ will connect to the source and return a list of the tables and views
    • Choose the script option
    • Click on a specific table(s)

The external table inherits a schema from its source, and transformation would happen in your queries.

You can choose between just having the external tables created or generating a script.

        CREATE EXTERNAL TABLE [virt].[PersonPhone]
        (
            [BusinessEntityID] INT NOT NULL,
            [PhoneNumber] NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
            [PhoneNumberTypeID] INT NOT NULL,
            [ModifiedDate] SMALLDATETIME NOT NULL
        )
        WITH (LOCATION = N'[AdventureWorks2015].[Person].[PersonPhone]', DATA_SOURCE = [SQLServer]);

Data Virtualization (HDFS🐘)

  1. Right‑mouse‑click your HDFS 🐘 and create a new directory.
  2. Upload the flight delay dataset to it.
  3. Expand the directory 📁 to see the files 🗂.
  4. Right‑mouse‑clicking a file launches a wizard 🧙‍♂️ to virtualize data from CSV files.
  5. The wizard 🧙‍♂️ will ask you for the database in which you want to create the external table and a name for the data source.
  6. Next comes a preview of our data.
  7. Finally, the wizard 🧙‍♂️ recommends a column type for each column.


        CREATE EXTERNAL DATA SOURCE [SqlStoragePool]
        WITH (LOCATION = N'sqlhdfs://controller-svc/default');

        CREATE EXTERNAL TABLE [csv].[airlines]
        (
            [IATA_CODE] nvarchar(50) NOT NULL,
            [AIRLINE] nvarchar(50) NOT NULL
        )
        WITH (LOCATION = N'/FlightDelays/airlines.csv', DATA_SOURCE = [SqlStoragePool], FILE_FORMAT = [CSV]);

Monitoring 🎛 Big Data Clusters through Azure Data Studio

BDC comes with a pre-deployed Grafana container for SQL Server and system metrics. This metrics server collects metrics from every single node, container, and pod and provides individual dashboards 📊 for them.

The Kibana dashboard 📊 is part of the Elastic Stack and provides a looking glass🔎 into all of your log files in BDC.

Troubleshooting Big Data Clusters through Azure Data Studio

In ADS, on the main dashboard 📊 there is a Troubleshoot button. It provides a library📚 of notebooks 📒 that you can use to troubleshoot and analyze the cluster. The notebooks 📒 are categorized and cover all kinds of different aspects of monitoring 🎛 and diagnosing to help you repair 🛠 an issue within your cluster.

In addition, the azdata utility can be used for monitoring 🎛, running queries and notebooks 📒, and retrieving a cluster’s endpoints, namespace, and username as well.

We really enjoyed spending time learning SQL Server 2019 Big Data Cluster. 😊

“And I feel like it’s all been done Somebody’s tryin’ to make me stay You know I’ve got to be movin’ on”

Below are some of the destinations I am considering for my travels for next week:

  • Google Cloud Certified Associate Cloud Engineer Path

Thanks –


Week of September 18th

“Catch the mist, catch the myth…Catch the mystery, catch the drift”

L’shanah tovah! 🎉

So after basking in last week’s accomplishment of passing the AZ-900: Microsoft Azure Fundamentals certification exam 📝, this week we decided that we needed to come down from the clouds☁️☁️. Now grounded, we knew exactly where we needed to go. For those who know me well, I have spent a large part of my career supporting and engineering solutions for Microsoft SQL Server. Microsoft released SQL Server 2019 back in November 2019.

For the most part, we really haven’t spent too much on our journey going deep into SQL Server. So it has been long overdue that we give SQL Server 2019 a looksy 👀 and besides we needed to put the S-Q-L back in SQL Squirrels 🐿

SQL Server 2019 has been Microsoft’s most ambitious release of a product that has now been around for just a little over 25 years. 🎂 The great folks at MSFT built on previous innovations in the product to further enhance its support for development languages, data types, on-premises and cloud ☁️ environments, and operating systems. Just a few of the big headline features of SQL Server 2019 are:

  • Intelligent Query Processing (IQP)
  • Memory-optimized TempDB
  • Accelerated Database Recovery (ADR)
  • Data Virtualization with PolyBase
  • Big Data Clusters

So let’s peel the onion 🧅 and begin diving into SQL Server 2019. So, where to begin? Full disclosure: I am a bit of a SQL Server internals geek 🤓. So we started with a tour of some of the offerings in Intelligent Query Processing.

The IQP feature set in SQL Server 2019 provides broad improvements to the performance💥 of existing workloads with minimal implementation effort. In theory, you simply change the database compatibility level of your existing user databases when you upgrade to 2019, and your queries just work faster 🏃 with no code changes! Of course, this sounds 📢 almost too good to be true?
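For the record, that “switch” really is a one-liner. A quick sketch, assuming a user database named [MyAppDb] (the name is made up):

```sql
-- Compatibility level 150 = SQL Server 2019; this is what unlocks the IQP features.
ALTER DATABASE [MyAppDb] SET COMPATIBILITY_LEVEL = 150;

-- Verify the change:
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'MyAppDb';
```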

“Here we are now, entertain us”

To take us through an unbiased look 👓 at IQP is none other than the man, the myth, the legend, Microsoft Certified Master Brent Ozar. Brent produced two amazing presentations for the SQL Server community: What’s New in SQL Server 2019 and The New Robots in SQL Server 2017, 2019 and Azure.

Brent with his always quirky, galvanic, and sometimes cynical style gives us the skinny on the Good 😊, the Bad 😞, and the Ugly😜 on IQP.

“I never played by the rules, I never really cared… My nasty reputation takes me everywhere”

First, we began looking at Table Variables. For those who have been around the SQL Server block 🧱, table variables have been around since the good ol’ days of SQL 2000 as a performance 💥 improvement alternative to using temporary tables.

However, SQL Server doesn’t maintain statistics on table variables, which, as you probably know, the query optimizer uses to determine the best execution plan. Basically, the SQL Server query optimizer just assumes that a table variable only has one row, whether it does or doesn’t. So performance wasn’t so great, to say the least. Thus, table variables don’t have the best reputation.

Well, SQL Server 2019 to the rescue! (Sort of…) In SQL Server 2019, by simply switching the database “compat mode” to SQL Server 2019 (15.x), SQL Server will now use the actual number of rows in the table variable to create the plan. This is just fantastic! However, as Brent diligently points out, this nifty little enhancement isn’t all Rainbows 🌈 and Unicorns 🦄. Although it does solve the pesky estimated-row problem the query optimizer has with table variables, it now introduces the enigma that is parameter sniffing.
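A quick sketch of what changed (the object names are made up for illustration):

```sql
-- Table variable deferred compilation (database compat level 150):
DECLARE @ids TABLE (id INT PRIMARY KEY);

INSERT INTO @ids
SELECT object_id FROM sys.all_objects;      -- typically a few thousand rows

-- Under compat level 140 and below, the optimizer assumes @ids has 1 row.
-- Under 150, this statement is compiled with the actual row count, so the
-- join can get a sensible plan (e.g. a Hash Match instead of Nested Loops).
SELECT o.name
FROM sys.all_objects AS o
JOIN @ids AS i
     ON i.id = o.object_id;
```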

Parameter sniffing has plagued SQL Server since the beginning of time 🕰 and is one of the more common performance💥 issues we SQL people encounter. In simplest terms, SQL Server tries to optimize compilation by reusing the execution plan from a previously run🏃 query. The problem arises when the parameters provided produce very different result sets. Overall, this feature is a definite improvement but not a silver bullet 🚀.

“Memories may be beautiful and yet…”

Next, we took a look 👀 at the Memory Grant Feedback (row mode) feature, which originally made its debut in SQL Server 2017. A memory grant is used by the SQL Server database engine to decide how much memory it will allocate for a given query.

Sometimes it can allocate too much or too little relative to the actual memory needed. Obviously, if too much memory is allocated, then there will be less memory available for other processes, and if too little is allocated, then memory will spill 💦 to disk and performance will suffer.

“Here I come to save the day!” 🐭

Once again, by simply switching the database “compat mode” to SQL Server 2019 (15.x), SQL Server will now automatically adjust the amount of memory granted for a query. Awesome! … but what’s the catch? Well, similar to the new table variable enhancement, memory grants are sensitive to the dreaded parameter sniffing, as SQL Server bases its decision making on the previously run query. In addition, if you rebuild your indexes or create new indexes, the adaptive memory grants feature will forget everything it learned about previously run plans and start all over. 😞 This is an improvement on the past performance of memory grants but unfortunately not a panacea.

“I will choose a path that’s clear.. I will choose free will”

Next, we explored another feature that first made an appearance in SQL Server 2017: Adaptive Joins. Back then this sleek feature was only available for Columnstore indexes, but in SQL Server 2019 it is available for our run-of-the-mill 🏭 b-tree 🌲 or row-mode workloads. With Adaptive Joins, the query optimizer dynamically determines at runtime a threshold number of rows and then chooses between a Nested Loops or Hash Match join operator. Again, we simply switch the database “compat mode” to SQL Server 2019 (15.x) and abracadabra!

This is awesome! But… well, once again parameter sniffing rears its ugly 😜 head. As Brent astutely points out during his presentation and documents in his detailed blog post, when dealing with complex queries SQL Server will often produce several plans. So when SQL Server 2019 tries to solve this problem by giving the optimizer more options to choose from, in some cases it might backfire 🔥 depending on your workloads.

“Just when I thought our chance had passed..You go and save the best for last”

The last feature we keyed 🔑 on as part of IQP was Batch Mode execution. This is another great innovation that came straight out of Columnstore Indexes (SQL 2012). Now in SQL Server 2019, users can take advantage of this enhancement without needing to use a Columnstore Index.

Batch mode is a huge performance 💥 enhancement, especially for CPU-bound queries and for queries that use aggregation, sorts, and group by operations. Batch mode performs scans and calculations using batches. To enable batch mode processing, all you have to do is… that’s right… you guessed it… switch the database “compat mode” to SQL Server 2019 (15.x).
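And if batch mode on rowstore ever misbehaves for a particular workload, there is a database-scoped escape hatch, so you don’t have to roll back the whole compat level. A sketch (the database is assumed to already be at compat level 150):

```sql
-- Opt out of batch mode on rowstore for the current database only:
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = OFF;

-- And back in:
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = ON;
```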

“Don’t cry Don’t raise your eye It’s only teenage wasteland”

Continuing our journey with SQL Server 2019, we ventured to take a sneak peek 👀 at the improvements SQL Server 2019 made to TempDB. Guiding us through the murky terrain of TempDB was SQL Royalty👸 and Microsoft Certified Master Pam Lahoud, who presented two short but very thorough videos as part of the Data Exposed series.

In these great videos, Pam gives us the lowdown on improvements to TempDB in SQL Server 2019. TempDB is, of course, one of the famous (or infamous) system databases in SQL Server. As SQL Server has been enhanced over the last several decades, more and more “stuff” has been chucked into TempDB, which has often been referred to as the “Wastelands”. Some of the activities that go on in TempDB are:

  • Temp Tables
  • Table Variables
  • Cursors
  • Table-Valued Functions
  • Row Versions
  • Online Index Operations
  • Sorts
  • Triggers
  • Statistics Updates
  • Hash Worktables
  • Spools

As a result of all these happenings, in SQL Server we often experience some pain 😭 from object allocation contention, metadata contention, and temp table cache contention.

Look up in the sky! It’s a Bird 🕊 … It’s a Plane… It’s ..

..SQL Server 2019. (Of course 😊)

In SQL 2019, there are two major improvements that impact TempDB performance. The first improvement helps reduce some of the temp table cache contention issues: SQL 2019 intelligently partitions cache objects and optimizes cache lookup.

To address object allocation contention issues, SQL 2019 now offers concurrent PFS updates. Enabling concurrent PFS updates allows multiple threads to share a latch on the PFS page, and therefore we can have more concurrent threads and less contention on those object allocation pages.

The next major enhancement is memory-optimized metadata tables. This is the big one that made the brochure!

Under the hood, SQL Server 2019 moves the system objects into memory-optimized tables with latch-free, lock-free structures, which greatly increases the concurrency we can have against those metadata tables and helps alleviate that metadata contention.

By default, the Memory-Optimized TempDB Metadata feature is not turned on, but it’s quite easy to enable:
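The switch is server-level and takes effect after a restart:

```sql
-- Enable memory-optimized TempDB metadata (requires a service restart):
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

-- Check whether it is active:
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized');
```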


For those who see a flurry of tempdb activity, this feature seems like a no-brainer 🧠.

“Makes us want to stay, stay, stay… For awhile”

Below are some of the places I am considering for my explorations for next week:

  • Continuing with SQL Server 2019
  • Google Cloud Certified Associate Cloud Engineer Path

Thanks –


Week of September 12th

“Now they always say, ‘Congratulations’” 🎉🥳

Happy International🌎 Programmers’👩🏽‍💻day!

So last week we decided to get our Microsoft Azure fundies on. So naturally, it would only make sense to test out our knowledge🤓 and take the AZ-900: Microsoft Azure Fundamentals certification exam 📝.

So, this week we decided to go down that road 🚙 and I am happy 😊 to report we passed.

“That’s what the flair is about… it’s about fun 😃!”  — Stan from Chotchkie’s

Ok, so besides picking up some additional flair to match our AWS Practitioner badge it does provide us with some additional street “Cloud ☁️ Cred” and we can also show lack of favoritism of one cloud ☁️ provider over another. The bottom line is all clouds☁️ are good 😊 and very powerful⚡️!

To help prepare for the exam📝, we turned to several resources. Fortunately, there is a lot of great content out there. For a quick primer we turned to John Savill 💪, who literally got pumped up 🏋🏻‍♂️ and delivered an excellent primer on Azure architecture and core concepts through his AZ-900 Azure Fundamentals Hints and Tips YouTube video, and to “Tech Trainer” Tim Warner, who was happy 😊 to be our instructor 👨‍🏫 through his amazing Microsoft Azure Fundamentals Study Guide YouTube series.

Finally, just to get a feel for the types of questions that might appear on the exam 📝, I purchased the TestPrepTraining – Microsoft Azure Fundamentals (AZ-900) Practice Exam, which provided over 1,000 Azure Fundamentals-type questions. With that being said, there were some similar questions I saw in the course’s practice tests, but there were still many questions on my actual exam 📝 that I had never seen 👀 before. In fact, the questions were even in a completely different format than the courseware. Overall, I was quite impressed with how Microsoft structured this test. They made it quite challenging, with the expectation that you had practical experience using the Azure Portal and that you didn’t just memorize a bunch of concepts and a plethora of questions.

“My, my, my, I’m once bitten 🦈 twice shy baby”

For those who might remember, when I first attempted to take the AWS Cloud Practitioner Exam, it didn’t exactly go as planned. So perhaps now I might be a little gun-shy ☺️, or just a proponent of having a highly available and fault-tolerant environment (like the cloud ☁️ itself) for taking these exams 📝. So, in preparation for the exam 📝, I set up two MacBooks 💻 and one Windows laptop with the secure OnVUE browser hours ⏳ before the test to mitigate against any unforeseen circumstances.

“Hold your head up… Keep your head up, movin’ on”

Below are some areas I am considering for my travels next week:

  • SQL Server 2019
  • Google Cloud Certified Associate Cloud Engineer Path

Thanks –


Week of September 4th

“You’re my blue sky, you’re my sunny️ day.”

Happy Labor Day👷👷🏽‍♀️ Weekend!

Back at the end of July, we decided to re-route course and go back to the basics with the AWS Cloud☁️, focusing on the core concepts and principles of AWS. Despite hitting a temporary obstacle, we subsequently took and passed the AWS Certified Cloud Practitioner certification exam📝 last week. Feeling the need to spread the love❤️ around the Troposphere, we decided we should circle 🔵 back to Microsoft’s very popular cloud☁️ offering, Azure, and focus on the “fundies” or Fundamentals of Azure. Of course, this wasn’t our first time 🕰 at this rodeo🤠🐴. We have spent several occasions in the Microsoft Stratosphere☁️☁️ before, most recently looking at Microsoft’s NoSQL Azure offerings. This time 🕰 we would concentrate specifically on general Cloud☁️ Concepts, Azure Architectural Components, Microsoft Azure Core Services, Security🔒, Privacy🤫, Compliance, and Pricing💰, Service Level Agreements, and Lifecycles. To obtain such knowledge we would need to explore several resources. Our first course 🍽 of Azure fundamentals was an amazing compilation of rich documentation, vignettes🎞 from current and former blue badgers/Cloud☁️ Advocates Anthony Bortolo, Sonia Cuff, Phoummala Schmitt, Susan Hinton, Rick Claus, Christina Warren, Pierre Roman and Scott Cate, and several short labs🧪 that give you free access to the Azure Cloud☁️ and let you implement solutions. For our second course 🍽 we went out to YouTube and found 5 ½ hours ⏳ of goodness 😊 with Paul Browning’s awesome videos on “Microsoft Azure Fundamentals (AZ 900) – Complete Course”, and then for an encore we went to Pluralsight and visited with both Michael Brown and his Microsoft Azure Security🔒 and Privacy🤫 Concepts and Steve Buchanan and his Microsoft Azure Pricing and Support Options, because who can ever get enough of Security🔒 and Pricing💰?

“So, I look in the sky, but I look in vain…Heavy cloud☁️, but no rain🌧”

General Cloud️ Concepts

First, let’s review… What is cloud☁️ computing anyway? There are numerous meanings out there. According to Wikipedia “Cloud☁️ computing is the on-demand availability of computer system resources, especially data storage🗄 (cloud☁️ storage🗄) and computing power🔌, without direct active management by the user. “

It’s really just a catchy name; contrary to popular belief, Cloud☁️ Computing has nothing to do with the clouds☁️ ☁️ or the weather☔️ in the sky. In simplest terms it means sharing pooled computing resources over the Internet. And are you ready for the kicker? You can rent them, paying only for what you use, as opposed to the traditional computing model where a company or organization would invest in potentially expensive real estate to house owned Compute🖥, Storage🗄, Networking or fancy Analytics.

So now we are faced with the argument of capital expenditure (the traditional computing cost model) versus operational expenditure (the cloud☁️ computing cost model).

Capital expenditure (CapEx) 💰consists of the funds that a company uses to purchase major physical goods or services that the company will use for more than one year and the value will depreciate over time 🕰

Operational expenditure (OpEx) 💰are deducted in the same year they’re made, allowing you to deduct those from your revenues faster.

So, looking from a cost perspective, the cloud☁️ can offer a better solution at a better cost, since the cloud☁️ provider already owns those resources and you benefit from its economies of scale⚖️.

That’s great but let’s leave the expenses to the bean counters. After all we are technologists and we want the best performance and efficient technology solutions. So, what other benefits does Cloud☁️ provide me? How about Scalability ⚖️, Elasticity 🧘‍♀️, Agility 💃, Fault Tolerance, High Availability, Disaster🌪 Recovery and Security🔒.

  • Scalability ⚖️: Cloud☁️ will increase or decrease resources and services used based on the demand or workload at any given time 🕰. Cloud☁️ supports both vertical ⬆️ and horizontal ↔️ scaling ⚖️ depending on your needs.
  • Elasticity 🧘‍♀️: Cloud☁️ compensate spike 📈 or drop 📉 in demand by automatically adding or removing resources.
  • Agility 💃: Cloud☁️ eliminates the burdens of maintaining software patches, hardware setup, upgrades, and other IT management tasks. All of this is automatically done for you. Allowing you to focus on what matters: building and deploying applications.
  • Fault Tolerance: Cloud☁️ has fully redundant datacenters located in various regions all over the globe.
  • High Availability & Disaster Recovery: Cloud☁️ can replicate your services into multiple regions for redundancy and locality or select a specific region to ensure you meet data-residency and compliance laws for your customers.
  • Security🔒: Cloud☁️ offers a broad set of policies, technologies, controls, and expert technical skills that can provide better security🔒 than most organizations can otherwise achieve.

Ok, now I am sold. But what types of clouds☁️ are there?

There are multiple types of cloud☁️ computing services, but the three main ones are:

Infrastructure as a Service (IaaS) – enables applications to run🏃🏻 on the cloud☁️ instead of on your own infrastructure. Allows the most control over the provided hardware that runs 🏃 your applications.

Platform as a Service (PaaS) – enables developers to create software without investing in expensive 🤑 hardware. Allows you to create an application quickly without managing the underlying infrastructure.

Software as a Service (SaaS) – provides answers to desktop needs for end users. Based on an architecture where one version of the application is used for all customers and licensed through a monthly or annual subscription. 

What about Cloud☁️ Deployment models? Well, there are multiple types of cloud☁️ deployment models out there as well.

Public cloud☁️: cloud☁️ vendor that provides cloud☁️ services to multiple clients. All of the clients securely 🔒 share the same hardware in the back end.

Private cloud️: an organization uses its own hardware and software resources to deliver cloud☁️ services.

Hybrid cloud️: this cloud☁️ model is a combination of both private and public cloud☁️ models.

Community cloud☁️: this model consists of a pool of computer resources available to different organizations with common needs. Clients, referred to as tenants, can access the resources quickly and securely🔒.

“Blue skies, smilin’ 😊 at me Nothin’ but blues skies do I see”

So now that we have expounded the virtues of Cloud☁️ computing concepts, let’s take a deeper look at what we came for…

Microsoft Azure is a cloud☁️ computing service created by Microsoft for building, testing, deploying, and managing applications and services through a global network of Microsoft managed data centers.

“Architecture starts when you carefully put two bricks🧱 together. There it begins.”

Azure Architectural Components

Microsoft Azure is made up of data centers located around the globe 🌎. These data centers are organized and made available to end users by region. A region is a geographical 🌎 area on the planet 🌎 containing at least one, but potentially multiple data centers that are in close proximity and networked together with a low-latency network.

Azure divides the world 🌎 into geographies 🌎 that are defined by geopolitical boundaries or country borders. An Azure geography is a discrete market typically containing two or more regions that preserves data residency and compliance boundaries.

Availability sets are a way for you to ensure your application remains online if a high-impact maintenance event is required, or if a hardware failure occurs. Availability sets are made up of Update domains (UD) and Fault domains (FD).

A fault domain is a logical group of underlying hardware that shares a common power🔌 source and network switch, similar to a rack within an on-premises data center.

An update domain is a logical group of underlying hardware that can undergo maintenance or be rebooted at the same time 🕰. In practice, it is a group of VMs that are set for planned maintenance events at the same time 🕰.

Paired regions support redundancy across two predefined geographic 🌎 regions, ensuring that even if an outage affects an entire Azure region, your solution is still available.

Additional advantages of regional pairs:

  • In the event of a wider Azure outage, one region is prioritized out of every pair to help reduce the time 🕰 to restore for applications.
  • Planned Azure updates are rolled out to paired regions one at a time 🕰 to minimize downtime 🕰 and risk of application outage.
  • Data continues to reside within the same geography as its pair (except for Brazil South) for tax and law enforcement jurisdiction purposes.

Availability Zones are physically separate locations within an Azure region, each with independent power🔌, cooling, and networking, that provide additional fault tolerance.

A resource group is a unit of management for your resources in Azure. A resource group is like a container that allows you to aggregate and manage all the resources required for your application in a single manageable unit.

Azure Resource Manager is the management layer through which resource groups and all the resources within them are created, configured, managed, and deleted.

“It is our choices, Harry, that show what we truly are, far more than our abilities.”― J.K. Rowling

Azure Services

Azure provides over 100 services that enable you to do everything from running 🏃 your existing applications on virtual machines to exploring 🔦new software paradigms such as intelligent bots and mixed reality. Below are some of the services available in Azure:

Azure compute 🖥 is an on-demand computing service for running cloud-based ☁️ applications.

There are four common techniques for performing compute 🖥 in Azure:

  1. Virtual machines – software emulations of physical computers 🖥 .
  2. Containers – virtualization environment for running 🏃 applications.
  3. Azure App Service – a (PaaS) offering in Azure that is designed to host enterprise-grade web-oriented applications.
  4. Serverless computing – cloud☁️ hosted execution environment that runs 🏃 your code but completely abstracts the underlying hosting environment.

Azure Virtual Machines (VMs) (IaaS) lets you create and use virtual machines in the cloud☁️.

Scaling VMs in Azure

An availability set is a logical grouping of two or more VMs that helps keep your application available during planned or unplanned maintenance 🧹. It consists of:

  • Up to three fault domains that each have a server rack with dedicated power🔌 and network resources
  • Five logical update domains which then can be increased to a maximum of 20
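To see why spreading VMs across fault and update domains keeps an app online, here's a toy sketch of the round-robin placement idea (my own illustration – the placement function and its output format are made up, but the default counts of 3 fault domains and 5 update domains match the notes above):

```python
# Hypothetical sketch: spread VMs in an availability set round-robin across
# fault domains (FD, shared power/switch) and update domains (UD, rebooted together).
def place_vms(vm_count: int, fault_domains: int = 3, update_domains: int = 5):
    """Return an (fd, ud) placement tuple for each VM, assigned round-robin."""
    return [(i % fault_domains, i % update_domains) for i in range(vm_count)]

placements = place_vms(7)
# With 7 VMs across 3 FDs and 5 UDs, no single rack failure or planned
# update-domain reboot ever takes down all of the VMs at once.
print(placements)
```

The takeaway: losing any one fault domain (or rebooting any one update domain) only touches a fraction of your VMs, which is exactly the availability guarantee the set provides.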

Azure Virtual Machine Scale⚖️ Sets let you create and manage a group of identical, load balanced VMs.

Azure Batch enables large-scale⚖️ job scheduling and compute 🖥 management with the ability to scale⚖️ to tens, hundreds, or thousands of VMs.

  • Starts a pool of compute 🖥 VMs for you
  • Installs applications and staging data
  • Runs 🏃jobs with as many tasks as you have
  • Identifies failures
  • Re-queues work
  • Scales ⚖️ down the pool as work completes

Containers in Azure

Azure supports Docker🐳 containers (a standardized container model), and there are several ways to manage containers in Azure.

  • Azure Container Instances (ACI) offers the fastest and simplest way to run🏃🏻 a container in Azure.
  • Azure Kubernetes Service (AKS)☸️ is a complete orchestration service for containers in distributed architectures with multiple containers.

Containers are often used to create solutions using a microservice architecture. This architecture is where you break solutions into smaller, independent pieces.

Azure App Service

Azure App Service (PaaS) enables you to build and host web apps, background jobs, mobile📱backends, and RESTful 😴 APIs in the programming language of your choice without managing infrastructure. It offers automatic scaling ⚖️ and high availability.

With App Service, you can host most common app service styles, including:

  1. Web apps- includes full support for hosting web apps using ASP.NET, ASP.NET Core, Java☕️ , Ruby 💎, Node.js, PHP, or Python 🐍 .
  2. API apps – build REST-based 😴 Web 🕸APIs using your choice of language and framework
  3. Web Jobs – allows you to run🏃🏻 a program (.exe, Java☕️ , PHP, Python 🐍 , or Node.js) or script (.cmd, .bat, PowerShell⚡️🐚, or Bash🥊) in the same context as a web 🕸 app, API app, or mobile 📱 app. They can be scheduled or run🏃🏻 by a trigger 🔫.
  4. Mobile📱app back-ends – quickly build a backend for iOS and Android apps.

Azure Networking

Azure Virtual Network enables many types of Azure resources such as Azure VMs to securely🔒 communicate with each other, the internet, and on-premises networks. Virtual networks can be segmented into one or more subnets. Subnets help you organize and secure🔒 your resources in discrete sections.

Azure Load Balancer is a load-balancing service that Microsoft provides and maintains for you. Load Balancer supports inbound and outbound scenarios, provides low latency and high throughput ↔️, and scales⚖️ up to millions of flows for all Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) applications.

VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic🚦 between an Azure Virtual Network and an on-premises location over the public internet. It provides a more secure 🔒 connection from on-premises to Azure over the internet.

Application Gateway is a load balancer designed for web applications. It uses Azure Load Balancer at the transport level (TCP) and applies sophisticated URL-based routing rules to support several advanced scenarios.

Here are some of the benefits of using Azure Application Gateway over a simple load balancer:

  • Cookie 🍪 affinity.
  • SSL termination
  • Web 🕸 application firewall🔥 (WAF)
  • URL rule-based routes.
  • Rewrite HTTP headers
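The "URL rule-based routes" bullet is the headline difference from a plain load balancer, so here's a tiny sketch of path-based routing (the route table, pool names, and matching logic are all hypothetical examples of the concept, not Application Gateway's actual configuration format):

```python
# Illustrative sketch of URL path-based routing, the layer-7 trick that
# Application Gateway adds on top of transport-level (TCP) load balancing.
# Prefixes and backend pool names below are made up for the example.
ROUTES = [
    ("/images/", "image-server-pool"),
    ("/video/", "video-server-pool"),
]
DEFAULT_POOL = "web-server-pool"

def route(path: str) -> str:
    """Pick a backend pool by the longest matching URL prefix rule."""
    for prefix, pool in sorted(ROUTES, key=lambda r: -len(r[0])):
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL

print(route("/images/logo.png"))
print(route("/checkout"))
```

A transport-level load balancer only sees IPs and ports; routing on the URL path like this is what lets you send images, video, and everything else to differently tuned server pools.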

Content delivery network (CDN) is a distributed network of servers that can efficiently deliver web 🕸 content to users. It is a way to get content to users in their local region to minimize latency.

Azure Storage🗄 is a service that you can use to store files📁, messages✉️, tables, and other types of information.

Disk storage🗄 provides disks for virtual machines, applications, and other services to access and use as they need, similar to how they would in on-premises scenarios. Disk storage🗄 allows data to be persistently stored and accessed from an attached virtual hard disk. 

Azure Blob storage🗄 is object storage🗄 solution for the cloud☁️. Blob storage🗄 is optimized for storing massive amounts of unstructured data, such as text or binary data.

Blob storage🗄 is ideal for:

  • Serving images or documents directly to a browser.
  • Storing files for distributed access.
  • Streaming video 📽 and audio 📻.
  • Storing data for backup and restore, disaster recovery, and archiving.
  • Storing data for analysis by an on-premises or Azure-hosted service.

Azure Files Storage🗄 enables you to set up highly available network file shares that can be accessed by using the standard Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read and write access. You can also read the files using the REST 😴 interface or the storage🗄 client libraries 📚.

File shares can be used for many common scenarios:

  • Many on-premises applications use file shares.
  • Configuration files 📂 can be stored on a file share and accessed from multiple VMs.
  • Diagnostic logs, metrics, and crash dumps are just three examples of data that can be written to a file 📂 share and processed or analyzed later.

Azure Archive Blob Storage🗄

Azure Archive Blob storage🗄 is designed to provide organizations with a low-cost means of delivering durable, highly available, secure cloud☁️ storage🗄 for rarely accessed data with flexible latency requirements. Azure Storage🗄 offers several access tiers, including:

  • Hot 🥵 – Optimized for storing data that is accessed frequently.
  • Cool 🥶 – Optimized for storing data that is infrequently accessed and stored for at least 30 days.
  • Archive 🗃 – Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours).
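The tier descriptions above boil down to a simple decision rule on access frequency and retention, which we can sketch like so (the function and thresholds are just my restatement of the 30-day and 180-day minimums noted above, not an Azure SDK call):

```python
# Hedged sketch: pick a blob access tier from the guidance above.
# 30-day minimum retention for Cool, 180-day minimum for Archive.
def suggest_tier(days_retained: int, accessed_frequently: bool) -> str:
    """Map access pattern + retention to the cheapest sensible tier."""
    if accessed_frequently:
        return "Hot"        # frequent access: optimize for access cost
    if days_retained >= 180:
        return "Archive"    # rarely accessed, flexible (hours) latency
    if days_retained >= 30:
        return "Cool"       # infrequent access, at least 30 days
    return "Hot"            # too short-lived to benefit from a cooler tier

print(suggest_tier(365, accessed_frequently=False))
```

The design intuition: cooler tiers trade cheaper storage🗄 for pricier (and in Archive's case, slower) access, so the access pattern, not the data size, drives the choice.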

Storage🗄 Replication

Azure regions and geographies🌎 become important when you consider the available storage🗄 replication options. Depending on the storage🗄 type, you have different replication options.

  1. Locally redundant storage🗄 (LRS)- Replicates your data 3x within the region in which you create your storage🗄 account.
  2. Zone redundant storage🗄 (ZRS) – Replicates your data 3x across two to three facilities, either within a single region or across two regions.
  3. Geo-redundant storage🗄 (GRS) – Replicates your data to secondary region that is hundreds of miles away from the primary region.
  4. Read-access Geo-Redundant storage🗄 (RA-GRS)- Replicates your data to a secondary region, as with GRS, but also then provides read only access to the data in the secondary location.
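The four replication options are easiest to compare side by side as data. Here's a little summary table in code (my own compilation – the six copies for GRS/RA-GRS reflect three copies kept in each of the two regions):

```python
# Sketch summarizing the storage replication options above.
# "copies" counts total replicas; GRS keeps 3 in each of two regions.
REPLICATION = {
    "LRS":    {"copies": 3, "secondary_region": False, "secondary_readable": False},
    "ZRS":    {"copies": 3, "secondary_region": False, "secondary_readable": False},
    "GRS":    {"copies": 6, "secondary_region": True,  "secondary_readable": False},
    "RA-GRS": {"copies": 6, "secondary_region": True,  "secondary_readable": True},
}

def survives_region_outage(option: str) -> bool:
    """Only options with a secondary region keep your data through a regional outage."""
    return REPLICATION[option]["secondary_region"]

for name, props in REPLICATION.items():
    print(name, props)
```

The one subtlety worth memorizing for the exam: GRS and RA-GRS both replicate to a secondary region, but only RA-GRS lets you *read* from it before a failover.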

Azure Database🛢 services are fully managed PaaS database🛢 services. They offer enterprise-grade performance with built-in high availability, scale⚖️ quickly, and support global 🌎 distribution.

Azure Cosmos DB 🪐is a globally 🌎distributed database🛢 service that enables you to elastically and independently scale⚖️ throughput and storage🗄 across any number of Azure’s geographic 🌎 regions. It supports schema-less data that lets you build highly responsive and Always On applications to support constantly changing data.

Azure SQL Database🛢 is a relational database🛢 as a service (DBaaS) based on the latest stable version of Microsoft SQL Server database🛢 engine. SQL Database🛢 is a high-performance, reliable, fully managed and secure database🛢 without needing to manage infrastructure. SQL database🛢 offers 4 service tiers to support lightweight to heavyweight 🏋️‍♀️ database🛢 loads:

  • Basic
  • Standard
  • Premium
  • Premium RS

Azure Database🛢 for MySQL is a relational database🛢 service powered by the MySQL community edition. It’s a fully managed database-as-a-service offering that can handle mission-critical workloads with predictable performance and dynamic scalability ⚖️.

Azure Database🛢 for PostgreSQL is a relational database🛢 service based on the open-source Postgres database🛢 engine. It’s a fully managed database-as-a-service offering that can handle mission-critical workloads with predictable performance, security🔒, high availability, and dynamic scalability ⚖️.

Azure Database🛢 Migration Service is a fully managed service designed to enable seamless migrations from multiple database🛢 sources to Azure data platforms with minimal downtime 🕰 (online migrations).

Dynamic Scalability enables your database🛢 to transparently respond to rapidly changing resource requirements and enables you to only pay for the resources that you need when you need them.

Elastic pools to maximize resource utilization

Elastic pools are designed to dial performance up or down on demand, especially when usage patterns are relatively predictable.

Azure Marketplace

Azure Marketplace is a service on Azure that helps connect end users with Microsoft partners, independent software vendors (ISVs), and start-ups that are offering their solutions and services, which are optimized to run🏃🏻on Azure. The solution catalog spans several industry categories:

  • Open-source container platforms
  • Virtual machine images
  • Databases🛢
  • Application build and deployment software
  • Developer tools🛠
  • Threat detection 🛡
  • Blockchain🔗

Internet of Things (IoT) 📲is the ability for devices to garner and then relay information for data analysis. There are many services that can assist and drive end-to-end solutions for IoT on Azure. Two of the core Azure IoT service types are:

  • IoT 📲 Central is a fully managed global IoT software as a service (SaaS) solution that makes it easy to connect, monitor, and manage your IoT assets at scale⚖️.
  • IoT 📲 Hub is a managed service hosted in the cloud☁️ that acts as a central message hub for bi-directional communication between your IoT application and the devices it manages. You can use Azure IoT Hub to build IoT solutions with reliable and secure communications between millions of IoT devices and a cloud☁️-hosted solution backend.
  • IoT📲 Edge is the technology from Microsoft for building Internet of Things (IoT) solutions that utilize Edge Compute. IoT Edge extends IoT Hub. Analyze device data locally instead of in the cloud to send less data to the cloud☁️, react to events quickly, and operate offline.

Big data and analytics – Data comes in all types of forms and formats. When we talk about big data, we’re referring to large volumes of data.

Azure Synapse Analytics is a limitless analytics service that brings together enterprise data warehousing and big data analytics. 

Azure HDInsight is a fully managed, open-source analytics service for enterprises. It is a cloud☁️ service that makes it easier, faster, and more cost-effective to process massive amounts of data. HDInsight supports open-source frameworks and lets you create cluster types such as:

  • Apache Spark ⭐️
  • Apache Hadoop 🐘
  • Apache Kafka
  • Apache HBase
  • Apache Storm🌧
  • Machine Learning Services

Microsoft Azure Databricks🧱 provides data science and data engineering teams with a fast, easy and collaborative Spark-based platform on Azure. It gives Azure users a single platform for Big Data processing and Machine Learning.

Artificial Intelligence (AI) 🧠is the creation of software that imitates human behaviors and capabilities. Key🔑 elements include:

  • Machine learning – This is often the foundation for an AI system, and is the way we “teach” a computer model to make predictions and draw conclusions from data.
  • Anomaly detection – The capability to automatically detect errors or unusual activity in a system.
  • Computer vision👓 – The capability of software to interpret the world visually through cameras, video, and images.
  • Natural language processing – The capability for a computer to interpret written or spoken language and respond in kind.
  • Conversational AI – The capability of a software “agent” to participate in a conversation.

Azure Machine Learning service is a cloud-based☁️ platform for creating, managing, and publishing machine learning models. Azure Machine Learning provides the following features and capabilities:

  • Automated machine learning
  • Azure Machine Learning designer
  • Data and compute🖥 management
  • Pipelines

Azure Machine Learning studio is a web 🕸 portal for data scientists and developers in Azure Machine Learning. The studio combines no-code and code-first experiences for an inclusive data science platform.

Serverless computing lets you run🏃🏻 application code without creating, configuring, or maintaining a server.  Azure has two implementations of serverless compute🖥:

  • Azure Functions, which can execute code in almost any modern language.
  • Azure Logic Apps, which are designed in a web-based designer and can execute logic triggered by Azure services without writing any code.
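The essence of serverless is that you write only the handler and the platform owns hosting, triggering, and scaling. Here's a plain-Python toy of that model – the decorator, registry, and dispatcher below are my own stand-ins for what Azure Functions does for you, not the real azure-functions programming model:

```python
# Toy sketch of the serverless model: register a handler for a trigger type,
# and let a dispatcher (standing in for the platform) invoke it on demand.
handlers = {}

def on_trigger(event_type):
    """Register a function to run when an event of this type arrives."""
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on_trigger("http")
def greet(payload):
    # This is the only code "you" write in a serverless world.
    return f"Hello, {payload.get('name', 'world')}!"

def dispatch(event_type, payload):
    # The real platform would also provision compute, scale out, retry,
    # and bill per execution - all invisible to the handler author.
    return handlers[event_type](payload)

print(dispatch("http", {"name": "Azure"}))
```

Azure Functions covers the code-first side of this idea; Logic Apps covers the same trigger-and-react pattern through a visual designer instead of code.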

Azure Event Grid allows you to easily build applications with event-based architectures. Event Grid has built-in support for events coming from Azure services, like storage🗄 blobs and resource groups.

“DevOps brings together people, processes, and technology, automating software delivery to provide continuous value to your users.”

Azure DevOps Services allows you to create build and release pipelines that provide CI/CD (continuous integration, delivery, and deployment) for your applications.

  • Azure DevOps – (SaaS) platform from Microsoft that provides an end-to-end DevOps toolchain⛓ for developing and deploying software. It also integrates with most leading tools🛠 on the market and is a great option for orchestrating a DevOps toolchain⛓.
  • Azure DevTest Labs🧪 – (PaaS) enables developers on teams to efficiently self-manage virtual machines (VMs). DevTest Labs🧪 creates labs 🧪 consisting of pre-configured bases or Azure Resource Manager templates.

Azure management options

You can configure and manage Azure using a broad range of tools🛠 and platforms. Tools🛠 that are commonly used for day-to-day management and interaction include:

  • Azure portal – interact with Azure via a Graphical User Interface (GUI).
  • Azure PowerShell⚡️🐚 – a cross-platform version of PowerShell⚡️🐚 that enables you to connect to your Azure subscription and manage resources.
  • Azure Command-Line Interface (CLI) – a cross-platform command-line program that connects to Azure and executes administrative commands.
  • Azure Cloud☁️ Shell 🐚 – an interactive, authenticated, browser-accessible shell🐚 for managing Azure resources.
  • Azure mobile 📱 app – access, manage, and monitor 🎛 all your Azure accounts and resources from your iOS 📱 or Android phone or tablet.
  • Azure SDKs for a range of languages and frameworks, and REST 😴 APIs – manage and control Azure resources programmatically.

Azure Advisor is a free service built into Azure that provides recommendations on high availability, security🔒, performance, operational excellence, and cost. Advisor analyzes your deployed services and looks for ways to improve your environment across each of these areas. 

“Don’t worry about a thing Cause every little thing gonna be alright”

Security🔒, Privacy🤫, Compliance

Azure Advisor Security🔒 Assistance

  • Azure Advisor Security🔒 Assistance integrates with Security🔒 Center.
  • Provide best practice security🔒 recommendations.
  • Azure Advisor Security🔒 Assistance helps prevent, detect, and respond to security🔒 threats.
  • You or your team should be using this tool every day to get the latest security🔒 recommendations.
  • Configuration of this tool 🔧, including the amount and type of information it gathers, is controlled through Security🔒 Center.

Securing Azure Virtual Networks

Network Security🔒 Groups (NSGs) – filter traffic🚦.

  • NSG has an inbound list and an outbound list.
  • Attached to subnets or network cards
  • Each NSG could be linked to multiple resources
  • NSGs are stateful.
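The key mechanic of NSGs is that rules are evaluated in priority order and the first match wins. Here's a toy sketch of that evaluation (the rules, ports, and priorities below are hypothetical examples, not a real NSG export – real rules also match on source/destination address and protocol):

```python
# Illustrative sketch of NSG inbound-rule evaluation: lower priority number
# wins, and the first matching rule decides Allow vs Deny.
INBOUND_RULES = [
    # (priority, port, action) - hypothetical example rules
    (100, 443, "Allow"),    # allow HTTPS
    (200, 22, "Allow"),     # allow SSH
    (4096, None, "Deny"),   # catch-all default: None matches any port
]

def evaluate_inbound(port: int) -> str:
    """Walk rules in ascending priority; first match decides the outcome."""
    for _priority, rule_port, action in sorted(INBOUND_RULES):
        if rule_port is None or rule_port == port:
            return action
    return "Deny"

print(evaluate_inbound(443), evaluate_inbound(8080))
```

Because NSGs are stateful, an allowed inbound flow gets its return traffic🚦 automatically – you never write a matching outbound rule for the replies.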

Application Security🔒 Groups – allow us to reference a group of resources

  • Can be used as either a source or destination of traffic🚦.
  • They do not replace network security🔒 groups.
  • They enhance them; network security🔒 groups are still required.

When working with application security🔒 groups, you:

  • create the application security🔒 group
  • link the application security🔒 group to a resource
  • use the application security🔒 group when working with network security🔒 groups.

Azure Firewall🔥 is a stateful firewall🔥 service and highly available solution provided by Azure. It’s a virtual appliance configured at the virtual network level. It protects access to your virtual networks. Features of Azure Firewall🔥 include:

  • Threat intelligence. 🧠
  • It supports both outbound and inbound NATing
  • Integrates with Azure Monitor 🎛
  • Network traffic🚦filtering rules
  • Unlimited in scale⚖️

Azure DDoS Protection provides DDoS mitigation for networks and applications. It is always on as a service.

  • Provides protection all the way up to the application layer.
  • Integrates Azure Monitor 🎛 for reporting services.
  • Features offered by Azure DDoS protection include:
    • Multi‑layered support, so protection from layer 4 attacks up to layer 7 attacks.
    • Attack analytics, so we can get reports on attacks in progress as well as post-attack reports.
    • Scale⚖️ and elasticity
    • Provides protection against unplanned costs💰.

Azure DDoS Protection comes in two service tiers: Basic and Standard.

Azure Web Application Firewall🔥 is designed to publish your applications to the outside world 🌎, whether they’re in Azure or on‑premises, and to protect the inbound traffic🚦 flowing to them.

Forced tunneling allows the control of flow of internet‑bound traffic🚦.

Security🔒 Scenarios

  • Control Internet traffic🚦 – User defined routes, Azure FW 🔥 or marketplace device
  • Azure hosted SQL Server – NSGs
  • VPN – Forced tunneling

Azure identity services

Identity services will help us in the authentication 🔐and authorization👮🏽‍♂️ of our users. Authentication works hand in hand with authorization.

Authentication🔐 – The act of proving who or what something is

Authorization 👮🏽‍♂️ – Granting the correct level of access to a resource or service

In Azure, authentication🔐 is provided by Azure AD and authorization is provided by role‑based access control.

Azure Active Directory is the cloud☁️-based identity service used in Azure. It is used to authenticate and authorize users. When we think Azure Active Directory, think single sign-on. Azure Active Directory is not equivalent to the Active Directory Domain Services used on-premises.

Active Directory Domain Services – the full Active Directory Domain Services that we’ve used for years on‑premises.

Azure AD Domain Services (PaaS) – introduced to make it easier to migrate legacy applications, as it supports both NTLM and Kerberos for authentication🔐. It also supports Group Policy, trust🤝 relationships, and several other domain service features.

Think of Azure AD Connect as a synchronization tool between your on-premises directory and Azure AD.

Multifactor authentication involves providing several pieces of information to prove who you are. Microsoft strongly recommends that we use multifactor authentication.

Azure Security🔒 Center reports our compliance status against certain standards.

It provides continuous assessment of existing and new services that we deploy, it also provides threat protection for both infrastructure and Platform as a Service services.

Azure Key 🔑 Vault is a service we can use to protect our secrets. Azure Key 🔑 Vault uses hardware security🔒 modules. These hardware security🔒 modules have been validated to support the latest Federal Information Processing Standards.

Azure Information Protection (AIP) is used to classify documents 📃 and emails 📧. AIP applies labels to documents 📃, and labeled documents 📃 can then be protected. There are two sides to Azure Information Protection: classification and protection.

These classifications come in the form of metadata that can be attached with a header or added as watermarks to the document you’re trying to protect.  Once classified, then the documents can be protected.

Azure uses Azure Rights Management to encrypt the documents using Rights Managements templates.

Azure Advanced Threat Protection 🛡 (Azure ATP) is a cloud☁️-based security🔒 solution that identifies, detects, and helps you investigate advanced threats, compromised identities, and malicious insider actions directed at your organization. Azure ATP 🛡 portal allows you to create your Azure ATP 🛡 instance, and view the data received from Azure ATP🛡 sensors.

Azure ATP 🛡sensors are installed directly on your domain controllers. The sensor monitors domain controller traffic🚦 without requiring a dedicated server or configuring port mirroring.

Azure ATP cloud☁️ service runs on Azure infrastructure and is currently deployed in the United States, Europe, and Asia. Azure ATP cloud☁️ service is connected to Microsoft’s intelligent security🔒 graph.

Azure Policy is a collection of rules. Each policy we create is assigned to a scope, such as an Azure subscription. When using Azure Policy, we create a policy definition, a policy assignment, and policy parameters.

When we create Azure policies, they can be used by themselves or with initiatives. Initiatives are collections of policies. To use initiatives, we create an initiative definition, an initiative assignment, and initiative parameters.
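To make the definition / assignment / parameters split concrete, here's a toy model of policies and an initiative (the policy names, rules, and parameter names below are invented for illustration – real Azure Policy definitions are JSON documents, not Python):

```python
# Hedged sketch of Azure Policy concepts: a policy definition is a named rule,
# parameters customize it at assignment time, and an initiative groups policies.
def make_policy(name, check):
    """A 'definition': a name plus a rule that judges a resource."""
    return {"name": name, "check": check}

allowed_locations = make_policy(
    "allowed-locations",
    lambda res, params: res["location"] in params["listOfAllowedLocations"],
)
require_tag = make_policy(
    "require-env-tag",
    lambda res, params: "env" in res.get("tags", {}),
)

# An initiative is just a collection of policies evaluated together.
initiative = [allowed_locations, require_tag]

def is_compliant(resource, params):
    """A resource is compliant only if every policy in the initiative passes."""
    return all(p["check"](resource, params) for p in initiative)

vm = {"location": "eastus", "tags": {"env": "prod"}}
print(is_compliant(vm, {"listOfAllowedLocations": ["eastus", "westus"]}))
```

The point of initiatives is exactly what the code shows: you assign one bundle to a scope and compliance means passing *all* of its member policies.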

Role-Based Access Control (RBAC) is used daily by your organization. It’s central to access control in Azure and is how Azure provides shared access to resources. RBAC is made up of several different components:

  • Roles are groups of permissions that are needed to perform different administrative actions in Azure.  We then assign role members before configuring a scope for the role.
  • Scope details where a role can be used. There are many built‑in roles, each giving different sets of permissions, but three built‑in roles are used more than any other.

The three most commonly used roles are:

  • Owner role – full control of the resource, including the ability to assign other users and groups access.
  • Contributor role – allows you to do everything except manage permissions.
  • Reader📖 role – read-only access.

Always follow the principle of least privilege.
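Putting roles, assignments, and scopes together, here's a little sketch of how an RBAC check works, including the fact that an assignment at a parent scope flows down to child scopes (the principals, scope paths, and permission names are all hypothetical; only the three built-in role names come from the notes above):

```python
# Illustrative RBAC sketch: a role is a set of permissions, an assignment
# ties a principal + role to a scope, and scopes inherit downward.
ROLES = {
    "Owner":       {"read", "write", "delete", "assign-access"},
    "Contributor": {"read", "write", "delete"},
    "Reader":      {"read"},
}

# (principal, role, scope) - scopes are '/'-delimited paths, names invented.
ASSIGNMENTS = [
    ("alice", "Owner", "/sub-1"),
    ("bob", "Reader", "/sub-1/rg-web"),
]

def is_allowed(principal, action, scope):
    """An assignment at a parent scope applies to every child scope under it."""
    return any(
        principal == p
        and action in ROLES[role]
        and (scope == s or scope.startswith(s + "/"))
        for p, role, s in ASSIGNMENTS
    )

print(is_allowed("alice", "delete", "/sub-1/rg-web"))
```

Note the asymmetry: alice's Owner assignment on the subscription reaches every resource group inside it, but bob's Reader assignment on one resource group grants nothing at the subscription level – which is the least-privilege pattern in action.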

Locks🔒 prevent deletion or editing of resource groups and their contents. The two types of locks🔒 are Read-only and Delete.

If you make a resource group read‑only, then all the resources in there can be accessed, but no new resources can be added to the resource group or removed from the resource group.

With a Delete lock, no resources can be deleted from the resource group, but new resources can still be added.
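The two lock behaviors above fit in a few lines of code, which makes the difference easy to remember (this is a toy model of the semantics, not an Azure SDK call; note that what these notes call Read-only and Delete appear in Azure's API as the lock levels ReadOnly and CanNotDelete):

```python
# Toy model of resource-lock semantics described above.
# lock is "ReadOnly", "Delete", or None (no lock).
def is_action_blocked(lock, action):
    """Return True if the given lock level blocks the given action."""
    if lock == "ReadOnly":
        # Only reads pass; creating, modifying, and deleting are all blocked.
        return action != "read"
    if lock == "Delete":
        # Only deletion is blocked; reads, writes, and creates still work.
        return action == "delete"
    return False

print(is_action_blocked("ReadOnly", "write"), is_action_blocked("Delete", "write"))
```

So a Delete lock is the lighter touch – teams keep working normally and only the destructive operation is fenced off.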

Azure Blueprints – an advanced way of orchestrating the deployment of resource templates and other artifacts.

  • Blueprints maintain a relationship between themselves and the resources that they deployed.
  • Blueprints include Azure policy and initiatives as well as artifacts such as roles.

To use Blueprints, we require a Blueprint definition, we Publish the Blueprint, and then Assign it to a scope.

Blueprint definition

  • Resource groups can be defined and created
  • Azure policy can be included to enforce compliance
  • Azure resource manager templates can be included to deploy resources

Roles can be assigned to resources that blueprints have created.

Azure Monitor lets you collect and analyze metric information both on-premises and in Azure.

Azure Service Health is a service we can use to see the health status of Azure services. It provides:

  • Personalized dashboards
  • Configurable alerts🔔
  • Guidance and Support

Service Trust🤝 Portal (STP) hosts the Compliance Manager service and is the Microsoft public site for publishing audit reports and other compliance-related information relevant to Microsoft’s cloud☁️ services. 

  • ISO
  • SOC
  • NIST
  • FedRAMP
  • GDPR

Microsoft Privacy🤫 Statement explains what personal data Microsoft processes, how Microsoft processes it, and for what purposes.

Microsoft Trust🤝 Center is a website resource containing information and details about how Microsoft implements and supports security🔒, Privacy🤫, compliance, and transparency in all Microsoft cloud☁️ products and services.

  • In-depth information about security🔒, Privacy🤫, compliance offerings, policies, features, and practices across Microsoft cloud☁️ products.
  • Recommended resources in the form of a curated list of the most applicable and widely used resources for each topic.
  • Information specific to key🔑 organizational roles, including business managers, tenant admins or data security🔒 teams, risk assessment and Privacy🤫 officers, and legal compliance teams.
  • Cross-company document search, which is coming soon and will enable existing cloud☁️ service customers to search the Service Trust🤝 Portal.
  • Direct guidance and support for when you can’t find what you’re looking for.

Compliance Manager is a workflow-based risk assessment dashboard within the Trust🤝 Portal that enables you to track, assign, and verify your organization’s regulatory compliance activities related to Microsoft professional services and Microsoft cloud☁️ services such as Microsoft 365, Dynamics 365, and Azure.

Compliance Manager provides the following features:

  • Detailed information provided by Microsoft to auditors and regulators (ISO 27001, ISO 27018, and NIST).
  • Compliance with regulations (HIPAA).
  • An organization’s self-assessment on compliance with these standards and regulations.
  • Enables you to assign, track, and record compliance and assessment-related activities
  • Provides a Compliance Score to help you track your progress and prioritize auditing
  • Provides a secure repository in which to upload and manage evidence and other artifacts
  • Produces richly detailed reports which can be provided to auditors and regulators

Special Azure regions exist for compliance and legal reasons. These regions are not generally available, and you have to apply to Microsoft if you want to use one of these special regions.

  • US 🇺🇸 Gov regions support US government agencies (US Gov Virginia and US Gov Iowa)
  • China 🇨🇳special regions. China East, China North. (Partnership with 21Vianet)
  • Germany 🇩🇪 regions. Germany Central and Germany Northeast. (compliant with German data laws)

There are two types of subscription boundaries that you can use, including:

Azure subscriptions provide you with authenticated and authorized access to Azure products and services and allows you to provision resources. An Azure subscription is a logical unit of Azure services that links to an Azure account, which is an identity in Azure Active Directory (Azure AD) or in a directory that an Azure AD trusts🤝.

An account can have one subscription or multiple subscriptions that have different billing models and to which you apply different access-management policies.

  • Billing boundary. This subscription type determines how an Azure account is billed for using Azure. 
  • Access control boundary. Azure will apply access-management policies at the subscription level, and you can create separate subscriptions to reflect different organizational structures. An example is that within a business, you have different departments to which you apply distinct Azure subscription policies. 

The organizing structure for resources in Azure has four levels: management groups, subscriptions, resource groups, and resources. These levels form a hierarchy: management groups contain subscriptions, subscriptions contain resource groups, and resource groups contain resources.

Management groups:

Allow you to apply governance conditions (access & policies) at a level of scope above subscriptions.

These are containers that help you manage access, policy, and compliance for multiple subscriptions. The resources and subscriptions assigned to a management group automatically inherit the conditions applied to the management group.
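That inheritance rule is worth seeing in action, so here's a sketch of conditions flowing down the management-group tree (the group names, subscription name, and policy names are all hypothetical):

```python
# Sketch of governance inheritance down the management-group hierarchy.
# Each scope points at its parent; a subscription inherits every ancestor's
# conditions. All names below are invented for the example.
PARENT = {
    "sub-finance": "mg-corp",
    "mg-corp": "mg-root",
    "mg-root": None,
}
POLICIES_AT = {
    "mg-root": ["require-tags"],
    "mg-corp": ["allowed-locations"],
}

def effective_policies(scope):
    """Walk up the tree, collecting the policies applied at each ancestor."""
    policies = []
    while scope is not None:
        policies = POLICIES_AT.get(scope, []) + policies
        scope = PARENT.get(scope)
    return policies

print(effective_policies("sub-finance"))
```

Even though nothing is assigned directly to the subscription, it still ends up governed by both ancestors' policies – that automatic inheritance is the whole point of management groups.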

Azure offers three main types of subscriptions:

  • A free account
  • Pay-As-You-Go
  • Member offers

There are three main customer types on which the available purchasing options for Azure products and services are contingent:

  • Enterprise
  • Web 🕸 direct
  • Cloud☁️ Solution Provider

Options for purchasing Azure products and services

  • Pay-As-You-Go Subscriptions
  • Open Licensing
  • Enterprise Agreements
  • Purchase Directly through a Cloud☁️ Solution Provider (CSP)
Azure free account

The Azure free account includes free access to popular Azure products for 12 months, a credit to spend for the first 30 days, and access to more than 25 products that are always free. 

Factors affecting costs💰

Resource type:

When you provision an Azure resource i.e. Compute🖥, Storage🗄, or Networking, Azure creates one or more meter instances for that resource. The meters track the resource’s usage, and each meter generates a usage record that is used to calculate your bill.

Services: Azure usage rates and billing periods can differ between Enterprise, Web🕸 Direct, and Cloud☁️ Solution Provider (CSP) customers.


Location: The Azure infrastructure is globally distributed, and usage costs💰 might vary between locations that offer Azure products, services, and resources.

All inbound or ingress data transfers to Azure data centers from on-premises environments are free. However, outbound data transfers (except in a few cases, such as backup recovery) incur charges.
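To make the billing asymmetry concrete, here is a minimal sketch in Python; the per-GB egress rate is a placeholder for illustration, not a real Azure price:

```python
def data_transfer_cost(ingress_gb: float, egress_gb: float,
                       egress_rate_per_gb: float = 0.087) -> float:
    """Estimate monthly data-transfer charges.

    Inbound (ingress) transfers to Azure are free, so only the outbound
    (egress) volume is billed. The default rate here is an illustrative
    placeholder, not an actual Azure price.
    """
    _ = ingress_gb  # ingress is free and does not affect the bill
    return egress_gb * egress_rate_per_gb

# 500 GB in, 100 GB out: only the 100 GB of egress is charged
print(f"${data_transfer_cost(500, 100):.2f}")
```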

Zones for billing purposes

A Zone is a geographical grouping of Azure Regions for billing purposes. The following Zones exist and include the sample regions listed below:

  • Zone 1 – West US, East US 🇺🇸, Canada West 🇨🇦, West Europe 🇪🇺, France Central 🇫🇷, and others
  • Zone 2 – Australia Central 🇦🇺, Japan West 🇯🇵, Central India 🇮🇳, Korea South 🇰🇷, and others
  • Zone 3 – Brazil South 🇧🇷
  • DE Zone 1 – Germany Central and Germany Northeast 🇩🇪
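For a quick programmatic lookup, the sample mapping above can be captured in a small table (illustrative only; it covers just the regions listed here, not the full set):

```python
# Sample region -> billing-zone mapping, taken from the list above.
# The real mapping covers many more regions.
BILLING_ZONES = {
    "West US": "Zone 1", "East US": "Zone 1", "Canada West": "Zone 1",
    "West Europe": "Zone 1", "France Central": "Zone 1",
    "Australia Central": "Zone 2", "Japan West": "Zone 2",
    "Central India": "Zone 2", "Korea South": "Zone 2",
    "Brazil South": "Zone 3",
    "Germany Central": "DE Zone 1", "Germany Northeast": "DE Zone 1",
}

def billing_zone(region: str) -> str:
    """Return the billing zone for a region, or 'unknown' if unmapped."""
    return BILLING_ZONES.get(region, "unknown")

print(billing_zone("Brazil South"))
```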

Pricing Calculator 🧮

The Pricing Calculator 🧮 is a tool that helps you estimate the cost of Azure products. It displays Azure products in categories, and you choose the Azure products you need and configure them according to your specific requirements. Azure then provides a detailed estimate of the costs💰 associated with your selections and configurations.
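Conceptually, the estimate the calculator produces is just quantity × hours × unit rate, summed across your selections. A minimal sketch in Python (all product names and rates below are made up for illustration):

```python
def monthly_estimate(selections) -> float:
    """Sum the cost of a set of configured products.

    selections: iterable of (product, quantity, hours, rate_per_hour).
    The rates used here are hypothetical, not real Azure prices.
    """
    return sum(qty * hours * rate for _, qty, hours, rate in selections)

estimate = monthly_estimate([
    ("Virtual Machine", 2, 730, 0.05),  # two VMs running all month
    ("Managed Disk",    2, 730, 0.01),  # one disk per VM
])
print(f"${estimate:.2f}")
```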

Total Cost of Ownership Calculator

The Total Cost of Ownership Calculator is a tool that you use to estimate cost savings you can realize by migrating to Azure. To use the TCO calculator, complete the three steps that the following sections explain.

  1. Define your workloads
  2. Adjust assumptions
  3. View the report

Best Practices for Minimizing Azure Costs💰

  • Shut down unused resources.
  • Right-size underused resources.
  • Configure autoscaling.
  • Reserved instances – pre-pay for resources at a discounted rate.
  • Azure Cost Management – provides a set of tools🛠 for monitoring🎛, allocating, and optimizing your Azure costs💰. The main features of the toolset include reporting, data enrichment, budgets, alerting 🔔, recommendations, and price.
  • Quotas – place limits around the resources and the amount of resources that you’re using.
  • Spending limits – as you approach the spending limit, you won’t be able to deploy more resources, so you won’t go over budget.
  • Azure Hybrid Benefit – migrate your workloads to Azure, the best cloud☁️ for Windows Server and SQL Server.
  • Tags🏷 – when deploying resources in Azure, you will want to tag your resources. You can use tags to identify resources for chargeback in your organization.
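The chargeback idea behind the Tags🏷 bullet can be sketched as a simple roll-up of usage records by tag. This is only an illustration: the record shape and the "department" tag key below are assumptions, not an Azure API.

```python
from collections import defaultdict

def chargeback_by_tag(usage_records, tag_key="department"):
    """Roll up costs by a tag value so each team can be billed its share."""
    totals = defaultdict(float)
    for record in usage_records:
        owner = record.get("tags", {}).get(tag_key, "untagged")
        totals[owner] += record["cost"]
    return dict(totals)

records = [
    {"resource": "vm-web-01", "cost": 42.0, "tags": {"department": "sales"}},
    {"resource": "sql-db-01", "cost": 99.5, "tags": {"department": "finance"}},
    {"resource": "vm-test",   "cost": 7.25, "tags": {}},  # no tag -> 'untagged'
]
print(chargeback_by_tag(records))
```

Untagged resources surface as their own bucket, which is itself useful: they are the spend nobody is accountable for.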

SLAs for Azure products or services

An SLA defines performance targets for an Azure product or service. The performance targets that an SLA defines are specific to each Azure product and service.

  • SLAs describe Microsoft’s commitment to providing Azure customers with certain performance standards.
  • There are SLAs for individual Azure products and services.
  • SLAs also specify what happens if a service or product fails to perform to a governing SLA’s specification.

Service Credits

SLAs also describe how Microsoft will respond if an Azure product or service fails to perform to its governing SLA’s specification.

Application SLA

Azure customers can use SLAs to evaluate how their Azure solutions meet their business requirements and the needs of their clients and users. By creating your own SLAs, you can set performance targets to suit your specific Azure application. When creating an Application SLA consider the following:

  • Identify workloads.
  • Plan for usage patterns.
  • Establish availability metrics.
  • Establish recovery metrics.
  • Implement resiliency strategies.
  • Build availability requirements into your design.
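For the availability-metrics step, the underlying arithmetic is simply uptime divided by total time. A quick sketch:

```python
def availability(total_minutes: float, downtime_minutes: float) -> float:
    """Fraction of the period during which the application was up."""
    return (total_minutes - downtime_minutes) / total_minutes

# "Four nines" (99.99%) over a 365-day year (525,600 minutes)
# allows only about 52.6 minutes of downtime.
print(f"{availability(525600, 52.56):.4%}")
```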

Composite SLA

When combining SLAs across different service offerings, the resultant SLA is called a Composite SLA. The resulting composite SLA can provide higher or lower uptime 🕰 values, depending on your application architecture.
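The composite math itself is straightforward: dependencies in series multiply their availabilities (composite uptime drops below any single SLA), while independent fallback paths multiply their failure probabilities (composite uptime rises). A hedged sketch with illustrative percentages:

```python
def serial_sla(*availabilities: float) -> float:
    """App depends on every service, so availabilities multiply:
    the composite uptime is lower than any single SLA."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel_sla(*availabilities: float) -> float:
    """Independent fallback paths: the app fails only if all paths fail,
    so the composite uptime is higher than any single SLA."""
    failure = 1.0
    for a in availabilities:
        failure *= (1.0 - a)
    return 1.0 - failure

# A web tier at 99.95% in series with a database at 99.99%:
print(f"{serial_sla(0.9995, 0.9999):.4%}")  # lower than 99.95%
```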

Service lifecycle in Azure

Microsoft offers previews of Azure services, features, and functionality for evaluation purposes. With Azure Previews, you can test pre-release features, products, services, software, and even regions.

There are two categories of preview that are available:

  • Private preview – An Azure feature is available to certain Azure customers for evaluation purposes.
  • Public preview – An Azure feature is available to all Azure customers for evaluation purposes.

General availability

Once a feature has been evaluated and tested successfully, it may be made available to all Azure customers. A feature released to all Azure customers is said to be in General Availability, or GA.

The Azure updates page provides the latest updates to Azure products, services, and features, as well as product roadmaps and announcements.

“On the road again…I just can’t wait to get on the road again”

Thanks –


Week of August 28th

Taking care of business (every way)

Happy National Bow Tie 🎀Day!

So last week, we dusted ourselves off, climbed 🧗‍♂️ back up on our horse 🐎, and reconvened with our continuous learning adventures. To get ourselves warmed🔥 up we decided to renew our knowledge of PowerShell ⚡️🐚, but we still had some unfinished business to attend to. If you have been following our travels 🚀, you might remember that earlier this month we hit a bump in the road. But just like the armless Black Knight 🛡🗡 from Monty Python and the Holy Grail, it was just a mere “flesh wound”. Ok, perhaps not. Actually, at the time 🕰️ it stung 🐝 pretty bad 😞, but after a two-week investigation by the good folks at AWS ☁️ it was deemed that my test on the 4th of August had indeed gotten corrupted. They offered their sincere apology for the inconvenience and, more importantly, provided me a voucher so I could reschedule the AWS Cloud Practitioner Exam at no cost. Well, this week I decided to re-take the exam 📝 and I am happy 😊 to report that I passed without issue.

To help prepare for the exam📝 I purchased the Udemy – AWS Certified Cloud Practitioner 500 Practice Exam Questions course. There were some similar questions taken from this course’s practice tests, but it seems AWS☁️ likes to keep their certified professionals honest, so there were quite a few questions I had never seen before.

So it’s highly recommended that, in addition to gaining practical experience working with AWS, you also review their courseware, fully understand core concepts like the AWS Well-Architected Framework, and have a good basic understanding of many of the AWS products and services.

Despite the obstacle earlier this month, it was a good experience preparing for the exam and ultimately passing and getting the certification. Now, we are even more well versed in the cloud ☁️ and have the cloud ☁️ street cred to back it up. 😎

“And I’m on my way… I don’t know where I’m going 🤷🏻‍♂️… I’m on my way… I’m taking my time … But I don’t know where?”

Below are some areas I am considering for my travels next week:

  • Azure Fundamentals Courseware

Thanks –


Week of August 21st

Pulling mussels from a shell

Happy Poet’s 📜 Day!

“I’ve been too long, I’m glad to be back”

So, after a 2-week hiatus we are glad to be back and all powered⚡️up and ready to get back on our continuous journey ✈️in learning. In our most recent learning, we have spent time🕰️ in the clouds ☁️☁️ (in particular AWS) so we thought we might circle⭕ back to an oldie but goodie with PowerShell⚡️🐚.  Why PowerShell⚡️🐚 you might ask? That seems so yesterday. However, it’s still very much today. Microsoft has been very much committed to PowerShell⚡️🐚, especially in regards to managing Windows. Every application from Exchange to SQL to Active Directory is built with PowerShell⚡️🐚 as a basis for administration. In addition, Cloud ☁️ solutions such as Office 365 and Azure integrate with PowerShell⚡️🐚.

When we first started with PowerShell⚡️🐚 circa 2012 with version 3, and then of course with version 4 which debuted just a year later, it was all the rage. Just a quick review for those not familiar with PowerShell⚡️🐚: PowerShell⚡️🐚 is a management framework that combines a command-line shell 🐚 and scripting language, allowing IT professionals🧑‍💻 the ability to perform real-world🌎 tasks such as gathering remote computer 🖥️ information, automating repetitive tasks, and troubleshooting system problems. Initially, PowerShell⚡️🐚 was built for Windows only, running🏃‍♂️ on top of the Windows .NET Framework, but in 2016 Microsoft made the strategic decision to pivot and offer PowerShell⚡️🐚 as Open Source through PowerShell Core, running🏃‍♂️ on top of (of course) .NET Core, an open source framework that runs 🏃 on Linux🐧, macOS🍎, and Windows.

So where shall we begin? In the past, we have found when reviewing a familiar technology solution that the best place to start is with the basics, and we found the perfect course on Pluralsight through Michael Bender’s PowerShell: Getting Started. Michael’s excellent course provides a strong fundamental knowledge of PowerShell⚡️🐚 with a personal guarantee that upon completion of the course you will be able to “hit the ground running 🏃 with usable PowerShell⚡️🐚 ninja🤺 skills.”

In the course introduction, Michael succinctly discusses why PowerShell⚡️🐚 is more than a scripting language and a command line interface; it’s an execution engine that provides the ability for you to interface with your environment using a variety of tools 🔧. First, he touches on the traditional Windows PowerShell console that allows us to run many different commands i.e. Get-Service. Then he dives into PowerShell Core, which pretty much lets you do much of the same things but with the flexibility to run across multiple platforms. Then he discusses the tools🔧 we can use to develop PowerShell⚡️🐚 scripts, with the legacy Windows PowerShell⚡️🐚 ISE (Integrated Scripting Environment) and the newer Visual Studio Code. Next, he discusses Windows Admin Center, which uses PowerShell⚡️🐚 on the back-end to do all of the administration on Windows Server and also integrates with Microsoft Azure☁️.

Now after a great introduction, we were ready to dive right into the PowerShell⚡️🐚 Basics. At a very basic level, PowerShell⚡️🐚 follows a Verb-Noun syntax. In other words, you do something to something. For example, if you wanted to get information about the verbs that are available for use in PowerShell⚡️🐚, you would use a cmdlet called “Get-Verb”. Another important piece of PowerShell⚡️🐚 commands is parameters. Parameters are used to pass information into PowerShell⚡️🐚 commands so that the command has the information it needs to do its work.

Ex: Get-Service -ComputerName <ComputerName>

Next, Michael introduces us to three of the most important commands that everyone should know which of course is Get‑Command, Get‑Help, and Get‑Member. These commands are so significant because they allow you to find the answers you need to work within PowerShell⚡️🐚. So, there is no need to Google or Bing to find the right syntax. Everything is right there in the console. Also, it’s really the best way to learn how to do something within PowerShell⚡️🐚.

  • Get-Command is used to search for the installed commands within PowerShell⚡️🐚.
  • Get-Help allows us to see how we use a specific command that we found, displaying its help information.
  • Get-Member allows us to get the properties and methods of the objects that are the output of a specific command.

PowerShell⚡️🐚 is an object-oriented language. Unlike other scripting languages that rely on syntax to get things done, PowerShell⚡️🐚 uses objects as its output; objects have properties that make them up, and they have methods that let you perform actions against them. The best way to visualize objects in PowerShell⚡️🐚 is to view the data in a table format. PowerShell⚡️🐚 places all of the data from commands into a collection or a table to store that data.

Next, we took a deep dive into the pipeline in PowerShell⚡️🐚, which is really where the real power of PowerShell⚡️🐚 comes in. PowerShell⚡️🐚 treats all data as objects that can be used to their full potential. Pipelining in PowerShell⚡️🐚 is a way to send the output of one command into a second command, which allows you to do more complex work like sorting or filtering, and then do something with that output.

Ex: Get-Service | where {$_.Status -eq "Stopped"}

Then the course further discusses one of the most common use cases for PowerShell⚡️🐚, which is gathering system information. In PowerShell⚡️🐚 there are several options. The first option is Windows Management Instrumentation (WMI). WMI is based on the Common Information Model (CIM), an open standard that defines how managed elements in an IT environment are represented as a common set of objects, as well as the relationships between them. The second option is the CIM cmdlets, originally introduced in PowerShell⚡️🐚 v3 to work with WMI. The CIM cmdlets are now the de facto standard, and the WMI cmdlets are considered legacy, as there has been no recent development or enhancement to them. When these cmdlets are called, information is accessed through namespaces in the WMI repository; the namespace identifies the specific classes that we’re looking for. An example of a class is Win32_Processor, which contains information like the device ID and name of our processors. This information is stored as properties that are accessible from the objects output by the command.

Now that we are comfortable using PowerShell⚡️🐚 on local systems, Michael discusses how we can use PowerShell⚡️🐚 to connect to remote systems, which is generally how PowerShell⚡️🐚 is used, as much of our troubleshooting and problem resolution happens remotely. PowerShell⚡️🐚 has a few options for remoting: WMI and Windows Remote Management (WinRM). WinRM is an implementation of the WS-Management protocol (WS-Man), which allows users to run PowerShell⚡️🐚 commands on remote computers 🖥️. Both options are available in Windows PowerShell and PowerShell Core. PowerShell⚡️🐚 remoting allows you to send commands to a remote machine on your network, and WinRM is responsible for maintaining the connections between the two systems. The computers 🖥️ you want to connect to must have a listener set up so that WinRM knows to listen for the PowerShell⚡️🐚 connections. By default, Windows clients don’t have PowerShell⚡️🐚 remoting turned on, so it must be enabled if you plan to use PowerShell⚡️🐚 for remote administration.

Enable-PSRemoting needs to be run on the target machine.

Next, you need to give the user access to PowerShell⚡️🐚 remoting; you can then open an interactive session on the remote machine with Enter-PSSession.

This allows you to modify the session permissions, which will allow the remote connection to happen. For PowerShell Core, you need to set up the remote system to be an endpoint for PowerShell⚡️🐚 connections. This is done by running a script located in the PS Home Directory.

To put the finishing touches on this excellent introductory course Michael walks us through on how to Build a User Inventory Script with PowerShell⚡️🐚

By default, PowerShell⚡️🐚’s execution policy is set to Restricted, which means that scripts will not run.

The Set-ExecutionPolicy cmdlet enables you to determine which Windows PowerShell⚡️🐚 scripts will be allowed to run on your computer.

Windows PowerShell⚡️🐚 has four different execution policies:

  • Restricted – No scripts can be run. Windows PowerShell⚡️🐚 can be used only in interactive mode.
  • AllSigned – Only scripts signed by a trusted publisher can be run.
  • RemoteSigned – Downloaded scripts must be signed by a trusted publisher before they can be run.
  • Unrestricted – No restrictions; all scripts can be run.

After completing this amazing course, we are now armed with foundational knowledge of how to use PowerShell⚡️🐚. So, next we decided to continue our review of PowerShell⚡️🐚 and how we can utilize it with SQL Server to perform common DBA and developer tasks. Once again, we turned to Pluralsight through SQL Server MVP Robert C. Cain’s fantastic course on PowerShell and SQL Server.

Robert’s course provides the fundamentals of using PowerShell⚡️🐚 to manage SQL Servers. The course is designed as a six-part series covering basic DBA tasks using just PowerShell⚡️🐚; an introduction to SQL Management Objects (SMO) and the SQL Provider; basic DBA tasks using both SMO and the SQL Provider; development using the SQL Provider; development using SMO; and real-world🌎 examples, SQLPS, and PowerShell⚡️🐚 jobs.

After providing us with a simple PowerShell⚡️🐚 email function, “Send Easy Mail”, that can be used with SQL Server notifications or alerts, we take a look at how to manage the SQL Server services, finding the status of the services through the “Get-Service” cmdlet. When using the Get-Service cmdlet you need to pass in several parameters.

Get-Service -ComputerName $server | where {($_.name -like "MSSQL$*" -or $_.name -like "MSSQLSERVER" -or $_.name -like "SQL Server (*")}

Next, we took a look at counters. As Database professionals many of us keep a customized list of counters to monitor our database environment. PowerShell⚡️🐚 makes it extremely easy to get counter information using the Get-Counter cmdlet.

A great example of how we can use counters with PowerShell⚡️🐚 can be found on MSSQL Tips. WMI, which we learned earlier stands for Windows Management Instrumentation, can be used along with PowerShell⚡️🐚 for a common task like monitoring how much disk space you have available on the system.

Another great example can be found on MSSQL Tips

PowerShell⚡️🐚 also makes it very easy to get information out of the event log using Get-EventLog. 

See MSSQL Tips

Next, Robert introduces us to the SQL Provider and SQL Management Objects (SMO). In order to use SMO effectively you will need to have a basic understanding of objects, most of which we received during Michael’s course.

Just to briefly review: objects are based on classes, and classes are generally referred to as the blueprint for creating an object. In PowerShell⚡️🐚, you would instantiate an instance of that class. Instantiation refers to the process of creating a single object from the class, and an instance in object terms refers to that particular object which you just instantiated. As we know, objects have properties; these properties hold information about the current state of an object. Objects can also have methods, and a method is similar to how a stored procedure works; in other words, it will only work inside of its current database. Whenever you organize a group of objects together it’s known as a collection. For example, a database object would have a collection of table objects. Another important concept to grasp is that objects can contain other objects and collections.

SMO is a set of .NET libraries specifically designed for working with SQL Server. These libraries, stored in .NET DLLs, are loaded with classes. From these classes you can create objects that parallel things in SQL Server.

The SQL Provider is a PSSnapin, which is a compiled module written for PowerShell⚡️🐚. The SQL Provider PSSnapin is a collection of cmdlets specific to SQL Server.

By default, neither the SQL Provider nor the SMO libraries are loaded into PowerShell⚡️🐚, so they need to be added manually by executing the following syntax:

Install-Module -Name SqlServer

or, if an older version of the module is already installed:

Install-Module -Name SqlServer -AllowClobber

As a basic example, we can use the SQL Provider to connect to a SQL Server and find what instances are on that server. We can use the Get-ChildItem cmdlet to return the instance information and then put that into an array. Then we can utilize foreach so that, for each object in the instances that are returned, we can do our work against each child name contained in our instances array. SMO is a .NET library that Microsoft designed for working with SQL Server. At the very core of SMO, everything is a server object, and each server object has a corresponding server associated with it. Each server has things on it such as backup devices, credentials, databases, etc. In addition to objects, SMO has collections, which are basically arrays of objects. The SMO model considers each instance to be an individual server. Because SMO starts at the instance level, there’s no really good way using SMO to go out and query all of your machines and find out what instances you’re running🏃‍♂️. However, we can use ADO.NET to get all the servers we want to manage.

After covering the SQL Provider and SMO, Robert then shows us, through his custom scripts, how we can use the SQL Provider and SMO to perform tasks a DBA might do. Next, we looked at the relationship between SMO and the SQL Provider.

After performing DBA tasks with SQL Provider and SMO, we learned how to develop against SQL Server using just the SQL Provider. See Scripts

Next, we were provided the challenge of doing the same tasks in the previous module like creating databases but this time🕰️ using SMO. See Scripts

Finally, as we wrapped up Robert’s excellent series, we were provided a real-world🌎 example of using PowerShell⚡️🐚. Our mission was to find tables that contain text, ntext, and image data types. See Scripts

As an encore, the course covers SQLPS. SQLPS is a special customized shell just for working with SQL Server. It was initially written between PowerShell⚡️🐚 v1 and PowerShell⚡️🐚 v2, so it’s a bit of an amalgamation. In most cases SQLPS should be avoided, as it is not the full PowerShell⚡️🐚 environment, and it should especially be avoided for SQL Server Agent scheduled jobs, as you should call the full shell when executing SQL Server scheduled jobs. We really enjoyed our time🕰️ re-learning PowerShell⚡️🐚 and we were glad to be back on our learning journey.

“The Magical Mystery Tour Is coming to take you away… Coming to take you away”

Thanks –


Week of August 7th

Whisper🤫 words  of wisdom, let it be

Happy International Beer 🍺 day!

Let me apologize, as this is a different kind of blog post. If you have been following my journey ✈️ in adventures of learning exciting and interesting technologies, you’ll find this update more of a life lesson than a technology one.

If you recall, last week we decided to expand our knowledge of AWS ☁️ by focusing on AWS ☁️ core principles and the Well-Architected Framework. Just as soon as we clicked submit on our summary of last week’s learning, I decided to register to take the AWS Cloud ☁️ Practitioner exam through the remote testing option, as this would be the coup de grâce of our awesome learning with AWS ☁️.

In an effort to fully prepare myself for the exam 📝, I dedicated the early part of the week to cramming for the test before moving on to my next journey.

Commencing countdown, engines on…

I was all set to take the exam 📝 on Tuesday, and as test 📝 time⏰ approached I decided I should log in early and go through all the pre-checks to ensure that my testing environment was in compliance.

After about 15 minutes of getting everything ready to go, I buckled💺up and began my course, but unfortunately Mother Nature 🌬 had other ideas…

Just as soon as I was taken to the disclaimer splash screen, I was, literally within a microsecond, kicked out of the test and unable to get back in.

At the same time outside, tropical storm ⛈ Isaias was kicking🦵up. Violent winds 🌪 were throwing trees 🌳 and large objects all around, and millions of people in the Northeastern United States were losing power 🔌 (in some cases for days). I lost connectivity 💻 with the test 📝, and there turned out to be no option to get back in. 😞

As soon as I realized I couldn’t get back into the test, I began a 4.5-hour saga, including numerous phone 📞 calls to the testing facility and a 2-hour shared session with Tech Support 👩‍💻, in order to figure out what happened and to try to get an opportunity to take this test 📝.

But to no avail… It just wasn’t in the cards 🃏🃏; I was simply SOL. My only recourse was to open a case with AWS ☁️ (which I did) and re-register for the exam 📝 at a later date. At the time ⏰, I was left devastated 😩. I had been so regimented with my learning journey and was not prepared for such an obstacle to impede my progress.

And here’s to my momma.. Had to listen to all this drama

So, feeling sort of hopeless, I did what any self-respecting individual would do. I called my mom to cry 😭. OK, maybe not cry 😢 but certainly complain. My mom, like the wise sage she is, put things in perspective and set me straight to focus on the other priorities that I needed to attend to later in the week.

You’ve got to..You’ve got to move! Come on!

So, with a little bit of hesitation, I was able to put this momentary lapse of adversity aside and reminded myself of the famous Epictetus quote: “The chief task in life is simply this: to identify and separate matters so that I can say clearly to myself which are externals not under my control, and which have to do with the choices I actually control.”

Next week, I plan to take a brief respite, but I shall return in 2 weeks recharged 🔋, recalibrated, and ready to go!

“Why, why, why, why, why do you say goodbye, goodbye – wow

Oh no – You say goodbye, and I say hello”

Thanks –


Week of July 31st

A new day will dawn 🌄… For those who stand long”

Happy National Avocado🥑 Day!

Our journey 🚞 this week takes us back to our humble beginnings. Well, sort of… If you recall we began our magical✨ mystery tour of learnings back in March with AWS EC2. Then the last 2 weeks we re-routed course back to AWS, concentrating on AWS’s data services. So, we thought it might make sense to take one step 👣 back in order to take two steps 👣 👣 forward by focusing this week’s enlightenments on the fundamentals of the AWS Cloud☁️ and its Key🔑 concepts, core services, security🔐, architecture, pricing 💶, and support.

Fortunately, we knew the right place to load up on such knowledge. Where, of course, you ask? But to no other than the fine folks at AWS Training through their free online course AWS Cloud☁️ Practitioner Essentials (Second Edition). AWS spared no expense💰 by putting together an all-star🌟 lineup of AWS-ers led by Kirsten Dupart, an old familiar friend, Blaine Sundrud, Mike Blackmer, Raf Lopes, Heiwad Osman, Kent Rademacher, Russell Sayers, Seph Robinson, Andy Cummings, Ian Falconer, Wilson Santana, Wes Gruver, Tipu Qureshi, and Alex Buell.

The objective of the course was to highlight the following main areas:

  • AWS Cloud☁️ global infrastructure 
  • AWS Cloud☁️ architectural principles 
  • AWS Cloud☁️ value proposition
  • AWS Cloud☁️ overview of security🔐 and compliance
  • AWS Cloud☁️ overview of billing, account management, and pricing 💶 models

The course begins with an introduction to the concept of “Cloud☁️ Computing”, which of course is the on-demand availability of computing system resources, data Storage 🗄 and computing power⚡️, without direct active management by the user. Instead of having to design and build traditional data centers, Cloud☁️ computing enables us to access a data center and all of its resources via the Internet, or Cloud☁️.

Amazon Web Services (AWS) is a secure🔐 Cloud☁️ services platform, offering compute power⚡️, database Storage 🗄, content delivery and other functionality to help businesses scale⚖️ up or scale⚖️ down based on actual needs. There are 5 main areas that the AWS Cloud☁️ emphasizes: Scalability ⚖️, Agility, Elasticity🧘‍♂️, Reliability, and Security🔐.

  • Scalability ⚖️ is the ability to resize your resources as necessary. AWS Cloud☁️ provides a scalable computing platform designed for high availability and dependability through tools and solutions.
  • Agility is the ability to increase speed🏃🏻, offering ease of experimentation and promoting innovation. AWS empowers the user to seamlessly spin up servers in minutes, shut down servers when not needed, or allow unused resources to be allocated for other purposes.
  • Elasticity🧘‍♂️ is the ability to scale ⚖️ computing resources up or down easily. AWS makes it easy to quickly deploy new applications, scale⚖️ up as workloads increase, and shut down resources that are no longer required.
  • Reliability is the ability of a system to recover from infrastructure or service failure. AWS provides reliability by hosting your instances and resources across multiple locations utilizing regions, availability zones, and edge locations.
  • Security 🔐 is the ability to retain complete control and ownership over your data and meet regional compliance and data residency requirements. AWS provides highly secure 🔐 data centers, continuous monitoring 🎛, and industry-leading capabilities across facilities, networks, software, and business processes.

There are three methods in which you can access AWS resources:

  1. AWS management console which provides a graphical user interface (GUI) to access AWS services
  2. AWS command line interface (CLI) which allows you to control AWS services from the command line
  3. AWS Software Development kits (SDK) enables you to access AWS using a variety of programming languages

Next the course provides us with some brief vignettes covering the AWS core services, AWS Global Infrastructure, and AWS Integrated Services.

AWS Core Services

Elastic Compute Cloud☁️ (EC2) is a web service that provides secure, resizable compute capacity in the Cloud☁️. EC2 instances are “pay as you go”: you only pay for the capacity you use, and you have the ability to have different Storage 🗄 requirements.

Key🔑 components of EC2 are:

  • Amazon machine image (AMI) which is an OS image used to build an instance
  • Instance Type refers to hardware capabilities (CPU, Memory)
  • Network – Public and Private IP addresses
  • Storage 🗄 – SSDs, Provisioned IOPs SSD, Magnetic disks
  • Keypairs🔑 (Secure) allow you to connect to instances after they are launched
  • Tags 🏷 provide a friendly name to identify resources in an AWS.

Elastic block store (EBS) provides persistent block level Storage🗄 volumes for your EC2 instances

  • EBS volumes are designed to be durable and available, automatically replicated across multiple servers running in the availability zone.
  • EBS Volumes must be in the same AZ as the instances they are attached to
  • EBS gives you the ability to create point-in-time⏰ snapshots of your volumes and allows AWS to create new volumes from a snapshot at any time⏰.
  • EBS volumes have the ability to increase capacity and change to different types
  • Multiple EBS volumes can be attached to an instance

Simple Storage🗄 Service (S3) is a fully managed Storage🗄 service that provides a simple API for storing and retrieving data. S3 uses buckets🗑 to store data. S3 buckets🗑 are associated with a particular AWS region. When you store data in a bucket🗑, it’s redundantly stored across multiple AWS availability zones within that region.

  • S3 is serverless, so you do not need to manage any infrastructure.
  • S3 supports objects as large as several terabytes.
  • S3 also provides low latency access to data over HTTP or HTTPS

AWS Global Infrastructure

The AWS Global Infrastructure consists of Regions, Availability Zones, and Edge locations, providing highly available, fault-tolerant, and scalable infrastructure.

AWS Regions are separate geographical 🌎 areas, each hosting two or more availability zones, and are the organizing level for AWS services.

Availability zones are a collection of data centers within a specific region. Each availability zone is physically isolated from one another but connected together through low latency, high throughput, and highly redundant networking. AWS recommends provisioning your data across multiple availability zones.

As of April 2020, AWS spans 70 Availability Zones within 22 Regions around the globe 🌎.

Edge locations are where end users access services located at AWS. They are located in most of the major cities 🏙 around the world 🌎 and are specifically used by Amazon CloudFront🌩, a content delivery network (CDN) that distributes content to end users to reduce latency.

Amazon Virtual Private Cloud⛅️ (VPC) is a private network within the AWS Cloud☁️ that adheres to networking best practices as well as regulatory and organizational requirements. VPC is an AWS foundational service that integrates with many of the AWS services. VPC leverages the AWS global infrastructure of regions and availability zones, so it easily takes advantage of the high availability provided by AWS. VPCs exist within regions and can span multiple availability zones. You can create multiple subnets in a VPC, although fewer subnets are recommended to limit the complexity of the network.
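
Planning those subnets is mostly CIDR arithmetic, which you can sketch locally with Python’s standard ipaddress module before touching the console. The CIDR block and AZ names below are illustrative examples:

```python
import ipaddress

# Sketch: carving a VPC CIDR block into per-AZ subnets, as you would plan
# before creating subnets in the console or CLI. Values are examples.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Two /24 subnets, e.g. one per Availability Zone for high availability:
subnets = list(vpc.subnets(new_prefix=24))[:2]
for az, subnet in zip(["us-east-1a", "us-east-1b"], subnets):
    print(az, subnet)
# us-east-1a 10.0.0.0/24
# us-east-1b 10.0.1.0/24
```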

Security🔐 Groups act as a virtual 🔥 firewall for your virtual servers, controlling incoming and outgoing traffic🚦. They are another method to filter traffic🚦 to your instances, giving you control over which traffic🚦 to allow or deny. To determine who has access to your instances, you configure Security🔐 group rules.
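
A security group rule is just a small structure. Here is a sketch of the IpPermissions shape that boto3’s authorize_security_group_ingress call takes; the CIDR range is a placeholder:

```python
# Sketch: the IpPermissions structure used by boto3's
# authorize_security_group_ingress. The CIDR below is a placeholder.
def ingress_rule(port, cidr, protocol="tcp"):
    """Build one ingress rule allowing `protocol` traffic on `port` from `cidr`."""
    return {
        "IpProtocol": protocol,
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": cidr, "Description": f"allow {protocol}/{port}"}],
    }

# Allow HTTPS from anywhere; anything not explicitly allowed is denied.
rule = ingress_rule(443, "0.0.0.0/0")
print(rule["FromPort"], rule["IpRanges"][0]["CidrIp"])  # 443 0.0.0.0/0
```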

AWS CloudFormation🌨 (“Infrastructure as Code”) allows you to use programming languages, JSON files, or simple text files to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
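
As a taste of what “infrastructure as code” looks like, here is a minimal CloudFormation template modelled as a Python dict and serialized to JSON; the AMI ID and instance type are placeholders you would parameterize in practice:

```python
import json

# A minimal CloudFormation template as a Python dict. The AMI ID and
# instance type are illustrative placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Single EC2 instance (illustrative sketch)",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",   # placeholder AMI
                "InstanceType": "t2.micro",
            },
        }
    },
}

# This JSON is what you would hand to CloudFormation to create the stack.
print(json.dumps(template, indent=2)[:40])
```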

“Every AWS service that you learned about is another tool 🛠to build solutions. The more tools you can bring to the table, the more powerful 💪 you become.” -Andy Cummings

AWS Integrated Services

AWS offers a variety of services from A-Z, so it would be impossible to review every service in a six-hour course. Below are some of the services highlighted in the course:

Elastic Load Balancing 🏋🏻‍♀️ automatically distributes incoming application traffic🚦 across multiple targets like EC2 instances, containers, IP addresses, and Lambda functions. There are three kinds of load balancers: the Network Load Balancer🏋🏻‍♂️, the Classic Load Balancer 🏋🏻‍♂️ (ELB), and the Application Load Balancer🏋🏻‍♂️ (ALB).

  1. Network Load Balancer 🏋🏻‍♀️ is best suited for load balancing of TCP, UDP and TLS traffic🚦 where extreme performance is required.
  2. Classic Load Balancer 🏋🏻‍♀️ provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and connection level.
  3. Application Load Balancer 🏋🏻‍♀️ offers most of the features provided by the Classic Load Balancer 🏋🏻‍♀️ and adds some important features and enhancements. It’s best suited for load balancing of HTTP and HTTPS traffic🚦.

AWS Autoscaling⚖️ monitors 🎛 your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Autoscaling⚖️ removes the guesswork of how many EC2 instances you need at a point in time⏰ to meet your workload requirements. There are three core components: a launch configuration (“what to deploy”), an Autoscaling⚖️ group (“where to deploy”), and an Autoscaling⚖️ policy (“when to deploy”).

  • Dynamic auto scaling⚖️ is a common configuration used with AWS CloudWatch⌚️alarms based on performance information from your EC2 instance or load balancer🏋🏻‍♂️.
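
The intuition behind target-tracking scaling can be sketched in a few lines: keep a metric (say, average CPU) near a target by adjusting capacity proportionally. This mirrors the documented behavior only roughly; the numbers are hypothetical:

```python
import math

# Rough sketch of the idea behind target-tracking autoscaling: scale
# capacity proportionally so the average metric lands near the target.
# This approximates the documented behavior; values are hypothetical.
def desired_capacity(current, metric_value, target):
    """New instance count that would bring metric_value back toward target."""
    return max(1, math.ceil(current * metric_value / target))

# 4 instances at 80% average CPU with a 50% target -> scale out:
print(desired_capacity(4, 80, 50))  # 7
```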

Autoscaling⚖️ and Elastic Load Balancing 🏋🏻‍♀️ automatically scale⚖️ up or down based on demand. Backed by Amazon’s massive infrastructure, you have access to compute and Storage 🗄 resources whenever you need them.

Amazon Route 53👮 is a global, highly available DNS service that allows you to easily register and resolve DNS names, providing a reliable and highly scalable ⚖️ managed way to route 👮‍♀️ end users to Internet applications. Route 53 offers multiple ways to route👮‍♀️ your traffic🚦, enabling you to optimize latency for your applications and users.

Amazon Relational Database Service (RDS🛢) is a database as a service (DBaaS) that makes provisioning, operating, and scaling⚖️ either up or out seamless. In addition, RDS🛢 makes other time-consuming administrative tasks, such as patching and backups, a thing of the past. Amazon RDS🛢 provides high availability and durability through the use of Multi-AZ deployments. It also lets you run your database instances in an Amazon VPC, which gives you control and security🔐.

AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales⚖️ automatically to thousands of requests per second.
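
A Lambda function is just a handler that receives an event and a context. Here is a minimal sketch in the shape AWS invokes; running it “locally” simply means calling the function (the event fields are made up for illustration):

```python
import json

# A minimal Lambda-style handler: AWS invokes handler(event, context).
# The event shape here is a made-up example.
def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, invoking the function is just a call:
resp = handler({"name": "AWS"}, None)
print(resp["statusCode"], json.loads(resp["body"])["message"])  # 200 Hello, AWS!
```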

AWS Elastic Beanstalk🌱 is an easy-to-use service for deploying and scaling web applications and services developed with Java☕️ , NET, PHP, Node.js, Python🐍, Ruby💎, Go, and Docker🐳 on familiar servers such as Apache, Nginx, Passenger, and IIS. Elastic Beanstalk🌱 employs Auto Scaling⚖️ and Elastic Load Balancing🏋🏻‍♂️ to scale⚖️ and balance workloads. It provides tools🛠 in the form of Amazon CloudWatch⌚️ to monitor 🎛 the health❤️ of deployed applications. It also provides capacity provisioning due to its reliance on AWS S3 and EC2.

Amazon Simple Notification Service (SNS) 📨 is a highly available, durable, secure🔐, fully managed pub/sub messaging service like Google’s pub/sub that enables you to decouple microservices, distributed systems, and serverless applications. Additionally, SNS📨 can be used to fan out notifications to end users using mobile push, SMS, and email✉️.

Amazon CloudWatch⌚️ is a monitoring 🎛 service that allows you to monitor 🎛 AWS resources and the applications you run 🏃🏻 on them in real time. Amazon CloudWatch⌚️ features include collecting and tracking metrics like CPU utilization, data transfer, and disk I/O and utilization. Some of the components that make up Amazon CloudWatch⌚️ include metrics, alarms, events, logs, and dashboards.
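
The essence of a CloudWatch alarm is that it fires only after the threshold is breached for some number of consecutive evaluation periods. A toy sketch of that logic (the CPU samples are hypothetical):

```python
# Toy sketch of CloudWatch alarm semantics: an alarm transitions to ALARM
# only when the threshold is breached for N consecutive evaluation periods.
def alarm_state(datapoints, threshold, periods):
    """Return 'ALARM' if the last `periods` datapoints all exceed `threshold`."""
    breached = [d > threshold for d in datapoints[-periods:]]
    return "ALARM" if len(breached) == periods and all(breached) else "OK"

cpu = [40, 55, 82, 91, 88]       # hypothetical CPU utilization samples (%)
print(alarm_state(cpu, 80, 3))   # ALARM: the last 3 samples exceed 80
```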

Amazon CloudFront🌩 uses a global 🌎 network of more than 80 locations and more than 10 regional edge caches for content delivery (CDN). It’s integrated with AWS services such as AWS web application🔥 firewall, Certificate Manager, Route 53, and S3, as well as other AWS services.

AWS CloudFormation🌨 is a fully managed service which acts as an engine 🚂 to automate the provisioning of AWS resources. CloudFormation🌨 reads template files which specify the resources to deploy. The provisioned resources are known as a stack. Stacks 📚 can be created, updated, or deleted through CloudFormation🌨.

AWS Architecture

When one refers to AWS architecture, one need look no further than the AWS Well-Architected Framework. The AWS Well-Architected Framework originally began as a single whitepaper but expanded into more of a doctrine focused on key🔑 concepts, design principles, and architectural best practices for designing secure, high-performing, resilient, and efficient infrastructure and running 🏃🏻 workloads in the AWS Cloud☁️.

The AWS Well-Architected Framework is based on five pillars: operational excellence, security🔐, reliability, performance efficiency, and cost optimization.

  1. Operational excellence focuses on running 🏃🏻 and monitoring 🎛 systems and continually improving processes and procedures.
  2. Security🔐 centers on protecting information and systems.
  3. Reliability highlights that a workload performs consistently as intended and can quickly recover from failure.
  4. Performance efficiency concentrates on efficiently using computing resources.
  5. Cost optimization emphasizes cost avoidance.

Reference Architecture – Fault Tolerance and High Availability

Both Fault Tolerance and High Availability are cornerstones of infrastructure design strategies to keep critical applications and data up and running 🏃🏻.

Fault Tolerance refers to the ability of a system (computer, network, Cloud☁️ cluster, etc.) to continue operating without interruption when one or more of its components fail.

High availability refers to systems that are durable, continuously functioning, and accessible, with downtime minimized as much as possible without the need for human intervention.

AWS provides services and infrastructure to build reliable, fault-tolerant, and highly available systems in the Cloud☁️.

Some AWS services that can assist in providing high availability:

  • Elastic load balancers🏋🏻‍♀️
  • Elastic IP addresses
  • Route 53 👮‍♀️
  • Auto scaling⚖️
  • CloudWatch⌚️

Some AWS services that provide fault tolerant tools are:

  • SQS
  • S3 🗄
  • RDS🛢

Amazon Web Services offers Cloud☁️ web 🕸 hosting solutions that provide businesses, non-profits, and governmental organizations with low-cost ways to deliver their websites and web 🕸 applications.


When it comes to security🔐, AWS doesn’t take things lightly. So much so that when you are a newbie to AWS it can be quite challenging just to connect to your Cloud☁️ environment. The AWS global infrastructure is built to the highest standards to ensure privacy🤫 and data security🔐, with strong 💪 safeguards in place to protect customer privacy 🤫. AWS continuously improves and innovates on security🔐, incorporating customer feedback and changing requirements. AWS provides security🔐-specific tools🛠 and features across network security🔐, configuration management, access control, and data security🔐, along with monitoring 🎛 and logging tools🛠 that give you full visibility👀 into what is happening in your environment. AWS also offers several security🔐 capabilities and services, like built-in firewalls🔥 to increase privacy🤫 and control network access, encryption of data both in transit and at rest in the Cloud☁️, and capabilities to define, enforce, and manage user👤 access policies across AWS services.

The shared👫 responsibility model

AWS believes Security🔐 and Compliance are a shared👫 responsibility between AWS and the customer. The shared👫 responsibility model alleviates the customer’s operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security🔐 of the facilities in which the service operates. The customer assumes responsibility for and management of the guest operating system (including updates and security🔐 patches), other associated application software, as well as the configuration of the AWS-provided security🔐 group🔥 firewalls.

Security🔐 “of” the Cloud☁️ vs Security🔐 “in” the Cloud☁️

  • “Security🔐 of the Cloud☁️” – AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud☁️.
  • “Security🔐 in the Cloud☁️” – Customer responsibility is determined by the AWS Cloud☁️ services that a customer selects.

Inherited Controls are controls that the customer fully inherits from AWS.

Shared👫Controls are controls which apply to both the infrastructure layer and the customer layers, but in completely separate contexts or perspectives. Examples include:

  • Patch Management – AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.
  • Configuration Management – AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.
  • Awareness & Training – AWS trains AWS employees, but a customer must train their own employees.

Customer Specific – Controls which are solely the responsibility of the customer based on the application they are deploying within AWS services. An example is service and communications protection or zone security🔐, which may require the customer to route or zone data within specific security🔐 environments.

AWS Cloud☁️ Security🔐 Infrastructure and services

AWS Identity and Access Management (IAM) is one of the core secure services (at no additional charge) to enforce security🔐 across all AWS Service offerings. IAM provides Authentication, Authorization, User Management and Central User Repository. In IAM, you can create and manage users, groups, and roles to either allow or deny access to specific AWS resources.

  • Users👤 are permanent named operators (either human or machine).
  • Groups 👥 are collections of users👤 (whose members can be either human or machine).
  • Roles are an authentication method, with the key🔑 difference being that the credentials associated with a role are temporary.

As for permissions, they are enforced by a separate object known as a policy document 📜.

A policy document📜 is a JSON document📜 that is attached either directly to a user👤 or a group👥, or directly to a role.
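
Here is an illustrative policy document, built as a Python dict and serializable to the JSON IAM expects. The bucket name is a placeholder:

```python
import json

# An illustrative IAM policy document: standard Version plus a Statement
# list. The bucket name is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

# Serialized, this is what you would attach to a user, group, or role.
print(policy["Statement"][0]["Effect"])  # Allow
```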

AWS CloudTrail 🌨 is a service that enables governance, compliance, operational auditing, and risk auditing. CloudTrail🌨 records all successful and declined authentication and authorization attempts.

Amazon Inspector 🕵️‍♂️ is an automated security🔐 and vulnerability assessment service that assesses applications for exposure, vulnerabilities, and deviations from best practices. Amazon Inspector 🕵️‍♂️ produces a detailed list of security🔐 findings prioritized by level of severity, and helps you:

  • Identify Application Security🔐 Issues
  • Integrate Security🔐 into DevOps
  • Increase Development Agility
  • Leverage AWS Security🔐 Expertise
  • Streamline Security🔐 Compliance
  • Enforce Security🔐 Standards

AWS Shield🛡 is a managed Distributed Denial of Service (DDoS) protection service. There are two tiers of AWS Shield🛡: Standard and Advanced.

  • AWS Shield🛡Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your web site or applications.
  • AWS Shield🛡Advanced provides additional detection against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS web 🕸application 🔥 firewall (WAF).

Pricing and Support

AWS offers a wide range of Cloud☁️ computing services. For each service, you pay for exactly the amount of resources you actually use.

  • Pay as you go.
  • Pay less when you reserve.
  • Pay even less per unit by using more.
  • Pay even less as your AWS Cloud☁️ grows.

There are three fundamental characteristics you pay for with AWS:

  1. Compute 💻
  2. Storage 🗄
  3. Data transfer out ➡️

Although you are charged for data transfer out, there is no charge for inbound data transfer or for data transfer between other services within the same region.
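
That pricing rule is easy to model. Here is a toy cost sketch of the “pay only for data transfer out” idea; the per-GB rate is a made-up placeholder, not a real AWS price:

```python
# Toy cost model for "pay for data transfer out only". The per-GB rate
# below is a made-up placeholder, NOT a real AWS price.
RATE_OUT_PER_GB = 0.09   # hypothetical $/GB

def monthly_transfer_cost(gb_in, gb_out, gb_inter_service_same_region):
    """Only outbound transfer is billed; inbound and same-region
    service-to-service transfer are free."""
    return round(gb_out * RATE_OUT_PER_GB, 2)

print(monthly_transfer_cost(gb_in=500, gb_out=100, gb_inter_service_same_region=50))  # 9.0
```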

AWS Trusted Advisor is an online tool🔧 that helps you optimize your AWS infrastructure, increase reliability, security🔐, and performance, reduce your overall costs, and monitor 🎛 your service limits. AWS Trusted Advisor enforces AWS best practices in five categories:

  1. Cost optimization
  2. Performance
  3. Security🔐
  4. Fault tolerance
  5. Service limits

AWS offers 4 levels of Support

  1. Basic support plan (Included with all AWS Services)
  2. Developer support plan
  3. Business support plan
  4. Enterprise support plan

Obviously, there was a lot to digest😋 but now we have a great overall understanding of the AWS Cloud☁️ concepts, some of the AWS services, security🔐, architecture, pricing 💶, and support and feel confident to continue our journey in the AWS Cloud☁️. 😊

“This is the end, Beautiful friend.. This is the end, My only friend, the end”?

Below are some areas I am considering for my travels next week:

  • Neo4J and Cypher 
  • More with Google Cloud Path
  • ONTAP Cluster Fundamentals
  • Data Visualization Tools (i.e. Looker)
  • Additional ETL Solutions (Stitch, FiveTran) 
  • Process and Transforming data/Explore data through ML (i.e. Databricks)

Thanks –


Week of July 24th

“And you may ask yourself 🤔, well, how did I get here?” 😲

Happy Opening⚾️ Day!

Last week, we hit a milestone of sorts with our 20th submission🎖 since we started our journey way back in March.😊 To commemorate the occasion, we made a return back to AWS by gleefully 😊 exploring their data ecosystem. Of course, trying to cover all the data services that are made available in AWS in such a short duration 🕰 would be a daunting task.

So last week, we concentrated our travels on three of their main offerings in the Relational Database, NoSQL, and Data warehouse realms: RDS🛢, DynamoDB🧨, and Redshift🎆. We felt rather content 🤗 and enlightened💡 with AWS’s Relational Database and Data warehouse offerings, but we were still feeling a little less satisfied 🤔 with NoSQL, as we had really just scratched the surface on what AWS had to offer.

To recap, we had explored 🔦 DynamoDB🧨, AWS’s multi-model NoSQL service, which offers support for key-value🔑 workloads and their proprietary document📜 database. But we were still curious to learn more about a document📜 database in AWS that offers MongoDB🍃 support as well. In addition, we wanted an answer to the hottest🔥 trend in “Not Just SQL” solutions: the Graph📈 Database.

Well, of course, being the Cloud☁️ provider that offers every Cloud☁️-native service from A-Z, AWS delivered with many great options. So we began our voyage heading straight over to DocumentDB📜, AWS’s fully managed database service with MongoDB🍃 compatibility. As with all AWS services, DocumentDB📜 was designed from the ground up to give the most optimal performance, scalability⚖️, and availability. DocumentDB📜, like the Cosmos DB🪐 MongoDB🍃 API, makes it easy to set up, operate, and scale MongoDB-compatible databases. In other words, no code changes are required, and all the same drivers can be utilized by existing legacy MongoDB🍃 applications.

In addition, DocumentDB📜 solves the friction and complications that arise when an application tries to map JSON to a relational model. DocumentDB📜 solves this problem by making JSON documents📜 a first-class object of the database. Data is stored in the form of documents📜, and these documents📜 are stored in collections. Each document📜 can have a unique combination and nesting of fields or key-value🔑 pairs, making querying the database faster⚡️, indexing more flexible, and repetition easier.

Similar to other AWS data offerings, the core unit that makes up DocumentDB📜 is the cluster. A cluster consists of one or more instances and a cluster storage volume that manages the data for those instances. All writes📝 are done through the primary instance. All instances (primary and replicas) support read 📖 operations. The cluster storage volume stores six copies of your data across three different Availability Zones. AWS easily allows you to create or modify clusters. When you modify a cluster, AWS is really just spinning up a new cluster behind the curtain and then migrating the data, taking what is otherwise a complex task and making it seamless.

As a prerequisite, you first must create and configure a virtual private cloud☁️ (VPC) to place DocumentDB📜 in. You can leverage an existing one or you can create a dedicated one just for DocumentDB📜. Next, you need to configure security🔒 groups for your VPC. Security🔒 groups are what control who has access to your document📜 databases. As for credentials🔐 and entitlements, DocumentDB📜 is managed through AWS Identity and Access Management (IAM). By default, DocumentDB📜 clusters accept secure connections using Transport Layer Security (TLS), so all traffic in transit is encrypted. Amazon DocumentDB📜 uses the 256-bit Advanced Encryption Standard (AES-256) to encrypt your data, or allows you to encrypt your clusters using keys🔑 you manage through AWS Key🔑 Management Service (AWS KMS), so data at rest is always encrypted.

“Such wonderful things surround you…What more is you lookin’ for?”

Lately, we have been really digging Graph📈 Databases. We had our first visit with Graph📈 Databases when we were exposed to the Graph📈 API through Cosmos DB🪐 earlier this month and then furthered our curiosity through Neo4J. Well, now armed with a plethora of knowledge in the Graph📈 Database space we wanted to see what AWS had to offer and once again they did not disappoint.😊

First, let me start by writing that it’s a little difficult to compare AWS Neptune🔱 to Neo4J, although Nous Populi from Leapgraph does an admirable job. Obviously, both are graph📈 databases, but architecturally there are some major differences in their graph storage models and query languages. Neo4J uses Cypher, while Neptune🔱 uses Apache TinkerPop’s Gremlin👹 (the same as Cosmos DB🪐) as well as SPARQL. Where Neptune🔱 really shines☀️ is that it’s not just another graph database but a great service offering within the AWS portfolio. So, it leverages all the great bells🔔 and whistles like fast⚡️ performance, scalability⚖️, high availability, and durability, as well as being a fully managed service with everything we have become accustomed to, like hardware provisioning, software patching, backup, recovery, failure detection, and repair. Neptune🔱 is optimized for storing billions of relationships and querying the graph with millisecond latency.

Neptune🔱 uses database instances. The primary database instance supports both read📖 and write📝 operations and performs all the data modifications to the cluster. Neptune🔱 also uses replicas, which connect to the same cloud-native☁️ storage service as the primary database instance but support only read-only operations. There can be up to 15 of these replicas across multiple AZs. In addition, Neptune🔱 supports encryption at rest.

As a prerequisite, you first must create and configure a virtual private cloud☁️ (VPC) to place Neptune🔱 in. You can leverage an existing one or you can create a dedicated one just for Neptune🔱. Next, you need to configure security🔒 groups for your VPC. Security🔒 groups are what control who has access to your Neptune🔱 instances. As for credentials🔐 and entitlements, Neptune🔱 is managed through AWS Identity and Access Management (IAM). Your data at rest in Neptune🔱 is encrypted using the industry-standard AES-256 encryption algorithm on the server that hosts your Neptune🔱 instance. Keys🔑 can also be used, which are managed through AWS Key🔑 Management Service (AWS KMS).

“Life moves pretty fast⚡️. If you don’t stop 🛑 and look 👀 around once in a while, you could miss it.”

So now feeling pretty good 😊 about NoSQL on AWS, where do we go now? 

Well, as mentioned, we have been learning so much over the last 5 months that it could be very easy to forget some things, especially with limited storage capacity. So we decided to take a pause for the rest of the week and go back and review all that we have learned by re-reading all our previous blog posts, as well as engaging in some Google Data Engineering Quests🛡 to help reinforce our previous learnings.

Currently, the fine folks at qwiklabs.com are offering anyone who wants to learn Google Cloud ☁️ skills an opportunity for unlimited access for 30 days. So, with an offer too good to be true, as well as an opportunity to add some flair to our LinkedIn profile (and who doesn’t like flair?), we dove right in head first!

“Where do we go? Oh, where do we go now? Now, now, now, now, now, now, now”

Below are some topics I am considering for my travels next week:

  • Neo4J and Cypher 
  • More with Google Cloud Path
  • ONTAP Cluster Fundamentals
  • Data Visualization Tools (i.e. Looker)
  • Additional ETL Solutions (Stitch, FiveTran) 
  • Process and Transforming data/Explore data through ML (i.e. Databricks)



Week of July 17th

“Any time of year… You can find it here”

Happy World🌎 Emoji 😊😜 Day! 

The last few weeks we have been enjoying our time in Microsoft’s Cloud☁️ Data Ecosystem, and it was just last month that we found ourselves hanging out with the GCP☁️ gang and their awesome data offerings. All seemed well and good😊, except that we had been missing out on an excursion to the one cloud☁️ provider where it all began, literally and figuratively.

Back when we first began our journey on a cold 🥶 and rainy☔️ day in March just as Covid-19🦠 occupied Wall Street 🏦 and the rest of the planet 🌎 we started at the one place that disrupted how infrastructure and operations would be implemented and supported going forward.

That’s right: Amazon Web Services, more endearingly known to humanity as AWS. AWS was launched back in 2006 by its parent company that offers everything from A to Z.

AWS, like its parent company, has a similar mantra in the cloud ☁️ computing world, as they offer 200+ Cloud☁️ services. So how the heck, with so many months passed, had we not been back since? The question is quite perplexing. But like they say, “all Clouds☁️☁️ lead to AWS.” So, here we are, back in the AWS groove 🎶 and eager 😆 to explore 🔦 the wondrous world🌎 of AWS data solutions. Guiding us through this vast journey would be Richard Seroter (who ironically recently joined the team at Google). In 2015, Richard authored an amazing Pluralsight course covering Amazon RDS🛢, Amazon DynamoDB 🧨, and Amazon Redshift 🎆. It was like getting 3 courses for the price of 1! 😊

Although the course is several years old, for the most part it has stood the test of time ⏰ by providing strong foundational knowledge of Amazon’s relational, NoSQL, and data warehousing solutions. But unfortunately, technology years are kind of like dog🐕 years. So obviously, there have been many innovations across all three of these incredible solutions, including UI enhancements, architectural improvements, and additional features, making these great AWS offerings even more awesome!

So, for a grand finale to our marvelous week of learning, helping us fill in the gaps on some of these major enhancements as well as offering some additional insights was the team from AWS Training and Certification, which includes the talented fashionista Michelle Metzger, the always effervescent and insightful Blaine Sundrud, and, on demos, the man with a quirky naming convention for database objects, the always witty Stephen Cole.

Back in our Amazon Web Services Databases in Depth course, and in an effort to make our journey that much more captivating, Richard provided us with a nifty mobile sports 🏀 ⚾️ 🏈 app solution written in Node.js, which leverages the Amazon data offerings covered in the course as components of an end-to-end solution. As the solution was written several years back, it did require updating some deprecated libraries📚 and some code changes in order to make it work, which made our learning that much more fulfilling. So, after a great introduction from Richard, where he compares and contrasts RDS🛢, DynamoDB🧨, and Redshift🎆, we began our journey with Amazon’s Relational Database Service (RDS🛢). RDS🛢 is a database as a service (DBaaS) that makes provisioning, operating, and scaling⚖️ either up or out seamless. In addition, RDS🛢 makes other time-consuming administrative tasks, such as patching and backups, a thing of the past. Amazon RDS🛢 provides high availability and durability through the use of Multi-AZ deployments. In other words, AWS creates multiple instances of the databases in different Availability Zones, making recovery from infrastructure failure automatic and almost transparent to the application. Of course, as with all AWS offerings, there is always a heavy emphasis on security🔐, which is certainly reassuring when you are putting your mission-critical data in their hands 🤲, but it can also be a bit challenging at first to get things up and running when you are simply trying to connect from your home computer 💻 to the AWS infrastructure.

As a prerequisite, you first must create and configure a virtual private cloud☁️ (VPC) to put your RDS🛢 instance(s) in. You can leverage an existing one or you can create a dedicated one for your RDS🛢 instance(s).

It is required that your VPC have at least two subnets in order to support the Availability Zones for high availability. If direct internet access is needed, you will need to add an internet gateway to your VPC.

Next, you need to configure security🔒 groups for your VPC. Security🔒 groups are what control who has access to the RDS🛢 instance. RDS🛢 leverages three types of security groups (database, VPC, and EC2). As for credentials🔐 and entitlements in RDS🛢, they are managed through AWS Identity and Access Management (IAM). At the time of the release of Richard’s course, Amazon Aurora was new to the game and was not covered in depth. In addition, at that time only MySQL, Postgres, Oracle, MS SQL Server, and the aforementioned Aurora were supported. AWS has since added support for MariaDB to its relational database service portfolio.

Fortunately, our friends from the AWS Training and Certification group gave us the insights that we would require on Amazon’s innovation behind their relational database built for the cloud☁️ better known as Aurora.

So, with six familiar database engines (licensing costs may apply) to choose from, you have quite a few options. Another key🔑 decision is determining the resources you want your database to have. RDS🛢 offers multiple options optimized for memory, performance, or I/O.

I would be remiss if we didn’t briefly touch on Amazon Aurora. As mentioned, it’s one of Amazon’s six database options with RDS🛢. Aurora is fully managed by RDS🛢, so it leverages the same great infrastructure and has all the same bells 🔔 and whistles. Aurora comes in two different flavors🍦: MySQL and PostgreSQL. Although I didn’t benchmark Aurora performance in my evaluation, AWS claims that Aurora is 5x faster than standard MySQL databases. However, it’s probably more like 3x faster. But the bottom line is that it is faster and more cost-effective for MySQL or PostgreSQL databases that require optimal performance, availability, and reliability. The secret sauce behind Aurora is that it automatically maintains six copies of your data spanned across 3 AZs (and supports up to 15 read replicas), making data highly available and ultimately providing laser⚡️-fast performance for your database instances.

Please note: There is an option that allows a single Aurora database to span multiple AWS Regions 🌎 for an additional cost

In addition, Aurora uses an innovative and significantly faster log structured distributed storage layer than other RDS🛢offerings.

“Welcome my son, welcome to the machine🤖”

Next on our plate 🍽 was to take a deep dive into Amazon’s fast and flexible NoSQL database service, a.k.a. DynamoDB🧨. DynamoDB🧨, like Cosmos DB🪐, is a multi-model NoSQL solution.

DynamoDB🧨 combines the best of two ACID-compliant non-relational models in a key-value🔑 and document📜 database. It is a proprietary engine, so you can’t just take your MongoDB🍃 database and convert it to DynamoDB🧨. But don’t worry: if you’re looking to move your MongoDB🍃 workloads to Amazon, AWS offers Amazon DocumentDB📜 (with MongoDB🍃 compatibility), but that’s for a later discussion 😉

As for DynamoDB🧨, it delivers blazing⚡️ single-digit millisecond guaranteed performance at any scale⚖️. It’s a fully managed, multi-Region, multi-master database with built-in security🔐, backup and restore options, and in-memory caching for internet-scale applications. DynamoDB🧨 automatically scales⚖️ up and down to adjust capacity and maintain the performance of your systems. Availability and fault tolerance are built in, eliminating the need to architect your applications for these capabilities. An important concept to grasp while working with DynamoDB🧨 is that the databases are comprised of tables, items, and attributes. Again, there have been some major architectural design changes to DynamoDB🧨 since Richard’s course was released. Not to go into too many details, as it’s kind of irrelevant now, but at the time⏰ the course was released, DynamoDB🧨 offered the option to use either a hash primary key🔑 or a hash-and-range primary key🔑 to organize or partition data, and, as you would imagine, choosing the right combination was rather confusing. Ultimately, AWS scrapped this architectural design, and the good folks at the AWS Training and Certification group were so kind as to offer clarity here as well 😊

Today, DynamoDB🧨 uses partition keys🔑 to find each item in the database, similar to Cosmos DB🪐. Data is distributed on physical storage nodes. DynamoDB🧨 uses the partition key🔑 to determine which of the nodes the item is located on. It’s very important to choose the right partition key🔑 to avoid the dreaded hot🔥 partitions. Again, “As a rule of thumb 👍, an ideal Partition key🔑 should have a wide range of values, so your data is evenly spread across logical partitions.” Also, in DynamoDB🧨 items can have an optional sort key🔑 to store related attributes in a sorted order.

One major difference from Cosmos DB🪐 is that DynamoDB🧨 utilizes a primary key🔑 on each table. If there is no sort key🔑, the primary and partition keys🔑 are the same. If there is a sort key🔑, the primary key🔑 is a combination of the partition and sort key🔑, which is called a composite primary key🔑.
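
To make these key🔑 concepts concrete, here is a minimal pure-Python sketch (hypothetical table and item names, not the actual DynamoDB API) of how a partition key🔑 hashes to a storage node and how a composite primary key🔑 (partition key🔑 + sort key🔑) identifies and orders items:

```python
import hashlib

NUM_NODES = 4  # hypothetical number of physical storage nodes

def node_for(partition_key: str) -> int:
    """Hash the partition key to pick a storage node (stand-in for DynamoDB's internal hashing)."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % NUM_NODES

# Composite primary key = (partition key, sort key)
table = {}

def put_item(pk: str, sk: str, attrs: dict):
    table[(pk, sk)] = attrs  # the pair uniquely identifies the item

def query(pk: str):
    """Fetch all items sharing a partition key, ordered by sort key."""
    return sorted((sk, v) for (p, sk), v in table.items() if p == pk)

put_item("customer#42", "2020-07-01", {"total": 10})
put_item("customer#42", "2020-07-03", {"total": 25})
put_item("customer#7", "2020-07-02", {"total": 5})

# The same partition key always lands on the same node, and a query
# within one partition returns its items in sort-key order.
print(node_for("customer#42"))
print([sk for sk, _ in query("customer#42")])  # → ['2020-07-01', '2020-07-03']
```

Notice why a skewed partition key🔑 hurts: if every item used the same `pk`, every request would hash to the same node, i.e. a hot 🔥 partition.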

DynamoDB🧨 allows for secondary indexes for faster searches. It supports two types of indexes: local (up to 5 per table) and global (up to 20 per table). These indexes can help improve the application’s ability to access data quickly and efficiently.

Differences Between Global and Local Secondary Indexes

  • Key🔑 schema: Global – hash, or hash and range key; Local – hash and range key
  • Size limit: Global – none; Local – 10GB max for each key🔑
  • When added: Global – during table creation, or later; Local – only during table creation
  • Query scope: Global – queries all partitions in a table; Local – queries a single partition
  • Consistency: Global – eventually consistent queries; Local – eventually or strongly consistent queries
  • Throughput: Global – dedicated throughput units; Local – uses the table’s throughput units
  • Attribute access: Global – only projected items; Local – all attributes from the table

DynamoDB🧨, like Cosmos DB🪐, offers multiple data consistency levels. DynamoDB🧨 offers both eventually and strongly consistent reads but, like I said previously, “it’s like life itself, there are always tradeoffs. So, depending on your application needs, you will need to determine what’s most important for your application: latency or availability.”
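
As a rough illustration of that tradeoff (a toy simulation only, not how DynamoDB is actually implemented), imagine three replicas where one hasn’t caught up with the latest write yet:

```python
import random

random.seed(0)

# Three hypothetical replicas; a recent write has reached only two of them.
replicas = [{"balance": 100}, {"balance": 100}, {"balance": 80}]  # third is stale

def eventually_consistent_read(key):
    """Read from any single replica: cheap and fast, but may return stale data."""
    return random.choice(replicas)[key]

def strongly_consistent_read(key):
    """Read agreed by a majority of replicas: never stale, but costs more."""
    values = [r[key] for r in replicas]
    return max(set(values), key=values.count)  # majority value

print(strongly_consistent_read("balance"))  # → 100, always
```

An eventually consistent read here might return 100 or the stale 80; the strongly consistent read always returns the majority value at roughly double the read cost, which mirrors DynamoDB’s pricing of strong reads.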

As a prerequisite, you first must create and configure a virtual private cloud☁️ (VPC) to put DynamoDB🧨 in. You can leverage an existing one or you can create a dedicated one for DynamoDB🧨. Next, you need to configure security🔒 groups for your VPC. Security🔒 groups are what control who has access to DynamoDB🧨. As for authentication🔐 and permission to access a table, they are managed through Identity and Access Management (IAM). DynamoDB🧨 provides end-to-end enterprise-grade encryption for data that is both in transit and at rest. All DynamoDB tables have encryption at rest enabled by default. This provides enhanced security by encrypting all your data using encryption keys🔑 stored in the AWS Key🔑 Management Service, or AWS KMS.

“Quicker than a ray of light I’m flying”

Our final destination for this week’s explorations would be Amazon’s fully managed, fast, scalable data warehouse known as Redshift🎆. A “red shift🎆” is when a wavelength of light is stretched, so the light is seen as ‘shifted’ towards the red part of the spectrum, but according to anonymous sources “RedShift🎆 was apparently named very deliberately as a nod to Oracle’s trademark red branding, and Salesforce is calling its effort to move onto a new database “Sayonara.”” Be that as it may, this would be the third data warehouse cloud☁️ solution we would have the pleasure of being acquainted with. 😊

AWS claims Redshift🎆 delivers 10x faster performance than other data warehouses. We didn’t have a chance to benchmark Redshift’s🎆 performance, but based on some TPC tests vs. some of their top competitors there might be some discrepancies with these claims. In either case, it’s still pretty darn fast.

Redshift🎆 uses massively parallel processing (MPP) and a columnar storage architecture. The core unit that makes up Redshift🎆 is the cluster. The cluster is made up of a single leader node and one or more compute nodes. Clients access Redshift🎆 via a SQL endpoint on the leader node. The client sends a query to the endpoint. The leader node creates jobs based on the query logic and sends them in parallel to the compute nodes. The compute nodes contain the actual data the queries need. The compute nodes find the required data, perform operations, and return results to the leader node. The leader node then aggregates the results from all of the compute nodes and sends a report back to the client.

The compute nodes themselves are individual servers; they have their own dedicated memory, CPU, and attached disks. An individual compute node is actually split up into slices🍕, one slice🍕 for every core of that node’s processor. Each slice🍕 is given a piece of memory, disk, and so forth, where it processes whatever part of the workload has been assigned to it by the leader node.
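
The leader/compute/slice🍕 flow can be sketched in a few lines of Python (a toy simulation with made-up sales data, not Redshift internals):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical rows spread across compute-node slices
slices = [
    [("us", 10), ("eu", 5)],   # slice 0
    [("us", 7), ("apac", 3)],  # slice 1
    [("eu", 8)],               # slice 2
]

def slice_job(rows):
    """Each slice aggregates only its own local data."""
    partial = {}
    for region, sales in rows:
        partial[region] = partial.get(region, 0) + sales
    return partial

def leader_query():
    """The leader fans the job out to every slice in parallel, then merges the partials."""
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(slice_job, slices))
    result = {}
    for partial in partials:
        for region, sales in partial.items():
            result[region] = result.get(region, 0) + sales
    return result

print(leader_query())  # → {'us': 17, 'eu': 13, 'apac': 3}
```

The key idea is that each slice🍕 only touches its own piece of the data, and the leader does the final aggregation, which is exactly the shape of a `SELECT region, SUM(sales) ... GROUP BY region` in an MPP warehouse.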

Columnar storage works by storing data by columns rather than by rows, which allows for fast retrieval of columns of data. An additional advantage is that, since each block holds the same type of data, block data can use a compression scheme selected specifically for the column data type, further reducing disk space and I/O. Again, there have been several architectural changes to Redshift🎆 as well since Richard’s course was released.
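
Here is a small pure-Python sketch of the row-vs-column idea, using run-length encoding as a stand-in for the per-column compression schemes Redshift actually offers:

```python
# Row layout: each record is stored together
rows = [("2020-07-01", "us", 10), ("2020-07-01", "us", 7), ("2020-07-01", "eu", 8)]

# Column layout: each column is stored (and compressed) separately
columns = {
    "date":   [r[0] for r in rows],
    "region": [r[1] for r in rows],
    "sales":  [r[2] for r in rows],
}

def run_length_encode(values):
    """Same-typed, repetitive column data compresses well, e.g. via run-length encoding."""
    encoded = []
    for v in values:
        if encoded and encoded[-1][0] == v:
            encoded[-1][1] += 1
        else:
            encoded.append([v, 1])
    return encoded

print(run_length_encode(columns["date"]))  # → [['2020-07-01', 3]]
print(sum(columns["sales"]))               # scan one column, not the whole rows
```

Three identical dates collapse to a single run, and summing `sales` never has to read the `date` or `region` blocks at all, which is where the disk-space and I/O savings come from.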

In the past you needed to pick a distribution style. Today, you still have the option to choose a distribution style, but if you don’t specify one, Redshift🎆 will use AUTO distribution, making it a little easier not to make the wrong choice 😊. Another recent innovation to Redshift🎆 that didn’t exist when Richard’s course was released is the ability to build a unified data platform. Amazon Redshift🎆 Spectrum allows you to run queries across your data warehouse and Amazon S3 buckets simultaneously, allowing you to save time ⏰ and money💰 as you don’t need to load all your data into the data warehouse.

As a prerequisite, you first must create and configure a virtual private cloud☁️ (VPC) to place Redshift🎆 in. You can leverage an existing one or you can create a dedicated one just for Redshift🎆. In addition, you will need to create an Amazon Simple Storage Service (S3) bucket and S3 endpoint to be used with Redshift🎆. Next, you need to configure security🔒 groups for your VPC. Security🔒 groups are what control who has access to your data warehouse. As for credentials 🔐 and entitlements in Redshift🎆, they are managed through AWS Identity and Access Management (IAM).

One last point worth mentioning is that Amazon CloudWatch ⌚️ is included with all the tremendous Cloud☁️ Services offered by AWS. So you get great monitoring 📉 right out of the box! 😊 We enjoyed 😊 our time⏰ this week in AWS exploring 🔦 some of their data offerings, but we merely just scratched the surface.

So much to do, so much to see… So, what’s wrong with taking the backstreets? You’ll never know if you don’t go. You’ll never shine if you don’t glow

Below are some topics I am considering for my travels next week:

  • More with AWS Data Solutions
  • Neo4J and Cypher 
  • More with Google Cloud Path
  • ONTAP Cluster Fundamentals
  • Data Visualization Tools (i.e. Looker)
  • Additional ETL Solutions (Stitch, FiveTran) 
  • Process and Transforming data/Explore data through ML (i.e. Databricks)



Week of July 10th

“Stay with the spirit I found”

Happy Friday! 😊

“You take the blue pill 💊— the story ends, you wake up in your bed 🛌 and believe whatever you want to believe. You take the red pill 💊 — you stay in Wonderland and I show you how deep the rabbit🐰 hole goes.”

Last week, we hopped on board the Nebuchadnezzar🚀 and traveled through the cosmos🌌 to Microsoft’s multi-model NoSQL solution. So, this week we decided to go further down the “rabbit🐰 hole” and explore the wondrous land of Microsoft’s NoSQL Azure solutions as well as graph📈 databases. We would once again revisit Cosmos DB🪐, exploring all 5 APIs. In addition, we would have a brief journey with Azure Storage (Table), Azure Data Lake Storage Gen2 (ADLS), and Azure’s managed data analytics service for real-time analysis, Azure Data Explorer (ADX). Then, for an encore, we would venture into the world’s🌎 most popular graph📈 database, Neo4J.

First, playing the role of our leader “Morpheus” on our first mission would be featured Pluralsight author and premier trainer Reza Salehi, through his recently released Pluralsight course Implementing NoSQL Databases in Microsoft Azure. Reza doesn’t take us quite as deep in the weeds 🌾 with Cosmos DB🪐 as Lenni Lobel’s Learning Azure Cosmos DB🪐 Pluralsight course, but that is because his course covers a wide range of topics in the Azure NoSQL ecosystem. Reza provides us a very practical real-world🌎 scenario like migrating from MongoDB🍃 Atlas to Cosmos DB🪐 (MongoDB🍃 API), and he also covers the Cassandra API, which was omitted from Lenni’s offerings. In addition, Reza spends some time giving a comprehensive overview of Azure Storage (Table) and introduces us to ADLS and ADX, all of which were new to our learnings.

In the introduction of the course, Reza gives us a brief history of NoSQL, which apparently has existed since the 1960s! It just wasn’t called NoSQL. He then gives us his definition of NoSQL and emphasizes its main goal: to provide horizontal scalability, availability, and optimal pricing. Reza mentions an interesting factoid that Azure NoSQL solutions have been used by Microsoft for about a decade through Skype, Xbox 🎮, and Office 365 🧩, none of which scaled very well with a traditional relational database.

Next, he discusses Azure Table Storage (soon to be deprecated and replaced by the Cosmos DB🪐 Table API). Azure Table storage can store large amounts of structured and non-relational data (datasets that don’t require complex joins, foreign keys🔑, or stored procedures) cost effectively. In addition, it is durable and highly available, secure, and massively scalable⚖️. A table is basically a collection of entities with no schema enforced. An entity is a set of properties (maximum of 252), similar to a row in a table in a relational database. A property is a name-value pair. Three main system properties that must exist with each entity are a partition key🔑, a row key🔑, and a timestamp. In the case of the Partition Key🔑 and the Row Key🔑, the application is responsible for inserting and updating these values, whereas the Timestamp is managed by Azure Table Storage and this value is immutable. Azure automatically manages the partitions and the underlying storage, so as the data in your table grows, it is divided into different partitions. This allows for faster query performance⚡️ of entities with the same partition key🔑 and for atomic transactions on inserts and updates.
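
A tiny pure-Python sketch of the PartitionKey🔑/RowKey🔑/Timestamp model (hypothetical entities, not the Azure SDK):

```python
from datetime import datetime, timezone

table_storage = {}

def upsert_entity(entity: dict):
    """PartitionKey and RowKey come from the application; Timestamp is system-managed."""
    key = (entity["PartitionKey"], entity["RowKey"])   # unique within the table
    entity["Timestamp"] = datetime.now(timezone.utc)   # immutable to the caller
    table_storage[key] = entity

# Schema-less: entities in the same table can carry different properties
upsert_entity({"PartitionKey": "Seattle", "RowKey": "cust-001", "Name": "Ada"})
upsert_entity({"PartitionKey": "Seattle", "RowKey": "cust-002", "Email": "x@y.z"})

# Queries within one partition are fast: a single-partition scan
seattle = [e for (pk, _), e in table_storage.items() if pk == "Seattle"]
print(len(seattle))  # → 2
```

Note how the two entities carry different properties with no schema enforced, while the three system properties are always present.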

Next on the agenda was Microsoft’s globally distributed, multi-model database service better known as Cosmos DB🪐. Again, we had been down this road just last week, but just like re-watching the first Matrix movie🎞, I was more than happy 😊 to do so.

As a nice review, Reza reiterated some of the core Cosmos DB🪐 concepts like global distribution, multi-homing, data consistency levels, time-to-live (TTL), and data partitioning. All of these are included with all five flavors, or APIs, of Cosmos DB🪐 because, at the end of the day, each API is just another container in Cosmos DB🪐. Some of the highlights included:

Global distribution

·        Cosmos DB🪐 allows you to add or remove any of the Azure regions to your Cosmos account at any time, with the click of a button.

·        Cosmos DB🪐 will seamlessly replicate your data to all the regions associated with your Cosmos account.

·        The multi-homing capability of Cosmos DB🪐 allows your application to be highly available.

Multi-homing APIs

·        Your application is aware of the nearest region and sends requests to that region.

·        Nearest region is identified without any configuration changes

·        When a new region is added or removed, the connection string stays the same

Time-to-live (TTL)

•               You can set the expiry time (TTL) on Cosmos DB data items

•               Cosmos DB🪐 will automatically remove the items after this time period, since the last modification time ⏰
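
A minimal sketch of the TTL idea (a simulation of the policy, not Cosmos DB’s actual background process):

```python
import time

ttl_seconds = 2  # hypothetical container-level TTL

items = {
    "session-1": {"data": "...", "last_modified": time.time() - 5},  # older than TTL
    "session-2": {"data": "...", "last_modified": time.time()},      # fresh
}

def purge_expired(items, ttl):
    """Keep only items whose TTL has not elapsed since their last modification."""
    now = time.time()
    return {k: v for k, v in items.items() if now - v["last_modified"] < ttl}

live = purge_expired(items, ttl_seconds)
print(sorted(live))  # → ['session-2']
```

The expiry clock restarts on every modification, which is why the fresh item survives while the stale one is swept away.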

Cosmos DB🪐 Consistency Levels

•       Cosmos DB🪐 offers five consistency levels to choose from:

•       Strong, bounded staleness, session, consistent prefix, eventual

Data Partitioning

·        A logical partition consists of a set of items that have the same partition key🔑.

·        Data that’s added to the container is automatically partitioned across a set of logical partitions.

·        Logical partitions are mapped to physical partitions that are distributed among several machines.

·        Throughput provisioned for container, is divided evenly among physical partitions.
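
Those partitioning bullets can be sketched in a few lines of Python (hypothetical tenant keys and RU numbers; the real hashing is internal to Cosmos DB🪐):

```python
import hashlib

provisioned_rus = 1000     # hypothetical container throughput
physical_partitions = 2    # hypothetical physical partition count

logical_partitions = [f"tenant-{i}" for i in range(8)]

def physical_for(logical_key: str) -> int:
    """Each logical partition maps to exactly one physical partition."""
    digest = hashlib.sha256(logical_key.encode()).hexdigest()
    return int(digest, 16) % physical_partitions  # stand-in for Cosmos DB's hashing

placement = {lp: physical_for(lp) for lp in logical_partitions}
rus_per_physical = provisioned_rus / physical_partitions

print(rus_per_physical)  # → 500.0: each physical partition gets an equal share
```

This is also why a hot🔥 logical partition is so painful: all of its traffic lands on one physical partition, which only ever holds an equal slice of the provisioned RUs.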

Then Reza breaks down each of the 5 Cosmos DB🪐 APIs in separate modules. But at the risk of being redundant with last week’s submission, we will just focus on the MongoDB🍃 API and the Cassandra API, as we covered the other APIs in-depth last week. I will make one important point for all the APIs you are working with: you must choose an appropriate partition key🔑. As a rule of thumb 👍, an ideal Partition key🔑 should have a wide range of values, so your data is evenly spread across logical partitions.

The MongoDB🍃 API in Cosmos DB🪐 supports the popular MongoDB🍃 document database with absolutely no code changes to existing applications other than a connection string. It now supports up to MongoDB🍃 version 3.6.

During this module, Reza provides us with a very practical real world 🌎  scenario migrating from MongoDB🍃Atlas to Cosmos DB🪐 (MongoDB🍃 API). We were happy😊  to report that we were able to follow along easily and successfully migrate our own MongoDB🍃 Atlas collections to Cosmos DB🪐. 

Important to note: Before starting a migration from MongoDB🍃 to Cosmos DB🪐, you should estimate the amount of throughput to provision for your Azure Cosmos databases and collections, and of course pick an optimal partition key🔑 for your data.

Next, we focused on the Cassandra API in Cosmos DB🪐. This one, admittedly, I was really looking forward to, as it wasn’t in scope on our previous journey. The Cosmos DB🪐 Cassandra API can be used as the data store for apps written for Apache Cassandra. Just like with MongoDB🍃, existing Cassandra applications using CQLv4-compliant drivers can easily communicate with the Cosmos DB🪐 Cassandra API, making it easy to switch from Apache Cassandra with only an update to the connection string. The familiar CQL, Cassandra client drivers, and Cassandra-based tools can all be used, making for a seamless migration with, of course, the benefits of Cosmos DB🪐 like:

·        No operations management (PaaS)

·        Low latency reads and writes

·        Use existing code and tools

·        Throughput and storage elasticity

·        Global distribution and availability

·        Choice of five well-defined consistency levels

·        Interact with Cosmos DB🪐 Cassandra API

Next, we ventured onto new ground with Azure Data Lake Storage (ADLS). ADLS is a hyper-scale repository for big data analytic workloads. Azure Storage (Gen 2) is the foundation for building enterprise data lakes with ADLS. ADLS supports hundreds of gigabits of throughput and manages massive amounts of data. Some key features of ADLS include:

·        Hadoop compatible – manage data same as Hadoop HDFS

·        POSIX permissions – supports ACL and POSIX file permissions

·        Cost effective – offers low cost storage capacity

Last but certainly not least on this journey with Reza was an introduction to Azure Data Explorer (ADX), a fast and highly scalable⚖️ data exploration service for log and telemetry data. ADX is designed to ingest data from sources like websites, devices, logs, and more. These ingestion sources come natively from Azure Event Hub, IoT Hub, and Blob Storage. Data is then stored in a highly scalable⚖️ database, and analytics are performed using the Kusto Query Language (KQL). ADX can be provisioned with the Azure CLI, PowerShell, C# (NuGet package), the Python 🐍 SDK, and ARM templates. One of the key features of ADX is anomaly detection. ADX uses machine learning under the hood to find these anomalies. ADX also supports many data visualization tools like:

·        Kusto query language visualizations

·        Azure Data Explorer dashboards (Web UI)

·        Power BI connector

·        Microsoft Excel connector

·        ODBC connector

·        Grafana (ADX plugin)

·        Kibana Connector (using k2bridge)

·        Tableau (via ODBC connector)

·        Qlik (via ODBC connector)

·        Sisense (via JDBC connector)

·        Redash

ADX can easily integrate with other services like:

·        Azure Data Factory

·        Microsoft Flow

·        Logic Apps

·        Apache Spark Connector

·        Azure Databricks

·        CI/CD using Azure DevOps

I’ll show these people what you don’t want them to see. A world🌎 without rules and controls, without borders or boundaries. A world🌎 where anything is possible. -Neo

After spending much time in Cosmos DB🪐, and in particular the Graph📈 Database API, I have become very intrigued by this type of NoSQL solution. The more I explored, the more I coveted. I had a particular yearning to learn more about the world’s 🌎 most popular graph 📈 database, Neo4J. For those not aware, Neo4J is developed by a Swedish 🇸🇪 technology company sometimes referred to as Neo4J or Neo Technology. I guess it depends on the day of the week?

According to all accounts, the name “Neo” was chosen for Swedish 🇸🇪 pop artist and favorite of the Swedish🇸🇪 developers Linus “Neo” Ingelsbo, “4” (for version) and “J” for the Swedish🇸🇪 word “Jätteträd”, which of course means “giant tree 🌳”, because a tree 🌳 signifies the huge data structures that could now be stored in this amazing database product. But to me this story seems a bit curious… With a database named “Neo”, a query language called “Cypher”, and Awesome Procedures On Cypher, better known as APOC, I somehow believe there is another story here…

Anyway, guiding us through our learning with Neo4J would be none other than the “Flying Dutchman” 🇳🇱 Roland Guijt, through his Introduction to Graph📈 Databases, Cypher, and Neo4j, which was short but sweet (sort of like a Stroopwafel🧇).

In the introduction, Roland tells us the who, what, when, where, why, and how of graph📈 databases. A graph 📈 consists of nodes (or vertices) which are connected by directional relationships (or edges). A node represents an entity. An entity is typically something in the real world🌎 like a customer, an order, or a person. A collection of nodes and relationships together is called a graph 📈. Graph📈 databases are very mind friendly compared to other data storage technologies because graphs📈 act a lot like how the human brain🧠 works. It’s easier to think of the data structure and also easier to write queries. These patterns are much like the patterns the brain🧠 uses to fetch data or retrieve memories.

Graph 📈 databases are all about relationships and thus are very strong in storing and retrieving highly related data. They are also very performant during querying, even with a large number of nodes, like in the millions. They offer great flexibility as, like all NoSQL databases, they don’t require a fixed schema. In addition, they are quite agile, as you can add or delete nodes and node properties without affecting already stored nodes, and they are extensible, supporting multiple query languages.

After a comprehensive overview of graph📈 databases, Roland dives right into Neo4J, the leader in graph 📈 databases. Unlike document databases, Neo4j is ACID compliant, which means that all data modification is done within a transaction. If something goes wrong, Neo4j will simply roll back to a state where the data was reliable.

Neo4J is Java☕️ based, which allows you to install it on multiple platforms like Windows, Linux, and OS X. Neo4j can scale⚖️ up as it can easily adjust to hardware changes, i.e. adding more physical memory, in which case it will automatically add more nodes to the cache. Neo4J can also scale ⚖️ out like most NoSQL solutions, i.e. adding more servers, meaning it can distribute the load of transactions or create a highly available cluster in which a server will take over when the active one fails.

Since by definition Neo4J is a graph📈 database, it’s all about relationships and nodes, and both are equally important. Nodes are schema-less entities with properties (key🔑-value pairs whose keys are always strings). A relationship connects a node to another node. Just like nodes, relationships can also contain properties, and those properties also support indexing.
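
To make the model concrete, here is a minimal pure-Python sketch of nodes and relationships with properties (a toy in-memory graph, not Neo4j’s engine):

```python
# Minimal property-graph sketch: nodes and relationships both carry properties
nodes = {
    1: {"label": "Actor", "name": "Matt Smith"},
    2: {"label": "Character", "name": "Doctor"},
}
relationships = [
    {"from": 1, "to": 2, "type": "PLAYED", "props": {"years": "2010-2013"}},
]

def match_played(actor_name):
    """Rough analogue of: MATCH (:Actor{name:...})-[:PLAYED]->(c) RETURN c.name"""
    results = []
    for rel in relationships:
        if rel["type"] == "PLAYED" and nodes[rel["from"]]["name"] == actor_name:
            results.append(nodes[rel["to"]]["name"])
    return results

print(match_played("Matt Smith"))  # → ['Doctor']
```

A graph database stores these relationships as first-class, directly traversable links rather than reconstructing them with joins at query time, which is where the performance on highly related data comes from.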

Next, Roland discusses querying data with Cypher, the most powerful⚡️ of the query languages supported by Neo4J. Cypher was developed and optimized for Neo4j and for graph📈 databases. Cypher is a very fluid language, meaning it continuously changes with each release. The good 😊 news is all major releases are backwards compatible with older versions of the language. It’s very different from SQL, so there is a bit of a learning curve. However, it’s not as steep a learning curve as you would imagine, because Cypher uses patterns to match the data in the database, very much how the brain🧠 works. That, and Neo4J Desktop has intellisense. 😊

As an example to demonstrate the query language and CRUD, we worked with a very cool Dr. Who graph 📈 database filled with multiple nodes for Actors, Roles, Episodes, and Villains and their given relationships. To begin, we started with the “R” (Reads) part of CRUD, learning the MATCH command.

Below is some MATCH – RETURN syntax:

MATCH (:Actor{name:'Matt Smith'})-[:PLAYED]->(c:Character) RETURN c.name AS name

MATCH (actors:Actor)-[:REGENERATED_TO]->(others) RETURN actors.name, others.name

MATCH (:Character{name:'Doctor'})<-[:ENEMY_OF]-(:Character)-[:COMES_FROM]->(p:Planet) RETURN p.name AS Planet, count(p) AS Count

MATCH (:Actor{name:'Matt Smith'})-[:APPEARED_IN]->(ep:Episode)<-[:APPEARED_IN]-(:Character{name:'Amy Pond'}), (ep)<-[:APPEARED_IN]-(enemies:Character)<-[:ENEMY_OF]-(:Character{name:'Doctor'}) RETURN ep AS Episode, collect(enemies.name) AS Enemies;

Further, Roland discussed the WHERE Clause and ORDER BY Clauses which are very similar to ANSI SQL. Then he converses about other Cypher syntax like:

SKIP – which skips the number of result items you specify.

LIMIT – which limits the numbers of items returned.

UNION – which allows you to connect two queries together and generate one result set.

Then he ends the module conferring on scalar functions like TOINT, and reviews two of his favorite advanced query features: COMPANION_OF and SHORTESTPATH.

Continuing on with the C, U, and D in CRUD, we played with CREATE, MATCH with SET, and MATCH with DELETE.

Below is some Syntax:

CREATE p = (:Actor{name:'Peter Capaldi'})-[:APPEARED_IN]->(:Episode{name:'The Time of The Doctor'}) RETURN p

MATCH (matt:Actor{name:'Matt Smith'})

SET matt.salary = 1000

Then we looked at MERGE and FOREACH, with the below syntax as examples:

MERGE (peter:Actor{name:'Peter Capaldi'}) RETURN peter

MATCH p = (actors:Actor)-[r:PLAYED]->(others)

WHERE actors.salary > 10000

FOREACH (n IN nodes(p)| set n.done = true)

As we continued our journey with Neo4J, we reconnoitered Indexes and Constraints. Indexes are only good for data retrieval, so if your application performs lots of writes, it’s probably best to avoid them. As for constraints, the unique constraint is currently the only constraint available in Neo4j, which is why it is often called just “constraint”. Lastly, in the module we reviewed importing CSV files, which makes importing data from other sources a breeze. It enables you to import data into a Neo4j database from many sources. CSV files can be loaded from the local file system as well as remote locations. Cypher has a LOAD CSV statement, which is used together with CREATE and/or MERGE.

Finally, Roland reviewed Neo4j’s APIs, which was a little bit out of our lexicon but interesting nonetheless. Neo4j supports two API types out of the box: traditional REST and their proprietary Bolt⚡️. The advantage of Bolt⚡️ is mainly performance. Bolt⚡️ doesn’t have the HTTP overhead, and it uses a binary format instead of text to return data. For both the REST and Bolt⚡️ APIs, Roland provides C# code samples that can be run with NuGet packages in Visual Studio, my new favorite IDE.

Ever have that feeling where you’re not sure if you’re awake or dreaming?

Below are some topics I am considering for my learnings next week:

·      More on Neo4J and Cypher 

·      More on MongoDB

·      More with Google Cloud Path

·      Working with Parquet files 

·      JDBC Drivers

·      More on Machine Learning

·      ONTAP Cluster Fundamentals

·      Data Visualization Tools (i.e. Looker)

·      Additional ETL Solutions (Stitch, FiveTran) 

·      Process and Transforming data/Explore data through ML (i.e. Databricks)

Stay safe and Be well –


Week of July 3rd

“Hanging in the cosmos 🌌 like a space ornament”

Happy Birthday🎂🎁🎉🎈America🇺🇸 !

“Now let me welcome everybody to the Wild, Wild West 🤠. A state database that’s untouchable like Eliot Ness.” So, after spending a good concentrated week in the “humongous” document database world better known as the popular MongoDB🍃, it only made sense to continue our Jack Kerouac-like adventures through the universe 🌌 of “Not only SQL” databases.  

“So many roads, so many detours. So many choices, so many mistakes.” -Carrie Bradshaw

But with so many Document databases, Table and Key-value stores, Columnar and Graph databases to choose from in the NoSQL universe, where shall we go?  Well, after a brief deliberation, we turned to the one place that empowers every person and every organization on the planet to achieve more. That’s right, Microsoft! Besides we haven’t been giving Mr. Softy enough love ❤️ in our travels. So, we figured we would take a stab and see what MSFT had to offer. Oh boy, did we hit eureka with Microsoft’s Cosmos DB🪐!

For those not familiar with Microsoft’s Cosmos DB🪐, it was released for GA in 2017. The solution morphed out of Azure DocumentDB (the “Un-cola”🥤 of document databases of its day), which was initially released in 2014. At the time of its inception, Azure DocumentDB was the only NoSQL Cloud☁️ solution (MongoDB🍃 Atlas☁️ was released two years later in 2016) but its popularity was still limited. Fortunately, MSFT saw the “forest 🌲🌲🌲 through the trees🌳”, or shall I say the planets🪐 through the stars ✨, and knew there was a lot more to NoSQL than just some JSON and a bunch of curly braces. So, they “pimped up” Azure DocumentDB and gave us the Swiss🇨🇭 Army knife of all NoSQL solutions through their rebranded offering, Cosmos DB🪐

Cosmos DB 🪐 is a multi-model NoSQL Database as a Service (NDaaS) that manages data at planetary 🌎 scale ⚖️! Huh? In other words, Cosmos DB🪐 supports 6 different NoSQL solutions through the beauty of APIs (Application Program Interfaces). Yes, you read that correctly. Six! Cosmos DB🪐 supports the SQL API, originally intended to be used with the aforementioned Azure DocumentDB, which uses the friendly SQL query language; the MongoDB🍃 API (for all the JSON fans); Cassandra (columnar database); Azure Table Storage (table); etcd (key-value store); and last but certainly not least the Gremlin👹 API (graph database).

Cosmos DB🪐 provides virtually unlimited scale ⚖️ through both storage and throughput and it automatically manages the growth of the data with server-side horizontal partitioning.

So, no worrying about adding more nodes or shards.  …And that’s not all! Cosmos DB🪐 does all the heavy lifting 🏋🏻‍♀️ with automatic global distribution and server-side partitioning for painless management over the scale and growth of your database. Not to mention, offers a 99.999% SLA when data is distributed across multi-regions 🌎 (Only a mere four 9s when you stick to a single region).

Yes, you read that right, too. 99.999% guarantee! Not just on availability… No, No, No… but five 9s on latency, throughput, and consistency as well!

Ok, so now I sound like a MSFT fanboy. Perhaps? So now, fully percolating ☕️ with excitement, who would guide us through such amazing innovation? Well, we found just the right tour guide in native New Yorker Lenni Lobel. Through his melodious 🎶 voice and his decades of experience in IT, Lenni takes us on an amazing journey through Cosmos DB🪐 with his Pluralsight course Learning Azure Cosmos DB🪐

In the introduction, Lenni gives us his interpretation of NoSQL, which answers the common problem of the 3Vs in regards to data, and covers the roots of Cosmos DB🪐, which we touched on earlier. Lenni then explains how the Cosmos DB🪐 engine is an atom-record-sequence (ARS) based system. In other words, the database engine of Cosmos DB🪐 is capable of efficiently translating and projecting multiple data models by leveraging ARS. Still confused? Don’t be. In more simplistic terms, under the covers Cosmos DB🪐 leverages the ARS framework to support multiple NoSQL technologies. It does this through APIs, placing each of the data models in separate schema-agnostic containers, which is super cool 😎! Next, he discusses one of the cooler 😎 features of Cosmos DB🪐: automatic indexing. If you recall from our MongoDB🍃 travels, one of the main takeaways was a strong emphasis on the need for indexes in MongoDB🍃. Well, in Cosmos DB🪐 you need not worry. Cosmos DB🪐 does this for you automatically. The only main concern is choosing the right partition key🔑 on your container, and you must choose wisely, otherwise performance and cost will suffer.

Lenni further explains how one quantifies data performance through latency and throughput. In the world 🌎 of data, latency is how long the data consumer waits for the data to be received from end to end, whereas throughput is the performance of the database itself. First, Mr. Lobel demonstrates how to provision throughput in Cosmos DB🪐, which provides predictable throughput to the database through a server-less approach measured in Request Units (RUs). RUs are a blended measure of computational cost: CPU, memory, disk I/O, and network I/O.

So, like most server-less approaches, you don’t need to worry about provisioning hardware to scale ⚖️ your workloads. You just need to properly allocate the right amount of RUs to a given container. The good news on RUs is that this setting is flexible, so it can be easily throttled up and down through the portal or even specified at an individual query level.

Please note: data writes are generally more expensive than data reads. The beauty of the RU approach is that you are guaranteed throughput and you can predict cost. You will even be notified through a friendly error message when your workloads exceed a certain threshold. There is an option to run your workloads in an “auto-pilot ✈️ mode” in which Cosmos DB🪐 will adjust the RUs to a given workload, but beware: this option could be quite costly, so proceed at your own risk and discuss this option with MSFT before considering using it.

In an effort to be fully transparent, unlike some of their competitors, Microsoft offers a Capacity Calculator, so you can figure out exactly how much it will cost you to run your workloads (reserved RU/sec per hour: $0.008 per 100 RU/sec). The next important consideration in regards to throughput is horizontal partitioning. Some might regard horizontal partitioning as strictly for storage, but in fact it also massively impacts throughput.
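
Using the rate quoted above, here is a quick back-of-the-envelope calculator (my own arithmetic, assuming roughly 730 hours in a month; use the official Capacity Calculator for real pricing):

```python
# Worked example using the quoted rate: $0.008 per 100 RU/sec, per hour
rate_per_100_rus_hour = 0.008

def monthly_cost(provisioned_rus: int, hours: int = 730) -> float:
    """Reserved-throughput cost for a month of ~730 hours."""
    return provisioned_rus / 100 * rate_per_100_rus_hour * hours

# A container provisioned at 400 RU/sec:
# 400/100 = 4 units × $0.008/hr × 730 hrs ≈ $23.36/month
print(round(monthly_cost(400), 2))
```

The predictability is the point: double the RUs and the bill exactly doubles, which is why RU-based pricing is so easy to budget for.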

“Yes, it’s understood that partitioning and throughput are distinct concepts, but they’re symbiotic in terms of scale-out.”

Anyway, no need to fret… We just simply create a container and let Cosmos DB🪐 automatically manage these partitions for us behind the scenes (including the distribution of partitions within a given data center). However, keep in mind that we must choose a proper partition key🔑, otherwise we can have a rather unpleasant😞 and costly🤑 experience with Cosmos DB🪐. Luckily, there are several best practices around choosing the right partition key🔑. Personally, I like to stick to the rule of thumb 👍 to always choose a key🔑 with many distinct values, like in the 100s or 1000s. This can hopefully help avoid the dreaded hot🔥 partition.

Please note: Partition keys 🔑 are immutable but there are documented workarounds on how to deal with changing this in case you find yourself in this scenario.

Now that we have a good grasp on how Cosmos DB🪐 handles throughput and latency through RUs and horizontal partitioning, what if your application is global 🌎 and your primary data is located halfway around the world 🌍? Our performance could suffer tremendously… 😢

Cosmos DB🪐 handles such challenges with one of the solution’s most compelling features: Global Distribution of Data. Microsoft intuitively leverages the ubiquity of its global data centers and offers turnkey, “point-and-click” global distribution so your data can seamlessly be geo-replicated across regions.

In cases where you have multiple masters or data writers, Cosmos DB🪐 offers three options to handle write conflicts:

  • Option 1: Last Writer Wins (default) – based on the highest _ts property, or any other numeric property designated as the conflict resolver property. The write with the higher value wins; if the resolver property is blank, the write with the higher _ts wins.
  • Option 2: Merge Procedure (custom) – based on the result of a stored procedure
  • Option 3: Conflict Feed (offline resolution) – based on quorum majority
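Option 1 can be sketched in a few lines — a hypothetical illustration, not Microsoft’s implementation: among conflicting versions of a document, pick the one with the highest conflict-resolver property, falling back to _ts:

```python
def last_writer_wins(versions, resolver_property=None):
    """Pick the winning version among conflicting writes of one document.
    If a numeric resolver property is configured, the highest value wins;
    otherwise (or if absent) the highest _ts (server epoch timestamp) wins."""
    def rank(doc):
        custom = doc.get(resolver_property) if resolver_property else None
        return (custom if custom is not None else float("-inf"), doc["_ts"])
    return max(versions, key=rank)

conflicts = [
    {"id": "1", "qty": 5, "priority": 2, "_ts": 1000},
    {"id": "1", "qty": 7, "priority": 9, "_ts": 900},   # older, but higher priority
]
print(last_writer_wins(conflicts))              # default: the latest _ts wins
print(last_writer_wins(conflicts, "priority"))  # custom resolver property wins
```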

Whew 😅 … But what about data consistency? How do we ensure our data is consistent in all of our locations? Well, once again, Cosmos DB🪐 does not disappoint, supporting five different options. Of course, like life itself, there are always tradeoffs. Depending on your application’s needs, you will need to determine what’s more important: latency or availability? Below are the options, ordered from strongest consistency (highest latency) to highest availability:

  1. Strong – (No dirty reads) Higher latency on writes, waiting for the write to be committed to a Cosmos DB quorum. Higher RU costs
  2. Bounded Staleness – Dirty reads possible, bounded by time and number of updates, which is kind of like “skunked🦨 beer🍺”: you decide the level of freshness you can tolerate.
  3. Session – (Default) No dirty reads for writers (read your own writes). Dirty reads are possible for other users
  4. Consistent Prefix – Dirty reads possible, but reads never see out-of-order writes.
  5. Eventual – Stale reads possible, no guaranteed order. Fastest
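To make the “skunked beer” idea behind Bounded Staleness concrete, here is a toy Python model (purely illustrative — not how Cosmos DB implements consistency): a bounded-staleness read is served only while the replica’s lag stays within the bound you chose, while an eventual read happily serves whatever is there:

```python
import time

def read(replica, level, max_lag_seconds=5, now=None):
    """Toy model of two consistency levels.
    replica: dict with 'value' and 'last_synced' (epoch seconds)."""
    now = now if now is not None else time.time()
    lag = now - replica["last_synced"]
    if level == "eventual":
        return replica["value"]          # fastest: possibly stale, any order
    if level == "bounded_staleness":
        if lag <= max_lag_seconds:       # the freshness you decided to tolerate
            return replica["value"]
        raise TimeoutError(f"replica is {lag:.0f}s behind the {max_lag_seconds}s bound")
    raise ValueError(level)

replica = {"value": "v42", "last_synced": 1_000}
print(read(replica, "eventual", now=1_060))            # stale read is allowed
# read(replica, "bounded_staleness", now=1_060)        # would raise: 60s > 5s bound
```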

So, after focusing on these core concepts within Cosmos DB🪐, we were ready to dig our heels 👠 👠 right in and get this bad boy up and running 🏃🏻 . So after waiting about 15 minutes or so… we had our Cosmos DB🪐 fired up 🔥 and running in Azure… Not bad for such a complex piece of infrastructure. 😊

Next, we created a Database and then a Container and started our travels with the SQL API. Through the portal, we were easily able to manually write some JSON documents and add them to our collection.

In addition, through Lenni’s brilliantly written .NET Core code samples, we were able to automate writing, querying, and reading data in bulk. Further, we were able to easily adjust throughput and latency through the portal by tweaking the RUs and enabling multi-region replication. We were able to demonstrate this by re-running Lenni’s code after the changes.

Getting Lenni’s code to work did take a little bit of troubleshooting with Visual Studio 2019, along with a little bit of understanding of how to fix the .NET SDK errors and some of the compilation errors from NuGet packages, all of which was a bit out of our purview… But needless to say, we figured out how to troubleshoot the NuGet packages and modify some of the parameters in the code, like the _id field, the Cosmos DB🪐 server, and the Cosmos DB master key 🔑.

We were able to enjoy the full experience of the SQL API, including the power⚡️ of using the familiar SQL query language, without having to worry about API-specific syntax like db.collection.insertOne().

We also got to play with server-side programming in Cosmos DB🪐, like the familiar concepts of stored procedures, triggers, and user-defined functions, which in Cosmos DB🪐 are basically self-contained JavaScript functions deployed to the database for execution. But one can always pretend like we are in the relational database world. 😊

Next, we got to test drive 🚙 the Data Migration tool 🛠 that allows you to import data from existing data sources into Cosmos DB🪐.

From our past experiences, we have found Microsoft has gotten quite good at creating these types of tools 🧰. The Cosmos DB🪐 Data Migration tool offers great support for many data sources like SQL Server, JSON files, CSV files, MongoDB, Azure Table storage, and others.

First, we used the UI to move data from Microsoft SQL Server 2016 (the popular AdventureWorks example database) to Cosmos DB🪐, and then later through the CLI (azcopy) from Azure Table storage.

Notably, Azure Table Storage is on the road map to be deprecated and automatically migrated to Cosmos DB🪐, but this was a good exercise for those who can’t wait and want to take advantage of such an awesome platform today!

As a grand finale, we got to play with Graph Databases through the Gremlin 👹 API. As many of you might be aware, graph databases are becoming increasingly popular these days, mostly because data in the real world is naturally connected through relationships, and graph databases do a better job when many complex relationships exist, as opposed to our traditional RDBMS.

Again, it’s worth noting that in the case of Cosmos DB🪐, it doesn’t really matter what data model you’re implementing because, as we mentioned earlier, it leverages the ARS framework. So as far as Cosmos DB🪐 is concerned, it’s just another container to manage, and we get all the horizontal partitioning, provisioned throughput, global distribution, and indexing goodness 😊.

We were new to the whole concept of Graph Databases, so we were very excited to get some exposure here, which looks to be a precursor for further explorations. The most important highlight of graph databases is understanding Vertex and Edge objects. These are basically just fancy schmancy words for entities and relationships. A Vertex is an entity, and an Edge is a relationship between any two vertices. Both can hold arbitrary key-value pairs 🔑🔑 and are the building blocks of a graph database.
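The Vertex/Edge idea is simple enough to sketch from scratch — here is a minimal, hypothetical property graph in plain Python (no Gremlin involved), where both vertices and edges carry arbitrary key-value pairs 🔑🔑:

```python
class Graph:
    """Minimal property-graph sketch: vertices and edges both carry
    arbitrary key-value pairs, just like in a graph database."""
    def __init__(self):
        self.vertices = {}    # vertex id -> property dict
        self.edges = []       # (source_id, label, target_id, property dict)

    def add_vertex(self, vid, **props):
        self.vertices[vid] = props

    def add_edge(self, source, label, target, **props):
        self.edges.append((source, label, target, props))

    def out(self, vid, label):
        """Traverse outgoing edges with a given label (a baby Gremlin step)."""
        return [t for s, l, t, _ in self.edges if s == vid and l == label]

g = Graph()
g.add_vertex("John", kind="employee")
g.add_vertex("Acme", kind="company")
g.add_edge("John", "worksAt", "Acme", weekends=True)
print(g.out("John", "worksAt"))   # ['Acme']
```

The more edges you add, the more traversal questions you can answer — which is exactly the appeal of Gremlin queries below.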

Cosmos DB🪐 utilizes the Apache TinkerPop standard which uses Gremlin as a functional step-by-step language to create vertices and edges and stores the data as GraphSON or “Graphical JSON”.  

In addition, Gremlin 👹 allows you to query the graph database by using simple traversals through a myriad of relationships, or Edges. The more edges you add, the more relationships you define, and the more questions you can answer by running Gremlin👹 queries. 😊

To further our learning Lenni once again gave us some nice demos using a fictitious company “Acme” and its relationships of employees, Airport terminals and Restaurants and another example using Comic Book hero’s which made playing along fun.

Below are some examples of Gremlin 👹 syntax from our voyage.




g.V().has('id','John').addE('worksAt').property('weekends', true).to(g.V().has('id','Acme'))

g.V().has('id','Alan').addE('worksAt').property('weekends', true).to(g.V().has('id','Acme'))


When it comes to graph databases the possibilities are endless. Some good use cases for a graph database would be:

  • Complex Relationships – Many “many-to-many” relationships
  • Excessive JOINS
  • Analyze interconnected data relationships
  • Typical graph applications
    • Social networks 
    • Recommendation Engines

In Cosmos DB🪐, it’s clear to see how a graph database is no different than any other key-value data model. A graph database gets provisioned throughput and is fully indexed, partitioned, and globally distributed just like a document collection in the SQL API or a table in the Table API.

Cosmos DB🪐 will one day allow you to switch freely between different APIs and data models within the same account, and even over the same data set. So by adding this graph functionality to Cosmos DB🪐 Microsoft really hit ⚾️  this one out of the park 🏟!

Closing time …Every new beginning.. comes from some other beginning’s end

Below are some topics I am considering for my wonderings next week:

  • Neo4J and Graph DB
  • More on Cosmos DB
  • More on MongoDB
  • More with Google Cloud Path
  • Working with Parquet files 
  • JDBC Drivers
  • More on Machine Learning
  • ONTAP Cluster Fundamentals
  • Data Visualization Tools (i.e. Looker)
  • Additional ETL Solutions (Stitch, FiveTran) 
  • Process and Transforming data/Explore data through ML (i.e. Databricks)

Stay safe and Be well –


Week of June 26th

 “…And I think to myself… What a wonderful world 🌎 .

Happy Coconut🥥 Day!

Recently, I had been spending so much time ⏰ in GCP land☁️ that it started to feel like my second home 🏡 . However, it was time for a little data sabbatical. I needed to visit a land of mysticism✨ and intrigue. A place where developers can roam freely and where data can be flexible, semi-structured, hierarchical in nature, and easily scaled horizontally… A place not bound to the rigidness of relational tables but a domicile of flexible documents. We would journey to the world of MongoDB🍃.

Ok, so we have been there before, but we needed a refresher. It had been about 6 years since we first became acquainted with this technological phenomenon. Besides, we hadn’t played around too much with some of the company’s innovations like MongoDB🍃 Compass 🧭, a sleek visual environment that allows you to analyze and understand the contents of your data in MongoDB🍃, and MongoDB🍃 Atlas☁️, the managed service used to provision, maintain, and scale MongoDB🍃 clusters, conveniently offered on AWS, Azure, and GCP.

To assist us in getting started would be our old comrade in arms, Pinal Dave of SQLAuthority fame. Pinal had put together an outstanding condensed course on Foundations of Document Databases with MongoDB.

So, this is where we would begin. The course commences with an introduction to NoSQL (“Not Just SQL”) databases and some of the advantages of a document database, like an intuitive data model, a dynamic schema, and a distributed, scalable database. Then he gives us a comprehensible explanation of the CAP (Consistency, Availability, and Partition Tolerance) Theorem and how only 2 of the 3 can be guaranteed. MongoDB🍃 fits in under the CP variety, compromising on availability. Next, the following key 🔑 points are made in relation to MongoDB🍃:

  • All write operations in MongoDB🍃 are atomic on the level of a single document
  • If the collection does not currently exist, the insert operation will create the collection.

Next, Pinal takes us through a few quick and easy steps on how to get set up with MongoDB Atlas☁️. Once our fully managed MongoDB🍃 cluster was fired🔥 up, it was time to navigate our collections with MongoDB Compass🧭.

For much of the rest of the course, we would concentrate on CRUD (Create, Read, Update, Delete) operations in MongoDB🍃 through both Compass🧭 and the CLI (Mongo Shell).

The course also presented a terse walkthrough, with syntax and elucidation, of Read Concerns and Write Concerns in MongoDB🍃.

Read Concern

Allows you to control the consistency and isolation properties of the data read from replica sets and replica set shards

  1. Local – (default) No guarantee the data has been applied to all replicas – Primary
  2. Available – No guarantee the data has been applied to all replicas – Secondary
  3. Majority – Acknowledged by a majority of the replicas
  4. Linearizable – All successful writes acknowledged by a majority of the replicas before the read (the query might have to wait)
  5. Snapshot – Used with multi-document transactions; reads data from a majority-committed snapshot

Write Concern

Level of acknowledgement requested from MongoDB🍃 for write operations

w:1 – Ack from primary

w:0 – No ack

w:n – Ack from primary + (n-1) secondaries

w: majority

wtimeout: Time limit to prevent write operations from blocking indefinitely
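The write concern levels above can be modeled with a toy acknowledgment counter — an illustration only, not MongoDB’s actual replication protocol:

```python
def await_write_concern(ack_times, w, replica_count, wtimeout_ms):
    """Toy model of MongoDB write acknowledgment.
    ack_times: milliseconds at which each replica set member (primary first)
    acknowledged the write. Returns the time the write concern was satisfied,
    or raises TimeoutError (the wtimeout behavior)."""
    if w == "majority":
        needed = replica_count // 2 + 1
    else:
        needed = w                      # w:0 means fire-and-forget
    if needed == 0:
        return 0                        # no acknowledgment requested
    ordered = sorted(ack_times)
    satisfied_at = ordered[needed - 1] if needed <= len(ordered) else None
    if satisfied_at is None or satisfied_at > wtimeout_ms:
        raise TimeoutError(f"write concern w:{w} not met within {wtimeout_ms}ms")
    return satisfied_at

# Primary acks at 5ms; two secondaries at 20ms and 90ms (3-member replica set):
print(await_write_concern([5, 20, 90], w=1, replica_count=3, wtimeout_ms=50))           # 5
print(await_write_concern([5, 20, 90], w="majority", replica_count=3, wtimeout_ms=50))  # 20
```

Note how w:majority only has to wait for the second-fastest member — the slow 90ms secondary never blocks the write.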

As a lasting point on database writes (the U and D in CRUD), all write operations in MongoDB🍃 are atomic at the level of a single document. In other words, whether you are updating a single document or multiple documents in a collection, each individual update is atomic only at the single-document level.

Closing time 🕰 … Time 🕰 for you to go out go out into the world 🌎 … Closing time 🕰…

Finally, the dénouement of the course is on Common SQL Concepts and Semantics in MongoDB🍃, including some of the major differences between a typical RDBMS and MongoDB🍃, which can be represented by the table below:

RDBMS                        MongoDB
SQL                          MQL (Mongo Query Language)
Predefined schema            Dynamic schema
Relational keys              No foreign keys
Triggers                     No triggers
ACID properties              CAP theorem

Sadly 😢, that was it for Foundations of Document Databases with MongoDB. This left us clamoring for more. Fortunately, Nuri Halperin happily😊 delivered and then some… Nuri, a MongoDB🍃 Guru 🧙‍♂️ and a Love❤️ 👨‍⚕️ of sorts (known for creating the wildly popular jdate.com platform), put together a series of timeless MongoDB🍃 courses that have managed to stand the test of time 🕰, which, I might add, is not an easy feat when it comes to a burgeoning technology like MongoDB🍃.

Part 1: The introduction course, Introduction to MongoDB, an in-depth look at the Mongo Shell (CLI), CRUD syntax, and indexing

Part 2: MongoDB Administration takes a deep dive into key MongoDB🍃 administration concepts, i.e. installation, configuration, security, backup/restore, monitoring, high availability, and performance

Nuri, like Pinal, discusses some of the challenges found in relational databases, like the impedance mismatch and developers’ need for Object-Relational Mapping (ORM). He demonstrates how MongoDB🍃 solves these challenges through its schema-less approach and no-relationships-required model. In addition, he touches on how MongoDB🍃 lends itself nicely to data polymorphism.

Next, he takes us through the MongoDB architecture: a collection of humongous arrays that utilizes memory-mapped BSON (“Binary JSON”, a binary-encoded serialization of JavaScript Object Notation) files. MongoDB🍃 intuitively leverages the OS to handle the loading of data and saving to disk, which allows the engine to center on speed, optimization, and stability.

MongoDB’s🍃 main mission is to just serve up data quickly and efficiently. Next, Nuri takes us through the Mongo Shell (CLI), which is basically just a Java☕️Script interpreter that allows you to interactively get insight into the MongoDB🍃 server. Further, he discusses indexes, the types of indexes, and how paramount indexes are in MongoDB🍃 for practical usability.

Lastly, Nuri takes us through MongoDB🍃 Replication, which uses simple-to-configure but highly scalable replica sets. This is how MongoDB🍃 achieves “eventual consistency”, automatic failover, and automatic recovery… And this is just the introduction of Part 2 of the course…

Like trying to watch all 3 parts of The Lord of the Rings 💍 (Director’s Edition) trilogy in a single helping, it just wasn’t possible to complete all of Nuri’s two-part series in a single week, but we did get through most of it. 😊 This also left us with a little bit more on our plate🍽 as we continue through our Mongo journey…

This Week’s Log

Out of the tree 🌳 of life I just picked me a plum… You came along and everything started’ in to hum 🎶… Still it’s a real good bet… The best is yet to come

Below are some topics I am considering for my voyage next week:

  • More with Nuri and MongoDB 
  • Cosmos DB
  • More with Google Cloud Path
  • Working with Parquet files 
  • JDBC Drivers
  • More on Machine Learning
  • ONTAP Cluster Fundamentals
  • Data Visualization Tools (i.e. Looker)
  • Additional ETL Solutions (Stitch, FiveTran) 
  • Process and Transforming data/Explore data through ML (i.e. Databricks)

Stay safe and Be well –


Week of June 26 Detailed Log

CRUD: Create One – Command Prompt

 mongo "mongodb+srv://cluster****.azure.mongodb.net" --username newuser --password *****


show dbs

use sqlauthoritynew


show collections



db.newusers.insertOne(
{
"DisplayName":"Pinal Dave",
"job": {
"Area":"Database Performance Tuning",
"Programming Language":["T-SQL","JS","HTML"]
}
} )

db.newusers.find( {} )

db.newusers.find( {} ).pretty()

Demo: CRUD: Create Many – Command Prompt





db.newusers.insertMany(
[
{
"DisplayName":"Pinal Dave",
"job": {
"Area":"Database Performance Tuning",
"Programming Language":["T-SQL","JS","HTML"]
}
},
{
"DisplayName":"Jason Brown",
"job": {
"Area":"Database Performance Tuning",
"Programming Language":["T-SQL","JS","HTML"]
}
},
{
"DisplayName":"Mark Smith",
"job": {
"Area":"Database Performance Tuning",
"Programming Language":["T-SQL","HTML"]
}
}
] )




db.newusers.find( {} ).pretty()

CRUD Operations: Retrieving Objects


show dbs

use sample_mflix

show collections



db.movies.find({runtime: 11}).pretty()

db.movies.find({runtime: 11}).pretty().limit(3)

db.movies.find({runtime: 11},{runtime:1, title:1,_id:0}).pretty().limit(3)

db.movies.find({runtime: 11},{runtime:1, title:1}).pretty().limit(3)

db.movies.find({runtime: 11},{runtime:1, title:1}).pretty().limit(5).sort({title: 1})

db.movies.find({runtime: 11},{runtime:1, title:1}).pretty().limit(5).sort({title: -1})

db.movies.find({runtime: 11},{runtime:1, title:1}).pretty().limit(5).sort({title: -1}).readConcern("majority")

db.movies.find({runtime: 11},{runtime:1, title:1}).pretty().limit(5).sort({title: -1}).readConcern("linearizable").maxTimeMS(10000)

CRUD Operations: Updating and Deleting Objects

use sample_mflix

CRUD: Update One – Command Prompt

db.movies.updateOne( {title: {$eq: "The Old Crocodile" }},{ $set: { "title": "The New Crocodile" }} )

db.movies.find({runtime: 12},{runtime:1, title:1, year:1, _id:0}).pretty().limit(3).sort({title: -1})

db.movies.updateOne( {title: {$eq: "The New Crocodile" }},{ $set: { "title": "The Oldest Crocodile", "Year": 2020 }} )

db.movies.find({runtime: 12},{runtime:1, title:1, year:1, _id:0}).pretty().limit(3).sort({title: -1})

db.movies.updateOne( {title: {$eq: "The Oldest Crocodile" }},{ $set: { "title": "The New Crocodile", "year": 2020 }} )

db.movies.find({runtime: 12},{runtime:1, title:1, year:1, Year:1, _id:0}).pretty().limit(3).sort({title: -1})

CRUD: Update One – Command Prompt

db.movies.find({year: {$eq: 1988}} ).count()

db.movies.find({year: {$eq: 2025}} ).count()

db.movies.updateMany({year: {$eq: 1988}}, { $set: { "year": 2025 }})

db.movies.updateMany({year: {$eq: 1988}}, { $set: { "year": 2025 }},{upsert:true})

db.movies.updateMany({year: {$eq: 1988}}, { $set: { "title":"MySuperFunnyTitle","awards.wins":9 }},{upsert:true})

db.movies.find({year: {$eq: 1988}} ).pretty()

db.movies.updateMany({runtime: {$eq: 1122}}, { $set: { "title": "MySuperFunnyTitle","Year": 2020, "awards.wins":9 }},{upsert:true, w:"majority", wtimeout:1000})

db.movies.find({runtime: 1122}).pretty()

db.movies.replaceOne({runtime: {$eq: 1122}}, { runtime:1122, "NoTitle": "ReplaceOneExample", "NewYear": 2020, "awards.losts": 5} )

Demo: CRUD: Delete – Command Prompt

db.movies.find({runtime: 25}).count()

db.movies.deleteOne( {runtime: 25})

db.movies.deleteMany( {runtime: 25})

db.movies.find({runtime: 25}).count()

db.movies.find({runtime: 35}).count()

db.movies.remove({runtime: 35}, true )

db.movies.remove({runtime: 35} )


show collections

show dbs

db.foo.save({_id:1, x:10})

show collections

Set Operator

db.a.save({_id:1, x:10})





Unset Operator


Rename Operator  

db.a.save({_id:1, Naem:'bob'})

db.a.update({_id:1},{$rename:{'Naem': 'Name'}})

Push Operator  


db.a.update({_id:2},{$push:{things: 'One'}})

db.a.update({_id:2},{$push:{things: 'Two'}})

db.a.update({_id:2},{$push:{things: 'Three'}})

db.a.update({_id:2},{$addToSet:{things: 'four'}})

Pull Operator 

db.a.update({_id:2},{$push:{things: 'Three'}})

db.a.update({_id:2},{$pull:{things: 'Three'}})

Pop Operator 

#Remove Last field

db.a.update({_id:2},{$pop:{things: 1 }}) 

#Remove First Field

db.a.update({_id:2},{$pop:{things: -1 }})

Multi Update  

# Updates only 1st record

db.a.update({},{$push:{things: 4 }});

#Updates all records

db.a.update({},{$push:{things: 4 }},{multi:true});

#Updates all records that have 2 in the array

db.a.update({things:2} ,{$push:{things: 42}},{multi:true});


# Comparison operators, returning only the _id field


db.a.find({_id: {$gt:2} }, {_id:1})

db.a.find({_id: {$lt:2} }, {_id:1})

db.a.find({_id: {$lte:2} }, {_id:1})

db.a.find({_id: {$gte:2} }, {_id:1})


db.a.find({_id: {$gte:2,$lt:4} }, {_id:1})

db.a.find({_id: {$not: {$gt:2}}}, {_id:1})

db.a.find({_id: {$in: [1,3] } }, {_id:1})

db.a.find({_id: {$nin: [1,3] } }, {_id:1})

#Sort Asc


#Sort Desc



#Show Indexes 



db.a.find({things: 4}).explain()

Create Index


Drop Index


Week of June 19th

“I had some dreams, they were clouds ☁️ in my coffee☕️ … Clouds ☁️ in my coffee ☕️ , and…”

Hi All –

Last week, we explored Google’s fully managed “No-Ops” Cloud ☁️ DW solution, BigQuery🔎. So naturally it made sense to drink🍹more of the Google Kool-Aid and further discover the data infrastructure offerings within the Google fiefdom 👑. Besides we have been wanting to find what all the hype was about with Datafusion ☢️ for some time now which we finally did and happily😊 wound up getting a whole lot more than we bargained for…

To take us through Google’s stratosphere☁️ would be none other than some of the more prominent members of the Google Cloud Team: Evan Jones, Julie Price, and Gwendolyn Stripling. Apparently, these Googlers (all of whom seem to have mastered the art of using their hands👐 while speaking) collaborated with other data aficionados at Google on a 6-course compilation of awesomeness😎 for the Data Engineering on Google Cloud☁️ Path. The course that fit the bill to start this week’s learning off was Building Batch Data Pipelines on GCP.

Before we were able to dive right into Data Fusion☢️, we first started off with a brief review of EL (Extract and Load), ELT (Extract, Load, and Transform), and ETL (Extract, Transform, and Load).

The best way to think of these types of data extraction is the following:

  • EL is like a package📦 delivered right to your door🚪 where the contents can be taken right out of the box and used (data can be imported “as is”).
  • ELT is like a hand truck 🛒 which allows you to move packages easily, but the packages 📦 📦 still need to be unloaded and the items possibly stored a particular way.
  • ETL is like a forklift 🚜: this is when heavy lifting needs to be done to transform packages and have them fit in the right place.
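The three patterns can be sketched in a few lines of Python — extract, transform, and load here are illustrative stand-ins, not a real pipeline:

```python
# Illustrative sketch only — these functions are hypothetical stand-ins.
def extract():           return [" alice ", " bob "]
def transform(rows):     return [r.strip().title() for r in rows]   # the heavy lifting
def load(rows, target):  target.extend(rows); return target

warehouse = []

# EL:  load the data "as is" — the package comes ready to use.
load(extract(), warehouse)

# ELT: load first, then transform inside the target warehouse (BigQuery-style).
warehouse = transform(load(extract(), []))

# ETL: transform in an intermediate engine before loading (the forklift).
warehouse = load(transform(extract()), [])

print(warehouse)   # ['Alice', 'Bob']
```

Note that ELT and ETL arrive at the same result — what differs is where the transform runs, which is exactly why the choice hinges on how much heavy lifting the target system can do.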

In the case of EL and ELT, our flavor du jour in the Data Warehouse space, BigQuery🔎, is an ideal target 🎯 system, but when you need the heavy artillery (ETL), that’s when you’ve got to bring in an intermediate solution. The best way to achieve these goals is with the following: 

  • Data pipelines 
  • Manage pipelines 
  • UI to build those pipelines

Google offers several data transformation and streaming pipeline solutions (Dataproc🔧 and Dataflow🚰) and one easy-to-use UI (Data Fusion☢️) that makes it easy to build those pipelines. Our first stop was Dataproc🔧, which is a fast, easy-to-use, fully managed cloud☁️ service for Apache Spark⚡️ and Apache Hadoop🐘 clusters. Hadoop🐘 solutions are generally not really our area of expertise, but since Data Fusion☢️ sits on top of Dataproc🔧, it was worth our while to spend some time here and get a good general understanding of how it all works.

Next, we ventured over to the much anticipated Data Fusion☢️, which was more than worth the wait! Data Fusion☢️ uses ephemeral Dataproc🔧 VMs to perform all the transforms in batch data pipelines (streaming is currently not supported but coming soon through Dataflow🚰 support). Under the hood, Data Fusion☢️ leverages five main components:

1.     Kubernetes☸️ Engine (runs in a containerized environment on GKE)

2.     Key🔑 Management Service (For Security)

3.     Persistent Disk

4.     Google Cloud☁️ Storage (GCS) (For long term storage)

5.     Cloud☁️ SQL – (To manages user and Pipeline Data)

The good news is that you don’t really need to muck around with any of these components. In fact, you shouldn’t even concern yourself with them at all. I just mentioned them because I thought it was kind of a cool stack 😎. The most important part of Data Fusion☢️ is the Data Fusion☢️ Studio, which is the graphical “no code” tool that allows data analysts and ETL developers to wrangle data and build batch data pipelines. Basically, it allows you to build pretty complex pipelines by simple “drag and drop”.

“Don’t reinvent the wheel, just realign it.” – Anthony J. D’Angelo

So now, with a cool 😎 and easy-to-use batch pipeline UI under our belt, what about a way to orchestrate all these pipelines? Well, Google pulled no punches🥊🥊 and gave us Cloud☁️ Composer, which is a fully managed data workflow and orchestration service that allows you to schedule and monitor pipelines. Following the motto of “not reinventing the wheel”, Cloud☁️ Composer leverages Apache Airflow 🌬.

For those who don’t know, Apache Airflow 🌬 is the popular data pipeline orchestration tool originally developed by the fine folks at Airbnb. Airflow🌬 is written in Python 🐍 (our new favorite programming language), and workflows are created via Python 🐍 scripts (1 Python🐍 file per DAG). Airflow 🌬 uses directed acyclic graphs (DAGs) to manage workflow orchestration. Not to be confused with an uncool person or an unpleasant sight on a sheep 🐑, a DAG* is simply a collection of all the tasks you want to run, organized in a way that reflects their relationships and dependencies.
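At its core, a DAG scheduler just runs tasks in dependency order, and Python’s standard library can even do the ordering for us (graphlib, Python 3.9+). A minimal sketch — not an actual Airflow DAG file, where you would declare operators instead:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# A DAG is just tasks plus "must run after" dependencies — here each key
# lists the tasks it depends on, like an Airflow task's upstream tasks.
dag = {
    "extract": [],
    "transform": ["extract"],
    "load": ["transform"],
    "notify": ["load"],
}

# The topological order guarantees dependencies run before dependents,
# and the "acyclic" part means a valid order always exists.
order = list(TopologicalSorter(dag).static_order())
print(order)   # ['extract', 'transform', 'load', 'notify']
```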

Take a bow for the new revolution… Smile and grin at the change all around me

Next up on our adventures was Dataflow🚰, which is a fully managed streaming 🌊 analytics service that minimizes latency, processing time, and cost through autoscaling, and also handles batch processing. So why Dataflow🚰 and not Dataproc🔧?

No doubt, Dataproc🔧 is a solid data pipeline solution that meets most requirements for either batch or streaming🌊 pipelines, but it’s a bit clunky and requires existing knowledge of Hadoop🐘/Spark⚡️ infrastructure. 

Dataproc🔧 is still an ideal solution for those who want to bridge 🌉 the gap by moving their on-premise Big Data infrastructure to GCP. However, if you have a greenfield project, then Dataflow🚰 definitely seems like the way to go.

Dataflow🚰 is “serverless”, which means the service “just works” and you don’t need to manage it! Once again, Google holds true to form with our earlier mantra (“not reinventing the wheel”), as Cloud Dataflow🚰 is built on top of the popular batch and streaming pipeline solution Apache Beam.

For those not familiar with Apache BEAM (Batch and strEAM), it was also developed by Google to ensure the perfect marriage between batch and streaming data-parallel processing pipelines. A true work of art!

The show must go on….

So ya… Thought ya. Might like to go to the show…To feel the warm🔥 thrill of confusion. That space cadet glow

Now that we were on a roll with our journey through GCP’s data ecosystem, it seemed logical to continue our path with the next course, Building Resilient Streaming Analytics Systems on GCP. This exposition was taught by the brilliant Raiyann Serang, who maintains a well-kempt hairdo throughout his presentations, the distinguished Nitin Aggarwal, as well as the aforementioned Evan Jones.

First, Raiyann takes us through a brief introduction to streaming 🌊 data (data processing for unbounded data sets). In addition, he provides the reasons for streaming 🌊 data and the value it provides to the business by enabling real-time information in a dashboard or another means to see the current state. He touches on the ideal architectural model using Google Pub/Sub and Dataflow🚰 to construct a data pipeline that minimizes latency at each step during the ingestion process.

Next, he laments about the infamous 3Vs (Volume, Velocity, and Variety) in regards to streaming 🌊 data and how a data engineer might deal with these challenges.

Volume

  • How to ingest this data into the system?
  • How to store and organize data to process this data quickly?
  • How will the storage layer be integrated with other processing layers?

Velocity

  • 10,000 records/sec being transferred (stock market data, etc.)
  • How do systems need to be able to handle the load change?

Variety

  • Type and format of data and the constraints of processing

Next, he provides a preview of the rest of the course as he unveils Google’s triumvirate for the streaming data challenge: Pub/Sub to deal with variable volumes of data, Dataflow🚰 to process data without undue delays, and BigQuery🔎 to address the need for ad-hoc analysis and immediate insights.

Pure Gold!

After a great introduction, Raiyann takes us right to Pub/Sub. Fortunately, we had been to this rodeo before and were well aware of the value of Pub/Sub. Pub/Sub is a ready-to-use asynchronous distribution system that fully manages data ingestion for both cloud ☁️ and on-premise environments. It’s a highly desirable solution when it comes to streaming because of how well it addresses availability, durability, and scalability.

The short story around Pub/Sub is a story of two data structures: the topic and the subscription. The Pub/Sub client that creates the topic is called the publisher, and the Cloud Pub/Sub client that creates the subscription is the subscriber. Pub/Sub provides both pull deliveries (clients periodically call for messages and have to acknowledge each message as a separate step) and push deliveries (Pub/Sub sends messages to a subscriber-configured endpoint).
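The topic/subscription fan-out can be sketched with a toy in-memory model — illustrative only, since the real service is a distributed, durable system:

```python
from collections import defaultdict, deque

class Topic:
    """Toy Pub/Sub: a topic fans each published message out to every
    subscription; each subscriber then pulls and acknowledges separately."""
    def __init__(self):
        self.subscriptions = defaultdict(deque)   # name -> undelivered messages

    def subscribe(self, name):
        self.subscriptions[name]                  # create the subscription queue

    def publish(self, message):
        for queue in self.subscriptions.values():
            queue.append(message)                 # decoupled, asynchronous delivery

    def pull(self, name):
        q = self.subscriptions[name]
        return q[0] if q else None                # peek; not yet acknowledged

    def ack(self, name):
        self.subscriptions[name].popleft()        # removed only after the ack

topic = Topic()
topic.subscribe("dataflow")
topic.subscribe("audit")
topic.publish({"event": "trade", "qty": 100})
msg = topic.pull("dataflow")    # pull delivery: fetch the message...
topic.ack("dataflow")           # ...then acknowledge it as a separate step
print(msg, topic.pull("audit"))
```

Note that acking the "dataflow" subscription doesn’t touch the "audit" one — each subscription tracks its own undelivered messages, which is what decouples the subscribers.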

Now that we covered how to ingest data, it was time to move to the next major piece in our data architectural model: how to process the data without undue delays, with Dataflow🚰.

Taking us through this part of the journey would be Nitin. We had already covered Dataflow🚰 earlier in the week in the previous course, but that was only in regards to batch data (bounded, or unchanging, data) pipelines. 

Dataflow🚰, if you remember, is built on Apache Beam; in other words, it has “the need for speed” and can support streams🌊 of data. Dataflow🚰 is highly scalable with low-latency processing pipelines for incoming messages. Nitin further discusses the major challenges of handling streaming or real-time data and how Dataflow🚰 tackles these obstacles.

  • Streaming 🌊 data generally only grows larger and more frequent
  • Fault Tolerance – Maintain fault tolerance despite increasing volumes of data
  • Model – Is it streaming or repeated batch?
  • Time – (Latency) what if data arrives late

Next, he discusses one of Dataflow’s🚰 key strengths, “Windowing”, and provides details on the three kinds of windows.

  • Fixed – divides data into consistent, non-overlapping time slices
  • Sliding – overlapping windows, often required when doing aggregations over unbounded data
  • Sessions – defined by a minimum gap duration, with timing triggered by another element (communication is bursty)
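Fixed and sliding windows are easy to sketch with a toy window-assignment function — illustrative only; Beam’s actual windowing API looks different:

```python
def fixed_window(ts, size):
    """Assign an event timestamp to its fixed (non-overlapping) window [start, end)."""
    start = (ts // size) * size
    return (start, start + size)

def sliding_windows(ts, size, period):
    """All sliding windows of length `size`, advancing every `period`, that
    contain the event — windows overlap, so one event lands in several."""
    first = ((ts - size) // period + 1) * period
    return [(s, s + size) for s in range(max(first, 0), ts + 1, period)
            if s <= ts < s + size]

print(fixed_window(125, 60))          # (120, 180): exactly one 60s slice
print(sliding_windows(125, 60, 30))   # overlapping 60s windows, every 30s
```

An event at t=125 falls in exactly one fixed 60-second window, but in two 60-second sliding windows that advance every 30 seconds — that overlap is what makes sliding windows useful for rolling aggregations.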

Then Nitin rounds it off with one of the key concepts when it comes to streaming🌊 data pipelines: the “watermark trigger”. The summit of this module is the lab on Streaming🌊 Data Processing, which requires building a full end-to-end solution using Pub/Sub, Dataflow🚰, and BigQuery. In addition, he gave us a nice introduction to Google Cloud☁️ Monitoring, which we had not seen before.

So much larger than life… I’m going to watch it growing 

We next headed over to another spoke in the data architecture wheel 🎡 with Google’s Bigtable. Bigtable (built on Colossus) is Google’s NoSQL solution for high-performance applications. We hadn’t done much with NoSQL up until this point, so this module offered us a great primer for future travels.

Bigtable is ideal for storing very large amounts of non-structured data in a key-value store, and it supports high read and write throughput at low latency for fast access to large datasets. However, Bigtable is not a good solution for structured data, small data (< 1 TB), or data that requires SQL joins. Bigtable is good for specific use cases like real-time lookups as part of an application, where speed and efficiency are desired beyond that of a traditional database. When Bigtable is a good match for specific workloads, “it’s so consistently fast that it is magical 🧙‍♂️”.

“And down the stretch they 🐎 come!”

Next up, Evan takes us down the homestretch by surveying Advanced BigQuery 🔎 Functionality and Performance. He first begins with an overview and a demo of BigQuery 🔎 GIS (Geographic Information Systems) functions, which allow you to analyze and visualize geospatial data in BigQuery🔎. This is a little beyond the scope of our usual musings, but it's good to know from an informational standpoint. Then Evan covers a critical topic for any data engineer or analyst to understand: how to break a single data set apart into groups using window functions.
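The window-function idea — computing per-group rankings or aggregates without collapsing the rows — is standard SQL, so here is a small sketch using Python's built-in `sqlite3` as a stand-in for BigQuery (SQLite has supported window functions since 3.25; the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE race_times (runner TEXT, division TEXT, seconds INT);
INSERT INTO race_times VALUES
  ('amy', 'F', 340), ('bea', 'F', 310), ('cal', 'M', 355), ('dan', 'M', 320);
""")

# RANK() OVER (PARTITION BY ...) ranks runners *within* each division
# while every input row survives in the output
rows = conn.execute("""
SELECT runner, division,
       RANK() OVER (PARTITION BY division ORDER BY seconds) AS div_rank
FROM race_times
ORDER BY division, div_rank
""").fetchall()
```

A plain `GROUP BY division` would have squashed each division down to one row; the window function keeps every runner while still answering the per-group question.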

This is followed by a lab that demonstrated some neat tricks for reducing I/O, caching results, and performing efficient joins by using the WITH clause, changing the region location parameter, and denormalizing the data, respectively. Finally, Evan leaves us with a nice parting gift by providing a handy cheat sheet and a quick lab on Partitioned Tables in Google BigQuery🔎.
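The WITH clause (a Common Table Expression) from the lab names an intermediate result once so you don't restate the same subquery. Sketched again with `sqlite3` standing in for BigQuery (schema and names invented for illustration; in BigQuery the WITH clause is mainly a readability aid, and result caching happens automatically per query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INT, customer_id INT, total REAL);
INSERT INTO orders VALUES (1, 10, 5.0), (2, 10, 7.5), (3, 11, 2.0);
""")

# WITH gives the aggregated subquery a name, so the outer query
# reads like a pipeline: aggregate first, then filter the aggregate
rows = conn.execute("""
WITH per_customer AS (
  SELECT customer_id, SUM(total) AS spend
  FROM orders
  GROUP BY customer_id
)
SELECT customer_id, spend
FROM per_customer
WHERE spend > 4
ORDER BY customer_id
""").fetchall()
```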

* A DAG (Directed Acyclic Graph) is a directed graph data structure that admits a topological ordering: the sequence can only go from earlier to later. DAGs are often applied to problems related to data processing, scheduling, finding the best route in navigation, and data compression.
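That topological ordering is exactly what data-pipeline schedulers compute before running tasks. A minimal sketch of Kahn's algorithm (the task names here are made-up examples):

```python
from collections import deque

def topo_sort(nodes, edges):
    # Kahn's algorithm: repeatedly emit a node whose prerequisites are all done.
    indeg = {n: 0 for n in nodes}
    adj = {n: [] for n in nodes}
    for a, b in edges:          # edge a -> b means "a must run before b"
        adj[a].append(b)
        indeg[b] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in adj[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("graph has a cycle; not a DAG")
    return order
```

For a toy ETL graph extract → transform → load, the algorithm returns the tasks in dependency order, and it refuses any graph with a cycle, which is why schedulers insist on the "acyclic" part.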

“It’s something unpredictable, but in the end it’s right”

Below are some topics I am considering for my travels next week:

  • NoSQL – MongoDB, Cosmos DB
  • More on Google Data Engineering with Google Cloud Path <- Google Cloud Certified Professional Data Engineer
  • Working with JSON Files
  • Working with Parquet files 
  • JDBC Drivers
  • More on Machine Learning
  • ONTAP Cluster Fundamentals
  • Data Visualization Tools (i.e. Looker)
  • Additional ETL Solutions (Stitch, FiveTran) 
  • Process and Transforming data/Explore data through ML (i.e. Databricks)

Stay safe and Be well


Week of June 12th

“Count to infinity, ones and zeroes”

Happy Russia 🇷🇺 Day !

Earth🌎 below us, Drifting, falling, Floating, weightless, Coming, coming home

After spending the previous two weeks in the far reaches 🚀 of space 🌌, it was time 🕰 to return to my normal sphere of activity and back to the more familiar data realm. This week we took a journey into Google's Data Warehouse solution, better known as BigQuery🔎. This was our third go-around in GCP, as we previously looked at Google's Compute offerings and its data messaging service Pub/Sub.

It seems to me like the more we dive into the Google Cloud Platform ecosystem, the more impressed we become with the GCP offerings. As for Data Warehouses, we had previously visited with Snowflake❄️, which we found to be a nifty SaaS solution for DW. After tinkering around a bit with BigQuery🔎, we found it to be equally utilitarian. BigQuery🔎, like its strong competitor, offers a "No-Ops" approach to data warehousing while adhering to the 3Vs of Big Data. Although we didn't benchmark either DW solution, both offerings are highly performant on the 99-query TPC-DS industry benchmark.

In the case of Snowflake❄️, it offers data professionals the flexibility to scale compute and storage resources up and down independently based on workloads, whereas BigQuery 🔎 is serverless and all scaling is done automatically. BigQuery🔎 doesn't use indexes but rather relies on its awesome clustering technology to make its queries scream 💥.

Both Snowflake❄️ and BigQuery🔎 are quite similar with their low maintenance and minimal task administration, but where both products make your head spin🌪 is trying to make heads or tails out of their pricing models. Snowflake's❄️ pricing is a little bit easier to interpret, whereas with BigQuery🔎 it seems like you need a Ph.D. in cost models just to read through the fine print.

Which product offers a better TCO really depends on your individual workloads. From what I gather, if you're running lots of queries sporadically, with high idle time, then BigQuery🔎 is the place to be from a pricing standpoint. However, if your workloads are more consistent, then it's probably more cost effective to go with Snowflake❄️ based on their pay-as-you-go model.

To assist us on our exploration of BigQuery was the founder of LoonyCorn, the bright and talented Janani Ravi, through her excellent Pluralsight course. Janani gives a great overview of the solution and keeps the flow of the course at a reasonable pace as we take a deep dive into this complex data technology.

One interesting observation about the course, which was published about a year and a half ago (15 Oct 2018), is how many improvements Google has made to the product offering since then, including a refined UI, more options for partition keys, and enhancements to the Python module. The course touches on design, comparisons to RDBMSs and other DWs, and shows us ingestion of different file types, including the popular binary format Avro.

The meat🥩 and potatoes🥔 of the course is the section on Programmatically Accessing BigQuery from Client Programs. This is where Janani goes into some advanced query options like the UNNEST, ARRAY_AGG, and STRUCT operators and the powerful windowing operations.
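Conceptually, UNNEST explodes a repeated field (an ARRAY of STRUCTs) into flat rows, and ARRAY_AGG gathers rows back into arrays. Here is a plain-Python analogy of that round trip, with dicts standing in for STRUCTs and an invented orders schema (not BigQuery code, just the shape of the operations):

```python
from collections import defaultdict

# One BigQuery-style row per order, each holding a repeated field
# ("items", an ARRAY of STRUCTs), modeled as a list of dicts
orders = [
    {"order_id": 1, "items": [{"sku": "A", "qty": 2}, {"sku": "B", "qty": 1}]},
    {"order_id": 2, "items": [{"sku": "A", "qty": 3}]},
]

# UNNEST: explode each array element into its own flat row,
# carrying the parent's columns alongside it
flat = [
    {"order_id": o["order_id"], **item}
    for o in orders
    for item in o["items"]
]

# ARRAY_AGG: the inverse — collect the flat rows back into
# one array per grouping key (here, per SKU)
by_sku = defaultdict(list)
for row in flat:
    by_sku[row["sku"]].append(row["order_id"])
```

In BigQuery the same shapes show up as `SELECT ... FROM orders, UNNEST(items)` going one way and `ARRAY_AGG(order_id) ... GROUP BY sku` going back.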

See log for more details

To round out the course, she takes us through some additional nuggets in GCP like Google Data Studio (https://datastudio.google.com/) for data visualization, and Cloud☁️ Notebooks📓 with Python by utilizing Google Datalab🧪.

Stay, ahhh Just a little bit longer Please, please, please, please, please Tell me that you’re going to….

Below are some topics I am considering for my travels next week:

  • Google Cloud Data Fusion (EL/ETL/ELT)
  • More on Google Big Query
  • More on Data Pipelines
  • NoSQL – MongoDB, Cosmos DB
  • Working with JSON Files
  • Working with Parquet files 
  • JDBC Drivers
  • More on Machine Learning
  • ONTAP Cluster Fundamentals
  • Data Visualization Tools (i.e. Looker)
  • ETL Solutions (Stitch, FiveTran) 
  • Process and Transforming data/Explore data through ML (i.e. Databricks)

Stay safe and Be well –