Week of April 3rd

“The other day, I met a bear. A great big bear, a-way out there.”

As reported last week, I began to dip my toe into the wonderful world of Python. I wasn’t able to finish the Core Python: Getting Started Pluralsight course by Robert Smallshire and Austin Bingham, so I did some extended learning over the weekend. I finished the “Iteration and Iterables” module that I had started on Friday, then spent the rest of the weekend on the “Classes” module, which was nothing short of a nightmare. I spent numerous hours trying to debug my horrific code and rewatching the lessons in that module over and over again, which left me with the conclusion that I simply don’t get object-oriented programming and probably never will.
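
For context, here is a minimal sketch of the sort of thing the Classes module has you write: a class with a constructor, a method, and the dunder methods that make it play nicely with iteration. The names (Playlist, the track titles) are made up for illustration and are not the course’s own example.

```python
# A minimal sketch of the kind of class the module covers.
# "Playlist" and the track names are invented for illustration.

class Playlist:
    """A small collection type with a constructor, a method, and iteration."""

    def __init__(self, name):
        self._name = name
        self._tracks = []

    def add(self, title):
        self._tracks.append(title)

    def __len__(self):
        return len(self._tracks)

    def __iter__(self):
        # Making the class iterable ties back to the Iteration and Iterables module.
        return iter(self._tracks)


if __name__ == "__main__":
    p = Playlist("Road Trip")
    p.add("Song One")
    p.add("Song Two")
    for track in p:          # works because __iter__ is defined
        print(track)
    print(len(p), "tracks")
```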


Ironically, that is a conclusion I reached almost 25 years ago in my last class at the University at Albany, which was on C++ object-oriented programming. Fortunately, I escaped that one with a solid D-, passed Go, collected my $200, and moved on to the working world. So after languishing with classes in Python, on Monday I proceeded to the final module on File IO and Resource Management, which seemed far more straightforward and practical.
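
The File IO and Resource Management material, by contrast, mostly boils down to letting the with statement clean up after you. A minimal sketch (the file name is just an example, not taken from the course):

```python
# A small sketch of the File IO / Resource Management idea: the "with"
# statement closes the file even if an exception occurs along the way.
# "report.txt" is just a placeholder file name.

def write_report(path, lines):
    with open(path, "w", encoding="utf-8") as f:   # f is closed automatically
        for line in lines:
            f.write(line + "\n")


def count_lines(path):
    with open(path, "r", encoding="utf-8") as f:
        return sum(1 for _ in f)


if __name__ == "__main__":
    write_report("report.txt", ["first line", "second line"])
    print(count_lines("report.txt"))  # -> 2
```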


On Tuesday, life got a whole lot easier when I installed Anaconda Navigator. Up until this point, I had been writing my Python scripts in the TextWrangler editor on the Mac, which was not ideal.


Through Anaconda, I discovered the Spyder IDE, which was like a breath of fresh air. No longer did I have to worry about aligning spaces or matching parentheses, curly braces, and square brackets. With a proper IDE in place, I was able to begin my journey into the Pandas jungle…


Here is what I did:

  1. Completed the Pandas Fundamentals course
  2. Installed Anaconda, the pandas Python module, and SQLite
  3. Created pandas/Python scripts (a condensed sketch of a few of these follows the list):
       1. Read in a CSV file (the Tate Museum collection) and output it to a pickle file
       2. Read in a JSON file and write the output to the screen
       3. Traverse directories containing multiple JSON files and write the output to a file
       4. Perform iteration, aggregation, and filtering (transformation)
       5. Create indexes on data from the CSV file for faster retrieval of data
       6. Read a data source (the Tate Museum collection) and output the data to Excel spreadsheets, with multiple-column, multiple-sheet, and colored-column options
       7. Connect to an RDBMS using the SQLAlchemy module (a SQLite database as a POC), creating a table and writing data to it from a data source (pickle file)
       8. Create JSON file output from a data source (pickle file)
       9. Create a graph using the matplotlib and matplotlib.pyplot modules. See attachment.
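
For posterity, here is a condensed sketch of what a few of those scripts do (steps 1, 4, 6, 7, and 9). The file and column names (“artwork_data.csv”, “artist”) are placeholders rather than the exact Tate dataset schema, and the code is stitched together from memory rather than copied from my actual scripts:

```python
# A condensed sketch of steps 1, 4, 6, 7, and 9 above.
# "artwork_data.csv" and the "artist" column are placeholder names,
# not necessarily the real Tate dataset schema.
import pandas as pd
import matplotlib.pyplot as plt
from sqlalchemy import create_engine

# 1. Read the CSV and save it as a pickle file.
df = pd.read_csv("artwork_data.csv")
df.to_pickle("collection.pkl")

# 4. Aggregation and filtering: number of works per artist.
df = pd.read_pickle("collection.pkl")
per_artist = df.groupby("artist").size().sort_values(ascending=False)

# 6. Write to Excel with multiple sheets (requires the openpyxl package).
with pd.ExcelWriter("collection.xlsx") as writer:
    df.head(100).to_excel(writer, sheet_name="sample")
    per_artist.head(20).to_excel(writer, sheet_name="top_artists")

# 7. Push the data into a SQLite database through SQLAlchemy.
engine = create_engine("sqlite:///collection.db")
df.to_sql("artworks", engine, if_exists="replace", index=False)

# 9. A quick bar chart of the most prolific artists.
per_artist.head(10).plot(kind="bar")
plt.tight_layout()
plt.savefig("top_artists.png")
```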

**Bonus Points** Continued to dredge up old nightmares from freshman year of high school as I took a stroll down memory lane with distributing binomials, perfect square binomials, difference-of-squares binomials, factoring perfect square trinomials, factoring differences of squares, F.O.I.L., and other algebraic muses.
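
For my own reference, the identities in question are just the standard algebra rules:

```latex
\begin{align*}
  (a+b)(c+d) &= ac + ad + bc + bd && \text{F.O.I.L. (First, Outer, Inner, Last)} \\
  (a+b)^2    &= a^2 + 2ab + b^2   && \text{perfect square binomial} \\
  (a+b)(a-b) &= a^2 - b^2         && \text{difference of squares}
\end{align*}
```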

In addition, I revisited conjugating verbs in Español and writing descriptions (en Español) for 9 family members.

Next Steps…

There are many places I still need to explore…

Below are some topics I am considering:

  • A Return to SQL Server Advanced Features:

            – Columnstore Indexes
            – Best practices around SQL Server AlwaysOn (Snapshot Isolation, sizing of tempdb, etc.)

  • Getting Started with Kubernetes with an old buddy (Nigel)
  • Getting Started with Apache Kafka 
  • Understanding Apache ZooKeeper and its use cases

I will give it some thought over the weekend and start fresh on Monday.
Stay safe and be well.

—MCS 
