The Hunt for a Cisco ACI Lab

As an independent consultant, one of the things I have to provide for myself is labs.

It’s a wonderful time for labs! Virtual capabilities and offerings make testing and modeling a client’s network easier than ever.

Cisco DevNet offers “Always On” Sandbox devices that are always there if you need to test a “unit of automation”. Network to Code labs are also available on demand (at a small cost) and a huge time saver.

But Cisco’s Application Centric Infrastructure (ACI) is a different animal altogether.

Cisco offers a simulator (hardware appliance and VM; check out the DevNet Always On APIC to see it in action) which is terrific for automation testing and configuration training, but there is no data plane, so testing routing, vPCs, trunks, and contracts is a non-starter. For that you need the hardware… and so began my search for an ACI lab for rent.

The compilation below is still a work in progress but I thought I would share my findings, so far, over the last 6 months.

First let’s set the context.

I was looking to rent a physical ACI lab (not a simulator) with the following requirements:

  • Gen2 or better hardware (to test ACL logging among other things) and ACI 4.0 or later
  • Accessible (Configurable) Layer 3 device or devices (to test L3 Outs)
  • Accessible (Configurable) Layer 2 device or devices (to test L2)
  • VMM integration (vCenter and ESXi Compute)
  • Pre-configured test VMs
  • My own custom test VMs
  • A means of running Ansible and/or Python within the lab environment and cloning repositories

Going in, I didn’t have a timeframe in mind. I would take a 4 hour block of time or a week and that is still the case. I also realized that it was unlikely anything for rent would meet all of my requirements but that was the baseline I would use to assess the various offerings.

A lower cost option for just a few hours is handy for quick tests.  Having a lab for a week is very nice and gives you the latitude to test without the time pressure.

Out of 13 possible options, I’ve tried four. Two were very good and I would use them again, and two were the opposite.

Recommendations:

While the INE lab is a bit difficult to schedule (you have to plan ahead, which isn’t compatible with my “immediate gratification” approach to things), it’s a terrific lab with configuration access to the L2/L3 devices, which gives you good flexibility.

NterOne also offers a terrific lab and I was able to schedule it within a week of when I called. The lab is superbly documented and designed so as to make it very easy to understand. I got two pods/tenants which gave me some good flexibility in terms of exporting and testing contracts etc. The L2 and L3 devices are read only and pre-configured so those are a little limiting.

Some observations:

  • So far, no one is running Gen2+ equipment.
  • Almost all of the designs I have seen have single-link L2/L3s, so it’s difficult to test vPCs (and you generally need access to the other device unless it’s been pre-configured for you).
  • All the labs were running ACI 4.x

Global Knowledge has some interesting offerings and I was very excited initially but even getting the simplest answer was impossible. Like many larger companies, if you try to deviate from the menu it does not go well. I moved on.

INE, NterOne, and Firefly all spent the time understanding my requirements and offering solutions. Sadly Firefly was way out of my price range.

On a final note, I would avoid My CCIE Rack and CCIE Rack Rentals, which may actually use the same lab. Documentation is terrible, and of the 3 or 4 times I’ve tried, I’ve gotten in about 50% of the time. The first time, I didn’t realize I needed to rent both of their DC labs (one has ACI and the other gives you access to the L2/L3 devices). The last time I rented a lab (both labs) they simply cancelled my labs and never responded to emails either before or after. If you have money you would like to dispose of, send it here (Coral Restoration Foundation Curaçao) or to some other worthy cause. A much better use of those funds, I’d have to say.

If anyone has had a good experience with an ACI lab rental that I’ve not included here, I would love to hear about it!

Kudos to INE and NterOne for great customer service and flexibility!  

Summary


1. The INE staff was open to allowing me to put some of my own repos and tools into the environment, but when I scheduled the lab that became problematic. INE was very honorable and let me have the lab, non-customized, for the week without charge since they were not able to honor my customization request at that time!

2. The student jump box can be customized, which was very nice (I had access to my GitHub repos), and Python was available, although it was Python 2.7.

3. Cost is not unreasonable but there is a minimum of 4 students, so unless you have 3 like-minded friends it becomes very expensive.

4. I’ve always been a big fan of Global Knowledge but my interactions with them were not positive. I could not get even the most basic question answered (for example, did they have a money-back guarantee or 30-day return policy, since I was never able to get my more specific questions answered? I figured if I had 30 days to see if the lab met my requirements then I could test it out and find out for myself.)

5. Great customer service but the pricing was a non-starter: $$$+ per day, and it would have been limited to business hours.

6. When I first reached out with questions about their ACI lab they said it would not be available until late October (I assumed this year). When I reached out in November, they didn’t even answer the question, so clearly this is still a work in progress.

7. Worthy of further investigation


Details and Links

Cost Legend:

  • $ Less than $200
  • $$ Hundreds
  • $$$ Thousands

INE $$

CCIE Data Center – 850 Tokens/Week (Weekly rentals only) ($1 = 1 Token)

Excellent lab but very busy (because it’s very good) and so can be difficult to schedule.

NterOne $$

Excellent lab with good functionality at a reasonable price point.

Fast Lane $$$

Minimum of 4 Students @ $439/Student

Global Knowledge $$$

On Demand (12 Months)

Very poor customer support (my experience)

CloudMyLab

Lab not available yet. No timeframe given.

Octa Networks

More course focused but awaiting response.

Labs 4 Rent

INDIA: +91-9538 476 467  |  UAE: +971-589 703 499 | Email: info@labs4rent.com

No response to emails

FireFly $$$+ /day

Too expensive (for me)!

Rack Professionals

Needs further investigation

NH Networkers Home

+91-8088617460 / +91-8088617460

Needs further investigation

Micronics Training

They do not rent out their racks.

My CCIE Rack $

support@myccierack.com. Whatsapp number- 7840018186

Very poor experience

CCIE Rack Rentals $

support@ccierack.rentals WhatsApp : +918976927692

Very poor experience

The Struggle with Structure – Network Automation, Design, and Data Models

Preface

Modern enterprise networking is going to require a level of structure and consistency that the majority of its networking community may find unfamiliar and perhaps uncomfortable. As a community, we’ve never had to present our designs and configuration data in any kind of globally consistent or even industry standard format.

I’m fascinated by all things relating to network automation but the one thing that eluded me was the discussion around data models (YANG, OpenConfig).

Early on, the little I researched around YANG led me to conclude that it was interesting, perhaps something more relevant to the service provider community, and a bit academic. In short, not something directly relevant to what I was doing.

Here is how I figured out that nothing could be further from the truth and why I think this is an area that needs even more focus.

If you want to skip my torturous journey towards the obvious, see the resources section at the end or jump over to Cisco’s DevNet Model Driven Programmability for some excellent material.

You can also cut to the chase by going to the companion repository Data_Model_Design on GitHub, where you can see a “proof of concept” that takes a modified Cisco data model containing a handful of components and develops the high-level diagram for those components and a sample Markdown design document.


The Current Landscape

Since its earliest days as a discipline, networking (at least in the enterprise) has generally allowed quite a bit of freedom in the design process and its resulting documentation. That is one of the things I love about it and I’m certain I’m not alone in that feeling. A little island of creativity in an ocean of the technical.

For every design, I put together my own diagrams, my own documentation, and my own way to represent the configuration or just the actual configuration. Organizations tried to put some structure around that with a Word, Visio, or configuration text template, but often even that was mostly just for the purposes of branding and identification of the material. How many of us have been given a Word template with the appropriate logos on the title page and if you were lucky a few headings? Many organizations certainly went further requiring a specific format and structure so that there was consistency within the organization but move on to a different organization and everything was different.

The resulting design documentation sets were many and varied and locally significant.

In effect, the result was unstructured data. Unstructured or even semi-structured data as text or standard output from a device or system is well known, but this is unstructured data on a broader scale.

Design and Configuration Data

Over the last few years I’ve observed a pattern that I’m just now able to articulate. This pattern speaks to the problem of unstructured design and configuration data. The first thing I realized is that, as usual, I’m late to the party. Certainly the IETF has been working on the structured configuration data problem for almost 20 years and longer if you include SNMP! The Service Provider community is also working hard in this area.

The problem of structured vs unstructured data has been well documented over the years. Devin Pickell describes this in great detail in his Structured vs Unstructured Data – What’s the Difference? post.

For the purposes of this discussion let me summarize with a very specific example.

We have a text template that we need to customize with specific configuration values for a specific device:

!EXAMPLE SVI Template <configuration item to be replaced with actual value>

interface Vlan< Vlan ID >
description < SVI description >
ipv6 address < IP Address >/< IP MASK>
ipv6 nd prefix < Prefix >/< Prefix MASK > 0 0 no-autoconfig
ipv6 nd managed-config-flag
ipv6 dhcp relay destination < DHCP6 Relay IP >

If we are lucky we have this:

More often than not we have this:

Or this:

The problem is a little broader but I think this very specific example illustrates the bigger issue. Today there is no one standard way to represent our network design and configuration data. A diagram (typically in Visio) is perhaps the de facto standard but it’s not very automation friendly. I’ve had design and configuration data handed to me in Word, PowerPoint, Excel (and their open source equivalents), Text, Visio, and the PDF versions of all of those.

Let me be clear. I am not advocating one standard way to document an entire network design set…yet. I’m suggesting that automation will drive a standard way to represent configuration data and that should drive the resulting design documentation set in whatever form the humans need it. That configuration data set or data model should drive not just the actual configuration of the devices but the documentation of the design. Ultimately, we can expect to describe our entire design in a standard system data model but that is for a future discussion.

Structured design and configuration data

In order to leverage automation we need the configuration data presented in a standard format. I’m not talking configuration templates but rather the actual data that feeds those templates (as shown above) and generates a specific configuration and state for a specific device.

Traditionally, when developing a design you were usually left to your own devices as to how to go about doing that. In the end, you likely had to come up with a way to document the design for review and approval of some sort but that documentation was static (hand entered) and varied in format. Certainly not something that could be easily ingested by any type of automation. So over the last few years, I’ve developed certain structured ways to represent what I will call the “configuration payload”…all the things you need to build a specific working configuration for a device and to define the state it should be in.

Configuration payload includes:

  • hostname
  • authentication and authorization configuration
  • timezone
  • management configuration (NTP, SNMP, Logging, etc.)
  • interface configuration (ip, mask, description, routed, trunked, access, and other attributes)
  • routing configuration (protocol, id, networks, neighbors, etc.)

All of this data should be in a format that could be consumed by automation to, at the very least, generate specific device configurations and, ideally, to push those configurations to devices, QA, and ultimately to document.
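As a concrete (if minimal) sketch, here is one way the SVI template from earlier could be fed from a structured payload. This assumes a Jinja2-style template; the variable names are illustrative and not part of any standard data model.

import jinja2

# Structured "configuration payload" as a plain Python dictionary.
# It could just as easily live in YAML or JSON under revision control.
payload = {
    "vlan_id": 10,
    "description": "User SVI",
    "ipv6_address": "2001:db8:10::1",
    "ipv6_mask": 64,
    "dhcp6_relay": "2001:db8:ffff::5",
}

svi_template = jinja2.Template(
    "interface Vlan{{ vlan_id }}\n"
    " description {{ description }}\n"
    " ipv6 address {{ ipv6_address }}/{{ ipv6_mask }}\n"
    " ipv6 dhcp relay destination {{ dhcp6_relay }}\n"
)

# Render the device-specific configuration from the payload.
print(svi_template.render(**payload))

The same payload that renders the configuration can also feed the documentation, which is the point: the data is entered once, in a structured form, and everything else is generated from it.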

My experience over the last few years tells me we have some work ahead of us to achieve that goal.

The problem – Unstructured design and configuration data is the norm today

As a consultant you tend to work with lots of different network engineers and client engineering teams. I started focusing on automation over 4 years ago and during that time I’ve seen my share of different types of configuration payload data. I’m constantly saying, if you can give me this data in this specific format, look what can be done with it!

My first memorable example of this problem was over 2 years ago. The client at the time had a very particular format that they wanted followed to document their wireless design and the deliverable had to be in Visio. I put together a standard format in Excel for representing access point data (name, model, and other attributes). This structured data set in Excel (converted to CSV) would then allow you to feed that data into a diagram that had a floor plan. You still had to move the boxes with the data around to where the APs were placed but it saved quite a lot of typing (and time) and reduced errors. I demonstrated the new workflow but the team felt that it would be simpler for them to stick to the old manual process. I was disappointed to be sure but it was a bit of a passion project to see how much of that process I could automate. We had already standardized on how to represent the Access Point configuration data for the automated system that configured the APs so it was a simple matter of using that data for the documentation.

The issue was more acute on the LAN side of the house. On the LAN side the structured documentation format (also in Excel) was not an option. It fed all the subsequent stages of the process including ordering hardware, configurations (this was the configuration payload!), staging, QA, and the final documentation deliverable.

When fellow network engineers were presented with the format we needed to use, let’s just say the reception lacked warmth. I used Excel specifically because I thought it would be less intimidating and nearly everyone has some familiarity with Excel. These seasoned, well-credentialed network engineers, many of whom were CCIEs, struggled. I struggled right along with them… they could not grasp why we had to do things this way, and I struggled to understand why it was such an issue. It is what we all do as part of network design… just the format was a little different, a little more structured (in my mind anyway).

I figured I had made the form too complicated and so I simplified it. The struggle continued. I developed a JSON template as an alternative. I think that made it worse. The feedback had a consistent theme. “I don’t usually do it that way.” “I’ve never done it this way before.” “This is confusing.” “This is complicated.”

Let’s be clear: at the end of the day we were filling in hostname, timezone information, vlans, default gateway, SVI IP/Mask, uplink interface configuration, and allowed vlans for the uplinks. These were extremely capable network engineers. I wasn’t asking them to do anything they couldn’t do half asleep. I was only requiring a certain format for the data!
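To make that concrete, here is a hypothetical and heavily simplified example of the kind of payload involved. The field names are illustrative only and not the actual template we used.

{
  "hostname": "idf1-sw01",
  "timezone": "America/Los_Angeles",
  "vlans": [10, 20, 30],
  "default_gateway": "10.1.10.1",
  "svi": {"vlan": 10, "ip": "10.1.10.2", "mask": "255.255.255.0"},
  "uplinks": [
    {"interface": "TenGigabitEthernet1/1/1", "mode": "trunk", "allowed_vlans": [10, 20, 30]}
  ]
}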

During these struggles I started working with a young engineer who had expressed an interest in helping out with the automation aspects of the project. He grasped the structured documentation format (aka the Excel spreadsheet) in very little time! So much so that he took on the task of training the seasoned network engineers. So it wasn’t the format, or at least it wasn’t just the format, if a young new hire with very little network experience could not only understand it but master it enough to teach it to others.

With that, the pieces fell into place for me. What I was struggling against was years of tradition and learned behavior. Years of a tradition where the configuration payload format was arbitrary and irrelevant. All you needed was your Visio diagram and your notes and you were good to go.

Unstructured configuration payload data in a variety of formats (often static and binary) is of little use in this brave new world of automation, and I started connecting the dots. YANG to model data, vendor YANG data models… OK… I get it now. These are ways to define the configuration payload for a device in a structured way that is easily consumed by “units of automation” and the device itself.

This does not solve the broader issue of unlearning years of behavior but it does allow for the learning of a process that has one standard method of data representation. So if that transition can be made I can get out of the data modeling business (Excel) and there is now a standard way to represent the data and a single language we can use to talk about it. That is, of course, the ideal. I suspect I’m not out of the data modeling business just yet but I’m certainly a step closer to being out of it and, most importantly, understanding the real issue.

The diagram above shows an evolution of the network design process. The initial design activities won’t change much. We are always going to need to:

  • Understand the business needs, requirements, and constraints
  • Analyze the current state
  • Develop solutions perhaps incorporating new technology or design options using different technologies
  • Model the new design
  • Present & Review the new design

In this next evolution, we may use many of the old tools, some in new ways. We will certainly need new tools many of which I don’t believe exist yet. As we document requirements in a repository and configuration payload in a data model, those artifacts can now drive:

  • An automated packaging effort to generate the design information in the human-readable formats each organization wants to see
    • Here the Design Document, Presentation, and Diagram are an output of the configuration payload captured in the data model (I’ve deliberately not used Word, PowerPoint, and Visio…)
  • The actual configuration of the network via an automation framework since by definition our data models can be consumed by automation.
  • All of it in a repository under real revision control (not filenames with the date or a version identifier tacked on)

As with any major technology shift, paradigm change, call it what you will, the transition will likely result in three general communities.

  1. Those who eagerly adopt, evangelize, and lead the way
  2. Those who accept and adapt to fit within the new model
  3. Those who will not adapt

I’m sorry to say I’ve been wholly focused on the first community until now. It is this second “adapt” community, which will arguably be the largest of the three, that needs attention. These will be the network engineers who understand the benefits of automation and are willing to adapt but, at least initially, will likely not be the ones evangelizing or contributing directly to the automation effort. They will be the very capable users and consumers of it.

We need to better target skills development for them as the current landscape can be overwhelming.

It’s also important to note that the tooling for this is woefully inadequate right now and is likely impeding adoption.

What’s next?

The solution may elude us for a while and may change over time. For me at least the next steps are clear.

  • I need to do a better job of targeting the broader network community who isn’t necessarily excited (yet) about all the automation but is willing to adapt.
  • I will start discussing and incorporating data models into the conversations and documentation products with my new clients and projects.
  • I will start showcasing the benefits of this approach in every step of the design process and how it can help improve the overall product.
    • revision control
    • improved accuracy
    • increased efficiency

Example Repository

If you want to see how some of these parts can start to fit together, please visit my Data_Model_Design repository on GitHub, where you can see a “proof of concept” that takes a modified Cisco data model containing a handful of components and develops the high-level diagram for those components and a sample Markdown design document, which I then saved to PDF for “human consumption”.

Don’t miss these resources

Always a fan of DevNet, the DevNet team once again does not disappoint.

YANG for Dummies by David Barroso

YANG Opensource Tools for Data Modeling-driven Management by Benoit Claise

YANG Modules Overview from Juniper Networks Protocol Developer Guide

YANG and the Road to a Model Driven Network by Karim Okasha

The OpenConfig group bears watching as they are working on vendor-agnostic real-world models using the YANG language. This is very much a Service Provider focused initiative whose efforts may prove very useful in the Enterprise space.

OpenConfig Site

OpenConfig GitHub Repository

Building a Production-ish Ready WebEx Teams ChatBot

Introduction

  • Is your interrupt-driven day no longer supportable?
  • Is there a particular Project Manager that asks you the same question every morning?
  • Do you often have to take some technical data and simplify it for semi- or non-technical consumption?
  • Would you like to pull out just the relevant sections of technical data for a sibling team? Today you send them all the data and hope they can find what they need, because you just don’t have the time to put together something customized that contains only what they need.

I suspect many of us have these situations far too often.

I spend quite a bit of time on Webex Teams (referred to as Spark from here on out in protest of the terrible re-branding of Spark to WebEx Teams which not only completely lacks imagination but also confuses anyone who also has to deal with Microsoft Teams…. a rebranding so awful it might actually unseat the rebranding of brigade). It is currently the messaging application of choice for my two biggest projects and a principal culprit in my interrupt-driven day. Maybe this can help turn it into an advantage… a mini-you if you will.

Not long ago, I lost a young and incredibly capable engineer to another project and that loss was pretty impactful. Many of the day to day activities fell back to me and I found lots of little (and not so little) things falling through the cracks. How was I going to work smarter not harder?

Network engineer getting asked questions from everyone!

The Back Story

I got to attend Interop this year and was very inspired by every session I attended but a few in particular really got me thinking about how I might automate my way out of my current predicament.

Jeremy Schulman – Self-service Network Automation Using Slack

Nick Russo – Automation for Bureaucracies

Hank Preston – A Practical Look at NetDevOps – While Hank’s session inspired other ideas, it is his work with DevNet and a SparkBot module that is most relevant for this exercise.

The Resulting Scene

The effort documented here is a real world experience and solution. To protect the innocent I’ve created a “demo” version of this solution that will highlight the functionality and the possibilities. Over the course of several posts, I’ll document the requirements, use cases, solutions, and issues.


I like to start with the problem statement:

  1. I need to be able to provide fine grained project status on demand
  2. I need to be able to print a pretty Summary report for management on demand
  3. I need to be able to extract the relevant portion of some technical information for a parallel project on demand
  4. I’d like to incorporate something fun to get the team comfortable and familiar with the Bot (on demand of course)

I also like to document the initial state and the characteristics of the environment:

  • Spark was the required platform for messaging. Enough of the team utilized Spark so as to make it an extremely viable delivery platform.
  • A SAAS platform was used for project status.
  • All the collateral and technical information resided in a document repository that everyone constantly works on and syncs to.

The solution concept: develop a SparkBot attendant, available to the Spark Team under which all of our spaces are created, which can access an external API and a document repository, and respond to simple commands starting with:

  1. site_status
  2. site_summary
  3. site_networks
  4. comic_relief

OK… I started Googling and found lots of material, but there were two issues:

  • Almost everything leveraged a dummy test environment often using ngrok, a fabulous little tool, but I needed this to be at least in the neighborhood of production ready. I did not want it to run on my home system.
  • Assembly was required in that there was good material on specific steps but not much on putting it all together. I suspect the thought there was that some of that should have been obvious but it wasn’t to me.

When I knew enough to be dangerous I figured I needed the following components:

Messaging Client & Account

The most basic requirement was a messaging client that was in use by enough of the team so as to make it a viable delivery platform. This one was easy. Spark was already the messaging application of choice for the project. Using the messaging client solved all kinds of issues. I already had a client for a wide range of platforms (web, desktop, and mobile) and I had a known user interface to which the team was already accustomed.

Platform used: Cisco WebEx Teams (Spark!!) Web Client, Desktop App, Mobile App
Messaging Platform

Of course the back end messaging platform had to support the required functionality for a Bot type application, and research showed that it did. Spark has all the hooks for provisioning a Webhook and a Bot application.

Platform used: Cisco WebEx Teams (Spark!!)
Document Repository

The document repository the team used had to have a client that I could install on the web server so that the data store would be available to bot functions and would always be synchronized (the latest).

Platform used: Google Drive
Project Status API

The project status tool had to support an API, and I knew that the project SAAS did. (More on this later.)

Platform used: SAAS Project Management Tool
Web Server (Front End)

Here is where it started getting tricky. I am by no means a System Administrator but I had to find a suitable web server and web server technology to use. The web server had to support the back end environment for the functionality and, as I mentioned, also had to have a client so that I could present and sync the document repository.

Platform used: Nginx
Web Server Backend

For the back end technology I always gravitate to Django, but it was not a good fit in this case:

1. I did not foresee a need for a database.
2. Most of the examples I found utilized Flask, and I needed to get this working quickly before the village came after me with pitchforks and torches.

This was my first use of Flask and it is a testament to what you always hear about it. It is simple and lightweight with quite a lot of functionality, more than enough for my purposes. With the excellent examples out there I had no issues getting it to work.

Platform used: Flask & Python
Hosting Platform

This was another challenge.

I mentioned I’m not a SysAdmin, and initially I felt that bringing up a Linux server that I could “harden” sufficiently was probably not something I was going to do well, so I started with a MacInCloud instance. It was a good proof of concept, but I wasn’t happy with the performance or the cost, so I abandoned it as the platform for the Bot server.

There was nothing for it. I needed to go down the path of a Linux host. I think I knew that already, but I had been wanting to check out MacInCloud for a while and I thought that route would also save me time in researching how to secure a Linux server.

I always gravitate to Digital Ocean for this. They have superb documentation and How-Tos, a simple and intuitive interface, excellent price points, and I’m a diver.

The first iteration of the Bot was actually on Digital Ocean and working quite well, but I ran into a showstopper. The document sync client for Linux did not give you the option of changing the mount point. I could not grow the home volume on my Droplet sufficiently (the project data store is at about ~500 GB) to host the data. I could add a volume, but the sync client would try to sync to the home directory. I found a hack, but it talked about sync instability, and if I could not rely on the data being in sync on the web server then the rest was going to be pointless. Much of the functionality I needed involved accessing the latest data and documents in the repository.

Well, the other hosting platform I’ve been wanting to try is Google Cloud Platform. I must say I’m very impressed. Everything is pretty intuitive. I already had the domain I wanted to use on Google, so that made DNS a breeze!

The production Bot is currently on GCP.

The public Demo system that is part of this post is on Digital Ocean. The Demo does not have the constraints of the production system, and without the Digital Ocean documentation it’s doubtful I would have a working solution.

Platforms used: MacInCloud, Digital Ocean Droplet, Google Cloud Platform (GCP)
Note: The Demo is built on a Digital Ocean Droplet.
Messaging Platform SDK or Module

I knew Spark had an SDK, so worst case I had tools for the Bot.

Further research resulted in quite a few options for the Bot.

Here is where Hank Preston really saved me a ton of time. Hank’s webexteamsbot module provides a very nice framework for your own bot functions. It comes with a few that you will want to keep (/help is one) and provides some very good examples.

As your bot gets more capable (i.e. understands new functions or commands) you simply add those functions to your main script. The biggest issue I had was understanding what the module abstracted out. I needed to tap into a lot of the information… for example, the Spark space or room name had information that I needed to parse out so that if you ran a command within a space it would customize the output.

Platform used: hpreston/webexteamsbot
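To give a sense of the framework, here is a minimal sketch based on the module’s published examples. The token, email, and URL values are placeholders, the /site_status callback is a stub of my own, and exact parameter names may differ slightly between versions of the module.

from webexteamsbot import TeamsBot

# Placeholder values -- in a real deployment these come from the Bot created
# at developer.webex.com and the public URL of your web server.
bot = TeamsBot(
    "site-status-bot",
    teams_bot_token="<BOT_ACCESS_TOKEN>",
    teams_bot_email="site-status-bot@webex.bot",
    teams_bot_url="https://bot.example.com",
)

def site_status(incoming_msg):
    # The incoming message object carries the room (space) details, which is
    # where the space name can be parsed to customize the reply per site.
    return "Gathering site status..."

# Register the new command with its help text and callback.
bot.add_command("/site_status", "Report fine grained project status", site_status)

if __name__ == "__main__":
    bot.run(host="0.0.0.0", port=5000)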
Scripts

Each “function” or Bot command would need its own script or set of scripts and functions.

Platform used: Python
HTTPS

I wanted to include as much security as my feeble SysAdmin abilities could muster: at a minimum SSL and some basic firewall functions.

Platform used: HTTPS, certbot
Domain

I wanted to use an FQDN.

Platform used: cdl-automation.net
The WebEx Teams ChatBot Subsystems

This is the first of a 5 Part Series on creating a production-ish ready WebEx Teams Chat Bot.

  1. Introduction
  2. The Basic Web Server
  3. The WebEx Teams (Spark) Backend Webhook
  4. The Bot Application on the Web Server
  5. The Bot

Decomposing Data Structures

Whether you are trying to find all the tenants in an ACI fabric, or all the interface IPs and descriptions on a network device, or trying to determine if the Earth is in imminent danger from an asteroid hurtling towards it, understanding complex data structures is a critical skill for anyone working with modern IT infrastructure technology.

As APIs become more and more prevalent, acting on an object (device, controller, cloud-based service, management system, etc.) will return structured data in a format you may have never seen before. For beginners, this may be a little daunting but once you get a few basics down you will be decomposing data structures at dinner parties!

Most modern APIs return data in XML or JSON format. Some give you the option to choose. We won’t spend any time on how you get this structured data. That will be a topic for another day. The focus today is how to interpret and manipulate data returned to you in JSON. I’m not a fan of XML so if I ever run into an API that only returns XML (odds are you will too) I do my very best to convert it to JSON first.

Let’s get some basics out of the way. Curly braces {}, square brackets [], and spacing (whitespace) provide a syntax for this returned data. If you know a little Python these will be familiar to you.

[ ] — LIST

Square brackets denote a list of values found between the opening [ and closing ] brackets, separated by commas. List elements are referenced by the number of their position in the list, so in a list like my_list = [1,2,3,4], if you want the information in the 3rd position (the third element), which is the number 3, you say my_list[2]. Lists are zero indexed, so my_list[0] is the number 1, my_list[1] is the number 2, etc.

{ } — DICTIONARY

Curly braces denote a set of key:value pairs found between the opening { and closing } braces, separated by commas, with each key and value separated by a colon (key: value). Dictionary key:value pairs are referenced by the key, so in a dictionary like my_dict = {'la': 'Dodgers', 'sf': 'Giants'}, if you want to know the baseball team in LA you reference my_dict['la'].

Lists look like this:

[
item1,
item2,
item3
]

or (equally valid)

[item1,item2,item3]

Dictionaries look like this:

{
key:value,
otherkey:othervalue
}

or (equally valid)

{key:value, otherkey:othervalue}

The other thing to note is that when these structures are formatted, spacing is important and can tell you a lot about the hierarchy.

The examples above are not quoted and they should be. I left the quotes off because I wanted to highlight that single and double quotes can both be used in Python, but the JSON standard requires double quotes.

It’s also important to understand that these two basic structures can be combined, so you can have a list of dictionaries or a dictionary of key:value pairs where the value is a list or a dictionary. You can have many levels and combinations, and often that is what makes the data structure seem complex.
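For example, here is a trivial Python sketch of a nested structure (a dictionary whose value is a list of dictionaries) and how you chain the references together:

# A dictionary whose single value is a list of dictionaries
teams = {
    "nl_west": [
        {"city": "la", "team": "Dodgers"},
        {"city": "sf", "team": "Giants"},
    ]
}

# key, then list index, then key
print(teams["nl_west"][1]["team"])   # Giants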

Let’s start with a simple example.

Below is the output of an Ansible playbook that queries an ACI fabric for the configured tenants.

ACI Tenant Query Playbook – output

Let’s look at the data that was returned, which is highlighted in the big yellow outer box in the image below. Also highlighted are the syntax symbols that will let us decompose this structure. As you can see from a, the entire structure is encased in curly braces {}. That tells us that the outermost structure is a dictionary, and we know that dictionaries provide us with a key (think of this as an index value you use to get to the data in the value part) followed by a colon (:) followed by a value. In this case, we have a dictionary with one element or key:value pair, and the value is a list. We know it is a list from b, which shows a left square bracket immediately following the colon, denoting the start of a list structure.

{ "key" : "value" } ( where value is a list structure [])
{"tenantlist": ["common", "mgmt", "infra", etc..]}

ACI Tenant Query Playbook – output with annotations


This is a simple structure. It is short, so you can see the entire data structure in one view, but it has both types of data structures that you will encounter, a list and a dictionary.
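As a quick sketch, assuming the playbook output above has been loaded into a Python dictionary (for example with json.loads()), the references look like this:

import json

tenant_data = json.loads('{"tenantlist": ["common", "mgmt", "infra"]}')

print(tenant_data["tenantlist"])       # the whole list of tenants
print(tenant_data["tenantlist"][0])    # "common" -- first element of the list
print(len(tenant_data["tenantlist"]))  # how many tenants were returned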


Let’s look at something a bit more complex.

REST COUNTRIES provides a public REST API for obtaining information about countries.

Check them out!

Using Postman, I submitted a query about Kenya. Below are the first 25 or so lines of the data returned.

Noting the colored letters and line numbers below in the image:

a. Line 1 shows us the left square bracket (outlined in a green box), which tells us this is a list. That is the outermost structure, so you know that to get to the first element you will need to use a list reference like my_list[0].

b. Line 2 shows us the left curly brace (outlined in a yellow box) which indicates that what follows as the first element of the list is a dictionary. Line 3 is a key:value pair where the key “name” has a value of “Kenya”.

c. Line 4 is a key:value pair, but the value of key "topLevelDomain" is a list (of one element, ".ke"). Line 7 returns us to a simple key:value pair.

Structured data returned from REST Countries API query

Here is where it can start getting confusing. Remembering our reference rules… that is:

  • you reference a list element by its zero indexed positional number and
  • you reference the value of a dictionary element by its key

Don’t be distracted by the data within the syntax symbols just yet. If you see something like [ {},{},{} ], (ignoring the contents inside the braces and brackets) you should see that this is a list of three elements and those elements are dictionaries. Assuming alist = [ {},{},{} ] you access the first element (dictionary) with alist[0].

Going a step further, if you see this structure [ {"key1":[]},{"a_key": "blue"},{"listkey": [1,2,3]} ], you already know it’s a list of dictionaries. Now you can also see that the first element in the list is a dictionary with a single key:value pair, and the key is “key1” and the value is an empty list. The second element in the list is also a dictionary with a single key:value pair, with a key of “a_key” and a value of a string “blue”. I’ll leave you to describe the third element in the list.

Assuming my_list = [ {"key1":[]},{"a_key": "blue"},{"listkey": [1,2,3]} ], if I wanted to pull out the string “blue” I would reference my_color = my_list[1]["a_key"] and the variable my_color would be equal to “blue”. The string “blue” is the value in the second dictionary in the list. Remembering that list elements start at “0” (zero indexed), you need to access the element in the second position with [1]. To get to “blue” you have to use the key of “a_key”, and so you have my_list[1]["a_key"], which will give you “blue”.

Let’s try to extract the two letter country code for Kenya.

So I’m going to cheat a little here to introduce you to the concept of digging in to the data structure to “pluck” out the data you want. You really want to understand the entire data structure before doing this so I’m going to assume that this is a list with one element and that element is a dictionary of key value pairs with different information about Kenya.

That being the case, first I have to reference the element in the list.

  • The list has one element so I know I have to reference it with the zero index [0] (the first and only element in the list)
  • Next I have to pluck out the 2 letter country code for Kenya, and that is in a key:value pair with the key of 'alpha2Code'

So assuming we have a variable country_info that has our data structure, then to reference the 2 letter country code I would need to use

country_info[0]["alpha2Code"]

That reference structure above would return “KE”.

[0] takes us one level deep into the first element of the list. This is where the dictionary (note the curly brace in line 2 below) can be accessed. At this level we can access the key we need “alpha2Code” to get the 2 letter country code.

country_info =

Extracting the Country Code from the JSON output

Let’s build on this. What if I need the country calling code so I can make a phone call to someone in Kenya from another country? For this we need to go a level deeper, as the calling code is in a key named "callingCodes" at the same level as "alpha2Code", but the value is a list rather than a single value. See lines 9–11 in the image above. We know how to reference a list, so in this case, if I wanted the first calling code in the list my reference structure would look like:

country_info[0]["callingCodes"][0]

That would return “254” (a string).

In many cases, you might want the entire list and so to get that:

country_info[0]["callingCodes"]

That would return [“254”] as a list (yes, a list with only one element, but a list because it’s enclosed in square brackets). There are cases where you may want to do some specific manipulation and you need the entire list.
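Putting those pieces together, here is a minimal sketch of the whole round trip in Python. It assumes the REST Countries v2 endpoint that was current when this was written; the URL may have changed since.

import requests

# Query the REST Countries API for Kenya (v2 endpoint assumed)
response = requests.get("https://restcountries.eu/rest/v2/name/kenya")
country_info = response.json()   # a list with one dictionary, as described above

print(country_info[0]["alpha2Code"])        # KE
print(country_info[0]["callingCodes"][0])   # 254 (a string)
print(country_info[0]["callingCodes"])      # ['254'] (the entire list)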

Extra: In the companion GitHub repository to this post there is a quick & dirty Python3 script country_info_rest.py that will let you get some country data and save it to a file. There is also an optional “decompose function” that you can tailor to your needs to get a feel for decomposing the data structure via a script.

(generic_py3_env) Claudias-iMac:claudia$ python country_info_rest.py -h
usage: country_info_rest.py [-h] [-n CNAME] [-d]

Call REST Countries REST API with a country name.

optional arguments:
  -h, --help            show this help message and exit
  -n CNAME, --cname CNAME
                        Country Name to override default (Mexico)
  -d, --decompose       Execute a function to help decompose the response

Usage: 'python country_info_rest.py' without the --cname argument the script
will use the default country name of Mexico. Usage with optional name
parameter: 'python country_info_rest.py -n Singapore'. Note: this is a python3
script.

Let’s look at something really complex.

Now a word about “complex”. At this point I’m hoping you can start to see the pattern. It is a pattern of understanding the “breadcrumbs” that you need to follow to get to the data you want. You now know all the “breadcrumb” formats:

  • [positional number] for lists and
  • [“key”] for dictionaries

From here on out it’s more of the same. Let’s see this in action with the data returned from the NASA Asteroids – Near Earth Object Web Service.

But first, here is why I called this “really complex”. A better term might be “long with scary data at first glance”. At least it was for me: when I ran my first successful Ansible playbook and finally got data back, it was like opening a shoe box with my favorite pair of shoes in it and finding a spider in the box… yes, shrieking “what IS that!” and jumping away.

I hope that by this point there is no shrieking but more of a quizzical… “Hmmm, OK, I see the outer dictionary with a key of “asteroid_output” and a value of another dictionary with quite a lot of keys and some odd looking stuff…”. Let’s get into it!

…and if there is shrieking, then I hope this helps get you on your way to where it’s quieter.

Raw output from an Ansible Playbook querying the NASA Near Earth Web Service

I want to pluck out the diameter of the asteroid as well as something that tells me if there is any danger to Earth.

Somewhere, in the middle of all of this output, is this section which has the data we want but where is it in reference to the start of the entire data structure? Where are the breadcrumbs? Hard to tell… or is it?

Information we need to get to within the data structure returned by the Ansible playbook

You can visually walk the data but for data structures of this size and with many levels of hierarchy it can be time consuming and a bit daunting until you get the hang of it. There are a number of approaches I’ve tried including:

  1. visually inspecting it (good for 25 lines or less….if you cannot fit it on a single page to “eyeball” it try one of the other methods below…I promise you it will save you time)
  2. saving the output to a text file and opening it up in a modern text editor or IDE so that you can inspect and collapse sections to get a better understanding of the structure
  3. using a Python script or Ansible playbook to decompose by trial and error
  4. using a JSON editor to convert to a more readable structure and to interpret the data structure for you

I don’t recommend the first approach at all unless your data is like our first example or you combine it with the Python (or Ansible) trial and error approach but this can be time consuming. I do have to recommend doing it this way once because it really helps you understand what is going on.

Using a good advanced editor (*not* Notepad.exe) or IDE (Integrated Development Environment) is a good approach but for something that makes my eyes cross like the output above I use a JSON editor.

In the two sections below I’ll show you a bit more detail on approach #2 and #4. Play around with the companion GitHub repository for an example of approach #3.

Asteroid Data collapsed down in Sublime Text Editor

Note that this has already been collapsed down to the value of the key asteroid_output so the outer dictionary is already stripped off. In this view it looks a bit more manageable and the values we want can be found at the level shown below:

asteroid_output["json"]["near_earth_objects"][<date_key>]

where <date_key> can be any of the 8 date keys found in line 23, line 858, line 1154, etc. The gap in the line numbers gives you a sense of how much data we’ve collapsed down, but I hope you can begin to see how that makes it easier to start understanding how you need to walk the path to the data you want.

asteroid_output =

Expanding one of the date keys as shown in the next image shows us how we might start to get to the data we want.

asteroid_output["json"]["near_earth_objects"]["2019-07-07"]

The date key we expanded, "2019-07-07", has a value that is a list. If we take the first element of that list we can get the estimated diameter in feet and the boolean value of “are we to go the way of the dinosaurs”, or technically the value of the key "is_potentially_hazardous_asteroid".

Estimated maximum diameter in feet:

asteroid_output["json"]["near_earth_objects"]["2019-07-07"][0]["estimated_diameter"]["feet"]["estimated_diameter_max"]

Is this going to be an extinction level event?:

asteroid_output["json"]["near_earth_objects"]["2019-07-07"][0]["is_potentially_hazardous_asteroid"]

Which will give us false (for that one date anyway :D).

Using a good text editor or IDE to investigate a data structure by expanding
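For completeness, here is the same “breadcrumb” walk expressed as a small Python sketch. The stand-in dictionary below is a heavily truncated placeholder with made-up values, standing in for the real asteroid_output loaded from the playbook’s saved JSON.

# Heavily truncated stand-in for the real playbook output (placeholder values only)
asteroid_output = {
    "json": {
        "near_earth_objects": {
            "2019-07-07": [
                {
                    "estimated_diameter": {"feet": {"estimated_diameter_max": 100.0}},
                    "is_potentially_hazardous_asteroid": False,
                }
            ]
        }
    }
}

# First asteroid recorded for the 2019-07-07 date key
neo = asteroid_output["json"]["near_earth_objects"]["2019-07-07"][0]

print(neo["estimated_diameter"]["feet"]["estimated_diameter_max"])  # estimated maximum diameter in feet
print(neo["is_potentially_hazardous_asteroid"])                     # False, for that one date anyway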

Using JSON tools to decompose your data structure

It is here I must confess that these days if I can’t visually figure out or “eyeball” the “breadcrumbs” I need to use to get to the data I want, I immediately go to this approach. Invariably I think I can “eyeball” it and miss a level.

If I’m working with non-sensitive data JSON Editor Online is my personal favorite.

  1. I copy the output and paste it into the left window,
  2. click to analyze and format into the right window, and
  3. then I collapse and expand to explore the data structure and figure out the breadcrumbs that I need.

The Editor gives you additional information and element counts and has many other useful features. One of them is allowing you to save an analysis online so you can share it.

Decomposing_Data_Structures_asteroid_output in JSON Editor Online

Using the JSON Editor Online to navigate through the returned data from your API call

There are occasions where I’m not working with public data and in those cases I’m more comfortable using a local application. My “go to” local utility is JSON Editor from Vlad Badea available from the Apple Store. I don’t have a recommendation for Windows but I know such tools exist and some look interesting.

For this data set, the local JSON Editor application does a nicer job of representing the asteroid_output because it really collapses that hairy content value.

Using the JSON Editor App on your local system to navigate through the returned data from your API call

Using a Python script to decompose by trial and error

In this repository there is a rudimentary Python3 script country_info_rest.py which, when executed with the “-d” option, will attempt to walk the response from the REST Country API a couple of levels.

The first part of the script executes a REST GET and saves the response. With the “-d” option it also executes a “decompose” function to help understand the returned data structure. Some sample output from the script follows.

Outer structure (0) levels deep:
        The data structure 0 levels deep is a <class 'list'>
        The length of the data structure 0 levels deep is 1
​
One level deep:
        The data structure 1 level deep is a <class 'dict'>
        The length of the data structure 1 level deep is 24
​
        Dictionary keys are dict_keys(['name', 'topLevelDomain', 'alpha2Code', 'alpha3Code', 'callingCodes', 'capital', 'altSpellings', 'region', 'subregion', 'population', 'latlng', 'demonym', 'area', 'gini', 'timezones', 'borders', 'nativeName', 'numericCode', 'currencies', 'languages', 'translations', 'flag', 'regionalBlocs', 'cioc'])
​
                Key: name       Value: Singapore
​
                Key: topLevelDomain     Value: ['.sg']
​
                Key: alpha2Code         Value: SG
​
                Key: alpha3Code         Value: SGP
​
                Key: callingCodes       Value: ['65']
​
                Key: capital    Value: Singapore
​
                Key: altSpellings       Value: ['SG', 'Singapura', 'Republik Singapura', '新加坡共和国']
​
                Key: region     Value: Asia
​
                Key: subregion  Value: South-Eastern Asia
​
                Key: population         Value: 5535000
​
                Key: latlng     Value: [1.36666666, 103.8]
​
                Key: demonym    Value: Singaporean
​
                Key: area       Value: 710.0
​
                Key: gini       Value: 48.1
​
                Key: timezones  Value: ['UTC+08:00']
​
                Key: borders    Value: []
​
                Key: nativeName         Value: Singapore
​
                Key: numericCode        Value: 702
​
                Key: currencies         Value: [{'code': 'BND', 'name': 'Brunei dollar', 'symbol': '$'}, {'code': 'SGD', 'name': 'Singapore dollar', 'symbol': '$'}]
​
                Key: languages  Value: [{'iso639_1': 'en', 'iso639_2': 'eng', 'name': 'English', 'nativeName': 'English'}, {'iso639_1': 'ms', 'iso639_2': 'msa', 'name': 'Malay', 'nativeName': 'bahasa Melayu'}, {'iso639_1': 'ta', 'iso639_2': 'tam', 'name': 'Tamil', 'nativeName': 'தமிழ்'}, {'iso639_1': 'zh', 'iso639_2': 'zho', 'name': 'Chinese', 'nativeName': '中文 (Zhōngwén)'}]
​
                Key: translations       Value: {'de': 'Singapur', 'es': 'Singapur', 'fr': 'Singapour', 'ja': 'シンガポール', 'it': 'Singapore', 'br': 'Singapura', 'pt': 'Singapura', 'nl': 'Singapore', 'hr': 'Singapur', 'fa': 'سنگاپور'}
​
                Key: flag       Value: https://restcountries.eu/data/sgp.svg
​
                Key: regionalBlocs      Value: [{'acronym': 'ASEAN', 'name': 'Association of Southeast Asian Nations', 'otherAcronyms': [], 'otherNames': []}]
​
                Key: cioc       Value: SIN

===== Plucking out specific data:     
2 Letter Country Code:                          SG     
First (0 index) International Calling Code:     65     
List of International Calling Code:             ['65']     
==========================================

Feel free to take this script and add to it and modify it for your own data structure!
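If you want a starting point, here is a minimal, generic sketch (not the repository’s decompose function) of a recursive helper that reports the type and size of a structure at each level, descending into the first nested element it finds:

def describe(data, level=0):
    # Report what kind of structure we have at this level
    indent = "    " * level
    if isinstance(data, dict):
        print(f"{indent}Level {level}: dict with {len(data)} keys: {list(data.keys())}")
        for value in data.values():
            if isinstance(value, (dict, list)):
                # Only descend into the first nested value to keep output short
                describe(value, level + 1)
                break
    elif isinstance(data, list):
        print(f"{indent}Level {level}: list with {len(data)} elements")
        if data:
            describe(data[0], level + 1)
    else:
        print(f"{indent}Level {level}: {type(data).__name__} -> {data}")

# Example: works on any parsed JSON structure
describe({"tenantlist": ["common", "mgmt", "infra"]})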


Apart from the first example, I have deliberately not used data from network devices. I wanted to show that the data source really does not matter. Once you understand how to decompose the data, that is, get to the data you want within a returned data structure, you can pluck out data all day long from any data set. A secondary objective was to play around with some of these APIs. While checking to see if the Earth is about to get broadsided by an asteroid is clearly important there are quite a few public and fee-based APIs out there with perhaps more practical use.

Of course within our own networks, we will be querying device and controller APIs for status and pushing configuration payloads. We will be pulling inventory data from a CMDB system API, executing some actions, perhaps some updates, and recording any changes via API to the Ticketing System.


Some final tips, links, and notes:

Some sites have excellent API documentation that tells you exactly what will be returned, but some don’t, so in many instances you have to do this decomposition exercise anyway. It’s best to get familiar with it. It’s like knowing how to read a map and how to navigate in case you forget your GPS.

JSON Tools

JSON Editor Online

REST APIs

REST COUNTRIES

NASA Asteroids – Near Earth Object Web Service

https://api.nasa.gov/api.html#NeoWS

Examples Repository cldeluna/Decomposing_DataStructures


The Gratuitous ARP

2019-07-07


Building a Custom TextFSM Template

If you have seen any of the TextFSM posts on this site you know how useful the Network To Code TextFSM Template repository can be. Rarely do I not find what I need there!

I recently had to parse route summary information from JUNOS Looking Glass routers. I always check the very rich set of templates in the NTC Template index repository but in this case I was out of luck. I was going to have to build my own… and you get to watch.

Two fantastic resources you can use when you are in the same boat are here:

It’s good to begin by familiarizing yourself with the output you need to parse. Here is a snippet of the show command output.

>show route summary
Autonomous system number: 2495
Router ID: 164.113.193.221
inet.0: 762484 destinations, 1079411 routes (762477 active, 0 holddown, 12 hidden)
Direct: 1 routes, 1 active
Local: 1 routes, 1 active
BGP: 1079404 routes, 762470 active
Static: 5 routes, 5 active
inet.2: 3073 destinations, 3073 routes (3073 active, 0 holddown, 0 hidden)
BGP: 3073 routes, 3073 active

Start with something simple like ASN and RouterID

A basic TextFSM Template

I wanted to start slowly with something I knew I could get to work. Looking at the data, it should be simple to extract the first two values I need:
– ASN
– Router ID

I started with those values as they are by far the simplest to extract from the ‘show route summary’ command. I will try not to cover material that is covered by the two Google links above. However, I do want to point out the concept of TextFSM (as I understand it or explain it to myself), which is to provide context for your regular expressions. That is, not only can you define the specific pattern to search for, but you can also define its “environment”. As you can see below, the “Value” keyword lets me define a variable I want to pluck out of the unstructured text (the show command output). Line 4 defines the “action” section to start processing, and the first thing to look for is a line that starts with “Autonomous system number:”, one or more spaces (noted by the \s+), and then our ASN variable, which we defined above as a pattern of one or more digits (\d+). So you have the power of the regular expression that defines the value you want and the power of regular expressions to help you define the context where your value will be found.

Junos ‘show route summary’ TextFSM Template – Version 1
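Since the Version 1 template is shown as an image above, here is a sketch of what that first iteration looks like (two Value definitions and two rules under Start; recording on the Router ID line is one plausible way to emit the row):

Value ASN (\d+)
Value RTRID (\S+)

Start
  ^Autonomous system number:\s+${ASN}
  ^Router ID:\s+${RTRID} -> Record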

For this exercise we will use my textfsm3 GitHub repository and the “test_textfsm.py” script for our testing rather than the Python command interpreter. Simply clone the repository to get started.
Note that the repository has the completed version of the template. Look at the history of the template file on GitHub to see its “evolution”.

(txtfsm3) Claudias-iMac:textfsm3 claudia$ python test_textfsm.py -h
usage: test_textfsm.py [-h] [-v] template_file output_file
This script applys a textfsm template to a text file of unstructured data (often show commands). The resulting structured data is saved as text (output.txt) and CSV (output.csv).
positional arguments:
template_file TextFSM Template File
output_file Device data (show command) output
optional arguments:
-h, --help show this help message and exit
-v, --verbose Enable all of the extra print statements used to investigate the results

In the first iteration of the template file, we obtain the output below.

(txtfsm3) Claudias-iMac:textfsm3 claudia$ python test_textfsm.py junos_show_route_summary.template junos_show_route_summary.txt

TextFSM Results Header:
['ASN', 'RTRID']
================================
['2495', '164.113.193.221']
================================

Extract more details

So we have successfully built a template that will extract ASN and RouterID from the Junos show route summary command. Now it will get interesting because we also want this next set of values.

  • Interface
  • Destinations
  • Routes
  • Active
  • Holddown
  • Hidden

The first challenge here was to pick up the totals line. Here, one of my favorite tools comes into play, RegEx101. Regular expressions don’t come easily to me and this site makes them so much easier! I saved the working session for trying to match the first part of that long totals line. As you can see, you can’t just match “inet”, or “inet” plus a digit; you also have to account for the “small.” prefix. Using RegEx101 and trial and error I came up with the following regular expression.

Value INT (([a-z]+\.)?[a-z]+(\d)?\.\d+)

inet.0: 762484 destinations, 1079411 routes (762477 active, 0 holddown, 12 hidden)

inet6.0: 66912 destinations, 103194 routes (66897 active, 0 holddown, 30 hidden)
    Direct: 3 routes, 3 active

small.inet6.0: 31162 destinations, 31162 routes (31162 active, 0 holddown, 0 hidden)
    BGP: 31162 routes, 31162 active

Let’s break it down…

The diagram below breaks the regex down into the key sections and numbers them. At the bottom you can see the actual text we are trying to parse and the numbers above indicate which section of the regex picked up the text we were interested in.

Breaking down the regular expression to extract the interface identifier (inet.x) for your TextFSM Template

The regex for INT (inet.x) was by far the most complicated. See sections 3 and 4 above. The rest of the line is far simpler; you just need to make sure you have it exactly as it appears in the raw text. Note that the parentheses, which are part of the raw show command output, must also be escaped, just like the period.
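
If you want to sanity check the pattern outside of RegEx101, a throwaway snippet with Python’s re module (not part of the template or the script) shows it picking up all three flavors of table name:

import re

# The INT pattern from the template, with the literal periods escaped
int_pattern = re.compile(r"(([a-z]+\.)?[a-z]+(\d)?\.\d+)")

for sample in ["inet.0", "inet.2", "small.inet.0", "inet6.0", "small.inet6.0"]:
    match = int_pattern.match(sample)
    print(sample, "->", match.group(1) if match else "no match")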

Here is the TextFSM Template so far:

Value Filldown ASN (\d+)
Value Filldown RTRID (\S+)
Value INT (([a-z]+\.)?[a-z]+(\d)?\.\d+)
Value DEST (\d+)
Value Required ROUTES (\d+)
Value ACTIVE (\d+)
Value HOLDDOWN (\d+)
Value HIDDEN (\d+)

Start
  ^Autonomous system number:\s+${ASN}
  ^Router ID:\s+${RTRID}
  ^${INT}:\s+${DEST}\s+destinations,\s+${ROUTES}\s+routes\s+\(${ACTIVE}\s+active,\s+${HOLDDOWN}\s+holddown,\s+${HIDDEN}\s+hidden\) -> Record

…and the resulting structured data:

(txtfsm3) Claudias-iMac:textfsm3 claudia$ python test_textfsm.py junos_show_route_summary.template junos_show_route_summary.txt
TextFSM Results Header:
['ASN', 'RTRID', 'INT', 'DEST', 'ROUTES', 'ACTIVE', 'HOLDDOWN', 'HIDDEN']
['2495', '164.113.193.221', 'inet.0', '762484', '1079411', '762477', '0', '12']
['2495', '164.113.193.221', 'inet.2', '3073', '3073', '3073', '0', '0']
['2495', '164.113.193.221', 'small.inet.0', '116371', '116377', '116371', '0', '0']
['2495', '164.113.193.221', 'inet6.0', '66912', '103194', '66897', '0', '30']
['2495', '164.113.193.221', 'small.inet6.0', '31162', '31162', '31162', '0', '0']

A few things to highlight. I used the ‘Filldown’ keyword for ASN and RTRID so that each “record” would have that information. The ‘Filldown’ keyword takes a value that appears once and duplicates it in subsequent records. If nothing else, it identifies the router from which the entry came, but it also simplifies some things you might want to do down the line, since each “record” has all the data. I also used the ‘Required’ keyword for ROUTES to get rid of the empty last row that is generated when you use ‘Filldown’.
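
To make the effect of ‘Filldown’ concrete: without it, the ASN and Router ID would only appear in the first record and then be cleared, so a row like the inet.2 entry above would come back looking something like this (illustrative, not actual script output):

['', '', 'inet.2', '3073', '3073', '3073', '0', '0']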

Almost there! We just need to pick up the source routes under each totals line.

Value SOURCE (\w+)
Value SRC_ROUTES (\d+)
Value SRC_ACTIVE (\d+)

Here is what the final (for now anyway) template looks like:

Value Filldown ASN (\d+)
Value Filldown RTRID (\S+)
Value Filldown INT (([a-z]+\.)?[a-z]+(\d)?\.\d+)
Value DEST (\d+)
Value ROUTES (\d+)
Value ACTIVE (\d+)
Value HOLDDOWN (\d+)
Value HIDDEN (\d+)
Value SOURCE (\w+)
Value SRC_ROUTES (\d+)
Value SRC_ACTIVE (\d+)

Start
  ^Autonomous system number:\s+${ASN}
  ^Router ID:\s+${RTRID}
  ^${INT}:\s+${DEST}\s+destinations,\s+${ROUTES}\s+routes\s+\(${ACTIVE}\s+active,\s+${HOLDDOWN}\s+holddown,\s+${HIDDEN}\s+hidden\) -> Record
  ^\s+${SOURCE}:\s+${SRC_ROUTES}\s+routes,\s+${SRC_ACTIVE}\s+active -> Record

A few highlights. Because I wanted to store the source routes in a different value (SRC_ROUTES), I had to remove Required from ROUTES in order to pick up those rows. I now have an extra row at the end but I can live with that for now. I also added Filldown to INT so that it’s clear where the source information came from.

(txtfsm3) Claudias-iMac:textfsm3 claudia$ python test_textfsm.py junos_show_route_summary.template junos_show_route_summary.txt

TextFSM Results Header:
['ASN', 'RTRID', 'INT', 'DEST', 'ROUTES', 'ACTIVE', 'HOLDDOWN', 'HIDDEN', 'SOURCE', 'SRC_ROUTES', 'SRC_ACTIVE']
['2495', '164.113.193.221', 'inet.0', '762484', '1079411', '762477', '0', '12', '', '', '']
['2495', '164.113.193.221', 'inet.0', '', '', '', '', '', 'Direct', '1', '1']
['2495', '164.113.193.221', 'inet.0', '', '', '', '', '', 'Local', '1', '1']
['2495', '164.113.193.221', 'inet.0', '', '', '', '', '', 'BGP', '1079404', '762470']
['2495', '164.113.193.221', 'inet.0', '', '', '', '', '', 'Static', '5', '5']
['2495', '164.113.193.221', 'inet.2', '3073', '3073', '3073', '0', '0', '', '', '']
['2495', '164.113.193.221', 'inet.2', '', '', '', '', '', 'BGP', '3073', '3073']
['2495', '164.113.193.221', 'small.inet.0', '116371', '116377', '116371', '0', '0', '', '', '']
['2495', '164.113.193.221', 'small.inet.0', '', '', '', '', '', 'BGP', '116377', '116371']
['2495', '164.113.193.221', 'inet6.0', '66912', '103194', '66897', '0', '30', '', '', '']
['2495', '164.113.193.221', 'inet6.0', '', '', '', '', '', 'Direct', '3', '3']
['2495', '164.113.193.221', 'inet6.0', '', '', '', '', '', 'Local', '2', '2']
['2495', '164.113.193.221', 'inet6.0', '', '', '', '', '', 'BGP', '103185', '66888']
['2495', '164.113.193.221', 'inet6.0', '', '', '', '', '', 'Static', '4', '4']
['2495', '164.113.193.221', 'small.inet6.0', '31162', '31162', '31162', '0', '0', '', '', '']
['2495', '164.113.193.221', 'small.inet6.0', '', '', '', '', '', 'BGP', '31162', '31162']
['2495', '164.113.193.221', 'small.inet6.0', '', '', '', '', '', '', '', '']

The test_textfsm.py file will save your output into a text file as well as into a CSV file.
I did try using ROUTES for both sections and making it Required again. This got rid of the extra empty row but it really impacts readability: I would have to keep track of how ROUTES was used in each row because I would lose the SRC_ROUTES distinction. That is a far greater sin, in my opinion, than an empty row at the end, which is clearly just an empty row.
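
For what it’s worth, the CSV side of a script like this is just Python’s csv module pointed at the parsed rows. A rough sketch (with a placeholder row standing in for the real parse results, and not the actual code from test_textfsm.py):

import csv

# In the real script the header and rows come from the TextFSM parse;
# a single placeholder row keeps this sketch self-contained.
header = ['ASN', 'RTRID', 'INT', 'DEST', 'ROUTES', 'ACTIVE', 'HOLDDOWN', 'HIDDEN']
rows = [['2495', '164.113.193.221', 'inet.0', '762484', '1079411', '762477', '0', '12']]

with open("output.csv", "w", newline="") as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(header)   # column names first
    writer.writerows(rows)    # then one line per parsed record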