Configuration Creation with Nornir

I tend to assess automation tools in four different contexts which, taken together, form a very general networking and automation workflow:

  • Discovery
    • How easy is it to find out about the network, document its configuration (the configuration of a device itself) and state (show commands “snapshotting” its state)?
  • Configuration Creation
    • How easy is it to generate configurations for a device, given a template?
  • Configuration Application
    • How easy is it to apply a set of configuration commands to one or more devices based on criteria?
  • Verification & Testing
    • How easy is it to audit and validate configurations and script results?
    • Yes, unglamorous though this may be, it is a vital function for any network engineer, and I would say doubly so with automation.

We looked at step 1, discovery, in the introduction to Nornir. This post covers step 2 in that workflow as I become more familiar with Nornir.

In my initial notes about Nornir I mentioned a few areas where Nornir really seemed to shine. Since then, I’ve had occasion to truly appreciate its native Python roots. I recently worked with a client where I was not able to install Ansible in their environment, but they had no issues with Python. Nornir saves the day!

In keeping with my “assessment methodology” (trust me, that sounds far more rigorous than it is), my first use of Nornir (then called Brigade) involved running the NAPALM get_facts getter against a couple of network devices and then decomposing the returned data (figuring out what it is and how to get to the data). In this way, I was easily able to discover key facts about all the network devices in my inventory and return them as structured data.

Why do we talk about “structured data” so often? It’s our way of saying you don’t have to parse the ever-changing stream of data you get back from some network devices. Perhaps it is more accurate to say that someone has already parsed the data for us and is returning it in a nice data structure that we can easily manipulate (a list, a dictionary, or more commonly a combination of both). For today’s task we are going to parse the unstructured data we get back from each device ourselves, so we can truly appreciate all the heavy lifting that tools like Napalm do for us.

It drove me crazy that the first thing everyone always taught in Ansible was how to generate configs because that is not what I found powerful about it. For quite a while all anyone (in the networking community at least) ever learned to do with Ansible was generate configs! So I was pleased that most of the early Nornir examples started with what I call “discovery”. However, now it is time to look at configuration creation with Nornir.

Let’s get started.

I have a simple task I want to accomplish. I need to evaluate all of my switches and remove any vlans that are not in use. I also want to make sure I have a set of standard vlans configured on each switch:

  • 10 for Data
  • 100 for Voice
  • 300 for Digital Signage
  • 666 for User Static IP Devices

We will take this one step at a time. 

First we will query our devices for the data that we need to help us decide what vlans to keep and what vlans to remove. 

We will then take that data and generate a customized configuration “snippet” or set of commands that we can later apply to each device to achieve our desired result.  

Each switch will only have vlans that are in use and a standard set of vlans supporting voice, data, digital signage, and user devices with static IPs. As an added wrinkle, I have some devices that are only accessible via Telnet (you would be surprised at how often I find this to be the case.)

I’m not doing too much here with idempotency or “declarative” networking but I find that I understand things a bit better if I look at it from the current “classical” perspective and then look at how these tools can help leapfrog me into a much more efficient way of doing things.

This little automation exercise starts to bring in a variety of tools which will help us accomplish our task.

  • Nornir, and Python of course, provide our ready-made inventory, connection, and task environment. With Nornir in place I can now perform tasks on any subset of my inventory. We won’t cover filtering here, but know it is an available feature.
  • Napalm, easily accessible to us via Nornir, provides the connectivity method (napalm_cli) that allows us to query our devices in real time and obtain the data we need to achieve our “vlan cleanup & standardization” task.
  • TextFSM and the NetworkToCode parsing templates allow us to extract our data as structured data so we can easily manipulate it and apply logic. We need this because Napalm does not yet make a “get vlans” getter available, so we have to do this ourselves.
    • Note that for some devices we might be able to use the interface getter for this data but I think there is great value in knowing how to do this ourselves should the need arise. We can’t have Mr. Barroso and team do all of our work for us!
  • Jinja2, also available to us via the Nornir framework, will allow us to generate the customized configuration snippets.

Let’s see where they come into play in our task breakdown.

Query devices for data

  • Environment set up via Nornir
  • Connectivity method used via a standard Nornir task using Napalm CLI (a minimal sketch of this step follows)
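
To make that query step concrete, here is a minimal sketch, assuming a Nornir 2.x-style install and a config.yaml that points at your inventory (in Nornir 3.x the napalm_cli task and print_result live in the separate nornir_napalm and nornir_utils plugin packages). File names are illustrative:

from nornir import InitNornir
from nornir.plugins.tasks.networking import napalm_cli
from nornir.plugins.functions.text import print_result

# Build the inventory, connection, and task environment from config.yaml
nr = InitNornir(config_file="config.yaml")

# Run "show vlan" on every host in the inventory via the NAPALM CLI task
result = nr.run(task=napalm_cli, commands=["show vlan"])

# Pretty-print the per-host results (still raw, unstructured CLI output at this point)
print_result(result)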

Analysis

Analyze the data retrieved from each device and determine a vlan “disposition” (keeping or removing)

  • Taking the data provided by the Napalm CLI (and the “show vlan” command we sent), we were able to quickly parse it via TextFSM and an existing template from the NetworkToCode TextFSM Template Repository.
  • With our data now in a Python data structure, we were able to apply our “business rules logic” to get an understanding of the changes required in each device (a sketch of this step follows the sample output below).
== Parsing vlan output for device arctic-as01 using TextFSM and NetworkToCode template.
================================================================================

VLAN_ID  NAME             STATUS   TOTAL_INT_IN_VLAN  ACTION
1        default          active   7                  Keeping this vlan
10       Management_Vlan  active   1                  Keeping this vlan
20       Web_Tier         active   0                  This vlan will be removed!
30       App_Tier         active   0                  This vlan will be removed!
40       DB_Tier          active   0                  This vlan will be removed!

================================================================================
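
Here is a minimal sketch of that analysis step, assuming the raw “show vlan” output has already been collected per device and that the NetworkToCode cisco_ios_show_vlan TextFSM template is available locally. The standard-vlan set, helper name, and column names are illustrative, not the exact code from the repository:

import textfsm

STANDARD_VLANS = {10: "Data_Vlan", 100: "Voice_Vlan",
                  300: "Digital_Signage", 666: "User_Static_IP_Devices"}

def vlan_disposition(show_vlan_output, template_path="cisco_ios_show_vlan.textfsm"):
    """Parse 'show vlan' text and decide which vlans to keep or remove."""
    with open(template_path) as template:
        fsm = textfsm.TextFSM(template)
    # Each row lines up with fsm.header (e.g. VLAN_ID, NAME, STATUS, INTERFACES)
    vlans = [dict(zip(fsm.header, row)) for row in fsm.ParseText(show_vlan_output)]

    keep, remove = [], []
    for vlan in vlans:
        vlan_id = int(vlan["VLAN_ID"])
        in_use = len(vlan.get("INTERFACES", [])) > 0
        # Business rules: never touch vlan 1, keep the standard set, keep anything in use
        if vlan_id == 1 or vlan_id in STANDARD_VLANS or in_use:
            keep.append(vlan_id)
        else:
            remove.append(vlan_id)
    return keep, remove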

Configuration Creation

Generate a customized set of configuration commands to achieve our desired vlan state

  • Using the built-in Nornir task that allows us to generate configurations based on Jinja2 templates, we used the logic above to generate specific configuration commands for each device that, when applied, will achieve our desired state (a rough sketch follows).
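
As a rough sketch of that step, here is how Nornir 2.x’s template_file task can be used; the template name, path, and variables are illustrative, not the exact code from the repository:

from nornir.plugins.tasks.text import template_file

def build_vlan_config(task, remove_vlans, standard_vlans):
    # Render a per-device snippet from ./templates/vlans.j2; extra kwargs become template variables
    rendered = task.run(
        task=template_file,
        template="vlans.j2",
        path="./templates",
        remove_vlans=remove_vlans,
        standard_vlans=standard_vlans,
    )
    # Keep the rendered snippet on the host object for a later apply or save-to-disk step
    task.host["vlan_config"] = rendered.result

# A vlans.j2 along these lines would produce a snippet like the one shown below:
#   ! For device {{ host.name }}
#   {% for vlan in remove_vlans %}
#   no vlan {{ vlan }}
#   {% endfor %}
#   {% for id, name in standard_vlans.items() %}
#   vlan {{ id }}
#    name {{ name }}
#   {% endfor %}

In practice, the keep/remove lists from the analysis step would be stored per host (for example in host data) before a task like this runs.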

Here is the resulting configuration snippet for this particular device.

! For device arctic-as01
!
no vlan 20
no vlan 30
no vlan 40
!
vlan 10
 name Data_Vlan
vlan 100
 name Voice_Vlan
vlan 300
 name Digital_Signage
vlan 666
 name User_Static_IP_Devices

As you can see, Nornir brings together the tools and functions that we use for day-to-day network automation tasks under one native Python wrapper. Where we need some customization or a tool does not exist, it is a simple matter of using Python to bridge that gap!

Let’s add a little more functionality to our original repository and try to gain a better understanding of Nornir.

Please see my nornir-config GitHub repository for the details!

Handy Links:

Nornir Documentation

Cisco Blog – Developer Exploring Nornir, the Python Automation Framework

A quick example of using TextFSM to parse data from Cisco show commands

Nornir – A New Network Automation Framework

nornir (formerly brigade) – A new network automation framework

Before getting started, let me say that I’m a big fan of Ansible. It is one of my go-to automation frameworks. Having said that, there have been use cases where I’ve run into some of the limitations of Ansible, and to be fair, some of those limitations may have been my own.

By limitations, I don’t mean I could not do what I wanted, but rather that doing what I wanted got a bit more complex than perhaps it should have been, to the point where, when I go back to it in 6 months, I’ll have no idea how I got it to work. These use cases often involve more complex logic than what Ansible handles well with its “simpler” constructs and Domain Specific Language (DSL).

So I was very intrigued to hear that the following automation heavyweights have been working on a new automation framework, Nornir:

  • David Barroso (think NAPALM – the python library not the sticky flammable stuff)
  • Kirk Byers (think Netmiko, Netmiko tools, and teacher extraordinaire – I can say that with full confidence as I’m pretty sure I’ve taken every one of his courses – some twice!)
  • Patrick Ogenstad (think NetworkLore)

Nornir Documentation

As an engineer, one of my favorite questions is “What problem are we trying to solve?” and here is my answer to that question when applied to Nornir.

Simpler, more complex logic

An oxymoron?  Certainly. Read on and I will try to explain.

By using pure Python, this framework solves the complex-logic frustrations you may ultimately encounter with Ansible. If you can do it with Python, or have done it with Python, you are good to go. That’s not to say you can’t take your Python script and turn it into an Ansible module, but Nornir may save you that step.

Domain specific languages can be both a blessing and a curse. They can allow you the illusion that you are not programming and so facilitate getting started if “programming” is not your cup of tea. They can help you get productive very quickly, but eventually you may hit a tipping point where the cost of doing what you need to do with the tools and features in the DSL is too high in terms of complexity and supportability. Nornir “simplifies” that complex logic by giving you access to all the tools you have in native Python. As a side, and not insignificant, benefit, you might actually remember what your code does when you get back to it in 6 months.

Native on all platforms

Many companies only provide Windows laptops and so I’ve always tried to be very mindful of that when developing solutions. 

Scenario: I’ve got all of these Ansible playbooks that we can use to standardize network discovery, build configurations, and apply configurations, but most of my colleagues have Windows laptops. While I was able to develop and run these on my Mac, where I can easily run an Ansible control server, now we need to get an Ansible control server onto a bunch of Windows laptops (something Ansible does not natively support).

There are certainly solutions for this (see Using Docker as an Ansible and Python platform for Network Engineers) but that’s an extra step. There may be other reasons for taking that step, but Nornir is a pip-installable module, so you don’t need to.

I spent a Sunday afternoon dabbling in Nornir and it was well worth the time. It took me about 45 minutes to get things set up on my Windows system and run the example Patrick Ogenstad included in his post. While Nornir is said to support Python 2.7 (Python 3.6 is recommended), I did have installation issues even with the latest pip installed, and that was a significant part of the 45 minutes. Once I set up a Python 3 virtual environment, it worked flawlessly. You can see my work in this GitHub repository.

This is an exciting new framework with a great deal of promise which we can add to our automation arsenal!

Over the next few weeks (or months) I’ll continue to familiarize myself with Nornir and report back.

This was originally published May 7, 2018 on LinkedIn but has been updated to support the latest Nornir, and the scripts have been renamed so that there is no confusion between Brigade and Nornir. Let me say, though, that this renaming is tied with the renaming of Cisco’s Spark platform to Webex Teams as the worst ever, or at least of the last decade.

Original Brigade GitHub repository

Part 2 of this Series – Configuration Creation with Nornir

Pandas for Network Engineers (Who doesn’t love Pandas?)

The module not the mammal!

My original title for this article was going to be *Decomposing Pandas* as a follow on to *Decomposing Data Structures* but I was advised against that name. Go figure.

One of the things I love most about Python is that it’s always waiting for me to get just a little bit better so it can show me a slightly smarter way to do something. Pandas is the latest such example.

Pandas is a powerful data science Python library that excels at manipulating multidimensional data.

Why is this even remotely interesting to me as a network engineer?

Well, that’s what Excel does, right?

I spend more time than I care to admit processing data in Excel. I find that Excel is always the lowest common denominator. I understand why, and often I’m a culprit myself, but eventually one grows weary of all the data being in a spreadsheet and having to manipulate it. I’m working on the former, and Pandas is helping with the latter.

Google around enough for help on processing spreadsheets and you will come across references to the Pandas Python module.

If you are anything like me, you go through some or all of these stages:

  • You dismiss it as irrelevant to what you are trying to do
  • You dismiss it because it seems to be about big data, analytics, and scientific analysis of data (not your thing, right?)
  • As you continue to struggle with what got you here in the first place (there has got to be a better way to deal with this spreadsheet data) you reconsider. So you try to do some processing in Pandas and pull a mental muscle… and what the heck is this NaN thing that keeps making my program crash? Basically, you find yourself way, way out of your comfort zone (well… I did)!
  • You determine that your limited Python skills are not up to something quite this complex… after all, you know just enough Python to do the automation stuff you need to do and you are not a data scientist.

Finally, in a fit of desperation as you see all the Excel files you have to process, you decide that a python module is not going to get the better of you and you give it another go!

So here I am, on the other side of that brain sprain, and better for it, as is usually the case.

What is possible with Pandas…

Once you get the hang of it, manipulating spreadsheet-like data sets becomes so much simpler with Pandas. That’s true for any data set, not just ones from spreadsheets. In fact, in the examples below, the data set comes from parsing show commands with TextFSM.

Knowing how to work with Pandas, even in a limited fashion as is the case with me, is going to be a handy skill to have for any Network Engineer who is (or is trying to become) conversant in programmability & automation.

My goal here is not to teach you Pandas, as there is quite a lot of excellent material out there to do that. I’ve highlighted the content which helped me the most in the “Study Guide” section at the end.

My goal is to share what I’ve been able to do with it as a Network Engineer, what I found most useful as I tried to wrap my head around it, and my own REPL work.

Let’s look at something simple. I need to get the ARP table from a device and “interrogate” the data.

In this example, I have a text file with the output of the “show ip arp” command which I’ve parsed with TextFSM.

Here is the raw data returned from the TextFSM parsing script:

# Executing textfsm strainer function only to get data
strained, strainer = basic_textfsm.textfsm_strainer(template_file, output_file, debug=False)

In [1]: strained                                                                                                                                                                                                            
Out[1]:
[['Internet', '10.1.10.1', '5', '28c6.8ee1.659b', 'ARPA', 'Vlan1'],
['Internet', '10.1.10.11', '4', '6400.6a64.f5ca', 'ARPA', 'Vlan1'],
['Internet', '10.1.10.10', '172', '0018.7149.5160', 'ARPA', 'Vlan1'],
['Internet', '10.1.10.21', '0', 'a860.b603.421c', 'ARPA', 'Vlan1'],
['Internet', '10.1.10.37', '18', 'a4c3.f047.4528', 'ARPA', 'Vlan1'],
['Internet', '10.10.101.1', '-', '0018.b9b5.93c2', 'ARPA', 'Vlan101'],
['Internet', '10.10.100.1', '-', '0018.b9b5.93c1', 'ARPA', 'Vlan100'],
['Internet', '10.1.10.102', '-', '0018.b9b5.93c0', 'ARPA', 'Vlan1'],
['Internet', '71.103.129.220', '4', '28c6.8ee1.6599', 'ARPA', 'Vlan1'],
['Internet', '10.1.10.170', '0', '000c.294f.a20b', 'ARPA', 'Vlan1'],
['Internet', '10.1.10.181', '0', '000c.298c.d663', 'ARPA', 'Vlan1']]

Note: don’t read anything into the variable name strained. The function I use to parse the data is called textfsm_strainer because I “strain” the data through TextFSM to get structured data out of it, so I put the resulting parsed data from that function into a variable called strained.
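
The basic_textfsm module itself isn’t reproduced in this post, but a minimal equivalent of that helper might look like the sketch below. It returns the parsed rows plus the TextFSM object so the column headers remain available, mirroring how strained and strainer are used here:

import textfsm

def textfsm_strainer(template_file, output_file, debug=False):
    """Strain raw show-command output through a TextFSM template.

    Returns (parsed_rows, fsm) so callers can use fsm.header for column names.
    """
    with open(template_file) as tf, open(output_file) as of:
        fsm = textfsm.TextFSM(tf)
        parsed_rows = fsm.ParseText(of.read())
    if debug:
        print(fsm.header)
        print(parsed_rows)
    return parsed_rows, fsm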

Here is that data in a Pandas Data Frame:

# strained is the parsed data from my TextFSM function and the first command below
# loads that parsed data into a Pandas Data Frame called "df"

In [1]: df = pd.DataFrame(strained, columns=strainer.header)
In [2]: df
Out[2]:
    PROTOCOL         ADDRESS  AGE             MAC  TYPE INTERFACE
0   Internet       10.1.10.1    5  28c6.8ee1.659b  ARPA     Vlan1
1   Internet      10.1.10.11    4  6400.6a64.f5ca  ARPA     Vlan1
2   Internet      10.1.10.10  172  0018.7149.5160  ARPA     Vlan1
3   Internet      10.1.10.21    0  a860.b603.421c  ARPA     Vlan1
4   Internet      10.1.10.37   18  a4c3.f047.4528  ARPA     Vlan1
5   Internet     10.10.101.1    -  0018.b9b5.93c2  ARPA   Vlan101
6   Internet     10.10.100.1    -  0018.b9b5.93c1  ARPA   Vlan100
7   Internet     10.1.10.102    -  0018.b9b5.93c0  ARPA     Vlan1
8   Internet  71.103.129.220    4  28c6.8ee1.6599  ARPA     Vlan1
9   Internet     10.1.10.170    0  000c.294f.a20b  ARPA     Vlan1
10  Internet     10.1.10.181    0  000c.298c.d663  ARPA     Vlan1

I now have a spreadsheet-like data structure with columns and rows that I can query and manipulate.


My first question:

What are all the IPs in Vlan1?

Just Python

Before Pandas, I would initialize an empty list to hold the one or more IPs, then iterate through the data structure (strained in this example) and, where the interface “column” value (which in this list of lists is at index 5) was equal to 'Vlan1', append that IP to the list. The IP is at index 1 in each item of the strained list.

# Using Python Only
print("\n\tUsing Python only..")
vlan1ips = []
for line in strained:
    if line[5] == 'Vlan1':
        vlan1ips.append(line[1])
print(f"{vlan1ips}")

The resulting output would look something like this:

['10.1.10.1', '10.1.10.11', '10.1.10.10', '10.1.10.21', '10.1.10.37', '10.1.10.102', '71.103.129.220', '10.1.10.170', '10.1.10.181']

Python and Pandas

Using a Pandas data frame df to hold the parsed data:

pandas_vlan1ips = df['ADDRESS'].loc[df['INTERFACE'] == 'Vlan1'].values

The resulting output from the one liner above would look something like this:

 ['10.1.10.1' '10.1.10.11' '10.1.10.10' '10.1.10.21' '10.1.10.37'
'10.1.10.102' '71.103.129.220' '10.1.10.170' '10.1.10.181']

Same output with a single command!

Python List Comprehension

For those more conversant with Python, you could say that list comprehension is just as efficient.

# Using list comprehension
print("Using Python List Comprehension...")
lc_vlan1ips = [line[1] for line in strained if line[5] == 'Vlan1']
print(f"{lc_vlan1ips}")

Results in:

Using Python List Comprehension...
['10.1.10.1', '10.1.10.11', '10.1.10.10', '10.1.10.21', '10.1.10.37', '10.1.10.102', '71.103.129.220', '10.1.10.170', '10.1.10.181']

So yes, list comprehension gets us down to one line, but I find it a bit obscure to read, and a week later I will have no idea what is in line[5] or line[1].

I could turn the data into a list of dictionaries so that, rather than using positional indexes, line[1] becomes line['ADDRESS'] and line[5] becomes line['INTERFACE'], which would make both the list comprehension and the basic Python easier to read, but now we’ve added lines to the script.
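
For illustration, that conversion can reuse the headers from the TextFSM object, so the keys end up being the template’s column names (ADDRESS, INTERFACE, and so on):

# Convert the list of lists into a list of dictionaries keyed by column name
strained_dicts = [dict(zip(strainer.header, line)) for line in strained]

# The comprehension now reads much more naturally
lc_vlan1ips = [row["ADDRESS"] for row in strained_dicts if row["INTERFACE"] == "Vlan1"]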

Finally, yes, it’s one line, but I’m still iterating over the data myself.

Pandas is set up to do all the iteration for me and lets me refer to data by name or by position “out of the box” and without any extra steps.
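
For example, the same cell in the Data Frame above can be reached either way:

# By name: row label 0, column 'ADDRESS'
df.loc[0, "ADDRESS"]   # '10.1.10.1'

# By position: first row, second column
df.iloc[0, 1]          # '10.1.10.1'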

Let’s decompose the one line of code:

If you think of this expression as a filter sandwich, the df['ADDRESS'] and .values are the bread, and the middle .loc[df['INTERFACE'] == 'Vlan1'] part that filters is the main ingredient.

Without the middle part you would have a Pandas Series, or list, of all the IPs in the ARP table. Basically, you get the entire contents of the 'ADDRESS' column in the data frame without any filtering.

When you “qualify” df['ADDRESS'] with .loc[df['INTERFACE'] == 'Vlan1'] you filter the ADDRESS column in the data frame for just those records where INTERFACE is 'Vlan1', and you return only the IP values by using .values.

Now, this will return a numpy.ndarray, which might be great for some subsequent statistical analysis, but as network engineers our needs are simple.

I’m using iPython in the examples below as you can see from the “In” and “Out” line prefixes.

In [1]: pandas_vlan1ips = df['ADDRESS'].loc[df['INTERFACE'] == 'Vlan1'].values

In [2]: type(pandas_vlan1ips)
Out[2]: numpy.ndarray

I would like my list back as an actual Python list, and that’s no problem for Pandas.


In [3]: pandas_vlan1ips = df['ADDRESS'].loc[df['INTERFACE'] == 'Vlan1'].to_list()

In [4]: type(pandas_vlan1ips)
Out[4]: list

In [5]: pandas_vlan1ips
Out[5]:
['10.1.10.1',
 '10.1.10.11',
 '10.1.10.10',
 '10.1.10.21',
 '10.1.10.37',
 '10.1.10.102',
 '71.103.129.220',
 '10.1.10.170',
 '10.1.10.181']

You know what would be really handy? A list of dictionaries where I can reference both the IP ADDRESS and the MAC as keys.

In [5]: vlan1ipmac_ldict = df[['ADDRESS', 'MAC']].to_dict(orient='records')

In [6]: type(vlan1ipmac_ldict)
Out[6]: list

In [7]: vlan1ipmac_ldict
Out[7]:
[{'ADDRESS': '10.1.10.1', 'MAC': '28c6.8ee1.659b'},
 {'ADDRESS': '10.1.10.11', 'MAC': '6400.6a64.f5ca'},
 {'ADDRESS': '10.1.10.10', 'MAC': '0018.7149.5160'},
 {'ADDRESS': '10.1.10.21', 'MAC': 'a860.b603.421c'},
 {'ADDRESS': '10.1.10.37', 'MAC': 'a4c3.f047.4528'},
 {'ADDRESS': '10.10.101.1', 'MAC': '0018.b9b5.93c2'},
 {'ADDRESS': '10.10.100.1', 'MAC': '0018.b9b5.93c1'},
 {'ADDRESS': '10.1.10.102', 'MAC': '0018.b9b5.93c0'},
 {'ADDRESS': '71.103.129.220', 'MAC': '28c6.8ee1.6599'},
 {'ADDRESS': '10.1.10.170', 'MAC': '000c.294f.a20b'},
 {'ADDRESS': '10.1.10.181', 'MAC': '000c.298c.d663'}]

In [8]: len(vlan1ipmac_ldict)
Out[8]: 11

MAC address Lookup

Not impressed yet? Let’s see what else we can do with this Data Frame.

I have a small function that performs MAC address lookups to get the Vendor OUI.

This function is called get_oui_macvendors(): you pass it a MAC address and it returns the vendor name.

It uses the MacVendors.co API.
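
The function itself isn’t shown in this post, but a minimal version might look like the sketch below. The endpoint URL and the JSON keys are assumptions on my part; check the MacVendors.co documentation for the exact response format:

import requests

def get_oui_macvendors(mac):
    """Return the vendor name for a MAC address via the MacVendors.co API.

    The URL and response keys below are assumptions; adjust to the actual API.
    """
    url = f"https://macvendors.co/api/{mac}"
    try:
        response = requests.get(url, timeout=5)
        response.raise_for_status()
        return response.json().get("result", {}).get("company", "Unknown")
    except requests.RequestException:
        return "Lookup failed"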

I’d like to add a column of data to our Data Frame with the Vendor OUI for each MAC address.

In the one line below, I’ve added a column to the data frame titled ‘OUI’ and populated its value by performing a lookup on each MAC and using the result from the get_oui_macvendors function.

df['OUI'] = df['MAC'].map(get_oui_macvendors)

The left side of the expression references a column in the Data Frame which does not exist, so it will be added.

The right side takes the existing MAC column in the data frame, runs each MAC address through the get_oui_macvendors function to get the Vendor OUI, and “maps” that result into the new OUI “cell” for that MAC’s row in the data frame.

(Diagram: what is happening under the hood in the one-line command that adds the column.)

Now we have an updated Data Frame with a new OUI column giving the vendor code for each MAC.

In [1]: df                                                                                                                                                                                                                                                      
 Out[1]: 
     PROTOCOL         ADDRESS  AGE             MAC  TYPE INTERFACE                 OUI
 0   Internet       10.1.10.1    5  28c6.8ee1.659b  ARPA     Vlan1             NETGEAR
 1   Internet      10.1.10.11    4  6400.6a64.f5ca  ARPA     Vlan1           Dell Inc.
 2   Internet      10.1.10.10  172  0018.7149.5160  ARPA     Vlan1     Hewlett Packard
 3   Internet      10.1.10.21    0  a860.b603.421c  ARPA     Vlan1         Apple, Inc.
 4   Internet      10.1.10.37   18  a4c3.f047.4528  ARPA     Vlan1     Intel Corporate
 5   Internet     10.10.101.1    -  0018.b9b5.93c2  ARPA   Vlan101  Cisco Systems, Inc
 6   Internet     10.10.100.1    -  0018.b9b5.93c1  ARPA   Vlan100  Cisco Systems, Inc
 7   Internet     10.1.10.102    -  0018.b9b5.93c0  ARPA     Vlan1  Cisco Systems, Inc
 8   Internet  71.103.129.220    4  28c6.8ee1.6599  ARPA     Vlan1             NETGEAR
 9   Internet     10.1.10.170    0  000c.294f.a20b  ARPA     Vlan1        VMware, Inc.
 10  Internet     10.1.10.181    0  000c.298c.d663  ARPA     Vlan1        VMware, Inc.

More questions

Let’s interrogate our data set further.

I want a unique list of all the INTERFACE values.

In [3]: df['INTERFACE'].unique()                                                                                                                                                                                                                                
 Out[3]: array(['Vlan1', 'Vlan101', 'Vlan100'], dtype=object)

How about “Give me a total count of each of the unique INTERFACE values?”

In [4]: df.groupby('INTERFACE').size()                                                                                                                                                                                                                          
 Out[4]: 
 INTERFACE
 Vlan1      9
 Vlan100    1
 Vlan101    1
 dtype: int64

Let’s take it down a level and get unique totals based on INTERFACE and vendor OUI.

In [2]: df.groupby(['INTERFACE','OUI']).size()                                                                                                                                                                                                                  
 Out[2]: 
 INTERFACE  OUI               
 Vlan1      Apple, Inc.           1
            Cisco Systems, Inc    1
            Dell Inc.             1
            Hewlett Packard       1
            Intel Corporate       1
            NETGEAR               2
            VMware, Inc.          2
 Vlan100    Cisco Systems, Inc    1
 Vlan101    Cisco Systems, Inc    1
 dtype: int64

I could do this all day long!

Conclusion

I’ve just scratched the surface of what Pandas can do and I hope some of the examples I’ve shown above illustrate why investing in learning how to use data frames could be very beneficial. Filtering, getting unique values with counts, even Pivot Tables are possible with Pandas.
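
For instance, a pivot-style summary of the ARP Data Frame above is a single call (one of many ways to slice it, not the only one):

# Count of MAC addresses per INTERFACE and vendor OUI
pivot = df.pivot_table(index="INTERFACE", columns="OUI", values="MAC", aggfunc="count")
print(pivot.fillna(0).astype(int))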

Don’t be discouraged by its seeming complexity like I was.

Don’t discount it because it does not seem to be applicable to what you are trying to do as a Network Engineer, like I did. I hope I’ve shown how very wrong I was and that it is very applicable.

In fact, this small example and some of the other content in this repository comes from an actual use case.

I’m involved in several large refresh projects and our workflow is what you would expect.

  1. Snapshot the environment before you change out the equipment
  2. Perform some basic reachability tests
  3. Replace the equipment (switches in this case)
  4. Perform basic reachability tests again
  5. Compare PRE and POST state and confirm that all the devices you had just before you started are back on the network.
  6. Troubleshoot as needed

As you can see if you delve into this repository, it’s heavy on ARP and MAC data manipulation so that we can automate most of the workflow I’ve described above. Could I have done it without Pandas? Yes. Could I have done it as quickly and efficiently, with code that I will have some shot of understanding in a month, without Pandas? No.
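
As a taste of what that PRE/POST comparison looks like with Pandas, an outer merge does most of the work. This is a simplified sketch; pre_df and post_df are assumed to be Data Frames built from the two ARP snapshots the same way as df above:

# Outer-merge the PRE and POST ARP tables on MAC and flag where each row was seen
comparison = pre_df.merge(post_df, on="MAC", how="outer",
                          suffixes=("_pre", "_post"), indicator=True)

# Devices present before the change but missing afterwards need investigation
missing_after = comparison[comparison["_merge"] == "left_only"]
print(missing_after[["MAC", "ADDRESS_pre", "INTERFACE_pre"]])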

I hope I’ve either put Pandas on your radar as a possible tool to use in the future or actually gotten you curious enough to take the next steps.

I really hope that the latter is the case and I encourage you to just dive in.

The companion repository on GitHub is intended to help and give you examples.


Next Steps

The “Study Guide” links below have some very good and clear content to get you started. Of all the content out there, these resources were the most helpful for me.

Let me also say that it took a focused effort to get to the point where I was doing useful work with Pandas, and I’ve only just scratched the surface. It was worth every minute! What I have described here and in this repository are the things that were useful for me as a Network Engineer.

Once you’ve gone through the Study Guide links and any others that you have found, you can return to this repository to see examples. In particular, this repository contains a Python script called arp_interrogate.py.

It goes through loading the ARP data from the “show ip arp” command, parsing it, and creating a Pandas Data Frame.

It then goes through a variety of questions (some of which you have seen above) to show how the Data Frame can be “interrogated” to get to information that might prove useful.

There are comments throughout which are reminders for me and which may be useful to you.

The script is designed to run with data in the repository by default but you can pass it your own “show ip arp” output with the -o option.

Using the -i option will drop you into iPython with all of the data still in memory for you to use. This will allow you to interrogate the data in the Data Frame yourself.

If you would like to use it make sure you clone or download the repository and set up the expected environment.

Options for the arp_interrogate.py script:

(pandas) Claudias-iMac:pandas_neteng claudia$ python arp_interrogate.py -h
usage: arp_interrogate.py [-h] [-t TEMPLATE_FILE] [-o OUTPUT_FILE] [-v]
                        [-f FILENAME] [-s] [-i] [-c]

Script Description

optional arguments:
-h, --help           show this help message and exit
-t TEMPLATE_FILE, --template_file TEMPLATE_FILE
                      TextFSM Template File
-o OUTPUT_FILE, --output_file OUTPUT_FILE
                      Full path to file with show command show ip arp output
-v, --verbose         Enable all of the extra print statements used to
                      investigate the results
-f FILENAME, --filename FILENAME
                      Resulting device data parsed output file name suffix
-s, --save           Save Parsed output in TXT, JSON, YAML, and CSV Formats
-i, --interactive     Drop into iPython
-c, --comparison     Show Comparison

Usage: ' python arp_interrogate.py Will run with default data in the
repository'
(pandas) Claudias-iMac:pandas_neteng claudia$

Study Guide

A Quick Introduction to the “Pandas” Python Library

https://towardsdatascience.com/a-quick-introduction-to-the-pandas-python-library-f1b678f34673

Pandas Fundamentals by Paweł Kordek on PluralSight is exceptionally good.

For me this is the class that made all the other classes start to make sense. Note that this class is not free.

There is quite a lot to Pandas and it can be overwhelming (at least it was for me), but this course in particular got me working very quickly and explained things in a very clear way.

Python Pandas Tutorial 2: Dataframe Basics by codebasics <- good for Pandas operations and set_index

Python Pandas Tutorial 5: Handle Missing Data: fillna, dropna, interpolate by codebasics

Python Pandas Tutorial 6. Handle Missing Data: replace function by codebasics

Real Python <- this is a terrific resource for learning Python

There is a lot of content here. Explore at will. The two below I found particularly helpful.

https://realpython.com/search?q=pandas

Intro to DataFrames by Joe James <- great ‘cheatsheet’



What others have shared…

Analyzing Wireshark Data with Pandas


Disclaimer

THE SOFTWARE in the mentioned repository IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

What language for Network Automation?

I’m often asked variants of “What language would you recommend when getting started with Network Automation, and how would you get started?”

As with most things in IT, the answer requires context.

In general, right now I would still pick Python. Go is getting popular but still can’t compete with the huge body of work, modules, and examples available for Python.

However….

Before getting to some suggestions on how I would start learning Python if I had it to do over, let me mention that there are other ways to get started with automation that also involve learning “new syntax” but may seem simpler (at first). Ansible and other automation frameworks use a Domain Specific Language (DSL) which may be simpler to learn if you are doing very basic things, or things that have examples in the “Global Data Store” more commonly known as Google & the Internet!

So if you are new to programming you may want to start with Ansible, which lets you do lots of basic automation with a more abstracted language. All of the resources I mention below also have Ansible training. Try to get one that is recent and geared towards networking, as many Ansible courses are geared towards servers, and while useful, you will wind up spending more time ramping up.

All roads lead to Python today

Having said that, once you start trying to do more complex things you may run into limitations or issues where some Python knowledge would be useful. Ansible is written in Python. Also, if you are new to Linux, then while you may have saved some time with Ansible, you will need to invest some time figuring out how to get around in Linux, as the Ansible control host runs on Linux and its variants. There are many other reasons why being conversant with Linux as a network engineer is important.

If you are serious about “upgrading” your skill set as a network engineer, make sure you are somewhat comfortable with Linux basics, and, while Ansible can get you going with some simple immediate tasks you may need to do for work, get started with Python as soon as you can.

If you are not comfortable moving around in Linux, David Bombal has an excellent intro on YouTube which explains why it’s an important skill to have, and his full course is available via various training options.
Linux for Network Engineers: You need to learn Linux! (Prt 1) Demo of Cisco 9k, Arista EOS & Cumulus


One more point. In the interest of getting you productive as quickly as possible, I have built a series of Docker images that provide an Ansible control server as well as a Python programming environment with many of the modules network engineers tend to use.

Ansible Server in Docker for Network Engineers



So back to the question at hand:

Step 1 (Investment: Time)

I would start with Kirk Byers’ free Learning Python course.

Note: I have no affiliation with him or his company, but he speaks to Network Engineers better than anyone I’ve seen. It is an on-line course of recorded lessons (with a lab), one lesson a week, so from a time perspective it is very easy to “consume” the course while still having an opportunity to interact and ask questions. The course with the community forum is well worth the additional cost. You will make new acquaintances who are interested in what you are doing, and you sometimes run into old friends.

I would also start going through the coursework available for free on DevNet (requires an account).

Intermission (Investment: Time)

Pick a small task and get it working. This step is essential and will help focus your next step.

Spend time working with complex data structures, YAML, and JSON. See my post on “Decomposing Data Structures“.

Step 2 (Investment: Time & Money)

Your next step can take you along various paths. All of these are paid courses and can be done on demand or in a class. There are many options here but I’m listing the options with which I have first hand experience.

Kirk Byers

I really recommend Kirk Byers’ Python for Network Engineers course. I’ve taken it several times!! 😀 You have a lesson a week, and it is taught by a Network Engineer for Network Engineers!

Udemy

I’m also a BIG fan of Udemy, and they have some very good courses as well, at very reasonable price points. Always listen to the free examples and always check the dates before you purchase. Make sure you find out if the Python course is using Python 3, as there are many courses there which use Python 2, and if you are just starting, please start with Python 3!

These courses are at a nominal cost and self-paced but, again, make sure you get a current one that is Python 3.

INE

Sign up for an INE course. These guys are also very good and more self-paced but a bit more costly.

Network to Code

The Network to Code guys have great content in a more formalized setting where you can go for a week.

Note: I have taught for these guys so I do have a loose affiliation which does not lessen the point that they have terrific content.

Using Docker as an Ansible and Python platform for Network Engineers

A quick start guide for using the purpose built Docker images for Ansible and Python

Built for Network Engineers by a Network Engineer

Over the last few years I’ve built up a repository of Docker images to help me learn Ansible. If you are new to Ansible you may not know that, while Ansible can control all manner of devices (Windows, Linux, Network, Virtual or Bare Metal, Cloud, etc.), the Ansible control server itself only runs on Linux. There are ways around that now (Windows 10 Subsystem for Linux and Cygwin) but they are not supported. If you are trying to get started, you first have to stand up a control server. Depending on your familiarity with Linux and virtualization technology, you can spend quite a bit of time going down different avenues just to get to the point where you can run a playbook.

Docker was the answer for me.

These Ansible containers have been built over the years to provide a quick alternative to running an Ansible Control server on a local virtualization VM (VirtualBox or VMware). A container is also handy if you need an Ansible Control server to use via a VPN connection. You will see that many of the test playbooks are playbooks designed to perform discovery on a set of devices (aka run lots of show commands and save the output) and so a common practice is to VPN in (or connect directly) to a client network and quickly perform this discovery task.

I’ve kept the various images with different Ansible versions so I can test playbooks on specific versions. In many cases I work with clients who use a specific version of Ansible, and so it’s handy to be able to test locally on the version they use.

This “Quick Start” cheat sheet is intended to get you up and running quickly with the various Ansible containers. Each offers an Ansible control server and a python environment.

The following Docker images are available on Docker Hub

Docker Images providing an Ansible and Python environment for Network Engineers – Complete List

Select a specific Ansible Version:

* if you are not sure which image to use, go with Bionic Immigrant! It’s the most mature, based on Ubuntu Long Term Support (LTS), supports the automated Documentation examples you may have seen, and includes the Batfish client, Batfish Ansible module/role, and Ansible Network Engine role. It is also the image that will run all of my shared repositories.


Installing Docker

To run any Docker container (aka Docker image) you will need to install Docker Desktop on your Mac or Windows system (see note below about Docker Desktop for Windows and consider Docker Toolbox if you run other virtualization software) or Docker Engine on your linux host. You can use the free Docker Community Edition.

The instructions below focus on Mac and Windows operating systems as those tend to be the most prevalent among Network Engineers (at least the ones I know!).

Install on your Operating System of choice

Installing Docker Desktop on Mac

Installing Docker Desktop on Windows WARNING: The Docker Desktop application is only supported on Windows 10 Pro (or better) 64-bit and requires the Hyper-V and Containers Windows features to be enabled.

This means that other virtualization software that does not support Hyper-V (i.e. VMware Workstation and VirtualBox) will not work while you have Hyper-V enabled, and Docker Desktop won’t work when you have Hyper-V disabled (but VirtualBox and VMware will).

If you have existing Virtualization software installed and which you use, Docker Toolbox for Windows is still available.

For the Linux aficionados:

Installing Docker Engine Community version on Linux


Now that you have Docker installed – Command Cheat Sheet

Now that you have Docker installed, here is a cheat sheet of the commands I find most useful.



Getting Started on Mac with Docker Desktop

Environment:

  • Mac OS X (macOS Sierra Version 10.12.6)
  • Intel based iMac
  • Intel Core i7 4GHz 23 GB Memory

Summary of Steps

  1. Make sure Docker Desktop is installed and running
  2. Open a terminal window and launch your container with the docker run command
  3. Look around the ready built repositories which are cloned into the container to get you started quickly (always remember to git pull to get the latest).

Details

Docker Desktop on Mac OS X

Using Docker Desktop on Mac OS X Video ~13min

  1. Before starting make sure that Docker is installed and running on your Mac.
About Docker

2. Open a terminal window and use the docker run -it command to start the container on your Mac.

Full command to start an interactive session

docker run -it cldeluna/disco-immigrant

The first time you execute this command, the docker image will download and then put you into an interactive bash shell in the container.

-i, --interactive    Keep STDIN open even if not attached
-t, --tty            Allocate a pseudo-TTY

This will basically take over your terminal window so if you need to do something else on your system open up a new or different terminal window. Check the command cheat sheet for alternatives like using the -dit option to run in the background and the docker exec command to “attach” to the running container.

If you have not downloaded the image using the docker pull <image> command, the docker run command will know and pull it down for you. Once the download is complete and the container is running, you will notice that the prompt in your terminal window has changed.

It will look something like root@c421cab61f1f:/ansible_local.

Claudias-iMac:disco-immigrant claudia$ docker run -it cldeluna/disco-immigrant
root@c421cab61f1f:/ansible_local#

3. Start looking around

  • Check the version of ansible on the container. In the example below we are using the disco-immigrant image which comes with ansible 2.9.1.
  • Several repositories are cloned into the container to get you started quickly. Check out the Ansible playbook repositories and change directory into one to see the example playbooks and try one! You can find details in the “Run one of the ready built Playbooks!” section below. At this point, once you are in a working Docker CLI, the process is basically the same across all operating systems.
  • Always do a “git pull” in any of the cloned repositories to make sure you are running the latest scripts and playbooks.
  • Run your first playbook! You don’t need to bring up any device as many of the playbooks use the DevNet AlwaysOn Sandbox devices.

If you cd or change directory into the cisco_ios directory you can get started with some basic Playbooks.

Claudias-iMac:disco-immigrant claudia$ docker run -it cldeluna/disco-immigrant
root@c421cab61f1f:/ansible_local#
root@c421cab61f1f:/ansible_local# ls
ansible2_4_base cisco_aci cisco_ios
root@c421cab61f1f:/ansible_local# cd cisco_ios
root@c421cab61f1f:/ansible_local/cisco_ios# ls
ansible.cfg     ios_all_vrf_arp.yml   ios_show_cmdlist_zip.yml logs
filter_plugins ios_facts_lab.yml     ios_show_lab.yml         nxos_facts_lab.yml
group_vars     ios_facts_report.yml ios_show_lab25.yml       nxos_show_cmdlist.yml
hosts           ios_show_cmdlist.yml ios_show_lab_brief.yml   templates
root@c421cab61f1f:/ansible_local/cisco_ios#

Using Docker Desktop on Mac OS X Video ~13min



Getting Started with Docker Toolbox on Windows

Environment:

  • Microsoft Windows 10 Pro
  • x64-based PC
  • Intel(R) Core(TM)i7-6700K CPU @ 4.00GHz, 4008 Mhz, 4Core(s)…

Summary of Steps

  1. Make sure Docker Toolbox is installed and running
  2. Open a terminal window or “default” VirtualBox VM console
  3. Look around the ready built repositories which are cloned in the container to get you started quickly.

Details

  1. Docker Toolbox is quirky, no question about it. The desktop shortcuts often don’t work for me, but going directly to the VirtualBox VM typically does. Open up VirtualBox and make sure the Docker Toolbox VM is running (it is actually called “default”!)
  2. For me, what generally works is opening up the “default” VM console directly. Double-click the “default” VM in VirtualBox to start the container and open up the container VM console.
  3. Start looking around! Check the version of ansible on the container. In the example below we are using the disco-immigrant image which comes with ansible 2.9.1. Check out the Ansible playbook repositories and change directory into one to see the example playbooks and try one! You can find details in the “Run one of the ready built Playbooks!” section below. At this point, once you are in a working Docker CLI, the process is basically the same across all operating systems.
  4. Always do a “git pull” in any of the cloned repositories to make sure you are running the latest scripts and playbooks.
  5. Run your first playbook! You don’t need to bring up any device as most of the playbooks use the DevNet AlwaysOn Sandbox devices.
Docker Toolbox on Windows

Using Docker Toolbox on Windows Video ~13min


Run one of the ready built Playbooks!

Summary of Steps

  1. Select a repository to try. In this example we will try the cisco_ios playbook repository
  2. Enter the git pull command in your container terminal to make sure the repository has the latest code
  3. Try one of the ready made Playbooks
  4. Take one of the example playbooks and modify it to suit your needs or create a new Playbook.

Details

  1. Move into the desired playbook repository by issuing the change directory command cd cisco_ios from the ansible_local directory
  2. Before you try any of the playbooks, it’s a good idea to execute a git pull so that you have the latest version of the repository.

Example of updated repository:

root@c421cab61f1f:/ansible_local/cisco_ios# git pull
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (1/1), done.
remote: Total 3 (delta 2), reused 3 (delta 2), pack-reused 0
Unpacking objects: 100% (3/3), done.
From https://github.com/cldeluna/cisco_ios
  275f642..a8f951f master     -> origin/master
Updating 275f642..a8f951f
Fast-forward
ios_show_lab_brief.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
root@c421cab61f1f:/ansible_local/cisco_ios#

This means that updates were made to the repository since the time the Docker image was built

Example of repository already up to date:

root@c421cab61f1f:/ansible_local/cisco_ios# git pull
Already up to date.
root@c421cab61f1f:/ansible_local/cisco_ios#

This means that no updates have been made to the repository since the time the Docker image was built.

3. Execute the ios_show_lab_brief.yml Playbook.

The comments in the playbook explain in some detail what the playbook is doing and how to execute it.

root@c421cab61f1f:/ansible_local/cisco_ios# ansible-playbook -i hosts ios_show_lab_brief.yml

You will see that the playbook saves different types of output into different text files. cd to the logs directory and review the results.

root@c421cab61f1f:/ansible_local/cisco_ios# cd logs
root@c421cab61f1f:/ansible_local/cisco_ios/logs# tree
.
|-- ios-xe-mgmt.cisco.com-config.txt
|-- ios-xe-mgmt.cisco.com-raw-output.txt
|-- ios-xe-mgmt.cisco.com-readable-show-output.txt
`-- output_directory

0 directories, 4 files

4. At this point, you can start making these playbooks your own.

Update the hosts file and create your own group of devices. Update the show commands. Start your own Playbook; now that you have an Ansible Control server, you are ready to go!

Since this is a container, it will leverage your system’s network connection, so if you VPN into your lab, for example, you can use the Ansible Control server on your system.

Tip: Mount a Shared Folder

All the work you do will be “captive” in your container unless you start your container with an option to share a folder between your desktop and the container.

That may not be a problem. I try to develop locally and then “git pull” updates inside the container and test, but there are times when I want to try new things within the container, and without this mounting option you can’t get to the new files you created. You often have to copy them out, and worst case you forget, destroy your container, and lose work. So if the git push/pull model is not to your liking, then the -v option is for you!

Note that this is more difficult to do using Docker Toolbox as there are several levels of abstraction.

Directory shared between the Docker host and the Docker container.


References


Full output of ios_show_lab_brief.yml Playbook Execution:

This Playbook ran against the DevNet IOS XE device and so your output may differ from what is shown below.

root@c421cab61f1f:/ansible_local/cisco_ios# 
 root@c421cab61f1f:/ansible_local/cisco_ios# ansible-playbook -i hosts ios_show_lab_brief.yml
 PLAY [Pull show commands form Cisco IOS_XE Always On Sandbox device] *
 TASK [Iterate over show commands] **
 ok: [ios-xe-mgmt.cisco.com] => (item=show run)
 ok: [ios-xe-mgmt.cisco.com] => (item=show version)
 ok: [ios-xe-mgmt.cisco.com] => (item=show inventory)
 ok: [ios-xe-mgmt.cisco.com] => (item=show ip int br)
 ok: [ios-xe-mgmt.cisco.com] => (item=show ip route)
 TASK [debug] *
 ok: [ios-xe-mgmt.cisco.com] => {
     "output": {
         "changed": false,
         "deprecations": [
             {
                 "msg": "Distribution Ubuntu 19.04 on host ios-xe-mgmt.cisco.com should use /usr/bin/python3, but is using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible release will default to using the discovered platform python for this host. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information",
                 "version": "2.12"
             }
         ],
         "msg": "All items completed",
         "results": [
             {
                 "ansible_facts": {
                     "discovered_interpreter_python": "/usr/bin/python"
                 },
                 "ansible_loop_var": "item",
                 "changed": false,
                 "failed": false,
                 "invocation": {
                     "module_args": {
                         "auth_pass": null,
                         "authorize": null,
                         "commands": [
                             "show run"
                         ],
                         "host": null,
                         "interval": 1,
                         "match": "all",
                         "password": null,
                         "port": null,
                         "provider": null,
                         "retries": 10,
                         "ssh_keyfile": null,
                         "timeout": null,
                         "username": null,
                         "wait_for": null
                     }
                 },
                 "item": "show run",
                 "stdout": [
                     "Building configuration…\n\nCurrent configuration : 6228 bytes\n!\n! Last configuration change at 23:15:26 UTC Sat Nov 23 2019 by root\n!\nversion 16.9\nservice timestamps debug datetime msec\nservice timestamps log datetime msec\nplatform qfp utilization monitor load 80\nno platform punt-keepalive disable-kernel-core\nplatform console virtual\n!\nhostname csr1000v\n!\nboot-start-marker\nboot-end-marker\n!\n!\nno logging console\nenable secret 5 $1$gkJ1$EofN9ajW9k18SoRTgkhYr/\n!\nno aaa new-model\n!\n!\n!\n!\n!\n!\n!\nip domain name abc.inc\n!\n!\n!\nlogin on-success log\n!\n!\n!\n!\n!\n!\n!\nsubscriber templating\n! \n! \n! \n! \n!\nmultilink bundle-name authenticated\n!\n!\n!\n!\n!\ncrypto pki trustpoint TP-self-signed-1530096085\n enrollment selfsigned\n subject-name cn=IOS-Self-Signed-Certificate-1530096085\n revocation-check none\n rsakeypair TP-self-signed-1530096085\n!\n!\ncrypto pki certificate chain TP-self-signed-1530096085\n certificate self-signed 01\n  30820330 30820218 A0030201 02020101 300D0609 2A864886 F70D0101 05050030 \n  31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274 \n  69666963 6174652D 31353330 30393630 3835301E 170D3139 30353135 31353230 \n  34305A17 0D333030 31303130 30303030 305A3031 312F302D 06035504 03132649 \n  4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D31 35333030 \n  39363038 35308201 22300D06 092A8648 86F70D01 01010500 0382010F 00308201 \n  0A028201 0100B239 1ADC578A 8FD99454 BC1BE3E4 38E9CF35 D1D2420E 53D62D27 \n  92220CF4 A1AD3126 76B809F0 F227D539 3E371330 8C7767EA 2F22A811 7CA7B88F \n  26EE73B8 9925DAFF E2453823 BCF29423 DACB3CE9 92238E44 18E1834F A6D8ABCA \n  C6B686E5 ACD87A90 AF9EAE89 093BBEDC 1E2E2AEE 989C4B8C 7D53DBE4 57AE8D66 \n  2424721F 3C66A5AC 24A77372 EC6691CE 61B8DF71 A327F668 A9C76D2D EE364206 \n  2713286B 7127CB29 57010489 D350BC1B E19C548E D63B0609 3FB63FFE DAD9CBAE \n  26A60DB8 A2C51F1D B75577DF 4CA4879C A36E545F C221760D E1308E74 35399E91 \n  8A7075CD 498E7439 BBFC72A7 9217389D 8C1787FF 5AC1ECCA 36D9AE5C 8564AD06 \n  4CD176B2 EB690203 010001A3 53305130 0F060355 1D130101 FF040530 030101FF \n  301F0603 551D2304 18301680 142A4179 9A2DB89D 21F5780E A6170B83 D01CF664 \n  17301D06 03551D0E 04160414 2A41799A 2DB89D21 F5780EA6 170B83D0 1CF66417 \n  300D0609 2A864886 F70D0101 05050003 82010100 5469C02A ACD746F5 FAA7ADD6 \n  53BF195C B0FE9815 EC401671 0FDB9C8A 91571EA0 0F1748BA BA7DEFEE 41889B7B \n  58F280B7 6FD9D433 B53E5EA4 860014A6 01408E1C 12212B34 499CFC91 9AD075B8 \n  7300AF75 A836A2A4 588B4B91 2E72DF0D DA9EA3CD 7CE8D3E3 4990A6D5 5F46634A \n  5518C7C1 34B5B5D7 44EAF2A8 0DFB4762 4F2450BE D3D0D5E3 F026015D DFF04762 \n  AA3E3332 07FEF910 D895D4D8 D673E2DB D7534719 F86C0BA8 ACAB3057 6E50A289 \n  4D1EB2F9 9D24EA20 B0ADA198 037450F4 C606864A A6C7C060 5099D394 FF68F570 \n  4D9F84E6 2B1238B9 32D7FABB F9632EA7 BA8597E8 63802AD9 B92187DF 53935107 \n  5B6C962B 805A8031 F268C32C B1338EAB 3E9A2958\n  \tquit\n!\n!\n!\n!\n!\n!\n!\n!\nlicense udi pid CSR1000V sn 9ZL30UN51R9\nlicense boot level ax\nno license smart enable\ndiagnostic bootup level minimal\n!\nspanning-tree extend system-id\n!\nnetconf-yang\n!\nrestconf\n!\nusername developer privilege 15 secret 5 $1$HtLC$7Kj3hGBoDnSHzdEeR/2ix.\nusername cisco privilege 15 secret 5 $1$aO1Y$0AFVz00ON.hE4WkY.BeYq.\nusername root privilege 15 secret 5 $1$vpY7$mh9d69ui3koSaITBi8k9D/\n!\nredundancy\n!\n!\n!\n!\n!\n!\n! \n!\n!\n!\n!\n!\n!\n!\n!\n!\n!\n!\n!\n! \n! 
\n!\n!\ninterface Loopback18\n description Configured by RESTCONF\n ip address 172.16.100.18 255.255.255.0\n!\ninterface Loopback702\n description Configured by charlotte\n ip address 172.17.2.1 255.255.255.0\n!\ninterface Loopback710\n description Configured by seb\n ip address 172.17.10.1 255.255.255.0\n!\ninterface Loopback2101\n description Configured by RESTCONF\n ip address 172.20.1.1 255.255.255.0\n!\ninterface Loopback2102\n description Configured by Charlotte\n ip address 172.20.2.1 255.255.255.0\n!\ninterface Loopback2103\n description Configured by OWEN\n ip address 172.20.3.1 255.255.255.0\n!\ninterface Loopback2104\n description Configured by RESTCONF\n ip address 172.20.4.1 255.255.255.0\n!\ninterface Loopback2105\n description Configured by RESTCONF\n ip address 172.20.5.1 255.255.255.0\n!\ninterface Loopback2107\n description Configured by Josia\n ip address 172.20.7.1 255.255.255.0\n!\ninterface Loopback2108\n description Configured by RESTCONF\n ip address 172.20.8.1 255.255.255.0\n!\ninterface Loopback2109\n description Configured by RESTCONF\n ip address 172.20.9.1 255.255.255.0\n!\ninterface Loopback2111\n description Configured by RESTCONF\n ip address 172.20.11.1 255.255.255.0\n!\ninterface Loopback2112\n description Configured by RESTCONF\n ip address 172.20.12.1 255.255.255.0\n!\ninterface Loopback2113\n description Configured by RESTCONF\n ip address 172.20.13.1 255.255.255.0\n!\ninterface Loopback2114\n description Configured by RESTCONF\n ip address 172.20.14.1 255.255.255.0\n!\ninterface Loopback2116\n description Configured by RESTCONF\n ip address 172.20.16.1 255.255.255.0\n!\ninterface Loopback2117\n description Configured by RESTCONF\n ip address 172.20.17.1 255.255.255.0\n!\ninterface Loopback2119\n description Configured by RESTCONF\n ip address 172.20.19.19 255.255.255.0\n!\ninterface Loopback2121\n description Configured by RESTCONF\n ip address 172.20.21.1 255.255.255.0\n!\ninterface Loopback3115\n description Configured by Breuvage\n ip address 172.20.15.1 255.255.255.0\n!\ninterface GigabitEthernet1\n description MANAGEMENT INTERFACE - DON'T TOUCH ME\n ip address 10.10.20.48 255.255.255.0\n negotiation auto\n no mop enabled\n no mop sysid\n!\ninterface GigabitEthernet2\n description Configured by RESTCONF\n ip address 10.255.255.1 255.255.255.0\n negotiation auto\n no mop enabled\n no mop sysid\n!\ninterface GigabitEthernet3\n description Network Interface\n no ip address\n shutdown\n negotiation auto\n no mop enabled\n no mop sysid\n!\nip forward-protocol nd\nip http server\nip http authentication local\nip http secure-server\nip route 0.0.0.0 0.0.0.0 GigabitEthernet1 10.10.20.254\n!\nip ssh rsa keypair-name ssh-key\nip ssh version 2\nip scp server enable\n!\n!\n!\n!\n!\ncontrol-plane\n!\n!\n!\n!\n!\nbanner motd ^C\nWelcome to the DevNet Sandbox for CSR1000v and IOS XE\n\nThe following programmability features are already enabled:\n  - NETCONF\n  - RESTCONF\n\nThanks for stopping by.\n^C\n!\nline con 0\n exec-timeout 0 0\n stopbits 1\nline vty 0 4\n login local\n transport input ssh\n!\nntp logging\nntp authenticate\n!\n!\n!\n!\n!\nend"
                 ],
                 "stdout_lines": [
                     [
                         "Building configuration…",
                         "",
                         "Current configuration : 6228 bytes",
                         "!",
                         "! Last configuration change at 23:15:26 UTC Sat Nov 23 2019 by root",
                         "!",
                         "version 16.9",
                         "service timestamps debug datetime msec",
                         "service timestamps log datetime msec",
                         "platform qfp utilization monitor load 80",
                         "no platform punt-keepalive disable-kernel-core",
                         "platform console virtual",
                         "!",
                         "hostname csr1000v",
                         "!",
                         "boot-start-marker",
                         "boot-end-marker",
                         "!",
                         "!",
                         "no logging console",
                         "enable secret 5 $1$gkJ1$EofN9ajW9k18SoRTgkhYr/",
                         "!",
                         "no aaa new-model",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "ip domain name abc.inc",
                         "!",
                         "!",
                         "!",
                         "login on-success log",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "subscriber templating",
                         "! ",
                         "! ",
                         "! ",
                         "! ",
                         "!",
                         "multilink bundle-name authenticated",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "crypto pki trustpoint TP-self-signed-1530096085",
                         " enrollment selfsigned",
                         " subject-name cn=IOS-Self-Signed-Certificate-1530096085",
                         " revocation-check none",
                         " rsakeypair TP-self-signed-1530096085",
                         "!",
                         "!",
                         "crypto pki certificate chain TP-self-signed-1530096085",
                         " certificate self-signed 01",
                         "  30820330 30820218 A0030201 02020101 300D0609 2A864886 F70D0101 05050030 ",
                         "  31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274 ",
                         "  69666963 6174652D 31353330 30393630 3835301E 170D3139 30353135 31353230 ",
                         "  34305A17 0D333030 31303130 30303030 305A3031 312F302D 06035504 03132649 ",
                         "  4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D31 35333030 ",
                         "  39363038 35308201 22300D06 092A8648 86F70D01 01010500 0382010F 00308201 ",
                         "  0A028201 0100B239 1ADC578A 8FD99454 BC1BE3E4 38E9CF35 D1D2420E 53D62D27 ",
                         "  92220CF4 A1AD3126 76B809F0 F227D539 3E371330 8C7767EA 2F22A811 7CA7B88F ",
                         "  26EE73B8 9925DAFF E2453823 BCF29423 DACB3CE9 92238E44 18E1834F A6D8ABCA ",
                         "  C6B686E5 ACD87A90 AF9EAE89 093BBEDC 1E2E2AEE 989C4B8C 7D53DBE4 57AE8D66 ",
                         "  2424721F 3C66A5AC 24A77372 EC6691CE 61B8DF71 A327F668 A9C76D2D EE364206 ",
                         "  2713286B 7127CB29 57010489 D350BC1B E19C548E D63B0609 3FB63FFE DAD9CBAE ",
                         "  26A60DB8 A2C51F1D B75577DF 4CA4879C A36E545F C221760D E1308E74 35399E91 ",
                         "  8A7075CD 498E7439 BBFC72A7 9217389D 8C1787FF 5AC1ECCA 36D9AE5C 8564AD06 ",
                         "  4CD176B2 EB690203 010001A3 53305130 0F060355 1D130101 FF040530 030101FF ",
                         "  301F0603 551D2304 18301680 142A4179 9A2DB89D 21F5780E A6170B83 D01CF664 ",
                         "  17301D06 03551D0E 04160414 2A41799A 2DB89D21 F5780EA6 170B83D0 1CF66417 ",
                         "  300D0609 2A864886 F70D0101 05050003 82010100 5469C02A ACD746F5 FAA7ADD6 ",
                         "  53BF195C B0FE9815 EC401671 0FDB9C8A 91571EA0 0F1748BA BA7DEFEE 41889B7B ",
                         "  58F280B7 6FD9D433 B53E5EA4 860014A6 01408E1C 12212B34 499CFC91 9AD075B8 ",
                         "  7300AF75 A836A2A4 588B4B91 2E72DF0D DA9EA3CD 7CE8D3E3 4990A6D5 5F46634A ",
                         "  5518C7C1 34B5B5D7 44EAF2A8 0DFB4762 4F2450BE D3D0D5E3 F026015D DFF04762 ",
                         "  AA3E3332 07FEF910 D895D4D8 D673E2DB D7534719 F86C0BA8 ACAB3057 6E50A289 ",
                         "  4D1EB2F9 9D24EA20 B0ADA198 037450F4 C606864A A6C7C060 5099D394 FF68F570 ",
                         "  4D9F84E6 2B1238B9 32D7FABB F9632EA7 BA8597E8 63802AD9 B92187DF 53935107 ",
                         "  5B6C962B 805A8031 F268C32C B1338EAB 3E9A2958",
                         "  \tquit",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "license udi pid CSR1000V sn 9ZL30UN51R9",
                         "license boot level ax",
                         "no license smart enable",
                         "diagnostic bootup level minimal",
                         "!",
                         "spanning-tree extend system-id",
                         "!",
                         "netconf-yang",
                         "!",
                         "restconf",
                         "!",
                         "username developer privilege 15 secret 5 $1$HtLC$7Kj3hGBoDnSHzdEeR/2ix.",
                         "username cisco privilege 15 secret 5 $1$aO1Y$0AFVz00ON.hE4WkY.BeYq.",
                         "username root privilege 15 secret 5 $1$vpY7$mh9d69ui3koSaITBi8k9D/",
                         "!",
                         "redundancy",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "! ",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "! ",
                         "! ",
                         "!",
                         "!",
                         "interface Loopback18",
                         " description Configured by RESTCONF",
                         " ip address 172.16.100.18 255.255.255.0",
                         "!",
                         "interface Loopback702",
                         " description Configured by charlotte",
                         " ip address 172.17.2.1 255.255.255.0",
                         "!",
                         "interface Loopback710",
                         " description Configured by seb",
                         " ip address 172.17.10.1 255.255.255.0",
                         "!",
                         "interface Loopback2101",
                         " description Configured by RESTCONF",
                         " ip address 172.20.1.1 255.255.255.0",
                         "!",
                         "interface Loopback2102",
                         " description Configured by Charlotte",
                         " ip address 172.20.2.1 255.255.255.0",
                         "!",
                         "interface Loopback2103",
                         " description Configured by OWEN",
                         " ip address 172.20.3.1 255.255.255.0",
                         "!",
                         "interface Loopback2104",
                         " description Configured by RESTCONF",
                         " ip address 172.20.4.1 255.255.255.0",
                         "!",
                         "interface Loopback2105",
                         " description Configured by RESTCONF",
                         " ip address 172.20.5.1 255.255.255.0",
                         "!",
                         "interface Loopback2107",
                         " description Configured by Josia",
                         " ip address 172.20.7.1 255.255.255.0",
                         "!",
                         "interface Loopback2108",
                         " description Configured by RESTCONF",
                         " ip address 172.20.8.1 255.255.255.0",
                         "!",
                         "interface Loopback2109",
                         " description Configured by RESTCONF",
                         " ip address 172.20.9.1 255.255.255.0",
                         "!",
                         "interface Loopback2111",
                         " description Configured by RESTCONF",
                         " ip address 172.20.11.1 255.255.255.0",
                         "!",
                         "interface Loopback2112",
                         " description Configured by RESTCONF",
                         " ip address 172.20.12.1 255.255.255.0",
                         "!",
                         "interface Loopback2113",
                         " description Configured by RESTCONF",
                         " ip address 172.20.13.1 255.255.255.0",
                         "!",
                         "interface Loopback2114",
                         " description Configured by RESTCONF",
                         " ip address 172.20.14.1 255.255.255.0",
                         "!",
                         "interface Loopback2116",
                         " description Configured by RESTCONF",
                         " ip address 172.20.16.1 255.255.255.0",
                         "!",
                         "interface Loopback2117",
                         " description Configured by RESTCONF",
                         " ip address 172.20.17.1 255.255.255.0",
                         "!",
                         "interface Loopback2119",
                         " description Configured by RESTCONF",
                         " ip address 172.20.19.19 255.255.255.0",
                         "!",
                         "interface Loopback2121",
                         " description Configured by RESTCONF",
                         " ip address 172.20.21.1 255.255.255.0",
                         "!",
                         "interface Loopback3115",
                         " description Configured by Breuvage",
                         " ip address 172.20.15.1 255.255.255.0",
                         "!",
                         "interface GigabitEthernet1",
                         " description MANAGEMENT INTERFACE - DON'T TOUCH ME",
                         " ip address 10.10.20.48 255.255.255.0",
                         " negotiation auto",
                         " no mop enabled",
                         " no mop sysid",
                         "!",
                         "interface GigabitEthernet2",
                         " description Configured by RESTCONF",
                         " ip address 10.255.255.1 255.255.255.0",
                         " negotiation auto",
                         " no mop enabled",
                         " no mop sysid",
                         "!",
                         "interface GigabitEthernet3",
                         " description Network Interface",
                         " no ip address",
                         " shutdown",
                         " negotiation auto",
                         " no mop enabled",
                         " no mop sysid",
                         "!",
                         "ip forward-protocol nd",
                         "ip http server",
                         "ip http authentication local",
                         "ip http secure-server",
                         "ip route 0.0.0.0 0.0.0.0 GigabitEthernet1 10.10.20.254",
                         "!",
                         "ip ssh rsa keypair-name ssh-key",
                         "ip ssh version 2",
                         "ip scp server enable",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "control-plane",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "banner motd ^C",
                         "Welcome to the DevNet Sandbox for CSR1000v and IOS XE",
                         "",
                         "The following programmability features are already enabled:",
                         "  - NETCONF",
                         "  - RESTCONF",
                         "",
                         "Thanks for stopping by.",
                         "^C",
                         "!",
                         "line con 0",
                         " exec-timeout 0 0",
                         " stopbits 1",
                         "line vty 0 4",
                         " login local",
                         " transport input ssh",
                         "!",
                         "ntp logging",
                         "ntp authenticate",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "end"
                     ]
                 ]
             },
             {
                 "ansible_loop_var": "item",
                 "changed": false,
                 "failed": false,
                 "invocation": {
                     "module_args": {
                         "auth_pass": null,
                         "authorize": null,
                         "commands": [
                             "show version"
                         ],
                         "host": null,
                         "interval": 1,
                         "match": "all",
                         "password": null,
                         "port": null,
                         "provider": null,
                         "retries": 10,
                         "ssh_keyfile": null,
                         "timeout": null,
                         "username": null,
                         "wait_for": null
                     }
                 },
                 "item": "show version",
                 "stdout": [
                     "Cisco IOS XE Software, Version 16.09.03\nCisco IOS Software [Fuji], Virtual XE Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 16.9.3, RELEASE SOFTWARE (fc2)\nTechnical Support: http://www.cisco.com/techsupport\nCopyright (c) 1986-2019 by Cisco Systems, Inc.\nCompiled Wed 20-Mar-19 07:56 by mcpre\n\n\nCisco IOS-XE software, Copyright (c) 2005-2019 by cisco Systems, Inc.\nAll rights reserved.  Certain components of Cisco IOS-XE software are\nlicensed under the GNU General Public License (\"GPL\") Version 2.0.  The\nsoftware code licensed under GPL Version 2.0 is free software that comes\nwith ABSOLUTELY NO WARRANTY.  You can redistribute and/or modify such\nGPL code under the terms of GPL Version 2.0.  For more details, see the\ndocumentation or \"License Notice\" file accompanying the IOS-XE software,\nor the applicable URL provided on the flyer accompanying the IOS-XE\nsoftware.\n\n\nROM: IOS-XE ROMMON\n\ncsr1000v uptime is 1 day, 2 hours, 8 minutes\nUptime for this control processor is 1 day, 2 hours, 10 minutes\nSystem returned to ROM by reload\nSystem image file is \"bootflash:packages.conf\"\nLast reload reason: reload\n\n\n\nThis product contains cryptographic features and is subject to United\nStates and local country laws governing import, export, transfer and\nuse. Delivery of Cisco cryptographic products does not imply\nthird-party authority to import, export, distribute or use encryption.\nImporters, exporters, distributors and users are responsible for\ncompliance with U.S. and local country laws. By using this product you\nagree to comply with applicable laws and regulations. If you are unable\nto comply with U.S. and local laws, return this product immediately.\n\nA summary of U.S. laws governing Cisco cryptographic products may be found at:\nhttp://www.cisco.com/wwl/export/crypto/tool/stqrg.html\n\nIf you require further assistance please contact us by sending email to\nexport@cisco.com.\n\nLicense Level: ax\nLicense Type: Default. No valid license found.\nNext reload license Level: ax\n\n\nSmart Licensing Status: Smart Licensing is DISABLED\n\ncisco CSR1000V (VXE) processor (revision VXE) with 2392579K/3075K bytes of memory.\nProcessor board ID 9ZL30UN51R9\n3 Gigabit Ethernet interfaces\n32768K bytes of non-volatile configuration memory.\n8113280K bytes of physical memory.\n7774207K bytes of virtual hard disk at bootflash:.\n0K bytes of WebUI ODM Files at webui:.\n\nConfiguration register is 0x2102"
                 ],
                 "stdout_lines": [
                     [
                         "Cisco IOS XE Software, Version 16.09.03",
                         "Cisco IOS Software [Fuji], Virtual XE Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 16.9.3, RELEASE SOFTWARE (fc2)",
                         "Technical Support: http://www.cisco.com/techsupport",
                         "Copyright (c) 1986-2019 by Cisco Systems, Inc.",
                         "Compiled Wed 20-Mar-19 07:56 by mcpre",
                         "",
                         "",
                         "Cisco IOS-XE software, Copyright (c) 2005-2019 by cisco Systems, Inc.",
                         "All rights reserved.  Certain components of Cisco IOS-XE software are",
                         "licensed under the GNU General Public License (\"GPL\") Version 2.0.  The",
                         "software code licensed under GPL Version 2.0 is free software that comes",
                         "with ABSOLUTELY NO WARRANTY.  You can redistribute and/or modify such",
                         "GPL code under the terms of GPL Version 2.0.  For more details, see the",
                         "documentation or \"License Notice\" file accompanying the IOS-XE software,",
                         "or the applicable URL provided on the flyer accompanying the IOS-XE",
                         "software.",
                         "",
                         "",
                         "ROM: IOS-XE ROMMON",
                         "",
                         "csr1000v uptime is 1 day, 2 hours, 8 minutes",
                         "Uptime for this control processor is 1 day, 2 hours, 10 minutes",
                         "System returned to ROM by reload",
                         "System image file is \"bootflash:packages.conf\"",
                         "Last reload reason: reload",
                         "",
                         "",
                         "",
                         "This product contains cryptographic features and is subject to United",
                         "States and local country laws governing import, export, transfer and",
                         "use. Delivery of Cisco cryptographic products does not imply",
                         "third-party authority to import, export, distribute or use encryption.",
                         "Importers, exporters, distributors and users are responsible for",
                         "compliance with U.S. and local country laws. By using this product you",
                         "agree to comply with applicable laws and regulations. If you are unable",
                         "to comply with U.S. and local laws, return this product immediately.",
                         "",
                         "A summary of U.S. laws governing Cisco cryptographic products may be found at:",
                         "http://www.cisco.com/wwl/export/crypto/tool/stqrg.html",
                         "",
                         "If you require further assistance please contact us by sending email to",
                         "export@cisco.com.",
                         "",
                         "License Level: ax",
                         "License Type: Default. No valid license found.",
                         "Next reload license Level: ax",
                         "",
                         "",
                         "Smart Licensing Status: Smart Licensing is DISABLED",
                         "",
                         "cisco CSR1000V (VXE) processor (revision VXE) with 2392579K/3075K bytes of memory.",
                         "Processor board ID 9ZL30UN51R9",
                         "3 Gigabit Ethernet interfaces",
                         "32768K bytes of non-volatile configuration memory.",
                         "8113280K bytes of physical memory.",
                         "7774207K bytes of virtual hard disk at bootflash:.",
                         "0K bytes of WebUI ODM Files at webui:.",
                         "",
                         "Configuration register is 0x2102"
                     ]
                 ]
             },
             {
                 "ansible_loop_var": "item",
                 "changed": false,
                 "failed": false,
                 "invocation": {
                     "module_args": {
                         "auth_pass": null,
                         "authorize": null,
                         "commands": [
                             "show inventory"
                         ],
                         "host": null,
                         "interval": 1,
                         "match": "all",
                         "password": null,
                         "port": null,
                         "provider": null,
                         "retries": 10,
                         "ssh_keyfile": null,
                         "timeout": null,
                         "username": null,
                         "wait_for": null
                     }
                 },
                 "item": "show inventory",
                 "stdout": [
                     "NAME: \"Chassis\", DESCR: \"Cisco CSR1000V Chassis\"\nPID: CSR1000V          , VID: V00  , SN: 9ZL30UN51R9\n\nNAME: \"module R0\", DESCR: \"Cisco CSR1000V Route Processor\"\nPID: CSR1000V          , VID: V00  , SN: JAB1303001C\n\nNAME: \"module F0\", DESCR: \"Cisco CSR1000V Embedded Services Processor\"\nPID: CSR1000V          , VID:      , SN:"
                 ],
                 "stdout_lines": [
                     [
                         "NAME: \"Chassis\", DESCR: \"Cisco CSR1000V Chassis\"",
                         "PID: CSR1000V          , VID: V00  , SN: 9ZL30UN51R9",
                         "",
                         "NAME: \"module R0\", DESCR: \"Cisco CSR1000V Route Processor\"",
                         "PID: CSR1000V          , VID: V00  , SN: JAB1303001C",
                         "",
                         "NAME: \"module F0\", DESCR: \"Cisco CSR1000V Embedded Services Processor\"",
                         "PID: CSR1000V          , VID:      , SN:"
                     ]
                 ]
             },
             {
                 "ansible_loop_var": "item",
                 "changed": false,
                 "failed": false,
                 "invocation": {
                     "module_args": {
                         "auth_pass": null,
                         "authorize": null,
                         "commands": [
                             "show ip int br"
                         ],
                         "host": null,
                         "interval": 1,
                         "match": "all",
                         "password": null,
                         "port": null,
                         "provider": null,
                         "retries": 10,
                         "ssh_keyfile": null,
                         "timeout": null,
                         "username": null,
                         "wait_for": null
                     }
                 },
                 "item": "show ip int br",
                 "stdout": [
                     "Interface              IP-Address      OK? Method Status                Protocol\nGigabitEthernet1       10.10.20.48     YES NVRAM  up                    up      \nGigabitEthernet2       10.255.255.1    YES other  up                    up      \nGigabitEthernet3       unassigned      YES NVRAM  administratively down down    \nLoopback18             172.16.100.18   YES other  up                    up      \nLoopback702            172.17.2.1      YES other  up                    up      \nLoopback710            172.17.10.1     YES other  up                    up      \nLoopback2101           172.20.1.1      YES other  up                    up      \nLoopback2102           172.20.2.1      YES other  up                    up      \nLoopback2103           172.20.3.1      YES other  up                    up      \nLoopback2104           172.20.4.1      YES other  up                    up      \nLoopback2105           172.20.5.1      YES other  up                    up      \nLoopback2107           172.20.7.1      YES other  up                    up      \nLoopback2108           172.20.8.1      YES other  up                    up      \nLoopback2109           172.20.9.1      YES other  up                    up      \nLoopback2111           172.20.11.1     YES other  up                    up      \nLoopback2112           172.20.12.1     YES other  up                    up      \nLoopback2113           172.20.13.1     YES other  up                    up      \nLoopback2114           172.20.14.1     YES other  up                    up      \nLoopback2116           172.20.16.1     YES other  up                    up      \nLoopback2117           172.20.17.1     YES other  up                    up      \nLoopback2119           172.20.19.19    YES other  up                    up      \nLoopback2121           172.20.21.1     YES other  up                    up      \nLoopback3115           172.20.15.1     YES other  up                    up"
                 ],
                 "stdout_lines": [
                     [
                         "Interface              IP-Address      OK? Method Status                Protocol",
                         "GigabitEthernet1       10.10.20.48     YES NVRAM  up                    up      ",
                         "GigabitEthernet2       10.255.255.1    YES other  up                    up      ",
                         "GigabitEthernet3       unassigned      YES NVRAM  administratively down down    ",
                         "Loopback18             172.16.100.18   YES other  up                    up      ",
                         "Loopback702            172.17.2.1      YES other  up                    up      ",
                         "Loopback710            172.17.10.1     YES other  up                    up      ",
                         "Loopback2101           172.20.1.1      YES other  up                    up      ",
                         "Loopback2102           172.20.2.1      YES other  up                    up      ",
                         "Loopback2103           172.20.3.1      YES other  up                    up      ",
                         "Loopback2104           172.20.4.1      YES other  up                    up      ",
                         "Loopback2105           172.20.5.1      YES other  up                    up      ",
                         "Loopback2107           172.20.7.1      YES other  up                    up      ",
                         "Loopback2108           172.20.8.1      YES other  up                    up      ",
                         "Loopback2109           172.20.9.1      YES other  up                    up      ",
                         "Loopback2111           172.20.11.1     YES other  up                    up      ",
                         "Loopback2112           172.20.12.1     YES other  up                    up      ",
                         "Loopback2113           172.20.13.1     YES other  up                    up      ",
                         "Loopback2114           172.20.14.1     YES other  up                    up      ",
                         "Loopback2116           172.20.16.1     YES other  up                    up      ",
                         "Loopback2117           172.20.17.1     YES other  up                    up      ",
                         "Loopback2119           172.20.19.19    YES other  up                    up      ",
                         "Loopback2121           172.20.21.1     YES other  up                    up      ",
                         "Loopback3115           172.20.15.1     YES other  up                    up"
                     ]
                 ]
             },
             {
                 "ansible_loop_var": "item",
                 "changed": false,
                 "failed": false,
                 "invocation": {
                     "module_args": {
                         "auth_pass": null,
                         "authorize": null,
                         "commands": [
                             "show ip route"
                         ],
                         "host": null,
                         "interval": 1,
                         "match": "all",
                         "password": null,
                         "port": null,
                         "provider": null,
                         "retries": 10,
                         "ssh_keyfile": null,
                         "timeout": null,
                         "username": null,
                         "wait_for": null
                     }
                 },
                 "item": "show ip route",
                 "stdout": [
                     "Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP\n       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area \n       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2\n       E1 - OSPF external type 1, E2 - OSPF external type 2\n       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2\n       ia - IS-IS inter area, * - candidate default, U - per-user static route\n       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP\n       a - application route\n       + - replicated route, % - next hop override, p - overrides from PfR\n\nGateway of last resort is 10.10.20.254 to network 0.0.0.0\n\nS*    0.0.0.0/0 [1/0] via 10.10.20.254, GigabitEthernet1\n      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks\nC        10.10.20.0/24 is directly connected, GigabitEthernet1\nL        10.10.20.48/32 is directly connected, GigabitEthernet1\nC        10.255.255.0/24 is directly connected, GigabitEthernet2\nL        10.255.255.1/32 is directly connected, GigabitEthernet2\n      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks\nC        172.16.100.0/24 is directly connected, Loopback18\nL        172.16.100.18/32 is directly connected, Loopback18\n      172.17.0.0/16 is variably subnetted, 4 subnets, 2 masks\nC        172.17.2.0/24 is directly connected, Loopback702\nL        172.17.2.1/32 is directly connected, Loopback702\nC        172.17.10.0/24 is directly connected, Loopback710\nL        172.17.10.1/32 is directly connected, Loopback710\n      172.20.0.0/16 is variably subnetted, 34 subnets, 2 masks\nC        172.20.1.0/24 is directly connected, Loopback2101\nL        172.20.1.1/32 is directly connected, Loopback2101\nC        172.20.2.0/24 is directly connected, Loopback2102\nL        172.20.2.1/32 is directly connected, Loopback2102\nC        172.20.3.0/24 is directly connected, Loopback2103\nL        172.20.3.1/32 is directly connected, Loopback2103\nC        172.20.4.0/24 is directly connected, Loopback2104\nL        172.20.4.1/32 is directly connected, Loopback2104\nC        172.20.5.0/24 is directly connected, Loopback2105\nL        172.20.5.1/32 is directly connected, Loopback2105\nC        172.20.7.0/24 is directly connected, Loopback2107\nL        172.20.7.1/32 is directly connected, Loopback2107\nC        172.20.8.0/24 is directly connected, Loopback2108\nL        172.20.8.1/32 is directly connected, Loopback2108\nC        172.20.9.0/24 is directly connected, Loopback2109\nL        172.20.9.1/32 is directly connected, Loopback2109\nC        172.20.11.0/24 is directly connected, Loopback2111\nL        172.20.11.1/32 is directly connected, Loopback2111\nC        172.20.12.0/24 is directly connected, Loopback2112\nL        172.20.12.1/32 is directly connected, Loopback2112\nC        172.20.13.0/24 is directly connected, Loopback2113\nL        172.20.13.1/32 is directly connected, Loopback2113\nC        172.20.14.0/24 is directly connected, Loopback2114\nL        172.20.14.1/32 is directly connected, Loopback2114\nC        172.20.15.0/24 is directly connected, Loopback3115\nL        172.20.15.1/32 is directly connected, Loopback3115\nC        172.20.16.0/24 is directly connected, Loopback2116\nL        172.20.16.1/32 is directly connected, Loopback2116\nC        172.20.17.0/24 is directly connected, Loopback2117\nL        172.20.17.1/32 is directly connected, Loopback2117\nC        172.20.19.0/24 is directly connected, Loopback2119\nL        172.20.19.19/32 is directly connected, 
Loopback2119\nC        172.20.21.0/24 is directly connected, Loopback2121\nL        172.20.21.1/32 is directly connected, Loopback2121"
                 ],
                 "stdout_lines": [
                     [
                         "Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP",
                         "       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area ",
                         "       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2",
                         "       E1 - OSPF external type 1, E2 - OSPF external type 2",
                         "       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2",
                         "       ia - IS-IS inter area, * - candidate default, U - per-user static route",
                         "       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP",
                         "       a - application route",
                         "       + - replicated route, % - next hop override, p - overrides from PfR",
                         "",
                         "Gateway of last resort is 10.10.20.254 to network 0.0.0.0",
                         "",
                         "S*    0.0.0.0/0 [1/0] via 10.10.20.254, GigabitEthernet1",
                         "      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks",
                         "C        10.10.20.0/24 is directly connected, GigabitEthernet1",
                         "L        10.10.20.48/32 is directly connected, GigabitEthernet1",
                         "C        10.255.255.0/24 is directly connected, GigabitEthernet2",
                         "L        10.255.255.1/32 is directly connected, GigabitEthernet2",
                         "      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks",
                         "C        172.16.100.0/24 is directly connected, Loopback18",
                         "L        172.16.100.18/32 is directly connected, Loopback18",
                         "      172.17.0.0/16 is variably subnetted, 4 subnets, 2 masks",
                         "C        172.17.2.0/24 is directly connected, Loopback702",
                         "L        172.17.2.1/32 is directly connected, Loopback702",
                         "C        172.17.10.0/24 is directly connected, Loopback710",
                         "L        172.17.10.1/32 is directly connected, Loopback710",
                         "      172.20.0.0/16 is variably subnetted, 34 subnets, 2 masks",
                         "C        172.20.1.0/24 is directly connected, Loopback2101",
                         "L        172.20.1.1/32 is directly connected, Loopback2101",
                         "C        172.20.2.0/24 is directly connected, Loopback2102",
                         "L        172.20.2.1/32 is directly connected, Loopback2102",
                         "C        172.20.3.0/24 is directly connected, Loopback2103",
                         "L        172.20.3.1/32 is directly connected, Loopback2103",
                         "C        172.20.4.0/24 is directly connected, Loopback2104",
                         "L        172.20.4.1/32 is directly connected, Loopback2104",
                         "C        172.20.5.0/24 is directly connected, Loopback2105",
                         "L        172.20.5.1/32 is directly connected, Loopback2105",
                         "C        172.20.7.0/24 is directly connected, Loopback2107",
                         "L        172.20.7.1/32 is directly connected, Loopback2107",
                         "C        172.20.8.0/24 is directly connected, Loopback2108",
                         "L        172.20.8.1/32 is directly connected, Loopback2108",
                         "C        172.20.9.0/24 is directly connected, Loopback2109",
                         "L        172.20.9.1/32 is directly connected, Loopback2109",
                         "C        172.20.11.0/24 is directly connected, Loopback2111",
                         "L        172.20.11.1/32 is directly connected, Loopback2111",
                         "C        172.20.12.0/24 is directly connected, Loopback2112",
                         "L        172.20.12.1/32 is directly connected, Loopback2112",
                         "C        172.20.13.0/24 is directly connected, Loopback2113",
                         "L        172.20.13.1/32 is directly connected, Loopback2113",
                         "C        172.20.14.0/24 is directly connected, Loopback2114",
                         "L        172.20.14.1/32 is directly connected, Loopback2114",
                         "C        172.20.15.0/24 is directly connected, Loopback3115",
                         "L        172.20.15.1/32 is directly connected, Loopback3115",
                         "C        172.20.16.0/24 is directly connected, Loopback2116",
                         "L        172.20.16.1/32 is directly connected, Loopback2116",
                         "C        172.20.17.0/24 is directly connected, Loopback2117",
                         "L        172.20.17.1/32 is directly connected, Loopback2117",
                         "C        172.20.19.0/24 is directly connected, Loopback2119",
                         "L        172.20.19.19/32 is directly connected, Loopback2119",
                         "C        172.20.21.0/24 is directly connected, Loopback2121",
                         "L        172.20.21.1/32 is directly connected, Loopback2121"
                     ]
                 ]
             }
         ]
     }
 }
 TASK [copy] **
 changed: [ios-xe-mgmt.cisco.com -> localhost]
 TASK [copy] **
 changed: [ios-xe-mgmt.cisco.com -> localhost]
 TASK [Generate Device Show Command File(s)] 
 changed: [ios-xe-mgmt.cisco.com] => (item={'msg': u'All items completed', 'deprecations': [{'msg': u'Distribution Ubuntu 19.04 on host ios-xe-mgmt.cisco.com should use /usr/bin/python3, but is using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible release will default to using the discovered platform python for this host. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information', 'version': u'2.12'}], 'changed': False, 'results': [{'ansible_loop_var': u'item', u'stdout': [u"Building configuration…\n\nCurrent configuration : 6228 bytes\n!\n! Last configuration change at 23:15:26 UTC Sat Nov 23 2019 by root\n!\nversion 16.9\nservice timestamps debug datetime msec\nservice timestamps log datetime msec\nplatform qfp utilization monitor load 80\nno platform punt-keepalive disable-kernel-core\nplatform console virtual\n!\nhostname csr1000v\n!\nboot-start-marker\nboot-end-marker\n!\n!\nno logging console\nenable secret 5 $1$gkJ1$EofN9ajW9k18SoRTgkhYr/\n!\nno aaa new-model\n!\n!\n!\n!\n!\n!\n!\nip domain name abc.inc\n!\n!\n!\nlogin on-success log\n!\n!\n!\n!\n!\n!\n!\nsubscriber templating\n! \n! \n! \n! \n!\nmultilink bundle-name authenticated\n!\n!\n!\n!\n!\ncrypto pki trustpoint TP-self-signed-1530096085\n enrollment selfsigned\n subject-name cn=IOS-Self-Signed-Certificate-1530096085\n revocation-check none\n rsakeypair TP-self-signed-1530096085\n!\n!\ncrypto pki certificate chain TP-self-signed-1530096085\n certificate self-signed 01\n  30820330 30820218 A0030201 02020101 300D0609 2A864886 F70D0101 05050030 \n  31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274 \n  69666963 6174652D 31353330 30393630 3835301E 170D3139 30353135 31353230 \n  34305A17 0D333030 31303130 30303030 305A3031 312F302D 06035504 03132649 \n  4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D31 35333030 \n  39363038 35308201 22300D06 092A8648 86F70D01 01010500 0382010F 00308201 \n  0A028201 0100B239 1ADC578A 8FD99454 BC1BE3E4 38E9CF35 D1D2420E 53D62D27 \n  92220CF4 A1AD3126 76B809F0 F227D539 3E371330 8C7767EA 2F22A811 7CA7B88F \n  26EE73B8 9925DAFF E2453823 BCF29423 DACB3CE9 92238E44 18E1834F A6D8ABCA \n  C6B686E5 ACD87A90 AF9EAE89 093BBEDC 1E2E2AEE 989C4B8C 7D53DBE4 57AE8D66 \n  2424721F 3C66A5AC 24A77372 EC6691CE 61B8DF71 A327F668 A9C76D2D EE364206 \n  2713286B 7127CB29 57010489 D350BC1B E19C548E D63B0609 3FB63FFE DAD9CBAE \n  26A60DB8 A2C51F1D B75577DF 4CA4879C A36E545F C221760D E1308E74 35399E91 \n  8A7075CD 498E7439 BBFC72A7 9217389D 8C1787FF 5AC1ECCA 36D9AE5C 8564AD06 \n  4CD176B2 EB690203 010001A3 53305130 0F060355 1D130101 FF040530 030101FF \n  301F0603 551D2304 18301680 142A4179 9A2DB89D 21F5780E A6170B83 D01CF664 \n  17301D06 03551D0E 04160414 2A41799A 2DB89D21 F5780EA6 170B83D0 1CF66417 \n  300D0609 2A864886 F70D0101 05050003 82010100 5469C02A ACD746F5 FAA7ADD6 \n  53BF195C B0FE9815 EC401671 0FDB9C8A 91571EA0 0F1748BA BA7DEFEE 41889B7B \n  58F280B7 6FD9D433 B53E5EA4 860014A6 01408E1C 12212B34 499CFC91 9AD075B8 \n  7300AF75 A836A2A4 588B4B91 2E72DF0D DA9EA3CD 7CE8D3E3 4990A6D5 5F46634A \n  5518C7C1 34B5B5D7 44EAF2A8 0DFB4762 4F2450BE D3D0D5E3 F026015D DFF04762 \n  AA3E3332 07FEF910 D895D4D8 D673E2DB D7534719 F86C0BA8 ACAB3057 6E50A289 \n  4D1EB2F9 9D24EA20 B0ADA198 037450F4 C606864A A6C7C060 5099D394 FF68F570 \n  4D9F84E6 2B1238B9 32D7FABB F9632EA7 BA8597E8 63802AD9 B92187DF 53935107 \n  5B6C962B 805A8031 F268C32C B1338EAB 3E9A2958\n  \tquit\n!\n!\n!\n!\n!\n!\n!\n!\nlicense udi pid 
CSR1000V sn 9ZL30UN51R9\nlicense boot level ax\nno license smart enable\ndiagnostic bootup level minimal\n!\nspanning-tree extend system-id\n!\nnetconf-yang\n!\nrestconf\n!\nusername developer privilege 15 secret 5 $1$HtLC$7Kj3hGBoDnSHzdEeR/2ix.\nusername cisco privilege 15 secret 5 $1$aO1Y$0AFVz00ON.hE4WkY.BeYq.\nusername root privilege 15 secret 5 $1$vpY7$mh9d69ui3koSaITBi8k9D/\n!\nredundancy\n!\n!\n!\n!\n!\n!\n! \n!\n!\n!\n!\n!\n!\n!\n!\n!\n!\n!\n!\n! \n! \n!\n!\ninterface Loopback18\n description Configured by RESTCONF\n ip address 172.16.100.18 255.255.255.0\n!\ninterface Loopback702\n description Configured by charlotte\n ip address 172.17.2.1 255.255.255.0\n!\ninterface Loopback710\n description Configured by seb\n ip address 172.17.10.1 255.255.255.0\n!\ninterface Loopback2101\n description Configured by RESTCONF\n ip address 172.20.1.1 255.255.255.0\n!\ninterface Loopback2102\n description Configured by Charlotte\n ip address 172.20.2.1 255.255.255.0\n!\ninterface Loopback2103\n description Configured by OWEN\n ip address 172.20.3.1 255.255.255.0\n!\ninterface Loopback2104\n description Configured by RESTCONF\n ip address 172.20.4.1 255.255.255.0\n!\ninterface Loopback2105\n description Configured by RESTCONF\n ip address 172.20.5.1 255.255.255.0\n!\ninterface Loopback2107\n description Configured by Josia\n ip address 172.20.7.1 255.255.255.0\n!\ninterface Loopback2108\n description Configured by RESTCONF\n ip address 172.20.8.1 255.255.255.0\n!\ninterface Loopback2109\n description Configured by RESTCONF\n ip address 172.20.9.1 255.255.255.0\n!\ninterface Loopback2111\n description Configured by RESTCONF\n ip address 172.20.11.1 255.255.255.0\n!\ninterface Loopback2112\n description Configured by RESTCONF\n ip address 172.20.12.1 255.255.255.0\n!\ninterface Loopback2113\n description Configured by RESTCONF\n ip address 172.20.13.1 255.255.255.0\n!\ninterface Loopback2114\n description Configured by RESTCONF\n ip address 172.20.14.1 255.255.255.0\n!\ninterface Loopback2116\n description Configured by RESTCONF\n ip address 172.20.16.1 255.255.255.0\n!\ninterface Loopback2117\n description Configured by RESTCONF\n ip address 172.20.17.1 255.255.255.0\n!\ninterface Loopback2119\n description Configured by RESTCONF\n ip address 172.20.19.19 255.255.255.0\n!\ninterface Loopback2121\n description Configured by RESTCONF\n ip address 172.20.21.1 255.255.255.0\n!\ninterface Loopback3115\n description Configured by Breuvage\n ip address 172.20.15.1 255.255.255.0\n!\ninterface GigabitEthernet1\n description MANAGEMENT INTERFACE - DON'T TOUCH ME\n ip address 10.10.20.48 255.255.255.0\n negotiation auto\n no mop enabled\n no mop sysid\n!\ninterface GigabitEthernet2\n description Configured by RESTCONF\n ip address 10.255.255.1 255.255.255.0\n negotiation auto\n no mop enabled\n no mop sysid\n!\ninterface GigabitEthernet3\n description Network Interface\n no ip address\n shutdown\n negotiation auto\n no mop enabled\n no mop sysid\n!\nip forward-protocol nd\nip http server\nip http authentication local\nip http secure-server\nip route 0.0.0.0 0.0.0.0 GigabitEthernet1 10.10.20.254\n!\nip ssh rsa keypair-name ssh-key\nip ssh version 2\nip scp server enable\n!\n!\n!\n!\n!\ncontrol-plane\n!\n!\n!\n!\n!\nbanner motd ^C\nWelcome to the DevNet Sandbox for CSR1000v and IOS XE\n\nThe following programmability features are already enabled:\n  - NETCONF\n  - RESTCONF\n\nThanks for stopping by.\n^C\n!\nline con 0\n exec-timeout 0 0\n stopbits 1\nline vty 0 4\n login local\n transport input ssh\n!\nntp 
logging\nntp authenticate\n!\n!\n!\n!\n!\nend"], u'changed': False, 'failed': False, 'item': u'show run', u'invocation': {u'module_args': {u'username': None, u'authorize': None, u'commands': [u'show run'], u'interval': 1, u'retries': 10, u'auth_pass': None, u'wait_for': None, u'host': None, u'ssh_keyfile': None, u'timeout': None, u'provider': None, u'password': None, u'port': None, u'match': u'all'}}, u'stdout_lines': [[u'Building configuration…', u'', u'Current configuration : 6228 bytes', u'!', u'! Last configuration change at 23:15:26 UTC Sat Nov 23 2019 by root', u'!', u'version 16.9', u'service timestamps debug datetime msec', u'service timestamps log datetime msec', u'platform qfp utilization monitor load 80', u'no platform punt-keepalive disable-kernel-core', u'platform console virtual', u'!', u'hostname csr1000v', u'!', u'boot-start-marker', u'boot-end-marker', u'!', u'!', u'no logging console', u'enable secret 5 $1$gkJ1$EofN9ajW9k18SoRTgkhYr/', u'!', u'no aaa new-model', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'ip domain name abc.inc', u'!', u'!', u'!', u'login on-success log', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'subscriber templating', u'! ', u'! ', u'! ', u'! ', u'!', u'multilink bundle-name authenticated', u'!', u'!', u'!', u'!', u'!', u'crypto pki trustpoint TP-self-signed-1530096085', u' enrollment selfsigned', u' subject-name cn=IOS-Self-Signed-Certificate-1530096085', u' revocation-check none', u' rsakeypair TP-self-signed-1530096085', u'!', u'!', u'crypto pki certificate chain TP-self-signed-1530096085', u' certificate self-signed 01', u'  30820330 30820218 A0030201 02020101 300D0609 2A864886 F70D0101 05050030 ', u'  31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274 ', u'  69666963 6174652D 31353330 30393630 3835301E 170D3139 30353135 31353230 ', u'  34305A17 0D333030 31303130 30303030 305A3031 312F302D 06035504 03132649 ', u'  4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D31 35333030 ', u'  39363038 35308201 22300D06 092A8648 86F70D01 01010500 0382010F 00308201 ', u'  0A028201 0100B239 1ADC578A 8FD99454 BC1BE3E4 38E9CF35 D1D2420E 53D62D27 ', u'  92220CF4 A1AD3126 76B809F0 F227D539 3E371330 8C7767EA 2F22A811 7CA7B88F ', u'  26EE73B8 9925DAFF E2453823 BCF29423 DACB3CE9 92238E44 18E1834F A6D8ABCA ', u'  C6B686E5 ACD87A90 AF9EAE89 093BBEDC 1E2E2AEE 989C4B8C 7D53DBE4 57AE8D66 ', u'  2424721F 3C66A5AC 24A77372 EC6691CE 61B8DF71 A327F668 A9C76D2D EE364206 ', u'  2713286B 7127CB29 57010489 D350BC1B E19C548E D63B0609 3FB63FFE DAD9CBAE ', u'  26A60DB8 A2C51F1D B75577DF 4CA4879C A36E545F C221760D E1308E74 35399E91 ', u'  8A7075CD 498E7439 BBFC72A7 9217389D 8C1787FF 5AC1ECCA 36D9AE5C 8564AD06 ', u'  4CD176B2 EB690203 010001A3 53305130 0F060355 1D130101 FF040530 030101FF ', u'  301F0603 551D2304 18301680 142A4179 9A2DB89D 21F5780E A6170B83 D01CF664 ', u'  17301D06 03551D0E 04160414 2A41799A 2DB89D21 F5780EA6 170B83D0 1CF66417 ', u'  300D0609 2A864886 F70D0101 05050003 82010100 5469C02A ACD746F5 FAA7ADD6 ', u'  53BF195C B0FE9815 EC401671 0FDB9C8A 91571EA0 0F1748BA BA7DEFEE 41889B7B ', u'  58F280B7 6FD9D433 B53E5EA4 860014A6 01408E1C 12212B34 499CFC91 9AD075B8 ', u'  7300AF75 A836A2A4 588B4B91 2E72DF0D DA9EA3CD 7CE8D3E3 4990A6D5 5F46634A ', u'  5518C7C1 34B5B5D7 44EAF2A8 0DFB4762 4F2450BE D3D0D5E3 F026015D DFF04762 ', u'  AA3E3332 07FEF910 D895D4D8 D673E2DB D7534719 F86C0BA8 ACAB3057 6E50A289 ', u'  4D1EB2F9 9D24EA20 B0ADA198 037450F4 C606864A A6C7C060 5099D394 FF68F570 ', u'  4D9F84E6 2B1238B9 32D7FABB F9632EA7 BA8597E8 63802AD9 B92187DF 
53935107 ', u'  5B6C962B 805A8031 F268C32C B1338EAB 3E9A2958', u'  \tquit', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'license udi pid CSR1000V sn 9ZL30UN51R9', u'license boot level ax', u'no license smart enable', u'diagnostic bootup level minimal', u'!', u'spanning-tree extend system-id', u'!', u'netconf-yang', u'!', u'restconf', u'!', u'username developer privilege 15 secret 5 $1$HtLC$7Kj3hGBoDnSHzdEeR/2ix.', u'username cisco privilege 15 secret 5 $1$aO1Y$0AFVz00ON.hE4WkY.BeYq.', u'username root privilege 15 secret 5 $1$vpY7$mh9d69ui3koSaITBi8k9D/', u'!', u'redundancy', u'!', u'!', u'!', u'!', u'!', u'!', u'! ', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'! ', u'! ', u'!', u'!', u'interface Loopback18', u' description Configured by RESTCONF', u' ip address 172.16.100.18 255.255.255.0', u'!', u'interface Loopback702', u' description Configured by charlotte', u' ip address 172.17.2.1 255.255.255.0', u'!', u'interface Loopback710', u' description Configured by seb', u' ip address 172.17.10.1 255.255.255.0', u'!', u'interface Loopback2101', u' description Configured by RESTCONF', u' ip address 172.20.1.1 255.255.255.0', u'!', u'interface Loopback2102', u' description Configured by Charlotte', u' ip address 172.20.2.1 255.255.255.0', u'!', u'interface Loopback2103', u' description Configured by OWEN', u' ip address 172.20.3.1 255.255.255.0', u'!', u'interface Loopback2104', u' description Configured by RESTCONF', u' ip address 172.20.4.1 255.255.255.0', u'!', u'interface Loopback2105', u' description Configured by RESTCONF', u' ip address 172.20.5.1 255.255.255.0', u'!', u'interface Loopback2107', u' description Configured by Josia', u' ip address 172.20.7.1 255.255.255.0', u'!', u'interface Loopback2108', u' description Configured by RESTCONF', u' ip address 172.20.8.1 255.255.255.0', u'!', u'interface Loopback2109', u' description Configured by RESTCONF', u' ip address 172.20.9.1 255.255.255.0', u'!', u'interface Loopback2111', u' description Configured by RESTCONF', u' ip address 172.20.11.1 255.255.255.0', u'!', u'interface Loopback2112', u' description Configured by RESTCONF', u' ip address 172.20.12.1 255.255.255.0', u'!', u'interface Loopback2113', u' description Configured by RESTCONF', u' ip address 172.20.13.1 255.255.255.0', u'!', u'interface Loopback2114', u' description Configured by RESTCONF', u' ip address 172.20.14.1 255.255.255.0', u'!', u'interface Loopback2116', u' description Configured by RESTCONF', u' ip address 172.20.16.1 255.255.255.0', u'!', u'interface Loopback2117', u' description Configured by RESTCONF', u' ip address 172.20.17.1 255.255.255.0', u'!', u'interface Loopback2119', u' description Configured by RESTCONF', u' ip address 172.20.19.19 255.255.255.0', u'!', u'interface Loopback2121', u' description Configured by RESTCONF', u' ip address 172.20.21.1 255.255.255.0', u'!', u'interface Loopback3115', u' description Configured by Breuvage', u' ip address 172.20.15.1 255.255.255.0', u'!', u'interface GigabitEthernet1', u" description MANAGEMENT INTERFACE - DON'T TOUCH ME", u' ip address 10.10.20.48 255.255.255.0', u' negotiation auto', u' no mop enabled', u' no mop sysid', u'!', u'interface GigabitEthernet2', u' description Configured by RESTCONF', u' ip address 10.255.255.1 255.255.255.0', u' negotiation auto', u' no mop enabled', u' no mop sysid', u'!', u'interface GigabitEthernet3', u' description Network Interface', u' no ip address', u' shutdown', u' negotiation auto', u' no mop enabled', u' no mop sysid', u'!', u'ip 
forward-protocol nd', u'ip http server', u'ip http authentication local', u'ip http secure-server', u'ip route 0.0.0.0 0.0.0.0 GigabitEthernet1 10.10.20.254', u'!', u'ip ssh rsa keypair-name ssh-key', u'ip ssh version 2', u'ip scp server enable', u'!', u'!', u'!', u'!', u'!', u'control-plane', u'!', u'!', u'!', u'!', u'!', u'banner motd ^C', u'Welcome to the DevNet Sandbox for CSR1000v and IOS XE', u'', u'The following programmability features are already enabled:', u'  - NETCONF', u'  - RESTCONF', u'', u'Thanks for stopping by.', u'^C', u'!', u'line con 0', u' exec-timeout 0 0', u' stopbits 1', u'line vty 0 4', u' login local', u' transport input ssh', u'!', u'ntp logging', u'ntp authenticate', u'!', u'!', u'!', u'!', u'!', u'end']], 'ansible_facts': {u'discovered_interpreter_python': u'/usr/bin/python'}}, {'item': u'show version', 'ansible_loop_var': u'item', 'failed': False, u'stdout': [u'Cisco IOS XE Software, Version 16.09.03\nCisco IOS Software [Fuji], Virtual XE Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 16.9.3, RELEASE SOFTWARE (fc2)\nTechnical Support: http://www.cisco.com/techsupport\nCopyright (c) 1986-2019 by Cisco Systems, Inc.\nCompiled Wed 20-Mar-19 07:56 by mcpre\n\n\nCisco IOS-XE software, Copyright (c) 2005-2019 by cisco Systems, Inc.\nAll rights reserved.  Certain components of Cisco IOS-XE software are\nlicensed under the GNU General Public License ("GPL") Version 2.0.  The\nsoftware code licensed under GPL Version 2.0 is free software that comes\nwith ABSOLUTELY NO WARRANTY.  You can redistribute and/or modify such\nGPL code under the terms of GPL Version 2.0.  For more details, see the\ndocumentation or "License Notice" file accompanying the IOS-XE software,\nor the applicable URL provided on the flyer accompanying the IOS-XE\nsoftware.\n\n\nROM: IOS-XE ROMMON\n\ncsr1000v uptime is 1 day, 2 hours, 8 minutes\nUptime for this control processor is 1 day, 2 hours, 10 minutes\nSystem returned to ROM by reload\nSystem image file is "bootflash:packages.conf"\nLast reload reason: reload\n\n\n\nThis product contains cryptographic features and is subject to United\nStates and local country laws governing import, export, transfer and\nuse. Delivery of Cisco cryptographic products does not imply\nthird-party authority to import, export, distribute or use encryption.\nImporters, exporters, distributors and users are responsible for\ncompliance with U.S. and local country laws. By using this product you\nagree to comply with applicable laws and regulations. If you are unable\nto comply with U.S. and local laws, return this product immediately.\n\nA summary of U.S. laws governing Cisco cryptographic products may be found at:\nhttp://www.cisco.com/wwl/export/crypto/tool/stqrg.html\n\nIf you require further assistance please contact us by sending email to\nexport@cisco.com.\n\nLicense Level: ax\nLicense Type: Default. 
No valid license found.\nNext reload license Level: ax\n\n\nSmart Licensing Status: Smart Licensing is DISABLED\n\ncisco CSR1000V (VXE) processor (revision VXE) with 2392579K/3075K bytes of memory.\nProcessor board ID 9ZL30UN51R9\n3 Gigabit Ethernet interfaces\n32768K bytes of non-volatile configuration memory.\n8113280K bytes of physical memory.\n7774207K bytes of virtual hard disk at bootflash:.\n0K bytes of WebUI ODM Files at webui:.\n\nConfiguration register is 0x2102'], u'invocation': {u'module_args': {u'username': None, u'authorize': None, u'commands': [u'show version'], u'interval': 1, u'retries': 10, u'auth_pass': None, u'wait_for': None, u'host': None, u'ssh_keyfile': None, u'timeout': None, u'provider': None, u'password': None, u'port': None, u'match': u'all'}}, u'stdout_lines': [[u'Cisco IOS XE Software, Version 16.09.03', u'Cisco IOS Software [Fuji], Virtual XE Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 16.9.3, RELEASE SOFTWARE (fc2)', u'Technical Support: http://www.cisco.com/techsupport', u'Copyright (c) 1986-2019 by Cisco Systems, Inc.', u'Compiled Wed 20-Mar-19 07:56 by mcpre', u'', u'', u'Cisco IOS-XE software, Copyright (c) 2005-2019 by cisco Systems, Inc.', u'All rights reserved.  Certain components of Cisco IOS-XE software are', u'licensed under the GNU General Public License ("GPL") Version 2.0.  The', u'software code licensed under GPL Version 2.0 is free software that comes', u'with ABSOLUTELY NO WARRANTY.  You can redistribute and/or modify such', u'GPL code under the terms of GPL Version 2.0.  For more details, see the', u'documentation or "License Notice" file accompanying the IOS-XE software,', u'or the applicable URL provided on the flyer accompanying the IOS-XE', u'software.', u'', u'', u'ROM: IOS-XE ROMMON', u'', u'csr1000v uptime is 1 day, 2 hours, 8 minutes', u'Uptime for this control processor is 1 day, 2 hours, 10 minutes', u'System returned to ROM by reload', u'System image file is "bootflash:packages.conf"', u'Last reload reason: reload', u'', u'', u'', u'This product contains cryptographic features and is subject to United', u'States and local country laws governing import, export, transfer and', u'use. Delivery of Cisco cryptographic products does not imply', u'third-party authority to import, export, distribute or use encryption.', u'Importers, exporters, distributors and users are responsible for', u'compliance with U.S. and local country laws. By using this product you', u'agree to comply with applicable laws and regulations. If you are unable', u'to comply with U.S. and local laws, return this product immediately.', u'', u'A summary of U.S. laws governing Cisco cryptographic products may be found at:', u'http://www.cisco.com/wwl/export/crypto/tool/stqrg.html', u'', u'If you require further assistance please contact us by sending email to', u'export@cisco.com.', u'', u'License Level: ax', u'License Type: Default. 
No valid license found.', u'Next reload license Level: ax', u'', u'', u'Smart Licensing Status: Smart Licensing is DISABLED', u'', u'cisco CSR1000V (VXE) processor (revision VXE) with 2392579K/3075K bytes of memory.', u'Processor board ID 9ZL30UN51R9', u'3 Gigabit Ethernet interfaces', u'32768K bytes of non-volatile configuration memory.', u'8113280K bytes of physical memory.', u'7774207K bytes of virtual hard disk at bootflash:.', u'0K bytes of WebUI ODM Files at webui:.', u'', u'Configuration register is 0x2102']], u'changed': False}, {'item': u'show inventory', 'ansible_loop_var': u'item', 'failed': False, u'stdout': [u'NAME: "Chassis", DESCR: "Cisco CSR1000V Chassis"\nPID: CSR1000V          , VID: V00  , SN: 9ZL30UN51R9\n\nNAME: "module R0", DESCR: "Cisco CSR1000V Route Processor"\nPID: CSR1000V          , VID: V00  , SN: JAB1303001C\n\nNAME: "module F0", DESCR: "Cisco CSR1000V Embedded Services Processor"\nPID: CSR1000V          , VID:      , SN:'], u'invocation': {u'module_args': {u'username': None, u'authorize': None, u'commands': [u'show inventory'], u'interval': 1, u'retries': 10, u'auth_pass': None, u'wait_for': None, u'host': None, u'ssh_keyfile': None, u'timeout': None, u'provider': None, u'password': None, u'port': None, u'match': u'all'}}, u'stdout_lines': [[u'NAME: "Chassis", DESCR: "Cisco CSR1000V Chassis"', u'PID: CSR1000V          , VID: V00  , SN: 9ZL30UN51R9', u'', u'NAME: "module R0", DESCR: "Cisco CSR1000V Route Processor"', u'PID: CSR1000V          , VID: V00  , SN: JAB1303001C', u'', u'NAME: "module F0", DESCR: "Cisco CSR1000V Embedded Services Processor"', u'PID: CSR1000V          , VID:      , SN:']], u'changed': False}, {'item': u'show ip int br', 'ansible_loop_var': u'item', 'failed': False, u'stdout': [u'Interface              IP-Address      OK? 
Method Status                Protocol\nGigabitEthernet1       10.10.20.48     YES NVRAM  up                    up      \nGigabitEthernet2       10.255.255.1    YES other  up                    up      \nGigabitEthernet3       unassigned      YES NVRAM  administratively down down    \nLoopback18             172.16.100.18   YES other  up                    up      \nLoopback702            172.17.2.1      YES other  up                    up      \nLoopback710            172.17.10.1     YES other  up                    up      \nLoopback2101           172.20.1.1      YES other  up                    up      \nLoopback2102           172.20.2.1      YES other  up                    up      \nLoopback2103           172.20.3.1      YES other  up                    up      \nLoopback2104           172.20.4.1      YES other  up                    up      \nLoopback2105           172.20.5.1      YES other  up                    up      \nLoopback2107           172.20.7.1      YES other  up                    up      \nLoopback2108           172.20.8.1      YES other  up                    up      \nLoopback2109           172.20.9.1      YES other  up                    up      \nLoopback2111           172.20.11.1     YES other  up                    up      \nLoopback2112           172.20.12.1     YES other  up                    up      \nLoopback2113           172.20.13.1     YES other  up                    up      \nLoopback2114           172.20.14.1     YES other  up                    up      \nLoopback2116           172.20.16.1     YES other  up                    up      \nLoopback2117           172.20.17.1     YES other  up                    up      \nLoopback2119           172.20.19.19    YES other  up                    up      \nLoopback2121           172.20.21.1     YES other  up                    up      \nLoopback3115           172.20.15.1     YES other  up                    up'], u'invocation': {u'module_args': {u'username': None, u'authorize': None, u'commands': [u'show ip int br'], u'interval': 1, u'retries': 10, u'auth_pass': None, u'wait_for': None, u'host': None, u'ssh_keyfile': None, u'timeout': None, u'provider': None, u'password': None, u'port': None, u'match': u'all'}}, u'stdout_lines': [[u'Interface              IP-Address      OK? 
Method Status                Protocol', u'GigabitEthernet1       10.10.20.48     YES NVRAM  up                    up      ', u'GigabitEthernet2       10.255.255.1    YES other  up                    up      ', u'GigabitEthernet3       unassigned      YES NVRAM  administratively down down    ', u'Loopback18             172.16.100.18   YES other  up                    up      ', u'Loopback702            172.17.2.1      YES other  up                    up      ', u'Loopback710            172.17.10.1     YES other  up                    up      ', u'Loopback2101           172.20.1.1      YES other  up                    up      ', u'Loopback2102           172.20.2.1      YES other  up                    up      ', u'Loopback2103           172.20.3.1      YES other  up                    up      ', u'Loopback2104           172.20.4.1      YES other  up                    up      ', u'Loopback2105           172.20.5.1      YES other  up                    up      ', u'Loopback2107           172.20.7.1      YES other  up                    up      ', u'Loopback2108           172.20.8.1      YES other  up                    up      ', u'Loopback2109           172.20.9.1      YES other  up                    up      ', u'Loopback2111           172.20.11.1     YES other  up                    up      ', u'Loopback2112           172.20.12.1     YES other  up                    up      ', u'Loopback2113           172.20.13.1     YES other  up                    up      ', u'Loopback2114           172.20.14.1     YES other  up                    up      ', u'Loopback2116           172.20.16.1     YES other  up                    up      ', u'Loopback2117           172.20.17.1     YES other  up                    up      ', u'Loopback2119           172.20.19.19    YES other  up                    up      ', u'Loopback2121           172.20.21.1     YES other  up                    up      ', u'Loopback3115           172.20.15.1     YES other  up                    up']], u'changed': False}, {'item': u'show ip route', 'ansible_loop_var': u'item', 'failed': False, u'stdout': [u'Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP\n       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area \n       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2\n       E1 - OSPF external type 1, E2 - OSPF external type 2\n       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2\n       ia - IS-IS inter area, * - candidate default, U - per-user static route\n       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP\n       a - application route\n       + - replicated route, % - next hop override, p - overrides from PfR\n\nGateway of last resort is 10.10.20.254 to network 0.0.0.0\n\nS*    0.0.0.0/0 [1/0] via 10.10.20.254, GigabitEthernet1\n      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks\nC        10.10.20.0/24 is directly connected, GigabitEthernet1\nL        10.10.20.48/32 is directly connected, GigabitEthernet1\nC        10.255.255.0/24 is directly connected, GigabitEthernet2\nL        10.255.255.1/32 is directly connected, GigabitEthernet2\n      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks\nC        172.16.100.0/24 is directly connected, Loopback18\nL        172.16.100.18/32 is directly connected, Loopback18\n      172.17.0.0/16 is variably subnetted, 4 subnets, 2 masks\nC        172.17.2.0/24 is directly connected, Loopback702\nL        172.17.2.1/32 is directly connected, Loopback702\nC        172.17.10.0/24 is 
directly connected, Loopback710\nL        172.17.10.1/32 is directly connected, Loopback710\n      172.20.0.0/16 is variably subnetted, 34 subnets, 2 masks\nC        172.20.1.0/24 is directly connected, Loopback2101\nL        172.20.1.1/32 is directly connected, Loopback2101\nC        172.20.2.0/24 is directly connected, Loopback2102\nL        172.20.2.1/32 is directly connected, Loopback2102\nC        172.20.3.0/24 is directly connected, Loopback2103\nL        172.20.3.1/32 is directly connected, Loopback2103\nC        172.20.4.0/24 is directly connected, Loopback2104\nL        172.20.4.1/32 is directly connected, Loopback2104\nC        172.20.5.0/24 is directly connected, Loopback2105\nL        172.20.5.1/32 is directly connected, Loopback2105\nC        172.20.7.0/24 is directly connected, Loopback2107\nL        172.20.7.1/32 is directly connected, Loopback2107\nC        172.20.8.0/24 is directly connected, Loopback2108\nL        172.20.8.1/32 is directly connected, Loopback2108\nC        172.20.9.0/24 is directly connected, Loopback2109\nL        172.20.9.1/32 is directly connected, Loopback2109\nC        172.20.11.0/24 is directly connected, Loopback2111\nL        172.20.11.1/32 is directly connected, Loopback2111\nC        172.20.12.0/24 is directly connected, Loopback2112\nL        172.20.12.1/32 is directly connected, Loopback2112\nC        172.20.13.0/24 is directly connected, Loopback2113\nL        172.20.13.1/32 is directly connected, Loopback2113\nC        172.20.14.0/24 is directly connected, Loopback2114\nL        172.20.14.1/32 is directly connected, Loopback2114\nC        172.20.15.0/24 is directly connected, Loopback3115\nL        172.20.15.1/32 is directly connected, Loopback3115\nC        172.20.16.0/24 is directly connected, Loopback2116\nL        172.20.16.1/32 is directly connected, Loopback2116\nC        172.20.17.0/24 is directly connected, Loopback2117\nL        172.20.17.1/32 is directly connected, Loopback2117\nC        172.20.19.0/24 is directly connected, Loopback2119\nL        172.20.19.19/32 is directly connected, Loopback2119\nC        172.20.21.0/24 is directly connected, Loopback2121\nL        172.20.21.1/32 is directly connected, Loopback2121'], u'invocation': {u'module_args': {u'username': None, u'authorize': None, u'commands': [u'show ip route'], u'interval': 1, u'retries': 10, u'auth_pass': None, u'wait_for': None, u'host': None, u'ssh_keyfile': None, u'timeout': None, u'provider': None, u'password': None, u'port': None, u'match': u'all'}}, u'stdout_lines': [[u'Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP', u'       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area ', u'       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2', u'       E1 - OSPF external type 1, E2 - OSPF external type 2', u'       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2', u'       ia - IS-IS inter area, * - candidate default, U - per-user static route', u'       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP', u'       a - application route', u'       + - replicated route, % - next hop override, p - overrides from PfR', u'', u'Gateway of last resort is 10.10.20.254 to network 0.0.0.0', u'', u'S*    0.0.0.0/0 [1/0] via 10.10.20.254, GigabitEthernet1', u'      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks', u'C        10.10.20.0/24 is directly connected, GigabitEthernet1', u'L        10.10.20.48/32 is directly connected, GigabitEthernet1', u'C        10.255.255.0/24 is directly 
connected, GigabitEthernet2', u'L        10.255.255.1/32 is directly connected, GigabitEthernet2', u'      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks', u'C        172.16.100.0/24 is directly connected, Loopback18', u'L        172.16.100.18/32 is directly connected, Loopback18', u'      172.17.0.0/16 is variably subnetted, 4 subnets, 2 masks', u'C        172.17.2.0/24 is directly connected, Loopback702', u'L        172.17.2.1/32 is directly connected, Loopback702', u'C        172.17.10.0/24 is directly connected, Loopback710', u'L        172.17.10.1/32 is directly connected, Loopback710', u'      172.20.0.0/16 is variably subnetted, 34 subnets, 2 masks', u'C        172.20.1.0/24 is directly connected, Loopback2101', u'L        172.20.1.1/32 is directly connected, Loopback2101', u'C        172.20.2.0/24 is directly connected, Loopback2102', u'L        172.20.2.1/32 is directly connected, Loopback2102', u'C        172.20.3.0/24 is directly connected, Loopback2103', u'L        172.20.3.1/32 is directly connected, Loopback2103', u'C        172.20.4.0/24 is directly connected, Loopback2104', u'L        172.20.4.1/32 is directly connected, Loopback2104', u'C        172.20.5.0/24 is directly connected, Loopback2105', u'L        172.20.5.1/32 is directly connected, Loopback2105', u'C        172.20.7.0/24 is directly connected, Loopback2107', u'L        172.20.7.1/32 is directly connected, Loopback2107', u'C        172.20.8.0/24 is directly connected, Loopback2108', u'L        172.20.8.1/32 is directly connected, Loopback2108', u'C        172.20.9.0/24 is directly connected, Loopback2109', u'L        172.20.9.1/32 is directly connected, Loopback2109', u'C        172.20.11.0/24 is directly connected, Loopback2111', u'L        172.20.11.1/32 is directly connected, Loopback2111', u'C        172.20.12.0/24 is directly connected, Loopback2112', u'L        172.20.12.1/32 is directly connected, Loopback2112', u'C        172.20.13.0/24 is directly connected, Loopback2113', u'L        172.20.13.1/32 is directly connected, Loopback2113', u'C        172.20.14.0/24 is directly connected, Loopback2114', u'L        172.20.14.1/32 is directly connected, Loopback2114', u'C        172.20.15.0/24 is directly connected, Loopback3115', u'L        172.20.15.1/32 is directly connected, Loopback3115', u'C        172.20.16.0/24 is directly connected, Loopback2116', u'L        172.20.16.1/32 is directly connected, Loopback2116', u'C        172.20.17.0/24 is directly connected, Loopback2117', u'L        172.20.17.1/32 is directly connected, Loopback2117', u'C        172.20.19.0/24 is directly connected, Loopback2119', u'L        172.20.19.19/32 is directly connected, Loopback2119', u'C        172.20.21.0/24 is directly connected, Loopback2121', u'L        172.20.21.1/32 is directly connected, Loopback2121']], u'changed': False}]})
 PLAY RECAP ***
 ios-xe-mgmt.cisco.com      : ok=5    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
 root@c421cab61f1f:/ansible_local/cisco_ios#

The Hunt for a Cisco ACI Lab

As an independent consultant one of the things I have to provide for myself are labs.

It’s a wonderful time for labs! Virtual capabilities and offerings make testing and modeling a client’s network easier than ever.

Cisco DevNet offers “Always On” Sandbox devices that are always there if you need to test a “unit of automation”. Network to Code labs are also available on demand (at a small cost) and a huge time saver.

But Cisco’s Application Centric Infrastructure (ACI) is a different animal altogether.

Cisco offers a simulator (hardware and VM) (check out the DevNet Always On APIC to see it in action) which is terrific for automation testing and configuration training, but there is no data plane, so testing routing, vPCs, trunks, and contracts is a non-starter. For that you need the hardware…and so began my search for an ACI Lab for rent.

The compilation below is still a work in progress but I thought I would share my findings, so far, over the last 6 months.

First let’s set the context.

I was looking to rent a physical ACI lab (not a simulator) with the following requirements:

  • Gen2 or better hardware (to test ACL logging among other things) and ACI 4.0 or later
  • Accessible (Configurable) Layer 3 device or devices (to test L3 Outs)
  • Accessible (Configurable) Layer 2 device or devices (to test L2)
  • VMM integration (vCenter and ESXi Compute)
  • Pre configured test VMs
  • My own custom test VMs
  • A means of running Ansible and/or Python within the lab environment and cloning repositories

Going in, I didn’t have a timeframe in mind. I would take a 4 hour block of time or a week and that is still the case. I also realized that it was unlikely anything for rent would meet all of my requirements but that was the baseline I would use to assess the various offerings.

A lower cost option for just a few hours is handy for quick tests.  Having a lab for a week is very nice and gives you the latitude to test without the time pressure.

Out of 13 possible options, I’ve tried four. Two were very good and I would use them again, and two were the opposite.

Recommendations:

While the INE lab is a bit difficult to schedule (you have to plan ahead which isn’t compatible with my “immediate gratification” approach to things) it’s a terrific lab with configuration access to the L2/L3 devices which gives you good flexibility.  

NterOne also offers a terrific lab and I was able to schedule it within a week of when I called. The lab is superbly documented and designed so as to make it very easy to understand. I got two pods/tenants which gave me some good flexibility in terms of exporting and testing contracts etc. The L2 and L3 devices are read only and pre-configured so those are a little limiting.

Some observations:

  • So far, no one is running Gen2+ equipment.
  • Almost all of the designs I have seen have single-link L2/L3s so it’s difficult to test vPCs (and you generally need access to the other device unless it’s been preconfigured for you).
  • All the labs were running ACI 4.x

Global Knowledge has some interesting offerings and I was very excited initially but even getting the simplest answer was impossible. Like many larger companies, if you try to deviate from the menu it does not go well. I moved on.

INE, NterOne, and Firefly all spent the time understanding my requirements and offering solutions. Sadly Firefly was way out of my price range.

On a final note, I would avoid My CCIE Rack and CCIE Rack Rentals, which may actually use the same lab. Documentation is terrible; I’ve tried 3 or 4 times and have gotten in about 50% of the time. The first time, I didn’t realize I needed to rent both of their DC labs (one has ACI and the other gives you access to the L2/L3 devices). The last time I rented a lab (both labs) they simply cancelled my labs and never responded to emails either before or after. If you have money you would like to dispose of, send it to the Coral Restoration Foundation Curaçao or some other worthy cause. A much better use of those funds, I’d have to say.

If anyone has had a good experience with an ACI Lab rental that I’ve not included here I would love to hear about it!

Kudos to INE and NterOne for great customer service and flexibility!  

Summary


1. The INE Staff was open to allowing me to put some of my own repos and tools into the environment but when I scheduled the lab that became problematic.    INE was very honorable and let me have the lab in the non-customized way for the week without charge since they were not able to honor my customization request at that time!!

2. The Student Jump box can be customized which was very nice (I had access to my GitHub repos) and Python was available although it was Python 2.7.

3. Cost is not unreasonable but there is a minimum of 4 students, so unless you have 3 like-minded friends it becomes very expensive.

4. I’ve always been a big fan of Global Knowledge but my interactions with them were not positive.  I could not get even the most basic question answered (for example, did they have a money-back guarantee or 30-day return policy, since I was never able to get my more specific questions answered?  I figured if I had 30 days to see if the lab met my requirements then I could test it out and find out for myself.)

5. Great customer service but the pricing was a non-starter. $$$+ per day and it would have been limited to business hours.

6. When I first reached out with questions about their ACI lab they said it would not be available until late October (I assumed this year).  When I reached out in November, they didn’t even answer the question so clearly this is still a work in progress.

7. Worthy of further investigation


Details and Links

Cost Legend:

  • $ Less than $200
  • $$ Hundreds
  • $$$ Thousands

INE $$

CCIE Data Center – 850 Tokens/Week (Weekly rentals only) ($1 = 1 Token)

Excellent lab but very busy (because it’s very good) and so can be difficult to schedule.

NterOne $$

Excellent lab with good functionality at a reasonable price point.

Fast Lane $$$

Minimum of 4 Students @ $439/Student

Global Knowledge $$$

On Demand (12 Months)

Very poor customer support (my experience)

CloudMyLab

Lab not available yet. No timeframe given.

Octa Networks

More course focused but awaiting response.

Labs 4 Rent

INDIA: +91-9538 476 467  |  UAE: +971-589 703 499 | Email: info@labs4rent.com

No response to emails

FireFly $$$+ /day

Too expensive (for me)!

Rack Professionals

Needs further investigation

NH Networkers Home

+91-8088617460 / +91-8088617460

Needs further investigation

Micronics Training

They do not rent out their racks.

My CCIE Rack $

support@myccierack.com. Whatsapp number- 7840018186

Very poor experience

CCIE Rack Rentals $

support@ccierack.rentals WhatsApp : +918976927692

Very poor experience

The Struggle with Structure – Network Automation, Design, and Data Models

Preface

Modern enterprise networking is going to require a level of structure and consistency that the majority of its networking community may find unfamiliar and perhaps uncomfortable. As a community, we’ve never had to present our designs and configuration data in any kind of globally consistent or even industry standard format.

I’m fascinated by all things relating to network automation but the one thing that eluded me was the discussion around data models (YANG, OpenConfig).

Early on, the little I researched around YANG led me to conclude that it was interesting, perhaps something more relevant to the service provider community, and a bit academic. In short, not something directly relevant to what I was doing.

Here is how I figured out that nothing could be further from the truth and why I think this is an area that needs even more focus.

If you want to skip my torturous journey towards the obvious, see the resources section at the end or jump over to Cisco’s DevNet Model Driven Programmability for some excellent material.

You can also cut to the chase by going to the companion repository Data_Model_Design on GitHub, where a “proof of concept” takes a modified Cisco data model containing a handful of components and develops the high level diagram for those components along with a sample Markdown design document.


The Current Landscape

Since its earliest days as a discipline, networking (at least in the enterprise) has generally allowed quite a bit of freedom in the design process and its resulting documentation. That is one of the things I love about it and I’m certain I’m not alone in that feeling. A little island of creativity in an ocean of the technical.

For every design, I put together my own diagrams, my own documentation, and my own way to represent the configuration or just the actual configuration. Organizations tried to put some structure around that with a Word, Visio, or configuration text template, but often even that was mostly just for the purposes of branding and identification of the material. How many of us have been given a Word template with the appropriate logos on the title page and if you were lucky a few headings? Many organizations certainly went further requiring a specific format and structure so that there was consistency within the organization but move on to a different organization and everything was different.

The resulting design documentation sets were many and varied and locally significant.

In effect, the result was unstructured data. Unstructured or even semi-structured data, as text or standard output from a device or system, is well known, but this is unstructured data on a broader scale.

Design and Configuration Data

Over the last few years I’ve observed a pattern that I’m just now able to articulate. This pattern speaks to the problem of unstructured design and configuration data. The first thing I realized is that, as usual, I’m late to the party. Certainly the IETF has been working on the structured configuration data problem for almost 20 years and longer if you include SNMP! The Service Provider community is also working hard in this area.

The problem of structured vs unstructured data has been well documented over the years. Devin Pickell describes this in great detail in his Structured vs Unstructured Data – What’s the Difference? post.

For the purposes of this discussion let me summarize with a very specific example.

We have a text template that we need to customize with specific configuration values for a specific device:

!EXAMPLE SVI Template <configuration item to be replaced with actual value>

interface Vlan< Vlan ID >
description < SVI description >
ipv6 address < IP Address >/< IP MASK>
ipv6 nd prefix < Prefix >/< Prefix MASK > 0 0 no-autoconfig
ipv6 nd managed-config-flag
ipv6 dhcp relay destination < DHCP6 Relay IP >

If we are lucky we have this:

More often than not we have this:

Or this:

The problem is a little broader but I think this very specific example illustrates the bigger issue. Today there is no one standard way to represent our network design and configuration data. A diagram (in Visio typically) is perhaps the de-facto standard but it’s not very automation friendly. I’ve had design and configuration data handed to me in Word, PowerPoint, Excel (and their open source equivalents), Text, Visio, and the PDF versions of all of those.

Let me be clear. I am not advocating one standard way to document an entire network design set…yet. I’m suggesting that automation will drive a standard way to represent configuration data and that should drive the resulting design documentation set in whatever form the humans need it. That configuration data set or data model should drive not just the actual configuration of the devices but the documentation of the design. Ultimately, we can expect to describe our entire design in a standard system data model but that is for a future discussion.

Structured design and configuration data

In order to leverage automation we need the configuration data presented in a standard format. I’m not talking configuration templates but rather the actual data that feeds those templates (as shown above) and generates a specific configuration and state for a specific device.
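As a concrete illustration, here is a minimal sketch of that idea. The values are made up and the tooling (Jinja2 plus PyYAML) is just one of many possible choices, but once the inputs exist as structured data, rendering the SVI template above becomes trivial:

# A minimal sketch: the SVI template expressed as a Jinja2 template and fed
# from a small YAML "configuration payload". All names and values are made up.
import yaml                      # PyYAML
from jinja2 import Template

SVI_TEMPLATE = """\
interface Vlan{{ vlan_id }}
 description {{ svi_description }}
 ipv6 address {{ ip_address }}/{{ ip_mask }}
 ipv6 nd prefix {{ prefix }}/{{ prefix_mask }} 0 0 no-autoconfig
 ipv6 nd managed-config-flag
 ipv6 dhcp relay destination {{ dhcp6_relay_ip }}
"""

PAYLOAD = """
vlan_id: 10
svi_description: Data SVI
ip_address: 2001:db8:10::1
ip_mask: 64
prefix: 2001:db8:10::
prefix_mask: 64
dhcp6_relay_ip: 2001:db8:ffff::5
"""

data = yaml.safe_load(PAYLOAD)
print(Template(SVI_TEMPLATE).render(**data))

The tooling is beside the point; what matters is that the values live in one structured place that both humans and scripts can consume.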

Traditionally, when developing a design you were usually left to your own devices as to how to go about doing that. In the end, you likely had to come up with a way to document the design for review and approval of some sort but that documentation was static (hand entered) and varied in format. Certainly not something that could be easily ingested by any type of automation. So over the last few years, I’ve developed certain structured ways to represent what I will call the “configuration payload”…all the things you need to build a specific working configuration for a device and to define the state it should be in.

Configuration payload includes:

  • hostname
  • authentication and authorization configuration
  • timezone
  • management configuration (NTP, SNMP, Logging, etc.)
  • interface configuration (ip, mask, description, routed, trunked, access, and other attributes)
  • routing configuration (protocol, id, networks, neighbors, etc.)

All of this data should be in a format that could be consumed by automation to, at the very least, generate specific device configurations and, ideally, to push those configurations to devices, QA, and ultimately to document.
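To make that a bit more tangible, here is a hedged sketch of a configuration payload for a single, entirely hypothetical device expressed as a Python dictionary; the same data could just as easily live in YAML or JSON under revision control:

# Hypothetical "configuration payload" for one device. Every name and value
# here is illustrative only.
payload = {
    "hostname": "dist-sw01",
    "timezone": "America/Los_Angeles",
    "management": {
        "ntp_servers": ["10.1.1.1", "10.1.1.2"],
        "syslog_host": "10.1.1.10",
    },
    "interfaces": [
        {"name": "Vlan10", "ip": "10.10.10.2", "mask": "255.255.255.0",
         "description": "Data SVI"},
        {"name": "GigabitEthernet1/0/48", "mode": "trunk",
         "allowed_vlans": [10, 100, 300, 666], "description": "Uplink"},
    ],
    "routing": {"protocol": "ospf", "process_id": 1,
                "networks": ["10.10.10.0/24"]},
}

# Any "unit of automation" can now consume the payload directly.
print(payload["hostname"], payload["interfaces"][1]["allowed_vlans"])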

My experience over the last few years tells me we have some work ahead of us to achieve that goal.

The problem – Unstructured design and configuration data is the norm today

As a consultant you tend to work with lots of different network engineers and client engineering teams. I started focusing on automation over 4 years ago and during that time I’ve seen my share of different types of configuration payload data. I’m constantly saying, if you can give me this data in this specific format, look what can be done with it!

My first memorable example of this problem was over 2 years ago. The client at the time had a very particular format that they wanted followed to document their wireless design and the deliverable had to be in Visio. I put together a standard format in Excel for representing access point data (name, model, and other attributes). This structured data set in Excel (converted to CSV) would then allow you to feed that data into a diagram that had a floor plan. You still had to move the boxes with the data around to where the APs were placed but it saved quite a lot of typing (and time) and reduced errors. I demonstrated the new workflow but the team felt that it would be simpler for them to stick to the old manual process. I was disappointed to be sure but it was a bit of a passion project to see how much of that process I could automate. We had already standardized on how to represent the Access Point configuration data for the automated system that configured the APs so it was a simple matter of using that data for the documentation.

The issue was more acute on the LAN side of the house. On the LAN side the structured documentation format (also in Excel) was not an option. It fed all the subsequent stages of the process including ordering hardware, configurations (this was the configuration payload!), staging, QA, and the final documentation deliverable.

When fellow network engineers were presented with the format we needed to use, let’s just say, the reception lacked warmth. I used Excel specifically because I thought it would be less intimidating and nearly everyone has some familiarity with Excel. These seasoned, well-credentialed network engineers, many of whom were CCIEs, struggled. I struggled right along with them…they could not grasp why we had to do things this way and I struggled to understand why it was such an issue. It is what we all do as part of network design…just the format was a little different, a little more structured (in my mind anyway).

I figured I had made the form too complicated and so I simplified it. The struggle continued. I developed a JSON template as an alternative. I think that made it worse. The feedback had a consistent theme. “I don’t usually do it that way.” “I’ve never done it this way before.” “This is confusing.” “This is complicated.”

Let’s be clear: at the end of the day we were filling in hostname, timezone information, vlans, default gateway, SVI IP/Mask, uplink interface configuration, and allowed vlans for the uplinks. These were extremely capable network engineers. I wasn’t asking them to do anything they couldn’t do half asleep. I was only requiring a certain format for the data!

During these struggles I started working with a young engineer who had expressed an interest in helping out with the automation aspects of the project. He grasped the structured documentation format (aka the Excel spreadsheet) in very little time! So much so, that he took on the task of training the seasoned network engineers. So it wasn’t the format, or at least it wasn’t just the format, if a young new hire with very little network experience could not only understand it but master it enough to teach it to others.

With that, the pieces fell into place for me. What I was struggling against was years of tradition and learned behavior. Years of a tradition where the configuration payload format was arbitrary and irrelevant. All you needed was your Visio diagram and your notes and you were good to go.

Unstructured configuration payload data in a variety of formats (often static and binary) is of little use in this brave new world of automation and I started connecting the dots. YANG to model data, vendor YANG data models…OK…I get it now. These are ways to define the configuration payload for a device in a structured way that is easily consumed by “units of automation” and the device itself.

This does not solve the broader issue of unlearning years of behavior but it does allow for the learning of a process that has one standard method of data representation. So if that transition can be made I can get out of the data modeling business (Excel) and there is now a standard way to represent the data and a single language we can use to talk about it. That is, of course, the ideal. I suspect I’m not out of the data modeling business just yet but I’m certainly a step closer to being out of it and, most importantly, understanding the real issue.

The diagram above shows an evolution of the network design process. The initial design activities won’t change much. We are always going to need to:

  • Understand the business needs, requirements, and constraints
  • Analyze the current state
  • Develop solutions perhaps incorporating new technology or design options using different technologies
  • Model the new design
  • Present & Review the new design

In this next evolution, we may use many of the old tools, some in new ways. We will certainly need new tools many of which I don’t believe exist yet. As we document requirements in a repository and configuration payload in a data model, those artifacts can now drive:

  • An automated packaging effort to generate the design information in human readable formats that each organization wants to see
    • Here the Design Document, Presentation, and Diagram are an output of the configuration payload captured in the data model (I’ve deliberately not used Word, PowerPoint, and Visio…)
  • The actual configuration of the network via an automation framework since by definition our data models can be consumed by automation.
  • All of it in a repository under real revision control (not filenames with the date or a version identifier tacked on)

As with any major technology shift, paradigm change, call it what you will, the transition will likely result in three general communities.

  1. Those who eagerly adopt, evangelize, and lead the way
  2. Those who accept and adapt to fit within the new model
  3. Those who will not adapt

I’m sorry to say I’ve been wholly focused on the first community until now. It is this second “adapt community”, which will arguably be the largest of the three, that needs attention. These will be the network engineers who understand the benefits of automation and are willing to adapt but at least initially will likely not be the ones evangelizing or contributing directly to the automation effort. They will be the very capable users and consumers of it.

We need to better target skills development for them as the current landscape can be overwhelming.

It’s also important to note that the tooling for this is woefully inadequate right now and is likely impeding adoption.

What’s next?

The solution may elude us for a while and may change over time. For me at least the next steps are clear.

  • I need to do a better job of targeting the broader network community who isn’t necessarily excited (yet) about all the automation but is willing to adapt.
  • I will start discussing and incorporating data models into the conversations and documentation products with my new clients and projects.
  • I will start showcasing the benefits of this approach in every step of the design process and how it can help improve the overall product.
    • revision control
    • improved accuracy
    • increased efficiency

Example Repository

If you want to see how some of these parts can start to fit together please visit my Data_Model_Design repository on GitHub, where a “proof of concept” takes a modified Cisco data model containing a handful of components and develops the high level diagram for those components along with a sample Markdown design document, which I then saved to PDF for “human consumption”.

Don’t miss these resources

Always a fan of DevNet, the DevNet team once again does not disappoint.

YANG for Dummies by David Barroso

YANG Opensource Tools for Data Modeling-driven Management by Benoit Claise

YANG Modules Overview from Juniper Networks Protocol Developer Guide

YANG and the Road to a Model Driven Network by Karim Okasha

The OpenConfig group bears watching as they are working on vendor agnostic real world models using the YANG language. This is very much a Service Provider focused initiative whose efforts may prove very useful in the Enterprise space.

OpenConfig Site

OpenConfig GitHub Repository

Building a Production-ish Ready WebEx Teams ChatBot

Introduction

  • Is your interrupt-driven day no longer supportable?
  • Is there a particular Project Manager that asks you the same question every morning?
  • Do you often have to take some technical data and simplify it for semi- or non-technical consumption?
  • Would you like to pull out relevant sections of technical data for a sibling team? Today you send them all the data and hope they can find what they need, because you just don’t have the time to put together something customized for them containing only what they need.

I suspect many of us have these situations far too often.

I spend quite a bit of time on Webex Teams (referred to as Spark from here on out in protest of the terrible re-branding of Spark to WebEx Teams, which not only completely lacks imagination but also confuses anyone who also has to deal with Microsoft Teams… a rebranding so awful it might actually unseat the rebranding of Brigade). It is currently the messaging application of choice for my two biggest projects and a principal culprit in my interrupt-driven day. Maybe this can help turn it into an advantage… a mini-you if you will.

Not long ago, I lost a young and incredibly capable engineer to another project and that loss was pretty impactful. Many of the day to day activities fell back to me and I found lots of little (and not so little) things falling through the cracks. How was I going to work smarter not harder?

Network engineer getting asked questions from everyone!

The Back Story

I got to attend Interop this year and was very inspired by every session I attended but a few in particular really got me thinking about how I might automate my way out of my current predicament.

Jeremy Schulman – Self-service Network Automation Using Slack

Nick Russo – Automation for Bureaucracies

Hank Preston – A Practical Look at NetDevOps – While Hank’s session inspired other ideas it is his work with DevNet and a SparkBot module that is most relevant for this exercise.

The Resulting Scene

The effort documented here is a real world experience and solution. To protect the innocent I’ve created a “demo” version of this solution that will highlight the functionality and the possibilities. Over the course of several posts, I’ll document the requirements, use cases, solutions, and issues.


I like to start with the problem statement:

  1. I need to be able to provide fine grained project status on demand
  2. I need to be able to print a pretty Summary report for management on demand
  3. I need to be able to extract the relevant portion of some technical information for a parallel project on demand
  4. I’d like to incorporate something fun to get the team comfortable and familiar with the Bot (on demand of course)

I also like to document the initial state and the characteristics of the environment:

  • Spark was the required platform for messaging. Enough of the team utilized Spark so as to make it an extremely viable delivery platform.
  • A SAAS platform was used for project status.
  • All the collateral and technical information resided in a document repository that everyone constantly works on and syncs to.

The solution concept: develop a SparkBot attendant, available to the Spark Team under which all of our spaces are created, that can access an external API and a document repository and respond to simple commands starting with:

  1. site_status
  2. site_summary
  3. site_networks
  4. comic_relief

OK… I started Googling and found lots of material, but there were two issues:

  • Almost everything leveraged a dummy test environment often using ngrok, a fabulous little tool, but I needed this to be at least in the neighborhood of production ready. I did not want it to run on my home system.
  • Assembly was required in that there was good material on specific steps but not much on putting it all together. I suspect the thought there was that some of that should have been obvious but it wasn’t to me.

When I knew enough to be dangerous I figured I needed the following components:

Below, each component is listed with a description and the platform used.
Messaging Client & Account
The most basic requirement was a messaging client that was in use by enough of the team so as to be a viable delivery platform. This one was easy. Spark was already the messaging application of choice for the project. Using the messaging client solved all kinds of issues. I already had a client for a wide range of platforms, web, desktop, and mobile and I had a known User Interface to which the team was already accustomed.


Platform used: Cisco WebEx Teams (Spark!!) Web Client, Desktop App, Mobile App
Messaging Platform
Of course the back end messaging platform had to support the required functionality for a Bot type application and research showed that it did. Spark has all the hooks for provisioning a Webhook and a Bot application.

Platform used: Cisco WebEx Teams (Spark!!)
Document Repository
The document repository the Team used had to have a client that I could install on the Web Server so that the data store would be available to bot functions and would always be synchronized (the latest).

Platform used: Google Drive
Project Status API
The Project Status tool had to support an API and I knew that the project SAAS did. (More on this later)

Platform used: SAAS Project Management Tool
Web Server (Front End)
Here is where it started getting tricky. I am by no means a System Administrator but I had to find a suitable web server and web server technology to use. The Web Server had to support the back end environment for the functionality and, as I mentioned, also had to have a client so that I could present and sync the document repository.

Platform used: Nginx
Web Server Backend
For the back end technology I always gravitate to Django but it was not a good fit in this case.
1. I did not foresee a need for a database
2. Most of the examples I found utilized Flask and I needed to get this working quickly before the village came after me with pitchforks and torches.

This was my first use of Flask and it is a testament to what you always hear about it. It is simple and lightweight with quite a lot of functionality. More than enough for my purposes. With the excellent examples out there I had no issues getting it to work.

Platform used: Flask & Python
Hosting Platform
This was another challenge.

I mentioned I’m not a SysAdmin and initially I felt that bringing up a Linux server that I could “harden” sufficiently was probably not something I was going to do well so I started with a MacInCloud instance. It was a good Proof of Concept but I wasn’t happy with the performance or the cost so I abandoned it as the platform for the Bot Server.

There was nothing for it.

I needed to go down the path of a Linux host. I think I knew that already but I’ve been wanting to check out MacInCloud for a while and I thought that route would also save me time in researching how to secure a Linux server.
I always gravitate to Digital Ocean for this. They have superb documentation and How-Tos, a simple and intuitive interface, excellent price points, and I’m a diver.

The first iteration of the Bot was actually on Digital Ocean and working quite well but I ran into a showstopper. The document sync client for Linux did not give you the option of changing the mount point. I could not grow the home volume on my Droplet sufficiently (the project data store is at about ~500GB) to host the data. I could add a volume but the sync client would try to sync to the home directory. I found a hack but it talked about sync instability and if I could not rely on the data being in sync on the Web Server then the rest was going to be pointless. Much of the functionality I needed involved accessing the latest data and documents on the repository.

Well, the other hosting platform I’ve been wanting to try is the Google Cloud Platform. I must say I’m very impressed. Everything is pretty intuitive. I already had the domain I wanted to use on Google so that made DNS a breeze!

The production Bot is currently on GCP.

The public Demo system that is part of this post is on Digital Ocean. The Demo does not have the constraints of the production system and without the Digital Ocean documentation it’s doubtful I would have a working solution.

Platforms used: MacInCloud, Digital Ocean Droplet, Google Cloud Platform (GCP)
Note: The Demo is built on a Digital Ocean Droplet
Messaging Platform SDK or Module
I knew Spark had an SDK so worst case I had tools for the Bot.

Further research resulted in quite a few options for the Bot.

Here is where Hank Preston really saved me a ton of time. Hank’s webexteamsbot module provides a very nice framework for your own bot functions. It comes with a few that you will want to keep (/help is one) and provides some very good examples.

As your bot gets more capable (i.e. understands new functions or commands) you simply add those functions to your main script (a minimal sketch of this pattern follows the component list below). The biggest issue I had was understanding what the module abstracted out. I needed to tap into a lot of the information…for example, the Spark space or room name had information that I needed to parse out so that if you ran a command within a space it would customize the output.

Platform used: hpreston/webexteamsbot
Scripts
Each “function” or Bot command would need its own script or set of scripts and functions.

Platform used: Python
HTTPS
I wanted to include as much security as my feeble SysAdmin abilities could muster. At a minimum SSL and some basic firewall functions.

Platform used: HTTPS, certbot
Domain
I wanted to use a FQDN.

Platform used: cdl-automation.net
The Webex Teams ChatBot Subsystems
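To show how these pieces hang together, here is a minimal sketch of the bot application following the pattern documented for the hpreston/webexteamsbot module. The token, URL, email, and command responses are placeholders, and the real site_status function (which calls the project SAAS API and parses the room name) is far more involved than this:

# Minimal bot application sketch (placeholder credentials and canned replies),
# following the documented hpreston/webexteamsbot pattern.
import os
from webexteamsbot import TeamsBot

bot_email = os.environ["TEAMS_BOT_EMAIL"]    # e.g. mybot@webex.bot
teams_token = os.environ["TEAMS_BOT_TOKEN"]  # the bot's access token
bot_url = os.environ["TEAMS_BOT_URL"]        # public HTTPS URL for the webhook
bot_app_name = "ProjectAttendantBot"

bot = TeamsBot(
    bot_app_name,
    teams_bot_token=teams_token,
    teams_bot_url=bot_url,
    teams_bot_email=bot_email,
)

def site_status(incoming_msg):
    # The real version parses the room name and queries the project SAAS API.
    return "Site status: all circuits up, cutover scheduled for Friday."

def comic_relief(incoming_msg):
    return "Why did the packet cross the network? To get to the other subnet."

# Register the commands the bot understands along with their help text.
bot.add_command("/site_status", "Fine grained project status", site_status)
bot.add_command("/comic_relief", "A little levity on demand", comic_relief)

# Flask is doing the work under the hood; in production this sits behind
# Nginx with HTTPS (certbot) in front of it.
bot.run(host="0.0.0.0", port=5000)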

This is the first of a 5 Part Series on creating a production-ish ready WebEx Teams Chat Bot.

  1. Introduction
  2. The Basic Web Server
  3. The WebEx Teams (Spark) Backend Webhook
  4. The Bot Application on the Web Server
  5. The Bot

Decomposing Data Structures

Whether you are trying to find all the tenants in an ACI fabric, or all the interface IPs and descriptions on a network device, or trying to determine if the Earth is in imminent danger from an asteroid hurtling towards it, understanding complex data structures is a critical skill for anyone working with modern IT infrastructure technology.

As APIs become more and more prevalent, acting on an object (device, controller, cloud-based service, management system, etc.) will return structured data in a format you may have never seen before. For beginners, this may be a little daunting but once you get a few basics down you will be decomposing data structures at dinner parties!

Most modern APIs return data in XML or JSON format. Some give you the option to choose. We won’t spend any time on how you get this structured data. That will be a topic for another day. The focus today is how to interpret and manipulate data returned to you in JSON. I’m not a fan of XML so if I ever run into an API that only returns XML (odds are you will too) I do my very best to convert it to JSON first.
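If you do get handed XML, one way to pull it into JSON-friendly Python structures (a small sketch using the third-party xmltodict library and a made-up payload) looks like this:

# Convert an XML payload into nested Python dictionaries, then on to JSON.
# The XML below is a made-up example; xmltodict is a third-party library.
import json
import xmltodict

xml_payload = "<interface><name>GigabitEthernet1</name><ip>10.10.20.48</ip></interface>"

as_dict = xmltodict.parse(xml_payload)  # XML -> nested dictionaries
print(json.dumps(as_dict, indent=2))    # and on to JSON text if you need it
print(as_dict["interface"]["name"])     # GigabitEthernet1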

Let’s get some basics out of the way. Curly braces {}, square brackets [], and spacing (whitespace) provide a syntax for this returned data. If you know a little Python these will be familiar to you.

Symbol and Meaning:

  • [ ] LIST
    Square brackets denote that a list of values will be found between the opening [ and closing ] brackets, separated by commas. List elements are referenced by the number of their position in the list, so in a list like my_list = [1,2,3,4], if you want the information in the 3rd position (the third element, which is the number 3) you say my_list[2], because lists are zero indexed: my_list[0] is the number 1, my_list[1] is the number 2, etc.

  • { } DICTIONARY
    Curly braces denote that a set of key:value pairs will be found between the opening { and closing } braces, separated by commas, with each key and value separated by a colon (key: value). Dictionary key:value pairs are referenced by the key, so in a dictionary like my_dict = {‘la’: ‘Dodgers’, ‘sf’: ‘Giants’}, if you want to know the baseball team in LA you reference my_dict[‘la’].

Lists look like this:

[
item1,
item2,
item3
]

or (equally valid)

[item1,item2,item3]

Dictionaries look like this:

{
key:value,
otherkey:othervalue
}

or (equally valid)

{key:value, otherkey:othervalue}

The other thing to note is that when these structures are formatted, spacing is important and can tell you a lot about the hierarchy.

The examples above are not quoted and they should be. I didn’t quote them because I wanted to highlight that single and double quotes can both be used in Python, but the JSON standard requires double quotes.

It’s also important to understand that these two basic structures can be combined so that you can have a list of dictionaries or a dictionary of key:value pairs where the value is a list or a dictionary. You can have many levels and combinations and often that is what makes the data structure seem complex.
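Here is a small, made-up example of that kind of combination and how the two reference styles chain together:

# A dictionary whose values include a list and a nested dictionary (made-up data).
device = {
    "hostname": "sw01",
    "vlans": [10, 100, 300, 666],
    "mgmt": {"ip": "10.1.1.10", "gateway": "10.1.1.1"},
}

print(device["vlans"][0])         # 10 - dictionary key, then list position
print(device["mgmt"]["gateway"])  # 10.1.1.1 - dictionary key, then nested key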

Let’s start with a simple example.

Below is the output of an Ansible playbook that queries an ACI fabric for the configured tenants.

ACI Tenant Query Playbook – output

Let’s look at the data that was returned, which is highlighted in a big yellow outer box in the image below. Also highlighted are the syntax symbols that will let us decompose this structure. As you can see from a, the entire structure is encased in curly braces {}. That tells us that the outermost structure is a dictionary, and we know that dictionaries provide us with a key (think of this as an index value you use to get to the data in the value part), followed by a colon :, followed by a value. In this case, we have a dictionary with one element or key:value pair and the value is a list. We know it is a list from b, which shows the left square bracket immediately following the colon, denoting the start of a list structure.

{ "key" : "value" } ( where value is a list structure [])
{"tenantlist": ["common", "mgmt", "infra", etc..]}

ACI Tenant Query Playbook – output with annotations
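Once that structure is loaded into a Python variable (the variable name below is mine, and the tenant list is truncated), getting at the tenants is a single dictionary lookup followed by normal list handling:

# The structure returned by the playbook, loaded into a variable.
registered_output = {"tenantlist": ["common", "mgmt", "infra"]}

tenants = registered_output["tenantlist"]  # the value of the single key is a list
print(tenants[0])                          # common - lists are zero indexed
print(len(tenants))                        # 3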


This is a simple structure. It is short, so you can see the entire data structure in one view, but it has both types of data structures that you will encounter, a list and a dictionary.


Let’s look at something a bit more complex.

REST COUNTRIES provides a public REST API for obtaining information about countries.

Check them out!

Using Postman, I submitted a query about Kenya. Below are the first 25 or so lines of the data returned.

Noting the colored letters and line numbers below in the image:

a. Line 1 shows us the left square bracket (outlined in a green box) which tells us this is a list. That is the outermost structure, so you know that to get to the first element you will need to use a list reference like my_list[0].

b. Line 2 shows us the left curly brace (outlined in a yellow box) which indicates that what follows as the first element of the list is a dictionary. Line 3 is a key:value pair where the key “name” has a value of “Kenya”.

c. Line 4 is a key:value pair but the value of the key "topLevelDomain" is a list (of one element, ".ke").
Line 7 returns us to a simple key:value pair.

Structured data returned from REST Countries API query

Here is where it can start getting confusing. Remembering our reference rules, that is:

  • you reference a list element by its zero indexed positional number and
  • you reference the value of a dictionary element by its key

Don’t be distracted by the data within the syntax symbols just yet. If you see something like [ {},{},{} ], (ignoring the contents inside the braces and brackets) you should see that this is a list of three elements and those elements are dictionaries. Assuming alist = [ {},{},{} ] you access the first element (dictionary) with alist[0].

Going a step further, if you see this structure [ {"key1":[]},{"a_key": "blue"},{"listkey": [1,2,3]} ], you already know it’s a list of dictionaries. Now you can also see that the first element in the list is a dictionary with a single key:value pair and the key is “key1” and the value is an empty list. The second element in the list is also a dictionary with a single key:value pair with a key of “a_key” and a value of a string “blue”. I’ll leave you to describe the third element in the list.

Assuming my_list = [ {"key1":[]},{"a_key": "blue"},{"listkey": [1,2,3]} ], if I wanted to pull out the string “blue” I would reference my_color = my_list[1]["a_key"] and the variable my_color would be equal to “blue”. The string “blue” is the value of the second dictionary in the list. Remembering that list elements start at “0” (zero indexed), you need to access the element in the second position with [1]. To get to “blue” you have to use the key of “a_key”, and so you have my_list[1]["a_key"], which will give you “blue”.
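Running that same example end to end makes the chaining easier to see:

my_list = [{"key1": []}, {"a_key": "blue"}, {"listkey": [1, 2, 3]}]

my_color = my_list[1]["a_key"]   # second element (index 1), then its key
print(my_color)                  # blue

print(my_list[2]["listkey"][1])  # 2 - third element, key "listkey", second item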

Let’s try to extract the two letter country code for Kenya.

So I’m going to cheat a little here to introduce you to the concept of digging into the data structure to “pluck” out the data you want. You really want to understand the entire data structure before doing this, so I’m going to assume that this is a list with one element and that element is a dictionary of key:value pairs with different information about Kenya.

That being the case, first I have to reference the element in the list.

  • The list has one element so I know I have to reference it with the zero index [0] (the first and only element in the list)
  • Next I have to pluck out the 2 letter country code for Kenya and that is in a key:value pair with the key of 'alpha2Code'

So assuming we have a variable country_info that has our data structure, then to reference the 2 letter country code I would need to use

country_info[0]["alpha2Code"]

That reference structure above would return “KE”.

[0] takes us one level deep into the first element of the list. This is where the dictionary (note the curly brace in line 2 below) can be accessed. At this level we can access the key we need “alpha2Code” to get the 2 letter country code.

country_info =

Extracting the Country Code from the JSON output

Let's build on this. What if I need the country calling code so I can make a phone call to someone in Kenya from another country? For this, we need to go a level deeper, as the calling code is in a key named "callingCodes" at the same level as "alpha2Code", but the value is a list rather than a single string. See lines 9 – 11 in the image above. We know how to reference a list, so in this case, if I wanted the first calling code in the list my reference structure would look like:

country_info[0]["callingCodes"][0]

That would return “254” (a string).

In many cases, you might want the entire list and so to get that:

country_info[0]["callingCodes"]

That would return [“254”] as a list (yes a list with only one element but a list because its enclosed in square brackets). There are cases where you may want to do some specific manipulation and you need the entire list.
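To tie this together, here is a minimal sketch of plucking both values out of a live response. The endpoint URL is my assumption of the v2 API path that was in use when this was written (adjust it if the API has moved); the decomposition itself is exactly what we walked through above:

import requests

# Assumed REST Countries v2 endpoint - verify against the current API documentation
URL = "https://restcountries.eu/rest/v2/name/Kenya"

response = requests.get(URL)
country_info = response.json()      # a list with one dictionary in it

two_letter_code = country_info[0]["alpha2Code"]     # "KE"
calling_codes = country_info[0]["callingCodes"]     # ["254"] - the whole list

print(two_letter_code, calling_codes[0])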

Extra: In the companion GitHub repository to this post there is a quick & dirty Python3 script country_info_rest.py that will let you get some country data and save it to a file. There is also an optional “decompose function” that you can tailor to your needs to get a feel for decomposing the data structure via a script.

(generic_py3_env) Claudias-iMac:claudia$ python country_info_rest.py -h
usage: country_info_rest.py [-h] [-n CNAME] [-d]

Call REST Countries REST API with a country name.

optional arguments:
  -h, --help            show this help message and exit
  -n CNAME, --cname CNAME
                        Country Name to override default (Mexico)
  -d, --decompose       Execute a function to help decompose the response

Usage: 'python country_info_rest.py' without the --cname argument the script
will use the default country name of Mexico. Usage with optional name
parameter: 'python country_info_rest.py -n Singapore'. Note: this is a python3
script.

Let's look at something really complex.

Now a word about “complex”. At this point I’m hoping you can start to see the pattern. It is a pattern of understanding the “breadcrumbs” that you need to follow to get to the data you want. You now know all the “breadcrumb” formats:

  • [positional number] for lists and
  • [“key”] for dictionaries

From here on out it's more of the same. Let's see this in action with the data returned from the NASA Asteroids – Near Earth Object Web Service.

But first, here is why I called this "really complex". A better term might be "long with scary data at first glance". At least it was for me, because when I ran my first successful Ansible playbook and finally got data back, it was like opening a shoe box with my favorite pair of shoes in it and finding a spider in the box… yes, shrieking "what IS that!" and jumping away.

I hope that by this point there is no shrieking but more of a quizzical "Hmmm, OK, I see the outer dictionary with a key of 'asteroid_output' and a value of another dictionary with quite a lot of keys and some odd looking stuff…". Let's get into it!

…and if there is shrieking, then I hope this helps get you on your way to where it's quieter.

Raw output from an Ansible Playbook querying the NASA Near Earth Web Service

I want to pluck out the diameter of the asteroid as well as something that tells me if there is any danger to Earth.

Somewhere, in the middle of all of this output, is this section which has the data we want but where is it in reference to the start of the entire data structure? Where are the breadcrumbs? Hard to tell… or is it?

Information we need to get to within the data structure returned by the Ansible playbook

You can visually walk the data but for data structures of this size and with many levels of hierarchy it can be time consuming and a bit daunting until you get the hang of it. There are a number of approaches I’ve tried including:

  1. visually inspecting it (good for 25 lines or less… if you cannot fit it on a single page to "eyeball" it, try one of the other methods below… I promise you it will save you time)
  2. saving the output to a text file and opening it up in a modern text editor or IDE so that you can inspect and collapse sections to get a better understanding of the structure
  3. using a Python script or Ansible playbook to decompose by trial and error
  4. using a JSON editor to convert to a more readable structure and to interpret the data structure for you

I don't recommend the first approach at all unless your data is like our first example, or unless you combine it with the Python (or Ansible) trial and error approach, and even then it can be time consuming. I do have to recommend doing it this way once, because it really helps you understand what is going on.

Using a good advanced editor (*not* Notepad.exe) or IDE (Integrated Development Environment) is a good approach but for something that makes my eyes cross like the output above I use a JSON editor.

In the two sections below I’ll show you a bit more detail on approach #2 and #4. Play around with the companion GitHub repository for an example of approach #3.

Asteroid Data collapsed down in Sublime Text Editor

Note that this has already been collapsed down to the value of the key asteroid_output so the outer dictionary is already stripped off. In this view it looks a bit more manageable and the values we want can be found at the level shown below:

asteroid_output["json"]["near_earth_objects"][<date_key>]

where <date_key> can be any of the 8 date keys found in line 23, line 858, line 1154, etc. The gap in the line numbers gives you a sense of how much data we've collapsed down, but I hope you can begin to see how that makes it easier to start understanding how you need to walk the path to the data you want.

asteroid_output =

Expanding one of the date keys as shown in the next image shows us how we might start to get to the data we want.

asteroid_output["json"]["near_earth_objects"]["2019-07-07"]

The date key we expanded, "2019-07-07", has a value that is a list. If we take the first element of that list we can get the estimated diameter in feet and the boolean value of "are we to go the way of the dinosaurs", or technically the value of the key "is_potentially_hazardous_asteroid".

Estimated maximum diameter in feet:

asteroid_output["json"]["near_earth_objects"]["2019-07-07"][0]["estimated_diameter"]["feet"]["estimated_diameter_max"]

Is this going to be an extinction level event?:

asteroid_output["json"]["near_earth_objects"]["2019-07-07"][0]["is_potentially_hazardous_asteroid"]

Which will give us false (for that one date anyway :D).
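As a quick illustration, here is a small sketch of walking that structure in Python. It assumes the playbook's registered output has been saved to a JSON file (the file name here is hypothetical) and uses only the keys we identified above:

import json

# Hypothetical file name - wherever you saved the playbook's registered output
with open("asteroid_output.json") as f:
    asteroid_output = json.load(f)

near_earth = asteroid_output["json"]["near_earth_objects"]

# Each date key holds a list of asteroid dictionaries
for date_key, asteroid_list in near_earth.items():
    for asteroid in asteroid_list:
        max_feet = asteroid["estimated_diameter"]["feet"]["estimated_diameter_max"]
        hazardous = asteroid["is_potentially_hazardous_asteroid"]
        print(f"{date_key}: max diameter {max_feet:.0f} ft  hazardous: {hazardous}")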

Using a good text editor or IDE to investigate a data structure by expanding

Using JSON tools to decompose your data structure

It is here I must confess that these days if I can’t visually figure out or “eyeball” the “breadcrumbs” I need to use to get to the data I want, I immediately go to this approach. Invariably I think I can “eyeball” it and miss a level.

If I’m working with non-sensitive data JSON Editor Online is my personal favorite.

  1. I copy the output and paste it into the left window,
  2. click to analyze and format into the right window, and
  3. then I collapse and expand to explore the data structure and figure out the breadcrumbs that I need.

The Editor gives you additional information and element counts and has many other useful features. One of them is allowing you to save an analysis online so you can share it.

Decomposing_Data_Structures_asteroid_output in JSON Editor Online

Using the JSON Editor Online to navigate through the returned data from your API call

There are occasions where I’m not working with public data and in those cases I’m more comfortable using a local application. My “go to” local utility is JSON Editor from Vlad Badea available from the Apple Store. I don’t have a recommendation for Windows but I know such tools exist and some look interesting.

For this data set, the local JSON Editor application does a nicer job of representing the asteroid_output because it really collapses that hairy content value.

Using the JSON Editor App on your local system to navigate through the returned data from your API call

Using a Python script to decompose by trial and error

In the companion GitHub repository there is a rudimentary Python3 script country_info_rest.py which, when executed with the "-d" option, will attempt to walk the response from the REST Countries API a couple of levels.

The first part of the script executes a REST GET and saves the response. With the “-d” option it also executes a “decompose” function to help understand the returned data structure. Some sample output from the script follows.

Outer structure (0) levels deep:
        The data structure 0 levels deep is a <class 'list'>
        The length of the data structure 0 levels deep is 1

One level deep:
        The data structure 1 level deep is a <class 'dict'>
        The length of the data structure 1 level deep is 24

        Dictionary keys are dict_keys(['name', 'topLevelDomain', 'alpha2Code', 'alpha3Code', 'callingCodes', 'capital', 'altSpellings', 'region', 'subregion', 'population', 'latlng', 'demonym', 'area', 'gini', 'timezones', 'borders', 'nativeName', 'numericCode', 'currencies', 'languages', 'translations', 'flag', 'regionalBlocs', 'cioc'])

                Key: name       Value: Singapore
                Key: topLevelDomain     Value: ['.sg']
                Key: alpha2Code         Value: SG
                Key: alpha3Code         Value: SGP
                Key: callingCodes       Value: ['65']
                Key: capital    Value: Singapore
                Key: altSpellings       Value: ['SG', 'Singapura', 'Republik Singapura', '新加坡共和国']
                Key: region     Value: Asia
                Key: subregion  Value: South-Eastern Asia
                Key: population         Value: 5535000
                Key: latlng     Value: [1.36666666, 103.8]
                Key: demonym    Value: Singaporean
                Key: area       Value: 710.0
                Key: gini       Value: 48.1
                Key: timezones  Value: ['UTC+08:00']
                Key: borders    Value: []
                Key: nativeName         Value: Singapore
                Key: numericCode        Value: 702
                Key: currencies         Value: [{'code': 'BND', 'name': 'Brunei dollar', 'symbol': '$'}, {'code': 'SGD', 'name': 'Singapore dollar', 'symbol': '$'}]
                Key: languages  Value: [{'iso639_1': 'en', 'iso639_2': 'eng', 'name': 'English', 'nativeName': 'English'}, {'iso639_1': 'ms', 'iso639_2': 'msa', 'name': 'Malay', 'nativeName': 'bahasa Melayu'}, {'iso639_1': 'ta', 'iso639_2': 'tam', 'name': 'Tamil', 'nativeName': 'தமிழ்'}, {'iso639_1': 'zh', 'iso639_2': 'zho', 'name': 'Chinese', 'nativeName': '中文 (Zhōngwén)'}]
                Key: translations       Value: {'de': 'Singapur', 'es': 'Singapur', 'fr': 'Singapour', 'ja': 'シンガポール', 'it': 'Singapore', 'br': 'Singapura', 'pt': 'Singapura', 'nl': 'Singapore', 'hr': 'Singapur', 'fa': 'سنگاپور'}
                Key: flag       Value: https://restcountries.eu/data/sgp.svg
                Key: regionalBlocs      Value: [{'acronym': 'ASEAN', 'name': 'Association of Southeast Asian Nations', 'otherAcronyms': [], 'otherNames': []}]
                Key: cioc       Value: SIN

===== Plucking out specific data:
2 Letter Country Code:                          SG
First (0 index) International Calling Code:     65
List of International Calling Code:             ['65']
==========================================

Feel free to take this script and add to it and modify it for your own data structure!
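If you'd like a starting point, a minimal decompose helper (my own rough sketch, not the exact function in the repository) can be as simple as reporting the type, length, and keys at each level:

def decompose(data, level=0):
    """Print the type, length, and keys (if any) of a structure, one level at a time."""
    indent = "\t" * level
    print(f"{indent}The data structure {level} level(s) deep is a {type(data)}")

    if isinstance(data, (list, dict)):
        print(f"{indent}The length of the data structure {level} level(s) deep is {len(data)}")

    if isinstance(data, dict):
        print(f"{indent}Dictionary keys are {data.keys()}")
        for key, value in data.items():
            print(f"{indent}\tKey: {key}\tValue: {value}")
    elif isinstance(data, list) and data:
        # Recurse into the first element to go one level deeper
        decompose(data[0], level + 1)

# Example: pass in the list returned by the REST Countries query
# decompose(country_info)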


Apart from the first example, I have deliberately not used data from network devices. I wanted to show that the data source really does not matter. Once you understand how to decompose the data, that is, get to the data you want within a returned data structure, you can pluck out data all day long from any data set. A secondary objective was to play around with some of these APIs. While checking to see if the Earth is about to get broadsided by an asteroid is clearly important there are quite a few public and fee-based APIs out there with perhaps more practical use.

Of course within our own networks, we will be querying device and controller APIs for status and pushing configuration payloads. We will be pulling inventory data from a CMDB system API, executing some actions, perhaps some updates, and recording any changes via API to the Ticketing System.


Some final tips, links, and notes:

Some sites have excellent API documentation that tell you exactly what will be returned but some don’t, so in many instances you have to do this decomposition exercise anyway. It’s best to get familiar with it. It’s like knowing how to read a map and and how to navigate in case you forget your GPS.

JSON Tools

JSON Editor Online

REST APIs

REST COUNTRIES

NASA Asteroids – Near Earth Object Web Service

https://api.nasa.gov/api.html#NeoWS

Examples Repository cldeluna/Decomposing_DataStructures


The Gratuitous ARP

2019-07-07


Building a Custom TextFSM Template

If you have seen any of the TextFSM posts on this site you know how useful the Network To Code TextFSM Template repository can be. Rarely do I not find what I need there!

I recently had to parse route summary information from JUNOS Looking Glass routers. I always check the very rich set of templates in the NTC Template index repository but in this case I was out of luck. I was going to have to build my own… and you get to watch.

Two fantastic resources you can use when you are in the same boat are here:

It's good to begin by familiarizing yourself with the output you need to parse. Here is a snippet of the show command output.

>show route summary
Autonomous system number: 2495
Router ID: 164.113.193.221
inet.0: 762484 destinations, 1079411 routes (762477 active, 0 holddown, 12 hidden)
    Direct: 1 routes, 1 active
    Local: 1 routes, 1 active
    BGP: 1079404 routes, 762470 active
    Static: 5 routes, 5 active
inet.2: 3073 destinations, 3073 routes (3073 active, 0 holddown, 0 hidden)
    BGP: 3073 routes, 3073 active

Start with something simple like ASN and RouterID

A basic TextFSM Template

I wanted to start slowly with something I knew I could get to work. Looking at the data, it should be simple to extract the first two values I need:
  • ASN
  • Router ID

I started with those values as they are by far the simplest to extract from the 'show route summary' command. I will try not to cover material that is covered by the two Google links above. However, I do want to point out the concept of TextFSM (as I understand it, or explain it to myself), which is to provide context for your regular expressions. That is, not only can you define the specific pattern to search for, you can also define its "environment". As you can see below, the "Value" keyword lets me define a variable I want to pluck out of the unstructured text (the show command output). Line 4 begins the "Start" state, the action section where processing begins, and the first thing to look for is a line that starts with "Autonomous system number:", one or more spaces (noted by the \s+), and then our ASN variable, which we defined above as a pattern of one or more digits (\d+). So you have the power of the regular expression that defines the value you want, plus the power of regular expressions to define the context where that value will be found.

Junos ‘show route summary’ TextFSM Template – Version 1
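The image above shows the first cut of the template. For anyone who cannot see it, based on the values described above and the full template later in this post, version 1 likely looked something like this (my reconstruction, not a copy of the image):

Value ASN (\d+)
Value RTRID (\S+)

Start
 ^Autonomous system number:\s+${ASN}
 ^Router ID:\s+${RTRID} -> Record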

For this exercise we will use my textfsm3 GitHub repository and the “test_textfsm.py” script for our testing rather than the Python command interpreter. Simply clone the repository to get started.
Note that the repository has the completed version of the template. Look at the history of the template file on GitHub to see its “evolution”.

(txtfsm3) Claudias-iMac:textfsm3 claudia$ python test_textfsm.py -h
usage: test_textfsm.py [-h] [-v] template_file output_file

This script applys a textfsm template to a text file of unstructured data
(often show commands). The resulting structured data is saved as text
(output.txt) and CSV (output.csv).

positional arguments:
  template_file  TextFSM Template File
  output_file    Device data (show command) output

optional arguments:
  -h, --help     show this help message and exit
  -v, --verbose  Enable all of the extra print statements used to investigate the results

In the first iteration of the template file, we obtain the output below.

(txtfsm3) Claudias-iMac:textfsm3 claudia$ python test_textfsm.py junos_show_route_summary.template junos_show_route_summary.txt

TextFSM Results Header:
['ASN', 'RTRID']
================================
['2495', '164.113.193.221']
================================

Extract more details

So we have successfully built a template that will extract ASN and RouterID from the Junos show route summary command. Now it will get interesting because we also want this next set of values.

  • Interface
  • Destinations
  • Routes
  • Active
  • Holddown
  • Hidden

The first challenge here was to pick up the totals line. Here, one of my favorite tools comes into play: RegEx101. Regular expressions don't come easily to me and this site makes it so easy! I saved the working session for trying to match the first part of that long totals line. As you can see, you can't just match "inet", or "inet" plus a digit; you also have to account for the "small." prefix. Using RegEx101 and trial and error I came up with the following regular expression.

Value INT (([a-z]+\.)?[a-z]+(\d)?\.\d+)

inet.0: 762484 destinations, 1079411 routes (762477 active, 0 holddown, 12 hidden)

inet6.0: 66912 destinations, 103194 routes (66897 active, 0 holddown, 30 hidden)
Direct: 3 routes, 3 active

small.inet6.0: 31162 destinations, 31162 routes (31162 active, 0 holddown, 0 hidden)
BGP: 31162 routes, 31162 active

Let’s break it down…

The diagram below breaks the regex down into the key sections and numbers them. At the bottom you can see the actual text we are trying to parse and the numbers above indicate which section of the regex picked up the text we were interested in.

Breaking down the regular expression to extract the interface identifier (inet.x) for your TextFSM Template

The regex for INT (inet.x) was by far the most complicated. See 3 and 4 above. The rest of the line is far simpler and you just need to make sure you have it exactly as it appears in the raw text. Note that the parentheses, which are part of the raw show command text, must also be 'escaped', just like the period.
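If you want to convince yourself that the pattern handles all three interface/table name formats, a quick check with Python's re module (my own sanity-check snippet, not part of the template workflow) looks like this:

import re

# The INT value pattern with the periods escaped, anchored to the trailing colon
pattern = re.compile(r"(([a-z]+\.)?[a-z]+(\d)?\.\d+):")

samples = [
    "inet.0: 762484 destinations, 1079411 routes (762477 active, 0 holddown, 12 hidden)",
    "inet6.0: 66912 destinations, 103194 routes (66897 active, 0 holddown, 30 hidden)",
    "small.inet6.0: 31162 destinations, 31162 routes (31162 active, 0 holddown, 0 hidden)",
]

for line in samples:
    match = pattern.match(line)
    print(match.group(1))   # inet.0, inet6.0, small.inet6.0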

Here is the TextFSM Template so far:

Value Filldown ASN (\d+)
Value Filldown RTRID (\S+)
Value INT (([a-z]+\.)?[a-z]+(\d)?\.\d+)
Value DEST (\d+)
Value Required ROUTES (\d+)
Value ACTIVE (\d+)
Value HOLDDOWN (\d+)
Value HIDDEN (\d+)

Start
 ^Autonomous system number:\s+${ASN}
 ^Router ID:\s+${RTRID}
 ^${INT}:\s+${DEST}\s+destinations,\s+${ROUTES}\s+routes\s+\(${ACTIVE}\s+active,\s+${HOLDDOWN}\s+holddown,\s+${HIDDEN}\s+hidden\) -> Record

…and the resulting structured data:

(txtfsm3) Claudias-iMac:textfsm3 claudia$ python test_textfsm.py junos_show_route_summary.template junos_show_route_summary.txt
TextFSM Results Header:
['ASN', 'RTRID', 'INT', 'DEST', 'ROUTES', 'ACTIVE', 'HOLDDOWN', 'HIDDEN']
['2495', '164.113.193.221', 'inet.0', '762484', '1079411', '762477', '0', '12']
['2495', '164.113.193.221', 'inet.2', '3073', '3073', '3073', '0', '0']
['2495', '164.113.193.221', 'small.inet.0', '116371', '116377', '116371', '0', '0']
['2495', '164.113.193.221', 'inet6.0', '66912', '103194', '66897', '0', '30']
['2495', '164.113.193.221', 'small.inet6.0', '31162', '31162', '31162', '0', '0']

A few things to highlight. I used the 'Filldown' keyword for ASN and RTRID so that each "record" would have that information. The 'Filldown' keyword takes a value that appears once and duplicates it in subsequent records. If nothing else, it identifies the router from which the entry came, but it also simplifies things you might want to do down the line since each "record" has all the data. I also used the 'Required' keyword for ROUTES to get rid of the empty last row that is generated when you use 'Filldown'.

Almost there! We just need to pick up the source routes under each totals line.

Value SOURCE (\w+)
Value SRC_ROUTES (\d+)
Value SRC_ACTIVE (\d+)

Here is what the final (for now anyway) template looks like:

Value Filldown ASN (\d+)
Value Filldown RTRID (\S+)
Value Filldown INT (([a-z]+\.)?[a-z]+(\d)?\.\d+)
Value DEST (\d+)
Value ROUTES (\d+)
Value ACTIVE (\d+)
Value HOLDDOWN (\d+)
Value HIDDEN (\d+)
Value SOURCE (\w+)
Value SRC_ROUTES (\d+)
Value SRC_ACTIVE (\d+)

Start
 ^Autonomous system number:\s+${ASN}
 ^Router ID:\s+${RTRID}
 ^${INT}:\s+${DEST}\s+destinations,\s+${ROUTES}\s+routes\s+\(${ACTIVE}\s+active,\s+${HOLDDOWN}\s+holddown,\s+${HIDDEN}\s+hidden\) -> Record
 ^\s+${SOURCE}:\s+${SRC_ROUTES}\s+routes,\s+${SRC_ACTIVE}\s+active -> Record

A few highlights. Because I wanted to store the source routes in a different value (SRC_ROUTES), I had to remove 'Required' from ROUTES in order to pick up those rows. I now have an extra row at the end, but I can live with that for now. I also added 'Filldown' to INT so that it's clear where the source information came from.

(txtfsm3) Claudias-iMac:textfsm3 claudia$ python test_textfsm.py junos_show_route_summary.template junos_show_route_summary.txt

TextFSM Results Header:
['ASN', 'RTRID', 'INT', 'DEST', 'ROUTES', 'ACTIVE', 'HOLDDOWN', 'HIDDEN', 'SOURCE', 'SRC_ROUTES', 'SRC_ACTIVE']
['2495', '164.113.193.221', 'inet.0', '762484', '1079411', '762477', '0', '12', '', '', '']
['2495', '164.113.193.221', 'inet.0', '', '', '', '', '', 'Direct', '1', '1']
['2495', '164.113.193.221', 'inet.0', '', '', '', '', '', 'Local', '1', '1']
['2495', '164.113.193.221', 'inet.0', '', '', '', '', '', 'BGP', '1079404', '762470']
['2495', '164.113.193.221', 'inet.0', '', '', '', '', '', 'Static', '5', '5']
['2495', '164.113.193.221', 'inet.2', '3073', '3073', '3073', '0', '0', '', '', '']
['2495', '164.113.193.221', 'inet.2', '', '', '', '', '', 'BGP', '3073', '3073']
['2495', '164.113.193.221', 'small.inet.0', '116371', '116377', '116371', '0', '0', '', '', '']
['2495', '164.113.193.221', 'small.inet.0', '', '', '', '', '', 'BGP', '116377', '116371']
['2495', '164.113.193.221', 'inet6.0', '66912', '103194', '66897', '0', '30', '', '', '']
['2495', '164.113.193.221', 'inet6.0', '', '', '', '', '', 'Direct', '3', '3']
['2495', '164.113.193.221', 'inet6.0', '', '', '', '', '', 'Local', '2', '2']
['2495', '164.113.193.221', 'inet6.0', '', '', '', '', '', 'BGP', '103185', '66888']
['2495', '164.113.193.221', 'inet6.0', '', '', '', '', '', 'Static', '4', '4']
['2495', '164.113.193.221', 'small.inet6.0', '31162', '31162', '31162', '0', '0', '', '', '']
['2495', '164.113.193.221', 'small.inet6.0', '', '', '', '', '', 'BGP', '31162', '31162']
['2495', '164.113.193.221', 'small.inet6.0', '', '', '', '', '', '', '', '']

The test_textfsm.py file will save your output into a text file as well as into a CSV file.
I did try using ROUTES for both sections and making it 'Required' again. That got rid of the extra empty row, but it really impacts readability: I would have to keep track of how I used ROUTES, and I would lose the SRC_ROUTES distinction. That is a far greater sin, in my opinion, than an empty row at the end which is clearly just an empty row.
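Finally, if you want to apply the finished template without the helper script, the core TextFSM calls are only a few lines. This is a minimal sketch; the file names simply match the examples above:

import csv
import textfsm

# Load the template and the raw show command output (file names from the examples above)
with open("junos_show_route_summary.template") as template_file:
    fsm = textfsm.TextFSM(template_file)

with open("junos_show_route_summary.txt") as data_file:
    raw_output = data_file.read()

rows = fsm.ParseText(raw_output)   # list of lists, one per record

# Save the structured results to CSV, header row first
with open("output.csv", "w", newline="") as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(fsm.header)
    writer.writerows(rows)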