Using Python and Pandas to look at Pandemic Data

The script and supporting files in this repository are intended to show how the Python Pandas module can be used to analyze data, specifically COVID-19 data.

I am going to recommend three data sets to "investigate": the WHO data, the Johns Hopkins University CSSE data, and the New York Times data.

Background

WHO Data

The repository comes with the WHO data file from 06 April 2020 (WHO-COVID-19-global-data.csv). The simplest run of the script will use this WHO data file.

To download the latest file, go to the WHO Overview Map and download the Map Data from the link on the lower right-hand side.

This CSV file will need cleanup. Remove the spaces from the column titles. Some country names contain spaces (Belize and Palestine, for example), and those spaces have shifted data into the wrong columns, so you will need to recombine the name and shift the data back into the correct columns. Welcome to the world of data.
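
If you want to script part of that cleanup, a minimal sketch in pandas might look like this (the column names and the suspect-row check are assumptions based on the file described here, not code from the repository):

import pandas as pd

# Read the freshly downloaded WHO map data file
who_df = pd.read_csv("WHO-COVID-19-global-data.csv")

# Remove spaces from the column titles
who_df.columns = [col.replace(" ", "") for col in who_df.columns]

# Rows where a country name with spaces shifted the data will have
# non-numeric values in a numeric column, so flag them for manual repair
suspect_rows = who_df[pd.to_numeric(who_df["Deaths"], errors="coerce").isna()]
print(suspect_rows)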


Johns Hopkins University (JHU) Center for Systems Science and Engineering (CSSE) Data

The Johns Hopkins University CSSE data is widely used in the media and either drives or is incorporated into many other data sets.
More importantly for our purposes, this wonderful institution of higher learning makes the raw data available on a public repository (GitHub).

CSSEGISandData on GitHub

I’ve cloned the repository so that it sits as a subdirectory in my pandas_for_pandemic_data folder and I refresh it every day.

# Clones the pandas_for_pandemic_data repository
git clone https://github.com/cldeluna/pandas_for_pandemic_data.git

# Change into the pandas_for_pandemic_data repository
cd pandas_for_pandemic_data

# Clones the Johns Hopkins University CSSE data repository
git clone https://github.com/CSSEGISandData/COVID-19.git

# Refresh the JHU Data
cd COVID-19
git pull
# Example of refreshing the JHU repository
Claudias-iMac:COVID-19 claudia$ git pull
remote: Enumerating objects: 148, done.
remote: Counting objects: 100% (148/148), done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 252 (delta 135), reused 140 (delta 135), pack-reused 104
Receiving objects: 100% (252/252), 1.25 MiB | 6.51 MiB/s, done.
Resolving deltas: 100% (157/157), completed with 14 local objects.
From https://github.com/CSSEGISandData/COVID-19
865c933c..f3dea791 master -> origin/master
513b21a4..493821d3 web-data -> origin/web-data
Updating 865c933c..f3dea791
Fast-forward
csse_covid_19_data/UID_ISO_FIPS_LookUp_Table.csv | 7141 ++++++++++----------
…/csse_covid_19_daily_reports/04-06-2020.csv | 2810 ++++++++
…/time_series_covid19_confirmed_US.csv | 6508 +++++++++---------
…/time_series_covid19_confirmed_global.csv | 527 +-
…/time_series_covid19_deaths_US.csv | 6508 +++++++++---------
…/time_series_covid19_deaths_global.csv | 527 +-
…/time_series_covid19_recovered_global.csv | 499 +-
7 files changed, 13668 insertions(+), 10852 deletions(-)
create mode 100644 csse_covid_19_data/csse_covid_19_daily_reports/04-06-2020.csv
Claudias-iMac:COVID-19 claudia$


Feel free to put it elsewhere in your directory structure. The script sets the default path in the arguments section at the bottom. You can either update the default path directly or use the -d option when you execute the script to point it at the directory holding the daily files.


New York Times Data


The New York Times has also shared their data. This repository only contains data for the US. They share two flavors:

  • US State Level data
  • US County Level data

They do a good job of keeping the data set very clean. It's all numeric, and so far I've not seen any missing data, which is rare for any data set.

== Number of MISSING values in each column:
date 0
state 0
fips 0
cases 0
deaths 0
dtype: int64

New York Times US Data GitHub Repository

I took the same approach with this repository as I did for the JHU data. I’ve cloned the repository so that it sits as a subdirectory in my pandas_for_pandemic_data folder and I refresh it every day.
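
Loading the New York Times state-level file and repeating the missing-value check above takes only a few lines (the covid-19-data path assumes the repository was cloned with its default directory name):

import pandas as pd

# us-states.csv columns: date, state, fips, cases, deaths
df_nyt = pd.read_csv("covid-19-data/us-states.csv")

print(df_nyt.shape)           # rows and columns
print(df_nyt.isnull().sum())  # missing values per column (all zeros so far)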

Script

In general, the script will take in a CSV data file, turn it into a Pandas Data Frame, and execute a set of commands against the data. The main section manages which options to execute and sends the relevant data frame to a function that prints out the various analysis statements for the data frame. In general, this is what will be shown for each data frame.

  • Describe the data (pandas method showing interesting statistical facts about the data)
  • Show the shape of the data frame (number of rows and columns)
  • Show the first and last 5 lines of data
  • List the column headings
  • Show the data type of each column
  • Look for the total number of missing values in each column
  • Sum the columns (only makes sense for columns holding numeric data)

The various options let you control which data set you want to investigate and filter. The output is sent to your screen. These are just some of the actions available to you with Pandas. Once you have the data in a Pandas data frame you can query the data frame for the data that is meaningful to you.
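
For reference, the checks listed above map to a handful of one-line pandas calls. A minimal sketch, assuming df is one of the data frames the script builds:

print(df.describe())              # statistical summary of the numeric columns
print(df.shape)                   # (rows, columns)
print(df.head())                  # first 5 rows
print(df.tail())                  # last 5 rows
print(df.columns.values)          # column headings
print(df.dtypes)                  # default data type of each column
print(df.isnull().sum())          # number of missing values in each column
print(df.sum(numeric_only=True))  # sum of the numeric columns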

Cheatsheet for todays_totals.py script

python todays_totals.py -h
    Display all the options available (Help)

python todays_totals.py
    WHO Data
    Without any options, the script will load the local WHO data file from 6 April into a Pandas Data Frame and run some commands to investigate the data.
    Reminder: If you download a fresh WHO CSV file, please note the cleanup steps I list above so that you can cleanly import the CSV into a Data Frame.

python todays_totals.py -c "MX"
    WHO Data Filtered for a Specific Country
    Note: use the 2 letter country code as an argument with the -c option.

python todays_totals.py -t
    Johns Hopkins University CSSE Data
    The -t option will look for today's daily log file in the JHU CSSE repository (remember to clone the repository).

python todays_totals.py -t -c "Mexico"
    Johns Hopkins University CSSE Data
    The -t -c "country or region" option lets you filter for a country.

python todays_totals.py -t -s "California"
    Johns Hopkins University CSSE Data
    The -t -s "state" option filters the JHU data set for a state or province.

python todays_totals.py -t -f 06037
    Johns Hopkins University CSSE Data
    The -t -f FIPS option filters the JHU data set for a FIPS county code. Note: FIPS code 06037 is for Los Angeles County.

python todays_totals.py -n
    New York Times Data
    US totals only for the full NY Times data set with the -n option (remember to clone the repository).

python todays_totals.py -n -f 6
python todays_totals.py -n -p "California"
    New York Times Data
    This data set has both "state" and "fips", but fips here is the FIPS state code, so in this example 6 is the FIPS state code for California.
    This should get you exactly the same data as python todays_totals.py -n -p "California".
Script CLI Cheat Sheet

The todays_totals.py script will give you an idea of how to load pandemic data into a Pandas data frame and interrogate the data. Executing it with the -h option will give you help on the options.

(pandas) Claudias-iMac:pandas_for_pandemic_data claudia$ python todays_totals.py -h
usage: todays_totals.py [-h] [-d DAILY_REPORTS_FOLDER] [-c COUNTRY_REGION]
[-p PROVINCE_STATE] [-s SPECIFIC_DAY] [-f FIPS] [-w]
[-t] [-n]
Script Description
optional arguments:
-h, --help show this help message and exit
-d DAILY_REPORTS_FOLDER, --daily_reports_folder DAILY_REPORTS_FOLDER
Set path to CSSE Dailty Report folder
csse_covid_19_daily_reports. Default is ./COVID-19/css
e_covid_19_data/csse_covid_19_daily_reports
-c COUNTRY_REGION, --country_region COUNTRY_REGION
Filer on 2 letter Country Region. Example: "US"
-p PROVINCE_STATE, --province_state PROVINCE_STATE
Filer on Province State. Example: "California"
-s SPECIFIC_DAY, --specific_day SPECIFIC_DAY
File for specific day. Example: 04-01-2020
-f FIPS, --fips FIPS FIPS County Code Example: 06037 (Los Angeles County)
-w, --who_data_file Analyze the WHO data file provided
-t, --today_csse Analyze todays file in the CSSE repo
-n, --new_york_times Analyze the New York Times Data
Usage: 'python todays_totals'

Running the script without any parameters yields some information on the WHO data set from 6 April, which is part of the repository. This information includes:

  • A description of the data frame, including some statistical data on the numeric values
  • The shape of the data frame (rows and columns)
  • A sample of the first and last 5 rows
  • The column headings and the default data type of each column
  • The number of missing values in each column
  • The sums of the numeric columns

Example output for WHO Data:

(pandas) Claudias-iMac:pandas_for_pandemic_data claudia$ python todays_totals.py
==================== DATA FRAME CHECK ====================
==================== WHO Data Frame from WHO-COVID-19-global-data.csv ====================

== Describe the Data Frame:
Deaths CumulativeDeaths Confirmed CumulativeConfirmed
count 6786.000000 6786.000000 6786.000000 6786.000000
mean 9.971412 104.946360 178.487179 2313.689213
std 73.268455 761.569677 1183.935476 12918.006118
min 0.000000 0.000000 0.000000 1.000000
25% 0.000000 0.000000 0.000000 4.000000
50% 0.000000 0.000000 2.000000 26.000000
75% 0.000000 3.000000 25.000000 235.000000
max 2003.000000 15889.000000 33510.000000 307318.000000

== Shape of the Data Frame:
(6786, 8)

== SAMPLE (first and last 5 rows):
day Country CountryName Region Deaths CumulativeDeaths Confirmed CumulativeConfirmed
0 2/25/20 AF Afghanistan EMRO 0 0 1 1
1 2/26/20 AF Afghanistan EMRO 0 0 0 1
2 2/27/20 AF Afghanistan EMRO 0 0 0 1
3 2/28/20 AF Afghanistan EMRO 0 0 0 1
4 2/29/20 AF Afghanistan EMRO 0 0 0 1
day Country CountryName Region Deaths CumulativeDeaths Confirmed CumulativeConfirmed
6781 4/2/20 ZW Zimbabwe AFRO 0 1 0 8
6782 4/3/20 ZW Zimbabwe AFRO 0 1 0 8
6783 4/4/20 ZW Zimbabwe AFRO 0 1 1 9
6784 4/5/20 ZW Zimbabwe AFRO 0 1 0 9
6785 4/6/20 ZW Zimbabwe AFRO 0 1 0 9

== Column Headings of the data set:
['day' 'Country' 'CountryName' 'Region' 'Deaths' 'CumulativeDeaths'
'Confirmed' 'CumulativeConfirmed']

== Column default data type:
day object
Country object
CountryName object
Region object
Deaths int64
CumulativeDeaths int64
Confirmed int64
CumulativeConfirmed int64
dtype: object

== Number of MISSING values in each column:
day 0
Country 85
CountryName 0
Region 62
Deaths 0
CumulativeDeaths 0
Confirmed 0
CumulativeConfirmed 0
dtype: int64

== Sum just the numeric columns in the Data Frame:
Deaths 67666
CumulativeDeaths 712166
Confirmed 1211214
CumulativeConfirmed 15700695
dtype: int64

(pandas) Claudias-iMac:pandas_for_pandemic_data claudia$

FuzzyWuzzy was a Python Module

An example of using the fuzzywuzzy Python module to match data sets with similar but not exact data – fuzzy matches!

I was recently given a list of locations that I had to analyze.

For the analysis, I needed data that was not in the original list (let's call that the source list). Luckily, I had a larger data set (let's call that the detail list) which did have all the additional details I needed.

The source list with locations was a subset of the detail list. Even better, both lists had an “Address” column so I figured it would be a simple matter of looking up each location in my source list in the larger detail list and picking out the additional data that I needed.

…or so I thought…

The Same Column but NOT the Same Data

As I drilled down into the actual data, what I saw immediately derailed my original plan to use the “Address” column in both data sets as the key to match. For that to work, the keys would need to be the same or nearly so but I saw drastically different data. I might have been able to account for spaces and capitalization differences but never something like this:

Value From Source List Address | Value From Detailed List Address
Fort Irwin Military Base       | 93 Goldstone Rd (Ft. Irwin)

As ever, Google to the rescue: enter the fuzzywuzzy Python module.

Let’s take a closer look at the data.

I’ve used a list of NASA’s Deep Space Network Complexes to simulate the data and issues I had. My actual source list had hundreds of locations and the details list had thousands.

Numbers correspond to the image below.

  1. This is the original source list containing the list of locations to be analyzed.
  2. I happened to have the additional detail file which had all the additional information I needed for the locations listed in the original source list.
  3. At first look it seemed a perfect match as both files had an “Address” column I could use as a “key” to get the additional information for the locations in my original source list. However, on closer inspection you can see that the addresses for the same location were quite different. Even the “City” column had differences.
  4. You can also see that the source list was missing some State data.
  5. Finally, for each location in my source list I needed ZIP, complete state data, 2 Letter Country code, Directions, and the URL for each location.
  6. What I needed was all of the data in #5 extracted from the much larger detail file and combined with the locations from my original list.
Data Files
  • 6 & 7 For those that like to flip to the end, the script we will discuss here provided the last file combining the source list of locations with the detail data for each location from the larger detail file.

The Main section of the script

I’ve put the sample data sets and the script here on GitHub.

The main body of the script is very simple thanks to Pandas (another wonderful python module). It’s really only 8 lines of code and 4 steps.

  • Load the data sets from the Excel files into Pandas Data Frames
# Create a Data Frame from the Source Excel File
df_src = df_from_excel(arguments.source_file)
# Create a Data Frame from the Additional Details Excel File
df_det = df_from_excel(arguments.detail_file)
  • Add a new column to the source data set, "Full_Address", which contains the address information from the detailed file. This allows us to use the Pandas merge function to merge the two data sets in one line in exactly the way we want. This one statement calls the function that performs the fuzzy lookups.
# Add a new column "Full_Address" to the Source data frame which holds the "Address" information from the additional details data frame
# This new column will be used in the Pandas merge
df_src['Full_Address'] = df_src['Address'].apply(get_series_match, args=[df_det['Address']])
  • Merge the two Data Frames into one data set with all the information needed
df_merged = pd.merge(df_src, df_det, left_on="Full_Address", right_on="Address", how="left", suffixes=('_src', '_det'))
Annotated merge command
  • Save the merged Data Frame as an Excel file and as a JSON file.
all_data_fn = "DSN_Complex_Lists_COMBINED"
​
df_merged.to_excel(f"{all_data_fn}.xlsx")
df_merged.to_json(f"{all_data_fn}.json", orient="records")

That is an overview of the entire process.


The get_series_match Function

The real magic is in the get_series_match function that is used to build the new “Full_Address” column in the original data set, df_src.

Let’s take a closer look at what that function is doing.

First, let's remember how it's called:

df_src['Full_Address'] = df_src['Address'].apply(get_series_match, args=[df_det['Address']])

I will be the first to admit this is in no way elegant or “pythonic” but I’m not a developer. I’m a network engineer. It works!

In this one statement, thanks to the power of Pandas, we can iterate over the values in the Address column of the source data frame df_src[‘Address’], sending each to the get_series_match function to search for a match in the detail data frame Address column.

To the left of the equal sign, we tell the df_src data frame that we want to add a new column, “Full_Address”. We build the contents of that new column by passing the “Address” value from the source data frame, df_src[‘Address’], and using Pandas “apply” method to build the new value of each row using the get_series_match function (and we pass to the function the column “Address” data from the details data frame, df_det[‘Address’]).

get_series_match function

In line 38 we see the function defined with the two values it expects to be passed when called. The variable row_val has the "cell" value from the source data frame "Address" column.

Line 44 sets an empty variable to hold the match value (the address from the detail data frame).

The args variable is defined as a list holding the column values from the detail data frame "Address" column so that we can iterate through that list looking for the specific row_val. Specifically, the row_val passed to the function will be compared against this list of addresses from the detail data frame, df_det['Address']:

                                            Address  
0                        93 Goldstone Rd (Ft. Irwin)  
1       421 Discovery Drive, Paddy's River District    
2  Ctra. M-531 Robledo de Chavela a Colmenar del ...  
3                               4800 Oak Grove Drive  

Line 48 starts the for loop, which iterates over each value in the df_det['Address'] data (stored as a python list in the variable args) looking for the closest match to row_val.

Line 51 attempts an exact match. That is, if the src “Address” value matches the detail “Address” value (case insensitive), then we set match_value to the address from the detail data frame and we break out of the loop. We have found what we were looking for.

Lines 58 – 60 are executed if we don’t find an exact match. In reality, to make this shorter I would remove the exact match test (lines 51 – 56) but I left it in as an example of what I tried to do initially.

These 3 variables hold a ratio or "score" of how closely the two address values matched given the different algorithms available in the fuzzywuzzy module. Each algorithm has a sweet spot. I try all three and see which gets me closest to what I need. In this case, with this data, token_sort_ratio worked best (it gave me the consistently highest score).

Value From Source List Address | Value From Detailed List Address                             | Ratio | Partial Ratio | Token Sort Ratio
Fort Irwin Military Base       | 93 Goldstone Rd (Ft. Irwin)                                  | 35    | 38            | 50
Fort Irwin Military Base       | 421 Discovery Drive, Paddy's River District                  | 32    | 38            | 33
Fort Irwin Military Base       | Ctra. M-531 Robledo de Chavela a Colmenar del Arroyo, Km 7.1 | 24    | 33            | 24
Fort Irwin Military Base       | 4800 Oak Grove Drive                                         | 27    | 25            | 27

The get_series_match function returns match_value = “93 Goldstone Rd (Ft. Irwin)” in this example.

Script output with print enabled (stdout_details = True):

>>>>>>>>>>>>> Comparing row value Fort Irwin Military Base with additional detail series value 93 Goldstone Rd (Ft. Irwin)
​
                Fuzzy Ratio:    35
                Fuzzy Partial Ratio:    38
                Fuzzy Token Sort Ratio:         50 of type <class 'int'>
​
​
        Fuzzy Match Found for row: 
                Fort Irwin Military Base 
        with series value:
                93 Goldstone Rd (Ft. Irwin) 
        with Fuzzy Token Sort Ratio of 50!
>>>>>>>>>>>>> Comparing row value Fort Irwin Military Base with additional detail series value 421 Discovery Drive, Paddy's River District 
​
                Fuzzy Ratio:    32
                Fuzzy Partial Ratio:    38
                Fuzzy Token Sort Ratio:         33 of type <class 'int'>
​
>>>>>>>>>>>>> Comparing row value Fort Irwin Military Base with additional detail series value Ctra. M-531 Robledo de Chavela a Colmenar del Arroyo, Km 7.1
​
                Fuzzy Ratio:    24
                Fuzzy Partial Ratio:    33
                Fuzzy Token Sort Ratio:         24 of type <class 'int'>
​
>>>>>>>>>>>>> Comparing row value Fort Irwin Military Base with additional detail series value 4800 Oak Grove Drive
​
                Fuzzy Ratio:    27
                Fuzzy Partial Ratio:    25
                Fuzzy Token Sort Ratio:         27 of type <class 'int'>
​
match_value is 93 Goldstone Rd (Ft. Irwin)

Line 66 performs the critical test. I knew from looking at all the ratio values that token_sort_ratio was what I wanted to use in my logic test, and by looking at the ratios or scores I knew that a score of 50 or better got me a good match. The rest is just like the exact match test: I set the match_value and break out of the loop.

Line 78 returns the match_value back to the calling statement, which inserts that value into the new column in the appropriate row. Note that in line 74, if match_value never got set in the for loop (the search), that means a match could not be made, so it gets set to "No match found".
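
Putting the walkthrough together, here is a minimal sketch of the logic in get_series_match. The line numbers above refer to the actual script in the repository; this sketch only mirrors the behavior described, with the threshold of 50 noted above:

from fuzzywuzzy import fuzz

def get_series_match(row_val, detail_addresses):
    """Return the detail data frame address that best matches row_val."""
    match_value = ""
    args = list(detail_addresses)

    for detail_val in args:
        # Exact (case-insensitive) match first
        if str(row_val).strip().lower() == str(detail_val).strip().lower():
            match_value = detail_val
            break

        # Scores from the three fuzzywuzzy algorithms
        # (ratio and partial_ratio are computed only to compare the algorithms)
        fuzzy_ratio = fuzz.ratio(row_val, detail_val)
        fuzzy_partial_ratio = fuzz.partial_ratio(row_val, detail_val)
        fuzzy_token_sort_ratio = fuzz.token_sort_ratio(row_val, detail_val)

        # For this data set, a token sort ratio of 50 or better is a good match
        if fuzzy_token_sort_ratio >= 50:
            match_value = detail_val
            break

    if not match_value:
        match_value = "No match found"

    return match_value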

Video Overview ~12 minutes

Helpful Links

How much network automation stuff should I learn as a network engineer?

It is important to note that the question is not "Should I learn any?" but rather "How much should I learn?" The new Cisco DevNet Certifications help us answer that question. Let me share my journey to that conclusion.

In early February I decided to take the DevNet Associates Exam. I scheduled it for the first slot available once it went live on February 24th, 2020.

I had a study plan

  • Vacation & study
  • Back for a week & mostly study
  • Day before test nothing but study

The study plan execution was somewhat different

  • Study during vacation was contingent on downloading some content before I got to my destination where there would be no Internet access. For various reasons, that download didn’t happen and neither did any studying.
  • Some unforeseen events took place the week I came back so no studying except for listening to my CBT Nuggets course while I was driving from San Francisco to Los Angeles.
  • The day before the test I was still in Los Angeles and the only studying was more listening to my CBT Nuggets course at 1.6x speed as I headed North. I was too tired to make it all the way home so I had to go through San Jose traffic to make my 9:30AM test (listening to my CBT Nuggets course at 2x speed).

Somehow, despite all of that, I managed to pass the DevNet Associates exam and be one of the first 500 people to pass a DevNet exam. Let me explain why.

My true study plan

  • Working with network automation since 2014
  • The CBT Nuggets course

First, let me give kudos to the CBT Nuggets team. They had one of the few courses available for the Cisco Certified DevNet Associate 200-901 DEVASC exam (that I could find, anyway). I recommend this course even if you are not studying for the exam. It was a terrific refresher; I learned some new things and filled in some gaps I didn't realize I had!

I also purchased the DevNet Associate Fundamentals course offered on developer.cisco.com but didn't have a chance to go through it for all the reasons above; it's also predominantly text-based, so I could not even listen to the content while on the road.

How in the world did I pass?

I came away from the experience with two conclusions.

  • The test content accurately reflected useful skills one obtains by doing network automation
  • The test and study material are really focused on “teaching you to fish” (from the saying “if you give a hungry man a fish, you feed him for a day, but if you teach him how to fish, you feed him for a lifetime“)

Skills I had going into the test from real-world experience and training

The only reason I passed was experience in networking and in network automation. Specifically, experience and familiarity with:

  • Git
  • Serialization protocols (YAML, JSON, XML)
  • A network engineer's knowledge of Python
  • APIs (REST in particular)
    • how to interact with them and how to read their documentation
  • RESTCONF, NETCONF, and YANG
  • Ansible and how to read Ansible module documentation
  • A basic knowledge of Linux and Bash
  • Docker basics
  • Basic networking

These are all skills I picked up at some level over the last 5 years in the course of automating tasks and workflows for my projects.

Given everything that transpired, I fully expected to fail. Without any of the formal training and material I planned to study, I figured I would get killed in the “Answer the Cisco Way” category alone. Even though I’ve worked with all of the topics listed above in one form or fashion I was sure there would be some special Cisco spin that I would get wrong.

I was pleasantly surprised and wrong. Cisco did a very good job with this test. It’s not a test of the “Cisco Way” or a test of your ability to memorize but a genuine attempt at testing your skill in this area. If you are familiar with the topics on the exam blueprint, know how to use documentation, and have hands on experience with the topics you will be successful.

If you think about it, the only reason I passed was because I have experience actually automating and the topics were ones highly relevant to day to day network automation. The CBT Nuggets course refreshed the stuff I knew and filled in some gaps for technologies I had not worked with directly BUT I could figure out how to work with them because I knew how to use the documentation and I had experience with APIs in general.

The exam looks to test your real world skills and knowledge. It tests your ability to solve a problem with a set of tools and information. These skills will serve you well regardless of vendor.

Use the exam blueprint

I don’t advise my study path but do advise getting started on a path that will get you some of these basic skills. You are going to need them.

You now have a blueprint for getting started from a credible and widely recognized source.

I believe every network engineer has been (or should be) asking (struggling with) the following question:

How much of this automation stuff should I know to be successful as a network engineer in this new era of network programmability and automation?

I think this DevNet Associates exam helps to quantify that answer. With this base skill set you will know if you want to move further into programmability & automation (i.e., create automation) or if you only want to consume and execute automation. These are the basic skills you will need to do that.

The skills you need (regardless of vendor)

If you have not already, it’s time to get going!

  1. Know Git
  2. Know your serialization protocols (JSON, YAML, and XML)
  3. Have a network engineer's knowledge of Python*
    • Sign up for Kirk Byers free Python for Network Engineers course
    • Sign up for Kirk Byers Network Automation for Network Engineers course (cost)
    • INE has some excellent Python for Network Engineers courses (cost)
  4. Know APIs (REST in particular)
    • CBT Nuggets DEVASC course has some excellent content for this topic
    • DevNet
  5. Be familiar with RESTCONF, NETCONF, and YANG
    • CBT Nuggets DEVASC course has some excellent content for this
    • DevNet
  6. Be familiar with Ansible and how to read Ansible module documentation
  7. Have basic knowledge of Linux and Bash
    1. David Bombal has a great course
    2. INE also has some excellent material – Linux Fundamentals for Network Engineers
    3. Udemy, Pluralsight, and CBT Nuggets also have good content for this
  8. Know Docker basics (concepts, images vs. container, Dockerfiles, basic usage)
  9. Know basic networking
    • I figure if you are reading this you’ve got this one covered!

* What is a "network engineer's knowledge of Python", you ask? Here is how I answer that question:
Basic python (language, object types, program flow control, using modules) and some familiarity with the common modules used for network programmability.

  • requests
  • ncclient
  • netmiko
  • json
  • xmltodict
  • PyYaml

Configuration Creation with Nornir

I tend to assess automation tools in four different contexts, which together form a very general networking and automation workflow:

  • Discovery
    • How easy is it to find out about the network, document its configuration (the configuration of a device itself) and state (show commands “snapshotting” its state)?
  • Configuration Creation
    • How easy is it to generate configurations for a device, given a template?
  • Configuration Application
    • How easy is it to apply a set of configuration commands to one or more devices based on criteria?
  • Verification & Testing
    • How easy is it to audit and validate configurations and script results?
    • Yes, unglamorous though this may be it is a vital function for any network engineer and I would say doubly so with automation.

We looked at step 1, discovery, in the introduction to Nornir. This post covers step 2 in that workflow as I become more familiar with Nornir.

In my initial notes about Nornir I mentioned a few areas where Nornir really seemed to shine. Since then, I've had occasion to truly appreciate its native Python roots. I recently worked with a client where I was not able to install Ansible in their environment, but they had no issues with Python. Nornir saves the day!

In keeping with my “assessment methodology” (trust me that sounds far more rigorous than it is) my first use of Nornir (then called Brigade) involved using Napalm get_facts against a couple of network devices and then decomposing the returned data (figuring out what it is and how to get to the data). In this way, I was easily able to discover key facts about all the network devices in my inventory and return them as structured data.

Why do we talk about "structured data" so often? It's our way of saying you don't have to parse the ever-changing stream of data you get back from some network devices. Perhaps it is more accurate to say that someone has already parsed the data for us and is returning it in a nice data structure that we can easily manipulate (a list, a dictionary, or more commonly a combination of both). For today's task we are going to parse the unstructured data we get back from each device ourselves, so we can truly appreciate all the heavy lifting that tools like Napalm do for us.

It drove me crazy that the first thing everyone always taught in Ansible was how to generate configs because that is not what I found powerful about it. For quite a while all anyone (in the networking community at least) ever learned to do with Ansible was generate configs! So I was pleased that most of the early Nornir examples started with what I call “discovery”. However, now it is time to look at configuration creation with Nornir.

Let’s get started.

I have a simple task I want to accomplish. I need to evaluate all of my switches and remove any vlans that are not in use. I also want to make sure I have a set of standard vlans configured on each switch:

  • 10 for Data
  • 100 for Voice
  • 300 for Digital Signage
  • 666 for User Static IP Devices

We will take this one step at a time. 

First we will query our devices for the data that we need to help us decide what vlans to keep and what vlans to remove. 

We will then take that data and generate a customized configuration “snippet” or set of commands that we can later apply to each device to achieve our desired result.  

Each switch will only have vlans that are in use and a standard set of vlans supporting voice, data, digital signage, and user devices with static IPs. As an added wrinkle, I have some devices that are only accessible via Telnet (you would be surprised at how often I find this to be the case.)

I’m not doing too much here with idempotency or “declarative” networking but I find that I understand things a bit better if I look at it from the current “classical” perspective and then look at how these tools can help leapfrog me into a much more efficient way of doing things.

This little automation exercise starts to bring in a variety of tools which will help us accomplish our task.

  • Nornir, and python of course, provide our ready made inventory, connection, and task environment. With Nornir in place I can now perform tasks on any subset of my inventory. We won’t cover filtering here but know it is an available feature.
  • Napalm easily accessible to us via Nornir provides the connectivity method (napalm_cli) that allows us to query, real time, our devices and obtain the data we need to achieve our “vlan cleanup & standardization” task.
  • TextFSM and NetworkToCode parsing templates allow us to extract our data as structured data so we can easily manipulate it and apply logic. We need this because napalm does not yet make a "get vlans" getter available, so we have to do this ourselves.
    • Note that for some devices we might be able to use the interface getter for this data but I think there is great value in knowing how to do this ourselves should the need arise. We can’t have Mr. Barroso and team do all of our work for us!
  • Jinja2 also available to us via the Nornir framework will allow us to generate the customized configuration snippets.

Let's see where they come into play in our task breakdown.

Query devices for data

  • Environment set up via Nornir
  • Connectivity method used via a standard Nornir task using Napalm CLI (see the sketch below)
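
A rough sketch of that query step, assuming Nornir 2.x-style imports (the actual task in the nornir-config repository may be organized differently):

from nornir import InitNornir
from nornir.plugins.tasks.networking import napalm_cli

# Inventory, credentials, and connection options come from the Nornir config file
nr = InitNornir(config_file="config.yaml")

# Run "show vlan" on every device in the inventory via the Napalm CLI method
results = nr.run(task=napalm_cli, commands=["show vlan"])

for host, multi_result in results.items():
    show_vlan_output = multi_result[0].result["show vlan"]
    print(f"--- {host} ---")
    print(show_vlan_output)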

Analysis

Analyze the data retrieved from each device and determine a vlan “disposition” (keeping or removing)

  • Taking the data provided by the Napalm CLI (and the “show vlan” command we sent) we were able to quickly parse it via TextFSM and an existing TextFSM Template from the NetworkToCode TextFSM Template Repository
  • With our data now in a python data structure we were able to apply our “business rules logic” to get an understanding of the changes required in each device.
== Parsing vlan output for device arctic-as01 using TextFSM and NetworkToCode template.

================================================================================

VLAN_ID  NAME             STATUS  TOTAL_INT_IN_VLAN  ACTION
1        default          active  7                  Keeping this vlan
10       Management_Vlan  active  1                  Keeping this vlan
20       Web_Tier         active  0                  This vlan will be removed!
30       App_Tier         active  0                  This vlan will be removed!
40       DB_Tier          active  0                  This vlan will be removed!

================================================================================
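
A sketch of that parse-and-decide step, assuming the NetworkToCode cisco_ios_show_vlan TextFSM template; the template path and column order shown here are assumptions, so check the template's Value definitions:

import textfsm

STANDARD_VLANS = {"1", "10", "100", "300", "666"}

# show_vlan.txt holds the raw "show vlan" text gathered in the query step above
with open("show_vlan.txt") as output_file:
    show_vlan_output = output_file.read()

# Template path is illustrative; use wherever your ntc-templates clone lives
with open("ntc-templates/templates/cisco_ios_show_vlan.textfsm") as template_file:
    parsed_vlans = textfsm.TextFSM(template_file).ParseText(show_vlan_output)

for row in parsed_vlans:
    # Assumed column order from the template: VLAN_ID, NAME, STATUS, INTERFACES
    vlan_id, name, status, interfaces = row[0], row[1], row[2], row[3]
    if len(interfaces) > 0 or vlan_id in STANDARD_VLANS:
        action = "Keeping this vlan"
    else:
        action = "This vlan will be removed!"
    print(vlan_id, name, status, len(interfaces), action)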

Configuration Creation

Generate a customized set of configuration commands to achieve our desired vlan state

  • Using the built in Nornir task that allows us to generate configurations based on Jinja2, we used the logic above to generate specific configuration commands for each device that, when applied, will achieve our desired state.

Here is the resulting configuration snippet for this particular device.

! For device arctic-as01
!
no vlan 20
no vlan 30
no vlan 40

vlan 10
 name Data_Vlan

vlan 100
 name Voice_Vlan

vlan 300
 name Digital_Signage

vlan 666
 name User_Static_IP_Devices
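
For reference, the bare Jinja2 equivalent of that templating step looks roughly like this (in the script itself this is wrapped in a Nornir templating task; the template text and variable names here are illustrative):

from jinja2 import Template

VLAN_TEMPLATE = """! For device {{ host }}
!
{% for vlan in vlans_to_remove %}no vlan {{ vlan }}
{% endfor %}
{% for vlan_id, vlan_name in standard_vlans.items() %}vlan {{ vlan_id }}
 name {{ vlan_name }}
{% endfor %}"""

standard_vlans = {
    10: "Data_Vlan",
    100: "Voice_Vlan",
    300: "Digital_Signage",
    666: "User_Static_IP_Devices",
}

print(Template(VLAN_TEMPLATE).render(
    host="arctic-as01",
    vlans_to_remove=[20, 30, 40],
    standard_vlans=standard_vlans,
))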

As you can see, Nornir is bringing together the tools and functions that we use for day-to-day network automation tasks under one native Python wrapper. Where we need some customization or a tool does not exist, it is a simple matter of using Python to bridge that gap!

Let's add a little more functionality to our original repository and try to gain a better understanding of Nornir.

Please see my nornir-config GitHub repository for the details!

Handy Links:

Nornir Documentation

Cisco Blog – Developer Exploring Nornir, the Python Automation Framework

A quick example of using TextFSM to parse data from Cisco show commands

Nornir – A New Network Automation Framework

nornir (formerly brigade) – A new network automation framework

Before getting started, let me say that I'm a big fan of Ansible. It is one of my go-to automation frameworks. Having said that, there have been use cases where I've run into some of the limitations of Ansible and, to be fair, some of those limitations may have been my own.

By limitations, I don't mean I could not do what I wanted, but rather that doing what I wanted got a bit more complex than perhaps it should have been. When I go back in 6 months, I'll have no idea how I got it to work. These use cases often involve more complex logic than what Ansible handles with its "simpler" constructs and Domain Specific Language (DSL).

So I was very intrigued to hear that the following automation heavyweights have been working on a new automation framework, Nornir:

  • David Barroso (think NAPALM – the python library not the sticky flammable stuff)
  • Kirk Byers (think Netmiko, Netmiko tools, and teacher extraordinaire – I can say that with full confidence as I’m pretty sure I’ve taken every one of his courses – some twice!)
  • Patrick Ogenstad (think NetworkLore)

Nornir Documentation

As an engineer, one of my favorite questions is "What problem are we trying to solve?" and here is my answer to that question when applied to Nornir.

Simpler, more complex logic

An oxymoron?  Certainly. Read on and I will try to explain.

By using pure Python, this framework solves the complex logic frustrations you may ultimately encounter with Ansible. If you can do it with python, or have done it with python, you are good to go. That's not to say you can't take your python script and turn it into an Ansible module, but Nornir may save you that step.

Domain specific languages can be both a blessing and a curse. They can allow you the illusion that you are not programming and so facilitate getting you started if "programming" is not your cup of tea. They can help you get productive very quickly, but eventually you may hit a tipping point where the cost of doing what you need to do with the tools and features in the DSL is too high in terms of complexity and supportability. Nornir "simplifies" that complex logic by allowing you access to all the tools you have in native python. As a side, and not insignificant, benefit, you might actually remember what your code does when you get back to it in 6 months.

Native on all platforms

Many companies only provide Windows laptops and so I’ve always tried to be very mindful of that when developing solutions. 

Scenario: I've got all of these Ansible playbooks that we can use to standardize network discovery, build configurations, and apply configurations, but most of my colleagues have Windows laptops. While I was able to develop and run these on my Mac, where I can easily run an Ansible control server, we now need an Ansible control server on a bunch of Windows laptops (something Ansible does not natively support).

There are certainly solutions for this (see Using Docker as an Ansible and Python platform for Network Engineers), but that's an extra step. There may be other reasons for taking that step, but Nornir is a pip-installable module, so you don't need to.

I spent a Sunday afternoon dabbling in Nornir and it was well worth the time. It took me about 45 minutes to get things set up on my Windows system and run the example Patrick Ogenstad included in his post. While Nornir is said to support Python 2.7 (Python 3.6 is recommended), I did have installation issues even with the latest pip installed; that was a significant part of the 45 minutes. Once I set up a Python 3 virtual environment, it worked flawlessly. You can see my work in this GitHub repository.

This is an exciting new framework with a great deal of promise which we can add to our automation arsenal!

Over the next few weeks (or months ) I’ll continue to familiarize myself with Nornir and report back.

This was originally published May 7, 2018 on LinkedIn but has been updated to support the latest Nornir, and the scripts have been renamed so that there is no confusion between brigade and nornir. But let me say that this renaming is tied with the renaming of Cisco's Spark platform to Webex Teams as the worst ever, or at least of the last decade.

Original Brigade GitHub repository

Part 2 of this Series – Configuration Creation with Nornir

Pandas for Network Engineers (Who doesn't love Pandas?)

The module not the mammal!

My original title for this article was going to be *Decomposing Pandas* as a follow on to *Decomposing Data Structures* but I was advised against that name. Go figure.

One of the things I love most about Python is that it's always waiting for me to get just a little bit better so it can show me a slightly smarter way to do something. Pandas is the latest such example.

Pandas is a powerful data science Python library that excels at manipulating multidimensional data.

Why is this even remotely interesting to me as a network engineer?

Well, that's what Excel does, right?

I spend more time than I care to admit processing data in Excel. I find that Excel is always the lowest common denominator. I understand why, and often I'm a culprit myself, but eventually one grows weary of all the data being in a spreadsheet and having to manipulate it. I'm working on the former, and Pandas is helping with the latter.

Google around enough for help on processing spreadsheets and you will come across references to the Pandas Python module.

If you are anything like me, you go through some or all of these stages:

  • You dismiss it as irrelevant to what you are trying to do
  • You dismiss it because it seems to be about big data, analytics, and scientific analysis of data (not your thing, right?)
  • As you continue to struggle with what got you here in the first place (there has got to be a better way to deal with this spreadsheet data) you reconsider. So you try to do some processing in Pandas and pull a mental muscle…and what the heck is this NaN thing that keeps making my program crash? Basically, you find yourself way way out of your comfort zone (well..I did)!
  • You determine that your limited Python skills are not up to something quite this complex…after all, you know just enough Python to do the automation stuff you need to do and you are not a data scientist.

Finally, in a fit of desperation as you see all the Excel files you have to process, you decide that a python module is not going to get the better of you and you give it another go!

So here I am, on the other side of that brain sprain, and better for it, as is usually the case.

What is possible with Pandas…

Once you get the hang of it, manipulating spreadsheet-like data sets becomes so much simpler with Pandas. In fact, that's true for any data set, not just ones from spreadsheets. Indeed, in the examples below, the data set comes from parsing show commands with TextFSM.

Knowing how to work with Pandas, even in a limited fashion as is the case with me, is going to be a handy skill to have for any Network Engineer who is (or is trying to become) conversant in programmability & automation.

My goal here is not to teach you Pandas, as there is quite a lot of excellent material out there to do that. I've highlighted the content which helped me the most in the "Study Guide" section at the end.

My goal is to share what I’ve been able to do with it as a Network Engineer, what I found most useful as I tried to wrap my head around it, and my own REPL work.

Let's look at something simple. I need to get the ARP table from a device and "interrogate" the data.

In this example, I have a text file with the output of the “show ip arp” command which I’ve parsed with TextFSM.

Here is the raw data returned from the TextFSM parsing script:

# Executing textfsm strainer function only to get data
strained, strainer = basic_textfsm.textfsm_strainer(template_file, output_file, debug=False)

In [1]: strained                                                                                                                                                                                                            
Out[1]:
[['Internet', '10.1.10.1', '5', '28c6.8ee1.659b', 'ARPA', 'Vlan1'],
['Internet', '10.1.10.11', '4', '6400.6a64.f5ca', 'ARPA', 'Vlan1'],
['Internet', '10.1.10.10', '172', '0018.7149.5160', 'ARPA', 'Vlan1'],
['Internet', '10.1.10.21', '0', 'a860.b603.421c', 'ARPA', 'Vlan1'],
['Internet', '10.1.10.37', '18', 'a4c3.f047.4528', 'ARPA', 'Vlan1'],
['Internet', '10.10.101.1', '-', '0018.b9b5.93c2', 'ARPA', 'Vlan101'],
['Internet', '10.10.100.1', '-', '0018.b9b5.93c1', 'ARPA', 'Vlan100'],
['Internet', '10.1.10.102', '-', '0018.b9b5.93c0', 'ARPA', 'Vlan1'],
['Internet', '71.103.129.220', '4', '28c6.8ee1.6599', 'ARPA', 'Vlan1'],
['Internet', '10.1.10.170', '0', '000c.294f.a20b', 'ARPA', 'Vlan1'],
['Internet', '10.1.10.181', '0', '000c.298c.d663', 'ARPA', 'Vlan1']]

Note: don't read anything into the variable name strained. The function I use to parse the data is called textfsm_strainer because I "strain" the data through TextFSM to get structured data out of it, so I put the resulting parsed data from that function into a variable called strained.

Here is that data in a Pandas Data Frame:

# strained is the parsed data from my TextFSM function and the first command below
# loads that parsed data into a Pandas Data Frame called "df"
​
In [1]: df = pd.DataFrame(strained, columns=strainer.header)                                                                                                                                                                                                           
In [2]: df                                                                                                                                                                                                                                                      
Out[2]: 
​
    PROTOCOL         ADDRESS  AGE             MAC  TYPE INTERFACE
0   Internet       10.1.10.1    5  28c6.8ee1.659b  ARPA     Vlan1
1   Internet      10.1.10.11    4  6400.6a64.f5ca  ARPA     Vlan1
2   Internet      10.1.10.10  172  0018.7149.5160  ARPA     Vlan1
3   Internet      10.1.10.21    0  a860.b603.421c  ARPA     Vlan1
4   Internet      10.1.10.37   18  a4c3.f047.4528  ARPA     Vlan1
5   Internet     10.10.101.1    -  0018.b9b5.93c2  ARPA   Vlan101
6   Internet     10.10.100.1    -  0018.b9b5.93c1  ARPA   Vlan100
7   Internet     10.1.10.102    -  0018.b9b5.93c0  ARPA     Vlan1
8   Internet  71.103.129.220    4  28c6.8ee1.6599  ARPA     Vlan1
9   Internet     10.1.10.170    0  000c.294f.a20b  ARPA     Vlan1
10  Internet     10.1.10.181    0  000c.298c.d663  ARPA     Vlan1

I now have a spreadsheet like data structure with columns and rows that I can query and manipulate.


My first question:

What are all the IPs in Vlan1?

Just Python

Before Pandas, I would initialize an empty list to hold the one or more IPs, and then I would iterate through the data structure (strained in this example); where the interface "column" value (index 5 in each list within strained) was equal to 'Vlan1', I appended that IP (index 1 in each item) to the list.

# Using Python Only
print("\n\tUsing Python only..")
vlan1ips = []
for line in strained:
    if line[5] == 'Vlan1':
        vlan1ips.append(line[1])
print(f"{vlan1ips}")

The resulting output would look something like this:

['10.1.10.1', '10.1.10.11', '10.1.10.10', '10.1.10.21', '10.1.10.37', '10.1.10.102', '71.103.129.220', '10.1.10.170', '10.1.10.181']

Python and Pandas

Using a Pandas data frame df to hold the parsed data:

pandas_vlan1ips = df['ADDRESS'].loc[df['INTERFACE'] == 'Vlan1'].values

The resulting output from the one liner above would look something like this:

 ['10.1.10.1' '10.1.10.11' '10.1.10.10' '10.1.10.21' '10.1.10.37'
'10.1.10.102' '71.103.129.220' '10.1.10.170' '10.1.10.181']

Same output with a single command!

Python List Comprehension

For those more conversant with Python, you could say that list comprehension is just as efficient.

# Using list comprehension
print("Using Python List Comprehension...")
lc_vlan1ips = [line[1] for line in strained if line[5] == 'Vlan1' ]

Results in:

Using List Comprehension: 
['10.1.10.1', '10.1.10.11', '10.1.10.10', '10.1.10.21', '10.1.10.37', '10.1.10.102', '71.103.129.220', '10.1.10.170', '10.1.10.181']

So yes, list comprehension gets us down to one line, but I find it a bit obscure to read, and a week later I will have no idea what is in line[5] or line[1].

I could turn the data into a list of dictionaries so that, rather than using positional indexes in a list, I could turn line[1] into line['IP_ADDRESS'] and line[5] into line['INTERFACE'], which would make the list comprehension and the basic python easier to read, but now we've added lines to the script.

Finally, yes, it's one line, but I'm still iterating over the data.

Pandas is set up to do all the iteration for me and lets me refer to data by name or by position “out of the box” and without any extra steps.

Let's decompose the one line of code:

If you think of this expression as a filter sandwich, df['ADDRESS'] and .values are the bread, and the middle .loc[df['INTERFACE'] == 'Vlan1'] part that does the filtering is the main ingredient.

Without the middle part, you would have a Pandas Series (effectively a list) of all the IPs in the ARP table. Basically, you get the entire contents of the 'ADDRESS' column in the data frame without any filtering.

When you "qualify" df['ADDRESS'] with .loc[df['INTERFACE'] == 'Vlan1'], you filter the ADDRESS column in the data frame for just those records where INTERFACE is 'Vlan1', and you return only the IP values by using .values.

Now, this will return a numpy.ndarray which might be great for some subsequent statistical analysis but as network engineers our needs are simple.

I’m using iPython in the examples below as you can see from the “In” and “Out” line prefixes.

In [1]: pandas_vlan1ips = df['ADDRESS'].loc[df['INTERFACE'] == 'Vlan1'].values

In [2]: type(pandas_vlan1ips)
Out[2]: numpy.ndarray

I would like my list back as an actual python list, and that's no problem for Pandas.


In [3]: pandas_vlan1ips = df['ADDRESS'].loc[df['INTERFACE'] == 'Vlan1'].to_list()

In [4]: type(pandas_vlan1ips)
Out[4]: list

In [5]: pandas_vlan1ips
Out[5]:
['10.1.10.1',
 '10.1.10.11',
 '10.1.10.10',
 '10.1.10.21',
 '10.1.10.37',
 '10.1.10.102',
 '71.103.129.220',
 '10.1.10.170',
 '10.1.10.181']

You know what would be really handy? A list of dictionaries where I can reference both the IP ADDRESS and the MAC as keys.

In [5]: vlan1ipmac_ldict = df[['ADDRESS', 'MAC']].to_dict(orient='records')

In [6]: type(vlan1ipmac_ldict)
Out[6]: list

In [7]: vlan1ipmac_ldict
Out[7]:
[{'ADDRESS': '10.1.10.1', 'MAC': '28c6.8ee1.659b'},
 {'ADDRESS': '10.1.10.11', 'MAC': '6400.6a64.f5ca'},
 {'ADDRESS': '10.1.10.10', 'MAC': '0018.7149.5160'},
 {'ADDRESS': '10.1.10.21', 'MAC': 'a860.b603.421c'},
 {'ADDRESS': '10.1.10.37', 'MAC': 'a4c3.f047.4528'},
 {'ADDRESS': '10.10.101.1', 'MAC': '0018.b9b5.93c2'},
 {'ADDRESS': '10.10.100.1', 'MAC': '0018.b9b5.93c1'},
 {'ADDRESS': '10.1.10.102', 'MAC': '0018.b9b5.93c0'},
 {'ADDRESS': '71.103.129.220', 'MAC': '28c6.8ee1.6599'},
 {'ADDRESS': '10.1.10.170', 'MAC': '000c.294f.a20b'},
 {'ADDRESS': '10.1.10.181', 'MAC': '000c.298c.d663'}]

In [8]: len(vlan1ipmac_ldict)
Out[8]: 11

MAC address Lookup

Not impressed yet? Let's see what else we can do with this Data Frame.

I have a small function that performs MAC address lookups to get the Vendor OUI.

This function is called get_oui_macvendors() and you pass it a MAC address and it returns the vendor name.

It uses the MacVendors.co API.

I’d like to add a column of data to our Data Frame with the Vendor OUI for each MAC address.

In the one line below, I’ve added a column to the data frame titled ‘OUI’ and populated its value by performing a lookup on each MAC and using the result from the get_oui_macvendors function.

df['OUI'] = df['MAC'].map(get_oui_macvendors)

The left side of the equation references a column in the data frame that does not exist yet, so it will be added.

The right side takes each MAC address in the existing MAC column of the data frame, runs it through the get_oui_macvendors function to get the vendor OUI, and "maps" that result into the new OUI "cell" for that MAC's row in the data frame.
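
A sketch of what get_oui_macvendors might look like; the MacVendors.co endpoint format and response shape shown here are assumptions, so check the API documentation before relying on them:

import requests

def get_oui_macvendors(mac):
    """Look up the vendor OUI for a MAC address via the MacVendors.co API."""
    url = f"https://macvendors.co/api/{mac}"
    try:
        response = requests.get(url, timeout=5)
        # Assumed response shape: {"result": {"company": "<vendor name>", ...}}
        return response.json().get("result", {}).get("company", "Unknown")
    except (requests.RequestException, ValueError):
        return "Lookup failed"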

(Diagram: what is happening under the hood in the one-line command that adds a column)

Now we have an updated Data Frame with a new OUI column giving the vendor code for each MAC.

In [1]: df                                                                                                                                                                                                                                                      
 Out[1]: 
     PROTOCOL         ADDRESS  AGE             MAC  TYPE INTERFACE                 OUI
 0   Internet       10.1.10.1    5  28c6.8ee1.659b  ARPA     Vlan1             NETGEAR
 1   Internet      10.1.10.11    4  6400.6a64.f5ca  ARPA     Vlan1           Dell Inc.
 2   Internet      10.1.10.10  172  0018.7149.5160  ARPA     Vlan1     Hewlett Packard
 3   Internet      10.1.10.21    0  a860.b603.421c  ARPA     Vlan1         Apple, Inc.
 4   Internet      10.1.10.37   18  a4c3.f047.4528  ARPA     Vlan1     Intel Corporate
 5   Internet     10.10.101.1    -  0018.b9b5.93c2  ARPA   Vlan101  Cisco Systems, Inc
 6   Internet     10.10.100.1    -  0018.b9b5.93c1  ARPA   Vlan100  Cisco Systems, Inc
 7   Internet     10.1.10.102    -  0018.b9b5.93c0  ARPA     Vlan1  Cisco Systems, Inc
 8   Internet  71.103.129.220    4  28c6.8ee1.6599  ARPA     Vlan1             NETGEAR
 9   Internet     10.1.10.170    0  000c.294f.a20b  ARPA     Vlan1        VMware, Inc.
 10  Internet     10.1.10.181    0  000c.298c.d663  ARPA     Vlan1        VMware, Inc.

More questions

Let's interrogate our data set further.

I want a unique list of all the INTERFACE values.

In [3]: df['INTERFACE'].unique()                                                                                                                                                                                                                                
 Out[3]: array(['Vlan1', 'Vlan101', 'Vlan100'], dtype=object)

How about “Give me a total count of each of the unique INTERFACE values?”

In [4]: df.groupby('INTERFACE').size()                                                                                                                                                                                                                          
 Out[4]: 
 INTERFACE
 Vlan1      9
 Vlan100    1
 Vlan101    1
 dtype: int64

Let's take it down a level and get unique totals based on INTERFACE and vendor OUI.

In [2]: df.groupby(['INTERFACE','OUI']).size()                                                                                                                                                                                                                  
 Out[2]: 
 INTERFACE  OUI               
 Vlan1      Apple, Inc.           1
            Cisco Systems, Inc    1
            Dell Inc.             1
            Hewlett Packard       1
            Intel Corporate       1
            NETGEAR               2
            VMware, Inc.          2
 Vlan100    Cisco Systems, Inc    1
 Vlan101    Cisco Systems, Inc    1
 dtype: int64

I could do this all day long!

Conclusion

I’ve just scratched the surface of what Pandas can do and I hope some of the examples I’ve shown above illustrate why investing in learning how to use data frames could be very beneficial. Filtering, getting unique values with counts, even Pivot Tables are possible with Pandas.
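
For example, a pivot-style view of the ARP data above is a one-liner (using the df built earlier):

# Count of MAC addresses per INTERFACE and vendor OUI
pivot = df.pivot_table(index="INTERFACE", columns="OUI", values="MAC",
                       aggfunc="count", fill_value=0)
print(pivot)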

Don’t be discouraged by its seeming complexity like I was.

Don’t discount it because it does not seem to be applicable to what you are trying to do as a Network Engineer, like I did. I hope I’ve shown how very wrong I was and that it is very applicable.

In fact, this small example and some of the other content in this repository come from an actual use case.

I’m involved in several large refresh projects and our workflow is what you would expect.

  1. Snapshot the environment before you change out the equipment
  2. Perform some basic reachability tests
  3. Replace the equipment (switches in this case)
  4. Perform basic reachability tests again
  5. Compare PRE and POST state and confirm that all the devices you had just before you started are back on the network.
  6. Troubleshoot as needed

As you can see if you delve into this repository, it's heavy on ARP and MAC data manipulation so that we can automate most of the workflow I've described above. Could I have done it without Pandas? Yes. Could I have done it as quickly and efficiently, with code that I will have some shot of understanding in a month, without Pandas? No.
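
Step 5 of that workflow, the PRE/POST comparison, is also a natural fit for Pandas. A minimal sketch, assuming df_pre and df_post are ARP data frames captured before and after the swap (built the same way as df above):

import pandas as pd

# df_pre and df_post would be built from "show ip arp" output captured
# before and after the change, exactly like df earlier in this post

# Outer merge on MAC; the indicator column records where each MAC was seen
compare = pd.merge(df_pre, df_post, on="MAC", how="outer",
                   suffixes=("_pre", "_post"), indicator=True)

# MACs present before the change but missing afterwards need investigation
missing_after = compare[compare["_merge"] == "left_only"]
print(missing_after[["MAC", "ADDRESS_pre", "INTERFACE_pre"]])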

I hope I’ve either put Pandas on your radar as a possible tool to use in the future or actually gotten you curious enough to take the next steps.

I really hope that the latter is the case and I encourage you to just dive in.

The companion repository on GitHub is intended to help and give you examples.


Next Steps

The “Study Guide” links below have some very good and clear content to get you started. Of all the content out there, these resources were the most helpful for me.

Let me also say that it took a focused effort to get to the point where I was doing useful work with Pandas, and I've only just scratched the surface. It was worth every minute! What I have described here and in this repository are the things that were useful for me as a Network Engineer.

Once you’ve gone through the Study Guide links and any others that you have found, you can return to this repository to see examples. In particular, this repository contains a Python script called arp_interrogate.py.

It goes through loading the ARP data from the “show ip arp” command, parsing it, and creating a Pandas Data Frame.

It then goes through a variety of questions (some of which you have seen above) to show how the Data Frame can be “interrogated” to get to information that might prove useful.
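
If you would like a feel for the general pattern before opening the script, this is roughly what the load-and-parse step looks like. This is a hedged sketch; the template and output file names here are placeholders rather than the repository’s actual files.

import textfsm
import pandas as pd

# Parse the raw "show ip arp" output with a TextFSM template (placeholder file names)
with open('show_ip_arp.template') as template_file:
    fsm = textfsm.TextFSM(template_file)

with open('show_ip_arp_output.txt') as output_file:
    parsed_rows = fsm.ParseText(output_file.read())

# Build the Data Frame using the template's value names as column headers
df = pd.DataFrame(parsed_rows, columns=fsm.header)
print(df.head())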

There are comments throughout which are reminders for me and which may be useful to you.

The script is designed to run with data in the repository by default but you can pass it your own “show ip arp” output with the -o option.

Using the -i option will drop you into iPython with all of the data still in memory for you to use. This will allow you to interrogate the data in the Data Frame yourself.
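
One common way to implement that kind of option (an assumption on my part, not necessarily how arp_interrogate.py does it) is IPython’s embed(). A minimal sketch:

import argparse
import pandas as pd
from IPython import embed

# Stand-in Data Frame so the sketch is self-contained
df = pd.DataFrame({'INTERFACE': ['Vlan1'], 'MAC': ['aaaa.bbbb.cccc']})

parser = argparse.ArgumentParser()
parser.add_argument('-i', '--interactive', action='store_true', help='Drop into iPython')
args = parser.parse_args()

if args.interactive:
    # Everything defined above, including df, is available in the iPython session
    embed()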

If you would like to use it, make sure you clone or download the repository and set up the expected environment.

Options for the arp_interrogate.py script:

(pandas) Claudias-iMac:pandas_neteng claudia$ python arp_interrogate.py -h
usage: arp_interrogate.py [-h] [-t TEMPLATE_FILE] [-o OUTPUT_FILE] [-v]
                        [-f FILENAME] [-s] [-i] [-c]

Script Description

optional arguments:
-h, --help           show this help message and exit
-t TEMPLATE_FILE, --template_file TEMPLATE_FILE
                      TextFSM Template File
-o OUTPUT_FILE, --output_file OUTPUT_FILE
                      Full path to file with show command show ip arp output
-v, --verbose         Enable all of the extra print statements used to
                      investigate the results
-f FILENAME, --filename FILENAME
                      Resulting device data parsed output file name suffix
-s, --save           Save Parsed output in TXT, JSON, YAML, and CSV Formats
-i, --interactive     Drop into iPython
-c, --comparison     Show Comparison

Usage: ' python arp_interrogate.py Will run with default data in the
repository'
(pandas) Claudias-iMac:pandas_neteng claudia$

Study Guide

A Quick Introduction to the “Pandas” Python Library

https://towardsdatascience.com/a-quick-introduction-to-the-pandas-python-library-f1b678f34673

Pandas Fundamentals by Paweł Kordek on PluralSight is exceptionally good. For me this is the class that made all the other classes start to make sense.

Note that this class is not free.

There is quite a lot to Pandas and it can be overwhelming (at least it was for me), but this course in particular got me working very quickly and explained things in a very clear way.

Python Pandas Tutorial 2: Dataframe Basics by codebasics <- good for Pandas operations and set_index

Python Pandas Tutorial 5: Handle Missing Data: fillna, dropna, interpolate by codebasics

Python Pandas Tutorial 6. Handle Missing Data: replace function by codebasics

Real Python <- this is a terrific resource for learning Python

There is a lot of content here. Explore at will. The two below I found particularly helpful.

https://realpython.com/search?q=pandas

Intro to DataFrames by Joe James <- great ‘cheatsheet’



What others have shared…

Analyzing Wireshark Data with Pandas


Disclaimer

THE SOFTWARE in the mentioned repository IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

What language for Network Automation?

I’m often asked variants of “What language would you recommend when getting started with Network Automation and how would you get started?”

As with most things in IT, the answer requires context.

In general, right now I would still pick Python. Go is getting popular but still can’t compete with the huge body of work, modules, and examples available for Python.

However….

Before getting to some suggestions on how I would start learning Python if I had it to do over, let me mention that there are other ways to get started with automation that also involve learning “new syntax” but may seem simpler (at first). Ansible and other automation frameworks use a Domain Specific Language (DSL), which may be simpler to learn if you are doing very basic things or things that have examples in the “Global Data Store”, more commonly known as Google and the Internet!

So if you are new to programming you may want to start with Ansible; it lets you do lots of basic automation with a more abstracted language. All of the resources I mention below also have Ansible training. Try to get one that is recent and geared towards networking, as many Ansible courses are geared towards servers and, while useful, will leave you spending more time ramping up.

All roads lead to Python today

Having said that, once you start trying to do more complex things you may run into limitations or issues where some Python knowledge would be useful. Ansible is written in Python. Also, if you are new to Linux, then while you may have saved some time with Ansible, you will need to invest some time figuring out how to get around in Linux, since the Ansible control host runs on Linux and its variants. There are many other reasons why being conversant with Linux as a network engineer is important.

If you are serious about “upgrading” your skill set as a network engineer, make sure you are somewhat comfortable with Linux basics and, while Ansible can get you going with some simple immediate tasks you may need to do for work, get started with Python as soon as you can.

If you are not comfortable moving around in Linux, David Bombal has an excellent intro on YouTube which explains why it’s an important skill to have, and his full course is available via various training options.
Linux for Network Engineers: You need to learn Linux! (Prt 1) Demo of Cisco 9k, Arista EOS & Cumulus


One more point. In the interest of getting you productive as quickly as possible, I have built a series of Docker images that provide an Ansible control server as well as a Python programming environment with many of the modules network engineers tend to use.

Ansible Server in Docker for Network Engineers



So back to the question at hand:

Step 1 (Investment: Time)

I would start with Kirk Byers’ free Learning Python course.

Note: I have no affiliation with him or his company, but he speaks to Network Engineers better than anyone I’ve seen. It is an online course of recorded lessons (with a lab), one lesson a week, so from a time perspective it is very easy to “consume” while still having an opportunity to interact and ask questions. The course with the community forum is well worth the additional cost. You will make new acquaintances who are interested in what you are doing, and you may even run into old friends.

I would also start going through the coursework available for free on DevNet (requires an account).

Intermission (Investment: Time)

Pick a small task and get it working. This step is essential and will help focus your next step.

Spend time working with complex data structures, YAML, and JSON. See my post on “Decomposing Data Structures“.
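
If you want something concrete to practice on, here is a minimal sketch (the data is made up) of round-tripping one small nested structure between Python, JSON, and YAML. It assumes PyYAML is installed (pip install pyyaml).

import json
import yaml  # PyYAML - assumed to be installed

# A small nested structure like the ones you will see in Ansible and API payloads
device = {
    'hostname': 'sw01',
    'interfaces': [
        {'name': 'Vlan1', 'ip': '10.1.1.1'},
        {'name': 'Vlan100', 'ip': '10.1.100.1'},
    ],
}

# Serialize to JSON and YAML text
json_text = json.dumps(device, indent=2)
yaml_text = yaml.safe_dump(device, default_flow_style=False)

# Load the YAML back and walk the nested structure
loaded = yaml.safe_load(yaml_text)
for intf in loaded['interfaces']:
    print(intf['name'], intf['ip'])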

Step 2 (Investment: Time & Money)

Your next step can take you along various paths. All of these are paid courses and can be done on demand or in a class. There are many options here, but I’m listing the ones with which I have first-hand experience.

Kirk Byers

I really recommend Kirk Byers’ Python for Network Engineers course. I’ve taken it several times!! 😀 You get a lesson a week, and it is taught by a Network Engineer for Network Engineers!

Udemy

I’m also a BIG fan of Udemy, and they have some very good courses as well at very reasonable price points. Always listen to the free examples and always check the dates before you purchase. Make sure you find out if the Python course is using Python 3, as there are many courses there which use Python 2, and if you are just starting, please start with Python 3!

These courses are at a nominal cost and self-paced but, again, make sure you get a current one that uses Python 3.

INE

Sign up for an INE course. These guys are also very good and more self-paced but a bit more costly.

Network to Code

The Network to Code guys have great content in a more formalized setting where you can go for a week.

Note: I have taught for these guys so I do have a loose affiliation which does not lessen the point that they have terrific content.

Using Docker as an Ansible and Python platform for Network Engineers

A quick start guide for using the purpose built Docker images for Ansible and Python

Built for Network Engineers by a Network Engineer

Over the last few years I’ve built up a repository of Docker images to help me learn Ansible. If you are new to Ansible you may not know that while Ansible can control all manner of devices (Windows, Linux, Network, Virtual or Bare Metal, Cloud, etc.), the Ansible control server itself only runs on Linux. There are ways around that now (the Windows Subsystem for Linux and Cygwin), but they are not supported. If you are trying to get started, you first have to stand up a control server. Depending on your familiarity with Linux and virtualization technology, you can spend quite a bit of time going down different avenues just to get to the point where you can run a playbook.

Docker was the answer for me.

These Ansible containers have been built over the years to provide a quick alternative to running an Ansible Control server on a local virtualization VM (VirtualBox or VMware). A container is also handy if you need an Ansible Control server to use via a VPN connection. You will see that many of the test playbooks are designed to perform discovery on a set of devices (aka run lots of show commands and save the output), and so a common practice is to VPN in (or connect directly) to a client network and quickly perform this discovery task.

I’ve kept the various images with different Ansible versions so I can test playbooks on specific versions. In many cases I work with clients who use a specific version of Ansible, and so it’s handy to be able to test locally on the version they use.

This “Quick Start” cheat sheet is intended to get you up and running quickly with the various Ansible containers. Each offers an Ansible control server and a python environment.

The following Docker images are available on Docker Hub

Docker Images providing an Ansible and Python environment for Network Engineers – Complete List

Select a specific Ansible Version:

* If you are not sure which image to use, go with Bionic Immigrant! It’s the most mature, based on Ubuntu Long Term Support (LTS), supports the automated Documentation examples you may have seen, and includes the Batfish client, Batfish Ansible module/role, and Ansible Network Engine role. It is also the image that will run all of my shared repositories.


Installing Docker

To run any Docker container (built from a Docker image) you will need to install Docker Desktop on your Mac or Windows system (see the note below about Docker Desktop for Windows and consider Docker Toolbox if you run other virtualization software) or Docker Engine on your Linux host. You can use the free Docker Community Edition.

The instructions below focus on Mac and Windows operating systems as those tend to be the most prevalent among Network Engineers (at least the ones I know!).

Install on your Operating System of choice

Installing Docker Desktop on Mac

Installing Docker Desktop on Windows WARNING: The Docker Desktop application is only supported on Windows 10 Pro (or better) 64-bit and requires the Hyper-V and Containers Windows features to be enabled.

This means that other virtualization software that does not support Hyper-V (e.g. VMware Workstation and VirtualBox) will not work while you have Hyper-V enabled, and Docker Desktop won’t work when you have Hyper-V disabled (but VirtualBox and VMware will).

If you have existing virtualization software installed that you use, Docker Toolbox for Windows is still available.

For the Linux aficionados:

Installing Docker Engine Community version on Linux


Now that you have Docker installed – Command Cheat Sheet

Now that you have Docker installed, here is a cheat sheet of the commands I find most useful.

Docker4NetEng_Cheatsheet

Apple
OS-X

Getting Started on Mac with Docker Desktop

Environment:

  • Mac OS X (macOS Sierra Version 10.12.6)
  • Intel based iMac
  • Intel Core i7 4GHz 23 GB Memory

Summary of Steps

  1. Make sure Docker Desktop is installed and running
  2. Open a terminal window and launch your container with the docker run command
  3. Look around the ready built repositories which are cloned into the container to get you started quickly (always remember to git pull to get the latest).

Details

Docker Desktop on Mac OS X

Using Docker Desktop on Mac OS X Video ~13min

  1. Before starting make sure that Docker is installed and running on your Mac.
About Docker

2. Open a terminal window and use the docker run -it command to start the container on your Mac

Full command to start an interactive session

docker run -it cldeluna/disco-immigrant

The first time you execute this command, the Docker image will download and you will then be put into an interactive bash shell in the container.

-i, --interactive    Keep STDIN open even if not attached

-t, --tty            Allocate a pseudo-TTY

This will basically take over your terminal window so if you need to do something else on your system open up a new or different terminal window. Check the command cheat sheet for alternatives like using the -dit option to run in the background and the docker exec command to “attach” to the running container.

If you have not already downloaded the image using the docker pull <image> command, the docker run command will know and pull it down for you. Once the download is complete and the container is running, you will notice that the prompt in your terminal window has changed.

It will look something like “root@c421cab61f1f:/ansible_local“.

Claudias-iMac:disco-immigrant claudia$ docker run -it cldeluna/disco-immigrant
root@c421cab61f1f:/ansible_local#

3. Start looking around

  • Check the version of Ansible on the container. In the example below we are using the disco-immigrant image, which comes with Ansible 2.9.1.
  • Several repositories are cloned into the container to get you started quickly. Check out the Ansible playbook repositories, change directory into one to see the example playbooks, and try one! You can find details in the “Run one of the ready built Playbooks!” section below. At this point, once you are at a working Docker CLI, the process is basically the same across all operating systems.
  • Always do a “git pull” in any of the cloned repositories to make sure you are running the latest scripts and playbooks.
  • Run your first playbook! You don’t need to bring up any device as many of the playbooks use the DevNet AlwaysOn Sandbox devices.

If you cd or change directory into the cisco_ios directory you can get started with some basic Playbooks.

Claudias-iMac:disco-immigrant claudia$ docker run -it cldeluna/disco-immigrant
root@c421cab61f1f:/ansible_local#
root@c421cab61f1f:/ansible_local# ls
ansible2_4_base cisco_aci cisco_ios
root@c421cab61f1f:/ansible_local# cd cisco_ios
root@c421cab61f1f:/ansible_local/cisco_ios# ls
ansible.cfg     ios_all_vrf_arp.yml   ios_show_cmdlist_zip.yml logs
filter_plugins ios_facts_lab.yml     ios_show_lab.yml         nxos_facts_lab.yml
group_vars     ios_facts_report.yml ios_show_lab25.yml       nxos_show_cmdlist.yml
hosts           ios_show_cmdlist.yml ios_show_lab_brief.yml   templates
root@c421cab61f1f:/ansible_local/cisco_ios#

Using Docker Desktop on Mac OS X Video ~13min


Windows 10 Pro Logo

Getting Started with Docker Toolbox on Windows

Environment:

  • Microsoft Windows 10 Pro
  • x64-based PC
  • Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz, 4008 MHz, 4 Core(s)…

Summary of Steps

  1. Make sure Docker Toolbox is installed and running
  2. Open a terminal window or the “default” VirtualBox VM console
  3. Look around the ready built repositories which are cloned into the container to get you started quickly.

Details

  1. Docker Toolbox is quirky, no question about it. The desktop shortcuts often don’t work for me but going directly to the VirtualBox VM typically does. Open up VirtualBox and make sure the Docker Toolbox VM is running (it is actually called “default”!)
  2. For me, what generally works is opening up the default VM console directly. Double-click on 1 in the image below to start the default container and open up the container VM console.
  3. Start looking around! Check the version of Ansible on the container. In the example below we are using the disco-immigrant image, which comes with Ansible 2.9.1. Check out the Ansible playbook repositories, change directory into one to see the example playbooks, and try one! You can find details in the “Run one of the ready built Playbooks!” section below. At this point, once you are at a working Docker CLI, the process is basically the same across all operating systems.
  4. Always do a “git pull” in any of the cloned repositories to make sure you are running the latest scripts and playbooks.
  5. Run your first playbook! You don’t need to bring up any device as most use the DevNet AlwaysOn Sandbox devices.
Docker Toolbox on Windows

Using Docker Toolbox on Windows Video ~13min


Run one of the ready built Playbooks!

Summary of Steps

  1. Select a repository to try. In this example we will try the cisco_ios playbook repository.
  2. Enter the git pull command in your container terminal to make sure the repository has the latest code
  3. Try one of the ready made Playbooks
  4. Take one of the example playbooks and modify it to suit your needs or create a new Playbook.

Details

  1. Move into the desired playbook repository by issuing the change directory command cd cisco_ios from the ansible_local directory.
  2. Before you try any of the playbooks, it’s a good idea to execute a git pull so that you have the latest version of the repository.

Example of updated repository:

root@c421cab61f1f:/ansible_local/cisco_ios# git pull
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (1/1), done.
remote: Total 3 (delta 2), reused 3 (delta 2), pack-reused 0
Unpacking objects: 100% (3/3), done.
From https://github.com/cldeluna/cisco_ios
  275f642..a8f951f master     -> origin/master
Updating 275f642..a8f951f
Fast-forward
ios_show_lab_brief.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
root@c421cab61f1f:/ansible_local/cisco_ios#

This means that updates have been made to the repository since the time the Docker image was built.

Example of repository already up to date:

root@c421cab61f1f:/ansible_local/cisco_ios# git pull
Already up to date.
root@c421cab61f1f:/ansible_local/cisco_ios#

This means that no updates have been made to the repository since the time the Docker image was built.

3. Execute the ios_show_lab_brief.yml Playbook.

The comments in the playbook explain in some detail what the playbook is doing and how to execute it.

root@c421cab61f1f:/ansible_local/cisco_ios# ansible-playbook -i hosts ios_show_lab_brief.yml

You will see that the playbook saves different types of output into different text files. cd to the logs directory and review the results.

root@c421cab61f1f:/ansible_local/cisco_ios# cd logs
root@c421cab61f1f:/ansible_local/cisco_ios/logs# tree
.
|-- ios-xe-mgmt.cisco.com-config.txt
|-- ios-xe-mgmt.cisco.com-raw-output.txt
|-- ios-xe-mgmt.cisco.com-readable-show-output.txt
`-- output_directory

0 directories, 4 files

4. At this point, you can start making these playbooks your own.

Update the hosts file and create your own group of devices. Update the show commands. Start your own Playbook. Now that you have an Ansible Control server, you are ready to go!

Since this is a container, it will leverage your system’s network connection, so if you VPN into your lab, for example, you can use the Ansible Control server on your system.

Tip: Mount a Shared Folder

All the work you do will be “captive” in your container unless you start your container with an option to share a folder between your desktop and the container.

That may not be a problem. I try to develop locally and then “git pull” updates inside the container and test, but there are times when I want to try new things within the container, and without this mounting option you can’t get to the new files you created. You often have to copy them out, and worst case you forget, destroy your container, and lose work. So if the git push/pull model is not to your liking, then the -v option is for you!

Note that this is more difficult to do using Docker Toolbox as there are several levels of abstraction.

docker-v-annotated_shell
Directory shared between the Docker host and the Docker container.


References


Full output of ios_show_lab_brief.yml Playbook Execution:

This Playbook ran against the DevNet IOS XE device and so your output may differ from what is shown below.

root@c421cab61f1f:/ansible_local/cisco_ios# 
 root@c421cab61f1f:/ansible_local/cisco_ios# ansible-playbook -i hosts ios_show_lab_brief.yml
 PLAY [Pull show commands form Cisco IOS_XE Always On Sandbox device] *
 TASK [Iterate over show commands] **
 ok: [ios-xe-mgmt.cisco.com] => (item=show run)
 ok: [ios-xe-mgmt.cisco.com] => (item=show version)
 ok: [ios-xe-mgmt.cisco.com] => (item=show inventory)
 ok: [ios-xe-mgmt.cisco.com] => (item=show ip int br)
 ok: [ios-xe-mgmt.cisco.com] => (item=show ip route)
 TASK [debug] *
 ok: [ios-xe-mgmt.cisco.com] => {
     "output": {
         "changed": false,
         "deprecations": [
             {
                 "msg": "Distribution Ubuntu 19.04 on host ios-xe-mgmt.cisco.com should use /usr/bin/python3, but is using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible release will default to using the discovered platform python for this host. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information",
                 "version": "2.12"
             }
         ],
         "msg": "All items completed",
         "results": [
             {
                 "ansible_facts": {
                     "discovered_interpreter_python": "/usr/bin/python"
                 },
                 "ansible_loop_var": "item",
                 "changed": false,
                 "failed": false,
                 "invocation": {
                     "module_args": {
                         "auth_pass": null,
                         "authorize": null,
                         "commands": [
                             "show run"
                         ],
                         "host": null,
                         "interval": 1,
                         "match": "all",
                         "password": null,
                         "port": null,
                         "provider": null,
                         "retries": 10,
                         "ssh_keyfile": null,
                         "timeout": null,
                         "username": null,
                         "wait_for": null
                     }
                 },
                 "item": "show run",
                 "stdout": [
                     "Building configuration…\n\nCurrent configuration : 6228 bytes\n!\n! Last configuration change at 23:15:26 UTC Sat Nov 23 2019 by root\n!\nversion 16.9\nservice timestamps debug datetime msec\nservice timestamps log datetime msec\nplatform qfp utilization monitor load 80\nno platform punt-keepalive disable-kernel-core\nplatform console virtual\n!\nhostname csr1000v\n!\nboot-start-marker\nboot-end-marker\n!\n!\nno logging console\nenable secret 5 $1$gkJ1$EofN9ajW9k18SoRTgkhYr/\n!\nno aaa new-model\n!\n!\n!\n!\n!\n!\n!\nip domain name abc.inc\n!\n!\n!\nlogin on-success log\n!\n!\n!\n!\n!\n!\n!\nsubscriber templating\n! \n! \n! \n! \n!\nmultilink bundle-name authenticated\n!\n!\n!\n!\n!\ncrypto pki trustpoint TP-self-signed-1530096085\n enrollment selfsigned\n subject-name cn=IOS-Self-Signed-Certificate-1530096085\n revocation-check none\n rsakeypair TP-self-signed-1530096085\n!\n!\ncrypto pki certificate chain TP-self-signed-1530096085\n certificate self-signed 01\n  30820330 30820218 A0030201 02020101 300D0609 2A864886 F70D0101 05050030 \n  31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274 \n  69666963 6174652D 31353330 30393630 3835301E 170D3139 30353135 31353230 \n  34305A17 0D333030 31303130 30303030 305A3031 312F302D 06035504 03132649 \n  4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D31 35333030 \n  39363038 35308201 22300D06 092A8648 86F70D01 01010500 0382010F 00308201 \n  0A028201 0100B239 1ADC578A 8FD99454 BC1BE3E4 38E9CF35 D1D2420E 53D62D27 \n  92220CF4 A1AD3126 76B809F0 F227D539 3E371330 8C7767EA 2F22A811 7CA7B88F \n  26EE73B8 9925DAFF E2453823 BCF29423 DACB3CE9 92238E44 18E1834F A6D8ABCA \n  C6B686E5 ACD87A90 AF9EAE89 093BBEDC 1E2E2AEE 989C4B8C 7D53DBE4 57AE8D66 \n  2424721F 3C66A5AC 24A77372 EC6691CE 61B8DF71 A327F668 A9C76D2D EE364206 \n  2713286B 7127CB29 57010489 D350BC1B E19C548E D63B0609 3FB63FFE DAD9CBAE \n  26A60DB8 A2C51F1D B75577DF 4CA4879C A36E545F C221760D E1308E74 35399E91 \n  8A7075CD 498E7439 BBFC72A7 9217389D 8C1787FF 5AC1ECCA 36D9AE5C 8564AD06 \n  4CD176B2 EB690203 010001A3 53305130 0F060355 1D130101 FF040530 030101FF \n  301F0603 551D2304 18301680 142A4179 9A2DB89D 21F5780E A6170B83 D01CF664 \n  17301D06 03551D0E 04160414 2A41799A 2DB89D21 F5780EA6 170B83D0 1CF66417 \n  300D0609 2A864886 F70D0101 05050003 82010100 5469C02A ACD746F5 FAA7ADD6 \n  53BF195C B0FE9815 EC401671 0FDB9C8A 91571EA0 0F1748BA BA7DEFEE 41889B7B \n  58F280B7 6FD9D433 B53E5EA4 860014A6 01408E1C 12212B34 499CFC91 9AD075B8 \n  7300AF75 A836A2A4 588B4B91 2E72DF0D DA9EA3CD 7CE8D3E3 4990A6D5 5F46634A \n  5518C7C1 34B5B5D7 44EAF2A8 0DFB4762 4F2450BE D3D0D5E3 F026015D DFF04762 \n  AA3E3332 07FEF910 D895D4D8 D673E2DB D7534719 F86C0BA8 ACAB3057 6E50A289 \n  4D1EB2F9 9D24EA20 B0ADA198 037450F4 C606864A A6C7C060 5099D394 FF68F570 \n  4D9F84E6 2B1238B9 32D7FABB F9632EA7 BA8597E8 63802AD9 B92187DF 53935107 \n  5B6C962B 805A8031 F268C32C B1338EAB 3E9A2958\n  \tquit\n!\n!\n!\n!\n!\n!\n!\n!\nlicense udi pid CSR1000V sn 9ZL30UN51R9\nlicense boot level ax\nno license smart enable\ndiagnostic bootup level minimal\n!\nspanning-tree extend system-id\n!\nnetconf-yang\n!\nrestconf\n!\nusername developer privilege 15 secret 5 $1$HtLC$7Kj3hGBoDnSHzdEeR/2ix.\nusername cisco privilege 15 secret 5 $1$aO1Y$0AFVz00ON.hE4WkY.BeYq.\nusername root privilege 15 secret 5 $1$vpY7$mh9d69ui3koSaITBi8k9D/\n!\nredundancy\n!\n!\n!\n!\n!\n!\n! \n!\n!\n!\n!\n!\n!\n!\n!\n!\n!\n!\n!\n! \n! 
\n!\n!\ninterface Loopback18\n description Configured by RESTCONF\n ip address 172.16.100.18 255.255.255.0\n!\ninterface Loopback702\n description Configured by charlotte\n ip address 172.17.2.1 255.255.255.0\n!\ninterface Loopback710\n description Configured by seb\n ip address 172.17.10.1 255.255.255.0\n!\ninterface Loopback2101\n description Configured by RESTCONF\n ip address 172.20.1.1 255.255.255.0\n!\ninterface Loopback2102\n description Configured by Charlotte\n ip address 172.20.2.1 255.255.255.0\n!\ninterface Loopback2103\n description Configured by OWEN\n ip address 172.20.3.1 255.255.255.0\n!\ninterface Loopback2104\n description Configured by RESTCONF\n ip address 172.20.4.1 255.255.255.0\n!\ninterface Loopback2105\n description Configured by RESTCONF\n ip address 172.20.5.1 255.255.255.0\n!\ninterface Loopback2107\n description Configured by Josia\n ip address 172.20.7.1 255.255.255.0\n!\ninterface Loopback2108\n description Configured by RESTCONF\n ip address 172.20.8.1 255.255.255.0\n!\ninterface Loopback2109\n description Configured by RESTCONF\n ip address 172.20.9.1 255.255.255.0\n!\ninterface Loopback2111\n description Configured by RESTCONF\n ip address 172.20.11.1 255.255.255.0\n!\ninterface Loopback2112\n description Configured by RESTCONF\n ip address 172.20.12.1 255.255.255.0\n!\ninterface Loopback2113\n description Configured by RESTCONF\n ip address 172.20.13.1 255.255.255.0\n!\ninterface Loopback2114\n description Configured by RESTCONF\n ip address 172.20.14.1 255.255.255.0\n!\ninterface Loopback2116\n description Configured by RESTCONF\n ip address 172.20.16.1 255.255.255.0\n!\ninterface Loopback2117\n description Configured by RESTCONF\n ip address 172.20.17.1 255.255.255.0\n!\ninterface Loopback2119\n description Configured by RESTCONF\n ip address 172.20.19.19 255.255.255.0\n!\ninterface Loopback2121\n description Configured by RESTCONF\n ip address 172.20.21.1 255.255.255.0\n!\ninterface Loopback3115\n description Configured by Breuvage\n ip address 172.20.15.1 255.255.255.0\n!\ninterface GigabitEthernet1\n description MANAGEMENT INTERFACE - DON'T TOUCH ME\n ip address 10.10.20.48 255.255.255.0\n negotiation auto\n no mop enabled\n no mop sysid\n!\ninterface GigabitEthernet2\n description Configured by RESTCONF\n ip address 10.255.255.1 255.255.255.0\n negotiation auto\n no mop enabled\n no mop sysid\n!\ninterface GigabitEthernet3\n description Network Interface\n no ip address\n shutdown\n negotiation auto\n no mop enabled\n no mop sysid\n!\nip forward-protocol nd\nip http server\nip http authentication local\nip http secure-server\nip route 0.0.0.0 0.0.0.0 GigabitEthernet1 10.10.20.254\n!\nip ssh rsa keypair-name ssh-key\nip ssh version 2\nip scp server enable\n!\n!\n!\n!\n!\ncontrol-plane\n!\n!\n!\n!\n!\nbanner motd ^C\nWelcome to the DevNet Sandbox for CSR1000v and IOS XE\n\nThe following programmability features are already enabled:\n  - NETCONF\n  - RESTCONF\n\nThanks for stopping by.\n^C\n!\nline con 0\n exec-timeout 0 0\n stopbits 1\nline vty 0 4\n login local\n transport input ssh\n!\nntp logging\nntp authenticate\n!\n!\n!\n!\n!\nend"
                 ],
                 "stdout_lines": [
                     [
                         "Building configuration…",
                         "",
                         "Current configuration : 6228 bytes",
                         "!",
                         "! Last configuration change at 23:15:26 UTC Sat Nov 23 2019 by root",
                         "!",
                         "version 16.9",
                         "service timestamps debug datetime msec",
                         "service timestamps log datetime msec",
                         "platform qfp utilization monitor load 80",
                         "no platform punt-keepalive disable-kernel-core",
                         "platform console virtual",
                         "!",
                         "hostname csr1000v",
                         "!",
                         "boot-start-marker",
                         "boot-end-marker",
                         "!",
                         "!",
                         "no logging console",
                         "enable secret 5 $1$gkJ1$EofN9ajW9k18SoRTgkhYr/",
                         "!",
                         "no aaa new-model",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "ip domain name abc.inc",
                         "!",
                         "!",
                         "!",
                         "login on-success log",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "subscriber templating",
                         "! ",
                         "! ",
                         "! ",
                         "! ",
                         "!",
                         "multilink bundle-name authenticated",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "crypto pki trustpoint TP-self-signed-1530096085",
                         " enrollment selfsigned",
                         " subject-name cn=IOS-Self-Signed-Certificate-1530096085",
                         " revocation-check none",
                         " rsakeypair TP-self-signed-1530096085",
                         "!",
                         "!",
                         "crypto pki certificate chain TP-self-signed-1530096085",
                         " certificate self-signed 01",
                         "  30820330 30820218 A0030201 02020101 300D0609 2A864886 F70D0101 05050030 ",
                         "  31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274 ",
                         "  69666963 6174652D 31353330 30393630 3835301E 170D3139 30353135 31353230 ",
                         "  34305A17 0D333030 31303130 30303030 305A3031 312F302D 06035504 03132649 ",
                         "  4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D31 35333030 ",
                         "  39363038 35308201 22300D06 092A8648 86F70D01 01010500 0382010F 00308201 ",
                         "  0A028201 0100B239 1ADC578A 8FD99454 BC1BE3E4 38E9CF35 D1D2420E 53D62D27 ",
                         "  92220CF4 A1AD3126 76B809F0 F227D539 3E371330 8C7767EA 2F22A811 7CA7B88F ",
                         "  26EE73B8 9925DAFF E2453823 BCF29423 DACB3CE9 92238E44 18E1834F A6D8ABCA ",
                         "  C6B686E5 ACD87A90 AF9EAE89 093BBEDC 1E2E2AEE 989C4B8C 7D53DBE4 57AE8D66 ",
                         "  2424721F 3C66A5AC 24A77372 EC6691CE 61B8DF71 A327F668 A9C76D2D EE364206 ",
                         "  2713286B 7127CB29 57010489 D350BC1B E19C548E D63B0609 3FB63FFE DAD9CBAE ",
                         "  26A60DB8 A2C51F1D B75577DF 4CA4879C A36E545F C221760D E1308E74 35399E91 ",
                         "  8A7075CD 498E7439 BBFC72A7 9217389D 8C1787FF 5AC1ECCA 36D9AE5C 8564AD06 ",
                         "  4CD176B2 EB690203 010001A3 53305130 0F060355 1D130101 FF040530 030101FF ",
                         "  301F0603 551D2304 18301680 142A4179 9A2DB89D 21F5780E A6170B83 D01CF664 ",
                         "  17301D06 03551D0E 04160414 2A41799A 2DB89D21 F5780EA6 170B83D0 1CF66417 ",
                         "  300D0609 2A864886 F70D0101 05050003 82010100 5469C02A ACD746F5 FAA7ADD6 ",
                         "  53BF195C B0FE9815 EC401671 0FDB9C8A 91571EA0 0F1748BA BA7DEFEE 41889B7B ",
                         "  58F280B7 6FD9D433 B53E5EA4 860014A6 01408E1C 12212B34 499CFC91 9AD075B8 ",
                         "  7300AF75 A836A2A4 588B4B91 2E72DF0D DA9EA3CD 7CE8D3E3 4990A6D5 5F46634A ",
                         "  5518C7C1 34B5B5D7 44EAF2A8 0DFB4762 4F2450BE D3D0D5E3 F026015D DFF04762 ",
                         "  AA3E3332 07FEF910 D895D4D8 D673E2DB D7534719 F86C0BA8 ACAB3057 6E50A289 ",
                         "  4D1EB2F9 9D24EA20 B0ADA198 037450F4 C606864A A6C7C060 5099D394 FF68F570 ",
                         "  4D9F84E6 2B1238B9 32D7FABB F9632EA7 BA8597E8 63802AD9 B92187DF 53935107 ",
                         "  5B6C962B 805A8031 F268C32C B1338EAB 3E9A2958",
                         "  \tquit",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "license udi pid CSR1000V sn 9ZL30UN51R9",
                         "license boot level ax",
                         "no license smart enable",
                         "diagnostic bootup level minimal",
                         "!",
                         "spanning-tree extend system-id",
                         "!",
                         "netconf-yang",
                         "!",
                         "restconf",
                         "!",
                         "username developer privilege 15 secret 5 $1$HtLC$7Kj3hGBoDnSHzdEeR/2ix.",
                         "username cisco privilege 15 secret 5 $1$aO1Y$0AFVz00ON.hE4WkY.BeYq.",
                         "username root privilege 15 secret 5 $1$vpY7$mh9d69ui3koSaITBi8k9D/",
                         "!",
                         "redundancy",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "! ",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "! ",
                         "! ",
                         "!",
                         "!",
                         "interface Loopback18",
                         " description Configured by RESTCONF",
                         " ip address 172.16.100.18 255.255.255.0",
                         "!",
                         "interface Loopback702",
                         " description Configured by charlotte",
                         " ip address 172.17.2.1 255.255.255.0",
                         "!",
                         "interface Loopback710",
                         " description Configured by seb",
                         " ip address 172.17.10.1 255.255.255.0",
                         "!",
                         "interface Loopback2101",
                         " description Configured by RESTCONF",
                         " ip address 172.20.1.1 255.255.255.0",
                         "!",
                         "interface Loopback2102",
                         " description Configured by Charlotte",
                         " ip address 172.20.2.1 255.255.255.0",
                         "!",
                         "interface Loopback2103",
                         " description Configured by OWEN",
                         " ip address 172.20.3.1 255.255.255.0",
                         "!",
                         "interface Loopback2104",
                         " description Configured by RESTCONF",
                         " ip address 172.20.4.1 255.255.255.0",
                         "!",
                         "interface Loopback2105",
                         " description Configured by RESTCONF",
                         " ip address 172.20.5.1 255.255.255.0",
                         "!",
                         "interface Loopback2107",
                         " description Configured by Josia",
                         " ip address 172.20.7.1 255.255.255.0",
                         "!",
                         "interface Loopback2108",
                         " description Configured by RESTCONF",
                         " ip address 172.20.8.1 255.255.255.0",
                         "!",
                         "interface Loopback2109",
                         " description Configured by RESTCONF",
                         " ip address 172.20.9.1 255.255.255.0",
                         "!",
                         "interface Loopback2111",
                         " description Configured by RESTCONF",
                         " ip address 172.20.11.1 255.255.255.0",
                         "!",
                         "interface Loopback2112",
                         " description Configured by RESTCONF",
                         " ip address 172.20.12.1 255.255.255.0",
                         "!",
                         "interface Loopback2113",
                         " description Configured by RESTCONF",
                         " ip address 172.20.13.1 255.255.255.0",
                         "!",
                         "interface Loopback2114",
                         " description Configured by RESTCONF",
                         " ip address 172.20.14.1 255.255.255.0",
                         "!",
                         "interface Loopback2116",
                         " description Configured by RESTCONF",
                         " ip address 172.20.16.1 255.255.255.0",
                         "!",
                         "interface Loopback2117",
                         " description Configured by RESTCONF",
                         " ip address 172.20.17.1 255.255.255.0",
                         "!",
                         "interface Loopback2119",
                         " description Configured by RESTCONF",
                         " ip address 172.20.19.19 255.255.255.0",
                         "!",
                         "interface Loopback2121",
                         " description Configured by RESTCONF",
                         " ip address 172.20.21.1 255.255.255.0",
                         "!",
                         "interface Loopback3115",
                         " description Configured by Breuvage",
                         " ip address 172.20.15.1 255.255.255.0",
                         "!",
                         "interface GigabitEthernet1",
                         " description MANAGEMENT INTERFACE - DON'T TOUCH ME",
                         " ip address 10.10.20.48 255.255.255.0",
                         " negotiation auto",
                         " no mop enabled",
                         " no mop sysid",
                         "!",
                         "interface GigabitEthernet2",
                         " description Configured by RESTCONF",
                         " ip address 10.255.255.1 255.255.255.0",
                         " negotiation auto",
                         " no mop enabled",
                         " no mop sysid",
                         "!",
                         "interface GigabitEthernet3",
                         " description Network Interface",
                         " no ip address",
                         " shutdown",
                         " negotiation auto",
                         " no mop enabled",
                         " no mop sysid",
                         "!",
                         "ip forward-protocol nd",
                         "ip http server",
                         "ip http authentication local",
                         "ip http secure-server",
                         "ip route 0.0.0.0 0.0.0.0 GigabitEthernet1 10.10.20.254",
                         "!",
                         "ip ssh rsa keypair-name ssh-key",
                         "ip ssh version 2",
                         "ip scp server enable",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "control-plane",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "banner motd ^C",
                         "Welcome to the DevNet Sandbox for CSR1000v and IOS XE",
                         "",
                         "The following programmability features are already enabled:",
                         "  - NETCONF",
                         "  - RESTCONF",
                         "",
                         "Thanks for stopping by.",
                         "^C",
                         "!",
                         "line con 0",
                         " exec-timeout 0 0",
                         " stopbits 1",
                         "line vty 0 4",
                         " login local",
                         " transport input ssh",
                         "!",
                         "ntp logging",
                         "ntp authenticate",
                         "!",
                         "!",
                         "!",
                         "!",
                         "!",
                         "end"
                     ]
                 ]
             },
             {
                 "ansible_loop_var": "item",
                 "changed": false,
                 "failed": false,
                 "invocation": {
                     "module_args": {
                         "auth_pass": null,
                         "authorize": null,
                         "commands": [
                             "show version"
                         ],
                         "host": null,
                         "interval": 1,
                         "match": "all",
                         "password": null,
                         "port": null,
                         "provider": null,
                         "retries": 10,
                         "ssh_keyfile": null,
                         "timeout": null,
                         "username": null,
                         "wait_for": null
                     }
                 },
                 "item": "show version",
                 "stdout": [
                     "Cisco IOS XE Software, Version 16.09.03\nCisco IOS Software [Fuji], Virtual XE Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 16.9.3, RELEASE SOFTWARE (fc2)\nTechnical Support: http://www.cisco.com/techsupport\nCopyright (c) 1986-2019 by Cisco Systems, Inc.\nCompiled Wed 20-Mar-19 07:56 by mcpre\n\n\nCisco IOS-XE software, Copyright (c) 2005-2019 by cisco Systems, Inc.\nAll rights reserved.  Certain components of Cisco IOS-XE software are\nlicensed under the GNU General Public License (\"GPL\") Version 2.0.  The\nsoftware code licensed under GPL Version 2.0 is free software that comes\nwith ABSOLUTELY NO WARRANTY.  You can redistribute and/or modify such\nGPL code under the terms of GPL Version 2.0.  For more details, see the\ndocumentation or \"License Notice\" file accompanying the IOS-XE software,\nor the applicable URL provided on the flyer accompanying the IOS-XE\nsoftware.\n\n\nROM: IOS-XE ROMMON\n\ncsr1000v uptime is 1 day, 2 hours, 8 minutes\nUptime for this control processor is 1 day, 2 hours, 10 minutes\nSystem returned to ROM by reload\nSystem image file is \"bootflash:packages.conf\"\nLast reload reason: reload\n\n\n\nThis product contains cryptographic features and is subject to United\nStates and local country laws governing import, export, transfer and\nuse. Delivery of Cisco cryptographic products does not imply\nthird-party authority to import, export, distribute or use encryption.\nImporters, exporters, distributors and users are responsible for\ncompliance with U.S. and local country laws. By using this product you\nagree to comply with applicable laws and regulations. If you are unable\nto comply with U.S. and local laws, return this product immediately.\n\nA summary of U.S. laws governing Cisco cryptographic products may be found at:\nhttp://www.cisco.com/wwl/export/crypto/tool/stqrg.html\n\nIf you require further assistance please contact us by sending email to\nexport@cisco.com.\n\nLicense Level: ax\nLicense Type: Default. No valid license found.\nNext reload license Level: ax\n\n\nSmart Licensing Status: Smart Licensing is DISABLED\n\ncisco CSR1000V (VXE) processor (revision VXE) with 2392579K/3075K bytes of memory.\nProcessor board ID 9ZL30UN51R9\n3 Gigabit Ethernet interfaces\n32768K bytes of non-volatile configuration memory.\n8113280K bytes of physical memory.\n7774207K bytes of virtual hard disk at bootflash:.\n0K bytes of WebUI ODM Files at webui:.\n\nConfiguration register is 0x2102"
                 ],
                 "stdout_lines": [
                     [
                         "Cisco IOS XE Software, Version 16.09.03",
                         "Cisco IOS Software [Fuji], Virtual XE Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 16.9.3, RELEASE SOFTWARE (fc2)",
                         "Technical Support: http://www.cisco.com/techsupport",
                         "Copyright (c) 1986-2019 by Cisco Systems, Inc.",
                         "Compiled Wed 20-Mar-19 07:56 by mcpre",
                         "",
                         "",
                         "Cisco IOS-XE software, Copyright (c) 2005-2019 by cisco Systems, Inc.",
                         "All rights reserved.  Certain components of Cisco IOS-XE software are",
                         "licensed under the GNU General Public License (\"GPL\") Version 2.0.  The",
                         "software code licensed under GPL Version 2.0 is free software that comes",
                         "with ABSOLUTELY NO WARRANTY.  You can redistribute and/or modify such",
                         "GPL code under the terms of GPL Version 2.0.  For more details, see the",
                         "documentation or \"License Notice\" file accompanying the IOS-XE software,",
                         "or the applicable URL provided on the flyer accompanying the IOS-XE",
                         "software.",
                         "",
                         "",
                         "ROM: IOS-XE ROMMON",
                         "",
                         "csr1000v uptime is 1 day, 2 hours, 8 minutes",
                         "Uptime for this control processor is 1 day, 2 hours, 10 minutes",
                         "System returned to ROM by reload",
                         "System image file is \"bootflash:packages.conf\"",
                         "Last reload reason: reload",
                         "",
                         "",
                         "",
                         "This product contains cryptographic features and is subject to United",
                         "States and local country laws governing import, export, transfer and",
                         "use. Delivery of Cisco cryptographic products does not imply",
                         "third-party authority to import, export, distribute or use encryption.",
                         "Importers, exporters, distributors and users are responsible for",
                         "compliance with U.S. and local country laws. By using this product you",
                         "agree to comply with applicable laws and regulations. If you are unable",
                         "to comply with U.S. and local laws, return this product immediately.",
                         "",
                         "A summary of U.S. laws governing Cisco cryptographic products may be found at:",
                         "http://www.cisco.com/wwl/export/crypto/tool/stqrg.html",
                         "",
                         "If you require further assistance please contact us by sending email to",
                         "export@cisco.com.",
                         "",
                         "License Level: ax",
                         "License Type: Default. No valid license found.",
                         "Next reload license Level: ax",
                         "",
                         "",
                         "Smart Licensing Status: Smart Licensing is DISABLED",
                         "",
                         "cisco CSR1000V (VXE) processor (revision VXE) with 2392579K/3075K bytes of memory.",
                         "Processor board ID 9ZL30UN51R9",
                         "3 Gigabit Ethernet interfaces",
                         "32768K bytes of non-volatile configuration memory.",
                         "8113280K bytes of physical memory.",
                         "7774207K bytes of virtual hard disk at bootflash:.",
                         "0K bytes of WebUI ODM Files at webui:.",
                         "",
                         "Configuration register is 0x2102"
                     ]
                 ]
             },
             {
                 "ansible_loop_var": "item",
                 "changed": false,
                 "failed": false,
                 "invocation": {
                     "module_args": {
                         "auth_pass": null,
                         "authorize": null,
                         "commands": [
                             "show inventory"
                         ],
                         "host": null,
                         "interval": 1,
                         "match": "all",
                         "password": null,
                         "port": null,
                         "provider": null,
                         "retries": 10,
                         "ssh_keyfile": null,
                         "timeout": null,
                         "username": null,
                         "wait_for": null
                     }
                 },
                 "item": "show inventory",
                 "stdout": [
                     "NAME: \"Chassis\", DESCR: \"Cisco CSR1000V Chassis\"\nPID: CSR1000V          , VID: V00  , SN: 9ZL30UN51R9\n\nNAME: \"module R0\", DESCR: \"Cisco CSR1000V Route Processor\"\nPID: CSR1000V          , VID: V00  , SN: JAB1303001C\n\nNAME: \"module F0\", DESCR: \"Cisco CSR1000V Embedded Services Processor\"\nPID: CSR1000V          , VID:      , SN:"
                 ],
                 "stdout_lines": [
                     [
                         "NAME: \"Chassis\", DESCR: \"Cisco CSR1000V Chassis\"",
                         "PID: CSR1000V          , VID: V00  , SN: 9ZL30UN51R9",
                         "",
                         "NAME: \"module R0\", DESCR: \"Cisco CSR1000V Route Processor\"",
                         "PID: CSR1000V          , VID: V00  , SN: JAB1303001C",
                         "",
                         "NAME: \"module F0\", DESCR: \"Cisco CSR1000V Embedded Services Processor\"",
                         "PID: CSR1000V          , VID:      , SN:"
                     ]
                 ]
             },
             {
                 "ansible_loop_var": "item",
                 "changed": false,
                 "failed": false,
                 "invocation": {
                     "module_args": {
                         "auth_pass": null,
                         "authorize": null,
                         "commands": [
                             "show ip int br"
                         ],
                         "host": null,
                         "interval": 1,
                         "match": "all",
                         "password": null,
                         "port": null,
                         "provider": null,
                         "retries": 10,
                         "ssh_keyfile": null,
                         "timeout": null,
                         "username": null,
                         "wait_for": null
                     }
                 },
                 "item": "show ip int br",
                 "stdout": [
                     "Interface              IP-Address      OK? Method Status                Protocol\nGigabitEthernet1       10.10.20.48     YES NVRAM  up                    up      \nGigabitEthernet2       10.255.255.1    YES other  up                    up      \nGigabitEthernet3       unassigned      YES NVRAM  administratively down down    \nLoopback18             172.16.100.18   YES other  up                    up      \nLoopback702            172.17.2.1      YES other  up                    up      \nLoopback710            172.17.10.1     YES other  up                    up      \nLoopback2101           172.20.1.1      YES other  up                    up      \nLoopback2102           172.20.2.1      YES other  up                    up      \nLoopback2103           172.20.3.1      YES other  up                    up      \nLoopback2104           172.20.4.1      YES other  up                    up      \nLoopback2105           172.20.5.1      YES other  up                    up      \nLoopback2107           172.20.7.1      YES other  up                    up      \nLoopback2108           172.20.8.1      YES other  up                    up      \nLoopback2109           172.20.9.1      YES other  up                    up      \nLoopback2111           172.20.11.1     YES other  up                    up      \nLoopback2112           172.20.12.1     YES other  up                    up      \nLoopback2113           172.20.13.1     YES other  up                    up      \nLoopback2114           172.20.14.1     YES other  up                    up      \nLoopback2116           172.20.16.1     YES other  up                    up      \nLoopback2117           172.20.17.1     YES other  up                    up      \nLoopback2119           172.20.19.19    YES other  up                    up      \nLoopback2121           172.20.21.1     YES other  up                    up      \nLoopback3115           172.20.15.1     YES other  up                    up"
                 ],
                 "stdout_lines": [
                     [
                         "Interface              IP-Address      OK? Method Status                Protocol",
                         "GigabitEthernet1       10.10.20.48     YES NVRAM  up                    up      ",
                         "GigabitEthernet2       10.255.255.1    YES other  up                    up      ",
                         "GigabitEthernet3       unassigned      YES NVRAM  administratively down down    ",
                         "Loopback18             172.16.100.18   YES other  up                    up      ",
                         "Loopback702            172.17.2.1      YES other  up                    up      ",
                         "Loopback710            172.17.10.1     YES other  up                    up      ",
                         "Loopback2101           172.20.1.1      YES other  up                    up      ",
                         "Loopback2102           172.20.2.1      YES other  up                    up      ",
                         "Loopback2103           172.20.3.1      YES other  up                    up      ",
                         "Loopback2104           172.20.4.1      YES other  up                    up      ",
                         "Loopback2105           172.20.5.1      YES other  up                    up      ",
                         "Loopback2107           172.20.7.1      YES other  up                    up      ",
                         "Loopback2108           172.20.8.1      YES other  up                    up      ",
                         "Loopback2109           172.20.9.1      YES other  up                    up      ",
                         "Loopback2111           172.20.11.1     YES other  up                    up      ",
                         "Loopback2112           172.20.12.1     YES other  up                    up      ",
                         "Loopback2113           172.20.13.1     YES other  up                    up      ",
                         "Loopback2114           172.20.14.1     YES other  up                    up      ",
                         "Loopback2116           172.20.16.1     YES other  up                    up      ",
                         "Loopback2117           172.20.17.1     YES other  up                    up      ",
                         "Loopback2119           172.20.19.19    YES other  up                    up      ",
                         "Loopback2121           172.20.21.1     YES other  up                    up      ",
                         "Loopback3115           172.20.15.1     YES other  up                    up"
                     ]
                 ]
             },
             {
                 "ansible_loop_var": "item",
                 "changed": false,
                 "failed": false,
                 "invocation": {
                     "module_args": {
                         "auth_pass": null,
                         "authorize": null,
                         "commands": [
                             "show ip route"
                         ],
                         "host": null,
                         "interval": 1,
                         "match": "all",
                         "password": null,
                         "port": null,
                         "provider": null,
                         "retries": 10,
                         "ssh_keyfile": null,
                         "timeout": null,
                         "username": null,
                         "wait_for": null
                     }
                 },
                 "item": "show ip route",
                 "stdout": [
                     "Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP\n       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area \n       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2\n       E1 - OSPF external type 1, E2 - OSPF external type 2\n       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2\n       ia - IS-IS inter area, * - candidate default, U - per-user static route\n       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP\n       a - application route\n       + - replicated route, % - next hop override, p - overrides from PfR\n\nGateway of last resort is 10.10.20.254 to network 0.0.0.0\n\nS*    0.0.0.0/0 [1/0] via 10.10.20.254, GigabitEthernet1\n      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks\nC        10.10.20.0/24 is directly connected, GigabitEthernet1\nL        10.10.20.48/32 is directly connected, GigabitEthernet1\nC        10.255.255.0/24 is directly connected, GigabitEthernet2\nL        10.255.255.1/32 is directly connected, GigabitEthernet2\n      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks\nC        172.16.100.0/24 is directly connected, Loopback18\nL        172.16.100.18/32 is directly connected, Loopback18\n      172.17.0.0/16 is variably subnetted, 4 subnets, 2 masks\nC        172.17.2.0/24 is directly connected, Loopback702\nL        172.17.2.1/32 is directly connected, Loopback702\nC        172.17.10.0/24 is directly connected, Loopback710\nL        172.17.10.1/32 is directly connected, Loopback710\n      172.20.0.0/16 is variably subnetted, 34 subnets, 2 masks\nC        172.20.1.0/24 is directly connected, Loopback2101\nL        172.20.1.1/32 is directly connected, Loopback2101\nC        172.20.2.0/24 is directly connected, Loopback2102\nL        172.20.2.1/32 is directly connected, Loopback2102\nC        172.20.3.0/24 is directly connected, Loopback2103\nL        172.20.3.1/32 is directly connected, Loopback2103\nC        172.20.4.0/24 is directly connected, Loopback2104\nL        172.20.4.1/32 is directly connected, Loopback2104\nC        172.20.5.0/24 is directly connected, Loopback2105\nL        172.20.5.1/32 is directly connected, Loopback2105\nC        172.20.7.0/24 is directly connected, Loopback2107\nL        172.20.7.1/32 is directly connected, Loopback2107\nC        172.20.8.0/24 is directly connected, Loopback2108\nL        172.20.8.1/32 is directly connected, Loopback2108\nC        172.20.9.0/24 is directly connected, Loopback2109\nL        172.20.9.1/32 is directly connected, Loopback2109\nC        172.20.11.0/24 is directly connected, Loopback2111\nL        172.20.11.1/32 is directly connected, Loopback2111\nC        172.20.12.0/24 is directly connected, Loopback2112\nL        172.20.12.1/32 is directly connected, Loopback2112\nC        172.20.13.0/24 is directly connected, Loopback2113\nL        172.20.13.1/32 is directly connected, Loopback2113\nC        172.20.14.0/24 is directly connected, Loopback2114\nL        172.20.14.1/32 is directly connected, Loopback2114\nC        172.20.15.0/24 is directly connected, Loopback3115\nL        172.20.15.1/32 is directly connected, Loopback3115\nC        172.20.16.0/24 is directly connected, Loopback2116\nL        172.20.16.1/32 is directly connected, Loopback2116\nC        172.20.17.0/24 is directly connected, Loopback2117\nL        172.20.17.1/32 is directly connected, Loopback2117\nC        172.20.19.0/24 is directly connected, Loopback2119\nL        172.20.19.19/32 is directly connected, 
Loopback2119\nC        172.20.21.0/24 is directly connected, Loopback2121\nL        172.20.21.1/32 is directly connected, Loopback2121"
                 ],
                 "stdout_lines": [
                     [
                         "Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP",
                         "       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area ",
                         "       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2",
                         "       E1 - OSPF external type 1, E2 - OSPF external type 2",
                         "       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2",
                         "       ia - IS-IS inter area, * - candidate default, U - per-user static route",
                         "       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP",
                         "       a - application route",
                         "       + - replicated route, % - next hop override, p - overrides from PfR",
                         "",
                         "Gateway of last resort is 10.10.20.254 to network 0.0.0.0",
                         "",
                         "S*    0.0.0.0/0 [1/0] via 10.10.20.254, GigabitEthernet1",
                         "      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks",
                         "C        10.10.20.0/24 is directly connected, GigabitEthernet1",
                         "L        10.10.20.48/32 is directly connected, GigabitEthernet1",
                         "C        10.255.255.0/24 is directly connected, GigabitEthernet2",
                         "L        10.255.255.1/32 is directly connected, GigabitEthernet2",
                         "      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks",
                         "C        172.16.100.0/24 is directly connected, Loopback18",
                         "L        172.16.100.18/32 is directly connected, Loopback18",
                         "      172.17.0.0/16 is variably subnetted, 4 subnets, 2 masks",
                         "C        172.17.2.0/24 is directly connected, Loopback702",
                         "L        172.17.2.1/32 is directly connected, Loopback702",
                         "C        172.17.10.0/24 is directly connected, Loopback710",
                         "L        172.17.10.1/32 is directly connected, Loopback710",
                         "      172.20.0.0/16 is variably subnetted, 34 subnets, 2 masks",
                         "C        172.20.1.0/24 is directly connected, Loopback2101",
                         "L        172.20.1.1/32 is directly connected, Loopback2101",
                         "C        172.20.2.0/24 is directly connected, Loopback2102",
                         "L        172.20.2.1/32 is directly connected, Loopback2102",
                         "C        172.20.3.0/24 is directly connected, Loopback2103",
                         "L        172.20.3.1/32 is directly connected, Loopback2103",
                         "C        172.20.4.0/24 is directly connected, Loopback2104",
                         "L        172.20.4.1/32 is directly connected, Loopback2104",
                         "C        172.20.5.0/24 is directly connected, Loopback2105",
                         "L        172.20.5.1/32 is directly connected, Loopback2105",
                         "C        172.20.7.0/24 is directly connected, Loopback2107",
                         "L        172.20.7.1/32 is directly connected, Loopback2107",
                         "C        172.20.8.0/24 is directly connected, Loopback2108",
                         "L        172.20.8.1/32 is directly connected, Loopback2108",
                         "C        172.20.9.0/24 is directly connected, Loopback2109",
                         "L        172.20.9.1/32 is directly connected, Loopback2109",
                         "C        172.20.11.0/24 is directly connected, Loopback2111",
                         "L        172.20.11.1/32 is directly connected, Loopback2111",
                         "C        172.20.12.0/24 is directly connected, Loopback2112",
                         "L        172.20.12.1/32 is directly connected, Loopback2112",
                         "C        172.20.13.0/24 is directly connected, Loopback2113",
                         "L        172.20.13.1/32 is directly connected, Loopback2113",
                         "C        172.20.14.0/24 is directly connected, Loopback2114",
                         "L        172.20.14.1/32 is directly connected, Loopback2114",
                         "C        172.20.15.0/24 is directly connected, Loopback3115",
                         "L        172.20.15.1/32 is directly connected, Loopback3115",
                         "C        172.20.16.0/24 is directly connected, Loopback2116",
                         "L        172.20.16.1/32 is directly connected, Loopback2116",
                         "C        172.20.17.0/24 is directly connected, Loopback2117",
                         "L        172.20.17.1/32 is directly connected, Loopback2117",
                         "C        172.20.19.0/24 is directly connected, Loopback2119",
                         "L        172.20.19.19/32 is directly connected, Loopback2119",
                         "C        172.20.21.0/24 is directly connected, Loopback2121",
                         "L        172.20.21.1/32 is directly connected, Loopback2121"
                     ]
                 ]
             }
         ]
     }
 }
 TASK [copy] **
 changed: [ios-xe-mgmt.cisco.com -> localhost]
 TASK [copy] **
 changed: [ios-xe-mgmt.cisco.com -> localhost]
 TASK [Generate Device Show Command File(s)] 
 changed: [ios-xe-mgmt.cisco.com] => (item={'msg': u'All items completed', 'deprecations': [{'msg': u'Distribution Ubuntu 19.04 on host ios-xe-mgmt.cisco.com should use /usr/bin/python3, but is using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible release will default to using the discovered platform python for this host. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information', 'version': u'2.12'}], 'changed': False, 'results': [{'ansible_loop_var': u'item', u'stdout': [u"Building configuration…\n\nCurrent configuration : 6228 bytes\n!\n! Last configuration change at 23:15:26 UTC Sat Nov 23 2019 by root\n!\nversion 16.9\nservice timestamps debug datetime msec\nservice timestamps log datetime msec\nplatform qfp utilization monitor load 80\nno platform punt-keepalive disable-kernel-core\nplatform console virtual\n!\nhostname csr1000v\n!\nboot-start-marker\nboot-end-marker\n!\n!\nno logging console\nenable secret 5 $1$gkJ1$EofN9ajW9k18SoRTgkhYr/\n!\nno aaa new-model\n!\n!\n!\n!\n!\n!\n!\nip domain name abc.inc\n!\n!\n!\nlogin on-success log\n!\n!\n!\n!\n!\n!\n!\nsubscriber templating\n! \n! \n! \n! \n!\nmultilink bundle-name authenticated\n!\n!\n!\n!\n!\ncrypto pki trustpoint TP-self-signed-1530096085\n enrollment selfsigned\n subject-name cn=IOS-Self-Signed-Certificate-1530096085\n revocation-check none\n rsakeypair TP-self-signed-1530096085\n!\n!\ncrypto pki certificate chain TP-self-signed-1530096085\n certificate self-signed 01\n  30820330 30820218 A0030201 02020101 300D0609 2A864886 F70D0101 05050030 \n  31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274 \n  69666963 6174652D 31353330 30393630 3835301E 170D3139 30353135 31353230 \n  34305A17 0D333030 31303130 30303030 305A3031 312F302D 06035504 03132649 \n  4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D31 35333030 \n  39363038 35308201 22300D06 092A8648 86F70D01 01010500 0382010F 00308201 \n  0A028201 0100B239 1ADC578A 8FD99454 BC1BE3E4 38E9CF35 D1D2420E 53D62D27 \n  92220CF4 A1AD3126 76B809F0 F227D539 3E371330 8C7767EA 2F22A811 7CA7B88F \n  26EE73B8 9925DAFF E2453823 BCF29423 DACB3CE9 92238E44 18E1834F A6D8ABCA \n  C6B686E5 ACD87A90 AF9EAE89 093BBEDC 1E2E2AEE 989C4B8C 7D53DBE4 57AE8D66 \n  2424721F 3C66A5AC 24A77372 EC6691CE 61B8DF71 A327F668 A9C76D2D EE364206 \n  2713286B 7127CB29 57010489 D350BC1B E19C548E D63B0609 3FB63FFE DAD9CBAE \n  26A60DB8 A2C51F1D B75577DF 4CA4879C A36E545F C221760D E1308E74 35399E91 \n  8A7075CD 498E7439 BBFC72A7 9217389D 8C1787FF 5AC1ECCA 36D9AE5C 8564AD06 \n  4CD176B2 EB690203 010001A3 53305130 0F060355 1D130101 FF040530 030101FF \n  301F0603 551D2304 18301680 142A4179 9A2DB89D 21F5780E A6170B83 D01CF664 \n  17301D06 03551D0E 04160414 2A41799A 2DB89D21 F5780EA6 170B83D0 1CF66417 \n  300D0609 2A864886 F70D0101 05050003 82010100 5469C02A ACD746F5 FAA7ADD6 \n  53BF195C B0FE9815 EC401671 0FDB9C8A 91571EA0 0F1748BA BA7DEFEE 41889B7B \n  58F280B7 6FD9D433 B53E5EA4 860014A6 01408E1C 12212B34 499CFC91 9AD075B8 \n  7300AF75 A836A2A4 588B4B91 2E72DF0D DA9EA3CD 7CE8D3E3 4990A6D5 5F46634A \n  5518C7C1 34B5B5D7 44EAF2A8 0DFB4762 4F2450BE D3D0D5E3 F026015D DFF04762 \n  AA3E3332 07FEF910 D895D4D8 D673E2DB D7534719 F86C0BA8 ACAB3057 6E50A289 \n  4D1EB2F9 9D24EA20 B0ADA198 037450F4 C606864A A6C7C060 5099D394 FF68F570 \n  4D9F84E6 2B1238B9 32D7FABB F9632EA7 BA8597E8 63802AD9 B92187DF 53935107 \n  5B6C962B 805A8031 F268C32C B1338EAB 3E9A2958\n  \tquit\n!\n!\n!\n!\n!\n!\n!\n!\nlicense udi pid 
CSR1000V sn 9ZL30UN51R9\nlicense boot level ax\nno license smart enable\ndiagnostic bootup level minimal\n!\nspanning-tree extend system-id\n!\nnetconf-yang\n!\nrestconf\n!\nusername developer privilege 15 secret 5 $1$HtLC$7Kj3hGBoDnSHzdEeR/2ix.\nusername cisco privilege 15 secret 5 $1$aO1Y$0AFVz00ON.hE4WkY.BeYq.\nusername root privilege 15 secret 5 $1$vpY7$mh9d69ui3koSaITBi8k9D/\n!\nredundancy\n!\n!\n!\n!\n!\n!\n! \n!\n!\n!\n!\n!\n!\n!\n!\n!\n!\n!\n!\n! \n! \n!\n!\ninterface Loopback18\n description Configured by RESTCONF\n ip address 172.16.100.18 255.255.255.0\n!\ninterface Loopback702\n description Configured by charlotte\n ip address 172.17.2.1 255.255.255.0\n!\ninterface Loopback710\n description Configured by seb\n ip address 172.17.10.1 255.255.255.0\n!\ninterface Loopback2101\n description Configured by RESTCONF\n ip address 172.20.1.1 255.255.255.0\n!\ninterface Loopback2102\n description Configured by Charlotte\n ip address 172.20.2.1 255.255.255.0\n!\ninterface Loopback2103\n description Configured by OWEN\n ip address 172.20.3.1 255.255.255.0\n!\ninterface Loopback2104\n description Configured by RESTCONF\n ip address 172.20.4.1 255.255.255.0\n!\ninterface Loopback2105\n description Configured by RESTCONF\n ip address 172.20.5.1 255.255.255.0\n!\ninterface Loopback2107\n description Configured by Josia\n ip address 172.20.7.1 255.255.255.0\n!\ninterface Loopback2108\n description Configured by RESTCONF\n ip address 172.20.8.1 255.255.255.0\n!\ninterface Loopback2109\n description Configured by RESTCONF\n ip address 172.20.9.1 255.255.255.0\n!\ninterface Loopback2111\n description Configured by RESTCONF\n ip address 172.20.11.1 255.255.255.0\n!\ninterface Loopback2112\n description Configured by RESTCONF\n ip address 172.20.12.1 255.255.255.0\n!\ninterface Loopback2113\n description Configured by RESTCONF\n ip address 172.20.13.1 255.255.255.0\n!\ninterface Loopback2114\n description Configured by RESTCONF\n ip address 172.20.14.1 255.255.255.0\n!\ninterface Loopback2116\n description Configured by RESTCONF\n ip address 172.20.16.1 255.255.255.0\n!\ninterface Loopback2117\n description Configured by RESTCONF\n ip address 172.20.17.1 255.255.255.0\n!\ninterface Loopback2119\n description Configured by RESTCONF\n ip address 172.20.19.19 255.255.255.0\n!\ninterface Loopback2121\n description Configured by RESTCONF\n ip address 172.20.21.1 255.255.255.0\n!\ninterface Loopback3115\n description Configured by Breuvage\n ip address 172.20.15.1 255.255.255.0\n!\ninterface GigabitEthernet1\n description MANAGEMENT INTERFACE - DON'T TOUCH ME\n ip address 10.10.20.48 255.255.255.0\n negotiation auto\n no mop enabled\n no mop sysid\n!\ninterface GigabitEthernet2\n description Configured by RESTCONF\n ip address 10.255.255.1 255.255.255.0\n negotiation auto\n no mop enabled\n no mop sysid\n!\ninterface GigabitEthernet3\n description Network Interface\n no ip address\n shutdown\n negotiation auto\n no mop enabled\n no mop sysid\n!\nip forward-protocol nd\nip http server\nip http authentication local\nip http secure-server\nip route 0.0.0.0 0.0.0.0 GigabitEthernet1 10.10.20.254\n!\nip ssh rsa keypair-name ssh-key\nip ssh version 2\nip scp server enable\n!\n!\n!\n!\n!\ncontrol-plane\n!\n!\n!\n!\n!\nbanner motd ^C\nWelcome to the DevNet Sandbox for CSR1000v and IOS XE\n\nThe following programmability features are already enabled:\n  - NETCONF\n  - RESTCONF\n\nThanks for stopping by.\n^C\n!\nline con 0\n exec-timeout 0 0\n stopbits 1\nline vty 0 4\n login local\n transport input ssh\n!\nntp 
logging\nntp authenticate\n!\n!\n!\n!\n!\nend"], u'changed': False, 'failed': False, 'item': u'show run', u'invocation': {u'module_args': {u'username': None, u'authorize': None, u'commands': [u'show run'], u'interval': 1, u'retries': 10, u'auth_pass': None, u'wait_for': None, u'host': None, u'ssh_keyfile': None, u'timeout': None, u'provider': None, u'password': None, u'port': None, u'match': u'all'}}, u'stdout_lines': [[u'Building configuration…', u'', u'Current configuration : 6228 bytes', u'!', u'! Last configuration change at 23:15:26 UTC Sat Nov 23 2019 by root', u'!', u'version 16.9', u'service timestamps debug datetime msec', u'service timestamps log datetime msec', u'platform qfp utilization monitor load 80', u'no platform punt-keepalive disable-kernel-core', u'platform console virtual', u'!', u'hostname csr1000v', u'!', u'boot-start-marker', u'boot-end-marker', u'!', u'!', u'no logging console', u'enable secret 5 $1$gkJ1$EofN9ajW9k18SoRTgkhYr/', u'!', u'no aaa new-model', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'ip domain name abc.inc', u'!', u'!', u'!', u'login on-success log', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'subscriber templating', u'! ', u'! ', u'! ', u'! ', u'!', u'multilink bundle-name authenticated', u'!', u'!', u'!', u'!', u'!', u'crypto pki trustpoint TP-self-signed-1530096085', u' enrollment selfsigned', u' subject-name cn=IOS-Self-Signed-Certificate-1530096085', u' revocation-check none', u' rsakeypair TP-self-signed-1530096085', u'!', u'!', u'crypto pki certificate chain TP-self-signed-1530096085', u' certificate self-signed 01', u'  30820330 30820218 A0030201 02020101 300D0609 2A864886 F70D0101 05050030 ', u'  31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274 ', u'  69666963 6174652D 31353330 30393630 3835301E 170D3139 30353135 31353230 ', u'  34305A17 0D333030 31303130 30303030 305A3031 312F302D 06035504 03132649 ', u'  4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D31 35333030 ', u'  39363038 35308201 22300D06 092A8648 86F70D01 01010500 0382010F 00308201 ', u'  0A028201 0100B239 1ADC578A 8FD99454 BC1BE3E4 38E9CF35 D1D2420E 53D62D27 ', u'  92220CF4 A1AD3126 76B809F0 F227D539 3E371330 8C7767EA 2F22A811 7CA7B88F ', u'  26EE73B8 9925DAFF E2453823 BCF29423 DACB3CE9 92238E44 18E1834F A6D8ABCA ', u'  C6B686E5 ACD87A90 AF9EAE89 093BBEDC 1E2E2AEE 989C4B8C 7D53DBE4 57AE8D66 ', u'  2424721F 3C66A5AC 24A77372 EC6691CE 61B8DF71 A327F668 A9C76D2D EE364206 ', u'  2713286B 7127CB29 57010489 D350BC1B E19C548E D63B0609 3FB63FFE DAD9CBAE ', u'  26A60DB8 A2C51F1D B75577DF 4CA4879C A36E545F C221760D E1308E74 35399E91 ', u'  8A7075CD 498E7439 BBFC72A7 9217389D 8C1787FF 5AC1ECCA 36D9AE5C 8564AD06 ', u'  4CD176B2 EB690203 010001A3 53305130 0F060355 1D130101 FF040530 030101FF ', u'  301F0603 551D2304 18301680 142A4179 9A2DB89D 21F5780E A6170B83 D01CF664 ', u'  17301D06 03551D0E 04160414 2A41799A 2DB89D21 F5780EA6 170B83D0 1CF66417 ', u'  300D0609 2A864886 F70D0101 05050003 82010100 5469C02A ACD746F5 FAA7ADD6 ', u'  53BF195C B0FE9815 EC401671 0FDB9C8A 91571EA0 0F1748BA BA7DEFEE 41889B7B ', u'  58F280B7 6FD9D433 B53E5EA4 860014A6 01408E1C 12212B34 499CFC91 9AD075B8 ', u'  7300AF75 A836A2A4 588B4B91 2E72DF0D DA9EA3CD 7CE8D3E3 4990A6D5 5F46634A ', u'  5518C7C1 34B5B5D7 44EAF2A8 0DFB4762 4F2450BE D3D0D5E3 F026015D DFF04762 ', u'  AA3E3332 07FEF910 D895D4D8 D673E2DB D7534719 F86C0BA8 ACAB3057 6E50A289 ', u'  4D1EB2F9 9D24EA20 B0ADA198 037450F4 C606864A A6C7C060 5099D394 FF68F570 ', u'  4D9F84E6 2B1238B9 32D7FABB F9632EA7 BA8597E8 63802AD9 B92187DF 
53935107 ', u'  5B6C962B 805A8031 F268C32C B1338EAB 3E9A2958', u'  \tquit', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'license udi pid CSR1000V sn 9ZL30UN51R9', u'license boot level ax', u'no license smart enable', u'diagnostic bootup level minimal', u'!', u'spanning-tree extend system-id', u'!', u'netconf-yang', u'!', u'restconf', u'!', u'username developer privilege 15 secret 5 $1$HtLC$7Kj3hGBoDnSHzdEeR/2ix.', u'username cisco privilege 15 secret 5 $1$aO1Y$0AFVz00ON.hE4WkY.BeYq.', u'username root privilege 15 secret 5 $1$vpY7$mh9d69ui3koSaITBi8k9D/', u'!', u'redundancy', u'!', u'!', u'!', u'!', u'!', u'!', u'! ', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'!', u'! ', u'! ', u'!', u'!', u'interface Loopback18', u' description Configured by RESTCONF', u' ip address 172.16.100.18 255.255.255.0', u'!', u'interface Loopback702', u' description Configured by charlotte', u' ip address 172.17.2.1 255.255.255.0', u'!', u'interface Loopback710', u' description Configured by seb', u' ip address 172.17.10.1 255.255.255.0', u'!', u'interface Loopback2101', u' description Configured by RESTCONF', u' ip address 172.20.1.1 255.255.255.0', u'!', u'interface Loopback2102', u' description Configured by Charlotte', u' ip address 172.20.2.1 255.255.255.0', u'!', u'interface Loopback2103', u' description Configured by OWEN', u' ip address 172.20.3.1 255.255.255.0', u'!', u'interface Loopback2104', u' description Configured by RESTCONF', u' ip address 172.20.4.1 255.255.255.0', u'!', u'interface Loopback2105', u' description Configured by RESTCONF', u' ip address 172.20.5.1 255.255.255.0', u'!', u'interface Loopback2107', u' description Configured by Josia', u' ip address 172.20.7.1 255.255.255.0', u'!', u'interface Loopback2108', u' description Configured by RESTCONF', u' ip address 172.20.8.1 255.255.255.0', u'!', u'interface Loopback2109', u' description Configured by RESTCONF', u' ip address 172.20.9.1 255.255.255.0', u'!', u'interface Loopback2111', u' description Configured by RESTCONF', u' ip address 172.20.11.1 255.255.255.0', u'!', u'interface Loopback2112', u' description Configured by RESTCONF', u' ip address 172.20.12.1 255.255.255.0', u'!', u'interface Loopback2113', u' description Configured by RESTCONF', u' ip address 172.20.13.1 255.255.255.0', u'!', u'interface Loopback2114', u' description Configured by RESTCONF', u' ip address 172.20.14.1 255.255.255.0', u'!', u'interface Loopback2116', u' description Configured by RESTCONF', u' ip address 172.20.16.1 255.255.255.0', u'!', u'interface Loopback2117', u' description Configured by RESTCONF', u' ip address 172.20.17.1 255.255.255.0', u'!', u'interface Loopback2119', u' description Configured by RESTCONF', u' ip address 172.20.19.19 255.255.255.0', u'!', u'interface Loopback2121', u' description Configured by RESTCONF', u' ip address 172.20.21.1 255.255.255.0', u'!', u'interface Loopback3115', u' description Configured by Breuvage', u' ip address 172.20.15.1 255.255.255.0', u'!', u'interface GigabitEthernet1', u" description MANAGEMENT INTERFACE - DON'T TOUCH ME", u' ip address 10.10.20.48 255.255.255.0', u' negotiation auto', u' no mop enabled', u' no mop sysid', u'!', u'interface GigabitEthernet2', u' description Configured by RESTCONF', u' ip address 10.255.255.1 255.255.255.0', u' negotiation auto', u' no mop enabled', u' no mop sysid', u'!', u'interface GigabitEthernet3', u' description Network Interface', u' no ip address', u' shutdown', u' negotiation auto', u' no mop enabled', u' no mop sysid', u'!', u'ip 
forward-protocol nd', u'ip http server', u'ip http authentication local', u'ip http secure-server', u'ip route 0.0.0.0 0.0.0.0 GigabitEthernet1 10.10.20.254', u'!', u'ip ssh rsa keypair-name ssh-key', u'ip ssh version 2', u'ip scp server enable', u'!', u'!', u'!', u'!', u'!', u'control-plane', u'!', u'!', u'!', u'!', u'!', u'banner motd ^C', u'Welcome to the DevNet Sandbox for CSR1000v and IOS XE', u'', u'The following programmability features are already enabled:', u'  - NETCONF', u'  - RESTCONF', u'', u'Thanks for stopping by.', u'^C', u'!', u'line con 0', u' exec-timeout 0 0', u' stopbits 1', u'line vty 0 4', u' login local', u' transport input ssh', u'!', u'ntp logging', u'ntp authenticate', u'!', u'!', u'!', u'!', u'!', u'end']], 'ansible_facts': {u'discovered_interpreter_python': u'/usr/bin/python'}}, {'item': u'show version', 'ansible_loop_var': u'item', 'failed': False, u'stdout': [u'Cisco IOS XE Software, Version 16.09.03\nCisco IOS Software [Fuji], Virtual XE Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 16.9.3, RELEASE SOFTWARE (fc2)\nTechnical Support: http://www.cisco.com/techsupport\nCopyright (c) 1986-2019 by Cisco Systems, Inc.\nCompiled Wed 20-Mar-19 07:56 by mcpre\n\n\nCisco IOS-XE software, Copyright (c) 2005-2019 by cisco Systems, Inc.\nAll rights reserved.  Certain components of Cisco IOS-XE software are\nlicensed under the GNU General Public License ("GPL") Version 2.0.  The\nsoftware code licensed under GPL Version 2.0 is free software that comes\nwith ABSOLUTELY NO WARRANTY.  You can redistribute and/or modify such\nGPL code under the terms of GPL Version 2.0.  For more details, see the\ndocumentation or "License Notice" file accompanying the IOS-XE software,\nor the applicable URL provided on the flyer accompanying the IOS-XE\nsoftware.\n\n\nROM: IOS-XE ROMMON\n\ncsr1000v uptime is 1 day, 2 hours, 8 minutes\nUptime for this control processor is 1 day, 2 hours, 10 minutes\nSystem returned to ROM by reload\nSystem image file is "bootflash:packages.conf"\nLast reload reason: reload\n\n\n\nThis product contains cryptographic features and is subject to United\nStates and local country laws governing import, export, transfer and\nuse. Delivery of Cisco cryptographic products does not imply\nthird-party authority to import, export, distribute or use encryption.\nImporters, exporters, distributors and users are responsible for\ncompliance with U.S. and local country laws. By using this product you\nagree to comply with applicable laws and regulations. If you are unable\nto comply with U.S. and local laws, return this product immediately.\n\nA summary of U.S. laws governing Cisco cryptographic products may be found at:\nhttp://www.cisco.com/wwl/export/crypto/tool/stqrg.html\n\nIf you require further assistance please contact us by sending email to\nexport@cisco.com.\n\nLicense Level: ax\nLicense Type: Default. 
No valid license found.\nNext reload license Level: ax\n\n\nSmart Licensing Status: Smart Licensing is DISABLED\n\ncisco CSR1000V (VXE) processor (revision VXE) with 2392579K/3075K bytes of memory.\nProcessor board ID 9ZL30UN51R9\n3 Gigabit Ethernet interfaces\n32768K bytes of non-volatile configuration memory.\n8113280K bytes of physical memory.\n7774207K bytes of virtual hard disk at bootflash:.\n0K bytes of WebUI ODM Files at webui:.\n\nConfiguration register is 0x2102'], u'invocation': {u'module_args': {u'username': None, u'authorize': None, u'commands': [u'show version'], u'interval': 1, u'retries': 10, u'auth_pass': None, u'wait_for': None, u'host': None, u'ssh_keyfile': None, u'timeout': None, u'provider': None, u'password': None, u'port': None, u'match': u'all'}}, u'stdout_lines': [[u'Cisco IOS XE Software, Version 16.09.03', u'Cisco IOS Software [Fuji], Virtual XE Software (X86_64_LINUX_IOSD-UNIVERSALK9-M), Version 16.9.3, RELEASE SOFTWARE (fc2)', u'Technical Support: http://www.cisco.com/techsupport', u'Copyright (c) 1986-2019 by Cisco Systems, Inc.', u'Compiled Wed 20-Mar-19 07:56 by mcpre', u'', u'', u'Cisco IOS-XE software, Copyright (c) 2005-2019 by cisco Systems, Inc.', u'All rights reserved.  Certain components of Cisco IOS-XE software are', u'licensed under the GNU General Public License ("GPL") Version 2.0.  The', u'software code licensed under GPL Version 2.0 is free software that comes', u'with ABSOLUTELY NO WARRANTY.  You can redistribute and/or modify such', u'GPL code under the terms of GPL Version 2.0.  For more details, see the', u'documentation or "License Notice" file accompanying the IOS-XE software,', u'or the applicable URL provided on the flyer accompanying the IOS-XE', u'software.', u'', u'', u'ROM: IOS-XE ROMMON', u'', u'csr1000v uptime is 1 day, 2 hours, 8 minutes', u'Uptime for this control processor is 1 day, 2 hours, 10 minutes', u'System returned to ROM by reload', u'System image file is "bootflash:packages.conf"', u'Last reload reason: reload', u'', u'', u'', u'This product contains cryptographic features and is subject to United', u'States and local country laws governing import, export, transfer and', u'use. Delivery of Cisco cryptographic products does not imply', u'third-party authority to import, export, distribute or use encryption.', u'Importers, exporters, distributors and users are responsible for', u'compliance with U.S. and local country laws. By using this product you', u'agree to comply with applicable laws and regulations. If you are unable', u'to comply with U.S. and local laws, return this product immediately.', u'', u'A summary of U.S. laws governing Cisco cryptographic products may be found at:', u'http://www.cisco.com/wwl/export/crypto/tool/stqrg.html', u'', u'If you require further assistance please contact us by sending email to', u'export@cisco.com.', u'', u'License Level: ax', u'License Type: Default. 
No valid license found.', u'Next reload license Level: ax', u'', u'', u'Smart Licensing Status: Smart Licensing is DISABLED', u'', u'cisco CSR1000V (VXE) processor (revision VXE) with 2392579K/3075K bytes of memory.', u'Processor board ID 9ZL30UN51R9', u'3 Gigabit Ethernet interfaces', u'32768K bytes of non-volatile configuration memory.', u'8113280K bytes of physical memory.', u'7774207K bytes of virtual hard disk at bootflash:.', u'0K bytes of WebUI ODM Files at webui:.', u'', u'Configuration register is 0x2102']], u'changed': False}, {'item': u'show inventory', 'ansible_loop_var': u'item', 'failed': False, u'stdout': [u'NAME: "Chassis", DESCR: "Cisco CSR1000V Chassis"\nPID: CSR1000V          , VID: V00  , SN: 9ZL30UN51R9\n\nNAME: "module R0", DESCR: "Cisco CSR1000V Route Processor"\nPID: CSR1000V          , VID: V00  , SN: JAB1303001C\n\nNAME: "module F0", DESCR: "Cisco CSR1000V Embedded Services Processor"\nPID: CSR1000V          , VID:      , SN:'], u'invocation': {u'module_args': {u'username': None, u'authorize': None, u'commands': [u'show inventory'], u'interval': 1, u'retries': 10, u'auth_pass': None, u'wait_for': None, u'host': None, u'ssh_keyfile': None, u'timeout': None, u'provider': None, u'password': None, u'port': None, u'match': u'all'}}, u'stdout_lines': [[u'NAME: "Chassis", DESCR: "Cisco CSR1000V Chassis"', u'PID: CSR1000V          , VID: V00  , SN: 9ZL30UN51R9', u'', u'NAME: "module R0", DESCR: "Cisco CSR1000V Route Processor"', u'PID: CSR1000V          , VID: V00  , SN: JAB1303001C', u'', u'NAME: "module F0", DESCR: "Cisco CSR1000V Embedded Services Processor"', u'PID: CSR1000V          , VID:      , SN:']], u'changed': False}, {'item': u'show ip int br', 'ansible_loop_var': u'item', 'failed': False, u'stdout': [u'Interface              IP-Address      OK? 
Method Status                Protocol\nGigabitEthernet1       10.10.20.48     YES NVRAM  up                    up      \nGigabitEthernet2       10.255.255.1    YES other  up                    up      \nGigabitEthernet3       unassigned      YES NVRAM  administratively down down    \nLoopback18             172.16.100.18   YES other  up                    up      \nLoopback702            172.17.2.1      YES other  up                    up      \nLoopback710            172.17.10.1     YES other  up                    up      \nLoopback2101           172.20.1.1      YES other  up                    up      \nLoopback2102           172.20.2.1      YES other  up                    up      \nLoopback2103           172.20.3.1      YES other  up                    up      \nLoopback2104           172.20.4.1      YES other  up                    up      \nLoopback2105           172.20.5.1      YES other  up                    up      \nLoopback2107           172.20.7.1      YES other  up                    up      \nLoopback2108           172.20.8.1      YES other  up                    up      \nLoopback2109           172.20.9.1      YES other  up                    up      \nLoopback2111           172.20.11.1     YES other  up                    up      \nLoopback2112           172.20.12.1     YES other  up                    up      \nLoopback2113           172.20.13.1     YES other  up                    up      \nLoopback2114           172.20.14.1     YES other  up                    up      \nLoopback2116           172.20.16.1     YES other  up                    up      \nLoopback2117           172.20.17.1     YES other  up                    up      \nLoopback2119           172.20.19.19    YES other  up                    up      \nLoopback2121           172.20.21.1     YES other  up                    up      \nLoopback3115           172.20.15.1     YES other  up                    up'], u'invocation': {u'module_args': {u'username': None, u'authorize': None, u'commands': [u'show ip int br'], u'interval': 1, u'retries': 10, u'auth_pass': None, u'wait_for': None, u'host': None, u'ssh_keyfile': None, u'timeout': None, u'provider': None, u'password': None, u'port': None, u'match': u'all'}}, u'stdout_lines': [[u'Interface              IP-Address      OK? 
Method Status                Protocol', u'GigabitEthernet1       10.10.20.48     YES NVRAM  up                    up      ', u'GigabitEthernet2       10.255.255.1    YES other  up                    up      ', u'GigabitEthernet3       unassigned      YES NVRAM  administratively down down    ', u'Loopback18             172.16.100.18   YES other  up                    up      ', u'Loopback702            172.17.2.1      YES other  up                    up      ', u'Loopback710            172.17.10.1     YES other  up                    up      ', u'Loopback2101           172.20.1.1      YES other  up                    up      ', u'Loopback2102           172.20.2.1      YES other  up                    up      ', u'Loopback2103           172.20.3.1      YES other  up                    up      ', u'Loopback2104           172.20.4.1      YES other  up                    up      ', u'Loopback2105           172.20.5.1      YES other  up                    up      ', u'Loopback2107           172.20.7.1      YES other  up                    up      ', u'Loopback2108           172.20.8.1      YES other  up                    up      ', u'Loopback2109           172.20.9.1      YES other  up                    up      ', u'Loopback2111           172.20.11.1     YES other  up                    up      ', u'Loopback2112           172.20.12.1     YES other  up                    up      ', u'Loopback2113           172.20.13.1     YES other  up                    up      ', u'Loopback2114           172.20.14.1     YES other  up                    up      ', u'Loopback2116           172.20.16.1     YES other  up                    up      ', u'Loopback2117           172.20.17.1     YES other  up                    up      ', u'Loopback2119           172.20.19.19    YES other  up                    up      ', u'Loopback2121           172.20.21.1     YES other  up                    up      ', u'Loopback3115           172.20.15.1     YES other  up                    up']], u'changed': False}, {'item': u'show ip route', 'ansible_loop_var': u'item', 'failed': False, u'stdout': [u'Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP\n       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area \n       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2\n       E1 - OSPF external type 1, E2 - OSPF external type 2\n       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2\n       ia - IS-IS inter area, * - candidate default, U - per-user static route\n       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP\n       a - application route\n       + - replicated route, % - next hop override, p - overrides from PfR\n\nGateway of last resort is 10.10.20.254 to network 0.0.0.0\n\nS*    0.0.0.0/0 [1/0] via 10.10.20.254, GigabitEthernet1\n      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks\nC        10.10.20.0/24 is directly connected, GigabitEthernet1\nL        10.10.20.48/32 is directly connected, GigabitEthernet1\nC        10.255.255.0/24 is directly connected, GigabitEthernet2\nL        10.255.255.1/32 is directly connected, GigabitEthernet2\n      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks\nC        172.16.100.0/24 is directly connected, Loopback18\nL        172.16.100.18/32 is directly connected, Loopback18\n      172.17.0.0/16 is variably subnetted, 4 subnets, 2 masks\nC        172.17.2.0/24 is directly connected, Loopback702\nL        172.17.2.1/32 is directly connected, Loopback702\nC        172.17.10.0/24 is 
directly connected, Loopback710\nL        172.17.10.1/32 is directly connected, Loopback710\n      172.20.0.0/16 is variably subnetted, 34 subnets, 2 masks\nC        172.20.1.0/24 is directly connected, Loopback2101\nL        172.20.1.1/32 is directly connected, Loopback2101\nC        172.20.2.0/24 is directly connected, Loopback2102\nL        172.20.2.1/32 is directly connected, Loopback2102\nC        172.20.3.0/24 is directly connected, Loopback2103\nL        172.20.3.1/32 is directly connected, Loopback2103\nC        172.20.4.0/24 is directly connected, Loopback2104\nL        172.20.4.1/32 is directly connected, Loopback2104\nC        172.20.5.0/24 is directly connected, Loopback2105\nL        172.20.5.1/32 is directly connected, Loopback2105\nC        172.20.7.0/24 is directly connected, Loopback2107\nL        172.20.7.1/32 is directly connected, Loopback2107\nC        172.20.8.0/24 is directly connected, Loopback2108\nL        172.20.8.1/32 is directly connected, Loopback2108\nC        172.20.9.0/24 is directly connected, Loopback2109\nL        172.20.9.1/32 is directly connected, Loopback2109\nC        172.20.11.0/24 is directly connected, Loopback2111\nL        172.20.11.1/32 is directly connected, Loopback2111\nC        172.20.12.0/24 is directly connected, Loopback2112\nL        172.20.12.1/32 is directly connected, Loopback2112\nC        172.20.13.0/24 is directly connected, Loopback2113\nL        172.20.13.1/32 is directly connected, Loopback2113\nC        172.20.14.0/24 is directly connected, Loopback2114\nL        172.20.14.1/32 is directly connected, Loopback2114\nC        172.20.15.0/24 is directly connected, Loopback3115\nL        172.20.15.1/32 is directly connected, Loopback3115\nC        172.20.16.0/24 is directly connected, Loopback2116\nL        172.20.16.1/32 is directly connected, Loopback2116\nC        172.20.17.0/24 is directly connected, Loopback2117\nL        172.20.17.1/32 is directly connected, Loopback2117\nC        172.20.19.0/24 is directly connected, Loopback2119\nL        172.20.19.19/32 is directly connected, Loopback2119\nC        172.20.21.0/24 is directly connected, Loopback2121\nL        172.20.21.1/32 is directly connected, Loopback2121'], u'invocation': {u'module_args': {u'username': None, u'authorize': None, u'commands': [u'show ip route'], u'interval': 1, u'retries': 10, u'auth_pass': None, u'wait_for': None, u'host': None, u'ssh_keyfile': None, u'timeout': None, u'provider': None, u'password': None, u'port': None, u'match': u'all'}}, u'stdout_lines': [[u'Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP', u'       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area ', u'       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2', u'       E1 - OSPF external type 1, E2 - OSPF external type 2', u'       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2', u'       ia - IS-IS inter area, * - candidate default, U - per-user static route', u'       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP', u'       a - application route', u'       + - replicated route, % - next hop override, p - overrides from PfR', u'', u'Gateway of last resort is 10.10.20.254 to network 0.0.0.0', u'', u'S*    0.0.0.0/0 [1/0] via 10.10.20.254, GigabitEthernet1', u'      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks', u'C        10.10.20.0/24 is directly connected, GigabitEthernet1', u'L        10.10.20.48/32 is directly connected, GigabitEthernet1', u'C        10.255.255.0/24 is directly 
connected, GigabitEthernet2', u'L        10.255.255.1/32 is directly connected, GigabitEthernet2', u'      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks', u'C        172.16.100.0/24 is directly connected, Loopback18', u'L        172.16.100.18/32 is directly connected, Loopback18', u'      172.17.0.0/16 is variably subnetted, 4 subnets, 2 masks', u'C        172.17.2.0/24 is directly connected, Loopback702', u'L        172.17.2.1/32 is directly connected, Loopback702', u'C        172.17.10.0/24 is directly connected, Loopback710', u'L        172.17.10.1/32 is directly connected, Loopback710', u'      172.20.0.0/16 is variably subnetted, 34 subnets, 2 masks', u'C        172.20.1.0/24 is directly connected, Loopback2101', u'L        172.20.1.1/32 is directly connected, Loopback2101', u'C        172.20.2.0/24 is directly connected, Loopback2102', u'L        172.20.2.1/32 is directly connected, Loopback2102', u'C        172.20.3.0/24 is directly connected, Loopback2103', u'L        172.20.3.1/32 is directly connected, Loopback2103', u'C        172.20.4.0/24 is directly connected, Loopback2104', u'L        172.20.4.1/32 is directly connected, Loopback2104', u'C        172.20.5.0/24 is directly connected, Loopback2105', u'L        172.20.5.1/32 is directly connected, Loopback2105', u'C        172.20.7.0/24 is directly connected, Loopback2107', u'L        172.20.7.1/32 is directly connected, Loopback2107', u'C        172.20.8.0/24 is directly connected, Loopback2108', u'L        172.20.8.1/32 is directly connected, Loopback2108', u'C        172.20.9.0/24 is directly connected, Loopback2109', u'L        172.20.9.1/32 is directly connected, Loopback2109', u'C        172.20.11.0/24 is directly connected, Loopback2111', u'L        172.20.11.1/32 is directly connected, Loopback2111', u'C        172.20.12.0/24 is directly connected, Loopback2112', u'L        172.20.12.1/32 is directly connected, Loopback2112', u'C        172.20.13.0/24 is directly connected, Loopback2113', u'L        172.20.13.1/32 is directly connected, Loopback2113', u'C        172.20.14.0/24 is directly connected, Loopback2114', u'L        172.20.14.1/32 is directly connected, Loopback2114', u'C        172.20.15.0/24 is directly connected, Loopback3115', u'L        172.20.15.1/32 is directly connected, Loopback3115', u'C        172.20.16.0/24 is directly connected, Loopback2116', u'L        172.20.16.1/32 is directly connected, Loopback2116', u'C        172.20.17.0/24 is directly connected, Loopback2117', u'L        172.20.17.1/32 is directly connected, Loopback2117', u'C        172.20.19.0/24 is directly connected, Loopback2119', u'L        172.20.19.19/32 is directly connected, Loopback2119', u'C        172.20.21.0/24 is directly connected, Loopback2121', u'L        172.20.21.1/32 is directly connected, Loopback2121']], u'changed': False}]})
 PLAY RECAP ***
 ios-xe-mgmt.cisco.com      : ok=5    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
 root@c421cab61f1f:/ansible_local/cisco_ios#

The Hunt for a Cisco ACI Lab

As an independent consultant one of the things I have to provide for myself are labs.

It’s a wonderful time for labs! Virtual capabilities and offerings make testing and modeling a client’s network easier than ever.

Cisco DevNet offers “Always On” Sandbox devices that are always there if you need to test a “unit of automation”. Network to Code labs are also available on demand (at a small cost) and are a huge time saver.

But Cisco’s Application Centric Infrastructure (ACI) is a different animal altogether.

Cisco offers a simulator (hardware and VM) (check out the DevNet Always On APIC to see it in action), which is terrific for automation testing and configuration training, but there is no data plane, so testing routing, vPCs, trunks, and contracts is a non-starter. For that you need the hardware…and so began my search for an ACI Lab for rent.

The compilation below is still a work in progress but I thought I would share my findings, so far, over the last 6 months.

First let’s set the context.

I was looking to rent a physical ACI lab (not a simulator) with the following requirements:

  • Gen2 or better hardware (to test ACL logging among other things) and ACI 4.0 or later
  • Accessible (Configurable) Layer 3 device or devices (to test L3 Outs)
  • Accessible (Configurable) Layer 2 device or devices (to test L2)
  • VMM integration (vCenter and ESXi Compute)
  • Pre-configured test VMs
  • My own custom test VMs
  • A means of running Ansible and/or Python within the lab environment and cloning repositories

Going in, I didn’t have a timeframe in mind. I would take a 4-hour block of time or a week, and that is still the case. I also realized that it was unlikely anything for rent would meet all of my requirements, but that was the baseline I would use to assess the various offerings.

A lower-cost option for just a few hours is handy for quick tests. Having a lab for a week is very nice and gives you the latitude to test without the time pressure.

Out of 13 possible options, I’ve tried 4. Two were very good and I would use them again, and two were the opposite.

Recommendations:

While the INE lab is a bit difficult to schedule (you have to plan ahead, which isn’t compatible with my “immediate gratification” approach to things), it’s a terrific lab with configuration access to the L2/L3 devices, which gives you good flexibility.

NterOne also offers a terrific lab, and I was able to schedule it within a week of when I called. The lab is superbly documented and designed to be very easy to understand. I got two pods/tenants, which gave me good flexibility in terms of exporting and testing contracts, etc. The L2 and L3 devices are read-only and pre-configured, so those are a little limiting.

Some observations:

  • So far, no one is running Gen2+ equipment.
  • Almost all of the designs I have seen have single-link L2/L3s, so it’s difficult to test vPCs (and you generally need access to the other device unless it’s been preconfigured for you).
  • All the labs were running ACI 4.x.

Global Knowledge has some interesting offerings and I was very excited initially, but even getting the simplest answer was impossible. Like many larger companies, if you try to deviate from the menu it does not go well. I moved on.

INE, NterOne, and Firefly all spent the time understanding my requirements and offering solutions. Sadly Firefly was way out of my price range.

On a final note, I would avoid My CCIE Rack and CCIE Rack Rentals, which may actually use the same lab. Documentation is terrible; I’ve tried 3 or 4 times and have gotten in about 50% of the time. The first time, I didn’t realize I needed to rent both of their DC labs (one has ACI and the other gives you access to the L2/L3 devices). The last time I rented a lab (both labs) they simply cancelled my labs and never responded to emails either before or after. If you have money you would like to dispose of, send it here (Coral Restoration Foundation Curaçao) or some other worthy cause. A much better use of those funds, I’d have to say.

If anyone has had a good experience with an ACI Lab rental that I’ve not included here, I would love to hear about it!

Kudos to INE and NterOne for great customer service and flexibility!  

Summary


1. The INE staff was open to allowing me to put some of my own repos and tools into the environment, but when I scheduled the lab that became problematic. INE was very honorable and let me have the lab, non-customized, for the week without charge since they were not able to honor my customization request at that time!

2. The Student Jump box can be customized, which was very nice (I had access to my GitHub repos), and Python was available, although it was Python 2.7.

3. Cost is not unreasonable, but there is a minimum of 4 students, so unless you have 3 like-minded friends it becomes very expensive.

4. I’ve always been a big fan of Global Knowledge but my interactions with them were not positive. I could not get even the most basic question answered (for example, did they have a money back guarantee or 30 day return policy, since I was never able to get my more specific questions answered? I figured if I had 30 days to see if the lab met my requirements then I could test it out and find out for myself.)

5. Great customer service, but the pricing was a non-starter: $$$+ per day, and it would have been limited to business hours.

6. When I first reached out with questions about their ACI lab, they said it would not be available until late October (I assumed this year). When I reached out in November, they didn’t even answer the question, so clearly this is still a work in progress.

7. Worthy of further investigation


Details and Links

Cost Legend:

  • $ Less than $200
  • $$ Hundreds
  • $$$ Thousands

INE $$

CCIE Data Center – 850 Tokens/Week (Weekly rentals only) ($1 = 1 Token)

Excellent lab but very busy (because it’s very good) and so can be difficult to schedule.

NterOne $$

Excellent lab with good functionality at a reasonable price point.

Fast Lane $$$

Minimum of 4 Students @ $439/Student

Global Knowledge $$$

On Demand (12 Months)

Very poor customer support (my experience)

CloudMyLab

Lab not available yet. No timeframe given.

Octa Networks

More course focused but awaiting response.

Labs 4 Rent

INDIA: +91-9538 476 467  |  UAE: +971-589 703 499 | Email: info@labs4rent.com

No response to emails

FireFly $$$+ /day

Too expensive (for me)!

Rack Professionals

Needs further investigation

NH Networkers Home

+91-8088617460 / +91-8088617460

Needs further investigation

Micronics Training

They do not rent out their racks.

My CCIE Rack $

support@myccierack.com | WhatsApp: 7840018186

Very poor experience

CCIE Rack Rentals $

support@ccierack.rentals | WhatsApp: +918976927692

Very poor experience

The Struggle with Structure – Network Automation, Design, and Data Models

Preface

Modern enterprise networking is going to require a level of structure and consistency that the majority of its networking community may find unfamiliar and perhaps uncomfortable. As a community, we’ve never had to present our designs and configuration data in any kind of globally consistent or even industry standard format.

I’m fascinated by all things relating to network automation, but the one thing that eluded me was the discussion around data models (YANG, OpenConfig).

Early on, the little I researched around YANG led me to conclude that it was interesting, perhaps something more relevant to the service provider community, and a bit academic. In short, not something directly relevant to what I was doing.

Here is how I figured out that nothing could be further from the truth and why I think this is an area that needs even more focus.

If you want to skip my torturous journey towards the obvious, see the resources section at the end or jump over to Cisco’s DevNet Model Driven Programmability for some excellent material.

You can also cut to the chase by going to the companion repository Data_Model_Design on GitHub, where you can see a “proof of concept” that takes a modified Cisco data model containing a handful of components and develops the high-level diagram for those components and a sample Markdown design document.
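
To make the idea concrete, here is a minimal sketch of the approach (this is not the code in the Data_Model_Design repository; the data structure, key names, and values are hypothetical). Given a small, dictionary-style data model of components, a few lines of Python can walk it and emit a Graphviz DOT description that becomes the high-level diagram:

# Hypothetical data model: a list of SVI components pulled from a larger
# device model. Key names are illustrative only.
interfaces = [
    {"name": "Vlan10", "description": "User SVI", "ipv6": "2001:db8:10::1/64"},
    {"name": "Vlan20", "description": "Server SVI", "ipv6": "2001:db8:20::1/64"},
]

# Walk the model and build a Graphviz DOT graph: one node per SVI,
# each attached to a single switch node.
dot_lines = ["graph design {", '    "core-switch";']
for intf in interfaces:
    dot_lines.append(f'    "{intf["name"]}" [label="{intf["name"]}\\n{intf["ipv6"]}"];')
    dot_lines.append(f'    "core-switch" -- "{intf["name"]}";')
dot_lines.append("}")

print("\n".join(dot_lines))

The printed DOT text can be rendered with Graphviz (for example, dot -Tpng), and the same loop over the same data could just as easily emit a Markdown table for the design document.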


The Current Landscape

Since its earliest days as a discipline, networking (at least in the enterprise) has generally allowed quite a bit of freedom in the design process and its resulting documentation. That is one of the things I love about it and I’m certain I’m not alone in that feeling. A little island of creativity in an ocean of the technical.

For every design, I put together my own diagrams, my own documentation, and my own way to represent the configuration or just the actual configuration. Organizations tried to put some structure around that with a Word, Visio, or configuration text template, but often even that was mostly just for the purposes of branding and identification of the material. How many of us have been given a Word template with the appropriate logos on the title page and, if you were lucky, a few headings? Many organizations certainly went further, requiring a specific format and structure so that there was consistency within the organization, but move on to a different organization and everything was different.

The resulting design documentation sets were many and varied and locally significant.

In effect, the result was unstructured data. Unstructured or even semi-structured data as text or standard output from a device or system is well known, but this is unstructured data on a broader scale.

Design and Configuration Data

Over the last few years I’ve observed a pattern that I’m just now able to articulate. This pattern speaks to the problem of unstructured design and configuration data. The first thing I realized is that, as usual, I’m late to the party. Certainly the IETF has been working on the structured configuration data problem for almost 20 years and longer if you include SNMP! The Service Provider community is also working hard in this area.

The problem of structured vs unstructured data has been well documented over the years. Devin Pickell describes this in great detail in his Structured vs Unstructured Data – What’s the Difference? post.

For the purposes of this discussion let me summarize with a very specific example.

We have a text template that we need to customize with specific configuration values for a specific device:

!EXAMPLE SVI Template <configuration item to be replaced with actual value>

interface Vlan< Vlan ID >
description < SVI description >
ipv6 address < IP Address >/< IP MASK>
ipv6 nd prefix < Prefix >/< Prefix MASK > 0 0 no-autoconfig
ipv6 nd managed-config-flag
ipv6 dhcp relay destination < DHCP6 Relay IP >

If we are lucky, those values come to us in a clean, structured format such as a well-organized spreadsheet. More often than not, they come as free-form text in a document or slide deck. Or as a PDF of any of the above.

The problem is a little broader, but I think this very specific example illustrates the bigger issue. Today there is no one standard way to represent our network design and configuration data. A diagram (typically in Visio) is perhaps the de facto standard, but it's not very automation-friendly. I've had design and configuration data handed to me in Word, PowerPoint, Excel (and their open source equivalents), text, Visio, and the PDF versions of all of those.

Let me be clear. I am not advocating one standard way to document an entire network design set…yet. I’m suggesting that automation will drive a standard way to represent configuration data and that should drive the resulting design documentation set in whatever form the humans need it. That configuration data set or data model should drive not just the actual configuration of the devices but the documentation of the design. Ultimately, we can expect to describe our entire design in a standard system data model but that is for a future discussion.

Structured design and configuration data

In order to leverage automation we need the configuration data presented in a standard format. I’m not talking configuration templates but rather the actual data that feeds those templates (as shown above) and generates a specific configuration and state for a specific device.
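To make that concrete, here is a minimal sketch of what "data feeding a template" can look like in Python. It is illustrative only: the variable names and values are my own assumptions, it assumes the Jinja2 library is installed, and the template simply mirrors the SVI example above.

# Minimal sketch: structured data feeding the SVI template shown earlier.
# Assumes Jinja2 is installed (pip install jinja2); names and values are illustrative.
from jinja2 import Template

SVI_TEMPLATE = Template(
    "interface Vlan{{ vlan_id }}\n"
    " description {{ svi_description }}\n"
    " ipv6 address {{ ipv6_address }}/{{ ipv6_prefix_length }}\n"
    " ipv6 nd prefix {{ nd_prefix }}/{{ nd_prefix_length }} 0 0 no-autoconfig\n"
    " ipv6 nd managed-config-flag\n"
    " ipv6 dhcp relay destination {{ dhcp6_relay }}\n"
)

# The configuration payload for one SVI, expressed as structured data.
svi_payload = {
    "vlan_id": 10,
    "svi_description": "User VLAN 10",
    "ipv6_address": "2001:db8:10::1",
    "ipv6_prefix_length": 64,
    "nd_prefix": "2001:db8:10::",
    "nd_prefix_length": 64,
    "dhcp6_relay": "2001:db8:ffff::53",
}

# Render a device-specific configuration from the payload.
print(SVI_TEMPLATE.render(**svi_payload))

Hand me the dictionary (or its YAML/JSON equivalent) and the configuration writes itself; hand me a Word document and someone has to re-type it.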

Traditionally, when developing a design, you were usually left to your own devices as to how to represent that data. In the end, you likely had to come up with a way to document the design for review and approval of some sort, but that documentation was static (hand entered) and varied in format. Certainly not something that could be easily ingested by any type of automation. So over the last few years, I've developed certain structured ways to represent what I will call the "configuration payload"…all the things you need to build a specific working configuration for a device and to define the state it should be in.

Configuration payload includes:

  • hostname
  • authentication and authorization configuration
  • timezone
  • management configuration (NTP, SNMP, Logging, etc.)
  • interface configuration (ip, mask, description, routed, trunked, access, and other attributes)
  • routing configuration (protocol, id, networks, neighbors, etc.)

All of this data should be in a format that can be consumed by automation to, at the very least, generate specific device configurations and, ideally, to push those configurations to devices, run QA against them, and ultimately produce the documentation.
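As an illustration only (the key names below are my assumptions, not a standard), a payload like this might be captured in YAML and loaded directly by tooling:

# Illustrative only: one possible structured representation of a configuration payload.
# Requires PyYAML (pip install pyyaml); key names are assumptions, not a standard.
import yaml

payload_yaml = """
hostname: dist-sw-01
timezone: America/Los_Angeles
management:
  ntp_servers: [192.0.2.10, 192.0.2.11]
  syslog_server: 192.0.2.50
interfaces:
  - name: Vlan10
    description: User VLAN 10
    ip: 10.1.10.1
    mask: 255.255.255.0
    mode: routed
routing:
  protocol: ospf
  process_id: 1
  networks: [10.1.10.0/24]
"""

payload = yaml.safe_load(payload_yaml)
print(payload["hostname"], "has", len(payload["interfaces"]), "interface(s) defined")

The same payload can feed a configuration template, a QA check, and a design document without anyone re-typing anything.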

My experience over the last few years tells me we have some work ahead of us to achieve that goal.

The problem – Unstructured design and configuration data is the norm today

As a consultant you tend to work with lots of different network engineers and client engineering teams. I started focusing on automation over 4 years ago and during that time I’ve seen my share of different types of configuration payload data. I’m constantly saying, if you can give me this data in this specific format, look what can be done with it!

My first memorable example of this problem was over 2 years ago. The client at the time had a very particular format that they wanted followed to document their wireless design and the deliverable had to be in Visio. I put together a standard format in Excel for representing access point data (name, model, and other attributes). This structured data set in Excel (converted to CSV) would then allow you to feed that data into a diagram that had a floor plan. You still had to move the boxes with the data around to where the APs were placed but it saved quite a lot of typing (and time) and reduced errors. I demonstrated the new workflow but the team felt that it would be simpler for them to stick to the old manual process. I was disappointed to be sure but it was a bit of a passion project to see how much of that process I could automate. We had already standardized on how to represent the Access Point configuration data for the automated system that configured the APs so it was a simple matter of using that data for the documentation.
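The mechanics were simple. Something along these lines (a sketch of the idea only, not the actual project code, using a hypothetical access_points.csv and column names):

# Sketch of the idea only; assumes a hypothetical access_points.csv
# with name, model, and floor columns exported from the Excel workbook.
import pandas as pd

aps = pd.read_csv("access_points.csv")

# Generate the label text dropped onto the floor-plan diagram for each AP.
for _, ap in aps.iterrows():
    print(f"{ap['name']}  ({ap['model']})  Floor {ap['floor']}")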

The issue was more acute on the LAN side of the house. On the LAN side, the structured documentation format (also in Excel) was not optional: it fed all the subsequent stages of the process, including hardware ordering, configuration (this was the configuration payload!), staging, QA, and the final documentation deliverable.

When fellow network engineers were presented with the format we needed to use, let's just say, the reception lacked warmth. I used Excel specifically because I thought it would be less intimidating; nearly everyone has some familiarity with Excel. These seasoned, well-credentialed network engineers, many of whom were CCIEs, struggled. I struggled right along with them…they could not grasp why we had to do things this way, and I struggled to understand why it was such an issue. It is what we all do as part of network design…just the format was a little different, a little more structured (in my mind anyway).

I figured I had made the form too complicated and so I simplified it. The struggle continued. I developed a JSON template as an alternative. I think that made it worse. The feedback had a consistent theme. “I don’t usually do it that way.” “I’ve never done it this way before.” “This is confusing.” “This is complicated.”

Let's be clear: at the end of the day we were filling in hostname, timezone information, VLANs, default gateway, SVI IP/mask, uplink interface configuration, and the allowed VLANs for the uplinks. These were extremely capable network engineers. I wasn't asking them to do anything they couldn't do half asleep. I was only requiring a certain format for the data!
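For context, the JSON alternative asked for roughly this (the field names below are hypothetical; the real template differed, but the spirit was the same):

# Hypothetical field names; the real JSON template differed, but this is the spirit of it.
import json

access_switch = {
    "hostname": "access-sw-101",
    "timezone": "PST -8 0",
    "vlans": [10, 20, 30],
    "default_gateway": "10.1.10.1",
    "svi": {"vlan": 10, "ip": "10.1.10.11", "mask": "255.255.255.0"},
    "uplinks": [
        {"interface": "GigabitEthernet1/0/48", "mode": "trunk", "allowed_vlans": [10, 20, 30]},
    ],
}

print(json.dumps(access_switch, indent=2))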

During these struggles I started working with a young engineer who had expressed an interest in helping out with the automation aspects of the project. He grasped the structured documentation format (aka the Excel spreadsheet) in very little time! So much so that he took on the task of training the seasoned network engineers. So it wasn't the format, or at least it wasn't just the format, if a young new hire with very little network experience could not only understand it but master it well enough to teach it to others.

With that, the pieces fell into place for me. What I was struggling against was years of tradition and learned behavior. Years of a tradition where the configuration payload format was arbitrary and irrelevant. All you needed was your Visio diagram and your notes and you were good to go.

Unstructured configuration payload data in a variety of formats (often static and binary) is of little use in this brave new world of automation, and I started connecting the dots. YANG to model data, vendor YANG data models…OK…I get it now. These are ways to define the configuration payload for a device in a structured way that is easily consumed by "units of automation" and by the device itself.
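As a small taste of what "consumed by the device itself" looks like, here is a minimal sketch using the ncclient library to pull a running configuration over NETCONF, structured according to the device's YANG models. The host and credentials are placeholders, and it assumes a device with NETCONF enabled.

# Minimal sketch, assuming a NETCONF/YANG-capable device and ncclient installed
# (pip install ncclient). Host and credentials are placeholders.
from ncclient import manager

with manager.connect(
    host="192.0.2.1",
    port=830,
    username="admin",
    password="admin",
    hostkey_verify=False,
) as m:
    # The reply comes back as XML structured by the device's YANG models,
    # ready to be parsed by tooling rather than screen-scraped from CLI output.
    reply = m.get_config(source="running")
    print(reply.data_xml[:500])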

This does not solve the broader issue of unlearning years of behavior but it does allow for the learning of a process that has one standard method of data representation. So if that transition can be made I can get out of the data modeling business (Excel) and there is now a standard way to represent the data and a single language we can use to talk about it. That is, of course, the ideal. I suspect I’m not out of the data modeling business just yet but I’m certainly a step closer to being out of it and, most importantly, understanding the real issue.

The diagram above shows an evolution of the network design process. The initial design activities won’t change much. We are always going to need to:

  • Understand the business needs, requirements, and constraints
  • Analyze the current state
  • Develop solutions perhaps incorporating new technology or design options using different technologies
  • Model the new design
  • Present & Review the new design

In this next evolution, we may use many of the old tools, some in new ways. We will certainly need new tools, many of which I don't believe exist yet. As we document requirements in a repository and the configuration payload in a data model, those artifacts can now drive:

  • An automated packaging effort to generate the design information in human-readable formats that each organization wants to see
    • Here the Design Document, Presentation, and Diagram are an output of the configuration payload captured in the data model (I’ve deliberately not used Word, PowerPoint, and Visio…)
  • The actual configuration of the network via an automation framework, since by definition our data models can be consumed by automation.
  • All of it in a repository under real revision control (not filenames with the date or a version identifier tacked on)

As with any major technology shift, paradigm change, call it what you will, the transition will likely result in three general communities.

  1. Those who eagerly adopt, evangelize, and lead the way
  2. Those who accept and adapt to fit within the new model
  3. Those who will not adapt

I’m sorry to say I’ve been wholly focused on the first community until now. It is this second “adapt” community, which will arguably be the largest of the three, that needs attention. These will be the network engineers who understand the benefits of automation and are willing to adapt, but who, at least initially, will likely not be the ones evangelizing or contributing directly to the automation effort. They will be the very capable users and consumers of it.

We need to better target skills development for them as the current landscape can be overwhelming.

It's also important to note that the tooling for this is woefully inadequate right now and is likely impeding adoption.

What’s next?

The solution may elude us for a while and may change over time. For me at least the next steps are clear.

  • I need to do a better job of targeting the broader network community that isn't necessarily excited (yet) about all the automation but is willing to adapt.
  • I will start discussing and incorporating data models into the conversations and documentation products with my new clients and projects.
  • I will start showcasing the benefits of this approach in every step of the design process and how it can help improve the overall product.
    • revision control
    • improved accuracy
    • increased efficiency

Example Repository

If you want to see how some of these parts can start to fit together, please visit my Data_Model_Design repository on GitHub. It contains a “proof of concept” that takes a modified Cisco data model containing a handful of components and generates the high-level diagram for those components and a sample Markdown design document, which I then saved to PDF for “human consumption”.
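The idea behind the Markdown piece is simple, even if the repository does it with more moving parts. A sketch of the concept (not the repository's actual code; the payload and template below are illustrative) might look like this:

# Sketch of the concept only (not the Data_Model_Design repository's code):
# render a human-readable Markdown design document from the same structured payload.
from jinja2 import Template  # pip install jinja2

DESIGN_DOC = Template(
    "# Design Document: {{ hostname }}\n\n"
    "## Interfaces\n"
    "{% for intf in interfaces %}"
    "- **{{ intf.name }}** ({{ intf.mode }}): {{ intf.ip }}/{{ intf.mask }} - {{ intf.description }}\n"
    "{% endfor %}"
)

payload = {
    "hostname": "dist-sw-01",
    "interfaces": [
        {"name": "Vlan10", "mode": "routed", "ip": "10.1.10.1",
         "mask": "255.255.255.0", "description": "User VLAN 10"},
    ],
}

print(DESIGN_DOC.render(**payload))

The same payload that generates the device configuration can also generate the document, so the two never drift apart.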

Don’t miss these resources

I'm always a fan of DevNet, and the DevNet team once again does not disappoint.

YANG for Dummies by David Barroso

YANG Opensource Tools for Data Modeling-driven Management by Benoit Claise

YANG Modules Overview from Juniper Networks Protocol Developer Guide

YANG and the Road to a Model Driven Network by Karim Okasha

The OpenConfig group bears watching as they are working on vendor-agnostic, real-world models using the YANG language. This is very much a Service Provider focused initiative whose efforts may prove very useful in the Enterprise space.

OpenConfig Site

OpenConfig GitHub Repository