My Network Automation Journey, Part 3: Modernizing the Network Development Toolkit

Last updated on April 29, 2021.

BlueCat invited John Capobianco, author of “Automate Your Network: Introducing the Modern Approach to Enterprise Network Management,” to walk us through his journey of network automation. From the planning phase to deployment up the stack, John will cover the tradeoffs and critical decisions that every network automation project should address – including the role of DNS. John’s opinions are solely his own and do not express the views or opinions of his employer.

Part 1: Frameworks and Goals

Part 2: Building in Ansible

In part one of this series I detailed my very first attempt at network automation. I was delighted with the initial results, particularly the final run-time. Still, I had nagging doubts about the development process and how challenging it was to finally get a working playbook. This was not because Ansible was particularly difficult to understand or figure out. On the contrary, the framework itself was quite simple and easy to get going.

The problem was not the framework itself. I was still working with the tools and mindset of a network engineer or administrator, when I should have been thinking like a modern network developer. It was clear that the tools and methodology I had become accustomed to after 20 years in IT would need to evolve.

Legacy Toolkit

Up to this point, manual configuration of network devices meant simply an SSH session to the device and a text editor. In some cases I might have used an enhanced text editor, but even that was optional. To move files around (for example, uploading a new firmware image to a device), a file transfer program and protocol (FTP, SCP, TFTP) might also come into the picture.

For my first playbook, these were also the only tools I used. I was crudely developing code in a basic text editor and clumsily moving it around using file transfer programs. We found ourselves manually incrementing file names to track configuration ‘versions’ as we corrected syntax (and because of the basic text editor, there were a lot of syntax issues) or modified logic in the playbook. Ultimately we got our Ansible playbook up and running, but the process was extremely painful. I was left wondering: “Is this automation stuff really worth all this up-front hassle? How am I saving time or making things easier if the development cycle is this challenging? What am I missing?”

Enter VS Code

Then I discovered Microsoft Visual Studio Code, a free, open-source, portable code editor. I felt like a caveman discovering fire. Moving from a basic text editor to a development tool such as VS Code was a massive evolutionary step for me in terms of easily writing error-free code. I am still learning new VS Code secrets every day, but some of my initial discoveries include:

  • A rich library of extensions to further enhance the already feature-rich editor
    • Extensions for specific file formats and file types: YAML, Python, Ansible, Jinja2, CSV, Markdown, and many others
  • No more syntax errors at runtime
    • Errors in spacing, formatting, and syntax are all highlighted at development time in the editor (see the example after this list)
  • Split screen / side-by-side viewing capabilities with multiple files
  • File comparison with differentials
  • Ability to easily comment out large sections of code
    • Highlight a section of code and use the Control-/ keystroke
    • Uncomment code using the same keystroke
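
To illustrate, below is a minimal, hypothetical playbook fragment (the module, host group, and VLAN names are placeholders) with a deliberate indentation mistake. In a plain text editor this would only surface when the playbook runs; with the YAML and Ansible extensions installed, VS Code flags the bad indentation as you type, and Control-/ toggles comment markers on any highlighted lines:

---
# Hypothetical fragment used only to illustrate editor feedback
- name: Configure access VLANs
  hosts: campus_switches              # assumed inventory group name
  tasks:
    - name: Ensure VLAN 100 exists
       ios_vlans:                     # indented one space too far: flagged in the editor, not at runtime
        config:
          - vlan_id: 100
            name: USERS
    # - name: Ensure VLAN 200 exists  # a section commented out with Control-/
    #   ...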

Given all of this functionality, the team decided to make a change across the board. Overnight, our whole team of traditional networking-oriented staff migrated from their text editor of choice (Notepad, Notepad++, TextPad, etc.) to VS Code. Even when they were not working on YAML files and were simply looking at running-configuration output from a device, my team used VS Code. Fully empowered with a new development tool, some of my doubts about the transition to automation started to subside. VS Code addressed a lot of my original concerns about the difficulties in the development cycle. Our syntax errors were resolved, but a solid code editor alone was not enough.

Enter TFS

Sometimes the smallest piece of advice can have dramatic consequences. For me, it was a suggestion from my Senior Director to “investigate TFS” and that my automated solutions “should be under version control”.

I had some peripheral awareness that our software developers were using Microsoft Team Foundation Server (TFS) but had no idea what it meant, especially in terms of network automation. After a few introductory meetings with peers on the development team, they agreed to stand up a repository in TFS where I could keep my code.

TFS would quickly become a key component of my automation solution. TFS has since evolved into Azure DevOps, which adds cloud support in the latest release. Anyone familiar with GitHub can think of TFS as a private, on-premises version of GitHub, used to store repositories inside an enterprise rather than in open, publicly available repositories on the internet. TFS also adds some key features not available in GitHub:

  • Acts as a GUI front-end for your Git repositories
  • Acts as source and version control for your automation playbooks and output
  • Features a work tracking hub for managing work items and collaborating on code
  • Complete Git history and differentials
  • More advanced features like automated builds and automated deployments
  • Can call Ansible playbooks directly from TFS automated tasks (see the pipeline sketch after this list)
  • Empowers distributed teams to collaborate on code with full version and source control
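
As a sketch of that last capability, here is what a minimal automated task could look like in the YAML pipeline format used by Azure DevOps, the successor to TFS. The trigger, agent pool, inventory, and playbook names below are all assumptions; the idea is simply that a change merged into the repository can kick off a build agent that runs an Ansible playbook:

# Hypothetical azure-pipelines.yml
trigger:
  branches:
    include:
      - master                      # run when changes land on master

pool:
  name: SelfHostedLinuxAgents       # assumed on-prem agent pool with Ansible installed

steps:
  - script: ansible-playbook -i hosts site.yml
    displayName: Run the network automation playbook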

TFS was also my first introduction to Git, the underlying source and version control software.

Enter Git

TFS acts as a GUI front-end for a Git-based repository. Git tracks all of the contents of a folder structure, providing version and source control over the files in the repository.

Using Git, I no longer had to track changes to refactored code using file names or other crude indicators. The latest working version of a playbook was no longer a guessing game. Instead of using file transfer protocols to move playbooks and output from a development workstation to the Linux host, I could use native Git commands. It suddenly became much easier to collaborate on automation scripts across the entire team.

In VS Code (which includes native Git integration) I didn’t even need to know any Git commands. Everything is handled by point-and-click operations. VS Code’s Git integration includes:

  • Ability to clone Git repositories in VS Code directly
  • Easily Git pull and Git push changes into the Git repository
  • Git add, Git commit, simple commit messages, and Git push commands abstracted behind mouse clicks
  • Extensions to further enhance Git, such as commit history, previous versions, and coder identification (who committed the last change)
  • No need for a file transfer protocol or application to move files between the developer workstation and the Linux Ansible host

Git is also included natively on the Linux host where I execute my Ansible playbooks. There are some key Git commands you will need to become familiar with to get started:

git clone <repository url>

Clones a repository locally into a new directory.

git checkout <working branch>

Switches to a working branch in the repository. This does not create another copy of the repository; it simply changes the context and puts focus on the desired branch.

git add <file>

Stages locally changed files for the next commit.

git commit -a -m "<git commit message>"

Commits all staged files and adds a commit message. Commit messages are very important and document all changes committed to the local branch. These messages can be used to track or troubleshoot changes made to source code.

git push

Pushes locally committed changes into the remote repository. It requires authentication when using TFS. This is typically the last step performed locally on a branch, pushing the code into the remote branch. After this step, the remote branch in TFS or GitHub will reflect your local branch. It should be noted that this does not push changes into the master branch; a pull request is required to merge the working branch into the master branch.

git pull

Pulls down any remote changes from the remote repository into the local branch, synchronizing the local and remote branches. Used often in a distributed working environment to pull in changes made by other developers.
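
Putting those commands together, a typical change might look something like this. The repository URL, branch name, and file names are placeholders, and git checkout -b simply creates the working branch and switches to it in one step:

git clone <repository url>
git checkout -b feature/add-vlan-100        # hypothetical feature branch name
# ...edit the playbooks in VS Code...
git add site.yml
git commit -a -m "Add VLAN 100 to the campus switch playbook"
git push origin feature/add-vlan-100
# then open a pull request in TFS or GitHub to merge the branch into master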

For practice and to see this incredible technology in action, let’s clone the BlueCat GitHub Ansible repository.

First, make sure you have Git and VS Code installed on your machine. Then visit the BlueCat GitHub repository and navigate to the BlueCat Gateway Ansible Module.

Click Clone or Download.

Copy the URL to your clipboard.

Launch VS Code.

Use the Command Palette and type or select Git: Clone.

When prompted, create a new local folder to hold the repository. Open the repository and browse the contents. Congratulations! You have successfully cloned your first Git repo! If you wanted to, you could also contribute to BlueCat’s code using the Git techniques above.
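
If you prefer the command line, the same clone can be done from a terminal. The folder name below is just an example, and the code command (installed alongside VS Code on most platforms) opens the cloned folder in the editor:

git clone <repository url> bluecat-gateway-ansible
code bluecat-gateway-ansible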

Network Development Lifecycle and Branching Strategy

One key takeaway from my journey is that any automation project needs a network development lifecycle (NDLC) as well as a branching strategy. The NDLC I try to follow complements the branching strategy:

  • Have a stable master branch in your repository
    • The master branch should contain known working code and reflect the current state of the network
    • Never develop directly in master
    • Protect master from direct development
    • Never store passwords in clear text in master
  • A new feature request or bug fix is required on the network
    • Always create a branch
    • Either a feature or bugfix branch
    • Develop locally using lots of well-documented commits
    • Commit only related changes so you can easily roll back
    • Use pull requests to merge well-tested and validated branches into master
    • No long-lived branches
  • Rely on merge conflicts to enforce version and source control
    • While painful, merge conflicts ensure master remains the source of truth
    • Reconcile conflicts by merging master locally, then push the reconciled changes back to master (see the command sketch after this list)
  • Track and assign branches in TFS
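
Here is a rough sketch of that reconciliation step, assuming a working branch named feature/add-vlan-100 and a remote named origin; your branch names will differ:

git checkout master
git pull                                  # bring local master up to date with the remote
git checkout feature/add-vlan-100         # hypothetical working branch
git merge master                          # surfaces any merge conflicts locally
# ...resolve the conflicted files in VS Code...
git add <resolved files>
git commit -m "Merge master into feature/add-vlan-100"
git push                                  # the branch can now merge cleanly into master via a pull request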

After putting these pieces together, the process side of network automation became much clearer to me. Infrastructure commands are converted to code, which is kept in a TFS or GitHub Git repository. The repository uses branching to maintain a golden configuration in the form of the master branch.

In the next post, I will discuss moving from simple one-time orchestrated configuration changes to fully automated configuration management. We will introduce a new tool, Jinja2 templates, to the Ansible mix and build on our early success with YAML playbooks. This is when things really get exciting! Stay tuned.
