DRY Ansible for Network Automation

DRY Ansible for Network Automation (DANA) is a collection of Ansible roles and playbooks that allow you to provision, sketch and backup a network running Junos OS, without the need of manually feeding extra data to represent the physical topology.

This is achieved by leveraging a topology inspection role that automatically discovers every link and interface connecting the devices of a particular Ansible group.
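For illustration, the discovered topology ends up exposed to the playbooks as structured variables. A link between two members of an ip_underlay group could look roughly like the sketch below; the variable names and the qfx5200-side interface numbers are assumptions made for this example, not the project's actual data model.

# Illustrative sketch only: variable names and the qfx5200-side interfaces are
# assumptions, not the project's actual schema
topology_links:
  - endpoints:
      - device: qfx5120-1
        interface: xe-0/0/1
      - device: qfx5200-1
        interface: xe-0/0/5
  - endpoints:
      - device: qfx5120-1
        interface: xe-0/0/2
      - device: qfx5200-2
        interface: xe-0/0/5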

The primary goal of this project is to assist network (and DevOps) engineers who need to deploy a testing network quickly, reliably and with as few manual inputs as possible.

Another intent is to illustrate some practical examples of how the Junos OS automation stack can be integrated with open-source building blocks such as Ansible, Python and Jinja2.

Don't Repeat Yourself (DRY) is the core paradigm driving this project:

  • It is DRY: by breaking down atomic operations into Ansible roles that are then conveniently reused
    across easily consumable playbooks;
  • It keeps you DRY: by freeing you from providing anything that can be automatically figured out about
    your network topology.


Complete List of Features

  • Inspect a Junos OS network and automatically represent topology links and interfaces;
  • Draw a sketch of the network topology and export it as a PDF;
  • Generate and provision OSPF underlay configuration;
  • Generate and provision EBGP underlay configuration;
  • Generate and provision configuration for multiple LAGs;
  • Backup all active configurations;
  • Push multiple configuration files.

Quick Start Example - EBGP Underlay Provisioning

Suppose you spent a few hours in the lab cabling up the following network topology for testing purposes.

The switches only have your lab default configuration, which includes management and loopback interfaces.

Your ultimate goal is to configure an IP Fabric:

  • Underlay IP connectivity on all fabric links, with a different IP subnet per link;
  • EBGP as underlay routing protocol, with one private Autonomous System Number (ASN) per device to redistribute the
    loopback addresses across the fabric;
  • Load balancing enabled in the control and forwarding plane.

The playbook pb_provision_ebgp_underlay is what you need:

  1. Create a group called ip_underlay in your inventory file (hosts.ini) in which you include the devices that must be
    part of the fabric (this will be the only input from your side):
# hosts.ini

[ip_underlay]
qfx5120-1
qfx5120-2 
qfx5120-3
qfx5120-4
qfx5200-1
qfx5200-2
  2. Run the playbook:
ansible-playbook pb_provision_ebgp_underlay.yml -i hosts.ini -t push_config

The tag push_config tells the playbook to both generate and commit the configuration to the remote devices. You can omit this tag if you only want to generate the files locally; they will be stored in a folder named _ebgp_underlay_config in your inventory directory.
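For example, to only generate the configuration files locally without committing anything, drop the tag:

ansible-playbook pb_provision_ebgp_underlay.yml -i hosts.ini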

  3. Enjoy the final result!

A quick summary of what just happened:

  1. Links and neighbours connecting the members of the ip_underlay group have been automatically discovered, while links
    to devices outside the group (the EX switches in this example) have been safely ignored;
  2. IP addresses, interfaces and ASNs have been automatically generated from default seed values (that can be customised);
  3. A configuration file for each device involved has been generated accordingly, stored in a local folder and finally
    pushed to the remote devices.

You can find out more about how the discovery is carried out and how you can tune the default variables to suit your needs in the Usage section of the documentation below.
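As a rough sketch of that kind of customisation, the seed values could be overridden in the group's variables. The variable names and values below are placeholders assumed for illustration; the actual names are documented in the role defaults and in the Usage section.

# group_vars/ip_underlay.yml -- illustrative only: these variable names are
# placeholders, check the role defaults for the real ones
underlay_subnet_seed: 10.200.0.0/24    # base pool used to carve the per-link /31 subnets
underlay_asn_seed: 4200000201          # first private ASN assigned to a fabric member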

The leaf device qfx5120-1 in this example will be provisioned with the following configuration:

interfaces {
    xe-0/0/1 {
        mtu 9216;
        unit 0 {
            family inet {
                address 10.100.0.0/31;
            }
        }
    }
    xe-0/0/2 {
        mtu 9216;
        unit 0 {
            family inet {
                address 10.100.0.2/31;
            }
        }
    }
}
protocols {
    bgp {
        group ebgp-underlay {
            type external;
            family inet {
                unicast;
            }
            multipath {
                multiple-as;
            }
            export pl-local_loopback;
            local-as 4200000101;

            neighbor 10.100.0.1 {
                description qfx5200-1;
                peer-as 4200000106;
            }
            neighbor 10.100.0.3 {
                description qfx5200-2;
                peer-as 4200000105;
            }
        }
    }
}

policy-options {
    policy-statement pl-local_loopback {
        term 1 {
            from {
                protocol direct;
                interface lo0.0;
            }
            then accept;
        }
    }

    policy-statement ECMP {
        then {
            load-balance per-packet;
        }
    }
}

routing-options {
    forwarding-table {
        export ECMP;
    }
}

Installation

On the machine that you want to use as the Ansible controller (this can be your laptop or a dedicated server/VM):

  1. Clone or download this repository;

  2. Make sure Python 3.7 (or above) is installed, and ideally create and activate a Python 3 virtual environment (recommended when running Ansible locally; a minimal sketch follows these steps);

  3. Install the requirements:

    pip install -r requirements.txt
    
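If you go for the virtual environment route, a minimal setup (standard Python tooling, nothing specific to this project) looks like:

    python3 -m venv .venv
    source .venv/bin/activate
    pip install -r requirements.txt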

At this point you are ready to execute the playbooks.

Usage

Each individual operation is defined as a custom Ansible role. Roles are then imported across different ready-to-use playbooks.

You can use this project in two ways:

  • Run one of the playbooks;
  • Write your own playbook and import one or more custom roles (see the sketch below).
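As a sketch of the second option, a custom playbook might import one of the project's roles along the following lines. The role name used here is a placeholder assumption; the actual role names are listed in the Roles section.

# my_fabric_playbook.yml -- minimal sketch; the role name below is a placeholder,
# see the Roles section for the real ones
- name: Discover the topology and build on top of it
  hosts: ip_underlay
  gather_facts: no
  roles:
    - dana_topology_inspection   # placeholder role name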

Playbooks

You can check each playbook's individual documentation for more details and examples; they all follow the same basic invocation pattern, sketched after this list:

  • pb_backup_config.yml: Backup all active configurations in a single
    folder;
  • pb_sketch_topology.yml: Discover the network topology
    (or a subset of it) and generate a PDF diagram with a sketch of nodes, links and interfaces;
  • pb_provision_ip_underlay.yml: Discover the network topology
    and then generate the Junos configuration for underlay IP connectivity;
  • pb_provision_ebgp_underlay.yml: Discover the network
    topology and then generate the Junos configuration for underlay IP connectivity and EBGP peering over the physical
    interfaces;
  • pb_provision_ospf_underlay.yml: Discover the network
    topology and then generate the Junos configuration for underlay IP connectivity and OSPF over the physical interfaces;
  • pb_push_config.yml: Load and commit one or more configuration files
    from a local folder to the corresponding remote devices, identified by the config file name;
  • pb_provision_lag.yml: Generate the Junos configuration for one or more
    aggregated interfaces automatically, without requiring any interface name to be fed as input.
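All of them follow the same invocation pattern shown in the Quick Start, for example:

ansible-playbook pb_backup_config.yml -i hosts.ini

Check each playbook's own documentation for any additional tags or variables it supports.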

Roles