Arista Automated Configuration using Ansible

Overview

Arista Networks

Arista Networks is an industry leader in data-driven, client to cloud networking for large data center/AI, campus and routing environments. Arista’s award-winning platforms deliver availability, agility, automation, analytics and security through an advanced network operating stack

Source: Arista's homepage.

Arista has some really great products and solutions, and in this post I will take a deeper look at using Arista Validated Designs to automate the whole bring-up process of a Spine-Leaf topology. I will also see how this can be used to make the network underlay more agile by adding changes "on-demand".

I have been working with VMware NSX for many years, and one of NSX's benefits is how easy it is to automate. I am very keen on testing whether I can achieve the same level of automation with the physical underlay network too.

What I have never worked with is automating the physical network. Automation is not only about easier deployment and handling the dynamics of the datacenter more efficiently, but also about reducing or eliminating configuration errors. In this post I will go through how I make use of Arista vEOS and Arista's Ansible playbooks to deploy a full spine-leaf topology from zero to hero.

Arista Extensible Operating System - EOS

Arista Extensible Operating System (EOS®) is the core of Arista cloud networking solutions for next-generation data centers and cloud networks. Cloud architectures built with Arista EOS scale to hundreds of thousands of compute and storage nodes with management and provisioning capabilities that work at scale. Through its programmability, EOS enables a set of software applications that deliver workflow automation, high availability, unprecedented network visibility and analytics and rapid integration with a wide range of third-party applications for virtualization, management, automation and orchestration services.

Source: Arista

vEOS

vEOS is a virtual appliance that makes it possible to run EOS as a virtual machine in vSphere, KVM, Proxmox, VMware Workstation, Fusion, and VirtualBox, just to name a few. For more information head over here.

With this it is absolutely possible to deploy and test Arista EOS with all kinds of functionality in the comfort of my own lab. So without further ado, let's jump into it.

vEOS on Proxmox

The vEOS appliance consists of two files, the aboot-veos-x.iso and the veos-lab-4-x.disk. The aboot-veos-x.iso is mounted as a CD/DVD ISO, and the disk file is the hard disk of your VM. I am running Proxmox in my lab, which supports importing both VMDK and qcow2 disk files, but I will be using qcow2 as vEOS is also delivered in this format. So here is what I did to create a working vEOS VM in Proxmox:

  • Upload the file aboot-veos.iso to a datastore on my Proxmox host where I can store ISO files.

aboot-iso

  • Upload the qcow2 image/disk file to a temp folder on my Proxmox host (e.g. /tmp).
root@proxmox-02:/tmp# ls
vEOS64-lab-4.32.1F.qcow2
  • Create a new VM like this:

vm-eos

Add a Serial Port, a USB device, and mount the aboot ISO on the CD/DVD drive, and select no hard disk in the wizard (delete the proposed hard disk). Operating system type is Linux 6.x. I chose to use x86-64-v2-AES CPU emulation.

  • Add the vEOS disk by utilizing the qm importdisk command like this, where 7011 is the ID of my VM and raid-10-node02 is the datastore on my host where I want the qcow2 image to be imported/placed.
root@proxmox-02:/tmp# qm importdisk 7011 vEOS64-lab-4.32.1F.qcow2 raid-10-node02 -format raw
importing disk 'vEOS64-lab-4.32.1F.qcow2' to VM 7011 ...
transferred 0.0 B of 4.0 GiB (0.00%)
transferred 50.9 MiB of 4.0 GiB (1.24%)
...
transferred 4.0 GiB of 4.0 GiB (100.00%)
Successfully imported disk as 'unused0:raid-10-node02:vm-7011-disk-0'

When this is done it will turn up as an unused disk in my VM.

unused-disk

To add the unused disk I select it, click Edit, and choose SATA with bus 0. This was the only way I could get vEOS to boot successfully. This is contrary to what is stated in the official documentation here: "The Aboot-veos iso must be set as a CD-ROM image on the IDE bus, and the EOS vmdk must be a hard drive image on the same IDE bus. The simulated hardware cannot contain a SATA controller or vEOS will fail to fully boot."

sata

add

sata-disk-added

Now the disk has been added. One final note: I have added the network interfaces I need in my lab, as seen above. The net0 interface will be used for dedicated oob management.

Info: A note on the virtual network adapters for the vEOS appliances. I struggled to get any stateful sessions between my two test VMs connected to the leaf switches and soon realized it had to be an MTU issue. The official documentation mentions using the VIRTIO network adapter, and it seemingly works (ping succeeded, traceroute did not), but trying to SSH between my test VMs just timed out. Using tcpdump I could see some of the traffic, but only fragments. After changing the NICs to Intel E1000 there were no MTU issues and everything worked as expected.

I also changed the CPU emulation to use HOST. With this setup I could raise the MTU for the underlay BGP uplinks to 9000.

That's it; I can now power on my vEOS.

booting

When it is done booting, which can take a couple of seconds, it will present you with the following screen:

login

I could log in as admin, disable Zero Touch Provisioning, and configure it manually. But that's not what this post is about; it is about automating the whole process as much as possible. So this takes me to the next chapter: Zero Touch Provisioning.

I can now power it off and clone this instance into the number of vEOS appliances I need. I have created 5 instances to be used in the following parts of this post.

ZTP - Zero Touch Provisioning

Now that I have created all the vEOS VMs I need, I want some way to set the basic config, like the management interface IP and a username/password, so I can hand the devices over to Ansible to automate the rest of the configuration.

EOS starts in ZTP mode by default, meaning it will send a DHCP request and acquire an IP address if there is a DHCP server that replies. This also means I can configure my DHCP server with an option pointing to a script on a TFTP server that performs this initial configuration.

For ZTP to work I must have a DHCP server with some specific settings, plus a TFTP server. I decided to create a dedicated DHCP server for this purpose, and I run the TFTPD instance on the same server as the DHCPD server. The Linux distribution I am using is Ubuntu Server.

Following the official documentation here, I have configured my DHCP server with the following settings:

# ####GLOBAL Server config###
default-lease-time 7200;
max-lease-time 7200;
authoritative;
log-facility local7;
ddns-update-style none;
one-lease-per-client true;
deny duplicates;
option option-252 code 252 = text;
option domain-name "int.guzware.net";
option domain-name-servers 10.100.1.7,10.100.1.6;
option netbios-name-servers 10.100.1.7,10.100.1.6;
option ntp-servers 10.100.1.7;

# ###### Arista MGMT####
subnet 172.18.100.0 netmask 255.255.255.0 {
    pool {
    range 172.18.100.101 172.18.100.150;
    option domain-name "int.guzware.net";
    option domain-name-servers 10.100.1.7,10.100.1.6;
    option broadcast-address 10.100.1.255;
    option ntp-servers 10.100.1.7;
    option routers 172.18.100.2;
    get-lease-hostnames true;
    option subnet-mask 255.255.255.0;
  }
}


host s_lan_0 {
        hardware ethernet bc:24:11:7b:5d:e6;
        fixed-address 172.18.100.101;
        option bootfile-name "tftp://172.18.100.10/ztp-spine1-script";
     }

host s_lan_1 {
        hardware ethernet bc:24:11:04:f8:f8;
        fixed-address 172.18.100.102;
        option bootfile-name "tftp://172.18.100.10/ztp-spine2-script";
     }

host s_lan_2 {
        hardware ethernet bc:24:11:ee:53:83;
        fixed-address 172.18.100.103;
        option bootfile-name "tftp://172.18.100.10/ztp-leaf1-script";
     }

host s_lan_3 {
        hardware ethernet bc:24:11:b3:2f:74;
        fixed-address 172.18.100.104;
        option bootfile-name "tftp://172.18.100.10/ztp-leaf2-script";
     }

host s_lan_4 {
        hardware ethernet bc:24:11:f8:da:7f;
        fixed-address 172.18.100.105;
        option bootfile-name "tftp://172.18.100.10/ztp-borderleaf1-script";
     }

The 5 host entries correspond to the MAC addresses of my 5 vEOS appliances, and the option bootfile-name refers to a unique file for each vEOS appliance.
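Since each new appliance only differs by MAC address, fixed IP, and script name, these repetitive host blocks can also be generated instead of hand-edited. A minimal sketch (the make_host_entry helper and the device list are my own illustration, not part of any Arista tooling):

```python
# Sketch: generate ISC dhcpd host entries for new vEOS appliances.
# The helper name and the example device below are hypothetical.

def make_host_entry(name: str, mac: str, ip: str, script: str,
                    tftp: str = "172.18.100.10") -> str:
    """Render one dhcpd host block in the same style as the config above."""
    return (
        f"host {name} {{\n"
        f"        hardware ethernet {mac};\n"
        f"        fixed-address {ip};\n"
        f'        option bootfile-name "tftp://{tftp}/{script}";\n'
        f"     }}\n"
    )

# A hypothetical sixth appliance added to the fabric:
print(make_host_entry("s_lan_5", "bc:24:11:aa:bb:cc",
                      "172.18.100.106", "ztp-leaf3-script"))
```

The output can be appended directly to /etc/dhcp/dhcpd.conf before restarting the DHCP service.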

The TFTP server has this configuration:

# /etc/default/tftpd-hpa

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/home/andreasm/arista/tftpboot"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"

Then, in the TFTP directory, I have the following files:

andreasm@arista-dhcp:~/arista/tftpboot$ ll
total 48
drwxrwxr-x 2      777 nogroup  4096 Jun 10 08:59 ./
drwxrwxr-x 3 andreasm andreasm 4096 Jun 10 08:15 ../
-rw-r--r-- 1 root     root      838 Jun 10 08:55 borderleaf-1-startup-config
-rw-r--r-- 1 root     root      832 Jun 10 08:52 leaf-1-startup-config
-rw-r--r-- 1 root     root      832 Jun 10 08:53 leaf-2-startup-config
-rw-r--r-- 1 root     root      832 Jun 10 08:45 spine-1-startup-config
-rw-r--r-- 1 root     root      832 Jun 10 08:51 spine-2-startup-config
-rw-r--r-- 1 root     root      103 Jun 10 08:55 ztp-borderleaf1-script
-rw-r--r-- 1 root     root       97 Jun 10 08:53 ztp-leaf1-script
-rw-r--r-- 1 root     root       97 Jun 10 08:54 ztp-leaf2-script
-rw-r--r-- 1 root     root       98 Jun 10 08:39 ztp-spine1-script
-rw-r--r-- 1 root     root       98 Jun 10 08:51 ztp-spine2-script

The content of the ztp-leaf1-script file:

andreasm@arista-dhcp:~/arista/tftpboot$ cat ztp-leaf1-script
#!/usr/bin/Cli -p2

enable

copy tftp://172.18.100.10/leaf-1-startup-config flash:startup-config

The content of the leaf-1-startup-config file (taken from the Arista AVD repository here):

andreasm@arista-dhcp:~/arista/tftpboot$ cat leaf-1-startup-config
hostname leaf-1
!
! Configures username and password for the ansible user
username ansible privilege 15 role network-admin secret sha512 $hash/
!
! Defines the VRF for MGMT
vrf instance MGMT
!
! Defines the settings for the Management1 interface through which Ansible reaches the device
interface Management1
   description oob_management
   no shutdown
   vrf MGMT
   ! IP address - must be set uniquely per device
   ip address 172.18.100.103/24
!
! Static default route for VRF MGMT
ip route vrf MGMT 0.0.0.0/0 172.18.100.2
!
! Enables API access in VRF MGMT
management api http-commands
   protocol https
   no shutdown
   !
   vrf MGMT
      no shutdown
!
end
!
! Save configuration to flash
copy running-config startup-config
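The startup-configs are identical apart from the hostname and the management IP (the line flagged "must be set uniquely per device"), so they lend themselves to templating. A small sketch, assuming a simple Python template; the config body mirrors the leaf-1-startup-config above, with the secret hash left as the $hash placeholder:

```python
# Sketch: render per-device ZTP startup-configs from one template.
# The template and device table are illustrative, not Arista tooling.

STARTUP_TEMPLATE = """hostname {hostname}
!
username ansible privilege 15 role network-admin secret sha512 {password_hash}
!
vrf instance MGMT
!
interface Management1
   description oob_management
   no shutdown
   vrf MGMT
   ip address {mgmt_ip}/24
!
ip route vrf MGMT 0.0.0.0/0 172.18.100.2
!
management api http-commands
   protocol https
   no shutdown
   !
   vrf MGMT
      no shutdown
!
end
!
copy running-config startup-config
"""

# Per-device values; only hostname and management IP differ.
devices = {
    "leaf-1": "172.18.100.103",
    "leaf-2": "172.18.100.104",
}

for hostname, mgmt_ip in devices.items():
    config = STARTUP_TEMPLATE.format(hostname=hostname, mgmt_ip=mgmt_ip,
                                     password_hash="$hash")
    print(f"--- {hostname}-startup-config ---")
    print(config)
```

Each rendered config can then be dropped into the TFTP directory next to its ztp script.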

Now I just need to make sure both my DHCP service and TFTP service are running:

# DHCP Server
andreasm@arista-dhcp:~/arista/tftpboot$ systemctl status isc-dhcp-server
● isc-dhcp-server.service - ISC DHCP IPv4 server
     Loaded: loaded (/lib/systemd/system/isc-dhcp-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2024-06-10 09:02:08 CEST; 6h ago
       Docs: man:dhcpd(8)
   Main PID: 3725 (dhcpd)
      Tasks: 4 (limit: 4557)
     Memory: 4.9M
        CPU: 15ms
     CGroup: /system.slice/isc-dhcp-server.service
             └─3725 dhcpd -user dhcpd -group dhcpd -f -4 -pf /run/dhcp-server/dhcpd.pid -cf /etc/dhcp/dhcpd.co>

# TFTPD server
andreasm@arista-dhcp:~/arista/tftpboot$ systemctl status tftpd-hpa.service
● tftpd-hpa.service - LSB: HPA's tftp server
     Loaded: loaded (/etc/init.d/tftpd-hpa; generated)
     Active: active (running) since Mon 2024-06-10 08:17:55 CEST; 7h ago
       Docs: man:systemd-sysv-generator(8)
    Process: 2414 ExecStart=/etc/init.d/tftpd-hpa start (code=exited, status=0/SUCCESS)
      Tasks: 1 (limit: 4557)
     Memory: 408.0K
        CPU: 39ms
     CGroup: /system.slice/tftpd-hpa.service
             └─2422 /usr/sbin/in.tftpd --listen --user tftp --address :69 --secure /home/andreasm/arista/tftpb>

That's it. If I have already powered on my vEOS appliances, they will very soon pick up their new config and reboot with the desired configuration. If not, just reset them or power them off and on again. Every time I deploy a new vEOS appliance I just have to update my DHCP server config with the additional host's MAC address and a corresponding config file.

Spine-Leaf - Desired Topology

A spine-leaf topology is a two-layer network architecture commonly used in data centers. It is designed to provide high-speed, low-latency, and highly available network connectivity. This topology is favored for its scalability and performance, especially in environments requiring large amounts of east-west traffic (server-to-server).
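The scaling arithmetic behind a spine-leaf design is simple: with S spines, L leafs, and k parallel links between each leaf-spine pair, the fabric has S x L x k point-to-point links, and every leaf sees S x k equal-cost uplink paths. A quick sketch using the numbers from this lab (2 spines, 3 leafs counting the border leaf, 2 links per pair):

```python
# Spine-leaf scaling arithmetic for the fabric built in this post:
# 2 spines, 3 leafs (leaf-1, leaf-2, borderleaf-1), 2 links per leaf-spine pair.
spines, leafs, links_per_pair = 2, 3, 2

p2p_links = spines * leafs * links_per_pair   # point-to-point /31 links in the fabric
ecmp_paths = spines * links_per_pair          # equal-cost uplink paths per leaf

print(p2p_links, ecmp_paths)  # -> 12 4
```

Those 4 equal-cost paths per leaf are exactly why the switch configs later in this post set maximum-paths 4 ecmp 4.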

In virtual environments, from regular virtual machines to containers running in Kubernetes, a large amount of east-west traffic is common. In this post I will be using a spine-leaf architecture.

Before I did any automated provisioning using Arista Validated Designs and Ansible, I deployed my vEOS appliances, gave them the number of network interfaces needed to support my intended use (below), and then configured them all manually using the CLI so I was sure I had a working configuration and no issues in my lab. I wanted to make sure I could deploy a spine-leaf topology, create some VLANs, attach some VMs to them, and check connectivity. Below is my desired topology:

spine-leaf

And here is the config I used on each of the switches:

Spine1

no aaa root
!
username admin role network-admin secret sha512 $hash/
!
switchport default mode routed
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname spine-1
!
spanning-tree mode mstp
!
system l1
   unsupported speed action error
   unsupported error-correction action error
!
interface Ethernet1
   description spine-leaf-1-downlink-1
   mtu 9214
   no switchport
   ip address 192.168.0.0/31
!
interface Ethernet2
   description spine-leaf-1-downlink-2
   mtu 9214
   no switchport
   ip address 192.168.0.2/31
!
interface Ethernet3
   description spine-leaf-2-downlink-3
   mtu 9214
   no switchport
   ip address 192.168.0.4/31
!
interface Ethernet4
   description spine-leaf-2-downlink-4
   mtu 9214
   no switchport
   ip address 192.168.0.6/31
!
interface Ethernet5
   description spine-leaf-3-downlink-5
   mtu 9214
   no switchport
   ip address 192.168.0.8/31
!
interface Ethernet6
   description spine-leaf-3-downlink-6
   mtu 9214
   no switchport
   ip address 192.168.0.10/31
!
interface Loopback0
   description spine-1-evpn-lo
   ip address 10.0.0.1/32
!
interface Management1
   ip address 172.18.5.71/24
!
ip routing
!
ip prefix-list PL-LOOPBACKS
   seq 10 permit 10.0.0.0/24 eq 32
!
ip route 0.0.0.0/0 172.18.5.2
!
route-map RM-LOOPBACKS permit 10
   match ip address prefix-list PL-LOOPBACKS
!
router bgp 65000
   router-id 10.0.0.1
   maximum-paths 4 ecmp 4
   neighbor UNDERLAY peer group
   neighbor UNDERLAY allowas-in 1
   neighbor UNDERLAY ebgp-multihop 4
   neighbor UNDERLAY send-community extended
   neighbor UNDERLAY maximum-routes 12000
   neighbor 192.168.0.1 peer group UNDERLAY
   neighbor 192.168.0.1 remote-as 65001
   neighbor 192.168.0.1 description leaf-1-u1
   neighbor 192.168.0.3 peer group UNDERLAY
   neighbor 192.168.0.3 remote-as 65001
   neighbor 192.168.0.3 description leaf-1-u2
   neighbor 192.168.0.5 peer group UNDERLAY
   neighbor 192.168.0.5 remote-as 65002
   neighbor 192.168.0.5 description leaf-2-u1
   neighbor 192.168.0.7 peer group UNDERLAY
   neighbor 192.168.0.7 remote-as 65002
   neighbor 192.168.0.7 description leaf-2-u2
   neighbor 192.168.0.9 peer group UNDERLAY
   neighbor 192.168.0.9 remote-as 65003
   neighbor 192.168.0.9 description borderleaf-1-u1
   neighbor 192.168.0.11 peer group UNDERLAY
   neighbor 192.168.0.11 remote-as 65003
   neighbor 192.168.0.11 description borderleaf-1-u2
   redistribute connected route-map RM-LOOPBACKS
   !
   address-family ipv4
      neighbor UNDERLAY activate
!
end

Spine2

no aaa root
!
username admin role network-admin secret sha512 $hash/
!
switchport default mode routed
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname spine-2
!
spanning-tree mode mstp
!
system l1
   unsupported speed action error
   unsupported error-correction action error
!
interface Ethernet1
   description spine-leaf-1-downlink-1
   mtu 9214
   no switchport
   ip address 192.168.1.0/31
!
interface Ethernet2
   description spine-leaf-1-downlink-2
   mtu 9214
   no switchport
   ip address 192.168.1.2/31
!
interface Ethernet3
   description spine-leaf-2-downlink-3
   mtu 9214
   no switchport
   ip address 192.168.1.4/31
!
interface Ethernet4
   description spine-leaf-2-downlink-4
   mtu 9214
   no switchport
   ip address 192.168.1.6/31
!
interface Ethernet5
   description spine-leaf-3-downlink-5
   mtu 9214
   no switchport
   ip address 192.168.1.8/31
!
interface Ethernet6
   description spine-leaf-3-downlink-6
   mtu 9214
   no switchport
   ip address 192.168.1.10/31
!
interface Loopback0
   ip address 10.0.0.2/32
!
interface Management1
   ip address 172.18.5.72/24
!
ip routing
!
ip prefix-list PL-LOOPBACKS
   seq 10 permit 10.0.0.0/24 eq 32
!
ip route 0.0.0.0/0 172.18.5.2
!
route-map RM-LOOPBACKS permit 10
   match ip address prefix-list PL-LOOPBACKS
!
router bgp 65000
   router-id 10.0.0.2
   maximum-paths 4 ecmp 4
   neighbor UNDERLAY peer group
   neighbor UNDERLAY allowas-in 1
   neighbor UNDERLAY ebgp-multihop 4
   neighbor UNDERLAY send-community extended
   neighbor UNDERLAY maximum-routes 12000
   neighbor 192.168.1.1 peer group UNDERLAY
   neighbor 192.168.1.1 remote-as 65001
   neighbor 192.168.1.1 description leaf-1-u3
   neighbor 192.168.1.3 peer group UNDERLAY
   neighbor 192.168.1.3 remote-as 65001
   neighbor 192.168.1.3 description leaf-1-u4
   neighbor 192.168.1.5 peer group UNDERLAY
   neighbor 192.168.1.5 remote-as 65002
   neighbor 192.168.1.5 description leaf-2-u3
   neighbor 192.168.1.7 peer group UNDERLAY
   neighbor 192.168.1.7 remote-as 65002
   neighbor 192.168.1.7 description leaf-2-u4
   neighbor 192.168.1.9 peer group UNDERLAY
   neighbor 192.168.1.9 remote-as 65003
   neighbor 192.168.1.9 description borderleaf-1-u3
   neighbor 192.168.1.11 peer group UNDERLAY
   neighbor 192.168.1.11 remote-as 65003
   neighbor 192.168.1.11 description borderleaf-1-u4
   redistribute connected route-map RM-LOOPBACKS
   !
   address-family ipv4
      neighbor UNDERLAY activate
!
end

Leaf-1

no aaa root
!
username admin role network-admin secret sha512 $hash
!
switchport default mode routed
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname leaf-1
!
spanning-tree mode mstp
no spanning-tree vlan-id 4094
!
spanning-tree mst configuration
   instance 1 vlan  1-4094
!
system l1
   unsupported speed action error
   unsupported error-correction action error
!
vlan 1070
   name subnet-70
!
vlan 1071
   name subnet-71
!
aaa authorization exec default local
!
interface Ethernet1
   description leaf-spine-1-uplink-1
   mtu 9000
   no switchport
   ip address 192.168.0.1/31
!
interface Ethernet2
   description leaf-spine-1-uplink-2
   mtu 9000
   no switchport
   ip address 192.168.0.3/31
!
interface Ethernet3
   description leaf-spine-2-uplink-3
   mtu 9000
   no switchport
   ip address 192.168.1.1/31
!
interface Ethernet4
   description leaf-spine-2-uplink-4
   mtu 9000
   no switchport
   ip address 192.168.1.3/31
!
interface Ethernet5
   mtu 1500
   switchport access vlan 1071
   switchport
   spanning-tree portfast
!
interface Ethernet6
   no switchport
!
interface Loopback0
   description leaf-1-lo
   ip address 10.0.0.3/32
!
interface Management1
   ip address 172.18.5.73/24
!
interface Vlan1070
   description subnet-70
   ip address virtual 10.70.0.1/24
!
interface Vlan1071
   description subnet-71
   ip address virtual 10.71.0.1/24
!
interface Vxlan1
   vxlan source-interface Loopback0
   vxlan udp-port 4789
   vxlan vlan 1070-1071 vni 10070-10071
!
ip virtual-router mac-address 00:1c:73:ab:cd:ef
!
ip routing
!
ip prefix-list PL-LOOPBACKS
   seq 10 permit 10.0.0.0/24 eq 32
!
ip route 0.0.0.0/0 172.18.5.2
!
route-map RM-LOOPBACKS permit 10
   match ip address prefix-list PL-LOOPBACKS
!
router bgp 65001
   router-id 10.0.0.3
   maximum-paths 4 ecmp 4
   neighbor OVERLAY peer group
   neighbor OVERLAY ebgp-multihop 5
   neighbor OVERLAY send-community extended
   neighbor UNDERLAY peer group
   neighbor UNDERLAY allowas-in 1
   neighbor UNDERLAY ebgp-multihop 4
   neighbor UNDERLAY send-community extended
   neighbor UNDERLAY maximum-routes 12000
   neighbor 10.0.0.4 peer group OVERLAY
   neighbor 10.0.0.4 remote-as 65002
   neighbor 10.0.0.4 update-source Loopback0
   neighbor 10.0.0.5 peer group OVERLAY
   neighbor 10.0.0.5 remote-as 65003
   neighbor 10.0.0.5 update-source Loopback0
   neighbor 192.168.0.0 peer group UNDERLAY
   neighbor 192.168.0.0 remote-as 65000
   neighbor 192.168.0.0 description spine-1-int-1
   neighbor 192.168.0.2 peer group UNDERLAY
   neighbor 192.168.0.2 remote-as 65000
   neighbor 192.168.0.2 description spine-1-int-2
   neighbor 192.168.1.0 peer group UNDERLAY
   neighbor 192.168.1.0 remote-as 65000
   neighbor 192.168.1.0 description spine-2-int-1
   neighbor 192.168.1.2 peer group UNDERLAY
   neighbor 192.168.1.2 remote-as 65000
   neighbor 192.168.1.2 description spine-2-int-2
   !
   vlan-aware-bundle V1070-1079
      rd 10.0.0.3:1070
      route-target both 10010:1
      redistribute learned
      vlan 1070-1079
   !
   address-family evpn
      neighbor OVERLAY activate
   !
   address-family ipv4
      neighbor UNDERLAY activate
      redistribute connected route-map RM-LOOPBACKS
!
end

Leaf-2

no aaa root
!
username admin role network-admin secret sha512 $hash
!
switchport default mode routed
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname leaf-2
!
spanning-tree mode mstp
no spanning-tree vlan-id 4094
!
spanning-tree mst configuration
   instance 1 vlan  1-4094
!
system l1
   unsupported speed action error
   unsupported error-correction action error
!
vlan 1070
   name subnet-70
!
vlan 1071
   name subnet-71
!
vlan 1072
!
aaa authorization exec default local
!
interface Ethernet1
   description leaf-spine-1-uplink-1
   mtu 9000
   no switchport
   ip address 192.168.0.5/31
!
interface Ethernet2
   description leaf-spine-1-uplink-2
   mtu 9000
   no switchport
   ip address 192.168.0.7/31
!
interface Ethernet3
   description leaf-spine-2-uplink-3
   mtu 9000
   no switchport
   ip address 192.168.1.5/31
!
interface Ethernet4
   description leaf-spine-2-uplink-4
   mtu 9000
   no switchport
   ip address 192.168.1.7/31
!
interface Ethernet5
   mtu 1500
   switchport access vlan 1070
   switchport
   spanning-tree portfast
!
interface Ethernet6
   no switchport
!
interface Loopback0
   ip address 10.0.0.4/32
!
interface Management1
   ip address 172.18.5.74/24
!
interface Vlan1070
   description subnet-70
   ip address virtual 10.70.0.1/24
!
interface Vlan1071
   description subnet-71
   ip address virtual 10.71.0.1/24
!
interface Vxlan1
   vxlan source-interface Loopback0
   vxlan udp-port 4789
   vxlan vlan 1070-1071 vni 10070-10071
!
ip virtual-router mac-address 00:1c:73:ab:cd:ef
!
ip routing
!
ip prefix-list PL-LOOPBACKS
   seq 10 permit 10.0.0.0/24 eq 32
!
ip route 0.0.0.0/0 172.18.5.2
!
route-map RM-LOOPBACKS permit 10
   match ip address prefix-list PL-LOOPBACKS
!
router bgp 65002
   router-id 10.0.0.4
   maximum-paths 4 ecmp 4
   neighbor OVERLAY peer group
   neighbor OVERLAY ebgp-multihop 5
   neighbor OVERLAY send-community extended
   neighbor UNDERLAY peer group
   neighbor UNDERLAY allowas-in 1
   neighbor UNDERLAY ebgp-multihop 4
   neighbor UNDERLAY send-community extended
   neighbor UNDERLAY maximum-routes 12000
   neighbor 10.0.0.3 peer group OVERLAY
   neighbor 10.0.0.3 remote-as 65001
   neighbor 10.0.0.3 update-source Loopback0
   neighbor 10.0.0.5 peer group OVERLAY
   neighbor 10.0.0.5 remote-as 65003
   neighbor 10.0.0.5 update-source Loopback0
   neighbor 192.168.0.4 peer group UNDERLAY
   neighbor 192.168.0.4 remote-as 65000
   neighbor 192.168.0.4 description spine-1-int-3
   neighbor 192.168.0.6 peer group UNDERLAY
   neighbor 192.168.0.6 remote-as 65000
   neighbor 192.168.0.6 description spine-1-int-4
   neighbor 192.168.1.4 peer group UNDERLAY
   neighbor 192.168.1.4 remote-as 65000
   neighbor 192.168.1.4 description spine-2-int-3
   neighbor 192.168.1.6 peer group UNDERLAY
   neighbor 192.168.1.6 remote-as 65000
   neighbor 192.168.1.6 description spine-2-int-4
   !
   vlan-aware-bundle V1070-1079
      rd 10.0.0.4:1070
      route-target both 10010:1
      redistribute learned
      vlan 1070-1079
   !
   address-family evpn
      neighbor OVERLAY activate
   !
   address-family ipv4
      neighbor UNDERLAY activate
      redistribute connected route-map RM-LOOPBACKS
!
end

Borderleaf-1

no aaa root
!
username admin role network-admin secret sha512 $hash
!
switchport default mode routed
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname borderleaf-1
!
spanning-tree mode mstp
no spanning-tree vlan-id 4094
!
spanning-tree mst configuration
   instance 1 vlan  1-4094
!
system l1
   unsupported speed action error
   unsupported error-correction action error
!
vlan 1079
   name subnet-wan
!
aaa authorization exec default local
!
interface Ethernet1
   description leaf-spine-1-uplink-1
   mtu 9214
   no switchport
   ip address 192.168.0.9/31
!
interface Ethernet2
   description leaf-spine-1-uplink-2
   mtu 9214
   no switchport
   ip address 192.168.0.11/31
!
interface Ethernet3
   description leaf-spine-2-uplink-3
   mtu 9214
   no switchport
   ip address 192.168.1.9/31
!
interface Ethernet4
   description leaf-spine-2-uplink-4
   mtu 9214
   no switchport
   ip address 192.168.1.11/31
!
interface Ethernet5
   switchport trunk allowed vlan 1070-1079
   switchport mode trunk
   switchport
!
interface Ethernet6
   no switchport
!
interface Loopback0
   ip address 10.0.0.5/32
!
interface Management1
   ip address 172.18.5.75/24
!
interface Vlan1079
   ip address virtual 10.79.0.1/24
!
interface Vxlan1
   vxlan source-interface Loopback0
   vxlan udp-port 4789
   vxlan vlan 1079 vni 10079
!
ip routing
!
ip prefix-list PL-LOOPBACKS
   seq 10 permit 10.0.0.0/24 eq 32
!
ip route 0.0.0.0/0 172.18.5.2
!
route-map RM-LOOPBACKS permit 10
   match ip address prefix-list PL-LOOPBACKS
!
router bgp 65003
   router-id 10.0.0.5
   maximum-paths 2 ecmp 2
   neighbor OVERLAY peer group
   neighbor OVERLAY ebgp-multihop 5
   neighbor OVERLAY send-community extended
   neighbor UNDERLAY peer group
   neighbor UNDERLAY allowas-in 1
   neighbor UNDERLAY ebgp-multihop 4
   neighbor UNDERLAY send-community extended
   neighbor UNDERLAY maximum-routes 12000
   neighbor 10.0.0.3 peer group OVERLAY
   neighbor 10.0.0.3 remote-as 65001
   neighbor 10.0.0.3 update-source Loopback0
   neighbor 10.0.0.4 peer group OVERLAY
   neighbor 10.0.0.4 remote-as 65002
   neighbor 10.0.0.4 update-source Loopback0
   neighbor 192.168.0.8 peer group UNDERLAY
   neighbor 192.168.0.8 remote-as 65000
   neighbor 192.168.0.8 description spine-1-int-5
   neighbor 192.168.0.10 peer group UNDERLAY
   neighbor 192.168.0.10 remote-as 65000
   neighbor 192.168.0.10 description spine-1-int-6
   neighbor 192.168.1.8 peer group UNDERLAY
   neighbor 192.168.1.8 remote-as 65000
   neighbor 192.168.1.8 description spine-2-int-5
   neighbor 192.168.1.10 peer group UNDERLAY
   neighbor 192.168.1.10 remote-as 65000
   neighbor 192.168.1.10 description spine-2-int-6
   !
   vlan-aware-bundle V1079
      rd 10.0.0.5:1079
      route-target both 10070:1
      redistribute learned
      vlan 1079
   !
   address-family evpn
      neighbor OVERLAY activate
   !
   address-family ipv4
      neighbor UNDERLAY activate
      redistribute connected route-map RM-LOOPBACKS
!
end

With the configurations above manually created, I had a working spine-leaf topology in my lab. These configs will also be very interesting to compare against later on. A quick note on my config: I am using two distinct point-to-point links from every leaf to each spine.
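The underlay addressing above follows a strict pattern: spine-1's links are consecutive /31s carved out of 192.168.0.0/24 and spine-2's out of 192.168.1.0/24, with the spine taking the even address and the leaf the odd one. Python's ipaddress module can reproduce the same plan (the p2p_links helper name is my own):

```python
import ipaddress

def p2p_links(block: str, count: int):
    """Return the first `count` /31 subnets of a block, one per spine downlink."""
    network = ipaddress.ip_network(block)
    return list(network.subnets(new_prefix=31))[:count]

# Spine-1 has six downlinks (Ethernet1-6), addressed from 192.168.0.0/24.
for port, link in enumerate(p2p_links("192.168.0.0/24", 6), start=1):
    spine_ip, leaf_ip = link[0], link[1]   # even address = spine, odd = leaf
    print(f"Ethernet{port}: spine {spine_ip}/31 <-> leaf {leaf_ip}/31")
```

The last printed pair, 192.168.0.10/31 and 192.168.0.11/31, matches spine-1's Ethernet6 and borderleaf-1's second uplink in the configs above.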

My physical lab topology

I think it also makes sense to quickly go over how my lab is configured. The diagram below illustrates my two Proxmox hosts, 1 and 2, connected to my physical switch on ports 49/51 and 50/52 respectively. The reason I bring this up is that in certain scenarios I don't want certain VLANs to be available on the trunks to both hosts, such as the downlinks from the vEOS appliances to the attached test VMs; this would not be the case in the real world either, and it just confuses things.

physical-topology

Below is how I interconnect all my vEOS appliances, separating all point-to-point connections onto their own dedicated VLANs. This is of course not necessary in "real-world" scenarios, but again, this is all a virtual environment (including the vEOS switches). All the VLANs are configured on the physical switch illustrated above and made available on the trunks to the respective Proxmox hosts.

vlan-separation

I have also divided the spines, leaf-1, and leaf-2 between the two hosts; as I only have two hosts, borderleaf-1 is placed on the same host as leaf-2.

vm-placement

Below is the VLAN tag for each vEOS appliance:

Spine-1

spine-1-vlan

Spine-2

spine-2-vlans

Leaf-1

leaf-1-vlans

For the leaves, the last two network cards, net5 and net6, are used as host downlinks and are not involved in forming the spine/leaf fabric.

Leaf-2

leaf-2-vlans

Borderleaf-1

borderleaf-1-vlans

After verifying that the configuration above worked when applied manually, I reset all the switches back to factory settings. They will get their initial config from Zero-Touch Provisioning and be ready to be configured again.

The next chapters are about automating the configuration of the vEOS switches to form a spine/leaf topology using Ansible. To get started I used Arista's very well documented Arista Validated Design here. More on this in the coming chapters.

Arista Validated Designs (AVD)

Arista Validated Designs (AVD) is an extensible data model that defines Arista’s Unified Cloud Network architecture as “code”.

Arista.avd is an Ansible collection for Arista Validated Designs. It’s maintained by Arista and accepts third-party contributions on GitHub at aristanetworks/avd.

While Ansible is the core automation engine, AVD is the Ansible collection described above. It provides roles, modules, and plugins that allow the user to generate and deploy best-practice configurations to Arista-based networks of various design types: Data Center, Campus and Wide Area Networks.

Source: Arista https://avd.arista.com/

Arista Validated Design is a very well maintained project; a quick look at the GitHub repo shows that updates land very frequently, with the latest release three weeks ago at the time of writing this post.

The Arista Validated Designs webpage avd.arista.com is very well structured, and the documentation for getting started with Arista Validated Design is brilliant. It also includes some example designs, like Single DC L3LS, Dual DC L3LS, L2LS Fabric, Campus Fabric and ISIS-LDP IPVPN, to further simplify getting started.

I will base my deployment on the Single DC L3LS example, with some modifications to achieve a design similar to the one illustrated earlier. The major modifications are removing some of the leaf switches and dropping MLAG, keeping it as close to my initial design as possible.

Preparing my environment for AVD and AVD collection requirements

To get started using Ansible I find it best to create a dedicated Python Environment to keep all the different requirements isolated from other projects. This means I can run different versions and packages within their own dedicated virtual environments without them interfering with other environments.

So before installing any of AVD's requirements, I will start by creating a folder for my AVD project:

1andreasm@linuxmgmt01:~$ mkdir arista_validated_design
2andreasm@linuxmgmt01:~$ cd arista_validated_design/
3andreasm@linuxmgmt01:~/arista_validated_design$

Then I will create my Python Virtual Environment.

1andreasm@linuxmgmt01:~/arista_validated_design$ python3 -m venv avd-environment
2andreasm@linuxmgmt01:~/arista_validated_design$ ls
3avd-environment
4andreasm@linuxmgmt01:~/arista_validated_design$ cd avd-environment/
5andreasm@linuxmgmt01:~/arista_validated_design/avd-environment$ ls
6bin  include  lib  lib64  pyvenv.cfg

This will create a subfolder with the name of the environment. Now all I need to do is activate the environment so I can install the necessary requirements for AVD.

1andreasm@linuxmgmt01:~/arista_validated_design$ source avd-environment/bin/activate
2(avd-environment) andreasm@linuxmgmt01:~/arista_validated_design$

Notice the (avd-environment) prefix, indicating I am now in my virtual environment called avd-environment. With a dedicated Python environment I am ready to install all the dependencies for the AVD collection without risking any conflicts with other environments. Below are the AVD collection requirements:

  • Python 3.9 or later
  • ansible-core from 2.15.0 to 2.17.x
  • arista.avd collection
  • additional Python requirements:
 1# PyAVD must follow the exact same version as the Ansible collection.
 2# For development this should be installed as an editable install as specified in requirement-dev.txt
 3pyavd==4.9.0-dev0
 4netaddr>=0.7.19
 5Jinja2>=3.0.0
 6treelib>=1.5.5
 7cvprac>=1.3.1
 8jsonschema>=4.10.3
 9referencing>=0.35.0
10requests>=2.27.0
11PyYAML>=6.0.0
12deepmerge>=1.1.0
13cryptography>=38.0.4
14# No anta requirement until the eos_validate_state integration is out of preview.
15# anta>=1.0.0
16aristaproto>=0.1.1
  • Modify ansible.cfg to support jinja2 extensions.

Install AVD Collection and requirements

From my newly created Python Environment I will install the necessary components to get started with AVD.

The first requirement is the Python version:

1(avd-environment) andreasm@linuxmgmt01:~/arista_validated_design$ python --version
2Python 3.12.1

The next requirement is installing ansible-core, which I get by installing the ansible package:

 1(avd-environment) andreasm@linuxmgmt01:~/arista_validated_design$ python3 -m pip install ansible
 2Collecting ansible
 3  Obtaining dependency information for ansible from https://files.pythonhosted.org/packages/28/7c/a5f708b7b033f068a8ef40db5c993bee4cfafadd985d48dfe44db8566fc6/ansible-10.0.1-py3-none-any.whl.metadata
 4  Using cached ansible-10.0.1-py3-none-any.whl.metadata (8.2 kB)
 5Collecting ansible-core~=2.17.0 (from ansible)
 6  Obtaining dependency information for ansible-core~=2.17.0 from https://files.pythonhosted.org/packages/2f/77/97fb1880abb485f1df31b36822c537330db86bea4105fdea6e1946084c16/ansible_core-2.17.0-py3-none-any.whl.metadata
 7  Using cached ansible_core-2.17.0-py3-none-any.whl.metadata (6.9 kB)
 8...
 9Installing collected packages: resolvelib, PyYAML, pycparser, packaging, MarkupSafe, jinja2, cffi, cryptography, ansible-core, ansible
10Successfully installed MarkupSafe-2.1.5 PyYAML-6.0.1 ansible-10.0.1 ansible-core-2.17.0 cffi-1.16.0 cryptography-42.0.8 jinja2-3.1.4 packaging-24.1 pycparser-2.22 resolvelib-1.0.1
1(avd-environment) andreasm@linuxmgmt01:~/arista_validated_design$ ansible --version
2ansible [core 2.17.0]
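The version printed above falls inside the supported range (ansible-core >= 2.15.0 and < 2.18.0). As a side note, that range requirement is easy to express as a simple version gate; a minimal sketch (the helper names are my own, not part of AVD):

```python
# Sketch of the AVD ansible-core version requirement as a range check.
# Hypothetical helpers for illustration; pass in the version reported
# by `ansible --version`.

def version_tuple(version: str) -> tuple[int, ...]:
    """Turn '2.17.0' into (2, 17, 0) for easy comparison."""
    return tuple(int(part) for part in version.split("."))

def meets_avd_requirement(installed: str) -> bool:
    """True when installed ansible-core is >= 2.15.0 and < 2.18.0."""
    return version_tuple("2.15.0") <= version_tuple(installed) < version_tuple("2.18.0")

print(meets_avd_requirement("2.17.0"))  # True
print(meets_avd_requirement("2.14.5"))  # False
```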

The third requirement is to install the arista.avd collection:

1(avd-environment) andreasm@linuxmgmt01:~/arista_validated_design$ ansible-galaxy collection install arista.avd
2Starting galaxy collection install process
3[WARNING]: Collection arista.cvp does not support Ansible version 2.17.0
4Process install dependency map
5Starting collection install process
6Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/arista-avd-4.8.0.tar.gz to /home/andreasm/.ansible/tmp/ansible-local-66927_qwu6ou1/tmp33cqptq7/arista-avd-4.8.0-p_88prjp
7Installing 'arista.avd:4.8.0' to '/home/andreasm/.ansible/collections/ansible_collections/arista/avd'
8arista.avd:4.8.0 was installed successfully

Then the fourth requirement: installing the additional Python requirements:

1(avd-environment) andreasm@linuxmgmt01:~/arista_validated_design$ export ARISTA_AVD_DIR=$(ansible-galaxy collection list arista.avd --format yaml | head -1 | cut -d: -f1)
2(avd-environment) andreasm@linuxmgmt01:~/arista_validated_design$ pip3 install -r ${ARISTA_AVD_DIR}/arista/avd/requirements.txt
3Collecting netaddr>=0.7.19 (from -r /home/andreasm/.ansible/collections/ansible_collections/arista/avd/requirements.txt (line 1))

By pointing pip at the requirements.txt file, it will grab all the necessary requirements.

And the last requirement: modifying ansible.cfg to support Jinja2 extensions. I will get back to this in a second; first I will copy the AVD examples to my current folder, since every example already contains an ansible.cfg file.

 1(avd-environment) andreasm@linuxmgmt01:~/arista_validated_design$ ansible-playbook arista.avd.install_examples
 2[WARNING]: No inventory was parsed, only implicit localhost is available
 3[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
 4
 5PLAY [Install Examples] *****************************************************************************************************************************************************************************************
 6
 7TASK [Copy all examples to /home/andreasm/arista_validated_design] **********************************************************************************************************************************************
 8changed: [localhost]
 9
10PLAY RECAP ******************************************************************************************************************************************************************************************************
11localhost                  : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Now let's have a look at the contents of my folder:

 1(avd-environment) andreasm@linuxmgmt01:~/arista_validated_design$ ll
 2total 32
 3drwxrwxr-x  8 andreasm andreasm 4096 Jun 12 06:08 ./
 4drwxr-xr-x 43 andreasm andreasm 4096 Jun 11 05:24 ../
 5drwxrwxr-x  5 andreasm andreasm 4096 Jun 11 06:02 avd-environment/
 6drwxrwxr-x  7 andreasm andreasm 4096 Jun 12 06:07 campus-fabric/
 7drwxrwxr-x  7 andreasm andreasm 4096 Jun 12 06:08 dual-dc-l3ls/
 8drwxrwxr-x  7 andreasm andreasm 4096 Jun 12 06:07 isis-ldp-ipvpn/
 9drwxrwxr-x  7 andreasm andreasm 4096 Jun 12 06:08 l2ls-fabric/
10drwxrwxr-x  8 andreasm andreasm 4096 Jun 12 06:09 single-dc-l3ls/

And by taking a look inside the single-dc-l3ls folder (which I am basing my configuration on), I can see there is already an ansible.cfg file:

 1(avd-environment) andreasm@linuxmgmt01:~/arista_validated_design/single-dc-l3ls$ ll
 2total 92
 3drwxrwxr-x 8 andreasm andreasm  4096 Jun 12 06:09 ./
 4drwxrwxr-x 8 andreasm andreasm  4096 Jun 12 06:08 ../
 5-rw-rw-r-- 1 andreasm andreasm   109 Jun 12 06:08 ansible.cfg # here
 6-rw-rw-r-- 1 andreasm andreasm   422 Jun 12 06:08 build.yml
 7drwxrwxr-x 2 andreasm andreasm  4096 Jun 12 06:09 config_backup/
 8-rw-rw-r-- 1 andreasm andreasm   368 Jun 12 06:08 deploy-cvp.yml
 9-rw-rw-r-- 1 andreasm andreasm   260 Jun 12 06:08 deploy.yml
10drwxrwxr-x 4 andreasm andreasm  4096 Jun 12 06:09 documentation/
11drwxrwxr-x 2 andreasm andreasm  4096 Jun 12 06:09 group_vars/
12drwxrwxr-x 2 andreasm andreasm  4096 Jun 12 06:08 images/
13drwxrwxr-x 4 andreasm andreasm  4096 Jun 12 06:09 intended/
14-rw-rw-r-- 1 andreasm andreasm  1403 Jun 12 06:08 inventory.yml
15-rw-rw-r-- 1 andreasm andreasm 36936 Jun 12 06:08 README.md
16drwxrwxr-x 2 andreasm andreasm  4096 Jun 12 06:09 switch-basic-configurations/

Now to the last requirement. When I open the ansible.cfg file using vim, I see this content:

1[defaults]
2inventory=inventory.yml
3jinja2_extensions = jinja2.ext.loopcontrols,jinja2.ext.do,jinja2.ext.i18n

It already contains the required Jinja2 extensions. What I can add is the following setting, as recommended by AVD:

1[defaults]
2inventory=inventory.yml
3jinja2_extensions = jinja2.ext.loopcontrols,jinja2.ext.do,jinja2.ext.i18n
4duplicate_dict_key=error # added this

That's it for the preparations. Now it is time to do some network automation.

For more details and instructions, head over to the avd.arista.com webpage as it is very well documented there.

Preparing AVD example files

Getting started with Arista Validated Design is quite easy, as the necessary files are very well structured and come pre-populated with example values, making them easy to follow. In my single-dc-l3ls folder there are a couple of files inside the group_vars folder I need to edit to match my environment. When these have been edited it is time to run the playbooks, but before getting there I will go through the files and how I have edited them.

As mentioned above, I will base my deployment on the example single-dc-l3ls, with some minor modifications such as removing some leaf switches and adding some uplinks. Inside the single-dc-l3ls folder, which was created when I copied the examples into my environment earlier, I find all the content related to such a deployment/topology.

Below are the files I need to edit, numbered in the order they are configured:

 1├── ansible.cfg # added the optional duplicate_dict_key  
 2├── group_vars/
 3│ ├── CONNECTED_ENDPOINTS.yml #### 7 ####
 4│ ├── DC1_L2_LEAVES.yml ### N/A ####
 5│ ├── DC1_L3_LEAVES.yml #### 3 ####
 6│ ├── DC1_SPINES.yml #### 2 ####
 7│ ├── DC1.yml #### 5 ####
 8│ ├── FABRIC.yml #### 4 ####
 9│ └── NETWORK_SERVICES.yml #### 6 ####
10├── inventory.yml #### 1 ####

First out is the inventory.yml

This file contains a list of the hosts/switches that should be included in the configuration; in my environment it looks like this:

 1---
 2all:
 3  children:
 4    FABRIC:
 5      children:
 6        DC1:
 7          children:
 8            DC1_SPINES:
 9              hosts:
10                dc1-spine1:
11                  ansible_host: 172.18.100.101
12                dc1-spine2:
13                  ansible_host: 172.18.100.102
14            DC1_L3_LEAVES:
15              hosts:
16                dc1-leaf1:
17                  ansible_host: 172.18.100.103
18                dc1-leaf2:
19                  ansible_host: 172.18.100.104
20                dc1-borderleaf1:
21                  ansible_host: 172.18.100.105
22
23    NETWORK_SERVICES:
24      children:
25        DC1_L3_LEAVES:
26    CONNECTED_ENDPOINTS:
27      children:
28        DC1_L3_LEAVES:

I have removed the L2 Leaves and the corresponding group, as my plan is to deploy this design:

2-spine-3-leafs
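As a side note, the nested children/hosts structure in inventory.yml is plain YAML, and it can be handy to sanity-check the host list before handing it to Ansible. A minimal sketch, with hand-built dicts standing in for the output of yaml.safe_load():

```python
# Dicts mirroring the inventory.yml structure above; in practice this
# would come from yaml.safe_load(open("inventory.yml")).
inventory = {
    "all": {"children": {"FABRIC": {"children": {"DC1": {"children": {
        "DC1_SPINES": {"hosts": {
            "dc1-spine1": {"ansible_host": "172.18.100.101"},
            "dc1-spine2": {"ansible_host": "172.18.100.102"},
        }},
        "DC1_L3_LEAVES": {"hosts": {
            "dc1-leaf1": {"ansible_host": "172.18.100.103"},
            "dc1-leaf2": {"ansible_host": "172.18.100.104"},
            "dc1-borderleaf1": {"ansible_host": "172.18.100.105"},
        }},
    }}}}}}
}

def collect_hosts(node: dict) -> dict:
    """Recursively walk group children and gather host -> ansible_host."""
    found = {name: attrs["ansible_host"]
             for name, attrs in node.get("hosts", {}).items()}
    for child in node.get("children", {}).values():
        found.update(collect_hosts(child))
    return found

print(collect_hosts(inventory["all"]))
```

Running this flattens the group hierarchy into the five devices with their management IPs, which should match what the playbooks will target.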

When done editing the inventory.yml, I will cd into the group_vars folder for the next files to be edited.

The first two files in this folder are the device-type files DC1_SPINES.yml and DC1_L3_LEAVES.yml, which define which "role" each device will have in the topology above (spine, L2 or L3 leaf). I will leave these with their default content.

Next up is the FABRIC.yml which configures "global" settings on all devices:

 1---
 2# Ansible connectivity definitions
 3# eAPI connectivity via HTTPS is specified (as opposed to CLI via SSH)
 4ansible_connection: ansible.netcommon.httpapi
 5# Specifies that we are indeed using Arista EOS
 6ansible_network_os: arista.eos.eos
 7# This user/password must exist on the switches to enable Ansible access
 8ansible_user: ansible
 9ansible_password: password
10# User escalation (to enter enable mode)
11ansible_become: true
12ansible_become_method: enable
13# Use SSL (HTTPS)
14ansible_httpapi_use_ssl: true
15# Do not try to validate certs
16ansible_httpapi_validate_certs: false
17
18# Common AVD group variables
19fabric_name: FABRIC
20
21# Define underlay and overlay routing protocol to be used
22underlay_routing_protocol: ebgp
23overlay_routing_protocol: ebgp
24
25# Local users
26local_users:
27  # Define a new user, which is called "ansible"
28  - name: ansible
29    privilege: 15
30    role: network-admin
31    # Password set to "ansible". Same string as the device generates when configuring a username.
32    sha512_password: $hash/
33  - name: admin
34    privilege: 15
35    role: network-admin
36    no_password: true
37
38# BGP peer groups passwords
39bgp_peer_groups:
40  # all passwords set to "arista"
41  evpn_overlay_peers:
42    password: Q4fqtbqcZ7oQuKfuWtNGRQ==
43  ipv4_underlay_peers:
44    password: 7x4B4rnJhZB438m9+BrBfQ==
45
46# P2P interfaces MTU, includes VLANs 4093 and 4094 that are over peer-link
47# If you're running vEOS-lab or cEOS, you should use MTU of 1500 instead as shown in the following line
48# p2p_uplinks_mtu: 9214
49p2p_uplinks_mtu: 1500
50
51# Set default uplink, downlink, and MLAG interfaces based on node type
52default_interfaces:
53  - types: [ spine ]
54    platforms: [ default ]
55    #uplink_interfaces: [ Ethernet1-2 ]
56    downlink_interfaces: [ Ethernet1-6 ]
57  - types: [ l3leaf ]
58    platforms: [ default ]
59    uplink_interfaces: [ Ethernet1-4 ]
60    downlink_interfaces: [ Ethernet5-6 ]
61
62# internal vlan reservation
63internal_vlan_order:
64  allocation: ascending
65  range:
66    beginning: 1100
67    ending: 1300
68
69
70# DNS Server
71name_servers:
72  - 10.100.1.7
73
74# NTP Servers IP or DNS name, first NTP server will be preferred, and sourced from Management VRF
75ntp_settings:
76  server_vrf: use_mgmt_interface_vrf
77  servers:
78    - name: dns-bind-01.int.guzware.net

In the FABRIC.yml I have added this section:

1# internal vlan reservation
2internal_vlan_order:
3  allocation: ascending
4  range:
5    beginning: 1100
6    ending: 1300

After editing FABRIC.yml it's time to edit DC1.yml. This file configures the unique BGP settings and maps the specific uplinks/downlinks in the spine-leaf fabric. As I have decided to use two distinct uplinks per leaf to each spine, I need to edit DC1.yml accordingly:

  1---
  2# Default gateway used for the management interface
  3mgmt_gateway: 172.18.100.2
  4
  5
  6# Spine switch group
  7spine:
  8  # Definition of default values that will be configured to all nodes defined in this group
  9  defaults:
 10    # Set the relevant platform as each platform has different default values in Ansible AVD
 11    platform: vEOS-lab
 12    # Pool of IPv4 addresses to configure interface Loopback0 used for BGP EVPN sessions
 13    loopback_ipv4_pool: 10.0.0.0/27
 14    # ASN to be used by BGP
 15    bgp_as: 65000
 16
 17  # Definition of nodes contained in this group.
 18  # Specific configuration of device must take place under the node definition. Each node inherits all values defined under 'defaults'
 19  nodes:
 20    # Name of the node to be defined (must be consistent with definition in inventory)
 21    - name: dc1-spine1
 22      # Device ID definition. An integer number used for internal calculations (ie. IPv4 address of the loopback_ipv4_pool among others)
 23      id: 1
 24      # Management IP to be assigned to the management interface
 25      mgmt_ip: 172.18.100.101/24
 26
 27    - name: dc1-spine2
 28      id: 2
 29      mgmt_ip: 172.18.100.102/24
 30
 31# L3 Leaf switch group
 32l3leaf:
 33  defaults:
 34    # Set the relevant platform as each platform has different default values in Ansible AVD
 35    platform: vEOS-lab
 36    # Pool of IPv4 addresses to configure interface Loopback0 used for BGP EVPN sessions
 37    loopback_ipv4_pool: 10.0.0.0/27
 38    # Offset all assigned loopback IP addresses.
 39    # Required when the < loopback_ipv4_pool > is same for 2 different node_types (like spine and l3leaf) to avoid over-lapping IPs.
 40    # For example, set the minimum offset l3leaf.defaults.loopback_ipv4_offset: < total # spine switches > or vice versa.
 41    loopback_ipv4_offset: 2
 42    # Definition of pool of IPs to be used as Virtual Tunnel EndPoint (VXLAN origin and destination IPs)
 43    vtep_loopback_ipv4_pool: 10.255.1.0/27
 44    # Uplink interfaces on each leaf and the spine each one connects to (two uplinks per spine)
 45    uplink_interfaces: ['Ethernet1', 'Ethernet2', 'Ethernet3', 'Ethernet4']
 46    uplink_switches: ['dc1-spine1', 'dc1-spine1', 'dc1-spine2', 'dc1-spine2']
 47    # Definition of pool of IPs to be used in P2P links
 48    uplink_ipv4_pool: 192.168.0.0/26
 49    # Definition of pool of IPs to be used for MLAG peer-link connectivity
 50    #mlag_peer_ipv4_pool: 10.255.1.64/27
 51    # iBGP Peering between MLAG peers
 52    #mlag_peer_l3_ipv4_pool: 10.255.1.96/27
 53    # Virtual router mac for VNIs assigned to Leaf switches in format xx:xx:xx:xx:xx:xx
 54    virtual_router_mac_address: 00:1c:73:00:00:99
 55    spanning_tree_priority: 4096
 56    spanning_tree_mode: mstp
 57
 58  # If two nodes (and only two) are in the same node_group, they will automatically form an MLAG pair
 59  node_groups:
 60    # Definition of a node group that will include two devices in MLAG.
 61    # Definitions under the group will be inherited by both nodes in the group
 62    - group: DC1_L3_LEAF1
 63      # ASN to be used by BGP for the group. Both devices in the MLAG pair will use the same BGP ASN
 64      bgp_as: 65001
 65      nodes:
 66        # Definition of hostnames under the node_group
 67        - name: dc1-leaf1
 68          id: 1
 69          mgmt_ip: 172.18.100.103/24
 70          # Definition of the port to be used in the uplink device facing this device.
 71          # Note that the number of elements in this list must match the length of 'uplink_switches' as well as 'uplink_interfaces'
 72          uplink_switch_interfaces:
 73            - Ethernet1
 74            - Ethernet2
 75            - Ethernet1
 76            - Ethernet2
 77
 78    - group: DC1_L3_LEAF2
 79      bgp_as: 65002
 80      nodes:
 81        - name: dc1-leaf2
 82          id: 2
 83          mgmt_ip: 172.18.100.104/24
 84          uplink_switch_interfaces:
 85            - Ethernet3
 86            - Ethernet4
 87            - Ethernet3
 88            - Ethernet4
 89
 90    - group: DC1_L3_BORDERLEAF1
 91      bgp_as: 65003
 92      nodes:
 93        - name: dc1-borderleaf1
 94          id: 3
 95          mgmt_ip: 172.18.100.105/24
 96          uplink_switch_interfaces:
 97            - Ethernet5
 98            - Ethernet6
 99            - Ethernet5
100            - Ethernet6
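A note on the addressing above: each Loopback0 address is carved out of loopback_ipv4_pool by indexing with the node id plus loopback_ipv4_offset, which is why the offset of 2 keeps the three leaves clear of the two spines even though both groups share 10.0.0.0/27. A sketch of that scheme with Python's ipaddress module (my own reconstruction of the pattern, not AVD's actual code):

```python
import ipaddress

def loopback0(pool: str, node_id: int, offset: int = 0) -> str:
    """Pick the (node_id + offset)-th address from the loopback pool."""
    network = ipaddress.ip_network(pool)
    return str(network[node_id + offset])

# Spines: id 1 and 2, no offset
print(loopback0("10.0.0.0/27", 1))            # 10.0.0.1 (dc1-spine1)
print(loopback0("10.0.0.0/27", 2))            # 10.0.0.2 (dc1-spine2)
# Leaves share the pool, so offset 2 avoids overlapping the spines
print(loopback0("10.0.0.0/27", 1, offset=2))  # 10.0.0.3 (dc1-leaf1)
print(loopback0("10.0.0.0/27", 3, offset=2))  # 10.0.0.5 (dc1-borderleaf1)
```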

When I am satisfied with DC1.yml I will continue with NETWORK_SERVICES.yml, which configures the respective VRFs with their VNI/VXLAN mappings as well as the L2 VLANs.

 1---
 2tenants:
 3  # Definition of tenants. Additional level of abstraction to VRFs
 4  - name: TENANT1
 5    # Number used to generate the VNI of each VLAN by adding the VLAN number in this tenant.
 6    mac_vrf_vni_base: 10000
 7    vrfs:
 8      # VRF definitions inside the tenant.
 9      - name: VRF10
10        # VRF VNI definition.
11        vrf_vni: 10
12        # Enable VTEP Network diagnostics
13        # This will create a loopback with virtual source-nat enable to perform diagnostics from the switch.
14        vtep_diagnostic:
15          # Loopback interface number
16          loopback: 10
17          # Loopback ip range, a unique ip is derived from this ranged and assigned
18          # to each l3 leaf based on it's unique id.
19          loopback_ip_range: 10.255.10.0/27
20        svis:
21          # SVI definitions.
22          - id: 1072
23            # SVI Description
24            name: VRF10_VLAN1072
25            enabled: true
26            # IP anycast gateway to be used in the SVI in every leaf.
27            ip_address_virtual: 10.72.10.1/24
28          - id: 1073
29            name: VRF10_VLAN1073
30            enabled: true
31            ip_address_virtual: 10.73.10.1/24
32      - name: VRF11
33        vrf_vni: 11
34        vtep_diagnostic:
35          loopback: 11
36          loopback_ip_range: 10.255.11.0/27
37        svis:
38          - id: 1074
39            name: VRF11_VLAN1074
40            enabled: true
41            ip_address_virtual: 10.74.11.1/24
42          - id: 1075
43            name: VRF11_VLAN1075
44            enabled: true
45            ip_address_virtual: 10.75.11.1/24
46
47    l2vlans:
48      # These are pure L2 vlans. They do not have a SVI defined in the l3leafs and they will be bridged inside the VXLAN fabric
49      - id: 1070
50        name: L2_VLAN1070
51      - id: 1071
52        name: L2_VLAN1071
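The mac_vrf_vni_base comment above boils down to a simple formula: the L2 VNI for a VLAN is the base plus the VLAN id. A trivial sketch:

```python
MAC_VRF_VNI_BASE = 10000  # from the tenant definition above

def l2vni(vlan_id: int, base: int = MAC_VRF_VNI_BASE) -> int:
    """VXLAN network identifier for a VLAN: base + VLAN id."""
    return base + vlan_id

# The SVIs and L2 VLANs defined above map to:
for vlan in (1070, 1071, 1072, 1073, 1074, 1075):
    print(vlan, "->", l2vni(vlan))  # e.g. 1072 -> 11072
```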

And finally CONNECTED_ENDPOINTS.yml, which configures the actual access/trunk ports for host connections/endpoints.

 1---
 2# Definition of connected endpoints in the fabric.
 3servers:
 4  # Name of the defined server.
 5  - name: dc1-leaf1-vm-server1
 6    # Definition of adapters on the server.
 7    adapters:
 8        # Name of the server interfaces that will be used in the description of each interface
 9      - endpoint_ports: [ VM1 ]
10        # Device ports where the server ports are connected.
11        switch_ports: [ Ethernet5 ]
12        # Device names where the server ports are connected.
13        switches: [ dc1-leaf1 ]
14        # VLANs that will be configured on these ports.
15        vlans: 1071
16        # Native VLAN to be used on these ports.
17        #native_vlan: 4092
18        # L2 mode of the port.
19        mode: access
20        # Spanning tree portfast configuration on this port.
21        spanning_tree_portfast: edge
22        # Definition of the pair of ports as port channel.
23        #port_channel:
24          # Description of the port channel interface.
25          #description: PortChannel dc1-leaf1-server1
26          # Port channel mode for LACP.
27          #mode: active
28
29      - endpoint_ports: [ VM2 ]
30        switch_ports: [ Ethernet6 ]
31        switches: [ dc1-leaf1 ]
32        vlans: 1072
33        mode: access
34        spanning_tree_portfast: edge
35
36  - name: dc1-leaf2-server1
37    adapters:
38      - endpoint_ports: [ VM3 ]
39        switch_ports: [ Ethernet5 ]
40        switches: [ dc1-leaf2 ]
41        vlans: 1070
42        native_vlan: 4092
43        mode: access
44        spanning_tree_portfast: edge
45        #port_channel:
46        #  description: PortChannel dc1-leaf2-server1
47        #  mode: active
48
49      - endpoint_ports: [ VM4 ]
50        switch_ports: [ Ethernet6 ]
51        switches: [ dc1-leaf2 ]
52        vlans: 1073
53        mode: access
54        spanning_tree_portfast: edge
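To make the mapping from these YAML keys to device configuration concrete, below is a rough sketch of the kind of access-port stanza the eos_cli_config_gen role renders from one adapter entry (illustrative only; the real role supports far more keys and its exact output may differ):

```python
def render_access_port(adapter: dict) -> str:
    """Render a minimal EOS access-port stanza from one adapter entry."""
    lines = []
    for port, endpoint in zip(adapter["switch_ports"], adapter["endpoint_ports"]):
        lines += [
            f"interface {port}",
            f"   description {endpoint}",
            f"   switchport access vlan {adapter['vlans']}",
            "   switchport mode access",
        ]
        if adapter.get("spanning_tree_portfast") == "edge":
            lines.append("   spanning-tree portfast edge")
        lines.append("!")
    return "\n".join(lines)

# The first adapter from dc1-leaf1-vm-server1 above
adapter = {
    "endpoint_ports": ["VM1"],
    "switch_ports": ["Ethernet5"],
    "vlans": 1071,
    "mode": "access",
    "spanning_tree_portfast": "edge",
}
print(render_access_port(adapter))
```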

Now that all the YAML files have been edited accordingly, it is time for what everyone has been waiting for: applying the configuration above and watching some magic happen.

Ansible build and deploy

Going one folder back up (to the root of my chosen example folder, single-dc-l3ls), I find two files of interest: build.yml and deploy.yml.

Before sending the actual configuration to the devices, I will start by running build.yml. It acts as a dry run for error checking, but it also builds out the full configuration for me to inspect before the devices themselves are configured. It will also create some dedicated files under the documentation folder (more on that later).

Let's run build.yml:

 1(arista_avd) andreasm@linuxmgmt01:~/arista/andreas-spine-leaf$ ansible-playbook build.yml
 2
 3PLAY [Build Configurations and Documentation] *******************************************************************************************************************************************************************
 4
 5TASK [arista.avd.eos_designs : Verify Requirements] *************************************************************************************************************************************************************
 6AVD version 4.8.0
 7Use -v for details.
 8[WARNING]: Collection arista.cvp does not support Ansible version 2.17.0
 9ok: [dc1-spine1 -> localhost]
10
11TASK [arista.avd.eos_designs : Create required output directories if not present] *******************************************************************************************************************************
12ok: [dc1-spine1 -> localhost] => (item=/home/andreasm/arista/andreas-spine-leaf/intended/structured_configs)
13ok: [dc1-spine1 -> localhost] => (item=/home/andreasm/arista/andreas-spine-leaf/documentation/fabric)
14
15TASK [arista.avd.eos_designs : Set eos_designs facts] ***********************************************************************************************************************************************************
16ok: [dc1-spine1]
17
18TASK [arista.avd.eos_designs : Generate device configuration in structured format] ******************************************************************************************************************************
19ok: [dc1-borderleaf1 -> localhost]
20ok: [dc1-spine1 -> localhost]
21ok: [dc1-spine2 -> localhost]
22ok: [dc1-leaf1 -> localhost]
23ok: [dc1-leaf2 -> localhost]
24
25TASK [arista.avd.eos_designs : Generate fabric documentation] ***************************************************************************************************************************************************
26ok: [dc1-spine1 -> localhost]
27
28TASK [arista.avd.eos_designs : Generate fabric point-to-point links summary in csv format.] *********************************************************************************************************************
29ok: [dc1-spine1 -> localhost]
30
31TASK [arista.avd.eos_designs : Generate fabric topology in csv format.] *****************************************************************************************************************************************
32ok: [dc1-spine1 -> localhost]
33
34TASK [arista.avd.eos_designs : Remove avd_switch_facts] *********************************************************************************************************************************************************
35ok: [dc1-spine1]
36
37TASK [arista.avd.eos_cli_config_gen : Verify Requirements] ******************************************************************************************************************************************************
38skipping: [dc1-spine1]
39
40TASK [arista.avd.eos_cli_config_gen : Create required output directories if not present] ************************************************************************************************************************
41ok: [dc1-spine1 -> localhost] => (item=/home/andreasm/arista/andreas-spine-leaf/intended/structured_configs)
42ok: [dc1-spine1 -> localhost] => (item=/home/andreasm/arista/andreas-spine-leaf/documentation)
43ok: [dc1-spine1 -> localhost] => (item=/home/andreasm/arista/andreas-spine-leaf/intended/configs)
44ok: [dc1-spine1 -> localhost] => (item=/home/andreasm/arista/andreas-spine-leaf/documentation/devices)
45
46TASK [arista.avd.eos_cli_config_gen : Include device intended structure configuration variables] ****************************************************************************************************************
47skipping: [dc1-spine1]
48skipping: [dc1-spine2]
49skipping: [dc1-leaf1]
50skipping: [dc1-leaf2]
51skipping: [dc1-borderleaf1]
52
53TASK [arista.avd.eos_cli_config_gen : Generate eos intended configuration] **************************************************************************************************************************************
54ok: [dc1-spine1 -> localhost]
55ok: [dc1-spine2 -> localhost]
56ok: [dc1-leaf2 -> localhost]
57ok: [dc1-leaf1 -> localhost]
58ok: [dc1-borderleaf1 -> localhost]
59
60TASK [arista.avd.eos_cli_config_gen : Generate device documentation] ********************************************************************************************************************************************
61ok: [dc1-spine2 -> localhost]
62ok: [dc1-spine1 -> localhost]
63ok: [dc1-borderleaf1 -> localhost]
64ok: [dc1-leaf1 -> localhost]
65ok: [dc1-leaf2 -> localhost]
66
67PLAY RECAP ******************************************************************************************************************************************************************************************************
68dc1-borderleaf1            : ok=3    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
69dc1-leaf1                  : ok=3    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
70dc1-leaf2                  : ok=3    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
71dc1-spine1                 : ok=11   changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
72dc1-spine2                 : ok=3    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

No errors, looking good. (Note: I had already run build.yml once, and as there were no changes, there was nothing for it to update, hence the 0 in the changed column. Any changes would have been reflected there too.)

It has now created the individual config files under the intended/configs folder for my inspection and record.

(arista_avd) andreasm@linuxmgmt01:~/arista/andreas-spine-leaf/intended/configs$ ll
total 48
drwxrwxr-x 2 andreasm andreasm 4096 Jun 10 11:49 ./
drwxrwxr-x 4 andreasm andreasm 4096 Jun 10 07:14 ../
-rw-rw-r-- 1 andreasm andreasm 6244 Jun 10 11:49 dc1-borderleaf1.cfg
-rw-rw-r-- 1 andreasm andreasm 6564 Jun 10 11:49 dc1-leaf1.cfg
-rw-rw-r-- 1 andreasm andreasm 6399 Jun 10 11:49 dc1-leaf2.cfg
-rw-rw-r-- 1 andreasm andreasm 4386 Jun 10 11:49 dc1-spine1.cfg
-rw-rw-r-- 1 andreasm andreasm 4390 Jun 10 11:49 dc1-spine2.cfg

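Since these are plain-text EOS configs, they are easy to sanity-check before deploying. A small helper of my own (not part of AVD) that verifies each generated file declares the hostname its filename implies, demonstrated here against a throwaway directory instead of intended/configs:

```python
from pathlib import Path

def check_hostnames(config_dir):
    """Return filenames whose 'hostname' line does not match the file name."""
    mismatches = []
    for cfg in sorted(Path(config_dir).glob("*.cfg")):
        for line in cfg.read_text().splitlines():
            if line.startswith("hostname "):
                if line.split()[1] != cfg.stem:
                    mismatches.append(cfg.name)
                break
    return mismatches

# Demo with a throwaway directory
demo = Path("demo_configs"); demo.mkdir(exist_ok=True)
(demo / "dc1-spine1.cfg").write_text("!\nhostname dc1-spine1\n!\n")
print(check_hostnames(demo))  # an empty list means every file matches
```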
Let's continue with deploy.yml and push the configuration to the devices themselves:

(arista_avd) andreasm@linuxmgmt01:~/arista/andreas-spine-leaf$ ansible-playbook deploy.yml

PLAY [Deploy Configurations to Devices using eAPI] **************************************************************************************************************************************************************

TASK [arista.avd.eos_config_deploy_eapi : Verify Requirements] **************************************************************************************************************************************************
AVD version 4.8.0
Use -v for details.
[WARNING]: Collection arista.cvp does not support Ansible version 2.17.0
ok: [dc1-spine1 -> localhost]

TASK [arista.avd.eos_config_deploy_eapi : Create required output directories if not present] ********************************************************************************************************************
ok: [dc1-spine1 -> localhost] => (item=/home/andreasm/arista/andreas-spine-leaf/config_backup)
ok: [dc1-spine1 -> localhost] => (item=/home/andreasm/arista/andreas-spine-leaf/config_backup)

TASK [arista.avd.eos_config_deploy_eapi : Replace configuration with intended configuration] ********************************************************************************************************************
[DEPRECATION WARNING]: The `ansible.module_utils.compat.importlib.import_module` function is deprecated. This feature will be removed in version 2.19. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
ok: [dc1-leaf1]
ok: [dc1-spine2]
ok: [dc1-spine1]
ok: [dc1-leaf2]
ok: [dc1-borderleaf1]

PLAY RECAP ******************************************************************************************************************************************************************************************************
dc1-borderleaf1            : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
dc1-leaf1                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
dc1-leaf2                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
dc1-spine1                 : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
dc1-spine2                 : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

No changes here either, as the switches were already configured when I ran AVD earlier; nothing has been altered on the existing devices (I will make a change further down). If there is a new configuration, or the configuration has changed, it is written to the devices immediately and they end up configured as requested.

In the next chapters I will go over the other benefits Arista Validated Design brings.

Automatically generated config files

As mentioned above, just running build.yml automatically creates all the device configuration files. These files are placed under the folder intended/configs and include the full configuration for every device defined in inventory.yml.

Having a look inside the dc1-borderleaf1.cfg file shows me the exact config that will be deployed to the device when I run deploy.yml:

!RANCID-CONTENT-TYPE: arista
!
vlan internal order ascending range 1100 1300
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname dc1-borderleaf1
ip name-server vrf MGMT 10.100.1.7
!
ntp local-interface vrf MGMT Management1
ntp server vrf MGMT dns-bind-01.int.guzware.net prefer
!
spanning-tree mode mstp
spanning-tree mst 0 priority 4096
!
no enable password
no aaa root
!
username admin privilege 15 role network-admin nopassword
username ansible privilege 15 role network-admin secret sha512 $6$sd3XM3jppUZgDhBs$Ouqb8DdTnQ3efciJMM71Z7iSnkHwGv.CoaWvOppegdUeQ5F1cIAAbJE/D40rMhYMkjiNkAuW7ixMEEoccCXHT/
!
vlan 1070
   name L2_VLAN1070
!
vlan 1071
   name L2_VLAN1071
!
vlan 1072
   name VRF10_VLAN1072
!
vlan 1073
   name VRF10_VLAN1073
!
vlan 1074
   name VRF11_VLAN1074
!
vlan 1075
   name VRF11_VLAN1075
!
vrf instance MGMT
!
vrf instance VRF10
!
vrf instance VRF11
!
interface Ethernet1
   description P2P_LINK_TO_DC1-SPINE1_Ethernet5
   no shutdown
   mtu 1500
   no switchport
   ip address 192.168.0.17/31
!
interface Ethernet2
   description P2P_LINK_TO_DC1-SPINE1_Ethernet6
   no shutdown
   mtu 1500
   no switchport
   ip address 192.168.0.19/31
!
interface Ethernet3
   description P2P_LINK_TO_DC1-SPINE2_Ethernet5
   no shutdown
   mtu 1500
   no switchport
   ip address 192.168.0.21/31
!
interface Ethernet4
   description P2P_LINK_TO_DC1-SPINE2_Ethernet6
   no shutdown
   mtu 1500
   no switchport
   ip address 192.168.0.23/31
!
interface Loopback0
   description EVPN_Overlay_Peering
   no shutdown
   ip address 10.0.0.5/32
!
interface Loopback1
   description VTEP_VXLAN_Tunnel_Source
   no shutdown
   ip address 10.255.1.5/32
!
interface Loopback10
   description VRF10_VTEP_DIAGNOSTICS
   no shutdown
   vrf VRF10
   ip address 10.255.10.5/32
!
interface Loopback11
   description VRF11_VTEP_DIAGNOSTICS
   no shutdown
   vrf VRF11
   ip address 10.255.11.5/32
!
interface Management1
   description oob_management
   no shutdown
   vrf MGMT
   ip address 172.18.100.105/24
!
interface Vlan1072
   description VRF10_VLAN1072
   no shutdown
   vrf VRF10
   ip address virtual 10.72.10.1/24
!
interface Vlan1073
   description VRF10_VLAN1073
   no shutdown
   vrf VRF10
   ip address virtual 10.73.10.1/24
!
interface Vlan1074
   description VRF11_VLAN1074
   no shutdown
   vrf VRF11
   ip address virtual 10.74.11.1/24
!
interface Vlan1075
   description VRF11_VLAN1075
   no shutdown
   vrf VRF11
   ip address virtual 10.75.11.1/24
!
interface Vxlan1
   description dc1-borderleaf1_VTEP
   vxlan source-interface Loopback1
   vxlan udp-port 4789
   vxlan vlan 1070 vni 11070
   vxlan vlan 1071 vni 11071
   vxlan vlan 1072 vni 11072
   vxlan vlan 1073 vni 11073
   vxlan vlan 1074 vni 11074
   vxlan vlan 1075 vni 11075
   vxlan vrf VRF10 vni 10
   vxlan vrf VRF11 vni 11
!
ip virtual-router mac-address 00:1c:73:00:00:99
!
ip address virtual source-nat vrf VRF10 address 10.255.10.5
ip address virtual source-nat vrf VRF11 address 10.255.11.5
!
ip routing
no ip routing vrf MGMT
ip routing vrf VRF10
ip routing vrf VRF11
!
ip prefix-list PL-LOOPBACKS-EVPN-OVERLAY
   seq 10 permit 10.0.0.0/27 eq 32
   seq 20 permit 10.255.1.0/27 eq 32
!
ip route vrf MGMT 0.0.0.0/0 172.18.100.2
!
route-map RM-CONN-2-BGP permit 10
   match ip address prefix-list PL-LOOPBACKS-EVPN-OVERLAY
!
router bfd
   multihop interval 300 min-rx 300 multiplier 3
!
router bgp 65003
   router-id 10.0.0.5
   maximum-paths 4 ecmp 4
   no bgp default ipv4-unicast
   neighbor EVPN-OVERLAY-PEERS peer group
   neighbor EVPN-OVERLAY-PEERS update-source Loopback0
   neighbor EVPN-OVERLAY-PEERS bfd
   neighbor EVPN-OVERLAY-PEERS ebgp-multihop 3
   neighbor EVPN-OVERLAY-PEERS password 7 Q4fqtbqcZ7oQuKfuWtNGRQ==
   neighbor EVPN-OVERLAY-PEERS send-community
   neighbor EVPN-OVERLAY-PEERS maximum-routes 0
   neighbor IPv4-UNDERLAY-PEERS peer group
   neighbor IPv4-UNDERLAY-PEERS password 7 7x4B4rnJhZB438m9+BrBfQ==
   neighbor IPv4-UNDERLAY-PEERS send-community
   neighbor IPv4-UNDERLAY-PEERS maximum-routes 12000
   neighbor 10.0.0.1 peer group EVPN-OVERLAY-PEERS
   neighbor 10.0.0.1 remote-as 65000
   neighbor 10.0.0.1 description dc1-spine1
   neighbor 10.0.0.2 peer group EVPN-OVERLAY-PEERS
   neighbor 10.0.0.2 remote-as 65000
   neighbor 10.0.0.2 description dc1-spine2
   neighbor 192.168.0.16 peer group IPv4-UNDERLAY-PEERS
   neighbor 192.168.0.16 remote-as 65000
   neighbor 192.168.0.16 description dc1-spine1_Ethernet5
   neighbor 192.168.0.18 peer group IPv4-UNDERLAY-PEERS
   neighbor 192.168.0.18 remote-as 65000
   neighbor 192.168.0.18 description dc1-spine1_Ethernet6
   neighbor 192.168.0.20 peer group IPv4-UNDERLAY-PEERS
   neighbor 192.168.0.20 remote-as 65000
   neighbor 192.168.0.20 description dc1-spine2_Ethernet5
   neighbor 192.168.0.22 peer group IPv4-UNDERLAY-PEERS
   neighbor 192.168.0.22 remote-as 65000
   neighbor 192.168.0.22 description dc1-spine2_Ethernet6
   redistribute connected route-map RM-CONN-2-BGP
   !
   vlan 1070
      rd 10.0.0.5:11070
      route-target both 11070:11070
      redistribute learned
   !
   vlan 1071
      rd 10.0.0.5:11071
      route-target both 11071:11071
      redistribute learned
   !
   vlan 1072
      rd 10.0.0.5:11072
      route-target both 11072:11072
      redistribute learned
   !
   vlan 1073
      rd 10.0.0.5:11073
      route-target both 11073:11073
      redistribute learned
   !
   vlan 1074
      rd 10.0.0.5:11074
      route-target both 11074:11074
      redistribute learned
   !
   vlan 1075
      rd 10.0.0.5:11075
      route-target both 11075:11075
      redistribute learned
   !
   address-family evpn
      neighbor EVPN-OVERLAY-PEERS activate
   !
   address-family ipv4
      no neighbor EVPN-OVERLAY-PEERS activate
      neighbor IPv4-UNDERLAY-PEERS activate
   !
   vrf VRF10
      rd 10.0.0.5:10
      route-target import evpn 10:10
      route-target export evpn 10:10
      router-id 10.0.0.5
      redistribute connected
   !
   vrf VRF11
      rd 10.0.0.5:11
      route-target import evpn 11:11
      route-target export evpn 11:11
      router-id 10.0.0.5
      redistribute connected
!
management api http-commands
   protocol https
   no shutdown
   !
   vrf MGMT
      no shutdown
!
end

This is the full config AVD intends to send to my borderleaf1 device, and the same goes for all the other .cfg files. If I don't have access to the devices and just want to generate the configs, this is perfect: I can stop here, and AVD has already provided the configuration files for me.

Automated documentation

Everyone loves documentation, but not everyone loves documenting. Creating full documentation and keeping it up to date after changes is important but time consuming. Whether you love documenting or not, it is a very important component to have in place.

When using Arista Validated Design, every run of build.yml automatically creates the documentation for every single device that has been configured. Part of the build process is generating not only the device configurations but also the full documentation. And guess what...

It updates the documentation AUTOMATICALLY every time a change is made 😃 👍

Let's test that in the next chapter.

Day 2 changes using AVD

Not every environment is static; changes need to be made from time to time. In this example I need to change some downlinks on my borderleaf1 device: I have been asked to configure a downlink port on the leaf for a new firewall that is being connected. I will go ahead and edit CONNECTED_ENDPOINTS.yml by adding this section:

  - name: dc1-borderleaf1-wan1
    adapters:
      - endpoint_ports: [ WAN1 ]
        switch_ports: [ Ethernet5 ]
        switches: [ dc1-borderleaf1 ]
        vlans: 1079
        mode: access
        spanning_tree_portfast: edge

The whole CONNECTED_ENDPOINTS.yml now looks like this:

---
# Definition of connected endpoints in the fabric.
servers:
  - name: dc1-leaf1-vm-server1
    adapters:
      - endpoint_ports: [ VM1 ]
        switch_ports: [ Ethernet5 ]
        switches: [ dc1-leaf1 ]
        vlans: 1071
        mode: access
        spanning_tree_portfast: edge


      - endpoint_ports: [ VM2 ]
        switch_ports: [ Ethernet6 ]
        switches: [ dc1-leaf1 ]
        vlans: 1072
        mode: access
        spanning_tree_portfast: edge

  - name: dc1-leaf2-server1
    adapters:
      - endpoint_ports: [ VM3 ]
        switch_ports: [ Ethernet5 ]
        switches: [ dc1-leaf2 ]
        vlans: 1070
        native_vlan: 4092
        mode: access
        spanning_tree_portfast: edge


      - endpoint_ports: [ VM4 ]
        switch_ports: [ Ethernet6 ]
        switches: [ dc1-leaf2 ]
        vlans: 1073
        mode: access
        spanning_tree_portfast: edge

  - name: dc1-borderleaf1-wan1  ## ADDED NOW ##
    adapters:
      - endpoint_ports: [ WAN1 ]
        switch_ports: [ Ethernet5 ]
        switches: [ dc1-borderleaf1 ]
        vlans: 1079
        mode: access
        spanning_tree_portfast: edge

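One easy mistake as this file grows is assigning the same switch port twice on the same switch (Ethernet5 appears on three different switches above, which is fine). A quick pre-build sanity check of my own, sketched here against a hand-copied subset of the YAML rather than parsing the file itself:

```python
from collections import Counter

# Hand-copied (switch, port) pairs from CONNECTED_ENDPOINTS.yml (illustrative subset)
assignments = [
    ("dc1-leaf1", "Ethernet5"), ("dc1-leaf1", "Ethernet6"),
    ("dc1-leaf2", "Ethernet5"), ("dc1-leaf2", "Ethernet6"),
    ("dc1-borderleaf1", "Ethernet5"),
]

def duplicate_ports(pairs):
    """Return (switch, port) pairs that are assigned more than once."""
    return [pair for pair, n in Counter(pairs).items() if n > 1]

print(duplicate_ports(assignments))  # an empty list means no double-assigned ports
```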
Now I just need to run the build, and if I am satisfied I can run deploy.yml. Let's test. I have already sent one config to the switch, as shown above. Now I have made a change and want it reflected in the intended configs, the documentation, and on the device itself. First I will run build.yml:

(arista_avd) andreasm@linuxmgmt01:~/arista/andreas-spine-leaf$ ansible-playbook build.yml

PLAY [Build Configurations and Documentation] *******************************************************************************************************************************************************************
PLAY RECAP ******************************************************************************************************************************************************************************************************
dc1-borderleaf1            : ok=3    changed=3    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

It reports 3 changes. Let's check the documentation and device configuration for borderleaf1.

The automatically updated documentation and device configuration for borderleaf1 (shortened for easier readability):

## Interfaces

### Ethernet Interfaces

#### Ethernet Interfaces Summary

##### L2

| Interface | Description | Mode | VLANs | Native VLAN | Trunk Group | Channel-Group |
| --------- | ----------- | ---- | ----- | ----------- | ----------- | ------------- |
| Ethernet5 |  dc1-borderleaf1-wan1_WAN1 | access | 1079 | - | - | - |

*Inherited from Port-Channel Interface

##### IPv4

| Interface | Description | Type | Channel Group | IP Address | VRF |  MTU | Shutdown | ACL In | ACL Out |
| --------- | ----------- | -----| ------------- | ---------- | ----| ---- | -------- | ------ | ------- |
| Ethernet1 | P2P_LINK_TO_DC1-SPINE1_Ethernet5 | routed | - | 192.168.0.17/31 | default | 1500 | False | - | - |
| Ethernet2 | P2P_LINK_TO_DC1-SPINE1_Ethernet6 | routed | - | 192.168.0.19/31 | default | 1500 | False | - | - |
| Ethernet3 | P2P_LINK_TO_DC1-SPINE2_Ethernet5 | routed | - | 192.168.0.21/31 | default | 1500 | False | - | - |
| Ethernet4 | P2P_LINK_TO_DC1-SPINE2_Ethernet6 | routed | - | 192.168.0.23/31 | default | 1500 | False | - | - |

#### Ethernet Interfaces Device Configuration

```eos
!
interface Ethernet1
   description P2P_LINK_TO_DC1-SPINE1_Ethernet5
   no shutdown
   mtu 1500
   no switchport
   ip address 192.168.0.17/31
!
interface Ethernet2
   description P2P_LINK_TO_DC1-SPINE1_Ethernet6
   no shutdown
   mtu 1500
   no switchport
   ip address 192.168.0.19/31
!
interface Ethernet3
   description P2P_LINK_TO_DC1-SPINE2_Ethernet5
   no shutdown
   mtu 1500
   no switchport
   ip address 192.168.0.21/31
!
interface Ethernet4
   description P2P_LINK_TO_DC1-SPINE2_Ethernet6
   no shutdown
   mtu 1500
   no switchport
   ip address 192.168.0.23/31
!
interface Ethernet5 ## NEW ##
   description dc1-borderleaf1-wan1_WAN1
   no shutdown
   switchport access vlan 1079
   switchport mode access
   switchport
   spanning-tree portfast
```

The actual installed config on my dc1-borderleaf1 switch:

andreasm@linuxmgmt01:~/arista/andreas-spine-leaf/intended/configs$ ssh ansible@172.18.100.105
Password:
Last login: Mon Jun 10 07:07:38 2024 from 10.100.5.10
dc1-borderleaf1>enable
dc1-borderleaf1#show running-config
! Command: show running-config
! device: dc1-borderleaf1 (vEOS-lab, EOS-4.32.1F)
!
! boot system flash:/vEOS-lab.swi
!
no aaa root
!
management api http-commands
   no shutdown
   !
   vrf MGMT
      no shutdown
!
interface Ethernet1
   description P2P_LINK_TO_DC1-SPINE1_Ethernet5
   mtu 1500
   no switchport
   ip address 192.168.0.17/31
!
interface Ethernet2
   description P2P_LINK_TO_DC1-SPINE1_Ethernet6
   mtu 1500
   no switchport
   ip address 192.168.0.19/31
!
interface Ethernet3
   description P2P_LINK_TO_DC1-SPINE2_Ethernet5
   mtu 1500
   no switchport
   ip address 192.168.0.21/31
!
interface Ethernet4
   description P2P_LINK_TO_DC1-SPINE2_Ethernet6
   mtu 1500
   no switchport
   ip address 192.168.0.23/31
!
interface Ethernet5
!
interface Ethernet6
!
interface Loopback0
   description EVPN_Overlay_Peering
   ip address 10.0.0.5/32

No Ethernet5 configured yet.

Now all I need to do is run the deploy.yml to send the updated config to the switch itself.

(arista_avd) andreasm@linuxmgmt01:~/arista/andreas-spine-leaf$ ansible-playbook deploy.yml

PLAY [Deploy Configurations to Devices using eAPI] **************************************************************************************************************************************************************

TASK [arista.avd.eos_config_deploy_eapi : Replace configuration with intended configuration] ********************************************************************************************************************

changed: [dc1-borderleaf1]

RUNNING HANDLER [arista.avd.eos_config_deploy_eapi : Backup running config] *************************************************************************************************************************************
changed: [dc1-borderleaf1]

PLAY RECAP ******************************************************************************************************************************************************************************************************
dc1-borderleaf1            : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Done. Now I log into the actual borderleaf1 switch to verify:

dc1-borderleaf1#show running-config
! Command: show running-config
! device: dc1-borderleaf1 (vEOS-lab, EOS-4.32.1F)
!
interface Ethernet1
   description P2P_LINK_TO_DC1-SPINE1_Ethernet5
   mtu 1500
   no switchport
   ip address 192.168.0.17/31
!
interface Ethernet2
   description P2P_LINK_TO_DC1-SPINE1_Ethernet6
   mtu 1500
   no switchport
   ip address 192.168.0.19/31
!
interface Ethernet3
   description P2P_LINK_TO_DC1-SPINE2_Ethernet5
   mtu 1500
   no switchport
   ip address 192.168.0.21/31
!
interface Ethernet4
   description P2P_LINK_TO_DC1-SPINE2_Ethernet6
   mtu 1500
   no switchport
   ip address 192.168.0.23/31
!
interface Ethernet5
   description dc1-borderleaf1-wan1_WAN1
   switchport access vlan 1079
   spanning-tree portfast
!
interface Ethernet6
!
interface Loopback0
   description EVPN_Overlay_Peering
   ip address 10.0.0.5/32

The config has been added.

I can run the build and deploy as often as I want; if there are no changes, they will not do anything. As soon as I make a change, it will be picked up and applied declaratively.

Automating the whole process and interact with AVD using a WEB UI

Instead of doing this whole process of installing dependencies manually, I asked my friend ChatGPT to make a script that does it all for me. Below is the script. It needs to run on Linux, in a folder where I am allowed to create new folders. Create a new .sh file, copy in the content, save it, and make it executable by running chmod +x filename.sh.

andreasm@linuxmgmt01:~/arista-automated-avd$ vim create-avd-project.sh
andreasm@linuxmgmt01:~/arista-automated-avd$ chmod +x create-avd-project.sh
#!/bin/bash

# Prompt for input name
read -p "Enter the name: " input_name

# Create a folder from the input
mkdir "$input_name"

# CD into the newly created folder
cd "$input_name"

# Create a Python virtual environment with the same name as the input
python3 -m venv "$input_name"

# Activate the virtual environment
source "$input_name/bin/activate"

# Install Ansible
python3 -m pip install ansible

# Install Arista AVD collection
ansible-galaxy collection install arista.avd

# Export ARISTA_AVD_DIR environment variable
export ARISTA_AVD_DIR=$(ansible-galaxy collection list arista.avd --format yaml | head -1 | cut -d: -f1)

# Install requirements from ARISTA_AVD_DIR
pip3 install -r ${ARISTA_AVD_DIR}/arista/avd/requirements.txt

# Install additional packages
pip install flask markdown2 pandas

# Run ansible-playbook arista.avd.install_examples
ansible-playbook arista.avd.install_examples

# Create menu
echo "Which example do you want to select?
1. Single DC L3LS
2. Dual DC L3LS
3. Campus Fabric
4. ISIS-LDP-IPVPN
5. L2LS Fabric"
read -p "Enter your choice (1-5): " choice

# Set the folder based on choice
case $choice in
  1) folder="single-dc-l3ls" ;;
  2) folder="dual-dc-l3ls" ;;
  3) folder="campus-fabric" ;;
  4) folder="isis-ldp-ipvpn" ;;
  5) folder="l2ls-fabric" ;;
  *) echo "Invalid choice"; exit 1 ;;
esac

# CD into the respective folder
cd "$folder"

# Create app.py with the given content
cat << 'EOF' > app.py
from flask import Flask, render_template, request, jsonify, Response
import os
import subprocess
import logging
import markdown2
import pandas as pd

app = Flask(__name__)
ROOT_DIR = '.'  # Root directory where inventory.yml is located
GROUP_VARS_DIR = os.path.join(ROOT_DIR, 'group_vars')  # Subfolder where other YAML files are located
FABRIC_DOCS_DIR = os.path.join(ROOT_DIR, 'documentation', 'fabric')
DEVICES_DOCS_DIR = os.path.join(ROOT_DIR, 'documentation', 'devices')
CONFIGS_DIR = os.path.join(ROOT_DIR, 'intended', 'configs')
STRUCTURED_CONFIGS_DIR = os.path.join(ROOT_DIR, 'intended', 'structured_configs')

# Ensure the documentation directories exist
for directory in [FABRIC_DOCS_DIR, DEVICES_DOCS_DIR, CONFIGS_DIR, STRUCTURED_CONFIGS_DIR]:
    if not os.path.exists(directory):
        os.makedirs(directory)

# Set up logging
logging.basicConfig(level=logging.DEBUG)

@app.route('/')
def index():
    try:
        root_files = [f for f in os.listdir(ROOT_DIR) if f.endswith('.yml')]
        group_vars_files = [f for f in os.listdir(GROUP_VARS_DIR) if f.endswith('.yml')]
        fabric_docs_files = [f for f in os.listdir(FABRIC_DOCS_DIR) if f.endswith(('.md', '.csv'))]
        devices_docs_files = [f for f in os.listdir(DEVICES_DOCS_DIR) if f.endswith(('.md', '.csv'))]
        configs_files = [f for f in os.listdir(CONFIGS_DIR) if f.endswith('.cfg')]
        structured_configs_files = [f for f in os.listdir(STRUCTURED_CONFIGS_DIR) if f.endswith('.yml')]
        logging.debug(f"Root files: {root_files}")
        logging.debug(f"Group vars files: {group_vars_files}")
        logging.debug(f"Fabric docs files: {fabric_docs_files}")
        logging.debug(f"Devices docs files: {devices_docs_files}")
        logging.debug(f"Configs files: {configs_files}")
        logging.debug(f"Structured configs files: {structured_configs_files}")
        return render_template('index.html', root_files=root_files, group_vars_files=group_vars_files, fabric_docs_files=fabric_docs_files, devices_docs_files=devices_docs_files, configs_files=configs_files, structured_configs_files=structured_configs_files)
    except Exception as e:
        logging.error(f"Error loading file list: {e}")
        return "Error loading file list", 500

@app.route('/load_file', methods=['POST'])
def load_file():
    try:
        filename = request.json['filename']
        logging.debug(f"Loading file: {filename}")
        if filename in os.listdir(ROOT_DIR):
            file_path = os.path.join(ROOT_DIR, filename)
        elif filename in os.listdir(GROUP_VARS_DIR):
            file_path = os.path.join(GROUP_VARS_DIR, filename)
        elif filename in os.listdir(FABRIC_DOCS_DIR):
            file_path = os.path.join(FABRIC_DOCS_DIR, filename)
        elif filename in os.listdir(DEVICES_DOCS_DIR):
            file_path = os.path.join(DEVICES_DOCS_DIR, filename)
        elif filename in os.listdir(CONFIGS_DIR):
            file_path = os.path.join(CONFIGS_DIR, filename)
        elif filename in os.listdir(STRUCTURED_CONFIGS_DIR):
            file_path = os.path.join(STRUCTURED_CONFIGS_DIR, filename)
        else:
            raise FileNotFoundError(f"File not found: {filename}")

        logging.debug(f"File path: {file_path}")
        with open(file_path, 'r') as file:
            content = file.read()

        if filename.endswith('.md'):
            content = markdown2.markdown(content, extras=["toc", "fenced-code-blocks", "header-ids"])
            return jsonify(content=content, is_markdown=True)
        elif filename.endswith('.csv'):
            df = pd.read_csv(file_path)
            content = df.to_html(index=False)
            return jsonify(content=content, is_csv=True)
        else:
            return jsonify(content=content)
    except Exception as e:
        logging.error(f"Error loading file: {e}")
        return jsonify(error=str(e)), 500

@app.route('/save_file', methods=['POST'])
def save_file():
    try:
        filename = request.json['filename']
        content = request.json['content']

        file_path = os.path.join(ROOT_DIR, filename) if filename in os.listdir(ROOT_DIR) else os.path.join(GROUP_VARS_DIR, filename)

        with open(file_path, 'w') as file:
            file.write(content)
        return jsonify(success=True)
    except Exception as e:
        logging.error(f"Error saving file: {e}")
        return jsonify(success=False, error=str(e)), 500

def run_ansible_playbook(playbook):
    process = subprocess.Popen(['ansible-playbook', playbook], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
    for line in iter(process.stdout.readline, ''):
        yield f"data: {line}\n\n"
    process.stdout.close()
    process.wait()

@app.route('/run_playbook_stream/<playbook>')
def run_playbook_stream(playbook):
    return Response(run_ansible_playbook(playbook), mimetype='text/event-stream')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)
EOF

# Create templates directory and index.html with the given content
mkdir templates
cat << 'EOF' > templates/index.html
<!DOCTYPE html>
<html>
<head>
    <title>Edit Ansible Files</title>
    <style>
        #editor {
            width: 100%;
            height: 80vh;
        }
        #output, #fileContent {
            width: 100%;
            height: 200px;
            white-space: pre-wrap;
            background-color: #f0f0f0;
            padding: 10px;
            border: 1px solid #ccc;
            overflow-y: scroll;
        }
        #fileContent {
            height: auto;
        }
    </style>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/ace/1.4.12/ace.js" crossorigin="anonymous"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/ace/1.4.12/ext-language_tools.js" crossorigin="anonymous"></script>
    <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
</head>
<body>
    <h1>Edit Ansible Files</h1>
    <select id="fileSelector">
        <option value="">Select a file</option>
        <optgroup label="Root Files">
            {% for file in root_files %}
            <option value="{{ file }}">{{ file }}</option>
            {% endfor %}
        </optgroup>
        <optgroup label="Group Vars Files">
            {% for file in group_vars_files %}
            <option value="{{ file }}">{{ file }}</option>
            {% endfor %}
        </optgroup>
    </select>
    <button id="saveButton">Save</button>
    <div id="editor">Select a file to load...</div>

    <h2>Documentation</h2>
    <h3>Fabric</h3>
    <div id="fabricDocs">
        {% for file in fabric_docs_files %}
        <button class="docButton" data-filename="{{ file }}">{{ file }}</button>
        {% endfor %}
    </div>
    <h3>Devices</h3>
    <div id="devicesDocs">
        {% for file in devices_docs_files %}
        <button class="docButton" data-filename="{{ file }}">{{ file }}</button>
        {% endfor %}
    </div>

    <h2>Configs</h2>
    <h3>Intended Configs</h3>
    <div id="configs">
        {% for file in configs_files %}
        <button class="configButton" data-filename="{{ file }}">{{ file }}</button>
        {% endfor %}
    </div>
    <h3>Structured Configs</h3>
    <div id="structuredConfigs">
        {% for file in structured_configs_files %}
        <button class="configButton" data-filename="{{ file }}">{{ file }}</button>
        {% endfor %}
    </div>
    <div id="fileContent"></div>

    <button id="runBuildButton">Run Build Playbook</button>
    <button id="runDeployButton">Run Deploy Playbook</button>
    <div id="output"></div>

    <script>
        $(document).ready(function() {
            var editor = ace.edit("editor");
            editor.setTheme("ace/theme/monokai");
            editor.session.setMode("ace/mode/yaml");

            $('#fileSelector').change(function() {
                var filename = $(this).val();
                console.log("Selected file: " + filename);
                if (filename) {
                    $.ajax({
                        url: '/load_file',
                        type: 'POST',
                        contentType: 'application/json',
                        data: JSON.stringify({ filename: filename }),
                        success: function(data) {
                            console.log("File content received:", data);
                            if (data && data.content) {
                                editor.setValue(data.content, -1);
                            } else {
                                editor.setValue("Failed to load file content.", -1);
                            }
                        },
                        error: function(xhr, status, error) {
                            console.error("Error loading file content:", status, error);
                            editor.setValue("Error loading file content: " + error, -1);
                        }
                    });
                }
            });

            $('#saveButton').click(function() {
                var filename = $('#fileSelector').val();
                var content = editor.getValue();
                console.log("Saving file: " + filename);
                if (filename) {
                    $.ajax({
                        url: '/save_file',
                        type: 'POST',
                        contentType: 'application/json',
                        data: JSON.stringify({ filename: filename, content: content }),
                        success: function(data) {
                            if (data.success) {
                                alert('File saved successfully');
                            } else {
                                alert('Failed to save file');
                            }
                        },
                        error: function(xhr, status, error) {
                            console.error("Error saving file:", status, error);
                            alert('Error saving file: ' + error);
                        }
                    });
                }
            });

            $('.docButton, .configButton').click(function() {
                var filename = $(this).data('filename');
                console.log("Selected file: " + filename);
                $.ajax({
                    url: '/load_file',
                    type: 'POST',
                    contentType: 'application/json',
                    data: JSON.stringify({ filename: filename }),
                    success: function(data) {
                        console.log("File content received:", data);
                        if (data && data.content) {
                            $('#fileContent').html(data.content);
                            if (data.is_markdown || data.is_csv) {
319                                $('#fileContent a').click(function(event) {
320                                    event.preventDefault();
321                                    var targetId = $(this).attr('href').substring(1);
322                                    var targetElement = document.getElementById(targetId);
323                                    if (targetElement) {
324                                        targetElement.scrollIntoView();
325                                    }
326                                });
327                            }
328                        } else {
329                            $('#fileContent').text("Failed to load file content.");
330                        }
331                    },
332                    error: function(xhr, status, error) {
333                        console.error("Error loading file content:", status, error);
334                        $('#fileContent').text("Error loading file content: " + error);
335                    }
336                });
337            });
338
339            $('#runBuildButton').click(function() {
340                runPlaybook('build.yml');
341            });
342
343            $('#runDeployButton').click(function() {
344                runPlaybook('deploy.yml');
345            });
346
347            function runPlaybook(playbook) {
348                var eventSource = new EventSource('/run_playbook_stream/' + playbook);
349                eventSource.onmessage = function(event) {
350                    $('#output').append(event.data + '\n');
351                    $('#output').scrollTop($('#output')[0].scrollHeight);
352                };
353                eventSource.onerror = function() {
354                    eventSource.close();
355                };
356            }
357        });
358    </script>
359</body>
360</html>
361EOF
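The runPlaybook function above opens an EventSource against /run_playbook_stream/<playbook>, so the server side has to stream the playbook output as Server-Sent Events. The core of such a route boils down to wrapping the ansible-playbook process in a generator. Here is a minimal sketch of that idea (the helper names are mine, and the generated app.py may implement it differently):

```python
import subprocess
from typing import Iterator


def sse_format(line: str) -> str:
    """Wrap one line of output as a Server-Sent Events message:
    'data: <payload>' terminated by a blank line."""
    return f'data: {line.rstrip()}\n\n'


def stream_command(cmd: list[str]) -> Iterator[str]:
    """Run a command and yield its combined stdout/stderr as SSE messages."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    for line in proc.stdout:
        yield sse_format(line)
    proc.wait()


def stream_playbook(playbook: str) -> Iterator[str]:
    """In the Flask app this generator would be returned as
    Response(stream_playbook(playbook), mimetype='text/event-stream')."""
    return stream_command(['ansible-playbook', playbook])
```

Streaming line by line is what lets the browser append output to the #output div while the playbook is still running, instead of waiting for it to finish.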

Run the script to automatically install dependencies

I will now execute the script in a folder where it will create a new subfolder based on the name I give it. I am then presented with a menu asking which example I want to use, and the script goes ahead and installs the necessary components and requirements (including copying all the example collections from Arista). After a short while a new folder is created using the name I entered at the first prompt; in the example below I am using new-site-3. Inside the newly created folder the script places all the examples and the Python virtual environment, as well as a Python-generated web page (inside the selected example folder, e.g. single-dc-l3ls) that can be started and will be available on http://0.0.0.0:5000 (more on the web page later).

andreasm@linuxmgmt01:~/arista-automated-avd$ ./create-avd-project-v1.sh
Enter the name: new-site-3
Collecting ansible
Installing collected packages: resolvelib, PyYAML, pycparser, packaging, MarkupSafe, jinja2, cffi, Starting galaxy collection install process

PLAY [Install Examples] *****************************************************************************************************************************************************************************************

TASK [Copy all examples to /home/andreasm/arista-automated-avd/new-site-3] **************************************************************************************************************************************
changed: [localhost]

PLAY RECAP ******************************************************************************************************************************************************************************************************
localhost                  : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Which example do you want to select?
1. Single DC L3LS
2. Dual DC L3LS
3. Campus Fabric
4. ISIS-LDP-IPVPN
5. L2LS Fabric
Enter your choice (1-5): 1

Now that the script has run, all I need to do is cd into the new folder that was created from my input name (new-site-3).

andreasm@linuxmgmt01:~/arista-automated-avd$ ll
total 52
drwxrwxr-x  5 andreasm andreasm  4096 Jun 13 06:19 ./
drwxr-xr-x 44 andreasm andreasm  4096 Jun 13 06:18 ../
-rwxrwxr-x  1 andreasm andreasm 14542 Jun 12 21:56 create-avd-project.sh*
drwxrwxr-x  8 andreasm andreasm  4096 Jun 13 06:21 new-site-3/

The only thing I need to do now is activate my new Python environment, also named after the input I gave (new-site-3):

andreasm@linuxmgmt01:~/arista-automated-avd/new-site-3$ ls
campus-fabric  dual-dc-l3ls  isis-ldp-ipvpn  l2ls-fabric  new-site-3  single-dc-l3ls
andreasm@linuxmgmt01:~/arista-automated-avd/new-site-3$ source new-site-3/bin/activate
(new-site-3) andreasm@linuxmgmt01:~/arista-automated-avd/new-site-3$

I can now cd into the example folder I want to use, edit the necessary files, and run ansible-playbook build.yml and deploy.yml. Or even better: in the example folder I selected at the last prompt, the script has placed a file called app.py and a templates folder containing an index.html file, so I can start a web server and interact with the files more interactively.

Web-based interaction

Together with my friend ChatGPT, I have also created a web page to interact with Arista Validated Designs a bit more interactively.

To start the web server I need to cd into the example folder I selected in the script above (e.g. single-dc-l3ls) and run the following command: python app.py (the Python environment needs to be active).

(new-site-3) andreasm@linuxmgmt01:~/arista-automated-avd/new-site-3/single-dc-l3ls$ python app.py
 * Serving Flask app 'app'
 * Debug mode: on
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5000
 * Running on http://10.100.5.10:5000
INFO:werkzeug:Press CTRL+C to quit
INFO:werkzeug: * Restarting with stat
WARNING:werkzeug: * Debugger is active!
INFO:werkzeug: * Debugger PIN: 129-945-984

webserver

The page can edit and save all the needed YAML files within its own "project/environment" (e.g. single-dc-l3ls). When done editing, there are two buttons that trigger the ansible-playbook build.yml and ansible-playbook deploy.yml commands respectively, streaming their output. After the build command has run, the page can also display all the auto-generated documentation under the documentation/fabric and documentation/devices folders, with an interactive table of contents for easy access.
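Behind the editing and saving, the page posts JSON to /load_file and /save_file endpoints in app.py. As an illustration, the file handling behind such endpoints could look roughly like this (a sketch under my own assumptions: the path-traversal guard and the suffix-based is_markdown/is_csv flags are mine, and the real app presumably also converts Markdown to HTML before returning it, since the front end injects the content with .html()):

```python
from pathlib import Path

# Project root, e.g. the single-dc-l3ls folder the web server runs from
PROJECT_ROOT = Path('.').resolve()


def load_file(filename: str) -> dict:
    """Build the JSON payload the front end expects from /load_file."""
    path = (PROJECT_ROOT / filename).resolve()
    # Refuse paths that escape the project folder, and missing files
    if PROJECT_ROOT not in path.parents or not path.is_file():
        return {'content': None}
    return {
        'content': path.read_text(),
        # A real app would render .md to HTML here; this sketch returns raw text
        'is_markdown': path.suffix == '.md',
        'is_csv': path.suffix == '.csv',
    }


def save_file(filename: str, content: str) -> dict:
    """Build the JSON payload the front end expects from /save_file."""
    path = (PROJECT_ROOT / filename).resolve()
    if PROJECT_ROOT not in path.parents:
        return {'success': False}
    try:
        path.write_text(content)
        return {'success': True}
    except OSError:
        return {'success': False}
```

Keeping every read and write confined to the project folder is what lets one web page safely serve several independent example environments side by side.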

See short video clip below:

Outro

This post has been a very exciting exercise.

Arista Validated Design not only made deploying a complex design an easy task, it also produced a great deal of valuable documentation as part of the process. Even though the network switches are physical, they can be automated like anything else, and Day 2 configuration changes are a joy to perform with this approach too.