Arista CloudVision and AVD using Containerlab

Overview

Arista CloudVision, AVD and Containerlab

This post builds upon my previous post about automated provisioning of EOS using Arista Validated Design. This time I will be using Arista Validated Design in combination with Arista CloudVision for provisioning and configuration, instead of sending the config directly to the switches. There are some differences (and many benefits) to using CloudVision, which I will quickly touch upon in the context of this post a bit further down. I will probably create a dedicated post on Arista CloudVision at a later stage, as it is a very comprehensive management tool.

Another difference in this post is that I will be using containerized EOS (cEOS) instead of virtual machine based EOS (vEOS). The benefit is that I can make use of Containerlab, which takes care of orchestrating all the cEOS containers I need, and does so very rapidly. It is an absolutely incredible tool for quickly standing up a full-blown lab, with support for a vast set of scenarios and topologies.

In summary: Containerlab provides the platform to deploy and run the cEOS switches, Arista Validated Design provides the automated configuration for my selected topology (the same spine-leaf design as in the previous post), and Arista CloudVision is the tool that handles and manages all my devices, pushing the config to the cEOS instances.

avd-cvp-containerlab

Arista CloudVision

CloudVision® is Arista’s modern, multi-domain management platform that leverages cloud networking principles to deliver a simplified NetOps experience. Unlike traditional domain-specific management solutions, CloudVision enables zero-touch network operations with consistent operations enterprise-wide, helping to break down the complexity of siloed management approaches.

As Arista’s platform for Network as-a-Service, CloudVision is designed to bring OpEx efficiency through automation across the entire network lifecycle - from design, to operations, to support and ongoing maintenance.

Source: Arista

As this post is not meant to focus on CloudVision (referred to as CVP throughout this post) alone, I will concentrate on the parts of CloudVision that are relevant and differ from my previous post.

CloudVision configlets, containers, tasks and change control

When using CVP in combination with Arista Validated Design (referred to as AVD throughout this post) I have much more control over how and when configurations are sent to my Arista EOS switches. To name just one example: change control, and the ability to review and compare configs before approving and pushing them to the devices.

review-diff

EOS in CVP inventory or not

AVD in itself does not require any actual devices to send the configs to, as it can also be used solely to create configurations and documentation for a planned topology. This is also possible in combination with CVP, as AVD will then create the necessary containers, configlets and tasks in CVP regardless of whether the devices are in CVP's inventory or not. For more reference on this using AVD, see here.

If I also want to push the config to the EOS switches as a fully hands-off automated configuration using CVP and AVD, I need to have the EOS switches already in CVP's inventory, otherwise there will be no devices for CVP to send the configs to (kind of obvious). What is perhaps not so obvious is that one needs to inform AVD whether the devices are already in CVP's inventory or not, or else the AVD playbook deploy-cvp will fail.

Adding the EOS switches to CVP's inventory can be done either through Zero Touch Provisioning (ZTP from here on) of the EOS switches, or by manually adding them to CVP after they have been initially configured. In this post I will be adding the EOS switches to the CVP inventory as part of the ZTP.

Change Control and tasks

The Change Control module selects and executes a group of tasks that you want to process simultaneously. Selecting tasks and creating Change Controls function similarly in Change Control and Task Management modules.

Change Controls provides the following benefits:

  • Sequencing tasks
  • Adding unlimited snapshots to every device impacted by the Change Control execution
  • Adding custom actions
  • Pushing images via Multi-Chassis Link Aggregation (MLAG) In-Service Software Upgrade (ISSU) or Border Gateway Protocol (BGP) maintenance mode
  • Reviewing the entire set of changes to approve Change Controls

Even with the devices added to CVP's inventory I have a choice whether I want the config to be automatically approved and pushed to the devices when I run ansible-playbook deploy-cvp.yml, or if I just want the tasks to be created and wait for a change manager to review and approve them before the config is pushed. This is a very useful and powerful feature in a production environment.

How AVD handles this is described here: the key is execute_tasks: true or false in deploy-cvp.yml. If it is set to false, AVD will instruct CVP to only create the configlets, containers and tasks. The tasks will remain in a pending state until a change manager creates a change control and approves or rejects them.

tasks

change-control

Containers and configlets

In CVP the switches can be placed into respective containers. In combination with AVD these containers are created automatically, and the switches are moved into their container based on their group membership in inventory.yml:

 1    FABRIC:
 2      children:
 3        DC1:
 4          children:
 5            DC1_SPINES:
 6              hosts:
 7                dc1-spine1:
 8                  ansible_host: 192.168.20.2
 9                dc1-spine2:
10                  ansible_host: 192.168.20.3
11            DC1_L3_LEAVES:
12              hosts:
13                dc1-leaf1:
14                  ansible_host: 192.168.20.4
15                dc1-leaf2:
16                  ansible_host: 192.168.20.5
17                dc1-borderleaf1:
18                  ansible_host: 192.168.20.6

containers

Whether the EOS switches are added manually or as part of ZTP, they will initially be placed in the Undefined container. When AVD creates the tasks and they are approved, the switches are moved from there into their respective containers.

AVD will create the intended configuration under Configlets:

configlets

These contain the configs CVP will push to the respective devices, and they can easily be inspected by clicking on one of them:

configlets

Containerlab

Now onto the next gem in this post, Containerlab.

With the growing number of containerized Network Operating Systems grows the demand to easily run them in the user-defined, versatile lab topologies.

Unfortunately, container orchestration tools like docker-compose are not a good fit for that purpose, as they do not allow a user to easily create connections between the containers which define a topology.

Containerlab provides a CLI for orchestrating and managing container-based networking labs. It starts the containers, builds a virtual wiring between them to create lab topologies of users choice and manages labs lifecycle.

containerlab

Source: Containerlab

Getting started with Containerlab couldn't be simpler. It is probably one of the easiest projects out there to get started with.

In my lab I have prepared an Ubuntu virtual machine with some disk, 8 CPUs and 16GB of RAM (this could probably be reduced to a much smaller spec, but I have the resources available for it).

On my clean Ubuntu machine, I just had to run this script provided by Containerlab to prepare and install everything needed (including Docker):

1curl -sL https://containerlab.dev/setup | sudo -E bash -s "all"

That's it; after a couple of minutes it is ready. All I need now is to grab my cEOS image and load it into the local Docker image store (on the same host where Containerlab is installed).
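To quickly verify that the installation went fine, a couple of basic checks like the following should do (a minimal sketch; the exact output will of course differ depending on the installed version):

# Confirm the containerlab binary is installed and working
containerlab version
# Confirm Docker is installed and the daemon is running (the list will be empty at this point)
docker ps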

Containerlab supports a bunch of network operating system containers, like cEOS, and there are of course a lot of options, configuration possibilities and customizations that can be done. I highly recommend Containerlab, and the documentation provided on the project page is very good. All the info I needed to get started was there.

Getting the lab up and running

My intention is to deploy 5 cEOS containers to form my spine-leaf topology, like this:

ceos-containers

Prepare Containerlab to deploy my cEOS containers and desired topology

To get started with cEOS in Containerlab I have to download the cEOS image to my VM running Containerlab, and then import it into the local Docker image store:

1andreasm@containerlab:~$ docker import cEOS-lab-4.32.2F.tar.xz ceos:4.32.2F
2andreasm@containerlab:~$ docker images
3REPOSITORY   TAG       IMAGE ID       CREATED      SIZE
4ceos         4.32.2F   deaae9fc39b3   3 days ago   2.09GB

When the image is available locally I can start creating my Containerlab Topology file:

 1name: spine-leaf-borderleaf
 2
 3mgmt:
 4  network: custom_mgmt                # management network name
 5  ipv4-subnet: 192.168.20.0/24       # ipv4 range
 6
 7topology:
 8  nodes:
 9    node-1:
10      kind: arista_ceos
11      image: ceos:4.32.2F
12      startup-config: node1-startup-config.cfg
13      mgmt-ipv4: 192.168.20.2
14    node-2:
15      kind: arista_ceos
16      image: ceos:4.32.2F
17      startup-config: node2-startup-config.cfg
18      mgmt-ipv4: 192.168.20.3
19    node-3:
20      kind: arista_ceos
21      image: ceos:4.32.2F
22      startup-config: node3-startup-config.cfg
23      mgmt-ipv4: 192.168.20.4
24    node-4:
25      kind: arista_ceos
26      image: ceos:4.32.2F
27      startup-config: node4-startup-config.cfg
28      mgmt-ipv4: 192.168.20.5
29    node-5:
30      kind: arista_ceos
31      image: ceos:4.32.2F
32      startup-config: node5-startup-config.cfg
33      mgmt-ipv4: 192.168.20.6
34    br-node-3:
35      kind: bridge
36    br-node-4:
37      kind: bridge
38    br-node-5:
39      kind: bridge
40
41  links:
42    - endpoints: ["node-3:eth1", "node-1:eth1"]
43    - endpoints: ["node-3:eth2", "node-2:eth1"]
44    - endpoints: ["node-4:eth1", "node-1:eth2"]
45    - endpoints: ["node-4:eth2", "node-2:eth2"]
46    - endpoints: ["node-5:eth1", "node-1:eth3"]
47    - endpoints: ["node-5:eth2", "node-2:eth3"]
48    - endpoints: ["node-3:eth3", "br-node-3:n3-eth3"]
49    - endpoints: ["node-4:eth3", "br-node-4:n4-eth3"]
50    - endpoints: ["node-5:eth3", "br-node-5:n5-eth3"]

A short explanation of the above YAML: I define a custom management network for the cEOS out-of-band Management0 interface, then I add all my nodes, defining a static management IP per node. I also point to a startup config that provides the necessary minimum config for each cEOS (the configs are shown further down). Then I have added 3 bridges, which are used for the downlinks on the leaves for client/server/VM connectivity later on. The links section defines how the cEOS nodes interconnect with each other; this wiring is taken care of by Containerlab, and I only need to define which interface on one node links to which interface on another. These are the point-to-point links between the leaves and the spines. Then I have the br-node links, which simply connect the leaves to the br-node-x bridges created in my OS like this:

1andreasm@containerlab:~$ sudo ip link add name br-node-3 type bridge
2andreasm@containerlab:~$ sudo ip link set br-node-3 up
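The same two commands are repeated for the other bridges referenced in the topology file, br-node-4 and br-node-5:

andreasm@containerlab:~$ sudo ip link add name br-node-4 type bridge
andreasm@containerlab:~$ sudo ip link set br-node-4 up
andreasm@containerlab:~$ sudo ip link add name br-node-5 type bridge
andreasm@containerlab:~$ sudo ip link set br-node-5 up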

Below is my startup-config for each cEOS:

 1!
 2daemon TerminAttr
 3   exec /usr/bin/TerminAttr -cvaddr=172.18.100.99:9910 -cvauth=token,/tmp/token -cvvrf=MGMT -disableaaa -smashexcludes=ale,flexCounter,hardware,kni,pulse,strata -ingestexclude=/Sysdb/cell/1/agent,/Sysdb/cell/2/agent -taillogs
 4   no shutdown
 5!
 6hostname dc1-spine1
 7!
 8! Configures username and password for the ansible user
 9username ansible privilege 15 role network-admin secret sha512 $4$redacted/
10!
11! Defines the VRF for MGMT
12vrf instance MGMT
13!
14! Defines the settings for the Management1 interface through which Ansible reaches the device
15interface Management0 # Note the Management0 here.. not 1
16   description oob_management
17   no shutdown
18   vrf MGMT
19   ! IP address - must be set uniquely per device
20   ip address 192.168.20.2/24
21!
22! Static default route for VRF MGMT
23ip route vrf MGMT 0.0.0.0/0 192.168.20.1
24!
25! Enables API access in VRF MGMT
26management api http-commands
27   protocol https
28   no shutdown
29   !
30   vrf MGMT
31      no shutdown
32!
33end
34!
35! Save configuration to flash
36copy running-config startup-config

The startup config above also includes the configuration that registers the switches with my CVP inventory: the daemon TerminAttr section.

Now it is time to deploy my lab. Let's see how this goes 😄

 1andreasm@containerlab:~/containerlab/lab-spine-leaf-cvp$ sudo containerlab deploy -t spine-leaf-border.yaml 
 2INFO[0000] Containerlab v0.56.0 started                 
 3INFO[0000] Parsing & checking topology file: spine-leaf-border.yaml 
 4INFO[0000] Creating docker network: Name="custom_mgmt", IPv4Subnet="192.168.20.0/24", IPv6Subnet="", MTU=0 
 5INFO[0000] Creating lab directory: /home/andreasm/containerlab/lab-spine-leaf-cvp/clab-spine-leaf-borderleaf 
 6INFO[0000] Creating container: "node-1"                 
 7INFO[0000] Creating container: "node-5"                 
 8INFO[0000] Creating container: "node-3"                 
 9INFO[0000] Creating container: "node-4"                 
10INFO[0000] Creating container: "node-2"                 
11INFO[0001] Running postdeploy actions for Arista cEOS 'node-5' node 
12INFO[0001] Created link: node-5:eth3 <--> br-node-5:n5-eth3 
13INFO[0001] Created link: node-4:eth1 <--> node-1:eth2   
14INFO[0001] Created link: node-3:eth1 <--> node-1:eth1   
15INFO[0001] Created link: node-4:eth2 <--> node-2:eth2   
16INFO[0001] Created link: node-3:eth2 <--> node-2:eth1   
17INFO[0001] Created link: node-5:eth1 <--> node-1:eth3   
18INFO[0001] Running postdeploy actions for Arista cEOS 'node-1' node 
19INFO[0001] Created link: node-4:eth3 <--> br-node-4:n4-eth3 
20INFO[0001] Running postdeploy actions for Arista cEOS 'node-4' node 
21INFO[0001] Created link: node-3:eth3 <--> br-node-3:n3-eth3 
22INFO[0001] Running postdeploy actions for Arista cEOS 'node-3' node 
23INFO[0001] Created link: node-5:eth2 <--> node-2:eth3   
24INFO[0001] Running postdeploy actions for Arista cEOS 'node-2' node 
25INFO[0050] Adding containerlab host entries to /etc/hosts file 
26INFO[0050] Adding ssh config for containerlab nodes     
27INFO[0050] 🎉 New containerlab version 0.57.0 is available! Release notes: https://containerlab.dev/rn/0.57/
28Run 'containerlab version upgrade' to upgrade or go check other installation options at https://containerlab.dev/install/ 
29+---+-----------------------------------+--------------+--------------+-------------+---------+-----------------+--------------+
30| # |               Name                | Container ID |    Image     |    Kind     |  State  |  IPv4 Address   | IPv6 Address |
31+---+-----------------------------------+--------------+--------------+-------------+---------+-----------------+--------------+
32| 1 | clab-spine-leaf-borderleaf-node-1 | 9947cd235370 | ceos:4.32.2F | arista_ceos | running | 192.168.20.2/24 | N/A          |
33| 2 | clab-spine-leaf-borderleaf-node-2 | 2051bdcc81e6 | ceos:4.32.2F | arista_ceos | running | 192.168.20.3/24 | N/A          |
34| 3 | clab-spine-leaf-borderleaf-node-3 | 0b6ef17f29e8 | ceos:4.32.2F | arista_ceos | running | 192.168.20.4/24 | N/A          |
35| 4 | clab-spine-leaf-borderleaf-node-4 | f88bfe335603 | ceos:4.32.2F | arista_ceos | running | 192.168.20.5/24 | N/A          |
36| 5 | clab-spine-leaf-borderleaf-node-5 | a1f6eff1bd18 | ceos:4.32.2F | arista_ceos | running | 192.168.20.6/24 | N/A          |
37+---+-----------------------------------+--------------+--------------+-------------+---------+-----------------+--------------+
38andreasm@containerlab:~/containerlab/lab-spine-leaf-cvp$ 

Wow, a new version of Containerlab is out...
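Before logging in, the state of the lab can also be checked again at any time with containerlab inspect, which prints the same summary table as the deploy command (assuming the same topology file name):

sudo containerlab inspect -t spine-leaf-border.yaml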

After a minute or two the cEOS containers appear to be up and running. Let's see if I can log into one of them.

 1andreasm@containerlab:~/containerlab/lab-spine-leaf-cvp$ ssh ansible@192.168.20.2
 2(ansible@192.168.20.2) Password: 
 3dc1-spine1>
 4dc1-spine1>en
 5dc1-spine1#configure 
 6dc1-spine1(config)#show running-config 
 7! Command: show running-config
 8! device: dc1-spine1 (cEOSLab, EOS-4.32.2F-38195967.4322F (engineering build))
 9!
10no aaa root
11!
12username ansible privilege 15 role network-admin secret sha512 $6redactedxMEEocchsdf/
13!
14management api http-commands
15   no shutdown
16   !
17   vrf MGMT
18      no shutdown
19!
20daemon TerminAttr
21   exec /usr/bin/TerminAttr -cvaddr=172.18.100.99:9910 -cvauth=token,/tmp/token -cvvrf=MGMT -disableaaa -smashexcludes=ale,flexCounter,hardware,kni,pulse,strata -ingestexclude=/Sysdb/cell/1/agent,/Sysdb/cell/2/agent -taillogs
22   no shutdown
23!
24no service interface inactive port-id allocation disabled
25!
26transceiver qsfp default-mode 4x10G
27!
28service routing protocols model multi-agent
29!
30hostname dc1-spine1
31!
32spanning-tree mode mstp
33!
34system l1
35   unsupported speed action error
36   unsupported error-correction action error
37!
38vrf instance MGMT
39!
40interface Ethernet1
41!
42interface Ethernet2
43!
44interface Ethernet3
45!
46interface Management0
47   description oob_management
48   vrf MGMT
49   ip address 192.168.20.2/24
50!
51no ip routing
52no ip routing vrf MGMT
53!
54ip route vrf MGMT 0.0.0.0/0 192.168.20.1
55!
56router multicast
57   ipv4
58      software-forwarding kernel
59   !
60   ipv6
61      software-forwarding kernel
62!
63end
64dc1-spine1(config)#

This is really nice. Now, did something happen in my CVP?

Before I started my lab, this was the view in my CVP:

empty-inventory

Now, let's go in and check again:

devices-in-inventory

Alright, my lab is up. But I am of course missing the full config. The switches have just been deployed; no connections, peerings etc. have been configured yet.

Back to Containerlab. There is one command I would like to test out, containerlab graph -t topology.yml. Let's see what it does:

1andreasm@containerlab:~/containerlab/lab-spine-leaf-cvp$ sudo containerlab graph -t spine-leaf-border.yaml 
2INFO[0000] Parsing & checking topology file: spine-leaf-border.yaml 
3INFO[0000] Serving static files from directory: /etc/containerlab/templates/graph/nextui/static 
4INFO[0000] Serving topology graph on http://0.0.0.0:50080 

Let's open my browser:

a-nice-diagram

A full topology layout, including the bridge interfaces!!!! NICE

Preparing Arista Validated Design to use CVP

I will not go through all the files I have edited in my AVD project folder, as most of them are identical to my previous post; instead I will cover the changes needed to use CVP rather than pushing directly to the EOS switches. Below I have commented which files need to be updated and provide my examples. The rest are unchanged and not shown.

Below are the files I need to edit in general for AVD to deploy my desired single DC L3LS spine-leaf topology.

 1├── ansible.cfg
 2├── deploy-cvp.yml # execute false or true  
 3├── group_vars/
 4│ ├── CONNECTED_ENDPOINTS.yml # untouched
 5│ ├── DC1_L2_LEAVES.yml # untouched
 6│ ├── DC1_L3_LEAVES.yml # untouched
 7│ ├── DC1_SPINES.yml # untouched
 8│ ├── DC1.yml # added "mgmt_interface: Management0" and updated dict-of-dicts to list-of-dicts
 9│ ├── FABRIC.yml # This needs to reflect on my CVP endpoint
10│ └── NETWORK_SERVICES.yml # untouched
11├── inventory.yml # This needs to reflect my CVP configuration

In deploy-cvp.yml I need to set execute_tasks depending on whether I want AVD to execute the tasks directly in CVP or not. I have disabled execution of the tasks (the default), as I want to show what it looks like in CVP.

 1---
 2- name: Deploy Configurations to Devices Using CloudVision Portal # (1)!
 3  hosts: CLOUDVISION
 4  gather_facts: false
 5  connection: local
 6  tasks:
 7
 8    - name: Deploy Configurations to CloudVision # (2)!
 9      ansible.builtin.import_role:
10        name: arista.avd.eos_config_deploy_cvp
11      vars:
12        cv_collection: v3 # (3)!
13        fabric_name: FABRIC # (4)!
14        execute_tasks: false
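If I later want a fully hands-off run, I can either flip execute_tasks to true in the playbook, or, since Ansible extra vars take precedence over play vars, pass it on the command line. A sketch, not something I ran here:

ansible-playbook deploy-cvp.yml -e execute_tasks=true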

In inventory.yml the CVP-relevant section is added (it is there by default, but I removed it in my previous post as I did not use it):

 1---
 2all:
 3  children:
 4    CLOUDVISION:
 5      hosts:
 6        cvp:
 7          # Ansible variables used by the ansible_avd and ansible_cvp roles to push configuration to devices via CVP
 8          ansible_host: cvp-01.domain.net
 9          ansible_httpapi_host: cvp-01.domain.net
10          ansible_user: ansible
11          ansible_password: password
12          ansible_connection: httpapi
13          ansible_httpapi_use_ssl: true
14          ansible_httpapi_validate_certs: false
15          ansible_network_os: eos
16          ansible_httpapi_port: 443
17          ansible_python_interpreter: $(which python3)
18
19
20    FABRIC:
21      children:
22        DC1:
23          children:
24            DC1_SPINES:
25              hosts:
26                dc1-spine1:
27                  ansible_host: 192.168.20.2
28                dc1-spine2:
29                  ansible_host: 192.168.20.3
30            DC1_L3_LEAVES:
31              hosts:
32                dc1-leaf1:
33                  ansible_host: 192.168.20.4
34                dc1-leaf2:
35                  ansible_host: 192.168.20.5
36                dc1-borderleaf1:
37                  ansible_host: 192.168.20.6
38
39    NETWORK_SERVICES:
40      children:
41        DC1_L3_LEAVES:
42    CONNECTED_ENDPOINTS:
43      children:
44        DC1_L3_LEAVES:

DC1.yml is updated to reflect the coming deprecation of the dict-of-dicts data model in favour of list-of-dicts, and mgmt_interface: Management0 has been added.

  1---
  2# Default gateway used for the management interface
  3mgmt_gateway: 192.168.0.1
  4mgmt_interface: Management0
  5
  6
  7# Spine switch group
  8spine:
  9  # Definition of default values that will be configured to all nodes defined in this group
 10  defaults:
 11    # Set the relevant platform as each platform has different default values in Ansible AVD
 12    platform: cEOS-lab
 13    # Pool of IPv4 addresses to configure interface Loopback0 used for BGP EVPN sessions
 14    loopback_ipv4_pool: 192.168.0.0/27
 15    # ASN to be used by BGP
 16    bgp_as: 65100
 17
 18  # Definition of nodes contained in this group.
 19  # Specific configuration of device must take place under the node definition. Each node inherits all values defined under 'defaults'
 20  nodes:
 21    # Name of the node to be defined (must be consistent with definition in inventory)
 22    - name: dc1-spine1
 23      # Device ID definition. An integer number used for internal calculations (ie. IPv4 address of the loopback_ipv4_pool among others)
 24      id: 1
 25      # Management IP to be assigned to the management interface
 26      mgmt_ip: 192.168.20.2/24
 27
 28    - name: dc1-spine2
 29      id: 2
 30      mgmt_ip: 192.168.20.3/24
 31
 32# L3 Leaf switch group
 33l3leaf:
 34  defaults:
 35    # Set the relevant platform as each platform has different default values in Ansible AVD
 36    platform: cEOS-lab
 37    # Pool of IPv4 addresses to configure interface Loopback0 used for BGP EVPN sessions
 38    loopback_ipv4_pool: 192.168.0.0/27
 39    # Offset all assigned loopback IP addresses.
 40    # Required when the < loopback_ipv4_pool > is same for 2 different node_types (like spine and l3leaf) to avoid over-lapping IPs.
 41    # For example, set the minimum offset l3leaf.defaults.loopback_ipv4_offset: < total # spine switches > or vice versa.
 42    loopback_ipv4_offset: 2
 43    # Definition of pool of IPs to be used as Virtual Tunnel EndPoint (VXLAN origin and destination IPs)
 44    vtep_loopback_ipv4_pool: 192.168.1.0/27
 45    # Ansible hostname of the devices used to establish neighborship (IP assignments and BGP peering)
 46    uplink_switches: ['dc1-spine1', 'dc1-spine2']
 47    # Definition of pool of IPs to be used in P2P links
 48    uplink_ipv4_pool: 192.168.100.0/26
 49    # Definition of pool of IPs to be used for MLAG peer-link connectivity
 50    #mlag_peer_ipv4_pool: 10.255.1.64/27
 51    # iBGP Peering between MLAG peers
 52    #mlag_peer_l3_ipv4_pool: 10.255.1.96/27
 53    # Virtual router mac for VNIs assigned to Leaf switches in format xx:xx:xx:xx:xx:xx
 54    virtual_router_mac_address: 00:1c:73:00:00:99
 55    spanning_tree_priority: 4096
 56    spanning_tree_mode: mstp
 57
 58  # If two nodes (and only two) are in the same node_group, they will automatically form an MLAG pair
 59  node_groups:
 60    # Definition of a node group that will include two devices in MLAG.
 61    # Definitions under the group will be inherited by both nodes in the group
 62    - group: DC1_L3_LEAF1
 63      # ASN to be used by BGP for the group. Both devices in the MLAG pair will use the same BGP ASN
 64      bgp_as: 65101
 65      nodes:
 66        # Definition of hostnames under the node_group
 67        - name: dc1-leaf1
 68          id: 1
 69          mgmt_ip: 192.168.20.4/24
 70          # Definition of the port to be used in the uplink device facing this device.
 71          # Note that the number of elements in this list must match the length of 'uplink_switches' as well as 'uplink_interfaces'
 72          uplink_switch_interfaces:
 73            - Ethernet1
 74            - Ethernet1
 75    # Definition of a node group that will include two devices in MLAG.
 76    # Definitions under the group will be inherited by both nodes in the group
 77    - group: DC1_L3_LEAF2
 78      # ASN to be used by BGP for the group. Both devices in the MLAG pair will use the same BGP ASN
 79      bgp_as: 65102
 80      nodes:
 81        # Definition of hostnames under the node_group
 82        - name: dc1-leaf2
 83          id: 2
 84          mgmt_ip: 192.168.20.5/24
 85          uplink_switch_interfaces:
 86            - Ethernet2
 87            - Ethernet2
 88    # Definition of a node group that will include two devices in MLAG.
 89    # Definitions under the group will be inherited by both nodes in the group
 90    - group: DC1_L3_BORDERLEAF1
 91      # ASN to be used by BGP for the group. Both devices in the MLAG pair will use the same BGP ASN
 92      bgp_as: 65102
 93      nodes:
 94        # Definition of hostnames under the node_group
 95        - name: dc1-borderleaf1
 96          id: 3
 97          mgmt_ip: 192.168.20.6/24
 98          uplink_switch_interfaces:
 99            - Ethernet3
100            - Ethernet3

That should be it. Now it is time to run two playbooks: first build.yml to create the documentation and intended configs (and catch any errors), then deploy-cvp.yml to push the configlets, containers and tasks to CVP. Let's see what happens.

Info: For AVD and CVP to reach my cEOS containers I have created a static route in my physical router like this: *ip route 192.168.0.0/24 10.100.5.40* (the IP of my Containerlab host/VM).
Info: A note on the virtual network adapters for the cEOS appliances: the management interface is Management0, not Management1. This needs to be reflected in the AVD configs by adding "mgmt_interface: Management0" in DC1.yml.

Since my previous post, AVD has been upgraded to version 4.10, the latest version at the time of writing. I have also updated my YAML files to accommodate this coming deprecation:

1[DEPRECATION WARNING]: [dc1-spine1]: The input data model 'dict-of-dicts to list-of-dicts 
2automatic conversion' is deprecated. See 'https://avd.arista.com/stable/docs/porting-
3guides/4.x.x.html#data-model-changes-from-dict-of-dicts-to-list-of-dicts' for details. This 
4feature will be removed from arista.avd.eos_designs in version 5.0.0. Deprecation warnings can be 
5disabled by setting deprecation_warnings=False in ansible.cfg.
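As the warning itself points out, it can also simply be silenced in ansible.cfg instead of updating the data models. A minimal sketch, assuming ansible.cfg does not already contain a [defaults] section (otherwise add the key under the existing one):

# Append a [defaults] section that disables deprecation warnings
cat >> ansible.cfg <<'EOF'
[defaults]
deprecation_warnings = False
EOF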

build.yml

 1(clab01) andreasm@linuxmgmt10:~/containerlab/clab01/single-dc-l3ls$ ansible-playbook build.yml 
 2
 3PLAY [Build Configurations and Documentation] *****************************************************
 4
 5TASK [arista.avd.eos_designs : Verify Requirements] ***********************************************
 6AVD version 4.10.0
 7Use -v for details.
 8ok: [dc1-spine1 -> localhost]
 9
10TASK [arista.avd.eos_designs : Create required output directories if not present] *****************
11ok: [dc1-spine1 -> localhost] => (item=/home/andreasm/containerlab/clab01/single-dc-l3ls/intended/structured_configs)
12ok: [dc1-spine1 -> localhost] => (item=/home/andreasm/containerlab/clab01/single-dc-l3ls/documentation/fabric)
13
14TASK [arista.avd.eos_designs : Set eos_designs facts] *********************************************
15ok: [dc1-spine1]
16
17TASK [arista.avd.eos_designs : Generate device configuration in structured format] ****************
18changed: [dc1-leaf2 -> localhost]
19changed: [dc1-borderleaf1 -> localhost]
20changed: [dc1-spine1 -> localhost]
21changed: [dc1-spine2 -> localhost]
22changed: [dc1-leaf1 -> localhost]
23
24TASK [arista.avd.eos_designs : Generate fabric documentation] *************************************
25changed: [dc1-spine1 -> localhost]
26
27TASK [arista.avd.eos_designs : Generate fabric point-to-point links summary in csv format.] *******
28changed: [dc1-spine1 -> localhost]
29
30TASK [arista.avd.eos_designs : Generate fabric topology in csv format.] ***************************
31changed: [dc1-spine1 -> localhost]
32
33TASK [arista.avd.eos_designs : Remove avd_switch_facts] *******************************************
34ok: [dc1-spine1]
35
36TASK [arista.avd.eos_cli_config_gen : Verify Requirements] ****************************************
37skipping: [dc1-spine1]
38
39TASK [arista.avd.eos_cli_config_gen : Generate eos intended configuration and device documentation] ***
40changed: [dc1-spine2 -> localhost]
41changed: [dc1-spine1 -> localhost]
42changed: [dc1-leaf2 -> localhost]
43changed: [dc1-leaf1 -> localhost]
44changed: [dc1-borderleaf1 -> localhost]
45
46PLAY RECAP ****************************************************************************************
47dc1-borderleaf1            : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
48dc1-leaf1                  : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
49dc1-leaf2                  : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
50dc1-spine1                 : ok=9    changed=5    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
51dc1-spine2                 : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
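Before involving CVP at all, the artifacts generated by build.yml can be inspected locally; the paths below are taken from the task output above:

# Structured configs per device and the generated fabric documentation
ls intended/structured_configs/
ls documentation/fabric/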

This went well. Nothing has happened in CVP yet. That's next...

Now it is time to run deploy-cvp.yml

deploy-cvp.yml

 1(clab01) andreasm@linuxmgmt10:~/containerlab/clab01/single-dc-l3ls$ ansible-playbook deploy-cvp.yml 
 2
 3PLAY [Deploy Configurations to Devices Using CloudVision Portal] **********************************
 4
 5TASK [arista.avd.eos_config_deploy_cvp : Create required output directories if not present] *******
 6ok: [cvp -> localhost] => (item=/home/andreasm/containerlab/clab01/single-dc-l3ls/intended/structured_configs/cvp)
 7
 8TASK [arista.avd.eos_config_deploy_cvp : Verify Requirements] *************************************
 9AVD version 4.10.0
10Use -v for details.
11ok: [cvp -> localhost]
12
13TASK [arista.avd.eos_config_deploy_cvp : Start creation/update process.] **************************
14included: /home/andreasm/.ansible/collections/ansible_collections/arista/avd/roles/eos_config_deploy_cvp/tasks/v3/main.yml for cvp
15
16TASK [arista.avd.eos_config_deploy_cvp : Generate intended variables] *****************************
17ok: [cvp]
18
19TASK [arista.avd.eos_config_deploy_cvp : Build DEVICES and CONTAINER definition for cvp] **********
20changed: [cvp -> localhost]
21
22TASK [arista.avd.eos_config_deploy_cvp : Start creation/update process.] **************************
23included: /home/andreasm/.ansible/collections/ansible_collections/arista/avd/roles/eos_config_deploy_cvp/tasks/v3/present.yml for cvp
24
25TASK [arista.avd.eos_config_deploy_cvp : Load CVP device information for cvp] *********************
26ok: [cvp]
27
28TASK [arista.avd.eos_config_deploy_cvp : Create configlets on CVP cvp.] ***************************
29changed: [cvp]
30
31TASK [arista.avd.eos_config_deploy_cvp : Execute any configlet generated tasks to update configuration on cvp] ***
32skipping: [cvp]
33
34TASK [arista.avd.eos_config_deploy_cvp : Building Containers topology on cvp] *********************
35changed: [cvp]
36
37TASK [arista.avd.eos_config_deploy_cvp : Execute pending tasks on cvp] ****************************
38skipping: [cvp]
39
40TASK [arista.avd.eos_config_deploy_cvp : Configure devices on cvp] ********************************
41changed: [cvp]
42
43TASK [arista.avd.eos_config_deploy_cvp : Execute pending tasks on cvp] ****************************
44skipping: [cvp]
45
46PLAY RECAP ****************************************************************************************
47cvp                        : ok=10   changed=4    unreachable=0    failed=0    skipped=3    rescued=0    ignored=0   

That went without any issues.

What's happening in CVP

Before I ran the playbook above, this was the content of the sections below:

no-containers

no-configlets

no-pending-tasks

Now, after I have run my playbook:

new-containers

I can see some new containers, and 5 tasks.

new-configlets

New configlets have been added.

Let's have a look at the tasks.

5-tasks-pending

I do indeed have 5 tasks pending. Let me inspect one of them before I decide whether to approve them or not.

task-details

The Designed Configuration certainly looks more interesting than the Running Configuration, so I think I will approve these tasks.

I then need to create a Change Control, and to keep it as simple as possible I will select all 5 tasks and create a single Change Control that includes them all.

create-change-control

create-cc

change-to-review

Let me review and hopefully approve

warning

There is a warning there, but I think I will take my chances on it.

Approve and Execute

configlet-push

Now my cEOS switches should be getting their configuration. Let's check one of them once the tasks have completed.

All tasks completed, and the switches have been placed in their respective containers:

containers

Let's check the config on one of the switches:

  1dc1-borderleaf1#show running-config 
  2! Command: show running-config
  3! device: dc1-borderleaf1 (cEOSLab, EOS-4.32.2F-38195967.4322F (engineering build))
  4!
  5no aaa root
  6!
  7username admin privilege 15 role network-admin nopassword
  8username ansible privilege 15 role network-admin secret sha512 $4$redactedxMEEoccYHS/
  9!
 10management api http-commands
 11   no shutdown
 12   !
 13   vrf MGMT
 14      no shutdown
 15!
 16daemon TerminAttr
 17   exec /usr/bin/TerminAttr -cvaddr=172.18.100.99:9910 -cvauth=token,/tmp/token -cvvrf=MGMT -disableaaa -smashexcludes=ale,flexCounter,hardware,kni,pulse,strata -ingestexclude=/Sysdb/cell/1/agent,/Sysdb/cell/2/agent -taillogs
 18   no shutdown
 19!
 20vlan internal order ascending range 1100 1300
 21!
 22no service interface inactive port-id allocation disabled
 23!
 24transceiver qsfp default-mode 4x10G
 25!
 26service routing protocols model multi-agent
 27!
 28hostname dc1-borderleaf1
 29ip name-server vrf MGMT 10.100.1.7
 30!
 31spanning-tree mode mstp
 32spanning-tree mst 0 priority 4096
 33!
 34system l1
 35   unsupported speed action error
 36   unsupported error-correction action error
 37!
 38vlan 1070
 39   name VRF11_VLAN1070
 40!
 41vlan 1071
 42   name VRF11_VLAN1071
 43!
 44vlan 1074
 45   name L2_VLAN1074
 46!
 47vlan 1075
 48   name L2_VLAN1075
 49!
 50vrf instance MGMT
 51!
 52vrf instance VRF11
 53!
 54interface Ethernet1
 55   description P2P_LINK_TO_DC1-SPINE1_Ethernet3
 56   mtu 1500
 57   no switchport
 58   ip address 192.168.100.9/31
 59!
 60interface Ethernet2
 61   description P2P_LINK_TO_DC1-SPINE2_Ethernet3
 62   mtu 1500
 63   no switchport
 64   ip address 192.168.100.11/31
 65!
 66interface Ethernet3
 67   description dc1-borderleaf1-wan1_WAN1
 68!
 69interface Loopback0
 70   description EVPN_Overlay_Peering
 71   ip address 192.168.0.5/32
 72!
 73interface Loopback1
 74   description VTEP_VXLAN_Tunnel_Source
 75   ip address 192.168.1.5/32
 76!
 77interface Loopback11
 78   description VRF11_VTEP_DIAGNOSTICS
 79   vrf VRF11
 80   ip address 192.168.11.5/32
 81!
 82interface Management0
 83   description oob_management
 84   vrf MGMT
 85   ip address 192.168.20.6/24
 86!
 87interface Vlan1070
 88   description VRF11_VLAN1070
 89   vrf VRF11
 90   ip address virtual 10.70.0.1/24
 91!
 92interface Vlan1071
 93   description VRF11_VLAN1071
 94   vrf VRF11
 95   ip address virtual 10.71.0.1/24
 96!
 97interface Vxlan1
 98   description dc1-borderleaf1_VTEP
 99   vxlan source-interface Loopback1
100   vxlan udp-port 4789
101   vxlan vlan 1070 vni 11070
102   vxlan vlan 1071 vni 11071
103   vxlan vlan 1074 vni 11074
104   vxlan vlan 1075 vni 11075
105   vxlan vrf VRF11 vni 11
106!
107ip virtual-router mac-address 00:1c:73:00:00:99
108ip address virtual source-nat vrf VRF11 address 192.168.11.5
109!
110ip routing
111no ip routing vrf MGMT
112ip routing vrf VRF11
113!
114ip prefix-list PL-LOOPBACKS-EVPN-OVERLAY
115   seq 10 permit 192.168.0.0/27 eq 32
116   seq 20 permit 192.168.1.0/27 eq 32
117!
118ip route vrf MGMT 0.0.0.0/0 192.168.20.1
119!
120ntp local-interface vrf MGMT Management0
121ntp server vrf MGMT 10.100.1.7 prefer
122!
123route-map RM-CONN-2-BGP permit 10
124   match ip address prefix-list PL-LOOPBACKS-EVPN-OVERLAY
125!
126router bfd
127   multihop interval 300 min-rx 300 multiplier 3
128!
129router bgp 65102
130   router-id 192.168.0.5
131   update wait-install
132   no bgp default ipv4-unicast
133   maximum-paths 4 ecmp 4
134   neighbor EVPN-OVERLAY-PEERS peer group
135   neighbor EVPN-OVERLAY-PEERS update-source Loopback0
136   neighbor EVPN-OVERLAY-PEERS bfd
137   neighbor EVPN-OVERLAY-PEERS ebgp-multihop 3
138   neighbor EVPN-OVERLAY-PEERS password 7 Q4fqtbqcZ7oQuKfuWtNGRQ==
139   neighbor EVPN-OVERLAY-PEERS send-community
140   neighbor EVPN-OVERLAY-PEERS maximum-routes 0
141   neighbor IPv4-UNDERLAY-PEERS peer group
142   neighbor IPv4-UNDERLAY-PEERS password 7 7x4B4rnJhZB438m9+BrBfQ==
143   neighbor IPv4-UNDERLAY-PEERS send-community
144   neighbor IPv4-UNDERLAY-PEERS maximum-routes 12000
145   neighbor 192.168.0.1 peer group EVPN-OVERLAY-PEERS
146   neighbor 192.168.0.1 remote-as 65100
147   neighbor 192.168.0.1 description dc1-spine1
148   neighbor 192.168.0.2 peer group EVPN-OVERLAY-PEERS
149   neighbor 192.168.0.2 remote-as 65100
150   neighbor 192.168.0.2 description dc1-spine2
151   neighbor 192.168.100.8 peer group IPv4-UNDERLAY-PEERS
152   neighbor 192.168.100.8 remote-as 65100
153   neighbor 192.168.100.8 description dc1-spine1_Ethernet3
154   neighbor 192.168.100.10 peer group IPv4-UNDERLAY-PEERS
155   neighbor 192.168.100.10 remote-as 65100
156   neighbor 192.168.100.10 description dc1-spine2_Ethernet3
157   redistribute connected route-map RM-CONN-2-BGP
158   !
159   vlan 1070
160      rd 192.168.0.5:11070
161      route-target both 11070:11070
162      redistribute learned
163   !
164   vlan 1071
165      rd 192.168.0.5:11071
166      route-target both 11071:11071
167      redistribute learned
168   !
169   vlan 1074
170      rd 192.168.0.5:11074
171      route-target both 11074:11074
172      redistribute learned
173   !
174   vlan 1075
175      rd 192.168.0.5:11075
176      route-target both 11075:11075
177      redistribute learned
178   !
179   address-family evpn
180      neighbor EVPN-OVERLAY-PEERS activate
181   !
182   address-family ipv4
183      no neighbor EVPN-OVERLAY-PEERS activate
184      neighbor IPv4-UNDERLAY-PEERS activate
185   !
186   vrf VRF11
187      rd 192.168.0.5:11
188      route-target import evpn 11:11
189      route-target export evpn 11:11
190      router-id 192.168.0.5
191      redistribute connected
192!
193router multicast
194   ipv4
195      software-forwarding kernel
196   !
197   ipv6
198      software-forwarding kernel
199!
200end
201dc1-borderleaf1# 
 1dc1-borderleaf1# show bgp summary 
 2BGP summary information for VRF default
 3Router identifier 192.168.0.5, local AS number 65102
 4Neighbor                AS Session State AFI/SAFI                AFI/SAFI State   NLRI Rcd   NLRI Acc
 5-------------- ----------- ------------- ----------------------- -------------- ---------- ----------
 6192.168.0.1          65100 Established   L2VPN EVPN              Negotiated              7          7
 7192.168.0.2          65100 Established   L2VPN EVPN              Negotiated              7          7
 8192.168.100.8        65100 Established   IPv4 Unicast            Negotiated              3          3
 9192.168.100.10       65100 Established   IPv4 Unicast            Negotiated              3          3
10dc1-borderleaf1#show  ip bgp 
11BGP routing table information for VRF default
12Router identifier 192.168.0.5, local AS number 65102
13Route status codes: s - suppressed contributor, * - valid, > - active, E - ECMP head, e - ECMP
14                    S - Stale, c - Contributing to ECMP, b - backup, L - labeled-unicast
15                    % - Pending best path selection
16Origin codes: i - IGP, e - EGP, ? - incomplete
17RPKI Origin Validation codes: V - valid, I - invalid, U - unknown
18AS Path Attributes: Or-ID - Originator ID, C-LST - Cluster List, LL Nexthop - Link Local Nexthop
19
20          Network                Next Hop              Metric  AIGP       LocPref Weight  Path
21 * >      192.168.0.1/32         192.168.100.8         0       -          100     0       65100 i
22 * >      192.168.0.2/32         192.168.100.10        0       -          100     0       65100 i
23 * >Ec    192.168.0.3/32         192.168.100.8         0       -          100     0       65100 65101 i
24 *  ec    192.168.0.3/32         192.168.100.10        0       -          100     0       65100 65101 i
25 * >      192.168.0.5/32         -                     -       -          -       0       i
26 * >Ec    192.168.1.3/32         192.168.100.8         0       -          100     0       65100 65101 i
27 *  ec    192.168.1.3/32         192.168.100.10        0       -          100     0       65100 65101 i
28 * >      192.168.1.5/32         -                     -       -          -       0       i
29dc1-borderleaf1#show vxlan address-table 
30          Vxlan Mac Address Table
31----------------------------------------------------------------------
32
33VLAN  Mac Address     Type      Prt  VTEP             Moves   Last Move
34----  -----------     ----      ---  ----             -----   ---------
351300  001c.7357.10f4  EVPN      Vx1  192.168.1.3      1       0:03:22 ago
36Total Remote Mac Addresses for this criterion: 1
37dc1-borderleaf1#

All BGP neighbors are established and VXLAN is up and running.

Job well done. Post-deployment updates to the config can (and should) continue to be done this way, as the changes will then land in CVP, where someone needs to review and approve them before they are applied. And by using a declarative approach like AVD there is minimal risk of someone manually overwriting or overriding the config, taking human error out of the picture.
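In practice a post-deployment change is just a matter of editing the AVD data models and running the same two playbooks again; CVP will then raise new tasks for review:

ansible-playbook build.yml
ansible-playbook deploy-cvp.yml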

Again, a note on CVP: there will be another post coming that focuses only on CVP, so stay tuned for that one.

Connecting generic containers to the cEOS switches

Now that I have my full fabric up and running, I would like to test connectivity between two generic Ubuntu containers, each connected to its own L3 leaf switch and each on a different VLAN. Containerlab supports several ways to interact with the network nodes; I decided to go with an easy approach and spin up a couple of generic Docker containers. When I deployed my cEOS lab earlier I created a couple of Linux bridges that the leaf switches connect their Ethernet3 interfaces to, each leaf with its own dedicated bridge. That means I just need to deploy my container clients and connect their interfaces to these bridges as well.

To make the test a bit more interesting, I want to attach Client-1 to br-node-3, where dc1-leaf1 Ethernet3 is attached, and Client-2 to br-node-4, where dc1-leaf2 Ethernet3 is attached.

client-1-2-attached

To deploy a generic Docker container, like an Ubuntu container, using Containerlab I need to either add additional nodes of kind: linux to my existing lab topology YAML, or create a separate topology YAML. Both ways are explained below.

Create new topology and connect to existing bridges

If I quickly want to add generic container nodes to my already running cEOS topology, I need to define an additional topology YAML where I define my "clients" and where they should be linked. It is not possible to update or apply changes to an existing running topology in Containerlab using Docker. Here is my "client" YAML:

 1name: clients-attached
 2topology:
 3  nodes:
 4    client-1:
 5      kind: linux
 6      image: ubuntu:latest
 7    client-2:
 8      kind: linux
 9      image: ubuntu:latest
10    br-node-4:
11      kind: bridge
12    br-node-3:
13      kind: bridge
14  links:
15    - endpoints: ["client-1:eth1","br-node-3:eth4"]
16    - endpoints: ["client-2:eth1","br-node-4:eth12"]

Client-1 eth1 is attached to the br-node-3 bridge as eth4, the same bridge my Leaf-1 Ethernet3 is connected to. Client-2 eth1 is attached to the br-node-4 bridge as eth12, the same bridge my Leaf-2 Ethernet3 is connected to. The ethX number on the bridge side is just another free interface number.
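With that in place, the extra topology is deployed alongside the running lab with the usual deploy command (the filename clients-attached.yaml is just what I assume the file would be saved as):

sudo containerlab deploy -t clients-attached.yaml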

Adding generic containers to existing topology

As it is very easy to bring down my lab and re-provision everything again using Containerlab, AVD and CVP, I can also modify my existing topology to include my test clients (generic Linux containers).

 1name: spine-leaf-borderleaf
 2
 3mgmt:
 4  network: custom_mgmt                # management network name
 5  ipv4-subnet: 192.168.20.0/24       # ipv4 range
 6
 7topology:
 8  nodes:
 9    node-1:
10      kind: arista_ceos
11      image: ceos:4.32.2F
12      startup-config: node1-startup-config.cfg
13      mgmt-ipv4: 192.168.20.2
14    node-2:
15      kind: arista_ceos
16      image: ceos:4.32.2F
17      startup-config: node2-startup-config.cfg
18      mgmt-ipv4: 192.168.20.3
19    node-3:
20      kind: arista_ceos
21      image: ceos:4.32.2F
22      startup-config: node3-startup-config.cfg
23      mgmt-ipv4: 192.168.20.4
24    node-4:
25      kind: arista_ceos
26      image: ceos:4.32.2F
27      startup-config: node4-startup-config.cfg
28      mgmt-ipv4: 192.168.20.5
29    node-5:
30      kind: arista_ceos
31      image: ceos:4.32.2F
32      startup-config: node5-startup-config.cfg
33      mgmt-ipv4: 192.168.20.6
34# Clients attached to specific EOS Interfaces
35    client-1:
36      kind: linux
37      image: ubuntu:latest
38    client-2:
39      kind: linux
40      image: ubuntu:latest
41
42# Bridges for specific downlinks 
43    br-node-3:
44      kind: bridge
45    br-node-4:
46      kind: bridge
47    br-node-5:
48      kind: bridge
49
50
51  links:
52    - endpoints: ["node-3:eth1", "node-1:eth1"]
53    - endpoints: ["node-3:eth2", "node-2:eth1"]
54    - endpoints: ["node-4:eth1", "node-1:eth2"]
55    - endpoints: ["node-4:eth2", "node-2:eth2"]
56    - endpoints: ["node-5:eth1", "node-1:eth3"]
57    - endpoints: ["node-5:eth2", "node-2:eth3"]
58    - endpoints: ["node-3:eth3", "br-node-3:n3-eth3"]
59    - endpoints: ["node-4:eth3", "br-node-4:n4-eth3"]
60    - endpoints: ["node-5:eth3", "br-node-5:n5-eth3"]
61    - endpoints: ["client-1:eth1","br-node-3:eth4"]
62    - endpoints: ["client-2:eth1","br-node-4:eth12"]

Then I re-deployed my topology containing the two additional Linux nodes above.
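Since the topology file changed, this is a destroy of the existing lab followed by a fresh deploy (a quick sketch of the commands, assuming the same topology file name):

sudo containerlab destroy -t spine-leaf-border.yaml
sudo containerlab deploy -t spine-leaf-border.yaml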

 1INFO[0000] Containerlab v0.56.0 started                 
 2INFO[0000] Parsing & checking topology file: spine-leaf-border.yaml 
 3INFO[0000] Creating docker network: Name="custom_mgmt", IPv4Subnet="192.168.20.0/24", IPv6Subnet="", MTU=0 
 4INFO[0000] Creating lab directory: /home/andreasm/containerlab/lab-spine-leaf-cvp/clab-spine-leaf-borderleaf 
 5INFO[0000] Creating container: "node-1"                 
 6INFO[0000] Creating container: "node-4"                 
 7INFO[0000] Creating container: "node-3"                 
 8INFO[0000] Creating container: "node-5"                 
 9INFO[0000] Creating container: "node-2"                 
10INFO[0000] Creating container: "client-2"               
11INFO[0000] Created link: node-5:eth1 <--> node-1:eth3   
12INFO[0000] Running postdeploy actions for Arista cEOS 'node-1' node 
13INFO[0001] Created link: node-3:eth1 <--> node-1:eth1   
14INFO[0001] Running postdeploy actions for Arista cEOS 'node-5' node 
15INFO[0001] Created link: node-4:eth1 <--> node-1:eth2   
16INFO[0001] Creating container: "client-1"               
17INFO[0001] Created link: node-5:eth3 <--> br-node-5:n5-eth3 
18INFO[0001] Created link: node-4:eth3 <--> br-node-4:n4-eth3 
19INFO[0001] Running postdeploy actions for Arista cEOS 'node-4' node 
20INFO[0001] Created link: node-3:eth2 <--> node-2:eth1   
21INFO[0001] Created link: node-4:eth2 <--> node-2:eth2   
22INFO[0001] Created link: node-3:eth3 <--> br-node-3:n3-eth3 
23INFO[0001] Running postdeploy actions for Arista cEOS 'node-3' node 
24INFO[0001] Created link: node-5:eth2 <--> node-2:eth3   
25INFO[0001] Running postdeploy actions for Arista cEOS 'node-2' node 
26INFO[0001] Created link: client-2:eth1 <--> br-node-4:eth12 
27INFO[0001] Created link: client-1:eth1 <--> br-node-3:eth4 
28INFO[0046] Adding containerlab host entries to /etc/hosts file 
29INFO[0046] Adding ssh config for containerlab nodes     
30INFO[0046] 🎉 New containerlab version 0.57.0 is available! Release notes: https://containerlab.dev/rn/0.57/
31Run 'containerlab version upgrade' to upgrade or go check other installation options at https://containerlab.dev/install/ 
32+---+-------------------------------------+--------------+---------------+-------------+---------+-----------------+--------------+
33| # |                Name                 | Container ID |     Image     |    Kind     |  State  |  IPv4 Address   | IPv6 Address |
34+---+-------------------------------------+--------------+---------------+-------------+---------+-----------------+--------------+
35| 1 | clab-spine-leaf-borderleaf-client-1 | b88f71f8461f | ubuntu:latest | linux       | running | 192.168.20.8/24 | N/A          |
36| 2 | clab-spine-leaf-borderleaf-client-2 | 030de9ac2c1a | ubuntu:latest | linux       | running | 192.168.20.7/24 | N/A          |
37| 3 | clab-spine-leaf-borderleaf-node-1   | 5aaa12968276 | ceos:4.32.2F  | arista_ceos | running | 192.168.20.2/24 | N/A          |
38| 4 | clab-spine-leaf-borderleaf-node-2   | 0ef9bfc0c093 | ceos:4.32.2F  | arista_ceos | running | 192.168.20.3/24 | N/A          |
39| 5 | clab-spine-leaf-borderleaf-node-3   | 85d18564c3fc | ceos:4.32.2F  | arista_ceos | running | 192.168.20.4/24 | N/A          |
40| 6 | clab-spine-leaf-borderleaf-node-4   | 11a1ecccb2aa | ceos:4.32.2F  | arista_ceos | running | 192.168.20.5/24 | N/A          |
41| 7 | clab-spine-leaf-borderleaf-node-5   | 34ecdecd10db | ceos:4.32.2F  | arista_ceos | running | 192.168.20.6/24 | N/A          |
42+---+-------------------------------------+--------------+---------------+-------------+---------+-----------------+--------------+

containerlab-graph-w-clients

The benefit of adding the generic containers to my existing topology is that I can view the connection diagram using containerlab graph.

As soon as they were up and running I exec'd into each of them and configured a static IP address on their eth1 interfaces, adding a route on client-1 pointing to client-2's subnet via the gateway reachable on eth1, and vice versa on client-2.
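Getting a shell in the clients is just a docker exec against the container names Containerlab created (names taken from the table above):

docker exec -it clab-spine-leaf-borderleaf-client-1 bash
docker exec -it clab-spine-leaf-borderleaf-client-2 bash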

 1root@client-1:/# ip addr add 10.71.0.11/24 dev eth1
 2root@client-1:/# ip route add 10.70.0.0/24 via 10.71.0.1
 3root@client-1:/# ping 10.70.0.12
 4PING 10.70.0.12 (10.70.0.12) 56(84) bytes of data.
 564 bytes from 10.70.0.12: icmp_seq=1 ttl=63 time=16.3 ms
 664 bytes from 10.70.0.12: icmp_seq=2 ttl=62 time=3.65 ms
 7^C
 8--- 10.70.0.12 ping statistics ---
 92 packets transmitted, 2 received, 0% packet loss, time 1001ms
10rtt min/avg/max/mdev = 3.653/9.955/16.257/6.302 ms
11root@client-1:/# 
1root@client-2:/# ip addr add 10.70.0.12/24 dev eth1
2root@client-2:/# ip route add 10.71.0.0/24 via 10.70.0.1
3root@client-2:/# ping 10.71.0.1
4PING 10.71.0.1 (10.71.0.1) 56(84) bytes of data.
564 bytes from 10.71.0.1: icmp_seq=1 ttl=64 time=3.67 ms
664 bytes from 10.71.0.1: icmp_seq=2 ttl=64 time=0.965 ms

Then I could ping from client-1 to client-2 and vice versa.

Client-1 is connected directly to Leaf-1 Ethernet3 via bridge br-node-3, and client-2 is connected directly to Leaf-2 Ethernet3 via bridge br-node-4. Client-1 is configured with a static IP of 10.71.0.11/24 and client-2 with 10.70.0.12/24. For these two clients to reach each other, traffic has to go over VXLAN, where both VLANs are encapsulated. A quick ping test from each client shows that this works:

 1## client 1 pinging client 2
 2root@client-1:/# ip addr
 3567: eth1@if568: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default 
 4    link/ether aa:c1:ab:18:8e:bb brd ff:ff:ff:ff:ff:ff link-netnsid 0
 5    inet 10.71.0.11/24 scope global eth1
 6       valid_lft forever preferred_lft forever
 7569: eth0@if570: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
 8    link/ether 02:42:ac:14:14:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
 9    inet 172.20.20.2/24 brd 172.20.20.255 scope global eth0
10       valid_lft forever preferred_lft forever
11root@client-1:/# ping 10.70.0.12
12PING 10.70.0.12 (10.70.0.12) 56(84) bytes of data.
1364 bytes from 10.70.0.12: icmp_seq=1 ttl=62 time=2.96 ms
1464 bytes from 10.70.0.12: icmp_seq=2 ttl=62 time=2.66 ms
1564 bytes from 10.70.0.12: icmp_seq=3 ttl=62 time=2.61 ms
1664 bytes from 10.70.0.12: icmp_seq=4 ttl=62 time=3.05 ms
1764 bytes from 10.70.0.12: icmp_seq=5 ttl=62 time=3.08 ms
1864 bytes from 10.70.0.12: icmp_seq=6 ttl=62 time=3.03 ms
19^C
20--- 10.70.0.12 ping statistics ---
216 packets transmitted, 6 received, 0% packet loss, time 5007ms
22rtt min/avg/max/mdev = 2.605/2.896/3.080/0.191 ms
23
24# client 2 pinging client-1
25
26root@client-2:~# ip add
27571: eth1@if572: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default 
28    link/ether aa:c1:ab:f7:d8:60 brd ff:ff:ff:ff:ff:ff link-netnsid 0
29    inet 10.70.0.12/24 scope global eth1
30       valid_lft forever preferred_lft forever
31573: eth0@if574: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
32    link/ether 02:42:ac:14:14:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
33    inet 172.20.20.3/24 brd 172.20.20.255 scope global eth0
34       valid_lft forever preferred_lft forever
35root@client-2:~# ping  10.71.0.11
36PING 10.71.0.11 (10.71.0.11) 56(84) bytes of data.
3764 bytes from 10.71.0.11: icmp_seq=3 ttl=62 time=2.81 ms
3864 bytes from 10.71.0.11: icmp_seq=4 ttl=62 time=2.49 ms
3964 bytes from 10.71.0.11: icmp_seq=8 ttl=62 time=2.97 ms
4064 bytes from 10.71.0.11: icmp_seq=9 ttl=62 time=2.56 ms
4164 bytes from 10.71.0.11: icmp_seq=10 ttl=62 time=3.14 ms
4264 bytes from 10.71.0.11: icmp_seq=11 ttl=62 time=2.82 ms
43^C
44--- 10.71.0.11 ping statistics ---
456 packets transmitted, 6 received, 0% packet loss, time 10013ms
46rtt min/avg/max/mdev = 2.418/2.897/4.224/0.472 ms
47root@client-2:~# 

And on both Leaf-1 and Leaf-2 I can see the correct ARP entries and the VXLAN address table:

 1## DC Leaf-1
 2dc1-leaf1(config)#show vxlan address-table 
 3          Vxlan Mac Address Table
 4----------------------------------------------------------------------
 5
 6VLAN  Mac Address     Type      Prt  VTEP             Moves   Last Move
 7----  -----------     ----      ---  ----             -----   ---------
 81070  aac1.abf7.d860  EVPN      Vx1  192.168.1.4      1       0:03:52 ago
 91300  001c.7356.0016  EVPN      Vx1  192.168.1.4      1       7:44:07 ago
101300  001c.73c4.4a1d  EVPN      Vx1  192.168.1.5      1       7:44:07 ago
11Total Remote Mac Addresses for this criterion: 3
12
13dc1-leaf1(config)#show arp vrf VRF11
14Address         Age (sec)  Hardware Addr   Interface
1510.70.0.12              -  aac1.abf7.d860  Vlan1070, Vxlan1
1610.71.0.11        0:04:00  aac1.ab18.8ebb  Vlan1071, Ethernet3
17dc1-leaf1(config)#
18
19## DC Leaf-2
20
21dc1-leaf2(config)#show vxlan address-table 
22          Vxlan Mac Address Table
23----------------------------------------------------------------------
24
25VLAN  Mac Address     Type      Prt  VTEP             Moves   Last Move
26----  -----------     ----      ---  ----             -----   ---------
271071  aac1.ab18.8ebb  EVPN      Vx1  192.168.1.3      1       0:04:36 ago
281300  001c.7357.10f4  EVPN      Vx1  192.168.1.3      1       7:44:50 ago
29Total Remote Mac Addresses for this criterion: 2
30dc1-leaf2(config)#
31dc1-leaf2(config)#show arp vrf VRF11 
32Address         Age (sec)  Hardware Addr   Interface
3310.70.0.12        0:04:36  aac1.abf7.d860  Vlan1070, Ethernet3
3410.71.0.11              -  aac1.ab18.8ebb  Vlan1071, Vxlan1
35dc1-leaf2(config)#

Outro

The combination of Arista CloudVision, Arista Validated Design and Containerlab made this post such a joy to do. Spinning up a rather complex topology using Containerlab suddenly became fun, and fast. If something breaks or fails, just destroy and deploy again; minutes later it is up and running again. Using AVD to define and create the configuration for the topologies also turns a complex task into something easy and understandable, while eliminating the chance of human error. Arista CloudVision is the cherry on top with all its features, the vast set of information readily available from the same UI/dashboard, and control mechanisms like Change Control.

This concludes this post.

Happy networking