UNetLab external networks with VLAN tags using Open vSwitch (OVS)

In this article, I'll show you how to set up external network connectivity with VLAN trunking on UNetLab (as of this writing, a.k.a. eve-ng) using Open vSwitch (OVS). One of the biggest advantages of using OVS instead of traditional Linux bridges (i.e. the default pnet networks on UNetLab) is the set of state-of-the-art switching features that OVS provides out of the box. Plus, with OVS I find it less of a hassle to get external networks working properly with VLAN tags on UNetLab.

Virtualization Environment

Before we dive in, I'd like to point out exactly what my virtualization environment looks like. Since I have a dedicated server for UNetLab, I've installed Ubuntu 14.04 with UNetLab on a bare-metal server to eliminate one layer of virtualization.

Heads up!
If you're not running UNetLab on a bare-metal server, the final OVS configuration will be slightly different. You'll have to configure one additional virtual patch cable between OVS and your hypervisor's vSwitch. Make sure this vSwitch is compatible with OVS, or use OVS as the hypervisor's vSwitch as well.

Topology

The topology that I'll use to illustrate external network connectivity with VLAN tags is shown in Figure 1. As you can see, my bare-metal server running UNetLab has 2 Ethernet interfaces. I decided to use eth4 as the management interface and eth3 as the uplink external interface. So, eth3 will be set as a port in an OVS bridge ovsbr0, which is created manually on the UNL host (Ubuntu). As a result, UNetLab nodes (routers/switches) will ultimately be able to reach external networks through this physical interface. In my case, as represented in Figure 1, the vMX should be able to use VLAN tags to reach the external physical router R1(DM4100).

However, the key question is: how do you connect UNL virtual nodes to ovsbr0? Fortunately, UNL already supports ovs networks within your lab topology. So, all you have to do is drop an ovs network into your lab, which UNL creates under the hood as an OVS bridge. After that, you just have to connect this OVS bridge, which will be named vnet0_<ID>, to the other OVS bridge ovsbr0 (the one created manually). To connect these two OVS bridges, I'll use OVS patch ports; basically, an OVS patch port acts as a virtual patch cable.
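
Just to make the patch-port idea concrete before wiring up the real bridges below, here is a minimal, generic sketch; the bridge names br_left and br_right are hypothetical and only illustrate the pattern that will be applied to ovsbr0 and vnet0_1 later on:

# Hypothetical bridges br_left and br_right, joined by a patch-port pair
ovs-vsctl add-port br_left patch_to_right
ovs-vsctl set interface patch_to_right type=patch options:peer=patch_to_left
ovs-vsctl add-port br_right patch_to_left
ovs-vsctl set interface patch_to_left type=patch options:peer=patch_to_right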

Another small detail in this topology is how to figure out the name of the OVS bridge that UNL automatically creates. It turns out that the bridge name is derived from the network ID and has the format vnet0_<ID>, where the ID starts at 1 and is incremented as you add networks. In other words, if the ovs network you created was the first one in this topology, it'll be named vnet0_1. You can always consult the network button in the GUI to figure out which network ID is being used by the ovs type network.

UNetLab vnet0_IDs
For more details about figuring out which ID is associated with which vnet0, I recommend reading this previous post.
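
If you'd rather confirm this from the CLI instead of the GUI, listing the bridges on the UNL host shows every OVS bridge that exists, including the vnet0_<ID> bridges UNL creates for ovs networks:

# List all OVS bridges on the UNL host (UNL ovs networks appear as vnet0_<ID>)
ovs-vsctl list-br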

Figure 1

OVS Configuration

The configuration presented in this section is visually represented in Figure 1. Initially, there are no OVS bridges created on the UNL host:

root@unl:~# ovs-vsctl show
ecd8a866-1860-4ce6-a01d-8d4978beb2ab
    ovs_version: "2.0.2"
root@unl:~#

Configuring ovsbr0

The following commands create ovsbr0 and set eth3 as a port on this OVS bridge. In addition, the patch port patch_ovsbr0 is created, to be connected later to the OVS bridge vnet0_1:

ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 eth3
ovs-vsctl add-port ovsbr0 patch_ovsbr0
ovs-vsctl set interface patch_ovsbr0 type=patch
ovs-vsctl set interface patch_ovsbr0 options:peer=patch_vnet0_1
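
One assumption behind these commands is that eth3 is administratively up and has no IP address configured directly on it, since it now acts as a pure Layer 2 uplink of the bridge. If the interface isn't up yet, bringing it up is enough:

# Bring the physical uplink up; it needs no IP address of its own
ip link set eth3 up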

VLAN Tags
By default, an OVS port works as a trunk, so all VLAN tags are allowed.
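
If you do want to restrict which VLANs are carried, OVS lets you set an explicit trunk list on a port. This is optional and not required for this lab; for example, to carry only VLANs 100 and 101 on the uplink:

# Optional: limit the uplink port to VLANs 100 and 101
ovs-vsctl set port eth3 trunks=100,101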

Patch Port
The remote patch port's interface name must match the options:peer=patch_vnet0_1 value configured above.

Configuring vnet0_1

When configuring the vnet0_1 OVS bridge, all you need to do is configure the patch port. The port that connects to your UNL node, named something like vunl0_<ID1>_<ID2>, is created automatically as soon as you plug your virtual router/switch into the ovs network (as illustrated in Figure 1). So, before you execute these commands, make sure you've already connected your UNL node to an ovs type network using the GUI, as illustrated in Figure 2.

ovs-vsctl add-port vnet0_1 patch_vnet0_1
ovs-vsctl set interface patch_vnet0_1 type=patch
ovs-vsctl set interface patch_vnet0_1 options:peer=patch_ovsbr0

Figure 2

Troubleshooting
The vunl0_1_2 interface is connected to vnet0_1, as shown in Figure 1. You can always use the command ovs-vsctl show to verify these details.
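
If you want a shorter check than the full ovs-vsctl show output, you can also list just the ports attached to each bridge:

# List the ports attached to each OVS bridge
ovs-vsctl list-ports ovsbr0
ovs-vsctl list-ports vnet0_1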

Verifying OVS Configuration

If everything was set up properly so far, the output of ovs-vsctl show should look similar to this:

root@unl:~# ovs-vsctl show
ecd8a866-1860-4ce6-a01d-8d4978beb2ab
    Bridge "ovsbr0"
        Port "eth3"
            Interface "eth3"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "patch_ovsbr0"
            Interface "patch_ovsbr0"
                type: patch
                options: {peer="patch_vnet0_1"}
    Bridge "vnet0_1"
        Port "vunl0_1_2"
            Interface "vunl0_1_2"
        Port "patch_vnet0_1"
            Interface "patch_vnet0_1"
                type: patch
                options: {peer="patch_ovsbr0"}
        Port "vnet0_1"
            Interface "vnet0_1"
                type: internal
    ovs_version: "2.0.2"
root@unl:~#
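
Once traffic starts flowing, another quick sanity check is the MAC learning table of the bridge, which should show the vMX and R1 MAC addresses being learned on the expected ports:

# Show the MAC learning (FDB) table of ovsbr0
ovs-appctl fdb/show ovsbr0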

Verifying External Reachability

The ultimate goal of this lab was to provide external connectivity with VLAN tags between the Juniper vMX node and the external physical router R1(DM4100). So, to verify that external reachability is working as expected, I'll configure two Layer 3 VLAN-tagged interfaces on each device:

vMX Configuration

[edit]
root@vMX# show interfaces
ge-0/0/0 {
    vlan-tagging;
    unit 100 {
        vlan-id 100;
        family inet {
            address 100.0.0.1/24;
        }
    }
    unit 101 {
        vlan-id 101;
        family inet {
            address 101.0.0.1/24;
        }
    }
}
[edit]
root@vMX#

R1(DM4100) Configuration

hostname R1(DM4100)
!
interface vlan 100
ip address 100.0.0.2/24
set-member tagged ethernet 1/2
!
interface vlan 101
ip address 101.0.0.2/24
set-member tagged ethernet 1/2
!

Reachability test on the VLAN 100-101 trunk

As you can see, it's possible to ping across these tagged interfaces:

root@vMX> ping 100.0.0.2
PING 100.0.0.2 (100.0.0.2): 56 data bytes
64 bytes from 100.0.0.2: icmp_seq=0 ttl=64 time=35.810 ms
64 bytes from 100.0.0.2: icmp_seq=1 ttl=64 time=2.272 ms
64 bytes from 100.0.0.2: icmp_seq=2 ttl=64 time=2.280 ms
64 bytes from 100.0.0.2: icmp_seq=3 ttl=64 time=3.227 ms
64 bytes from 100.0.0.2: icmp_seq=4 ttl=64 time=2.103 ms
^C
--- 100.0.0.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 2.103/9.138/35.810/13.342 ms
root@vMX> show arp
MAC Address Address Name Interface Flags
00:04:df:6d:bc:a4 100.0.0.2 100.0.0.2 ge-0/0/0.100 none
root@vMX> ping 101.0.0.2
PING 101.0.0.2 (101.0.0.2): 56 data bytes
64 bytes from 101.0.0.2: icmp_seq=0 ttl=64 time=6.610 ms
64 bytes from 101.0.0.2: icmp_seq=1 ttl=64 time=2.170 ms
64 bytes from 101.0.0.2: icmp_seq=2 ttl=64 time=3.211 ms
^C
--- 101.0.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 2.170/3.997/6.610/1.896 ms
root@vMX> show arp
MAC Address Address Name Interface Flags
00:04:df:6d:bc:a4 100.0.0.2 100.0.0.2 ge-0/0/0.100 none
00:04:df:6d:bc:a4 101.0.0.2 101.0.0.2 ge-0/0/0.101 none
Total entries: 2
root@vMX>
R1(DM4100)(config-if-eth-1/2)#show ip hardware host-table
IP address MAC VLAN Interface Static Hit
-------------------------------------- ----------------- ---- --------- ------ ---
100.0.0.1 00:05:86:71:49:00 100 Eth 1/2 N Y
100.0.0.2 00:04:DF:6D:BC:A4 100 CPU N Y
101.0.0.1 00:05:86:71:49:00 101 Eth 1/2 N Y
101.0.0.2 00:04:DF:6D:BC:A4 101 CPU N Y
Total for this criterion: 4
Total: 4 Free ipv4: 8188
R1(DM4100)(config-if-eth-1/2)#

You can use tcpdump to trace and troubleshoot traffic on these interfaces, for instance on eth3 with tcpdump -i eth3 -v:

ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 101.0.0.2 tell 101.0.0.1, length 28
ARP, Ethernet (len 6), IPv4 (len 4), Reply 101.0.0.2 is-at 00:04:df:6d:bc:a4 (oui Unknown), length 46
IP (tos 0x0, ttl 64, id 82, offset 0, flags [none], proto ICMP (1), length 84)
    101.0.0.1 > 101.0.0.2: ICMP echo request, id 45582, seq 0, length 64
IP (tos 0x0, ttl 64, id 43896, offset 0, flags [none], proto ICMP (1), length 84)
    101.0.0.2 > 101.0.0.1: ICMP echo reply, id 45582, seq 0, length 64
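
Note that this capture doesn't print the 802.1Q headers. If you also want to see the VLAN tags on the wire, adding -e makes tcpdump print the link-level header, which includes the tag for tagged frames:

# Print link-level headers so the 802.1Q VLAN tags are visible
tcpdump -e -i eth3 -v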

Final Thoughts

To wrap it up, OVS works smoothly with UNetLab to enable VLAN trunking to external networks. Since VLANs are extensively used for segregating traffic at Layer 2, you might find this information useful if you ever need to set up UNL with external physical devices.