In this article, I’ll show you how to set up external network connectivity with VLAN trunking on UNetLab (as of this writing, also known as eve-ng) using Open vSwitch (OVS). One of the biggest advantages of using OVS instead of traditional Linux bridges (i.e. the default pnet networks on UNetLab) is the set of state-of-the-art switching features that OVS provides out of the box. Plus, with OVS I find it less of a hassle to get external networks working properly with VLAN tags on UNetLab.
Before we dive in, I’d like to point out what my virtualization environment looks like. Since I have a dedicated server for UNetLab, I’ve installed Ubuntu 14.04 with UNetLab on a bare-metal server in order to eliminate one layer of virtualization.
If you’re not running UNetLab on a bare-metal server, the final OVS configuration will be slightly different: you’ll have to configure one additional virtual patch cable between OVS and your hypervisor’s vSwitch. Make sure that vSwitch is compatible with OVS, or use OVS as the hypervisor’s vSwitch.
The topology that I’ll use to illustrate external network connectivity with VLAN tags is shown in Figure 1. As you can see, my bare-metal server running UNetLab has 2 Ethernet interfaces. I decided to use eth4 as the management interface and eth3 as the uplink external interface. So, eth3 will be set as an interface in an OVS bridge named ovsbr0, which is created manually on the UNL host (Ubuntu). As a result, UNetLab nodes (routers/switches) will ultimately be able to reach external networks through this physical interface. In my case, as represented in Figure 1, the vMX should be able to use VLAN tags to reach the external physical router.
However, the key point is: how do you connect UNL virtual nodes to ovsbr0? Fortunately, UNL already supports ovs networks within your lab topology. So, all you have to do is drop an ovs network into your lab, and, under the hood, UNL creates this network as an OVS bridge. After that, you just have to connect this OVS bridge, which will be named vnet0_<ID>, to the other OVS bridge, ovsbr0 (the one created manually). To connect these two OVS bridges, I’ll use OVS patch ports. Basically, an OVS patch port acts as a virtual patch cable.
Another small detail in this topology: how do you figure out the name of the OVS bridge that UNL automatically creates? It turns out that the bridge name is derived from the network ID, in the format vnet0_<ID>, where the ID starts at 1 and is incremented as you add networks. In other words, if the ovs network you created was the first one in the topology, it will be named vnet0_1. You can always consult the network button in the GUI to figure out which network ID is used by the ovs-type network. For more details about figuring out which ID is associated with which vnet0 bridge, I recommend reading this previous post.
The configuration presented in this section is visually represented in Figure 1. Initially, there are no OVS bridges created on the UNL host:
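You can confirm this with ovs-vsctl show; with no bridges configured yet, the output contains only the database UUID and the OVS version (shown here as placeholders):

```
# ovs-vsctl show
<database-uuid>
    ovs_version: "<version>"
```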
The following commands create ovsbr0 and set eth3 as an interface on this OVS bridge. In addition, the patch port patch_ovsbr0 is created; it will be connected to the OVS bridge vnet0_1.
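A minimal sketch of these commands, using the patch port names referenced in this article (the peer port, patch_vnet0_1, is created on the other bridge in the next step):

```
# Create the bridge and attach the physical uplink interface
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 eth3

# Create the patch port that will connect ovsbr0 to vnet0_1
ovs-vsctl add-port ovsbr0 patch_ovsbr0 \
    -- set interface patch_ovsbr0 type=patch options:peer=patch_vnet0_1
```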
By default, an OVS port works as a trunk, so all VLAN tags will be allowed. The remote patch port interface name must match the options:peer=patch_vnet0_1 configured above.
On the vnet0_1 OVS bridge, all you need to do is configure the patch port, because the port that connects to your UNL node, which is named something like vunl0_<ID1>_<ID2>, is automatically created as soon as you plug your virtual router/switch into the ovs network (as illustrated in Figure 1). So, before you execute these commands, make sure you’ve already connected your UNL node to an ovs-type network by using the GUI, as illustrated in Figure 2.
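The corresponding sketch on the vnet0_1 side is a single patch port whose peer option mirrors the one set on ovsbr0:

```
# Create the patch port on the UNL-created bridge and point it back at ovsbr0
ovs-vsctl add-port vnet0_1 patch_vnet0_1 \
    -- set interface patch_vnet0_1 type=patch options:peer=patch_ovsbr0
```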
In my case, the vunl0_1_2 interface is connected to vnet0_1, as shown in Figure 1. You can always use the command ovs-vsctl show to verify these details.
If everything was set up properly thus far, the output of ovs-vsctl show should look similar to the sketch below (the database UUID and OVS version are shown as placeholders):
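```
# ovs-vsctl show
<database-uuid>
    Bridge "ovsbr0"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "eth3"
            Interface "eth3"
        Port "patch_ovsbr0"
            Interface "patch_ovsbr0"
                type: patch
                options: {peer="patch_vnet0_1"}
    Bridge "vnet0_1"
        Port "vnet0_1"
            Interface "vnet0_1"
                type: internal
        Port "vunl0_1_2"
            Interface "vunl0_1_2"
        Port "patch_vnet0_1"
            Interface "patch_vnet0_1"
                type: patch
                options: {peer="patch_ovsbr0"}
    ovs_version: "<version>"
```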
The ultimate goal of this lab was to provide external connectivity with VLAN tags between the Juniper vMX node and the external physical router R1 (DM4100). So, in order to verify that external reachability works as expected, I’ll configure two Layer 3 VLAN-tagged interfaces.
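On the vMX (Junos) side, a sketch of such a configuration might look like this, assuming a hypothetical ge-0/0/0 uplink, VLAN IDs 10 and 20, and illustrative addressing; R1 would be configured with matching VLAN IDs and subnets:

```
# Junos (vMX): enable 802.1Q tagging and create two tagged L3 units
set interfaces ge-0/0/0 vlan-tagging
set interfaces ge-0/0/0 unit 10 vlan-id 10
set interfaces ge-0/0/0 unit 10 family inet address 192.168.10.1/24
set interfaces ge-0/0/0 unit 20 vlan-id 20
set interfaces ge-0/0/0 unit 20 family inet address 192.168.20.1/24
```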
As you can see, it’s possible to ping across these tagged interfaces.
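For instance, from the vMX, assuming R1 answers at the .2 address in each of the subnets sketched above:

```
ping 192.168.10.2 count 3
ping 192.168.20.2 count 3
```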
You can use tcpdump to trace and troubleshoot traffic on these interfaces, for instance on eth3 with tcpdump -i eth3 -v.
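To see the 802.1Q headers explicitly, a useful variation is adding the -e flag (print link-level headers) together with a vlan capture filter:

```
# Show Ethernet headers so the 802.1Q tag and VLAN ID are visible
tcpdump -i eth3 -e -vv vlan
```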
To wrap up, OVS works smoothly with UNetLab to enable VLAN trunking with external networks. Since VLANs are extensively used to segregate traffic at Layer 2, you may find this information useful if you ever need to set up UNL with external physical devices.