Overview
We’ll go through the basics of configuring Juniper switches with VXLAN as the data plane and EVPN as the control plane. We’ll also look at configuring active/active multihoming into the environment.
This article focuses on the configuration; it won’t explain how VXLAN or EVPN work.
Topology
We’ll be using this simple topology.
Note: Many documents recommend a spine/leaf topology when using VXLAN. That’s certainly a good design, but it’s not a requirement.
This assumes four EX4650 switches, divided into two sites. We don’t want to stretch VLANs across sites, and we don’t want spanning-tree, so we’re configuring routed links in the underlay, and VXLAN/EVPN as the overlay.
VXLAN/EVPN Configuration
Underlay Configuration
Loopback Interface
We’re only going to show the configuration of Site-1-R1 here, as the others are nearly identical.
We’ll start by configuring the loopback interface. This is used as the VTEP source, as well as for BGP peering later on.
system {
host-name Site-1-R1;
}
interfaces {
lo0 {
unit 0 {
family inet {
address 10.254.1.1/32;
}
}
}
}
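For reference, the other three switches follow the same pattern, each with its own loopback address. Assuming the 10.254.&lt;site&gt;.&lt;switch&gt; numbering carries through (these addresses match the BGP neighbours we’ll configure later), the set-command form would look something like this:
set system host-name Site-1-R2
set interfaces lo0 unit 0 family inet address 10.254.1.2/32

set system host-name Site-2-R1
set interfaces lo0 unit 0 family inet address 10.254.2.1/32

set system host-name Site-2-R2
set interfaces lo0 unit 0 family inet address 10.254.2.2/32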
Underlay Interfaces
The underlay interfaces are used to route between the layer-3 switches. Here, we’re applying a /31 address to each one, as well as enabling jumbo frames (remember, VXLAN encapsulation adds around 50 bytes of overhead to every frame).
interfaces {
xe-0/0/0 {
description "Underlay > To Site-1-R2";
mtu 9192;
unit 0 {
family inet {
address 172.16.0.4/31;
}
}
}
xe-0/0/1 {
description "Underlay > To Site-2-R1";
mtu 9192;
unit 0 {
family inet {
address 172.16.0.0/31;
}
}
}
xe-0/0/2 {
description "Underlay > To Site-2-R2";
mtu 9192;
unit 0 {
family inet {
address 172.16.0.2/31;
}
}
}
}
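The far end of each link takes the other address in the /31 pair. For example, Site-1-R2’s side of the first link would look something like this (the interface name is an assumption; the address follows from the /31 above):
set interfaces xe-0/0/0 description "Underlay > To Site-1-R1"
set interfaces xe-0/0/0 mtu 9192
set interfaces xe-0/0/0 unit 0 family inet address 172.16.0.5/31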
Underlay Routing (OSPF)
We then need to make this routable, so we’ll run OSPF as the underlay IGP. I like changing these links to point-to-point interfaces, which bypasses the DR/BDR election and brings adjacencies up faster.
Notice that we’re also advertising the loopback interface (as a passive interface), to make it reachable from the other switches, and raising the reference bandwidth as a matter of best practice.
protocols {
ospf {
area 0.0.0.0 {
interface lo0.0 {
passive;
}
interface xe-0/0/0.0 {
interface-type p2p;
}
interface xe-0/0/1.0 {
interface-type p2p;
}
interface xe-0/0/2.0 {
interface-type p2p;
}
}
reference-bandwidth 100g;
}
}
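With the equivalent configuration on all four switches, the underlay is easy to verify. We’re expecting a full OSPF adjacency on each routed link, and a route to every remote loopback (Site-2-R2’s loopback is used as an example here):
show ospf neighbor
show ospf interface brief
show route 10.254.2.2/32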
Overlay Configuration
The overlay configuration is where we get layer-2 segments (VXLAN VNIs) to ‘stretch’ across the routed links. This is why VXLAN is described as ‘MAC-in-IP’ tunnelling.
BGP
Junos doesn’t have ECMP enabled in the forwarding table by default, so we first create a routing policy to enable it. Documentation often shows this as ‘per-packet’, but ‘per-flow’ keeps each flow pinned to a single path, which prevents packets arriving out of order.
policy-options {
policy-statement ECMP {
then {
load-balance per-flow;
}
}
}
Next, we configure general routing options. This includes a unique router ID, an autonomous system, and the ECMP policy we just configured. Note, you can use a private ASN, or you can use a registered ASN. Either is fine.
routing-options {
router-id 10.254.1.1;
autonomous-system 65000;
forwarding-table {
export ECMP;
}
}
Now we can configure BGP itself. We use a BGP group here, which is like a template (similar to a peer-group in IOS). This is used when we want to apply the same settings to many neighbours. The ‘internal’ type means we’re configuring iBGP.
You’ll notice that we use the loopback interface as the source of our BGP packets, as well as the destination for our peers.
The ‘family evpn’ section tells BGP that we want to carry EVPN NLRI. The ‘signaling’ part means that EVPN is used as a control plane protocol.
protocols {
bgp {
group EVPN-Overlay {
type internal;
local-address 10.254.1.1;
family evpn {
signaling;
}
local-as 65000;
multipath;
neighbor 10.254.1.2 {
peer-as 65000;
}
neighbor 10.254.2.1 {
peer-as 65000;
}
neighbor 10.254.2.2 {
peer-as 65000;
}
vpn-apply-export;
}
graceful-restart;
}
}
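Each switch gets the same BGP group, with its own local-address and the other three loopbacks as neighbours. Once that’s committed everywhere, we can check the overlay peerings. We’re looking for all three sessions in the Established state, carrying the EVPN address family:
show bgp summary
show bgp neighbor 10.254.2.1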
EVPN
Now to get some information into EVPN. For this, we first configure a VLAN. Here we have one that we’ll use for network management. It has VLAN ID 100, and VNI 90100. Remember that these numbers are arbitrary, so we can choose any numbers we want.
I like them to align, which makes troubleshooting easier.
vlans {
Management {
description "Network Management";
vlan-id 100;
vxlan {
vni 90100;
}
}
}
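For example, if we later added a (hypothetical) server VLAN, we’d keep the same alignment: VLAN 200 maps to VNI 90200. The name and numbers here are made up purely to show the pattern:
set vlans Servers description "Server Network"
set vlans Servers vlan-id 200
set vlans Servers vxlan vni 90200
If we added this, we’d also add a matching entry under ‘vni-options’ in the EVPN configuration below.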
Now to configure the EVPN protocol. The first step is to choose the data plane encapsulation, which is VXLAN for us. Keep in mind, EVPN can also be used with other data planes, such as MPLS.
For this simple deployment, we’re using ingress replication. This is where BUM traffic is converted to encapsulated unicast traffic, and sent to each VTEP.
Ingress replication is the alternative to running multicast (PIM) in the underlay to deliver BUM traffic. It works well in a small network, but if we had 20 or more EVPN switches, we would start to see scalability issues.
Under ‘vni-options’ we have our VXLAN VNIs that we want to manage with EVPN. We would add more as we add more VLANs/VNIs. These also have a route-target configured. We’ll see more on that soon.
Optionally, we can limit this to a list of VNIs. We’re keeping it simple, and allowing all VNIs. You’ll also see ‘default-gateway do-not-advertise’ in the configuration; since every switch will be configured with the same anycast gateway (more on that soon), there’s no need to advertise the gateway MACs in EVPN.
protocols {
evpn {
encapsulation vxlan;
default-gateway do-not-advertise;
multicast-mode ingress-replication;
vni-options {
vni 90100 {
vrf-target target:65000:200;
}
}
extended-vni-list all;
}
}
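As a side note, if we did add that hypothetical Servers VLAN from earlier, we’d add a matching entry under ‘vni-options’. The route target here is another made-up value, just following the same style:
set protocols evpn vni-options vni 90200 vrf-target target:65000:90200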
Now we can set up the VXLAN options. First up, we need to choose a source interface for our VTEP. Of course, this will be our loopback interface.
We also apply a route-distinguisher to the global routing table. We will create other routing tables soon, to segregate traffic.
The route target tells the switch which received routes to import into the global (aka ‘master’) routing table. Here, I’ve allocated a tag of 100 to it; this number is arbitrary. The ‘auto’ option also lets Junos derive per-VNI route targets automatically.
switch-options {
vtep-source-interface lo0.0;
route-distinguisher 10.254.1.1:100;
vrf-target {
target:65000:100;
auto;
}
}
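Once this is committed on all four switches, we can check that the overlay has come up. The remote loopbacks should appear as remote VTEPs, and the EVPN database should start filling with MAC addresses as hosts are learned:
show ethernet-switching vxlan-tunnel-end-point source
show ethernet-switching vxlan-tunnel-end-point remote
show evpn database
show route table bgp.evpn.0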
Tenancies
We may want to separate our environment into multiple tenancies. We don’t have to be a service provider for this. We may simply want to separate our DMZ servers from our internal servers.
Let’s create an IRB interface to go along with our Management VLAN. The address family, description, and IP address are the same as normal.
‘proxy-macip-advertisement’ lets this switch advertise MAC+IP (EVPN type 2) routes on behalf of the hosts it has ARP entries for. This is an oversimplification, but you get the idea.
‘virtual-gateway-accept-data’ allows the switch to accept (and respond to) traffic, such as pings, that is sent to the virtual gateway address itself.
The really interesting part is the ‘virtual-gateway-address’. This is an anycast gateway, and the virtual address needs to be configured the same on each switch. That is, each switch has its own unique interface IP address, with a common virtual gateway address.
interfaces {
irb {
unit 100 {
proxy-macip-advertisement;
virtual-gateway-accept-data;
description "Management IP";
family inet {
address 10.16.100.2/24 {
virtual-gateway-address 10.16.100.1;
}
}
}
}
}
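To make the anycast gateway idea concrete, here’s a sketch of the matching IRB on Site-1-R2. The interface address (.3) is an assumption; the virtual gateway address is the part that has to match:
set interfaces irb unit 100 proxy-macip-advertisement
set interfaces irb unit 100 virtual-gateway-accept-data
set interfaces irb unit 100 description "Management IP"
set interfaces irb unit 100 family inet address 10.16.100.3/24 virtual-gateway-address 10.16.100.1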
As normal, we need to map the VLAN to the IRB interface. Nothing special here.
vlans {
Management {
l3-interface irb.100;
}
}
We then need to create routing instances (VRFs). These are for our tenants. Each will have its own separate routing table, which segregates traffic.
Notice that we assign layer-3 interfaces (our IRBs) to the VRF.
We also need to assign a route-distinguisher, so any prefixes in this VRF are globally unique (this allows IP space to overlap between tenants if we want it to).
And we need a route target, which controls which prefixes we import from (and export to) other switches for this tenant.
routing-instances {
Tenant-1 {
instance-type vrf;
interface irb.100;
route-distinguisher 10.254.1.1:200;
vrf-target target:65000:200;
}
}
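Once this is committed, the tenant has its own routing table, which we can inspect directly. Locally attached and remotely learned prefixes for Tenant-1 should appear here, rather than in the global table:
show route table Tenant-1.inet.0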
Multihoming
We may want to (optionally) enable multihoming. This is where we take an external device, which has no EVPN/VXLAN configuration, and connect it to more than one of our layer-3 switches.
This works in a way similar to MC-LAG, or Cisco’s vPC/VSS. In my opinion, this is much simpler (once you have EVPN working), as you don’t need to worry about layer-2 links between the L3 switches, ICL, ICCP, complicated ARP handling, and problems that come with dynamic routing protocols.
We configure ports on each L3 switch to be part of an ‘Ethernet Segment’. Each segment is identified with an Ethernet Segment Identifier (ESI), which is how EVPN knows these ports connect to the same external device. It also uses LACP as normal, to communicate with the connected device.
The connected device has no idea that EVPN and VXLAN are in use. It just sees a single switch with a single LACP system ID. This means that the connected device is configured as normal (just normal LAG or Etherchannel configuration).
ESI-LAG Template
OK, so this approach is optional, but as there is a bit of extra config per ESI-LAG interface, I like to wrap it all up into a config group (Juniper’s CLI rocks!).
Here, we’ve created a group that we will apply to interfaces later. Some of this is normal, for example the ‘unit 0’ section, and the LACP configuration.
The new part is the ‘esi’ section. There are really only two parts to this. Each ESI-LAG needs a unique identifier. Here, we’re telling Junos to automatically derive one, based on the LACP system ID (we’ll configure that soon).
The second part is ‘all active’. This just tells the switch that all links in the ESI-LAG should actively pass traffic.
groups {
ESI-LAG {
interfaces {
<*> {
esi {
auto-derive {
lacp;
}
all-active;
}
aggregated-ether-options {
lacp {
active;
}
}
unit 0 {
family ethernet-switching {
interface-mode trunk;
vlan {
members all;
}
}
}
}
}
}
}
LAG Interfaces
And finally we configure our interfaces. The physical interface is configured just as we would for any other LAG.
interfaces {
xe-0/0/10 {
ether-options {
802.3ad ae10;
}
}
}
On the ae interface, we apply the config group from earlier.
It’s important to set the LACP system ID. We need this to be the same on both switches (but different for each LAG). This tells the connected device that it’s connected to a single switch, rather than two.
interfaces {
ae10 {
apply-groups ESI-LAG;
aggregated-ether-options {
lacp {
system-id 01:00:00:00:00:10;
}
}
}
}
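The other switch at the same site needs a matching configuration: the same config group, the same ae number, and, critically, the same LACP system ID. A sketch for Site-1-R2, assuming the server also connects to its xe-0/0/10:
set interfaces xe-0/0/10 ether-options 802.3ad ae10
set interfaces ae10 apply-groups ESI-LAG
set interfaces ae10 aggregated-ether-options lacp system-id 01:00:00:00:00:10
‘show lacp interfaces ae10’ on both switches should then show the bundle collecting and distributing, and the connected device should see a single LACP partner.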
That’s it! Configuration is done!
Hope this helped. I’d love to hear your feedback, so please send me a message on Twitter @netwrkdirection.