EVPN Route Type 5 in a Juniper ERB Fabric
EVPN-VXLAN fabrics come in different flavors. Centrally-Routed Bridging (CRB) does all inter-VLAN routing on the spine. Edge-Routed Bridging (ERB) pushes that routing down to the leaf — every leaf is a default gateway for its locally attached hosts. ERB is the more scalable model, and EVPN Route Type 5 is what makes it work for inter-VRF and inter-subnet traffic.
This post walks through a production ERB config on Juniper QFX switches, focusing on how Type-5 IP prefix routes are advertised and consumed across the fabric.
What is EVPN Route Type 5?
EVPN defines several route types. The most common ones in a datacenter fabric:
- Type 2 — MAC/IP advertisement. Advertises host MAC and optionally IP. This is what gives you L2 stretch and ARP suppression.
- Type 3 — Inclusive multicast. Sets up BUM (broadcast, unknown unicast, multicast) flooding trees.
- Type 5 — IP prefix route. Advertises an entire IP prefix (like a /24 subnet) into EVPN, rather than individual host routes.
Type 5 is the EVPN equivalent of what you’d do with a static route or IGP redistribution in a traditional network. It lets a leaf say: “I own 10.0.100.0/24 — send traffic for that prefix to me via VXLAN.”
In an ERB fabric, Type 5 is critical because each leaf is an IP gateway for its local subnets. Without Type 5, other leaves wouldn’t know how to reach those subnets.
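You can see this on-box: in Junos, Type-5 routes land in the bgp.evpn.0 table with a "5:" route key. As a sketch, using the RD and subnet from the config later in this post (exact rendering varies by release):

    show route table bgp.evpn.0 match-prefix "5:*"

A locally originated Type-5 route for the management subnet should show up as something like 5:10.11.0.9:65001::0::10.0.100.0::24/248, i.e. the RD, an Ethernet tag of 0, and the advertised prefix all encoded into the route key.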
The big picture
Here’s how the pieces fit together in this fabric:
   ┌─────────────┐          ┌─────────────┐
   │  Spine 01   │          │  Spine 02   │
   │  (iBGP RR)  │          │  (iBGP RR)  │
   └──────┬──────┘          └──────┬──────┘
          │  \                  /  │
          │     \            /     │
          │        \      /        │
          │           \/           │
          │           /\           │
          │        /      \        │
          │     /            \     │
          │  /                  \  │
   ┌──────┴──────┐          ┌──────┴──────┐
   │   Leaf 01   │          │   Leaf 02   │
   │  (ERB GW)   │          │  (ERB GW)   │
   └──────┬──────┘          └──────┬──────┘
          │                        │
    ┌─────┴─────┐            ┌─────┴─────┐
    │  Servers  │            │  Servers  │
    │ VLAN 100  │            │ VLAN 100  │
    │ VLAN 101  │            │ VLAN 101  │
    └───────────┘            └───────────┘
Both leaves act as gateways for the same VLANs. They run iBGP with EVPN address family to the spines (acting as route reflectors). Each leaf advertises its directly connected subnets as Type-5 routes into EVPN. The spines reflect these to all other leaves.
The VRF: where Type 5 lives
All customer-facing subnets live in a VRF. Here’s the routing-instance config:
routing-instances {
    vrf-customer {
        instance-type vrf;
        protocols {
            evpn {
                irb-symmetric-routing {
                    vni 65001;
                }
                ip-prefix-routes {
                    advertise direct-nexthop;
                    encapsulation vxlan;
                    vni 65001;
                    export EXPORT_EVPN-TYPE5-VRF-CUSTOMER;
                }
            }
        }
        interface irb.100;
        interface irb.101;
        interface irb.102;
        interface irb.103;
        interface irb.104;
        interface irb.105;
        interface irb.106;
        interface irb.107;
        interface irb.108;
        interface irb.109;
        interface irb.110;
        interface irb.111;
        interface irb.112;
        interface irb.113;
        route-distinguisher 10.11.0.9:65001;
        vrf-target target:65001:101;
        vrf-table-label;
    }
}
Let’s break this down.
instance-type vrf
This creates a Layer 3 VRF — a separate routing table. All the IRB interfaces (irb.100 through irb.113) are placed inside it. Traffic between these subnets is routed locally on the leaf.
irb-symmetric-routing
This is the key to ERB. With symmetric routing, traffic between two hosts on different subnets is routed twice (once at the ingress leaf, once at the egress leaf) and crosses the VXLAN fabric on the same L3 VNI:
- Ingress leaf routes the packet from the source subnet into the L3 VNI (65001)
- Packet traverses the VXLAN fabric encapsulated with VNI 65001
- Egress leaf receives it on VNI 65001, routes it into the destination subnet
The word “symmetric” means both the ingress and egress leaf perform a routing lookup. This is different from asymmetric routing where only the ingress leaf routes and the egress leaf just bridges — asymmetric requires all VLANs to exist on all leaves, which doesn’t scale.
The VNI specified here (65001) is the L3 VNI — it’s used exclusively for routed inter-subnet traffic within this VRF. It’s separate from the per-VLAN L2 VNIs.
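On the Junos releases I've worked with, you can inspect a VRF's Type-5 state directly; treat this as a hedged sketch, since the command's availability and output depend on platform and release:

    show evpn ip-prefix-database l3-context vrf-customer

It lists the prefixes this leaf is advertising and the ones it has received from remote leaves, along with the L3 VNI (65001 here) used to forward toward them.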
ip-prefix-routes
This is the Type-5 configuration:
ip-prefix-routes {
    advertise direct-nexthop;
    encapsulation vxlan;
    vni 65001;
    export EXPORT_EVPN-TYPE5-VRF-CUSTOMER;
}
- advertise direct-nexthop — the leaf advertises itself as the next hop for the prefix, using its VTEP IP. Remote leaves can send VXLAN-encapsulated traffic directly to this leaf.
- encapsulation vxlan — Type-5 routes use VXLAN encapsulation (as opposed to MPLS in WAN EVPN).
- vni 65001 — the L3 VNI used for encapsulating routed traffic. Must match the irb-symmetric-routing VNI.
- export — a policy that controls which prefixes get advertised as Type-5 routes.
Route distinguisher and route target
route-distinguisher 10.11.0.9:65001;
vrf-target target:65001:101;
- Route distinguisher (RD) — makes routes from this VRF unique in BGP. Each leaf uses its own loopback + a VRF identifier, so the same prefix from two different leaves appears as two distinct routes in BGP.
- VRF target (RT) — the import/export community that ties all leaves' VRF instances together. Every leaf in this VRF imports and exports routes with target:65001:101, ensuring they share the same routing table.
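To make the RD/RT split concrete: a second leaf carries the same subnets under its own RD but the identical RT. A sketch, assuming Leaf 02 uses loopback 10.11.0.10 (that address is an assumption; it isn't shown anywhere in this post):

    routing-instances {
        vrf-customer {
            /* unique per leaf: loopback + VRF identifier */
            route-distinguisher 10.11.0.10:65001;
            /* identical on every leaf in the VRF */
            vrf-target target:65001:101;
        }
    }

Because the RDs differ, BGP keeps both leaves' copies of the same /24 as distinct routes; because the RT matches, every leaf imports both.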
The export policy: what gets advertised
The export policy controls which routes from the VRF become Type-5 advertisements:
policy-options {
    policy-statement EXPORT_EVPN-TYPE5-VRF-CUSTOMER {
        term export-direct {
            from protocol direct;
            then accept;
        }
        term default-reject {
            then reject;
        }
    }
}
This is deliberately simple: advertise all directly connected subnets (the IRB interfaces), reject everything else. Each leaf advertises only the subnets it’s actually a gateway for.
You could extend this to include static routes, aggregate routes, or routes learned from external BGP peers. But for a pure ERB fabric, direct routes are usually all you need.
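For example, if a leaf later fronts a firewall or another router reached via static routes, a sketch of the extended policy would add one term before the final reject (the export-static term name is my own):

    policy-statement EXPORT_EVPN-TYPE5-VRF-CUSTOMER {
        term export-direct {
            from protocol direct;
            then accept;
        }
        term export-static {
            /* assumption: static routes toward a device behind this leaf */
            from protocol static;
            then accept;
        }
        term default-reject {
            then reject;
        }
    }

Term order matters here: Junos evaluates terms top to bottom, so anything not explicitly accepted falls through to default-reject.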
The VLAN-to-VNI mapping
Each VLAN gets its own L2 VNI for bridged traffic:
vlans {
    VLAN0100 {
        description ESX-MANAGEMENT;
        vlan-id 100;
        l3-interface irb.100;
        vxlan {
            vni 101100;
        }
    }
    VLAN0101 {
        description ESX-VMOTION;
        vlan-id 101;
        l3-interface irb.101;
        vxlan {
            vni 101101;
        }
    }
    /* ... more VLANs follow the same pattern */
}
The l3-interface ties each VLAN to an IRB interface, which is the default gateway. The vxlan vni is the L2 VNI — used for bridged traffic (MAC learning, BUM flooding) for that VLAN.
Don’t confuse these with the L3 VNI (65001) used for routed Type-5 traffic. A single VRF has one L3 VNI but many L2 VNIs — one per VLAN.
The IRB interfaces: anycast gateway
Each IRB interface uses a virtual gateway — an anycast IP/MAC shared across all leaves:
interfaces {
    irb {
        mtu 9216;
        unit 100 {
            proxy-macip-advertisement;
            virtual-gateway-accept-data;
            family inet {
                address 10.0.100.252/24 {
                    virtual-gateway-address 10.0.100.254;
                }
            }
        }
    }
}
- virtual-gateway-address — the shared anycast IP. Every leaf that hosts VLAN 100 uses 10.0.100.254 as the gateway. Hosts always ARP for this address and always get the same MAC back, regardless of which leaf they're attached to.
- address 10.0.100.252/24 — the leaf's unique IP on this subnet. Used for control plane communication (VRRP-like, but without VRRP).
- proxy-macip-advertisement — the leaf advertises ARP entries on behalf of local hosts into EVPN (Type-2 routes with IP). This enables ARP suppression across the fabric.
- virtual-gateway-accept-data — allows the leaf to accept data traffic destined to the virtual gateway MAC, even if the local leaf isn't the "primary." In ERB, every leaf is always active.
The anycast gateway is what makes ERB work seamlessly. A VM can vMotion from one leaf to another and its default gateway (10.0.100.254) is still right there on the new leaf. No ARP refresh, no reconvergence.
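The remaining IRB units follow the same pattern with their own subnets. For illustration, irb.101 might look like this (the 10.0.101.x addressing is an assumption chosen to match the traffic walkthrough later; only irb.100 appears in the real config above):

    unit 101 {
        proxy-macip-advertisement;
        virtual-gateway-accept-data;
        family inet {
            address 10.0.101.252/24 {
                virtual-gateway-address 10.0.101.254;   /* anycast GW for VLAN 101 */
            }
        }
    }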
The overlay: iBGP with EVPN signaling
All of this is distributed via iBGP with the EVPN address family:
protocols {
    bgp {
        group OVERLAY {
            type internal;
            multihop {
                ttl 255;
            }
            local-address 10.11.0.9;   /* loopback */
            family evpn {
                signaling;
            }
            export EXPORT_LEAF-OUT;
            neighbor 10.11.0.1;        /* spine-01 (route reflector) */
            neighbor 10.11.0.2;        /* spine-02 (route reflector) */
        }
    }
}
The spines act as route reflectors. Leaves peer with both spines for redundancy. The EXPORT_LEAF-OUT policy ensures all EVPN route types (1 through 5) are advertised:
policy-statement EXPORT_LEAF-OUT {
    term accept-evpn {
        from {
            family evpn;
            nlri-route-type [ 1 2 3 4 5 ];
        }
        then accept;
    }
    term reject {
        then reject;
    }
}
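The spine side of this peering isn't shown in the post. For reflection between leaves to work, each spine needs a matching internal group with a cluster ID; a minimal sketch (the cluster ID and neighbor list are assumptions):

    protocols {
        bgp {
            group OVERLAY {
                type internal;
                local-address 10.11.0.1;
                cluster 10.11.0.1;      /* makes this spine a route reflector */
                family evpn {
                    signaling;
                }
                neighbor 10.11.0.9;     /* leaf-01 */
                /* ... one neighbor per leaf */
            }
        }
    }

Without the cluster statement, iBGP's no-re-advertise rule would stop the spines from passing one leaf's routes to another.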
The global EVPN config
At the protocol level, EVPN is configured with VXLAN encapsulation and per-VNI route targets:
protocols {
    evpn {
        encapsulation vxlan;
        multicast-mode ingress-replication;
        vni-options {
            vni 101100 {
                vrf-target target:65001:101100;
            }
            vni 101101 {
                vrf-target target:65001:101101;
            }
            /* ... one entry per L2 VNI */
        }
        extended-vni-list [ 101100-101115 ];
    }
}
- ingress-replication — BUM traffic is replicated at the ingress leaf and sent unicast to each remote VTEP. No multicast required in the underlay.
- vni-options — each L2 VNI gets its own route target. This controls which leaves participate in which VLANs. A leaf only imports VNI route targets for VLANs it actually hosts.
- extended-vni-list — tells the switch which VNIs to activate.
Switch options: VTEP identity
switch-options {
    vtep-source-interface lo0.0;
    route-distinguisher 10.11.0.9:5000;
    vrf-import IMPORT_LEAF-IN;
    vrf-target target:65001:65001;
}
- vtep-source-interface lo0.0 — the VTEP (VXLAN Tunnel Endpoint) uses the loopback address. All VXLAN tunnels terminate on this IP.
- vrf-import IMPORT_LEAF-IN — controls which EVPN routes this leaf accepts. The import policy matches on specific route-target communities for each VNI.
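The IMPORT_LEAF-IN policy itself isn't shown in this post. A hedged sketch of the shape such a policy usually takes (the community and term names here are placeholders):

    policy-options {
        community COM_VNI-101100 members target:65001:101100;
        community COM_VNI-101101 members target:65001:101101;
        policy-statement IMPORT_LEAF-IN {
            term import-vni-101100 {
                from community COM_VNI-101100;
                then accept;
            }
            term import-vni-101101 {
                from community COM_VNI-101101;
                then accept;
            }
            /* ... one term per locally hosted VNI */
            term reject-rest {
                then reject;
            }
        }
    }

A leaf that doesn't host VLAN 101 simply omits that term and never installs VNI 101101 state.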
How it all comes together
Let’s trace what happens when Host A on Leaf 01 (VLAN 100, 10.0.100.10) sends a packet to Host B on Leaf 02 (VLAN 101, 10.0.101.20):
- Host A sends the packet to its default gateway: 10.0.100.254 (the anycast virtual gateway on Leaf 01).
- Leaf 01 routes the packet — it looks up 10.0.101.20 in the vrf-customer routing table. It finds a Type-5 route for 10.0.101.0/24 pointing to Leaf 02's VTEP, with L3 VNI 65001.
- VXLAN encapsulation — Leaf 01 encapsulates the routed packet with VNI 65001 (the L3 VNI) and sends it to Leaf 02's VTEP IP through the underlay.
- Leaf 02 decapsulates — it receives the packet on VNI 65001, performs a routing lookup in vrf-customer, and finds that 10.0.101.0/24 is directly connected on irb.101.
- Leaf 02 bridges the packet into VLAN 101 and delivers it to Host B.
This is symmetric routing: both leaves performed an L3 lookup. The L3 VNI (65001) carried the packet across the fabric. The L2 VNIs (101100, 101101) were only used for local bridging and MAC learning — not for the inter-subnet routed traffic.
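Each step can be checked from the CLI on Leaf 01 (output omitted here, since its exact format varies by release):

    show route table vrf-customer.inet.0 10.0.101.20
    show ethernet-switching vxlan-tunnel-end-point remote

The first lookup should resolve to an EVPN route pointing at Leaf 02's VTEP; the second lists the remote VTEPs this leaf has tunnels to and the VNIs reachable through them.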
Type 5 vs Type 2 for routing
You might wonder: can’t Type-2 routes (MAC/IP) also carry IP information? Yes — and in some simpler fabrics, Type-2 with IP is enough for inter-subnet routing. But Type-5 has advantages:
- Prefix-based, not host-based. Type 5 advertises a /24 once. Type 2 advertises each individual host IP. In a fabric with thousands of VMs, that’s a massive difference in BGP table size.
- No MAC dependency. Type 5 routes don’t carry MAC addresses — they’re pure L3. This makes them cleaner for inter-VRF routing and external connectivity.
- Scales with subnets, not hosts. Add 500 VMs to a VLAN and the Type-5 advertisement stays the same. With Type-2, that’s 500 new routes.
Key takeaways
- L3 VNI vs L2 VNI — the L3 VNI (65001) carries routed inter-subnet traffic for the entire VRF. L2 VNIs (101100, 101101, …) carry bridged intra-VLAN traffic. One VRF, one L3 VNI, many L2 VNIs.
- Symmetric routing means both ingress and egress leaves route. This avoids stretching VLANs across the fabric — a leaf only needs the VLANs it actually serves.
- The export policy is the control knob. It decides which routes become Type-5 advertisements. Start with protocol direct and expand only if needed.
- Anycast gateway makes ERB seamless for hosts. The default gateway is the same IP and MAC on every leaf.
- Type 5 scales. Advertising prefixes instead of individual host routes keeps the EVPN table manageable as the fabric grows.
This is the config pattern I use for every new leaf deployment. The VRF, the L3 VNI, the export policy, the anycast IRBs — it’s the same template every time. ERB with Type-5 is one of those designs that just works once you understand the moving parts.