Network Direction


AWS to ASA VPN Issues

Friday July 20, 2018


ASA to AWS VPN Drops Traffic


I've been working with a company that integrates with several partners. One of these partners uses AWS to host their services and allows connection through site-to-site VPN only.

That shouldn't be a problem at all of course. The company in question has ASA's running Firepower Threat Defence, which supports site-to-site VPN's in a very similar manner to the traditional ASA.

So, I configured an 'always on' policy-based VPN (No VTI support in FTD yet), which seems to work fine. Well, for a while anyway.






So, What's the Problem?

The partner noticed that they were losing connection to a database consistently once every hour. A continual ping between servers was showing that network connectivity was dropping at the same time.

On further investigation on both sides, we found that the VPN tunnel was dropping for a few seconds, and coming back up.





While running the continual ping, we saw that there were two pings lost consistently every hour. This doesn't sound like much, but it did make SQL unhappy. The continual ping also made sure that there was no idle timeout causing this problem.

There were no alerts on our internet connection or any other part of the network. Transport seemed to be stable, and there was no packet fragmentation.


With the basics out of the way, it's time to look deeper. Looking into the logs in FMC, I found these errors:


FMC Errors
Removing peer from correlator table failed, no match

Rejecting IPSec tunnel: no matching crypto map entry for remote proxy


While it looks like we're onto something here, AWS reports that this is an expected error in some cases. AWS provides an option to configure a backup VPN tunnel. When we don't use a backup tunnel, we get these errors. In this case, we can ignore these logs.


In the FTD device, we can still connect to the classic ASA CLI. From here we can run the old commands that we're used to, such as show vpn-sessiondb l2l.

That command shows us, among other things, how long the session has been up.

From this, I was able to see that the session never went over 60 minutes. In fact, it was dropping exactly at 60 minutes. Definitely looking like a timer expiring somewhere.



We now have two debugs to run:


Debug IPSec
debug crypto isakmp 127
debug crypto ipsec 127


These debugs help us to determine if there's a problem with phase-1 or phase-2 stability.

And what do you know... The device on the AWS side of the tunnel is sending a termination message every hour.





After digging through some AWS and Cisco documentation, I found that AWS use an SA lifetime of 3600 seconds (1 hour). Cisco default to 8 hours in FTD.

I discussed this with TAC, and they agreed that this should be a negotiated value. That is, the two IKE peers should decide on using the lower value. But this doesn't seem to be working.

Normally when this timer expires, the peers should negotiate new session keys. This should be transparent, and not drop any data. In my case, AWS was ready for a new key, but the ASA wasn't. This caused the entire session to drop, and a new session to be created from scratch.


The ultimate fix was to manually configure the two endpoints to use the same values. AWS is not flexible on this point, so I reconfigured the ASA. Once that was done, the tunnel was 100% stable.
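On a classic ASA, this is a one-line change. The crypto map name and sequence number below are just examples; in FMC, the same value is set in the VPN topology's IPsec settings:

Set SA Lifetime
crypto map outside_map 1 set security-association lifetime seconds 3600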



Twitter: @NetwrkDirection

YouTube: NetworkDirection

Patreon: NetworkDirection






High CPU in Firepower

Friday June 22, 2018

High CPU Usage in Firepower

The Symptoms

I use Firepower Management Center quite a bit. Recently, I started getting health monitoring alerts. It looked something like this:

Health Monitor Alert from

Severity: Critical
Module: CPU Usage

Description: Using CPU05 95.34%


These alerts were spamming me every 5 minutes for a few hours.

One of our ASA's running Firepower Services was having a bad time.



The Findings


I couldn't find the answer to this on my own, so I logged a call with the TAC.
The engineer explained that this is quite common in Firepower. He called it an elephant process.

It came down to how SNORT (the IPS engine) works. SNORT is a single-threaded application. So by default, it doesn't take advantage of multi-core processors. To work around this, the ASA runs a separate instance of SNORT for each core.

The problem occurs when there is a large file transfer. In our case, there was a large file being transferred over FTP. This flow gets assigned to a SNORT process, which means that it's assigned to a single CPU core. This runs that core as hard as it can, which results in these alerts.

If the file is large enough, the CPU usage is high long enough to trigger these warnings.



The Solution

These errors can be ignored. This is Firepower's normal behaviour.
You just need to be sure that this is the scenario you find yourself in. I recommend checking if only one core is impacted. If possible, also check if there is a large file transfer going on.







Getting Started with IPv6 Migration

Wednesday January 17, 2018


Getting Started with IPv6 Migration

We've had a look at the theory behind migration to IPv6. Now have a look at how to put this into practice.





If you're going to migrate to IPv6, you're going to need some IPv6 addresses. There are two ways to get these. One is to go to your provider and get them to give you addresses.

The other is to get some addresses of your own. This is called Provider Independent (PI) addressing and is the most flexible option. This is the option we'll focus on here.


Step 1: Find your local RIR. IANA is the organisation responsible for IP address allocation. They allocate addresses to Regional Internet Registries (RIRs). The RIR, in turn, allocate addresses to you.

The RIR you work with will depend on where you are in the world. Have a look at IANA's website to find your RIR.


Step 2: Register with the RIR. You will need an account to begin. This normally includes a yearly membership fee. The fee varies per RIR, but as an example, APNIC membership is $500 per year.

To become a member, you need to submit an application. It may take up to a week before you get approved.

Once you have a membership, you can begin to request some resources.


Step 3: Get some addresses. IPv6 address blocks are typically assigned in /48 or /32 address blocks. Different RIRs may have different sized blocks available for allocation. Some may even allocate a /56 for very small customers.

The size of the block that you're entitled to will depend on your needs. If you have customers that you need to allocate addresses to, you will be entitled to more addresses.

Each allocation will have an additional cost. The cost is calculated according to a complicated formula. There is usually a fee calculator to help you out.


Step 4: Get an Autonomous System Number. This is optional but required if you want to peer with two internet providers. This is another allocation from the RIR.


Step 5: Decide where to start. This is where you need an understanding of the theory. The primary options are:

  • Edge to core
  • Core to edge
  • IPv6 islands




Address Planning

Just like IPv4, it's important to plan out your address space. Conservation may not be much of an issue, but all the other principles still apply. For example, contiguous subnets make summarisation easier.

You probably already understand address space planning fairly well. Here are just a few tips on how things are different in IPv6.


Allocate /64 subnets everywhere! Yes really. This may come as a shock if you haven't used IPv6 yet, but it's all based around /64 allocations. Even point-to-point links will use /64. 

Notice that you should allocate /64's. If you want, you can use a longer subnet mask. A /126 or /127 is fine to use. But remember, in your address plan, allocate a /64.


When you're allocating addresses to a point-to-point link, consider this scheme. Give one end an ::A address, and the other a ::B address.

There's no technical reason for this. It's just to make things simpler for you later. One end will always be ::A, and the other will always be ::B.
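As a sketch, the two ends of a link might look like this (using the documentation prefix; interface names are just examples):

Point-to-Point Example
! Router 1
interface GigabitEthernet0/0
 ipv6 address 2001:DB8:0:12::A/64

! Router 2
interface GigabitEthernet0/0
 ipv6 address 2001:DB8:0:12::B/64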

IPv6 P2P  


Be careful with zero compression. This is when there are a lot of zeroes in the subnet, which get compressed. For example, 2001:0DB8:0000:0000::/64 is compressed to 2001:DB8::/64.


Once again, there's no technical reason to avoid this subnet. But consider a non-network person. This type of address can be confusing. This example could be improved by allocating 2001:DB8:0:1::/64.


When planning the address space, think hierarchically, and build in some reserve. That is, don't go around handing out all sequential subnets.

In addition to this, be careful when embedding additional information into IP addresses. This includes VLAN numbers and legacy IPv4 spaces. This may leak information to the internet. You're not hidden behind NAT anymore, remember?

Also, don't put link-local addresses into DNS. Link-local addresses aren't routable and may overlap with other segments. Keep them out of DNS.


RFC 4193 addresses are called Unique Local Addresses (ULA). These are like RFC 1918 private addresses in IPv4. You may be tempted to deploy these in your network, and use NAT.

Avoid this temptation! One goal of IPv6 is to make everything unique. Imagine readdressing later if there's a merger.

In fact, it's recommended to only use ULA addresses if you have a very specific need. Some say that they have 'secret' networks. If you do, consider route-filtering instead.



Sample Address Plan #1

Here is an example of something you might do. This assumes that you have a /48 block of IPv6 addresses. Subnets are all /64, which leaves 16 bits to play with.

We allocate addresses hierarchically, like this:

  1. Location (4-bits): Could be the country, state or so on. Business unit could also be used here
  2. Buildings (4-bits): Building number, or perhaps sub-levels
  3. Floors (4-bits): Or perhaps areas within a building
  4. Traffic Type (4-bits): Such as data, voice, guest, and so on

IPv6 Sample1  
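To make the scheme concrete, here is how one subnet might be derived (the /48 below is the documentation prefix, and the digit values are arbitrary):

Worked Example
Allocated block : 2001:DB8:1234::/48
Fields          : Location 1, Building 2, Floor 3, Traffic type 4
Subnet ID       : 1234 (one hex digit per field)
Resulting subnet: 2001:DB8:1234:1234::/64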



Sample Address Plan #2

This follows the same scheme as the previous example. This case just changes the order of the bits.

The important thing is to consider how you will summarise traffic. This will depend on your needs, so use your brain here.

IPv6 Sample2  




Transition Method

Method Selection

Now it's time to select a suitable method for migration. There's no one solution to fit every network. To help decide, here are a few questions to ask yourself:

  1. What is the business goal behind IPv6?
  2. What type of network is this? Enterprise? Service provider?
  3. What's the budget and time frame?
  4. Do you need IPv6 end-to-end?
  5. What are the design constraints? Are there legacy applications, old router code, security devices that don't support IPv6?
  6. Are the support staff trained to use IPv6?
  7. Do you need multicast? What about dynamic routing protocols?


Now, decide if you need to remediate the network first. Do you need to upgrade devices to support IPv6 or just tunnel across them?



IPv6 Services

There's more to IPv6 than just IP addresses. There are all the additional services that come along with it.

You can allocate addresses dynamically or manually. Manual configuration is quite complicated, especially for workstations and servers. The dynamic options are Stateless Address Autoconfiguration (SLAAC) and DHCPv6.

SLAAC uses an IPv6 RA (Router Advertisement) message to pass the network prefix to clients. Clients use this prefix to configure their own addresses based on their MAC address. SLAAC is limited, as it cannot pass much additional information to the client.

DHCPv6 is the preferred option, as you can pass more information to the clients. This includes DNS servers, NTP servers, IP addresses, and so on.
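As a rough sketch, stateful DHCPv6 on an IOS router involves a pool and the 'managed' flag in the RA (pool name, addresses, and interface are examples only):

DHCPv6 Sketch
ipv6 dhcp pool LAN-POOL
 address prefix 2001:DB8:0:10::/64
 dns-server 2001:DB8::53
!
interface Vlan10
 ipv6 address 2001:DB8:0:10::1/64
 ipv6 nd managed-config-flag
 ipv6 dhcp server LAN-POOL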


IPv6 favours dynamic addressing, so DNS is critical. The AAAA record is the A record equivalent for IPv6. These can be delivered over an IPv4 infrastructure. In fact, this is recommended until IPv6 is fully implemented. This is so anything can access DNS from anywhere.

IPv6 may feel new, but it has been around in some form for quite a while. It is actually older than FHRP's. This is why it has an FHRP of its own built in. It is Neighbour Unreachability Detection (NUD).

The problem with NUD is that it comes from a time before fast convergence was a necessity. It converges in around 30 seconds, which is unacceptable in today's networks. For this reason, it's still recommended to use HSRP, VRRP, or GLBP.
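If you go down that path, HSRP for IPv6 looks much like its IPv4 counterpart. A minimal sketch (group number, priority, and interface are examples):

HSRP for IPv6
interface Vlan10
 standby version 2
 standby 1 ipv6 autoconfig
 standby 1 priority 110
 standby 1 preempt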




IPv6 adds a new footprint to your network that someone may want to exploit. So, you may want to consider creating some security policies. Here, we'll look at a few of the areas you may want to secure.


Many security principles apply to IPv6 in the same way that they did for IPv4. For example, you need to secure the control plane. One way to do this is to use authentication between peers in your IGP. Other factors are quite different.

While you're running both v4 and v6, the recommendation is to use IPv4 for management. For example, use SSH over IPv4 to manage your routers. This is because it has proven to be secure, while IPv6 hasn't been around as long.

Also on the point of migration is tunnelling. The tunnels that you're using probably don't include encryption by default. So, if you're traversing insecure networks, you need to turn this on.

One fundamental difference is that IPv6 does not use NAT in the traditional sense. This means that your IP addresses are known end-to-end. This is just one more reason to use host-based firewalls and IPS systems.

ACL's have a seemingly small difference with IPv6. They don't support wildcard masks in IOS. This may have an implication, as this limits the effectiveness of role-based addressing.
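For example, an IPv6 ACL matches on prefix lengths rather than wildcard masks (names and prefixes here are illustrative):

IPv6 ACL Example
ipv6 access-list DENY-GUEST
 deny ipv6 2001:DB8:0:10::/64 any
 permit ipv6 any any
!
interface GigabitEthernet0/1
 ipv6 traffic-filter DENY-GUEST in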


IPv6 has a few built-in messages. You may want to filter them out like you might have done with ICMPv4. Try to resist this temptation, as these messages are critical for IPv6 to work correctly. Instead, use the layer-2 security tools shown later.

These messages include:

  • ICMPv6
  • Neighbour Discovery (ND)
  • Router Advertisement (RA)
  • Duplicate Address Detection (DAD)
  • Redirections


Just like IPv4, there are several security tools to use at layer-2:

  • RA-Guard - Protects from malicious RA messages by allowing RA messages only on specified ports
  • DHCPv6 Guard - This is the same as DHCP Guard for IPv4
  • Source/Prefix Guard - A legitimate prefix is configured. Packets on the layer-2 segment are inspected to see if they are in the allowed prefix
  • Destination Guard - Protects against cache exhaustion. This stops an attacker sweeping a segment to fill the cache
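As an example of the first tool, an RA-Guard policy on IOS looks something like this (policy and interface names are examples):

RA-Guard Sketch
ipv6 nd raguard policy HOSTS
 device-role host
!
interface GigabitEthernet1/0/10
 ipv6 nd raguard attach-policy HOSTS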


There are some security advances in IPv6. IPv6 can really benefit from having PKI in the network. This is because of the Secure Neighbour Discovery (SeND) protocol. This protects against attacks like message spoofing.

This is done with a Cryptographically Generated Address (CGA). This uses the network's PKI to verify that the sender really owns the IP address it's sending from.











Packet Life - IPv6 Access Lists on IOS

Cisco Live - BRKSPG-2067 - IPv6 Design and Transition Mechanisms

Cisco Live - Intermediate - Enterprise IPv6 Deployment - BRKRST-2301

Cisco Support Forums - IPv6 Subnetting - Overview and Case Study

Marwan Al-shawi and Andre Laurent - Designing for Cisco Network Service Architectures (ARCH) Foundation Learning Guide: CCDP ARCH 300-320 (ISBN 158714462X)



BGP With a Service Provider

Tuesday December 19, 2017


BGP With a Service Provider

So you want to peer with a service provider. Never done it before? Overwhelmed? Don't know where to start? If this sounds familiar, then this article is for you!
We're going to have a look at the process of peering with an ISP. We're not going to look too deeply into the technical details. Rather, we'll focus more on the process.





Let's start by considering the high-level topologies that you may use.

In a single-homed topology, you have a single connection to a single ISP. Generally, there are not too many reasons to use BGP here. A few static routes usually do the job well.

In a dual-homed topology, you have two or more links to a single ISP. This may use one or more routers, depending on the level of redundancy that you need. This offers partial redundancy, but the ISP itself is still a single point of failure.

Multi-homing is where more than one ISP is used. This also may use several routers. This provides the most redundancy. An entire ISP failure won't prevent you from accessing the internet.

It is possible to have two ISP's on a single router. The router is still a single point of failure, so lose the router, and you lose internet access.


BGP SingleRouters  


The topology that you choose will depend on your business requirements. A tight budget may require a single provider or a single router. Tight SLA's may require that you have no single point of failure. It's important to discuss what's important to the business.

The link between your router and the ISP is usually a /30 or /31 network. This is required for each ISP link. The easiest option is to get the ISP to assign these IP addresses.


When you have multiple ISP's, it's important that you don't become a transit area. This is where one provider sends traffic through your network, to get to the other provider.
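A common way to prevent this is to filter your outbound advertisements so you only ever announce your own prefixes. A sketch, with placeholder addresses and ASNs:

Outbound Filtering
ip prefix-list OUR-PREFIXES seq 5 permit 203.0.113.0/24
!
router bgp 65001
 neighbor 192.0.2.1 remote-as 64500
 neighbor 192.0.2.1 prefix-list OUR-PREFIXES out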




Addresses and ASN's

Address Types

Public IP addressing can be Provider Assigned (PA), or Provider Independent (PI).

Your ISP can assign PA addresses to you. These are usually quite simple to get and come in small quantities (starting at about /28). The significant downside is that you cannot use them with or migrate to another provider. This makes them suitable for a single provider topology only.

To get PA addresses, you simply need to ask your ISP. They will usually require a justification of some sort for IPv4 addresses. This is usually a simple process of explaining how many addresses you will need over the next 6-12 months. Also explain how you plan to conserve addresses (hint: NAT). Your provider may charge a fee for the addresses.

The RIR for your area (such as RIPE, or APNIC) can assign PI addresses to you. These addresses are not tied to a particular provider. This makes them suitable for multi-ISP topologies as well as single. You can also take them with you if you change providers.

To get these addresses you need to be a member of the RIR. This incurs an annual membership fee. IP blocks are usually larger, starting with a /23. This also needs a justification and a plan for how many addresses you will need over the next 12-24 months.




IPv4 addressing is still most commonly used. But, if you're going to the effort of setting up a new peering, why not go dual-stack? This would enable IPv6 at the edge. This might make it a bit easier for you in the long term.

The process for IPv6 is nearly identical to IPv4. The only differences are that they are usually allocated in blocks of /32 or /48. Different RIR's may have different block sizes. The uplink to the ISP will typically use a /64 network, rather than /30.

Unless otherwise specified, the rest of this article will assume IPv4 for simplicity.




You will need an Autonomous System Number to peer with BGP. If you are peering with a single ISP, you may use a private or public ASN. I would recommend using a public ASN. This makes everything simpler if there are any changes in future.

Peering with multiple ISP's requires you to use a public ASN. These are also allocated by the RIR.




Working with ISP's

To kick off the process, you will need to submit an application to peer with your ISP. For the most part, they just want to get some information from you about how you want to peer.

They will want to know:

  • If you want to use authentication
  • The routing table type (more on this soon)
  • IP addressing for the link between you and them
  • The AS number that you want to use


The internet has about 630K routes, at the time I'm writing this. You have the option to download them all from the ISP. This is a lot of routes though, which doubles if you learn them from two providers. This consumes a lot of resources on the router.

The provider will likely give you some other options, such as default gateway only. In this case, you only learn a single route. This also makes it a bit difficult to load-share your outbound traffic.

An alternative is domestic routes with default-gateway. This is the routing table for all routes in your country, and a default route to find the rest. One of the providers I work with sends me about 23K routes in the domestic routing table.

Providers may also have an option of sending you routes to their other customers, and a default route. Discuss this with them, and decide what will work in your case, and what your routers will support.


Many (but not all) providers use communities. This is a way to allow you to do your own traffic engineering without having to get them involved. This will vary per ISP, so get them to send you their documentation.


While you're at it, you may also want to consider Bogon route filtering. It's not a requirement at all, but it may help with security.




Advertising Routes

At some point, you will need to send routes to your provider. But there's something important that I want you to keep in mind. Your provider will filter what you send them.

This means that you need to agree on the routes that you will send ahead of time. They are preventing you from flooding them with invalid or sub-optimal routes.

This sounds restrictive but is not as bad as it seems. BGP still provides dynamic routing. So, you can still advertise/delete routes, and give them different attributes. Also, you can usually use a portal that your ISP provides to request to advertise more routes.


If you want to advertise your own PI addresses, be aware that the smallest block that you can advertise is a /24. This is so the internet routing table doesn't get too large. Remember how I said that it has about 630K routes? Well, imagine if everyone were advertising /30's as well.

There is a catch here. Some providers will claim to let you advertise smaller networks. Sounds good, but the problem is that upstream providers will filter them out.

The exception to all this is if you have PA addresses. You can advertise smaller blocks because your provider owns the summary address. The /28 that they're giving you is part of a larger block that they're advertising elsewhere.


Also, you may want to consider ROA and RPKI. These are security mechanisms that verify you are allowed to advertise the routes that you own. This prevents someone else from advertising your routes and causing a black hole.




Load Sharing

Load sharing refers to using multiple paths, rather than active/standby. This is in contrast to default BGP, which uses a single path only.

The approach will vary, depending on the topology you're going to use.


Single ISP: Dual links, Single router at both ends

Use a loopback interface for eBGP peering. This needs eBGP multihop configured. Use an IGP or static routes to get to the peer router. If you're going to use static routes, consider using route tracking.

The underlying IGP (or static routes) enable ECMP. This means that you only need a single peer.

BGP Provider 1  
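A sketch of the loopback-based peering (all addresses and ASNs are placeholders):

Loopback Peering
interface Loopback0
 ip address 10.0.0.1 255.255.255.255
!
router bgp 65001
 neighbor 198.51.100.1 remote-as 64500
 neighbor 198.51.100.1 ebgp-multihop 2
 neighbor 198.51.100.1 update-source Loopback0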


Single ISP: Dual links, Dual routers at the ISP end

This needs two eBGP sessions, so IGP-based ECMP is not an option.

For outbound routing, BGP multipathing is needed. Use the maximum-paths command. This enables ECMP across up to six links.

Inbound routing is mostly controlled by the ISP.

BGP Provider 2  
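The outbound side might be sketched like this (ASNs and addresses are placeholders; by default, both paths need matching attributes, including the neighbour AS, to be eligible for multipath):

BGP Multipath
router bgp 65001
 neighbor 198.51.100.1 remote-as 64500
 neighbor 198.51.100.5 remote-as 64500
 maximum-paths 2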


Single ISP: Dual links, Dual routers at both ends

This is similar to the last scenario.

Configure iBGP between the two routers. For outbound traffic, configure an FHRP like HSRP. The routers can then use the prefixes from eBGP or iBGP to select the best router for outbound traffic. This works if you are using full routes or domestic routes. This will not work with the default route only.

You can use MED and AS-PATH to influence which router the ISP will prefer to send traffic to.

BGP Provider 3  


Dual ISP's: Dual links, Single router at the Enterprise

Use the same tricks as the last few scenarios.

For inbound routing, break the address space in half, and advertise half the space over each link. Use AS-PATH prepending to make one link more desirable for one half of the IP space.

Alternatively, advertise the same space over both links. Traffic from the outside world will use the ISP that they're closer to (based on AS-Path).

BGP Provider 4  
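The prepending approach might be sketched like this (prefixes and ASNs are placeholders; the ASN you prepend is your own):

AS-PATH Prepending
ip prefix-list SECOND-HALF seq 5 permit 203.0.113.128/25
!
route-map PREPEND-OUT permit 10
 match ip address prefix-list SECOND-HALF
 set as-path prepend 65001 65001
route-map PREPEND-OUT permit 20
!
router bgp 65001
 neighbor 192.0.2.1 route-map PREPEND-OUT out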


Dual ISP's: Dual links, Dual routers at both ends

Use the same tricks as the last scenario, with the addition of iBGP and FHRP between the routers.

BGP Provider 5  




Final Thoughts

That should give you a gentle introduction to BGP peering with a service provider.

So, what do you think? Did I miss anything? Have any tips of your own? Please leave me a comment below.








Cisco Live - https://www.ciscolive.com/online/connect/sessionDetail.ww?SESSION_ID=89232

Marwan Al-shawi and Andre Laurent - Designing for Cisco Network Service Architectures (ARCH) Foundation Learning Guide: CCDP ARCH 300-320 (ISBN 158714462X)


vPC and LAG Convergence

Thursday November 16, 2017


vPC and LAG Convergence

Recently Cisco released NXOS 7.0(3)I7(1) for the Nexus 9000 series switches. This brings two new features, called vPC Fast Convergence and LACP Convergence. These are also available on the 7000 series switches.

There wasn't a lot of information readily available, so I'm going to share what I've learned here. I'd like to take a moment to thank Amith Ronad from Cisco for helping me to understand these features.




LACP Convergence

In a normal etherchannel, LACP starts negotiating when a member link comes up. Around the same time, the switch starts to make VLANs available on the link.

LACP negotiation finishes first, but the VLANs may not all be available yet. This means that traffic can pass over the link while the VLANs the frames are tagged with are still unavailable. For a brief moment, some VLANs are pruned from the link, blackholing traffic.

The LACP Convergence feature changes the order a little. LACP frames are not sent until all the relevant VLANs are available on the link. This prevents traffic blackholing.


LACP Convergence
Switch(config)# interface port-channel 10
Switch(config-if)# lacp graceful-convergence




vPC Fast Convergence

vPC has several failure-handling features in place. When the peer-link fails, the vPC member ports of the secondary switch are shut down. This is done to prevent a split-brain scenario.

The normal behaviour in a case like this is for vPC to shut down any SVI's whose VLANs are on the peer-link. Sometimes these SVI's are down before the vPC member ports finish shutting down. The effect is that traffic may flow to a member port, only to find that the SVI is down. This is another case where traffic is briefly blackholed.

If you use the vPC Fast Convergence command, you enable a new feature called MCT Down Handler. This feature creates a list of member ports, layer-3 interfaces (including SVI's), and the VLANs they use. When the peer-link fails, it sends a 'suspend' message to them all at once. The practical benefit of this is that the SVI's do not shut down first, preventing traffic loss.


Fast Convergence
Switch (config)# vpc domain 10
Switch (config-vpc-domain)# fast-convergence


This leads to an interesting question. Shouldn't autostate keep the SVI's up until all the member ports are down?

In a non-vPC environment, this is true. vPC however, is special and changes the rules. It will bring down any interfaces whose VLAN is on the peer-link. This includes orphan ports by the way, which is not ideal. There are two ways to design around this.

The first option is to run a separate trunk link between your switch pair. This would only carry non-vPC VLANs. These VLANs are manually pruned from the peer-link, so they are no longer affected by vPC failures.

The second option is to use the dual-active exclude interface-vlan command. This will separate the SVI status from the peer-link failure. Of the two, the first option would be preferable.





There are some small benefits to be gained from these new features. Without them, link failures may lead to 500ms of traffic loss. With the new features, this can be decreased to 50-250ms of loss. Whether this provides any practical benefit to you will depend on your environment.

There does not appear to be any downsides to enabling these features. It's surprising really, that they're commands and not just built into vPC. I can't think of any reasons you wouldn't want to enable them.















Hitless vPC Role Change

Thursday October 19, 2017


Hitless vPC Role Change


"Always two there are; no more, no less. A vPC primary and a vPC secondary."

Yoda (paraphrased)




Like Yoda says, there has always been a primary and secondary in a vPC relationship. But, they've always been non-preemptive. That means that a secondary will not automatically become primary unless there's a failure of some sort.

So, if you reboot the primary switch, the secondary will become primary. When the first switch finishes booting up, it will stay secondary. This is because a role change would be disruptive.


A feature called vPC Role Preempt, or 'Hitless Role Change', was recently introduced on the Nexus 9K in version 7.0(3)I7(1). This was previously introduced on the 7K in version 7.3(0)D1(1).

This doesn't enable automatic preemption. But, it does allow you to force a transition from primary to secondary without any traffic loss.

This would be a useful feature during maintenance. If you need to reboot a switch, transition it to secondary first. Once it's finished rebooting, transition it back to the primary role.


You can verify the role a switch currently has with show vpc role. This also shows the role-priority.


Switch-1# show vpc role

vPC Role status
vPC role                        : primary
Dual Active Detection Status    : 0
vPC system-mac                  : 00:23:04:ee:be:05
vPC system-priority             : 1000
vPC local system-mac            : 28:6f:7f:ae:6c:39
vPC local role-priority         : 10
vPC local config role-priority  : 10
vPC peer system-mac             : 28:6f:7f:ae:b4:19
vPC peer role-priority          : 15
vPC peer config role-priority   : 15



The vpc role preempt command is used to switch roles.

Before switching, the priorities have to be changed. To become primary, a switch needs to have the lowest role priority.


Switch-2# vpc role preempt
ERROR: Couldn't perform role change: to change local to Primary, please adjust vpc role priority () on local and/or peer vpc so that local one is smaller than peer one.

Switch-1# vpc role preempt
ERROR: Couldn't perform role change: to change local to Secondary, please adjust vpc role priority () on local and/or peer vpc so that local one is larger than peer one.


When you change priority, the switch reminds you that the change won't take effect until you preempt the roles.


Switch-1(config-vpc-domain)# role priority 10
 Change will take effect after user has:
   1. Triggered "vpc role preempt" (non-disruptive - no traffic loss on STP root switch)
OR 2. Re-initd the vPC peer-link (disruptive)
 !!:: vPCs will be flapped on current primary vPC switch while attempting option 2 ::!!


Finally, the roles can change. This generates syslog messages on both switches.


Switch-1# vpc role preempt
Please ensure peer-switch is enabled and operational('show spanning-tree summary'). Continue (yes/no)? [no] y
Switch-1# 2017 Oct 17 03:59:55 Switch-1 %$ VDC-1 %$ %VPC-2-VPC_ROLE_CHANGE_NOTIFICATION: VPC role is changed from Master to Slave

Switch-2# 2017 Oct 17 03:59:16 Switch-2 %$ VDC-1 %$ %VPC-2-VPC_ROLE_CHANGE_NOTIFICATION: VPC role is changed from Slave to Master
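
Putting the pieces together, a switch reboot during maintenance might look something like this. This is only a sketch: the vPC domain number and priority values are examples, not taken from the outputs above.


Switch-1(config)# vpc domain 10
Switch-1(config-vpc-domain)# role priority 20
Switch-1(config-vpc-domain)# end
Switch-1# vpc role preempt


With Switch-1's priority now higher (worse) than Switch-2's, the preempt hands the primary role over without traffic loss. Switch-1 can then be rebooted. Afterwards, reverse the priorities and run vpc role preempt again to restore the original roles.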




ND Logo
Virtual Port Channels

More on vPC...

vPC's | Advanced vPC's








Twitter: @NetwrkDirection


Planning a Move to the Data Centre?

Thursday September 21, 2017

 Fibre SC  

Planning a Move to the Data Centre?

If you've never worked in a third-party data centre before, the first time can be a bit of a shock. There are a lot of rules and procedures to follow, and each data centre is a bit different from the last one.

If you're going to deploy some new services in a data centre, this post is for you. The goal is to help make you aware of some factors you may not have considered yet. This will help make your first time in the data centre a smooth one. 




Welcome to the Data Centre

Getting Access

Police Security
The first step is getting access to the premises. Data centres are secure facilities, so you can't just walk in. First, your company needs to add you to the list of staff allowed to access the building. This usually involves logging onto the data centre's portal and adding you as a user. Sometimes it means filling in some paperwork and emailing it to them.

There are a lot of safety and security procedures to follow, which requires an induction. There are usually two parts to this. One is an online induction, where you read their manual and go through a short test. There may be a general and a site-specific test to go through. The second part is the onsite walkthrough. This is where one of the local staff shows you where the emergency exits are, and so on.

Congratulations! You now have access to the facility! At this point, they will take your photo and give you some sort of access card. In some data centres, you are able to come and go as you please. In others, you have to sign in and out with security at every visit.


Do you want to bring some equipment in? You'll need a ticket for that...

This part varies with each data centre. They need you to log a ticket for any work that they do. This includes performing work on your behalf and receiving deliveries.

The more strict data centres need tickets for everything. One that I go to needs a pre-arranged ticket number to get in the front door. Another one requires a ticket number to park your car in their carpark.

If you're not sure, log a ticket. You don't want to turn up and find that you can't do your job because the paperwork's not done.



Remote Work

It's not uncommon for data centres to be a distance away. Sometimes another state, or even another country. In cases like this, it may be useful to get the data centre staff to do some of the physical work for you. This is useful if you need a hard disk replaced, a cable repatched, or a tape changed.

This is not generally a cheap option, especially if it's an emergency request. But often it's still cheaper and faster than getting on a plane and doing it yourself.

Oh, and you will definitely need to raise a ticket.




Inside the Building


When you get into the building, you will find data halls, provider rooms, and shared areas.

The data halls are where the rows of racks are. There are 'hot' and 'cold' aisles. Install your equipment so it pulls cold air from the cold aisle, with the exhaust facing the hot aisle.

techtarget.com - How do I cool high-density racks?  


Some data centres are relaxed on this and don't care which way the air flows in a practical sense. On the other end of the strictness scale, onsite staff may come and inspect your work at the end of the day.

Most data centres will need you to install 'blanking plates' in unused rack space, to keep the hot and cold aisles separate. One data centre I went to in Melbourne demanded brush plates for cables, and duct tape to seal any gaps. Blanking plates are usually provided for you to use.

energystar.gov - Properly Deployed Airflow Management Devices


There is one rule that all data centres have. No cardboard in the data halls. Cardboard is a fire hazard, with all the hot equipment around. Also, cardboard fibres can get sucked into equipment and shorten its lifespan. If you need boxes for storage, bring your own plastic tubs.

Another rule for the data hall is no photography. This is for security, but if you're only photographing your own racks, they usually don't mind.

Service providers have their own areas separate to regular customers. I've heard these called 'basements' or 'meet-me rooms'. When you arrange comms with your provider of choice, you connect to the equipment in these rooms. We'll get to comms later.

Shared areas include provisioning rooms, delivery areas, and break rooms. These are self-explanatory. Use the provisioning rooms to unpack your equipment (no cardboard remember?) before you take it to your racks. Some provisioning rooms have racks to install your equipment in for configuration.

In some data centres, you need to book in the provisioning rooms. Do not leave this to the last minute. They often get booked out, and you have nowhere to unpack your kit. Honestly, I've never had success booking these rooms. Even when I do everything right, they double-book them, and I end up unpacking my boxes in the delivery area.




A good-sized data centre will give you the option of using regular racks, or a cage. A cage, as the name suggests, is an area that's caged off from the rest of the data hall. This provides an extra layer of security, as you have to unlock the cage to get in. For the most part, cages are just full of racks anyway, but you can put other things in there if you wish. I have even seen a desk in a cage.

Equinix - How to Speak Like a Data Center Geek  


Rack security varies between data centres. Some racks can be opened by a phone app or portal. Others need a physical key. Others need the data centre staff to unlock them for you. Generally, unless you provide your own racks in a cage, you don't get a choice. It's up to what the data centre provides. Personally, I prefer the kind that you can unlock remotely. This is useful if you need to give a third-party access to a specific rack, while you're not present. Access to racks like this is usually logged too.


When you rent racks, you have to pay for power draw and cooling. Mostly, cooling matches the power draw, but some data centres manage this separately. Power draw is measured in kilowatts, or sometimes in 'kVA'. (kVA is apparent power; multiply it by the power factor, typically around 0.9, to estimate real power in kW.) As you would expect, the more devices, the higher the power draw, and the more you need to pay.

Racks will have two power rails, one on each side. These power rails are 'A' and 'B' feeds, which means that the power comes from different sources. The idea is that if the A-feed fails, the B-feed will still be ok. This also means that your devices should have dual power supplies.

Unfortunately, not all devices will have dual-power supplies. So, what then? This is when you want to get a Transfer Switch. This is a power device that connects to both power rails. Devices with single power supplies connect here and get the benefit of the dual power feeds.

schneider-electric.com - Rack-mount Transfer Switches  


As you've probably realised by now, the data centre provides your power. That generally means that you don't need to provide your own UPS.




Cross Connects, Cabling and Comms

Inter-Rack Cabling

When it comes to running cables between racks, there are a few options. As you've probably guessed, this may depend on the data centre. Or, more specifically, it depends on the racks in the data centre.

Many racks have 'punch out' holes. This is where you can remove a plate from the rack, so you can run cables between racks. This is the easiest option but can lead to messy cabling. Messy cabling can also lead to reduced air-flow, reduced cooling, and higher risk of cabling issues.

Some racks have solid sides on them, so punch-out holes are not an option. In this case, you may be able to use an overhead basket to run your cables. This 'basket' is a small mesh platform above the racks. You can run your cables out of the top of the rack, across the basket, and into another rack. Baskets are usually only available on request. Also, they only help if your racks are all together.

In the picture below, black cables coming out of the racks run across the basket. You can use whichever cables you want here.

If your racks aren't together, you will need a cross-connect. This is what the yellow areas shown below are for.

Panduit - Data Centre fibre routing containment systems  




A cross-connect is where the data centre runs cables for you. If you look above your rack, you will see areas where cross-connects and other cabling is run. Generally, you won't have any access to these yourself, unless you have a special license.

Data Hall  


Cross-connects may use different types of cables. These are usually CAT6 copper, Single-Mode fibre, and Multi-Mode Fibre. The fibre may be single-core or dual-core. Remember to consider the length of the cable run before selecting your cabling type.

To get a cross-connect installed, you will first need a patch panel or FOBOT installed at the top of your rack. This is where the cabling will terminate. Some data centres will also need you to install structured cabling, where they pre-provision 12 cores of fibre to their infrastructure. This is so they only need to access your rack once. When you order more cross-connects, they connect the new cross-connect to the existing structured cabling. Once all the cores are used, more structured cabling needs to be installed.

stronglink.com.au - What is a FOBOT?  


Watch for the type of fibre connectors that are installed in the FOBOT or patch panel. Some data centres use LC connectors, and some use SC connectors. You will need to make sure your patch leads have the matching connectors.

Cross-connects are usually ordered in the data centre's portal. Here, you provide information for the 'A' end and the 'B' end. There is usually a one-time setup fee and an ongoing monthly cost.



Provider Connections

Eventually, your racks will need access to the outside world. This means that you will need WAN or internet connections.

To go about this, you will need to talk to a provider and see if they also have a presence in the data centre. Then you can run a cross-connect to your provider, which they will connect to their switch. After that's done, you can connect your router to the patch panel, and the rest is as normal.




What Else?

I hope this will help ease your transition into third-party data centres.

Do you have any other tips? Anything critical I've missed? Please drop a comment below, or send me a message on Twitter.






Dynamic Routing and FEX

Wednesday August 30, 2017

Dynamic Routing and Fabric Extenders

The Problem

A few weeks ago I was working on a customer's network when I found an OSPF problem. For some reason, an ASA wouldn't peer with a Nexus switch.
To make it a bit weirder, the problem only happened on the default VRF, and only with OSPFv3. On the Nexus side, I could see the ASA neighbour, but it was stuck in INIT. On the ASA side, I couldn't see the neighbour at all.


After hours of troubleshooting, I got the TAC involved. But before I tell you what they found, let me show you the topology I was working with.



There is a pair of ASA's, running Threat Defence. They are an active-standby failover pair.

The ASAs connect to a pair of Nexus Fabric Extenders (FEX). The FEX's connect to Nexus 9000 series parent switches in straight through mode.

The ASA's connect to the switches with vPC. The layer3 peer-router command makes dynamic routing over vPC possible. Neighbours use SVI's for peering.


ND Logo
Virtual Port Channels

Take a look at how vPC's work

Virtual Port Channels | Routing over vPC




The Cause

Speed Limit  
I had fallen victim to missing a footnote in one of Cisco's guides. Have a look at the article Cisco Nexus 2000 Series NX-OS Fabric Extender Configuration Guide for Cisco Nexus 9000 Series Switches. Under the Guidelines and Limitations section, the article points out that when a FEX is connected to a 9000 series parent, "the queuing capability on the FEX host interface is limited."

So what does this mean? It means that if we connect a router to a layer-2 FEX port and use SVI's, we cannot reliably run a routing protocol. The port will not prioritise control plane traffic over other types of traffic. This results in dynamic routing traffic getting dropped.

This is the only time I've seen a problem with this topology. Most of the time it works fine. But the bottom line is that it's an unsupported design.




The Solution

Rubik Solved  
There are three solutions that come to mind. The first is to run static routes. This issue does not affect static routing, as it does not use any control plane messages.

The second option is to bypass the FEX and connect straight to the parent switch. Depending on your port requirements, this may need extra SFPs.

The third possibility is to use routed ports on the FEX. This won't be possible in all cases, but is generally preferred to peering over SVI's anyway. Be aware though, that not all FEX models support routed ports.
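
As a rough sketch of the third option, a routed FEX host interface would look something like this. The interface number, addressing, and OSPF process tag are made up for illustration, so check them against your own design (and confirm your FEX model supports routed host interfaces first):


Switch-1(config)# interface Ethernet101/1/1
Switch-1(config-if)# no switchport
Switch-1(config-if)# ip address 10.1.1.1/30
Switch-1(config-if)# ip router ospf 1 area 0.0.0.0
Switch-1(config-if)# no shutdown


The neighbours peer directly over the routed port, so no SVI is involved.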


Have you had this problem? Any other ideas to solve it? Share your thoughts in the comments below.


Cisco Live Melbourne 2017 - Day 1

Tuesday March 7, 2017

Cisco Live 1

Cisco Live Melbourne 2017 - Day 1

Kylo Ren
We all want to be better at what we do. You wouldn't be reading this if you didn't. In the IT industry, we go to vendor events, where we get to broaden our horizons and network with potential colleagues.

I am one fortunate man in a crowd of many who just attended day 1 of Cisco Live in Melbourne.

I walked into the convention centre this morning to be greeted by statues of Storm Troopers and Kylo Ren. As it turns out, I was at the wrong end of the convention centre, where a toy expo was being held. Still pretty cool.

Today's topics for me included a deep dive into FirePower, and a gentle introduction to ACI. For my own review, and just to share, I have outlined some of the highlights below. Perhaps it's not new to you, but hopefully it will leave you with an interesting thought or two worth digging deeper into.



Firepower Deep Dive

Reverse Pyramid
The Firepower deep dive focused on the Firepower Threat Defence (FTD) software. If you're not familiar with it, it is a newer code set that runs the Firepower IPS and ASA firewall functions. This image unifies these two technologies. ASA with Firepower Services on the other hand, runs Firepower as a separate software module.


Firepower 2100

The Firepower appliances look fantastic. The problem is that they're pricey when you factor in Firepower TAMC subscriptions. A few months ago we looked at getting a few 4110's to replace some smaller ASA's. Unfortunately, this was blocked higher up the company hierarchy due to cost.

Fortunately, there is now another model, called the Firepower 2100 series. Like the 4100 series, this comes in four models; The 2110, 2120, 2130, and 2140.

They employ a mixture of RJ45 and SFP ports; The exact port configuration varies based on model. I've been looking forward to 10G ports in a firewall, mostly because I want to use Twinax to connect them to my Nexus switches (first world problems right?). The good news for people like me is that this is available in the 2130 and 2140.

Depending on model, the 2100 series claims from 2 to 8.5 Gbps of throughput.



Generally, FTD is configured with Firepower Management Centre (FMC), which is a separate appliance. The problem with that is that FMC does not yet support configuration of all features that FTD supports. Some quick examples are EIGRP, PBR, WCCP, VxLAN, and SysOpt.

The good news is that FlexConfig is here to help. Introduced in FTD 6.2, this feature lets you add traditional ASA CLI commands to configure features that FMC does not yet know about. The catch is that FTD still needs to support the features. If you try to use FlexConfig to configure RA VPN, for example, the config will fail.

This is considered to be a supported workaround. FlexConfig configures individual appliances, and the config doesn't show in FMC.
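
As a hedged example, a FlexConfig object for WCCP (one of the features FMC doesn't expose) would contain nothing more than the familiar ASA commands. The service group number and redirect-list name here are invented for illustration:


wccp 90 redirect-list WCCP-REDIRECT
wccp interface inside 90 redirect in


FMC pushes these lines to the device as-is, which is why the underlying feature still has to exist in FTD.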


FastPath and Pre-Filter Policies

Interesting features I didn't know about are FastPath and Pre-Filter Policies.

Generally, a packet will enter into an appliance, and firewall checks will take place. This includes ACL's, NAT lookups, and so on. After this, the IPS engine is consulted, and then additional functions such as routing and NAT are applied.

Sometimes, sending all packets through SNORT is not ideal. Latency sensitive applications would be one of these cases. 

To work with this requirement, Pre-Filter policies can be created to bypass the SNORT engine. When packets pass through without going through the IPS engine, they are using the Fast Path. Fast Path bypasses L7 inspections, SGT (Security Group Tagging), security zones, and SNORT.

This provides a good option for migration from traditional ASA. As a starting point, why not migrate ACLs to Pre-Filter rules? Additional L7, AVC, and IPS rules can be added over time.


SinkHole Servers

DNS policies can be used to identify trusted and untrusted domains. Typically, the action can be to allow or deny traffic based on the destination domain. An additional option is to divert traffic to a Sinkhole server.

The sinkhole server is a trusted server that emulates the blacklisted domain. The traffic between the compromised client and the sinkhole server can then be analysed. For example, the client may try to download malware from the server.

Wondering what to do with that information once you've got it? Me too. More research to be done...


The Life of a Packet

How does a packet live its life inside FTD? Something like this:

FTD Flow  


Firewall Deployment Modes

There are three ways FTD can be deployed as a firewall (this doesn't include non-firewall deployments, such as sensor-only):

  • Routed - FTD acts as a router, and may participate in dynamic routing
  • Transparent - FTD acts as an L2 bridge, and may appear to be transparent to other devices
  • IRB - Integrated Routing and Bridging. Or, as I like to call it, 'magic unicorn mode'


Yes, IRB is the interesting one here. The other two modes have been around for all eternity. Nothing new there. IRB however, is quite new (released in FTD 6.2).

This allows for a mix of routed and transparent mode. This is definitely something I'll have to look deeper into. The current caveat is that only static routing is supported. Dynamic routing is on the way, but I don't know when.


Coming Soon

Finally, two new features that are nearly here...

Remote Access VPN and Cisco Threat Intelligence Director (CITD) are both going to be in 6.2.1, which should be out by the end of April.



Intro to ACI

ACI, as well as SDN as a whole, is something I know little about. Fortunately I was able to attend an introduction on ACI. I have to say, it looks pretty good.

Here's the highlights...



Water Bottle
The ACI architecture has three components. These are a Spine/Leaf topology, VxLAN switching/routing, and an APIC controller to manage policies.

Just remember this equation: ACI = Spine/Leaf + VxLAN + APIC

The APIC is a hardware appliance, and is connected to leaf switches. APICs are deployed as highly available clusters.

The leaf switches maintain a table of connected IP's. The spine switches maintain a DB, called the Global Station Table, of IP's across the entire fabric.



The APIC controls the fabric with policies. Switches no longer need to be configured individually. Policy configuration can be done with a GUI (which looks like Visio), or a CLI. The CLI does not configure individual switches, but the fabric as a whole.

The APIC is essentially a hierarchical database, containing areas such as Infrastructure, switches, and ports. Of course, it's much larger and complex than that. Think of it as something like Active Directory.

When an APIC is turned on, it first enables LLDP, and discovers the connected leaf switch. The leaf switch also turns on LLDP to discover the connected spine. The spine turns on LLDP, and the process continues until the fabric has been discovered.


Policy Model

Policies are made up of several nested objects, which are a bit like 'containers':

  • Tenant - Top level 'container'. This is a bit like a VDC on a switch
  • VRF - As always, a virtualised routing table
  • Bridge Domain - Logically like a subnet
  • End Point Group (EPG) - Logically like a VLAN



There are also three 'connectivity' components:

  • Contracts - Similar to ACL's. Allows EPG's to communicate (zero trust by default)
  • L2 External EPG - L2 uplink to a switch (like a trunk)
  • L3 External EPG - L3 uplink to a switch (like a routed port)


App Store - Yes Really, an App Store

You read that right. Cisco have an App Store. You can download apps like ServiceNow to the APIC.

What a time to be alive.




It was an intensive first day, especially the 4-hour Firepower session. Plenty more to come over the next three days.

I'm particularly looking forward to coding 101, VxLAN Labs, and ASA High Availability. A few prizes wouldn't go astray.


Were you at Cisco Live? What were the highlights for you? Please drop a comment below.





Firepower Threat Defence 6.2

Tuesday January 24, 2017

Firewall Banner  

Firepower Threat Defence 6.2

Today, FTD 6.2 was released. In this blog post, I'd like to summarise the new and improved features in this version. I may get into deployments and upgrades in a future post if there's interest.



Migration Tool

The migration tool has finally been released! This is probably the most exciting feature in this release.

This is used for migrating from ASA with Firepower Services to FTD. Previously, a migration required recreating all the ASA rules (ACLs, NATs, objects) from scratch. A bit of a killer in my opinion.

Now, the migration tool will automate this process. It will allow up to 600,000 elements to be migrated, which should be enough for most deployments. According to the release notes, this requires the use of the virtual FMC on VMWare or KVM. Not really sure why this is not supported on the physical FMC.

Take note, this is for migrating rules and objects. This is not for upgrading / migrating the software.



Clustering

Clustering is a feature I've wanted for some time. Well, now it's here! But only on the Firepower appliances...

So, a bit of a highlight and a sad note at the same time. Great news for anyone with a 4100 or 9300, not so great for anyone with 5500's.



Indications of Compromise

IOC's have been upgraded to take users into account. Now IOC's can be used to correlate events with hosts and users.



New Features



New management features include:

  • REST API - Can be used to configure and create interfaces. A good option for ACI
  • FlexConfig - Deploy ASA templates. Enables additional new sub features such as inspections
  • PKI for FMC - Associate PKI certificates with devices in FMC
  • ThreatGrid Integration
  • PKI with Site-to-Site VPN - Use certificates with VPNs. Previously this required preshared keys




FTD virtual edition can now be used in the Azure cloud.

On the 5506 models, IRB (Integrated Routing and Bridging) has been added. This enables multiple physical interfaces to be in the same VLAN. Essentially, this allows a 5506 ASA to be in routed mode and still have a bridge configured. In short, this allows Layer-2 switching between interfaces.




The old ASA Packet Tracer is back! This is a really useful tool for simulating different types of traffic through the ASA, to see if it's allowed or not. I have found this immensely useful on the ASA in the past, and I'm glad to see that it's now in FTD as well.

Bulk URL lookup is now supported. This is for looking up URLs to get reputation and other information. Previously this was a manual task, but now up to 250 URLs can be queried at once.


Updated Features

There have been a few policy improvements. Previously, if there were certain failures, the SNORT processes would be restarted to handle the fault. This is not always ideal, so now there is a policy option to favour either Security or Continuity.

Additionally, the following features have been improved:

  • ISE and Security Group Tags (SGT)
  • Latency based performance settings
  • Certificates that don't have private keys can be imported



Digging Deeper

This blog post has just been a quick sampler to the changes in FTD 6.2.

If you're looking at deploying this version, investigate the known issues, and the full release notes.







How Twinax Cables Ruined My Day

Thursday January 19, 2017

The day actually started out pretty well. The weather was nice, I'd had my morning coffees, and I was expecting some new firewalls to arrive. I was especially excited about this point.

You see, I had spent the last few weeks working on a new network design. I had the hardware picked out. The topology was looking good. I even had my cable maps drawn up. Everything was going well.

I'm sitting at my desk half-heartedly working on a document. When would the ASA's arrive? Would I have to wait another day? I'm trying to convince myself that it's not that big a deal when I hear the beep-beep-beep of a truck reversing outside my window. Could this be it? I jumped up and looked out the window. Garbage truck... Well, I guess we need them too.

Hours go by as the clock moves from 9:30 to 10 am. Then I hear another truck. I look out the window... definitely a courier this time... And several large boxes with Cisco printed on the side are unloaded. I'm trying to play it cool. After all, I should really be excited about a few new firewalls turning up should I? I'm a grown man after all. Shouldn't there be more exciting things in my life? I'll work that one out another time. Right now, I have a few new ASA's to unbox!


Seven boxes. Four ASA 5555's and three additional rail kits. I'm still not sure why there are four ASA's and three additional rail kits. Somehow they ended up on the purchase order. Did the supplier slip them in under the radar? Wait. Stay on target. That's a problem for another time.

A colleague of mine, whom I shall call Tim Jim, is there to help with the unboxing. Together, we get the ASA's out and install them in some pre-provisioning racks. They were racked right above some shiny new Nexus switches, which we had configured a few days earlier. Jim slides in behind the rack to begin cabling.

The lighting is poor behind the pre-prod racks, and it's a tight squeeze to get back there. It can be funny to watch people trying to navigate the narrow dark claustrophobic corridor. Trying not to trip over the cables the last person has left hanging. Hoping that the power cables are not live.

I find it even funnier that Jim's about the size of the Statue of Liberty. And that all the equipment is racked about 3RU from the floor. Not only does he have to get behind the rack, he has to crawl.



Anyway, I'm getting off topic. The ASA's have an additional interface card pre-installed. It comes with six GE SFP ports. We wanted this interface card, as the Nexus has all SFP+ ports, no RJ45. We used Twinax cables for data and RJ45 for management. There's a 2248 FEX for all the management ports.

It's all cabled up. Console ports connected to a breakout box. We're ready to go. I log onto the switches and the ASA's. Then the power cuts out. It's only a fraction of a second, but it's enough for the switches to reboot. As it turns out, one of the UPS's in the pre-prod room has bad batteries. It filters power surges, but can't keep anything turned on for more than 0.1 seconds.

The timing's bad, but that's ok. Look on the bright side; I've got my new toys to play with.



OK, let's bring a few interfaces up. Line protocol down, huh? Oh, that's right. The Nexus has 10G ports, and the ASA has 1G. Let's set the speed to 1000 on the switch ports...

ERROR: Speed is 1G, but the transceiver doesn't support this speed

Hold on, what? Sure, the Nexus has 10G ports, but I should be able to set them to 1G. Right? Right?


Oh, no... I've messed up...

Chest tightens. Breathing becomes laboured. Ego deflates.



See where I went wrong? The Twinax cables may be electrical, but they're really just SFP's. Well, they're 10G, so they're technically SFP+.

Do you see it now? SFP's run a particular speed. A 1G SFP runs at 1Gig. A 10G SFP runs at 10Gig. A 40G SFP... Well, you get the idea. What this means is that when an SFP is inserted into a switch port, the switch port will only run at the speed that the SFP supports. That means that the 10G Twinax cable simply can't be slowed down to 1G.

I don't have time for this! We're on a very tight schedule. The customer needs their network upgraded. This is going to set me back. No, no, no, no, this is going to set the whole team back. The server guys can't even start until I'm done. I need to breathe into a paper bag.



While I'm feeling ice daggers in my heart, Jim keeps a cool head. He's good at this mostly because he has no hair for insulation. He's also quite good at crisis management, and before long, we're digging up 1G Multimode SX SFPs from the spare parts bin. Maybe it's not so bad after all? Just replace a few cables and SFP's, and we're ok...

Hang on. We're not done yet. When connecting the ASA to the Nexus, the Nexus port would come up, but the ASA's would not. It would not budge from Line Protocol Down. On top of that, every time the switch port is manually shut/no shut, the ASA would pop up an error:

Reached an autonegotiation error limit of 10 for B/D/F=20/0/1

I'm riding the rollercoaster of emotion. Time to call TAC. A nice guy from the Sydney TAC was able to help me out. He suggested running the no negotiate auto command on the Nexus switch port. What do you know? It works! We now have a way to move forward.
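
The working switch port config ended up looking something like this (the interface number is illustrative):


Switch(config)# interface Ethernet1/1
Switch(config-if)# speed 1000
Switch(config-if)# no negotiate auto
Switch(config-if)# no shutdown


With a 1G SFP inserted, the speed command is accepted, and disabling autonegotiation lets the ASA's line protocol come up.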


It's such a simple thing. Twinax is a cable with SFP's built in. It can't have its speed changed. I've been networking for years, and somehow I tripped over something so simple.

It's been 30 hours since this happened, and I'm finally starting to breathe again.


Had a similar experience? Please log in with LinkedIn or Facebook, and post a comment below.

Fibre SC

40G in the DC (Or, How I Learned to Stop Worrying and Love BiDi)

Thursday December 15, 2016

40G in the DC (Or, How I Learned to Stop Worrying and Love BiDi)

Fibre SC

The Need for Speed

Have you ever felt dismay that your 10G network is reaching capacity? Or perhaps you've felt the dread of uncertainty that your design does not provide enough bandwidth?

You are not alone.


For a while now, it's been common to run 10G on host ports in the access layer (or leaf, depending on your topology). Higher load on the access layer means more throughput in the distribution layer. This is one reason why there's an increasing trend to move to 40G in the data centre.

I felt this in a case where I needed to support an extra rack of servers, running a dense virtual machine load. This rack was not next to the rest of the equipment, so I decided to put a pair of 10G Nexus FEX at the top of the rack. 

This is in a data centre, so each FEX uplink requires a cross connect, which incurs a monthly charge. Wanting to limit the number of cross connects, 40G seemed like the best solution. At least until I read a little more on how 40G fibre works.



10G vs 40G

MPO 12
10G and 40G are a bit different. 10G SR transceivers use a pair of multi-mode fibre (MMF) cores, and have LC connectors. It is likely that your 10G data centre network uses short MMF runs between racks.

Traditional 40G SR transceivers are a different story. 40G SR uses an MMF ribbon containing 8 to 12 fibre cores, terminated with an MPO-12 connector. The difference in cabling requirements is what makes it so hard to move from 10G to 40G. The MMF ribbon uses four pairs of cores, each one 10G capable. As shown in the image below, in practice these ribbons contain 12 cores, four of which are wasted.



MPO 12 Diagram  


BiDi to the Rescue

If you want 40G, but don't want to replace your fibre, you're in luck. BiDi (Bi-Directional) optics can save your fibre runs. BiDi transceivers use 2 core multi-mode fibre with LC connectors. This makes it suitable for reusing your existing fibre plant. 

BiDi uses two 20G bi-directional channels across two fibre cores. Each core carries two 20G wavelengths (one for send and one for receive) at the same time. This results in 40G full duplex transmission across the fibre pair.

BiDi supports a run of up to 100 metres (328 feet) on OM3 multi-mode, or 150 metres (492 feet) on OM4.






BiDi offers a relatively cheap and easy solution to run 40G across racks that are close together. With a limit of 150 metres, it's not made for runs between data halls. 

This solution uses multi-mode fibre which is simple if you run your own fibre between racks. If you need data centre provided cross connects, make sure they support MMF. Some data centres only support SMF cross connects.

You may need 40G over a longer run, or perhaps MMF is not an option. In this case, an alternative is to use a non-BiDi 40G QSFP for single-mode fibre. Have a look at the QSFP-40G-LR4 or QSFP-40G-LR4-S.




Like the article?

Below is my Twitter. You know what to do.



Images from Cisco.com



What is a FOBOT?

Tuesday December 13, 2016



Today, a data centre account manager asked which rack we wanted our FOBOT installed into. I'm just a humble network guy, and in my ignorance, I had to admit that I didn't know what he was talking about. I've done some Googling, and dear reader, I hope to save you some time some day.

A FOBOT, shown to the right, is a Fibre Optic Break Out Tray. I took a look at the picture and thought "Sure, but why couldn't you just say patch panel?". They may look similar, but as it turns out, there are a few differences.

A FOBOT is a bit more advanced than a simple patch panel. A patch panel usually offers 12 or more ports where cables are terminated. When a data centre customer requires a cross connect, the data centre will terminate it in the patch panel. The patch panel may also have mixed cable types (for example, some fibre and some CAT6 cabling). The FOBOT, on the other hand, is pre-provisioned with 12-core fibre when it's installed in the top of the rack. When cross connects are ordered, the data centre staff patch them through as required.

The FOBOT is also a bit more organised, and more rugged. The cables aren't just patched through to the back of the panel, they're secured in the housing.

Click the image to the right for more information.

ND Icon


Does Your Network Need Some Firepower?

Sunday December 4, 2016


Gun Banner  

It Started at the End

AKA, It's Good to Have All the Information Up Front

This story begins a few weeks ago, at the end of a network design. It's odd for a story to start at the end, but in reality, that's where things got interesting.

The team and I were designing a data centre network to support a hyper-converged compute platform. In the compute stack is a custom-written suite of applications.

In the core, we used Nexus 9000's. For firewalling the edge and DMZ, we used ASA 5500's. And underneath the network lives a truck load of compute.

Nice and simple right? Of course not. It wouldn't be interesting if it was simple, and the developers made sure it wasn't.

At the 11th hour, we got a call from one of the developers, with concerns about throughput. He said there was an SQL server farm in the DMZ, which would generate at least 1Gbps of traffic to the inside network. The poor old 5525-X just can't handle that kind of throughput, especially with Firepower.

So it's time to rethink the design. We considered two basic ways forward from here. Use larger firewalls that can handle the throughput, or more firewalls to share the load. While looking into the larger firewalls option, I decided to have a look at the Firepower 4100 series.

The Firepower 4100 appliances are new on the scene, so I had to put in a bit of research. I aim to share the results of that research with you, as a high-level overview of these firewalls. As it turns out, they're a bit different from the traditional ASA's.


Enter the Firepower Appliances

When Cisco designs firewalls, they tend to target three different families:

  • SMB & Distributed Enterprise
  • Commercial and Enterprise
  • Data Centre, High Performance Computing, Service Provider

Firepower 9300 claims a 600% performance increase on the ASA 5585

Cisco targets the Firepower appliances at the high-performance / data centre market. Mid-2015 saw the release of the Firepower 9300 as a high throughput firewall/IPS. In fact, it has a 600% performance increase on the 5585. Unfortunately, this is out of the price range for many of us.

Luckily, in early 2016, Cisco released the Firepower 4100 series. This provides us with another option if we need more than the 5585 can provide. If you can't justify the ludicrously expensive 9300, you can use the obscenely expensive 4100 series. 

Been working with ASA's for a while? Good news everyone! The Firepower appliances can run the traditional 'ASA with Firepower Services' image.

Do you like the idea of converging firewall and IPS functions? You're in luck. The Firepower appliances can also run the Firepower Threat Defence (FTD) image.

If you haven't heard about FTD yet, it is the new unified code image for ASA's and Firepower appliances. It enables the appliance to run as both a firewall and Firepower IPS at the same time. This is a little different to ASA with Firepower Services, where the two run as separate software modules.

You can use Firepower Management Centre (FMC) to manage FTD for IPS and firewall rules. In contrast, ASA with Firepower Services can only have IPS services managed from FMC. 

It appears that FTD will be the future of Cisco firewalls.

Police Security  



There are two main types of software on the Firepower appliance: Platform Bundles and Applications.

FXOS, or Firepower eXtensible Operating System, is the Firepower operating system. For this reason, the Firepower appliance is also known as the FXOS chassis. FXOS consists of several images for managing the supervisor and security engine. Together, these are called the platform bundle.

The FXOS Chassis can run FTD or ASA images

The FXOS security engine can run different application images. These include FTD, ASA, or Radware's DDoS services. At a high level, this is like running a virtual machine on a hypervisor.

Application images can be stored offline on the supervisor. These files are CSP (Cisco Secure Package) files. This may include various versions of the same image.

An administrator can then deploy the CSP's to the security engine. The process of deploying an image is known as logical device creation. These offline images are also used to update a logical device.

Remember to check that the FXOS version and the image version are compatible. Additionally, be aware that only one FTD or ASA image is deployed to the security engine at one time.




FXOS does need a bit of special configuration. This does seem to be a 'set and forget' config on the 4100 series though.

To complete the initial configuration, use the CLI over the console port. On first boot, there is a text based wizard to run through. The wizard sets the management IP, host name, and configures an NTP server. NTP is mandatory, by the way.

The initial wizard does have a slight twist though. It allows you to restore an FXOS backup, rather than performing initial configuration. This becomes useful when a failed device gets replaced.
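From memory, the first-boot dialogue looks something like the below. The prompts are paraphrased and abbreviated, and the values are illustrative, so treat this as a sketch rather than a verbatim transcript:

```
Enter the setup mode; setup newly or restore from backup. (setup/restore)? setup
Enter the password for "admin": ********
Confirm the password for "admin": ********
Enter the system name: fp4110-a
Physical switch Mgmt0 IPv4 address: 192.0.2.10
Physical switch Mgmt0 IPv4 netmask: 255.255.255.0
IPv4 address of the default gateway: 192.0.2.1
```

Note the restore option in the very first prompt; that's the twist that makes replacing a failed chassis painless.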

After the initial setup wizard is complete, it's time for more configuration. You can continue using the CLI, or connect to the web interface.

The physical interfaces in the FXOS chassis are 'owned' by FXOS, not the security image (such as FTD). If you need etherchannels, configure them in FXOS first. They are then allocated to the security image. To me, this feels a bit like the system context on the traditional ASA.
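As a sketch of what that looks like (written from memory of the FXOS CLI, so treat the object names and interface numbers as assumptions), building a data port-channel in FXOS goes roughly like this:

```
scope eth-uplink
  scope fabric a
    create port-channel 1
      set port-type data
      enable
      create member-port Ethernet1/1
        exit
      create member-port Ethernet1/2
        exit
      exit
    commit-buffer
```

Once committed, the port-channel appears as an interface that can be allocated to the logical device.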

There is no clustering across several FXOS chassis. The 9300 does support intra-chassis clustering, as it can run several security engines. In this case, an ASA image can run on two or more engines, and act as a cluster. The 4100 is different, as it only has one security engine. This means that there is no clustering at the FXOS level.


FTD Image

FTD is the new unified code set. It combines traditional firewalling and the Firepower IPS into a single image. The image is stored on the supervisor until its deployment to the security engine, where it runs on top of FXOS.

There are some differences in FTD, as compared to the traditional ASA software. There is not yet feature parity between the ASA image and the FTD image. Some noticeable missing features are clustering and multi-context support. It is likely that these features will be added in the 6.3 release, in around April 2017.

Another difference is that packets are natively inspected by Firepower. The ASA differs, as traffic is copied to the Firepower module for analysis. The FTD method is far more efficient.

The third major difference that I'd like to raise is management. Firepower Management Centre handles FTD management. The CLI is not used for configuration at all. The CLI is still useful for troubleshooting, and showing interface statistics. 

The FXOS Chassis requires FMC for management

Unlike the ASA 5500, the Firepower appliance cannot be managed with FDM (Firepower Device Manager); it requires FMC for management. If you're not familiar with FDM, think of it as the ASDM replacement for FTD.

FMC is deployed as a physical or virtual appliance. The smallest investment you can make in FMC is a two-device virtual appliance. Each ASA or Firepower appliance consumes an FMC device license.



ASA Image

The traditional ASA image is still a viable option on the Firepower appliance. One reason for doing this includes the full list of features, which aren't yet available on FTD. Another is having current skills available to support the firewall deployment.

I'm going to speculate, but I think that FTD will be the path forward for ASA and Firepower appliances. So, it's important to consider whether it's best to stick with the ASA image, or start fresh with FTD. Currently, moving from ASA to FTD requires reimaging, which erases the config. There is talk of a migration tool coming in the future (maybe in version 6.2).




The 4100 is quite conservative in size, being only 1 unit high. This makes it the smallest of Cisco's data centre firewalls. This is quite impressive when considering how powerful it is. 

It has a power switch on the back; just make sure you gracefully shut the appliance down before flipping it off.




Firepower Interfaces
One of the best features of the Firepower 4100 is the 10Gb SFP+ interfaces. There are eight of them built in, as well as a 1Gbps SFP management port. This is a nice departure from the 1Gbps RJ45 ports on the smaller ASA 5500's.

There are also two single-width expansion slots for extra interfaces, with 1, 10, and 40 Gbps options available. The only thing missing here is a 100Gbps option; those modules are double-width, which is only supported on the Firepower 9300.

There are two options for interfaces: Hardware Bypass or Non-Hardware Bypass. Hardware Bypass supports fail-to-wire. If the appliance fails, traffic can continue to flow through the interface pair.

You might wonder why you would ever want traffic to pass through a firewall without inspection. The answer is that you probably wouldn't. If the appliance is only acting as an IPS sensor, it may be preferable for traffic to continue to flow. Definitely not something you would want for your edge firewall though.



Hardware Acceleration

FP4100 Architecture
The 4100 appliances have a Smart NIC and a Crypto Accelerator. This is some advanced silicon that enables offloading flows to hardware for low latency. The Crypto Accelerator enables cryptographic offload for fast encrypt/decrypt operations and VPN acceleration.

The silicon sits between each CPU and the internal switching fabric, bridging the two. For this reason, the 4110 (single CPU) has only one Smart NIC / Crypto Accelerator. The rest of the models (dual-CPU) have two.


Model Comparison

All 4100 models use DDR4 memory, and support one or two SSD hard disks (at least one SSD disk is mandatory).

Feature     4110                    4120                    4140          4150
Processor   1x 12 Cores             2x 12 Cores             2x 18 Cores   2x 22 Cores
Storage     200GB                   200GB                   400GB         400GB
Memory      64GB                    128GB                   256GB         256GB
PSU's       Single (Dual optional)  Single (Dual optional)  Dual          Dual

Each appliance has two SSD drive bays. The disk in bay 1 is used for Firepower (and is mandatory). The second bay is only for the MSP, or Malware Storage Pack.

The MSP disk is 800GB in size, and stores forensic and file data about malware for later analysis.




In the end, the main concern is usually throughput. The table below compares the various 4100 models.

The table below uses the terms 'firewall', 'AVC' and 'NGIPS'. 'Firewall' is regular ACL-type traffic filtering, based on IP addresses and port numbers.

AVC (Application Visibility and Control) is deep packet inspection. In this case, the firewall looks into the packet to determine the application in use.

NGIPS (Next-Generation IPS) is the throughput of the Firepower inspection engine.


All vendors use 'magic' numbers when representing their throughput. For network appliances, throughput is generally tested using 64 byte UDP packets. This is the best case scenario, and usually won't reflect a real network. Think of the values below like a comparison rate at a bank. Useful to compare models, but it's not what you will see in the real world.
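As a rough illustration of what a quoted line rate implies, here is the packet rate hiding behind a throughput figure at a given frame size. This is just arithmetic, not vendor data; the function name and the 500-byte 'realistic' figure are my own assumptions:

```python
# Back-of-envelope: the packet rate implied by a quoted line rate
# when the test traffic is a fixed frame size (vendors quote 64-byte UDP).
def packets_per_second(line_rate_gbps: float, frame_bytes: int = 64) -> float:
    bits_per_frame = frame_bytes * 8
    return line_rate_gbps * 1e9 / bits_per_frame

# A '10 Gbps' figure at 64-byte frames is about 19.5 million pps;
# at an assumed ~500-byte average frame, it is closer to 2.5 million pps.
print(round(packets_per_second(10)))       # 19531250
print(round(packets_per_second(10, 500)))  # 2500000
```

The point being: the same Gbps number can mean a very different packet-processing load, which is why lab figures rarely match your network.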


                          4110     4120     4140     4150
Firewall                  35 Gbps  60 Gbps  70 Gbps  75 Gbps
Firewall + AVC            12 Gbps  20 Gbps  25 Gbps  30 Gbps
Firewall + AVC + NGIPS    10 Gbps  15 Gbps  20 Gbps  24 Gbps
Max Sessions (AVC)        9 mil    15 mil   25 mil   30 mil
New Conn. per sec (AVC)   68k      120k     160k     200k


Impressive stats... So how does this compare to the ASA 5585's? I have tried to put the 4110 appliance side-by-side with the higher-level 5585's.

While reading the table, keep in the back of your mind that this is a rough comparison. There are several factors that will affect it; for example, whether the ASA or FTD image is used.

For a fair comparison, remember that the 5585 does not support FTD, because it implements Firepower as a separate hardware module.

I recommend checking out the Firepower and ASA data sheets for more detailed information.

Additionally, remember that Firepower license costs take throughput into account. This means that the more throughput an appliance has, whether it's used or not, the higher the cost.


                          4110     5585 SSP-40  5585 SSP-60
FW + AVC                  12 Gbps  10 Gbps      15 Gbps
FW + AVC + NGIPS          10 Gbps  6 Gbps       10 Gbps
Max Sessions (AVC)        9 mil    1.8 mil      4 mil
New Conn. per sec (AVC)   68k      120k         160k


As you can see, the Firepower 4110 appliance fits between the SSP-40 and SSP-60 on the 5585.



The Firepower 4100 appliance looks to be a solid option. Its high-speed interfaces, hardware offloading, and tremendous throughput make it quite attractive. Yet, there are some small drawbacks:

  • They can be quite pricey for smaller deployments
  • There are still some features missing in software today

My personal opinion is that the Firepower appliances will begin to replace the 5585's. I also think that FTD will replace the traditional ASA image in the long run. Only time will tell if this is the case.

So, what's your opinion? Are the Firepower appliances worth the price tag? Do you prefer a different vendor?

Please leave a comment below.


ND Icon









The Curious Case of the ASA's Security Levels

Sunday November 13, 2016

Firewall Banner  

The Misunderstood Firewall

A few weeks ago, I found that I did not understand the ASA as well as I thought I did. Again.

Even after years of working with the ASA, I still seem to underestimate them. Just when I think I know them well, I find myself in a scenario that causes me to reevaluate what I thought I knew.


A colleague and I were asked to create a few new networks for some corporate Business Units. They intended to have their customers' data in these networks, so they needed multitenancy. Fair to say it's a standard request so far.

We go about creating VLANs and VRF's, and putting the appropriate trunking in place. We go to the ASA, create two sub-interfaces, enter IP addressing, give them names, assign security level 80 to them both, and finally apply an ACL.

Sound fine so far? It looked good to us too. No traffic could pass between them, but they could still get to the internet and their DR sites, so it looked like mission accomplished. Well, it would be a pretty dull blog post if it were that simple.


A few days later, it became known to us that we needed to create a management network that could access both of these environments. OK, fine, no worries, we can do that.

So we repeat the same process of allocating VLANs, sub interfaces, names, security level 80, and so on. Only we have a problem. The management network can't access either of the Business Unit networks.


I begin to check ACLs. No problem there... How about NAT? Looks fine. Surely not routing? Nope, it's ok. What is going on?



What We Have Here Is A Failure To Communicate

So, I was under the impression that security levels are only used for out-of-the-box security, and that they are superseded when ACLs are applied to an interface. It's this misunderstanding that led to me scratching my head for an hour, wishing I had a bottle of scotch and two glasses in my drawer, like they do in the movies.

Eventually, out of desperation, and because I had nearly pulled out every hair on my head, I started thinking about security levels. I knew that there was a command to allow traffic between interfaces with the same security level, but ACL's override them anyway, so that shouldn't apply, right?

Well, as I said, I was running out of ideas. So, I went ahead and changed the security level of the management interface to 85, just for kicks.

Hold on a sec... that ping I had running just started working... What just happened?



So, I Did Some Research...

It was clear that I did not understand the ASA as well as I thought I did. Clearly, ACLs don't override security levels. But a quick Google search told me that they do... What's going on? Time to do my own research.

So I pulled out VIRL, created a simple topology with a three-armed ASA, and tested a few different scenarios, which I documented in the ASA Security Levels article.

The basic answer is that ACLs DO override interface security levels. With one exception.

What it comes down to is this: When two interfaces have the same security level, no ACL, even if it's an 'Allow All' ACL, will allow traffic to flow between them without the same-security-traffic global configuration command.
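In practice, that means a single global command. The command itself is standard ASA configuration; the two-interface scenario in the comments is illustrative:

```
! Allow traffic to flow between interfaces that share a security level.
! Without this, two interfaces both at level 80 will not pass traffic
! to each other, no matter how permissive their ACLs are.
same-security-traffic permit inter-interface
```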



The Conclusion of the Matter

In the end, there are two ways we can deal with this situation (assuming we want traffic passing between these two interfaces).

The first option is to use the same-security-traffic command. This will definitely solve the problem, but it is a global command. This causes a potential problem: if there are several interfaces with the same level, traffic will be allowed between all of them. This may not be a suitable scenario for multitenancy.

In my case, I needed to change the security levels on the interfaces that needed to communicate. Then the ACLs would apply, and I wouldn't change anything globally.

One thing to keep in mind though, is that this can be fantastic in a true-multitenancy environment. Every customer or business unit that needs to be isolated can have the same security level applied. Then there will be no leakage between them, even if there's a misconfigured ACL.


For me, the real conclusion of the matter, all things having been considered, is to show a bit of humility and not assume I know the ASA as well as I think.


I hope this is helpful to someone.

ND Icon  




Telco Networks - SDH/SONET and DWDM

Wednesday November 2, 2016

Recently, I have needed to look into options for connecting data centres between states. There are two main technologies that our provider can offer us: SDH and DWDM.

I'm not in the telco space, so I'm vaguely aware that they both exist, but I really don't know much about either. If you're in the campus or data centre networking space, then like me you probably have to get a connection from site-A to site-B over ethernet, and probably know little more of the underlying technology than that.

This blog will have a brief look at the two technologies, and how they can help us get our frames and packets from site-A to site-B.


Optical Networking

Both SDH and DWDM are telco products that use optical fibre. Most of us won't see the underlying technology that makes it work. Your provider will quote you a price, and give you an ethernet hand-off at either site. You're then free to do what you do best; the switching and routing.

Underneath the hood though, the provider has thousands of fibre runs across the country. Each of these fibre strands can carry multiple conversations or bit-streams, often traditional voice as well as digital data.

Both of these technologies are delivered to you in a point-to-point fashion (or site-to-site if you prefer). This is different to services such as frame relay or MPLS VPNs, which operate at a higher layer and have a more cloud-like approach. When you use these links, you are paying for a very low-level circuit, not an IP-based WAN.



SONET (Synchronous Optical Network) and SDH (Synchronous Digital Hierarchy) are essentially the same technology. The main difference is that SONET is deployed in North America, and SDH is deployed in the rest of the world.

SDH goes back to the early 1990's, and was originally used for transmitting multiple phone calls (encoded in PCM) over a single strand of fibre. Because of these origins, it is protocol independent, meaning that it can carry ATM, ethernet, and others.

There are various speeds supported, and offerings often start at relatively low values, such as 10Mbps. Providers may be able to offer fractions of a 'circuit' to a customer, so bandwidths available will vary.



DWDM (Dense Wavelength Division Multiplexing) is a newer technology, and works at an even lower layer than SDH. DWDM is essentially a method of increasing the bandwidth available over existing fibre. This makes it cost-effective, as DWDM can be used instead of laying more fibre strands.

This goal is achieved by using different wavelengths (or 'colours') of light to create multiple 'virtual' fibre strands over a single physical fibre strand. And you thought virtualization was only in the server rack...

Originally, DWDM was meant to provide a cost-effective means of addressing long-distance fibre runs. A 1500km (932mi) run can carry 10G, and longer runs of up to 4500km (2796mi) are also possible. To put this in perspective, Australia is 4042km (2512mi) wide, and the USA is 4343km (2680mi) wide. That means it is possible to have a single DWDM run across either country!

Metro deployments have made good use of DWDM too. Due to the dense multiplexing of wavelengths, it has been possible to provide more bandwidth over existing fibre, which addresses the fibre exhaustion issue in some areas.



Both of these services can provide dedicated and secure bandwidth to the customer. DWDM is the most cost-effective for long-haul networks, but has the downside of being offered at higher speeds only (it's not uncommon for providers to start their offerings at a minimum of 1G).

In the end, for those of us in the enterprise networking space, it comes down to three main questions that your provider has to answer:

  1. What does it cost?
  2. What are the SLA's?
  3. How will it be handed off?


To get some more information on these technologies, have a look at these websites:

Telecom Transmission Made Simple

The Relation and Difference between SONET/SDH and DWDM



ND Icon  




ASA 5500-X Series and Firepower Threat Defence

Friday October 28, 2016

The History

In the old days, Cisco had a strong firewall offering, called the ASA. Unfortunately, they didn't have a strong offering in the IPS market. To address this disparity, Cisco acquired a company called SourceFire in 2013.

SourceFire had been in the IPS industry for a while, and had some great offerings. You may have heard of Snort, an open source IPS that has been around for years. This was originally written by Martin Roesch, the founder of SourceFire.

When Cisco bought SourceFire, they rebranded the IPS suite into a product line called FirePower. FirePower can be deployed on dedicated IPS hardware in the network, or, interestingly, it can be integrated with the ASA firewall.


The Deployment

There are two ways this integration can be done. One is with the FirePower 4100 series and FirePower 9300 series hardware. These are made for data centre deployments with high bandwidth, and can be a bit pricey. However, they are definitely a FirePower device with ASA functionality.

The second option is to run the FirePower code on an ASA firewall. This is the inverse of the previous option (this is running FirePower on an ASA appliance, rather than running an ASA on a FirePower appliance). This is called ASA with FirePower Services, and will work on any 5500-X series (must have the 'X' in the name) that has an SSD hard disk installed.

For many of us, the second option is more feasible due to cost. This is especially true when deploying a Firewall/IPS on the network edge, or in a small campus or SMB.


The Unification

Unfortunately, there is a downside to this deployment. The FirePower code runs as a separate module on the ASA (the 5585-X series runs this module in hardware; the rest of the models run it in software). This means that the ASA and FirePower features are administered separately. It also results in packets being redirected from the ASA to the module and back again.

To solve this issue, Cisco has released a unified software image called FirePower Threat Defence. This is a single image with both firewall and IPS functionality rolled into one.

While the ASA with FirePower Services is still available, supported, and will likely be around for some time yet, this unified option is an attractive one, as it allows the entire ASA appliance to be centrally managed by the FirePower Management Center (formerly FireSight). This is attractive, as even small deployments can use FMC, with a 2-node virtual license.

Even smaller deployments such as an SMB can use the FirePower Device Manager, which is a non-Java replacement for the ASDM. Yes, you heard me right, I said that it doesn't use Java! That's probably the best news you will read all day.


The Fine Print

The current release of FTD is version 6.1 (released in August 2016), and there are a few catches to be aware of. Firstly, there is no support for ASA clustering. This one disappoints me, as I really want to use clustering. There is still HA available, but it is the classic failover model of active/standby.

I have also not been able to find any support for multi-context mode, which severely limits its potential for use in multi-tenanted environments. I for one, hope this is added in a future release.

If you are using the 5585-X series, then this is also not for you. FTD is not supported on the 5585-X. The simple reason is that FirePower is implemented in a completely separate hardware module.

There is some good news though: support for a virtual edition, similar to the ASAv. This is called NGFWv, and it is likely that it will replace the ASAv in future, as it has all the ASA features and more. It currently runs on VMware, AWS, and KVM.

One interesting restriction is that FTD cannot be configured at the CLI... Yes, really. It can't be configured at the CLI. There is a CLI, but it is only used for troubleshooting (show commands and such). All configuration is either in FDM locally on the appliance, or centrally via FMC.


The Conclusion

I am looking forward to using FirePower Threat Defence. I just need to wait until it supports multi-context mode, which apparently is coming (along with clustering) in version 6.3, slated for March/April 2017.

If you would like to know more, I would recommend watching BRKSEC-2050 ASA FirePower NGFW Typical Deployment Scenarios in the Cisco Live On-Demand library.


As always, I hope this article has been of interest to you.


ND Icon  




BGP, OSPF, and Loopback Interfaces

Thursday September 29, 2016

I have been working on a lab topology that uses BGP externally, and OSPF as the IGP. While working on this, I found something interesting: a LAN network could not be advertised into BGP, even though the neighbour relationships were forming correctly.

Below is a simplified topology to explain.


BGP OSPF Loopback  


R1 is an ISP router which runs BGP only. R2 is the customer's edge router, running BGP and OSPF. R3 is an internal router running OSPF only. R3 uses a loopback interface to simulate the /24 network.

OSPF is configured on R3 and R2. R3 advertises the /24 network.

R3 - OSPF

router ospf 10
network area 0
network area 0


R2 - OSPF

router ospf 10
network area 0


BGP is configured on R1 and R2. R2 advertises the /24 network to the ISP.

R1 - BGP

router bgp 20
bgp log-neighbor-changes
neighbor remote-as 10


R2 - BGP

router bgp 10
bgp log-neighbor-changes
network mask
neighbor remote-as 20



Looking at R1, there is a BGP neighbour (R2), but there are no prefixes.

R1 - BGP Summary

R1#sh ip bgp summary
BGP router identifier, local AS number 20
BGP table version is 1, main routing table version 1

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd 4 10 16 16 1 0 0 00:11:32 0


This happens because the route for /24 is not in the routing table on R2, which prevents R2 from advertising it in BGP:

R2 - Routing table

R2#sh ip route
Output omitted is subnetted, 1 subnets
O 110/2 via, 00:16:02, GigabitEthernet0/2 is variably subnetted, 4 subnets, 2 masks
C is directly connected, GigabitEthernet0/1
L is directly connected, GigabitEthernet0/1
C is directly connected, GigabitEthernet0/2
L is directly connected, GigabitEthernet0/2


Notice how the host route is in the routing table, but the /24 is not? It turns out that this happens due to the use of a loopback interface. Loopbacks are intended to be used for specific IP addresses (that is, /32 networks), so when this network is added into OSPF, it is added as a /32, not a /24.

The reason this happens is because of the OSPF network type for loopback interfaces:

R3 - Loopback

R3#sh ip ospf interface lo0
Loopback0 is up, line protocol is up
Internet Address, Area 0, Attached via Network Statement
Process ID 10, Router ID, Network Type LOOPBACK, Cost: 1
Topology-MTID Cost Disabled Shutdown Topology Name
0 1 no no Base
Loopback interface is treated as a stub Host



If we change the network type to something else, such as point-to-point, this behaviour will change:


R3(config)#interface loopback 0
R3(config-if)#ip ospf network point-to-point
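
Once the network type has been changed, re-running the earlier command should report the new type. This is a sketch of the expected output (addressing omitted, as above):

R3 - Loopback

R3#sh ip ospf interface lo0
Loopback0 is up, line protocol is up
  Process ID 10, Network Type POINT_TO_POINT, Cost: 1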


The corrected route can now be seen in the routing table of R2:

R2 - Routing Table

R2#sh ip route
(Output omitted)
     is subnetted, 1 subnets
O       [110/2] via , 00:00:28, GigabitEthernet0/2
     is variably subnetted, 4 subnets, 2 masks
C       is directly connected, GigabitEthernet0/1
L       is directly connected, GigabitEthernet0/1
C       is directly connected, GigabitEthernet0/2
L       is directly connected, GigabitEthernet0/2

And as this is now in the routing table on R2, it can be advertised into BGP:

R1 - BGP Summary

R1#sh ip bgp summary
BGP router identifier, local AS number 20
BGP table version is 2, main routing table version 2
1 network entries using 144 bytes of memory
1 path entries using 80 bytes of memory
1/1 BGP path/bestpath attribute entries using 152 bytes of memory
1 BGP AS-PATH entries using 24 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 400 total bytes of memory
BGP activity 1/0 prefixes, 1/0 paths, scan interval 60 secs

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
         4 10 33      32      2      0   0    00:25:51 1

While this is very interesting in a lab, it's not likely to become an issue in production, as loopback interfaces will generally only be used for specific IP addresses.


Active vs Passive Cables

Thursday September 22, 2016

I've recently been involved in some design work for upgrading some switching in a data centre, specifically in the Nexus Top-of-Rack range (the 9000 series in this case).

While looking into the best options for deploying the vPC peer-link, I was looking at 40Gb QSFP Twinax cables on Cisco's site:

Cisco 40GBASE QSFP Modules Data Sheet


In the Ordering Information, you can see that the twinax cables can be either active or passive. I don't dig into cabling that often, so I had to give myself a quick refresher on what this is, and whether it means anything to me.

A quick explanation is that a passive cable does not draw power from the switch, while an active cable does. This comes down to signal strength; beyond a certain length, the cable needs to boost the signal. In short, cables of 5m or less are passive, while anything longer needs to be active.

So does this mean anything to us? In the case I mentioned earlier, no, it doesn't matter. I need a 0.5m or 1m cable, which comes as passive only. No decisions to be made here.

But what if we were cabling between racks? Depending on the length of the cable run, we may be moving into 'active' territory (active twinax comes in 7m and 10m varieties). The price per metre can get a lot higher when active cables are used (especially with breakout cables), so the decision becomes whether twinax cables are still the best fit, or whether another option, such as fibre, is better (that's fiber in American :) ).


Here's a couple of additional articles to have a look through if you're looking for more information:

Difference Between Passive and Active Twinax Cable Assembly

High speed Copper Interconnects Address Critical HPC Hurdles


Welcome to Network Direction 2.0!

Wednesday August 31, 2016

Have you ever spent hours researching a technical issue, only to forget it all six months later when you really needed it? This has happened to me a few times, so I thought 'how can I improve my memory, so I remember all the details I spent so much time on?', and embarked on many hours of reading about memory improvement techniques.

Shortly afterwards, I forgot most of what I read, and decided what I really needed was to write it all down. I once had a physics teacher in high school who would say 'try to teach someone else what you're learning; it will make you better at learning it yourself'. The only problem I have with this is that no one really wants to listen to me jabber on about ones and zeroes, so I have no one to teach.

The middle-ground of course, is to write it all down online, where I can 'teach' imaginary people all the technical wonders of why the printer just doesn't work sometimes. Just kidding, no one really knows why the printer stopped working.

To this end, in mid-2010, networkdirection.net was born. Well, technically it wasn't just 'born', there were a lot of other things that had to happen first, but I'll spare you the messy details. I'm kind of dreading the awkward day when my son asks me 'Daddy, where do websites come from?'.

I didn't spend a lot of time updating the website, which is a bit sad really, as I kept paying the hosting charges year after year. So, I have decided to give it another go. I have a fresh Tiki Wiki install, a fresh logo that I downloaded for free at publicdomainvectors.org, and some time on the Cisco VIRL kit, that the boss kindly lets me use to do the occasional lab.

I hope that this will be of some use to you, and that you somehow manage to survive my twisted, un-understandable sense of humour.

Please feel free to send me feedback at luke.robertson at networkdirection.net
