Cisco Live Melbourne 2017 – Day 1

Tuesday March 7, 2017


We all want to be better at what we do. You wouldn’t be reading this if you didn’t. In the IT industry, we go to vendor events, where we get to broaden our horizons and network with potential colleagues.

I was one fortunate man in a crowd of many at day 1 of Cisco Live in Melbourne.

I walked into the convention centre this morning to be greeted by statues of Storm Troopers and Kylo Ren. As it turns out, I was at the wrong end of the convention centre, where a toy expo was being held. Still pretty cool.

Today’s topics for me included a deep dive into Firepower, and a gentle introduction to ACI. For my own review, and just to share, I have outlined some of the highlights below. Perhaps it’s not new to you, but hopefully it will leave you with an interesting thought or two worth digging deeper into.


Firepower Deep Dive

The Firepower deep dive focused on the Firepower Threat Defence (FTD) software. If you’re not familiar with it, FTD is a newer code set that unifies the Firepower IPS and ASA firewall functions in a single image. ASA with Firepower Services, on the other hand, runs Firepower as a separate software module.

Firepower 2100

The Firepower appliances look fantastic. The problem is that they’re pricey once you factor in the Firepower TAMC subscriptions. A few months ago we looked at getting a few 4110s to replace some smaller ASAs. Unfortunately, this was blocked higher up the company hierarchy due to cost.

Fortunately, there is now another model, called the Firepower 2100 series. Like the 4100 series, this comes in four models: the 2110, 2120, 2130, and 2140.

They employ a mixture of RJ45 and SFP ports; the exact port configuration varies by model. I’ve been looking forward to 10G ports in a firewall, mostly because I want to use Twinax to connect them to my Nexus switches (first-world problems, right?). The good news for people like me is that 10G is available on the 2130 and 2140.

Depending on model, the 2100 series claims from 2 to 8.5 Gbps of throughput.


FlexConfig

Generally, FTD is configured with Firepower Management Centre (FMC), which is a separate appliance. The problem is that FMC does not yet support configuring every feature that FTD supports. Some quick examples are EIGRP, PBR, WCCP, VxLAN, and SysOpt.

The good news is that FlexConfig is here to help. Introduced in FTD 6.2, this feature lets you add traditional ASA CLI commands to configure features that FMC does not yet know about. The catch is that FTD still needs to support the features. If you try to use FlexConfig to configure RA VPN, for example, the config will fail.

This is considered to be a supported workaround. FlexConfig configures individual appliances, and the config doesn’t show in FMC.
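As an illustration, a FlexConfig object simply carries plain ASA-style CLI. Something like the following could light up EIGRP, which FMC can’t configure natively yet (the AS number and network here are made up, not from the session):

```
router eigrp 100
 network 10.1.0.0 255.255.0.0
```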

FastPath and Pre-Filter Policies

Two interesting features I didn’t know about are Fast Path and Pre-Filter policies.

Generally, a packet enters an appliance, and firewall checks take place. These include ACLs, NAT lookups, and so on. After this, the IPS engine is consulted, and then additional functions such as routing and NAT are applied.

Sometimes, sending all packets through SNORT is not ideal; latency-sensitive applications are one such case.

To meet this requirement, Pre-Filter policies can be created to bypass the SNORT engine. Packets that pass through without hitting the IPS engine are using the Fast Path. The Fast Path bypasses L7 inspection, SGT (Security Group Tagging), security zones, and SNORT.

This provides a good option for migration from traditional ASA. As a starting point, why not migrate ACLs to Pre-Filter rules? Additional L7, AVC, and IPS rules can be added over time.
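To make the ordering concrete, here’s a toy model (my illustration, not Cisco code) of the idea: pre-filter rules are evaluated before the Snort engine, and a fastpath match skips inspection entirely. The SIP rule is hypothetical.

```python
# Toy model of pre-filter evaluation: rules are checked before Snort,
# and a 'fastpath' match bypasses the IPS engine entirely.

def prefilter_action(flow, rules):
    """Return 'fastpath' or 'analyze' for a flow like
    {'proto': 'udp', 'dst_port': 5060}."""
    for rule in rules:
        if flow['proto'] == rule['proto'] and flow['dst_port'] in rule['ports']:
            return rule['action']
    return 'analyze'  # no match: the flow goes on to the Snort engine

# Hypothetical rule keeping latency-sensitive SIP traffic out of Snort
rules = [{'proto': 'udp', 'ports': {5060, 5061}, 'action': 'fastpath'}]

print(prefilter_action({'proto': 'udp', 'dst_port': 5060}, rules))  # fastpath
print(prefilter_action({'proto': 'tcp', 'dst_port': 443}, rules))   # analyze
```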

SinkHole Servers

DNS policies can be used to identify trusted and untrusted domains. Typically, the action can be to allow or deny traffic based on the destination domain. An additional option is to divert traffic to a Sinkhole server.

The sinkhole server is a trusted server that emulates the blacklisted domain. The traffic between the compromised client and the sinkhole server can then be analysed; for example, the client may try to download malware from the server.

Wondering what to do with that information once you’ve got it? Me too. More research to be done…
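The diversion itself is easy to picture. Here’s a toy sketch (not FTD code, and all names and addresses are made up): lookups for blacklisted domains are answered with the sinkhole’s address, so the compromised client connects somewhere we can watch it.

```python
# Toy illustration of DNS sinkholing: answer blacklisted lookups with
# the sinkhole's address instead of the real one.

SINKHOLE_IP = '192.0.2.10'  # made-up documentation-range address

def resolve(domain, zone, blacklist):
    if domain in blacklist:
        return SINKHOLE_IP       # divert the client to the sinkhole
    return zone.get(domain)      # normal resolution

zone = {'example.com': '93.184.216.34'}
blacklist = {'bad.example.net'}

print(resolve('bad.example.net', zone, blacklist))  # 192.0.2.10
print(resolve('example.com', zone, blacklist))      # 93.184.216.34
```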

The Life of a Packet

How does a packet live its life inside FTD? Something like this:
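The session covered this with a diagram; as a stand-in, here is the order of stages as I noted them earlier (a sketch from my notes, not an official reference):

```python
# Stages a packet passes through inside FTD, per my session notes:
# pre-filter first (with a possible Fast Path exit), then the firewall
# checks, then Snort, then routing and NAT on the way out.
PIPELINE = [
    'ingress interface',
    'pre-filter policy (possible Fast Path exit)',
    'firewall checks (ACLs, NAT lookups)',
    'Snort engine (IPS, L7 inspection)',
    'routing and NAT',
    'egress interface',
]

for stage in PIPELINE:
    print(stage)
```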

Firewall Deployment Modes

There are three ways FTD can be deployed as a firewall (this doesn’t include non-firewall deployments, such as sensor-only):

  • Routed – FTD acts as a router, and may participate in dynamic routing
  • Transparent – FTD acts as an L2 bridge, appearing transparent to other devices
  • IRB – Integrated Routing and Bridging. Or, as I like to call it, ‘magic unicorn mode’


Yes, IRB is the interesting one here. The other two modes have been around for all eternity. Nothing new there. IRB, however, is quite new (released in FTD 6.2).

This allows for a mix of routed and transparent mode. This is definitely something I’ll have to look deeper into. The current caveat is that only static routing is supported. Dynamic routing is on the way, but I don’t know when.

Coming Soon

Finally, two new features that are nearly here…

Remote Access VPN and Cisco Threat Intelligence Director (CTID) are both going to be in 6.2.1, which should be out by the end of April.

Intro to ACI

ACI, as well as SDN as a whole, is something I know little about. Fortunately I was able to attend an introduction on ACI. I have to say, it looks pretty good.

Here are the highlights…


The ACI architecture has three components. These are a Spine/Leaf topology, VxLAN switching/routing, and an APIC controller to manage policies.

Just remember this equation: ACI = Spine/Leaf + VxLAN + APIC

The APIC is a hardware appliance, and is connected to leaf switches. APICs are deployed as highly available clusters.

The leaf switches maintain a table of locally connected IPs. The spine switches maintain a database, called the Global Station Table, of IPs across the entire fabric.
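The two tables give you a two-level lookup. Here’s a rough sketch of my mental model (not NX-OS code, and all names and addresses are made up): a leaf answers for its locally attached endpoints, and misses fall back to the spines’ Global Station Table.

```python
# Two-level endpoint lookup: leaf-local table first, then the spines'
# fabric-wide Global Station Table.

global_station_table = {          # spine view: endpoint IP -> owning leaf
    '10.0.1.10': 'leaf1',
    '10.0.2.20': 'leaf2',
}

leaf_local = {'leaf1': {'10.0.1.10'}}   # endpoints learned locally on leaf1

def locate(leaf, dst_ip):
    if dst_ip in leaf_local.get(leaf, set()):
        return leaf                              # known locally on this leaf
    return global_station_table.get(dst_ip)      # ask the spine

print(locate('leaf1', '10.0.1.10'))  # leaf1
print(locate('leaf1', '10.0.2.20'))  # leaf2
```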


The APIC controls the fabric with policies. Switches no longer need to be configured individually. Policy configuration can be done with a GUI (which looks like Visio), or a CLI. The CLI does not configure individual switches, but the fabric as a whole.

The APIC is essentially a hierarchical database, containing areas such as infrastructure, switches, and ports. Of course, it’s much larger and more complex than that. Think of it as something like Active Directory.

When an APIC is turned on, it first enables LLDP, and discovers the connected leaf switch. The leaf switch also turns on LLDP to discover the connected spine. The spine turns on LLDP, and the process continues until the fabric has been discovered.

Policy Model

Policies are made up of several nested objects, which are a bit like ‘containers’:

  • Tenant – Top level ‘container’. This is a bit like a VDC on a switch
  • VRF – As always, a virtualised routing table
  • Bridge Domain – Logically like a subnet
  • End Point Group (EPG) – Logically like a VLAN



There are also three ‘connectivity’ components:

  • Contracts – Similar to ACLs. Allows EPGs to communicate (zero trust by default)
  • L2 External EPG – L2 uplink to a switch (like a trunk)
  • L3 External EPG – L3 uplink to a switch (like a routed port)

App Store – Yes Really, an App Store

You read that right. Cisco have an App Store. You can download apps like ServiceNow to the APIC.

What a time to be alive.


It was an intensive first day, especially the 4-hour Firepower session. Plenty more to come over the next three days.

I’m particularly looking forward to Coding 101, the VxLAN labs, and ASA High Availability. A few prizes wouldn’t go astray.


Were you at Cisco Live? What were the highlights for you? Please drop a comment below.
