Validating Cisco's Threat-Centric Security Solutions
Orchestrating Security in SDN
Test Case 3a: Application Centric Infrastructure (ACI) Application Policy Infrastructure Controller (APIC) with ASA firewalls.
SUMMARY: We reviewed the provisioning process of ASA appliances in a multi-tenant environment within the ACI fabric, using both the graphical management interface and scripting.
Cisco explained that Application Centric Infrastructure (ACI) is a new offering by Cisco, based on SDN and next-generation switching fabric and controlled by the Application Policy Infrastructure Controller (APIC). According to Cisco, the ACI architecture intends to provide a simple, flexible, scalable and resilient platform for data centers.
In this test case, Cisco aimed to demonstrate the orchestration process of physical and virtual security appliances on the ACI platform -- ASA firewalls and the Firepower NGIPS and malware detection platform.
As our test bed, we used a setup available for security engineer training at the Cisco labs. The basis of this setup is an ACI fabric consisting of one Nexus 9336PQ spine switch, two Nexus 9396PX leaf switches and two UCS 220 M4L compute nodes. APIC is the software component responsible for the provisioning and control of the data plane within ACI, including the provisioning of the security functions.
Attached to the ACI fabric, the test bed contains 4x ASA 5525 and 2x Firepower 7010 appliances. The four ASA devices were used to demonstrate their setup as two resilient clusters using different modes of operation -- a load-sharing cluster and an active-standby cluster -- as described in the following steps.
Cisco demonstrated the multi-context capability of the ASA firewalls within the ACI fabric, allowing a single ASA appliance to maintain many independent contexts for different clients and for different locations in their service chain. The provisioning process of the security services was observed in an example scenario using the APIC to orchestrate security functions into the service chain.
The security concept of the ACI defines separate contexts for each tenant, and multiple security zones, so-called EPGs (End Point Groups) -- network areas containing network elements with specific function and security status. The administrator of the ACI can flexibly define what set of configuration abilities can be granted to each tenant -- this way, tenants may administrate the security policies within their context on their own, or delegate the administration to the ACI provider.
The communication between the EPGs is established through so-called "Contracts" -- a service chain connection that also has a security policy associated with it. APIC manipulates the service graph to redirect the traffic through the security solutions. In our case, we tested a physical ASA appliance; however, the virtualized ASAv solution is supported in exactly the same way, as are other security solutions from Cisco or other vendors.
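To illustrate how such a Contract is expressed through the APIC REST API, the following Python sketch builds the JSON payload for a contract managed object (the `vzBrCP` class) with one subject (`vzSubj`). The contract and subject names below are illustrative, not the ones used in the test bed.

```python
# Minimal sketch: the JSON body for creating an ACI "Contract"
# (managed-object class vzBrCP) with a single subject (vzSubj).
# The names "web-to-app" and "http" are illustrative examples.

def build_contract_payload(contract_name, subject_name):
    """Return the JSON body for creating a contract under a tenant."""
    return {
        "vzBrCP": {
            "attributes": {"name": contract_name},
            "children": [
                {"vzSubj": {"attributes": {"name": subject_name}}}
            ],
        }
    }

payload = build_contract_payload("web-to-app", "http")
```

Such a payload would be POSTed to the APIC at a path of the form `/api/mo/uni/tn-<tenant>.json`; APIC then renders the contract into the service graph.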
In the scenario used for this test case, we had several such areas representing different functions in a typical web application accessible from the Internet. Step by step, we provisioned security services in the service chain, using different security policies and resiliency settings, as described in the steps below.
1. Review of the ACI Tenant Structure
Prior to our test, the test bed already contained a set of provisioned tenants ('pod1' through 'pod20') used for the training purposes, as well as several Linux VMs representing the network functions (web server, application server and database).
As the first step, we reviewed the existing structure of the service chain. The example service architecture for a single tenant contains four security areas ('EPGs') -- Outside, Webserver, Application and Database, as presented in the diagram below. The goal of the provisioning steps was to insert and configure the security functions (ASA clusters) in the service chain between Outside and Webserver areas (load-sharing ASA cluster) and between Application and Database areas (active-standby ASA cluster).
At the beginning, the service already had provisioned contracts web-to-app (interconnecting Webserver and Application EPGs) and app-to-db (between Application and Database EPGs).
2. Review of the ASA cluster
The ASA cluster to be added to our setup is an external ASA appliance (the same procedure applies to the virtualized ASAv solution). Inside APIC, this cluster can be registered as an 'L4-L7 Device' and later associated with a specific tenant context and contract.
APIC supports management and configuration of a variety of external devices through a set of plugin-like packages. Packages are available for the Cisco ASA, ASAv and Firepower platforms, but also for third-party vendors such as Radware. Cisco explained that a device-specific package is a set of scripts that enables APIC to manage and configure the devices via a management connection.
We verified that the device was in fact recognized by APIC as a cluster, set up and ready to be used in a tenant context.
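The registration of such a device in APIC can be sketched as a REST payload as well. The snippet below uses the `vnsLDevVip` managed-object class, which APIC uses for L4-L7 device clusters; the device name and the exact attribute set are assumptions for illustration, since the available fields depend on the device package version.

```python
# Sketch of an L4-L7 device registration payload (class vnsLDevVip).
# The name "asa-cluster-1" and the attribute selection are illustrative;
# real payloads carry further package-specific attributes and children.

def build_l4l7_device_payload(name, dev_type="PHYSICAL"):
    """Return a minimal JSON body registering an L4-L7 device cluster."""
    return {
        "vnsLDevVip": {
            "attributes": {"name": name, "devtype": dev_type}
        }
    }

device = build_l4l7_device_payload("asa-cluster-1")
```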
3. Insertion of the ASA cluster and dynamic route peering
We inserted the load-sharing ASA cluster into the service chain -- however, without a service policy applied to it. In this configuration, the ASA cluster only performs the routing between the two segments (Outside and Webserver) and applies a basic ACL-based filtering. This test step demonstrated the dynamic route peering feature of the ACI.
The ACI supports two types of adjacency for devices (or VNFs) connected to the fabric: L2 adjacency for a direct connection to a VLAN, and L3 adjacency for a routed connection, where the fabric simulates a router instance between two network segments. In our case, the ASA cluster had an L3 adjacency to the outside network and to the webserver network. In addition, ACI acts as an OSPF neighbor to these instances and is able to dynamically supply routes to them. We verified that the attached ASA cluster automatically obtained the routes necessary to provide connectivity between the external hosts and the webserver segment. From then on, the ASA cluster acted as a router between the EPGs. We verified the connectivity by sending ICMP pings in both directions.
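On the ASA side, the route peering amounts to ordinary OSPF configuration toward the fabric. A minimal fragment of what such a configuration looks like is shown below; the interface names, addresses and process ID are illustrative examples, not the test bed values.

```
! Illustrative ASA-side configuration for OSPF peering with the fabric.
! Interface names, addressing and process ID are examples only.
interface GigabitEthernet0/0
 nameif outside
 ip address 10.0.1.2 255.255.255.0
!
router ospf 1
 network 10.0.1.0 255.255.255.0 area 0
```

With this in place, the fabric's simulated router instance forms an OSPF adjacency on the segment and advertises the routes the ASA needs.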
4. Dynamic VLAN Allocation
Within the ACI fabric, the data plane paths are dynamically established as the service graph setup requires. On the context-aware endpoint devices, APIC dynamically allocates VLAN interfaces for the data paths from a pool of available IDs. We verified this function by removing the ASA cluster from the tenant context and reinserting it, while monitoring the IDs assigned to the VLAN interfaces within the ASA.
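The kind of check we performed can be sketched in Python: extract the VLAN sub-interface IDs from the ASA's interface configuration before and after re-insertion and compare them. The sample configuration excerpt and IDs below are illustrative, not captured from the test bed.

```python
import re

def extract_vlan_ids(interface_config):
    """Return sorted VLAN sub-interface IDs found in ASA interface config text."""
    return sorted(int(m) for m in re.findall(r"interface \S+\.(\d+)", interface_config))

# Illustrative 'show run interface' excerpt (example values):
sample = """\
interface Port-channel1.1103
 nameif externalIf
interface Port-channel1.1104
 nameif internalIf
"""
vlan_ids = extract_vlan_ids(sample)
```

Comparing the two ID lists shows whether APIC handed out fresh VLAN IDs from the pool on re-insertion.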
5. Modifying ASA ACLs via APIC GUI
We applied a simple security policy to the provisioned ASA cluster by modifying the ACL rules configured on it via the APIC GUI. To apply a different set of ACLs, we modified the function profile defined for the contract of the ASA in the APIC GUI, then applied the changes to the ASA cluster. The ACL change first denied, and later again permitted, ICMP traffic.
The ASA device package for APIC translated the necessary configuration changes into the low-level configuration suitable for the ASA devices. Cisco explained that APIC does not completely rebuild the configuration, but is capable of applying exactly the changed fragment of it, so the operation of the device is not disrupted.
We observed that the changes we made were applied in less than one (1) second and verified the application of the new ACL rules by running ping between the Outside and Webserver areas.
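Behind the GUI, such function-profile changes are carried as L4-L7 service parameters, which APIC models as folder (`vnsFolderInst`) and parameter (`vnsParamInst`) objects that the device package translates into ASA configuration. The folder and parameter keys below ("AccessList", "AccessControlEntry", "action", "protocol") follow the ASA device package conventions but are assumptions for illustration; exact keys vary by package version.

```python
# Sketch of an L4-L7 service-parameter payload for one ACL entry.
# Folder/parameter keys follow ASA device package conventions but are
# illustrative; the names "access-list-inbound"/"deny-icmp" are examples.

def build_acl_entry(acl_name, entry_name, action, protocol):
    """Return a folder/parameter tree describing a single ACL entry."""
    return {
        "vnsFolderInst": {
            "attributes": {"key": "AccessList", "name": acl_name},
            "children": [
                {"vnsFolderInst": {
                    "attributes": {"key": "AccessControlEntry",
                                   "name": entry_name},
                    "children": [
                        {"vnsParamInst": {"attributes": {
                            "key": "action", "name": "action",
                            "value": action}}},
                        {"vnsParamInst": {"attributes": {
                            "key": "protocol", "name": "protocol",
                            "value": protocol}}},
                    ],
                }}
            ],
        }
    }

entry = build_acl_entry("access-list-inbound", "deny-icmp", "deny", "icmp")
```

Posting an updated parameter tree of this shape is what triggers the device package to push only the changed ACL fragment to the ASA.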
6. Orchestration via Scripting
APIC provides a Python-based API that allows users to perform, from scripts, the same orchestration and configuration tasks available in the APIC GUI.
We verified the functionality by running a series of scripts to delete, and then completely recreate, a tenant context and the associated service chain that included the ASA cluster.
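A minimal sketch of what such scripts do is shown below, building only the REST bodies with the standard library (the actual test scripts were Cisco's; the tenant name "pod1" comes from the test bed, while the APIC hostname would be a lab-specific address). In the APIC object model, deleting an object is done by posting it with `status="deleted"`, and recreating it by posting the object tree again.

```python
import json

def login_body(user, password):
    """Body for POST /api/aaaLogin.json (APIC authentication)."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": password}}}

def tenant_body(name, delete=False):
    """Body for POST /api/mo/uni.json; status='deleted' removes the tenant."""
    attributes = {"name": name}
    if delete:
        attributes["status"] = "deleted"
    return {"fvTenant": {"attributes": attributes}}

# Serialized bodies as they would go over the wire:
delete_request = json.dumps(tenant_body("pod1", delete=True))
create_request = json.dumps(tenant_body("pod1"))
```

With an HTTP session library, a script would first POST `login_body(...)` to `/api/aaaLogin.json`, then POST the tenant bodies to `/api/mo/uni.json`; the contracts and service-graph objects of the service chain are recreated the same way, as children of the tenant.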
7. Active-Standby ASA Cluster Configuration
In addition to the load-sharing ASA cluster inserted between the Outside and Webserver EPGs, we also reviewed the second ASA cluster, inserted between the Application and Database EPGs and configured to operate in Active-Standby mode. Although EANTC did not perform actual resiliency testing, we inspected the configuration of the ASA cluster using the management platform.
Table 4: Hardware & Software Versions
| Component | Platform | Software Version |
|---|---|---|
| ACI Spine switch | Nexus N9336PQ | v. 11.1(1r) |
| ACI Leaf switch | Nexus N9396PX | v. 11.1(1r) |
| Firewall 1 | ASA 5525 (CPU: 1x Lynnfield 2393 MHz) | v. 9.5.1; ASA device package v. 188.8.131.52 |
| Firewall 2 | ASAv30 (CPU: 1x Lynnfield 2393 MHz) | v. 9.5.1; ASA device package v. 184.108.40.206 |
| IPS | Virtual NGIPS | v. 5.4.1 |
| Firewall (L2 mode) | Firepower 7710 | v. 5.4.1; Firepower device package v. 220.127.116.11; Virtual NGIPS v. 5.4.1 |