Old and new SRX VPN throughput

Juniper has recently released a hardware refresh of the SRX branch firewall series and I had the chance to run IPsec throughput tests on them. Fortunately I also have test results for the same setup on the older devices (SRX110, SRX210), so I can make a comparison between the two platforms.

For various reasons the throughput numbers declared in the marketing materials of all vendors are way off and have little to do with reality, but I will still use them as the baseline against which to compare the real results.

For the testing method I used a slightly modified RFC 2544 test – specifically the throughput test, with the maximum frame size set to 1400B so that no fragmentation would take place.
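
As a rough sanity check that 1400B frames leave enough headroom (assuming tunnel-mode ESP with the AES-256-CBC/HMAC-SHA-256-128 proposal shown below, give or take the exact header accounting):

1400B L2 frame - 18B Ethernet header/FCS                = 1382B inner IP packet
1382B + ESP trailer, padded to a 16B AES block boundary = 1392B of ciphertext
20B outer IP + 8B ESP header + 16B IV + 1392B + 16B ICV = 1452B on the wire

1452B fits comfortably under a 1500B MTU, so the encrypted packets go out unfragmented.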

The devices were configured with only the tunnel and a management connection. No other configuration was present, so these numbers must be taken with a pinch of salt – normally your SRX would be doing more than a single VPN tunnel, and the more active configuration bits there are, the lower the throughput will be.

The VPN config:

security {
    ike {
        proposal ike_prop {
            authentication-method pre-shared-keys;
            dh-group group2;
            authentication-algorithm sha1;
            encryption-algorithm aes-256-cbc;
            lifetime-seconds 3600;
        }
        policy ike_pol {
            mode main;
            proposals ike_prop;
            pre-shared-key ascii-text "*"; ## SECRET-DATA
        }
        gateway ike_gw_srx02 {
            ike-policy ike_pol;
            address 10.1.12.2;
            external-interface ge-0/0/2;
        }
    }
    ipsec {
        proposal ipsec_prop {
            protocol esp;
            authentication-algorithm hmac-sha-256-128;
            encryption-algorithm aes-256-cbc;
            lifetime-seconds 3600;
        }
        policy ipsec_pol {
            perfect-forward-secrecy {
                keys group2;
            }
            proposals ipsec_prop;
        }
        vpn ipsec_vpn {
            bind-interface st0.192;
            ike {
                gateway ike_gw_srx02;
                ipsec-policy ipsec_pol;
            }
            establish-tunnels immediately;
        }
    }
    policies {
        from-zone z_internal to-zone z_internal {
            policy permit_all {
                match {
                    source-address any;
                    destination-address any;
                    application any;
                }
                then {
                    permit;
                }
            }
        }
        from-zone z_internet to-zone z_internal {
            policy deny_all {
                match {
                    source-address any;
                    destination-address any;
                    application any;
                }
                then {
                    deny;
                }
            }
        }
        from-zone z_internal to-zone z_internet {
            policy permit_out {
                match {
                    source-address any;
                    destination-address any;
                    application any;
                }
                then {
                    permit;
                }
            }
        }
        from-zone z_internet to-zone z_internet {
            policy permit_all {
                match {
                    source-address any;
                    destination-address any;
                    application any;
                }
                then {
                    permit;
                }
            }
        }
    }
    zones {
        security-zone z_internet {
            host-inbound-traffic {
                system-services {
                    ike;
                    ping;
                    ssh;
                }
            }
            interfaces {
                ge-0/0/2.0;
            }
        }
        security-zone z_internal {
            host-inbound-traffic {
                system-services {
                    ssh;
                    ping;
                }
            }
            interfaces {
                ge-0/0/3.0;
                lo0.0;
                st0.192;
            }
        }
    }
}
interfaces {
    ge-0/0/0 {
        unit 0 {
            family inet;
        }
    }
    ge-0/0/2 {
        description p2p-public-iface-srx02;
        unit 0 {
            family inet {
                address 10.1.12.1/30;
            }
        }
    }
    ge-0/0/3 {
        description Test_IPsec-VPN-throughput;
        unit 0 {
            family inet {
                address 192.168.11.1/24;
            }
        }
    }
    fxp0 {
        unit 0 {
            family inet {
                address 10.64.3.135/24;
            }
        }
    }
    lo0 {
        unit 0 {
            family inet {
                address 192.168.10.1/24;
            }
        }
    }
    st0 {
        unit 192 {
            family inet {
                mtu 1428;
                address 192.168.1.0/31;
            }
        }
    }
}
routing-options {
    static {
        route 0.0.0.0/0 next-hop 10.64.3.1;
        route 192.168.16.0/20 next-hop st0.192;
    }
}
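
Before each run I verified that the tunnel was actually up; the usual operational commands on the SRX are along these lines (output omitted):

show security ike security-associations
show security ipsec security-associations
show security ipsec statistics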

The table below shows my attempt to get as close to the declared numbers as possible:

Device     Declared @ L2/1400B    Measured @ L2/1400B
SRX110H2   65 Mbps                45 Mbps
SRX210     85 Mbps                33 Mbps
SRX550*    1 Gbps                 485 Mbps
SRX320     250 Mbps               187.9 Mbps
SRX340     500 Mbps               374.8 Mbps

You can see that for both the old and the new devices the measured throughput is about 50% of the declared value. The reason is that for marketing purposes upstream and downstream are counted as separate entities and can thus be added together to form a nice big number.

The throughput table above is for the most favourable conditions, which almost never occur in a real network. The interesting question is then how the same tunnel behaves with different packet sizes, specifically small ones, where the overhead is much bigger and the number of packets much larger. Let’s have a look at the results for 64B frames:

Device     Measured @ L2/64B
SRX110H2   2.6 Mbps
SRX210     1.9 Mbps
SRX550*    32 Mbps
SRX320     12.5 Mbps
SRX340     23.8 Mbps


In conclusion, the new SRX models have improved encrypted-traffic throughput by about 30% over their equivalent predecessors, but the overall performance is still quite low compared to the public specification.

*The SRX550 is an old model, but there shouldn’t be any performance difference from the new SRX550.

All tests were performed on the Junos version recommended for each platform at the time of writing.

The tester used was an EXFO FTB-1 with an FTB-860G module (more details are in the test reports).

The full results in PDF format are available here:

SRX110-RFC_2544

SRX210-RFC_2544

SRX550-RFC_2544

SRX320-RFC_2544

SRX340-RFC_2544


Basic IPsec tunnel on Juniper’s SRX

This article is the first part of a mini-series about Juniper’s branch router-firewalls – the SRXes. I will start with a simple example of how to build a VPN between two branch routers; later I will do a Cisco-to-Juniper VPN scenario, and the last article will compare real vs. advertised throughput.

The example I am using here was done on the vSRX (also known as Firefly) and a range of physical SRXes from the SRX110 to the SRX550. The configurations are virtually the same, as the syntax doesn’t differ between the platforms.

The topology used for this is as follows:

[Topology diagram: SRX_IPsec_topology]

As you can see, it is a simple topology with two hosts, two edge routers, and one ISP router that is there just to make sure the tunnels actually traverse some network rather than being directly connected; keeping the rest simple makes troubleshooting much easier.

The IP plan for this lab is as follows:

Router  Interface   IP/network
R1      ge-0/0/0.0  2.0.0.0/31
R1      ge-0/0/3.0  192.168.0.254/24
R1      st0.0       10.0.0.1/30
R2      ge-0/0/0.0  3.0.0.0/31
R2      ge-0/0/3.0  192.168.1.254/24
R2      st0.0       10.0.0.2/30
PC-A    ge-0/0/0    192.168.0.1/24
PC-B    ge-0/0/0    192.168.1.1/24
ISP     ge-0/0/0.0  2.0.0.1/31
ISP     ge-0/0/1.0  3.0.0.1/31

The addressing uses 2.0.0.0/31 and 3.0.0.0/31 as public IP addresses, 10.0.0.0/30 for the tunnel interfaces, and 192.168.0.0/24 and 192.168.1.0/24 for the LAN segments.

Now that all addressing is done, we can start configuring phase 1 of IPsec, which is the key exchange. The IKE configuration consists of 3 main parts, and I will try to explain briefly what each of them means in real terms.

Phase 1

In phase 1 the Internet Key Exchange protocol is used to securely agree on the key that will be used for further communication. Phase 1 consists of 3 main components – proposal, policy and gateway. These three components could be roughly described as attributes, methods and peer details. See below for more details.

a) IKE proposal

The proposal is a list of attributes that IKE will use for the key exchange. It has multiple components, but we’ll use a minimal set of three attributes, which is enough to get our key exchange running.

  • Authentication method – either a password (pre-shared key) or a digital signature (RSA/DSA)
  • DH group – determines the algorithm for the secure key exchange over an unsecured network (Diffie-Hellman) and, in effect, its strength; group 2 uses a 1024-bit modulus
  • Authentication algorithm – the hashing method the key exchange will use

security {
    ike {
        proposal ike_prop {
            authentication-method pre-shared-keys;
            dh-group group2;
            authentication-algorithm sha1;
        }
    }
}

b) IKE policy

The policy takes the attributes from the proposal and binds them to the methods of your choosing. In this case the policy has just 2 more parts.

  • mode – aggressive/main – aggressive mode is a shorter exchange (3 messages instead of main mode’s 6) and is sometimes considered more secure (fewer packets, less exposure), but there are known attacks against aggressive mode, so using main mode is generally a good idea – it also helps with compatibility between vendors.
  • pre-shared-key/certificate – this is the secret that will be used for the key exchange and must match on both ends of the tunnel.
security {
    ike {
        policy ike_pol {
            mode main;
            proposals ike_prop;
            pre-shared-key ascii-text "$9$jJiPQ/9pBRStu"; ## SECRET-DATA
        }
    }
}


c) IKE gateway

The IKE gateway defines the identities of the local and remote peers. It also refers to the IKE policy, binding the whole IKE chain together. There are a couple of additional identifiers used in this example (remote/local identity, local address); they are optional, although I would recommend using them or their alternatives.

  • address – the remote peer’s public IP
  • external-interface – the local physical public interface
security {
    ike {
        gateway rt02 {
            ike-policy ike_pol;
            address 3.0.0.0;
            local-identity user-at-hostname "tnk@rt01";
            remote-identity user-at-hostname "tnk@rt02";
            external-interface ge-0/0/0;
            local-address 2.0.0.0;
        }
    }
}
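
At this point phase 1 can be verified; assuming both peers are configured, the following operational command should show the remote gateway with its state as UP:

show security ike security-associations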

Phase 2

Once the key exchange is finished, phase 2 can begin. This is IPsec itself, and it also consists of 3 components, similar to phase 1 both in name and purpose.

a) IPsec Proposal

Again, this defines the methods the IPsec framework will actually use for encryption.

  • protocol ESP – the more secure of the two options (AH/ESP) and should be used all the time; some recommendations call for AH+ESP, but I am not sure that is possible in a reasonable way
  • Authentication algorithm – selects the algorithm for IPsec authentication
  • Encryption algorithm – selects the strength of the cipher used for packet encryption – be careful here, as this setting heavily impacts performance. Always avoid DES/3DES as they are weak ciphers, but also avoid the high-end AES variants, as the impact might be rather severe. In general, AES-256 is considered strong enough while being reasonably cheap to compute.

security {
    ipsec {
        proposal ipsec_prop {
            protocol esp;
            authentication-algorithm hmac-sha-256-128;
            encryption-algorithm aes-256-cbc;
        }
    }
}

b) IPsec Policy

In the policy, the IPsec proposal is bound in and PFS is defined. PFS (perfect forward secrecy) forces a fresh key negotiation once the key lifetime expires, so the compromise of one key does not expose traffic protected by earlier keys. It is best practice and shouldn’t have a major impact on packet processing.

security {
    ipsec {
        policy ipsec_pol {
            perfect-forward-secrecy {
                keys group2;
            }
            proposals ipsec_prop;
        }
    }
}

c) IPsec VPN

This is the last piece of the tunnel puzzle, where IKE and IPsec are bound together into one “VPN” and the secure tunnel interface is bound in as well. The one interesting parameter is establish-tunnels immediately, which is very helpful for debugging: the peers exchange IKE/IPsec packets and establish the tunnel even before the first packet destined for the remote network arrives.


security {
    ipsec {
        vpn vpn_rt02 {
            bind-interface st0.0;
            ike {
                gateway rt02;
                ipsec-policy ipsec_pol;
            }
            establish-tunnels immediately;
        }
    }
}
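
With both phases configured, the phase 2 SAs can be checked the same way – this should list a pair of SAs (one per direction) for the vpn_rt02 tunnel:

show security ipsec security-associations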

And because the SRX is in flow mode, security zones and an accompanying security policy must be defined for the tunnels to form. First I’ll define the zones themselves – for simplicity there will be only 2 zones. In the follow-up examples this policy will be changed to reflect the use of the junos-host zone.

Zone name   Description
z_trust     trusted internal network
z_internet  untrusted public network

We’ll allocate the interfaces to the correct zones (note that st0.0 is in z_trust) and permit IKE and ICMP/ping to be accepted from the untrusted zone for processing by the Routing Engine.


security-zone z_trust {
    host-inbound-traffic {
        system-services {
            ping;
        }
    }
    interfaces {
        ge-0/0/3.0;
        st0.0;
    }
}
security-zone z_internet {
    interfaces {
        ge-0/0/0.0 {
            host-inbound-traffic {
                system-services {
                    ping;
                    ike;
                }
            }
        }
    }
}

All traffic is allowed inside the trusted zone. This is needed if you have any IP traffic between your hosts passing through the SRX – it is not strictly necessary for this example, but it is a good thing to keep in mind.


from-zone z_trust to-zone z_trust {
    policy implicit_permit {
        match {
            source-address any;
            destination-address any;
            application any;
        }
        then {
            permit;
        }
    }
}

This policy is a clean-up – it denies all traffic from the public zone to the private one.


from-zone z_internet to-zone z_trust {
    policy deny_all {
        match {
            source-address any;
            destination-address any;
            application any;
        }
        then {
            deny;
        }
    }
}
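
One thing worth spelling out: with a route-based VPN like this, traffic is only encrypted if it is routed into st0.0, so RT01 also needs a static route for the remote LAN. A minimal sketch using the addressing from the IP plan above:

routing-options {
    static {
        route 192.168.1.0/24 next-hop st0.0;
    }
}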

So here we have a simple set-up for RT01 from our topology. In upcoming articles I will modify this config to be more suitable for faster deployments (using groups and objects), and at the end there will be a couple of hints on how to troubleshoot IPsec on SRXes, as that topic is severely under-documented.

Using vconfig to set up a VLAN-tagged interface with 802.1p CoS priority bits set

This is just a quick how-to on setting up a VLAN interface on a Linux system. As this is described at length elsewhere, I will stick to the commands and brief explanations.

The first thing to do is to get the “vlan” package, as it is not normally installed. The other requirement is the 8021q kernel module, which is not normally loaded but is present on Debian-based systems by default.

Checking the state of the module:

lsmod | grep 8021q

If this returns an empty result, you must load the module using modprobe

modprobe 8021q

and verify again with lsmod as before.

Now the system is ready for new tagged interfaces:

vconfig add eth0 X

Where eth0 is your physical interface and X is the VLAN ID you want the new interface to be tagged with. The command will print a message saying that interface eth0.X has been created. This is great, but you now also have to specify which p-bit setting you want, because by default egress traffic is sent with priority 0. To have, let’s say, ICMP ping packets marked with CoS, we must create an egress map for the default priority (0) that re-maps it to whatever we want.

vconfig set_egress_map eth0.X 0 7

In this example I have re-mapped the default priority 0 to egress priority 7.
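
To verify the mapping took, the 8021q module exposes the per-interface priority maps under /proc, and tcpdump with -e shows the priority actually set on the wire (interface names here are just examples):

cat /proc/net/vlan/eth0.X    # shows the INGRESS/EGRESS priority mappings
tcpdump -nei eth0 vlan       # tagged frames print as e.g. "vlan X, p 7"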

This setup can be made permanent by adding the module name to /etc/modules and adding the relevant lines to the interfaces file, e.g.:

auto eth0.100
iface eth0.100 inet static
address 192.168.100.1
netmask 255.255.255.0
vlan-raw-device eth0
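
And the module itself can be made persistent with a one-off command (as root):

echo 8021q >> /etc/modules

Note that plain ifupdown will recreate the interface at boot but not the egress map; if you need that too, a post-up line in the same stanza should do the trick (untested sketch):

post-up vconfig set_egress_map eth0.100 0 7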


Multiple permanent Linux interfaces with DHCP-allocated addresses

Recently I have been doing some work on the HP 5500EI, including a port-security feature limiting the number of MAC addresses allowed on a port. This is not a difficult configuration at all – in fact it is just one command on the interface itself.

mac-address max-mac-count 5

So now, with the limit in place, I would like to test it. My first thought was to use Linux interface aliases as a quick and dirty way of doing this, but unfortunately I soon found out that they don’t meet the requirements I had in mind:

  • There have to be 5 or more virtual interfaces on one physical interface
  • Each virtual interface must have its own individual MAC address
  • All virtual interfaces must be getting their own IP addresses from the DHCP server
  • All the virtual interfaces must receive an IP address from the same subnet (as they are plugged into an access port)

The main issue with just aliasing the interface is that an alias is L3 only (it uses the same MAC) and definitely doesn’t allow for DHCP allocations from the same subnet. Fortunately, on Linux this can be done via the “ip link” command, which is part of the iproute package in Debian. The usage is rather simple:

ip link add dev intX link eth0 type macvlan
ip link del dev intX

Where intX is the name of the new interface (int plus a number of your choosing) and eth0 is the physical interface you want to bind to. This can be repeated multiple times, and the MAC address will be generated randomly each time. There is also a way of setting the MAC to whatever you want by changing the syntax to this:

ip link add dev intX link eth0 address aa:aa:aa:aa:aa:aa type macvlan
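
Putting it together, a quick loop along these lines (interface names and count are arbitrary) creates five macvlan interfaces and asks for a DHCP lease on each:

for i in 1 2 3 4 5; do
    ip link add dev int$i link eth0 type macvlan   # random MAC per interface
    ip link set int$i up
    dhclient int$i                                 # request a lease on each
done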

If you run this a couple of times and get IP addresses on those interfaces from the DHCP server, you will soon notice the following messages on your switches.

%Jun 7 11:03:01:411 2000 Core1 ARP/5/ARP_DUPLICATE_IPADDR_DETECT: Detected an IP address conflict.
The device with MAC address 6e99-1b38-2b8c connected to Bridge-Aggregation2 in VLAN 100 and the device with MAC address d6b2-1ac8-9bd2 connected to Bridge-Aggregation2 in VLAN 100 are using the same IP address 10.0.3.248.

A quick check will reveal that there are no duplicate addresses assigned or allocated, so what is the system complaining about? The answer is that by default the Linux kernel may reply to ARP requests from the first interface in the list (eth0), or from all interfaces and/or a random one, which makes the Comware switch go crazy.

Fortunately this default behavior can be adjusted by the following commands:

echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
echo 8 > /proc/sys/net/ipv4/conf/eth0/arp_announce

A lot of people around the net suggest the second value should be 5, but that didn’t work for me at all. If you want to make these changes persistent, add the corresponding lines to /etc/sysctl.conf.
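
The persistent equivalent would look like this in /etc/sysctl.conf (same values as above, eth0 being my uplink):

net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 8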

There is some more explanation of the values above here

Bit Error Rate Test (BERT) explained

This article will be rather short in comparison with the others in the mini-series about various Ethernet/IP testing methods, but it is a necessary one: Bit Error Rate Tests have a long tradition in the telco environment (circuit-based networks) and are still quite valid in today’s packet networks – at least for some specific cases. So without further delay, let’s start with some theory behind the testing, followed by use cases and best practices.

BERT introduction

As you can guess from the name, this test checks physical-layer traffic for any anomalies. This stems from the test’s origins, where T1/E1 circuits were tested and each bit in each time-slot mattered: providers were using those circuits up to the limit, as bandwidth was scarce. Also, since most of the data being transferred was voice calls, any pattern alteration had quite serious implications for the quality of service. This also led to the (in)famous five-nines reliability, or 99.999%, which basically states that the link/device must be available 99.999% of a specified SLA period (normally a month or a year). One must remember that redundancy was rather rare then, so the requirements on hardware reliability were really high.

With the move away from circuit-based TDM networks towards packet-based IP networks the requirements changed. Bandwidth is now abundant in most places, the wide deployment of feature-rich Ethernet and IP devices provides plenty of options for redundancy and QoS, and packet-switched voice traffic is on the rise. One might think BERT is no longer worth considering as a test method, but that would be a huge mistake.

Why BERT

There are a few considerations that can make BERT an interesting choice. I will list the ones I think matter most.

  1. It has been designed to run for an extended period of time, which makes it ideal for acceptance testing – something that is still often required
  2. BERT is ideal for testing jitter, as that was one of its primary design goals
  3. The different patterns used in BERT can be used for packet-optimization testing (I will discuss this later in more detail)
  4. Most BERT testers are smarter than just counting bit errors, so the test can be reused for other kinds of testing

BERT Physical setup and considerations

On an Ethernet network you cannot run a simple L1 test unless you are testing just a piece of cable or possibly a hub, as any other device requires some address processing. This makes the test different from one on an unframed E1: we need to set the framing to Ethernet, with source and destination addresses defined on the tester. Also, because Ethernet must be looped at a logical level, it is not possible to use a simple RJ45 plug with a pair of wires going from TX to RX as you could with E1 – either a hardware or a software loopback reflector is required. Most testers will actually let you specify layers 3 and 4 as well, i.e. IP addresses and UDP ports; the reason is usually so that management traffic between the tester and the loopback can use this channel for internal communication.

Pattern selection options

As this test originates in the telco industry, some interesting options are usually available on the testers. The stream can generate these patterns:

  1. All zeros or all ones – specific patterns originating from the TDM environment
  2. 0101 and 1010 patterns – patterns that can be easily compressed
  3. PRBS – Pseudo-Random Bit Sequence – a deterministic sequence that cannot be compressed/optimized; the details and the calculation can be found on Wikipedia
  4. Inverted PRBS – the same as above but with the generating function inverted, to counter any “optimization” tuned for plain PRBS

The thing to remember is that the PRBS is applied to the payload of the frame/packet/datagram, so if any sort of optimization is present it will have no effect, as PRBS is by design not compressible. There are various “strengths” of the pseudo-random pattern – the higher the number, the less repetition it contains. Normally you will see two main variants: 2^15, which is 32,767 bits long, and 2^23, which is 8,388,607 bits long. Obviously, the longer the pattern, the better and more “random” the behaviour it emulates.
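
You can get a feel for why this matters with any compressor – /dev/urandom is not a true PRBS, but it behaves the same as far as compression is concerned, while a repetitive pattern collapses to almost nothing:

head -c 1M /dev/zero    | gzip -c | wc -c    # repetitive payload: ~1 KB after compression
head -c 1M /dev/urandom | gzip -c | wc -c    # incompressible payload: still ~1 MB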

Error injection options

As this test originated in the telco world, injecting errors used to be a major feature, but in Ethernet networks it has lost its importance. If you inject even a single bit error into an Ethernet frame, the CRC will be incorrect and the whole frame will be dropped by the first L2 device it passes through, which should always result in an LoF (Loss of Frame)/LoP (Loss of Pattern) alarm.

Use cases, Best Practices and Conclusion

The most common use case for BERT in today’s networks is commissioning new links: you can run a fairly simple test for a long time that will give you a reasonable idea about the link’s quality in terms of frame drops and jitter.

A few recommendations on how to run this test:

  • Use the largest pattern you can.
  • Remember that the line rate and the L2 rate will differ because of the overheads.
  • Remember that 99.999% availability still allows roughly 0.86s of outage per 24 hours, which can be quite a lot of frames – see the quick calculation below.
  • PRBS cannot be optimized.
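
A back-of-the-envelope check of the availability point (numbers assume gigabit Ethernet and minimum-size frames, roughly 1.49 Mfps at line rate):

echo "86400 * (1 - 0.99999)" | bc -l    # 0.864 s of allowed outage per day
echo "0.864 * 1488095" | bc -l          # ~1.3 million 64B frames lost while still "five nines"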

So as you can see, BERT is a rather simple and straightforward test, and even though it has in many ways been superseded by RFC 2544 and others (like Y.156sam), it is still a very good test to know – especially if you are in a jitter-sensitive environment, e.g. where VoIP or IPTV is deployed.