Using vconfig to set up a VLAN-tagged interface with 802.1p CoS priority bits

This is just a quick how-to on setting up a VLAN interface on a Linux system. As this is described at length elsewhere, I will stick to the commands and brief explanations.

The first thing to do is to get the “vlan” package as it is not normally installed. The other bit is the 8021q kernel module, which is not normally loaded but is present on Debian-based systems by default.

Checking the state of the module:

lsmod | grep 8021q

If this returns an empty result you must load the module using modprobe:

modprobe 8021q

and verify again with lsmod as before.

Now the system is ready for new tagged interfaces:

vconfig add eth0 X

Where eth0 is your physical interface and X is the VLAN ID you want the new interface to be tagged with. The command will output a message saying that interface eth0.X has been created. This is great, but you must now also specify what p-bit setting you want, as egress traffic is by default priority 0. In order to get, let's say, ping ICMP packets marked with CoS, we must create an egress map for the default priority (0) in which we re-map it to whatever we want.

vconfig set_egress_map eth0.X 0 7

In this example I have re-mapped the default priority 0 to egress priority 7.
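If you want to double-check what the 8021q module has actually programmed, the per-interface file under /proc/net/vlan lists the current priority mappings. A minimal Python sketch that prints them (assuming the interface is called eth0.100 and the 8021q module is loaded):

# Minimal sketch: print the priority mappings the 8021q module reports
# for a VLAN interface (assumes eth0.100 exists and 8021q is loaded).
iface = "eth0.100"   # adjust to your own interface name

with open("/proc/net/vlan/" + iface) as f:
    for line in f:
        if "priority" in line.lower():
            print(line.strip())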

This setup can be made permanent by adding the module name to /etc/modules and adding the relevant lines to the /etc/network/interfaces file, e.g.:

auto eth0.100
iface eth0.100 inet static
address 192.168.100.1
netmask 255.255.255.0
vlan-raw-device eth0

 

EtherSam (Y.1564) explained

This is the last article (at least for now) in the series about testing methodologies and testing standards. I will cover some bits and pieces around testing in general, but it won't be as heavy on the theory, as I want to write some “hands-on” scenarios for combined use of Wireshark and PackETH as well as some multicast scenarios. I will also be doing more Cisco and Juniper work, so it is quite likely I will be blogging some configs and labs. Anyway, enough about the future plans; let's start with the topic at hand.

Introduction

ITU-T Y.1564, more commonly known as EtherSam (a name that originated in the old designation of the standard, ITU-T Y.156sam), is a service activation test suite whose goal is to allow rapid link testing when deploying services. The main advantage of this test is that it allows SLAs (Service Level Agreements) to be tested while deploying a new service, and it can do that regardless of the physical topology (i.e. it can verify the end-to-end SLA even in a live environment with live traffic flowing through the network).

There are a few serious considerations that make this test suite a bit awkward to use.

The first is that this is a very new standard (initiated in 2009, published in 2011) and it is still changing as new drafts are being issued.

The next, rather serious, problem is that this test suite is for “service activation”, which in plain language means it is no good for lab testing as it doesn't really stress the equipment. The reason is that EtherSam is designed around the idea of rapid deployment of new links/services in telcos (I will write about the disadvantages of this design later).

The last issue is that as a new standard it is rather unknown among network engineers so it takes some education before it can be used.

Traffic parameters

The theory behind this test suite sits somewhere halfway between the RFC2544 and BERT tests, as it tries to take the best of both while achieving similar results. Let's start with definitions as they are the most important. In EtherSam you can configure multiple concurrent services, and each service can have the following four parameters:

  • CIR – Committed Information Rate
  • CBS – Committed Burst Size
  • EIR – Excess Information Rate
  • EBS – Excess Burst Size

This is not as complicated as it might seem at this point. These values are only used to define the SLA. The CIR defines the minimal amount of traffic within the available bandwidth and must always be fulfilled. If only a CIR is specified on the links/services, it is good practice to allocate some amount of bandwidth to the CBS, as it will allow a small overshoot in case of traffic burstiness. Obviously one might need more flexibility in how much traffic to pass (for example with over-subscription), where some frame loss is acceptable in exchange for more data being delivered – that is the Excess Information Rate. Once an EIR is in place, bursts above the CIR are counted as part of the EIR, so the CBS setting loses much of its meaning. If you want a little more flexibility for more bursty traffic, you can specify an EBS on top of the EIR.

Traffic coloring

In the paragraph above I have described two of the three traffic types that exist in EtherSam, which are referred to as green traffic (CIR+CBS) and yellow traffic (EIR+EBS). The standard also defines red traffic, which is traffic conforming to neither the CIR nor the EIR. Based on the EtherSam methodology this traffic should never be passed and should be dropped. This looks like an absolutely trivial and obvious thing, but it has one very serious consequence in deployments with over-subscription in place – you must define the EIR as the “shared” part of your QoS with a specific size allocated to it. Having a random amount of free-to-grab bandwidth for the tested service will result in failing the test, as passing red traffic is a fail criterion in Y.1564.
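To make the coloring a bit more concrete, below is a much-simplified Python sketch of a two-bucket meter in the spirit of the bandwidth profile above (it is not the exact ITU-T/MEF metering algorithm, and the class and parameter names are mine): traffic that fits the CIR/CBS bucket is green, traffic that fits the EIR/EBS bucket is yellow, and everything else is red.

import time

class TwoRateMeter:
    """Simplified two-bucket meter: green (CIR/CBS), yellow (EIR/EBS), red."""
    def __init__(self, cir_bps, cbs_bytes, eir_bps, ebs_bytes):
        self.cir, self.cbs = cir_bps / 8.0, cbs_bytes   # committed rate in B/s, burst in B
        self.eir, self.ebs = eir_bps / 8.0, ebs_bytes   # excess rate in B/s, burst in B
        self.c_tokens, self.e_tokens = cbs_bytes, ebs_bytes
        self.last = time.time()

    def color(self, frame_len):
        now = time.time()
        elapsed = now - self.last
        self.last = now
        # replenish both buckets, capped at their burst sizes
        self.c_tokens = min(self.cbs, self.c_tokens + self.cir * elapsed)
        self.e_tokens = min(self.ebs, self.e_tokens + self.eir * elapsed)
        if frame_len <= self.c_tokens:
            self.c_tokens -= frame_len
            return "green"           # conforms to CIR/CBS
        if frame_len <= self.e_tokens:
            self.e_tokens -= frame_len
            return "yellow"          # conforms to EIR/EBS
        return "red"                 # non-conforming, should be dropped

meter = TwoRateMeter(cir_bps=50_000_000, cbs_bytes=12_000,
                     eir_bps=20_000_000, ebs_bytes=12_000)
print(meter.color(1518))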

Traffic profile for EtherSam - coloring

Bandwidth profile parameters – Coupling flag and Color mode

I am putting the description of these two parameters here for the sole reason that they are defined in the standard, but I would like to stress that I haven't seen them implemented in any testing equipment so far. This section will therefore be rather short and most people can just skip it, as it has little to no practical use (at least at the time of writing). These two parameters allow the metering algorithm to be adjusted and thus change the result, and they are only valid in certain scenarios.

  • CF – Coupling flag – can only be set on or off. It is only useful when introducing a new service into a live environment with extremely bursty traffic; it allows unused green bandwidth to be coupled with the yellow traffic, allowing higher throughput.
  • CM – Color mode – allows two options, color-aware and color-blind. The first requires the tested equipment to re-mark/re-color the traffic streams to adhere to the existing network rules, whereas color-blind mode expects no interference with the coloring.

The Service Configuration test

This is the first test you can run and it is meant to test an individual service. The aim is to verify that the CIR/EIR (and optionally CBS/EBS) comply with the configured values. It is a rather simple test, but apart from the obvious CIR/EIR/policing checks it allows for some variability, offering the following options:

  • Fixed frame size or EMIX pattern (1518, 1518, 1024, 64, 64)
  • optional Step Load (25%, 50%, 75%, 100%)
  • optional Burst test for the CBS and EBS (defined in bytes)

If you have multiple services configured, each one will be tested separately, so be careful about the time estimate, as this test is not intended to run for a long time. Especially with the ramped services it is important to realize that the total duration of this test will be the number of services x the number of steps x the step time. The CBS and EBS will also be tested separately, adding more time to the test. In total this should not take more than 10 minutes, as this test is not supposed to replace a long-term test.
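As a quick sanity check on that time estimate, a back-of-the-envelope calculation like the one below (the numbers are purely illustrative, not from the standard) shows how the step-load and burst variants add up:

# Rough duration estimate for the Service Configuration test
# (all numbers are illustrative assumptions).
services   = 2     # number of configured services
steps      = 4     # 25%, 50%, 75%, 100% step load
step_time  = 60    # seconds per step
burst_runs = 2     # separate CBS and EBS burst tests per service
burst_time = 30    # seconds per burst test

total = services * (steps * step_time + burst_runs * burst_time)
print(f"estimated duration: {total} s (~{total / 60:.0f} minutes)")
# 2 * (4*60 + 2*30) = 600 s, i.e. about 10 minutes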

The Service Performance test

This test is the second (and last) test you can run in Y.1564 and is in place to test all services in one go, in order to check that the sum of the CIRs is actually available on the path in question. It is also meant to be a long test, with specified durations of 15 minutes, 2 hours and 24 hours. The EMIX and ramped traffic options should be available as in the previous test.

I think that this test, due to its simplicity, can replace BERT in many cases while giving more relevant results for service provisioning.

The results and pass/fail criteria

The pass/fail criteria are rather obvious:

  • Fulfilling CIR (or CIR+CBS)
  • Fulfilling EIR (or EIR + EBS)
  • Policing of overshoot traffic above CIR+EIR+EBS
  • Conform to maximal acceptable delay variation (jitter)
  • Conform to maximal acceptable round-trip latency
  • Conform to SLA’s Frame loss (or availability)

These are solid criteria and there is not much you can say against them, but as always there are some considerations that must be taken into account.

The first is something I have already mentioned – there is no way for Y.1564 to consider a shared “best effort” overshoot above the defined CIR+EIR, which might be a problem in some scenarios, although I think it could be worked around with a hacked EIR/EBS configuration.

Second is the SLA frame loss, better known in the telco world as availability. If you provide, let's say, 99.99% availability, it means that on a 100 Mbps stream it would be acceptable to lose over 2,000 frames in a single hour (with 1518-byte frames; far more with small frames), which I don't think would be found acceptable in most environments. As far as I know there is no possibility to set the availability to 100% (also, no SLA would ever have this number in it). I am not currently aware of any workaround for this, so the only advice is to go through the data in the results table very carefully and set this option as close to what you expect of the test as possible (i.e. in my opinion, under normal circumstances there should be 0% packet loss in a 2-hour test on most systems).
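The arithmetic behind that warning is simple; the sketch below (assuming a 100 Mbps stream and standard Ethernet framing overhead) shows how many lost frames per hour a given availability figure still tolerates:

# How many frames per hour a given availability figure lets you lose
# on a 100 Mbps stream (each frame carries 8 B preamble + 12 B gap on the wire).
line_rate_bps = 100_000_000
availability  = 0.9999           # 99.99 %

for frame in (64, 1518):
    fps = line_rate_bps / ((frame + 20) * 8)
    allowed = fps * 3600 * (1 - availability)
    print(f"{frame:>5} B frames: {fps:9.0f} fps, ~{allowed:,.0f} lost frames/hour still pass")
# 64 B   -> ~148810 fps -> ~53,571 lost frames/hour
# 1518 B -> ~8127 fps   -> ~2,926 lost frames/hour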

The last thing I would like to mention is that there is no built-in out-of-sequence counting mechanism. This might sound like an unnecessary feature, but in a voice-enabled environment it is actually a very important parameter to observe.

Conclusion

EtherSam is a rather interesting test suite but in my opinion cannot (and was never meant to) replace RFC2544. In some ways it can partially replace BERT in some field operations. I have to say I do welcome this standard as it addresses the last bit of testing that was not properly covered by any Ethernet/IP testing suite to my knowledge. It obviously has some drawbacks, but I think it has its place in the field service activation environment. Only time will tell if it becomes as widespread as RFC2544, but I certainly hope so.

Bit Error Rate Test (BERT) explained

This article will be rather short in comparison with the others in this mini-series about Ethernet/IP testing methods, but it is a necessary one, as Bit Error Rate Tests have a long tradition in the telco environment (circuit-based networks) and are still quite valid even in today's packet networks – at least for some specific cases. So without further delay, let's start with some theory behind the testing, followed by some use cases and best practices.

BERT introduction

As you can guess from the name, this test really checks physical-layer traffic for any anomalies. This stems from the test's origins, where T1/E1 circuits were tested and each bit in each time-slot mattered, as providers were using those up to the limit because bandwidth was scarce. Also, as most of the data being transferred were voice calls, any pattern alterations had quite serious implications for the quality of service. This also led to the (in)famous reliability of five nines, or 99.999%, which basically states that the link/device must be available 99.999% of a specified SLA period (normally a month or a year). One must remember that redundancy was rather rare, so the requirements for hardware reliability were really high. With the move away from circuit-based TDM networks towards packet-based IP networks the requirements changed. Bandwidth is now abundant in most places, the wide deployment of feature-rich Ethernet and IP devices provides plenty of options for redundancy and QoS, and packet-switched voice traffic is on the rise – one would think it is not really necessary to consider BERT as a test method any more, but that would be a huge mistake.

Why BERT

There are few considerations that can make BERT an interesting choice. I will list some I think are the most interesting.

  1. It has been designed to run for an extended period of time, which makes it ideal for acceptance testing, which is still often required
  2. BERT is ideal for testing jitter, as that was one of its primary design goals
  3. The different patterns used in BERT can be used for packet optimization testing (I will discuss this later in more detail)
  4. Most BERT tests are smarter than just counting bit errors, so the test can be used for other testing as well

BERT Physical setup and considerations

On an Ethernet network you cannot run a simple L1 test unless you test just a piece of cable or potentially a hub, as all other devices require some address processing. This makes the test different on an Ethernet network from an unframed E1: unlike on E1, we need to set the framing to Ethernet with the source and destination addresses defined on the tester. Also, as Ethernet must be looped at a logical level, it is not possible to use a simple RJ45 with a pair of wires going from TX to RX as you could with E1; either a hardware or a software loopback reflector is required. Most testers will actually allow you to specify layer 3 and 4 as well, with IP addresses and UDP ports. The reason is usually so that the management traffic between the tester and the loopbacks can use this channel for internal communication.

Pattern selection options

As this test originates from the telco industry, some interesting options are usually presented on the testers. The tester can generate these patterns:

  1. All zeros or all ones – specific patterns originating from the TDM environment
  2. 0101 pattern and 1010 pattern – patterns that can be easily compressed
  3. PRBS – Pseudo Random Bit Sequence – a deterministic sequence that cannot be compressed/optimized; the details and calculation can be found on Wikipedia
  4. Inverted PRBS – the same as above but the calculation function is inverted, to counter any “optimization” for the PRBS

The thing to remember is that the PRBS is applied to the payload of the frame/packet/datagram, so if there is any sort of optimization present it will have no effect, as a PRBS is by design not compressible. There are various “strengths” of the pseudo-random pattern: the higher the number, the less repetition it will contain. Normally you will see two main variants: 2^15−1, which is 32,767 bits long, and 2^23−1, which is 8,388,607 bits long. Obviously, the longer the pattern, the better and more “random” the behavior it emulates.
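For the curious, a PRBS is typically produced by a linear-feedback shift register. The sketch below generates the PRBS-15 pattern (feedback polynomial x^15 + x^14 + 1) with a simple Fibonacci-style LFSR and confirms the 32,767-bit repetition length; it is a toy illustration, not tester firmware.

def prbs(order, taps, nbits):
    """Generate nbits from a Fibonacci LFSR of the given order.
    taps are 1-indexed feedback stages, e.g. (15, 14) for PRBS-15."""
    state = [1] * order                # any non-zero seed works
    out = []
    for _ in range(nbits):
        out.append(state[-1])          # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]         # XOR of the tapped stages
        state = [fb] + state[:-1]      # shift the feedback bit in
    return out

period = 2 ** 15 - 1                   # 32767 bits for PRBS-15
seq = prbs(15, (15, 14), 2 * period)
print(seq[:period] == seq[period:])    # True: the pattern repeats only after 2^15 - 1 bits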

Error injecting Options

As this test originated in the telco world, injecting errors was a major feature, but in Ethernet networks it has lost its importance. If you inject even a single bit error into an Ethernet frame, the CRC should be incorrect and the whole frame should be dropped by the first L2 device it passes through, which should always result in a LoF (Loss of Frame)/LoP (Loss of Pattern) alarm.

Use cases, Best Practices and Conclusion

The most common use case for BERT in today's networks would be commissioning new links, as you can run a fairly simple test for a long time that will give you a reasonable idea about their quality in terms of frame drops and jitter.

The few recommendations about how to run this test would be as follows:

  • Use the largest pattern you can.
  • Remember that the line rate and L2 rates will be different because of the overheads.
  • Remember that 99.999% availability still allows roughly 0.86 s of outage in 24 hours (which can be quite a lot of frames).
  • PRBS cannot be optimized.

So as you can see, BERT is a rather simple and straightforward test which, even though it has in many ways been superseded by RFC2544 and others (like Y.156sam), is still a very good test to know, especially if you are in a jitter-sensitive environment, e.g. where VoIP and IPTV are deployed.

RFC2544 Testing explained

In this next article in the mini-series about testing Ethernet/IP networks I will write about one of the most common tests – RFC2544, “Benchmarking Methodology for Network Interconnect Devices”. The purpose of this test is quite often misunderstood even though it is clearly stated in the introduction of the standard itself. So let's start by clarifying what this testing suite is and what it should be used for.

Introduction and considerations

As the standard says right at the beginning, this test suite is in place so customers have a single point of reference when testing network equipment capabilities. So as you can see, the intent of this test is to evaluate a single piece of equipment and provide results that can be easily compared between vendors. This approach has some very obvious advantages and some not so obvious drawbacks, both of which I will try to cover in this article.

The advantages of the test suite can be covered rather quickly as they are for the most part obvious. The suite was designed to provide a vendor-independent, comparable test with clear and easy-to-understand results. The tests measure behaviors/variables that are an absolute “must know” for any new network element being introduced into the network. What the tests cover and how is detailed below, and I have to say most of the methods are still very valid even though the standard was approved in 1999. The main advantage (or maybe you could say disadvantage) is the test's popularity, as over the years it became the most used standard for Ethernet/IP network testing.

The disadvantages of RFC2544 are a bit less obvious but rather serious. The first thing I encounter a lot is that the very popularity of this test suite often results in two interconnected problems – it is used in the wrong place (where another test would be more suitable) and its results are misunderstood or misinterpreted. I hope this article will help shed some light on the test procedures and the variables entering the test, and subsequently on the expected results.

The other important thing to consider when deciding whether or not to use this test is that the suite was created to test standalone network elements; even though it can be used for service activation/acceptance testing, that is not its primary focus and the testing procedures must be adjusted. It should be considered that in some cases it will not be suitable at all (that is why there are specialized test procedures for those cases).

The last consideration, or problem, of the RFC2544 suite is that it was created and approved for the devices that were around in 1999 (routers, L2 switches, hubs, etc.), so it is designed in a way that is quite different from today's multi-service environment. Also, the intent was to test purely native Ethernet devices (and the now legacy Ethernet transport over FDDI and token ring), so using it in a telco environment, where quite large quantities of equipment still use ATM (at least internally), can lead to very interesting results. I will discuss this in a later part of this article.

Physical setup

The first thing to consider when using the RFC2544 test suite is what physical setup you will use and what repercussions that will have on subsequent evaluation and fault-finding. The three main options are:

  • Reflected scenario (uses a dedicated hw/sw loopbacks)
  • Unidirectional scenario (uses one stream and two testers)
  • Bidirectional scenario (uses two streams from two independent testers)

Each of these scenarios is useful in a different use case.

The most common one for field operations would be the reflected scenario, where the tester is in one location and the dedicated reflector/loopback is at the other end of the tested line, wherever that is. The main problem with this scenario is that there is no way to tell in which direction an encountered problem lies, as the upstream is just the downstream with the MAC and IP addresses swapped by the loopback unit. This is not a big deal in a lab environment but might be a crucial thing to consider when the loopback is used in the field and is a couple, or tens, of kilometers away.

The other problem is that the loopback can be created in software on the measured equipment, which adds a layer of uncertainty to the results, as the device itself can (and in most cases will) behave differently than when the loopback is external hardware.

The last issue with the reflected scenario is that in my experience the loopbacks are not exactly reliable and can themselves introduce unexpected behavior into the measured values.

Unidirectional testing uses one sender and one smart receiver that evaluates the data stream. Its usefulness is rather limited, as hardly any real traffic is exclusively unidirectional. It is a rarely used setup unless one wants to use it for fault-finding; it can also be used in asymmetric networks.

The bidirectional scenario is probably the most precise way to perform most testing, as it is basically running one separate stream in each direction, each evaluated at the other endpoint. This is commonly called a “dual test set” or “dual test” in the tester's setup. The obvious drawback is that you require either two testers (in the case of small field units) or a correctly wired and configured big tester, which might get tricky. As the supply of testers is far more limited due to their high price, this scenario is not really suitable for field operations, as it requires good logistics when moving the testers between sites.

Ethernet frames

The Ethernet frames used in this suite are based on the Ethernet standard and are used for multiple tests in the suite. The distribution is not random but tries to cover the most important frame sizes that might be present in an average network. The distribution looks like this: 64, 128, 256, 512, 1024, 1280, 1518 bytes. It is very important to note that these are frames without the 802.1Q tag, so for tagged traffic the minimal value should be 68 B for the traffic to be valid and passed through a correctly behaving device.

There is an interesting discussion point – could a 64 B frame with a VLAN tag exist? Seemingly in contradiction with my earlier statement, the answer is yes, as the 802.1Q shim actually takes space from the payload part of the frame. Such a frame should even be passed through and processed correctly as long as no equipment tries to remove the tag. Once that happens and the VLAN tag is removed, the frame becomes a runt (a frame smaller than the minimal required size) and must be dropped on the outbound interface before being sent anywhere.
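The arithmetic behind the 64 B/68 B distinction is worth spelling out; here is a tiny sketch of it (header fields plus payload plus FCS only, preamble and inter-frame gap excluded):

# Minimum Ethernet frame sizes with and without an 802.1Q tag
# (destination + source + [tag] + type + payload + FCS).
dst, src, q_tag, ethertype, fcs = 6, 6, 4, 2, 4
min_payload = 46

untagged_min = dst + src + ethertype + min_payload + fcs          # 64 B
tagged_min   = dst + src + q_tag + ethertype + min_payload + fcs  # 68 B

# A tagged frame kept at 64 B squeezes the payload down to 42 B;
# strip the tag and only 60 B remain - a runt that must be dropped.
payload_in_64B_tagged = 64 - (dst + src + q_tag + ethertype + fcs)
print(untagged_min, tagged_min, payload_in_64B_tagged)   # 64 68 42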

RFC2544 – Throughput test

The throughput test is rather basic and even the name is rather self-explanatory – it measures the maximal amount of data you can pass through a device or link. The throughput is measured for the distribution of frame sizes mentioned above – one trial for each frame size.

The frame size is the first variable that is specified. The second variable, which is in the standard but not present on any testing equipment I've seen, is the packet type. The test stream normally consists of UDP unicast datagrams, but the standard recommends using other packet types as well – specifically broadcast frames, SNMP-like management frames and routing-update-like multicast. As far as I know these recommendations are not being observed and only unicast UDP streams are used. On some equipment you can set the L4 header to be TCP, but be aware that this TCP doesn't behave as real TCP would (there are no ACKs being sent and no window mechanism being employed on the data stream).

So what are the steps taken in this test and variables you can use to adjust the test?

The steps are as follows:

  1. Discovery phase (checking the other end of the tested link is reachable)
  2. Learning phase (binary division to evaluate what the maximal throughput is)
  3. Continuous stream of the frame size in question at the speed found in step 2, for the configured trial time (at least 1 second)
  4. Evaluating speed/drops/pattern changes and graphing them

This seems pretty straightforward, but I would like to stop at step 2, as the way this is determined is quite interesting. The method commonly used is called binary division, and I would like to show you here how it works, because even though it is a simple concept it is a bit difficult to find any decent information on it. So let's assume our equipment can only pass 60 Mbps and the negotiated line rate is 100 Mbps. The binary division will use the negotiated line speed as the default value, but if a specific value is set then it is used as the initial maximum. This is the trialling procedure:

  1. 100Mbps – initial maximum – fail (60<100Mbps)
  2. 50Mbps – 1/2 of interval 0-100Mbps – success (50<60Mbps)
  3. 75Mbps – 1/2 of interval 50-100Mbps – fail (60<75Mbps)
  4. 62.5Mbps – 1/2 of interval 50-75Mbps – fail (60<62.5Mbps)
  5. 56.25Mbps – 1/2 of interval 50-62.5Mbps – success (56.25<60Mbps)
  6. 59.37Mbps – 1/2 of interval 56.25-62.5Mbps – success (59.37<60Mbps)
  7. 60.9Mbps – 1/2 of interval 59.37-62.5Mbps – fail (60.9 > 60Mbps)
  8. 60.1Mbps – 1/2 of interval 59.37-60.9Mbps – fail (60.1 > 60Mbps)
  9. 59.7Mbps – 1/2 of interval 59.37-60.1Mbps – success (59.7<60Mbps)
  10. 60Mbps – 1/2 of interval 59.7-60.1Mbps – success (60=60Mbps)

Step 10 is slightly adjusted for brevity, but the procedure should be clear now. Any further increase in TX rate would result in frames being dropped.
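A minimal Python sketch of the binary-division search follows; the device_passes function is just a stand-in for a real trial (here it pretends the device tops out at 60 Mbps), and the 0.5 Mbps resolution is an arbitrary stopping point:

def device_passes(rate_mbps, device_limit=60.0):
    """Stand-in for a real trial: no frames are lost as long as the
    offered rate does not exceed the device's limit."""
    return rate_mbps <= device_limit

def binary_division(line_rate_mbps, resolution=0.5):
    low, high = 0.0, line_rate_mbps
    best = 0.0
    while high - low > resolution:
        trial = (low + high) / 2
        if device_passes(trial):
            best, low = trial, trial   # success: search upwards
        else:
            high = trial               # fail: search downwards
    return best

print(f"measured throughput: ~{binary_division(100.0):.2f} Mbps")
# converges towards 60 Mbps, within the chosen resolution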

The configurable variables in throughput tests are:

  • maximal speed (for equipment with mismatched line rate and throughput speed)
  • accuracy (acceptable variance in the stream)
  • number of validations – number of trials for each frame size

The results of the throughput test should be represented in a table and a graph. The advertised throughput should be based on the result of this test with 64 B frames.

So this seems like a pretty straightforward test, but it is also one of the most misinterpreted ones. The problem is that most people involved in evaluating the results forget to count all the overheads present in an Ethernet/IP network. What do I mean by this? The speed of any Ethernet link has a face value of 10/100/1000 Mbps, which is the line rate on the physical layer, so if you want to calculate the effective throughput on L2 you must discount all the Ethernet framing. You think it cannot be that much? Let's do the math for the worst-case scenario – the 64 B frame, as that is the one that, based on the standard, should be the compared benchmark.

The Ethernet frame in numbers looks like this: 7 B preamble + 1 B start-of-frame delimiter + 12 B of addressing + (optional 4 B for the 802.1Q shim) + 2 B of type/length + 46 B of payload + 4 B CRC + 12 B of inter-frame gap. In total we have 38 B of overhead to send 46 B of L3 data (without using the 802.1Q tag). So what does this give us? It all depends on what the tester will and will not take into account. Most testers will disregard the 8 B of preamble and the 12 B of inter-frame gap and will count only the logical frame that they can analyze, so out of 84 B at line rate you will have 64 B of measured traffic. That leaves us with the following expected results – re-calculated maximal achievable speeds for the basic frame size distribution on a Fast Ethernet link:

Frame size (B)    Frames per second    L2 throughput (Mbps)
64                148809               76.1
128               84459                86.4
256               45289                92.7
512               23496                96.2
768               15862                97.4
1024              11973                98.0
1280              9615                 98.4
1518              8127                 98.6
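The numbers in the table fall straight out of the framing overhead; a short sketch reproducing them for a 100 Mbps link (assuming the tester counts the logical frame but not the 8 B preamble and 12 B inter-frame gap) looks like this:

# Recreate the Fast Ethernet throughput table: on the wire each frame
# occupies the frame itself plus 8 B preamble/SFD and 12 B inter-frame gap.
LINE_RATE = 100_000_000   # bits per second
OVERHEAD  = 8 + 12        # preamble/SFD + inter-frame gap, in bytes

for frame in (64, 128, 256, 512, 768, 1024, 1280, 1518):
    fps = LINE_RATE / ((frame + OVERHEAD) * 8)
    l2_mbps = fps * frame * 8 / 1_000_000
    print(f"{frame:>5} B  {fps:9.0f} pps  {l2_mbps:5.1f} Mbps")
# e.g. 64 B -> ~148810 pps and ~76.2 Mbps (the table above truncates the decimals)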

So as you can see, the throughput efficiency grows with the frame size but never reaches 100% due to the previously discussed overheads. In telco environments you can still encounter a lot of devices that, at least internally, use ATM. The mechanism can be proprietary but the general idea is described in RFC2684. The reason I mention this is that you can actually spot it while evaluating the throughput test, as Ethernet-to-ATM encapsulation is very inefficient for frames of about 100 B: the ATM cell has only 48 B of payload, so splitting a roughly 100 B frame results in 3 ATM cells where the last one is effectively empty, leading to almost 30% inefficiency compared to the Ethernet line rate.

RFC2544 Latency test

Latency is the delay between a frame being sent and its reception at the other end of the measured link. There are two main types of latency we can encounter in common situations. The first is One Way Delay, where the latency is precisely measured on a unidirectional stream. The second and more common is Round Trip Time, which is the default in most scenarios (like ICMP ping) and, in our case, what you get if you use reflected traffic. It is important to remember that RTT is cumulative and the delay might be different upstream and downstream, so even though fairly advanced calculations are used to achieve the best possible outcome, it should always be taken as an indicative measurement. The latency mechanism described for this test is closer to One Way Delay, but the way it is calculated, and what exactly is displayed, depends on the tester.

The latency test uses all frame sizes from the standard distribution and, for each of them, the maximal speed measured in the throughput test. Every trial should run for a minimum of 120 seconds and should be repeated 20 times. This should be configurable, as it is excessively long for most cases (7 frame sizes x 20 trials x 120 s is over 4.5 hours).

There are two modes of measuring the latency – cut-through (bit-forwarding) or store-and-forward. These are well defined in RFC1242 and will usually give different results (because of the way the modes work and when the measurement takes place). This has one major disadvantage – these modes were selected in an environment where Ethernet equipment mostly supported just those two, but 14 years later the world looks quite different, as a third method, often called “hybrid”, is the most common way of forwarding frames in Ethernet networks. This is why I would recommend looking at the cut-through results rather than the store-and-forward ones.

The most important thing to know about latency testing is that the result will always be displayed as an average of all trials for one frame size, with optional additional information (like min, max, deltas, means, etc.).

Some testers will also provide you with information about jitter, but be aware that it is not part of RFC2544, even though it is a very important thing to observe, especially in a voice-enabled environment.

Frame Loss

The frame loss test uses the previously determined maximal frame rate from the throughput test, so it should always be run in conjunction with it. It simply tries all frame sizes at the determined frame rate and checks what percentage is lost. The test is run in steps, where each subsequent step uses a smaller rate (reduced by 10%), until the ratio between sent and received frames is 1 (i.e. all frames are received). Obviously, no lost frames at the maximal throughput rate is required for this test to be considered successful.
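A sketch of that stepping logic (with a stand-in run_trial that pretends frames start getting lost above 70% of the maximal rate) could look like this:

def run_trial(rate_percent, loss_free_limit=70):
    """Stand-in for a real trial: returns the frame loss percentage."""
    return 0.0 if rate_percent <= loss_free_limit else float(rate_percent - loss_free_limit)

# Start at the maximal rate and step down by 10% until nothing is lost.
rate = 100
while rate > 0:
    loss = run_trial(rate)
    print(f"{rate:>3}% of max rate -> {loss:.1f}% frame loss")
    if loss == 0.0:
        break              # sent == received, the test stops here
    rate -= 10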

RFC2544 Back-to-Back test

The back-to-back test is in place to test the equipment's behavior in the presence of bursty traffic, or in other words the operation of its buffers. As in all the previous tests, each frame size has its own trial, in which the frames are sent in a burst, defined in seconds, with minimal inter-frame delay (also known as line rate). The initial burst must be at least 2 seconds long. When the number of received frames is not equal to the number of sent frames, the burst is shortened by one frame per trial until the maximum is found. The trial for each frame size should be repeated 50 times.

The important thing is that, as there are multiple trials, the result will always be presented as an average of those trials. Some testing equipment can provide more information, like the standard deviation and separate stats for each trial.

RFC2544 System recovery  test (from overload)

This is a bit of an odd and probably obsolete test, but it is worth mentioning even though I haven't seen any testing equipment that implements it, as the idea behind it is still very valid. The test subjects the equipment to a condition in which it is overloaded with 110% of traffic (based on the throughput measurements) for a minimum of 60 seconds, after which the tester drops the TX rate to 50%. The aim of this construction is to measure the time difference between the switch from 110% to 50% on the tester and actually receiving the 50%. The reason is to verify that there are no problems with buffering (underflows/overflows/reading/writing etc.) and to gauge the amount of buffering and backlog processing.

The main issue with this test is that 110% usually makes a very small difference and most equipment can deal with it quite well. The other problem is that equipment that can do line-rate speeds is basically impossible to test this way (or at least it is very impractical). Also, this test was intended for equipment with no or limited QoS capabilities, which you will not find nowadays outside the SOHO segment (and even there it is rare).

RFC2544 Reset test

This is the last test in the suite. Its purpose is to measure the time from an outage caused by either a hardware or a software reboot until full service restoration. This is a very useful test as it can show various race conditions – like what happens if you hammer the interface throughout the start-up sequence. But for the most part everyone is only interested in the fact that the unit will boot (and the approximate time).

Conclusion

As you can see, the tests of this suite are fairly simple but their interpretation can be confusing. I think it is clear that the throughput, frame loss and latency tests can be used for link activation/acceptance, but they have some rather serious flaws as they were not designed for this purpose. In the next article I will discuss the telco-standard BERT testing and the newer service activation standard Y.156sam, also known as EtherSAM, for link activation.

How to capture, analyze, create and replay Ethernet traffic

I have decided to write a new article after a long time of silence, as I think this is a topic many engineers face on a fairly regular basis, but finding solutions to the simple and interconnected questions is rather time consuming and not exactly straightforward. Let me stress at the beginning that most of the tools mentioned below are free or cheap and all of them are easily available. So enough talk, let me start with the first chapter.

Part 1 – Capturing the traffic – software

There are various situations in which you might need this and various ways to achieve it. So let me start with software which will allow you to capture the traffic on a PC.

Wireshark – the Alpha and Omega

I think the first software anyone encounters is Wireshark – it is an absolutely incredible and flexible piece of software with so many functions and features that it is difficult to believe it is free. The software itself uses libpcap (or its Windows variant WinPcap) and is multi-platform (Windows/Linux/macOS). I will not write about Wireshark's functions and features here as it has been done many times and I might do it in a separate article anyway. But I will point out a few things that are must-knows when working with this amazing software.

  • Captures made on Windows will never have 802.1Q tags, as the drivers of most network cards simply strip them (it might also be the fault of the capture library – I never found a complete answer to this)
  • When capturing large amounts of data, the default behavior of displaying the captured data on screen in real time can cause crashes (as the PC is usually unable to cope with this amount of calculation)
  • Always use the latest stable version, as the developers make huge improvements every release

Needless to say, Wireshark is great for small amounts of traffic, but for capturing large amounts of data it must be tweaked, and it is less stable than the second option I will talk about.

tcpdump

This is a command-line-only tool found on Linux and most other Unix-like systems. This might seem like quite a drawback, as many people for various reasons don't have or cannot have a Linux PC. Well, think again – there is a huge amount of network equipment that is based on either Linux (Check Point SPLAT) or BSD (Juniper, Nokia IPSO), so you can still use it there. The syntax is really simple:

$ tcpdump -i eth0  – listens on interface eth0
$ tcpdump -i eth0 -w filename.pcap  – saves the captured packets to a file in pcap format so you can later read them with either tcpdump or Wireshark
$ tcpdump -r filename.pcap  – reads back and displays the frames captured in a previously saved file

I have listed only these three as they are the ones you would normally use the most. There is a whole bunch of other options this software supports, the most interesting being that it understands filter expressions (BPF syntax), so you can filter while capturing. For the complete list check the manpage.

Part 2 – Capturing the traffic – hardware

Now that we have something to capture the traffic with, we must somehow direct the flow of traffic to our endpoint. Most people go for the easy approach and just unplug whatever equipment is in the path of the traffic they want to intercept and run the analyzer. But this is a fundamentally flawed method unless you are in control of the traffic transmitter (i.e. in a lab environment) – in live networks, or when investigating protocol-related issues, it is just not possible, as the traffic will either die out on some timers or will not arrive at all because the upper-layer protocols notice that their counterpart is no longer there. Fortunately there are ways to achieve traffic diversion or transparent interception.

Hub

A hub is the first thing that comes to mind – it is a simple L1 device and the main thing is that it sends everything everywhere. Of course this is a simplification, but it does the job – or not. The problem here is that you will not be able to buy a hub nowadays, as switches are so cheap that there is really no reason for hubs to be manufactured anymore. The other limitation is that there are no hubs with gigabit Ethernet ports, as they were never built.

You might have luck on eBay, but even there most people are selling switches and even routers as “hubs”, so it is unlikely you will get one. There is also a big limitation of a hub: you can actually listen to a maximum of 1/2 of its declared speed. The reason is obvious – as you listen to both parts of the conversation on your single link from the hub, it is limited to a maximum of 100 Mbps or, more typically, 10 Mbps. So if you think of a composite, equal, bidirectional stream directed to your endpoint, the maximum is the downlink speed of 100 Mbps (or 10 Mbps), which will be shared as a 50 Mbps conversation from A->B and 50 Mbps from B->A.

Despite this limitation hubs are great especially for small lab environments but unfortunately they are almost impossible to get nowadays.

Port mirroring / Span port / Tap port

This is usually the most available way of diverting traffic from a live network and is present on all equipment that has a CLI (OK, almost – but definitely on all decent, recent equipment). The configuration is usually very simple: you just need to identify the port to mirror and the port you want to direct your traffic to (on Cisco Catalyst switches it can also be a VLAN). Here is a sample config:

Switch(config)# no monitor session 1
Switch(config)# monitor session 1 source interface fastEthernet1/0/1
Switch(config)# monitor session 1 destination interface fastEthernet1/0/24
and for verification
Switch# show monitor session 1

This is a very fast thing to configure but it has the same drawbacks:

  • malformed frames will never be processed by the ASICs as they are dropped on the inbound side of the source interface
  • even though most vendors swear that this capturing is fully transparent, some protocols will be filtered out, so the trace is never complete

Port mirroring is excellent for protocol-related problems, but for lower-layer problems it must always be combined with interrogation of the source interface, which might be rather tricky in busy networks.

Linux bridge

I will add this only as a last-resort option, as it has so many drawbacks and such narrow utilization that I've never seen it used in real life. In Linux, if you have two interface cards you can bind them into a bridge group using the bridge control utilities and the brctl command. This will create an L2 bridge on which you can listen to traffic, but it is a classic bridge with all the disadvantages of being a full L2 device. Also, please check the spanning tree state on the bridge before you rely on it! The example below shows how to configure the bridge group and disable STP.

$ brctl addbr bridgename  – creates the bridge
$ brctl addif bridgename eth0  – adds an interface to the bridge (repeat for the second interface)
$ brctl stp bridgename off  – disables spanning tree on the bridge
$ brctl show  – shows all bridges on the system
$ brctl showstp bridgename  – shows the current status of spanning tree on the bridge in question

Linux bridging is a very useful thing to know and understand, especially for building VPNs and when you do some virtualization. For configuration details see the tutorial on the Linux Foundation site.

Hardware tap

This is probably the most expensive option of all the ones mentioned here, but it is my personal favorite. A network tap is an L1 device that basically works as a smart hub, but at gigabit speed. You can sniff either one direction or the other, or both at once (with the limitation of the downlink speed, of course). I was looking for a device that would be able to do this and wouldn't cost thousands of dollars (and I was actually considering building one myself). But then I found Dualcomm and specifically the Etap-2306. It is a very nice piece of equipment – very simple, it has two different modes for capturing and you can even inject some traffic if you want. My favorite feature is that it has SFPs – as the equipment I work with is 99% fiber only, this was a huge bonus. The other great thing is that it is USB powered, so you can run it off your laptop without any need for an additional power supply. The device costs about $670, so it is not the first choice if you have a strained budget, but after using it on several occasions I have to say it was worth every penny spent.

There is one not so obvious drawback when you start capturing data at speeds of hundreds of megabits – the speed of the HDD you are capturing to. Normally you would do this in the field with only your laptop, but the throughput of your laptop's HDD is actually almost always below 400 Mbps (more likely close to 200 Mbps). This will obviously differ between older laptops and newer ones with SSD drives, but it will be a bottleneck in most cases. There are ways to improve the capturing so this is not such a big problem, but eliminating the issue is very difficult in field conditions – in the lab you just use a PC with fast drives and plenty of RAM and that will do the trick.
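The disk bottleneck is easy to quantify; a couple of lines of arithmetic (with assumed, illustrative rates) show why a long capture at a few hundred megabits quickly outruns a laptop drive:

# How much disk bandwidth and space a capture needs (illustrative figures).
capture_rate_mbps = 400    # rate of the traffic you are trying to capture
disk_rate_mbps    = 200    # sustained write speed of a typical laptop HDD

write_mb_per_s = capture_rate_mbps / 8
gb_per_hour    = write_mb_per_s * 3600 / 1024
print(f"need ~{write_mb_per_s:.0f} MB/s sustained, ~{gb_per_hour:.0f} GB per hour of capture")
print("disk keeps up" if disk_rate_mbps >= capture_rate_mbps else "disk is the bottleneck")
# 400 Mbps -> ~50 MB/s sustained and ~176 GB per hour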

Part 3 – Replaying captured traffic

This might seem like a bit of a useless thing to do – why would you want to do this if you can see the whole stream in the Wireshark analyzer? Well, it is extremely useful for replicating a problem in the lab, where you can monitor the equipment more closely and also change the configuration while the problematic event is happening. Replaying pcap files is a very common feature on testing equipment, but that costs thousands or tens of thousands of dollars. There are also quite a few pieces of software that do it as one of their functions, but I have found those usually have problems either with timing or with sending the packets in a different way than they were originally received. After quite some research I found PlayCap, which ideally matched my requirements.

PlayCap

PlayCap is the software of choice for replaying pcap files from your PC – it is extremely simple and has Windows and Linux versions. The most important thing about this software is that it rigorously follows the timing of the packets in the pcap files, which most of the other pcap players don't do.

Part 4 – Creating your own traffic

If you need to prove a theory, there might arise a need to actually send a frame of a specific type (e.g. a multicast frame) or with a specific payload, without having equipment that can do it. This is a difficult problem even for small testing equipment, as it is not primarily meant to do these things. With big testers you usually can do this but must pay for a special license, not to mention the impossibility of getting such a device anywhere near the field (or even another lab in the same building). After a rather long and unfruitful search I found a project called PackETH, which actually does quite a few of the above-mentioned things.

PackETH

The project can be found on SourceForge and is just great. I have been using it for about 3 years now and have performed so many tricks with it! Have no equipment to generate QinQ, IGMP or L2 multicast? Want to verify unknown unicast flooding, send fake ARPs or test IPv6? Yes, all of those (and more) you can actually do in PackETH – as with Wireshark, this is an absolutely fabulous piece of software which runs on both Windows and Linux. As there are very specific things you can do with this software, I will most likely write a separate article just about it.

Part 5 – Analyzing the traffic

The whole exercise of getting the traffic (or even creating it) is done so one can easily troubleshoot and replicate issues in the lab or in a live network. But once the data is on a PC, only one step can follow – packet trace analysis. There is only one name I can mention here – Wireshark. It is the ultimate tool for traffic analysis. I will not write about how to use Wireshark as it is a topic for a standalone book. But there is one thing I will say – if you want to call yourself a network engineer, understanding the basics of this software is simply a must. If you can write decent filter expressions and know where to look to find the flows or conversations between endpoints, it will make your life much easier. As mentioned above multiple times, I will most likely write a “how to” article for some scenarios that are interesting from my point of view – but the documentation and community around Wireshark is huge, so if you want to know anything just go to the project's wiki page where you'll find plenty of useful stuff (including some sample captures of protocols).

This is the end of this first article in a long time, but I will try to write follow-ups soon, as I should have more time and more interesting things to write about.