EtherSam (Y.1564) explained

This is the last article (at least for now) in the series about testing methodologies and testing standards. I will still cover bits and pieces of testing in general, but it won't be as heavy on theory, as I want to write some "hands-on" scenarios for the combined use of Wireshark and PackETH, as well as some multicast scenarios. I will also be doing more Cisco and Juniper work, so it is quite likely I will be blogging some configs and labs. Anyway, enough about future plans, let's start with the topic at hand.

Introduction

The ITU-T Y.1564, more commonly known as EtherSam (the name originated in the standard's old designation, ITU-T Y.156sam), is a service activation test suite whose goal is to allow rapid link testing when deploying services. Its main advantage is that it allows SLAs (Service Level Agreements) to be tested while a new service is being deployed, and it can do that regardless of the physical topology (i.e. it can verify an end-to-end SLA even in a live environment with live traffic flowing through the network).

There are a few serious considerations that make this test suite a bit awkward to use.

The first is that this is a very new standard (initiated in 2009, published in 2011) that is still changing, as new drafts are still being issued.

The next, rather serious, problem is that this test suite is for "service activation", which in plain language means it is no good for lab testing, as it doesn't really stress the equipment. The reason is that EtherSam is designed around the idea of rapid deployment of new links/services in telcos (I will write about the disadvantages of this design later).

The last issue is that, being a new standard, it is rather unknown among network engineers, so some education is needed before it can be used.

Traffic parameters

The theory behind this test suite sits roughly halfway between the RFC2544 and BERT tests, as it tries to take the best of both while achieving similar results. Let's start with definitions, as they are the most important. In EtherSam you can configure multiple concurrent services, and each service can have the following four parameters:

  • CIR – Committed Information Rate
  • CBS – Committed Burst Size
  • EIR – Excess Information Rate
  • EBS – Excess Burst Size

This is not as complicated as it might seem at this point. These values are only used to define the SLA. The CIR defines the minimal amount of traffic within the available bandwidth and must always be fulfilled. If only a CIR is specified for a link/service, it is good practice to allocate some bandwidth to the CBS as well, as it allows for a small overshoot when the traffic is bursty. Obviously, one might need more flexibility in how much traffic to pass through (as with over-subscription), where some frame loss is acceptable in exchange for more data being delivered; that is the Excess Information Rate. Once an EIR is in place, burst traffic beyond the CIR is counted against the EIR, so the CBS setting loses much of its meaning. If you want a little more flexibility for more bursty traffic, you can specify an EBS on top of the EIR.

Traffic coloring

In the paragraph above I described two of the three traffic types that exist in EtherSam, referred to as green traffic (conforming to CIR+CBS) and yellow (conforming to EIR+EBS). The standard also defines red traffic, which is traffic conforming to neither CIR nor EIR. Under the EtherSam methodology this traffic should never be passed and must be dropped. This looks like an absolutely trivial and obvious rule, but it has one very serious consequence in deployments with over-subscription in place: you must define the EIR as the "shared" part of your QoS with a specific size allocated to it. Having a random amount of free-to-grab bandwidth for the tested service will result in failing the test, as passing red traffic is a fail criterion in Y.1564.

Traffic profile for EtherSam - coloring

Bandwidth profile parameters – Coupling flag and Color mode

I am describing these two parameters here solely because they are defined in the standard. I would like to stress that I haven't seen them implemented in any testing equipment so far, so this section will be rather short and most people can skip it, as it has little to no practical use (at least at the time of writing). These two parameters adjust the metering algorithm and thus change the result, and they are only valid in certain scenarios; a sketch of the metering they influence follows the list.

  • CF – Coupling flag – can only be set on or off. It is only useful when introducing a new service into a live environment with extremely bursty traffic: it couples unused green bandwidth into the yellow allowance, allowing higher throughput.
  • CM – Color mode – selects between color-aware and color-blind metering. In color-aware mode the meter honors the colors already marked on the incoming traffic by the existing network rules, whereas in color-blind mode any pre-existing coloring is ignored and every frame is metered from scratch.
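
To make the coloring concrete, below is a minimal sketch of the two-token-bucket metering such a bandwidth profile implies, including the effect of the coupling flag. The class and parameter names are mine, not the standard's; rates are in bits/s, bucket sizes and frame lengths in bytes:

    from dataclasses import dataclass

    @dataclass
    class Meter:
        cir: float                # Committed Information Rate, bits/s
        cbs: float                # Committed Burst Size, bytes
        eir: float                # Excess Information Rate, bits/s
        ebs: float                # Excess Burst Size, bytes
        cf: bool = False          # coupling flag
        c_tokens: float = 0.0     # committed bucket fill
        e_tokens: float = 0.0     # excess bucket fill
        last_t: float = 0.0

        def color(self, frame_len: int, now: float) -> str:
            """Refill both buckets for the elapsed time, then classify one frame."""
            dt, self.last_t = now - self.last_t, now
            filled = self.c_tokens + self.cir / 8 * dt
            spill = max(0.0, filled - self.cbs)        # tokens the C bucket cannot hold
            self.c_tokens = min(filled, self.cbs)
            extra = self.eir / 8 * dt + (spill if self.cf else 0.0)
            self.e_tokens = min(self.e_tokens + extra, self.ebs)
            if frame_len <= self.c_tokens:             # green: conforms to CIR/CBS
                self.c_tokens -= frame_len
                return "green"
            if frame_len <= self.e_tokens:             # yellow: conforms to EIR/EBS
                self.e_tokens -= frame_len
                return "yellow"
            return "red"                               # red: must be dropped

    m = Meter(cir=20e6, cbs=30_000, eir=10e6, ebs=30_000)
    print(m.color(1518, now=0.001))                    # 'green' while CIR tokens last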

The Service Configuration test

This is the first test you can run and it is meant to test an individual service. The aim is to verify that the configured CIR/EIR (and optionally CBS/EBS) are actually honored. It is a rather simple test, but beyond the obvious CIR/EIR/policing checks it allows for some variability, offering the following options:

  • Fixed frame size or EMIX pattern (1518, 1518, 1024, 64, 64)
  • optional step load (25%, 50%, 75%, 100%)
  • optional burst test for the CBS and EBS (defined in bytes)

If you have multiple services configured, each one is tested separately, so be careful about the time estimate, as this test is not intended to run for a long time. Especially with ramped services it is important to realize that the total duration will be number of services x number of steps x step time. Also, the CBS and EBS are tested separately, adding more time. In total this should not take more than about 10 minutes, as this test is not supposed to replace long-term tests.
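
A quick back-of-envelope estimate of the duration; all the numbers below are made-up examples, as step and burst times depend on the tester:

    services   = 4     # concurrently configured services
    steps      = 4     # the 25/50/75/100 % ramp
    step_time  = 15    # seconds per step
    burst_time = 10    # extra seconds per service when CBS/EBS bursts are enabled

    total = services * (steps * step_time + burst_time)
    print(f"estimated duration: {total} s (~{total / 60:.1f} min)")   # 280 s (~4.7 min)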

The Service Performance test

This test is the second (and last) test you can do in Y.1564 and is in place to test all services in one go, in order to check that the sum of the CIRs is actually available on the path in question. It is also meant to be a long test, with specified durations of 15 minutes, 2 hours and 24 hours. The EMIX and ramped traffic options should be available as in the previous test.

I think that this test, thanks to its simplicity, can replace BERT in many cases while giving better results for service providers.

The results and pass/fail criteria

The pass/fail criteria are rather obvious:

  • Fulfilling the CIR (or CIR+CBS)
  • Fulfilling the EIR (or EIR+EBS)
  • Policing of traffic overshooting CIR+EIR+EBS
  • Conforming to the maximal acceptable delay variation (jitter)
  • Conforming to the maximal acceptable round-trip latency
  • Conforming to the SLA's frame loss (or availability)

These are solid criteria and there is not much you can say against them, but as always there are some considerations that must be taken into account.

The first one I have already mentioned: there is no way for Y.1564 to accommodate a shared "best effort" overshoot above the defined CIR+EIR, which might be a problem in some scenarios, though I think it can be worked around with a hacked EIR/EBS configuration.

The second is the SLA frame loss, better known in the telco world as availability. If you provide, say, 99.99% availability, it means that on a 100Mbps stream it is acceptable to lose over 2,000 frames in a single hour, which I don't think would be found acceptable in most environments. As far as I know there is no way to set the availability to 100% (and no SLA would ever have this number in it). I am not currently aware of any possible workaround, so the only advice is to go through the data in the results table very carefully and set this option as close to what you expect of the test as possible (i.e. in my opinion, under normal circumstances there should be 0% packet loss in a 2-hour test on most systems).
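
To put a number on it, here is the arithmetic behind that claim, assuming full line rate and the 20 B of per-frame overhead (preamble plus inter-frame gap):

    availability = 0.9999
    rate_bps = 100e6

    for frame in (64, 1518):
        pps = rate_bps / ((frame + 20) * 8)            # frames per second on the wire
        lost_per_hour = pps * 3600 * (1 - availability)
        print(f"{frame:5d} B: {pps:8.0f} pps -> up to {lost_per_hour:6.0f} frames/h lost")
    #    64 B:   148810 pps -> up to  53571 frames/h lost
    #  1518 B:     8127 pps -> up to   2926 frames/h lost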

The last thing I would like to mention is that there is no built-in out-of-sequence counting mechanism. This might sound like an unnecessary feature, but in a voice-enabled environment it is actually a very important parameter to observe.

Conclusion

The EtherSam is a rather interesting test suite, but in my opinion it cannot (and was never meant to) replace RFC2544. In some ways it can partially replace BERT in some field operations. I have to say I welcome this standard, as it addresses the last bit of testing that, to my knowledge, was not properly covered by any Ethernet/IP testing suite. It obviously has some drawbacks, but I think it has its place in the field service activation environment. Only time will tell whether it becomes as widespread as RFC2544, but I certainly hope so.

Bit Error Rate Test (BERT) explained

This article will be rather short in comparison with the others in this mini-series about Ethernet/IP testing methods, but it is a necessary one: Bit Error Rate Tests have a long tradition in the telco environment (circuit-based networks) and are still quite valid in today's packet networks, at least for some specific cases. So without further delay, let's start with some theory behind the testing, followed by some use cases and best practices.

BERT introduction

As you can guess from the name, this test really probes the physical layer for any anomalies. This stems from the test's origins, where T1/E1 circuits were tested and each bit in each time-slot mattered, as providers used them up to the limit because bandwidth was scarce. Also, as most of the transferred data were voice calls, any pattern alteration had quite serious implications for the quality of service. This also led to the (in)famous five-nines reliability, the 99.999%, which basically states that the link/device must be available for 99.999% of a specified SLA period (normally a month or a year). One must remember that redundancy was rather rare, so the requirements on hardware reliability were really high. With the move away from circuit-based TDM networks towards packet-based IP networks the requirements changed: bandwidth is now abundant in most places, the wide deployment of feature-rich Ethernet and IP devices provides plenty of options for redundancy and QoS, and packet-switched voice traffic is on the rise. One might think it is no longer necessary to consider BERT as a test method, but that would be a huge mistake.

Why BERT

There are a few considerations that can make BERT an interesting choice. I will list those I think are the most interesting.

  1. It has been designed to run for extended periods of time, which makes it ideal for acceptance testing, which is still often required
  2. BERT is ideal for testing jitter, as that was one of its primary design goals
  3. The different patterns used in BERT can be used for packet optimization testing (I will discuss this later in more detail)
  4. Most BERT implementations are smarter than just counting bit errors, so the test can be used for other purposes too

BERT Physical setup and considerations

On an Ethernet network you cannot run a simple L1 test unless you are testing just a piece of cable or possibly a hub, as all other devices require some address processing. This makes the test different from one on an unframed E1: we need to set the framing to Ethernet, with source and destination addresses defined on the tester. Also, since Ethernet must be looped at a logical level, it is not possible to use a simple RJ45 with a pair of wires going from TX to RX as you could with E1; either a hardware or a software loopback reflector is required. Most testers will actually allow you to specify layers 3 and 4 as well, with IP addresses and UDP ports. The reason is usually so that the management traffic between tester and loopbacks can use this channel for internal communication.

Pattern selection options

As this test originates from the telco industry, some interesting options are usually presented on the testers. The stream generator can produce these patterns:

  1. All zeros or all ones – specific patterns originating from the TDM environment
  2. 0101 and 1010 patterns – patterns that can be easily compressed
  3. PRBS – Pseudo-Random Bit Sequence – a deterministic sequence that cannot be compressed/optimized; the details and the calculation can be found on Wikipedia
  4. Inverted PRBS – the same as above, but with the feedback function inverted to counter any "optimization" tuned for plain PRBS

The thing to remember is that the PRBS is applied to the payload of the frame/packet/datagram, so if any sort of optimization is present it will have no effect, as PRBS is by design not compressible. There are various "strengths" of the pseudo-random pattern; the higher the number, the less repetition it contains. Normally you will see two main variants: 2^15, which is 32,767 bits long, and 2^23, which is 8,388,607 bits long. Obviously, the longer the pattern, the better and more "random" the behavior it emulates.
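
For illustration, a minimal software generator for both patterns; the feedback polynomials (x^15+x^14+1 and x^23+x^18+1) are the usual ones from ITU-T O.150, and real testers implement this in hardware:

    def prbs_bits(degree: int, tap: int, nbits: int):
        """Fibonacci LFSR yielding a maximal-length sequence, period 2**degree - 1."""
        state = (1 << degree) - 1                  # any non-zero seed works
        for _ in range(nbits):
            bit = (state ^ (state >> (degree - tap))) & 1
            state = (state >> 1) | (bit << (degree - 1))
            yield bit

    prbs15 = list(prbs_bits(15, 14, 64))   # pattern repeats every 32,767 bits
    prbs23 = list(prbs_bits(23, 18, 64))   # pattern repeats every 8,388,607 bits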

Error injecting Options

As this test originated in the telco world, injecting errors was once a major feature, but in Ethernet networks it has lost its importance. If you inject even a single bit error into an Ethernet frame, the CRC will be incorrect and the whole frame should be dropped by the first piece of L2 equipment it passes through, which should always result in an LoF (Loss of Frame)/LoP (Loss of Pattern) alarm.

Use cases, Best Practices and Conclusion

The most common use case for BERT in today's networks is commissioning new links, as you can run a fairly simple test for a long time that will give you a reasonable idea about the link's quality in terms of frame drops and jitter.

A few recommendations on how to run this test:

  • Use the largest pattern you can.
  • Remember that the line rate and L2 rates will be different because of the overheads.
  • Remember that 99.999% availability still allows roughly 0.86 s of outage in 24 hours (which can be quite a lot of frames).
  • PRBS cannot be optimized/compressed.

So as you can see, BERT is a rather simple and straightforward test which, even though it has in many ways been superseded by RFC2544 and others (like Y.156sam), is still a very good test to know, especially if you are in a jitter-sensitive environment, e.g. where VoIP or IPTV is deployed.

RFC2544 Testing explained

In this next article in the mini-series about testing Ethernet/IP networks I will write about one of the most common tests: RFC2544, "Benchmarking Methodology for Network Interconnect Devices". The purpose of this test is quite often misunderstood, even though it is clearly stated in the introduction of the standard itself. So let's start by clarifying what this testing suite is and what it should be used for.

Introduction and considerations

As the standard says right at the beginning, this test suite exists so that customers have a single point of reference when testing network equipment capabilities. The intent is to evaluate a single piece of equipment and provide results that can easily be compared between vendors. This approach has some very obvious advantages and some not so obvious drawbacks, both of which I will try to cover in this article.

The advantages of the test suite can be covered rather quickly, as they are for the most part obvious. The suite was designed to provide a vendor-independent, comparable test with clear and easy-to-understand results. The tests measure behaviors and variables that are an absolute "must know" for any new network element being introduced into a network. What the tests cover, and how, is detailed below, and I have to say most of the methods are still very valid even though the standard was approved in 1999. The main advantage (or perhaps disadvantage) is the test's popularity, as over the years it has become the most used standard for Ethernet/IP network testing.

The disadvantages of RFC2544 are a bit more obscured but rather serious. The first problem I encounter a lot is that the suite's popularity often results in two interconnected issues: it gets used in the wrong place (where another test would be more suitable) and its results get misunderstood or misinterpreted. I hope this article will shed some light on the test procedures, the variables entering the tests, and consequently the expected results.

The other important thing to consider when deciding whether to use this test is that the suite was created to test standalone network elements; even though it can be used for service activation/acceptance testing, that is not its primary focus and the testing procedures must be adjusted. In some cases it will not be suitable at all (that is why there are specialized test procedures for those scenarios).

The last consideration, or problem, of the RFC2544 suite is that it was created and approved for the devices that were around in 1999 (routers, L2 switches, hubs etc.), so it is designed in a way that is quite different from today's multi-service environment. Also, the intent was to test purely native Ethernet devices (and the now-legacy Ethernet transport over FDDI and token ring), so using it in a telco environment, where quite a lot of equipment still uses ATM (at least internally), can lead to very interesting results. I will discuss this later in the article.

Physical setup

The first thing to consider when using the RFC2544 test suite is which physical setup you will use and what repercussions that has on subsequent evaluation and fault-finding. The three main options are:

  • Reflected scenario (uses a dedicated hw/sw loopbacks)
  • Unidirectional scenario (uses one stream and two testers)
  • Bidirectional scenario (uses two streams from two independent testers)

Each of these scenarios is useful in a different use case.

The most common one for field operations is the reflected scenario, where the tester is in one location and the dedicated reflector/loopback is at the other end of the tested line, wherever that is. The main problem with this scenario is that there is no way to tell in which direction an encountered problem lies, as the upstream is just the downstream with MAC and IP addresses swapped by the loopback unit. This is not a big deal in a lab environment, but it might be a crucial consideration when the loopback is deployed in the field, a couple (or tens) of kilometers away.

The other problem is that the loopback can be created in software on the measured equipment itself, which adds a layer of uncertainty to the results, as the device can (and in most cases will) behave differently than it would with an external hardware loopback.

The last issue with the reflected scenario is that, in my experience, the loopbacks are not exactly reliable and can themselves introduce unexpected behavior into the measured values.

Unidirectional testing uses one sender and one smart receiver that evaluates the data stream. Its usefulness is rather limited, as hardly any real traffic is exclusively unidirectional. It is a rarely used setup, mainly for fault-finding, though it can also be useful in asymmetric networks.

The bidirectional scenario is probably the most precise way to perform most testing, as it runs one separate stream in each direction, each evaluated at the other endpoint. This is commonly called "dual test set" or "dual test" in tester setups. The obvious drawback is that you need either two testers (in the case of small field units) or a correctly wired and configured big tester, which might get tricky. As the supply of testers is limited due to their high price, this scenario is not well suited to field operations, since it requires good logistics when moving the testers between sites.

Ethernet frames

The Ethernet frames used in this suite are based on the Ethernet standard and are used for multiple tests in the suite. The size distribution is not random but tries to cover the most important frame sizes likely to be present in an average network: 64, 128, 256, 512, 1024, 1280 and 1518 bytes. It is very important to note that these are frames without the 802.1Q tag, so for tagged traffic the minimal value should be 68B for the traffic to be valid and passed through a correctly-behaving device.

There is an interesting discussion point: can a 64B frame with a VLAN tag exist? Seemingly in contradiction with my earlier statement, the answer is yes, as the 802.1Q shim actually takes space from the payload part of the frame. Such a frame should even be passed through and processed correctly as long as no equipment tries to remove the tag. Once that happens and the VLAN tag is removed, the frame becomes a runt (a frame smaller than the minimal required size) and must be dropped at the outbound interface before being sent anywhere.

RFC2544 – Throughput test

The throughput test is rather basic and even the name is self-explanatory: it measures the maximal amount of data you can pass through a device or link. The throughput is measured for the distribution of frame sizes mentioned above, one trial for each frame size.

The frame sizes are the first thing specified; the second variable, which is in the standard but not present on any testing equipment I've seen, is the packet type. The test stream normally consists of unicast UDP datagrams, but the standard recommends using other packet types as well, specifically broadcast frames, SNMP-like management frames, and multicast resembling routing updates. As far as I know, these recommendations are not being observed and only unicast UDP streams are used. On some equipment you can set the L4 header to TCP, but be aware that this doesn't behave like real TCP (there are no ACKs being sent and no window mechanism employed on the data stream).

So what are the steps taken in this test, and which variables can you use to adjust it?

The steps are as follows:

  1. Discovery phase (checking that the other end of the tested link is reachable)
  2. Learning phase (binary division to find the maximal throughput)
  3. A contiguous stream of the given frame size at the speed found in step 2, for a time of at least 1 second
  4. Evaluating speed/drops/pattern changes and graphing them

This seems pretty straightforward, but I would like to stop at step 2, as the way the maximum is determined is quite interesting. The method commonly used is called binary division, and I would like to show how it works here because, even though it is a simple concept, it is surprisingly difficult to find decent information on it. Let's assume our equipment can only pass 60Mbps and the negotiated line rate is 100Mbps. The binary division uses the negotiated line speed as the default initial maximum, but if a specific value is set, that is used instead. This is the trialling procedure:

  1. 100Mbps – initial maximum – fail (100 > 60Mbps)
  2. 50Mbps – 1/2 of interval 0-100Mbps – success (50 < 60Mbps)
  3. 75Mbps – 1/2 of interval 50-100Mbps – fail (75 > 60Mbps)
  4. 62.5Mbps – 1/2 of interval 50-75Mbps – fail (62.5 > 60Mbps)
  5. 56.25Mbps – 1/2 of interval 50-62.5Mbps – success (56.25 < 60Mbps)
  6. 59.37Mbps – 1/2 of interval 56.25-62.5Mbps – success (59.37 < 60Mbps)
  7. 60.9Mbps – 1/2 of interval 59.37-62.5Mbps – fail (60.9 > 60Mbps)
  8. 60.1Mbps – 1/2 of interval 59.37-60.9Mbps – fail (60.1 > 60Mbps)
  9. 59.7Mbps – 1/2 of interval 59.37-60.1Mbps – success (59.7 < 60Mbps)
  10. 60Mbps – 1/2 of interval 59.7-60.1Mbps – success (60 = 60Mbps)

Step 10 is slightly adjusted for brevity, but the procedure should be clear now: any further increase in TX rate would result in frames being dropped.
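
The same hunt takes only a few lines of code. In the sketch below, trial() is a stand-in simulating a device that forwards at most 60 Mbps; in a real setup it would be one short measurement run:

    def trial(rate_mbps: float, device_limit: float = 60.0) -> bool:
        """Stand-in for one trial; True means no frames were lost at this rate."""
        return rate_mbps <= device_limit

    def find_throughput(line_rate: float = 100.0, resolution: float = 0.1) -> float:
        low, high = 0.0, line_rate
        if trial(high):                  # the initial maximum already passes
            return high
        while high - low > resolution:   # halve the interval until precise enough
            mid = (low + high) / 2
            if trial(mid):
                low = mid                # passed: the throughput is at least mid
            else:
                high = mid               # failed: the throughput is below mid
        return low

    print(f"measured throughput: {find_throughput():.1f} Mbps")   # ~60.0 Mbps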

The configurable variables in the throughput test are:

  • maximal speed (for equipment with a line rate higher than its throughput)
  • accuracy (acceptable variance in the stream)
  • number of validations – the number of trials for each frame size

The results of the throughput test should be presented in a table and a graph. The advertised throughput should be based on the result of this test with 64B frames.

So this seems like a pretty straightforward test, but it is also one of the most misinterpreted. The problem is that most people evaluating the results forget to count all the overheads present in an Ethernet/IP network. What do I mean by this? The speed of any Ethernet link has a face value of 10/100/1000Mbps, which is the line rate at the physical layer, so if you want to calculate the effective throughput at L2 you must discount all the Ethernet framing. You think it cannot be that much? Let's do the math for the worst case, the 64B frame, as that is the one the standard says should be the compared benchmark.

The Ethernet frame in numbers looks like this: 7B preamble + 1B start-of-frame delimiter + 12B of addressing + (optional 4B for the 802.1Q shim) + 2B of type/length + 46B of payload + 4B CRC + 12B of inter-frame gap. In total we have 38B of overhead to send 46B of L3 data (without the 802.1Q tag). So what does this give us? It all depends on what the tester does and does not take into account. Most testers disregard the 8B of preamble and the 12B of inter-frame gap and count only the logical frame they can analyze, so out of 84B on the wire you will have 64B of measured traffic. That leaves us with the following recalculated maximal achievable speeds for the basic frame size distribution on a Fast Ethernet link:

Frame size     pps    Mbps
64          148809    76.1
128          84459    86.4
256          45289    92.7
512          23496    96.2
768          15862    97.4
1024         11973    98.0
1280          9615    98.4
1518          8127    98.6
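
Small rounding differences aside, the figures in the table can be recomputed from the per-frame wire overhead alone:

    line_rate = 100e6                                  # Fast Ethernet, bits/s

    print("Frame size     pps    Mbps")
    for size in (64, 128, 256, 512, 768, 1024, 1280, 1518):
        pps = int(line_rate // ((size + 20) * 8))      # +20 B: preamble + inter-frame gap
        l2_mbps = pps * size * 8 / 1e6                 # rate of the logical frames only
        print(f"{size:10d}  {pps:6d}  {l2_mbps:6.1f}")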

So as you can see, the throughput efficiency grows with the frame size but never reaches 100% due to the previously discussed overheads. In telco you can still encounter a lot of devices that, at least internally, use ATM. The mechanism can be proprietary, but the general idea is described in RFC2684. I mention this because you can actually spot it while evaluating the throughput test: Ethernet-to-ATM encapsulation is very inefficient for frames of around 100B, as an ATM cell has only 48B of payload, so splitting a 100B frame creates 3 ATM cells, of which the last is mostly empty, leading to almost 30% inefficiency compared to the Ethernet line rate.

RFC2544 Latency test

Latency is the delay between a frame being sent and its reception at the other end of the measured link. There are two main types of latency we encounter in common situations. The first is One Way Delay, where the latency is precisely measured on a unidirectional stream. The second, and more common, is Round Trip Time, which is the default in most scenarios (like ICMP ping) and, in our case, applies whenever you use reflected traffic. It is important to remember that RTT is cumulative and the delay might differ between upstream and downstream, so even though fairly advanced calculations are used to achieve the best possible outcome, it should always be taken as an indicative measurement. The latency mechanism described for this test is closer to One Way Delay, but how it is calculated, and what exactly is displayed, depends on the tester.

The latency test uses all frame sizes from the standard distribution, each at the maximal speed measured for it in the throughput test. Each trial should run for a minimum of 120 seconds and should be repeated 20 times; this should be configurable, as it is excessively long for most cases.

There are two modes of measuring latency, cut-through (bit-forwarding) and store-and-forward, both well defined in RFC1242, and they usually produce different results (because of how the modes work and when the measurement takes place). This has one major disadvantage: these modes were selected in an era when Ethernet equipment mostly supported exactly those two, but 14 years later the world looks quite different, and a third, "hybrid" method is now the most common way of forwarding frames in Ethernet networks. This is why I would recommend looking at the cut-through results rather than the store-and-forward ones.

The most important thing to know about latency testing is that the result will always be displayed as an average of all trials for one frame size, with optional additional information (min, max, deltas, means etc.).

Some testers will also provide information about jitter, but be aware that it is not part of RFC2544, even though it is a very important thing to observe, especially in a voice-enabled environment.

Frame Loss

The frame loss test uses the maximal frame rate previously determined in the throughput test, so it should always be run in conjunction with it. It simply tries all frame sizes at the determined frame rate and checks what percentage is lost. The test runs in steps, each subsequent step using a lower rate (by 10%), until the ratio between sent and received frames is 1 (i.e. all frames are received). Obviously, no lost frames at the maximal throughput rate is required for this test to be considered successful.
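
A sketch of that step-down logic; run_trial() here simulates an imaginary device that starts dropping frames above 80% of the maximal rate, standing in for a real tester run:

    def run_trial(rate_pct: float) -> float:
        """Return the loss ratio measured at rate_pct of the maximal rate."""
        return max(0.0, (rate_pct - 80.0) / 100.0)

    def frame_loss_sweep(step: float = 10.0) -> dict[float, float]:
        results, rate = {}, 100.0
        while rate > 0:
            loss = run_trial(rate)
            results[rate] = loss
            if loss == 0.0:            # all frames received: the sweep may stop
                break
            rate -= step               # next step runs 10 % slower
        return results

    print(frame_loss_sweep())          # {100.0: 0.2, 90.0: 0.1, 80.0: 0.0}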

RFC2544 Back-to-Back test

The back-to-back test is in place to test the equipment's behavior in the presence of bursty traffic, in other words the operation of its buffers. As in all previous tests, each frame size gets its own trial, in which frames are sent in bursts (defined in seconds) with minimal inter-frame delay (also known as line rate). The initial burst must be at least 2 seconds long. When the number of received frames is not equal to the number sent, the burst is shortened by one frame per trial until the maximum is found. The trial for each frame size should be repeated 50 times.

The important thing is that, as there are multiple trials, the result will always be presented as the average of those trials. Some testing equipment can provide more information, like the standard deviation and separate stats for each trial.

RFC2544 System recovery test (from overload)

This is a bit of an odd, probably obsolete test, but it is worth mentioning even though I haven't seen any testing equipment that implements it, as the idea behind it is still very valid. The test subjects the equipment to an overload of 110% of the measured throughput for a minimum of 60 seconds, after which the tester drops its TX rate to 50%. The aim is to measure the time difference between the tester switching from 110% to 50% and the moment 50% is actually received. The reason is to verify there are no problems with buffering (underflows/overflows/reading/writing etc.) and to gauge the amount of buffering and backlog processing.

The main issue with this test is that 110% usually makes a very small difference and most equipment deals with it quite well. The other problem is that equipment capable of line-rate speeds is basically impossible to overload this way (or at least it is very impractical). Also, this test was intended for equipment with no or limited QoS capabilities, which you will hardly find nowadays outside the SOHO segment (and even there it is rare).

RFC2544 Reset test

This is the last test in the suite. Its purpose is to measure the time from an outage caused by either a hardware or software reboot until full service restoration. This is a very useful test, as it can expose various race conditions, like what happens if you hammer the interface throughout the start-up sequence. But for the most part everyone is only interested in the fact that the unit boots (and the approximate time it takes).

Conclusion

As you can see, the tests in this suite are fairly simple, but their interpretation can be confusing. I think it is clear that the throughput, frame loss and latency tests can be used for link activation/acceptance, but they have some rather serious flaws, as they weren't designed for this purpose. In the next articles I will discuss the telco-standard BERT testing and the newer service activation standard Y.156sam, also known as EtherSAM, for link activation.

PackEth tutorial part II – The Gen-B,Gen-S and PCAP options

This is the second part of an article I wrote some time ago about the great tool called PackETH. This article will be much shorter, as it focuses on the less complicated (but no less useful!) modes of the tool.

In the previous part I described how to build your own packet from L2 to L4, but what if you need something else? Maybe not a single packet but a burst of packets? Or what if you need to send multiple streams of various frames? Then you need the Gen-B and Gen-S modes.

Gen-B

The name stands for "generate burst". The necessary prerequisite for using this mode is to have a ready and valid packet loaded in the Builder mode. Once you have that, you can run the burst generation. This is how the GUI looks:

Gen-B

The "Number of packets" field is, I think, rather self-explanatory. The second field is more interesting and deserves our attention, as it can be used for buffer testing. The "Delay between packets" field actually refers to what is correctly called the Inter-Frame Gap (IFG), the delay between the end of one frame and the beginning of the next. The physical medium limits how fast frames can be sent; when the "max speed" checkbox is in effect, you will send all the frames with the minimal IFG. This situation is also known as a back-to-back scenario. Under normal circumstances frames would not be played out of a buffer like this unless the buffer was full. So what is this good for? Traffic can be very bursty, and the software on your network equipment might not handle it very well, so testing this behavior before you introduce new equipment into your network is highly recommended (and is actually standardized as part of the RFC2544 test suite).

The other interesting thing you can do in Gen-B mode is to change the frame's content on the fly (relative to the original you built in Builder mode). The things you can do with it are quite wild; let's look at the options and what they do:

  • do nothing – sends the exact frame you have in Builder mode, with no changes
  • MAC set random source address – rather self-explanatory – good for testing L2 paths and load-balancing mechanisms
  • IP set random source address – rather self-explanatory – good for testing L3 paths and load-balancing mechanisms
  • RTP options – allows you to simulate a more realistic RTP stream
  • Change byte X and Y values – probably quite useful for protocol developers

Gen-S

This is a stream generator, as there is quite often a need to play different types of traffic. For example, you may want to try a nice voice traffic stream you have captured or built, in combination with various bursty types of traffic, and see how your network deals with it. That is why you can select up to 10 packets in pcap format and give them some basic parameters (as seen below).

Gen-S

You can also run these streams in cycles so the traffic pattern repeats itself indefinitely, and you can enable/disable individual streams on the fly to alter the traffic mix.

The PCAP

The PCAP tab's only function is to open a packet in pcap format (not pcapng!) and load it into the Builder once it is selected. On the other hand, the same can be achieved using the load button in the Builder, so I am a bit unsure what the extra feature is supposed to be here.

The Summary & a small testimonial

Well, what to say at the end? I hope I was able to describe PackETH's features with a few minor hints at what they can be useful for. I will most likely include PackETH in some of my other articles as a method of testing things while playing with various lab scenarios. The truth is that it cannot replace a proper Ethernet tester, but taking into account its flexibility, stability, and the fact that it is free, I must say I can only rate it as highly as possible.

Before I really end this article, I want to add one more thing, a small testimonial. I successfully used PackETH over a period of about three years for various testing, ranging from proving equipment behavior for L2 broadcasting/multicasting, faking ARP and ICMP messages, and invoking network behaviors, to proving equipment's handling of QinQ, and it has always been a tool in my software toolbox I could completely rely on. There aren't many pieces of software like this one, and I would recommend it to anyone, anytime, for both training and troubleshooting purposes.

PackEth tutorial part I – The Interface and The Packet Builder

In my previous post I mentioned an ingenious piece of software called PackETH, and I promised to write a separate article about it, as I think this amazing tool deserves as much attention as I can give it. So what does this software do? Let me quote the author: "PackETH is GUI and CLI packet generator tool for Ethernet... It allows you to create and send any possible packet or sequence of packets on the Ethernet link." I would add that it is the only tool I have found that actually lets you assemble Ethernet frames and IP packets and does what you would expect, while being multi-platform and incredibly stable. I think I have never seen it crash, which speaks for itself. This article focuses on version 1.6, as that is the one with both Linux and Windows versions available. The drawback is that L3 IPv6 support is not included.

Introduction

Getting the package is quite simple, as the project is hosted on SourceForge. Versions are available for Windows, Linux and MacOS, but if you are using Linux there is a good chance the package is in your repository (it is present in Debian stable and most likely in Ubuntu). Installation is simple: on Windows just unpack the .zip file and run packeth.exe. And yes, it is completely standalone software, so no installation, no garbage in the registry etc. The package has all libraries included, so the folder after unpacking is about 18MB, which I think is very reasonable. I would recommend the Linux version, as along with PackETH you will most likely be using Wireshark as well, and that has some issues at L2 on Windows. If you are happy with L3 testing only, the OS choice is irrelevant.

The GUI opens in Builder mode, which is probably the most useful and most interesting of all the offered modes. You can switch between the modes in the top left part of the menu.

GUI-main-controlls

In the middle section you can save and load configurations you have made in the past for repeated testing. The interface button gives you a selection of interfaces PackETH can use; on Linux it is a simple ethX interface. Under Windows it is a bit more complicated, as PackETH doesn't use the "human readable" name (a.k.a. "Local Area Connection 1" or similar) but the system name, which looks like this:

\Device\NPF_{653EFF5C-E308-4494-A7DC-1C65E8BCE92F}

If you are wondering where that name comes from, it is an ID from the WinPcap library, and there is no simple way to find out which ID belongs to which interface. But as you normally want to use just one, you can simply disable all the others and read the ID in PackETH. The most important thing: if you are not a superuser (or don't have permission to access the network cards on the machine), the interface list will be empty.

The send button simply sends the frame/stream out of the interface. The stop button is useful only when you are running continuous streams.

Builder mode

This is the basic mode and the most interesting one for me personally, as it allows a complete build-up of an L2 frame / L3 packet / L4 datagram. It has multiple options and parts which make it incredibly handy.

Data Link Layer

Before going into the Data Link layer part of the builder, just a small reminder of what an Ethernet frame looks like:

ethernet frame

In the Link layer section you can choose which Ethernet standard you want to use for your frame. The majority of Ethernet traffic in networks nowadays is Ethernet Version II (also known as DIX); the last data I read on this topic, about 5 years ago, showed Ethernet II at about 95% of all Ethernet traffic. This should be taken into account when you build your frames, as the NIC in your PC might not even be able to build an 802.3 frame due to driver restrictions; I have seen this on multiple PCs. Also, the receiving party might drop this type of frame, as despite the attempted compatibility not many vendors actually care about it at all.

ethertype

If you choose the Ethernet Version II frame format, the ethertype field (also known as DIX type) becomes available. This field identifies which protocol is encapsulated in the frame. The current options are:

  • IPv4 – 0800
  • IPv6 – 86DD
  • ARP – 0806
  • User defined – whatever number you can fit into two octets

Be aware that there is no internal logic in PackETH stopping you from selecting, say, the IPv4 DIX type and then building an ARP packet at the higher layer. This is an incredible advantage if you want to test equipment behavior against invalid types of traffic, but it is easy to overlook when you are not after such a specialty.
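
For comparison, the same deliberately inconsistent frame can be built with scapy (the interface name and addresses below are just examples):

    from scapy.all import ARP, Ether, sendp

    # ethertype announces IPv4 (0x0800) while the payload is really ARP
    frame = Ether(dst="ff:ff:ff:ff:ff:ff", type=0x0800) / ARP(pdst="192.0.2.1")
    frame.show()                       # inspect the deliberately inconsistent frame
    # sendp(frame, iface="eth0")       # uncomment to put it on the wire (needs root)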

While we're at the Data Link layer, the next things after the ethertype are 802.1Q and the (in)famous QinQ.

As you can see in the picture of the Ethernet frame, the first field in the 802.1Q shim is the TPID, which identifies the following data as part of the shim rather than an ethertype. This field is followed by the PCP (Priority Code Point), which is defined in 802.1p and used for CoS. The priority can be selected from the menu, which also shows the standard meaning of the p-bits in question. But be aware that the numeric value actually means nothing unless the devices passing this traffic are p-bit aware. Also, the mapping shown is based on the standard's recommendations; in a real network it can be used however the network admins see fit.

802.1p

The CFI (Canonical Format Indicator) has been deprecated and re-used as drop eligibility, but generally it can be ignored, as most equipment ignores it anyway. The field of interest is the VID, which defines the VLAN ID. The catch is that it must be written in hexadecimal digits, which is not exactly user friendly but should be no big problem.

The biggest topic in this part is QinQ. Let's start with a definition: as the name suggests, QinQ is shorthand for all sorts of nested VLANs (aka 802.1Q in 802.1Q). The practice started as totally non-standard behavior, and as a result it was implemented in many different ways before a standard was written. The major issue is that the standard is fairly new and most network vendors still don't support the standardized version. Fortunately, PackETH supports all the versions that exist, plus more, as you can define whatever you want. So what are our options in the TPID field for the outer/SP tag?

ethertype-QinQ

  • 0x8100 – the common VLAN TPID – most vendors even nowadays support the original 802.1Q in 802.1Q; extremely common, the default almost everywhere
  • 0x9100, 0x9200 (and the missing 0x9300) – proprietary outer/SP tags used by vendors like Cisco and Juniper; fairly common on decent equipment
  • 0x88a8 – the 802.1ad format that almost no one supports

So the outer tag must be selected from a drop-down list, while the TPID of the inner/C tag is always 0x8100 (which is why that field is grayed out). The only thing left to do is fill in the VIDs for the outer and inner tags, and then select the next layer protocol, as the L2 configuration is finished.
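
For reference, the three stacks from the list sketched in scapy; the outer TPID is whatever you set as the Ether type, while the inner tag stays 0x8100 (the VIDs are examples):

    from scapy.all import Dot1Q, Ether, IP, UDP

    legacy = Ether(type=0x8100) / Dot1Q(vlan=100) / Dot1Q(vlan=200) / IP() / UDP()
    vendor = Ether(type=0x9100) / Dot1Q(vlan=100) / Dot1Q(vlan=200) / IP() / UDP()
    dot1ad = Ether(type=0x88a8) / Dot1Q(vlan=100) / Dot1Q(vlan=200) / IP() / UDP()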

Network Layer

At the network layer you can choose between ARP, IPv4 (and IPv6 in the newer version) and a user-defined payload. All of these are quite simple.

This is what the IPv4 setup looks like:

IPv4-header

Let's go through the header fields (a short scapy sketch of a few of them follows the list):

  • Version – should always be set to 4 (that's why it's called IPv4)
  • Header length – IPv4 has a variable header length due to the "options" field at the end of the header (this causes a lot of issues for L3-aware devices and was an important driver in IPv6 development). The header length is counted in increments of 4 bytes, so the most common value of 20 bytes equals the default value of 5. This field uses hexadecimal numbers, so the allowed values are 1 to F
  • ToS field – no longer used as ToS (Type of Service) in about 99% of networks; it has been deprecated in favor of DSCP (Differentiated Services Code Points, defined in RFC2474), and these are the options available:

ToS  and DSCP options

As I have never really used ToS for anything, I will not dive into explaining the variables here. I might do that in an article I am preparing about QoS theory and its implementation on some equipment types.

  • Total length – calculates the total length of the packet; unless you want to check behavior with runt packets, keep it on auto so you generate valid packets
  • Identification – a field that is useless in 99% of cases, as it is only used for reassembly of fragmented packets
  • Flags – very important, as it allows/disallows fragmentation of the packet; best practice is to set it to 2 (do not fragment), the other option being "more fragments". Anyway, this seems to be broken in the 1.6 Windows version, where all three values are always zeroes

ipv4-flags

  • Fragment offset – again, in my testing I have found no use for playing with fragmented packets
  • TTL – Time To Live – a number decremented every time the packet is processed by an L3 device; it can come in very handy when you need to prove how many hops away your receiver is
  • Protocol – a code indicating which higher-level protocol is encapsulated in the IP packet (the options are TCP, UDP, ICMP and IGMP); again, this is just a number in the packet header and doesn't prevent a mix-up with a different protocol actually being configured at the upper layer
  • Header checksum – does exactly what you would expect; again, unless you are testing runts there is no need to untick the box that calculates the checksum for you
  • Source and destination addresses – these need no comment
  • Options – a barely used field; I don't think I have ever seen it used, and I have never used it myself as far as I remember
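
For comparison, a few of these fields set explicitly in scapy; everything left out (checksum, total length and so on) is computed automatically, much like PackETH's auto tick-boxes. The addresses and values are examples:

    from scapy.all import IP

    pkt = IP(src="192.0.2.10", dst="192.0.2.20",
             ttl=8,           # prove how many L3 hops away the receiver is
             flags="DF",      # do-not-fragment, the usual best practice
             tos=0xb8)        # DSCP EF (46) shifted into the old ToS byte
    pkt.show2()               # show2() prints the packet with computed fields filled in
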
ARP

If you thought IPv4 was pretty simple, then ARP will look like a piece of cake. I have to say I like the possibility to send fake ARPs around the network, as you can easily populate various tables (specifically ARP and switching tables on L2/L3 devices) without having the real source in the network. This lets you observe behavior of elements of your network that would be difficult to see otherwise. There is not much more to say about it, so here is the screenshot:

ARP

As you probably know, ARP has been dropped in IPv6; its function is now a component of the IP protocol suite itself (carried over ICMPv6) and is known as neighbor solicitation.

Session Layer

This layer provides you with the following options: UDP, TCP, ICMP and IGMP. I will cover all of them briefly, as most of the time I do not work with L4. Also, IGMP and RTP will be covered in greater detail in an upcoming article about multicasting.

UDP

UDP is the protocol I use the most, as most of the data I normally deal with are various voice frames. The protocol itself is minimalistic, and so is the possible setup, as you can see below:

UDP

The one thing I would like to point out is the option to apply a specific pattern of your choice (so it is not random). This is extremely useful if you suspect that a specific frame with (or within) a specific pattern is causing trouble in your network.

TCP

TCP, unlike UDP, is stateful, so it has many more options to accommodate the windowing mechanism, the 3-way handshake and some other minor things. There is a very nice article on Wikipedia about TCP, and as I normally do not care much about L4 in testing, I will not elaborate on the details here. This is how the GUI looks with all the options it has:

TCP

ICMP

This is probably the most interesting of all the L4 protocols, as you can actually invoke some actions from the network nodes. The main two options are echo request and echo reply, which allow you to send a ping to specific nodes (which is not that special) but also to fake a reply, which I have found very useful in the past (a scapy equivalent follows the screenshot). The other option is to send destination-unreachable datagrams with all the message codes, but unless you are testing an L4-aware network (like some firewalls) it is not of much interest.

ICMP
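
A fake echo reply of this kind sketched in scapy (addresses and id/seq values are arbitrary examples):

    from scapy.all import ICMP, IP, send

    reply = IP(src="192.0.2.99", dst="192.0.2.10") / ICMP(type="echo-reply", id=1, seq=1)
    # send(reply)                      # uncomment to transmit (needs root)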

IGMP

The Internet Group Management Protocol is predominantly a last-mile protocol used for managing membership of multicast groups. It is widely used for multimedia delivery, specifically IPTV. It exists in 3 versions, of which v2 is the most widely used (at least to my knowledge). IGMP is rather simple, as it has basically only two types of messages: Query (from the router) and Report (from the client). As you can see, all of them are available to you, which is ideal when troubleshooting both ends of a multicast network, as combined with Wireshark you can emulate the required response (a scapy sketch follows the screenshot).

IGMP
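
For reference, a query/report pair sketched with scapy's IGMP layer, which ships as a contrib module (IGMPv2; the group address is an example):

    from scapy.all import IP
    from scapy.contrib.igmp import IGMP

    query  = IP(dst="224.0.0.1") / IGMP(type=0x11)                     # general query
    report = IP(dst="239.1.1.1") / IGMP(type=0x16, gaddr="239.1.1.1")  # v2 membership report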

Conclusion

The other three parts, the Gen-B mode, the Gen-S mode and PCAP, are discussed separately in a follow-up article, as I must try to keep the length readable.