IPerf testing considerations

So after long deliberation I have decided to write this article about iperf. Let me preface it by saying that I like iperf very much and use it often, as it has many valid uses. You can find loads of tutorials on how to use it all over the internet, but this article has a very different aim. Here I would like to explain the proper use-cases of this tool, and some limitations and caveats one should be aware of when using software-based testing (of which iperf is a prime example). There are many versions of iperf in the wild, but the topics discussed below apply to all of them to some degree.

iperf origins and description

Iperf was originally developed by NLANR/DAST, primarily as a tool to measure TCP throughput, with UDP options added later. So iperf is, by its nature, a TCP optimization tool. By default it uses TCP for transport and sends blocks of 128 KBytes of data between the iperf client and the server, measuring the time it took to transfer each block and computing the average throughput of the TCP connection over a period of time (10 seconds by default). In most cases the results reported by iperf are reasonably close to the real available bandwidth; however, software-based Ethernet-IP-TCP performance testing using client (hosted) software applications can be impacted by many factors, as listed below.
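As a point of reference, a minimal throughput test with iperf3 looks like this (iperf2 uses the same client/server model with slightly different flags; the hostname below is a placeholder):

```shell
# On the server side: listen for incoming tests
iperf3 -s

# On the client side: run the default test - a single TCP connection,
# reporting the average throughput over 10 seconds
iperf3 -c server.example.com

# The same test over 30 seconds with interim reports every 5 seconds
iperf3 -c server.example.com -t 30 -i 5
```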

Hardware and OS

Iperf runs in “user-space”, so its performance depends directly on the underlying system, both software and hardware.

    • CPU time – OS (kernel) tasks always take precedence over user-space programs, and their influence cannot be predicted, randomly affecting iperf’s ability to generate consistent traffic.
    • Other user-space programs can limit iperf’s ability to generate traffic by competing for shared resources (CPU/RAM).
    • The amount of RAM and the number of CPUs available can impact performance.
    • Different NICs have different performance (wired/wireless, on-board vs. dedicated, etc.).
    • Dependence on the OS’s TCP/IP stack – each operating system implements the TCP/IP stack differently, which leads to different results (i.e. Linux-Linux will give different results than Linux-Windows).
    • Traffic ramp-up – generating a consistent stream of the same number of packets, even with all prioritization done correctly, is impossible to guarantee, because system resource allocation can cause slow ramp-up, varying micro-bursts and/or drops in sent data.
    • System traffic is not counted and might interfere with the results – for example ARP, DNS, IPv6 solicitations, signaling & management traffic, etc.
    • Timestamping is done at the OS layer rather than in hardware, which has many implications for accuracy.

The above-mentioned concerns matter mostly for high-precision, repeatable measurements. From my experience the biggest issue of this lot is how much traffic the PC’s NIC can actually generate. To limit this issue you should use a dedicated NIC and make sure that the CPU is decent (e.g. Intel Atom and similar are usually fairly poor performers).
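One way to reduce (though never eliminate) the scheduling jitter on Linux is to pin iperf to a dedicated CPU core; iperf3 has a built-in affinity flag, and taskset offers the same effect for any version. A sketch, with illustrative core numbers and hostname:

```shell
# Pin the iperf3 client process to CPU core 2 using iperf3's affinity flag
iperf3 -c server.example.com -A 2

# Equivalent using the generic taskset utility (works with iperf2 as well)
taskset -c 2 iperf -c server.example.com
```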

General functionality

The way iperf actually behaves is also quite important to understand, as it very directly impacts the results.

  • Iperf only provides L4 tests, and its traffic may be inspected by higher-layer-aware devices such as firewalls, which can disturb or skew the results.
  • Iperf returns the rate at which data is reliably transferred from a client to a server over a single TCP connection (by default).
  • The TCP windowing mechanism and the “service traffic” (ACKs, retransmissions, headers) take up bandwidth which is not counted/included in the reported results.
  • The underlying OS and its default window size can significantly limit iperf. For example, the maximum window size in Linux (Ubuntu) is 1,031,920 bytes by default, which means that unless the kernel parameters are changed, iperf will be limited to roughly 85 Mbit/s on a single TCP connection over a 100 ms RTT path (remember that TCP throughput is bounded by WindowSize/RTT). Increasing TCP window sizes also has a direct impact on system memory utilisation, which needs to be carefully considered before making any changes.
  • TCP is still a conservative protocol and might underestimate the actual rate at which data can be sustainably transmitted end-to-end.
  • Iperf is of limited use for QoS testing – it can set the IP ToS/DSCP byte (the -S/--tos option) but cannot verify per-queue behaviour, and its results can be affected by network mechanisms such as queuing and prioritisation.
  • Iperf reports results in “Mbps” without defining the relevant layer (it is L4 payload throughput, i.e. goodput).
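The window/RTT ceiling mentioned above is easy to sanity-check with a quick shell calculation, using the numbers from the list (1,031,920-byte window, 100 ms RTT):

```shell
# TCP throughput ceiling = WindowSize / RTT
window_bytes=1031920
rtt_ms=100

# bits transferred per window, divided by seconds per round trip, in Mbit/s
mbps=$(awk -v w="$window_bytes" -v r="$rtt_ms" \
    'BEGIN { printf "%.1f", (w * 8) / (r / 1000) / 1e6 }')
echo "$mbps Mbit/s"   # prints "82.6 Mbit/s" - in the right ballpark of the ~85 Mbit/s above
```

A larger socket buffer can be requested with iperf’s -w flag (e.g. -w 4M on both client and server), though the kernel may still cap it at its configured maximum.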

The main thing to consider here is that TCP throughput is a function of delay (latency), so for raw rate measurements it is always preferable to use UDP instead. Also, to avoid line characteristics being propagated into looped/returned traffic, it is a good idea to use the bidirectional feature with UDP streams. This usually alleviates the most common issues.
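A bidirectional UDP test at a fixed offered rate might look like the sketch below; the 50 Mbit/s target is just an illustrative value, and --bidir requires iperf3 version 3.7 or later (iperf2 used -d for its dual test):

```shell
# UDP test at a fixed offered rate of 50 Mbit/s, measuring both
# directions simultaneously; loss and jitter are reported per stream
iperf3 -c server.example.com -u -b 50M --bidir
```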


From the list of considerations I think it is crucial to understand that there are many variables that come into play when using iperf, and if one is aware of them, iperf can serve as an absolutely amazing indicative test tool. On the other hand, I think it is rather obvious that in quite a few cases it can be a sub-optimal tool to choose. Some good examples of how to use iperf are filling the best-effort (BE) queue in QoS testing, or a fast indicative test between two Linux servers, etc. Iperf is extremely handy due to its omnipresence and ready availability on a plethora of platforms, but this leads to quite a lot of people abusing it for literally all network testing. Iperf is not a high-precision tool and should not be used for cases where high accuracy and repeatability are important – those tests should really be done with dedicated HW testers.
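For the queue-filling use-case mentioned above, a sketch (stream count and duration are illustrative) is simply to run several parallel streams for a long interval as background load:

```shell
# Saturate the path with 8 parallel TCP streams for 5 minutes -
# useful as background best-effort load while testing QoS behaviour
iperf3 -c server.example.com -P 8 -t 300
```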
