
Sunday, June 26, 2011

Measure network performance with Iperf/JPerf

Iperf is an open source network performance tool developed by NLANR/DAST. It measures maximum TCP and UDP bandwidth performance and reports bandwidth, delay jitter, and datagram loss. (JPerf is the graphical front end, written in Java.) It can be used to troubleshoot speed and throughput issues.

Available for Windows and Linux.


Setup




The man page output is shown below:

 NAME  
     iperf - perform network throughput tests  
 SYNOPSIS  
     iperf -s [ options ]  
     iperf -c server [ options ]  
     iperf -u -s [ options ]  
     iperf -u -c server [ options ]  
 DESCRIPTION  
     iperf is a tool for performing network throughput measurements. It can test either TCP or UDP throughput. To perform an iperf test the user must establish  
     both a server (to discard traffic) and a client (to generate traffic).  
 GENERAL OPTIONS  
    -f, --format             [kmKM]  format to report: Kbits, Mbits, KBytes, MBytes  
     -h, --help             print a help synopsis  
     -i, --interval n          pause n seconds between periodic bandwidth reports  
     -l, --len n[KM]          set length read/write buffer to n (default 8 KB)  
     -m, --print_mss         print TCP maximum segment size (MTU - TCP/IP header)  
     -o, --output <filename>   output the report or error message to this specified file  
     -p, --port n            set server port to listen on/connect to (default 5001)  
     -u, --udp              use UDP rather than TCP  
     -w, --window n[KM]     TCP window size (socket buffer size)  
     -B, --bind <host>       bind to <host>, an interface or multicast address  
     -C, --compatibility       for use with older versions; does not send extra msgs  
     -M, --mss n            set TCP maximum segment size (MTU - 40 bytes)  
     -N, --nodelay           set TCP no delay, disabling Nagle's Algorithm  
     -v, --version            print version information and quit  
     -V, --IPv6Version        Set the domain to IPv6  
     -x, --reportexclude       [CDMSV]  exclude C(connection) D(data) M(multicast) S(settings) V(server) reports  
     -y, --reportstyle C|c      if set to C or c report results as CSV (comma separated values)  
 SERVER SPECIFIC OPTIONS  
     -s, --server            run in server mode  
     -U, --single_udp        run in single threaded UDP mode  
     -D, --daemon           run the server as a daemon  
 CLIENT SPECIFIC OPTIONS  
     -b, --bandwidth n[KM]  set target bandwidth to n bits/sec (default 1 Mbit/sec). This setting requires UDP (-u).  
     -c, --client <host>     run in client mode, connecting to <host>  
     -d, --dualtest         Do a bidirectional test simultaneously  
     -n, --num n[KM]       number of bytes to transmit (instead of -t)  
     -r, --tradeoff          Do a bidirectional test individually  
     -t, --time n           time in seconds to transmit for (default 10 secs)  
     -F, --fileinput <name>   input the data to be transmitted from a file  
     -I, --stdin             input the data to be transmitted from stdin  
     -L, --listenport n       port to receive bidirectional tests back on  
     -P, --parallel n         number of parallel client threads to run  
     -T, --ttl n            time-to-live, for multicast (default 1)  
     -Z, --linux-congestion <algo> set TCP congestion control algorithm (Linux only)  

Basic TCP Unicast Test
The server has IP address 192.168.0.99

Server :  iperf -s
Client :  iperf -c 192.168.0.99
 ~$ iperf -c 192.168.0.99   
 ------------------------------------------------------------  
 Client connecting to 192.168.0.99, TCP port 5001  
 TCP window size: 16.0 KByte (default)  
 ------------------------------------------------------------  
 [ 3] local 192.168.0.10 port 60825 connected with 192.168.0.99 port 5001  
 [ ID] Interval    Transfer   Bandwidth  
 [ 3] 0.0-10.0 sec  113 MBytes 94.9 Mbits/sec  

By default the server uses a TCP window size of 85.3 KB. The client connects to the server on port 5001 using a TCP window size of 16 KB. The bandwidth of a TCP session can be greatly affected by the size of the receive window and the latency of the link.
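The window/latency relationship can be quantified with the bandwidth-delay product: a TCP window smaller than bandwidth × RTT cannot keep the link full. A minimal sketch (the 0.5 ms round-trip time is an assumed LAN value, not something measured in the run above):

```python
def bandwidth_delay_product(bandwidth_bps, rtt_seconds):
    # Bytes that must be in flight to keep the pipe full: bandwidth * RTT
    return bandwidth_bps * rtt_seconds / 8  # convert bits to bytes

# 100 Mbit/s link with an assumed 0.5 ms LAN round-trip time:
bdp = bandwidth_delay_product(100e6, 0.0005)
print(bdp)  # ~6250 bytes (~6.1 KB): the 16 KB default window is comfortably enough
```

On a higher-latency WAN link the same calculation quickly exceeds the default window, which is why small windows give poor results there.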
Below is the output from JPerf.



Parallel TCP Connections
Parallel connections (-P) can be used if you need to saturate the bandwidth of a link. The bandwidth of a single TCP session can be limited by the receive window size and the latency of the link.
By default Iperf is unidirectional and sends data from the client to the server.
Server : iperf -s
Client :  iperf -c 192.168.0.99 -P 5
 ~$ iperf -c 192.168.0.99 -P 5  
 ------------------------------------------------------------  
 Client connecting to 192.168.0.99, TCP port 5001  
 TCP window size: 16.0 KByte (default)  
 ------------------------------------------------------------  
 [ 7] local 192.168.0.10 port 60830 connected with 192.168.0.99 port 5001  
 [ 5] local 192.168.0.10 port 60826 connected with 192.168.0.99 port 5001  
 [ 4] local 192.168.0.10 port 60827 connected with 192.168.0.99 port 5001  
 [ 3] local 192.168.0.10 port 60828 connected with 192.168.0.99 port 5001  
 [ 6] local 192.168.0.10 port 60829 connected with 192.168.0.99 port 5001  
 [ ID] Interval    Transfer   Bandwidth  
 [ 4] 0.0-10.0 sec 22.9 MBytes 19.2 Mbits/sec  
 [ 7] 0.0-10.0 sec 22.9 MBytes 19.2 Mbits/sec  
 [ 5] 0.0-10.0 sec 23.0 MBytes 19.2 Mbits/sec  
 [ 3] 0.0-10.0 sec 22.8 MBytes 19.1 Mbits/sec  
 [ 6] 0.0-10.0 sec 22.7 MBytes 19.0 Mbits/sec  
 [SUM] 0.0-10.0 sec  114 MBytes 95.6 Mbits/sec  
Below is another output using JPerf.


Bidirectional Test
By default, only the bandwidth from the client to the server is measured. The -r (tradeoff) option runs a test in each direction, one after the other.
Server :  iperf -s
Client :  iperf -c 192.168.0.99  -r
 ~$ iperf -c 192.168.0.99 -r  
 ------------------------------------------------------------  
 Server listening on TCP port 5001  
 TCP window size: 85.3 KByte (default)  
 ------------------------------------------------------------  
 ------------------------------------------------------------  
 Client connecting to 192.168.0.99, TCP port 5001  
 TCP window size: 16.0 KByte (default)  
 ------------------------------------------------------------  
 [ 5] local 192.168.0.10 port 56793 connected with 192.168.0.99 port 5001  
 Waiting for server threads to complete. Interrupt again to force quit.  
 [ ID] Interval    Transfer   Bandwidth  
 [ 5] 0.0-10.0 sec  113 MBytes 95.0 Mbits/sec  

Simultaneous Bidirectional Test
The -d option measures the bidirectional bandwidths simultaneously: the client sends data to the server and receives data from the server at the same time.
Server : iperf -s 
Client :  iperf -c 192.168.0.99  -d
 ~$ iperf -c 192.168.0.99 -d  
 ------------------------------------------------------------  
 Server listening on TCP port 5001  
 TCP window size: 85.3 KByte (default)  
 ------------------------------------------------------------  
 ------------------------------------------------------------  
 Client connecting to 192.168.0.99, TCP port 5001  
 TCP window size: 16.0 KByte (default)  
 ------------------------------------------------------------  
 [ 4] local 192.168.0.10 port 56794 connected with 192.168.0.99 port 5001  
 [ 5] local 192.168.0.10 port 5001 connected with 192.168.0.99 port 50096  
 [ ID] Interval    Transfer   Bandwidth  
 [ 4] 0.0-10.0 sec  105 MBytes 87.9 Mbits/sec  
 [ 5] 0.0-10.0 sec  107 MBytes 89.4 Mbits/sec  

Timing, Port, Interval
Use -t to specify the test duration (default 10 seconds).
Use -p to change the communication port; it must be set to the same value on the client and the server (default TCP port 5001).
Use -i to set the interval in seconds between periodic bandwidth reports.
Server : iperf -s -p 1024
Client : iperf -c 192.168.0.99  -i 2 -t 20 -p 1024
 ~$ iperf -c 192.168.0.99 -i 2 -t 20 -p 1024  
 ------------------------------------------------------------  
 Client connecting to 192.168.0.99, TCP port 1024  
 TCP window size: 16.0 KByte (default)  
 ------------------------------------------------------------  
 [ 3] local 192.168.0.10 port 48360 connected with 192.168.0.99 port 1024  
 [ ID] Interval    Transfer   Bandwidth  
 [ 3] 0.0- 2.0 sec 22.8 MBytes 95.6 Mbits/sec  
 [ 3] 2.0- 4.0 sec 22.6 MBytes 94.8 Mbits/sec  
 [ 3] 4.0- 6.0 sec 22.6 MBytes 94.8 Mbits/sec  
 [ 3] 6.0- 8.0 sec 22.7 MBytes 95.2 Mbits/sec  
 [ 3] 8.0-10.0 sec 22.6 MBytes 94.8 Mbits/sec  
 [ 3] 10.0-12.0 sec 22.6 MBytes 94.8 Mbits/sec  
 [ 3] 12.0-14.0 sec 22.6 MBytes 95.0 Mbits/sec  
 [ 3] 14.0-16.0 sec 22.6 MBytes 94.8 Mbits/sec  
 [ 3] 16.0-18.0 sec 22.7 MBytes 95.2 Mbits/sec  
 [ 3] 18.0-20.0 sec 22.6 MBytes 94.8 Mbits/sec  
 [ 3] 0.0-20.0 sec  226 MBytes 95.0 Mbits/sec  

UDP Mode (for jitter and packet loss) 
By default iperf sends at 1 Mbit/sec in UDP mode; for this test I used 50 Mbit/sec.
Packet loss should not exceed 1% on a good link: higher loss rates would cause TCP segment retransmissions and reduce the usable bandwidth.

Server : iperf -s -u
Client : iperf -c 192.168.0.99  -u -b 50m
 ~$ iperf -c 192.168.0.99 -u -b 50m  
 ------------------------------------------------------------  
 Client connecting to 192.168.0.99, UDP port 5001  
 Sending 1470 byte datagrams  
 UDP buffer size:  110 KByte (default)  
 ------------------------------------------------------------  
 [ 3] local 192.168.0.10 port 35278 connected with 192.168.0.99 port 5001  
 [ ID] Interval    Transfer   Bandwidth  
 [ 3] 0.0-10.0 sec 59.7 MBytes 50.0 Mbits/sec  
 [ 3] Sent 42555 datagrams  
 [ 3] Server Report:  
 [ 3] 0.0-10.0 sec 59.7 MBytes 50.0 Mbits/sec 0.003 ms  0/42554 (0%)  
 [ 3] 0.0-10.0 sec 1 datagrams received out-of-order  
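The loss figure in the server report is simply lost datagrams over datagrams sent; a quick sketch of the 1% guideline check (the 500-loss figure is hypothetical, for contrast with the clean run above):

```python
def loss_percent(lost, sent):
    # Datagram loss as iperf reports it: lost / sent * 100
    return 100.0 * lost / sent

print(loss_percent(0, 42554))    # 0.0 -- the run above: a good link
print(loss_percent(500, 42554))  # ~1.17 -- would exceed the 1% guideline
```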

Output Format
If you want the output in a different format, use -f followed by the format you wish: k/m/K/M to report in Kbits, Mbits, KBytes, or MBytes respectively.

 ~$ iperf -c 192.168.0.99 -f k  
 ------------------------------------------------------------  
 Client connecting to 192.168.0.99, TCP port 5001  
 TCP window size: 16.0 KByte (default)  
 ------------------------------------------------------------  
 [ 3] local 192.168.0.10 port 49856 connected with 192.168.0.99 port 5001  
 [ ID] Interval    Transfer   Bandwidth  
 [ 3] 0.0-10.0 sec 116016 KBytes 95025 Kbits/sec  
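For scripting, the -y C option from the man page above emits one CSV line per report instead. A minimal parsing sketch; the sample line is illustrative (not from a real run), using the field order iperf 2 produces for TCP reports: timestamp, local address/port, remote address/port, stream id, interval, bytes transferred, bits per second:

```python
# Illustrative line in the shape `iperf -c 192.168.0.99 -y C` produces
line = "20110626120000,192.168.0.10,60825,192.168.0.99,5001,3,0.0-10.0,118489088,95025000"

fields = line.split(",")
transferred_bytes = int(fields[7])
bits_per_second = int(fields[8])
print(transferred_bytes / 1024 / 1024)  # 113.0 MBytes
print(bits_per_second / 1e6)            # 95.025 Mbits/sec
```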

Maximum Segment Size 
The maximum segment size (MSS) is a parameter of the TCP protocol that specifies the largest amount of data, in bytes, that a device can receive in a single TCP segment.
The segment size TCP uses can have a major impact on bandwidth, because it is more efficient to send the largest possible segments. The MSS does not include the TCP header or the IP header.
Therefore: MSS + headers ≤ MTU

Ethernet with an MTU of 1500 results in an MSS of 1460 after subtracting 20 bytes for the IPv4 header and 20 bytes for the TCP header.
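A one-line sketch of that subtraction (the jumbo-frame MTU of 9000 is added for comparison and is not part of the test runs here):

```python
IPV4_HEADER = 20  # bytes, without options
TCP_HEADER = 20   # bytes, without options

def mss_for_mtu(mtu):
    # MSS = MTU minus the IPv4 and TCP headers
    return mtu - IPV4_HEADER - TCP_HEADER

print(mss_for_mtu(1500))  # 1460 -- standard Ethernet
print(mss_for_mtu(9000))  # 8960 -- jumbo frames
```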

Server : iperf -s
Client : iperf -c 192.168.0.99  -m
 ~$ iperf -c 192.168.0.99 -m  
 ------------------------------------------------------------  
 Client connecting to 192.168.0.99, TCP port 5001  
 TCP window size: 16.0 KByte (default)  
 ------------------------------------------------------------  
 [ 3] local 192.168.0.10 port 49957 connected with 192.168.0.99 port 5001  
 [ ID] Interval    Transfer   Bandwidth  
 [ 3] 0.0-10.0 sec  113 MBytes 94.9 Mbits/sec  
 [ 3] MSS size 1460 bytes (MTU 1500 bytes, ethernet)  


TCP Window Size
Experimenting with the TCP window size shows that too small a window gives unrealistic results. Starting from 1 KByte and increasing to 256 KByte, we see no further increase beyond a 64 KByte TCP window size.

Server : iperf -s -w 1k
Client : iperf -c 192.168.0.99  -w 1k
 ~$ iperf -c 192.168.0.99 -w 1k  
 WARNING: TCP window size set to 1024 bytes. A small window size  
 will give poor performance. See the Iperf documentation.  
 ------------------------------------------------------------  
 Client connecting to 192.168.0.99, TCP port 5001  
 TCP window size: 2.00 KByte (WARNING: requested 1.00 KByte)  
 ------------------------------------------------------------  
 [ 3] local 192.168.0.10 port 42042 connected with 192.168.0.99 port 5001  
 [ ID] Interval    Transfer   Bandwidth  
 [ 3] 0.0-10.0 sec 38.9 MBytes 32.7 Mbits/sec  

Server : iperf -s -w 2k

Client : iperf -c 192.168.0.99  -w 2k
 ~$ iperf -c 192.168.0.99 -w 2k  
 ------------------------------------------------------------  
 Client connecting to 192.168.0.99, TCP port 5001  
 TCP window size: 4.00 KByte (WARNING: requested 2.00 KByte)  
 ------------------------------------------------------------  
 [ 3] local 192.168.0.10 port 42043 connected with 192.168.0.99 port 5001  
 [ ID] Interval    Transfer   Bandwidth  
 [ 3] 0.0-10.0 sec  105 MBytes 87.7 Mbits/sec  
 ~$ iperf -c 192.168.0.99 -w 64k  
 ------------------------------------------------------------  
 Client connecting to 192.168.0.99, TCP port 5001  
 TCP window size:  128 KByte (WARNING: requested 64.0 KByte)  
 ------------------------------------------------------------  
 [ 3] local 192.168.0.10 port 42048 connected with 192.168.0.99 port 5001  
 [ ID] Interval    Transfer   Bandwidth  
 [ 3] 0.0-10.0 sec  113 MBytes 95.0 Mbits/sec  
 ~$ iperf -c 192.168.0.99 -w 128k  
 ------------------------------------------------------------  
 Client connecting to 192.168.0.99, TCP port 5001  
 TCP window size:  256 KByte (WARNING: requested  128 KByte)  
 ------------------------------------------------------------  
 [ 3] local 192.168.0.10 port 42049 connected with 192.168.0.99 port 5001  
 [ ID] Interval    Transfer   Bandwidth  
 [ 3] 0.0-10.0 sec  113 MBytes 95.1 Mbits/sec  
 ~$ iperf -c 192.168.0.99 -w 256k  
 ------------------------------------------------------------  
 Client connecting to 192.168.0.99, TCP port 5001  
 TCP window size:  256 KByte (WARNING: requested  256 KByte)  
 ------------------------------------------------------------  
 [ 3] local 192.168.0.10 port 42050 connected with 192.168.0.99 port 5001  
 [ ID] Interval    Transfer   Bandwidth  
 [ 3] 0.0-10.0 sec  113 MBytes 94.9 Mbits/sec  
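These numbers follow the window/RTT ceiling: a single TCP stream can carry at most one window of data per round trip. A sketch (the 0.5 ms LAN round-trip time is an assumption, not measured in these runs):

```python
def max_throughput_mbps(window_bytes, rtt_seconds):
    # A single TCP stream cannot exceed window / RTT
    return window_bytes * 8 / rtt_seconds / 1e6

# Assuming a hypothetical 0.5 ms LAN round-trip time:
print(max_throughput_mbps(2 * 1024, 0.0005))    # ~32.8 -- close to the ~32.7 Mbit/s seen with the 2 KB window
print(max_throughput_mbps(128 * 1024, 0.0005))  # ~2097 -- far above 100 Mbit/s, so the link itself is the limit
```

Once the window exceeds the bandwidth-delay product of the link, increasing it further buys nothing, which is why throughput flattens at ~95 Mbit/s from 64 KByte onward.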



More examples of Iperf testing here


2 comments:

依山筑閣 said...

In the "Bidirectional Test", we found that after the test is over, both the client and the server quit. But in UDP mode, they didn't quit.
Is this a bug?
We need to test the throughput of our network with TCP and we don't want the server to quit. Is there any method to fix it?