benchmark-tools

Introduction

There are many tools for capturing, sending, and replaying protocol traffic. Here are some popular examples.

capture

  • tcpdump(linux)
  • wireshark(windows/linux)

sending/replay

  • sendip
  • tcpreplay

http benchmark

  • ab
  • wrk

io benchmark

  • dd

Traffic

tcpdump vs wireshark

tcpdump and wireshark are both used to capture network traffic. wireshark additionally provides display filters that can be applied after the capture, which is very helpful; tcpdump only supports filtering at capture time. However, you can save tcpdump output to a file (e.g. xx.pcap), then open it with wireshark and use wireshark's display filters.

When capturing packets, tcpdump and wireshark use the same capture-filter syntax, e.g. host 10.10.2.2 and tcp port 5000.

tcpdump

# type:      host, net, port
# direction: src, dst, src or dst, src and dst
# proto:     arp, rarp, tcp, udp
# logical:   not, and, or
# combine them together
$ tcpdump -i eth0 host 10.10.10.8 and tcp src port 5000 and not dst port 6000

# save the capture to a pcap file that wireshark can open
$ tcpdump -i eth0 -s 0 -w /home/lzq/pt.pcap

# vxlan uses udp port 4789; the 24-bit VNI sits at udp bytes 12-14
# udp[11:4] reads bytes 11-14, which equal the VNI while reserved byte 11 is zero
# capture vxlan packets with VNI 6452
$ tcpdump -i mirror-out port 4789 and udp[11:4]=6452
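As a sanity check on those offsets, here is a small Python sketch (port numbers and lengths are invented for illustration) that builds a UDP + VXLAN header by hand and reads bytes 11-14 the way tcpdump's udp[11:4] does:

```python
VNI = 6452

# UDP header: src port, dst port 4789 (VXLAN), length, checksum
udp_hdr = (12345).to_bytes(2, "big") + (4789).to_bytes(2, "big") \
        + (16).to_bytes(2, "big") + (0).to_bytes(2, "big")

# VXLAN header (RFC 7348): flags(1) + reserved(3) + VNI(3) + reserved(1)
vxlan_hdr = bytes([0x08]) + bytes(3) + VNI.to_bytes(3, "big") + bytes(1)

udp = udp_hdr + vxlan_hdr

# udp[11:4] loads bytes 11..14 as one big-endian 32-bit value:
# the zero reserved byte 11 followed by the three VNI bytes
value = int.from_bytes(udp[11:15], "big")
print(value == VNI)   # True while reserved byte 11 is zero
```

The same arithmetic explains why the filter would silently mismatch if a sender ever set the reserved byte to a non-zero value.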

wireshark capture

syntax

Wireshark capture filters support tcpdump syntax, plus extra options like these:

dst port 135 and tcp port 135 and ip[2:2]==48

icmp[icmptype]==icmp-echo and ip[2:2]==92 and icmp[8:4]==0xAAAAAAAA

dst port 135 or dst port 445 or dst port 1433 and tcp[tcpflags] & (tcp-syn) != 0 and tcp[tcpflags] & (tcp-ack) = 0 and src net 192.168.0.0/24
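The proto[offset:size] notation in these filters indexes raw header bytes: ip[2:2] reads the 16-bit total-length field of the IPv4 header, and icmp[8:4] the first four payload bytes of an echo packet. A minimal Python sketch (a hand-built header with zeroed addresses and checksum, layout per RFC 791) of what ip[2:2]==48 compares against:

```python
import struct

# Build a minimal 20-byte IPv4 header with total length 48
version_ihl = (4 << 4) | 5          # IPv4, 5 * 4 = 20-byte header
header = struct.pack(
    "!BBHHHBBH4s4s",
    version_ihl, 0, 48,             # tos, total length  <- ip[2:2]
    0, 0,                           # id, flags/fragment offset
    64, 6, 0,                       # ttl, protocol (tcp), checksum
    bytes(4), bytes(4),             # src, dst (zeroed for the sketch)
)

# ip[2:2] in filter syntax: 2 bytes starting at offset 2, big-endian
total_length = struct.unpack("!H", header[2:4])[0]
print(total_length)   # 48
```

So ip[2:2]==48 matches datagrams whose IP total length (header + payload) is exactly 48 bytes.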

wireshark filter

tips: export filtered packets from wireshark.

At the wireshark main panel, apply a display filter so that only the matching packets are shown, then export them via

File->Export Specified Packets->All packets (Displayed)

filter syntax

tcp.port eq 25 or icmp
tcp.port == 25 or icmp
ip.src==192.168.0.0/16 and ip.dst==192.168.0.0/16
tcp.window_size == 0 && tcp.flags.reset != 1

Match packets containing the (arbitrary) 3-byte sequence 0x81, 0x60, 0x03 at the beginning of the UDP payload, skipping the 8-byte UDP header.
udp[8:3]==81:60:03

eth.addr[0:3]==00:06:5B

wireshark display

show absolute time
View->Time Display Format

show absolute tcp seq number
Preferences->Protocols->TCP, uncheck Relative sequence numbers

how can I distinguish the original packet from a TCP retransmission?
Both copies carry the same sequence number, but the retransmitted one usually has a newer TCP Timestamp option (TSval) and a different IP ID; wireshark also flags them via the display filter tcp.analysis.retransmission.

sendip vs tcpreplay

sendip is used to craft and send arbitrary IP packets, while tcpreplay is mostly used to replay (optionally editing) captured packets, i.e. send them again.

sendip

sendip sends arbitrary IP packets (ip + upper-layer protocol), like tcp, udp, icmp, rip, etc.

$ sudo sendip -v -d "hello" -p ipv4 -is 192.168.200.1 -id 192.168.200.2 -p udp -us 4000 -ud 1200 192.168.200.2
# send ipv4 udp with src 192.168.200.1, src port 4000, payload hello
# -is: source ip
# -id: dst ip
# -us: udp src port
# -ud: udp dst port
# -d: data (payload)

$ sudo sendip -v -d "hello" -p ipv4 -is 192.168.200.1 -id 192.168.200.2 -p tcp -ts 4000 -td 1200 192.168.200.2

tcpreplay

tcpreplay is a suite of commands for replaying captured packets (not only TCP) with fine-grained control: sending duration, loop count, packet count, pps, packet editing, etc.

  • tcpreplay: only sends the captured packets
  • tcprewrite: only modifies captured packets (mac, vlan, ip, tcp/udp, etc.) and saves the result to a file
  • tcpreplay-edit: modifies on the fly and sends the modified packets, i.e. tcprewrite + tcpreplay, so tcprewrite options also work for tcpreplay-edit
$ tcpreplay -i eth0 -l 2 c.pcap      # replay twice on output device eth0
$ tcpreplay-edit -i eth0 c.pcap # same as replay

$ tcprewrite --portmap=80:8000 --srcipmap=10.117.6.8:10.117.6.80 --fixcsum --ttl=125 --infile=c.pcap --outfile=b.pcap
# --enet-dmac=00:12:13:14:15:16,00:22:33:44:55:66

$ tcpreplay-edit --portmap=80:8000 --srcipmap=10.117.6.8:10.117.6.80 --fixcsum --ttl=125 c.pcap

curl vs ab vs wrk

curl is used for functional testing and supports GET/POST etc., while ab and wrk are used for performance testing.

ab and wrk are both HTTP benchmark tools (software-level; they should not be pointed at production). ab comes from Apache and is old but powerful, with lots of options, while wrk is a newer tool that is very popular today.

curl

curl provides lots of options for sending a single HTTP request. By default curl uses HTTP/1.1; you can switch to HTTP/1.0 or HTTP/2.

$ curl --http1.0 www.example.com
$ curl --http1.1 www.example.com
$ curl --http2 www.example.com
$ curl www.example.com

# By default, Post uses x-www-form-urlencoded format to send data
# send post payload from a file
$ curl -X POST -d "@data.txt" http://www.test.com

# send post request with chunked data
$ curl -H "Transfer-Encoding: chunked" -d "payload to send" http://example.com

# send post request with data
$ curl -X POST -d "name=Mona&age=20" https://www.example.com/update_info

# send post with json format, must set Content-type to json explicitly
$ curl -X POST -d "@json_data" -H "Content-Type: application/json" https://www.example.com/update_info
$ curl -X POST -d '{"key1":"value1", "key2":"value2"}' -H "Content-Type: application/json" https://www.example.com/update_info

# send password, cookie when download a file
$ curl -u "user:passwd" -b "name=john123" -O http://ubuntu.biz.net.id/18.04.2/ubuntu-18.04.2-desktop-amd64.iso

# write cookie to a file
$ curl -u "user:passwd" -c saved_cookie.txt -O http://ubuntu.biz.net.id/18.04.2/ubuntu-18.04.2-desktop-amd64.iso

# read cookie from a file
$ curl -u "user:passwd" -b saved_cookie.txt -O http://ubuntu.biz.net.id/18.04.2/ubuntu-18.04.2-desktop-amd64.iso

# check http response headers, only headers returned
$ curl -I http://www.yahoo.com

# change agent
$ curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0" https://example.com/

# send request with parameter
$ curl -i "http://localhost/login?key1=jason&key2=jason2"   # quote the URL so the shell does not interpret '&'

# send request with custom header
$ curl -i -H "Host: 10.10.10.3" -H "Content-Type: application/json" http://123.123.123.123/login

# bind src when sending
$ curl --interface 10.10.10.10 http://123.123.123.123/

# send SNI: map app1.xyz.com to 100.0.1.1 without touching DNS or /etc/hosts
# --resolve takes host:port:address

$ curl -k --resolve app1.xyz.com:443:100.0.1.1 https://app1.xyz.com


# curl use proxy
$ curl -x $proxy_server:$port -U $user:$passwd www.google.com

ab

ab provides lots of options for sending HTTP requests, like setting cookies, headers, authentication, SSL, etc. By default ab sends GET requests with default values. Note that ab only supports HTTP/1.0.

get

$ ab -n 1000 -c 10  http://127.0.0.1:8080/index.html
# -n 1000 total requests
# -c 10 send 10 at a time (concurrency, total request unchanged)

$ ab -n 1000 -c 10 -C "ck1=123" -C "ck2=124" -H "Accept-Encoding: gzip" -k http://127.0.0.1:8080/index.html
# with cookie(-C), http keep alive(-k), and headers(-H)

$ ab -B 10.10.10.10 http://127.0.0.1:8080/index.html
# with binding SRC

post with data

$ ab -t 30 -c 10 -p ./post.data -T "application/x-www-form-urlencoded" http://local/user

# -t 30: run for 30 seconds with 10 connections, sending POST requests
$ ab -t 30 -c 10 -p ./post.data -T "application/json" http://localhost/user

# with token auth
$ ab -t 30 -c 10 -p ./post.data -T "application/json" -H 'Authorization: Token abcd1234' http://localhost/user

# with basic auth
$ ab -t 30 -c 10 -p ./post.data -T "application/json" -A 'user:passwd' http://localhost/user

wrk

run get with default header

$ wrk -t12 -c400 -d30s --latency http://127.0.0.1:8080/index.html
# run a GET benchmark (wrk does NOT support binding an outgoing source address)
# -t12 12 threads to run
# -c400 keep 400 connection open during the test
# -d30s test 30 seconds with get request

run get custom header

$ wrk -t2 -c10 -d30s --latency -s ./get.lua http://127.0.0.1/index.html

# get.lua
wrk.method = "GET"
wrk.headers["Content-Type"] = "application/json"

run post with data

$ wrk -t2 -c10 -d30s --latency -s ./post.lua http://127.0.0.1/user

# post.lua
wrk.method = "POST"
wrk.body = "foo=bar&baz=quux"
wrk.headers["Content-Type"] = "application/x-www-form-urlencoded"

# post.lua with json format
wrk.method = "POST"
wrk.body = '{"firstKey": "somedata", "secondKey": "somedata"}'
wrk.headers["Content-Type"] = "application/json"

FAQ

why content-length is 0 with POST method by curl

The likely reason is that the file used as the POST body contains NUL bytes: curl first reads the whole file into memory, then calculates the Content-Length with strlen() on the buffer, so if the content contains '\0', the computed length stops at the first NUL and does not match the file size. Here is an example:

# file content is all '\0'!!
$ dd if=/dev/zero of=data.txt bs=1M count=1

# curl reads the data, but the computed length is zero, so Content-Length: 0
$ curl -X POST -d "@data.txt" http://www.test.com

# FIX it, generate a file without '\0' or with given repeated content like
$ yes 'this is a test' | head -c 1K > 1K.file
$ yes 'this is a test' | head -c 1M > 1M.file
$ curl -X POST -d "@1K.file" http://www.test.com
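The strlen() behaviour above can be sketched in Python; c_strlen is a hypothetical helper mimicking C's strlen():

```python
def c_strlen(buf: bytes) -> int:
    """Mimic C strlen(): count bytes up to (not including) the first NUL."""
    i = buf.find(b"\x00")
    return len(buf) if i == -1 else i

print(c_strlen(b"\x00" * 1024))           # 0   -> Content-Length: 0, as with data.txt
print(c_strlen(b"hello"))                 # 5
print(c_strlen(b"this is a test\n" * 10)) # 150 -> matches the file size, no NULs
```

Any file whose first byte is '\0' therefore yields Content-Length: 0, and any embedded NUL silently truncates the body length.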

send http over unix-socket

# if http server listens on unix socket
# --unix-socket (Added in 7.40.0)
$ curl --version
$ curl --unix-socket /var/run/tes.sock http://debug/api

IO

Use dd for a quick IO benchmark, as it's an easy tool to use.

# write to another block device like nbd here
$ dd if=/dev/zero of=/dev/nbd124 bs=1M count=1000

# write testing
$ dd if=/dev/zero of=out.disk bs=1M count=1000
# with direct io for writing
$ dd if=/dev/zero of=out.disk bs=1M count=1000 oflag=direct

# read testing
$ dd if=out.disk of=/dev/null bs=1M count=1000

# check cumulative io stats (iostat) or live per-process io (iotop)
$ iostat
$ iotop

# fio is better than dd
# -------------------------------------------------------------------------------
# --rw=str
# read: sequential reads
# write: sequential writes

# randread: random reads
# randwrite: random writes

# rw: sequential mix of reads and writes
# randrw: random mix of reads and writes
# -------------------------------------------------------------------------------
# --numjobs=int
# The number of threads spawned by the test. By default, each thread is reported separately. To see the results for all threads as a whole, use --group_reporting.
# -------------------------------------------------------------------------------
# --iodepth=int
# Number of I/O units to keep in flight against the file. That is the amount of outstanding I/O for each thread.
# -------------------------------------------------------------------------------
# --runtime=int
# The amount of time the test will be running in seconds.
# -------------------------------------------------------------------------------
# --time_based
# If given, run for the specified runtime duration even if the files are completely read or written. The same workload will be repeated as many times as runtime allows.
# -------------------------------------------------------------------------------
$ fio --name=fiotest --filename=./test1 --size=1Gb --rw=randread --bs=4K --direct=1 --numjobs=1 --ioengine=libaio --iodepth=1 --group_reporting --runtime=60 --time_based
$ fio --name=fiotest --filename=./test1 --size=1Gb --rw=write --bs=4K --direct=1 --numjobs=1 --ioengine=libaio --iodepth=1 --group_reporting --runtime=60 --time_based
$ fio --name=fiotest --filename=./test1 --size=1Gb --rw=rw --bs=4K --direct=1 --numjobs=1 --ioengine=libaio --iodepth=1 --group_reporting --runtime=60 --time_based
