stress_workload_generator

Overview

stress-ng is a simple workload generator for POSIX systems. It imposes a configurable amount of CPU, memory, I/O, and disk stress on the system. It can stress test a server in the following areas:

  • CPU compute
  • Cache thrashing
  • VM stress
  • Drive stress
  • I/O syncs
  • Socket stressing
  • Context switching
  • Process creation and termination

It includes over 60 different stress tests: over 50 CPU-specific stress tests that exercise floating point, integer, bit manipulation and control flow, and over 20 virtual memory stress tests.

It is not a benchmark, but rather a tool designed to make a machine work hard and trip hardware issues (such as thermal overruns) and operating system bugs that only occur when a system is being thrashed hard.
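
The CPU-specific tests can be selected individually with --cpu-method; for example (fft and matrixprod are two of the documented methods, see the stress-ng(1) man page for the full list in your build):

$ stress-ng --cpu 2 --cpu-method fft -t 30s
$ stress-ng --cpu 2 --cpu-method matrixprod -t 30s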

Stressor

The stress-ng stressors are grouped into classes. A class may contain one or many stressors, and when you invoke stress-ng you can run a whole class (all stressors in that class), a single stressor, or several stressors at a time (see the example after the class list below).

Classes:

  • cpu - CPU intensive
  • cpu-cache - stress CPU instruction and/or data caches
  • device - raw device driver stressors
  • io - generic input/output
  • interrupt - high interrupt load generators
  • filesystem - file system activity
  • memory - stack, heap, memory mapping, shared memory stressors
  • network - TCP/IP, UDP and UNIX domain socket stressors
  • os - core kernel stressors
  • pipe - pipe and UNIX socket stressors
  • scheduler - force high levels of context switching
  • security - AppArmor stressor
  • vm - Virtual Memory stressor (paging and memory)
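
For example, to run a whole class rather than a single stressor (per the man page, --class is used together with --all or --sequential):

# one instance of every cpu-class stressor, all in parallel, for 60 seconds
$ stress-ng --class cpu --all 1 -t 60s

# every memory-class stressor in turn, one instance per CPU, 30 seconds each
$ stress-ng --class memory --sequential 0 -t 30s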

Stressors

$ yum install -y stress-ng

# available stressors (the full, version-specific list is in the stress-ng(1) man page):
access af-alg affinity aio aiol apparmor atomic bad-altstack bad-ioctl bigheap bind-mount binderfs branch brk bsearch cache cap chattr chdir chmod chown chroot clock clone close context copy-file cpu cpu-online crypt cyclic daemon dccp dentry dev dev-shm dir dirdeep dnotify dup dynlib efivar enosys env epoll eventfd exec fallocate fanotify fault fcntl fiemap fifo file-ioctl filename flock fork fp-error fstat full funccall funcret futex get getdent getrandom handle hdd heapsort hrtimers hsearch icache icmp-flood idle-page inode-flags inotify io iomix ioport ioprio io-uring ipsec-mb itimer judy kcmp key kill klog l1cache lease link locka lockbus lockf lockofd longjmp loop lsearch madvise malloc matrix matrix-3d mcontend membarrier memcpy memfd memhotplug memrate memthrash mergesort mincore mknod mlock mlockmany mmap mmapaddr mmapfixed mmapfork mmapmany mq mremap msg msync nanosleep netdev netlink-proc netlink-task nice nop null numa oom-pipe opcode open personality physpage pidfd ping-sock pipe pipeherd pkey poll prctl procfs pthread ptrace pty qsort quota radixsort ramfs rawdev rawpkt rawsock rawudp rdrand readahead reboot remap rename resources revio rlimit rmap rseq rtc schedpolicy sctp seal seccomp secretmem seek sem sem-sysv sendfile session set shellsort shm shm-sysv sigabrt sigchld sigfd sigfpe sigio signal sigpending sigpipe sigq sigrt sigsegv sigsuspend sigtrap skiplist sleep sock sockabuse sockdiag sockfd sockpair sockmany softlockup spawn splice stack stackmmap str stream swap switch symlink sync-file sysbadaddr sysinfo sysinval sysfs tee timer timerfd tlb-shootdown tmpfs tree tsc tsearch tun udp udp-flood unshare uprobe urandom userfaultfd utime vdso vecmath verity vfork vforkmany vm vm-addr vm-rw vm-segv vm-splice wait watchdog wcs x86syscall xattr yield zero zlib zombie
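
The exact set of stressors varies between versions, so it is worth checking which build you have:

$ stress-ng --version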

Options

-a N, --all N, --parallel N (often combined with --class)
start N instances of all stressors in parallel. If N is less than zero, then the
number of CPUs online is used for the number of instances. If N is zero, then the
number of CPUs in the system is used.

--sequential N
sequentially run all the stressors one by one for a default of 60 seconds. The
number of instances of each of the individual stressors to be started is N. If N
is less than zero, then the number of CPUs online is used for the number of
instances. If N is zero, then the number of CPUs in the system is used. Use the
--timeout option to specify the duration to run each stressor.
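
For example, a quick soak test that walks through every stressor one at a time, 10 seconds each, with one instance per CPU:

$ stress-ng --sequential 0 -t 10s --metrics-brief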

--taskset list
set CPU affinity based on the list of CPUs provided; stress-ng is bound to just use
these CPUs (Linux only). The CPUs to be used are specified by a comma-separated
list of CPU numbers (0 to N-1). One can specify a range of CPUs using '-', for example:
--taskset 0,2-3,6,7-11

-t T, --timeout T
stop the stress test after T seconds. One can also specify the units of time in
seconds, minutes, hours, days or years with the suffix s, m, h, d or y. Note: A
timeout of 0 will run stress-ng without any timeouts (run forever).
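
For example (the cpu stressor here is an arbitrary choice):

$ stress-ng --cpu 4 --timeout 5m    # stop after 5 minutes
$ stress-ng --cpu 4 -t 0            # no timeout; run until interrupted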

--vm-keep
do not continually unmap and remap memory; just keep on re-writing to it.

--vm-populate
populate (prefault) page tables for the memory mappings; this can
stress swapping. Only available on systems that support
MAP_POPULATE (since Linux 2.5.46).
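
Combining the options above, a sketch of a bounded, pinned run (the vm class and the CPU list are arbitrary choices for illustration):

$ stress-ng --class vm --sequential 2 --taskset 0-3 --timeout 2m --metrics-brief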

Examples

CPU

# run 2 CPU stressor workers, first for 10 seconds, then for 1 minute
$ stress-ng -c 2 -t 10s --metrics-brief
$ stress-ng -c 2 -t 1m --metrics-brief

# run one CPU stressor worker per online CPU (-c 0 uses all CPUs)
$ stress-ng -c 0 -t 1m --metrics-brief

# run 3 instances of the CPU stressor and pin them to CPUs 0, 2 and 3
$ stress-ng --taskset 0,2-3 --cpu 3

Memory

# start 1 vm stressor worker exercising 128M of memory
$ stress-ng -m 1 --vm-bytes 128M -t 10s --metrics-brief
# start 1 vm stressor worker exercising 1G of memory
$ stress-ng -m 1 --vm-bytes 1G -t 10s --metrics-brief

# same, but keep the mapping and prefault the page tables
$ stress-ng -m 1 --vm-bytes 1G -t 10s --vm-keep --vm-populate --metrics-brief

# start 2 vm stressor workers that together use 50% of available memory (25% each)
$ stress-ng -m 2 --vm-bytes 50% -t 10s --metrics-brief

Interrupt load

# 32 timer stressor instances, each firing at 1 MHz:
$ stress-ng --timer 32 --timer-freq 1000000

Page faults

# generate major page faults (by accessing pages that are not loaded in memory at the time of the fault) and watch the page fault rate:
$ stress-ng --fault 0 --perf -t 1m

# or with newer kernels use the userfaultfd stressor to force even more major faults:
$ stress-ng --userfaultfd 0 --perf -t 1m
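
If --perf is unavailable on your kernel, one alternative (assuming GNU time is installed at /usr/bin/time) is to read the fault counters it reports:

$ /usr/bin/time -v stress-ng --fault 0 -t 10s 2>&1 | grep -i 'page faults'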