show cpu load
Each running process that is either using or waiting for CPU resources adds 1 to the load. So if your system has a load of 5, five processes are either using or waiting for the CPU. The instantaneous load number by itself doesn't mean much: a computer might have a load of 0 one split-second and a load of 5 the next as several processes hit the CPU. Even if you could see the load at any given moment, that number would be basically meaningless. That's why Unix-like systems don't display the current load; they display the load average, an average of the computer's load over several periods of time. This lets you see how much work your computer has been performing.
# uptime
 10:11:01 up 18:57, 4 users, load average: 0.50, 2.13, 1.85
From left to right, these numbers show the average load over the last one minute, the last five minutes, and the last fifteen minutes.
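The same three averages can also be read directly from /proc/loadavg; a quick sketch (the numbers below are illustrative). The fourth field is currently-runnable/total scheduling entities, the fifth is the most recently created PID:
$ cat /proc/loadavg
0.50 2.13 1.85 2/512 12345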
show how much time a process spends in user and sys (kernel) mode
Real time is wall-clock time (what we could measure with a stopwatch). User time is the CPU time spent in user mode within the process. Sys is the CPU time spent in the kernel on behalf of the process.
NOTE: real can be less than user if the app is multi-threaded or multi-process!
The rule of thumb is:
real < user: The process is CPU bound and takes advantage of parallel execution on multiple cores/CPUs.
real ≈ user: The process is CPU bound and takes no advantage of parallel execution.
real > user: The process is I/O bound. Execution on multiple cores would be of little to no advantage.
# time ls
share  windows

real    0m0.002s
user    0m0.001s
sys     0m0.001s
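To see the real < user case from the rule of thumb above, run CPU-bound work in parallel; a minimal sketch (big1.img and big2.img are hypothetical files):
# time ( sha256sum big1.img & sha256sum big2.img & wait )
Here user is roughly the sum of the CPU time of both jobs, while real is close to the runtime of the slower one, so real < user.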
show latency of RT linux kernel
#cyclictest (git://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git)
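A typical invocation looks something like the following (a sketch; the priority, interval and loop count are arbitrary choices):
# cyclictest -t1 -p 80 -n -i 10000 -l 10000
This starts one measurement thread at real-time priority 80, uses clock_nanosleep (-n), wakes every 10 ms for 10000 loops, and reports min/avg/max wakeup latency.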
show slab info
# cat /proc/slabinfo
# slabtop
 Active / Total Objects (% used)    : 133629 / 147300 (90.7%)
 Active / Total Slabs (% used)      : 11492 / 11493 (100.0%)
 Active / Total Caches (% used)     : 77 / 121 (63.6%)
 Active / Total Size (% used)       : 41739.83K / 44081.89K (94.7%)
 Minimum / Average / Maximum Object : 0.01K / 0.30K / 128.00K
  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
 44814  43159  96%    0.62K   7469        6     29876K ext3_inode_cache
 36900  34614  93%    0.05K    492       75      1968K buffer_head
 35213  33124  94%    0.16K   1531       23      6124K dentry_cache
  7364   6463  87%    0.27K    526       14      2104K radix_tree_node
  1280   1015  79%    0.25K     40       32       320K kmalloc-256   ---> two pages per slab (note: the management overhead is not counted, but it is small)
Each cache may have many slabs (empty, partial, full); each slab is one or more pages (PAGE_SIZE is usually 4K).
USE        = ACTIVE / OBJS * 100%
OBJS       = SLABS * (OBJ/SLAB)
OBJ/SLAB   = (4K * n) / OBJ_SIZE        (n = pages per slab)
CACHE SIZE = SLABS * (4K * n)
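Checking the formulas against the kmalloc-256 row above (n = 2 pages per slab):
OBJ/SLAB   = 8K / 0.25K         = 32
OBJS       = 40 * 32            = 1280
CACHE SIZE = 40 * 8K            = 320K
USE        = 1015 / 1280 * 100% ≈ 79%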
show swap size used by each process
# smem    (sizes are in kilobytes by default, so RSS 656 means 656K)
  PID User     Command                         Swap      USS      PSS      RSS
 2516 rabbitmq sh -c /usr/lib/rabbitmq/bin        0       96      116      656
 1451 lightdm  /bin/sh /usr/lib/lightdm/li        0      100      121      700
 1130 root     /bin/sh -e /proc/self/fd/9         0      100      122      680
 1157 root     /sbin/getty -8 38400 tty3          0      156      174      964
Show basic process information          smem
Show library-oriented view              smem -m
Show user-oriented view                 smem -u
Show system view                        smem -R 4G -K /path/to/vmlinux -w
Show totals and percentages             smem -t -p
Show different columns                  smem -c "name user pss"
Sort by reverse RSS                     smem -s rss -r
Show processes filtered by mapping      smem -M libxml
Show mappings filtered by process       smem -m -P [e]volution
Read data from capture tarball          smem --source capture.tar.gz
Show a bar chart labeled by pid         smem --bar pid -c "pss uss"
Show a pie chart of RSS labeled by name smem --pie name -s rss
Show memory usage by 'free' command
$ free
              total        used        free      shared  buff/cache   available
Mem:       24687560    11825536     8579812      258488     4282212    12299492
Swap:      16774140           0    16774140
total == 11825536 + 8579812 + 4282212 == 24687560 (used + free + buff/cache)
available = 8579812 (free) + the part of buff/cache that the kernel can reclaim without swapping
total: your total (physical) RAM (excluding a small bit that the kernel permanently reserves for itself at startup); used: memory in use by applications and the kernel, excluding buffers/cache (used = total - free - buff/cache); free: memory not in use at all.
total = used + free + buff/cache
shared: memory used by tmpfs and shared memory; buff/cache: kernel buffers plus the page cache, i.e. data read from disk (or waiting to be written to disk) that is kept in memory.
The last line (Swap:) gives information about swap space usage (i.e. memory contents that have been temporarily moved to disk).
To actually understand what the numbers mean, you need a bit of background about the virtual memory (VM) subsystem in Linux. The short version: Linux (like most modern OSes) will always try to use free RAM for caching, so Mem: free will almost always be very low. Caches are freed automatically if memory gets scarce, so they do not really matter much.
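Two handy follow-ups (a sketch): -h prints human-readable units, and the "available" column is derived from the kernel's MemAvailable estimate:
$ free -h
$ grep MemAvailable /proc/meminfo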
Inside exec()
In computing, exec is a functionality of an operating system that runs an executable file in the context of an already existing process, replacing the previous executable. This act is also referred to as an overlay. It is especially important in Unix-like systems, although it exists elsewhere too. As a new process is not created, the process identifier (PID) does not change, but the machine code, data, heap, and stack of the process are replaced by those of the new program.
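A minimal shell illustration (the PID shown is illustrative): $$ is expanded to the shell's PID before exec, and because exec replaces the shell rather than forking, ps reports that same PID for itself:
$ bash -c 'echo "PID before exec: $$"; exec ps -o pid,comm -p $$'
PID before exec: 12345
  PID COMMAND
12345 ps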
====================================================SAR===================================================================================
sar (System Activity Report): shows system activity information; it gives more detail about CPU, memory, interrupts, I/O, power, network, etc. You can also check the commands for specific resources in the sections below.
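The general form is "sar <flags> interval count", and -f reads previously collected data instead of sampling live; a sketch (sa12 is an example file name; the daily files live under /var/log/sa on many distributions):
# sar -u 5 3                     # CPU utilization, 3 samples, 5 seconds apart
# sar -u -f /var/log/sa/sa12     # replay CPU data recorded on the 12th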
# -B is the more general view: paging statistics (all pages moved to/from disk, including file I/O through the page cache, plus page faults and reclaim)
# sar -B 5
Linux 3.10.0-1160.el7.x86_64 (dev)      10/12/2022      _x86_64_        (16 CPU)
05:14:19 PM  pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s pgsteal/s    %vmeff
05:14:24 PM      0.00  31948.80     10.00      0.00    136.20      0.00      0.00      0.00      0.00
05:14:29 PM      0.00 236544.00     10.80      0.00     57.40      0.00      0.00      0.00      0.00
# -W is only about swapping of process memory (process pages swapped to disk when there is not enough memory)
# sar -W 5
Linux 3.10.0-1160.el7.x86_64 (dev)      10/12/2022      _x86_64_        (16 CPU)

05:14:43 PM  pswpin/s pswpout/s
05:14:48 PM      0.00      0.00
05:14:53 PM      0.00      0.00
Report I/O and transfer rate statistics
# sar -b 5
Linux 3.10.0-1160.el7.x86_64 (dev)      10/22/2021      _x86_64_        (8 CPU)
05:34:02 PM       tps      rtps      wtps   bread/s   bwrtn/s
05:34:07 PM      0.00      0.00      0.00      0.00      0.00
05:34:12 PM      0.00      0.00      0.00      0.00      0.00
Report activity for each block device
# sar -d 5
Linux 3.10.0-1160.el7.x86_64 (dev)      10/22/2021      _x86_64_        (8 CPU)
05:34:20 PM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
05:34:25 PM    dev8-0      0.20      0.00      6.40     32.00      0.00      1.00      1.00      0.02
05:34:25 PM  dev253-0      0.20      0.00      6.40     32.00      0.00      1.00      1.00      0.02
05:34:25 PM  dev253-1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:34:25 PM  dev253-2      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
show interrupts per 5s
# sar -I ALL 5
Linux 3.10.0-1160.el7.x86_64 (dev)      10/22/2021      _x86_64_        (8 CPU)
05:37:36 PM      INTR    intr/s
05:37:41 PM         0      0.00
05:37:41 PM         1      0.00
05:37:41 PM         2      0.00
05:37:41 PM         3      0.00
05:37:41 PM         4      0.00
05:37:41 PM         5      0.00
05:37:41 PM         6      0.00
05:37:41 PM         7      0.00
05:37:41 PM         8      0.00
05:37:41 PM         9      0.00
05:37:41 PM        10      0.00
05:37:41 PM        11      0.00
05:37:41 PM        12      0.00
05:37:41 PM        13      0.00
05:37:41 PM        14      0.80
05:37:41 PM        15      0.00
show power management
$ sar -m ALL 10
Linux 3.10.0-327.36.4.el7.x86_64 (A04-R08-I138-47-91TYB72.JCLOUD.COM)   10/22/2021      _x86_64_        (32 CPU)
05:40:01 PM     CPU       MHz
05:40:11 PM     all   1258.82
05:40:01 PM    TEMP      degC     %temp DEVICE
05:40:11 PM       1     43.00     55.84 coretemp-isa-0000
05:40:11 PM       2     38.00     49.35 coretemp-isa-0000
05:40:11 PM       3     37.00     48.05 coretemp-isa-0000
05:40:11 PM       4     34.00     44.16 coretemp-isa-0000
05:40:11 PM       5     38.00     49.35 coretemp-isa-0000
05:40:11 PM       6     33.00     42.86 coretemp-isa-0000
05:40:11 PM       7     34.00     44.16 coretemp-isa-0000
05:40:11 PM       8     37.00     48.05 coretemp-isa-0000
05:40:11 PM       9     35.00     45.45 coretemp-isa-0000
05:40:11 PM      10     44.00     57.14 coretemp-isa-0001
05:40:11 PM      11     36.00     46.75 coretemp-isa-0001
05:40:11 PM      12     36.00     46.75 coretemp-isa-0001
05:40:11 PM      13     37.00     48.05 coretemp-isa-0001
05:40:11 PM      14     37.00     48.05 coretemp-isa-0001
05:40:11 PM      15     34.00     44.16 coretemp-isa-0001
05:40:11 PM      16     36.00     46.75 coretemp-isa-0001
05:40:11 PM      17     34.00     44.16 coretemp-isa-0001
05:40:11 PM      18     34.00     44.16 coretemp-isa-0001
05:40:01 PM     BUS  idvendor    idprod  maxpower manufact                 product
05:40:11 PM       1      8087      800a         0
05:40:11 PM       2      8087      8002         0
05:40:11 PM       1      413c      a001       200 no manufacturer          Gadget USB HUB
show network stats; there are many fields, only some are listed here
# sar -n ALL 10
Linux 3.10.0-327.36.4.el7.x86_64 (A04-R08-I138-47-91TYB72.JCLOUD.COM)   10/22/2021      _x86_64_        (32 CPU)
05:43:15 PM          IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
05:43:25 PM   tap_metadata      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM vxlan_sys_4789      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM            br0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM   tap_proxy_ns      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM     ovs-system      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM      tap_proxy      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM           eth0      2.10      2.00      0.23      0.19      0.00      0.00      0.00
05:43:25 PM             lo      2.00      2.00      0.79      0.79      0.00      0.00      0.00
05:43:25 PM            em2      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM            em4      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM            em3      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM        docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:15 PM          IFACE   rxerr/s   txerr/s    coll/s  rxdrop/s  txdrop/s  txcarr/s  rxfram/s  rxfifo/s  txfifo/s
05:43:25 PM   tap_metadata      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM vxlan_sys_4789      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM            br0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM   tap_proxy_ns      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM     ovs-system      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM      tap_proxy      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM           eth0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM             lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM            em2      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM            em4      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM            em3      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:25 PM        docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Report cpu queue length and load averages
# sar -P ALL -q 10
Linux 3.10.0-327.36.4.el7.x86_64 (A04-R08-I138-47-91TYB72.JCLOUD.COM)   10/22/2021      _x86_64_        (32 CPU)
05:48:32 PM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked
05:48:42 PM         0      1069      0.43      0.46      0.49         0

Report memory utilization statistics
# sar -r 10
Linux 3.10.0-327.36.4.el7.x86_64 (A04-R08-I138-47-91TYB72.JCLOUD.COM)   10/22/2021      _x86_64_        (32 CPU)
05:46:25 PM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
05:46:35 PM 119478308  12269620      9.31      1868   9411160   5926572      3.99   4968572   5430432      1560
Report CPU utilization
# sar -P ALL -u 10
Linux 3.10.0-327.36.4.el7.x86_64 (A04-R08-I138-47-91TYB72.JCLOUD.COM)   10/22/2021      _x86_64_        (32 CPU)
05:47:32 PM     CPU     %user     %nice   %system   %iowait    %steal     %idle
05:47:42 PM     all      1.39      0.00      0.40      0.00      0.00     98.20
05:47:42 PM       0     40.49      0.00      0.00      0.00      0.00     59.51
05:47:42 PM       1      0.00      0.00      0.30      0.00      0.00     99.70
05:47:42 PM       2      0.41      0.00      1.32      0.00      0.00     98.28
05:47:42 PM       3      0.10      0.00      0.20      0.00      0.00     99.70
05:47:42 PM       4      0.30      0.00      0.80      0.00      0.00     98.89
05:47:42 PM       5      0.20      0.00      0.40      0.00      0.00     99.40
05:47:42 PM       6      0.20      0.00      0.40      0.00      0.00     99.40
05:47:42 PM       7      0.00      0.00      0.20      0.00      0.00     99.80
05:47:42 PM       8      0.70      0.00      0.90      0.00      0.00     98.39
Report task creation and system switching activity
# sar -w 10
Linux 3.10.0-327.36.4.el7.x86_64 (A04-R08-I138-47-91TYB72.JCLOUD.COM)   10/22/2021      _x86_64_        (32 CPU)
05:48:56 PM    proc/s   cswch/s
05:49:06 PM      2.30  12558.70
====================================================SAR===================================================================================
Show per-CPU stats: CPU utilization broken down into %usr, %sys, %guest (time spent running virtual CPUs), etc.
# mpstat -P ALL -u
04:38:08 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
04:38:08 PM  all    7.03    0.06    2.89    0.01    0.00    0.00    0.00    8.28    0.00   81.73
04:38:08 PM    0    0.88    0.08    3.26    0.01    0.00    0.19    0.00   10.02    0.00   85.56
04:38:08 PM    1    0.89    0.04    3.30    0.01    0.00    0.02    0.00    9.47    0.00   86.28
04:38:08 PM    2    0.83    0.08    3.15    0.01    0.00    0.01    0.00   10.15    0.00   85.78
04:38:08 PM    3    0.82    0.04    3.15    0.01    0.00    0.00    0.00    9.61    0.00   86.39
04:38:08 PM    4    1.03    0.07    4.59    0.01    0.00    0.01    0.00   12.51    0.00   81.78
04:38:08 PM    5    0.92    0.04    3.22    0.01    0.00    0.00    0.00    9.58    0.00   86.23
04:38:08 PM    6    1.10    0.07    4.63    0.01    0.00    0.00    0.00   12.52    0.00   81.66
04:38:08 PM    7    0.83    0.05    3.42    0.01    0.00    0.00    0.00   10.44    0.00   85.25
04:38:08 PM    8   99.75    0.00    0.25    0.00    0.00    0.00    0.00    0.00    0.00    0.00
CPU soft irq
# mpstat -I SCPU
Linux 3.10.0-693.21.4.el7.x86_64 (A01-R15-I124-40-CCK4HP2.JCLOUD.COM)   10/22/2021      _x86_64_        (64 CPU)
04:39:57 PM  CPU       HI/s   TIMER/s  NET_TX/s  NET_RX/s   BLOCK/s BLOCK_IOPOLL/s TASKLET/s   SCHED/s HRTIMER/s     RCU/s
04:39:57 PM    0       0.00     54.90      0.20      2.42      0.00           0.00      0.05     12.35      0.00     10.65
04:39:57 PM    1       0.00     41.11      0.00      0.48      0.04           0.00      7.10     43.85      0.00      7.03
04:39:57 PM    2       0.00     60.01      0.01     14.90      0.00           0.00      0.57     59.44      0.00     10.80
04:39:57 PM    3       0.00     33.81      0.00      0.50      0.04           0.00      0.00     52.79      0.00      3.72
04:39:57 PM    4       0.00     40.35      0.01     17.83      0.00           0.00      0.75      6.86      0.00     23.19
04:39:57 PM    5       0.00     44.60      0.00      0.51      0.04           0.00      0.00     53.62      0.00      7.76
04:39:57 PM    6       0.00     44.92      0.01     12.48      0.00           0.00      0.51      7.00      0.00     24.59
04:39:57 PM    7       0.00     58.52      0.00      0.46      0.04           0.00      0.00     57.85      0.00     12.73
04:39:57 PM    8       0.00     33.03      0.00      0.00      0.00           0.00      0.00      0.00      0.00     58.50
Show CPU live stats
# top
top - 16:45:28 up 771 days,  3:16,  1 user,  load average: 6.66, 7.24, 6.54
Tasks: 670 total,   9 running, 661 sleeping,   0 stopped,   0 zombie
%Cpu(s):  9.7 us,  1.6 sy,  0.1 ni, 88.6 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 26379142+total,  3811248 free, 23686681+used, 23113348 buff/cache
KiB Swap: 16777212 total, 16691312 free,    85900 used. 25565136 avail Mem
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 127796 root      10 -10  0.105t 573032  20444 S 403.6  0.2  42365,18 vswitchd
 131315 root      20   0 9188560  87436   5940 S 108.6  0.0 668887:05 qemu-kvm
 113257 root      20   0 9220320  77628   5916 S  78.8  0.0 300804:39 qemu-kvm
  69578 root      20   0 9043128  66088   3640 S  18.2  0.0   2199:03 qemu-system-x86
 123753 root      20   0 9039996  63892   3604 S  15.9  0.0 764:21.77 qemu-system-x86
 113084 root      20   0 9074688  68036   1916 S  12.3  0.0 170215:57 qemu-system-x86
  99040 root      20   0 16.647g  65140   1900 S   9.6  0.0 158111:53 qemu-system-x86
 133933 root      20   0 4836392  64328   3648 S   8.6  0.0 381:15.29 qemu-system-x86
  92403 root      20   0 4825240  62916   3308 S   7.3  0.0  21040:53 qemu-system-x86
 100018 root      20   0 3471696   5216   2616 S   7.0  0.0   8384:57 logd
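Useful interactive keys inside top (a quick note, not exhaustive): press 1 to toggle the per-CPU breakdown, P to sort by %CPU, M to sort by %MEM, and c to show full command lines.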
# htop
show live virtual memory usage
show stats every 2s; actually it also shows io, system and cpu as well
$ vmstat -n 2
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 5  4  85900 4185664   4396 22772168    0    0     1    91    0    0 15  3 82  0  0
 6  0  85900 4181612   4396 22772580    0    0    64   479 55060 82145  8  1 91  0  0
 8  0  85900 4184968   4396 22772636    0    0    96    98 56364 87759  8  1 91  0  0
 8  0  85900 4183828   4396 22772936    0    0    96   152 58835 88482  9  1 90  0  0
 6  0  85900 4180524   4396 22772920    0    0     0   320 58749 94072  9  1 90  0  0
 5  0  85900 4184580   4396 22773588    0    0     0   234 67631 111630  9  2 89  0  0
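Key columns: r = runnable processes, b = processes blocked in uninterruptible sleep, si/so = swap in/out, bi/bo = blocks read/written, in = interrupts/s, cs = context switches/s. A couple of handy variants (a sketch):
$ vmstat -w -S M 2     # wide output, memory reported in MB
$ vmstat -s            # one-shot event counters and memory summary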
show io statistics; mostly used to find which disk has high io await. The io wait of the whole system is visible in top (96.0%wa here)
# top
top - 14:31:20 up 35 min,  4 users,  load average: 2.25, 1.74, 1.68
Tasks:  71 total,   1 running,  70 sleeping,   0 stopped,   0 zombie
Cpu(s):  2.3%us,  1.7%sy,  0.0%ni,  0.0%id, 96.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:    245440k total,   241004k used,     4436k free,      496k buffers
Swap:   409596k total,     5436k used,   404160k free,   182812k cached
show iostat per 10s for each block device (check which block device has high io wait)
# sar -d 5
# iostat -txz 10
Linux 3.10.0-693.21.4.el7.x86_64 (A01-R15-I124-40-CCK4HP2.JCLOUD.COM)   10/22/2021      _x86_64_        (64 CPU)
10/22/2021 05:14:10 PM
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          15.31    0.06    2.89    0.01    0.00   81.73
Device:   rrqm/s   wrqm/s     r/s     w/s    rkB/s     wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
nvme0n1     0.00     0.06    1.06   47.46    43.69   4897.74   203.70     0.01    0.25    0.26    0.25   0.06   0.29
sda         0.00     0.03    0.00    0.73     0.07      7.51    20.56     0.01   19.20    6.80   19.27   3.83   0.28
nb100       0.00     0.01    0.29    7.33    36.40    545.36   152.62     0.02    3.25    9.77    2.99   1.07   0.82
nb101       0.00     0.00    0.00    1.76     0.03     14.69    16.69     0.00    0.94    1.29    0.94   0.29   0.05
nb102       0.00     0.00    0.00    6.72     0.01    132.59    39.46     0.01    1.18    0.70    1.18   0.28   0.19
nb103       0.00     0.00    0.00    0.55     0.02      7.14    25.97     0.00    0.86    0.54    0.86   0.46   0.03
nb104       0.00     0.00    0.00    0.31     0.01     10.68    68.74     0.00    1.45    0.50    1.45   0.43   0.01
nb105       0.00     0.00    0.00    1.29     0.02     78.13   121.00     0.00    3.56    0.62    3.56   0.50   0.06
nb106       0.00     0.00    0.01    0.71     0.83     40.81   116.51     0.00    1.19    0.57    1.19   0.69   0.05
nb107       0.00     0.00    0.00    0.17     0.01      1.37    16.55     0.00    1.21    8.04    1.20   0.39   0.01
nb108       0.00     0.00    0.00    0.17     0.00      1.29    15.10     0.00    0.90    0.53    0.90   0.44   0.01
nb109       0.00     0.00    0.00    0.00     0.00      0.04    60.20     0.00   42.15    0.42   43.57   0.46   0.00
show io per process (which process is doing heavy io)
# iotop
Total DISK READ :       0.00 B/s | Total DISK WRITE :       0.00 B/s
Actual DISK READ:       0.00 B/s | Actual DISK WRITE:       0.00 B/s
   TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN      IO>    COMMAND
 17391 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.02 % [kworker/6:0]
 16896 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % python -m ipykernel_launcher -f /root/.local/share/jupyter/runtime/kernel-0fe83e4c-ccd4-49f4-ae7e-4b07fabb2dc3.json
     1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % systemd --switched-root --system --deserialize 22
     2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
     4 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/0:0H]
     6 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
     7 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/0]
     8 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_bh]
     9 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_sched]
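To cut the output down to the interesting parts, something like the following works (a sketch): -o shows only tasks actually doing I/O, -P groups by process instead of thread, -a shows accumulated totals since iotop started.
# iotop -o -P -a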
show interface statistics (stats of each iface)
# ifstat
#kernel
Interface     RX Pkts/Rate  TX Pkts/Rate  RX Data/Rate  TX Data/Rate  RX Errs/Drop  TX Errs/Drop  RX Over/Rate  TX Coll/Rate
lo                 0 0           0 0           0 0           0 0           0 0           0 0           0 0           0 0
enp0s3             8 0           6 0         560 0        1424 0           0 0           0 0           0 0           0 0
docker0            0 0           0 0           0 0           0 0           0 0           0 0           0 0           0 0
vethbedf2bf        0 0           0 0           0 0           0 0           0 0           0 0           0 0           0 0
# live stats on each interface
# iftop
# live stats on each process that has network io
# nethogs
NetHogs version 0.8.5
    PID USER     PROGRAM              DEV        SENT      RECEIVED
  13337 root     sshd: root@pts/2     enp0s3     0.218     0.186 KB/sec
      ? root     unknown TCP                     0.000     0.000 KB/sec
  TOTAL                                          0.218     0.186 KB/sec
show details about an interface: config, stats, etc.
# ethtool -h
        ethtool -g|--show-ring DEVNAME                    Query RX/TX ring parameters
        ethtool -k|--show-features|--show-offload DEVNAME Get state of protocol offload and other features
        ethtool -i|--driver DEVNAME                       Show driver information
        ethtool -S|--statistics DEVNAME                   Show adapter statistics
        ethtool -n|-u|--show-nfc|--show-ntuple DEVNAME    Show Rx network flow classification options or rules
        ethtool -x|--show-rxfh-indir|--show-rxfh DEVNAME  Show Rx flow hash indirection and/or hash key
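For example, to check ring sizes and look for per-queue drop counters (a sketch; eth0 is an assumed interface name, and the statistic names vary by driver):
# ethtool -g eth0
# ethtool -S eth0 | grep -i drop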
show power management (power used by each process, live)
# powertop
PowerTOP v2.9      Overview   Idle stats   Frequency stats   Device stats   Tunables
Summary: 72.3 wakeups/second,  0.0 GPU ops/seconds, 0.0 VFS ops/sec and 0.3% CPU use
                Usage       Events/s    Category       Description
              122.3 µs/s     20.0        Process        [PID 460] [xfsaild/dm-0]
               72.8 µs/s      9.5        Timer          tick_sched_timer
              116.2 µs/s      6.7        Timer          hrtimer_wakeup
               93.0 µs/s      5.7        Process        [PID 1084] /usr/bin/containerd
               63.7 µs/s      5.7        Process        [PID 9] [rcu_sched]
              632.7 µs/s      4.8        Process        [PID 1049] /home/data/Anaconda3/bin/python /home/data/Anaconda3/bin/jupyter-notebook -y --no-browser --allow-root --ip=10.0.2.1
               61.6 µs/s      4.8        Process        [PID 1082] /usr/bin/containerd
               34.9 µs/s      2.9        Interrupt      [3] net_rx(softirq)
              183.8 µs/s      1.9        Interrupt      [7] sched(softirq)
              239.2 µs/s      1.0        kWork          e1000_watchdog
Benchmark tools
for micro-benchmarking basic OS operations (syscalls, context switches, memory bandwidth/latency)
# apt-get install lmbench
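Once installed, the individual benchmarks can be run directly; a sketch (tool names are from the lmbench suite, sizes are arbitrary, and depending on the distribution the binaries may live outside the default PATH, e.g. under /usr/lib/lmbench/bin):
# lat_syscall null          # latency of a trivial system call
# lat_ctx -s 0 2 4 8        # context-switch latency for 2/4/8 processes
# bw_mem 128m rd            # memory read bandwidth over a 128 MB buffer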
Layer 4 throughput: use NetPerf and iPerf, two open-source network performance benchmark tools that support both the UDP and TCP protocols. Each tool also provides other information: NetPerf, for example, provides tests for end-to-end latency (round-trip times, RTT) and is a good replacement for ping.
iPerf provides packet loss and delay jitter, useful to troubleshoot network performance.
for network: test the path between the client (netperf) and the server (netserver)
server side
# netserver
client side: run the test for 300s (or never stop with -l 0)
# netperf -H $server -l 300 -t TCP_STREAM
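For the latency (RTT) side mentioned above, netperf's request/response test can be used; a sketch:
# netperf -H $server -t TCP_RR -l 60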
server side
# iperf3 --server --interval 30
client side
# iperf3 --client $server --time 300 --interval 30
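For packet loss and jitter, run iperf3 in UDP mode; a sketch (the 1G target rate is an arbitrary choice):
# iperf3 --client $server --udp -b 1G --time 300
iperf3 then reports jitter and lost/total datagrams at the end of the run.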