libvirt-debug

Overview

libvirt ships with a number of tools for managing VMs and virtual disks; let’s take a quick look at them.

VM management

  • virt-install: install VMs
  • virsh: start, stop, and destroy VMs; monitor them and collect stats
  • virt-manager: GUI for managing VMs

Virtual disk management (provided by libguestfs)

  • guestfish: interactive shell to manage disks (--verbose for debug)
  • guestmount/guestunmount: mount/umount a disk to a host path (--verbose for debug)
  • virt-cat/virt-copy-in/virt-copy-out/virt-format/virt-xxx: commands to manage virtual disks (--verbose for debug)

ALL libvirt tools (including the libguestfs tools) talk to the libvirt daemon to manage VMs or disks by default, but you can talk to qemu directly if libvirt is not there:

$ LIBGUESTFS_BACKEND=direct virt-sysprep

Tools and frequently used commands

Frequently used virsh commands

$ virsh help dominfo

# virsh against a remote server
$ virsh -c qemu+tcp://172.17.0.3/system xxx
# vda is the target name in the domain XML
$ virsh domblkinfo vm100 vda
Capacity: 8589934592
Allocation: 200015872
Physical: 200015872

$ virsh domblkstat vm100 vda
vda rd_req 5761
vda rd_bytes 133831680
vda wr_req 86
vda wr_bytes 3244544
vda flush_operations 9
vda rd_total_times 5365449922
vda wr_total_times 599328712
vda flush_total_times 15684892

# mynet0 is the target name in the domain XML
$ virsh domifstat vm100 mynet0
mynet0 rx_bytes 1748
mynet0 rx_packets 29
mynet0 rx_errs 0
mynet0 rx_drop 0
mynet0 tx_bytes 0
mynet0 tx_packets 0
mynet0 tx_errs 0
mynet0 tx_drop 0

# stats come from the memballoon driver inside the guest
$ virsh dommemstat vm100
actual 1310720
swap_in 0
swap_out 0
major_fault 175
minor_fault 142267
unused 1082444
available 1210140
usable 1099536
last_update 1661222057
rss 1526228

$ virsh domblkerror vm100
No errors found

$ virsh dominfo vm100
Id: 1
Name: vm100
UUID: 4a0a3bb3-57cf-4efd-84c7-be9b74b02ccd
OS Type: hvm
State: running
CPU(s): 4
CPU time: 94.0s
Max memory: 1572864 KiB
Used memory: 1310720 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0

$ virsh domjobinfo vm100
Job type: None

$ virsh domstate vm100 --reason
running (booted)

$ virsh console vm100    # exit the console with `Ctrl+]`
$ virsh dumpxml vm100
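A quick derived metric from the domblkinfo output above: comparing Allocation to Capacity shows how much of a thin-provisioned disk is actually backed on the host. A sketch using the sample numbers, shell arithmetic only:

```shell
# Sample values from the domblkinfo output above
capacity=8589934592      # virtual size the guest sees (8 GiB)
allocation=200015872     # bytes actually allocated on the host
echo "allocated: $(( allocation * 100 / capacity ))% of virtual size"   # -> allocated: 2% of virtual size
```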

virsh examples

# this will install libvirt-client(virsh) and libvirt-daemon(libvirtd)
$ yum install -y libvirt

# install kvm and bios to start vm
$ yum install -y kvm
$ yum install -y seabios

$ libvirtd --version

$ virsh -v

# By default virsh connects to the local libvirtd over a unix socket,
# but it can also connect to a remote libvirtd
# ssh
$ virsh --connect qemu+ssh://remote/system
# plain tcp
$ virsh --connect qemu+tcp://remote:port/system

# show the connection
$ virsh uri

# create a vm; for QEMU and LXC, libvirtd stores the vm XML both on disk and in memory

# DO NOT edit the on-disk XML directly; that way there is no validation!!!
# if you edit the on-disk XML, the VM simply vanishes the next time libvirtd is restarted:
# the VM disappears from libvirt because its XML has become invalid, after which libvirt can't do anything with it

$ virsh dumpxml $domain

# change the domain config via xml; restart the domain for it to take effect
$ virsh shutdown $domain
$ virsh edit $domain
$ virsh start $domain

# delete a vm and related files
$ virsh destroy $domain
$ virsh undefine $domain --remove-all-storage

# attach/detach a disk to a vm
# --target is the device hint and is a required option
# NOTE: vdc may not be what the guest uses if it's not the first available device; say vdb is free: even if you provide vdc, vdb is used
# that means the config shows vdc, but vdb is actually used inside the vm, hence --target should be the first available device if you expect it to be honored as set

# check existing devices from the xml; the dev name may not be the name used inside the guest
$ virsh domblklist centos
Target Source
------------------------------------------------
vda /home/data/tmp/vm100.qcow2
vdc /dev/loop0

# check the real dev names used inside the guest: vda is /dev/loop0 while vdb is /home/data/tmp/vm100.qcow2
$ virsh qemu-agent-command vm100 '{"execute":"guest-exec","arguments":{"path":"lsblk","capture-output":true}}'
{"return":{"pid":980}}

$ virsh qemu-agent-command vm100 '{"execute":"guest-exec-status","arguments":{"pid":980}}'
{"return":{"exitcode":0,"out-data":"TkFNRSAgIE1BSjpNSU4gUk0gIFNJWkUgUk8gVFlQRSBNT1VOVFBPSU5UCnZkYSAgICAyNTM6MCAgICAwICAzNTBLICAwIGRpc2sgCnZkYiAgICAyNTM6MTYgICAwICAgIDhHICAwIGRpc2sgCuKUlOKUgHZkYjEgMjUzOjE3ICAgMCAgICA4RyAgMCBwYXJ0IC8KcG1lbTAgIDI1OTowICAgIDAgIDI1Nk0gIDAgZGlzayAK","exited":true}}

$ echo "TkFNRSAgIE1BSjpNSU4gUk0gIFNJWkUgUk8gVFlQRSBNT1VOVFBPSU5UCnZkYSAgICAyNTM6MCAgICAwICAzNTBLICAwIGRpc2sgCnZkYiAgICAyNTM6MTYgICAwICAgIDhHICAwIGRpc2sgCuKUlOKUgHZkYjEgMjUzOjE3ICAgMCAgICA4RyAgMCBwYXJ0IC8KcG1lbTAgIDI1OTowICAgIDAgIDI1Nk0gIDAgZGlzayAK" | base64 -d
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 350K 0 disk
vdb 253:16 0 8G 0 disk
└─vdb1 253:17 0 8G 0 part /
pmem0 259:0 0 256M 0 disk

# attach-disk <domain> <source> <target> [--targetbus <string>] [--driver <string>] [--subdriver <string>] [--iothread <string>] [--cache <string>] [--io <string>] [--type <string>] [--mode <string>] [--sourcetype <string>] [--serial <string>] [--wwn <string>] [--rawio] [--address <string>] [--multifunction] [--print-xml] [--persistent] [--config] [--live] [--current]
$ virsh attach-disk --domain centos --source /home/data/tmp/disk.raw --persistent --target vdc
# dd if=/dev/zero of=/tmp/disk.raw bs=2M count=5120 status=progress
$ virsh attach-disk centos /tmp/disk.raw vdc --persistent
$ virsh detach-disk --domain centos --config --target vdb
$ virsh detach-disk centos vdb --config

# attach/detach interface
# attach-interface <domain> <type> <source> [--target <string>] [--mac <string>] [--script <string>] [--model <string>] [--inbound <string>] [--outbound <string>] [--persistent] [--config] [--live] [--current] [--print-xml] [--managed]
# virsh domiflist centos
# ovsnet is a network: virsh net-list
Interface Type Source Model MAC
-------------------------------------------------------
vnet1 bridge ovsnet virtio 52:54:00:43:85:03

# add a new interface to domain
$ virsh attach-interface --domain centos --type network --source ovsnet --model virtio
$ virsh attach-interface --domain centos --type network --source ovsnet --model virtio --persistent
# virsh domiflist centos
Interface Type Source Model MAC
-------------------------------------------------------
vnet1 bridge ovsnet virtio 52:54:00:43:85:03
vnet2 bridge ovsnet virtio 52:54:00:a6:45:d6

$ virsh detach-interface centos bridge --mac 52:54:00:a6:45:d6
$ virsh detach-interface centos bridge --mac 52:54:00:a6:45:d6 --persistent
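Replies from guest-exec via qemu-agent-command carry base64 in the out-data field, as shown above. A small sketch that extracts and decodes it without a JSON tool; the sample reply is shortened, and the sed expression assumes the flat shape shown:

```shell
# Sample (shortened) guest-exec-status reply; normally this comes from
# `virsh qemu-agent-command $DOMAIN '{"execute":"guest-exec-status",...}'`
reply='{"return":{"exitcode":0,"out-data":"aGVsbG8K","exited":true}}'
# Pull out the base64 payload of "out-data", then decode it
outdata=$(printf '%s' "$reply" | sed -n 's/.*"out-data":"\([^"]*\)".*/\1/p')
printf '%s' "$outdata" | base64 -d   # -> hello
```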

QMP from virsh

# 6095 is domain id, vm100 is domain name

# HMP protocol
$ virsh qemu-monitor-command --hmp 6095 info block
drive-virtio-disk0: removable=0 file=/export/jvirt/jcs-agent/instances/i-sm6pxr4068/vda backing_file=/export/jvirt/jcs-agent/instances/_base/img-8sdjnj4qbq backing_file_depth=1 ro=0 drv=qcow2 encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0

# all supported QMP commands
$ virsh qemu-monitor-command vm100 --pretty '{"execute": "query-commands"}'

# QMP protocol; --pretty pretty-prints the json output
# virsh qemu-monitor-command vm100 --pretty '{ "execute": "query-block"}'
$ virsh qemu-monitor-command 6095 --pretty '{ "execute": "query-block"}'
{
"return": [
{
"device": "drive-virtio-disk0",
"locked": false,
"removable": false,
"inserted": {
"iops_rd": 0,
"image": {
"backing-image": {
"virtual-size": 42949672960,
"filename": "/export/jvirt/jcs-agent/instances/_base/img-8sdjnj4qbq",
"cluster-size": 65536,
"format": "qcow2",
"actual-size": 24866193408,
"format-specific": {
"type": "qcow2",
"data": {
"compat": "1.1",
"lazy-refcounts": false
}
},
"dirty-flag": false
},
"virtual-size": 42949672960,
"filename": "/export/jvirt/jcs-agent/instances/i-sm6pxr4068/vda",
"cluster-size": 65536,
"format": "qcow2",
"actual-size": 21068431360,
"format-specific": {
"type": "qcow2",
"data": {
"compat": "1.1",
"lazy-refcounts": false
}
},
"backing-filename": "/export/jvirt/jcs-agent/instances/_base/img-8sdjnj4qbq",
"dirty-flag": false
},
"iops_wr": 0,
"ro": false,
"backing_file_depth": 1,
"drv": "qcow2",
"iops": 0,
"bps_wr": 0,
"backing_file": "/export/jvirt/jcs-agent/instances/_base/img-8sdjnj4qbq",
"encrypted": false,
"bps": 0,
"bps_rd": 0,
"file": "/export/jvirt/jcs-agent/instances/i-sm6pxr4068/vda",
"encryption_key_missing": false
},
"type": "unknown"
}
],
"id": "libvirt-8302918"
}

QGA

In order to run commands inside the guest OS, we must have two things prepared.

  • A channel
  • A guest agent running in the guest OS to execute the commands

Enable the channel by editing the domain XML with $ virsh edit $domain

<devices>
  <channel type='unix'>
    <source mode='bind'/>
    <target type='virtio' name='org.qemu.guest_agent.0'/>
  </channel>
</devices>

After saving, libvirt fills in extra info; the full definition looks like this:

<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-3-centos/org.qemu.guest_agent.0'/>
  <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
  <alias name='channel0'/>
  <address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>

After enabling this channel, a unix socket is created on the host for the VM at /var/lib/libvirt/qemu/channel/target/domain-3-centos/org.qemu.guest_agent.0; you can also set the socket path with the path attribute of the source tag, like <source mode='bind' path='/var/lib/libvirt/test.agent.0'/>.
Inside the domain, a char device (/dev/virtio-ports/org.qemu.guest_agent.0) is exposed to users by virtio; writes/reads on the char dev inside the guest OS show up on the unix socket on the host.

Install agent inside guest vm

$ yum install qemu-guest-agent
$ systemctl start qemu-guest-agent
$ systemctl enable qemu-guest-agent

QGA commands

# show all supported commands that can run inside domain
$ virsh qemu-agent-command ${DOMAIN} --pretty '{"execute":"guest-info"}'

# guest-exec
# guest-exec-status
# guest-get-host-name
# guest-get-time
# guest-set-user-password
# guest-shutdown

# guest-get-cpu-usage
# guest-get-mem-usage
# guest-get-time

# File content is base64-encoded!!!
# guest-file-seek
# guest-file-read
# guest-file-write
# guest-file-flush
# guest-file-close
# guest-file-open

# open with read only
$ virsh qemu-agent-command ${DOMAIN} '{"execute":"guest-file-open", "arguments":{"path":"/tmp/test.txt","mode":"r"}}'
{"return":1000}
# guest-file-read: call it several times to get the whole file, then close the handle
# this is the approach to use for large files
$ virsh qemu-agent-command ${DOMAIN} '{"execute":"guest-file-read", "arguments":{"handle":1000}}'
# read with large buffer
$ virsh qemu-agent-command ${DOMAIN} '{"execute":"guest-file-read", "arguments":{"handle":1000, "count":1000000}}'
# guest-file-close
$ virsh qemu-agent-command ${DOMAIN} '{"execute":"guest-file-close", "arguments":{"handle":1000}}'


# guest-exec: run an arbitrary command inside the guest
$ virsh qemu-agent-command ${DOMAIN} '{"execute":"guest-exec","arguments":{"path":"mkdir","arg":["-p","/root/.ssh"],"capture-output":true}}'
{"return":{"pid":911}}

# get the content of a file; the file is opened and closed on every call
# if the file is large the output is truncated!!! you only get the first part of a larger file!!! use guest-file-read instead for large files
$ virsh qemu-agent-command ${DOMAIN} '{"execute":"guest-exec","arguments":{"path":"cat","arg":["/var/log/plugin.log"],"capture-output":true}}'

# the response is base64-encoded
$ virsh qemu-agent-command ${DOMAIN} '{"execute":"guest-exec-status","arguments":{"pid":911}}'

# base64 decode/encode
$ echo "hello" | base64
aGVsbG8K
$ echo "aGVsbG8K" | base64 -d
hello

# encode/decode from file
$ echo "hello" >test.txt
$ base64 ./test.txt >encoded.txt
$ base64 -d ./encoded.txt
hello
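The read-loop pattern implied by guest-file-open/read/close can be sketched locally (dd stands in for guest-file-read with a count argument; no agent involved):

```shell
# Read a file in fixed-size chunks until EOF, appending to a buffer --
# the same loop shape you use with repeated guest-file-read calls.
printf 'hello world' > /tmp/qga-demo.txt
out=""
offset=0
chunk=4                    # tiny chunk size, just to force several reads
while :; do
  part=$(dd if=/tmp/qga-demo.txt bs=1 skip=$offset count=$chunk 2>/dev/null)
  [ -z "$part" ] && break  # empty read == EOF
  out="$out$part"
  offset=$((offset + chunk))
done
echo "$out"                # -> hello world
```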

Manage virtual disk with tools

libguestfs is a C library and a set of tools for creating, accessing and modifying virtual machine disk images. You can look inside the disk images, modify the files they contain, create them from scratch, resize them, and much more.
yum install libguestfs libguestfs-tools
libguestfs works with any disk image, including ones created in VMware, KVM, qemu, VirtualBox, Xen, and many other hypervisors

libguestfs arch

guestfish is an interactive shell that you can use from the command line or from shell scripts to access guest virtual machine file systems. All of the functionality of the libguestfs API is available from the shell; guestfish provides lots of built-in commands (600+) for accessing guest file systems, making it a super-set of the virt-xxx commands below (--verbose for debug).

  • virt-copy-in
  • virt-copy-out
  • virt-edit
  • virt-df
  • virt-tar-in
  • virt-tar-out
  • virt-cat
  • virt-ls
  • virt-make-fs
  • virt-filesystems
  • virt-sysprep
#===========================start a new appliance vm for the whole session, destroyed when you quit the fish shell====
# two ways to run guestfish: provide either a disk image or a libvirt domain name
# -i means inspect the image and mount its file systems (mounting requires the appliance vm started by guestfish)
# 1. guestfish starts a new vm via libvirt and mounts the domain's disk
# --verbose to debug issues
$ guestfish --ro -d $domain -i
><fs> df-h
Filesystem Size Used Avail Use% Mounted on
tmpfs 96M 112K 96M 1% /run
/dev 237M 0 237M 0% /dev
/dev/sda1 8.0G 1.2G 6.9G 14% /sysroot

# NOTE use TAB for command completion!!!

$ virsh list
Id Name State
----------------------------------------------------
4 guestfs-cmurc2zne6ot9rt8 running

# 2. from a disk image (this also calls libvirt to start a new vm)
$ guestfish -a /path/to/disk/image -i
><fs> df-h

#===========================start a temporary vm per command, destroyed once the command result is returned===
$ virt-filesystems -a CentOS-7-x86_64-GenericCloud-1503.qcow2
/dev/sda1
$ virt-filesystems -d $domain
/dev/sda1

$ virt-ls -d $domain /home/centos
$ virt-tar-out -d $domain /home/centos out.tar.gz
$ virt-copy-in -d $domain test.txt /home/centos
$ virt-copy-out -d $domain /home/centos/test.txt .


########################the vm started by guestfish#######################################
# vm100 has its disk /root/jason/vm/test/vm100.qcow2
$ guestfish --ro -d vm100 -i
Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell

Operating system: CentOS Linux release 7.8.2003 (Core)
/dev/sda1 mounted on /

><fs> df


$ ps -ef | grep kvm
/usr/libexec/qemu-kvm -global virtio-blk-pci.scsi=off -nodefconfig -enable-fips -nodefaults -display none -cpu host -machine accel=kvm:tcg -m 500 -no-reboot -rtc driftfix=slew -no-hpet -global kvm-pit.lost_tick_policy=discard -kernel /var/tmp/.guestfs-0/appliance.d/kernel -initrd /var/tmp/.guestfs-0/appliance.d/initrd -device virtio-scsi-pci,id=scsi -drive file=/tmp/libguestfsKuVs4B/overlay1,cache=unsafe,format=qcow2,id=hd0,if=none -device scsi-hd,drive=hd0 -drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none -device scsi-hd,drive=appliance -device virtio-serial-pci -serial stdio -device sga -chardev socket,path=/tmp/libguestfsKuVs4B/guestfsd.sock,id=channel0 -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 -append panic=1 console=ttyS0 udevtimeout=6000 udev.event-timeout=6000 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 TERM=xterm-256color

# kernel: /var/tmp/.guestfs-0/appliance.d/kernel
# initrd: /var/tmp/.guestfs-0/appliance.d/initrd

# disk /tmp/libguestfsKuVs4B/overlay1 whose backing file is /root/jason/vm/test/vm100.qcow2!!!
$ file /tmp/libguestfsKuVs4B/overlay1
/tmp/libguestfsKuVs4B/overlay1: QEMU QCOW Image (v3), has backing file (path /root/jason/vm/test/vm100.qcow2), 8589934592 bytes

NOTE

  • The guestfish appliance vm is started via libvirt by default; set LIBGUESTFS_BACKEND=direct to start it directly with qemu
  • libvirtd is not a must for guestfish, but if you use the -d $domain option libvirtd must be running, since guestfish needs to get the domain's disk info from it

virsh command groups

vm xml

Use virsh to edit VM XML

  • Virtual Networks: net-edit, net-dumpxml
  • Storage Pools: pool-edit, pool-dumpxml
  • Storage Volumes: vol-edit, vol-dumpxml
  • Interfaces: iface-edit, iface-dumpxml

Redefining the XML of a running machine does not change anything immediately; the changes take effect at the next VM startup.

Command groups

Domain management

  • virsh console
  • virsh define (define a guest domain from an xml file)
  • virsh destroy (immediately terminates a running vm)
  • virsh undefine (remove the config of an inactive vm)
  • virsh domjobinfo (return info about jobs running on a domain)
  • virsh dumpxml (output the current vm definition /var/run/libvirt/qemu/$domain.xml to stdout)
  • virsh edit (edit the XML config of a vm located at /etc/libvirt/qemu/$domain.xml; takes effect at the next virsh start)
  • virsh migrate (migrate a vm to another host)
  • virsh suspend/resume/start/shutdown/reboot/reset/save/restore
    • shutdown/reboot act gracefully from inside the guest, like typing the command in the guest
    • destroy/reset act forcibly from outside, like pressing the power button
  • virsh setmem/setvcpus
  • virsh vcpuinfo/vcpucount/vcpupin
  • virsh domiflist
  • virsh attach-device (attach device from an XML file, device can be interface, disk etc)
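For virsh attach-device, the device XML comes from a file; a minimal virtio disk fragment might look like this (the path and target dev are placeholders, not from the original text):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/extra.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Saved as disk.xml, it can be attached with virsh attach-device $domain disk.xml --persistent.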

Domain monitor

  • virsh domblkinfo / dominfo
  • virsh domblkstat /domifstat /dommemstat
  • virsh domstate
  • virsh list --all

Domain Snapshot

  • virsh snapshot-create
  • virsh snapshot-delete
  • virsh snapshot-list
  • virsh snapshot-revert

Host and Hypervisor

  • virsh capabilities
  • virsh nodeinfo
  • virsh uri

Network Interface

  • virsh iface-dumpxml
  • virsh iface-list / iface-name / iface-mac
  • virsh iface-define / iface-undefine
  • virsh iface-start / iface-destroy
  • virsh iface-edit

Network

  • virsh net-create / net-destroy
  • virsh net-edit / net-dumpxml
  • virsh net-define / net-undefine
  • virsh net-start
  • virsh net-info / net-list / net-name

Node device

  • virsh nodedev-create
  • virsh nodedev-reattach / nodedev-dettach
  • virsh nodedev-destroy
  • virsh nodedev-dumpxml
  • virsh nodedev-list

Storage pool

  • virsh pool-create / pool-destroy / pool-delete
  • virsh pool-define / pool-undefine
  • virsh pool-start
  • virsh pool-list / pool-dumpxml
  • virsh pool-edit / pool-info

Storage Volume

  • virsh vol-create / vol-delete
  • virsh vol-info /vol-list

event

libvirt supports several types of events that can be monitored by a client: domain events, qemu monitor events, network events, storage pool events, and nodedev events

  • virsh event(domain event)
  • virsh qemu-monitor-event
  • virsh net-event
  • virsh nodedev-event
  • virsh pool-event
  • virsh secret-event

blk

  • virsh domblklist centos10
  • virsh attach-disk /detach-disk

libvirt supports three modes of attaching a disk to a vm: --config, --live, --persistent; by default it’s --live

  • --config: the new disk setting is only written to /etc/libvirt/qemu/$domain.xml and only takes effect after virsh destroy followed by virsh start; virsh reset or virsh reboot is not sufficient.
  • --live: the new disk setting exists only in the memory of the running vm; you can see it with virsh dumpxml $domain or in /var/run/libvirt/qemu/$domain.xml, and it takes effect immediately, but it is not written to /etc/libvirt/qemu/$domain.xml. After virsh reset or virsh reboot you can still see it, but after virsh destroy and virsh start the new disk is gone: /var/run/libvirt/qemu/$domain.xml is deleted on virsh destroy, and /etc/libvirt/qemu/$domain.xml is what virsh start uses.
  • --persistent == --config + --live: the new disk setting is both in memory (/var/run/libvirt/qemu/$domain.xml) and on disk (/etc/libvirt/qemu/$domain.xml)
  • --current: if the VM is offline, behaves like --config; if the VM is running, behaves like --live.
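The rules above can be condensed into a tiny helper (hypothetical, for illustration only): given the domain state and whether the change should survive a destroy/start cycle, it prints the flag set to use.

```shell
# Map (domain state, want-persistence) -> attach/detach flags, mirroring
# the --config / --live / --persistent semantics described above.
flags_for() {   # $1 = running|shutoff, $2 = yes|no (survive destroy/start?)
  case "$1:$2" in
    running:yes) echo "--live --config" ;;   # same effect as --persistent
    running:no)  echo "--live" ;;
    shutoff:*)   echo "--config" ;;          # only --config makes sense offline
  esac
}
flags_for running yes    # -> --live --config
flags_for shutoff yes    # -> --config
```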

troubleshooting

config and runtime files

Related Paths:

  • /etc/libvirt/
  • /etc/sysconfig/libvirtd(it's a file)
  • /var/run/libvirt/
  • /var/log/libvirt/
  • /var/lib/libvirt/
# /etc/libvirt/libvirtd.conf: global conf of libvirtd
# /etc/sysconfig/libvirtd: override libvirtd.conf
# /usr/lib/systemd/system/libvirtd.service: systemd service
# /etc/libvirt/qemu/networks/default.xml: default network

# /var/run/libvirt: runtime of daemon, like daemon socket/pid of each vm
$ ls /var/run/libvirt/
hostdevmgr libvirt-admin-sock libvirt-sock libvirt-sock-ro lxc network qemu storage virtlockd-sock virtlogd-sock

# hostdevmgr network storage: settings for devices, networking (rules), volumes (pool, local disk)
# libvirt-admin-sock libvirt-sock libvirt-sock-ro: sockets for admin, readonly access etc
# lxc qemu: xml/pid of each running instance; removed when the instance stops
# virtlockd-sock virtlogd-sock: sockets of virtlockd and virtlogd

$ ls /var/run/libvirt/qemu/
centos.pid centos.xml


# /var/log/libvirt: logs of VMs (console output) plus some logs from libvirtd
$ ls /var/log/libvirt/qemu/
centos.log

# by default libvirtd writes logs to /var/log/messages (centos) or /var/log/syslog (ubuntu)
# default: log_outputs="3:syslog:libvirtd"
# change it to any file by editing /etc/libvirt/libvirtd.conf
# 1: DEBUG
# 2: INFO
# 3: WARNING
# 4: ERROR
# only logs of level greater than or equal to 2 are sent to the file
log_level = 1
log_outputs="2:file:/var/log/libvirt/libvirtd.log"


# /etc/libvirt: daemon conf and persistent XML of VMS located at qemu/
$ ls /etc/libvirt/
libvirt-admin.conf libvirtd.conf nwfilter/ qemu.conf virtlogd.conf
libvirt.conf qemu/ virtlockd.conf


$ ls /var/lib/libvirt
dnsmasq filesystems/ images/ network/ qemu/

# snapshot/ holds the xml of each snapshot; snapshot data is saved into the image itself by default (internal snapshot)!!
# save/ holds managed save state (cpu/memory)
$ ls /var/lib/libvirt/qemu/
channel/ domain-1-centos save/ snapshot/

# sockets of this vm, like the qmp monitor socket
$ ls /var/lib/libvirt/qemu/domain-1-centos
master-key.aes monitor.sock

# socket of this vm for QGA channel
$ ls /var/lib/libvirt/qemu/channel/target/domain-1-centos/
org.qemu.guest_agent.0

libvirt log

libvirtd.conf

log_level = 1
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
keepalive_interval=60
admin_keepalive_interval=60

qemu.conf

# use virtlogd as backend
stdio_handler = "logd"

virtlogd.conf

log_level = 1 
log_outputs="1:file:/var/log/libvirt/virtlogd.log"

FAQ

Difference between qemu:///system and qemu:///session, Which one should I use?

All ‘system’ URIs (be it qemu, lxc, uml, …) connect to the libvirtd daemon running as root which is launched at system startup. Virtual machines created and run using 'system' are usually launched as root

All ‘session’ URIs launch a libvirtd instance as your local user, and all VMs are run with local user permissions.
You will definitely want to use qemu:///system if your VMs are acting as servers. VM autostart on host boot only works for 'system', and the root libvirtd instance has necessary permissions to use proper networking via bridges or virtual networks

Migration

There are two primary types of migration with QEMU/KVM and libvirt:

  • Plain migration: The source host VM opens a direct unencrypted TCP connection to the destination host for sending the migration data. Unless a port is manually specified, libvirt will choose a migration port in the range 49152-49215, which will need to be open in the firewall on the remote host.

  • Tunneled migration: The source host libvirtd opens a direct connection to the destination host libvirtd for sending migration data. This allows the option of encrypting the data stream. This mode doesn’t require any extra firewall configuration, but is only supported with qemu 0.12.0 or later, and libvirt 0.7.2.

For all QEMU/KVM migrations, libvirtd must be running on the source and destination host.

  • For tunneled migration, no extra configuration should be required; you simply need to pass the --tunnelled flag to virsh migrate.

  • For plain unencrypted migration, the TCP port range 49152-49215 must be opened in the firewall on the destination host.

virsh migrate $domain $REMOTE_HOST_URI --migrateuri tcp://$REMOTE_HOST

How to import a vm from a terminal (no X display)

$ virsh net-list

$ virt-install --name guest1-rhel7 --memory 2048 --vcpus 2 --disk /home/data/tmp/CentOS-7-x86_64-GenericCloud-1503.qcow2 --import --os-type linux --wait 0 --network default
# OR
$ virt-install --name guest1-rhel7 --memory 2048 --vcpus 2 --disk /home/data/tmp/CentOS-7-x86_64-GenericCloud-1503.qcow2 --import --os-type linux --wait 0 --network none
# must set --wait 0, otherwise it will show 'Domain installation still in progress. Waiting for installation to complete'

How to install a VM from a terminal

Here we use an image from a remote server via the --location parameter

-l LOCATION, --location=LOCATION
Distribution tree installation source. virt-install can recognize certain distribution trees and fetch a bootable kernel/initrd pair to launch the install.

The "LOCATION" can take one of the following forms:
DIRECTORY
Path to a local directory containing an installable distribution image
nfs:host:/path or nfs://host/path
An NFS server location containing an installable distribution image
http://host/path
An HTTP server location containing an installable distribution image
ftp://host/path
An FTP server location containing an installable distribution image

centos 7
--location 'http://mirror.i3d.net/pub/centos/7/os/x86_64/'

ubuntu (different versions have different urls)
--location 'http://archive.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/'
--location 'http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/'
$ virt-install --name ubuntu --memory 2048 --vcpus 2 --disk size=8 --location 'http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/' --os-type linux --graphics none --extra-args "console=tty0 console=ttyS0,115200n8"

how to change disk contents without booting the vm

virt-sysprep (it only supports linux guests!!!) can reset or unconfigure a virtual machine so that clones can be made from it. Steps in this process include removing SSH host keys, removing persistent network MAC configuration, and removing user accounts. virt-sysprep can also customize a virtual machine, for instance by adding SSH keys, users or logos. Each step can be enabled or disabled as required.

Usage

virt-sysprep [--options] -d domname
virt-sysprep [--options] -a disk.img [-a disk.img ...]

NOTE
virt-sysprep modifies the guest or disk image in place

  • The virtual machine must be shut down first.
  • Disk images must not be edited concurrently.
  • virt-sysprep depends on a running libvirt by default
  • LIBGUESTFS_BACKEND=direct virt-sysprep ... (starts qemu without libvirtd)
# NO wildcards supported for the three options below
# --copy SOURCE:DEST inside vm
# --copy-in LOCALPATH:REMOTEDIR from host to guest
# --move SOURCE:DEST inside vm

# --mkdir DIR
# --delete PATH support wildcards

# --install PKG,PKG. use guest pkg manager and host network
# --uninstall PKG,PKG use guest pkg manager

# This will start a temporary vm
$ virt-sysprep --root-password password:$new --uninstall cloud-init --selinux-relabel -a CentOS-7-x86_64-GenericCloud.qcow2

How to reset a user password of domain

# There are two ways to do this
##################Way 1################################################
# if qemu-ga is running, we can change it at runtime; no need to restart the vm
# QGA: guest-set-user-password
$ virsh set-user-password vm100 root root

##################Way 2################################################
# if qemu-ga is absent or not working, use this way
# shutdown it first
$ virsh shutdown $domain

# generate your password
$ openssl passwd -6 $yourpassword
$6$FU5Nl9oxxxx

# mount the disk with read/write mode(--verbose for debugging)
$ guestfish --rw -a /var/lib/libvirt/images/debian9-vm1.qcow2 -i

# (alternative) one-shot mount of the disk to a host path
$ guestmount -a disk.img -i /tmp/disk --verbose

><fs> vi /etc/shadow
# modify password
root:$6$FU5Nl9oxxxx:17572:0:99999:7:::

><fs> flush
><fs> quit

# OR you can do it with a single command which does a similar operation for you
$ virt-sysprep --root-password password:$new -a CentOS-7-x86_64-GenericCloud.qcow2
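The interactive vi step above can also be scripted; a sketch of just the field replacement, using the sample hash and a sample shadow line (sed swaps root's second ':'-separated field; a hash containing '&' would need extra escaping for sed):

```shell
hash='$6$FU5Nl9oxxxx'            # sample hash from the openssl step above
line='root:*:17572:0:99999:7:::' # sample /etc/shadow line
printf '%s\n' "$line" | sed "s|^root:[^:]*:|root:${hash}:|"
# -> root:$6$FU5Nl9oxxxx:17572:0:99999:7:::
```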

how to change the log level without restarting libvirtd

# 1: DEBUG  
# 2: INFO
# 3: WARNING
# 4: ERROR

# each source file registers its logging with a tag
# filters support wildcard matching
# that means libvirt.xxx also matches "libvirt" here!!!

# log_filters="1:libvirt 1:daemon 1:rpc"
# log_outputs="1:file:/var/log/libvirt/libvirtd.log 2:syslog:libvirtd"

$ virt-admin daemon-log-filters

$ virt-admin daemon-log-filters "1:libvirt 1:daemon 1:rpc"
$ virt-admin daemon-log-filters
Logging filters: 1:*libvirt* 1:*daemon* 1:*rpc*

$ virt-admin daemon-log-outputs
Logging outputs: 1:file:/var/log/libvirt/libvirtd.log

$ virt-admin daemon-log-outputs "2:file:/var/log/libvirt/libvirtd.log"

how to enable client library log

By default the client library doesn’t produce any logs, and on its own it’s usually not very interesting anyway.
Logging in the library is configured through 3 environment variables that control the logging behaviour:

  • LIBVIRT_DEBUG: it can take the four following values:
    • 1 or “debug”: asking the library to log every message emitted, though the filters can be used to avoid filling up the output
    • 2 or “info”: log all non-debugging information
    • 3 or “warn”: log warnings and errors, that’s the default value
    • 4 or “error”: log only error messages
  • LIBVIRT_LOG_FILTERS: defines logging filters
  • LIBVIRT_LOG_OUTPUTS: defines logging outputs
$ export LIBVIRT_DEBUG=debug
$ export LIBVIRT_LOG_OUTPUTS="1:file:/tmp/libvirt_client.log"

how to enable libvirtd on tcp without tls

libvirtd.conf

listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

/etc/sysconfig/libvirtd

LIBVIRTD_ARGS="--listen"

/etc/libvirt/libvirt.conf

# tell virsh use this uri by default
uri_default = "qemu+tcp://127.0.0.1:16509/system"

Why doesn’t ‘shutdown’ seem to work?

If you are using Xen HVM or QEMU/KVM, ACPI must be enabled in the guest for a graceful shutdown to work. To check if ACPI is enabled, run:

$ virsh dumpxml $your-vm-name | grep acpi

If nothing is printed, ACPI is not enabled for your machine. Use ‘virsh edit’ to add the following XML under <features>:
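A minimal fragment that enables ACPI inside the <features> element (the original page dropped the markup, so this is restored from the standard libvirt domain XML):

```xml
<features>
  <acpi/>
</features>
```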

If your VM is running Linux, the VM additionally needs to be running acpid to receive the ACPI events:
$ yum install acpid

When <input> device is used

<devices>
  <input type='mouse' bus='usb'/>
</devices>

Input devices allow interaction with the graphical framebuffer in the guest virtual machine. That means:

  • For a desktop guest, you should configure mouse input; otherwise you can only use the keyboard
  • For a server without a GUI, mouse input can be disabled, as it has no effect at all

reduce disk image file size, not disk size

Most of the time disks are thin-provisioned: the disk image file grows on demand. But once the file has grown, it does not shrink automatically, even after you delete files inside the VM and the disk has lots of free space.

NOTE

  • The virtual machine must be shut down
  • Disk images must not be edited concurrently.
# the new image is smaller (note: internal snapshots are NOT copied by convert)
$ qemu-img convert -O qcow2 centos7.img out.qcow2
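Thin provisioning can be seen in miniature with a plain sparse file (the path /tmp/sparse.img is illustrative): the apparent size is large while almost no blocks are allocated until data is written, which is exactly why the image file only ever grows.

```shell
# create a 1 GiB sparse file and compare apparent size vs allocated blocks
truncate -s 1G /tmp/sparse.img
apparent=$(du -b /tmp/sparse.img | cut -f1)     # apparent (virtual) size in bytes
allocated=$(du -B1 /tmp/sparse.img | cut -f1)   # blocks actually allocated on disk
echo "apparent=$apparent allocated=$allocated"
rm -f /tmp/sparse.img
```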

run kvm machine from another vm

If you plan to run a VM inside a VM via libvirt, make sure nested virtualization is enabled on the host; otherwise you will see an error in libvirtd.log like invalid argument: could not find capabilities for arch=x86_64 domaintype=kvm. Then pass nested virtualization through to the VM that will start the new VM.

  • enable nested virtualization on host
  • pass it to vm
  • from that vm start another kvm machine.
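A quick way to check the first step: the kvm modules expose nested support as a module parameter on the host (a sketch; kvm_intel applies to Intel hosts, kvm_amd to AMD hosts):

```shell
# print the nested parameter for whichever kvm module is loaded
for m in kvm_intel kvm_amd; do
    f=/sys/module/$m/parameters/nested
    if [ -r "$f" ]; then
        echo "$m nested=$(cat "$f")"   # Y or 1 means nested virt is enabled
    else
        echo "$m not loaded"
    fi
done
```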
# to verify you can run a vm from another vm, run this inside the vm
# that plans to start the nested vm
$ virt-host-validate
QEMU: Checking for hardware virtualization : PASS ---> nested virtualization is enabled
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'memory' controller mount-point : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpu' controller mount-point : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller mount-point : PASS
QEMU: Checking for cgroup 'cpuset' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller mount-point : PASS
QEMU: Checking for cgroup 'devices' controller support : PASS
QEMU: Checking for cgroup 'devices' controller mount-point : PASS
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller mount-point : PASS
QEMU: Checking for device assignment IOMMU support : WARN (No ACPI DMAR table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
LXC: Checking for Linux >= 2.6.26 : PASS
LXC: Checking for namespace ipc : PASS
LXC: Checking for namespace mnt : PASS
LXC: Checking for namespace pid : PASS
LXC: Checking for namespace uts : PASS
LXC: Checking for namespace net : PASS
LXC: Checking for namespace user : PASS
LXC: Checking for cgroup 'memory' controller support : PASS
LXC: Checking for cgroup 'memory' controller mount-point : PASS
LXC: Checking for cgroup 'cpu' controller support : PASS
LXC: Checking for cgroup 'cpu' controller mount-point : PASS
LXC: Checking for cgroup 'cpuacct' controller support : PASS
LXC: Checking for cgroup 'cpuacct' controller mount-point : PASS
LXC: Checking for cgroup 'cpuset' controller support : PASS
LXC: Checking for cgroup 'cpuset' controller mount-point : PASS
LXC: Checking for cgroup 'devices' controller support : PASS
LXC: Checking for cgroup 'devices' controller mount-point : PASS
LXC: Checking for cgroup 'blkio' controller support : PASS
LXC: Checking for cgroup 'blkio' controller mount-point : PASS

monitor events from the command line

# for event, check help first
$ virsh event --help
NAME
event - Domain Events

SYNOPSIS
event [--domain <string>] [--event <string>] [--all] [--loop] [--timeout <number>] [--list] [--timestamp]

DESCRIPTION
List event types, or wait for domain events to occur

OPTIONS
--domain <string> filter by domain name, id or uuid
--event <string> which event type to wait for
--all wait for all events instead of just one type
--loop loop until timeout or interrupt, rather than one-shot
--timeout <number> timeout seconds
--list list valid event types
--timestamp show timestamp for each printed event

# all events of all domains
$ virsh event --all --loop --timestamp

# default remote port is 16509
$ virsh -c qemu+tcp://172.17.0.3/system event --event lifecycle --loop

$ virsh qemu-monitor-event --help
NAME
qemu-monitor-event - QEMU Monitor Events

SYNOPSIS
qemu-monitor-event [--domain <string>] [--event <string>] [--pretty] [--loop] [--timeout <number>] [--regex] [--no-case] [--timestamp]

DESCRIPTION
Listen for QEMU Monitor Events

OPTIONS
--domain <string> filter by domain name, id or uuid
--event <string> filter by event name
--pretty pretty-print any JSON output
--loop loop until timeout or interrupt, rather than one-shot
--timeout <number> timeout seconds
--regex treat event as a regex rather than literal filter
--no-case treat event case-insensitively
--timestamp show timestamp for each printed event

# all qemu monitor event of all domains
$ virsh qemu-monitor-event --loop --timestamp
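Since both event commands emit one line per event, the stream pipes cleanly into a shell handler loop. A minimal sketch (a live stream would come from virsh event --all --loop; here a single event line is simulated with printf):

```shell
# feed one simulated lifecycle event line through a handler loop
printf '%s\n' "event 'lifecycle' for domain vm100: Stopped Shutdown" |
while read -r line; do
    case $line in
        *Stopped*) echo "handler: vm100 stopped" ;;   # react to shutdown
        *)         echo "handler: ignored: $line" ;;
    esac
done
```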

monitor performance of vm

Some platforms allow monitoring of performance of the virtual machine and the code executed inside. To enable the performance monitoring events you can either specify them in the perf element or enable them via the virDomainSetPerfEvents API. The performance values are then retrieved using the virConnectGetAllDomainStats API.

# check perf event status of given vm
$ virsh perf vm100
cmt : disabled
mbmt : disabled
mbml : disabled
cpu_cycles : enabled
instructions : disabled
cache_references: disabled
cache_misses : disabled
branch_instructions: disabled
branch_misses : disabled
bus_cycles : disabled
stalled_cycles_frontend: disabled
stalled_cycles_backend: disabled
ref_cpu_cycles : disabled
cpu_clock : disabled
task_clock : disabled
page_faults : disabled
context_switches: disabled
cpu_migrations : disabled
page_faults_min: disabled
page_faults_maj: disabled
alignment_faults: disabled
emulation_faults: disabled

# enable perf event of given vm
$ virsh perf vm100 --enable page_faults --live
$ virsh perf vm100 --enable cache_misses --live

# get event stats
$ virsh domstats vm100
Domain: 'vm100'
state.state=1
state.reason=1
cpu.time=544449957934
cpu.user=11320000000
cpu.system=95640000000
balloon.current=2048000
balloon.maximum=2048000
balloon.swap_in=0
balloon.swap_out=0
balloon.major_fault=179
balloon.minor_fault=131463
balloon.unused=1732652
balloon.available=1832708
balloon.usable=1673880
balloon.last-update=1659001628
balloon.rss=488752
vcpu.current=2
vcpu.maximum=2
vcpu.0.state=1
vcpu.0.time=159640000000
vcpu.0.wait=0
vcpu.1.state=1
vcpu.1.time=367810000000
vcpu.1.wait=0
net.count=0
block.count=1
block.0.name=vda
block.0.path=/tmp/vm100.qcow2
block.0.rd.reqs=5746
block.0.rd.bytes=145617408
block.0.rd.times=10162755968
block.0.wr.reqs=324
block.0.wr.bytes=4339200
block.0.wr.times=1200496823
block.0.fl.reqs=158
block.0.fl.times=4497529526
block.0.allocation=3385786368
block.0.capacity=8589934592
block.0.physical=3385360384
perf.cache_misses=11298
perf.page_faults=0

hotplug a cpu into the guest (add a cpu without rebooting it)

you must set maxcpus for qemu to be able to use hotpluggable cpus

# check hotpluggable cpu
# no qom-path means it's not plugged in!!!
$ virsh qemu-monitor-command vm100 --pretty '{ "execute": "query-hotpluggable-cpus"}'
{
"return": [
{
"props": {
"core-id": 1,
"thread-id": 0,
"node-id": 0,
"socket-id": 0
},
"vcpus-count": 1,
"type": "qemu64-x86_64-cpu"
},
{
"props": {
"core-id": 0,
"thread-id": 0,
"node-id": 0,
"socket-id": 0
},
"vcpus-count": 1,
"qom-path": "/machine/unattached/device[0]",
"type": "qemu64-x86_64-cpu"
}
],
"id": "libvirt-21"
}

# add a hotpluggable cpu
$ virsh qemu-monitor-command vm100 --pretty '{ "execute": "device_add", "arguments":{"id":"cpu-2", "driver":"qemu64-x86_64-cpu", "socket-id":"0", "core-id":"1", "thread-id":"0"}}'

# query vcpu details
$ virsh qemu-monitor-command vm100 --pretty '{ "execute": "query-cpus-fast"}'

# hotplugged and built-in cpus have different qom-paths!!!
# a hotplugged cpu sits under /machine/peripheral
{
"return": [
{
"arch": "x86",
"thread-id": 17865,
"props": {
"core-id": 0,
"thread-id": 0,
"node-id": 0,
"socket-id": 0
},
"qom-path": "/machine/unattached/device[0]",
"cpu-index": 0,
"target": "x86_64"
},
{
"arch": "x86",
"thread-id": 18887,
"props": {
"core-id": 1,
"thread-id": 0,
"node-id": 0,
"socket-id": 0
},
"qom-path": "/machine/peripheral/cpu-2",
"cpu-index": 1,
"target": "x86_64"
}
],
"id": "libvirt-28"
}

# inside the guest, there are two cpus now!!!
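To confirm from inside the guest, count the online processors (any of the usual tools works):

```shell
# number of online cpus as seen by the guest kernel
grep -c '^processor' /proc/cpuinfo
# nproc reports the cpus usable by the current process (usually the same count)
nproc
```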

memory device and shared memory

without hotpluggable memory device

<domain>
<maxMemory slots="4" unit="GiB">2</maxMemory>
<cpu>
<topology sockets="1" cores="2" threads="1"/>
<numa>
<cell id="0" cpus="0-1" memory="1024" unit="MiB"/> -----> no need to set <memory></memory>, it's auto-calculated from <numa>; without <numa> you must set it!!!
</numa>
</cpu>

libvirt auto-generates xml at /var/run/libvirt/qemu/$domain.xml

<maxMemory slots='4' unit='KiB'>4194304</maxMemory>
<memory unit='KiB'>1048576</memory>
<currentMemory unit='KiB'>1048576</currentMemory>
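The generated file just restates the sizes in KiB; the conversions can be checked with shell arithmetic:

```shell
echo $((4 * 1024 * 1024))   # 4194304 KiB = 4 GiB  (maxMemory)
echo $((1 * 1024 * 1024))   # 1048576 KiB = 1 GiB  (memory / currentMemory)
```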

check memory device

# total memory info
$ virsh qemu-monitor-command vm100 --pretty '{ "execute": "query-memory-size-summary"}'
{
"return": {
"base-memory": 1073741824,
"plugged-memory": 0
},
"id": "libvirt-20"
}

# details about pluggable memory device
$ virsh qemu-monitor-command vm100 --pretty '{ "execute": "query-memory-devices"}'
{
"return": [

],
"id": "libvirt-21"
}

Inside guest check memory

# kernel code used is not counted here
$ free
total used free shared buff/cache available
Mem: 948532 77504 785368 12436 85660 750776
Swap: 0 0 0

# show result in bytes
$ lsmem -a -b
$ lsmem -a
RANGE SIZE STATE REMOVABLE BLOCK
0x0000000000000000-0x0000000007ffffff 128M online no 0
0x0000000008000000-0x000000000fffffff 128M online yes 1
0x0000000010000000-0x0000000017ffffff 128M online yes 2
0x0000000018000000-0x000000001fffffff 128M online yes 3
0x0000000020000000-0x0000000027ffffff 128M online yes 4
0x0000000028000000-0x000000002fffffff 128M online no 5
0x0000000030000000-0x0000000037ffffff 128M online no 6
0x0000000038000000-0x000000003fffffff 128M online no 7

Memory block size: 128M
Total online memory: 1G
Total offline memory: 0B
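The 128M block size lsmem reports comes from sysfs, where it is exposed as a hex byte count:

```shell
# sysfs exposes the memory block size as hex bytes (x86 commonly reports 8000000)
f=/sys/devices/system/memory/block_size_bytes
[ -r "$f" ] && echo "block_size_bytes: $(cat "$f")"
printf '%d\n' 0x8000000   # 134217728 bytes = 128 MiB
```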

with hotpluggable memory(dimm) from command line

<domain>
<maxMemory slots="4" unit="GiB">2</maxMemory>
<cpu>
<topology sockets="1" cores="2" threads="1"/>
<numa>
<cell id="0" cpus="0-1" memory="1024" unit="MiB"/> -----> no need to set <memory></memory>, it's auto-calculated from <numa>; without <numa> you must set it!!!
</numa>
</cpu>
<devices>
<memory model="dimm">
<target>
<size unit="MiB">256</size>
<node>0</node>
</target>
</memory>
</devices>

libvirt auto-generates xml at /var/run/libvirt/qemu/$domain.xml

<maxMemory slots='4' unit='KiB'>4194304</maxMemory>
<memory unit='KiB'>1310720</memory> ---> ram(numa) + dimm
<currentMemory unit='KiB'>1310720</currentMemory> ---> the memory you see in the guest with free = ram(numa) + dimm
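The arithmetic behind the generated value, in KiB:

```shell
# numa ram (1 GiB) + dimm (256 MiB), both converted to KiB
echo $((1024 * 1024 + 256 * 1024))   # 1310720
```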

check memory device

# total memory info
$ virsh qemu-monitor-command vm100 --pretty '{ "execute": "query-memory-size-summary"}'
{
"return": {
"base-memory": 1073741824,
"plugged-memory": 268435456
},
"id": "libvirt-19"
}

# details about pluggable memory device
$ virsh qemu-monitor-command vm100 --pretty '{ "execute": "query-memory-devices"}'
{
"return": [
{
"type": "dimm",
"data": {
"memdev": "/objects/memdimm0",
"hotplugged": false, -------> this memory device came from the command line (xml), not a QMP command, so it is not hotplugged!!
"addr": 4294967296,
"hotpluggable": true,
"size": 268435456,
"slot": 0,
"node": 0,
"id": "dimm0"
}
}
],
"id": "libvirt-20"
}

Inside guest check memory

# kernel code used is not counted here
$ free
total used free shared buff/cache available
Mem: 1210676 82836 1042392 12432 85448 1092196
Swap: 0 0 0


# show result in bytes with lsmem -a -b
$ lsmem -a
RANGE SIZE STATE REMOVABLE BLOCK
0x0000000000000000-0x0000000007ffffff 128M online no 0
0x0000000008000000-0x000000000fffffff 128M online yes 1
0x0000000010000000-0x0000000017ffffff 128M online yes 2
0x0000000018000000-0x000000001fffffff 128M online no 3
0x0000000020000000-0x0000000027ffffff 128M online no 4
0x0000000028000000-0x000000002fffffff 128M online no 5
0x0000000030000000-0x0000000037ffffff 128M online no 6
0x0000000038000000-0x000000003fffffff 128M online no 7
0x0000000100000000-0x0000000107ffffff 128M online no 32
0x0000000108000000-0x000000010fffffff 128M online no 33

Memory block size: 128M
Total online memory: 1.3G
Total offline memory: 0B

with hotpluggable memory(nvdimm) from command line

<domain>
<maxMemory slots="4" unit="GiB">2</maxMemory>
<cpu>
<topology sockets="1" cores="2" threads="1"/>
<numa>
<cell id="0" cpus="0-1" memory="1024" unit="MiB"/> -----> no need to set <memory></memory>, it's auto-calculated from <numa>; without <numa> you must set it!!!
</numa>
</cpu>
<devices>
<memory model="dimm">
<target>
<size unit="MiB">256</size>
<node>0</node>
</target>
</memory>
<memory model="nvdimm">
<source>
<path>/tmp/nvdimm</path>
</source>
<target>
<size unit="MiB">256</size>
<node>0</node>
</target>
</memory>
</devices>

libvirt auto-generates xml at /var/run/libvirt/qemu/$domain.xml

<maxMemory slots='4' unit='KiB'>4194304</maxMemory>
<memory unit='KiB'>1572864</memory> ----> memory includes ram(numa) + dimm + nvdimm
<currentMemory unit='KiB'>1310720</currentMemory> ----> currentMemory excludes the nvdimm but includes the dimm; this is the value you see with `lsmem -a -b`
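Again the arithmetic in KiB, showing why the two values differ by exactly the nvdimm:

```shell
echo $((1024 * 1024 + 256 * 1024 + 256 * 1024))   # 1572864 KiB -> <memory> (ram + dimm + nvdimm)
echo $((1024 * 1024 + 256 * 1024))                # 1310720 KiB -> <currentMemory> (nvdimm excluded)
```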

check memory device

# total memory info
$ virsh qemu-monitor-command vm100 --pretty '{ "execute": "query-memory-size-summary"}'
{
"return": {
"base-memory": 1073741824,
"plugged-memory": 536870912 ----> counts both dimm and nvdimm
},
"id": "libvirt-19"
}

# details about pluggable memory devices (dimm and nvdimm)
$ virsh qemu-monitor-command vm100 --pretty '{ "execute": "query-memory-devices"}'
{
"return": [
{
"type": "dimm",
"data": {
"memdev": "/objects/memdimm0",
"hotplugged": false, ---->from command line
"addr": 4294967296,
"hotpluggable": true,
"size": 268435456,
"slot": 0,
"node": 0,
"id": "dimm0"
}
},
{
"type": "nvdimm",
"data": {
"memdev": "/objects/memnvdimm1",
"hotplugged": false,---->from command line
"addr": 4563402752,
"hotpluggable": true,
"size": 268435456,
"slot": 1,
"node": 0,
"id": "nvdimm1"
}
}
],
"id": "libvirt-20"
}

Inside guest check memory

# kernel code used is not counted here
$ free
total used free shared buff/cache available
Mem: 1210676 82836 1042392 12432 85448 1092196
Swap: 0 0 0
# total memory is 1.3G as well, nvdimm is not counted
$ lsmem -a
RANGE SIZE STATE REMOVABLE BLOCK
0x0000000000000000-0x0000000007ffffff 128M online no 0
0x0000000008000000-0x000000000fffffff 128M online yes 1
0x0000000010000000-0x0000000017ffffff 128M online yes 2
0x0000000018000000-0x000000001fffffff 128M online yes 3
0x0000000020000000-0x0000000027ffffff 128M online yes 4
0x0000000028000000-0x000000002fffffff 128M online no 5
0x0000000030000000-0x0000000037ffffff 128M online no 6
0x0000000038000000-0x000000003fffffff 128M online no 7
0x0000000100000000-0x0000000107ffffff 128M online no 32
0x0000000108000000-0x000000010fffffff 128M online no 33

Memory block size: 128M
Total online memory: 1.3G
Total offline memory: 0B

# the nvdimm is seen by the guest as a block device, /dev/pmem0!!!
$ fdisk -l

Disk /dev/pmem0: 268 MB, 268435456 bytes, 524288 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

with hotpluggable memory device from QMP

$ virsh qemu-monitor-command vm100 --pretty '{ "execute": "object-add", "arguments":{"qom-type":"memory-backend-ram","id":"mem-dimm2","props":{"size":268435456}}}'
$ virsh qemu-monitor-command vm100 --pretty '{ "execute": "device_add", "arguments":{"driver":"pc-dimm","id":"dm2","memdev":"mem-dimm2"}}'

$ virsh qemu-monitor-command vm100 --pretty '{ "execute": "query-memory-devices"}'
{
"return": [
{
"type": "dimm",
"data": {
"memdev": "/objects/memdimm0",
"hotplugged": false,
"addr": 4294967296,
"hotpluggable": true,
"size": 268435456,
"slot": 0,
"node": 0,
"id": "dimm0"
}
},
{
"type": "nvdimm",
"data": {
"memdev": "/objects/memnvdimm1",
"hotplugged": false, --------> can NOT be removed via QMP as it was added from the command line
"addr": 4563402752,
"hotpluggable": true,
"size": 268435456,
"slot": 1,
"node": 0,
"id": "nvdimm1"
}
},
{
"type": "dimm",
"data": {
"memdev": "/objects/mem-dimm2",
"hotplugged": true, -----> this one CAN be removed with a QMP command!!!
"addr": 4831838208,
"hotpluggable": true,
"size": 268435456,
"slot": 2,
"node": 0,
"id": "dm2"
}
}
],
"id": "libvirt-23"
}

# inside guest
$ free
total used free shared buff/cache available
Mem: 1472816 94176 1292552 12440 86088 1339132
Swap: 0 0 0

# total memory increased after adding a memory device
$ lsmem -a
RANGE SIZE STATE REMOVABLE BLOCK
0x0000000000000000-0x0000000007ffffff 128M online no 0
0x0000000008000000-0x000000000fffffff 128M online yes 1
0x0000000010000000-0x0000000017ffffff 128M online yes 2
0x0000000018000000-0x000000001fffffff 128M online yes 3
0x0000000020000000-0x0000000027ffffff 128M online yes 4
0x0000000028000000-0x000000002fffffff 128M online no 5
0x0000000030000000-0x0000000037ffffff 128M online no 6
0x0000000038000000-0x000000003fffffff 128M online no 7
0x0000000100000000-0x0000000107ffffff 128M online no 32
0x0000000108000000-0x000000010fffffff 128M online no 33
0x0000000120000000-0x0000000127ffffff 128M online yes 36
0x0000000128000000-0x000000012fffffff 128M online yes 37

Memory block size: 128M
Total online memory: 1.5G
Total offline memory: 0B
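Comparing the free totals before (1210676 KiB) and after (1472816 KiB) the hotplug confirms the guest gained the dimm:

```shell
# difference in MiB, rounded down: matches the 256 MiB dimm that was added
echo $(( (1472816 - 1210676) / 1024 ))   # 255
```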

move a qemu process launched outside libvirt under libvirtd control

# Attach an externally launched QEMU process to the libvirt QEMU driver. 
# The QEMU process must have been created with a monitor connection using the UNIX driver.
# Ideally the process will also have had the '-name' argument specified.

$ qemu-kvm -cdrom ~/demo.iso \
-monitor unix:/tmp/demo,server,nowait \
-name foo \
-uuid cece4f9f-dff0-575d-0e8e-01fe380f12ea &
$ QEMUPID=$!
$ virsh qemu-attach $QEMUPID

limit IO

# blkdeviotune - Set or query a block device I/O tuning parameters
# OPTIONS
# [--domain] <string> domain name, id or uuid
# [--device] <string> block device
# --total-bytes-sec <number> total throughput limit, as scaled integer (default bytes)
# --read-bytes-sec <number> read throughput limit, as scaled integer (default bytes)
# --write-bytes-sec <number> write throughput limit, as scaled integer (default bytes)
# --total-iops-sec <number> total I/O operations limit per second
# --read-iops-sec <number> read I/O operations limit per second
# --write-iops-sec <number> write I/O operations limit per second
# --total-bytes-sec-max <number> total max, as scaled integer (default bytes)
# --read-bytes-sec-max <number> read max, as scaled integer (default bytes)
# --write-bytes-sec-max <number> write max, as scaled integer (default bytes)
# --total-iops-sec-max <number> total I/O operations max
# --read-iops-sec-max <number> read I/O operations max
# --write-iops-sec-max <number> write I/O operations max
# --size-iops-sec <number> I/O size in bytes
# --group-name <string> group name to share I/O quota between multiple drives
# --total-bytes-sec-max-length <number> duration in seconds to allow total max bytes
# --read-bytes-sec-max-length <number> duration in seconds to allow read max bytes
# --write-bytes-sec-max-length <number> duration in seconds to allow write max bytes
# --total-iops-sec-max-length <number> duration in seconds to allow total I/O operations max
# --read-iops-sec-max-length <number> duration in seconds to allow read I/O operations max
# --write-iops-sec-max-length <number> duration in seconds to allow write I/O operations max

# --config affects next boot
# --live affects the running domain
# --current affects the current domain

# get io parameters of given disk
$ virsh blkdeviotune vm100 vda
total_bytes_sec: 0
read_bytes_sec : 83886080
write_bytes_sec: 83886080
total_iops_sec : 0
read_iops_sec : 2000
write_iops_sec : 2000
total_bytes_sec_max: 0
read_bytes_sec_max: 0
write_bytes_sec_max: 0
total_iops_sec_max: 0
read_iops_sec_max: 0
write_iops_sec_max: 0
size_iops_sec : 0
group_name : drive-virtio-disk0
total_bytes_sec_max_length: 0
read_bytes_sec_max_length: 0
write_bytes_sec_max_length: 0
total_iops_sec_max_length: 0
read_iops_sec_max_length: 0
write_iops_sec_max_length: 0

$ virsh blkdeviotune vm100 vda --total-iops-sec 5000

# check result
$ virsh blkdeviotune vm100 vda
total_bytes_sec: 0
read_bytes_sec : 83886080
write_bytes_sec: 83886080
total_iops_sec : 5000 ----> changed
read_iops_sec : 0
write_iops_sec : 0
total_bytes_sec_max: 0
read_bytes_sec_max: 0
write_bytes_sec_max: 0
total_iops_sec_max: 0
read_iops_sec_max: 0
write_iops_sec_max: 0
size_iops_sec : 0
group_name : drive-virtio-disk0
total_bytes_sec_max_length: 0
read_bytes_sec_max_length: 0
write_bytes_sec_max_length: 0
total_iops_sec_max_length: 0
read_iops_sec_max_length: 0
write_iops_sec_max_length: 0
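The byte-based limits are plain bytes, so the preset read/write throughput above works out to:

```shell
echo $((83886080 / 1024 / 1024))   # 80 -> an 80 MiB/s read and write limit
```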

get device property value

# get property of given device
$ virsh qemu-monitor-command --hmp vm100 info qom-tree
$ virsh qemu-monitor-command vm100 --pretty '{"execute": "qom-get", "arguments": {"path": "/machine/peripheral/net1", "property": "mq"}}'
{
"return": true,
"id": "libvirt-39"
}

how to log in to a vm by serial console

To make virsh console vm100 work, you have to do these steps in order:

  • enable the console from a boot parameter inside the vm: vm# grubby --update-kernel=ALL --args="console=ttyS0", then vm# reboot

    • NOTE: without this you can still use the console, but you lose early boot messages
  • enable getty on ttyS0 inside vm

    # touch /etc/systemd/system/serial-getty@ttyS0.service
    [Unit]
    Description=Serial Getty on %I
    Documentation=man:agetty(8) man:systemd-getty-generator(8)
    Documentation=http://0pointer.de/blog/projects/serial-console.html
    BindsTo=dev-%i.device
    After=dev-%i.device systemd-user-sessions.service plymouth-quit-wait.service
    After=rc-local.service

    # If additional gettys are spawned during boot then we should make
    # sure that this is synchronized before getty.target, even though
    # getty.target didn't actually pull it in.
    Before=getty.target
    IgnoreOnIsolate=yes

    [Service]
    ExecStart=-/sbin/agetty --keep-baud 115200,38400,9600 %I $TERM
    Type=idle
    Restart=always
    UtmpIdentifier=%I
    TTYPath=/dev/%I
    TTYReset=yes
    TTYVHangup=yes
    KillMode=process
    IgnoreSIGPIPE=no
    SendSIGHUP=yes

    [Install]
    WantedBy=getty.target

    # ln -s /etc/systemd/system/serial-getty@ttyS0.service /etc/systemd/system/getty.target.wants/
    # systemctl daemon-reload
    # systemctl start serial-getty@ttyS0.service
    # systemctl enable serial-getty@ttyS0.service
    • NOTE: without this you can still use the console, but there is no login prompt
  • virsh shutdown vm100

  • edit /etc/libvirt/qemu/vm100.xml to add console(serial type) device

<console type='pty'> <!--on the host a pty is auto-selected-->
<target type='serial' port='0'/> <!--inside vm: /dev/ttyS0-->
    <alias name='console0'/>
    </console>
  • service libvirtd restart

  • virsh start vm100

    • inside vm $ echo hello >/dev/ttyS0
  • virsh console vm100

    • without the kernel parameter, you can NOT see the Starting ... messages when the vm boots
    • without getty, you can NOT see the login prompt centos76 login:
    • the hello echoed above shows up here

how to access the console/serial without the virsh command

$ grep redirect  /var/log/libvirt/qemu/vm100.log
115:2023-03-24T06:28:28.402166Z qemu-kvm: -chardev pty,id=charconsole1: char device redirected to /dev/pts/5 (label charconsole1)

$ screen /dev/pts/5

# quit from console(terminate screen session)
ctrl + a, then press \

use a specific qemu binary for a vm

Use the emulator element in the xml to do this, as shown below.

<devices>                                                                     
<emulator>/src/qemu/build/x86_64-softmmu/qemu-system-x86_64</emulator>
...
</devices>

node device info

# list all pci devices of the host (we can pass them through to a guest if the iommu is enabled on the host)
$ virsh nodedev-list --cap pci
pci_0000_00_00_0
pci_0000_00_01_0
pci_0000_00_02_0
pci_0000_00_03_0
pci_0000_00_04_0
pci_0000_00_1f_0
pci_0000_00_1f_2
...

# get device information
$ virsh nodedev-dumpxml pci_0000_00_00_0
<device>
<name>pci_0000_00_00_0</name>
<path>/sys/devices/pci0000:00/0000:00:00.0</path>
<parent>computer</parent>
<capability type='pci'>
<domain>0</domain>
<bus>0</bus>
<slot>0</slot>
<function>0</function>
<product id='0x29c0'>82G33/G31/P35/P31 Express DRAM Controller</product>
<vendor id='0x8086'>Intel Corporation</vendor>
<iommuGroup number='0'>
<address domain='0x0000' bus='0x00' slot='0x00' function='0x0'/>
</iommuGroup>
</capability>
</device>
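The nodedev name is just the sysfs PCI address with its separators replaced (pci_&lt;domain&gt;_&lt;bus&gt;_&lt;slot&gt;_&lt;function&gt;); converting back is a one-liner:

```shell
# turn a nodedev name into the canonical PCI address
name=pci_0000_00_1f_2
addr=$(echo "$name" | sed 's/^pci_//; s/_/:/; s/_/:/; s/_/./')
echo "$addr"   # 0000:00:1f.2
```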

Ref