storage_nbd

Introduction

Network block devices (NBD) provide access to remote storage devices that do not physically reside in the local machine. Each network block device is mapped to the client as a local block device (/dev/nbdX), so you can perform low-level operations on it, such as partitioning it or formatting it with a filesystem, which NFS can NOT do.

NBD follows a client/server architecture. The server makes a volume available as a network block device on one host; the client then connects to it from another host.

NBD ARCH

All commands verified on CentOS 7.

NBD uses TCP as its transport protocol. The IANA-assigned port for NBD is 10809, although a server can listen on any port.

  • Client accesses /dev/nbdX after the nbd module is loaded and nbd-client attaches the device
  • Client and server perform a negotiation (handshake)
  • Client sends a read request (issued by the kernel) specifying the start offset and the length of the data to be read
  • Server replies with a read reply containing an error code; if the error code is zero, the reply header is immediately followed by the data
  • Client sends a write request specifying the start offset and the length of the data to be written, immediately followed by the raw data
  • Server writes the data out and sends a write reply containing an error code indicating whether an error occurred. If no error occurred, the data is assumed to have been written to disk
  • Client sends a disconnect request
  • Server disconnects
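The request/reply exchange above can be sketched with Python's struct module. The magic numbers and the 28-byte request layout below follow the NBD protocol specification (simple replies are 16 bytes); this is an illustrative sketch, not a full client.

```python
import struct

# NBD wire constants (from the NBD protocol specification)
NBD_REQUEST_MAGIC = 0x25609513
NBD_SIMPLE_REPLY_MAGIC = 0x67446698
NBD_CMD_READ = 0
NBD_CMD_WRITE = 1
NBD_CMD_DISC = 2  # disconnect

def pack_request(cmd_type, handle, offset, length, flags=0):
    """Build a 28-byte NBD request header (all fields big-endian on the wire)."""
    return struct.pack(">IHHQQI", NBD_REQUEST_MAGIC, flags, cmd_type,
                       handle, offset, length)

def unpack_simple_reply(data):
    """Parse a 16-byte simple reply: magic, error code, request handle."""
    magic, error, handle = struct.unpack(">IIQ", data)
    assert magic == NBD_SIMPLE_REPLY_MAGIC, "not a simple reply"
    return error, handle

# a read of 4096 bytes at offset 0, tagged with handle 1
req = pack_request(NBD_CMD_READ, handle=1, offset=0, length=4096)
```

For a write request, the raw data to be written would follow this header on the socket, as described in the steps above.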

NBD example

To use NBD, install the NBD server on the server machine and the NBD client on the client machine (the client also needs the nbd kernel module, which exposes the nbd devices so that the user sees them as local block devices).

NBD allows the server to export a real device or a virtual disk file; the client can then attach it over the NBD protocol.

NBD cases

Verified on CentOS 7.6

# centos

# install userland tools
$ yum install nbd

# At the client side, you may need to build the nbd kernel module yourself if it is not found
$ modprobe nbd # this creates the nbd devices (unbound)
$ ls /dev/nbd*
/dev/nbd0  /dev/nbd1  /dev/nbd2
....

export and mount a real device from a remote server

# server side: export a local real device via /etc/nbd-server/config (create this file if it does not exist)
# each section name must be unique
[generic]
[test]
exportname = /dev/sdb

$ service nbd-server restart

# client side
$ nbd-client $server_ip /dev/nbd0 -N test
# if sdb has two partitions, you will see two devices /dev/nbd0p1 and /dev/nbd0p2; check with (fdisk /dev/nbd0 -l)

# then you can create filesystem or make partition
# OR if the disk already has filesystem, just mount it

# mount the first partition
$ fdisk /dev/nbd0 -l
$ mount /dev/nbd0p1 /mnt
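The /etc/nbd-server/config file shown above is INI-style, so a quick sanity check of the exports can be sketched with Python's configparser (the section and key names mirror the example; every section except [generic] names an export):

```python
import configparser

# an nbd-server style config, matching the example above
sample = """
[generic]
[test]
exportname = /dev/sdb
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# every section other than [generic] is one export
exports = {s: cfg[s].get("exportname") for s in cfg.sections() if s != "generic"}
print(exports)  # {'test': '/dev/sdb'}
```

To check a real installation, replace read_string with cfg.read("/etc/nbd-server/config").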

export and mount two disk files from a remote server

# create two virtual disks (the config below expects them in /tmp)
$ dd if=/dev/zero of=/tmp/vmdisk1.img bs=1G count=1
$ dd if=/dev/zero of=/tmp/vmdisk2.img bs=1G count=1

# edit /etc/nbd-server/config, create if not found
[generic]
[disk1]
exportname = /tmp/vmdisk1.img
[disk2]
exportname = /tmp/vmdisk2.img


$ service nbd-server restart

$ nbd-client $server_ip /dev/nbd0 -N disk1

$ nbd-client $server_ip /dev/nbd1 -N disk2
# then you can create filesystem or make partition
# OR if the disk already has filesystem, just mount it
$ parted /dev/nbd0
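The dd step above writes a full gigabyte of zeros. A sparse image with the same apparent size can be sketched in Python with truncate (the file name here is illustrative); blocks are only allocated when data is actually written:

```python
import os

# create a 1 GiB sparse image file: same apparent size as the dd command,
# but no data blocks are allocated up front
path = "vmdisk1.img"
size = 1 * 1024 ** 3  # 1 GiB

with open(path, "wb") as f:
    f.truncate(size)

print(os.path.getsize(path))  # 1073741824
```

nbd-server can export such a sparse file just like one created with dd.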

mount a virtual disk image (ISO)

# edit /etc/nbd-server/config, create if not found
[generic]
[test]
exportname = /tmp/windows.iso
$ service nbd-server restart

$ nbd-client $server_ip /dev/nbd0 -N test
# then mount it (an ISO usually has no partition table, so mount the device directly)
$ fdisk /dev/nbd0 -l
$ mount /dev/nbd0 /mnt
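What fdisk -l reads in the examples above is the MBR partition table in the first 512 bytes of the device. A minimal parser for the four primary partition entries can be sketched as follows (the synthetic MBR built here is illustrative):

```python
import struct

def parse_mbr(sector):
    """Parse the 4 primary partition entries from a 512-byte MBR sector."""
    assert len(sector) == 512 and sector[510:512] == b"\x55\xaa", "no MBR signature"
    parts = []
    for i in range(4):
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        ptype = entry[4]  # partition type byte (0x83 = Linux)
        lba_start, num_sectors = struct.unpack("<II", entry[8:16])
        if ptype != 0:  # type 0 means the slot is empty
            parts.append((ptype, lba_start, num_sectors))
    return parts

# build a synthetic MBR with one Linux (0x83) partition starting at LBA 2048
mbr = bytearray(512)
entry = bytes([0x00, 0, 0, 0, 0x83, 0, 0, 0]) + struct.pack("<II", 2048, 204800)
mbr[446:462] = entry
mbr[510:512] = b"\x55\xaa"
print(parse_mbr(bytes(mbr)))  # [(131, 2048, 204800)]
```

On a real attached device you would read the first 512 bytes of /dev/nbd0 (with root privileges) and pass them to the same function.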
