Blktap2 Userspace Tools + Library
=================================

Dutch Meyer
4th June 2009

Andrew Warfield and Julian Chesterfield
16th June 2006

The blktap2 userspace toolkit provides a user-level disk I/O
interface.  The blktap2 mechanism involves a kernel driver that acts
similarly to the existing Xen/Linux blkback driver, and a set of
associated user-level libraries.  Using these tools, blktap2 allows
virtual block devices presented to VMs to be implemented in userspace
and to be backed by raw partitions, files, network, etc.

The key benefit of blktap2 is that it makes it easy and fast to write
arbitrary block backends, and that these user-level backends actually
perform very well.  Specifically:

- Metadata disk formats such as copy-on-write, encrypted disks,
  sparse formats and other compression features can be easily
  implemented.

- Accessing file-based images from userspace avoids problems related
  to flushing dirty pages which are present in the Linux loopback
  driver.  (Specifically, doing a large number of writes to an
  NFS-backed image doesn't cause the OOM killer to go berserk.)

- Per-disk handler processes enable easier userspace policing of
  block resources, and process-granularity QoS techniques (disk
  scheduling and related tools) may be trivially applied to block
  devices.

- It's very easy to take advantage of userspace facilities such as
  networking libraries, compression utilities, peer-to-peer
  file-sharing systems and so on to build more complex block
  backends.

- Crashes are contained -- incremental development/debugging is very
  fast.

How it works (in one paragraph):

Working in conjunction with the kernel blktap2 driver, all disk I/O
requests from VMs are passed to a userspace daemon (using a shared
memory interface) through a character device.  Each active disk is
mapped to an individual device node, allowing per-disk processes to
implement individual block devices where desired.  The userspace
drivers are implemented using asynchronous (Linux libaio),
O_DIRECT-based calls to preserve the unbuffered, batched and
asynchronous request dispatch achieved with the existing blkback
code.  We provide a simple, asynchronous virtual disk interface that
makes it quite easy to add new disk implementations.

As of June 2009 the supported disk formats are:

- Raw images (both on partitions and in image files)
- Fast shareable RAM disk between VMs (requires some form of
  cluster-based filesystem support, e.g. OCFS2, in the guest kernel)
- VHD, including snapshots and sparse images
- QCOW, including snapshots and sparse images


Build and Installation Instructions
===================================

Make sure to configure the blktap2 backend driver in your dom0
kernel.  It will inter-operate with the existing backend and frontend
drivers, and will also coexist with the original blktap driver.
However, some formats (currently aio and qcow) will default to their
blktap2 versions when specified in a VM configuration file.

To build the tools separately, run "make && make install" in
tools/blktap2.


Using the Tools
===============

Preparing an image for boot:

The userspace disk agent is configured to start automatically via
xend.  Customize the VM config file to use the 'tap:tapdisk' handler,
followed by the driver type, e.g. for a raw image such as a file or
partition:

disk = ['tap:tapdisk:aio:<FILENAME>,sda1,w']
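For instance, a raw image file for the aio handler can be created and
formatted with standard tools before being attached (the path, size
and filesystem below are only illustrative):

dd if=/dev/zero of=/home/images/rawFile.img bs=1M count=1024
mkfs.ext3 -F /home/images/rawFile.img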
Alternatively, the vhd-util tool (installed with make install, or in
/blktap2/vhd) can be used to build sparse copy-on-write VHD images.
For example, to build a sparse image:

vhd-util create -n MyVHDFile -s 1024

This creates a sparse 1GB (1024MB) file named "MyVHDFile" that can be
mounted and populated with data.  One can also base the image on a
raw file:

vhd-util snapshot -n MyVHDFile -p SomeRawFile -m

This creates a sparse VHD file named "MyVHDFile" using "SomeRawFile"
as a parent image.  Copy-on-write semantics ensure that writes will
be stored in "MyVHDFile", while reads will be directed to the most
recently written version of the data, either in "MyVHDFile" or
"SomeRawFile" as appropriate.  Other options exist as well; consult
the vhd-util application for the complete set of VHD tools.

VHD files can be mounted automatically in a guest, similarly to the
AIO example above, simply by specifying the vhd driver:

disk = ['tap:tapdisk:vhd:<FILENAME>,sda1,w']

Snapshots:

Pausing a guest will also plug the corresponding I/O queue for
blktap2 devices and stop the blktap2 drivers.  This can be used to
implement a safe live snapshot of qcow and VHD disks.  An example
script, "xmsnap", can be found in the tools/blktap2/drivers
directory; it performs a live snapshot of a qcow disk.  VHD files can
use the "vhd-util snapshot" tool discussed above.  If this snapshot
command is applied to a raw file mounted with tap:tapdisk:aio,
include the -m flag and the driver will be reloaded as VHD.  If
applied to an already mounted VHD file, omit the -m flag.


Mounting images in Dom0 using the blktap2 driver
================================================

Tap (and blkback) disks are also mountable in Dom0 without requiring
an active VM to attach.  The syntax is:

tapdisk2 -n <type>:<full path to file>

For example:

tapdisk2 -n aio:/home/images/rawFile.img

When successful, tapdisk2 prints the location of the new device to
stdout and terminates.  From that point forward, control of the
device is provided through sysfs in the directory

/sys/class/blktap2/blktap#/

where # is the blktap2 device number present in the path that
tapdisk2 printed before terminating.  The sysfs interface is largely
intuitive; for example, to remove tap device 0 one would run

echo 1 > /sys/class/blktap2/blktap0/remove

Similarly, a pause control is available, which can be used to plug
the request queue of a live running guest.  (A complete
attach/pause/remove sequence is sketched at the end of these notes.)

Previous versions of blktap mounted devices in dom0 by using blkfront
in dom0 and the xm block-attach command.  This approach is still
available, though slightly more cumbersome.


Tapdisk Development
===================

People regularly ask how to develop their own tapdisk drivers, and
while the process has not yet been well documented, it is relatively
easy.  Here I will provide a brief overview.  The best reference, of
course, comes from the drivers currently in the tree.
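To close, here is the complete attach/pause/remove cycle referenced
in the dom0 mounting section above (the device number 0 is
illustrative; substitute the number from the path tapdisk2 prints):

tapdisk2 -n aio:/home/images/rawFile.img
echo 1 > /sys/class/blktap2/blktap0/pause      (plug the request queue)
echo 1 > /sys/class/blktap2/blktap0/remove     (detach the device)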
#!/usr/bin/env bash
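# Patch the spec files of an external toolchain so that its compiler
# also searches the staging directory ($STAGING_DIR) for headers and
# libraries.  Compilers too old to support %:getenv() in spec files
# are handled by wrapping the toolchain binaries via ext-toolchain.sh
# instead.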

DIR="$1"

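# Canonicalize the toolchain directory argument to an absolute path;
# print usage and exit if it is missing or not a directory.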
if [ -d "$DIR" ]; then
	DIR="$(cd "$DIR"; pwd)"
else
	echo "Usage: $0 toolchain-dir"
	exit 1
fi

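# Look for a target preprocessor (any executable matching *-cpp) in
# the usual bin directories of the toolchain.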
echo -n "Locating cpp ... "
for bin in bin usr/bin usr/local/bin; do
	for cmd in "$DIR/$bin/"*-cpp; do
		if [ -x "$cmd" ]; then
			echo "$cmd"
			CPP="$cmd"
			break 2
		fi
	done
done

if [ ! -x "$CPP" ]; then
	echo "Can't locate a cpp executable in '$DIR' !"
	exit 1
fi

patch_specs() {
	local found=0

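	# "cpp -v" reports LIBRARY_PATH=dir1:dir2:...; turn the colon-separated
	# list into words and treat each directory as a candidate specs location.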
	for lib in $(STAGING_DIR="$DIR" "$CPP" -x c -v /dev/null 2>&1 | sed -ne 's#:# #g; s#^LIBRARY_PATH=##p'); do
		if [ -d "$lib" ]; then
			grep -qs "STAGING_DIR" "$lib/specs" && rm -f "$lib/specs"
			if [ $found -lt 1 ]; then
				echo -n "Patching specs ... "
				STAGING_DIR="$DIR" "$CPP" -dumpspecs | awk '
					mode ~ "link" {
						sub("%{L.}", "%{L*} -L %:getenv(STAGING_DIR /usr/lib) -rpath-link %:getenv(STAGING_DIR /usr/lib)")
					}
					mode ~ "cpp" {
						$0 = $0 " -idirafter %:getenv(STAGING_DIR /usr/include)"
					}
					{
						print $0
						mode = ""
					}
					/^\*cpp:/ {
						mode = "cpp"
					}
					/^\*link.*:/ {
						mode = "link"
					}
				' > "$lib/specs"
				echo "ok"
				found=1
			fi
		fi
	done

	[ $found -gt 0 ]
	return $?
}


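# Determine the compiler version from the first line of "cpp --version" output.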
VERSION="$(STAGING_DIR="$DIR" "$CPP" --version | sed -ne 's/^.* (.*) //; s/ .*$//; 1p')"
VERSION="${VERSION:-unknown}"

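# Spec-file %:getenv() requires GCC 4.3 or newer; for older compilers,
# fall back to wrapping the toolchain binaries.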
case "${VERSION##* }" in
	2.*|3.*|4.0.*|4.1.*|4.2.*)
		echo "The compiler version does not support getenv() in spec files."
		echo -n "Wrapping binaries instead ... "

		if "${0%/*}/ext-toolchain.sh" --toolchain "$DIR" --wrap "${CPP%/*}"; then
			echo "ok"
			exit 0
		else
			echo "failed"
			exit 1
		fi
	;;
	*)
		if patch_specs; then
			echo "Toolchain successfully patched."
			exit 0
		else
			echo "Failed to locate library directory!"
			exit 1
		fi
	;;
esac