author     iap10@labyrinth.cl.cam.ac.uk <iap10@labyrinth.cl.cam.ac.uk>  2003-09-18 16:09:17 +0000
committer  iap10@labyrinth.cl.cam.ac.uk <iap10@labyrinth.cl.cam.ac.uk>  2003-09-18 16:09:17 +0000
commit     fd2070c20b3fe58917040c1546ded23658d91833 (patch)
tree       26376323d6cfecaa363ac44a1afb7e44cdeb9a6c
parent     f6d42e679369f55c11463ba8d76dc0c1c8b15d07 (diff)
bitkeeper revision 1.437 (3f69d8adjFeOpChvZoY4yoiFD1epWA)
new README's and "documentation".
-rw-r--r--  .rootkeys     1
-rw-r--r--  README      106
-rw-r--r--  README.CD   227
-rw-r--r--  TODO         84
4 files changed, 246 insertions, 172 deletions
diff --git a/.rootkeys b/.rootkeys
index 80f452d4e2..21a45f01cb 100644
--- a/.rootkeys
+++ b/.rootkeys
@@ -5,6 +5,7 @@
3eb788d6Kleck_Cut0ouGneviGzliQ Makefile
3f5ef5a24IaQasQE2tyMxrfxskMmvw README
3f5ef5a2l4kfBYSQTUaOyyD76WROZQ README.CD
+3f69d8abYB1vMyD_QVDvzxy5Zscf1A TODO
3e6377b24eQqYMsDi9XrFkIgTzZ47A tools/balloon/Makefile
3e6377d6eiFjF1hHIS6JEIOFk62xSA tools/balloon/README
3e6377dbGcgnisKw16DPCaND7oGO3Q tools/balloon/balloon.c
diff --git a/README b/README
index 2f9767cd9f..a5663fdcb4 100644
--- a/README
+++ b/README
@@ -59,26 +59,27 @@ on Xen: Linux 2.4, Windows XP, and NetBSD.
The Linux 2.4 port (currently Linux 2.4.22) works very well -- we
regularly use it to host complex applications such as PostgreSQL,
-Apache, BK servers etc. It runs all applications we've tried. We
-refer to our version of Linux ported to run on Xen as "XenoLinux",
-through really it's just standard Linux ported to a new virtual CPU
-architecture that we call xeno-x86 (abbreviated to just "xeno").
+Apache, BK servers etc. It runs all user-space applications we've
+tried. We refer to our version of Linux ported to run on Xen as
+"XenoLinux", through really it's just standard Linux ported to a new
+virtual CPU architecture that we call xeno-x86 (abbreviated to just
+"xeno").
Unfortunately, the NetBSD port has stalled due to lack of man
power. We believe most of the hard stuff has already been done, and
are hoping to get the ball rolling again soon. In hindsight, a FreeBSD
-4 port might have been more useful to the community.
+4 port might have been more useful to the community. Any volunteers? :-)
The Windows XP port is nearly finished. It's running user space
applications and is generally in pretty good shape thanks to some hard
work by the team over the summer. Of course, there are issues with
releasing this code to others. We should be able to release the
-source and binaries to anyone else that's signed the Microsoft
-academic source license, which these days has very reasonable
-terms. We are in discussions with Microsoft about the possibility of
-being able to make binary releases to a larger user
-community. Obviously, there are issues with product activation in this
-environment and such like, which need to be thought through.
+source and binaries to anyone that has signed the Microsoft academic
+source license, which these days has very reasonable terms. We are in
+discussions with Microsoft about the possibility of being able to make
+binary releases to a larger user community. Obviously, there are
+issues with product activation in this environment and such like,
+which need to be thought through.
So, for the moment, you only get to run multiple copies of Linux on
Xen, but we hope this will change before too long. Even running
@@ -96,85 +97,6 @@ We've successfully booted over 128 copies of Linux on the same machine
(a dual CPU hyperthreaded Xeon box) but we imagine that it would be
more normal to use some smaller number, perhaps 10-20.
-Known limitations and work in progress
-======================================
-
-The "xenctl" tool is still rather clunky and not very user
-friendly. In particular, it should have an option to create and start
-a domain with all the necessary parameters set from a named xml file.
-
-The java xenctl tool is really just a frontend for a bunch of C tools
-named xi_* that do the actual work of talking to Xen and setting stuff
-up. Some local users prefer to drive the xi_ tools directly, typically
-from simple shell scripts. These tools are even less user friendly
-than xenctl but its arguably clearer what's going on.
-
-There's also a web based interface for controlling domains that uses
-apache/tomcat, but it has fallen out of sync with respect to the
-underlying tools, so doesn't always work as expected and needs to be
-fixed.
-
-The current Virtual Firewall Router (VFR) implementation in the
-snapshot tree is very rudimentary, and in particular, lacks the IP
-port-space sharing across domains that we've proposed that promises to
-provide a better alternative to NAT. There's a complete new
-implementation under development which also supports much better
-logging and auditing support. The current network scheduler is just
-simple round-robin between domains, without any rate limiting or rate
-guarantees. Dropping in a new scheduler should be straightforward, and
-is planned as part of the VFRv2 work package.
-
-Another area that needs further work is the interface between Xen and
-domain0 user space where the various XenoServer control daemons run.
-The current interface is somewhat ad-hoc, making use of various
-/proc/xeno entries that take a random assortment of arguments. We
-intend to reimplement this to provide a consistent means of feeding
-back accounting and logging information to the control daemon.
-
-There's also a number of memory management hacks that didn't make this
-release: We have plans for a "universal buffer cache" that enables
-otherwise unused system memory to be used by domains in a read-only
-fashion. We also have plans for inter-domain shared-memory to enable
-high-performance bulk transport for cases where the usual internal
-networking performance isn't good enough (e.g. communication with a
-internal file server on another domain).
-
-We also have plans to implement domain suspend/resume-to-file. This is
-basically an extension to the current domain building process to
-enable domain0 to read out all of the domain's state and store it in a
-file. There are complications here due to Xen's para-virtualised
-design, whereby since the physical machine memory pages available to
-the guest OS are likely to be different when the OS is resumed, we
-need to re-write the page tables appropriately.
-
-We have the equivalent of balloon driver functionality to control
-domain's memory usage, enabling a domain to give back unused pages to
-Xen. This needs properly documenting, and perhaps a way of domain0
-signalling to a domain that it requires it to reduce its memory
-footprint, rather than just the domain volunteering.
-
-The current disk scheduler is rather simplistic (batch round robin),
-and could be replaced by e.g. Cello if we have QoS isolation
-problems. For most things it seems to work OK, but there's currently
-no service differentiation or weighting.
-
-Currently, although Xen runs on SMP and SMT (hyperthreaded) machines,
-the scheduling is far from smart -- domains are currently statically
-assigned to a CPU when they are created (in a round robin fashion).
-The scheduler needs to be modified such that before going idle a
-logical CPU looks for work on other run queues (particularly on the
-same physical CPU).
-
-Xen currently only supports uniprocessor guest OSes. We have designed
-the Xen interface with MP guests in mind, and plan to build an MP
-Linux guest in due course. Basically, an MP guest would consist of
-multiple scheduling domains (one per CPU) sharing a single memory
-protection domain. The only extra complexity for the Xen VM system is
-ensuring that when a page transitions from holding a page table or
-page directory to a write-able page, we must ensure that no other CPU
-still has the page in its TLB to ensure memory system integrity. One
-other issue for supporting MP guests is that we'll need some sort of
-CPU gang scheduler, which will require some research.
Hardware support
@@ -208,4 +130,6 @@ not recommended.
Ian Pratt
-9 Sep 2003 \ No newline at end of file
+9 Sep 2003
+
+
diff --git a/README.CD b/README.CD
index 9adb83ce6e..ff47faa5d4 100644
--- a/README.CD
+++ b/README.CD
@@ -9,7 +9,7 @@
XenDemoCD 1.0 rc1
University of Cambridge Computer Laboratory
- 31 Aug 2003
+ 18 Sep 2003
http://www.cl.cam.ac.uk/netos/xen
@@ -49,37 +49,35 @@ configuration to do this), hit a key on either the keyboard or serial
line to pull up the Grub boot menu, then select one of the four boot
options:
- Xen / linux-2.4.22 X using DHCP
- Xen / linux-2.4.22 X using cmdline IP config
- Xen / linux-2.4.22 text using DHCP
- Xen / linux-2.4.22 text using cmdline IP config
+ Xen / linux-2.4.22
+ Xen / linux-2.4.22 using cmdline IP configuration
linux-2.4.22
- linux-2.4.22 single
- linux-2.4.20-rc1 single
-The last three options are plain linux kernels that run on the bare
-machine, and are included simply to help diagnose driver compatibility
+The last option is a plain linux kernel that runs on the bare machine,
+and is included simply to help diagnose driver compatibility
problems. If you are going for a command line IP config, hit "e" at
the grub menu, then edit the "ip=" parameters to reflect your setup
e.g. "ip=<ipaddr>::<gateway>:<netmask>::eth0:off". It shouldn't be
necessary to set either the nfs server or hostname
parameters. Alternatively, once xenolinux has booted you can login and
-setup networking with ifconfig and route in the normal way.
+setup networking with 'dhclient' or 'ifconfig' and 'route' in the
+normal way.
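+
+For example (the addresses below are purely illustrative), a host on
+a typical /24 network might use:
+
+   ip=192.168.1.50::192.168.1.1:255.255.255.0::eth0:off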
To make things easier for yourself, it's worth trying to arrange for an
IP address which is the first in a sequential range of free IP
-addresses. Its useful to give each VM instance its own IP address
-(though it is possible to do NAT or use private addresses etc), and
-the configuration files on the CD allocate IP addresses sequentially
-for subsequent domains unless told otherwise.
+addresses. It's useful to give each VM instance its own public IP
+address (though it is possible to do NAT or use private addresses
+etc), and the configuration files on the CD allocate IP addresses
+sequentially for subsequent domains unless told otherwise.
After selecting the kernel to boot, stand back and watch Xen boot,
closely followed by "domain 0" running the xenolinux kernel. The boot
messages are also sent to the serial line (the baud rate can be set on
-the Xen cmdline), which can be very useful for debugging should
-anything important scroll off the screen. Xen's startup messages will
-look quite familiar as much of the hardware initialisation (SMP boot,
-apic setup) and device drivers are derived from Linux.
+the Xen cmdline, but defaults to 115200), which can be very useful for
+debugging should anything important scroll off the screen. Xen's
+startup messages will look quite familiar as much of the hardware
+initialisation (SMP boot, apic setup) and device drivers are derived
+from Linux.
If everything is well, you should see the linux rc scripts start a
bunch of standard services including sshd. Login on the console or
@@ -88,19 +86,26 @@ via ssh::
password: xendemo xendemo
Once logged in, it should look just like any regular linux box. All
-the usual tools and commands should work as per usual. You can start
-an xserver with 'startx' if you elected not to start one at boot. The
-current rc scripts also starts an Apache web server, which you should
-be able to issue requests to on port 80. If you want to browse the
-Xen / Xenolinux source, it's all located under /local, complete with
-BitKeeper repository.
-
-Because CD's aren't exactly known for their high performance, the
-machine will likely feel rather sluggish. You may wish to go ahead and
-install Xen/XenoLinux on your hard drive, either dropping Xen and the
-XenoLinux kernel down onto a pre-existing Linux distribution, or using
-the file systems from the CD (which are based on RH7.2). See the
-installation instructions later in this document.
+the usual tools and commands should work as per usual. It's probably
+best to start by configuring networking, either with 'dhclient' or
+manually via ifconfig and route, remembering to edit /etc/resolv.conf
+if you want DNS.
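+
+For example, to configure things manually (addresses below are purely
+illustrative):
+
+   ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
+   route add default gw 192.168.1.1
+   echo "nameserver 192.168.1.1" > /etc/resolv.conf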
+
+You can start an xserver with 'startx'. It defaults to a conservative
+1024x768, but you can edit the script for higher resolutions. The CD
+contains a load of standard software. You should be able to start
+Apache, PostgreSQL, Mozilla etc in the normal way, but because
+everything is running off CD the performance will be very sluggish and
+you may run out of memory for the 'tmpfs' file system. You may wish
+to go ahead and install Xen/XenoLinux on your hard drive, either
+dropping Xen and the XenoLinux kernel down onto a pre-existing Linux
+distribution, or using the file systems from the CD (which are based
+on RH9). See the installation instructions later in this document.
+
+If you want to browse the Xen / Xenolinux source, it's all located
+under /local/src, complete with BitKeeper repository. We've also
+included source code and configuration information for the various
+benchmarks we used in the SOSP paper.
Starting other domains
@@ -113,21 +118,81 @@ lives in /local/bin and uses /etc/xenctl.xml for its default
configuration. Run 'xenctl' without any arguments to get a help
message.
-To create a new domain, using the same xenolinux kernel image as used
-for domain0, the next consecutive IP address, and the same CD-based
-file system, type:
+The first thing to do is to set up a window in which you will receive
+console output from other domains. Console output will arrive as UDP
+packets destined for 169.254.1.0, so it's necessary to set up an alias
+on eth0. The easiest way to do this is to run:
+
+ xen_nat_enable
+
+This also inserts a few NAT rules into "domain0", in case you'll be
+starting other domains without their own IP addresses. Alternatively,
+just do "ifconfig eth0:0 169.254.1.0 up". NB: The intention is that in
+future Xen will do NAT itself (actually RSIP), but this is part of a
+larger work package that isn't stable enough to release.
+
+Next, run the xen UDP console displayer:
+
+ xen_read_console &
+
+
+The tool used for starting and controlling domains is 'xenctl'. It's a
+Java front end to various underlying internal tools written in C
+(xi_*). Running off CD, it seems to take an age to start...
+
+xenctl uses /etc/xenctl.xml as its default configuration. The /etc
+directory contains two different configs depending on whether you want
+to use NAT, or multiple sequential external IPs (it's possible to
+override any of the parameters on the command line, if you want to set
+specific IPs etc).
+
+The default file supports NAT. To change to use multiple IPs:
+ cp /etc/xenctl.xml-publicip /etc/xenctl.xml
+
+A sequence of commands must be given to xenctl to start a
+domain. First a new domain must be created, which requires specifying
+the initial memory allocation, the kernel image to use, and the kernel
+command line. As well as the root file system details, you'll need to
+set the IP address on the command line: since Xen currently doesn't
+support a virtual console for domains >1, you won't be able to log in to
+your new domain unless you've got networking configured and an sshd
+running! (using dhcp for new domains should work too.)
+
+After creating the domain, xenctl must be used to grant the domain
+access to other resources such as physical or virtual disk partitions.
+Then, the domain must be started.
- xenctl new -n give_this_domain_a_name
+These commands can be entered manually, but for convenience, xenctl
+will also read them from a script and infer which domain number you're
+referring to (-nX). To use the sample script:
-domctl will return printing the domain id that has been allocated to
-the new domain (probably '1' if this is the first domain to be fired
-up). If you're running off the CD this will take a while, as there's
-huge piles of Java goop grinding away... Then, fire up the domain:
+ xenctl script -f/etc/xen-mynewdom
- xenctl start -n<domid>
+You should see the domain booting on your xen_read_console window.
+
+The default xml configuration starts another domain running off the
+CD, using a separate RAM-based file system for mutable data in root
+(just like domain 0).
+
+The new domain is started with a '4' on the kernel command line to
+tell 'init' to go to runlevel 4 rather than the default of 3. This is
+done simply to suppress a bunch of harmless error messages that would
+otherwise occur when the new (unprivileged) domain tries to access
+physical hardware resources when setting the hwclock, system font,
+gpm etc.
+
+After it's booted, you should be able to ssh into your new domain. If
+you went for a NATed address, from domain 0 you should be able to ssh
+into '169.254.1.X' where X is the domain number. If you ran the
+xen_nat_enable script, a bunch of port redirects have been installed
+to enable you to ssh in to other domains remotely. To access the new
+virtual machine remotely, use:
+
+ ssh -p2201 root@IP.address.Of.Domain0 # use 2202 for domain 2 etc.
+
+If you configured the new domain with its own IP address, you should
+be able to ssh into it directly.
-You should see your domain boot and be able to ping and ssh into it as
-before.
"xenctl list" provides status information about running domains,
though is currently only allowed to be run by domain 0. It accesses
@@ -137,13 +202,8 @@ kill it nicely by sending a shutdown event and waiting for it to
terminate, or blow the sucker away with extreme prejudice.
If you want to configure the new domain differently, type 'xenctl' to
-get a list of arguments, e.g. use the "-4" option to set a diffrent
-IPv4 address. If you haven't any spare IP addresses on your network,
-you can configure other domains with link-local addresses
-(169.254/16), but then you'll only be able to access domains other
-than domain0 from within the machine (they won't be externally
-routeable). To automate this, there's an /etc/xenctl-linklocal.xml
-which you can copy in place of /etc/xenctl.xml
+get a list of arguments, e.g. at the 'xenctl domain new' command line
+use the "-4" option to set a diffrent IPv4 address.
xenctl can be used to set the new kernel's command line, and hence
determine what it uses as a root file system etc. Although the default
@@ -183,11 +243,12 @@ create". The virtual disk can then optionally be partitioned
by a virtual block device associated with another domain, and even
used as a boot device.
-Both virtual disks and real partitions should only be shared domains
-in a read-only fashion otherwise the linux kernels will obviously get
-very confused if the file system structure changes underneath them!
-If you want read-write sharing, export the directory to other domains
-via NFS.
+Both virtual disks and real partitions should only be shared between
+domains in a read-only fashion, otherwise the linux kernels will
+obviously get very confused if the file system structure changes
+underneath them (having the same partition mounted rw twice is a
+sure-fire way to cause irreparable damage)! If you want read-write
+sharing, export the directory to other domains via NFS from domain0.
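+
+As a sketch, a hypothetical /etc/exports entry on domain0 for
+read-write sharing with the other domains (the exported path is
+illustrative) might look like:
+
+   /usr/export/shared  169.254.1.0/24(rw,sync,no_root_squash)
+
+followed by 'exportfs -a' or a restart of the NFS server in the
+usual way.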
About The Xen Demo CD
@@ -203,7 +264,7 @@ bootloader.
This is a bootable CD that loads Xen, and then a Linux 2.4.22 OS image
ported to run on Xen. The CD contains a copy of a file system based on
-the RedHat 7.2 distribution that is able to run directly off the CD
+the RedHat 9 distribution that is able to run directly off the CD
("live ISO"), using a "tmpfs" RAM-based file system for root (/etc
/var etc). Changes you make to the tmpfs will obviously not be
persistent across reboots!
@@ -221,18 +282,19 @@ various memory management enhancements to provide fast inter-OS
communication and sharing of memory pages between OSs. We'll release
newer snapshots as required, in the form of a BitKeeper repository
hosted on http://xen.bkbits.net (follow instructions from the project
-home page). We're obviously grateful to receive any
-bug fixes or other code you can contribute.
+home page). We're obviously grateful to receive any bug fixes or
+other code you can contribute. We suggest you join the
+xen-devel@lists.sourceforge.net mailing list.
Installing from the CD
----------------------
If you're installing Xen/XenoLinux onto an existing linux file system
-distribution, its typically necessary to copy the Xen VMM
-(/boot/image.gz) and XenoLinux kernels (/boot/xenolinux.gz) then
-modify the Grub config (/boot/grub/menu.lst or /boot/grub/grub.conf)
-on the target system.
+distribution, just copy the Xen VMM (/boot/image.gz) and XenoLinux
+kernels (/boot/xenolinux.gz), then modify the Grub config
+(/boot/grub/menu.lst or /boot/grub/grub.conf) on the target system.
+It should work on pretty much any distribution.
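+
+For example, from a system already running off the CD (where the
+images live under /boot), with the target root file system mounted
+at /mnt (an illustrative mount point), the copy might look like:
+
+   cp /boot/image.gz /boot/xenolinux.gz /mnt/boot/
+
+then edit /mnt/boot/grub/menu.lst (or grub.conf) as described below.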
Xen is a "multiboot" standard boot image. Despite being a 'standard',
few boot loaders actually support it. The only two we know of are
@@ -240,10 +302,11 @@ Grub, and our modified version of linux kexec (for booting off a
XenoBoot CD -- PlanetLab have adopted the same boot CD approach).
If you need to install grub on your system, you can do so either by
-building the Grub source tree /usr/local/grub-0.93-iso9660-splashimage
-or by copying over all the files in /boot/grub and then running
-/sbin/grub and following the usual grub documentation. You'll then
-need to configure the Grub config file.
+building the Grub source tree
+/usr/local/src/grub-0.93-iso9660-splashimage or by copying over all
+the files in /boot/grub and then running /sbin/grub and following the
+usual grub documentation. You'll then need to edit the Grub
+config file.
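+
+A hypothetical sequence for the second route (device names are
+illustrative) might be:
+
+   cp -a /boot/grub /mnt/boot/
+   /sbin/grub
+   grub> root (hd0,0)
+   grub> setup (hd0)
+   grub> quit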
A typical Grub menu option might look like:
@@ -261,9 +324,7 @@ there are various options to select which ones to use.
The second line specifies which xenolinux image to use, and the
standard linux command line arguments to pass to the kernel. In this
case, we're configuring the root partition and stating that it should
-be mounted read-only (normal practise). If the file system isn't
-configured for DHCP then we'd probably want to configure that on the
-kernel command line too.
+be mounted read-only (normal practice).
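+
+As a rough sketch of such an entry (the root device is illustrative;
+the image paths are those used elsewhere in this document):
+
+   title Xen / XenoLinux 2.4.22
+       kernel /boot/image.gz
+       module /boot/xenolinux.gz root=/dev/sda4 ro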
If we were booting with an initial ram disk (initrd), then this would
require a second "module" line, with no arguments.
@@ -295,8 +356,8 @@ good idea too.
To install the usr file system, copy the file system from CD on /usr,
though leaving out the "XenDemoCD" and "boot" directories:
- cd /usr && cp -a doc games include lib local root share tmp X11R6 bin dict etc html kerberos libexec man sbin src /mnt/usr/
-
+ cd /usr && cp -a X11R6 etc java libexec root src bin dict kerberos local sbin tmp doc include lib man share /mnt/usr
+
If you intend to boot off these file systems (i.e. use them for
domain0), then you probably want to copy the /usr/boot directory on
the cd over the top of the current symlink to /boot on your root
@@ -315,20 +376,20 @@ on the keyboard to get a list of supported commands.
If you have a crash you'll likely get a crash dump containing an EIP
(PC), which along with an 'objdump -d image' can be useful in
-figuring out what's happened.
-
+figuring out what's happened. Debug a xenolinux image just as you
+would any other Linux kernel.
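+
+For example, if the crash dump reports an EIP of (say) c0105f20 (a
+made-up address), you can locate the faulting code with something
+like:
+
+   objdump -d image | grep -A4 "c0105f20:"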
Description of how the XenDemoCD boots
--------------------------------------
1. Grub is used to load Xen, a xenolinux kernel, and an initrd (initial
-ram disk). [The source of the version of Grub used is in /usr/local/]
+ram disk). [The source of the version of Grub used is in /usr/local/src]
2. the init=/linuxrc command line causes linux to execute /linuxrc in
the initrd.
3. the /linuxrc file attempts to mount the CD by trying the likely
-locations /dev/hd[abcd].
+locations: /dev/hd[abcd].
4. it then creates a 'tmpfs' file system and untars the
'XenDemoCD/root.tar.gz' file into the tmpfs. This contains hopefully
@@ -345,21 +406,25 @@ normally.
Building your own version of the XenDemoCD
------------------------------------------
-The filesystems on the CD are based heavily on Peter Anvin's
-SuperRescue CD version 2.1.2, which takes its content from RedHat
-7.2. Since Xen uses a "multiboot" image format, it was necessary to
-change the bootloader from isolinux to Grub0.93 with Leonid
-Lisovskiy's <lly@pisem.net> grub.0.93-iso9660.patch
+The 'live ISO' version of RedHat is based heavily on Peter Anvin's
+SuperRescue CD version 2.1.2 and J. McDaniel's Plan-B:
+
+ http://www.kernel.org/pub/dist/superrescue/v2/
+ http://projectplanb.org/
+
+Since Xen uses a "multiboot" image format, it was necessary to change
+the bootloader from isolinux to Grub0.93 with Leonid Lisovskiy's
+<lly@pisem.net> grub.0.93-iso9660.patch
The Xen Demo CD contains all of the build scripts that were used to
-create it, so its possible to 'unpack' the current iso, modifiy it,
+create it, so it is possible to 'unpack' the current iso, modify it,
then build a new iso. The procedure for doing so is as follows:
First, mount either the CD, or the iso image of the CD:
mount /dev/cdrom /mnt/cdrom
or:
- mount -o loop xendemo-1.0beta.iso /mnt/cdrom
+ mount -o loop xendemo-1.0.iso /mnt/cdrom
cd to the directory you want to 'unpack' the iso into then run the
unpack script:
diff --git a/TODO b/TODO
new file mode 100644
index 0000000000..cfaaf90ff6
--- /dev/null
+++ b/TODO
@@ -0,0 +1,84 @@
+
+
+Known limitations and work in progress
+======================================
+
+The "xenctl" tool used for controling domains is still rather clunky
+and not very user friendly. In particular, it should have an option to
+create and start a domain with all the necessary parameters set from a
+named xml file.
+
+The java xenctl tool is really just a frontend for a bunch of C tools
+named xi_* that do the actual work of talking to Xen and setting stuff
+up. Some local users prefer to drive the xi_ tools directly, typically
+from simple shell scripts. These tools are even less user friendly
+than xenctl but it's arguably clearer what's going on.
+
+There's also a nice web based interface for controlling domains that
+uses apache/tomcat. Unfortunately, this has fallen out of sync with
+respect to the underlying tools, so is currently not built by default
+and needs fixing.
+
+The current Virtual Firewall Router (VFR) implementation in the
+snapshot tree is very rudimentary, and in particular, lacks the IP
+port-space sharing across domains that we've proposed that promises to
+provide a better alternative to NAT. There's a complete new
+implementation under development which also supports much better
+logging and auditing support. The current network scheduler is just
+simple round-robin between domains, without any rate limiting or rate
+guarantees. Dropping in a new scheduler should be straightforward, and
+is planned as part of the VFRv2 work package.
+
+Another area that needs further work is the interface between Xen and
+domain0 user space where the various XenoServer control daemons run.
+The current interface is somewhat ad-hoc, making use of various
+/proc/xeno entries that take a random assortment of arguments. We
+intend to reimplement this to provide a consistent means of feeding
+back accounting and logging information to the control daemon. Also,
+we should provide all domains with a read/write virtual console
+interface -- currently for domains >1 it is output only.
+
+There's also a number of memory management hacks that didn't make this
+release: We have plans for a "universal buffer cache" that enables
+otherwise unused system memory to be used by domains in a read-only
+fashion. We also have plans for inter-domain shared-memory to enable
+high-performance bulk transport for cases where the usual internal
+networking performance isn't good enough (e.g. communication with an
+internal file server on another domain).
+
+We also have plans to implement domain suspend/resume-to-file. This is
+basically an extension to the current domain building process to
+enable domain0 to read out all of the domain's state and store it in a
+file. There are complications here due to Xen's para-virtualised
+design, whereby since the physical machine memory pages available to
+the guest OS are likely to be different when the OS is resumed, we
+need to re-write the page tables appropriately.
+
+We have the equivalent of balloon driver functionality to control a
+domain's memory usage, enabling a domain to give back unused pages to
+Xen. This needs properly documenting, and perhaps a way for domain0
+to signal to a domain that it must reduce its memory footprint,
+rather than just the domain volunteering.
+
+The current disk scheduler is rather simplistic (batch round robin),
+and could be replaced by e.g. Cello if we have QoS isolation
+problems. For most things it seems to work OK, but there's currently
+no service differentiation or weighting.
+
+Currently, although Xen runs on SMP and SMT (hyperthreaded) machines,
+the scheduling is far from smart -- domains are currently statically
+assigned to a CPU when they are created (in a round robin fashion).
+The scheduler needs to be modified such that before going idle a
+logical CPU looks for work on other run queues (particularly on the
+same physical CPU).
+
+Xen currently only supports uniprocessor guest OSes. We have designed
+the Xen interface with MP guests in mind, and plan to build an MP
+Linux guest in due course. Basically, an MP guest would consist of
+multiple scheduling domains (one per CPU) sharing a single memory
+protection domain. The only extra complexity for the Xen VM system is
+that when a page transitions from holding a page table or page
+directory to a writeable page, we must ensure that no other CPU still
+has the page in its TLB, so as to preserve memory system integrity. One
+other issue for supporting MP guests is that we'll need some sort of
+CPU gang scheduler, which will require some research.