Future plans and enhancements
=============================

For up-to-date details of features currently under implementation,
visit the Xen project roadmap at:
http://www.cl.cam.ac.uk/Research/SRG/netos/xen/roadmap.html

IO enhancements
---------------

There are a number of memory management enhancements that didn't make
this release: we have plans for a "universal buffer cache" that
enables otherwise unused system memory to be used by domains in a
read-only fashion.

Disk scheduling
---------------

The current disk scheduler is rather simplistic (batch round robin),
and could be replaced by e.g. Cello if we have QoS isolation problems.
For most things it seems to work OK, but there's currently no service
differentiation or weighting.

Improved load-balancing
-----------------------

Currently, although Xen runs on SMP and SMT (hyperthreaded) machines,
the scheduling is far from smart -- domains are statically assigned to
a CPU when they are created (in a round-robin fashion). We'd like to
see a user-space load-balancing daemon that can shift domains between
CPUs as their activity changes.

Multiprocessor guest VMs
------------------------

Xen currently supports only uniprocessor guest OSes. We have designed
the Xen interface with MP guests in mind, and plan to build an MP
Linux guest in due course. Basically, an MP guest would consist of
multiple scheduling domains (one per CPU) sharing a single memory
protection domain. The only extra complexity for the Xen VM system is
ensuring that when a page transitions from holding a page table or
page directory to being a writable page, no other CPU still holds the
page in its TLB; this is necessary for memory-system integrity.

One other issue for supporting MP guests is that we'll need some sort
of CPU gang scheduler, which will require some research.

Cluster management
------------------

There have been discussions regarding a unified cluster controller
for Xen deployments.
This would leverage the existing features of Xen to present a uniform
control interface for managing a cluster as a pool of resources,
rather than as a set of completely distinct machines.

64-bit x86
----------

Xen can currently use up to 4GB of memory. It is possible for 32-bit
x86 machines to address up to 64GB, but doing so requires a different
page table format (PAE) that would be rather tedious to support. Our
preferred approach is instead to virtualize 64-bit x86 (x86/64), as
supported by modern AMD and Intel processors. The large address space
provided by a 64-bit execution model greatly simplifies support for
large-memory configurations. Our implementation for x86/64 is in
progress and should feature in our next major release.