author     Keir Fraser <keir.fraser@citrix.com>    2010-05-04 12:42:21 +0100
committer  Keir Fraser <keir.fraser@citrix.com>    2010-05-04 12:42:21 +0100
commit     a03bd60cee72d1d7f025398c98130da7a011492e (patch)
tree       a485b55482f7215259a0636bd715c9f0deee3eb7 /xen/arch/x86/domain_build.c
parent     c499509ef2595a5daa8ef804168c94291e90ec48 (diff)
x86: fix Dom0 booting time regression
Unfortunately the changes in c/s 21035 caused boot time to go up
significantly on certain large systems. To rectify this without
reverting to the old behavior, introduce a new memory allocation flag
so that Dom0 allocations can exhaust non-DMA memory before starting to
consume DMA memory. For the latter, the behavior introduced in the
aforementioned c/s is retained, while for the former we can now even
try larger chunks first.
This builds on the fact that alloc_chunk() gets called with
non-increasing 'max_pages' arguments, and hence it can store locally
the allocation order last used (since larger-order allocations cannot
succeed during subsequent invocations if they failed once).
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Diffstat (limited to 'xen/arch/x86/domain_build.c')
 xen/arch/x86/domain_build.c | 34 ++++++++++++++++++++++------------
 1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index 3f4d683b28..8dba898281 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -126,26 +126,36 @@ string_param("dom0_ioports_disable", opt_dom0_ioports_disable);
 static struct page_info * __init alloc_chunk(
     struct domain *d, unsigned long max_pages)
 {
+    static unsigned int __initdata last_order = MAX_ORDER;
+    static unsigned int __initdata memflags = MEMF_no_dma;
     struct page_info *page;
-    unsigned int order, free_order;
+    unsigned int order = get_order_from_pages(max_pages), free_order;
 
-    /*
-     * Allocate up to 2MB at a time: It prevents allocating very large chunks
-     * from DMA pools before the >4GB pool is fully depleted.
-     */
-    if ( max_pages > (2UL << (20 - PAGE_SHIFT)) )
-        max_pages = 2UL << (20 - PAGE_SHIFT);
-    order = get_order_from_pages(max_pages);
-    if ( (max_pages & (max_pages-1)) != 0 )
-        order--;
-    while ( (page = alloc_domheap_pages(d, order, 0)) == NULL )
+    if ( order > last_order )
+        order = last_order;
+    else if ( max_pages & (max_pages - 1) )
+        --order;
+    while ( (page = alloc_domheap_pages(d, order, memflags)) == NULL )
         if ( order-- == 0 )
             break;
 
+    if ( page )
+        last_order = order;
+    else if ( memflags )
+    {
+        /*
+         * Allocate up to 2MB at a time: It prevents allocating very large
+         * chunks from DMA pools before the >4GB pool is fully depleted.
+         */
+        last_order = 21 - PAGE_SHIFT;
+        memflags = 0;
+        return alloc_chunk(d, max_pages);
+    }
+
     /*
      * Make a reasonable attempt at finding a smaller chunk at a higher
      * address, to avoid allocating from low memory as much as possible.
      */
-    for ( free_order = order; page && order--; )
+    for ( free_order = order; !memflags && page && order--; )
     {
         struct page_info *pg2;
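As a rough illustration of the two ideas in the patch, here is a toy model in plain C that runs outside of Xen. The pool sizes, fake_alloc(), MAX_ORDER value, and the order-4 fallback cap are all made up for this sketch; only the control flow (the cached last_order, the MEMF_no_dma-style first pass, and the capped fallback into DMA memory) mirrors what the patch does.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins: a tiny "non-DMA" pool and a larger "DMA" pool. */
#define MAX_ORDER   9
#define MEMF_NO_DMA 1                      /* models Xen's MEMF_no_dma flag */

static unsigned long nondma_pages = 16;    /* pretend non-DMA free memory */
static unsigned long dma_pages    = 1024;  /* pretend DMA free memory */

static bool fake_alloc(unsigned int order, int flags)
{
    unsigned long want = 1UL << order;
    unsigned long *pool = (flags & MEMF_NO_DMA) ? &nondma_pages : &dma_pages;

    if ( *pool < want )
        return false;
    *pool -= want;
    return true;
}

/* Phase state, as in the patch: start in non-DMA mode, no order cap yet. */
static unsigned int last_order = MAX_ORDER;
static int memflags = MEMF_NO_DMA;

static int alloc_chunk_order(unsigned long max_pages)
{
    unsigned int order = 0;
    bool ok;

    while ( (1UL << order) < max_pages )   /* models get_order_from_pages() */
        order++;

    /* Callers pass non-increasing max_pages, so an order that failed once
     * can never succeed later: resume the search from the cached order. */
    if ( order > last_order )
        order = last_order;
    else if ( max_pages & (max_pages - 1) )
        --order;                           /* round down: not a power of 2 */

    while ( !(ok = fake_alloc(order, memflags)) )
        if ( order-- == 0 )
            break;

    if ( ok )
        last_order = order;
    else if ( memflags )
    {
        /* Non-DMA memory exhausted: fall back to DMA memory, but cap the
         * chunk size (2MB in the patch; order 4 in this toy) so large
         * allocations do not drain the DMA pool prematurely. */
        last_order = 4;
        memflags = 0;
        return alloc_chunk_order(max_pages);
    }

    return ok ? (int)order : -1;
}
```

Calling alloc_chunk_order(512) first drains the 16-page non-DMA pool (succeeding at order 4) and caches that order; a later alloc_chunk_order(256) then skips orders 8 down to 5 entirely, fails in the empty non-DMA pool, and transparently switches to the capped DMA phase.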