author    | Xudong Hao <xudong.hao@intel.com> | 2013-03-26 14:22:07 +0100
---|---|---
committer | Jan Beulich <jbeulich@suse.com> | 2013-03-26 14:22:07 +0100
commit    | db537fe3023bf157b85c8246782cb72a6f989b31 (patch)
tree      | 552d8ac07bffa0e516a260b8c2f74d9a58aaeb09 /xen/common/page_alloc.c
parent    | babea0a412ee24a94ed0bd03543060b2c6bc0bbd (diff)
x86: reserve pages when SandyBridge integrated graphics
SNB graphics devices have a bug that prevents them from accessing certain
memory ranges, namely anything below 1MB and the pages listed in the
table.
Xen does not add memory below 1MB to the heap, i.e. pages below 1MB are
never allocated, so there is no need to reserve anything below the 1MB
mark that has not already been reserved.
Therefore, reserve the pages listed in the table at Xen boot when an SNB
graphics device is detected, to avoid GPU hangs.
Signed-off-by: Xudong Hao <xudong.hao@intel.com>
Acked-by: Keir Fraser <keir@xen.org>
Diffstat (limited to 'xen/common/page_alloc.c')
-rw-r--r-- | xen/common/page_alloc.c | 23
1 file changed, 23 insertions, 0 deletions
```diff
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index aefef29790..203f77a485 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -152,6 +152,10 @@ void __init init_boot_pages(paddr_t ps, paddr_t pe)
 {
     unsigned long bad_spfn, bad_epfn;
     const char *p;
+#ifdef CONFIG_X86
+    const unsigned long *badpage = NULL;
+    unsigned int i, array_size;
+#endif
 
     ps = round_pgup(ps);
     pe = round_pgdown(pe);
@@ -162,6 +166,25 @@ void __init init_boot_pages(paddr_t ps, paddr_t pe)
 
     bootmem_region_add(ps >> PAGE_SHIFT, pe >> PAGE_SHIFT);
 
+#ifdef CONFIG_X86
+    /*
+     * Here we put platform-specific memory range workarounds, i.e.
+     * memory known to be corrupt or otherwise in need to be reserved on
+     * specific platforms.
+     * We get these certain pages and remove them from memory region list.
+     */
+    badpage = get_platform_badpages(&array_size);
+    if ( badpage )
+    {
+        for ( i = 0; i < array_size; i++ )
+        {
+            bootmem_region_zap(*badpage >> PAGE_SHIFT,
+                               (*badpage >> PAGE_SHIFT) + 1);
+            badpage++;
+        }
+    }
+#endif
+
     /* Check new pages against the bad-page list. */
     p = opt_badpage;
     while ( *p != '\0' )
```