author     Keir Fraser <keir.fraser@citrix.com>  2009-10-07 15:58:26 +0100
committer  Keir Fraser <keir.fraser@citrix.com>  2009-10-07 15:58:26 +0100
commit     d6898a50a59884fa24b6f5f7303440d1a069a944 (patch)
tree       e706186e1fe288fb256b4001356c308c8910484d /xen/common/page_alloc.c
parent     efa1e6c6b05705b4fede2bda994332639ecd70d9 (diff)
Fix hypervisor crash with unpopulated NUMA nodes
On NUMA systems with memory-less nodes, Xen crashes quite early in the
hypervisor, while initializing the heaps. This is not an issue if the
memory-less node happens to be the last one, but "inner" nodes trigger
the crash reliably, and on multi-node processors it is much more likely
that a node is left without memory. This patch fixes the crash by
enumerating nodes via node_online_map instead of counting from 0 to
num_nodes. The resulting NUMA setup is still somewhat strange, but at
least it does not crash. lowlevel/xc/xc.c contains the same enumeration
bug, but I suppose we cannot access the hypervisor's node_online_map
from that context, so the "xm info" output is not correct (though "xm
debug-keys H" is). I plan to rework the handling of memory-less nodes
later.
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Diffstat (limited to 'xen/common/page_alloc.c')
-rw-r--r--  xen/common/page_alloc.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 44813ebf1c..7d19bb24e7 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -294,7 +294,6 @@ static struct page_info *alloc_heap_pages(
     node = cpu_to_node(smp_processor_id());
 
     ASSERT(node >= 0);
-    ASSERT(node < num_nodes);
     ASSERT(zone_lo <= zone_hi);
     ASSERT(zone_hi < NR_ZONES);
@@ -323,8 +322,9 @@ static struct page_info *alloc_heap_pages(
         } while ( zone-- > zone_lo ); /* careful: unsigned zone may wrap */
 
         /* Pick next node, wrapping around if needed. */
-        if ( ++node == num_nodes )
-            node = 0;
+        node = next_node(node, node_online_map);
+        if ( node == MAX_NUMNODES )
+            node = first_node(node_online_map);
     }
 
     /* Try to free memory from tmem */
@@ -466,7 +466,6 @@ static void free_heap_pages(
 
     ASSERT(order <= MAX_ORDER);
     ASSERT(node >= 0);
-    ASSERT(node < num_online_nodes());
 
     for ( i = 0; i < (1 << order); i++ )
     {
@@ -817,13 +816,13 @@ static void init_heap_pages(
 static unsigned long avail_heap_pages(
     unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
 {
-    unsigned int i, zone, num_nodes = num_online_nodes();
+    unsigned int i, zone;
     unsigned long free_pages = 0;
 
     if ( zone_hi >= NR_ZONES )
         zone_hi = NR_ZONES - 1;
 
-    for ( i = 0; i < num_nodes; i++ )
+    for_each_online_node(i)
     {
         if ( !avail[i] )
             continue;