path: root/xen/arch/x86/domain_page.c
* x86: make map_domain_page_global() a simple wrapper around vmap() (Jan Beulich, 2013-07-04; 1 file changed, -54/+5)

  This is in order to reduce the number of fundamental mapping mechanisms, as
  well as to reduce the amount of code to be maintained. In the course of this,
  the virtual space available to vmap() is grown from 16Gb to 64Gb. Note that
  this requires callers of unmap_domain_page_global() to no longer pass
  misaligned pointers: map_domain_page_global() returns page-size-aligned
  pointers, so unmapping should be done accordingly. unmap_vcpu_info() violated
  this and is adjusted here.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: fix map_domain_page() last-resort fallback (Jan Beulich, 2013-06-13; 1 file changed, -5/+5)

  Guests with a vCPU count not divisible by 4 have unused bits in the last word
  of their inuse bitmap, and the garbage collection code would therefore be
  misled into believing that some entries were actually recoverable for use.
  Also use an earlier established local variable in mapcache_vcpu_init()
  instead of re-calculating the value (noticed while investigating the
  generally better option of setting those overhanging bits once during setup;
  this didn't work out in a simple enough fashion because the mapping being
  established there isn't in the current address space, and hence the bitmap
  isn't directly accessible there).

  Reported-by: Konrad Wilk <konrad.wilk@oracle.com>
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: serialize page table population in map_domain_page_global() (Tim Deegan, 2013-04-09; 1 file changed, -1/+2)

  Looking at map_domain_page_global(), there doesn't seem to be any locking
  preventing two CPUs from populating a page of global-map l1es at the same
  time.

  Signed-off-by: Tim Deegan <tim@xen.org>
* x86: use linear L1 page table for map_domain_page() page table manipulation (Jan Beulich, 2013-02-28; 1 file changed, -31/+14)

  This saves the allocation of a Xen heap page for tracking the L1 page table
  pointers.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: introduce create_perdomain_mapping() (Jan Beulich, 2013-02-28; 1 file changed, -105/+24)

  ... as well as free_perdomain_mappings(), and use them to carry out the
  existing per-domain mapping setup/teardown. This at once makes the setup of
  the first sub-range PV domain specific (with idle domains also excluded), as
  the GDT/LDT mapping area is needed only for those. Also fix an improperly
  scaled BUILD_BUG_ON() expression in mapcache_domain_init().

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: debugging code for testing 16Tb support on smaller memory systems (Jan Beulich, 2013-02-08; 1 file changed, +6)

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: properly use map_domain_page() during domain creation/destruction (Jan Beulich, 2013-01-23; 1 file changed, -16/+37)

  This involves no longer storing virtual addresses of the per-domain mapping
  L2 and L3 page tables.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: properly use map_domain_page() when building Dom0 (Jan Beulich, 2013-01-23; 1 file changed, -1/+8)

  This requires a minor hack to allow the correct page tables to be used while
  running on Dom0's page tables (as they can't be determined from "current" at
  that time).

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: re-introduce map_domain_page() et al (Jan Beulich, 2013-01-23; 1 file changed, +471)

  This is being done mostly in the form previously used on x86-32, utilizing
  the second L3 page table slot within the per-domain mapping area for those
  mappings. It remains to be determined whether that concept is really
  suitable, or whether instead re-implementing at least the non-global variant
  from scratch would be better.

  Also add the helpers {clear,copy}_domain_page() as well as initial uses of
  them.

  One question is whether, to exercise the non-trivial code paths, we shouldn't
  make the trivial shortcuts conditional upon NDEBUG being defined. See the
  debugging patch at the end of the series.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* bitkeeper revision 1.1041.6.6 (40e96d3bioFNWNS55cowRl9PXLQZ9Q) (kaf24@scramble.cl.cam.ac.uk, 2004-07-05; 1 file changed, -81)

  More x86-64 stuff.
* bitkeeper revision 1.952 (40c8935a3XSRdQfnx5RoO7XgaggvOQ) (kaf24@scramble.cl.cam.ac.uk, 2004-06-10; 1 file changed, +81)

  Towards x86_64 support. Merged a bunch of the existing x86_64 stuff back
  into a generic 'x86' architecture. The aim is to share as much as possible
  between the 32- and 64-bit worlds.