This is done in order to reduce the number of fundamental mapping
mechanisms as well as the amount of code to be maintained. In the
course of this, the virtual space available to vmap() is grown from
16GB to 64GB.
Note that this requires callers of unmap_domain_page_global() to no
longer pass misaligned pointers: map_domain_page_global() returns
page-size-aligned pointers, so unmapping should be done accordingly.
unmap_vcpu_info() violated this and is adjusted here.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
Guests with a vCPU count not divisible by 4 have unused bits in the
last word of their inuse bitmap, and the garbage collection code would
therefore be misled into believing that some of those entries were
actually recoverable for use.
Also use an earlier-established local variable in mapcache_vcpu_init()
instead of re-calculating the value. (This was noticed while
investigating the generally better option of setting those overhanging
bits once during setup; that didn't work out simply enough, because
the mapping being established there isn't in the current address
space, and hence the bitmap isn't directly accessible at that point.)
Reported-by: Konrad Wilk <konrad.wilk@oracle.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
Looking at map_domain_page_global(), there doesn't seem to be any
locking that prevents two CPUs from populating a page of global-map
L1 entries at the same time.
Signed-off-by: Tim Deegan <tim@xen.org>
This saves allocation of a Xen heap page for tracking the L1 page table
pointers.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
... as well as free_perdomain_mappings(), and use them to carry out
the existing per-domain mapping setup/teardown. This at once makes the
setup of the first sub-range PV-domain-specific (with idle domains
also excluded), as the GDT/LDT mapping area is needed only for those
domains.
Also fix an improperly scaled BUILD_BUG_ON() expression in
mapcache_domain_init().
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
This involves no longer storing virtual addresses of the per-domain
mapping L2 and L3 page tables.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
This requires a minor hack to allow the correct page tables to be used
while running on Dom0's page tables (as they can't be determined from
"current" at that time).
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
This is being done mostly in the form previously used on x86-32,
utilizing the second L3 page table slot within the per-domain mapping
area for those mappings. It remains to be determined whether that
concept is really suitable, or whether instead re-implementing at least
the non-global variant from scratch would be better.
Also add the helpers {clear,copy}_domain_page() as well as initial uses
of them.
One question is whether, to exercise the non-trivial code paths, we
shouldn't make the trivial shortcuts conditional upon NDEBUG being
defined. See the debugging patch at the end of the series.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
More x86-64 stuff.
Towards x86_64 support. Merged a bunch of the existing x86_64 stuff
back into a generic 'x86' architecture. Aim is to share as much
as possible between 32- and 64-bit worlds.