author    Jan Beulich <jbeulich@suse.com>  2013-10-11 09:28:26 +0200
committer Jan Beulich <jbeulich@suse.com>  2013-10-11 09:28:26 +0200
commit    40d66baa46ca8a9ffa6df3e063a967d08ec92bcf
tree      551daf93c5e8b6a8bd50cf58f5a30eb1da8ec1dc  /xen/include/asm-x86/mm.h
parent    4c37ed562224295c0f8b00211287d57cae629782
x86: correct LDT checks
- MMUEXT_SET_LDT should behave as similarly to the LLDT instruction as
possible: fail only if the base address is non-canonical
- instead, LDT descriptor accesses should fault if the descriptor
  address ends up being non-canonical (ensuring this also prevents us
  from reading an entry from the mach-to-phys table and treating it as
  a page table entry)
- fault propagation on using LDT selectors must distinguish #PF and #GP
(the latter must be raised for a non-canonical descriptor address,
which also applies to several other uses of propagate_page_fault(),
and hence the problem is being fixed there)
- map_ldt_shadow_page() should properly wrap addresses for 32-bit VMs
At once remove the odd invocation of map_ldt_shadow_page() from the
MMUEXT_SET_LDT handler: There's nothing really telling us that the
first LDT page is going to be preferred over others.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Keir Fraser <keir@xen.org>
Diffstat (limited to 'xen/include/asm-x86/mm.h')
-rw-r--r--  xen/include/asm-x86/mm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 5f0387528b..c835f76b9d 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -555,7 +555,7 @@ int new_guest_cr3(unsigned long pfn);
 void make_cr3(struct vcpu *v, unsigned long mfn);
 void update_cr3(struct vcpu *v);
 int vcpu_destroy_pagetables(struct vcpu *);
-void propagate_page_fault(unsigned long addr, u16 error_code);
+struct trap_bounce *propagate_page_fault(unsigned long addr, u16 error_code);
 void *do_page_walk(struct vcpu *v, unsigned long addr);
 int __sync_local_execstate(void);