commit 40d66baa46ca8a9ffa6df3e063a967d08ec92bcf
Author:     Jan Beulich <jbeulich@suse.com>
AuthorDate: 2013-10-11 09:28:26 +0200
Commit:     Jan Beulich <jbeulich@suse.com>
CommitDate: 2013-10-11 09:28:26 +0200
Parent:     4c37ed562224295c0f8b00211287d57cae629782
Tree:       551daf93c5e8b6a8bd50cf58f5a30eb1da8ec1dc (xen/include/asm-x86/paging.h)
x86: correct LDT checks
- MMUEXT_SET_LDT should behave as similarly to the LLDT instruction as
  possible: fail only if the base address is non-canonical
- instead, LDT descriptor accesses should fault if the descriptor
  address ends up being non-canonical (by ensuring this we also avoid
  reading an entry from the mach-to-phys table and considering it a
  page table entry)
- fault propagation on using LDT selectors must distinguish #PF and #GP
  (the latter must be raised for a non-canonical descriptor address,
  which also applies to several other uses of propagate_page_fault(),
  and hence the problem is being fixed there)
- map_ldt_shadow_page() should properly wrap addresses for 32-bit VMs
At once remove the odd invocation of map_ldt_shadow_page() from the
MMUEXT_SET_LDT handler: there's nothing really telling us that the
first LDT page is going to be preferred over others.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Keir Fraser <keir@xen.org>
Diffstat (limited to 'xen/include/asm-x86/paging.h')
 xen/include/asm-x86/paging.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index 9553e4329d..105a0ca1b3 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -386,7 +386,8 @@ guest_get_eff_l1e(struct vcpu *v, unsigned long addr, void *eff_l1e)
     if ( likely(!paging_mode_translate(v->domain)) )
     {
         ASSERT(!paging_mode_external(v->domain));
-        if ( __copy_from_user(eff_l1e,
+        if ( !__addr_ok(addr) ||
+             __copy_from_user(eff_l1e,
                               &__linear_l1_table[l1_linear_offset(addr)],
                               sizeof(l1_pgentry_t)) != 0 )
             *(l1_pgentry_t *)eff_l1e = l1e_empty();