commit 9a70856bb28bb8c9b1d37fb8a005447ac77b0619
Author:    Jan Beulich <jbeulich@novell.com>
Committer: Jan Beulich <jbeulich@novell.com>
Date:      2011-04-05 13:01:25 +0100
Tree:      e03eabf8a03ef712e5b93a91d4b5e13923b0c4a4 (xen/arch/x86/x86_64/mm.c)
Parent:    4551775df58d42e2dcfd2a8ac4bcc713709e8b81
x86: split struct vcpu
This is accomplished by splitting out the guest_context member, which by
itself is larger than a page on x86-64. Quite a number of fields of
this structure are completely meaningless for HVM guests, and thus a
new struct pv_vcpu gets introduced, which is overlaid with
struct hvm_vcpu in struct arch_vcpu. The member mostly responsible
for the large size is trap_ctxt, which now gets allocated separately
(unless it fits on the same page as struct arch_vcpu, as is currently
the case for x86-32), and only for non-HVM, non-idle domains.
This change exposed a latent problem in arch_set_info_guest(),
which may be called on already initialized vCPUs, but so far copied
the new state into struct arch_vcpu without (in this case) actually
going through all the necessary accounting/validation steps. The logic
gets changed so that the pieces that bypass accounting are at least
verified to be no different from the currently active bits, and the
whole change fails in case they are. The logic does *not* get adjusted
here to do full error recovery, i.e. partially modified state still
does not get unrolled in case of failure.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Diffstat (limited to 'xen/arch/x86/x86_64/mm.c')
 xen/arch/x86/x86_64/mm.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 7a3424878e..58f0141382 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1100,8 +1100,8 @@ long subarch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
 long do_stack_switch(unsigned long ss, unsigned long esp)
 {
     fixup_guest_stack_selector(current->domain, ss);
-    current->arch.guest_context.kernel_ss = ss;
-    current->arch.guest_context.kernel_sp = esp;
+    current->arch.pv_vcpu.kernel_ss = ss;
+    current->arch.pv_vcpu.kernel_sp = esp;
     return 0;
 }
@@ -1116,21 +1116,21 @@ long do_set_segment_base(unsigned int which, unsigned long base)
         if ( wrmsr_safe(MSR_FS_BASE, base) )
             ret = -EFAULT;
         else
-            v->arch.guest_context.fs_base = base;
+            v->arch.pv_vcpu.fs_base = base;
         break;
 
     case SEGBASE_GS_USER:
         if ( wrmsr_safe(MSR_SHADOW_GS_BASE, base) )
             ret = -EFAULT;
         else
-            v->arch.guest_context.gs_base_user = base;
+            v->arch.pv_vcpu.gs_base_user = base;
         break;
 
     case SEGBASE_GS_KERNEL:
         if ( wrmsr_safe(MSR_GS_BASE, base) )
             ret = -EFAULT;
         else
-            v->arch.guest_context.gs_base_kernel = base;
+            v->arch.pv_vcpu.gs_base_kernel = base;
         break;
 
     case SEGBASE_GS_USER_SEL: