author	Jan Beulich <jbeulich@suse.com>	2013-06-04 09:38:49 +0200
committer	Jan Beulich <jbeulich@suse.com>	2013-06-04 09:38:49 +0200
commit	c3401c1aece47dc5388184c9b6a3527655d5bbdf (patch)
tree	94e6782dbc821a7b264c0134514a11c6068a860e
parent	8dd9cde5d454e4cee55d0202abfd52ceeff1cd94 (diff)
x86/xsave: fix information leak on AMD CPUs
Just like for FXSAVE/FXRSTOR, XSAVE/XRSTOR also don't save/restore the last
instruction and operand pointers as well as the last opcode if there's no
pending unmasked exception (see CVE-2006-1056 and commit 9747:4d667a139318).

While the FXSR solution sits in the save path, I prefer to have this in the
restore path because there the handling is simpler (namely in the context of
the pending changes to properly save the selector values for 32-bit guest
code).

Also this is using FFREE instead of EMMS, as it doesn't seem unlikely that
in the future we may see CPUs with x87 and SSE/AVX but no MMX support. The
goal here anyway is just to avoid an FPU stack overflow. I would have
preferred to use FFREEP instead of FFREE (freeing two stack slots at once),
but AMD doesn't document that instruction.

This is CVE-2013-2076 / XSA-52.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

master commit: 8dcf9f0113454f233089e8e5bb3970d891928410
master date: 2013-06-04 09:26:54 +0200
-rw-r--r--	xen/arch/x86/i387.c | 15 +++++++++++++++
1 file changed, 15 insertions(+), 0 deletions(-)
diff --git a/xen/arch/x86/i387.c b/xen/arch/x86/i387.c
index 585b8c9204..6e22eba752 100644
--- a/xen/arch/x86/i387.c
+++ b/xen/arch/x86/i387.c
@@ -44,6 +44,21 @@ static void xrstor(struct vcpu *v)
 {
     struct xsave_struct *ptr = v->arch.xsave_area;
 
+    /*
+     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
+     * is pending. Clear the x87 state here by setting it to fixed
+     * values. The hypervisor data segment can be sometimes 0 and
+     * sometimes new user value. Both should be ok. Use the FPU saved
+     * data block as a safe address because it should be in L1.
+     */
+    if ( (ptr->xsave_hdr.xstate_bv & XSTATE_FP) &&
+         !(ptr->fpu_sse.fsw & 0x0080) &&
+         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+        asm volatile ( "fnclex\n\t"        /* clear exceptions */
+                       "ffree %%st(7)\n\t" /* clear stack tag */
+                       "fildl %0"          /* load to clear state */
+                       : : "m" (ptr->fpu_sse) );
+
     asm volatile (
         ".byte " REX_PREFIX "0x0f,0xae,0x2f"
         :