path: root/xen/arch/x86/hvm/hvm.c
* x86/HVM: properly deal with hvm_copy_*_guest_phys() errors (Jan Beulich, 2013-10-14; 1 file changed, -8/+2)
  In memory read/write handling the default case should tell the caller that the operation cannot be handled, rather than that it succeeded, so that when new HVMCOPY_* states get added, not handling them explicitly will not result in errors being ignored. In the task switch emulation code, stop handling some errors but not others.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
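  A minimal sketch of the pattern this fix describes. HVMCOPY_* and X86EMUL_* follow the Xen naming mentioned above, but the enum values and function are illustrative, not the actual hvm.c logic:

      enum hvm_copy_result { HVMCOPY_okay, HVMCOPY_gfn_paged_out, HVMCOPY_unhandleable };
      enum emul_result { X86EMUL_OKAY, X86EMUL_RETRY, X86EMUL_UNHANDLEABLE };

      /* Map a copy result onto an emulation result; anything not explicitly
       * recognised fails loudly instead of being reported as success. */
      static enum emul_result mmio_copy_status(enum hvm_copy_result rc)
      {
          switch ( rc )
          {
          case HVMCOPY_okay:
              return X86EMUL_OKAY;
          case HVMCOPY_gfn_paged_out:
              return X86EMUL_RETRY;
          default:
              return X86EMUL_UNHANDLEABLE;
          }
      }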
* hvm/viridian: Avoid printing page_to_mfn(NULL) on error paths (Andrew Cooper, 2013-10-09; 1 file changed, -16/+12)
  While working in the viridian code, I noticed that 4cb6c4f4941 "x86/hvm: Use get_page_from_gfn() instead of get_gfn()/put_gfn." introduced two error paths where page_to_mfn(NULL) would be formatted and presented as a bad MFN. This provides junk in the warning rather than something useful.
  These two codepaths are fixed up to match their counterpart in wrmsr_hypervisor_regs().
  While auditing the other changes from 4cb6c4f4941, I noticed a small optimisation which could be made by changing the order of the validity checks to remove 6 NULL pointer checks.
  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: allow HVM guests to make console_io hypercall (Konrad Rzeszutek Wilk, 2013-10-04; 1 file changed, -0/+2)
  The console_io hypercall is provided for PV guests; for HVM guests console output is done via the 0xe9 port. However, the PV hypercall is more efficient, as it takes a string rather than one character per write.
  Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
* x86: make hvm_cpuid() tolerate NULL pointers (Jan Beulich, 2013-10-04; 1 file changed, -10/+20)
  Now that other HVM code has started making more extensive use of hvm_cpuid(), let's not force every caller to declare dummy variables for output it does not care about.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Acked-by: Jun Nakajima <jun.nakajima@intel.com>
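  A self-contained sketch of the NULL-tolerant pattern the commit describes; the function body is illustrative, not the real hvm_cpuid():

      #include <stdint.h>

      static void hvm_cpuid_like(uint32_t leaf, uint32_t *eax, uint32_t *ebx,
                                 uint32_t *ecx, uint32_t *edx)
      {
          uint32_t dummy;

          /* Callers may pass NULL for outputs they do not care about. */
          if ( !eax ) eax = &dummy;
          if ( !ebx ) ebx = &dummy;
          if ( !ecx ) ecx = &dummy;
          if ( !edx ) edx = &dummy;

          *eax = *ebx = *ecx = *edx = leaf;  /* placeholder output */
      }

  A caller interested only in ECX can then invoke hvm_cpuid_like(1, NULL, NULL, &ecx, NULL) without declaring three throw-away variables.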
* x86: properly handle hvm_copy_from_guest_{phys,virt}() errors (Jan Beulich, 2013-09-30; 1 file changed, -12/+6)
  Ignoring them generally implies using uninitialized data and, in all but two of the cases dealt with here, potentially leaking hypervisor stack contents to guests.
  This is CVE-2013-4355 / XSA-63.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Tim Deegan <tim@xen.org>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
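  A sketch of the caller-side discipline the fix enforces: never consume the destination buffer unless the copy reported success. The stub function below is a stand-in for hvm_copy_from_guest_phys(), not the Xen API:

      #include <string.h>

      enum hvm_copy_result { HVMCOPY_okay, HVMCOPY_bad_gva_to_gfn };

      /* Stand-in for hvm_copy_from_guest_phys(); always succeeds here. */
      static enum hvm_copy_result copy_from_guest_stub(void *buf, unsigned long gpa,
                                                       size_t len)
      {
          (void)gpa;
          memset(buf, 0, len);
          return HVMCOPY_okay;
      }

      static int read_guest_u32(unsigned long gpa, unsigned int *out)
      {
          unsigned int val;

          /* Only use 'val' when the copy succeeded; otherwise it is
           * uninitialized stack memory that must not propagate further. */
          if ( copy_from_guest_stub(&val, gpa, sizeof(val)) != HVMCOPY_okay )
              return -1;

          *out = val;
          return 0;
      }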
* Nested VMX: Expose unrestricted guest feature to guest (Yang Zhang, 2013-09-30; 1 file changed, -1/+2)
  With the virtual unrestricted guest feature, the L2 guest is allowed to run with PG cleared. Also, allow PAE to be clear during virtual vmexit emulation.
  Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
  Acked-by: Eddie.Dong@intel.com
* x86/HVM: linear address must be canonical for the whole accessed range (Jan Beulich, 2013-09-23; 1 file changed, -12/+13)
  ... rather than just for the first byte.
  While at it, also:
  - make the real mode case at least do a wrap-around check
  - drop the mis-named "gpf" label (we're not generating faults here) and use in-place returns instead
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
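  A self-contained sketch of checking canonicality for an entire access; the 48-bit virtual-address width is the common x86-64 case, and this is an illustration rather than the hvm.c implementation:

      #include <stdbool.h>
      #include <stdint.h>

      /* Canonical means bits 63:47 all equal bit 47 (48-bit virtual addresses). */
      static bool is_canonical(uint64_t addr)
      {
          return (((addr >> 47) + 1) & 0x1ffff) <= 1;
      }

      /* Require the first and last byte to be canonical and reject wrap-around
       * (assumes 'bytes' is small compared with the address-space halves). */
      static bool range_is_canonical(uint64_t addr, uint64_t bytes)
      {
          return bytes != 0 &&
                 addr + bytes - 1 >= addr &&
                 is_canonical(addr) &&
                 is_canonical(addr + bytes - 1);
      }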
* x86/HVM: fix failure path in hvm_vcpu_initialise (George Dunlap, 2013-09-18; 1 file changed, -1/+1)
  It looks like one of the failure cases in hvm_vcpu_initialise jumps to the wrong label; this could lead to slow leaks if something isn't cleaned up properly.
  I will probably change these labels in a future patch, but I figured it was better to have this fix separately. This is also a candidate for backport.
  Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
  Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
* console: buffer and show origin of guest PV writes (Daniel De Graaf, 2013-09-10; 1 file changed, -17/+10)
  Guests other than domain 0 using the console output have previously been controlled by the VERBOSE #define, but with no designation of which guest's output was on the console. This patch converts the HVM output buffering to be used by all domains except the hardware domain (dom0): stripping non-printable characters, line buffering the output, and prefixing it with the domain ID. This is especially useful for debugging stub domains during early boot.
  Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Acked-by: Keir Fraser <keir@xen.org>
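  A compact, self-contained sketch of that kind of line buffering. Buffer size, prefix format, and function names are illustrative, not the Xen console code:

      #include <ctype.h>
      #include <stdio.h>

      struct guest_console {
          unsigned int domid;
          char buf[128];
          size_t len;
      };

      static void guest_console_write(struct guest_console *c, const char *s, size_t n)
      {
          for ( size_t i = 0; i < n; i++ )
          {
              char ch = s[i];

              /* Flush a complete line, or a full buffer, prefixed with the
               * originating domain ID. */
              if ( ch == '\n' || c->len == sizeof(c->buf) - 1 )
              {
                  c->buf[c->len] = '\0';
                  printf("(d%u) %s\n", c->domid, c->buf);
                  c->len = 0;
                  if ( ch == '\n' )
                      continue;
              }

              /* Strip non-printable characters. */
              if ( isprint((unsigned char)ch) )
                  c->buf[c->len++] = ch;
          }
      }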
* x86/xsave: fix migration from xsave-capable to xsave-incapable host (Jan Beulich, 2013-09-09; 1 file changed, -33/+61)
  With CPUID features suitably masked this is supposed to work, but was completely broken (i.e. the case wasn't even considered when the original xsave save/restore code was written).
  First of all, xsave_enabled() wrongly returned the value of cpu_has_xsave, i.e. not even taking into consideration attributes of the vCPU in question. Instead this function ought to check whether the guest ever enabled xsave support (by writing a [non-zero] value to XCR0). As a result of this, a vCPU's xcr0 and xcr0_accum must no longer be initialized to XSTATE_FP_SSE (since that's a valid value a guest could write to XCR0), and the xsave/xrstor as well as the context switch code need to suitably account for this (by always enforcing at least this part of the state to be saved/loaded). This involves undoing large parts of c/s 22945:13a7d1f7f62c ("x86: add strictly sanity check for XSAVE/XRSTOR") - we need to cleanly distinguish between hardware capabilities and vCPU used features.
  Next, both HVM and PV save code needed tweaking to not always save the full state supported by the underlying hardware, but just the parts that the guest actually used. Similarly the restore code should bail not just on state being restored that the hardware cannot handle, but also on inconsistent save state (inconsistent XCR0 settings or size of saved state not in line with XCR0).
  And finally the PV extended context get/set code needs to use slightly different logic than the HVM one, as here we can't just key off of xsave_enabled() (i.e. avoid doing anything if a guest doesn't use xsave) because the tools use this function to determine host capabilities as well as read/write vCPU state. The set operation in particular needs to be capable of cleanly dealing with input that consists of only the xcr0 and xcr0_accum values (if they're both zero then no further data is required).
  While for things to work correctly both sides (saving _and_ restoring host) need to run with the fixed code, afaict no breakage should occur if either side isn't up to date (other than the breakage that this patch attempts to fix).
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Yang Zhang <yang.z.zhang@intel.com>
  Acked-by: Keir Fraser <keir@xen.org>
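  A tiny sketch of the distinction the commit draws between host capability and per-vCPU usage; the field and function names are illustrative stand-ins for the Xen ones:

      #include <stdbool.h>
      #include <stdint.h>

      struct vcpu_xstate {
          uint64_t xcr0;        /* current guest XCR0 */
          uint64_t xcr0_accum;  /* every feature the guest has ever enabled */
      };

      static bool host_has_xsave;   /* hardware capability (cpu_has_xsave) */

      /* Per-vCPU question: has this guest ever turned xsave on?  Keying this
       * off the accumulated XCR0 rather than off the host capability is the
       * point of the fix. */
      static bool vcpu_xsave_enabled(const struct vcpu_xstate *v)
      {
          return host_has_xsave && v->xcr0_accum != 0;
      }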
* x86/xsave: adjust state management (Jan Beulich, 2013-07-02; 1 file changed, -1/+7)
  The initial state for a vCPU is using default values, so there's no need to force the XRSTOR to read the state from memory. This saves a couple of thousand restores from memory just during boot of Linux on my Sandy Bridge system (I didn't try to make further measurements).
  The above requires that arch_set_info_guest() updates the state flags in the save area when valid floating point state got passed in, but that would really have been needed even before, in case XSAVE{,OPT} decided to clear one or both of the FP and SSE bits.
  Furthermore, hvm_vcpu_reset_state() shouldn't just clear out the FPU/SSE area, but needs to re-initialize MXCSR and FCW.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/hvm: fix HVMOP_inject_trap return value on success (Tim Deegan, 2013-06-21; 1 file changed, -0/+1)
  Reported-by: Antony Saba <Antony.Saba@mandiant.com>
  Signed-off-by: Tim Deegan <tim@xen.org>
  Acked-by: Keir Fraser <keir@xen.org>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
* x86/HVM: fix initialization of wallclock time for PVHVM on migration (Roger Pau Monné, 2013-06-12; 1 file changed, -19/+14)
  Call update_domain_wallclock_time from hvm_latch_shinfo_size even if the bitness of the guest has already been set; this fixes the wallclock not being set for PVHVM guests on resume from migration.
  Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
  Clean up the resulting code and retain the (slightly adjusted) original comment.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: fix XCR0 handling (Jan Beulich, 2013-06-04; 1 file changed, -11/+3)
  - both VMX and SVM ignored the ECX input to XSETBV
  - both SVM and VMX used the full 64-bit RAX when calculating the input mask to XSETBV
  - faults on XSETBV did not get recovered from
  Also consolidate the handling for PV and HVM into a single function, and make the per-CPU variable "xcr0" static to xstate.c.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
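  A self-contained sketch of the operand handling the first two bullets call for. The register semantics mirror the x86 architecture; the validation shown is illustrative, not the consolidated Xen handler:

      #include <stdbool.h>
      #include <stdint.h>

      #define XSTATE_FP (1ULL << 0)

      /* XSETBV takes the XCR index from ECX and the value from EDX:EAX --
       * only the low 32 bits of RAX/RDX/RCX matter. */
      static bool emulate_xsetbv(uint64_t rcx, uint64_t rax, uint64_t rdx,
                                 uint64_t *xcr0)
      {
          uint32_t index = (uint32_t)rcx;
          uint64_t new_bv = ((uint64_t)(uint32_t)rdx << 32) | (uint32_t)rax;

          if ( index != 0 )            /* only XCR0 is defined */
              return false;            /* caller should inject #GP */
          if ( !(new_bv & XSTATE_FP) ) /* the x87 bit must stay set */
              return false;

          *xcr0 = new_bv;
          return true;
      }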
* x86: re-enable VCPUOP_register_vcpu_time_memory_area (Jan Beulich, 2013-05-27; 1 file changed, -0/+2)
  By moving the call to update_vcpu_system_time() out of schedule() into arch-specific context switch code, the original problem of the function accessing the wrong domain's address space goes away (obvious even from patch context, as update_runstate_area() does similar copying).
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/hvm: avoid p2m lookups for vlapic accesses (Tim Deegan, 2013-05-16; 1 file changed, -0/+17) [tags: xen-4.3.0-rc2, 4.3.0-rc2]
  The LAPIC base address is a known GFN, so we can skip looking up the p2m: we know it should be handled as emulated MMIO. That helps performance in older Windows OSes, which make a _lot_ of TPR accesses.
  This will change the behaviour of any OS that maps other memory/devices at its LAPIC address; the new behaviour (the LAPIC mapping always wins) is closer to actual hardware behaviour.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Acked-by: Jan Beulich <jbeulich@suse.com>
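  A minimal sketch of that fast path; the constants and helper name are illustrative, and the real check lives in the HVM fault/MMIO handling:

      #include <stdbool.h>
      #include <stdint.h>

      #define PAGE_SHIFT        12
      #define APIC_DEFAULT_BASE 0xfee00000UL

      /* If the faulting GFN is the vCPU's LAPIC page, treat it as emulated
       * MMIO immediately instead of doing a p2m lookup first. */
      static bool is_vlapic_gfn(uint64_t apic_base, uint64_t gfn)
      {
          return (apic_base >> PAGE_SHIFT) == gfn;
      }

      /* Example: is_vlapic_gfn(APIC_DEFAULT_BASE, 0xfee00) is true. */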
* x86: make vcpu_reset() preemptible (Jan Beulich, 2013-05-02; 1 file changed, -1/+4)
  ... as dropping the old page tables may take significant amounts of time.
  This is part of CVE-2013-1918 / XSA-45.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Tim Deegan <tim@xen.org>
* x86/HVM: move per-vendor function tables into .init.data (Jan Beulich, 2013-04-29; 1 file changed, -5/+5)
  hvm_enable() copies the table contents rather than storing the pointer, so there's no need to keep these tables post-boot.
  Also constify the return values of the per-vendor initialization functions, making clear that once the per-vendor initialization is complete, the vendor-specific tables won't get modified anymore.
  Finally, in hvm_enable(), use the returned pointer for all read accesses, as that is more efficient than global variable accesses. Writes of course still need to go to the global variable.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
* x86/hvm: convert access check for nested HVM to XSM (Daniel De Graaf, 2013-04-23; 1 file changed, -4/+2)
  This adds an XSM hook for enabling nested HVM support, replacing an IS_PRIV check. This hook partially duplicates the xsm_hvm_param hook, but using the existing hook would require adding the index to it and would require a custom hook for the XSM-disabled case (using XSM_OTHER, which is less immediately readable) - whereas adding a new hook retains the clarity of the existing code.
  Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com> (release perspective)
* hvm: Improve APIC INIT/SIPI emulation, fixing it for call paths other than x86_emulate() (Keir Fraser, 2013-03-28; 1 file changed, -2/+0)
  In particular, on broadcast/multicast INIT/SIPI, we handle all target APICs at once in a single invocation of the init/sipi tasklet. This avoids needing to return an X86EMUL_RETRY error code to the caller, which was being ignored by all except x86_emulate().
  The original bug, and the general approach in this fix, were pointed out by Intel (yang.z.zhang@intel.com).
  Signed-off-by: Keir Fraser <keir@xen.org>
* x86/nhvm: properly clean up after failure to set up all vCPU-s (Jan Beulich, 2013-02-22; 1 file changed, -3/+5)
  Otherwise we may leak memory when setting up nHVM fails half way.
  This implies that the individual destroy functions will have to remain capable (in the VMX case they first need to be made so, following 26486:7648ef657fe7 and 26489:83a3fa9c8434) of being called for a vCPU that the corresponding init function was never run on.
  Once at it, also remove a redundant check from the corresponding parameter validation code.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Tested-by: Olaf Hering <olaf@aepfle.de>
* Fix emacs local variable block to use correct C style variable (David Vrabel, 2013-02-21; 1 file changed, -1/+1)
  The emacs variable to set the C style from a local variable block is c-file-style, not c-set-style.
  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
* hvm: Allow triple fault to imply crash rather than reboot (Andrew Cooper, 2013-02-15; 1 file changed, -2/+11)
  While the triple fault action on native hardware will result in a system reset, any modern operating system can and will make use of less violent reboot methods. As a result, the most likely cause of a triple fault is a fatal software bug.
  This patch allows the toolstack to indicate that a triple fault should mean a crash rather than a reboot. The default of reboot still remains the same.
  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
  Committed-by: Jan Beulich <jbeulich@suse.com>
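  A small sketch of making the triple-fault action configurable. The enum, struct, and printouts are illustrative, not the actual toolstack parameter or domain structure:

      #include <stdio.h>

      enum triple_fault_action { ACTION_REBOOT, ACTION_CRASH };

      struct dom {
          int domid;
          enum triple_fault_action on_triple_fault;  /* set by the toolstack */
      };

      static void handle_triple_fault(const struct dom *d)
      {
          /* Reboot remains the default; crash only if explicitly requested. */
          if ( d->on_triple_fault == ACTION_CRASH )
              printf("domain %d: triple fault -> crash\n", d->domid);
          else
              printf("domain %d: triple fault -> reboot\n", d->domid);
      }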
* x86/HAP: Add global enable/disable command line option (Andrew Cooper, 2013-02-12; 1 file changed, -1/+12)
  Also, correct a copy&paste error in the documentation.
  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Jan Beulich <jbeulich@suse.com>
* Revert 26503:69398345c10e (Jan Beulich, 2013-02-05; 1 file changed, -8/+3)
  This conflicts with changes done in 26486:7648ef657fe7 and 26489:83a3fa9c8434 (i.e. the code added by them needs adjustment in order for the change here to be correct).
* x86/nestedhvm: properly clean up after failure to set up all vCPU-s (Jan Beulich, 2013-02-04; 1 file changed, -3/+8)
  This implies that the individual destroy functions will have to remain capable of being called for a vCPU that the corresponding init function was never run on.
  Once at it, also clean up some inefficiencies in the corresponding parameter validation code.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: properly use map_domain_page() in nested HVM code (Jan Beulich, 2013-01-23; 1 file changed, -14/+28)
  This eliminates a couple of incorrect/inconsistent uses of map_domain_page() from VT-x code.
  Note that this does _not_ add error handling where none was present before, even though I think NULL returns from any of the mapping operations touched here need to properly be handled. I just don't know this code well enough to figure out what the right action in each case would be.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* xen: Do not allow guests to enable nested HVM on themselves (Ian Campbell, 2013-01-23; 1 file changed, -0/+5)
  There is no reason for this, and doing so exposes a memory leak to guests. Only toolstacks need write access to this HVM param.
  This is XSA-35 / CVE-2013-0152.
  Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
  Acked-by: Jan Beulich <JBeulich@suse.com>
  Committed-by: Jan Beulich <jbeulich@suse.com>
* hvm: wire up domctl and xsm hypercalls (Daniel De Graaf, 2013-01-23; 1 file changed, -0/+4)
  These hypercalls are usable by HVM guests. Once connected, simple functions of the Xen toolstack can be run from an HVM domain if that domain is permitted access (which is currently only possible via XSM).
  Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Committed-by: Keir Fraser <keir@xen.org>
* x86/mm: revert 26399:b0e618cb0233 (multiple vram areas) (Tim Deegan, 2013-01-17; 1 file changed, -6/+2)
  Although this passed my smoke-tests at commit time, I'm now seeing screen corruption on 32-bit WinXP guests. Reverting for now. :(
  Signed-off-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* mem_event: Add support for MEM_EVENT_REASON_MSR (Razvan Cojocaru, 2013-01-17; 1 file changed, -0/+11)
  Add the new MEM_EVENT_REASON_MSR event type. Works similarly to the other register events, except event.gla always contains the MSR address (in addition to event.gfn, which holds the value).
  MEM_EVENT_REASON_MSR does not honour the HVMPME_onchangeonly bit, as doing so would complicate the hvm_msr_write_intercept() switch-based handling of writes for different MSR addresses, with little added benefit.
  Signed-off-by: Razvan Cojocaru <rzvncj@gmail.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
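  A sketch of how such an event might be populated, following the field usage described above. The struct and the reason constant are simplified stand-ins, not the actual mem_event request layout:

      #include <stdint.h>

      #define MEM_EVENT_REASON_MSR 7  /* illustrative value */

      struct msr_mem_event {
          uint32_t reason;  /* MEM_EVENT_REASON_MSR */
          uint64_t gla;     /* carries the MSR index */
          uint64_t gfn;     /* carries the value written */
      };

      struct msr_mem_event make_msr_event(uint32_t msr, uint64_t value)
      {
          struct msr_mem_event req = {
              .reason = MEM_EVENT_REASON_MSR,
              .gla    = msr,    /* MSR address goes into .gla */
              .gfn    = value,  /* written value goes into .gfn */
          };
          return req;
      }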
* x86/mm: Provide support for multiple frame buffers in HVM guests (Robert Phillips, 2013-01-17; 1 file changed, -2/+6)
  Support is provided for both shadow and hardware assisted paging (HAP) modes. This code bookkeeps the set of video frame buffers (vram), detects when the guest has modified any of those buffers and, upon request, returns a bitmap of the modified pages. This lets other software components re-paint the portions of the monitor (or monitors) that have changed.
  Each monitor has a frame buffer of some size at some position in guest physical memory. The set of frame buffers being tracked can change over time as monitors are plugged and unplugged.
  Signed-off-by: Robert Phillips <robert.phillips@citrix.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Removed a stray #include and a few hard tabs.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* nested_ept: Implement guest EPT walker (Zhang Xiantao, 2013-01-15; 1 file changed, -0/+1)
  Implement the guest EPT page-table walker; some of the logic is based on the shadow code's ia32e PT walker. During the PT walk, if the target pages are not in memory, use the RETRY mechanism to get a chance to bring the target page back.
  Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Acked-by: Jun Nakajima <jun.nakajima@intel.com>
  Acked-by: Eddie Dong <eddie.dong@intel.com>
  Committed-by: Jan Beulich <jbeulich@suse.com>
* nestedhap: Change hostcr3 and p2m->cr3 to meaningful words (Zhang Xiantao, 2013-01-15; 1 file changed, -3/+3)
  VMX doesn't have the concept of a host CR3 for the nested p2m - only SVM does - so change the naming to neutral wording.
  Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Acked-by: Jun Nakajima <jun.nakajima@intel.com>
  Acked-by: Eddie Dong <eddie.dong@intel.com>
  Committed-by: Jan Beulich <jbeulich@suse.com>
* xen/xsm: Add xsm_default parameter to XSM hooks (Daniel De Graaf, 2013-01-11; 1 file changed, -13/+13)
  Include the default XSM hook action as the first argument of the hook to facilitate quick understanding of how the call site is expected to be used (dom0-only, arbitrary guest, or device model).
  This argument does not solely define how a given hook is interpreted, since any changes to the hook's default action need to be made identically to all callers of a hook (if there are multiple callers; most hooks only have one), and may also require changing the arguments of the hook.
  Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Keir Fraser <keir@xen.org>
* xen: avoid calling rcu_lock_*target_domain when an XSM hook exists (Daniel De Graaf, 2013-01-11; 1 file changed, -19/+19)
  The rcu_lock_{,remote_}target_domain_by_id functions are wrappers around an IS_PRIV_FOR check for the current domain. This is now redundant with XSM hooks, so replace these calls with rcu_lock_domain_by_any_id or rcu_lock_remote_domain_by_id to remove the duplicate permission checks.
  When XSM_ENABLE is not defined or when the dummy XSM module is used, this patch should not change any functionality. Because the locations of privilege checks have sometimes moved below argument validation, error returns of some functions may change from EPERM to EINVAL when called with invalid arguments and from a domain without permission to perform the operation.
  Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Acked-by: Jan Beulich <jbeulich@suse.com>
  Committed-by: Keir Fraser <keir@xen.org>
* x86/hvm: Bind device-model event-channels to registered device-model domid during vcpu initialisation (Keir Fraser, 2013-01-09; 1 file changed, -7/+9)
  Signed-off-by: Keir Fraser <keir@xen.org>
* x86/hvm: Bind xen-created event channels to building domain (Daniel De Graaf, 2013-01-08; 1 file changed, -2/+2)
  Instead of using a hardcoded domain 0 as the endpoint for the event channels created in hvm_vcpu_initialise, use the domain ID of the building domain, so that a domain builder in a domain other than dom0 has the expected access to the event channels.
  Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Committed-by: Jan Beulich <jbeulich@suse.com>
* x86: mark certain items static (Jan Beulich, 2012-12-07; 1 file changed, -4/+4)
  ..., and at once constify the data items among them.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/HVM: remove dead code (Jan Beulich, 2012-12-06; 1 file changed, -6/+0)
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* hvm: Limit the size of large HVM op batches (Tim Deegan, 2012-12-04; 1 file changed, -4/+34)
  Doing large p2m updates for HVMOP_track_dirty_vram without preemption ties up the physical processor. Integrating preemption into the p2m updates is hard, so simply limit to 1GB, which is sufficient for a 15000 x 15000 x 32bpp framebuffer.
  For HVMOP_modified_memory and HVMOP_set_mem_type, which can be made preemptible, add the necessary machinery to handle preemption.
  This is CVE-2012-5511 / XSA-27.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
  Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
  Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
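  A self-contained sketch of the preemption pattern described above: process a bounded chunk and, when work remains, record progress so the operation can be continued. The "continuation" here is just a return value, whereas in Xen it would be a hypercall continuation; preempt_needed() stands in for hypercall_preempt_check():

      #include <stdbool.h>
      #include <stdint.h>

      struct batch_op {
          uint64_t first_pfn;  /* updated to record progress */
          uint64_t nr;         /* pages still to process */
      };

      static void process_one_pfn(uint64_t pfn) { (void)pfn; }

      /* Illustrative stand-in for hypercall_preempt_check(). */
      static bool preempt_needed(uint64_t done) { return (done % 256) == 0; }

      /* Returns true when finished, false when the caller should re-issue
       * (continue) the operation with the updated batch_op. */
      static bool do_batch(struct batch_op *op)
      {
          uint64_t done = 0;

          while ( op->nr )
          {
              process_one_pfn(op->first_pfn);
              op->first_pfn++;
              op->nr--;
              done++;

              if ( op->nr && preempt_needed(done) )
                  return false;   /* continuation point */
          }
          return true;
      }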
* p2m: rename p2m_is_magic to p2m_is_pod (Olaf Hering, 2012-10-22; 1 file changed, -1/+1)
  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* hvm: handle PoD and grant pages in HVMOP_get_mem_type (Olaf Hering, 2012-10-19; 1 file changed, -0/+4)
  During kexec in a ballooned PVonHVM guest, the new kernel needs to check each pfn to see whether it is backed by an mfn, in order to find ballooned pages. Currently all PoD and grant pages appear as HVMMEM_mmio_dm, so the new kernel has to assume they are ballooned. This is wrong: PoD pages may turn into real RAM at runtime, and grant pages are also RAM.
  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
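  A sketch of the kind of type mapping the change implies: PoD and grant-mapped pages report as RAM rather than as emulated MMIO. The enums are simplified stand-ins for the Xen p2m and HVMMEM types:

      enum p2m_kind { P2M_RAM, P2M_POD, P2M_GRANT, P2M_MMIO_DM };
      enum hvmmem_type { HVMMEM_ram_rw, HVMMEM_mmio_dm };

      static enum hvmmem_type get_mem_type(enum p2m_kind k)
      {
          switch ( k )
          {
          case P2M_RAM:
          case P2M_POD:    /* may become real RAM at any time */
          case P2M_GRANT:  /* grant mappings are RAM too */
              return HVMMEM_ram_rw;
          default:
              return HVMMEM_mmio_dm;
          }
      }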
* xen: replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when appropriate (Stefano Stabellini, 2012-10-17; 1 file changed, -13/+13)
  Note: these changes don't make any difference on x86.
  Replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when it is used as a hypercall argument.
  Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
  Committed-by: Ian Campbell <ian.campbell@citrix.com>
* x86: consolidate frame state manipulation functions (Jan Beulich, 2012-10-04; 1 file changed, -2/+2)
  Rather than doing this in multiple places, have a single central function (decode_register()) to be used by all other code.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: Save/restore TSC adjust during HVM guest migration (Liu, Jinsong, 2012-09-26; 1 file changed, -0/+40)
  Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com>
  Committed-by: Jan Beulich <jbeulich@suse.com>
* x86: Implement TSC adjust feature for HVM guest (Liu, Jinsong, 2012-09-26; 1 file changed, -1/+29)
  The IA32_TSC_ADJUST MSR is maintained separately for each logical processor. A logical processor maintains and uses the IA32_TSC_ADJUST MSR as follows:
  1) On RESET, the value of the IA32_TSC_ADJUST MSR is 0.
  2) If an execution of WRMSR to the IA32_TIME_STAMP_COUNTER MSR adds (or subtracts) value X from the TSC, the logical processor also adds (or subtracts) value X from the IA32_TSC_ADJUST MSR.
  3) If an execution of WRMSR to the IA32_TSC_ADJUST MSR adds (or subtracts) value X from that MSR, the logical processor also adds (or subtracts) value X from the TSC.
  This patch provides TSC adjust support for HVM guests, with which the guest OS is happy when synchronizing its TSCs (see the sketch below).
  Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com>
  Committed-by: Jan Beulich <jbeulich@suse.com>
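  A minimal, self-contained model of rules 2) and 3) above; this illustrates the architectural behaviour being emulated, not the Xen implementation:

      #include <assert.h>
      #include <stdint.h>

      struct lp_state {
          uint64_t tsc;
          int64_t  tsc_adjust;   /* IA32_TSC_ADJUST, 0 at reset */
      };

      /* WRMSR to IA32_TIME_STAMP_COUNTER: the same delta is applied to
       * IA32_TSC_ADJUST. */
      static void wrmsr_tsc(struct lp_state *lp, uint64_t new_tsc)
      {
          int64_t delta = (int64_t)(new_tsc - lp->tsc);
          lp->tsc = new_tsc;
          lp->tsc_adjust += delta;
      }

      /* WRMSR to IA32_TSC_ADJUST: the same delta is applied to the TSC. */
      static void wrmsr_tsc_adjust(struct lp_state *lp, int64_t new_adjust)
      {
          int64_t delta = new_adjust - lp->tsc_adjust;
          lp->tsc_adjust = new_adjust;
          lp->tsc += (uint64_t)delta;
      }

      int main(void)
      {
          struct lp_state lp = { .tsc = 1000, .tsc_adjust = 0 };

          wrmsr_tsc(&lp, 500);           /* subtract 500 from the TSC...     */
          assert(lp.tsc_adjust == -500); /* ...and IA32_TSC_ADJUST follows.  */

          wrmsr_tsc_adjust(&lp, 0);      /* undoing the adjustment...        */
          assert(lp.tsc == 1000);        /* ...restores the original TSC.    */
          return 0;
      }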
* x86: enable VIA CPU support (Jan Beulich, 2012-09-21; 1 file changed, -9/+2)
  Newer VIA CPUs have both 64-bit and VMX support. Enable them to be recognized for these purposes, at once stripping off any 32-bit-CPU-only bits from the respective CPU support file and adding 64-bit ones found in recent Linux.
  This particularly implies untying the VMX == Intel assumption in a few places.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* printk: prefer %#x et al over 0x%x (Jan Beulich, 2012-09-21; 1 file changed, -3/+3)
  Performance is not an issue with printk(), so let the function do minimally more work and instead save a byte per affected format specifier.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* mem_event: fix regression affecting CR3, CR4 memory events (Steven Maresca, 2012-09-17; 1 file changed, -0/+4)
  This is a patch repairing a regression in code previously functional in 4.1.x. It appears that, during some refactoring work, calls to hvm_memory_event_cr3 and hvm_memory_event_cr4 were lost.
  These functions were originally called in mov_to_cr() of vmx.c, but the commit http://xenbits.xen.org/hg/xen-unstable.hg/rev/1276926e3795 abstracted the original code into generic functions up a level in hvm.c, dropping these calls in the process.
  Signed-off-by: Steven Maresca <steve@zentific.com>
  Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Keir Fraser <keir@xen.org>