path: root/xen/arch/x86/hvm/emulate.c
* x86/HVM: cache emulated instruction for retry processing (Jan Beulich, 2013-10-14, 1 file, -14/+43)
  Rather than re-reading the instruction bytes upon retry processing, stash
  away and re-use what we already read. That way we can be certain that the
  retry won't do something different from what requested the retry, getting
  once again closer to real hardware behavior (where what we use retries for
  is simply a bus operation, not involving redundant decoding of
  instructions).
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
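  A minimal sketch of the caching idea (the struct and function names here
  are illustrative, not the actual Xen ones): the first fetch fills a
  per-vCPU buffer, and any retry is served from that buffer instead of
  re-reading guest memory.

      #include <stdint.h>
      #include <string.h>

      #define X86EMUL_OKAY 0

      /* Hypothetical per-vCPU cache; not the real layout. */
      struct insn_cache {
          unsigned int bytes;   /* valid cached bytes; 0 means empty */
          uint8_t buf[16];      /* an x86 instruction is at most 15 bytes */
      };

      /* Stub standing in for the real guest-memory accessor. */
      int read_guest(uint64_t gip, void *dst, unsigned int len);

      static int insn_fetch(struct insn_cache *c, uint64_t gip,
                            unsigned int off, void *dst, unsigned int len)
      {
          if ( off + len > c->bytes )        /* first pass: fill the cache */
          {
              int rc = read_guest(gip + c->bytes, &c->buf[c->bytes],
                                  off + len - c->bytes);
              if ( rc != X86EMUL_OKAY )
                  return rc;
              c->bytes = off + len;
          }
          memcpy(dst, &c->buf[off], len);    /* a retry takes only this path */
          return X86EMUL_OKAY;
      }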
* x86/HVM: properly deal with hvm_copy_*_guest_phys() errors (Jan Beulich, 2013-10-14, 1 file, -8/+6)
  In memory read/write handling the default case should tell the caller that
  the operation cannot be handled, rather than that the operation succeeded,
  so that when new HVMCOPY_* states get added, not handling them explicitly
  will not result in errors being ignored. In the task switch emulation
  code, stop handling some errors but not others.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
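  Schematically, the change makes unhandled copy results loud instead of
  silently successful (a sketch, not the exact hunk; the HVMCOPY_* and
  X86EMUL_* names are from the source, the surrounding shape is not):

      static int mem_read(void *buf, paddr_t gpa, unsigned int size)
      {
          switch ( hvm_copy_from_guest_phys(buf, gpa, size) )
          {
          case HVMCOPY_okay:
              return X86EMUL_OKAY;
          case HVMCOPY_gfn_paged_out:
          case HVMCOPY_gfn_shared:
              return X86EMUL_RETRY;
          default:
              /* Any state not listed above -- including ones added
               * later -- now fails visibly instead of faking success. */
              return X86EMUL_UNHANDLEABLE;
          }
      }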
* x86/HVM: fix direct PCI port I/O emulation retry and error handling (Jan Beulich, 2013-10-14, 1 file, -5/+21)
  dpci_ioport_{read,write}() guest memory access failure handling should be
  modelled after process_portio_intercept()'s (and others): upon
  encountering an error on other than the first iteration, the count
  successfully handled needs to be stored and X86EMUL_OKAY returned, in
  order for the generic instruction emulator to update register state
  correctly before reporting failure or retrying (both of which would only
  happen after re-invoking emulation).

  Further we leverage (and slightly extend, due to the above mentioned need
  to return X86EMUL_OKAY) the "large MMIO" retry model.

  Note that there is still a special case not explicitly taken care of here:
  while the first retry on the last iteration of a "rep ins" correctly
  recovers the already read data, an eventual subsequent retry is being
  handled by the pre-existing mmio-large logic (through hvmemul_do_io()
  storing the [recovered] data [again], also taking into consideration that
  the emulator converts a single iteration "ins" to ->read_io() plus
  ->write()).

  Also fix an off-by-one in the mmio-large-read logic, and slightly simplify
  the copying of the data.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
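  The partial-completion convention the message refers to, as a loop-shaped
  sketch (one_iteration() is a hypothetical helper; 'reps' is in/out):

      #define X86EMUL_OKAY 0

      int one_iteration(unsigned long i);   /* stub for the real work */

      static int rep_loop(unsigned long *reps)
      {
          unsigned long i;

          for ( i = 0; i < *reps; i++ )
          {
              int rc = one_iteration(i);
              if ( rc != X86EMUL_OKAY )
              {
                  if ( i == 0 )
                      return rc;  /* nothing completed: report the error */
                  *reps = i;      /* partial success: report count done,  */
                  return X86EMUL_OKAY; /* so register state gets updated  */
              }
          }
          return X86EMUL_OKAY;
      }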
* x86/HVM: refuse doing string operations in certain situations (Jan Beulich, 2013-09-23, 1 file, -0/+14)
  We shouldn't do any acceleration for:
  - "rep movs" when either side is passed through MMIO, or when both sides
    are handled by qemu
  - "rep ins" and "rep outs" when the memory operand is any kind of MMIO
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/HVM: properly handle MMIO reads and writes wider than a machine word (Jan Beulich, 2013-09-20, 1 file, -20/+95)
  Just like real hardware, we ought to split such accesses transparently to
  the caller. With little extra effort we can at once even handle page
  crossing accesses correctly.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
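  Roughly what the splitting amounts to (do_mmio() is a hypothetical
  per-chunk helper; the real chunk selection is more involved):

      /* Sketch: break a wide access into machine-word-sized (or smaller)
       * chunks that never straddle a page boundary. */
      static int wide_mmio(unsigned long addr, uint8_t *p, unsigned int bytes)
      {
          while ( bytes > 0 )
          {
              unsigned int chunk = bytes < sizeof(long) ? bytes
                                                        : sizeof(long);
              unsigned int left = PAGE_SIZE - (addr & (PAGE_SIZE - 1));
              int rc;

              if ( chunk > left )
                  chunk = left;          /* clip at the page boundary */
              if ( (rc = do_mmio(addr, p, chunk)) != X86EMUL_OKAY )
                  return rc;
              addr += chunk; p += chunk; bytes -= chunk;
          }
          return X86EMUL_OKAY;
      }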
* xen: Define new struct hvm_trap and clean up vmx exception (Keir Fraser, 2012-05-30, 1 file, -2/+2)
  Define a new struct hvm_trap to represent trap information, rename
  hvm_inject_exception to hvm_inject_trap, and define a couple of wrappers
  around that function for existing callers.
  Signed-off-by: Keir Fraser <keir@xen.org>
  Signed-off-by: Xudong Hao <xudong.hao@intel.com>
  Committed-by: Keir Fraser <keir@xen.org>
* x86/hvm: use unlocked p2m lookups in hvmemul_rep_movs() (Tim Deegan, 2012-05-17, 1 file, -23/+7)
  The eventual hvm_copy or IO emulations will re-check the p2m and DTRT.
  Signed-off-by: Tim Deegan <tim@xen.org>
* x86/hvm: Use get_page_from_gfn() instead of get_gfn()/put_gfn. (Tim Deegan, 2012-05-17, 1 file, -33/+24)
  Signed-off-by: Tim Deegan <tim@xen.org>
  Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
* x86/hvm: put value of emulated register reads into trace records (David Vrabel, 2012-05-14, 1 file, -1/+5)
  The tracepoint for emulated MMIO and I/O port reads was always hit before
  the emulated read or write was done, which means that for reads the
  register value in the trace record was always 0. So for reads, delay the
  tracepoint until the register value is available.
  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Committed-by: Keir Fraser <keir@xen.org>
* x86/mm: make 'query type' argument to get_gfn into a set of flags (Tim Deegan, 2012-03-15, 1 file, -1/+1)
  Having an enum for this won't work if we want to add any orthogonal
  options to it -- the existing code is only correct (after the removal of
  p2m_guest in the previous patch) because there are no tests anywhere for
  '== p2m_alloc', only for '!= p2m_query' and '== p2m_unshare'. Replace it
  with a set of flags.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
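  The resulting shape, sketched (P2M_ALLOC/P2M_UNSHARE are the flag names
  this change introduces; the values and the call signature below are
  approximations, not the exact code):

      #define P2M_ALLOC    (1u << 0)   /* lookup may populate/page in */
      #define P2M_UNSHARE  (1u << 1)   /* lookup may break CoW sharing */

      /* Orthogonal options now combine freely, unlike the old enum: */
      mfn = get_gfn_type_access(p2m, gfn, &t, &a,
                                P2M_ALLOC | P2M_UNSHARE, NULL);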
* x86/mm: remove 'p2m_guest' lookup type. (Tim Deegan, 2012-03-15, 1 file, -1/+1)
  It was neither consistently used by callers nor correctly handled by the
  lookup code. Instead, treat any lookup that might allocate or unshare
  memory as a 'guest' lookup for the purposes of:
  - detecting the highest PoD gfn populated; and
  - crashing the guest on access to a broken page;
  which were the only things this was used for.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* x86/mm: Fix deadlock between p2m and event channel locks. (Andres Lagar-Cavilla, 2012-03-14, 1 file, -9/+29)
  The hvm io emulation code holds the p2m lock for the duration of the
  emulation, which may include sending an event to qemu. On a separate path,
  map_domain_pirq grabs the event channel and p2m locks in opposite order.

  Fix this by ensuring liveness of the ram_gfn used by io emulation, with a
  page ref.
  Reported-by: "Hao, Xudong" <xudong.hao@intel.com>
  Signed-off-by: "Hao, Xudong" <xudong.hao@intel.com>
  Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* x86/mm: Refactor possibly deadlocking get_gfn calls (Andres Lagar-Cavilla, 2012-02-10, 1 file, -19/+14)
  When calling get_gfn multiple times on different gfns in the same
  function, we can easily deadlock if p2m lookups are locked. Thus, refactor
  these calls to enforce simple deadlock-avoidance rules (see the sketch
  below):
  - lowest-numbered domain first
  - lowest-numbered gfn first
  Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavila.org>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
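  A sketch of the ordering rule for the two-gfn, same-domain case (the
  helper name is hypothetical; get_gfn is the real lookup):

      /* Acquire two gfns in canonical (lowest-first) order so no two
       * code paths can ever lock the pair in opposite order. */
      static void get_gfn_pair(struct domain *d,
                               unsigned long ga, unsigned long gb)
      {
          unsigned long lo = ga < gb ? ga : gb;
          unsigned long hi = ga < gb ? gb : ga;
          p2m_type_t t;

          get_gfn(d, lo, &t);
          if ( hi != lo )
              get_gfn(d, hi, &t);
      }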
* Re-order calls to put_gfn() around wait queue invocations (Andres Lagar-Cavilla, 2012-02-10, 1 file, -1/+1)
  Since we use wait queues to handle potential ring congestion cases, code
  paths that try to generate a mem event while holding a gfn lock would go
  to sleep in non-preemptible mode. Most such code paths can be fixed by
  simply postponing event generation until locks are released.
  Signed-off-by: Adin Scannell <adin@scannell.ca>
  Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* x86/mm: clean use of p2m unlocked queries (Andres Lagar-Cavilla, 2012-01-26, 1 file, -7/+28)
  Limit such queries only to p2m_query types. This is more compatible with
  the name and intended semantics: perform only a lookup, and explicitly in
  an unlocked way.
  Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* x86/emulator: generalize movq emulation (SSE2 and AVX variants) (Jan Beulich, 2011-12-01, 1 file, -0/+15)
  Extend the existing movq emulation to also support its SSE2 and AVX
  variants, the latter implying the addition of VEX decoding. Fold the read
  and write cases (as most of the logic is identical), and add movntq and
  variants (as they're very similar). Extend the testing code to also
  exercise these instructions.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* Modify naming of queries into the p2m (Andres Lagar-Cavilla, 2011-11-11, 1 file, -3/+26)
  Callers of lookups into the p2m code are now variants of get_gfn. All
  callers need to call put_gfn. The code behind it is a no-op at the moment,
  but will change to proper locking in a later patch.

  This patch does not change functionality: only naming, plus the addition
  of put_gfn's. set_p2m_entry retains its name because it is always called
  with the p2m_lock held.

  This patch is humongous, unfortunately, given the dozens of call sites
  involved. After this patch, anyone using old-style gfn_to_mfn will not
  succeed in compiling their code. This is on purpose: adapt to the new API.
  Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Keir Fraser <keir@xen.org>
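  The new calling convention in outline (get_gfn/put_gfn are the real entry
  points; the surrounding code is schematic):

      p2m_type_t t;
      mfn_t mfn = get_gfn(d, gfn, &t);  /* a no-op "lock" today, real later */

      if ( mfn_valid(mfn) )
      {
          /* ... use the translation while the gfn is held ... */
      }
      put_gfn(d, gfn);                  /* every get_gfn must be paired */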
* hvm: Clean up I/O emulation (Christoph Egger, 2011-10-25, 1 file, -34/+36)
  Move HVM io fields into a structure. On MMIO instruction failure, print
  out some more bytes.
  Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
  Committed-by: Keir Fraser <keir@xen.org>
* x86/mm/p2m: Make p2m interfaces take struct domain arguments. (Tim Deegan, 2011-06-02, 1 file, -6/+4)
  As part of the nested HVM patch series, many p2m functions were changed to
  take pointers to p2m tables rather than to domains. This patch reverses
  that for almost all of them, which:
  - gets rid of a lot of "p2m_get_hostp2m(d)" in code which really shouldn't
    have to know anything about how gfns become mfns; and
  - ties sharing and paging interfaces to a domain, which is what they
    actually act on, rather than a particular p2m table.

  In developing this patch it became clear that memory-sharing and nested
  HVM are unlikely to work well together. I haven't tried to fix that here
  beyond adding some assertions around suspect paths (as this patch is big
  enough with just the interface changes).
  Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
* x86/mm/p2m: merge gfn_to_mfn_unshare with other gfn_to_mfn paths. (Tim Deegan, 2011-06-02, 1 file, -1/+1)
  gfn_to_mfn_unshare() had its own function despite all other lookup types
  being handled in one place. Merge it into _gfn_to_mfn_type(), so that it
  gets the benefit of broken-page protection, for example, and tidy its
  interfaces up to fit. The unsharing code still has a lot of bugs, e.g.:
  - failure to alloc for unshare on a foreign lookup still BUG()s,
  - at least one race condition in unshare-and-retry,
  - p2m_* lookup types should probably be flags, not enum,
  but it's cleaner and will make later p2m cleanups easier.
  Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
* svm: implement instruction fetch part of DecodeAssist (on #PF/#NPF) (Keir Fraser, 2011-04-18, 1 file, -0/+2)
  Newer SVM implementations (Bulldozer) copy up to 15 bytes from the
  instruction stream into the VMCB when a #PF or #NPF exception is
  intercepted. This patch makes use of this information if available. This
  saves us from a) traversing the guest's page tables, b) mapping the
  guest's memory, and c) copying the instructions from there into the
  hypervisor's address space. This speeds up #NPF intercepts quite a lot and
  avoids cache and TLB thrashing.
  Signed-off-by: Andre Przywara <andre.przywara@amd.com>
  Signed-off-by: Keir Fraser <keir@xen.org>
* Update my email address to long-term stable address. (Keir Fraser, 2011-01-07, 1 file, -1/+1)
  Signed-off-by: Keir Fraser <keir@xen.org>
* x86 hvm: Clean up PIO fast path emulation. (Keir Fraser, 2010-09-15, 1 file, -2/+3)
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* Nested Virtualization: p2m infrastructure (Keir Fraser, 2010-08-09, 1 file, -4/+6)
  Change the p2m infrastructure to operate per-p2m instead of per-domain.
  This allows us to use multiple p2m tables per domain.
  Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
  Acked-by: Tim Deegan <Tim.Deegan@citrix.com>
* x86 hvm: msr-handling cleanup (Keir Fraser, 2010-06-10, 1 file, -17/+2)
  Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
* x86/hvm: accelerate I/O intercept handling (Keir Fraser, 2010-03-31, 1 file, -1/+1)
  Currently we go through the emulator every time an HVM guest does an I/O
  port access (in/out). This is unnecessary most of the time, as both VMX
  and SVM provide all the necessary information already in the VMCS/VMCB.
  String instructions are not covered by this shortcut, but they are quite
  rare and we would need to access the guest memory anyway.

  This patch decodes the information from the VMCB/VMCS and calls a simple
  handle_mmio wrapper. In handle_mmio() itself the emulation part will
  simply be skipped; this approach avoids code duplication. Since the
  vendor-specific part is quite trivial, I implemented both the VMX and SVM
  parts; please check the VMX part for sanity.

  I boot-tested both versions and ran some simple benchmarks. A
  micro-benchmark (hammering an I/O port in a tight loop) shows a
  significant performance improvement (down to 66% of the time needed to
  handle the intercept on an AMD K8, measured in the guest with TSC). Even
  when reading a 1GB file from an emulated IDE hard disk (Dom0 cached) I
  could get a 4-5% improvement. Some guest code (e.g. the TCP stack in some
  Windows versions) exercises the PM-Timer I/O port (0x1F48) very often
  (tens of thousands of times per second); these workloads also benefit,
  with up to a 5% improvement from this patch.
  Signed-off-by: Andre Przywara <andre.przywara@amd.com>
* The internal Xen x86 emulator is fixed to handle shared/sharable pages correctly. (Keir Fraser, 2009-12-17, 1 file, -3/+11)
  If pages cannot be unshared immediately (due to lack of free memory
  required to create private copies) the VCPU under emulation is paused, and
  the emulator returns X86EMUL_RETRY, which will get resolved after some
  memory is freed back to Xen (possibly through host paging).
  Signed-off-by: Grzegorz Milos <Grzegorz.Milos@citrix.com>
* Memory paging support for HVM guest emulation. (Keir Fraser, 2009-12-17, 1 file, -0/+21)
  A new HVMCOPY return value, HVMCOPY_gfn_paged_out, is defined to indicate
  that a gfn was paged out. This value, and PFEC_page_paged as appropriate,
  are caught and passed up as X86EMUL_RETRY to the emulator. This will cause
  the emulator to keep retrying the operation until it succeeds (once the
  page has been paged in).
  Signed-off-by: Patrick Colp <Patrick.Colp@citrix.com>
* Fix a reference to X86EMUL_OKAY which was hardcoded as a 0 instead. (Keir Fraser, 2009-12-17, 1 file, -1/+1)
  Signed-off-by: Patrick Colp <Patrick.Colp@citrix.com>
* Extend max vcpu number for HVM guest (Keir Fraser, 2009-10-29, 1 file, -3/+1)
  Reduce the size of the Xen-qemu shared ioreq structure to 32 bytes. This
  has two advantages:
  1. We can support up to 128 VCPUs with a single shared page.
  2. If/when we want to go beyond 128 VCPUs, a whole number of ioreq_t
     structures will pack into a single shared page, so a multi-page array
     will have no ioreq_t straddling a page boundary.
  Also, while modifying qemu, replace a 32-entry vcpu-indexed array with a
  dynamically-allocated array.
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
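  The packing arithmetic behind both points, written as Xen-style
  compile-time checks (BUILD_BUG_ON is Xen's static assert; the figures come
  from the message above, the exact assertions are illustrative):

      BUILD_BUG_ON(sizeof(ioreq_t) != 32);             /* the new size */
      BUILD_BUG_ON(PAGE_SIZE / sizeof(ioreq_t) < 128); /* 128 slots/page */
      BUILD_BUG_ON(PAGE_SIZE % sizeof(ioreq_t));       /* none straddles */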
* Miscellaneous data placement adjustments (Keir Fraser, 2009-10-28, 1 file, -1/+1)
  Make various data items const or __read_mostly where possible/reasonable.
  Signed-off-by: Jan Beulich <jbeulich@novell.com>
* x86 hvm: On failed hvm_send_assist_req(), io emulation state should be reset to HVMIO_none, as no IO is in flight (Keir Fraser, 2009-10-07, 1 file, -2/+5)
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* x86 hvm: Do not incorrectly retire an instruction emulation when a read/write cycle to qemu is dropped due to guest suspend (Keir Fraser, 2009-10-07, 1 file, -2/+2)
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* Mapping grant references into HVM guests, take 2 (Keir Fraser, 2009-07-13, 1 file, -11/+18)
  After some discussion, here's a second version of the patch I posted a
  couple of weeks back to map grant references into HVM guests. As before,
  this is done by modifying the P2M map, but this time there's no new
  hypercall to do it. Instead, the existing GNTTABOP_map is overloaded to
  perform a P2M mapping if called from a shadow mode translate guest. This
  matches the IA64 API.
  Signed-off-by: Steven Smith <steven.smith@citrix.com>
  Acked-by: Tim Deegan <tim.deegan@citrix.com>
  CC: Bhaskar Jayaraman <Bhaskar.Jayaraman@lsi.com>
* xentrace: Clean up HVM I/O tracing. (Keir Fraser, 2009-04-24, 1 file, -4/+4)
  Signed-off-by: Andre Przywara <andre.przywara@amd.com>
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* xentrace: Trace CR accesses in hvm emulator. (Keir Fraser, 2009-04-07, 1 file, -0/+3)
  Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
* x86 hvm: Fix hvmemul_read_msr(). (Keir Fraser, 2009-03-11, 1 file, -1/+1)
  Original patch by Christoph Egger <christoph.egger@amd.com>
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* xentrace: Trace mmio/io read/write value (Keir Fraser, 2008-11-03, 1 file, -0/+30)
  Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* x86 hvm: More checking around REP MOVS emulation. (Keir Fraser, 2008-08-26, 1 file, -8/+35)
  Check for self-corrupting copies, and report hvm_copy errors to the
  console log.
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
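  The self-corruption check is essentially a standard interval-overlap test;
  a sketch with hypothetical variable names (sgpa/dgpa: source/destination
  guest-physical addresses of the whole copy):

      /* A buffered copy whose ranges intersect would corrupt its own
       * input, so refuse the fast path for such copies. */
      if ( (sgpa < dgpa + bytes) && (dgpa < sgpa + bytes) )
          return X86EMUL_UNHANDLEABLE;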
* x86 hvm: Emulate RAM-to-RAM REP MOVS copies efficiently. (Keir Fraser, 2008-08-26, 1 file, -7/+22)
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
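  The efficient path boils down to a single bounce-buffer copy instead of
  per-element emulation; a sketch assuming Xen's xmalloc_bytes and
  hvm_copy_*_guest_phys interfaces, with the surrounding glue omitted:

      char *buf = xmalloc_bytes(bytes);        /* bounce buffer */

      if ( buf == NULL )
          return X86EMUL_UNHANDLEABLE;
      if ( hvm_copy_from_guest_phys(buf, sgpa, bytes) == HVMCOPY_okay )
          hvm_copy_to_guest_phys(dgpa, buf, bytes);
      xfree(buf);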
* x86 hvm: Fix binary arithmetic in hvmemul_linear_to_phys(). (Keir Fraser, 2008-08-20, 1 file, -11/+4)
  PAGE_SIZE - (x & ~PAGE_MASK) is not equivalent to -x & ~PAGE_MASK. Also,
  the early goto could be removed.
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
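  A quick standalone check of the inequivalence: the two expressions agree
  for unaligned x, but diverge (0x1000 vs 0) exactly when x is page-aligned.

      #include <stdio.h>

      #define PAGE_SIZE 0x1000UL
      #define PAGE_MASK (~(PAGE_SIZE - 1))

      int main(void)
      {
          unsigned long xs[2] = { 0x1234, 0x2000 };  /* unaligned, aligned */

          for ( int i = 0; i < 2; i++ )
          {
              unsigned long x = xs[i];
              printf("x=%#lx: %#lx vs %#lx\n", x,
                     PAGE_SIZE - (x & ~PAGE_MASK),   /* 0x1000 if aligned */
                     -x & ~PAGE_MASK);               /* 0 if aligned      */
          }
          return 0;
      }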
* x86 hvm: Another clarifying comment in the HVM address translation emulation. (Keir Fraser, 2008-08-19, 1 file, -0/+4)
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* x86 hvm: Add clarifying comments about clipping repeated string instructions to 4096 iterations (Keir Fraser, 2008-08-19, 1 file, -1/+10)
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
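  In sketch form (names approximate), the clip itself is a one-liner; what
  makes it safe is that register state is written back before returning:

      /* Bound one emulation pass; because rCX/rSI/rDI are updated first,
       * the guest transparently re-executes the instruction for whatever
       * iterations remain. */
      if ( *reps > 4096 )
          *reps = 4096;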
* x86 hvm: Build fix: param is paddr_t not ulong. (Keir Fraser, 2008-08-19, 1 file, -1/+2)
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* x86, hvm: Observe EFLAGS.DF when performing segmentation checks and address translations on multi-iteration string instructions (Keir Fraser, 2008-08-19, 1 file, -26/+59)
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
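  What observing DF amounts to, schematically (X86_EFLAGS_DF is the real
  flag bit; the variables are illustrative):

      /* Per-iteration stride: string ops walk up with DF=0, down with DF=1. */
      long stride = (regs->eflags & X86_EFLAGS_DF) ? -(long)bytes_per_rep
                                                   : (long)bytes_per_rep;

      /* Lowest linear address a *reps-long run touches, i.e. the base the
       * segmentation and translation checks must start from: */
      unsigned long lo = (regs->eflags & X86_EFLAGS_DF)
                         ? addr - (*reps - 1) * bytes_per_rep
                         : addr;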
* x86_emulate: read/write/insn_fetch emulation hooks now all take a pointer to emulator data buffer, and an arbitrary byte count (up to the size of a page of memory) (Keir Fraser, 2008-06-30, 1 file, -51/+56)
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* x86_emulate: Support CMPXCHG16B. (Keir Fraser, 2008-04-22, 1 file, -2/+6)
  Also clean up cmpxchg() callback handling so we can get rid of the
  specific cmpxchg8b handler.
  Signed-off-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* x86, hvm: Allow emulation of 'multi-cycle' MMIO reads and writes, which may require multiple round trips to the device model (Keir Fraser, 2008-04-17, 1 file, -5/+76)
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* x86_emulate: Emulate MMX movq instructions. (Keir Fraser, 2008-04-17, 1 file, -1/+18)
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* x86_emulate: Implement a more dynamic interface for handling FPU exceptions, which will allow emulation stubs to be built dynamically in a future patch (Keir Fraser, 2008-04-16, 1 file, -3/+16)
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>