path: root/xen/arch/x86/mm
* x86/mm/shadow: Fix initialization of PV shadow L4 tables. (Tim Deegan, 2013-09-30; 1 file, -1/+5)
  Shadowed PV L4 tables must have the same Xen mappings as their unshadowed equivalent. This is done by copying the Xen entries verbatim from the idle pagetable, and then using guest_l4_slot() in the SHADOW_FOREACH_L4E() iterator to avoid touching those entries.
  adc5afbf1c70ef55c260fb93e4b8ce5ccb918706 (x86: support up to 16Tb) changed the definition of ROOT_PAGETABLE_XEN_SLOTS to extend right to the top of the address space, which causes the shadow code to copy Xen mappings into guest-kernel-address slots too.
  In the common case, all those slots are zero in the idle pagetable, and no harm is done. But if any slot above #271 is non-zero, Xen will crash when that slot is later cleared (it attempts to drop shadow-pagetable refcounts on its own L4 pagetables).
  Fix by using the new ROOT_PAGETABLE_PV_XEN_SLOTS when appropriate. Monitor pagetables need the full Xen mappings, so they keep using the old name (with its new semantics).
  This is CVE-2013-4356 / XSA-64.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Reviewed-by: Jan Beulich <jbeulich@suse.com>
* x86/hap: Remove bogus assertion in hap_free_p2m_page() (Andrew Cooper, 2013-09-25; 1 file, -1/+0)
  Coverity ID: 1055622
  Coverity correctly points out that this ASSERT() is unconditionally true as an unsigned integer is always >= 0. Judging from the shadow counterpart and p2m callsites, there is nothing invalid about freeing the final p2m page.
  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Tim Deegan <tim@xen.org>
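  A minimal sketch of the pattern Coverity flags here, with invented names rather than the real hap_free_p2m_page():

      #include <assert.h>

      /* Sketch only: 'pg_count' stands in for the shadow-pool counter. */
      void free_p2m_page_sketch(unsigned int pg_count)
      {
          /* Unconditionally true: an unsigned value can never be negative,
           * so this assertion can never fire and adds no protection. */
          assert(pg_count >= 0);
      }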
* x86/mm: Don't dereference p2m pointer before NULL check. (Tim Deegan, 2013-09-12; 1 file, -1/+3)
  Not a security bug, because in fact this is never called with a NULL argument.
  Coverity CID 1055955
  Signed-off-by: Tim Deegan <tim@xen.org>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
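  A short, hedged sketch of the general defect class (invented type and field names, not the actual p2m code): the guard has to precede any dereference, otherwise a NULL argument faults before the check is reached.

      struct p2m_sketch {
          int flags;
      };

      /* Returns the flags, or 0 if no p2m is present. */
      int read_flags_sketch(const struct p2m_sketch *p2m)
      {
          if ( p2m == NULL )        /* check first ...            */
              return 0;
          return p2m->flags;        /* ... dereference afterwards */
      }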
* x86/mm: Fix possible increment of uninitialised variable (Andrew Cooper, 2013-09-10; 1 file, -1/+1)
  Discovered by Coverity, CID 1056101
  When taking the continue branch on the first iteration of the loop, gfn would indeed be uninitialised when incremented. However, as gfn is unconditionally constructed from i{1..4} before use in the loop body, having it incremented in the loop header is useless. Therefore, simply remove it.
  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Reviewed-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Tim Deegan <tim@xen.org>
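  A hedged illustration of the pattern (invented loop, not the Xen walker): once gfn is rebuilt from the index inside the body, an increment in the for-header is useless, and with a continue before the first assignment it would read an uninitialised value.

      #include <stdint.h>

      void walk_sketch(unsigned int entries)
      {
          uint64_t gfn;

          /* Fixed shape: no 'gfn++' in the loop header. */
          for ( unsigned int i = 0; i < entries; i++ )
          {
              if ( (i % 2) != 0 )
                  continue;                /* gfn is never read on this path    */

              gfn = (uint64_t)i * 512;     /* rebuilt from the index before use */
              (void)gfn;                   /* placeholder for the real body     */
          }
      }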
* mem_sharing_nominate_page: p2mt should never change before p2m_change_type() (Nai Xia, 2013-08-08; 1 file, -14/+2)
  The p2mt change check for p2m_change_type() was first introduced when this code path was not protected by the p2m lock. Now that this code path is protected by the p2m lock, p2mt should never change before p2m_change_type().
  Signed-off-by: Nai Xia <nai.xia@gmail.com>
  Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Acked-by: Tim Deegan <tim@xen.org>
* Nested VMX: Flush TLBs and Caches if paging mode changed (Yang Zhang, 2013-08-06; 1 file, -0/+1)
  According to the SDM, if the paging mode is changed, all TLBs and caches must be flushed. This was missing from the nested handling logic. It also fixes the issue that 64-bit Windows cannot boot on top of L1 KVM.
  Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/shadow: fix off-by-one in MMIO permission check (Jan Beulich, 2013-05-15; 1 file, -3/+3)
  iomem_access_permitted() wants an inclusive range as input. Also use pfn_to_paddr() in nearby code instead of open coding it.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Tim Deegan <tim@xen.org>
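  A hedged sketch of the off-by-one (helper names invented; only the inclusive-range convention is taken from the commit message above):

      #include <stdint.h>

      /* Stand-in that, like the real helper, takes an inclusive [first, last] range. */
      static int range_access_permitted_sketch(uint64_t first_pfn, uint64_t last_pfn)
      {
          return first_pfn <= last_pfn;   /* trivial placeholder policy */
      }

      int mmio_check_sketch(uint64_t start_pfn, uint64_t nr_pages)
      {
          /* Passing 'start_pfn + nr_pages' would be one page past the end;
           * the last page of the range is 'start_pfn + nr_pages - 1'. */
          return range_access_permitted_sketch(start_pfn,
                                               start_pfn + nr_pages - 1);
      }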
* x86/mm/shadow: remove dead code for avoiding Xen entries on 32-bit tables. (Tim Deegan, 2013-05-09; 1 file, -28/+21)
  All non-external-mode (==PV) guests have 4-level pagetables now that the PAE build of Xen is gone.
  This patch should have no effect, since the condition it removes could never be true anyway: the l2 offset of HYPERVISOR_VIRT_START on 64-bit Xen is much higher than any l2 offset we could have seen in the tables (and indeed bigger than the 'int' type, which clang was complaining about). Actual compat PV guest xen entries are handled by the equivalent test in the 64-bit SHADOW_FOREACH_L2E() below.
  Reported-by: Julien Grall <julien.grall@linaro.org>
  Signed-off-by: Tim Deegan <tim@xen.org>
* x86: remove IS_PRIV_FOR references (Daniel De Graaf, 2013-04-23; 2 files, -11/+11)
  The check in guest_physmap_mark_populate_on_demand is redundant, since its only caller is populate_physmap whose only caller checks the xsm_memory_adjust_reservation hook prior to calling. Add a new XSM hook for the other two checks since they allow privileged domains to arbitrarily map a guest's memory.
  Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com> (release perspective)
* x86/mm/shadow: spurious warning when unmapping xenheap pages. (Tim Deegan, 2013-04-04; 1 file, -2/+5)
  Xenheap pages will always have an extra typecount, taken in share_xen_page_with_guest(), which doesn't come from a shadow PTE. Adjust the warning in sh_remove_all_mappings() to account for it.
  Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Signed-off-by: Tim Deegan <tim@xen.org>
  Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
* x86/mem_access: check for errors in p2m->set_entry(). (Tim Deegan, 2013-03-14; 1 file, -7/+18)
  These calls ought always to succeed. Assert that they do rather than ignoring the return value.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Acked-by: Aravindh Puthiyaparambil <aravindh@virtuata.com>
* x86/mem_sharing: check for errors in p2m->set_entry(). (Tim Deegan, 2013-03-14; 1 file, -4/+8)
  This call ought always to succeed. Assert that it does rather than ignoring the return value.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Acked-by: Jan Beulich <jbeulich@suse.com>
* x86/ept: check for errors in a few callers of ept_set_entry. (Tim Deegan, 2013-03-14; 1 file, -5/+15)
  AFAICT in all these cases we have the p2m lock and have just checked that the p2m trie is populated so the call should succeed. Make it explicit with ASSERT() rather than just ignoring the result.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Acked-by: Jan Beulich <jbeulich@suse.com>
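  A hedged sketch of the idiom used in the three commits above (invented names; the real p2m->set_entry() signature is not reproduced): when the surrounding locks guarantee success, make the expectation explicit instead of discarding the result.

      #include <assert.h>
      #include <stdint.h>

      /* Stand-in for a set-entry operation; nonzero means success. */
      static int set_entry_sketch(uint64_t gfn, uint64_t mfn)
      {
          (void)gfn; (void)mfn;
          return 1;
      }

      void change_entry_sketch(uint64_t gfn, uint64_t mfn)
      {
          int rc = set_entry_sketch(gfn, mfn);

          /* Locks are held and the table is known to be populated,
           * so this ought always to succeed; say so explicitly. */
          assert(rc);
          (void)rc;   /* avoid an unused-variable warning in NDEBUG builds */
      }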
* x86/mm: warn if we ever run out of shadow/hap pool for p2m/lgd ops. (Tim Deegan, 2013-03-14; 2 files, -1/+13)
  Even if the error propagates up through the p2m ops to the caller, it'll look like ENOMEM, which won't obviously be a shadow-pool problem. Warn on the console, once per domain.
  Reported-by: Jan Beulich <jbeulich@suse.com>
  Signed-off-by: Tim Deegan <tim@xen.org>
  Acked-by: Jan Beulich <jbeulich@suse.com>
* x86/shadow: don't use PV LDT area for cross-page access emulation (Jan Beulich, 2013-03-05; 1 file, -19/+8)
  As of 703ac3a ("x86: introduce create_perdomain_mapping()"), the page tables for this range don't get set up anymore for non-PV guests. And the way this was done was marked as a hack rather than a proper mechanism anyway. Use vmap() instead.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Tim Deegan <tim@xen.org>
* x86/mm: fix invalid unlinking of nested p2m tables (Matthew Daley, 2013-02-28; 1 file, -5/+3)
  Commit 90805dc (c/s 26387:4056e5a3d815) ("EPT: Make ept data structure or operations neutral") makes nested p2m tables be unlinked from the host p2m table before their destruction (in p2m_teardown_nestedp2m). However, by this time the host p2m table has already been torn down, leading to a possible race condition where another allocation between the two kinds of table being torn down can lead to a linked list assertion with debug=y builds or memory corruption on debug=n ones.
  Fix by swapping the order the two kinds of table are torn down in. While at it, remove the condition in p2m_final_teardown, as it is already checked identically in p2m_teardown_hostp2m itself.
  Signed-off-by: Matthew Daley <mattjd@gmail.com>
  Acked-by: Tim Deegan <tim@xen.org>
* x86/mm: avoid locked lookups in shadow emulation. (Tim Deegan, 2013-02-21; 1 file, -6/+16)
  Use get_page_from_gfn() instead of get_gfn(), avoiding taking the p2m lock in the common case.
  Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Acked-by: Tim Deegan <tim@xen.org>
* Fix emacs local variable block to use correct C style variable. (David Vrabel, 2013-02-21; 13 files, -13/+13)
  The emacs variable to set the C style from a local variable block is c-file-style, not c-set-style.
  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
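  For reference, a corrected block of the form this commit produces, as it would sit at the bottom of a C source file (the particular style values shown are only a plausible example, not necessarily the ones used throughout the tree):

      /*
       * Local variables:
       * mode: C
       * c-file-style: "BSD"
       * c-basic-offset: 4
       * indent-tabs-mode: nil
       * End:
       */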
* x86/mm: Take the p2m lock even in shadow mode. (Tim Deegan, 2013-02-21; 1 file, -4/+2)
  The reworking of p2m lookups to use get_gfn()/put_gfn() left the shadow code not taking the p2m lock, even in cases where the p2m would be updated (i.e. PoD). In many cases, shadow code doesn't need the exclusion that get_gfn()/put_gfn() provides, as it has its own interlocks against p2m updates, but this is taking things too far, and can lead to crashes in the PoD code.
  Now that most shadow-code p2m lookups are done with explicitly unlocked accessors, or with the get_page_from_gfn() accessor, which is often lock-free, we can just turn this locking on.
  The remaining locked lookups are in sh_page_fault() (in a path that's almost always already serializing on the paging lock), and in emulate_map_dest() (which can probably be updated to use get_page_from_gfn()). They're not addressed here but may be in a follow-up patch.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
* x86/mm: remove two files left over from the previous vram patches. (Tim Deegan, 2013-01-24; 1 file, -864/+0)
  I seem to have missed these when reverting 26399:b0e618cb0233.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* x86: properly use map_domain_page() in miscellaneous places (Jan Beulich, 2013-01-23; 1 file, -1/+17)
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: properly use map_domain_page() during domain creation/destruction (Jan Beulich, 2013-01-23; 2 files, -2/+2)
  This involves no longer storing virtual addresses of the per-domain mapping L2 and L3 page tables.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/mm: revert 26399:b0e618cb0233 (multiple vram areas) (Tim Deegan, 2013-01-17; 8 files, -329/+368)
  Although this passed my smoke-tests at commit time, I'm now seeing screen corruption on 32-bit WinXP guests. Reverting for now. :(
  Signed-off-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* x86/mm: Provide support for multiple frame buffers in HVM guests. (Robert Phillips, 2013-01-17; 9 files, -368/+1193)
  Support is provided for both shadow and hardware assisted paging (HAP) modes. This code bookkeeps the set of video frame buffers (vram), detects when the guest has modified any of those buffers and, upon request, returns a bitmap of the modified pages. This lets other software components re-paint the portions of the monitor (or monitors) that have changed.
  Each monitor has a frame buffer of some size at some position in guest physical memory. The set of frame buffers being tracked can change over time as monitors are plugged and unplugged.
  Signed-off-by: Robert Phillips <robert.phillips@citrix.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Removed a stray #include and a few hard tabs.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* nEPT: Expose EPT & VPID capabilities to L1 VMM (Zhang Xiantao, 2013-01-15; 1 file, -7/+17)
  Expose EPT's and VPID's basic features to the L1 VMM. For EPT, the A/D bit feature is not supported. For VPID, all features are exposed to the L1 VMM.
  Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Acked-by: Jun Nakajima <jun.nakajima@intel.com>
  Acked-by: Eddie Dong <eddie.dong@intel.com>
  Committed-by: Jan Beulich <jbeulich@suse.com>
* nEPT: Use minimal permission for nested p2m (Zhang Xiantao, 2013-01-15; 2 files, -11/+33)
  Emulate the permission check for the nested p2m. The current solution is to use the minimal permission, and once a permission violation is hit in L0, determine whether it was caused by the guest EPT or the host EPT.
  Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Acked-by: Jun Nakajima <jun.nakajima@intel.com>
  Acked-by: Eddie Dong <eddie.dong@intel.com>
  Committed-by: Jan Beulich <jbeulich@suse.com>
* EPT: Make ept data structure or operations neutral (Zhang Xiantao, 2013-01-15; 2 files, -63/+174)
  Share the current EPT logic with the nested EPT case, so make the related data structures and operations neutral to common EPT and nested EPT.
  Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Acked-by: Jun Nakajima <jun.nakajima@intel.com>
  Acked-by: Eddie Dong <eddie.dong@intel.com>
  Committed-by: Jan Beulich <jbeulich@suse.com>
* nested_ept: Implement guest ept's walker (Zhang Xiantao, 2013-01-15; 4 files, -8/+297)
  Implement the guest EPT page-table walker; some of the logic is based on shadow's ia32e PT walker. During the walk, if the target pages are not in memory, use the RETRY mechanism so the target page gets a chance to be brought back in.
  Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Acked-by: Jun Nakajima <jun.nakajima@intel.com>
  Acked-by: Eddie Dong <eddie.dong@intel.com>
  Committed-by: Jan Beulich <jbeulich@suse.com>
* nestedhap: Change nested p2m's walker to vendor-specific (Zhang Xiantao, 2013-01-15; 1 file, -30/+16)
  EPT and NPT adopt different formats for each level's entries, so make the walker functions vendor-specific.
  Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Acked-by: Jun Nakajima <jun.nakajima@intel.com>
  Acked-by: Eddie Dong <eddie.dong@intel.com>
  Committed-by: Jan Beulich <jbeulich@suse.com>
* nestedhap: Change hostcr3 and p2m->cr3 to meaningful words (Zhang Xiantao, 2013-01-15; 3 files, -21/+23)
  VMX doesn't have the concept of a host cr3 for the nested p2m (only SVM does), so change the naming to neutral wording.
  Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Acked-by: Jun Nakajima <jun.nakajima@intel.com>
  Acked-by: Eddie Dong <eddie.dong@intel.com>
  Committed-by: Jan Beulich <jbeulich@suse.com>
* xen/xsm: Add xsm_default parameter to XSM hooks (Daniel De Graaf, 2013-01-11; 3 files, -5/+5)
  Include the default XSM hook action as the first argument of the hook to facilitate quick understanding of how the call site is expected to be used (dom0-only, arbitrary guest, or device model).
  This argument does not solely define how a given hook is interpreted, since any changes to the hook's default action need to be made identically to all callers of a hook (if there are multiple callers; most hooks only have one), and may also require changing the arguments of the hook.
  Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Keir Fraser <keir@xen.org>
* arch/x86: Add missing mem_sharing XSM hooks (Daniel De Graaf, 2013-01-11; 2 files, -33/+33)
  This patch splits up the mem_sharing and mem_event XSM hooks to better cover what the code is doing. It also changes the utility function get_mem_event_op_target to rcu_lock_live_remote_domain_by_id because there is no mm-specific logic in there.
  Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Acked-by: Tim Deegan <tim@xen.org>
  Acked-by: Jan Beulich <jbeulich@suse.com>
  Committed-by: Keir Fraser <keir@xen.org>
* x86/mm/hap: Adjust vram tracking to play nicely with log-dirty. (Robert Phillips, 2012-12-13; 2 files, -250/+102)
  The previous code assumed the guest would be in one of three mutually exclusive modes for bookkeeping dirty pages: (1) shadow, (2) hap utilizing the log dirty bitmap to support functionality such as live migrate, (3) hap utilizing the log dirty bitmap to track dirty vram pages. Races arose when a guest attempted to track dirty vram while performing live migrate. (The dispatch table managed by paging_log_dirty_init() might change in the middle of a log dirty or a vram tracking function.)
  This change allows hap log dirty and hap vram tracking to be concurrent. Vram tracking no longer uses the log dirty bitmap. Instead it detects dirty vram pages by examining their p2m type. The log dirty bitmap is only used by the log dirty code. Because the two operations use different mechanisms, they are no longer mutually exclusive.
  Signed-off-by: Robert Phillips <robert.phillips@citrix.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Minor whitespace changes to conform with coding style
  Signed-off-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* xen: centralize accounting for domain tot_pages (Dan Magenheimer, 2012-12-10; 1 file, -2/+2)
  Provide and use a common function for all adjustments to a domain's tot_pages counter in anticipation of future and/or out-of-tree patches that must adjust related counters atomically.
  Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
  Committed-by: Keir Fraser <keir@xen.org>
* x86: mark certain items static (Jan Beulich, 2012-12-07; 1 file, -1/+1)
  ..., and at once constify the data items among them.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/p2m: drop redundant macro definitions (Jan Beulich, 2012-12-07; 1 file, -12/+0)
  Also, add log level indicator to P2M_ERROR(), and drop pointless underscores from all related macros' parameter names.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* xen: fix error handling of guest_physmap_mark_populate_on_demand() (Jan Beulich, 2012-12-04; 1 file, -3/+5)
  The only user of the "out" label bypasses a necessary unlock, thus enabling the caller to lock up Xen.
  Also, the function was never meant to be called by a guest for itself, so rather than inspecting the code paths in depth for potential other problems this might cause, and adjusting e.g. the non-guest printk() in the above error path, just disallow the guest access to it.
  Finally, the printk() (considering its potential of spamming the log, the more that it's not using XENLOG_GUEST), is being converted to P2M_DEBUG(), as debugging is what it apparently was added for in the first place.
  This is XSA-30 / CVE-2012-5514.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
  Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
  Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
* x86/hap: Fix memory leak of domain->arch.hvm_domain.dirty_vram (Tim Deegan, 2012-11-29; 1 file, -0/+3)
  Signed-off-by: Kouya Shimura <kouya@jp.fujitsu.com>
  Signed-off-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* xen/mm/shadow: check toplevel pagetables are present before unhooking them. (Ian Jackson, 2012-11-14; 1 file, -2/+6)
  If the guest has not fully populated its top-level PAE entries when it calls HVMOP_pagetable_dying, the shadow code could try to unhook entries from MFN 0. Add a check to avoid that case.
  This issue was introduced by c/s 21239:b9d2db109cf5.
  This is a security problem, XSA-23 / CVE-2012-4538.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
  Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
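  A hedged sketch of the kind of guard being added (flag value and helpers invented; the real shadow code uses its own entry accessors): skip slots whose present bit is clear so an empty entry is never treated as a reference to MFN 0.

      #include <stdint.h>

      #define PRESENT_BIT_SKETCH 0x1u

      /* 'unhook' stands in for the real shadow unhooking logic. */
      void unhook_top_level_sketch(const uint64_t *l3e, unsigned int nr,
                                   void (*unhook)(uint64_t mfn))
      {
          for ( unsigned int i = 0; i < nr; i++ )
          {
              if ( !(l3e[i] & PRESENT_BIT_SKETCH) )
                  continue;             /* nothing mapped here: skip the slot */
              unhook(l3e[i] >> 12);     /* safe: the entry is present         */
          }
      }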
* x86/physmap: Prevent incorrect updates of m2p mappings (Ian Jackson, 2012-11-14; 1 file, -0/+4)
  In certain conditions, such as low memory, set_p2m_entry() can fail. Currently, the p2m and m2p tables will get out of sync because we still update the m2p table after the p2m update has failed.
  If that happens, subsequent guest-invoked memory operations can cause BUG()s and ASSERT()s to kill Xen.
  This is fixed by only updating the m2p table if the p2m was successfully updated.
  This is a security problem, XSA-22 / CVE-2012-4537.
  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
  Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
  Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
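  A hedged sketch of the ordering fix (helper names invented): write the m2p side only after the p2m update has been confirmed, so a failed p2m update cannot leave the two tables out of sync.

      #include <stdint.h>

      /* Stand-ins: the p2m setter can fail, e.g. under memory pressure. */
      static int set_p2m_entry_sketch(uint64_t gfn, uint64_t mfn)
      {
          (void)gfn; (void)mfn;
          return 1;                        /* pretend it succeeded */
      }

      static void set_m2p_entry_sketch(uint64_t mfn, uint64_t gfn)
      {
          (void)mfn; (void)gfn;
      }

      int add_mapping_sketch(uint64_t gfn, uint64_t mfn)
      {
          if ( !set_p2m_entry_sketch(gfn, mfn) )
              return -1;                   /* p2m failed: leave the m2p alone */
          set_m2p_entry_sketch(mfn, gfn);  /* only reached after p2m success  */
          return 0;
      }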
* x86/mm x86 shadow: Fix typo in sh_invlpg sl3 page presence check (Matthew Daley, 2012-11-12; 1 file, -1/+1)
  Signed-off-by: Matthew Daley <mattjd@gmail.com>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* p2m: rename p2m_is_magic to p2m_is_pod (Olaf Hering, 2012-10-22; 2 files, -2/+2)
  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Acked-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* xen: replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when appropriate (Stefano Stabellini, 2012-10-17; 4 files, -4/+4)
  Note: these changes don't make any difference on x86.
  Replace XEN_GUEST_HANDLE with XEN_GUEST_HANDLE_PARAM when it is used as a hypercall argument.
  Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
  Committed-by: Ian Campbell <ian.campbell@citrix.com>
* x86: enable VIA CPU support (Jan Beulich, 2012-09-21; 2 files, -2/+2)
  Newer VIA CPUs have both 64-bit and VMX support. Enable them to be recognized for these purposes, at once stripping off any 32-bit CPU only bits from the respective CPU support file, and adding 64-bit ones found in recent Linux.
  This particularly implies untying the VMX == Intel assumption in a few places.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* printk: prefer %#x et al over 0x%x (Jan Beulich, 2012-09-21; 4 files, -7/+7)
  Performance is not an issue with printk(), so let the function do minimally more work and instead save a byte per affected format specifier.
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
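  A quick illustration using standard printf semantics, which cover the flags in question (one caveat worth noting: '%#x' prints a zero value as "0", with no "0x" prefix):

      #include <stdio.h>

      int main(void)
      {
          unsigned int flags = 0x1a2b;

          printf("flags = %#x\n", flags);   /* prints "flags = 0x1a2b"                    */
          printf("flags = 0x%x\n", flags);  /* same output, one byte longer per specifier */
          return 0;
      }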
* x86/mm: Update comments now that Xen is always 64-bit. (Tim Deegan, 2012-09-13; 1 file, -9/+9)
  Signed-off-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* x86/mm: remove the linear mapping of the p2m tables. (Tim Deegan, 2012-09-13; 3 files, -204/+0)
  Mapping the p2m into the monitor tables was an important optimization on 32-bit builds, where it avoided mapping and unmapping p2m pages during a walk. On 64-bit it makes no difference -- see http://old-list-archives.xen.org/archives/html/xen-devel/2010-04/msg00981.html
  Get rid of it, and use the explicit walk for all lookups.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Committed-by: Tim Deegan <tim@xen.org>
* x86: We can assume CONFIG_PAGING_LEVELS==4. (Keir Fraser, 2012-09-12; 7 files, -628/+25)
  Signed-off-by: Keir Fraser <keir@xen.org>
* xen: Remove x86_32 build target. (Keir Fraser, 2012-09-12; 5 files, -120/+16)
  Signed-off-by: Keir Fraser <keir@xen.org>
* xen: Don't BUG_ON() PoD operations on a non-translated guest. (Ian Jackson, 2012-09-05; 1 file, -1/+2)
  This is XSA-14 / CVE-2012-3496.
  Signed-off-by: Tim Deegan <tim@xen.org>
  Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
  Tested-by: Ian Campbell <ian.campbell@citrix.com>