authorAndres Lagar-Cavilla <andres@lagarcavilla.org>2012-02-10 16:07:07 +0000
committerAndres Lagar-Cavilla <andres@lagarcavilla.org>2012-02-10 16:07:07 +0000
commit6b719c3d7a8d1c6f626bbb14a4427f20acf13d0a (patch)
tree0a966f63337f0da4bb19a06673390871d9ab20c4 /xen/include/asm-x86/mm.h
parentc411a2bcbcce6034e90f3b802eb2dd4d8b8ad690 (diff)
x86/mm: Make p2m lookups fully synchronized wrt modifications
We achieve this by locking/unlocking the global p2m_lock in get/put_gfn. The
lock is always taken recursively, as there are many paths that call get_gfn
and later make another attempt at grabbing the p2m_lock.

The lock is not taken for shadow lookups. We believe there are no problems
remaining for synchronized p2m+shadow paging, but we are not enabling this
combination due to lack of testing. Unlocked shadow p2m accesses are tolerable
as long as shadows do not gain support for paging or sharing.

HAP (EPT) lookups and all modifications do take the lock.

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Acked-by: Tim Deegan <tim@xen.org>
Committed-by: Tim Deegan <tim@xen.org>
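The recursive-lock pattern described above can be illustrated with a minimal
sketch. This is not the Xen implementation: p2m_lock_t, the owner/depth
fields, and the get_gfn_sketch/put_gfn_sketch names are hypothetical
stand-ins showing why a depth counter lets nested get_gfn calls from the same
holder succeed instead of deadlocking.

```c
#include <assert.h>

/* Hypothetical recursive lock: counts depth so the same holder may
 * re-acquire it, as the commit message says get_gfn callers do. */
typedef struct {
    int owner;   /* illustrative holder id, -1 when free */
    int depth;   /* recursion count */
} p2m_lock_t;

static p2m_lock_t p2m_lock = { .owner = -1, .depth = 0 };

static void p2m_lock_recursive(int who)
{
    /* A real lock would spin here; this single-threaded sketch only
     * checks that the lock is free or already held by 'who'. */
    assert(p2m_lock.owner == -1 || p2m_lock.owner == who);
    p2m_lock.owner = who;
    p2m_lock.depth++;
}

static void p2m_unlock_recursive(int who)
{
    assert(p2m_lock.owner == who && p2m_lock.depth > 0);
    if (--p2m_lock.depth == 0)
        p2m_lock.owner = -1;
}

/* Every lookup is bracketed by get/put, taking the lock recursively. */
static unsigned long get_gfn_sketch(int who, unsigned long gfn)
{
    p2m_lock_recursive(who);
    return gfn + 0x1000; /* stand-in for the real gfn->mfn translation */
}

static void put_gfn_sketch(int who)
{
    p2m_unlock_recursive(who);
}

int nested_lookup_ok(void)
{
    unsigned long m1 = get_gfn_sketch(0, 5);
    unsigned long m2 = get_gfn_sketch(0, 7); /* nested: same holder, no deadlock */
    put_gfn_sketch(0);
    put_gfn_sketch(0);
    return m1 == 0x1005 && m2 == 0x1007 && p2m_lock.depth == 0;
}
```

The depth counter is the essential point: a non-recursive lock would
deadlock on the second get_gfn_sketch call above.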
Diffstat (limited to 'xen/include/asm-x86/mm.h')
-rw-r--r--  xen/include/asm-x86/mm.h  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 5f004da554..ea52c7c92b 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -350,9 +350,9 @@ void clear_superpage_mark(struct page_info *page);
 * of (gfn,domain) tuples to a list of gfn's that the shared page is currently
* backing. Nesting may happen when sharing (and locking) two pages -- deadlock
* is avoided by locking pages in increasing order.
- * Memory sharing may take the p2m_lock within a page_lock/unlock
- * critical section. We enforce ordering between page_lock and p2m_lock using an
- * mm-locks.h construct.
+ * All memory sharing code paths take the p2m lock of the affected gfn before
+ * taking the lock for the underlying page. We enforce ordering between page_lock
+ * and p2m_lock using an mm-locks.h construct.
*
* These two users (pte serialization and memory sharing) do not collide, since
* sharing is only supported for hvm guests, which do not perform pv pte updates.