| author | Andres Lagar-Cavilla <andres@lagarcavilla.org> | 2012-02-10 16:07:07 +0000 |
|---|---|---|
| committer | Andres Lagar-Cavilla <andres@lagarcavilla.org> | 2012-02-10 16:07:07 +0000 |
| commit | 59a36d66a5d50a66f8a629b334a0cbd7af360f80 (patch) | |
| tree | eb7a565b5e32cc6542dbf49205e26d9772dc46b9 /tools/memshr | |
| parent | 2da4c17b3b76d190da3dda35aa24910ff69984e5 (diff) | |
Use memops for mem paging, sharing, and access, instead of domctls
Per page operations in the paging, sharing, and access tracking subsystems are
all implemented with domctls (e.g. a domctl to evict one page, or to share one
page).
Under heavy load, the domctl path reveals a lack of scalability. The domctl
lock serializes dom0's vcpus in the hypervisor. When performing thousands of
per-page operations on dozens of domains, these vcpus will spin in the
hypervisor. Beyond the aggressive locking, an added inefficiency of blocking
vcpus in the domctl lock is that dom0 is prevented from re-scheduling any of
its other work-starved processes.
We retain the domctl interface for setting up and tearing down
paging/sharing/mem access for a domain. But we migrate all the per-page
operations to use the memory_op hypercalls (e.g. XENMEM_*).
Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Signed-off-by: Adin Scannell <adin@scannell.ca>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Tim Deegan <tim@xen.org>
Diffstat (limited to 'tools/memshr')
-rw-r--r-- | tools/memshr/interface.c | 4 |
1 file changed, 2 insertions, 2 deletions
diff --git a/tools/memshr/interface.c b/tools/memshr/interface.c
index b4d6d4e190..1c39dfa77d 100644
--- a/tools/memshr/interface.c
+++ b/tools/memshr/interface.c
@@ -186,12 +186,12 @@ int memshr_vbd_issue_ro_request(char *buf,
        remove the relevant ones from the map */
     switch(ret)
     {
-        case XEN_DOMCTL_MEM_SHARING_S_HANDLE_INVALID:
+        case XENMEM_SHARING_OP_S_HANDLE_INVALID:
             ret = blockshr_shrhnd_remove(memshr.blks, source_st, NULL);
             if(ret) DPRINTF("Could not rm invl s_hnd: %u %"PRId64" %"PRId64"\n",
                             source_st.domain, source_st.frame, source_st.handle);
             break;
-        case XEN_DOMCTL_MEM_SHARING_C_HANDLE_INVALID:
+        case XENMEM_SHARING_OP_C_HANDLE_INVALID:
             ret = blockshr_shrhnd_remove(memshr.blks, client_st, NULL);
             if(ret) DPRINTF("Could not rm invl c_hnd: %u %"PRId64" %"PRId64"\n",
                             client_st.domain, client_st.frame, client_st.handle);