author    Dan Magenheimer <dan.magenheimer@oracle.com>  2012-08-31 21:13:39 +0100
committer Dan Magenheimer <dan.magenheimer@oracle.com>  2012-08-31 21:13:39 +0100
commit    d6a43bb25d1ea2a478eda4222916e5b491894c53 (patch)
tree      7a10ef743de6a4c7894a4fcec1224f595f24f83b
parent    50188cde05e6f0f039d3d848c3acf9b2a6e21365 (diff)
tmem: add matching unlock for an about-to-be-destroyed object
A 4.2 changeset forces a preempt_disable/enable with every lock/unlock. Tmem has
dynamically allocated "objects" that each contain a lock, and that lock is held
when the object is destroyed; previously there was no reason to unlock something
that was about to be destroyed. But with the preempt_enable/disable in the generic
locking code, and the fact that do_softirq() ASSERTs that preempt_count must be
zero, a crash occurs soon after any object is destroyed. So release the lock
before destroying the object (see the sketch after the diff below).

Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Committed-by: Keir Fraser <keir@xen.org>
 xen/common/tmem.c | 1 +
 1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index dd276df327..1a8777c284 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -952,6 +952,7 @@ static NOINLINE void obj_free(obj_t *obj, int no_rebalance)
/* use no_rebalance only if all objects are being destroyed anyway */
if ( !no_rebalance )
rb_erase(&obj->rb_tree_node,&pool->obj_rb_root[oid_hash(&old_oid)]);
+ tmem_spin_unlock(&obj->obj_spinlock);
tmem_free(obj,sizeof(obj_t),pool);
}
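
For illustration, below is a minimal, self-contained C sketch of the failure mode
the commit message describes and of the fix. It is not Xen code: preempt_count,
sketch_spin_lock/unlock, sketch_obj and sketch_obj_free are hypothetical stand-ins
for the preemption counter, tmem_spin_lock/unlock, obj_t and obj_free.

/* Standalone sketch (not Xen code) of why the matching unlock is needed.
 * All identifiers here are illustrative assumptions, not real tmem/Xen names. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

static int preempt_count;              /* models the per-CPU preemption counter */

struct sketch_obj {
    int locked;                        /* stands in for obj->obj_spinlock */
};

static void sketch_spin_lock(struct sketch_obj *o)
{
    preempt_count++;                   /* models the 4.2-style preempt_disable() */
    o->locked = 1;
}

static void sketch_spin_unlock(struct sketch_obj *o)
{
    o->locked = 0;
    preempt_count--;                   /* models the matching preempt_enable() */
}

/* Freeing a still-locked object would leave preempt_count elevated forever;
 * mirroring the patch, the lock is released first, then the object is freed. */
static void sketch_obj_free(struct sketch_obj *o)
{
    sketch_spin_unlock(o);             /* the added tmem_spin_unlock() */
    free(o);
}

int main(void)
{
    struct sketch_obj *o = calloc(1, sizeof(*o));
    if ( !o )
        return 1;

    sketch_spin_lock(o);
    sketch_obj_free(o);

    /* models do_softirq()'s assertion that preempt_count is zero; without
     * the unlock in sketch_obj_free() this would fire. */
    assert(preempt_count == 0);
    printf("preempt_count balanced: %d\n", preempt_count);
    return 0;
}

With the unlock removed from sketch_obj_free(), preempt_count stays at 1 after the
object is gone and can never be rebalanced, which is the situation the ASSERT in
do_softirq() catches in the real hypervisor.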