| Commit message | Author | Age | Files | Lines |
| |
Otherwise we may leak memory when setting up nHVM fails halfway.
This implies that the individual destroy functions will have to remain
capable (in the VMX case they first need to be made so, following
26486:7648ef657fe7 and 26489:83a3fa9c8434) of being called for a vCPU
that the corresponding init function was never run on.
While at it, also remove a redundant check from the corresponding
parameter validation code.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>
Tested-by: Olaf Hering <olaf@aepfle.de>
| |
This conflicts with changes done in 26486:7648ef657fe7 and
26489:83a3fa9c8434 (i.e. the code added by them needs adjustment in
order for the change here to be correct).
| |
This implies that the individual destroy functions will have to remain
capable of being called for a vCPU that the corresponding init function
was never run on.
While at it, also clean up some inefficiencies in the corresponding
parameter validation code.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
| |
This eliminates a couple of incorrect/inconsistent uses of
map_domain_page() from VT-x code.
Note that this does _not_ add error handling where none was present
before, even though I think NULL returns from any of the mapping
operations touched here need to be properly handled. I just don't know
this code well enough to figure out what the right action in each case
would be.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
| |
Newer VIA CPUs have both 64-bit and VMX support. Enable them to be
recognized for these purposes, at once stripping from the respective
CPU support file any bits relevant only to 32-bit CPUs, and adding
64-bit ones found in recent Linux.
This particularly implies untying the VMX == Intel assumption in a few
places.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
| |
- no need for calling nestedhvm_setup() explicitly (can be a normal
init-call, and can be __init)
- calling _xmalloc() for multi-page, page-aligned memory regions is
inefficient - use alloc_xenheap_pages() instead
- although an allocation error is unlikely here, add error handling
  nevertheless (and have nestedhvm_vcpu_initialise() bail if an error
  occurred during setup)
- nestedhvm_enabled() must not access d->arch.hvm_domain without first
  checking that 'd' actually represents an HVM domain
Signed-off-by: Jan Beulich <JBeulich@suse.com>
Committed-by: Keir Fraser <keir@xen.org>
| |
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Keir Fraser <keir@xen.org>
--- 2011-10-18.orig/xen/arch/x86/hvm/nestedhvm.c 2011-10-11 17:24:46.000000000 +0200
+++ 2011-10-18/xen/arch/x86/hvm/nestedhvm.c 2011-10-18 16:45:02.000000000 +0200
@@ -114,9 +114,9 @@ nestedhvm_flushtlb_ipi(void *info)
void
nestedhvm_vmcx_flushtlb(struct p2m_domain *p2m)
{
- on_selected_cpus(&p2m->p2m_dirty_cpumask, nestedhvm_flushtlb_ipi,
+ on_selected_cpus(p2m->dirty_cpumask, nestedhvm_flushtlb_ipi,
p2m->domain, 1);
- cpumask_clear(&p2m->p2m_dirty_cpumask);
+ cpumask_clear(p2m->dirty_cpumask);
}
bool_t
--- 2011-10-18.orig/xen/arch/x86/mm/hap/nested_hap.c 2011-10-21 09:24:51.000000000 +0200
+++ 2011-10-18/xen/arch/x86/mm/hap/nested_hap.c 2011-10-18 16:44:35.000000000 +0200
@@ -88,7 +88,7 @@ nestedp2m_write_p2m_entry(struct p2m_dom
safe_write_pte(p, new);
if (old_flags & _PAGE_PRESENT)
- flush_tlb_mask(&p2m->p2m_dirty_cpumask);
+ flush_tlb_mask(p2m->dirty_cpumask);
paging_unlock(d);
}
--- 2011-10-18.orig/xen/arch/x86/mm/p2m.c 2011-10-14 09:47:46.000000000 +0200
+++ 2011-10-18/xen/arch/x86/mm/p2m.c 2011-10-21 09:28:33.000000000 +0200
@@ -81,7 +81,6 @@ static void p2m_initialise(struct domain
p2m->default_access = p2m_access_rwx;
p2m->cr3 = CR3_EADDR;
- cpumask_clear(&p2m->p2m_dirty_cpumask);
if ( hap_enabled(d) && (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) )
ept_p2m_init(p2m);
@@ -102,6 +101,8 @@ p2m_init_nestedp2m(struct domain *d)
d->arch.nested_p2m[i] = p2m = xzalloc(struct p2m_domain);
if (p2m == NULL)
return -ENOMEM;
+ if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
+ return -ENOMEM;
p2m_initialise(d, p2m);
p2m->write_p2m_entry = nestedp2m_write_p2m_entry;
list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
@@ -118,6 +119,11 @@ int p2m_init(struct domain *d)
p2m_get_hostp2m(d) = p2m = xzalloc(struct p2m_domain);
if ( p2m == NULL )
return -ENOMEM;
+ if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
+ {
+ xfree(p2m);
+ return -ENOMEM;
+ }
p2m_initialise(d, p2m);
/* Must initialise nestedp2m unconditionally
@@ -333,6 +339,9 @@ static void p2m_teardown_nestedp2m(struc
uint8_t i;
for (i = 0; i < MAX_NESTEDP2M; i++) {
+ if ( !d->arch.nested_p2m[i] )
+ continue;
+ free_cpumask_var(d->arch.nested_p2m[i]->dirty_cpumask);
xfree(d->arch.nested_p2m[i]);
d->arch.nested_p2m[i] = NULL;
}
@@ -341,8 +350,12 @@ static void p2m_teardown_nestedp2m(struc
void p2m_final_teardown(struct domain *d)
{
/* Iterate over all p2m tables per domain */
- xfree(d->arch.p2m);
- d->arch.p2m = NULL;
+ if ( d->arch.p2m )
+ {
+ free_cpumask_var(d->arch.p2m->dirty_cpumask);
+ xfree(d->arch.p2m);
+ d->arch.p2m = NULL;
+ }
/* We must teardown unconditionally because
* we initialise them unconditionally.
@@ -1200,7 +1213,7 @@ p2m_get_nestedp2m(struct vcpu *v, uint64
if (p2m->cr3 == CR3_EADDR)
hvm_asid_flush_vcpu(v);
p2m->cr3 = cr3;
- cpu_set(v->processor, p2m->p2m_dirty_cpumask);
+ cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
p2m_unlock(p2m);
nestedp2m_unlock(d);
return p2m;
@@ -1217,7 +1230,7 @@ p2m_get_nestedp2m(struct vcpu *v, uint64
p2m->cr3 = cr3;
nv->nv_flushp2m = 0;
hvm_asid_flush_vcpu(v);
- cpu_set(v->processor, p2m->p2m_dirty_cpumask);
+ cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
p2m_unlock(p2m);
nestedp2m_unlock(d);
--- 2011-10-18.orig/xen/include/asm-x86/p2m.h 2011-10-21 09:24:51.000000000 +0200
+++ 2011-10-18/xen/include/asm-x86/p2m.h 2011-10-18 16:39:34.000000000 +0200
@@ -198,7 +198,7 @@ struct p2m_domain {
* this p2m and those physical cpus whose vcpu's are in
* guestmode.
*/
- cpumask_t p2m_dirty_cpumask;
+ cpumask_var_t dirty_cpumask;
struct domain *domain; /* back pointer to domain */
| |
... in favor of using the new, nr_cpumask_bits-based ones.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Keir Fraser <keir@xen.org>
| |
rather than using the teardown and init functions.
This makes the locking clearer and avoids an expensive scan of all
pfns that's only needed for non-nested p2ms. It also moves the
tlb flush into the proper place in the flush logic, avoiding a
possible race against other CPUs.
Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
Acked-by: Christoph Egger <Christoph.Egger@amd.com>
| |
Signed-off-by: Eddie Dong <eddie.dong@intel.com>
While there, simplify and tidy the code.
Signed-off-by: Keir Fraser <keir@xen.org>
| |
Move all extern declarations into appropriate header files.
This also fixes up a few places where the caller and the definition
had different signatures.
Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
| |
Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
| |
They are a pointless level of abstraction beneath nestedhvm_* variants
of the same operations, which all callers should be using.
At the same time, nestedhvm_vcpu_initialise() does not need to call
destroy if initialisation fails. That is the vendor-specific init
function's job (cleaning up its own state on failure).
Signed-off-by: Keir Fraser <keir@xen.org>
| |
when nested HVM is enabled after vCPUs are allocated.
The previous patch would fail because the call to
nestedhvm_vcpu_initialise() in the HVM param set code
happens before nestedhvm_enabled(v->domain) is true.
Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
| |
for domains that aren't going to use it.
Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>
| |
This allows the guest to run nested guest with hap enabled.
Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
Acked-by: Tim Deegan <Tim.Deegan@citrix.com>
Committed-by: Tim Deegan <Tim.Deegan@citrix.com>
| |
Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Tim Deegan <Tim.Deegan@citrix.com>
Committed-by: Tim Deegan <Tim.Deegan@citrix.com>