path: root/xen
Commit message | Author | Date | Files | Lines
* remove debugging jmp (HEAD, master) | James | 2013-10-24 | 1 file | -1/+0

* patches to support booting from my grub | root | 2013-10-23 | 4 files | -4/+28
* spinlock: ensure the flags parameter is wide enough (staging) | Andrew Cooper | 2013-10-22 | 1 file | -3/+15

  Because of the construction of spin_lock_irq() (and variants), the flags
  parameter could be truncated. Use a BUILD_BUG_ON() to verify the width of
  the parameter.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
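  A minimal sketch of the compile-time check being described (illustrative;
  the spin_lock_irqsave() body here is simplified, not the actual Xen macro):

      /* Fails to compile when cond is true (negative array size). */
      #define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

      /* Sketch: reject a flags variable narrower than local_irq_save() fills. */
      #define spin_lock_irqsave(lock, flags)                           \
          do {                                                         \
              BUILD_BUG_ON(sizeof(flags) != sizeof(unsigned long));    \
              local_irq_save(flags);                                   \
              spin_lock(lock);                                         \
          } while (0)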
* widen flags parameter for spinlock_irqsave() and friends | Andrew Cooper | 2013-10-22 | 2 files | -4/+5

  These issues were detected using the subsequent patch which forces a
  compilation error if the result from local_irq_save() would be truncated.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/irq: local_irq_restore() should not blindly popf | Andrew Cooper | 2013-10-22 | 1 file | -3/+8

  local_irq_restore() should only be concerned with possibly changing the
  interrupt flag. A blind popf could corrupt other system flags. While
  playing in this area, fix up an open-coded use of X86_EFLAGS_IF.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
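  A hedged sketch of the principle, showing one simple way to honour only
  the interrupt flag (not necessarily how the actual patch does it):

      #define X86_EFLAGS_IF 0x00000200

      /* Restore only IF; leave every other EFLAGS bit alone. */
      static inline void my_local_irq_restore(unsigned long flags)
      {
          if ( flags & X86_EFLAGS_IF )
              asm volatile ( "sti" ::: "memory" );
          else
              asm volatile ( "cli" ::: "memory" );
      }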
* x86/xsave: also save/restore XCR0 across suspend (ACPI S3) | Jan Beulich | 2013-10-21 | 1 file | -0/+7

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* xen/arm: Add CPU ID for Broadcom Brahma-B15 | Marc Carino | 2013-10-18 | 2 files | -0/+9

  Let Xen recognize the Broadcom Brahma-B15 CPU by adding the appropriate
  MIDR mask to the initialization phase. Further, ensure that the console
  output properly reports the CPU manufacturer as "Broadcom Corporation".

  Signed-off-by: Marc Carino <marc.ceeeee@gmail.com>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
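  A sketch of what a MIDR match entry looks like (field names and the
  mask/value shown are illustrative placeholders, not taken from the patch;
  only implementer code 0x42 = 'B'roadcom is a known constant):

      #include <stdint.h>

      /* Hypothetical shape of a CPU match entry keyed on the MIDR register. */
      struct cpu_midr_match {
          uint32_t mask;     /* which MIDR bits participate in the compare */
          uint32_t value;    /* expected value of those bits               */
          const char *name;  /* reported manufacturer                      */
      };

      static const struct cpu_midr_match brahma_b15 = {
          .mask  = 0xff000000u,           /* illustrative: implementer only */
          .value = 0x42000000u,           /* 0x42 = Broadcom                */
          .name  = "Broadcom Corporation",
      };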
* x86: print relevant (tail) part of filename for warnings and crashes | Jan Beulich | 2013-10-17 | 1 file | -8/+14

  In particular when the originating construct is in a header file (and
  hence the file name is an absolute path instead of just the file name
  portion), the information can otherwise become rather useless when the
  build tree isn't sitting relatively close to the file system root.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
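  A minimal sketch of the idea, assuming only the last path component is
  kept (the actual patch may retain a longer tail):

      #include <string.h>

      /* Return the part of an absolute __FILE__ after the last '/'. */
      static const char *filename_tail(const char *fname)
      {
          const char *tail = strrchr(fname, '/');

          return tail ? tail + 1 : fname;
      }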
* xen: arm: Emacs style fix | Wei Liu | 2013-10-16 | 1 file | -1/+1

  Signed-off-by: Wei Liu <wei.liu2@citrix.com>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* add cap value to credit scheduler debug info | Juergen Gross | 2013-10-16 | 1 file | -1/+2

  Currently the weight is the only scheduling parameter printed for domains
  in the credit scheduler key handler. Print the cap value as well.

  Signed-off-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
* credit: unpause parked vcpu before destroying it | Juergen Gross | 2013-10-16 | 1 file | -0/+6

  A capped-out vcpu must be unpaused when moving it to another cpupool,
  otherwise it will stay paused forever.

  Signed-off-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
* xen/evtchn: Fix build on ARM | Julien Grall | 2013-10-15 | 2 files | -0/+2

  The recent event channel changes, introduced by commit a77eb86 and
  before, break the compilation on Xen ARM. This commit adds the missing
  includes in common/event_fifo.c and include/xen/sched.h.

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* Add DOMCTL to limit the number of event channels a domain may use | David Vrabel | 2013-10-14 | 6 files | -1/+33

  Add XEN_DOMCTL_set_max_evtchn which may be used during domain creation to
  set the maximum event channel port a domain may use. This may be used to
  limit the amount of Xen resources (global mapping space and xenheap) that
  a domain may use for event channels.

  A domain that does not have a limit set may use all the event channels
  supported by the event channel ABI in use.

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Reviewed-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Acked-by: Keir Fraser <keir@xen.org>
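  A hedged sketch of where such a limit would bite: a hypothetical
  allocation-time check (the field name and error code are illustrative,
  not lifted from the patch):

      #include <errno.h>

      struct domain { unsigned int max_evtchn_port; };  /* minimal stand-in */

      /* Reject ports beyond the DOMCTL-configured limit. */
      static int evtchn_port_within_limit(const struct domain *d,
                                          unsigned int port)
      {
          return port <= d->max_evtchn_port ? 0 : -ENOSPC;
      }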
* evtchn: add FIFO-based event channel hypercalls and port ops | David Vrabel | 2013-10-14 | 5 files | -1/+519

  Add the implementation of the FIFO-based event channel ABI: the new
  hypercall sub-ops (EVTCHNOP_init_control, EVTCHNOP_expand_array) and the
  required evtchn_ops (set_pending, unmask, etc.).

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Reviewed-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* evtchn: implement EVTCHNOP_set_priority and add the set_priority hook | David Vrabel | 2013-10-14 | 2 files | -0/+40

  Implement EVTCHNOP_set_priority. A new set_priority hook added to struct
  evtchn_port_ops will do the ABI specific validation and setup.

  If an ABI does not provide a set_priority hook (as is the case of the
  2-level ABI), the sub-op will return -ENOSYS.

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Reviewed-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
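  A sketch of the fallback behaviour (names are illustrative; only the
  shape of the dispatch matters):

      #include <errno.h>

      struct evtchn;   /* opaque for the sketch */

      typedef int (*set_priority_fn)(struct evtchn *evtchn,
                                     unsigned int priority);

      /* ABIs that don't supply the hook (e.g. 2-level) report -ENOSYS. */
      static int evtchn_set_priority(set_priority_fn hook,
                                     struct evtchn *evtchn,
                                     unsigned int priority)
      {
          return hook ? hook(evtchn, priority) : -ENOSYS;
      }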
* evtchn: add FIFO-based event channel ABI | David Vrabel | 2013-10-14 | 3 files | -3/+80

  Add the event channel hypercall sub-ops and the definitions for the
  shared data structures for the FIFO-based event channel ABI.

  The design document for this new ABI is available here:
  http://xenbits.xen.org/people/dvrabel/event-channels-F.pdf

  In summary, events are reported using a per-domain shared event array of
  event words. Each event word has PENDING, LINKED and MASKED bits and a
  LINK field for pointing to the next event in the event queue. There are
  16 event queues (with different priorities) per-VCPU.

  Key advantages of this new ABI include:
  - Support for over 100,000 events (2^17).
  - 16 different event priorities.
  - Improved fairness in event latency through the use of FIFOs.

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Reviewed-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
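  A sketch of the event-word layout just described; the bit positions
  follow the published FIFO ABI header, but treat them here as indicative:

      #include <stdint.h>

      typedef uint32_t event_word_t;   /* one word per event */

      #define EVTCHN_FIFO_PENDING    31
      #define EVTCHN_FIFO_MASKED     30
      #define EVTCHN_FIFO_LINKED     29
      #define EVTCHN_FIFO_LINK_BITS  17               /* 2^17 = 131072 events */
      #define EVTCHN_FIFO_LINK_MASK  ((1u << EVTCHN_FIFO_LINK_BITS) - 1)

      /* LINK points at the next event word in the per-VCPU queue. */
      static inline unsigned int evtchn_fifo_link(event_word_t word)
      {
          return word & EVTCHN_FIFO_LINK_MASK;
      }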
* evtchn: allow many more evtchn objects to be allocated per domain | David Vrabel | 2013-10-14 | 3 files | -29/+112

  Expand the number of event channels that can be supported internally by
  altering how struct evtchn's are allocated.

  The objects are indexed using a two-level scheme of groups and buckets
  (instead of only buckets). Each group is a page of bucket pointers. Each
  bucket is a page-sized array of struct evtchn's.

  The optimal number of evtchns per bucket is calculated at compile time.
  If XSM is not enabled, struct evtchn is 16 bytes and each bucket contains
  256, requiring only 1 group of 512 pointers for 2^17 (131,072) event
  channels. With XSM enabled, struct evtchn is 24 bytes, each bucket
  contains 128, and 2 groups are required.

  For the common case of a domain with only a few event channels, instead
  of requiring an additional allocation for the group page, the first
  bucket is indexed directly.

  As a consequence of this, struct domain shrinks by at least 232 bytes as
  32 bucket pointers are replaced with 1 bucket pointer and (at most) 2
  group pointers.

  [ Based on a patch from Wei Liu with improvements from Malcolm Crossley. ]

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Reviewed-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
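  A sketch of the two-level lookup the text describes (structure and names
  are modelled on the description, not copied from the patch). With
  16-byte evtchns the arithmetic works out to 256 per bucket and 512
  buckets per group, i.e. 2^17 ports fit in a single group:

      #include <stddef.h>

      #define PAGE_SIZE 4096u

      struct evtchn { unsigned char opaque[16]; };  /* 16 bytes, no XSM */

      #define EVTCHNS_PER_BUCKET (PAGE_SIZE / sizeof(struct evtchn))    /* 256 */
      #define BUCKETS_PER_GROUP  (PAGE_SIZE / sizeof(struct evtchn *))  /* 512 */

      struct evtchn_table {
          struct evtchn  *bucket0;   /* small domains: indexed directly */
          struct evtchn **group[2];  /* pages of bucket pointers        */
      };

      static struct evtchn *evtchn_from_port(struct evtchn_table *t,
                                             size_t port)
      {
          if ( port < EVTCHNS_PER_BUCKET )          /* the common case */
              return &t->bucket0[port];
          return &t->group[port / (EVTCHNS_PER_BUCKET * BUCKETS_PER_GROUP)]
                          [(port / EVTCHNS_PER_BUCKET) % BUCKETS_PER_GROUP]
                          [port % EVTCHNS_PER_BUCKET];
      }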
* evtchn: use a per-domain variable for the max number of event channels | David Vrabel | 2013-10-14 | 5 files | -5/+6

  Use d->max_evtchns instead of the MAX_EVTCHNS(d) macro. This avoids
  having to repeatedly check the ABI type.

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Reviewed-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* evtchn: print ABI specific state with the 'e' debug key | David Vrabel | 2013-10-14 | 3 files | -3/+22

  In the output of the 'e' debug key, print some ABI specific state in
  addition to the (p)ending and (m)asked bits. For the 2-level ABI, print
  the state of that event's selector bit. e.g.,

      (XEN)     port [p/m/s]
      (XEN)        1 [0/0/1]: s=3 n=0 x=0 d=0 p=74
      (XEN)        2 [0/0/1]: s=3 n=0 x=0 d=0 p=75

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Reviewed-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* evtchn: refactor low-level event channel port ops | David Vrabel | 2013-10-14 | 7 files | -61/+189

  Use functions for the low-level event channel port operations (set/clear
  pending, unmask, is_pending and is_masked). Group these functions into a
  struct evtchn_port_ops so they can be replaced by alternate
  implementations (for different ABIs) on a per-domain basis.

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Reviewed-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
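  A plausible shape for the ops table; the member list follows the
  operations named above, while the exact signatures in the patch may
  differ:

      #include <stdbool.h>

      struct domain;
      struct evtchn;

      struct evtchn_port_ops {
          void (*set_pending)(struct domain *d, struct evtchn *evtchn);
          void (*clear_pending)(struct domain *d, struct evtchn *evtchn);
          void (*unmask)(struct domain *d, struct evtchn *evtchn);
          bool (*is_pending)(struct domain *d, const struct evtchn *evtchn);
          bool (*is_masked)(struct domain *d, const struct evtchn *evtchn);
      };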
* debug: remove some event channel info from the 'i' and 'q' debug keys | David Vrabel | 2013-10-14 | 2 files | -13/+3

  The 'i' key would always use VCPU0's selector word when printing the
  event channel state. Remove the incorrect output as a subsequent change
  will add the (correct) information to the 'e' key instead.

  When dumping domain information, printing the state of the VIRQ_DEBUG
  port is redundant -- this information is available via the 'e' key.

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Reviewed-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/HVM: cache emulated instruction for retry processing | Jan Beulich | 2013-10-14 | 2 files | -14/+46

  Rather than re-reading the instruction bytes upon retry processing, stash
  away and re-use what we already read. That way we can be certain that the
  retry won't do something different from what requested the retry, getting
  once again closer to real hardware behavior (where what we use retries
  for is simply a bus operation, not involving redundant decoding of
  instructions).

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/HVM: properly deal with hvm_copy_*_guest_phys() errors | Jan Beulich | 2013-10-14 | 2 files | -16/+8

  In memory read/write handling, the default case should tell the caller
  that the operation cannot be handled rather than that it succeeded, so
  that when new HVMCOPY_* states get added, not handling them explicitly
  will not result in errors being ignored.

  In the task switch emulation code, stop handling some errors while
  ignoring others.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/HVM: don't ignore hvm_copy_to_guest_phys() errors during I/O intercept | Jan Beulich | 2013-10-14 | 1 file | -13/+107

  Building upon the extended retry logic we can now also make sure to not
  ignore errors resulting from writing data back to guest memory.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/HVM: fix direct PCI port I/O emulation retry and error handling | Jan Beulich | 2013-10-14 | 3 files | -18/+90

  dpci_ioport_{read,write}() guest memory access failure handling should be
  modelled after process_portio_intercept()'s (and others): upon
  encountering an error on other than the first iteration, the count
  successfully handled needs to be stored and X86EMUL_OKAY returned, in
  order for the generic instruction emulator to update register state
  correctly before reporting failure or retrying (both of which would only
  happen after re-invoking emulation).

  Further we leverage (and slightly extend, due to the above mentioned need
  to return X86EMUL_OKAY) the "large MMIO" retry model.

  Note that there is still a special case not explicitly taken care of
  here: while the first retry on the last iteration of a "rep ins"
  correctly recovers the already read data, an eventual subsequent retry is
  being handled by the pre-existing mmio-large logic (through
  hvmemul_do_io() storing the [recovered] data [again], also taking into
  consideration that the emulator converts a single iteration "ins" to
  ->read_io() plus ->write()).

  Also fix an off-by-one in the mmio-large-read logic, and slightly
  simplify the copying of the data.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/HVM: properly handle backward string instruction emulation | Jan Beulich | 2013-10-14 | 3 files | -44/+23

  Multiplying a signed 32-bit quantity with an unsigned 32-bit quantity
  produces an unsigned 32-bit result, yet for emulation of backward string
  instructions we need the result sign extended before getting added to the
  base address.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
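  A self-contained illustration of the arithmetic pitfall (generic C, not
  the patch itself):

      #include <stdio.h>

      int main(void)
      {
          unsigned long base = 0x100000UL;
          int reps = -1;            /* backward string op: negative step */
          unsigned int bytes = 4;   /* element size                      */

          /* int * unsigned int -> unsigned int: -4 becomes 0xfffffffc,
           * which is then ZERO-extended when added to the 64-bit base. */
          unsigned long wrong = base + reps * bytes;

          /* Sign-extend the 32-bit product first (two's complement). */
          unsigned long right = base + (long)(int)(reps * bytes);

          printf("wrong: %#lx\nright: %#lx\n", wrong, right);
          return 0;
      }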
* sched: Correct function prototypes | Andrew Cooper | 2013-10-14 | 1 file | -3/+3

  struct vcpu pointers are traditionally named v rather than d.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/MSI: fix locking in pci_restore_msi_state() | Jan Beulich | 2013-10-14 | 1 file | -1/+1

  Right after the loop the lock is being dropped, so all loop exits should
  happen with the lock still held.

  Reported-by: Kristoffer Egefelt <kristoffer@itoc.dk>
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Tested-by: Kristoffer Egefelt <kristoffer@itoc.dk>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
* sched: fix race between sched_move_domain() and vcpu_wake() | David Vrabel | 2013-10-14 | 1 file | -0/+11

  sched_move_domain() changes v->processor for all the domain's VCPUs. If
  another domain, softirq etc. triggers a simultaneous call to vcpu_wake()
  (e.g., by setting an event channel as pending), then vcpu_wake() may lock
  one schedule lock and try to unlock another.

  vcpu_schedule_lock() attempts to handle this but only does so for the
  window between reading the schedule_lock from the per-CPU data and the
  spin_lock() call. This does not help with sched_move_domain() changing
  v->processor between the calls to vcpu_schedule_lock() and
  vcpu_schedule_unlock().

  Fix the race by taking the schedule_lock for v->processor in
  sched_move_domain().

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>

  Use vcpu_schedule_lock_irq() (which now returns the lock) to properly
  retry the locking should the to-be-used lock have changed in the course
  of acquiring it (issue pointed out by George Dunlap). Add a comment
  explaining the state after the v->processor adjustment.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
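  A self-contained sketch of the retry pattern described (pthread mutexes
  stand in for Xen's spinlocks; names are modelled on, not copied from,
  the scheduler code):

      #include <pthread.h>

      typedef pthread_mutex_t spinlock_t;
      #define spin_lock   pthread_mutex_lock
      #define spin_unlock pthread_mutex_unlock

      struct sched_pcpu { spinlock_t *schedule_lock; };
      extern struct sched_pcpu schedule_data[];
      struct vcpu { volatile unsigned int processor; };

      static spinlock_t *vcpu_schedule_lock(struct vcpu *v)
      {
          for ( ;; )
          {
              spinlock_t *lock = schedule_data[v->processor].schedule_lock;

              spin_lock(lock);
              /* v->processor may have changed while we were spinning. */
              if ( lock == schedule_data[v->processor].schedule_lock )
                  return lock;          /* still the right lock: done  */
              spin_unlock(lock);        /* moved under us: retry       */
          }
      }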
* scheduler: adjust internal locking interface | Jan Beulich | 2013-10-14 | 5 files | -136/+125

  Make the locking functions return the lock pointers, so they can be
  passed to the unlocking functions (which in turn can check that the lock
  is still actually providing the intended protection, i.e. the parameters
  determining which lock is the right one didn't change).

  Further use proper spin lock primitives rather than open coded
  local_irq_...() constructs, so that interrupts can be re-enabled as
  appropriate while spinning.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: fix bug_line() | Jan Beulich | 2013-10-14 | 1 file | -2/+4

  Due to the packing into a bit field together with a relocated field, the
  computation can overflow when the relocated field ends up getting a
  negative value stored. Hence it isn't sufficient to correct the value by
  1 in this case, but we also need to mask the result to the width of the
  original bit field.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: check for canonical address before doing page walks | Jan Beulich | 2013-10-11 | 2 files | -1/+3

  ... as there doesn't really exist any valid mapping for them.

  Particularly in the case of do_page_walk() this also avoids returning
  non-NULL for such invalid input.

  Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
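  The canonical-address test itself is simple; a sketch for 48-bit virtual
  addresses, where bits 63..47 must all equal bit 47:

      /* Canonical iff sign-extending bit 47 reproduces the full address. */
      static inline int is_canonical_address(unsigned long addr)
      {
          return ((long)addr >> 47) == ((long)addr >> 63);
      }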
* x86: use {rd,wr}{fs,gs}base when available | Jan Beulich | 2013-10-11 | 7 files | -29/+79

  ... as these are intended to be faster than MSR reads/writes.

  In the case of emulate_privileged_op(), also use these in favor of the
  cached (but possibly stale) addresses from arch.pv_vcpu. This allows
  entirely removing the code that was the subject of XSA-67.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
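  A sketch of such accessors, assuming FSGSBASE-capable hardware (with
  CR4.FSGSBASE enabled) and an assembler that knows the instructions;
  otherwise the MSR path is still needed:

      #include <stdint.h>

      static inline uint64_t read_fs_base(void)
      {
          uint64_t base;

          asm volatile ( "rdfsbase %0" : "=r" (base) );
          return base;
      }

      static inline void write_gs_base(uint64_t base)
      {
          asm volatile ( "wrgsbase %0" :: "r" (base) );
      }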
* x86: add address validity check to guest_map_l1e() | Jan Beulich | 2013-10-11 | 1 file | -1/+2

  Just like for guest_get_eff_l1e(), this prevents internal Xen data that
  happens to be mapped with 1Gb pages from being accessed as page tables
  (and with the wrong memory attribute).

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: correct LDT checks | Jan Beulich | 2013-10-11 | 5 files | -26/+35

  - MMUEXT_SET_LDT should behave as similarly to the LLDT instruction as
    possible: fail only if the base address is non-canonical.
  - Instead, LDT descriptor accesses should fault if the descriptor address
    ends up being non-canonical (by ensuring this we at once avoid reading
    an entry from the mach-to-phys table and considering it a page table
    entry).
  - Fault propagation on using LDT selectors must distinguish #PF and #GP
    (the latter must be raised for a non-canonical descriptor address,
    which also applies to several other uses of propagate_page_fault(), and
    hence the problem is being fixed there).
  - map_ldt_shadow_page() should properly wrap addresses for 32-bit VMs.

  At once remove the odd invocation of map_ldt_shadow_page() from the
  MMUEXT_SET_LDT handler: there's nothing really telling us that the first
  LDT page is going to be preferred over others.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: check segment descriptor read result in 64-bit OUTS emulation | Matthew Daley | 2013-10-10 | 1 file | -4/+4

  When emulating such an operation from a 64-bit context (CS has long mode
  set), and the data segment is overridden to FS/GS, the result of reading
  the overridden segment's descriptor (read_descriptor) is not checked. If
  it fails, data_base is left uninitialized.

  This can lead to 8 bytes of Xen's stack being leaked to the guest
  (implicitly, i.e. via the address given in a #PF).

  Coverity-ID: 1055116

  This is CVE-2013-4368 / XSA-67.

  Signed-off-by: Matthew Daley <mattjd@gmail.com>

  Fix formatting.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
* xen/arm: Fixing clear_guest_offset macro | Jaeyong Yoo | 2013-10-10 | 1 file | -2/+3

  Fix the broken 'clear_guest_offset' macro on ARM.

  Signed-off-by: Jaeyong Yoo <jaeyong.yoo@samsung.com>
  Reviewed-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* xen/arm32: Call start_xen only on the boot CPU | Julien Grall | 2013-10-10 | 1 file | -1/+2

  The boot CPU can have a non-zero CPU ID. Xen needs to check the logical
  CPU ID (in r12) to know whether the CPU is the boot CPU.

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* hvm/viridian: Avoid printing page_to_mfn(NULL) on error paths | Andrew Cooper | 2013-10-09 | 2 files | -18/+14

  While working in the viridian code, I noticed that 4cb6c4f4941
  "x86/hvm: Use get_page_from_gfn() instead of get_gfn()/put_gfn."
  introduced two error paths where page_to_mfn(NULL) would be formatted and
  presented as a bad MFN. This put junk in the warning rather than
  something useful.

  These two code paths are fixed up to match their counterpart in
  wrmsr_hypervisor_regs().

  While auditing the other changes from 4cb6c4f4941, I noticed a small
  optimisation which could be made by changing the order of the validity
  checks, removing 6 NULL pointer checks.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/traps: improvements to {rd,wr}msr_hypervisor_regs() | Andrew Cooper | 2013-10-09 | 1 file | -26/+15

  Coverity ID: 1055249 1055250

  Coverity was complaining that the switch statements contained dead code
  in their default statements. While this is quite minor, the code flow in
  wrmsr_hypervisor_regs() was sufficiently opaque that I felt it
  appropriate to fix.

  Other improvements include:
  * not shadowing the function parameter 'idx';
  * use of PAGE_{SHIFT,SIZE} instead of opencoded numbers;
  * a more descriptive error message for attempting to write invalid
    indices for hypercall pages.

  There is no behavioural change as a result.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
* xen/x86: Remove GB macro in asm-x86/config.h | Julien Grall | 2013-10-08 | 1 file | -1/+0

  Commit 983843e "xen: Add macros MB and GB" introduced a generic GB macro.
  By mistake, the macro in asm-x86/config.h was not removed, resulting in a
  compilation error when Xen is built for x86.

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  CC: Keir Fraser <keir@xen.org>
  CC: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
* xen/dts: Support Linux initrd DT bindings | Julien Grall | 2013-10-08 | 1 file | -0/+25

  Linux uses the properties linux,initrd-start and linux,initrd-end to know
  where the initrd lives in memory.

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
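  For illustration, here is how a device tree could be populated with these
  properties using libfdt (a hedged sketch; Xen's own code may well do this
  differently):

      #include <libfdt.h>
      #include <stdint.h>

      /* Advertise an initrd to Linux via the /chosen node. */
      static int fdt_set_initrd(void *fdt, uint64_t start, uint64_t end)
      {
          int chosen = fdt_path_offset(fdt, "/chosen");

          if ( chosen < 0 )
              return chosen;               /* no /chosen node */
          if ( fdt_setprop_u64(fdt, chosen, "linux,initrd-start", start) < 0 )
              return -1;
          return fdt_setprop_u64(fdt, chosen, "linux,initrd-end", end);
      }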
* xen/arm: Add support to load initrd in dom0 | Julien Grall | 2013-10-08 | 3 files | -21/+102

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* xen/dts: Use ROUNDUP macro instead of the internal ALIGN | Julien Grall | 2013-10-08 | 1 file | -6/+4

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* xen: Add macro ROUNDUP | Julien Grall | 2013-10-08 | 1 file | -0/+2

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Keir Fraser <keir@xen.org>
  CC: Jan Beulich <jbeulich@suse.com>
* xen: Add macros MB and GB | Julien Grall | 2013-10-08 | 2 files | -1/+3

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Keir Fraser <keir@xen.org>
  CC: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
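  The likely shape of these helpers (illustrative reconstructions; check
  the commits for the exact definitions):

      /* Round x up to the next multiple of a (a must be a power of two). */
      #define ROUNDUP(x, a)  (((x) + (a) - 1) & ~((a) - 1))

      #define MB(_mb)  ((unsigned long long)(_mb) << 20)
      #define GB(_gb)  ((unsigned long long)(_gb) << 30)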
* x86/HPET: basic cleanup | Andrew Cooper | 2013-10-08 | 3 files | -16/+14

  * Strip trailing whitespace
  * Remove redundant definitions
  * Update stale documentation links
  * Move hpet_address into __initdata

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
* VT-d: fix suspected data race condition in iommu_set_root_entry() | Andrew Cooper | 2013-10-08 | 1 file | -16/+3

  Coverity ID: 1054967

  Coverity spotted that iommu->root_maddr was optionally allocated within
  the protection of the iommu->lock, but was referenced with the protection
  of the iommu->register_lock, and freed without any lock. Luckily, the
  code as-is is not vulnerable to the potential risks identified.

  However, the alloc_pgtable_maddr() call is far more appropriately done in
  iommu_alloc(), removing a set of spinlock calls and the possibility for
  the iommu setup to fail later than iommu_alloc() with -ENOMEM.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
* xen: add LZ4 decompression support | Kyungsik Lee | 2013-10-07 | 8 files | -2/+767

  Add support for LZ4 decompression in Xen. The LZ4 decompression APIs for
  Xen are based on the LZ4 implementation by Yann Collet.

  Benchmark results (PATCH v3), compiler: Linaro ARM gcc 4.6.2

  1. ARMv7, 1.5GHz based board (kernel: linux 3.4, uncompressed kernel
     size: 14MB)

          Compressed size   Decompression speed
     LZO  6.7MB             20.1MB/s, 25.2MB/s (UA)
     LZ4  7.3MB             29.1MB/s, 45.6MB/s (UA)

  2. ARMv7, 1.7GHz based board (kernel: linux 3.7, uncompressed kernel
     size: 14MB)

          Compressed size   Decompression speed
     LZO  6.0MB             34.1MB/s, 52.2MB/s (UA)
     LZ4  6.5MB             86.7MB/s

  (UA: unaligned memory access support; latest patch set for LZO applied.)

  This patch set adds support for an LZ4-compressed kernel. LZ4 is a very
  fast lossless compression algorithm and it also features an extremely
  fast decoder [1].

  But we already have five decompressors, and one question which does arise
  is where we stop adding new ones. This issue was discussed and a
  conclusion reached [2]: Russell King said that we should have
  - one decompressor which is the fastest,
  - one decompressor for the highest compression ratio,
  - one popular decompressor (e.g. conventional gzip),
  and if we have a replacement for one of these, then it should do exactly
  that: replace it.

  The benchmark shows an 8% increase in image size versus a 66% increase in
  decompression speed compared to LZO (which has been known as the fastest
  decompressor in the kernel). Therefore the "fast but may not be small"
  compression title has clearly been taken by LZ4 [3].

  [1] http://code.google.com/p/lz4/
  [2] http://thread.gmane.org/gmane.linux.kbuild.devel/9157
  [3] http://thread.gmane.org/gmane.linux.kbuild.devel/9347

  LZ4 homepage: http://fastcompression.blogspot.com/p/lz4.html
  LZ4 source repository: http://code.google.com/p/lz4/

  Signed-off-by: Kyungsik Lee <kyungsik.lee@lge.com>
  Signed-off-by: Yann Collet <yann.collet.73@gmail.com>
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: Improve information from domain_crash_synchronous | Andrew Cooper | 2013-10-04 | 5 files | -28/+53

  As it currently stands, the string "domain_crash_sync called from
  entry.S" is not helpful in identifying why the domain was crashed, and a
  debug build of Xen doesn't help the matter.

  This patch improves the information printed by pointing to where the
  crash decision was made.

  Specific improvements include:
  * Moving the ascii string "domain_crash_sync called from entry.S\n" away
    from some semi-hot code cache lines.
  * Moving the printk into C code (especially as this_cpu() is miserable to
    use in assembly code).
  * Undoing the previous confusing situation of having
    domain_crash_synchronous() as a macro in C code, yet a global symbol in
    assembly code.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
| | | | | | | | | | | | | | | | | | | | | As it currently stands, the string "domain_crash_sync called from entry.S" is not helpful at identifying why the domain was crashed, and a debug build of Xen doesn't help the matter This patch improves the information printed, by pointing to where the crash decision was made. Specific improvements include: * Moving the ascii string "domain_crash_sync called from entry.S\n" away from some semi-hot code cache lines. * Moving the printk into C code (especially as this_cpu() is miserable to use in assembly code) * Undo the previous confusing situation of having the domain_crash_synchronous() as a macro in C code, yet a global symbol in assembly code. Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Acked-by: Keir Fraser <keir@xen.org>