* evtchn: refactor low-level event channel port ops (David Vrabel, 2013-10-14; 7 files, -61/+189)

  Use functions for the low-level event channel port operations (set/clear pending, unmask, is_pending and is_masked). Group these functions into a struct evtchn_port_op so they can be replaced by alternate implementations (for different ABIs) on a per-domain basis.

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Reviewed-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
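  A minimal C sketch of the pattern this commit describes (illustrative only; the exact field names and signatures in Xen may differ):

      struct domain;
      struct evtchn;

      /* Per-ABI table of low-level port operations, selectable per domain. */
      struct evtchn_port_op {
          void (*set_pending)(struct domain *d, struct evtchn *evtchn);
          void (*clear_pending)(struct domain *d, struct evtchn *evtchn);
          void (*unmask)(struct domain *d, struct evtchn *evtchn);
          int (*is_pending)(const struct domain *d, const struct evtchn *evtchn);
          int (*is_masked)(const struct domain *d, const struct evtchn *evtchn);
      };

  A domain then carries a pointer to the table matching its ABI, and callers invoke port operations through it rather than touching shared-info bits directly.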
* debug: remove some event channel info from the 'i' and 'q' debug keys (David Vrabel, 2013-10-14; 2 files, -13/+3)

  The 'i' key would always use VCPU0's selector word when printing the event channel state. Remove the incorrect output, as a subsequent change will add the (correct) information to the 'e' key instead.

  When dumping domain information, printing the state of the VIRQ_DEBUG port is redundant -- this information is available via the 'e' key.

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Reviewed-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/HVM: cache emulated instruction for retry processing (Jan Beulich, 2013-10-14; 2 files, -14/+46)

  Rather than re-reading the instruction bytes upon retry processing, stash away and re-use what we already read. That way we can be certain that the retry won't do something different from what requested the retry, getting once again closer to real hardware behavior (where what we use retries for is simply a bus operation, not involving redundant decoding of instructions).

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
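  A hedged sketch of the stash-and-reuse idea (names and layout are illustrative, not Xen's actual code): the first pass fetches and stashes each byte, while a retry is served from the stash and therefore cannot diverge from the original decode:

      #include <stdint.h>

      struct insn_cache {
          uint8_t bytes[16];     /* stashed instruction bytes */
          unsigned int num;      /* number of valid bytes */
      };

      /* Return the byte at 'off', fetching (and stashing) it only if it
       * was not already read on an earlier pass. */
      static int insn_fetch_byte(struct insn_cache *c, unsigned int off,
                                 int (*read_guest)(unsigned int, uint8_t *),
                                 uint8_t *out)
      {
          if ( off >= sizeof(c->bytes) )
              return -1;
          if ( off >= c->num )                    /* first pass: fetch */
          {
              if ( read_guest(off, &c->bytes[off]) )
                  return -1;
              c->num = off + 1;
          }
          *out = c->bytes[off];                   /* retry: reuse stash */
          return 0;
      }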
* x86/HVM: properly deal with hvm_copy_*_guest_phys() errors (Jan Beulich, 2013-10-14; 2 files, -16/+8)

  In memory read/write handling, the default case should tell the caller that the operation cannot be handled rather than that it succeeded, so that when new HVMCOPY_* states get added, not handling them explicitly will not result in errors being ignored.

  In the task switch emulation code, stop handling some errors while ignoring others.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
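  The principle, as a hedged fragment (HVMCOPY_okay and X86EMUL_UNHANDLEABLE are the relevant Xen constants; the surrounding code is illustrative):

      switch ( hvm_copy_to_guest_phys(gpa, buf, size) )
      {
      case HVMCOPY_okay:
          break;
      default:
          /* Any state not handled above is reported as a failure
           * rather than silently treated as success. */
          return X86EMUL_UNHANDLEABLE;
      }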
* x86/HVM: don't ignore hvm_copy_to_guest_phys() errors during I/O intercept (Jan Beulich, 2013-10-14; 1 file, -13/+107)

  Building upon the extended retry logic, we can now also make sure not to ignore errors resulting from writing data back to guest memory.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/HVM: fix direct PCI port I/O emulation retry and error handling (Jan Beulich, 2013-10-14; 3 files, -18/+90)

  dpci_ioport_{read,write}() guest memory access failure handling should be modelled after process_portio_intercept()'s (and others'): upon encountering an error on other than the first iteration, the count successfully handled needs to be stored and X86EMUL_OKAY returned, in order for the generic instruction emulator to update register state correctly before reporting failure or retrying (both of which would only happen after re-invoking emulation).

  Further, we leverage (and slightly extend, due to the above mentioned need to return X86EMUL_OKAY) the "large MMIO" retry model.

  Note that there is still a special case not explicitly taken care of here: while the first retry on the last iteration of a "rep ins" correctly recovers the already read data, an eventual subsequent retry is being handled by the pre-existing mmio-large logic (through hvmemul_do_io() storing the [recovered] data [again], also taking into consideration that the emulator converts a single iteration "ins" to ->read_io() plus ->write()).

  Also fix an off-by-one in the mmio-large-read logic, and slightly simplify the copying of the data.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/HVM: properly handle backward string instruction emulation (Jan Beulich, 2013-10-14; 3 files, -44/+23)

  Multiplying a signed 32-bit quantity with an unsigned 32-bit quantity produces an unsigned 32-bit result, yet for emulation of backward string instructions we need the result sign-extended before getting added to the base address.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
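  A self-contained illustration of the integer-promotion pitfall (not Xen's code; the values are made up):

      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          int32_t dir = -1;              /* backward string op: negative step */
          uint32_t bytes_per_rep = 4;

          /* int32_t * uint32_t is computed as unsigned 32-bit, so widening
           * to 64 bits zero-extends: 0xfffffffc, not -4. */
          uint64_t bad = (uint64_t)(dir * bytes_per_rep);

          /* Fix: sign-extend the 32-bit product before widening. */
          int64_t good = (int64_t)(int32_t)(dir * bytes_per_rep);

          printf("bad = %#llx, good = %lld\n",
                 (unsigned long long)bad, (long long)good);
          return 0;
      }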
* sched: Correct function prototypes (Andrew Cooper, 2013-10-14; 1 file, -3/+3)

  struct vcpu pointers are traditionally named v rather than d.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/MSI: fix locking in pci_restore_msi_state() (Jan Beulich, 2013-10-14; 1 file, -1/+1)

  Right after the loop the lock is being dropped, so all loop exits should happen with the lock still held.

  Reported-by: Kristoffer Egefelt <kristoffer@itoc.dk>
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Tested-by: Kristoffer Egefelt <kristoffer@itoc.dk>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
* sched: fix race between sched_move_domain() and vcpu_wake() (David Vrabel, 2013-10-14; 1 file, -0/+11)

  From: David Vrabel <david.vrabel@citrix.com>

  sched_move_domain() changes v->processor for all the domain's VCPUs. If another domain, softirq etc. triggers a simultaneous call to vcpu_wake() (e.g., by setting an event channel as pending), then vcpu_wake() may lock one schedule lock and try to unlock another.

  vcpu_schedule_lock() attempts to handle this, but only does so for the window between reading the schedule_lock from the per-CPU data and the spin_lock() call. This does not help with sched_move_domain() changing v->processor between the calls to vcpu_schedule_lock() and vcpu_schedule_unlock().

  Fix the race by taking the schedule_lock for v->processor in sched_move_domain().

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>

  Use vcpu_schedule_lock_irq() (which now returns the lock) to properly retry the locking should the lock to be used have changed in the course of acquiring it (issue pointed out by George Dunlap). Add a comment explaining the state after the v->processor adjustment.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* scheduler: adjust internal locking interface (Jan Beulich, 2013-10-14; 5 files, -136/+125)

  Make the locking functions return the lock pointers, so they can be passed to the unlocking functions (which in turn can check that the lock is still actually providing the intended protection, i.e. the parameters determining which lock is the right one didn't change).

  Further, use proper spin lock primitives rather than open-coded local_irq_...() constructs, so that interrupts can be re-enabled as appropriate while spinning.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
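  A sketch of the resulting lock/recheck/retry pattern, which this commit and the race fix above both rely on (per_cpu_schedule_lock() is an illustrative name, and Xen-style spinlock_t and struct vcpu are assumed; see the actual vcpu_schedule_lock*() helpers for the real interface):

      static spinlock_t *sketch_vcpu_schedule_lock(struct vcpu *v)
      {
          spinlock_t *lock;

          for ( ; ; )
          {
              lock = per_cpu_schedule_lock(v->processor);
              spin_lock(lock);
              /* v->processor may have changed while we were spinning; if
               * so, the lock we hold no longer protects this vCPU. */
              if ( lock == per_cpu_schedule_lock(v->processor) )
                  return lock;        /* caller passes this to the unlock */
              spin_unlock(lock);
          }
      }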
* x86: fix bug_line() (Jan Beulich, 2013-10-14; 1 file, -2/+4)

  Due to the packing into a bit field together with a relocated field, the computation can overflow when the relocated field ends up getting a negative value stored. Hence it isn't sufficient to correct the value by 1 in this case; we also need to mask the result to the width of the original bit field.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* Revert "QEMU_TAG update" (Ian Jackson, 2013-10-11; 1 file, -3/+3)

  (My script edited the wrong xen.git branch.)

  This reverts commit 363cfda13a58eab51a4a85f30c7c740990b53c3a.
* QEMU_TAG update (Ian Jackson, 2013-10-11; 1 file, -3/+3)
* libxl: make libxl__poller_put tolerate p==NULL (Ian Jackson, 2013-10-11; 2 files, -4/+4)

  This is less fragile, and more in keeping with the usual style of initialising everything to 0 and freeing things unconditionally. Correspondingly, remove the tests at the call sites.

  Apropos of c1f3f174. No overall functional change.

  Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* x86: check for canonical address before doing page walks (Jan Beulich, 2013-10-11; 2 files, -1/+3)

  ... as there doesn't really exist any valid mapping for them. Particularly in the case of do_page_walk() this also avoids returning non-NULL for such invalid input.

  Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
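  For reference, the standard x86-64 canonical-address test (on 48-bit implementations, bits 63..47 must all equal bit 47); a compact C version:

      #include <stdbool.h>
      #include <stdint.h>

      static bool is_canonical_address(uint64_t addr)
      {
          /* Sign-extending from bit 47 must reproduce the address. */
          return ((int64_t)(addr << 16) >> 16) == (int64_t)addr;
      }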
* x86: use {rd,wr}{fs,gs}base when available (Jan Beulich, 2013-10-11; 7 files, -29/+79)

  ... as these are intended to be faster than MSR reads/writes.

  In the case of emulate_privileged_op(), also use these in favor of the cached (but possibly stale) addresses from arch.pv_vcpu. This allows entirely removing the code that was the subject of XSA-67.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
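  For illustration only (this is not Xen's code), GCC exposes these instructions as builtins; compile with -mfsgsbase on a CPU supporting FSGSBASE and an OS that has set CR4.FSGSBASE:

      #include <stdio.h>

      int main(void)
      {
          unsigned long base = __builtin_ia32_rdfsbase64();
          __builtin_ia32_wrfsbase64(base);      /* write it straight back */
          printf("fs base = %#lx\n", base);
          return 0;
      }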
* x86: add address validity check to guest_map_l1e() (Jan Beulich, 2013-10-11; 1 file, -1/+2)

  Just like for guest_get_eff_l1e(), this prevents accessing as page tables (and with the wrong memory attribute) internal data inside Xen that happens to be mapped with 1Gb pages.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: correct LDT checks (Jan Beulich, 2013-10-11; 5 files, -26/+35)

  - MMUEXT_SET_LDT should behave as similarly to the LLDT instruction as possible: fail only if the base address is non-canonical.
  - Instead, LDT descriptor accesses should fault if the descriptor address ends up being non-canonical (by ensuring this we at once avoid reading an entry from the mach-to-phys table and considering it a page table entry).
  - Fault propagation on using LDT selectors must distinguish #PF and #GP (the latter must be raised for a non-canonical descriptor address, which also applies to several other uses of propagate_page_fault(), and hence the problem is being fixed there).
  - map_ldt_shadow_page() should properly wrap addresses for 32-bit VMs.

  At once, remove the odd invocation of map_ldt_shadow_page() from the MMUEXT_SET_LDT handler: there's nothing really telling us that the first LDT page is going to be preferred over others.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* libxl: fix out-of-memory error handling in libxl_list_cpupool (Matthew Daley, 2013-10-10; 1 file, -0/+1)

  ... otherwise it will return freed memory. All the current users of this function already check for a NULL return, so use that.

  Coverity-ID: 1056194

  This is CVE-2013-4371 / XSA-70.

  Signed-off-by: Matthew Daley <mattjd@gmail.com>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
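  The general shape of the fix, as a hedged sketch (not libxl's exact code): a failed reallocation must surface as NULL rather than as a stale pointer:

      #include <stdlib.h>

      static void *grow_list(void *ptr, size_t new_size)
      {
          void *tmp = realloc(ptr, new_size);

          if ( !tmp )
          {
              free(ptr);        /* old block is still valid here */
              return NULL;      /* callers already check for NULL */
          }
          return tmp;
      }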
* tools/ocaml: fix erroneous free of cpumap in stub_xc_vcpu_getaffinity (Matthew Daley, 2013-10-10; 1 file, -2/+0)

  Not sure how it got there...

  Coverity-ID: 1056196

  This is CVE-2013-4370 / XSA-69.

  Signed-off-by: Matthew Daley <mattjd@gmail.com>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* libxl: fix vif rate parsing (Ian Jackson, 2013-10-10; 2 files, -6/+17)

  strtok can return NULL here. We don't need to use strtok anyway, so just use a simple strchr method.

  Coverity-ID: 1055642

  This is CVE-2013-4369 / XSA-68.

  Signed-off-by: Matthew Daley <mattjd@gmail.com>

  Fix type. Add test case.

  Signed-off-by: Ian Campbell <Ian.campbell@citrix.com>
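  A self-contained sketch of the strchr() approach ("RATE@INTERVAL" matches libxl's vif rate syntax; the parsing code itself is illustrative, not libxl's):

      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          const char *arg = "10Mb/s@50ms";
          const char *sep = strchr(arg, '@');  /* no strtok(), no NULL trap */

          if ( sep )
              printf("rate=%.*s interval=%s\n",
                     (int)(sep - arg), arg, sep + 1);
          else
              printf("rate=%s (default interval)\n", arg);
          return 0;
      }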
* x86: check segment descriptor read result in 64-bit OUTS emulation (Matthew Daley, 2013-10-10; 1 file, -4/+4)

  When emulating such an operation from a 64-bit context (CS has long mode set), and the data segment is overridden to FS/GS, the result of reading the overridden segment's descriptor (read_descriptor) is not checked. If it fails, data_base is left uninitialized.

  This can lead to 8 bytes of Xen's stack being leaked to the guest (implicitly, i.e. via the address given in a #PF).

  Coverity-ID: 1055116

  This is CVE-2013-4368 / XSA-67.

  Signed-off-by: Matthew Daley <mattjd@gmail.com>

  Fix formatting.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
* xen/arm: Fix clear_guest_offset macro (Jaeyong Yoo, 2013-10-10; 1 file, -2/+3)

  Fix the broken macro 'clear_guest_offset' in arm.

  Signed-off-by: Jaeyong Yoo <jaeyong.yoo@samsung.com>
  Reviewed-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* Merge branch 'staging' of ssh://xenbits.xen.org/home/xen/git/xen into staging (Ian Campbell, 2013-10-10; 0 files, -0/+0)
* xen/arm32: Call start_xen only on the boot CPU (Julien Grall, 2013-10-10; 1 file, -1/+2) [merged via the branch above]

  The boot CPU can have a CPU ID not equal to zero. Xen needs to check the logical CPU ID (in r12) to know if the CPU is the boot one.

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* libxl: introduce libxl_node_to_cpumap (Dario Faggioli, 2013-10-10; 2 files, -0/+25)

  A helper for the special case (of libxl_nodemap_to_cpumap) when one wants the cpumap for just one node.

  Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
  Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
* xl: fix a typo in main_vcpulist() (Dario Faggioli, 2013-10-10; 1 file, -1/+1)

  ... which was preventing `xl vcpu-list -h' from working.

  Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* xl: update the manpage about "cpus=" and NUMA node-affinity (Dario Faggioli, 2013-10-10; 1 file, -7/+15)

  Since d06b1bf169a01a9c7b0947d7825e58cb455a0ba5 ('libxl: automatic placement deals with node-affinity') it is no longer true that, if no "cpus=" option is specified, xl picks up some pCPUs by default and pins the domain there. In fact, it is the NUMA node-affinity that is affected by automatic placement, not vCPU to pCPU pinning.

  Update the xl config file documentation accordingly, as it seems to have been forgotten at that time.

  Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
  Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
* tools/migrate: Fix regression when migrating from older version of Xen (Andrew Cooper, 2013-10-10; 12 files, -16/+67)

  Commit 00a4b65f8534c9e6521eab2e6ce796ae36037774 of Sep 7 2010, "libxc: provide notification of final checkpoint to restore end", broke migration from any version of Xen using tools from prior to that commit.

  Older tools have no idea about an XC_SAVE_ID_LAST_CHECKPOINT, causing newer tools' xc_domain_restore() to start reading the qemu save record, as ctx->last_checkpoint is 0. The failure looks like:

      xc: error: Max batch size exceeded (1970103633). Giving up.

  where 1970103633 = 0x756d6551 = *(uint32_t*)"Qemu"

  With this fix in place, the behaviour for normal migrations is reverted to how it was before the regression; the migration is considered non-checkpointed right from the start. An XC_SAVE_ID_LAST_CHECKPOINT chunk seen in the migration stream is a nop. For checkpointed migrations the behaviour is unchanged.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
  CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
  Acked-by: Shriram Rajagopalan <rshriram@cs.ubc.ca> (Remus bits)
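  The magic number in the quoted error is easy to verify (little-endian host assumed):

      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>

      int main(void)
      {
          uint32_t batch;

          memcpy(&batch, "Qemu", 4);   /* first bytes of the qemu record */
          printf("%u = %#x\n", batch, batch);  /* 1970103633 = 0x756d6551 */
          return 0;
      }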
* tools: add tracing to qemu-xen debug configure options (Fabio Fantoni, 2013-10-10; 1 file, -1/+1)

  When building tools in debug mode (debug=y), also pass --enable-trace-backend=stderr when configuring qemu-xen. Useful for debugging.

  Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
  Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
* xen/arm32: Call start_xen only on the boot CPU (Julien Grall, 2013-10-10; 1 file, -1/+2)

  The boot CPU can have a CPU ID not equal to zero. Xen needs to check the logical CPU ID (in r12) to know if the CPU is the boot one.

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* qemu-xen: Set localstatedir to /var. (Anthony PERARD, 2013-10-10; 1 file, -0/+1)

  This path is used by the QEMU build system to create the /run directory. If local-state-dir is not set, the result becomes $prefix/var, which is not an acceptable path.

  Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* qemu-xen: Disable build of guest-agent. (Anthony PERARD, 2013-10-10; 1 file, -0/+1)

  It is not used when QEMU is run with Xen.

  Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* hvm/viridian: Avoid printing page_to_mfn(NULL) on error paths (Andrew Cooper, 2013-10-09; 2 files, -18/+14)

  While working in the viridian code, I noticed that 4cb6c4f4941 "x86/hvm: Use get_page_from_gfn() instead of get_gfn()/put_gfn." introduced two error paths where page_to_mfn(NULL) would be formatted and presented as a bad MFN. This provides junk in the warning rather than something useful. These two codepaths are fixed up to match their counterpart in wrmsr_hypervisor_regs().

  While auditing the other changes from 4cb6c4f4941, I noticed a small optimisation which could be made by changing the order of the validity checks to remove 6 NULL pointer checks.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/traps: improvements to {rd,wr}msr_hypervisor_regs() (Andrew Cooper, 2013-10-09; 1 file, -26/+15)

  Coverity ID: 1055249 1055250

  Coverity was complaining that the switch statements contained dead code in their default statements. While this is quite minor, the code flow in wrmsr_hypervisor_regs() was sufficiently opaque that I felt it appropriate to fix.

  Other improvements include:
  * not shadowing the function parameter 'idx';
  * use of PAGE_{SHIFT,SIZE} instead of open-coded numbers;
  * a more descriptive error message for attempting to write invalid indices for hypercall pages.

  There is no behavioural change as a result.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
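  A small illustration of the PAGE_{SHIFT,SIZE} usage mentioned above, splitting an MSR value into a guest frame number and a page index (the value is made up; this is a sketch, not Xen's code):

      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_SHIFT 12
      #define PAGE_SIZE  (1UL << PAGE_SHIFT)

      int main(void)
      {
          uint64_t msr_content = 0x123456789002ULL;
          uint64_t gmfn = msr_content >> PAGE_SHIFT;         /* frame number */
          unsigned int idx = msr_content & (PAGE_SIZE - 1);  /* page index */

          printf("gmfn=%#llx idx=%u\n", (unsigned long long)gmfn, idx);
          return 0;
      }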
* xen/x86: Remove GB macro in asm-x86/config.h (Julien Grall, 2013-10-08; 1 file, -1/+0)

  Commit 983843e "xen: Add macros MB and GB" introduced a generic GB macro. By mistake, the macro in asm-x86/config.h was not removed. This results in a compilation error when Xen is built for x86.

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  CC: Keir Fraser <keir@xen.org>
  CC: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
* xen/dts: Support Linux initrd DT bindings (Julien Grall, 2013-10-08; 1 file, -0/+25)

  Linux uses the properties linux,initrd-start and linux,initrd-end to know where the initrd lives in memory.

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
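  For illustration, these standard properties live under the /chosen node of the device tree (the addresses here are example values only):

      chosen {
          linux,initrd-start = <0x88000000>;
          linux,initrd-end = <0x88400000>;
      };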
* xen/arm: Add support to load initrd in dom0 (Julien Grall, 2013-10-08; 3 files, -21/+102)

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* xen/dts: Use ROUNDUP macro instead of the internal ALIGN (Julien Grall, 2013-10-08; 1 file, -6/+4)

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* xen: Add macro ROUNDUP (Julien Grall, 2013-10-08; 1 file, -0/+2)

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Keir Fraser <keir@xen.org>
  CC: Jan Beulich <jbeulich@suse.com>
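  A plausible definition matching the commit's intent, valid for power-of-two alignments (the exact spelling in Xen's headers may differ):

      #define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))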
* xen: Add macros MB and GB (Julien Grall, 2013-10-08; 2 files, -1/+3)

  Signed-off-by: Julien Grall <julien.grall@linaro.org>
  Acked-by: Keir Fraser <keir@xen.org>
  CC: Jan Beulich <jbeulich@suse.com>
  Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
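  The likely shape of these macros (hedged; shown with plain casts rather than any assembler-compatible constant wrapper Xen's headers may use):

      #define MB(_mb) ((unsigned long long)(_mb) << 20)
      #define GB(_gb) ((unsigned long long)(_gb) << 30)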
* x86/HPET: basic cleanup (Andrew Cooper, 2013-10-08; 3 files, -16/+14)

  * Strip trailing whitespace
  * Remove redundant definitions
  * Update stale documentation links
  * Move hpet_address into __initdata

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
* VT-d: fix suspected data race condition in iommu_set_root_entry() (Andrew Cooper, 2013-10-08; 1 file, -16/+3)

  Coverity ID: 1054967

  Coverity spotted that iommu->root_maddr was optionally allocated within the protection of the iommu->lock, but was referenced with the protection of the iommu->register_lock, and freed without any lock.

  Luckily, the code as-is is not vulnerable to the potential risks identified. However, the alloc_pgtable_maddr() call is far more appropriately done in iommu_alloc(), removing a set of spinlock calls, and a possibility for the iommu setup to fail later than iommu_alloc() with an -ENOMEM.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
* libxc: add LZ4 decompression support (Jan Beulich, 2013-10-07; 4 files, -1/+157)

  Since there's no shared or static library to link against, this simply re-uses the hypervisor-side code. However, I only audited the code added here for possible security issues, not the referenced code in the hypervisor tree.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
* xen: add LZ4 decompression support (Kyungsik Lee, 2013-10-07; 8 files, -2/+767)

  Add support for LZ4 decompression in Xen. The LZ4 decompression APIs for Xen are based on the LZ4 implementation by Yann Collet.

  Benchmark results (PATCH v3). Compiler: Linaro ARM gcc 4.6.2.

  1. ARMv7, 1.5GHz based board; kernel: linux 3.4; uncompressed kernel size: 14MB

            Compressed Size   Decompression Speed
       LZO  6.7MB             20.1MB/s, 25.2MB/s(UA)
       LZ4  7.3MB             29.1MB/s, 45.6MB/s(UA)

  2. ARMv7, 1.7GHz based board; kernel: linux 3.7; uncompressed kernel size: 14MB

            Compressed Size   Decompression Speed
       LZO  6.0MB             34.1MB/s, 52.2MB/s(UA)
       LZ4  6.5MB             86.7MB/s

  - UA: Unaligned memory Access support
  - Latest patch set for LZO applied

  This patch set adds support for an LZ4-compressed kernel. LZ4 is a very fast lossless compression algorithm and it also features an extremely fast decoder [1].

  But we already have five decompressors, and one question which does arise is where do we stop adding new ones? This issue had been discussed and came to a conclusion [2]. Russell King said that we should have:

  - one decompressor which is the fastest
  - one decompressor for the highest compression ratio
  - one popular decompressor (e.g. conventional gzip)

  If we have a replacement for one of these, then it should do exactly that: replace it.

  The benchmark shows an 8% increase in image size vs a 66% increase in decompression speed compared to LZO (which has been known as the fastest decompressor in the kernel). Therefore the "fast but may not be small" compression title has clearly been taken by LZ4 [3].

  [1] http://code.google.com/p/lz4/
  [2] http://thread.gmane.org/gmane.linux.kbuild.devel/9157
  [3] http://thread.gmane.org/gmane.linux.kbuild.devel/9347

  LZ4 homepage: http://fastcompression.blogspot.com/p/lz4.html
  LZ4 source repository: http://code.google.com/p/lz4/

  Signed-off-by: Kyungsik Lee <kyungsik.lee@lge.com>
  Signed-off-by: Yann Collet <yann.collet.73@gmail.com>
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: Improve information from domain_crash_synchronous (Andrew Cooper, 2013-10-04; 5 files, -28/+53)

  As it currently stands, the string "domain_crash_sync called from entry.S" is not helpful at identifying why the domain was crashed, and a debug build of Xen doesn't help the matter.

  This patch improves the information printed by pointing to where the crash decision was made. Specific improvements include:

  * Moving the ascii string "domain_crash_sync called from entry.S\n" away from some semi-hot code cache lines.
  * Moving the printk into C code (especially as this_cpu() is miserable to use in assembly code).
  * Undoing the previous confusing situation of having domain_crash_synchronous() as a macro in C code, yet a global symbol in assembly code.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86/traps: Record last extable faulting address (Andrew Cooper, 2013-10-04; 1 file, -0/+5)

  ... so the following patch can identify the location of faults leading to a decision to crash a domain.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
* x86: allow HVM guests to make console_io hypercall (Konrad Rzeszutek Wilk, 2013-10-04; 1 file, -0/+2)

  The console_io hypercall is provided for PV guests, while for HVM guests console output is done via the 0xe9 port. However, the PV hypercall is more efficient, as it takes a string rather than one character per write.

  Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
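  The contrast, as a hedged guest-side fragment (HYPERVISOR_console_io and CONSOLEIO_write are the Linux guest interface names; the loop is illustrative):

      /* Port 0xe9 path: one VM exit per character. */
      for ( i = 0; i < len; i++ )
          outb(msg[i], 0xe9);

      /* console_io path: the whole string in a single hypercall. */
      HYPERVISOR_console_io(CONSOLEIO_write, len, msg);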
* xsm: clean up unneeded current references (Daniel De Graaf, 2013-10-04; 1 file, -2/+2)

  Some XSM hooks in dummy.h used current->domain when this was also passed as a parameter; use the parameter in these cases. There are two hooks where this does not apply and which are not immediately obvious: xsm_set_target's parameters are the device model and HVM domains, and xsm_mem_sharing_op's first parameter is the source of the shared page, not the domain making the hypercall.

  Reported-by: Jan Beulich <jbeulich@suse.com>
  Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>