path: root/xen/common/xmalloc_tlsf.c
* xmalloc: make whole pages xfree() clear the order field (ab)used by xmalloc()
  Jan Beulich, 2013-09-09 (1 file, -0/+1)

  Not doing this was found to cause problems with sequences of allocation
  (multi-page), freeing, and then again allocation of the same page upon boot
  when interrupts are still disabled (causing the owner field to be non-zero,
  thus making the allocator attempt a TLB flush and, in its processing,
  triggering an assertion).

  Reported-by: Tomasz Wroblewski <tomasz.wroblewski@citrix.com>
  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Tested-by: Tomasz Wroblewski <tomasz.wroblewski@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>
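  A minimal sketch of the fix, using a stand-in struct in place of Xen's
  struct page_info (the field name and free routine are illustrative, not the
  hypervisor's actual code):

```c
#include <assert.h>

/* Stand-in for Xen's struct page_info; the whole-page xmalloc() path
 * (ab)uses the field underlying PFN_ORDER() to remember the allocation
 * size, and that storage overlaps state the page allocator inspects. */
struct page_info_sketch {
    unsigned long order;
};

/* Before this commit, the borrowed field was left holding a stale value on
 * free; the one-line fix is to reset it so a later allocation of the same
 * page does not misinterpret it. */
static void xfree_whole_pages_sketch(struct page_info_sketch *pg)
{
    pg->order = 0;   /* the single insertion this commit adds */
    /* ... then hand the page(s) back to the page allocator ... */
}
```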
* xmalloc: make close-to-PAGE_SIZE allocations more efficient
  Jan Beulich, 2013-02-19 (1 file, -28/+43)

  Rather than bumping their sizes to slightly above (a multiple of) PAGE_SIZE
  (in order to store tracking information), thus requiring a non-order-0
  allocation even when no more than a page is being requested, return the
  result of alloc_xenheap_pages() directly, and use the struct page_info
  field underlying PFN_ORDER() to store the actual size (needed for freeing
  the memory).

  This leverages the fact that sub-allocation of memory obtained from the
  page allocator can only ever result in non-page-aligned memory chunks
  (with the exception of zero-size allocations with sufficiently high
  alignment being requested, which is why zero-size allocations now get
  special cased).

  Use the new property to simplify allocation of the trap info array for PV
  guests on x86.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
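  The alignment property the commit relies on can be sketched as a simple
  predicate (a hypothetical helper, not the actual Xen code): pool
  sub-allocations sit past bookkeeping headers and so are never page-aligned,
  while whole-page allocations always are, letting xfree() dispatch on the
  pointer alone.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* A pointer handed out by the TLSF pool sits past a block header, so it
 * cannot be page-aligned; a page-aligned pointer must therefore have come
 * straight from alloc_xenheap_pages(). */
static int came_from_whole_pages(const void *p)
{
    return ((uintptr_t)p & (PAGE_SIZE - 1)) == 0;
}
```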
* x86: prevent call to xfree() in dump_irqs() while in an irq context
  Andrew Cooper, 2012-05-22 (1 file, -2/+2)

  Because of c/s 24707:96987c324a4f, dump_irqs() can now be called in an irq
  context when a bug condition is encountered. If this is the case, ignore
  the call to xsm_show_irq_ssid() and the subsequent call to xfree(). This
  prevents an assertion failure in xfree(), and should allow all the debug
  information to be dumped, before failing with a BUG() because of the
  underlying race condition we are attempting to reproduce.

  Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

  Rather than using the non-obvious conditional around an xfree() that would
  be passed NULL only in the inverse case (which could easily get removed by
  a future change on the basis that calling xfree(NULL) is benign), switch
  the order of checks in xfree() itself and only suppress the call to XSM
  that could potentially call xmalloc().

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
  Committed-by: Jan Beulich <jbeulich@suse.com>
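  The check reordering described in the second paragraph can be sketched as
  follows (standard-C stand-ins: in_irq() is modeled by a flag, and the real
  xfree() of course does far more):

```c
#include <assert.h>
#include <stdlib.h>

static int fake_in_irq;   /* stand-in for Xen's in_irq() */

/* After the change, the benign-NULL check comes first, so xfree(NULL) is
 * safe from any context; only a real free trips the irq-context assertion. */
static void xfree_sketch(void *p)
{
    if (p == NULL)
        return;               /* checked before the context assertion */
    assert(!fake_in_irq);     /* real frees remain forbidden in irq context */
    free(p);
}
```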
* xmalloc: return unused full pages on multi-page allocations
  Jan Beulich, 2011-10-13 (1 file, -3/+24)

  Certain (boot time) allocations are relatively large (particularly when
  building with high NR_CPUS), but can also happen to be pretty far away
  from a power-of-two size. Utilize the page allocator's capability (which
  Linux's lacks) of returning space from higher-order allocations in smaller
  pieces, and hand the unused parts back immediately.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
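  The amount of memory recovered can be sketched with a small helper
  (hypothetical name; Xen computes this inline): a request is served from an
  order-N allocation of 2^N pages, and every page past the rounded-up request
  is returned immediately.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096u

/* Number of trailing pages of a (1 << order)-page allocation that are not
 * covered by the request and can be handed straight back to the page
 * allocator, as this commit does. */
static unsigned int unused_tail_pages(size_t size, unsigned int order)
{
    size_t needed = (size + PAGE_SIZE - 1) / PAGE_SIZE;
    return (unsigned int)((1ul << order) - needed);
}
```

  For example, a 5-page request forces an order-3 (8-page) allocation, and
  the 3 trailing pages go back to the allocator at once.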
* introduce xzalloc() & Co
  Jan Beulich, 2011-10-04 (1 file, -0/+7)

  Rather than having to match a call to one of the xmalloc() flavors with a
  subsequent memset(), introduce a zeroing variant of each of those flavors.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
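  A sketch of such a zeroing wrapper, modeled with standard malloc in place
  of Xen's _xmalloc() (the name mirrors the commit's API, but this is
  illustrative code, not the hypervisor's):

```c
#include <stdlib.h>
#include <string.h>

/* Zeroing variant: callers get memory that is already cleared instead of
 * having to pair every allocation with a memset(). */
static void *xzalloc_bytes_sketch(size_t size)
{
    void *p = malloc(size);   /* Xen: the matching _xmalloc() flavor */
    if (p != NULL)
        memset(p, 0, size);
    return p;
}
```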
* xmalloc_tlsf: Fall back to xmalloc_whole_pages() if xmem_pool_alloc() fails.
  Keir Fraser, 2009-10-21 (1 file, -5/+5)

  This was happening for xmalloc request sizes between 3921 and 3951 bytes.
  The reason is that xmem_pool_alloc() may add extra padding to the
  requested size, making the total block size greater than a page. Rather
  than add yet more smarts about TLSF to _xmalloc(), we just dumbly attempt
  any request smaller than a page via xmem_pool_alloc() first, then fall
  back on xmalloc_whole_pages() if this fails.

  Based on bug diagnosis and initial patch by John Byrne <john.l.byrne@hp.com>
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
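  The resulting dispatch can be sketched as below (hypothetical stand-ins
  built on malloc/aligned_alloc; the header-size constant and failure mode
  only mimic the behaviour the commit message describes):

```c
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u

/* Stand-in pool: refuses once padding would push the block over a page. */
static void *pool_alloc_sketch(size_t size)
{
    return (size + 64 /* header + padding */ <= PAGE_SIZE) ? malloc(size)
                                                           : NULL;
}

/* Stand-in whole-page path: page-aligned, rounded up to whole pages. */
static void *whole_pages_sketch(size_t size)
{
    size_t rounded = ((size + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;
    return aligned_alloc(PAGE_SIZE, rounded);
}

/* Dumb-but-robust policy after the fix: try the pool for anything smaller
 * than a page, and fall back to whole pages whenever the pool refuses. */
static void *xmalloc_sketch(size_t size)
{
    void *p = NULL;
    if (size < PAGE_SIZE)
        p = pool_alloc_sketch(size);
    if (p == NULL)
        p = whole_pages_sketch(size);
    return p;
}
```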
* Transcendent memory ("tmem") for Xen.
  Keir Fraser, 2009-05-26 (1 file, -12/+23)

  Tmem, when called from a tmem-capable (paravirtualized) guest, makes use
  of otherwise unutilized ("fallow") memory to create and manage pools of
  pages that can be accessed from the guest either as "ephemeral" pages or
  as "persistent" pages. In either case, the pages are not directly
  addressable by the guest, only copied to and fro via the tmem interface.

  Ephemeral pages are a nice place for a guest to put recently evicted clean
  pages that it might need again; these pages can be reclaimed synchronously
  by Xen for other guests or other uses. Persistent pages are a nice place
  for a guest to put "swap" pages to avoid sending them to disk. These pages
  retain data as long as the guest lives, but count against the guest memory
  allocation.

  Tmem pages may optionally be compressed and, in certain cases, can be
  shared between guests. Tmem also handles concurrency nicely and provides
  limited QoS settings to combat malicious DoS attempts. Save/restore and
  live migration support is not yet provided.

  Tmem is primarily targeted for an x86 64-bit hypervisor. On a 32-bit x86
  hypervisor, it has limited functionality and testing due to limitations of
  the xen heap. Nearly all of tmem is architecture-independent; three
  routines remain to be ported to ia64 and it should work on that
  architecture too. It is also structured to be portable to non-Xen
  environments.

  Tmem defaults off (for now) and must be enabled with a "tmem" xen boot
  option (and does nothing unless a tmem-capable guest is running). The
  "tmem_compress" boot option enables compression, which takes about 10x
  more CPU but approximately doubles the number of pages that can be stored.

  Tmem can be controlled via several "xm" commands and many interesting tmem
  statistics can be obtained. A README and internal specification will
  follow, but lots of useful prose about tmem, as well as Linux patches, can
  be found at http://oss.oracle.com/projects/tmem .

  Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
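  The ephemeral-page semantics described above can be modeled with a toy
  copy-in/copy-out pool (all names, the slot count, and the eviction policy
  are invented for illustration; this is not the tmem API):

```c
#include <string.h>

#define TMEM_PAGE  4096
#define TMEM_SLOTS 8

/* Toy ephemeral pool: a put copies the page in (the guest never maps tmem
 * memory directly), and the pool may overwrite any slot at will, so a
 * later get can legitimately miss and the guest must cope with that. */
struct slot { int valid; unsigned long key; unsigned char data[TMEM_PAGE]; };
static struct slot pool[TMEM_SLOTS];

static void tmem_put_sketch(unsigned long key, const void *page)
{
    struct slot *s = &pool[key % TMEM_SLOTS];   /* may evict another page */
    s->valid = 1;
    s->key = key;
    memcpy(s->data, page, TMEM_PAGE);           /* copy in */
}

static int tmem_get_sketch(unsigned long key, void *page)
{
    struct slot *s = &pool[key % TMEM_SLOTS];
    if (!s->valid || s->key != key)
        return 0;                               /* page was reclaimed */
    memcpy(page, s->data, TMEM_PAGE);           /* copy out */
    return 1;
}
```

  A persistent pool would differ exactly in the eviction rule: puts must not
  be silently dropped, which is why persistent pages count against the
  guest's memory allocation.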
* Allow memflags to be specified to alloc_xenheap_pages().
  Keir Fraser, 2009-01-28 (1 file, -4/+4)

  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* xmalloc: Add pooled allocator interface.
  Keir Fraser, 2008-10-16 (1 file, -106/+50)

  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
* xmalloc: use tlsf algorithm
  Keir Fraser, 2008-10-16 (1 file, -0/+655)

  This patch replaces the Xen xmalloc engine with TLSF, an allocation engine
  that is both more space efficient and time-bounded, especially for
  allocation sizes between PAGE_SIZE/2 and PAGE_SIZE.

  The file xmalloc.c is deprecated but not yet deleted. A simple change in
  common/Makefile will switch back to the legacy xmalloc/xfree if needed for
  testing.

  Code adapted from Nitin Gupta's tlsf-kmod, rev 229, found here:
  http://code.google.com/p/compcache/source/browse/trunk/sub-projects/allocators/tlsf-kmod
  with description and performance details here:
  http://code.google.com/p/compcache/wiki/TLSFAllocator
  (new Xen code uses 4K=PAGE_SIZE for the region size)

  For detailed info on TLSF, see: http://rtportal.upv.es/rtmalloc/

  Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
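  The core of TLSF's time-bounded behaviour is its constant-time two-level
  size-to-freelist mapping, sketched below from the general TLSF design
  (constants and names are illustrative, not Xen's exact code; the sketch
  assumes size >= 2^SL_LOG2 and a GCC-style __builtin_clzll):

```c
#include <stddef.h>

#define SL_LOG2 5   /* 32 second-level subdivisions per power-of-two range */

/* First level = position of the size's most significant bit (its
 * power-of-two range); second level = the next SL_LOG2 bits, subdividing
 * that range linearly. Both indices are computed in O(1), with no search
 * loop, which is what bounds allocation time. */
static void tlsf_mapping_sketch(size_t size, unsigned *fl, unsigned *sl)
{
    *fl = 63u - (unsigned)__builtin_clzll(size);
    *sl = (unsigned)(size >> (*fl - SL_LOG2)) & ((1u << SL_LOG2) - 1);
}
```

  For instance, a 100-byte request falls in the 64..127 range (fl = 6), and
  within it lands in the 18th of 32 sub-ranges of width 2.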
This patch replaces the Xen xmalloc engine with tlsf, an allocation engine that is both more space efficient and time-bounded, especially for allocation sizes between PAGE_SIZE/2 and PAGE_SIZE. The file xmalloc.c is deprecated but not yet deleted. A simple changein common/Makefile will change back to the legacy xmalloc/xfree if needed for testing. Code adapted from Nitin Gupta's tlsf-kmod, rev 229, found here: http://code.google.com/p/compcache/source/browse/trunk/sub-projects/allocat= ors/tlsf-kmod with description and performance details here: http://code.google.com/p/compcache/wiki/TLSFAllocator (new Xen code uses 4K=3DPAGE_SIZE for the region size) For detailed info on tlsf, see: http://rtportal.upv.es/rtmalloc/ Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com> Signed-off-by: Keir Fraser <keir.fraser@citrix.com>