path: root/xen/common/trace.c
Commit message (Author, Age, Files, Lines)

* use SMP barrier in common code dealing with shared memory protocols (Ian Campbell, 2013-07-04, 1 file, -4/+4)

  Xen currently makes no strong distinction between the SMP barriers (smp_mb etc.) and the regular barriers (mb etc.). In Linux, from which we inherited these names by importing Linux code that uses them, the SMP barriers are intended to be sufficient for implementing shared-memory protocols between processors in an SMP system, while the standard barriers are useful for MMIO etc.

  On x86, with its stronger ordering model, there is not much practical difference here, but ARM has weaker barriers available which are suitable for use as SMP barriers. Therefore ensure that common code uses the SMP barriers when that is all that is required.

  On both ARM and x86 both types of barrier are currently identical, so there is no actual change. A future patch will change smp_mb to a weaker barrier on ARM.

  Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
  Acked-by: Keir Fraser <keir@xen.org>

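  As a rough illustration of the distinction (a minimal sketch, assuming Xen's smp_wmb()/smp_rmb() macros and a made-up ring structure; this is not code from the patch, and overflow checks are omitted): a producer/consumer protocol between CPUs only needs the SMP variants, while mb()/wmb() stay reserved for MMIO-style ordering.

      #define RING_SIZE 16

      struct demo_ring {
          uint32_t prod, cons;
          uint32_t data[RING_SIZE];
      };

      static void ring_put(struct demo_ring *r, uint32_t val)
      {
          r->data[r->prod % RING_SIZE] = val;
          smp_wmb();               /* publish the payload before the index */
          r->prod++;
      }

      static int ring_get(struct demo_ring *r, uint32_t *val)
      {
          if ( r->cons == r->prod )
              return 0;
          smp_rmb();               /* see the payload the index points at */
          *val = r->data[r->cons % RING_SIZE];
          r->cons++;
          return 1;
      }
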
* xen, libxc: rename xenctl_cpumap to xenctl_bitmap (Dario Faggioli, 2013-04-17, 1 file, -1/+1)

  More specifically:
  1. replace xenctl_cpumap with xenctl_bitmap;
  2. provide bitmap_to_xenctl_bitmap and the reverse;
  3. re-implement cpumask_to_xenctl_bitmap with bitmap_to_xenctl_bitmap and the reverse.

  Other than #3, no functional changes. The interface is only slightly affected. This is in preparation for introducing NUMA node-affinity maps.

  Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
  Acked-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
  Acked-by: Keir Fraser <keir@xen.org>

* xentrace: fix off-by-one in calculate_tbuf_size (Olaf Hering, 2013-03-04, 1 file, -1/+1)

  Commit "xentrace: reduce trace buffer size to something mfn_offset can reach" contains an off-by-one bug: max_mfn_offset needs to be reduced by exactly the value of t_info_first_offset.

  If the system has two cpus and the number of requested trace pages is very large, the final number of trace pages plus the offset will not fit into a short. As a result the variable offset in alloc_trace_bufs() will wrap while allocating buffers for the second cpu. Later share_xen_page_with_privileged_guests() will be called with a wrong page and the ASSERT in this function triggers. If the ASSERT is ignored by running a non-debug hypervisor, the asserts in xentrace itself trigger because "cons" is not aligned, since the very last trace page for the second cpu is a random mfn.

  Thanks to Jan for the quick analysis.

  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

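  The arithmetic behind the fix, as a hedged sketch (the helper name and plain values are illustrative, not the actual calculate_tbuf_size() code): offsets into t_info are u16, so the metadata words at the start must be subtracted exactly, or the last cpu's MFN list can wrap past 65535.

      static unsigned int max_pages_per_cpu(unsigned int cpus,
                                            unsigned int t_info_first_offset)
      {
          /* Last uint32_t slot reachable through a u16 offset. */
          unsigned int max_mfn_offset = 65535 - t_info_first_offset;

          /* Each cpu's list of per-page MFNs must fit below that limit. */
          return max_mfn_offset / cpus;
      }
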
* Fix emacs local variable block to use correct C style variable. (David Vrabel, 2013-02-21, 1 file, -1/+1)

  The emacs variable to set the C style from a local variable block is c-file-style, not c-set-style.

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>

* trace: trace hypercalls inside a multicall (David Vrabel, 2012-10-03, 1 file, -3/+3)

  Add a trace record for every hypercall inside a multicall. These use a new event ID (with a different sub-class) so they may be filtered out if only the calls into the hypervisor are of interest.

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Acked-by: George Dunlap <george.dunlap@citrix.com>
  Committed-by: Keir Fraser <keir@xen.org>

* trace: improve usefulness of hypercall trace record (David Vrabel, 2012-10-03, 1 file, -0/+52)

  Trace hypercalls using a more useful trace record format. The EIP field is removed (it was always somewhere in the hypercall page) and selected hypercall arguments are included (e.g., the number of calls in a multicall, the number of PTE updates in an mmu_update, etc.). 12 bits in the first extra word are used to indicate which arguments are present in the record and what size they are (32 or 64-bit).

  This is an incompatible record format, so a new event ID is used so tools can distinguish between the two formats.

  Signed-off-by: David Vrabel <david.vrabel@citrix.com>
  Acked-by: George Dunlap <george.dunlap@citrix.com>
  Committed-by: Keir Fraser <keir@xen.org>

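  One plausible reading of the 12-bit layout (a hypothetical sketch; the macro names and exact encoding are assumptions, not taken from the patch): two bits per argument for up to six hypercall arguments, each marked absent, 32-bit, or 64-bit.

      #define ARG_ABSENT 0u   /* argument not recorded            */
      #define ARG_32BIT  1u   /* argument stored as one u32 word  */
      #define ARG_64BIT  2u   /* argument stored as two u32 words */

      /* Set the 2-bit field for argument 'idx' (0..5) in the first extra word. */
      static uint32_t mark_arg(uint32_t first_word, unsigned int idx, uint32_t kind)
      {
          return first_word | (kind << (idx * 2));
      }
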
* xen: Fix failure paths for xentrace (George Dunlap, 2012-04-12, 1 file, -6/+9)

  Problems this addresses:
  * After the allocation of t_info fails, the path the code takes tries to free t_info. Jump past that part instead.
  * The failure code assumes that unused data is zero; but the structure is never initialized. Zero the structure before using it.
  * The t_info pages are shared with dom0 before we know that the whole operation will succeed, and not un-shared afterwards. Don't share the pages until we know the whole thing will succeed.

  Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
  Committed-by: Keir Fraser <keir@xen.org>

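  The general shape of the fix, as a sketch in plain C (stand-in allocators, not Xen's): initialise the bookkeeping before anything can fail, make each error label undo only what was actually set up, and leave the externally visible step for last.

      #include <stdlib.h>
      #include <string.h>

      struct tbuf_state {
          void *t_info;
          void *per_cpu_bufs;
      };

      static int alloc_bufs(struct tbuf_state *s, size_t info_sz, size_t buf_sz)
      {
          memset(s, 0, sizeof(*s));          /* failure paths see sane fields */

          s->t_info = calloc(1, info_sz);
          if ( s->t_info == NULL )
              goto out;                      /* nothing to free yet */

          s->per_cpu_bufs = calloc(1, buf_sz);
          if ( s->per_cpu_bufs == NULL )
              goto out_free_info;

          /* Only now would the t_info pages be shared with dom0. */
          return 0;

       out_free_info:
          free(s->t_info);
          s->t_info = NULL;
       out:
          return -1;
      }
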
* xen: allow global VIRQ handlers to be delegated to other domains (Daniel De Graaf, 2012-01-28, 1 file, -1/+1)

  This patch sends global VIRQs to a domain designated as the VIRQ handler instead of sending all global VIRQ events to dom0. This is required in order to run xenstored in a stubdom, because VIRQ_DOM_EXC must be sent to xenstored for domain destruction to work properly.

  This patch was inspired by the xenstored stubdomain patch series sent to xen-devel by Alex Zeffertt in 2009.

  Signed-off-by: Diego Ongaro <diego.ongaro@citrix.com>
  Signed-off-by: Alex Zeffertt <alex.zeffertt@eu.citrix.com>
  Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Acked-by: Ian Campbell <ian.campbell@citrix.com>
  Committed-by: Keir Fraser <keir@xen.org>

* eliminate cpu_test_xyz() (Jan Beulich, 2011-11-08, 1 file, -2/+2)

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>
  Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

* cpumask <=> xenctl_cpumap: allocate CPU masks and byte maps dynamically (Jan Beulich, 2011-10-21, 1 file, -1/+10)

  Generally there was a NR_CPUS-bits wide array in these functions and another (through a cpumask_t) on their callers' stacks, which may get a little large for big NR_CPUS. As the functions can fail anyway, do the allocation in there.

  For the x86/MCA case this required a little code restructuring: by using different CPU mask accessors it was possible to avoid allocating a mask in the broadcast case. Also, this was the only user that failed to check the return value of the conversion function (which could have led to undefined behavior).

  Also constify the input parameters of the two functions.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>

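  A sketch of the approach in plain C (stand-ins for Xen's allocators and the xenctl byte map, not the actual conversion routine): the conversion function allocates the byte map itself instead of expecting a NR_CPUS-sized array on the caller's stack, and reports allocation failure to the caller.

      #include <stdint.h>
      #include <stdlib.h>

      #define BITS_PER_LONG (8 * sizeof(unsigned long))

      static int bitmap_to_bytemap(const unsigned long *bits, unsigned int nbits,
                                   uint8_t **bytes_out, unsigned int *len_out)
      {
          unsigned int i, len = (nbits + 7) / 8;
          uint8_t *bytes = calloc(len, 1);   /* allocated here, not on the stack */

          if ( bytes == NULL )
              return -1;                     /* caller must check the result */

          for ( i = 0; i < nbits; i++ )
              if ( bits[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)) )
                  bytes[i / 8] |= 1u << (i % 8);

          *bytes_out = bytes;
          *len_out = len;
          return 0;
      }
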
* eliminate cpumask accessors referencing NR_CPUS (Jan Beulich, 2011-10-21, 1 file, -1/+2)

  ... in favor of using the new, nr_cpumask_bits-based ones.

  Signed-off-by: Jan Beulich <jbeulich@suse.com>
  Acked-by: Keir Fraser <keir@xen.org>

* xentrace: update __trace_var comment (Olaf Hering, 2011-07-19, 1 file, -5/+5)

  Signed-off-by: Olaf Hering <olaf@aepfle.de>

* xentrace: Allow tracing to be enabled at boot (George Dunlap, 2011-07-01, 1 file, -4/+16)

  Add a "tevt_mask" parameter to the xen command-line, allowing trace records to be gathered early in boot. They will be placed into the trace buffers, and read when the user runs "xentrace".

  Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

* tasklets: Switch a few tasklets to run in softirq context. (Keir Fraser, 2011-06-16, 1 file, -1/+2)

  There are a couple of others which may also be safe. I've converted only the most obvious one.

  Signed-off-by: Keir Fraser <keir@xen.org>

* xentrace: allocate non-contiguous per-cpu trace buffers (Olaf Hering, 2011-05-26, 1 file, -42/+50)

  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

* xentrace: update __insert_record() to copy the trace record to individual mfns (Olaf Hering, 2011-05-26, 1 file, -16/+55)

  Update __insert_record() to copy the trace record to individual mfns. This is a prereq before changing the per-cpu allocation from contiguous to non-contiguous allocation.

  v2:
    update offset calculation to use shift and mask
    update type of mfn_offset to match type of data source

  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

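  The shift-and-mask idea from the v2 note, as an illustrative sketch (assuming Xen's PAGE_SHIFT/PAGE_SIZE; the page-array layout and helper name here are hypothetical): a record that straddles a page boundary is copied in chunks, with the page index and the offset within it derived from the same byte offset.

      static void copy_record(uint8_t **pages, uint32_t byte_off,
                              const uint8_t *rec, uint32_t len)
      {
          while ( len != 0 )
          {
              uint32_t page  = byte_off >> PAGE_SHIFT;        /* which page     */
              uint32_t off   = byte_off & (PAGE_SIZE - 1);    /* where in it    */
              uint32_t chunk = PAGE_SIZE - off;               /* room remaining */

              if ( chunk > len )
                  chunk = len;
              memcpy(pages[page] + off, rec, chunk);
              byte_off += chunk;
              rec      += chunk;
              len      -= chunk;
          }
      }
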
* xentrace: fix type of offset to avoid out-of-bounds access (Olaf Hering, 2011-05-26, 1 file, -4/+4)

  Update the type of the local offset variable to match the type where this variable is stored. Also update the type of t_info_first_offset, because it also has a limited range.

  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

* xentrace: reduce trace buffer size to something mfn_offset can reach (Olaf Hering, 2011-05-26, 1 file, -0/+15)

  The start of the array which holds the list of mfns for each cpu's trace buffer is stored in an unsigned short. This limits the total number of pages for each cpu as the number of active cpus increases. Update the math in calculate_tbuf_size() to apply this rule also to the maximum number of trace pages. Without this change the index can overflow.

  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

* xentrace: Remove unneeded cast when assigning pointer value to dst (Olaf Hering, 2011-05-09, 1 file, -3/+3)

  Remove the unneeded cast when assigning the pointer value to dst. Both arrays are uint32_t, and memcpy takes a void pointer.

  Signed-off-by: Olaf Hering <olaf@aepfle.de>

* xentrace: Mark data_size __read_mostly because it's only written once (Olaf Hering, 2011-05-09, 1 file, -1/+1)

  Signed-off-by: Olaf Hering <olaf@aepfle.de>

* xentrace: Move the global variable t_info_first_offset into calculate_tbuf_size() (Olaf Hering, 2011-05-09, 1 file, -6/+6)

  Move the global variable t_info_first_offset into calculate_tbuf_size(), because it is only used there. Change the type from u32 to uint32_t to match the type in other places.

  Signed-off-by: Olaf Hering <olaf@aepfle.de>

* xentrace: correct overflow check for number of per-cpu trace pages (Olaf Hering, 2011-04-18, 1 file, -7/+15)

  The calculated number of per-cpu trace pages is stored in t_info and shared with tools like xentrace. Since it is a u16, the value may overflow, because the current check is based on u32. Using the u16 means each cpu could in theory use up to 256MB as trace buffer; however, such a large allocation will currently fail on x86 due to the MAX_ORDER limit.

  Check both the maximum theoretical number of pages per cpu and the maximum number of pages reachable by the struct t_buf->prod/cons variables against the requested number of pages.

  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

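  The two bounds described above, combined in a hedged sketch (an illustrative helper, not the code added by the patch): the per-cpu page count must stay reachable through the u16 offsets stored in t_info, and 2*data_size must remain expressible in the u32 t_buf->prod/cons counters.

      static unsigned int clamp_pages_per_cpu(unsigned int requested,
                                              unsigned int t_info_first_offset,
                                              unsigned int cpus)
      {
          unsigned int by_offset  = (65535 - t_info_first_offset) / cpus;
          unsigned int by_counter = 0x80000000u / PAGE_SIZE;  /* keep 2*data_size in u32 */
          unsigned int max = by_offset < by_counter ? by_offset : by_counter;

          return requested < max ? requested : max;
      }
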
* xentrace: Move register_cpu_notifier() call into boot-time init. (Keir Fraser, 2011-04-06, 1 file, -2/+2)

  We can't do it lazily from alloc_trace_bufs(), as that gets called later if tracing is enabled later by dom0.

  Signed-off-by: Keir Fraser <keir@xen.org>

* xentrace: remove unneeded debug printk (Olaf Hering, 2011-04-02, 1 file, -1/+0)

  The pointer value in case of an allocation failure is rather uninteresting.

  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

* xentrace: correct formula to calculate t_info_pages (Olaf Hering, 2011-04-02, 1 file, -4/+3)

  The current formula to calculate t_info_pages, based on the initial code, is slightly incorrect: it may allocate more than needed. Each cpu has some pages/mfns stored as uint32_t; that list is stored at an offset into t_info.

  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

* xentrace: use consistent printk prefix (Olaf Hering, 2011-03-25, 1 file, -14/+17)

  Signed-off-by: Olaf Hering <olaf@aepfle.de>

* xentrace: update comments (Olaf Hering, 2011-03-25, 1 file, -2/+1)

  Fix a typo, remove a redundant comment.

  Signed-off-by: Olaf Hering <olaf@aepfle.de>

* xentrace: remove gdprintk usage since they are not in guest context (Olaf Hering, 2011-03-25, 1 file, -3/+3)

  Signed-off-by: Olaf Hering <olaf@aepfle.de>

* xentrace: print calculated numbers in calculate_tbuf_size() (Olaf Hering, 2011-03-25, 1 file, -0/+2)

  Print the number of pages to allocate for the per-cpu trace buffer and metadata, to ease debugging when allocation fails.

  Signed-off-by: Olaf Hering <olaf@aepfle.de>

* xentrace: fix t_info_pages calculation. (Olaf Hering, 2011-03-25, 1 file, -5/+5)

  Signed-off-by: Olaf Hering <olaf@aepfle.de>

* xentrace: dynamic tracebuffer allocation (Olaf Hering, 2011-03-17, 1 file, -146/+105)

  Allocate trace buffers dynamically, based on the requested buffer size. Calculate t_info_size from the requested t_buf size. Fix the allocation failure path; free pages outside the spinlock. Remove casts for rawbuf; it can be a void pointer since no math is done on it.

  Signed-off-by: Olaf Hering <olaf@aepfle.de>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

* xentrace: Clean up initialisation. (Keir Fraser, 2010-12-16, 1 file, -58/+47)

  Allocate no memory and print no debug messages when disabled.

  Signed-off-by: Keir Fraser <keir@xen.org>

* xentrace: Fix buffer allocation to properly depend on T_INFO_PAGES (Keir Fraser, 2010-08-17, 1 file, -3/+3)

  Signed-off-by: Andre Przywara <andre.przywara@amd.com>

* trace: insert compiler memory barriers (Keir Fraser, 2010-07-05, 1 file, -18/+20)

  This is to ensure fields shared writably with Dom0 get read only once for any consistency checking followed by actual calculations. I realized there was another multiple-read issue, a fix for which is also included (which at once simplifies __insert_record()).

  Signed-off-by: Jan Beulich <jbeulich@novell.com>

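  The single-read pattern the patch is about, as a minimal sketch (names are illustrative; Xen's barrier() macro is assumed): the Dom0-writable field is loaded exactly once into a local, and the compiler barrier keeps later checks and arithmetic working on that one snapshot rather than re-reading shared memory.

      static uint32_t read_prod_once(const volatile uint32_t *shared_prod,
                                     uint32_t data_size)
      {
          uint32_t prod = *shared_prod;   /* one load of the shared field */

          barrier();                      /* no re-fetch after the check below */

          if ( prod >= 2 * data_size )    /* validate the local copy */
              prod = 0;

          return prod;
      }
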
* trace: fix security issues (Keir Fraser, 2010-07-02, 1 file, -52/+80)

  After getting a report of 3.2.3's xenmon crashing Xen (as it turned out, this was because c/s 17000 was backported to that tree without also applying c/s 17515), I figured that the hypervisor shouldn't rely on any specific state of the actual trace buffer (as it is shared writably with Dom0).

  [GWD: Volatile qualifiers have been taken out and moved to another patch]

  To make clear what purpose specific variables have and/or where they got loaded from, the patch also changes the type of some of them to be explicitly u32/s32, and removes pointless assertions (like checking an unsigned variable to be >= 0).

  I also took the prototype adjustment of __trace_var() as an opportunity to simplify the TRACE_xD() macros. Similar simplification could be done on the (quite numerous) direct callers of the function.

  Signed-off-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

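  In the same spirit, a hedged sketch of the kind of plausibility check that keeps a hostile or corrupted buffer from steering the hypervisor (illustrative only, not the actual checks): prod and cons are snapshotted as explicit u32 values and rejected if they describe an impossible window.

      static int window_valid(uint32_t prod, uint32_t cons, uint32_t data_size)
      {
          uint32_t unconsumed = prod - cons;   /* well defined modulo 2^32 */

          return (prod < 2 * data_size) &&
                 (cons < 2 * data_size) &&
                 (unconsumed <= data_size);
      }
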
* trace: adjust printk()s (Keir Fraser, 2010-07-02, 1 file, -7/+8)

  They should be at a lower log level or rate limited.

  Signed-off-by: Jan Beulich <jbeulich@novell.com>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

* trace: improve check_tbuf_size() (Keir Fraser, 2010-07-02, 1 file, -3/+11)

  It didn't consider the case of the incoming size not allowing for the 2*data_size range for t_buf->{prod,cons}.

  Signed-off-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

* trace: Fix T_INFO_FIRST_OFFSET calculation (Keir Fraser, 2010-07-02, 1 file, -4/+23)

  This wasn't defined correctly, thus allowing a corrupted MFN to be passed to Dom0 in the num_online_cpus() == NR_CPUS case.

  Reported-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

* trace: share t_info pages only in read-only mode (Keir Fraser, 2010-06-29, 1 file, -1/+1)

  There's no need to share the t_info pages writably (Dom0 only wants [and needs] to read them).

  Signed-off-by: Jan Beulich <jbeulich@novell.com>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

* xentrace: restrict trace buffer MFNs (Keir Fraser, 2010-06-28, 1 file, -1/+2)

  Since they're being passed to Dom0 using an array of uint32_t, they must be representable as 32-bit quantities, and hence the buffer allocation must specify an upper address boundary.

  Signed-off-by: Jan Beulich <jbeulich@novell.com>
  Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

* trace: Do not touch percpu data for "impossible" cpus. (Keir Fraser, 2010-05-14, 1 file, -3/+17)

  While here, in fact only touch per-cpu data for online cpus. Use the cpu notifier chain to initialise the per-cpu spinlock dynamically.

  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>

* domctl: Fix cpumap/cpumask conversion functions to return an error code. (Keir Fraser, 2010-05-12, 1 file, -1/+1)

  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>

* xentrace: fix bug in t_info size (Keir Fraser, 2010-05-10, 1 file, -1/+1)

  The t_info size should be in bytes, not pages. This fixes a bug that crashes the hypervisor if the total number of all pages is more than 1024 but less than 2048.

  Signed-off-by: George Dunlap <george.dunlap@citrix.com>

* Move tasklet implementation into its own source files. (Keir Fraser, 2010-04-19, 1 file, -1/+1)

  This is preparation for implementing tasklets in vcpu context rather than softirq context. There is no change to the implementation of tasklets in this patch.

  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>

* xentrace: Bounds checking and error handling (Keir Fraser, 2010-04-12, 1 file, -11/+54)

  Check tbuf_size to make sure that it will fit in the t_info struct allocated at boot. Also deal with allocation failures more gracefully.

  Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

* xentrace: Clear lost records when disabling tracing (Keir Fraser, 2010-02-03, 1 file, -0/+15)

  This patch clears the "lost records" flag on each cpu when tracing is disabled. Without this patch, the next time tracing starts, cpus with lost records will generate lost-record traces even though the buffers are empty and no tracing has recently happened.

  Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

* xentrace: Per-cpu xentrace buffers (Keir Fraser, 2010-01-20, 1 file, -26/+116)

  In the current xentrace configuration, xentrace buffers are all allocated in a single contiguous chunk and then divided among logical cpus, one buffer per cpu. The size of an allocatable chunk is fairly limited, in my experience about 128 pages (512KiB). As the number of logical cores increases, this means a much smaller maximum per-cpu trace buffer; on my dual-socket quad-core Nehalem box with hyperthreading (16 logical cpus), that comes to 8 pages per logical cpu.

  This patch addresses this issue by allocating per-cpu buffers separately.

  Signed-off-by: George Dunlap <dunlapg@umich.edu>

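  A sketch of the allocation change in plain C (stand-in allocator, not Xen's page allocator): one modest allocation per cpu sidesteps the order limit on a single contiguous chunk, so the per-cpu share no longer shrinks as the cpu count grows.

      #include <stdlib.h>

      static int alloc_per_cpu_bufs(void **bufs, unsigned int ncpus, size_t sz)
      {
          unsigned int i;

          for ( i = 0; i < ncpus; i++ )
          {
              bufs[i] = calloc(1, sz);       /* one buffer per cpu */
              if ( bufs[i] == NULL )
              {
                  while ( i-- )              /* unwind already-allocated cpus */
                  {
                      free(bufs[i]);
                      bufs[i] = NULL;
                  }
                  return -1;
              }
          }
          return 0;
      }
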
* Introduce and use a per-CPU read-mostly sub-section (Keir Fraser, 2009-07-13, 1 file, -2/+2)

  Since mixing data that only gets set up once and then (perhaps frequently) gets read by remote CPUs with data that the local CPU may modify (again, perhaps frequently) still causes undesirable cache-protocol-related bus traffic, separate the former class of objects from the latter.

  The objects converted here are just picked based on their write-once (or write-very-rarely) properties; perhaps some more adjustments may be desirable subsequently. The primary users of the new sub-section will result from the next patch.

  Signed-off-by: Jan Beulich <jbeulich@novell.com>

* x86: Fix event-channel access for 32-bit HVM guests. (Keir Fraser, 2009-03-03, 1 file, -2/+0)

  Based on a patch by Joe Jin <joe.jin@oracle.com>

  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>

* Allow memflags to be specified to alloc_xenheap_pages(). (Keir Fraser, 2009-01-28, 1 file, -1/+1)

  Signed-off-by: Keir Fraser <keir.fraser@citrix.com>