path: root/package/kernel/mac80211/patches
Commit message | Author | Age | Files | Lines
...
* mac80211: Update to version 4.19.7-1 (Hauke Mehrtens, 2018-12-13, 21 files, -233/+49)
  This updates the backports package used in mac80211 to version 4.19.7-1, which is based on kernel 4.19.7. This integrates all the stable fixes introduced in this kernel version. The deleted patches are no longer needed because they are included either in the upstream Linux kernel 4.19.7 or in backports 4.19.7-1.
  Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
* ath9k: register GPIO chip for OF targets (Mathias Kresin, 2018-12-12, 2 files, -10/+19)
  This partially reverts commit ccab68f2d399. Registering the GPIO chip without a parent device completely breaks the ath9k GPIOs for device tree targets. As long as boards using the devicetree don't have the gpio-controller property set for the ath9k node, unloading the driver works as expected. Register the GPIO chip with the ath9k device as parent only for OF targets, as a trade-off between the needs of driver developers and the broken LEDs and buttons seen by users.
  Signed-off-by: Mathias Kresin <dev@kresin.me>
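  For illustration, a minimal sketch of the trade-off described above, assuming ath9k-style names (ath_softc, sc->dev); this is a sketch of the approach, not the literal patch:

      #include <linux/gpio/driver.h>
      #include <linux/of.h>

      /* Sketch (assumed names): a parent device makes gpiolib pin the
       * module (blocking unload), but without one, OF-based GPIO lookups
       * for LEDs/buttons cannot find the chip. Set the parent only when
       * the ath9k node actually comes from the devicetree. */
      static int ath9k_register_gpio_chip(struct ath_softc *sc,
                                          struct gpio_chip *gc)
      {
              gc->parent = sc->dev->of_node ? sc->dev : NULL;
              return gpiochip_add_data(gc, sc);
      }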
* mac80211: fix brcmfmac on brcm2708 (Stijn Tintel, 2018-12-04, 1 file, -0/+76)
  An upstream change broke brcmfmac when loaded with modparam roamoff=1. As we are carrying a patch that enables roamoff by default on the brcm2708 target to improve stability, wireless is currently broken there. Add a patch to fix brcmfmac with roamoff=1.
  Signed-off-by: Stijn Tintel <stijn@linux-ipv6.be>
* mac80211: fix reordering of buffered broadcast packets (Felix Fietkau, 2018-11-28, 1 file, -0/+28)
  Signed-off-by: Felix Fietkau <nbd@nbd.name>
* mac80211: fix spurious disconnections with powersave clients (Felix Fietkau, 2018-11-13, 1 file, -0/+26)
  Affects all drivers using ieee80211_tx_status_noskb, e.g. ath9k and mt76.
  Signed-off-by: Felix Fietkau <nbd@nbd.name>
* mac80211: brcmfmac: backport the last accepted 4.21 changes (Rafał Miłecki, 2018-11-07, 2 files, -0/+117)
  It's a typo fix and a patch that helps with debugging possible WARN-ings.
  Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
* mac80211: brcmfmac: backport NVRAM loading improvements (Rafał Miłecki, 2018-11-07, 8 files, -8/+611)
  This adds support for storing board-specific NVRAM files as firmware.
  Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
* mac80211: brcmfmac: backport firmware loading cleanup (Rafał Miłecki, 2018-11-07, 3 files, -12/+244)
  Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
* mac80211: brcmfmac: backport the latest 4.20 changes (Rafał Miłecki, 2018-11-07, 4 files, -0/+244)
  Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
* mac80211: brcmfmac: rename 4.20 backport patches (Rafał Miłecki, 2018-11-07, 4 files, -0/+0)
  Include the kernel version to help tracking changes.
  Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
* ath9k: fix dynack in IBSS mode (Koen Vandeputte, 2018-11-06, 5 files, -0/+309)
  Currently, dynack was only tested upstream using AP/STA mode. Testing it on IBSS showed that late-ack detection was broken. This is caused by dynack using Association Request/Response frames for late-ack detection, which IBSS does not use. Also allowing Authentication frames here solves this.
  A second issue also got fixed, which was also seen in AP/STA mode: when a station was added, the estimated value would be exponentially averaged using 0 as a starting point. This means that on larger distances, the ack timeout was still not high enough before synchronizing would run out of late-acks for estimation. Fix this by using the initial estimated value as a baseline and only start averaging in the following estimation rounds.
  Test setup:
  - 2x identical devices: RB912UAG-5HPnD + 19dB sector
  - IBSS
  - 2x2 802.11an (ar9340), HT20, long GI
  - RSSIs: -70 / -71
  - Real distance: 23910 meters
  Results (60s iperf runs):
  - Fixed coverage class 54 (up to 24300m): 21.5 Mbits/sec
  - Dynack: 28.9 Mbits/sec
  Signed-off-by: Koen Vandeputte <koen.vandeputte@ncentric.com>
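  A small self-contained sketch of the second fix (hypothetical names, not the ath9k code): seed the moving average with the first measured ack timeout instead of averaging against 0:

      #include <stdbool.h>

      struct dynack_est {
              bool primed;
              unsigned int ackto;     /* estimated ack timeout, microseconds */
      };

      static void dynack_update(struct dynack_est *da, unsigned int sample)
      {
              if (!da->primed) {
                      /* the fix: the first estimate becomes the baseline, so
                       * a long link starts near the real timeout immediately */
                      da->ackto = sample;
                      da->primed = true;
                      return;
              }
              /* later rounds: exponential averaging as before */
              da->ackto = (da->ackto * 7 + sample) / 8;
      }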
* mac80211: fix A-MSDU packet handling with TCP retransmission (Felix Fietkau, 2018-10-11, 2 files, -1/+32)
  Improves local TCP throughput and fixes use-after-free bugs that could lead to crashes.
  Signed-off-by: Felix Fietkau <nbd@nbd.name>
* mac80211: fix management frame protection issue with mt76 (and possibly other drivers) (Felix Fietkau, 2018-09-29, 1 file, -0/+25)
  Software crypto wasn't working for management frames because the flag indicating management frame crypto was missing.
  Signed-off-by: Felix Fietkau <nbd@nbd.name>
* mac80211: fix ipw200 build with kernel < 4.10 (Hauke Mehrtens, 2018-09-28, 1 file, -0/+34)
  The __change_mtu() function is only compiled when CPTCFG_IPW2200_PROMISCUOUS is set; move it to the general area.
  Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
* mac80211: Use backports-4.19-rc5-1.tar.xz (Hauke Mehrtens, 2018-09-27, 6 files, -10/+345)
  This is an official release with some minor changes compared to the unofficial 4.19-rc4-1 we used before:
  - added bcma and ssb again, which are removed in OpenWrt
  - fix to build with kernel 4.19
  - other minor fixes not relevant for OpenWrt
  Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
* mac80211: fix compile warning in 986-rt2x00-add-TX-LOFT-calibration.patch (Hauke Mehrtens, 2018-09-26, 1 file, -1/+1)
  Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
* mac80211: Add patches which were added later (Hauke Mehrtens, 2018-09-26, 18 files, -677/+1)
  These patches were added after the new patches structure for the mac80211 package was created. All the deleted patches are already integrated in kernel 4.19-rc4.
  Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
* mac80211: update to version based on 4.19-rc4 (Hauke Mehrtens, 2018-09-26, 104 files, -1086/+397)
  This updates mac80211 to backports based on kernel 4.19-rc4. I plan to integrate all the patches which are in this tar into upstream backports soon. I used the backports generated from this code: https://github.com/hauke/backports/commits/wip2
  Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
* mac80211: rt2x00: add experimental patches from Stanislaw Gruszka (Daniel Golle, 2018-09-26, 5 files, -0/+1067)
  Signed-off-by: Daniel Golle <daniel@makrotopia.org>
* mac80211: rt2x00: remove obsolete patch (Daniel Golle, 2018-09-26, 1 file, -136/+0)
  According to Stanislaw Gruszka, the patch 600-23-rt2x00-rt2800mmio-add-a-workaround-for-spurious-TX_F.patch should be dropped.
  Signed-off-by: Daniel Golle <daniel@makrotopia.org>
* mac80211: rt2x00: add TX LOFT calibration (Tomislav Požega, 2018-09-26, 1 file, -0/+1005)
  Add TX LOFT calibration from the mtk driver.
  Signed-off-by: Tomislav Požega <pozega.tomislav@gmail.com>
* mac80211: rt2x00: add RXIQ calibration (Tomislav Požega, 2018-09-26, 1 file, -0/+417)
  Add RXIQ calibration as found in the mtk driver. With old OpenWrt builds this gets us ~8 Mbps more RX bandwidth (tested with an iPA/eLNA layout). Please try whether this makes any difference among various board/RF layouts.
  Signed-off-by: Tomislav Požega <pozega.tomislav@gmail.com>
* mac80211: rt2x00: add RXDCOC calibration (Tomislav Požega, 2018-09-26, 1 file, -0/+102)
  Add RXDCOC calibration code from the mtk driver. Please try whether this makes any difference among various board/RF layouts.
  Signed-off-by: Tomislav Požega <pozega.tomislav@gmail.com>
* mac80211: rt2x00: add r calibration (Tomislav Požega, 2018-09-26, 1 file, -0/+193)
  Add r calibration code as found in the mtk driver.
  Signed-off-by: Tomislav Požega <pozega.tomislav@gmail.com>
* mac80211: rt2x00: add RF self TXDC calibration (Tomislav Požega, 2018-09-26, 1 file, -0/+89)
  Add TX self calibration based on the mtk driver.
  Signed-off-by: Tomislav Požega <pozega.tomislav@gmail.com>
* mac80211: rt2x00: write registers required for reducing power consumption (Tomislav Požega, 2018-09-26, 1 file, -0/+43)
  Write the registers required for reducing power consumption, like the vendor driver does when ADJUST_POWER_CONSUMPTION_SUPPORT is set. This helps devices sync at better TX/RX rates and improves overall performance.
  Signed-off-by: Tomislav Požega <pozega.tomislav@gmail.com>
  Signed-off-by: Daniel Golle <daniel@makrotopia.org>
  [daniel@makrotopia.org: edited commit message]
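  As a sketch only, the shape of such a change in rt2800lib: rt2800_register_write() is the driver's existing register accessor, but the offsets and values below are placeholders, not the ones from the patch:

      /* Placeholder register list: the real patch programs the registers
       * the vendor driver touches under ADJUST_POWER_CONSUMPTION_SUPPORT. */
      static const struct { unsigned int reg; u32 value; } rt2800_power_regs[] = {
              { 0x0000, 0x00000000 },  /* placeholder */
              { 0x0004, 0x00000000 },  /* placeholder */
      };

      static void rt2800_adjust_power_consumption(struct rt2x00_dev *rt2x00dev)
      {
              unsigned int i;

              for (i = 0; i < ARRAY_SIZE(rt2800_power_regs); i++)
                      rt2800_register_write(rt2x00dev, rt2800_power_regs[i].reg,
                                            rt2800_power_regs[i].value);
      }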
* mac80211: rebase ontop of v4.18.5 (John Crispin, 2018-09-26, 283 files, -15964/+364)
  Signed-off-by: John Crispin <john@phrozen.org>
* ath9k: add back support for using tx99 with active monitor interfaces (Felix Fietkau, 2018-09-22, 1 file, -0/+96)
  Fixes controlling the bitrate.
  Signed-off-by: Felix Fietkau <nbd@nbd.name>
* mac80211: fix tx queue allocation for active monitor interfaces (Felix Fietkau, 2018-09-22, 1 file, -0/+26)
  Fixes a crash with drivers like ath9k.
  Signed-off-by: Felix Fietkau <nbd@nbd.name>
* ath9k: fix unloading the module (Felix Fietkau, 2018-09-20, 2 files, -15/+10)
  Registering a GPIO chip with the ath9k device as parent prevents unload, because the gpiochip core increases the module use count. Unfortunately, the only way to avoid this at the moment seems to be to register the GPIO chip without a parent device.
  Signed-off-by: Felix Fietkau <nbd@nbd.name>
* mac80211: brcmfmac: backport CYW89342 support & fixes from 4.20 (Rafał Miłecki, 2018-09-12, 4 files, -0/+208)
  Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
* mac80211: backport upstream fixes (Koen Vandeputte, 2018-09-07, 14 files, -8/+679)
  Backport the most significant upstream fixes (excl. hwsim fixes). Refreshed all patches. Contains important fixes for CSA (Channel Switch Announcement) and A-MSDU frames.
  Signed-off-by: Koen Vandeputte <koen.vandeputte@ncentric.com>
* ath9k: fix setting up tx99 with a monitor mode interface (Felix Fietkau, 2018-08-25, 1 file, -0/+92)
  Signed-off-by: Felix Fietkau <nbd@nbd.name>
* mac80211: mwl8k: Expand non-DFS 5G channels (Antonio Silverio, 2018-08-25, 1 file, -0/+37)
  Add the non-DFS 5G upper channels (149-165) besides the 4 existing lower channels (36, 40, 44, 48).
  Signed-off-by: Antonio Silverio <menion@gmail.com>
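  A sketch of what the channel-table extension looks like, assuming a CHAN5G-style initializer macro as used in several mac80211 drivers (the macro and array names are illustrative, not copied from mwl8k):

      /* center frequencies follow the standard 5 GHz channelization */
      #define CHAN5G(_chan, _freq) {                  \
              .band = NL80211_BAND_5GHZ,              \
              .center_freq = (_freq),                 \
              .hw_value = (_chan),                    \
      }

      static const struct ieee80211_channel mwl8k_channels_50[] = {
              CHAN5G(36, 5180), CHAN5G(40, 5200),
              CHAN5G(44, 5220), CHAN5G(48, 5240),
              /* added non-DFS upper channels */
              CHAN5G(149, 5745), CHAN5G(153, 5765),
              CHAN5G(157, 5785), CHAN5G(161, 5805),
              CHAN5G(165, 5825),
      };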
* mac80211: brcmfmac: backport patch for per-firmware features (Rafał Miłecki, 2018-07-31, 1 file, -0/+84)
  This allows the driver to support features that can't be dynamically discovered.
  Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
* mac80211: ath10k: Limit available channels via DT ieee80211-freq-limit (Sven Eckelmann, 2018-07-30, 1 file, -0/+44)
  Tri-band devices (1x 2.4 GHz + 2x 5 GHz) often incorporate special filters in the RX and TX path. These filtered channels can in theory still be used by the hardware, but the signal strength is reduced so much that it makes no sense. There is already a DT property to limit the available channels, but ath10k has to manually call this functionality to limit the currently set wiphy channels further.
  Signed-off-by: Sven Eckelmann <sven.eckelmann@openmesh.com>
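  cfg80211 exposes this functionality as wiphy_read_of_freq_limits(); a minimal sketch of the manual call the commit message refers to (the wrapper name is illustrative):

      #include <net/cfg80211.h>

      /* Parses the "ieee80211-freq-limit" DT property on the wiphy's device
       * and marks channels outside the listed ranges as disabled. Called
       * before the wiphy is registered. */
      static void ath10k_apply_dt_freq_limits(struct wiphy *wiphy)
      {
              wiphy_read_of_freq_limits(wiphy);
      }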
* mac80211: brcmfmac: backport 4.19 patches preparing monitor mode support (Rafał Miłecki, 2018-07-27, 6 files, -1/+383)
  Monitor mode isn't supported with brcmfmac yet; this is just early work. This also prepares brcmfmac to work stably with new firmwares which use an updated struct for passing STA info.
  Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
* mac80211: backport brcmfmac fixes & debugging helpers from 4.18 (Rafał Miłecki, 2018-07-26, 9 files, -2/+341)
  The most important is probably the regression fix in handling platform NVRAM. That bug stopped hardware from being properly calibrated, breaking e.g. 5 GHz on the Netgear R8000. Other than that, it triggers memory dumps when experiencing firmware problems, which is important for debugging purposes.
  Fixes: 7e8eb7f309a8 ("mac80211: backport brcmfmac firmware & clm_blob loading rework")
  Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
* mac80211: backport and update patches for ath10k (Ansuel Smith, 2018-07-22, 19 files, -74/+2508)
  This commit refreshes and updates the VHT160 ath10k support fix patches and adds a number of backports from ath-next:
  * 8ed05ed06fca ath10k: handle tdls peer events
  * 229329ff345f ath10k: wmi: modify svc bitmap parsing for wcn3990
  * 14d65775687c ath10k: advertise TDLS wider bandwidth support for 5GHz
  * bc64d05220f3 ath10k: debugfs support to get final TPC stats for 10.4 variants
  * 8b2d93dd2261 ath10k: Fix kernel panic while using worker (ath10k_sta_rc_update_wk)
  * 4b190675ad06 ath10k: fix kernel panic while reading tpc_stats
  * be8cce96f14d ath10k: add support to configure channel dwell time
  * f40105e67478 ath: add support to get the detected radar specifications
  * 6f6eb1bcbeff ath10k: DFS Host Confirmation
  * 260e629bbf44 ath10k: fix memory leak of tpc_stats
  * 38441fb6fcbb ath10k: support use of channel 173
  * 2e9bcd0d7324 ath10k: fix spectral scan for QCA9984 and QCA9888 chipsets
  Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
  [move backported patches in the 3xx number space, bring in upstream order, replace incomplete patch files with git format-patch ones, rewrite commit message, fix subject]
  Signed-off-by: Jo-Philipp Wich <jo@mein.io>
* mac80211: initialize sinfo in cfg80211_get_station (Sven Eckelmann, 2018-07-07, 1 file, -0/+42)
  Most of the implementations behind cfg80211_get_station will not initialize sinfo to zero before manipulating it. For example, the member "filled", which indicates the filled-in parts of this struct, is often only modified by enabling certain bits in the bitfield while keeping the remaining bits in their original state. A caller without a preinitialized sinfo.filled can then no longer decide which parts of sinfo were filled in by cfg80211_get_station (or actually the underlying implementations).
  cfg80211_get_station must therefore take care that sinfo is initialized to zero. Otherwise, the caller may try to read information which was not filled in and which must therefore be considered uninitialized. In batadv_v_elp_get_throughput's case, an invalid "random" expected throughput may be stored for this neighbor, and thus the B.A.T.M.A.N. V algorithm may switch to non-optimal neighbors for certain destinations.
  Signed-off-by: Sven Eckelmann <sven.eckelmann@openmesh.com>
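  Condensed from the upstream helper for illustration (slightly abbreviated), the fix amounts to a memset() before calling into the driver:

      int cfg80211_get_station(struct net_device *dev, const u8 *mac_addr,
                               struct station_info *sinfo)
      {
              struct cfg80211_registered_device *rdev;
              struct wireless_dev *wdev = dev->ieee80211_ptr;

              if (!wdev)
                      return -EOPNOTSUPP;

              rdev = wiphy_to_rdev(wdev->wiphy);
              if (!rdev->ops->get_station)
                      return -EOPNOTSUPP;

              /* the actual fix: zero sinfo (including sinfo->filled) so the
               * caller only sees bits the driver explicitly set */
              memset(sinfo, 0, sizeof(*sinfo));

              return rdev_get_station(rdev, dev, mac_addr, sinfo);
      }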
* mac80211: rtl8xxxu: drop support patches (John Crispin, 2018-06-26, 57 files, -3531/+0)
  After a very enlightening but unfortunately far too short exchange with Jes, we mutually agreed to drop the patches. They are unfortunately not ready yet.
  Acked-by: Rafał Miłecki <rafal@milecki.pl>
  Signed-off-by: John Crispin <john@phrozen.org>
* mac80211: ath10k: use tpt LED trigger by default (Mathias Kresin, 2018-06-25, 1 file, -0/+53)
  Use the tpt LED trigger for each created phy LED. This way, LEDs attached to the ath10k GPIO pins indicate the phy status and blink on traffic.
  Signed-off-by: Mathias Kresin <dev@kresin.me>
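  mac80211 already provides the trigger machinery; a sketch of wiring it up, with an illustrative blink table (throughput thresholds in Kbit/s, blink times in ms; the values are not necessarily ath10k's):

      #include <net/mac80211.h>

      static const struct ieee80211_tpt_blink ath10k_tpt_blink[] = {
              { .throughput = 0 * 1024,  .blink_time = 334 },
              { .throughput = 1 * 1024,  .blink_time = 260 },
              { .throughput = 5 * 1024,  .blink_time = 220 },
              { .throughput = 10 * 1024, .blink_time = 190 },
              { .throughput = 50 * 1024, .blink_time = 80 },
      };

      /* returns the trigger name to assign as the LED's default trigger */
      static const char *ath10k_setup_tpt_trigger(struct ieee80211_hw *hw)
      {
              return ieee80211_create_tpt_led_trigger(hw,
                              IEEE80211_TPT_LEDTRIG_FL_RADIO,
                              ath10k_tpt_blink, ARRAY_SIZE(ath10k_tpt_blink));
      }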
* mac80211: drop 355-ath9k-limit-retries-for-powersave-response-frames.patch (John Crispin, 2018-06-22, 19 files, -243/+157)
  Several people reported this patch to be causing drop-out issues.
  Signed-off-by: John Crispin <john@phrozen.org>
* mac80211: ath10k fix vht160 firmware crash (Ansuel Smith, 2018-06-22, 2 files, -0/+182)
  When 160 MHz width is selected, the ath10k firmware crashes. This fixes the problem.
  Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
* mac80211: ath10k add leds support (Ansuel Smith, 2018-06-22, 1 file, -0/+617)
  This adds support for LEDs handled by the wireless chipset.
  Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com>
* mac80211: backport brcmfmac changes from kernel 4.18 (Rafał Miłecki, 2018-06-18, 11 files, -0/+631)
  Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
* mac80211: backport brcmfmac firmware & clm_blob loading rework (Rafał Miłecki, 2018-06-18, 8 files, -41/+1392)
  This backports the remaining brcmfmac changes from 4.17.
  Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
* mac80211: backport brcmfmac data structure rework (Rafał Miłecki, 2018-06-17, 10 files, -9/+1426)
  It backports brcmfmac commits from kernel 4.17.
  Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
* mac80211: backport "brcmfmac: cleanup and some rework" from 4.17 (Rafał Miłecki, 2018-06-17, 9 files, -1/+772)
  It was described by Arend as:
  > This series is intended for 4.17 and includes following:
  >
  > * rework bus layer attach code.
  > * remove duplicate variable declaration.
  Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
* mac80211: ath10k: Allow to enable the thermal code of ath10k (Sven Eckelmann, 2018-06-09, 3 files, -2/+12)
  Some ath10k firmware versions allow access to the chip's internal temperature sensor and allow reducing the amount of time the card is allowed to send. The latter is required on devices which tend to overheat. A userspace service has to read /sys/class/ieee80211/phy*/device/hwmon/hwmon*/temp1_input regularly and then decide how much the device has to be throttled. This can be done by writing to /sys/class/ieee80211/phy*/device/cooling_device/cur_state. By default it is not throttled (0), but it can be throttled up to 100(%).
  Signed-off-by: Sven Eckelmann <sven.eckelmann@openmesh.com>
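  A minimal userspace sketch of the polling service described above, assuming concrete phy0/hwmon0 paths in place of the wildcards and a made-up throttle policy:

      #include <stdio.h>

      int main(void)
      {
              /* concrete examples of the wildcard paths from the text;
               * a real service would glob them first */
              const char *temp_path =
                  "/sys/class/ieee80211/phy0/device/hwmon/hwmon0/temp1_input";
              const char *cool_path =
                  "/sys/class/ieee80211/phy0/device/cooling_device/cur_state";
              long temp_mdeg;
              FILE *f;

              f = fopen(temp_path, "r");
              if (!f || fscanf(f, "%ld", &temp_mdeg) != 1)
                      return 1;
              fclose(f);

              f = fopen(cool_path, "w");
              if (!f)
                      return 1;
              /* example policy: throttle TX time to 50% above 95 degC
               * (temp1_input reports millidegrees Celsius) */
              fprintf(f, "%d\n", temp_mdeg > 95000 ? 50 : 0);
              fclose(f);
              return 0;
      }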
lass="p">)) || ((c->x86_model==7) && (c->x86_mask>=1)) || (c->x86_model> 7) ) if (cpu_has_mp) goto valid_k7; /* If we get here, it's not a certified SMP capable AMD system. */ add_taint(TAINT_UNSAFE_SMP); } valid_k7: ; } /* * TSC's upper 32 bits can't be written in earlier CPUs (before * Prescott), there is no way to resync one AP against BP. */ bool_t disable_tsc_sync; static atomic_t tsc_count; static uint64_t tsc_value; static cpumask_t tsc_sync_cpu_mask; static void synchronize_tsc_master(unsigned int slave) { unsigned int i; if ( disable_tsc_sync ) return; if ( boot_cpu_has(X86_FEATURE_TSC_RELIABLE) && !cpu_isset(slave, tsc_sync_cpu_mask) ) return; for ( i = 1; i <= 5; i++ ) { rdtscll(tsc_value); wmb(); atomic_inc(&tsc_count); while ( atomic_read(&tsc_count) != (i<<1) ) cpu_relax(); } atomic_set(&tsc_count, 0); cpu_clear(slave, tsc_sync_cpu_mask); } static void synchronize_tsc_slave(unsigned int slave) { unsigned int i; if ( disable_tsc_sync ) return; if ( boot_cpu_has(X86_FEATURE_TSC_RELIABLE) && !cpu_isset(slave, tsc_sync_cpu_mask) ) return; for ( i = 1; i <= 5; i++ ) { while ( atomic_read(&tsc_count) != ((i<<1)-1) ) cpu_relax(); rmb(); /* * If a CPU has been physically hotplugged, we may as well write * to its TSC in spite of X86_FEATURE_TSC_RELIABLE. The platform does * not sync up a new CPU's TSC for us. */ __write_tsc(tsc_value); atomic_inc(&tsc_count); } } void smp_callin(void) { unsigned int cpu = smp_processor_id(); int i, rc; /* Wait 2s total for startup. */ Dprintk("Waiting for CALLOUT.\n"); for ( i = 0; cpu_state != CPU_STATE_CALLOUT; i++ ) { BUG_ON(i >= 200); cpu_relax(); mdelay(10); } /* * The boot CPU has finished the init stage and is spinning on cpu_state * update until we finish. We are free to set up this CPU: first the APIC. */ Dprintk("CALLIN, before setup_local_APIC().\n"); x2apic_ap_setup(); setup_local_APIC(); map_cpu_to_logical_apicid(); /* Save our processor parameters. */ smp_store_cpu_info(cpu); if ( (rc = hvm_cpu_up()) != 0 ) { extern void (*dead_idle) (void); printk("CPU%d: Failed to initialise HVM. Not coming online.\n", cpu); cpu_error = rc; clear_local_APIC(); spin_debug_enable(); cpu_exit_clear(cpu); (*dead_idle)(); } /* Allow the master to continue. */ set_cpu_state(CPU_STATE_CALLIN); synchronize_tsc_slave(cpu); /* And wait for our final Ack. */ while ( cpu_state != CPU_STATE_ONLINE ) cpu_relax(); } static int booting_cpu; /* CPUs for which sibling maps can be computed. 
*/ static cpumask_t cpu_sibling_setup_map; static void link_thread_siblings(int cpu1, int cpu2) { cpu_set(cpu1, per_cpu(cpu_sibling_map, cpu2)); cpu_set(cpu2, per_cpu(cpu_sibling_map, cpu1)); cpu_set(cpu1, per_cpu(cpu_core_map, cpu2)); cpu_set(cpu2, per_cpu(cpu_core_map, cpu1)); } static void set_cpu_sibling_map(int cpu) { int i; struct cpuinfo_x86 *c = cpu_data; cpu_set(cpu, cpu_sibling_setup_map); if ( c[cpu].x86_num_siblings > 1 ) { for_each_cpu_mask ( i, cpu_sibling_setup_map ) { if ( cpu_has(c, X86_FEATURE_TOPOEXT) ) { if ( (c[cpu].phys_proc_id == c[i].phys_proc_id) && (c[cpu].compute_unit_id == c[i].compute_unit_id) ) link_thread_siblings(cpu, i); } else if ( (c[cpu].phys_proc_id == c[i].phys_proc_id) && (c[cpu].cpu_core_id == c[i].cpu_core_id) ) { link_thread_siblings(cpu, i); } } } else { cpu_set(cpu, per_cpu(cpu_sibling_map, cpu)); } if ( c[cpu].x86_max_cores == 1 ) { per_cpu(cpu_core_map, cpu) = per_cpu(cpu_sibling_map, cpu); c[cpu].booted_cores = 1; return; } for_each_cpu_mask ( i, cpu_sibling_setup_map ) { if ( c[cpu].phys_proc_id == c[i].phys_proc_id ) { cpu_set(i, per_cpu(cpu_core_map, cpu)); cpu_set(cpu, per_cpu(cpu_core_map, i)); /* * Does this new cpu bringup a new core? */ if ( cpus_weight(per_cpu(cpu_sibling_map, cpu)) == 1 ) { /* * for each core in package, increment * the booted_cores for this new cpu */ if ( first_cpu(per_cpu(cpu_sibling_map, i)) == i ) c[cpu].booted_cores++; /* * increment the core count for all * the other cpus in this package */ if ( i != cpu ) c[i].booted_cores++; } else if ( (i != cpu) && !c[cpu].booted_cores ) { c[cpu].booted_cores = c[i].booted_cores; } } } } static void construct_percpu_idt(unsigned int cpu) { unsigned char idt_load[10]; *(unsigned short *)(&idt_load[0]) = (IDT_ENTRIES*sizeof(idt_entry_t))-1; *(unsigned long *)(&idt_load[2]) = (unsigned long)idt_tables[cpu]; __asm__ __volatile__ ( "lidt %0" : "=m" (idt_load) ); } void start_secondary(void *unused) { /* * Dont put anything before smp_callin(), SMP booting is so fragile that we * want to limit the things done here to the most necessary things. */ unsigned int cpu = booting_cpu; set_processor_id(cpu); set_current(idle_vcpu[cpu]); this_cpu(curr_vcpu) = idle_vcpu[cpu]; if ( cpu_has_efer ) rdmsrl(MSR_EFER, this_cpu(efer)); asm volatile ( "mov %%cr4,%0" : "=r" (this_cpu(cr4)) ); /* * Just as during early bootstrap, it is convenient here to disable * spinlock checking while we have IRQs disabled. This allows us to * acquire IRQ-unsafe locks when it would otherwise be disallowed. * * It is safe because the race we are usually trying to avoid involves * a group of CPUs rendezvousing in an IPI handler, where one cannot * join because it is spinning with IRQs disabled waiting to acquire a * lock held by another in the rendezvous group (the lock must be an * IRQ-unsafe lock since the CPU took the IPI after acquiring it, and * hence had IRQs enabled). This is a deadlock scenario. * * However, no CPU can be involved in rendezvous until it is online, * hence no such group can be waiting for this CPU until it is * visible in cpu_online_map. Hence such a deadlock is not possible. */ spin_debug_disable(); percpu_traps_init(); cpu_init(); smp_callin(); /* * At this point, boot CPU has fully initialised the IDT. It is * now safe to make ourselves a private copy. */ construct_percpu_idt(cpu); setup_secondary_APIC_clock(); /* * low-memory mappings have been cleared, flush them from * the local TLBs too. 
*/ flush_tlb_local(); /* This must be done before setting cpu_online_map */ spin_debug_enable(); set_cpu_sibling_map(cpu); notify_cpu_starting(cpu); wmb(); /* * We need to hold vector_lock so there the set of online cpus * does not change while we are assigning vectors to cpus. Holding * this lock ensures we don't half assign or remove an irq from a cpu. */ lock_vector_lock(); __setup_vector_irq(cpu); cpu_set(cpu, cpu_online_map); unlock_vector_lock(); init_percpu_time(); /* We can take interrupts now: we're officially "up". */ local_irq_enable(); mtrr_ap_init(); microcode_resume_cpu(cpu); wmb(); startup_cpu_idle_loop(); } extern struct { void * esp; unsigned short ss; } stack_start; u32 cpu_2_logical_apicid[NR_CPUS] __read_mostly = { [0 ... NR_CPUS-1] = BAD_APICID }; static void map_cpu_to_logical_apicid(void) { int cpu = smp_processor_id(); int apicid = logical_smp_processor_id(); cpu_2_logical_apicid[cpu] = apicid; } static void unmap_cpu_to_logical_apicid(int cpu) { cpu_2_logical_apicid[cpu] = BAD_APICID; } static int wakeup_secondary_cpu(int phys_apicid, unsigned long start_eip) { unsigned long send_status = 0, accept_status = 0; int maxlvt, timeout, num_starts, i; /* * Be paranoid about clearing APIC errors. */ if ( APIC_INTEGRATED(apic_version[phys_apicid]) ) { apic_read_around(APIC_SPIV); apic_write(APIC_ESR, 0); apic_read(APIC_ESR); } Dprintk("Asserting INIT.\n"); /* * Turn INIT on target chip via IPI */ apic_icr_write(APIC_INT_LEVELTRIG | APIC_INT_ASSERT | APIC_DM_INIT, phys_apicid); if ( !x2apic_enabled ) { Dprintk("Waiting for send to finish...\n"); timeout = 0; do { Dprintk("+"); udelay(100); send_status = apic_read(APIC_ICR) & APIC_ICR_BUSY; } while ( send_status && (timeout++ < 1000) ); mdelay(10); Dprintk("Deasserting INIT.\n"); apic_icr_write(APIC_INT_LEVELTRIG | APIC_DM_INIT, phys_apicid); Dprintk("Waiting for send to finish...\n"); timeout = 0; do { Dprintk("+"); udelay(100); send_status = apic_read(APIC_ICR) & APIC_ICR_BUSY; } while ( send_status && (timeout++ < 1000) ); } else if ( tboot_in_measured_env() ) { /* * With tboot AP is actually spinning in a mini-guest before * receiving INIT. Upon receiving INIT ipi, AP need time to VMExit, * update VMCS to tracking SIPIs and VMResume. * * While AP is in root mode handling the INIT the CPU will drop * any SIPIs */ udelay(10); } /* * Should we send STARTUP IPIs ? * * Determine this based on the APIC version. * If we don't have an integrated APIC, don't send the STARTUP IPIs. */ num_starts = APIC_INTEGRATED(apic_version[phys_apicid]) ? 2 : 0; /* Run STARTUP IPI loop. */ Dprintk("#startup loops: %d.\n", num_starts); maxlvt = get_maxlvt(); for ( i = 0; i < num_starts; i++ ) { Dprintk("Sending STARTUP #%d.\n", i+1); apic_read_around(APIC_SPIV); apic_write(APIC_ESR, 0); apic_read(APIC_ESR); Dprintk("After apic_write.\n"); /* * STARTUP IPI * Boot on the stack */ apic_icr_write(APIC_DM_STARTUP | (start_eip >> 12), phys_apicid); if ( !x2apic_enabled ) { /* Give the other CPU some time to accept the IPI. */ udelay(300); Dprintk("Startup point 1.\n"); Dprintk("Waiting for send to finish...\n"); timeout = 0; do { Dprintk("+"); udelay(100); send_status = apic_read(APIC_ICR) & APIC_ICR_BUSY; } while ( send_status && (timeout++ < 1000) ); /* Give the other CPU some time to accept the IPI. */ udelay(200); } /* Due to the Pentium erratum 3AP. 
*/ if ( maxlvt > 3 ) { apic_read_around(APIC_SPIV); apic_write(APIC_ESR, 0); } accept_status = (apic_read(APIC_ESR) & 0xEF); if ( send_status || accept_status ) break; } Dprintk("After Startup.\n"); if ( send_status ) printk("APIC never delivered???\n"); if ( accept_status ) printk("APIC delivery error (%lx).\n", accept_status); return (send_status | accept_status); } int alloc_cpu_id(void) { cpumask_t tmp_map; int cpu; cpus_complement(tmp_map, cpu_present_map); cpu = first_cpu(tmp_map); return (cpu < NR_CPUS) ? cpu : -ENODEV; } static int do_boot_cpu(int apicid, int cpu) { unsigned long boot_error; int timeout, rc = 0; unsigned long start_eip; /* * Save current MTRR state in case it was changed since early boot * (e.g. by the ACPI SMI) to initialize new CPUs with MTRRs in sync: */ mtrr_save_state(); booting_cpu = cpu; /* start_eip had better be page-aligned! */ start_eip = setup_trampoline(); /* So we see what's up */ if ( opt_cpu_info ) printk("Booting processor %d/%d eip %lx\n", cpu, apicid, start_eip); stack_start.esp = stack_base[cpu]; /* This grunge runs the startup process for the targeted processor. */ set_cpu_state(CPU_STATE_INIT); Dprintk("Setting warm reset code and vector.\n"); smpboot_setup_warm_reset_vector(start_eip); /* Starting actual IPI sequence... */ boot_error = wakeup_secondary_cpu(apicid, start_eip); if ( !boot_error ) { /* Allow AP to start initializing. */ set_cpu_state(CPU_STATE_CALLOUT); Dprintk("After Callout %d.\n", cpu); /* Wait 5s total for a response. */ for ( timeout = 0; timeout < 50000; timeout++ ) { if ( cpu_state != CPU_STATE_CALLOUT ) break; udelay(100); } if ( cpu_state == CPU_STATE_CALLIN ) { /* number CPUs logically, starting from 1 (BSP is 0) */ Dprintk("OK.\n"); print_cpu_info(cpu); synchronize_tsc_master(cpu); Dprintk("CPU has booted.\n"); } else if ( cpu_state == CPU_STATE_DEAD ) { rmb(); rc = cpu_error; } else { boot_error = 1; mb(); if ( bootsym(trampoline_cpu_started) == 0xA5 ) /* trampoline started but...? 
*/ printk("Stuck ??\n"); else /* trampoline code not run */ printk("Not responding.\n"); } } if ( boot_error ) { cpu_exit_clear(cpu); rc = -EIO; } /* mark "stuck" area as not stuck */ bootsym(trampoline_cpu_started) = 0; mb(); smpboot_restore_warm_reset_vector(); return rc; } void cpu_exit_clear(unsigned int cpu) { cpu_uninit(cpu); unmap_cpu_to_logical_apicid(cpu); set_cpu_state(CPU_STATE_DEAD); } static void cpu_smpboot_free(unsigned int cpu) { unsigned int order; xfree(idt_tables[cpu]); idt_tables[cpu] = NULL; order = get_order_from_pages(NR_RESERVED_GDT_PAGES); #ifdef __x86_64__ if ( per_cpu(compat_gdt_table, cpu) ) free_domheap_pages(virt_to_page(per_cpu(gdt_table, cpu)), order); if ( per_cpu(gdt_table, cpu) ) free_domheap_pages(virt_to_page(per_cpu(compat_gdt_table, cpu)), order); per_cpu(compat_gdt_table, cpu) = NULL; #else free_xenheap_pages(per_cpu(gdt_table, cpu), order); #endif per_cpu(gdt_table, cpu) = NULL; if ( stack_base[cpu] != NULL ) { memguard_unguard_stack(stack_base[cpu]); free_xenheap_pages(stack_base[cpu], STACK_ORDER); stack_base[cpu] = NULL; } } static int cpu_smpboot_alloc(unsigned int cpu) { unsigned int order; struct desc_struct *gdt; #ifdef __x86_64__ struct page_info *page; #endif stack_base[cpu] = alloc_xenheap_pages(STACK_ORDER, 0); if ( stack_base[cpu] == NULL ) goto oom; memguard_guard_stack(stack_base[cpu]); order = get_order_from_pages(NR_RESERVED_GDT_PAGES); #ifdef __x86_64__ page = alloc_domheap_pages(NULL, order, MEMF_node(cpu_to_node(cpu))); if ( !page ) goto oom; per_cpu(compat_gdt_table, cpu) = gdt = page_to_virt(page); memcpy(gdt, boot_cpu_compat_gdt_table, NR_RESERVED_GDT_PAGES * PAGE_SIZE); gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu; page = alloc_domheap_pages(NULL, order, MEMF_node(cpu_to_node(cpu))); if ( !page ) goto oom; per_cpu(gdt_table, cpu) = gdt = page_to_virt(page); #else per_cpu(gdt_table, cpu) = gdt = alloc_xenheap_pages(order, 0); if ( !gdt ) goto oom; #endif memcpy(gdt, boot_cpu_gdt_table, NR_RESERVED_GDT_PAGES * PAGE_SIZE); BUILD_BUG_ON(NR_CPUS > 0x10000); gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu; idt_tables[cpu] = xmalloc_array(idt_entry_t, IDT_ENTRIES); if ( idt_tables[cpu] == NULL ) goto oom; memcpy(idt_tables[cpu], idt_table, IDT_ENTRIES*sizeof(idt_entry_t)); return 0; oom: cpu_smpboot_free(cpu); return -ENOMEM; } static int cpu_smpboot_callback( struct notifier_block *nfb, unsigned long action, void *hcpu) { unsigned int cpu = (unsigned long)hcpu; int rc = 0; switch ( action ) { case CPU_UP_PREPARE: rc = cpu_smpboot_alloc(cpu); break; case CPU_UP_CANCELED: case CPU_DEAD: cpu_smpboot_free(cpu); break; default: break; } return !rc ? NOTIFY_DONE : notifier_from_errno(rc); } static struct notifier_block cpu_smpboot_nfb = { .notifier_call = cpu_smpboot_callback }; void __init smp_prepare_cpus(unsigned int max_cpus) { register_cpu_notifier(&cpu_smpboot_nfb); mtrr_aps_sync_begin(); /* Setup boot CPU information */ smp_store_cpu_info(0); /* Final full version of the data */ print_cpu_info(0); boot_cpu_physical_apicid = get_apic_id(); x86_cpu_to_apicid[0] = boot_cpu_physical_apicid; stack_base[0] = stack_start.esp; set_cpu_sibling_map(0); /* * If we couldn't find an SMP configuration at boot time, * get out of here now! */ if ( !smp_found_config && !acpi_lapic ) { printk(KERN_NOTICE "SMP motherboard not detected.\n"); init_uniprocessor: phys_cpu_present_map = physid_mask_of_physid(0); if (APIC_init_uniprocessor()) printk(KERN_NOTICE "Local APIC not detected." 
" Using dummy APIC emulation.\n"); map_cpu_to_logical_apicid(); cpu_set(0, per_cpu(cpu_sibling_map, 0)); cpu_set(0, per_cpu(cpu_core_map, 0)); return; } /* * Should not be necessary because the MP table should list the boot * CPU too, but we do it for the sake of robustness anyway. * Makes no sense to do this check in clustered apic mode, so skip it */ if ( !check_phys_apicid_present(boot_cpu_physical_apicid) ) { printk("weird, boot CPU (#%d) not listed by the BIOS.\n", boot_cpu_physical_apicid); physid_set(hard_smp_processor_id(), phys_cpu_present_map); } /* If we couldn't find a local APIC, then get out of here now! */ if ( APIC_INTEGRATED(apic_version[boot_cpu_physical_apicid]) && !cpu_has_apic ) { printk(KERN_ERR "BIOS bug, local APIC #%d not detected!...\n", boot_cpu_physical_apicid); goto init_uniprocessor; } verify_local_APIC(); connect_bsp_APIC(); setup_local_APIC(); map_cpu_to_logical_apicid(); /* * construct cpu_sibling_map, so that we can tell sibling CPUs * efficiently. */ cpu_set(0, per_cpu(cpu_sibling_map, 0)); cpu_set(0, per_cpu(cpu_core_map, 0)); smpboot_setup_io_apic(); setup_boot_APIC_clock(); } void __init smp_prepare_boot_cpu(void) { cpu_set(smp_processor_id(), cpu_online_map); cpu_set(smp_processor_id(), cpu_present_map); } static void remove_siblinginfo(int cpu) { int sibling; struct cpuinfo_x86 *c = cpu_data; for_each_cpu_mask ( sibling, per_cpu(cpu_core_map, cpu) ) { cpu_clear(cpu, per_cpu(cpu_core_map, sibling)); /* Last thread sibling in this cpu core going down. */ if ( cpus_weight(per_cpu(cpu_sibling_map, cpu)) == 1 ) c[sibling].booted_cores--; } for_each_cpu_mask(sibling, per_cpu(cpu_sibling_map, cpu)) cpu_clear(cpu, per_cpu(cpu_sibling_map, sibling)); cpus_clear(per_cpu(cpu_sibling_map, cpu)); cpus_clear(per_cpu(cpu_core_map, cpu)); c[cpu].phys_proc_id = BAD_APICID; c[cpu].cpu_core_id = BAD_APICID; c[cpu].compute_unit_id = BAD_APICID; cpu_clear(cpu, cpu_sibling_setup_map); } void __cpu_disable(void) { extern void fixup_irqs(void); int cpu = smp_processor_id(); set_cpu_state(CPU_STATE_DYING); local_irq_disable(); clear_local_APIC(); /* Allow any queued timer interrupts to get serviced */ local_irq_enable(); mdelay(1); local_irq_disable(); time_suspend(); remove_siblinginfo(cpu); /* It's now safe to remove this processor from the online map */ cpu_clear(cpu, cpupool0->cpu_valid); cpu_clear(cpu, cpu_online_map); fixup_irqs(); if ( cpu_disable_scheduler(cpu) ) BUG(); } void __cpu_die(unsigned int cpu) { /* We don't do anything here: idle task is faking death itself. */ unsigned int i = 0; enum cpu_state seen_state; while ( (seen_state = cpu_state) != CPU_STATE_DEAD ) { BUG_ON(seen_state != CPU_STATE_DYING); mdelay(100); cpu_relax(); process_pending_softirqs(); if ( (++i % 10) == 0 ) printk(KERN_ERR "CPU %u still not dead...\n", cpu); } } int cpu_add(uint32_t apic_id, uint32_t acpi_id, uint32_t pxm) { int node, cpu = -1; dprintk(XENLOG_DEBUG, "cpu_add apic_id %x acpi_id %x pxm %x\n", apic_id, acpi_id, pxm); if ( (acpi_id >= MAX_MADT_ENTRIES) || (apic_id >= MAX_APICS) || (pxm >= 256) ) return -EINVAL; if ( !cpu_hotplug_begin() ) return -EBUSY; /* Detect if the cpu has been added before */ if ( x86_acpiid_to_apicid[acpi_id] != BAD_APICID ) { cpu = (x86_acpiid_to_apicid[acpi_id] != apic_id) ? 
-EINVAL : -EEXIST; goto out; } if ( physid_isset(apic_id, phys_cpu_present_map) ) { cpu = -EEXIST; goto out; } if ( (cpu = mp_register_lapic(apic_id, 1)) < 0 ) goto out; x86_acpiid_to_apicid[acpi_id] = apic_id; if ( !srat_disabled() ) { if ( (node = setup_node(pxm)) < 0 ) { dprintk(XENLOG_WARNING, "Setup node failed for pxm %x\n", pxm); x86_acpiid_to_apicid[acpi_id] = BAD_APICID; mp_unregister_lapic(apic_id, cpu); cpu = node; goto out; } apicid_to_node[apic_id] = node; } /* Physically added CPUs do not have synchronised TSC. */ if ( boot_cpu_has(X86_FEATURE_TSC_RELIABLE) ) { static bool_t once_only; if ( !test_and_set_bool(once_only) ) printk(XENLOG_WARNING " ** New physical CPU %u may have skewed TSC and hence " "break assumed cross-CPU TSC coherency.\n" " ** Consider using boot parameter \"tsc=skewed\" " "which forces TSC emulation where appropriate.\n", cpu); cpu_set(cpu, tsc_sync_cpu_mask); } srat_detect_node(cpu); numa_add_cpu(cpu); dprintk(XENLOG_INFO, "Add CPU %x with index %x\n", apic_id, cpu); out: cpu_hotplug_done(); return cpu; } int __cpu_up(unsigned int cpu) { int apicid, ret; if ( (apicid = x86_cpu_to_apicid[cpu]) == BAD_APICID ) return -ENODEV; if ( (ret = do_boot_cpu(apicid, cpu)) != 0 ) return ret; set_cpu_state(CPU_STATE_ONLINE); while ( !cpu_isset(cpu, cpu_online_map) ) { cpu_relax(); process_pending_softirqs(); } return 0; } void __init smp_cpus_done(unsigned int max_cpus) { if ( smp_b_stepping ) printk(KERN_WARNING "WARNING: SMP operation may be " "unreliable with B stepping processors.\n"); /* * Don't taint if we are running SMP kernel on a single non-MP * approved Athlon */ if ( tainted & TAINT_UNSAFE_SMP ) { if ( num_online_cpus() > 1 ) printk(KERN_INFO "WARNING: This combination of AMD " "processors is not suitable for SMP.\n"); else tainted &= ~TAINT_UNSAFE_SMP; } if ( nmi_watchdog == NMI_LOCAL_APIC ) check_nmi_watchdog(); setup_ioapic_dest(); mtrr_save_state(); mtrr_aps_sync_end(); } void __init smp_intr_init(void) { int irq, seridx, cpu = smp_processor_id(); /* * IRQ0 must be given a fixed assignment and initialized, * because it's used before the IO-APIC is set up. */ irq_vector[0] = FIRST_HIPRIORITY_VECTOR; /* * Also ensure serial interrupts are high priority. We do not * want them to be blocked by unacknowledged guest-bound interrupts. */ for ( seridx = 0; seridx < 2; seridx++ ) { if ( (irq = serial_irq(seridx)) < 0 ) continue; irq_vector[irq] = FIRST_HIPRIORITY_VECTOR + seridx + 1; per_cpu(vector_irq, cpu)[FIRST_HIPRIORITY_VECTOR + seridx + 1] = irq; irq_cfg[irq].vector = FIRST_HIPRIORITY_VECTOR + seridx + 1; irq_cfg[irq].cpu_mask = cpu_online_map; } /* IPI for cleanuping vectors after irq move */ set_intr_gate(IRQ_MOVE_CLEANUP_VECTOR, irq_move_cleanup_interrupt); /* IPI for event checking. */ set_intr_gate(EVENT_CHECK_VECTOR, event_check_interrupt); /* IPI for invalidation */ set_intr_gate(INVALIDATE_TLB_VECTOR, invalidate_interrupt); /* IPI for generic function call */ set_intr_gate(CALL_FUNCTION_VECTOR, call_function_interrupt); }