Diffstat (limited to 'target/linux/layerscape/patches-4.4/7201-staging-dpaa2-eth-initial-commit-of-dpaa2-eth-driver.patch')
-rw-r--r--  target/linux/layerscape/patches-4.4/7201-staging-dpaa2-eth-initial-commit-of-dpaa2-eth-driver.patch  12268
1 file changed, 0 insertions(+), 12268 deletions(-)
diff --git a/target/linux/layerscape/patches-4.4/7201-staging-dpaa2-eth-initial-commit-of-dpaa2-eth-driver.patch b/target/linux/layerscape/patches-4.4/7201-staging-dpaa2-eth-initial-commit-of-dpaa2-eth-driver.patch
deleted file mode 100644
index cbec144516..0000000000
--- a/target/linux/layerscape/patches-4.4/7201-staging-dpaa2-eth-initial-commit-of-dpaa2-eth-driver.patch
+++ /dev/null
@@ -1,12268 +0,0 @@
-From e588172442093fe22374dc1bfc88a7da751d6b30 Mon Sep 17 00:00:00 2001
-From: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Date: Tue, 15 Sep 2015 10:14:16 -0500
-Subject: [PATCH 201/226] staging: dpaa2-eth: initial commit of dpaa2-eth
- driver
-
-commit 3106ece5d96784b63a4eabb26661baaefedd164f
-[context adjustment]
-
-This is a squash of the cumulative dpaa2-eth patches in the
-SDK 2.0 kernel as of 3/7/2016.
-
-flib,dpaa2-eth: flib header update (Rebasing onto kernel 3.19, MC 0.6)
-
-this patch was moved from 4.0 branch
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-[Stuart: split into multiple patches]
-Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
-Integrated-by: Jilong Guo <jilong.guo@nxp.com>
-
-flib,dpaa2-eth: updated Eth (was: Rebasing onto kernel 3.19, MC 0.6)
-
-updated Ethernet driver from 4.0 branch
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-[Stuart: cherry-picked patch from 4.0 and split it up]
-Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-Conflicts:
-
- drivers/staging/Makefile
-
-Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
-
-dpaa2-eth: Adjust 'options' size
-
-The 'options' field of various MC configuration structures has changed
-from u64 to u32 as of MC firmware version 7.0.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I9ba0c19fc22f745e6be6cc40862afa18fa3ac3db
-Reviewed-on: http://git.am.freescale.net:8181/35579
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Selectively disable preemption
-
-Temporary workaround for an MC Bus API quirk which only allows us to
-specify either a spinlock-protected MC Portal or a mutex-protected
-one, but then tries to match the runtime context in order to enforce
-their usage.
-
-To Be Reverted.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: Ida2ec1fdbdebfd2e427f96ddad7582880146fda9
-Reviewed-on: http://git.am.freescale.net:8181/35580
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Fix ethtool bug
-
-We were writing beyond the end of the allocated data area for ethtool
-statistics.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Change-Id: I6b77498a78dad06970508ebbed7144be73854f7f
-Reviewed-on: http://git.am.freescale.net:8181/35583
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Retry read if store unexpectedly empty
-
-After we place a volatile dequeue command, we might get to query the
-store before the DMA has actually completed. In such cases, we must
-retry, lest the store be overwritten by the next legitimate
-volatile dequeue.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I314fbb8b4d9f589715e42d35fc6677d726b8f5ba
-Reviewed-on: http://git.am.freescale.net:8181/35584
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-flib: Fix "missing braces around initializer" warning
-
-GCC warns about the ={0} initializer in the case of an array
-of structs. Fixing the FLib in order to make the warning go away.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I8782ecb714c032cfeeecf4c8323cf9dbb702b10f
-Reviewed-on: http://git.am.freescale.net:8181/35586
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-Tested-by: Stuart Yoder <stuart.yoder@freescale.com>
-
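The warning above is easy to reproduce outside the driver. A minimal, hypothetical C sketch (the `dpni_cfg` struct is illustrative, not the FLib's actual layout):

```c
#include <assert.h>

struct dpni_cfg { int id; int options; };

/* 'struct dpni_cfg cfgs[4] = {0};' trips -Wmissing-braces on GCC,
 * even though it zero-initializes correctly. Fully bracing the
 * first element is the warning-free equivalent: */
static const struct dpni_cfg cfgs[4] = { { 0 } };

static int all_zero(void)
{
    unsigned int i;

    for (i = 0; i < 4; i++)
        if (cfgs[i].id != 0 || cfgs[i].options != 0)
            return 0;
    return 1;
}
```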
-Revert "dpaa2-eth: Selectively disable preemption"
-
-This reverts commit e1455823c33b8dd48b5d2d50a7e8a11d3934cc0d.
-
-dpaa2-eth: Fix memory leak
-
-A buffer kmalloc'ed at probe time was not freed after it was no
-longer needed.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Change-Id: Iba197209e9203ed306449729c6dcd23ec95f094d
-Reviewed-on: http://git.am.freescale.net:8181/35756
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Remove unused field in ldpaa_eth_priv structure
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Change-Id: I124c3e4589b6420b1ea5cc05a03a51ea938b2bea
-Reviewed-on: http://git.am.freescale.net:8181/35757
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Fix "NOHZ: local_softirq_pending" warning
-
-Explicitly run softirqs after we enable NAPI. In particular, this gets
-rid of the "NOHZ: local_softirq_pending" warnings, but it also solves a
-couple of other problems, among which fluctuating performance and high
-ping latencies.
-
-Notes:
- - This will prevent us from timely processing notifications and
-other "non-frame events" coming into the software portal. So far,
-though, we only expect Dequeue Available Notifications, so this patch
-is good enough for now.
- - A degradation in console responsiveness is expected, especially in
-cases where the bottom-half runs on the same CPU as the console.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Change-Id: Ia6f11da433024e80ee59e821c9eabfa5068df5e5
-Reviewed-on: http://git.am.freescale.net:8181/35830
-Reviewed-by: Alexandru Marginean <Alexandru.Marginean@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-Tested-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Add polling mode for link state changes
-
-Add the Kconfigurable option of using a thread for polling on
-the link state instead of relying on interrupts from the MC.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Change-Id: If2fe66fc5c0fbee2568d7afa15d43ea33f92e8e2
-Reviewed-on: http://git.am.freescale.net:8181/35967
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Update copyright years.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I7e00eecfc5569027c908124726edaf06be357c02
-Reviewed-on: http://git.am.freescale.net:8181/37666
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Drain bpools when netdev is down
-
-In a data path layout with potentially a dozen interfaces, not all of
-them may be up at the same time, yet they may consume a fair amount of
-buffer space.
-Drain the buffer pool upon ifdown and re-seed it at ifup.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I24a379b643c8b5161a33b966c3314cf91024ed4a
-Reviewed-on: http://git.am.freescale.net:8181/37667
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Interrupts cleanup
-
-Add the code for cleaning up interrupts on driver removal.
-This was lost during transition from kernel 3.16 to 3.19.
-
-Also, there's no need to call devm_free_irq() if probe fails
-as the kernel will release all driver resources.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Change-Id: Ifd404bbf399d5ba62e2896371076719c1d6b4214
-Reviewed-on: http://git.am.freescale.net:8181/36199
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Reviewed-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-Reviewed-on: http://git.am.freescale.net:8181/37690
-
-dpaa2-eth: Ethtool support for hashing
-
-Only one set of header fields is supported for all protocols; the driver
-silently replaces the previous configuration regardless of the
-user-selected protocol.
-The following fields are supported:
- L2DA
- VLAN tag
- L3 proto
- IP SA
- IP DA
- L4 bytes 0 & 1 [TCP/UDP src port]
- L4 bytes 2 & 3 [TCP/UDP dst port]
-
-Signed-off-by: Alex Marginean <alexandru.marginean@freescale.com>
-
-Change-Id: I97c9dac1b842fe6bc7115e40c08c42f67dee8c9c
-Reviewed-on: http://git.am.freescale.net:8181/37260
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Fix maximum number of FQs
-
-The maximum number of Rx/Tx conf FQs associated to a DPNI was not
-updated when the implementation changed. It just happened to work
-by accident.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I03e30e0121a40d0d15fcdc4bee1fb98caa17c0ef
-Reviewed-on: http://git.am.freescale.net:8181/37668
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Fix Rx buffer address alignment
-
-We need to align the start address of the Rx buffers to
-LDPAA_ETH_BUF_ALIGN bytes. We were using SMP_CACHE_BYTES instead.
-It happened to work because both defines have the value of 64,
-but this may change at some point.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I08a0f3f18f82c5581c491bd395e3ad066b25bcf5
-Reviewed-on: http://git.am.freescale.net:8181/37669
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
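For reference, rounding a buffer address up to the required boundary uses the usual power-of-two arithmetic (a hedged sketch; `buf_align` is not the driver's actual helper, and the define's value is taken from the commit text):

```c
#include <assert.h>
#include <stdint.h>

#define LDPAA_ETH_BUF_ALIGN 64  /* currently equal to SMP_CACHE_BYTES */

/* Round addr up to the next 'align' boundary; align must be a
 * power of two. Same arithmetic as the kernel's ALIGN()/PTR_ALIGN(). */
static uintptr_t buf_align(uintptr_t addr, uintptr_t align)
{
    return (addr + align - 1) & ~(align - 1);
}
```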
-dpaa2-eth: Add buffer count to ethtool statistics
-
-Print the number of buffers available in the pool for a certain DPNI
-along with the rest of the ethtool -S stats.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: Ia1f5cf341c8414ae2058a73f6bc81490ef134592
-Reviewed-on: http://git.am.freescale.net:8181/37671
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Add Rx error queue
-
-Add a Kconfigurable option that allows Rx error frames to be
-enqueued on an error FQ. By default error frames are discarded,
-but for debug purposes we may want to process them at driver
-level.
-
-Note: Checkpatch issues a false positive about complex macros that
-should be parenthesized.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I7d19d00b5d5445514ebd112c886ce8ccdbb1f0da
-Reviewed-on: http://git.am.freescale.net:8181/37672
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-Tested-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-staging: fsl-dpaa2: FLib headers cleanup
-
-Going with the flow of moving fsl-dpaa2 headers into the drivers'
-location rather than keeping them all in one place.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: Ia2870cd019a4934c7835d38752a46b2a0045f30e
-Reviewed-on: http://git.am.freescale.net:8181/37674
-Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-Tested-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Klocwork fixes
-
-Fix several issues reported by Klocwork.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I1e23365765f3b0ff9b6474d8207df7c1f2433ccd
-Reviewed-on: http://git.am.freescale.net:8181/37675
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Probe devices with no hash support
-
-Don't fail at probe if the DPNI doesn't have the hash distribution
-option enabled. Instead, initialize a single Rx frame queue and
-use it for all incoming traffic.
-
-Rx flow hashing configuration through ethtool will not work
-in this case.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: Iaf17e05b15946e6901c39a21b5344b89e9f1d797
-Reviewed-on: http://git.am.freescale.net:8181/37676
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Process frames in IRQ context
-
-Stop using threaded IRQs and move back to hardirq top-halves.
-This is the first patch of a small series adapting the DPIO and Ethernet
-code to these changes.
-
-Signed-off-by: Roy Pledge <roy.pledge@freescale.com>
-Tested-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Tested-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-[Stuart: split dpio and eth into separate patches, updated subject]
-Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Fix bug in NAPI poll
-
-We incorrectly rearmed FQDAN notifications at the end of a NAPI cycle,
-regardless of whether the NAPI budget was consumed or not. We only need
-to rearm notifications if the NAPI cycle cleaned fewer frames than its
-budget, otherwise a new NAPI poll will be scheduled anyway.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: Ib55497bdbd769047420b3150668f2e2aef3c93f6
-Reviewed-on: http://git.am.freescale.net:8181/38317
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Use dma_map_sg on Tx
-
-Use the simpler dma_map_sg() along with the scatterlist API if the
-egress frame is scatter-gather, at the cost of keeping some extra
-information in the frame's software annotation area.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: If293aeabbd58d031f21456704357d4ff7e53c559
-Reviewed-on: http://git.am.freescale.net:8181/37681
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Reduce retries if Tx portal busy
-
-Too many retries due to Tx portal contention led to a significant cycle
-waste and reduction in performance.
-Reduce the number of enqueue retries and drop the frame if still
-unsuccessful.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: Ib111ec61cd4294a7632348c25fa3d7f4002be0c0
-Reviewed-on: http://git.am.freescale.net:8181/37682
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Add sysfs support for TxConf affinity change
-
-This adds support in sysfs for affining Tx Confirmation queues to GPPs,
-via the affine DPIO objects.
-
-The user can specify a cpu list in /sys/class/net/ni<X>/txconf_affinity
-to which the Ethernet driver will affine the TxConf FQs, in round-robin
-fashion. This is naturally a bit coarse, because there is no "official"
-mapping of the transmitting CPUs to Tx Confirmation queues.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I4b3da632e202ceeb22986c842d746aafe2a87a81
-Reviewed-on: http://git.am.freescale.net:8181/37684
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Implement ndo_select_queue
-
-Use a very simple selection function for the egress FQ. The purpose
-behind this is to more evenly distribute Tx Confirmation traffic,
-especially in the case of multiple egress flows, when bundling it all on
-CPU 0 would make that CPU a bottleneck.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: Ibfe8aad7ad5c719cc95d7817d7de6d2094f0f7ed
-Reviewed-on: http://git.am.freescale.net:8181/37685
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
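A "very simple selection function" of the kind described could look like the following sketch (names are illustrative; the driver's actual code lives in the deleted patch body, not shown here):

```c
#include <assert.h>

/* Map the transmitting CPU onto the available Tx flows so that Tx
 * confirmation traffic is spread across cores rather than bundled
 * on CPU 0. */
static unsigned int pick_tx_queue(unsigned int cpu, unsigned int num_tx_queues)
{
    return cpu % num_tx_queues;
}
```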
-dpaa2-eth: Reduce TxConf NAPI weight back to 64
-
-It turns out that not only did the kernel frown upon the old budget of
-256, but the measured values were well below that anyway.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I62ddd3ea1dbfd8b51e2bcb2286e0d5eb10ac7f27
-Reviewed-on: http://git.am.freescale.net:8181/37688
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Try refilling the buffer pool less often
-
-We used to check if the buffer pool needs refilling at each Rx
-frame. Instead, do that check (and the actual buffer release if
-needed) only after a pull dequeue.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Change-Id: Id52fab83873c40a711b8cadfcf909eb7e2e210f3
-Reviewed-on: http://git.am.freescale.net:8181/38318
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Stay in NAPI if exact budget is met
-
-An off-by-one bug would cause premature exiting from the NAPI cycle.
-Performance degradation is particularly severe in IPFWD cases.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Tested-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I9de2580c7ff8e46cbca9613890b03737add35e26
-Reviewed-on: http://git.am.freescale.net:8181/37908
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
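Both this fix and the earlier "Fix bug in NAPI poll" come down to the same comparison. A hedged sketch of the completion rule (the helper name is illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Complete NAPI (and re-arm notifications) only when strictly fewer
 * frames than the budget were cleaned. Using '<=' here is precisely
 * the off-by-one that made the driver leave NAPI prematurely when
 * the exact budget was met. */
static bool napi_should_complete(int cleaned, int budget)
{
    return cleaned < budget;
}
```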
-dpaa2-eth: Minor changes to FQ stats
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Change-Id: I0ced0e7b2eee28599cdea79094336c0d44f0d32b
-Reviewed-on: http://git.am.freescale.net:8181/38319
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Support fewer DPIOs than CPUs
-
-The previous DPIO functions would transparently choose a (perhaps
-non-affine) CPU if the required CPU was not available. Now that their API
-contract is enforced, we must make an explicit request for *any* DPIO if
-the request for an *affine* DPIO has failed.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: Ib08047ffa33518993b1ffa4671d0d4f36d6793d0
-Reviewed-on: http://git.am.freescale.net:8181/38320
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: cosmetic changes in hashing code
-
-Signed-off-by: Alex Marginean <alexandru.marginean@freescale.com>
-Change-Id: I79e21a69a6fb68cdbdb8d853c059661f8988dbf9
-Reviewed-on: http://git.am.freescale.net:8181/37258
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Prefetch data before initial access
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Change-Id: Ie8f0163651aea7e3e197a408f89ca98d296d4b8b
-Reviewed-on: http://git.am.freescale.net:8181/38753
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Use netif_receive_skb
-
-netif_rx() is a leftover from our pre-NAPI codebase.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Change-Id: I02ff0a059862964df1bf81b247853193994c2dfe
-Reviewed-on: http://git.am.freescale.net:8181/38754
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Use napi_alloc_frag() on Rx.
-
-A bit better-suited than netdev_alloc_frag().
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Change-Id: I8863a783502db963e5dc968f049534c36ad484e2
-Reviewed-on: http://git.am.freescale.net:8181/38755
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Silence skb_realloc_headroom() warning
-
-pktgen tests tend to be too noisy because pktgen does not observe the
-net device's needed_headroom specification and we used to be pretty loud
-about that. We'll print the warning message just once.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I3c12eba29c79aa9c487307d367f6d9f4dbe447a3
-Reviewed-on: http://git.am.freescale.net:8181/38756
-Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Print message upon device unplugging
-
-Give a console notification when a DPNI is unplugged. This is useful for
-automated tests to know the operation (which is not instantaneous) has
-finished.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: If33033201fcee7671ad91c2b56badf3fb56a9e3e
-Reviewed-on: http://git.am.freescale.net:8181/38757
-Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Add debugfs support
-
-Add debugfs entries for showing detailed per-CPU and per-FQ
-counters for each network interface. Also add a knob for
-resetting these stats.
-The aggregated interface statistics were already available through
-ethtool -S.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I55f5bfe07a15b0d1bf0c6175d8829654163a4318
-Reviewed-on: http://git.am.freescale.net:8181/38758
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-Tested-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: limited support for flow steering
-
-Steering is supported on a sub-set of fields, including DMAC, IP SRC
-and DST, L4 ports.
-Steering and hashing configurations depend on each other, which makes
-the whole thing tricky to configure. Currently FS can be configured
-using only the fields selected for hashing and all the hashing fields
-must be included in the match key - masking doesn't work yet.
-
-Signed-off-by: Alex Marginean <alexandru.marginean@freescale.com>
-Change-Id: I9fa3199f7818a9a5f9d69d3483ffd839056cc468
-Reviewed-on: http://git.am.freescale.net:8181/38759
-Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Rename files into the dpaa2 nomenclature
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I1c3d62e2f19a59d4b65727234fd7df2dfd8683d9
-Reviewed-on: http://git.am.freescale.net:8181/38965
-Reviewed-by: Alexandru Marginean <Alexandru.Marginean@freescale.com>
-Reviewed-by: Ruxandra Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-Tested-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-staging: dpaa2-eth: migrated remaining flibs for MC fw 8.0.0
-
-Signed-off-by: J. German Rivera <German.Rivera@freescale.com>
-[Stuart: split eth part into separate patch, updated subject]
-Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Clear 'backup_pool' attribute
-
-New MC-0.7 firmware allows specifying an alternate buffer pool, but
-for now we are not using that feature.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Change-Id: I0a6e6626512b7bbddfef732c71f1400b67f3e619
-Reviewed-on: http://git.am.freescale.net:8181/39149
-Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
-Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Do programming of MSIs in devm_request_threaded_irq()
-
-With the new dprc_set_obj_irq() we can now program MSIs in the device
-in the callback invoked from devm_request_threaded_irq().
-Since this callback is invoked with interrupts disabled, we need to
-use an atomic portal, instead of the root DPRC's built-in portal
-which is non-atomic.
-
-Signed-off-by: Itai Katz <itai.katz@freescale.com>
-Signed-off-by: J. German Rivera <German.Rivera@freescale.com>
-[Stuart: split original patch into multiple patches]
-Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
-
-dpaa2-eth: Do not map beyond skb tail
-
-On Tx do dma_map only until skb->tail, rather than skb->end.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Declare NETIF_F_LLTX as a capability
-
-We are effectively doing lock-less Tx.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Avoid bitcopy of 'backpointers' struct
-
-Make 'struct ldpaa_eth_swa bps' a pointer and avoid copying it on both Tx
-and TxConf.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Use CDANs instead of FQDANs
-
-Use Channel Dequeue Available Notifications (CDANs) instead of
-Frame Queue notifications. We allocate a QMan channel (or DPCON
-object) for each available cpu and assign to it the Rx and Tx conf
-queues associated with that cpu.
-
-We usually want to have affine DPIOs and DPCONs (one for each core).
-If this is not possible due to insufficient resources, we distribute
-all ingress traffic on the cores with affine DPIOs.
-
-NAPI instances are now one per channel instead of one per FQ, as the
-interrupt source changes. Statistics counters change accordingly.
-
-Note that after this commit is applied, one needs to provide sufficient
-DPCON objects (either through DPL or restool) in order for the Ethernet
-interfaces to work.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Cleanup debugfs statistics
-
-Several minor changes to statistics reporting:
-* Fix print alignment of statistics counters
-* Fix a naming ambiguity in the cpu_stats debugfs ops
-* Add Rx/Tx error counters; these were already used, but not
-reported in the per-CPU stats
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Add tx shaping configuration in sysfs
-
-Egress traffic can be shaped via a per-DPNI SysFS entry:
- echo M N > /sys/class/net/ni<X>/tx_shaping
-where:
- M is the maximum throughput, expressed in Mbps.
- N is the maximum burst size, expressed in bytes, at most 64000.
-
-To remove shaping, use M=0, N=0.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
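The "M N" string written to tx_shaping would be parsed along these lines (a hypothetical sketch, not the driver's actual sysfs store handler):

```c
#include <assert.h>
#include <stdio.h>

/* Parse "M N": M = max throughput in Mbps, N = max burst size in
 * bytes (at most 64000). Returns 0 on success, -1 on malformed
 * input or an oversized burst. M=0 N=0 removes shaping. */
static int parse_tx_shaping(const char *buf, unsigned int *rate_mbps,
                            unsigned int *burst_bytes)
{
    if (sscanf(buf, "%u %u", rate_mbps, burst_bytes) != 2)
        return -1;
    if (*burst_bytes > 64000)
        return -1;
    return 0;
}
```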
-dpaa2-eth: Fix "Tx busy" counter
-
-Under heavy egress load, when a large number of the transmitted packets
-cannot be sent because of high portal contention, the "Tx busy" counter
-was not properly incremented.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Fix memory cleanup in case of Tx congestion
-
-The error path of ldpaa_eth_tx() was not properly freeing the SGT buffer
-if the enqueue had failed because of congestion. DMA unmapping was
-missing, too.
-
-Factor the code originally inside the TxConf callback out into a
-separate function that would be called on both TxConf and Tx paths.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Use napi_gro_receive()
-
-Call napi_gro_receive(), effectively enabling GRO.
-NOTE: We could further optimize this by looking ahead in the parse results
-received from hardware and only using GRO when the L3+L4 combination is
-appropriate.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Fix compilation of Rx Error FQ code
-
-Conditionally-compiled code slipped through the cracks when FLibs were
-updated.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-fsl-dpaa2: Add Kconfig dependency on DEBUG_FS
-
-The driver's debugfs support depends on the generic CONFIG_DEBUG_FS.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Fix interface down/up bug
-
-If a networking interface was brought down while still receiving
-ingress traffic, the delay between DPNI disable and NAPI disable
-was not enough to ensure all in-flight frames got processed.
-Instead, some frames were left pending in the Rx queues. If the
-net device was then removed (i.e. restool unbind/unplug), the
-call to dpni_reset() silently failed and the kernel crashed on
-device replugging.
-
-Fix this by increasing the FQ drain time. Also, at ifconfig up
-we enable NAPI before starting the DPNI, to make sure we don't
-miss any early CDANs.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Iterate only through initialized channels
-
-The number of DPIO objects available to a DPNI may be fewer than the
-number of online cores. A typical example would be a DPNI with a
-distribution size smaller than 8. Since we only initialize as many
-channels (DPCONs) as there are DPIOs, iterating through all online cpus
-would produce a nasty oops when retrieving ethtool stats.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-net: pktgen: Observe needed_headroom of the device
-
-Allocate enough space so as not to force the outgoing net device to do
-skb_realloc_headroom().
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
-
-dpaa2-eth: Trace buffer pool seeding
-
-Add ftrace support for buffer pool seeding. Individual buffers are
-described by virtual and dma addresses and sizes, as well as by bpid.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Explicitly set carrier off at ifconfig up
-
-If we don't, netif_carrier_ok() will still return true even if the link
-state is marked as LINKWATCH_PENDING, which in a dpni-2-dpni case may
-last indefinitely long. This will cause "ifconfig up" followed by "ip
-link show" to report LOWER_UP when the peer DPNI is still down (and in
-fact before we've even received any link notification at all).
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Fix FQ type in stats print
-
-Fix a bug where the type of the Rx error queue was printed
-incorrectly in the debugfs statistics.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Don't build debugfs support as a separate module
-
-Instead have module init and exit functions declared explicitly for
-the Ethernet driver and initialize/destroy the debugfs directory there.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Remove debugfs #ifdefs from dpaa2-eth.c
-
-Instead of conditionally compiling the calls to debugfs init
-functions in dpaa2-eth.c, define no-op stubs for these functions
-in case the debugfs Kconfig option is not enabled. This makes
-the code more readable.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Use napi_complete_done()
-
-Replace napi_complete() with napi_complete_done().
-
-Together with setting /sys/class/net/ethX/gro_flush_timeout, this
-allows us to take better advantage of GRO coalescing and improves
-throughput and cpu load in TCP termination tests.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Fix error path in probe
-
-NAPI delete was called at the wrong place when exiting probe
-function on an error path
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Allocate channels based on queue count
-
-Limit the number of channels allocated per DPNI to the maximum
-between the number of Rx queues per traffic class (distribution size)
-and Tx confirmation queues (number of tx flows).
-If this happens to be larger than the number of available cores, only
-allocate one channel for each core and distribute the frame queues on
-the cores/channels in a round robin fashion.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Use DPNI setting for number of Tx flows
-
-Instead of creating one Tx flow for each online cpu, use the DPNI
-attributes for deciding how many senders we have.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Renounce sentinel in enum dpni_counter
-
-Bring back the Flib header dpni.h to its initial content by removing the
-sentinel value in enum dpni_counter.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Fix Rx queue count
-
-We were missing a roundup to the next power of 2 in order to be in sync
-with the MC implementation.
-Actually, moved that logic in a separate function which we'll remove
-once the MC API is updated.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Unmap the S/G table outside ldpaa_eth_free_rx_fd
-
-The Scatter-Gather table is already unmapped outside ldpaa_eth_free_rx_fd
-so no need to try to unmap it once more.
-
-Signed-off-by: Cristian Sovaiala <cristian.sovaiala@freescale.com>
-
-dpaa2-eth: Use napi_schedule_irqoff()
-
-At the time we schedule NAPI, the Dequeue Available Notifications (which
-are the de facto triggers of NAPI processing) are already disabled.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-net: Fix ethernet Kconfig
-
-Re-add missing 'source' directive. This exists on the integration
-branch, but was mistakenly removed by an earlier dpaa2-eth rebase.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Manually update link state at ifup
-
-The DPMAC may have handled the link state notification before the DPNI
-is up. A new PHY state transition may not subsequently occur, so the
-DPNI must initiate a read of the DPMAC state.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Stop carrier upon ifdown
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Fix print messages in link state handling code
-
-Avoid an "(uninitialized)" message during DPNI probe by replacing
-netdev_info() with its corresponding dev_info().
-Purge some related comments and add some netdev messages to assist
-link state debugging.
-Remove an excessively defensive assertion.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Do not allow ethtool settings change while the NI is up
-
-Due to a MC limitation, link state changes while the DPNI is enabled
-will fail. For now, we'll just prevent the call from going down to the MC
-if we know it will fail.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Reduce ethtool messages verbosity
-
-Transform a couple of netdev_info() calls into netdev_dbg().
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Only unmask IRQs that we actually handle
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Produce fewer boot log messages
-
-No longer print one line for each all-zero hwaddr that was replaced with
-a random MAC address; just inform the user once that this has occurred.
-And reduce redundancy of some printouts in the bootlog.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Fix big endian issue
-
-We were not doing any endianness conversions on the scatter gather
-table entries, which caused problems on big endian kernels.
-
-For frame descriptors the QMan driver takes care of this transparently,
-but in the case of SG entries we need to do it ourselves.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Force atomic context for lazy bpool seeding
-
-We use the same ldpaa_bp_add_7() function for initial buffer pool
-seeding (from .ndo_open) and for hotpath pool replenishing. The function
-is using napi_alloc_frag() as an optimization for the Rx datapath, but
-that turns out to require atomic execution because of a this_cpu_ptr()
-call down its stack.
-This patch temporarily disables preemption around the initial seeding of
-the Rx buffer pool.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa-eth: Integrate Flib version 0.7.1.2
-
-Although API-compatible with 0.7.1.1, there are some ABI changes
-that warrant a new integration.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: No longer adjust max_dist_per_tc
-
-The MC firmware until version 0.7.1.1/8.0.2 requires that
-max_dist_per_tc have the value expected by the hardware, which would be
-different from what the user expects. MC firmware 0.7.1.2/8.0.5 fixes
-that, so we remove our transparent conversion.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Enforce 256-byte Rx alignment
-
-Hardware erratum enforced by MC requires that Rx buffer lengths and
-addresses be 256-byte aligned.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Rename Tx buf alignment macro
-
-The existing "BUF_ALIGN" macro remained confined to Tx usage, after
-separate alignment was introduced for Rx. Renaming accordingly.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Fix hashing distribution size
-
-Commit be3fb62623e4338e60fb60019f134b6055cbc127
-Author: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-Date: Fri Oct 23 18:26:44 2015 +0300
-
- dpaa2-eth: No longer adjust max_dist_per_tc
-
-missed one usage of the ldpaa_queue_count() function, making
-distribution size inadvertently lower.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Remove ndo_select_queue
-
-Our implementation of ndo_select_queue would lead to questions regarding
-our support for qdiscs. Until we find an optimal way to select the txq
-without breaking future qdisc integration, just remove the
-ndo_select_queue callback entirely and let the stack figure out the
-flow.
-This incurs a ~2-3% penalty on some performance tests.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Select TxConf FQ based on processor id
-
-Use smp_processor_id instead of skb queue mapping to determine the tx
-flow id and implicitly the confirmation queue.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-
-dpaa2-eth: Reduce number of buffers in bpool
-
-Reduce the maximum number of buffers in each buffer pool associated
-with a DPNI. This in turn reduces the number of memory allocations
-performed in a single batch when buffers fall below a certain
-threshold.
-
-Provides a significant performance boost (~5-10% increase) on both
-termination and forwarding scenarios, while also reducing the driver
-memory footprint.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Replace "ldpaa" with "dpaa2"
-
-Replace all instances of "ldpaa"/"LDPAA" in the Ethernet driver
-(names of functions, structures, macros, etc), with "dpaa2"/"DPAA2",
-except for DPIO API function calls.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: rename ldpaa to dpaa2
-
-Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
-(Stuart: this patch was split out from the original global rename patch)
-Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
-
-dpaa2-eth: Rename dpaa_io_query_fq_count to dpaa2_io_query_fq_count
-
-Signed-off-by: Cristian Sovaiala <cristian.sovaiala@freescale.com>
-
-fsl-dpio: rename dpaa_* structure to dpaa2_*
-
-Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
-
-dpaa2-eth, dpni, fsl-mc: Updates for MC0.8.0
-
-Several changes need to be performed in sync for supporting
-the newest MC version:
-* Update mc-cmd.h
-* Update the dpni binary interface to v6.0
-* Update the DPAA2 Eth driver to account for several API changes
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-staging: fsl-dpaa2: ethernet: add support for hardware timestamping
-
-Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
-
-fsl-dpaa2: eth: Do not set bpid in egress fd
-
-We don't do FD recycling on egress, BPID is therefore not necessary.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
-
-fsl-dpaa2: eth: Amend buffer refill comment
-
-A change request has been pending for placing an upper bound to the
-buffer replenish logic on Rx. However, short of practical alternatives,
-resort to amending the relevant comment and rely on ksoftirqd to
-guarantee interactivity.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
-
-fsl-dpaa2: eth: Configure a taildrop threshold for each Rx frame queue.
-
-The selected value allows for Rx jumbo (10K) frames processing
-while at the same time helps balance the system in the case of
-IP forwarding.
-
-Also compute the number of buffers in the pool based on the TD
-threshold to avoid starving some of the ingress queues in small
-frames, high throughput scenarios.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-fsl-dpaa2: eth: Check objects' FLIB version
-
-Make sure we support the DPNI, DPCON and DPBP version, otherwise
-abort probing early on and provide an error message.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-fsl-dpaa2: eth: Remove likely/unlikely from cold paths
-
-Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
-
-fsl-dpaa2: eth: Remove __cold attribute
-
-Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
-
-fsl-dpaa2: eth: Replace netdev_XXX with dev_XXX before register_netdevice()
-
-Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
-
-fsl-dpaa2: eth: Fix coccinelle issue
-
-drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c:687:1-36: WARNING:
-Assignment of bool to 0/1
-
-Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
-
-fsl-dpaa2: eth: Fix minor spelling issue
-
-Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
-
-fsl-dpaa2: eth: Add a couple of 'unlikely' on hot path
-
-Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
-
-fsl-dpaa2: eth: Fix a bunch of minor issues found by static analysis tools
-
-As found by Klocwork and Checkpatch:
- - Unused variables
- - Integer type replacements
- - Unchecked memory allocations
- - Whitespace, alignment and newlining
-
-Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
-
-fsl-dpaa2: eth: Remove "inline" keyword from static functions
-
-Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
-
-fsl-dpaa2: eth: Remove BUG/BUG_ONs
-
-Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
-
-fsl-dpaa2: eth: Use NAPI_POLL_WEIGHT
-
-No need to define our own macro as long as we're using the
-default value of 64.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Move dpaa2_eth_swa structure to header file
-
-It was the only structure defined inside dpaa2-eth.c
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-fsl-dpaa2: eth: Replace uintX_t with uX
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
-
-fsl-dpaa2: eth: Minor fixes & cosmetics
-
-- Make driver log level an int, because this is what
- netif_msg_init expects.
-- Remove driver description macro as it was used only once,
- immediately after being defined
-- Remove include comment
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Move bcast address setup to dpaa2_eth_netdev_init
-
-It seems to fit better there than directly in probe.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Fix DMA mapping bug
-
-During hashing/flow steering configuration via ethtool, we were
-doing a DMA unmap from the wrong address. Fix the issue by using
-the DMA address that was initially mapped.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-dpaa2-eth: Associate buffer counting to queues instead of cpu
-
-Move the buffer counters from being percpu variables to being
-associated with QMan channels. This is more natural as we need
-to dimension the buffer pool count based on distribution size
-rather than number of online cores.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-fsl-dpaa2: eth: Provide driver and fw version to ethtool
-
-Read fw version from the MC and interpret DPNI FLib major.minor as the
-driver's version. Report these in 'ethtool -i'.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
-
-fsl-dpaa2: eth: Remove dependency on GCOV_KERNEL
-
-Signed-off-by: Cristian Sovaiala <cristi.sovaiala@nxp.com>
-
-fsl-dpaa2: eth: Remove FIXME/TODO comments from the code
-
-Some of the concerns had already been addressed, a couple are being
-fixed in place.
-Left a few TODOs related to the flow-steering code, which needs to be
-revisited before upstreaming anyway.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-fsl-dpaa2: eth: Remove forward declarations
-
-Instead move the functions such that they are defined prior to
-being used.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-fsl-dpaa2: eth: Remove dead code in IRQ handler
-
-If any of those conditions were met, it is unlikely we'd ever be there
-in the first place.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
-
-fsl-dpaa2: eth: Remove dpaa2_dpbp_drain()
-
-Its sole caller was __dpaa2_dpbp_free(), so move its content and get rid
-of one function call.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
-
-fsl-dpaa2: eth: Remove duplicate define
-
-We somehow ended up with two defines for the maximum number
-of tx queues.
-
-Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-
-fsl-dpaa2: eth: Move header comment to .c file
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
-
-fsl-dpaa2: eth: Make DPCON allocation failure produce a benign message
-
-Number of DPCONs may be smaller than the number of CPUs in a number of
-valid scenarios. One such scenario is when the DPNI's distribution width
-is smaller than the number of cores and we just don't want to
-over-allocate DPCONs.
-Make the DPCON allocation failure less menacing by changing the logged
-message.
-
-While at it, remove an unused parameter from the function prototype.
-
-Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
-
-dpaa2 eth: irq update
-
-Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
-
-Conflicts:
- drivers/staging/Kconfig
- drivers/staging/Makefile
----
- MAINTAINERS | 15 +
- drivers/staging/Kconfig | 2 +
- drivers/staging/Makefile | 1 +
- drivers/staging/fsl-dpaa2/Kconfig | 11 +
- drivers/staging/fsl-dpaa2/Makefile | 5 +
- drivers/staging/fsl-dpaa2/ethernet/Kconfig | 42 +
- drivers/staging/fsl-dpaa2/ethernet/Makefile | 21 +
- .../staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.c | 319 +++
- .../staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.h | 61 +
- .../staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h | 185 ++
- drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c | 2793 ++++++++++++++++++++
- drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h | 366 +++
- drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c | 882 +++++++
- drivers/staging/fsl-dpaa2/ethernet/dpkg.h | 175 ++
- drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h | 1058 ++++++++
- drivers/staging/fsl-dpaa2/ethernet/dpni.c | 1907 +++++++++++++
- drivers/staging/fsl-dpaa2/ethernet/dpni.h | 2581 ++++++++++++++++++
- drivers/staging/fsl-mc/include/mc-cmd.h | 5 +-
- drivers/staging/fsl-mc/include/net.h | 481 ++++
- net/core/pktgen.c | 1 +
- 20 files changed, 10910 insertions(+), 1 deletion(-)
- create mode 100644 drivers/staging/fsl-dpaa2/Kconfig
- create mode 100644 drivers/staging/fsl-dpaa2/Makefile
- create mode 100644 drivers/staging/fsl-dpaa2/ethernet/Kconfig
- create mode 100644 drivers/staging/fsl-dpaa2/ethernet/Makefile
- create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.c
- create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.h
- create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h
- create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
- create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
- create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c
- create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpkg.h
- create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h
- create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpni.c
- create mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpni.h
- create mode 100644 drivers/staging/fsl-mc/include/net.h
-
---- a/MAINTAINERS
-+++ b/MAINTAINERS
-@@ -4539,6 +4539,21 @@ L: linux-kernel@vger.kernel.org
- S: Maintained
- F: drivers/staging/fsl-mc/
-
-+FREESCALE DPAA2 ETH DRIVER
-+M: Ioana Radulescu <ruxandra.radulescu@freescale.com>
-+M: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
-+M: Cristian Sovaiala <cristian.sovaiala@freescale.com>
-+L: linux-kernel@vger.kernel.org
-+S: Maintained
-+F: drivers/staging/fsl-dpaa2/ethernet/
-+
-+FREESCALE QORIQ MANAGEMENT COMPLEX RESTOOL DRIVER
-+M: Lijun Pan <Lijun.Pan@freescale.com>
-+L: linux-kernel@vger.kernel.org
-+S: Maintained
-+F: drivers/staging/fsl-mc/bus/mc-ioctl.h
-+F: drivers/staging/fsl-mc/bus/mc-restool.c
-+
- FREEVXFS FILESYSTEM
- M: Christoph Hellwig <hch@infradead.org>
- W: ftp://ftp.openlinux.org/pub/people/hch/vxfs
---- a/drivers/staging/Kconfig
-+++ b/drivers/staging/Kconfig
-@@ -114,4 +114,6 @@ source "drivers/staging/most/Kconfig"
-
- source "drivers/staging/fsl_ppfe/Kconfig"
-
-+source "drivers/staging/fsl-dpaa2/Kconfig"
-+
- endif # STAGING
---- a/drivers/staging/Makefile
-+++ b/drivers/staging/Makefile
-@@ -49,3 +49,4 @@ obj-$(CONFIG_FSL_DPA) += fsl_q
- obj-$(CONFIG_WILC1000) += wilc1000/
- obj-$(CONFIG_MOST) += most/
- obj-$(CONFIG_FSL_PPFE) += fsl_ppfe/
-+obj-$(CONFIG_FSL_DPAA2) += fsl-dpaa2/
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/Kconfig
-@@ -0,0 +1,11 @@
-+#
-+# Freescale device configuration
-+#
-+
-+config FSL_DPAA2
-+ bool "Freescale DPAA2 devices"
-+ depends on FSL_MC_BUS
-+ ---help---
-+ Build drivers for Freescale DataPath Acceleration Architecture (DPAA2) family of SoCs.
-+# TODO move DPIO driver in-here?
-+source "drivers/staging/fsl-dpaa2/ethernet/Kconfig"
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/Makefile
-@@ -0,0 +1,5 @@
-+#
-+# Makefile for the Freescale network device drivers.
-+#
-+
-+obj-$(CONFIG_FSL_DPAA2_ETH) += ethernet/
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/ethernet/Kconfig
-@@ -0,0 +1,42 @@
-+#
-+# Freescale DPAA Ethernet driver configuration
-+#
-+# Copyright (C) 2014-2015 Freescale Semiconductor, Inc.
-+#
-+# This file is released under the GPLv2
-+#
-+
-+menuconfig FSL_DPAA2_ETH
-+ tristate "Freescale DPAA2 Ethernet"
-+ depends on FSL_DPAA2 && FSL_MC_BUS && FSL_MC_DPIO
-+ select FSL_DPAA2_MAC
-+ default y
-+ ---help---
-+ Freescale Data Path Acceleration Architecture Ethernet
-+ driver, using the Freescale MC bus driver.
-+
-+if FSL_DPAA2_ETH
-+config FSL_DPAA2_ETH_LINK_POLL
-+ bool "Use polling mode for link state"
-+ default n
-+ ---help---
-+ Poll for detecting link state changes instead of using
-+ interrupts.
-+
-+config FSL_DPAA2_ETH_USE_ERR_QUEUE
-+ bool "Enable Rx error queue"
-+ default n
-+ ---help---
-+ Allow Rx error frames to be enqueued on an error queue
-+ and processed by the driver (by default they are dropped
-+ in hardware).
-+ This may impact performance, recommended for debugging
-+ purposes only.
-+
-+config FSL_DPAA2_ETH_DEBUGFS
-+ depends on DEBUG_FS && FSL_QBMAN_DEBUG
-+ bool "Enable debugfs support"
-+ default n
-+ ---help---
-+ Enable advanced statistics through debugfs interface.
-+endif
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/ethernet/Makefile
-@@ -0,0 +1,21 @@
-+#
-+# Makefile for the Freescale DPAA Ethernet controllers
-+#
-+# Copyright (C) 2014-2015 Freescale Semiconductor, Inc.
-+#
-+# This file is released under the GPLv2
-+#
-+
-+ccflags-y += -DVERSION=\"\"
-+
-+obj-$(CONFIG_FSL_DPAA2_ETH) += fsl-dpaa2-eth.o
-+
-+fsl-dpaa2-eth-objs := dpaa2-eth.o dpaa2-ethtool.o dpni.o
-+fsl-dpaa2-eth-${CONFIG_FSL_DPAA2_ETH_DEBUGFS} += dpaa2-eth-debugfs.o
-+
-+#Needed by the tracing framework
-+CFLAGS_dpaa2-eth.o := -I$(src)
-+
-+ifeq ($(CONFIG_FSL_DPAA2_ETH_GCOV),y)
-+ GCOV_PROFILE := y
-+endif
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.c
-@@ -0,0 +1,319 @@
-+
-+/* Copyright 2015 Freescale Semiconductor Inc.
-+ *
-+ * Redistribution and use in source and binary forms, with or without
-+ * modification, are permitted provided that the following conditions are met:
-+ * * Redistributions of source code must retain the above copyright
-+ * notice, this list of conditions and the following disclaimer.
-+ * * Redistributions in binary form must reproduce the above copyright
-+ * notice, this list of conditions and the following disclaimer in the
-+ * documentation and/or other materials provided with the distribution.
-+ * * Neither the name of Freescale Semiconductor nor the
-+ * names of its contributors may be used to endorse or promote products
-+ * derived from this software without specific prior written permission.
-+ *
-+ *
-+ * ALTERNATIVELY, this software may be distributed under the terms of the
-+ * GNU General Public License ("GPL") as published by the Free Software
-+ * Foundation, either version 2 of that License or (at your option) any
-+ * later version.
-+ *
-+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
-+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
-+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-+ */
-+
-+
-+#include <linux/module.h>
-+#include <linux/debugfs.h>
-+#include "dpaa2-eth.h"
-+#include "dpaa2-eth-debugfs.h"
-+
-+#define DPAA2_ETH_DBG_ROOT "dpaa2-eth"
-+
-+
-+static struct dentry *dpaa2_dbg_root;
-+
-+static int dpaa2_dbg_cpu_show(struct seq_file *file, void *offset)
-+{
-+ struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)file->private;
-+ struct rtnl_link_stats64 *stats;
-+ struct dpaa2_eth_stats *extras;
-+ int i;
-+
-+ seq_printf(file, "Per-CPU stats for %s\n", priv->net_dev->name);
-+ seq_printf(file, "%s%16s%16s%16s%16s%16s%16s%16s%16s\n",
-+ "CPU", "Rx", "Rx Err", "Rx SG", "Tx", "Tx Err", "Tx conf",
-+ "Tx SG", "Enq busy");
-+
-+ for_each_online_cpu(i) {
-+ stats = per_cpu_ptr(priv->percpu_stats, i);
-+ extras = per_cpu_ptr(priv->percpu_extras, i);
-+ seq_printf(file, "%3d%16llu%16llu%16llu%16llu%16llu%16llu%16llu%16llu\n",
-+ i,
-+ stats->rx_packets,
-+ stats->rx_errors,
-+ extras->rx_sg_frames,
-+ stats->tx_packets,
-+ stats->tx_errors,
-+ extras->tx_conf_frames,
-+ extras->tx_sg_frames,
-+ extras->tx_portal_busy);
-+ }
-+
-+ return 0;
-+}
-+
-+static int dpaa2_dbg_cpu_open(struct inode *inode, struct file *file)
-+{
-+ int err;
-+ struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)inode->i_private;
-+
-+ err = single_open(file, dpaa2_dbg_cpu_show, priv);
-+ if (err < 0)
-+ netdev_err(priv->net_dev, "single_open() failed\n");
-+
-+ return err;
-+}
-+
-+static const struct file_operations dpaa2_dbg_cpu_ops = {
-+ .open = dpaa2_dbg_cpu_open,
-+ .read = seq_read,
-+ .llseek = seq_lseek,
-+ .release = single_release,
-+};
-+
-+static char *fq_type_to_str(struct dpaa2_eth_fq *fq)
-+{
-+ switch (fq->type) {
-+ case DPAA2_RX_FQ:
-+ return "Rx";
-+ case DPAA2_TX_CONF_FQ:
-+ return "Tx conf";
-+ case DPAA2_RX_ERR_FQ:
-+ return "Rx err";
-+ default:
-+ return "N/A";
-+ }
-+}
-+
-+static int dpaa2_dbg_fqs_show(struct seq_file *file, void *offset)
-+{
-+ struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)file->private;
-+ struct dpaa2_eth_fq *fq;
-+ u32 fcnt, bcnt;
-+ int i, err;
-+
-+ seq_printf(file, "FQ stats for %s:\n", priv->net_dev->name);
-+ seq_printf(file, "%s%16s%16s%16s%16s\n",
-+ "VFQID", "CPU", "Type", "Frames", "Pending frames");
-+
-+ for (i = 0; i < priv->num_fqs; i++) {
-+ fq = &priv->fq[i];
-+ err = dpaa2_io_query_fq_count(NULL, fq->fqid, &fcnt, &bcnt);
-+ if (err)
-+ fcnt = 0;
-+
-+ seq_printf(file, "%5d%16d%16s%16llu%16u\n",
-+ fq->fqid,
-+ fq->target_cpu,
-+ fq_type_to_str(fq),
-+ fq->stats.frames,
-+ fcnt);
-+ }
-+
-+ return 0;
-+}
-+
-+static int dpaa2_dbg_fqs_open(struct inode *inode, struct file *file)
-+{
-+ int err;
-+ struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)inode->i_private;
-+
-+ err = single_open(file, dpaa2_dbg_fqs_show, priv);
-+ if (err < 0)
-+ netdev_err(priv->net_dev, "single_open() failed\n");
-+
-+ return err;
-+}
-+
-+static const struct file_operations dpaa2_dbg_fq_ops = {
-+ .open = dpaa2_dbg_fqs_open,
-+ .read = seq_read,
-+ .llseek = seq_lseek,
-+ .release = single_release,
-+};
-+
-+static int dpaa2_dbg_ch_show(struct seq_file *file, void *offset)
-+{
-+ struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)file->private;
-+ struct dpaa2_eth_channel *ch;
-+ int i;
-+
-+ seq_printf(file, "Channel stats for %s:\n", priv->net_dev->name);
-+ seq_printf(file, "%s%16s%16s%16s%16s%16s\n",
-+ "CHID", "CPU", "Deq busy", "Frames", "CDANs",
-+ "Avg frm/CDAN");
-+
-+ for (i = 0; i < priv->num_channels; i++) {
-+ ch = priv->channel[i];
-+ seq_printf(file, "%4d%16d%16llu%16llu%16llu%16llu\n",
-+ ch->ch_id,
-+ ch->nctx.desired_cpu,
-+ ch->stats.dequeue_portal_busy,
-+ ch->stats.frames,
-+ ch->stats.cdan,
-+ ch->stats.frames / ch->stats.cdan);
-+ }
-+
-+ return 0;
-+}
-+
-+static int dpaa2_dbg_ch_open(struct inode *inode, struct file *file)
-+{
-+ int err;
-+ struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)inode->i_private;
-+
-+ err = single_open(file, dpaa2_dbg_ch_show, priv);
-+ if (err < 0)
-+ netdev_err(priv->net_dev, "single_open() failed\n");
-+
-+ return err;
-+}
-+
-+static const struct file_operations dpaa2_dbg_ch_ops = {
-+ .open = dpaa2_dbg_ch_open,
-+ .read = seq_read,
-+ .llseek = seq_lseek,
-+ .release = single_release,
-+};
-+
-+static ssize_t dpaa2_dbg_reset_write(struct file *file, const char __user *buf,
-+ size_t count, loff_t *offset)
-+{
-+ struct dpaa2_eth_priv *priv = file->private_data;
-+ struct rtnl_link_stats64 *percpu_stats;
-+ struct dpaa2_eth_stats *percpu_extras;
-+ struct dpaa2_eth_fq *fq;
-+ struct dpaa2_eth_channel *ch;
-+ int i;
-+
-+ for_each_online_cpu(i) {
-+ percpu_stats = per_cpu_ptr(priv->percpu_stats, i);
-+ memset(percpu_stats, 0, sizeof(*percpu_stats));
-+
-+ percpu_extras = per_cpu_ptr(priv->percpu_extras, i);
-+ memset(percpu_extras, 0, sizeof(*percpu_extras));
-+ }
-+
-+ for (i = 0; i < priv->num_fqs; i++) {
-+ fq = &priv->fq[i];
-+ memset(&fq->stats, 0, sizeof(fq->stats));
-+ }
-+
-+ for_each_cpu(i, &priv->dpio_cpumask) {
-+ ch = priv->channel[i];
-+ memset(&ch->stats, 0, sizeof(ch->stats));
-+ }
-+
-+ return count;
-+}
-+
-+static const struct file_operations dpaa2_dbg_reset_ops = {
-+ .open = simple_open,
-+ .write = dpaa2_dbg_reset_write,
-+};
-+
-+void dpaa2_dbg_add(struct dpaa2_eth_priv *priv)
-+{
-+ if (!dpaa2_dbg_root)
-+ return;
-+
-+ /* Create a directory for the interface */
-+ priv->dbg.dir = debugfs_create_dir(priv->net_dev->name,
-+ dpaa2_dbg_root);
-+ if (!priv->dbg.dir) {
-+ netdev_err(priv->net_dev, "debugfs_create_dir() failed\n");
-+ return;
-+ }
-+
-+ /* per-cpu stats file */
-+ priv->dbg.cpu_stats = debugfs_create_file("cpu_stats", S_IRUGO,
-+ priv->dbg.dir, priv,
-+ &dpaa2_dbg_cpu_ops);
-+ if (!priv->dbg.cpu_stats) {
-+ netdev_err(priv->net_dev, "debugfs_create_file() failed\n");
-+ goto err_cpu_stats;
-+ }
-+
-+ /* per-fq stats file */
-+ priv->dbg.fq_stats = debugfs_create_file("fq_stats", S_IRUGO,
-+ priv->dbg.dir, priv,
-+ &dpaa2_dbg_fq_ops);
-+ if (!priv->dbg.fq_stats) {
-+ netdev_err(priv->net_dev, "debugfs_create_file() failed\n");
-+ goto err_fq_stats;
-+ }
-+
-+	/* per-channel stats file */
-+ priv->dbg.ch_stats = debugfs_create_file("ch_stats", S_IRUGO,
-+ priv->dbg.dir, priv,
-+ &dpaa2_dbg_ch_ops);
-+	if (!priv->dbg.ch_stats) {
-+ netdev_err(priv->net_dev, "debugfs_create_file() failed\n");
-+ goto err_ch_stats;
-+ }
-+
-+ /* reset stats */
-+ priv->dbg.reset_stats = debugfs_create_file("reset_stats", S_IWUSR,
-+ priv->dbg.dir, priv,
-+ &dpaa2_dbg_reset_ops);
-+ if (!priv->dbg.reset_stats) {
-+ netdev_err(priv->net_dev, "debugfs_create_file() failed\n");
-+ goto err_reset_stats;
-+ }
-+
-+ return;
-+
-+err_reset_stats:
-+ debugfs_remove(priv->dbg.ch_stats);
-+err_ch_stats:
-+ debugfs_remove(priv->dbg.fq_stats);
-+err_fq_stats:
-+ debugfs_remove(priv->dbg.cpu_stats);
-+err_cpu_stats:
-+ debugfs_remove(priv->dbg.dir);
-+}
-+
-+void dpaa2_dbg_remove(struct dpaa2_eth_priv *priv)
-+{
-+ debugfs_remove(priv->dbg.reset_stats);
-+ debugfs_remove(priv->dbg.fq_stats);
-+ debugfs_remove(priv->dbg.ch_stats);
-+ debugfs_remove(priv->dbg.cpu_stats);
-+ debugfs_remove(priv->dbg.dir);
-+}
-+
-+void dpaa2_eth_dbg_init(void)
-+{
-+ dpaa2_dbg_root = debugfs_create_dir(DPAA2_ETH_DBG_ROOT, NULL);
-+ if (!dpaa2_dbg_root) {
-+ pr_err("DPAA2-ETH: debugfs create failed\n");
-+ return;
-+ }
-+
-+ pr_info("DPAA2-ETH: debugfs created\n");
-+}
-+
-+void __exit dpaa2_eth_dbg_exit(void)
-+{
-+ debugfs_remove(dpaa2_dbg_root);
-+}
-+
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-debugfs.h
-@@ -0,0 +1,61 @@
-+/* Copyright 2015 Freescale Semiconductor Inc.
-+ *
-+ * Redistribution and use in source and binary forms, with or without
-+ * modification, are permitted provided that the following conditions are met:
-+ * * Redistributions of source code must retain the above copyright
-+ * notice, this list of conditions and the following disclaimer.
-+ * * Redistributions in binary form must reproduce the above copyright
-+ * notice, this list of conditions and the following disclaimer in the
-+ * documentation and/or other materials provided with the distribution.
-+ * * Neither the name of Freescale Semiconductor nor the
-+ * names of its contributors may be used to endorse or promote products
-+ * derived from this software without specific prior written permission.
-+ *
-+ *
-+ * ALTERNATIVELY, this software may be distributed under the terms of the
-+ * GNU General Public License ("GPL") as published by the Free Software
-+ * Foundation, either version 2 of that License or (at your option) any
-+ * later version.
-+ *
-+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
-+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
-+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-+ */
-+
-+#ifndef DPAA2_ETH_DEBUGFS_H
-+#define DPAA2_ETH_DEBUGFS_H
-+
-+#include <linux/dcache.h>
-+#include "dpaa2-eth.h"
-+
-+extern struct dpaa2_eth_priv *priv;
-+
-+struct dpaa2_debugfs {
-+ struct dentry *dir;
-+ struct dentry *fq_stats;
-+ struct dentry *ch_stats;
-+ struct dentry *cpu_stats;
-+ struct dentry *reset_stats;
-+};
-+
-+#ifdef CONFIG_FSL_DPAA2_ETH_DEBUGFS
-+void dpaa2_eth_dbg_init(void);
-+void dpaa2_eth_dbg_exit(void);
-+void dpaa2_dbg_add(struct dpaa2_eth_priv *priv);
-+void dpaa2_dbg_remove(struct dpaa2_eth_priv *priv);
-+#else
-+static inline void dpaa2_eth_dbg_init(void) {}
-+static inline void dpaa2_eth_dbg_exit(void) {}
-+static inline void dpaa2_dbg_add(struct dpaa2_eth_priv *priv) {}
-+static inline void dpaa2_dbg_remove(struct dpaa2_eth_priv *priv) {}
-+#endif /* CONFIG_FSL_DPAA2_ETH_DEBUGFS */
-+
-+#endif /* DPAA2_ETH_DEBUGFS_H */
-+
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h
-@@ -0,0 +1,185 @@
-+/* Copyright 2014-2015 Freescale Semiconductor Inc.
-+ *
-+ * Redistribution and use in source and binary forms, with or without
-+ * modification, are permitted provided that the following conditions are met:
-+ * * Redistributions of source code must retain the above copyright
-+ * notice, this list of conditions and the following disclaimer.
-+ * * Redistributions in binary form must reproduce the above copyright
-+ * notice, this list of conditions and the following disclaimer in the
-+ * documentation and/or other materials provided with the distribution.
-+ * * Neither the name of Freescale Semiconductor nor the
-+ * names of its contributors may be used to endorse or promote products
-+ * derived from this software without specific prior written permission.
-+ *
-+ *
-+ * ALTERNATIVELY, this software may be distributed under the terms of the
-+ * GNU General Public License ("GPL") as published by the Free Software
-+ * Foundation, either version 2 of that License or (at your option) any
-+ * later version.
-+ *
-+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
-+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
-+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-+ */
-+
-+#undef TRACE_SYSTEM
-+#define TRACE_SYSTEM dpaa2_eth
-+
-+#if !defined(_DPAA2_ETH_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
-+#define _DPAA2_ETH_TRACE_H
-+
-+#include <linux/skbuff.h>
-+#include <linux/netdevice.h>
-+#include "dpaa2-eth.h"
-+#include <linux/tracepoint.h>
-+
-+#define TR_FMT "[%s] fd: addr=0x%llx, len=%u, off=%u"
-+/* trace_printk format for raw buffer event class */
-+#define TR_BUF_FMT "[%s] vaddr=%p size=%zu dma_addr=%pad map_size=%zu bpid=%d"
-+
-+/* This is used to declare a class of events.
-+ * Individual events of this type will be defined below.
-+ */
-+
-+/* Store details about a frame descriptor */
-+DECLARE_EVENT_CLASS(dpaa2_eth_fd,
-+ /* Trace function prototype */
-+ TP_PROTO(struct net_device *netdev,
-+ const struct dpaa2_fd *fd),
-+
-+ /* Repeat argument list here */
-+ TP_ARGS(netdev, fd),
-+
-+ /* A structure containing the relevant information we want
-+ * to record. Declare name and type for each normal element,
-+ * name, type and size for arrays. Use __string for variable
-+ * length strings.
-+ */
-+ TP_STRUCT__entry(
-+ __field(u64, fd_addr)
-+ __field(u32, fd_len)
-+ __field(u16, fd_offset)
-+ __string(name, netdev->name)
-+ ),
-+
-+ /* The function that assigns values to the above declared
-+ * fields
-+ */
-+ TP_fast_assign(
-+ __entry->fd_addr = dpaa2_fd_get_addr(fd);
-+ __entry->fd_len = dpaa2_fd_get_len(fd);
-+ __entry->fd_offset = dpaa2_fd_get_offset(fd);
-+ __assign_str(name, netdev->name);
-+ ),
-+
-+ /* This is what gets printed when the trace event is
-+ * triggered.
-+ */
-+ TP_printk(TR_FMT,
-+ __get_str(name),
-+ __entry->fd_addr,
-+ __entry->fd_len,
-+ __entry->fd_offset)
-+);
-+
-+/* Now declare events of the above type. Format is:
-+ * DEFINE_EVENT(class, name, proto, args), with proto and args same as for class
-+ */
-+
-+/* Tx (egress) fd */
-+DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_fd,
-+ TP_PROTO(struct net_device *netdev,
-+ const struct dpaa2_fd *fd),
-+
-+ TP_ARGS(netdev, fd)
-+);
-+
-+/* Rx fd */
-+DEFINE_EVENT(dpaa2_eth_fd, dpaa2_rx_fd,
-+ TP_PROTO(struct net_device *netdev,
-+ const struct dpaa2_fd *fd),
-+
-+ TP_ARGS(netdev, fd)
-+);
-+
-+/* Tx confirmation fd */
-+DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_conf_fd,
-+ TP_PROTO(struct net_device *netdev,
-+ const struct dpaa2_fd *fd),
-+
-+ TP_ARGS(netdev, fd)
-+);
-+
-+/* Log data about raw buffers. Useful for tracing DPBP content. */
-+TRACE_EVENT(dpaa2_eth_buf_seed,
-+ /* Trace function prototype */
-+ TP_PROTO(struct net_device *netdev,
-+ /* virtual address and size */
-+ void *vaddr,
-+ size_t size,
-+ /* dma map address and size */
-+ dma_addr_t dma_addr,
-+ size_t map_size,
-+ /* buffer pool id, if relevant */
-+ u16 bpid),
-+
-+ /* Repeat argument list here */
-+ TP_ARGS(netdev, vaddr, size, dma_addr, map_size, bpid),
-+
-+ /* A structure containing the relevant information we want
-+ * to record. Declare name and type for each normal element,
-+ * name, type and size for arrays. Use __string for variable
-+ * length strings.
-+ */
-+ TP_STRUCT__entry(
-+ __field(void *, vaddr)
-+ __field(size_t, size)
-+ __field(dma_addr_t, dma_addr)
-+ __field(size_t, map_size)
-+ __field(u16, bpid)
-+ __string(name, netdev->name)
-+ ),
-+
-+ /* The function that assigns values to the above declared
-+ * fields
-+ */
-+ TP_fast_assign(
-+ __entry->vaddr = vaddr;
-+ __entry->size = size;
-+ __entry->dma_addr = dma_addr;
-+ __entry->map_size = map_size;
-+ __entry->bpid = bpid;
-+ __assign_str(name, netdev->name);
-+ ),
-+
-+ /* This is what gets printed when the trace event is
-+ * triggered.
-+ */
-+ TP_printk(TR_BUF_FMT,
-+ __get_str(name),
-+ __entry->vaddr,
-+ __entry->size,
-+ &__entry->dma_addr,
-+ __entry->map_size,
-+ __entry->bpid)
-+);
-+
-+/* If only one event of a certain type needs to be declared, use TRACE_EVENT().
-+ * The syntax is the same as for DECLARE_EVENT_CLASS().
-+ */
-+
-+#endif /* _DPAA2_ETH_TRACE_H */
-+
-+/* This must be outside ifdef _DPAA2_ETH_TRACE_H */
-+#undef TRACE_INCLUDE_PATH
-+#define TRACE_INCLUDE_PATH .
-+#undef TRACE_INCLUDE_FILE
-+#define TRACE_INCLUDE_FILE dpaa2-eth-trace
-+#include <trace/define_trace.h>
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
-@@ -0,0 +1,2793 @@
-+/* Copyright 2014-2015 Freescale Semiconductor Inc.
-+ *
-+ * Redistribution and use in source and binary forms, with or without
-+ * modification, are permitted provided that the following conditions are met:
-+ * * Redistributions of source code must retain the above copyright
-+ * notice, this list of conditions and the following disclaimer.
-+ * * Redistributions in binary form must reproduce the above copyright
-+ * notice, this list of conditions and the following disclaimer in the
-+ * documentation and/or other materials provided with the distribution.
-+ * * Neither the name of Freescale Semiconductor nor the
-+ * names of its contributors may be used to endorse or promote products
-+ * derived from this software without specific prior written permission.
-+ *
-+ *
-+ * ALTERNATIVELY, this software may be distributed under the terms of the
-+ * GNU General Public License ("GPL") as published by the Free Software
-+ * Foundation, either version 2 of that License or (at your option) any
-+ * later version.
-+ *
-+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
-+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
-+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-+ */
-+#include <linux/init.h>
-+#include <linux/module.h>
-+#include <linux/platform_device.h>
-+#include <linux/etherdevice.h>
-+#include <linux/of_net.h>
-+#include <linux/interrupt.h>
-+#include <linux/msi.h>
-+#include <linux/debugfs.h>
-+#include <linux/kthread.h>
-+#include <linux/net_tstamp.h>
-+
-+#include "../../fsl-mc/include/mc.h"
-+#include "../../fsl-mc/include/mc-sys.h"
-+#include "dpaa2-eth.h"
-+
-+/* CREATE_TRACE_POINTS only needs to be defined once. Other dpaa2-eth files
-+ * using trace events only need to include the trace header file.
-+ */
-+#define CREATE_TRACE_POINTS
-+#include "dpaa2-eth-trace.h"
-+
-+MODULE_LICENSE("Dual BSD/GPL");
-+MODULE_AUTHOR("Freescale Semiconductor, Inc");
-+MODULE_DESCRIPTION("Freescale DPAA2 Ethernet Driver");
-+
-+static int debug = -1;
-+module_param(debug, int, S_IRUGO);
-+MODULE_PARM_DESC(debug, "Module/Driver verbosity level");
-+
-+/* Oldest DPAA2 objects version we are compatible with */
-+#define DPAA2_SUPPORTED_DPNI_VERSION 6
-+#define DPAA2_SUPPORTED_DPBP_VERSION 2
-+#define DPAA2_SUPPORTED_DPCON_VERSION 2
-+
-+/* Iterate through the cpumask in a round-robin fashion. */
-+#define cpumask_rr(cpu, maskptr) \
-+do { \
-+ (cpu) = cpumask_next((cpu), (maskptr)); \
-+ if ((cpu) >= nr_cpu_ids) \
-+ (cpu) = cpumask_first((maskptr)); \
-+} while (0)
-+
-+static void dpaa2_eth_rx_csum(struct dpaa2_eth_priv *priv,
-+ u32 fd_status,
-+ struct sk_buff *skb)
-+{
-+ skb_checksum_none_assert(skb);
-+
-+ /* HW checksum validation is disabled, nothing to do here */
-+ if (!(priv->net_dev->features & NETIF_F_RXCSUM))
-+ return;
-+
-+ /* Read checksum validation bits */
-+ if (!((fd_status & DPAA2_ETH_FAS_L3CV) &&
-+ (fd_status & DPAA2_ETH_FAS_L4CV)))
-+ return;
-+
-+ /* Inform the stack there's no need to compute L3/L4 csum anymore */
-+ skb->ip_summed = CHECKSUM_UNNECESSARY;
-+}
-+
-+/* Free a received FD.
-+ * Not to be used for Tx conf FDs or on any other paths.
-+ */
-+static void dpaa2_eth_free_rx_fd(struct dpaa2_eth_priv *priv,
-+ const struct dpaa2_fd *fd,
-+ void *vaddr)
-+{
-+ struct device *dev = priv->net_dev->dev.parent;
-+ dma_addr_t addr = dpaa2_fd_get_addr(fd);
-+ u8 fd_format = dpaa2_fd_get_format(fd);
-+
-+ if (fd_format == dpaa2_fd_sg) {
-+ struct dpaa2_sg_entry *sgt = vaddr + dpaa2_fd_get_offset(fd);
-+ void *sg_vaddr;
-+ int i;
-+
-+ for (i = 0; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
-+ dpaa2_sg_le_to_cpu(&sgt[i]);
-+
-+ addr = dpaa2_sg_get_addr(&sgt[i]);
-+ dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUFFER_SIZE,
-+ DMA_FROM_DEVICE);
-+
-+ sg_vaddr = phys_to_virt(addr);
-+ put_page(virt_to_head_page(sg_vaddr));
-+
-+ if (dpaa2_sg_is_final(&sgt[i]))
-+ break;
-+ }
-+ }
-+
-+ put_page(virt_to_head_page(vaddr));
-+}
-+
-+/* Build a linear skb based on a single-buffer frame descriptor */
-+static struct sk_buff *dpaa2_eth_build_linear_skb(struct dpaa2_eth_priv *priv,
-+ struct dpaa2_eth_channel *ch,
-+ const struct dpaa2_fd *fd,
-+ void *fd_vaddr)
-+{
-+ struct sk_buff *skb = NULL;
-+ u16 fd_offset = dpaa2_fd_get_offset(fd);
-+ u32 fd_length = dpaa2_fd_get_len(fd);
-+
-+ skb = build_skb(fd_vaddr, DPAA2_ETH_RX_BUFFER_SIZE +
-+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
-+ if (unlikely(!skb)) {
-+ netdev_err(priv->net_dev, "build_skb() failed\n");
-+ return NULL;
-+ }
-+
-+ skb_reserve(skb, fd_offset);
-+ skb_put(skb, fd_length);
-+
-+ ch->buf_count--;
-+
-+ return skb;
-+}
-+
-+/* Build a non linear (fragmented) skb based on a S/G table */
-+static struct sk_buff *dpaa2_eth_build_frag_skb(struct dpaa2_eth_priv *priv,
-+ struct dpaa2_eth_channel *ch,
-+ struct dpaa2_sg_entry *sgt)
-+{
-+ struct sk_buff *skb = NULL;
-+ struct device *dev = priv->net_dev->dev.parent;
-+ void *sg_vaddr;
-+ dma_addr_t sg_addr;
-+ u16 sg_offset;
-+ u32 sg_length;
-+ struct page *page, *head_page;
-+ int page_offset;
-+ int i;
-+
-+ for (i = 0; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
-+ struct dpaa2_sg_entry *sge = &sgt[i];
-+
-+ dpaa2_sg_le_to_cpu(sge);
-+
-+ /* We don't support anything else yet! */
-+ if (unlikely(dpaa2_sg_get_format(sge) != dpaa2_sg_single)) {
-+ dev_warn_once(dev, "Unsupported S/G entry format: %d\n",
-+ dpaa2_sg_get_format(sge));
-+ return NULL;
-+ }
-+
-+ /* Get the address, offset and length from the S/G entry */
-+ sg_addr = dpaa2_sg_get_addr(sge);
-+ dma_unmap_single(dev, sg_addr, DPAA2_ETH_RX_BUFFER_SIZE,
-+ DMA_FROM_DEVICE);
-+ if (unlikely(dma_mapping_error(dev, sg_addr))) {
-+ netdev_err(priv->net_dev, "DMA unmap failed\n");
-+ return NULL;
-+ }
-+ sg_vaddr = phys_to_virt(sg_addr);
-+ sg_length = dpaa2_sg_get_len(sge);
-+
-+ if (i == 0) {
-+ /* We build the skb around the first data buffer */
-+ skb = build_skb(sg_vaddr, DPAA2_ETH_RX_BUFFER_SIZE +
-+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
-+ if (unlikely(!skb)) {
-+ netdev_err(priv->net_dev, "build_skb failed\n");
-+ return NULL;
-+ }
-+ sg_offset = dpaa2_sg_get_offset(sge);
-+ skb_reserve(skb, sg_offset);
-+ skb_put(skb, sg_length);
-+ } else {
-+ /* Subsequent data in SGEntries are stored at
-+ * offset 0 in their buffers, we don't need to
-+ * compute sg_offset.
-+ */
-+ WARN_ONCE(dpaa2_sg_get_offset(sge) != 0,
-+ "Non-zero offset in SGE[%d]!\n", i);
-+
-+ /* Rest of the data buffers are stored as skb frags */
-+ page = virt_to_page(sg_vaddr);
-+ head_page = virt_to_head_page(sg_vaddr);
-+
-+ /* Offset in page (which may be compound) */
-+ page_offset = ((unsigned long)sg_vaddr &
-+ (PAGE_SIZE - 1)) +
-+ (page_address(page) - page_address(head_page));
-+
-+ skb_add_rx_frag(skb, i - 1, head_page, page_offset,
-+ sg_length, DPAA2_ETH_RX_BUFFER_SIZE);
-+ }
-+
-+ if (dpaa2_sg_is_final(sge))
-+ break;
-+ }
-+
-+ /* Count all data buffers + sgt buffer */
-+ ch->buf_count -= i + 2;
-+
-+ return skb;
-+}
-+
-+static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
-+ struct dpaa2_eth_channel *ch,
-+ const struct dpaa2_fd *fd,
-+ struct napi_struct *napi)
-+{
-+ dma_addr_t addr = dpaa2_fd_get_addr(fd);
-+ u8 fd_format = dpaa2_fd_get_format(fd);
-+ void *vaddr;
-+ struct sk_buff *skb;
-+ struct rtnl_link_stats64 *percpu_stats;
-+ struct dpaa2_eth_stats *percpu_extras;
-+ struct device *dev = priv->net_dev->dev.parent;
-+ struct dpaa2_fas *fas;
-+ u32 status = 0;
-+
-+ /* Tracing point */
-+ trace_dpaa2_rx_fd(priv->net_dev, fd);
-+
-+ dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUFFER_SIZE, DMA_FROM_DEVICE);
-+ vaddr = phys_to_virt(addr);
-+
-+ prefetch(vaddr + priv->buf_layout.private_data_size);
-+ prefetch(vaddr + dpaa2_fd_get_offset(fd));
-+
-+ percpu_stats = this_cpu_ptr(priv->percpu_stats);
-+ percpu_extras = this_cpu_ptr(priv->percpu_extras);
-+
-+ if (fd_format == dpaa2_fd_single) {
-+ skb = dpaa2_eth_build_linear_skb(priv, ch, fd, vaddr);
-+ } else if (fd_format == dpaa2_fd_sg) {
-+ struct dpaa2_sg_entry *sgt =
-+ vaddr + dpaa2_fd_get_offset(fd);
-+ skb = dpaa2_eth_build_frag_skb(priv, ch, sgt);
-+ put_page(virt_to_head_page(vaddr));
-+ percpu_extras->rx_sg_frames++;
-+ percpu_extras->rx_sg_bytes += dpaa2_fd_get_len(fd);
-+ } else {
-+ /* We don't support any other format */
-+ netdev_err(priv->net_dev, "Received invalid frame format\n");
-+ goto err_frame_format;
-+ }
-+
-+ if (unlikely(!skb)) {
-+ dev_err_once(dev, "error building skb\n");
-+ goto err_build_skb;
-+ }
-+
-+ prefetch(skb->data);
-+
-+ if (priv->ts_rx_en) {
-+ struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb);
-+ u64 *ns = (u64 *) (vaddr +
-+ priv->buf_layout.private_data_size +
-+ sizeof(struct dpaa2_fas));
-+
-+ *ns = DPAA2_PTP_NOMINAL_FREQ_PERIOD_NS * (*ns);
-+ memset(shhwtstamps, 0, sizeof(*shhwtstamps));
-+ shhwtstamps->hwtstamp = ns_to_ktime(*ns);
-+ }
-+
-+ /* Check if we need to validate the L4 csum */
-+ if (likely(fd->simple.frc & DPAA2_FD_FRC_FASV)) {
-+ fas = (struct dpaa2_fas *)
-+ (vaddr + priv->buf_layout.private_data_size);
-+ status = le32_to_cpu(fas->status);
-+ dpaa2_eth_rx_csum(priv, status, skb);
-+ }
-+
-+ skb->protocol = eth_type_trans(skb, priv->net_dev);
-+
-+ percpu_stats->rx_packets++;
-+ percpu_stats->rx_bytes += skb->len;
-+
-+ if (priv->net_dev->features & NETIF_F_GRO)
-+ napi_gro_receive(napi, skb);
-+ else
-+ netif_receive_skb(skb);
-+
-+ return;
-+err_frame_format:
-+err_build_skb:
-+ dpaa2_eth_free_rx_fd(priv, fd, vaddr);
-+ percpu_stats->rx_dropped++;
-+}
-+
-+#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
-+static void dpaa2_eth_rx_err(struct dpaa2_eth_priv *priv,
-+ struct dpaa2_eth_channel *ch,
-+ const struct dpaa2_fd *fd,
-+ struct napi_struct *napi __always_unused)
-+{
-+ struct device *dev = priv->net_dev->dev.parent;
-+ dma_addr_t addr = dpaa2_fd_get_addr(fd);
-+ void *vaddr;
-+ struct rtnl_link_stats64 *percpu_stats;
-+ struct dpaa2_fas *fas;
-+ u32 status = 0;
-+
-+ dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUFFER_SIZE, DMA_FROM_DEVICE);
-+ vaddr = phys_to_virt(addr);
-+
-+ if (fd->simple.frc & DPAA2_FD_FRC_FASV) {
-+ fas = (struct dpaa2_fas *)
-+ (vaddr + priv->buf_layout.private_data_size);
-+ status = le32_to_cpu(fas->status);
-+
-+ /* All frames received on this queue should have at least
-+ * one of the Rx error bits set */
-+ WARN_ON_ONCE((status & DPAA2_ETH_RX_ERR_MASK) == 0);
-+ netdev_dbg(priv->net_dev, "Rx frame error: 0x%08x\n",
-+ status & DPAA2_ETH_RX_ERR_MASK);
-+ }
-+ dpaa2_eth_free_rx_fd(priv, fd, vaddr);
-+
-+ percpu_stats = this_cpu_ptr(priv->percpu_stats);
-+ percpu_stats->rx_errors++;
-+}
-+#endif
-+
-+/* Consume all frames pull-dequeued into the store. This is the simplest way to
-+ * make sure we don't accidentally issue another volatile dequeue which would
-+ * overwrite (leak) frames already in the store.
-+ *
-+ * Observance of NAPI budget is not our concern, leaving that to the caller.
-+ */
-+static int dpaa2_eth_store_consume(struct dpaa2_eth_channel *ch)
-+{
-+ struct dpaa2_eth_priv *priv = ch->priv;
-+ struct dpaa2_eth_fq *fq;
-+ struct dpaa2_dq *dq;
-+ const struct dpaa2_fd *fd;
-+ int cleaned = 0;
-+ int is_last;
-+
-+ do {
-+ dq = dpaa2_io_store_next(ch->store, &is_last);
-+ if (unlikely(!dq)) {
-+ if (unlikely(!is_last)) {
-+ netdev_dbg(priv->net_dev,
-+				   "Channel %d returned no valid frames\n",
-+ ch->ch_id);
-+ /* MUST retry until we get some sort of
-+ * valid response token (be it "empty dequeue"
-+ * or a valid frame).
-+ */
-+ continue;
-+ }
-+ break;
-+ }
-+
-+ /* Obtain FD and process it */
-+ fd = dpaa2_dq_fd(dq);
-+ fq = (struct dpaa2_eth_fq *)dpaa2_dq_fqd_ctx(dq);
-+ fq->stats.frames++;
-+
-+ fq->consume(priv, ch, fd, &ch->napi);
-+ cleaned++;
-+ } while (!is_last);
-+
-+ return cleaned;
-+}
-+
-+static int dpaa2_eth_build_sg_fd(struct dpaa2_eth_priv *priv,
-+ struct sk_buff *skb,
-+ struct dpaa2_fd *fd)
-+{
-+ struct device *dev = priv->net_dev->dev.parent;
-+ void *sgt_buf = NULL;
-+ dma_addr_t addr;
-+ int nr_frags = skb_shinfo(skb)->nr_frags;
-+ struct dpaa2_sg_entry *sgt;
-+ int i, j, err;
-+ int sgt_buf_size;
-+ struct scatterlist *scl, *crt_scl;
-+ int num_sg;
-+ int num_dma_bufs;
-+ struct dpaa2_eth_swa *bps;
-+
-+ /* Create and map scatterlist.
-+ * We don't advertise NETIF_F_FRAGLIST, so skb_to_sgvec() will not have
-+ * to go beyond nr_frags+1.
-+ * Note: We don't support chained scatterlists
-+ */
-+ WARN_ON(PAGE_SIZE / sizeof(struct scatterlist) < nr_frags + 1);
-+ scl = kcalloc(nr_frags + 1, sizeof(struct scatterlist), GFP_ATOMIC);
-+ if (unlikely(!scl))
-+ return -ENOMEM;
-+
-+ sg_init_table(scl, nr_frags + 1);
-+ num_sg = skb_to_sgvec(skb, scl, 0, skb->len);
-+ num_dma_bufs = dma_map_sg(dev, scl, num_sg, DMA_TO_DEVICE);
-+ if (unlikely(!num_dma_bufs)) {
-+ netdev_err(priv->net_dev, "dma_map_sg() error\n");
-+ err = -ENOMEM;
-+ goto dma_map_sg_failed;
-+ }
-+
-+ /* Prepare the HW SGT structure */
-+ sgt_buf_size = priv->tx_data_offset +
-+ sizeof(struct dpaa2_sg_entry) * (1 + num_dma_bufs);
-+ sgt_buf = kzalloc(sgt_buf_size + DPAA2_ETH_TX_BUF_ALIGN, GFP_ATOMIC);
-+ if (unlikely(!sgt_buf)) {
-+ netdev_err(priv->net_dev, "failed to allocate SGT buffer\n");
-+ err = -ENOMEM;
-+ goto sgt_buf_alloc_failed;
-+ }
-+ sgt_buf = PTR_ALIGN(sgt_buf, DPAA2_ETH_TX_BUF_ALIGN);
-+
-+ /* PTA from egress side is passed as is to the confirmation side so
-+ * we need to clear some fields here in order to find consistent values
-+ * on TX confirmation. We are clearing FAS (Frame Annotation Status)
-+ * field here.
-+ */
-+ memset(sgt_buf + priv->buf_layout.private_data_size, 0, 8);
-+
-+ sgt = (struct dpaa2_sg_entry *)(sgt_buf + priv->tx_data_offset);
-+
-+ /* Fill in the HW SGT structure.
-+ *
-+ * sgt_buf is zeroed out, so the following fields are implicit
-+ * in all sgt entries:
-+ * - offset is 0
-+ * - format is 'dpaa2_sg_single'
-+ */
-+ for_each_sg(scl, crt_scl, num_dma_bufs, i) {
-+ dpaa2_sg_set_addr(&sgt[i], sg_dma_address(crt_scl));
-+ dpaa2_sg_set_len(&sgt[i], sg_dma_len(crt_scl));
-+ }
-+ dpaa2_sg_set_final(&sgt[i - 1], true);
-+
-+ /* Store the skb backpointer in the SGT buffer.
-+ * Fit the scatterlist and the number of buffers alongside the
-+ * skb backpointer in the SWA. We'll need all of them on Tx Conf.
-+ */
-+ bps = (struct dpaa2_eth_swa *)sgt_buf;
-+ bps->skb = skb;
-+ bps->scl = scl;
-+ bps->num_sg = num_sg;
-+ bps->num_dma_bufs = num_dma_bufs;
-+
-+ for (j = 0; j < i; j++)
-+ dpaa2_sg_cpu_to_le(&sgt[j]);
-+
-+ /* Separately map the SGT buffer */
-+ addr = dma_map_single(dev, sgt_buf, sgt_buf_size, DMA_TO_DEVICE);
-+ if (unlikely(dma_mapping_error(dev, addr))) {
-+ netdev_err(priv->net_dev, "dma_map_single() failed\n");
-+ err = -ENOMEM;
-+ goto dma_map_single_failed;
-+ }
-+ dpaa2_fd_set_offset(fd, priv->tx_data_offset);
-+ dpaa2_fd_set_format(fd, dpaa2_fd_sg);
-+ dpaa2_fd_set_addr(fd, addr);
-+ dpaa2_fd_set_len(fd, skb->len);
-+
-+ fd->simple.ctrl = DPAA2_FD_CTRL_ASAL | DPAA2_FD_CTRL_PTA |
-+ DPAA2_FD_CTRL_PTV1;
-+
-+ return 0;
-+
-+dma_map_single_failed:
-+ kfree(sgt_buf);
-+sgt_buf_alloc_failed:
-+ dma_unmap_sg(dev, scl, num_sg, DMA_TO_DEVICE);
-+dma_map_sg_failed:
-+ kfree(scl);
-+ return err;
-+}
-+
-+static int dpaa2_eth_build_single_fd(struct dpaa2_eth_priv *priv,
-+ struct sk_buff *skb,
-+ struct dpaa2_fd *fd)
-+{
-+ struct device *dev = priv->net_dev->dev.parent;
-+ u8 *buffer_start;
-+ struct sk_buff **skbh;
-+ dma_addr_t addr;
-+
-+ buffer_start = PTR_ALIGN(skb->data - priv->tx_data_offset -
-+ DPAA2_ETH_TX_BUF_ALIGN,
-+ DPAA2_ETH_TX_BUF_ALIGN);
-+
-+ /* PTA from egress side is passed as is to the confirmation side so
-+ * we need to clear some fields here in order to find consistent values
-+ * on TX confirmation. We are clearing FAS (Frame Annotation Status)
-+ * field here.
-+ */
-+ memset(buffer_start + priv->buf_layout.private_data_size, 0, 8);
-+
-+ /* Store a backpointer to the skb at the beginning of the buffer
-+ * (in the private data area) such that we can release it
-+ * on Tx confirm
-+ */
-+ skbh = (struct sk_buff **)buffer_start;
-+ *skbh = skb;
-+
-+ addr = dma_map_single(dev,
-+ buffer_start,
-+ skb_tail_pointer(skb) - buffer_start,
-+ DMA_TO_DEVICE);
-+ if (unlikely(dma_mapping_error(dev, addr))) {
-+ dev_err(dev, "dma_map_single() failed\n");
-+ return -EINVAL;
-+ }
-+
-+ dpaa2_fd_set_addr(fd, addr);
-+ dpaa2_fd_set_offset(fd, (u16)(skb->data - buffer_start));
-+ dpaa2_fd_set_len(fd, skb->len);
-+ dpaa2_fd_set_format(fd, dpaa2_fd_single);
-+
-+ fd->simple.ctrl = DPAA2_FD_CTRL_ASAL | DPAA2_FD_CTRL_PTA |
-+ DPAA2_FD_CTRL_PTV1;
-+
-+ return 0;
-+}
-+
-+/* DMA-unmap and free FD and possibly SGT buffer allocated on Tx. The skb
-+ * back-pointed to is also freed.
-+ * This can be called either from dpaa2_eth_tx_conf() or on the error path of
-+ * dpaa2_eth_tx().
-+ * Optionally, return the frame annotation status word (FAS), which needs
-+ * to be checked if we're on the confirmation path.
-+ */
-+static void dpaa2_eth_free_fd(const struct dpaa2_eth_priv *priv,
-+ const struct dpaa2_fd *fd,
-+ u32 *status)
-+{
-+ struct device *dev = priv->net_dev->dev.parent;
-+ dma_addr_t fd_addr;
-+ struct sk_buff **skbh, *skb;
-+ unsigned char *buffer_start;
-+ int unmap_size;
-+ struct scatterlist *scl;
-+ int num_sg, num_dma_bufs;
-+ struct dpaa2_eth_swa *bps;
-+ bool fd_single;
-+ struct dpaa2_fas *fas;
-+
-+ fd_addr = dpaa2_fd_get_addr(fd);
-+ skbh = phys_to_virt(fd_addr);
-+ fd_single = (dpaa2_fd_get_format(fd) == dpaa2_fd_single);
-+
-+ if (fd_single) {
-+ skb = *skbh;
-+ buffer_start = (unsigned char *)skbh;
-+ /* Accessing the skb buffer is safe before dma unmap, because
-+ * we didn't map the actual skb shell.
-+ */
-+ dma_unmap_single(dev, fd_addr,
-+ skb_tail_pointer(skb) - buffer_start,
-+ DMA_TO_DEVICE);
-+ } else {
-+ bps = (struct dpaa2_eth_swa *)skbh;
-+ skb = bps->skb;
-+ scl = bps->scl;
-+ num_sg = bps->num_sg;
-+ num_dma_bufs = bps->num_dma_bufs;
-+
-+ /* Unmap the scatterlist */
-+ dma_unmap_sg(dev, scl, num_sg, DMA_TO_DEVICE);
-+ kfree(scl);
-+
-+ /* Unmap the SGT buffer */
-+ unmap_size = priv->tx_data_offset +
-+ sizeof(struct dpaa2_sg_entry) * (1 + num_dma_bufs);
-+ dma_unmap_single(dev, fd_addr, unmap_size, DMA_TO_DEVICE);
-+ }
-+
-+ if (priv->ts_tx_en && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
-+ struct skb_shared_hwtstamps shhwtstamps;
-+ u64 *ns;
-+
-+ memset(&shhwtstamps, 0, sizeof(shhwtstamps));
-+
-+ ns = (u64 *)((void *)skbh +
-+ priv->buf_layout.private_data_size +
-+ sizeof(struct dpaa2_fas));
-+ *ns = DPAA2_PTP_NOMINAL_FREQ_PERIOD_NS * (*ns);
-+ shhwtstamps.hwtstamp = ns_to_ktime(*ns);
-+ skb_tstamp_tx(skb, &shhwtstamps);
-+ }
-+
-+ /* Check the status from the Frame Annotation after we unmap the first
-+ * buffer but before we free it.
-+ */
-+ if (status && (fd->simple.frc & DPAA2_FD_FRC_FASV)) {
-+ fas = (struct dpaa2_fas *)
-+ ((void *)skbh + priv->buf_layout.private_data_size);
-+ *status = le32_to_cpu(fas->status);
-+ }
-+
-+ /* Free SGT buffer kmalloc'ed on tx */
-+ if (!fd_single)
-+ kfree(skbh);
-+
-+ /* Move on with skb release */
-+ dev_kfree_skb(skb);
-+}
-+
-+static int dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ struct dpaa2_fd fd;
-+ struct rtnl_link_stats64 *percpu_stats;
-+ struct dpaa2_eth_stats *percpu_extras;
-+ int err, i;
-+ /* TxConf FQ selection primarily based on cpu affinity; this is
-+ * non-migratable context, so it's safe to call smp_processor_id().
-+ */
-+ u16 queue_mapping = smp_processor_id() % priv->dpni_attrs.max_senders;
-+
-+ percpu_stats = this_cpu_ptr(priv->percpu_stats);
-+ percpu_extras = this_cpu_ptr(priv->percpu_extras);
-+
-+ /* Setup the FD fields */
-+ memset(&fd, 0, sizeof(fd));
-+
-+ if (unlikely(skb_headroom(skb) < DPAA2_ETH_NEEDED_HEADROOM(priv))) {
-+ struct sk_buff *ns;
-+
-+ dev_info_once(net_dev->dev.parent,
-+ "skb headroom too small, must realloc.\n");
-+ ns = skb_realloc_headroom(skb, DPAA2_ETH_NEEDED_HEADROOM(priv));
-+ if (unlikely(!ns)) {
-+ percpu_stats->tx_dropped++;
-+ goto err_alloc_headroom;
-+ }
-+ dev_kfree_skb(skb);
-+ skb = ns;
-+ }
-+
-+ /* We'll be holding a back-reference to the skb until Tx Confirmation;
-+ * we don't want that overwritten by a concurrent Tx with a cloned skb.
-+ */
-+ skb = skb_unshare(skb, GFP_ATOMIC);
-+ if (unlikely(!skb)) {
-+		netdev_err(net_dev, "Out of memory for skb_unshare()\n");
-+ /* skb_unshare() has already freed the skb */
-+ percpu_stats->tx_dropped++;
-+ return NETDEV_TX_OK;
-+ }
-+
-+ if (skb_is_nonlinear(skb)) {
-+ err = dpaa2_eth_build_sg_fd(priv, skb, &fd);
-+ percpu_extras->tx_sg_frames++;
-+ percpu_extras->tx_sg_bytes += skb->len;
-+ } else {
-+ err = dpaa2_eth_build_single_fd(priv, skb, &fd);
-+ }
-+
-+ if (unlikely(err)) {
-+ percpu_stats->tx_dropped++;
-+ goto err_build_fd;
-+ }
-+
-+ /* Tracing point */
-+ trace_dpaa2_tx_fd(net_dev, &fd);
-+
-+ for (i = 0; i < (DPAA2_ETH_MAX_TX_QUEUES << 1); i++) {
-+ err = dpaa2_io_service_enqueue_qd(NULL, priv->tx_qdid, 0,
-+ priv->fq[queue_mapping].flowid,
-+ &fd);
-+ if (err != -EBUSY)
-+ break;
-+ }
-+ percpu_extras->tx_portal_busy += i;
-+ if (unlikely(err < 0)) {
-+ netdev_dbg(net_dev, "error enqueueing Tx frame\n");
-+ percpu_stats->tx_errors++;
-+ /* Clean up everything, including freeing the skb */
-+ dpaa2_eth_free_fd(priv, &fd, NULL);
-+ } else {
-+ percpu_stats->tx_packets++;
-+ percpu_stats->tx_bytes += skb->len;
-+ }
-+
-+ return NETDEV_TX_OK;
-+
-+err_build_fd:
-+err_alloc_headroom:
-+ dev_kfree_skb(skb);
-+
-+ return NETDEV_TX_OK;
-+}
-+
-+static void dpaa2_eth_tx_conf(struct dpaa2_eth_priv *priv,
-+ struct dpaa2_eth_channel *ch,
-+ const struct dpaa2_fd *fd,
-+ struct napi_struct *napi __always_unused)
-+{
-+ struct rtnl_link_stats64 *percpu_stats;
-+ struct dpaa2_eth_stats *percpu_extras;
-+ u32 status = 0;
-+
-+ /* Tracing point */
-+ trace_dpaa2_tx_conf_fd(priv->net_dev, fd);
-+
-+ percpu_extras = this_cpu_ptr(priv->percpu_extras);
-+ percpu_extras->tx_conf_frames++;
-+ percpu_extras->tx_conf_bytes += dpaa2_fd_get_len(fd);
-+
-+ dpaa2_eth_free_fd(priv, fd, &status);
-+
-+ if (unlikely(status & DPAA2_ETH_TXCONF_ERR_MASK)) {
-+ netdev_err(priv->net_dev, "TxConf frame error(s): 0x%08x\n",
-+ status & DPAA2_ETH_TXCONF_ERR_MASK);
-+ percpu_stats = this_cpu_ptr(priv->percpu_stats);
-+ /* Tx-conf logically pertains to the egress path. */
-+ percpu_stats->tx_errors++;
-+ }
-+}
-+
-+static int dpaa2_eth_set_rx_csum(struct dpaa2_eth_priv *priv, bool enable)
-+{
-+ int err;
-+
-+ err = dpni_set_l3_chksum_validation(priv->mc_io, 0, priv->mc_token,
-+ enable);
-+ if (err) {
-+ netdev_err(priv->net_dev,
-+ "dpni_set_l3_chksum_validation() failed\n");
-+ return err;
-+ }
-+
-+ err = dpni_set_l4_chksum_validation(priv->mc_io, 0, priv->mc_token,
-+ enable);
-+ if (err) {
-+ netdev_err(priv->net_dev,
-+			   "dpni_set_l4_chksum_validation() failed\n");
-+ return err;
-+ }
-+
-+ return 0;
-+}
-+
-+static int dpaa2_eth_set_tx_csum(struct dpaa2_eth_priv *priv, bool enable)
-+{
-+ struct dpaa2_eth_fq *fq;
-+ struct dpni_tx_flow_cfg tx_flow_cfg;
-+ int err;
-+ int i;
-+
-+ memset(&tx_flow_cfg, 0, sizeof(tx_flow_cfg));
-+ tx_flow_cfg.options = DPNI_TX_FLOW_OPT_L3_CHKSUM_GEN |
-+ DPNI_TX_FLOW_OPT_L4_CHKSUM_GEN;
-+ tx_flow_cfg.l3_chksum_gen = enable;
-+ tx_flow_cfg.l4_chksum_gen = enable;
-+
-+ for (i = 0; i < priv->num_fqs; i++) {
-+ fq = &priv->fq[i];
-+ if (fq->type != DPAA2_TX_CONF_FQ)
-+ continue;
-+
-+ /* The Tx flowid is kept in the corresponding TxConf FQ. */
-+ err = dpni_set_tx_flow(priv->mc_io, 0, priv->mc_token,
-+ &fq->flowid, &tx_flow_cfg);
-+ if (err) {
-+ netdev_err(priv->net_dev, "dpni_set_tx_flow failed\n");
-+ return err;
-+ }
-+ }
-+
-+ return 0;
-+}
-+
-+static int dpaa2_bp_add_7(struct dpaa2_eth_priv *priv, u16 bpid)
-+{
-+ struct device *dev = priv->net_dev->dev.parent;
-+ u64 buf_array[7];
-+ void *buf;
-+ dma_addr_t addr;
-+ int i;
-+
-+ for (i = 0; i < 7; i++) {
-+ /* Allocate buffer visible to WRIOP + skb shared info +
-+ * alignment padding
-+ */
-+ buf = napi_alloc_frag(DPAA2_ETH_BUF_RAW_SIZE);
-+ if (unlikely(!buf)) {
-+ dev_err(dev, "buffer allocation failed\n");
-+ goto err_alloc;
-+ }
-+ buf = PTR_ALIGN(buf, DPAA2_ETH_RX_BUF_ALIGN);
-+
-+ addr = dma_map_single(dev, buf, DPAA2_ETH_RX_BUFFER_SIZE,
-+ DMA_FROM_DEVICE);
-+ if (unlikely(dma_mapping_error(dev, addr))) {
-+ dev_err(dev, "dma_map_single() failed\n");
-+ goto err_map;
-+ }
-+ buf_array[i] = addr;
-+
-+ /* tracing point */
-+ trace_dpaa2_eth_buf_seed(priv->net_dev,
-+ buf, DPAA2_ETH_BUF_RAW_SIZE,
-+ addr, DPAA2_ETH_RX_BUFFER_SIZE,
-+ bpid);
-+ }
-+
-+release_bufs:
-+ /* In case the portal is busy, retry until successful.
-+ * The buffer release function would only fail if the QBMan portal
-+ * was busy, which implies portal contention (i.e. more CPUs than
-+ * portals, i.e. GPPs w/o affine DPIOs). For all practical purposes,
-+ * there is little we can realistically do, short of giving up -
-+ * in which case we'd risk depleting the buffer pool and never again
-+ * receiving the Rx interrupt which would kick-start the refill logic.
-+ * So just keep retrying, at the risk of being moved to ksoftirqd.
-+ */
-+ while (dpaa2_io_service_release(NULL, bpid, buf_array, i))
-+ cpu_relax();
-+ return i;
-+
-+err_map:
-+ put_page(virt_to_head_page(buf));
-+err_alloc:
-+ if (i)
-+ goto release_bufs;
-+
-+ return 0;
-+}
-+
-+static int dpaa2_dpbp_seed(struct dpaa2_eth_priv *priv, u16 bpid)
-+{
-+ int i, j;
-+ int new_count;
-+
-+ /* This is the lazy seeding of Rx buffer pools.
-+ * dpaa2_bp_add_7() is also used on the Rx hotpath and calls
-+ * napi_alloc_frag(). The trouble with that is that it in turn ends up
-+ * calling this_cpu_ptr(), which mandates execution in atomic context.
-+ * Rather than splitting up the code, do a one-off preempt disable.
-+ */
-+ preempt_disable();
-+ for (j = 0; j < priv->num_channels; j++) {
-+ for (i = 0; i < DPAA2_ETH_NUM_BUFS; i += 7) {
-+ new_count = dpaa2_bp_add_7(priv, bpid);
-+ priv->channel[j]->buf_count += new_count;
-+
-+ if (new_count < 7) {
-+ preempt_enable();
-+ goto out_of_memory;
-+ }
-+ }
-+ }
-+ preempt_enable();
-+
-+ return 0;
-+
-+out_of_memory:
-+ return -ENOMEM;
-+}
-+
-+/* Drain the specified number of buffers from the DPNI's private buffer pool.
-+ * @count must not exceed 7
-+ */
-+static void dpaa2_dpbp_drain_cnt(struct dpaa2_eth_priv *priv, int count)
-+{
-+ struct device *dev = priv->net_dev->dev.parent;
-+ u64 buf_array[7];
-+ void *vaddr;
-+ int ret, i;
-+
-+ do {
-+ ret = dpaa2_io_service_acquire(NULL, priv->dpbp_attrs.bpid,
-+ buf_array, count);
-+ if (ret < 0) {
-+			dev_err(dev, "dpaa2_io_service_acquire() failed\n");
-+ return;
-+ }
-+ for (i = 0; i < ret; i++) {
-+ /* Same logic as on regular Rx path */
-+ dma_unmap_single(dev, buf_array[i],
-+ DPAA2_ETH_RX_BUFFER_SIZE,
-+ DMA_FROM_DEVICE);
-+ vaddr = phys_to_virt(buf_array[i]);
-+ put_page(virt_to_head_page(vaddr));
-+ }
-+ } while (ret);
-+}
-+
-+static void __dpaa2_dpbp_free(struct dpaa2_eth_priv *priv)
-+{
-+ int i;
-+
-+ dpaa2_dpbp_drain_cnt(priv, 7);
-+ dpaa2_dpbp_drain_cnt(priv, 1);
-+
-+ for (i = 0; i < priv->num_channels; i++)
-+ priv->channel[i]->buf_count = 0;
-+}
-+
-+/* Function is called from softirq context only, so we don't need to guard
-+ * the access to percpu count
-+ */
-+static int dpaa2_dpbp_refill(struct dpaa2_eth_priv *priv,
-+ struct dpaa2_eth_channel *ch,
-+ u16 bpid)
-+{
-+ int new_count;
-+ int err = 0;
-+
-+ if (unlikely(ch->buf_count < DPAA2_ETH_REFILL_THRESH)) {
-+ do {
-+ new_count = dpaa2_bp_add_7(priv, bpid);
-+ if (unlikely(!new_count)) {
-+ /* Out of memory; abort for now, we'll
-+ * try later on
-+ */
-+ break;
-+ }
-+ ch->buf_count += new_count;
-+ } while (ch->buf_count < DPAA2_ETH_NUM_BUFS);
-+
-+ if (unlikely(ch->buf_count < DPAA2_ETH_NUM_BUFS))
-+ err = -ENOMEM;
-+ }
-+
-+ return err;
-+}
-+
-+static int __dpaa2_eth_pull_channel(struct dpaa2_eth_channel *ch)
-+{
-+ int err;
-+ int dequeues = -1;
-+ struct dpaa2_eth_priv *priv = ch->priv;
-+
-+ /* Retry while portal is busy */
-+ do {
-+ err = dpaa2_io_service_pull_channel(NULL, ch->ch_id, ch->store);
-+ dequeues++;
-+ } while (err == -EBUSY);
-+ if (unlikely(err))
-+		netdev_err(priv->net_dev, "dpaa2_io_service_pull err %d\n", err);
-+
-+ ch->stats.dequeue_portal_busy += dequeues;
-+ return err;
-+}
-+
-+static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
-+{
-+ struct dpaa2_eth_channel *ch;
-+ int cleaned = 0, store_cleaned;
-+ struct dpaa2_eth_priv *priv;
-+ int err;
-+
-+ ch = container_of(napi, struct dpaa2_eth_channel, napi);
-+ priv = ch->priv;
-+
-+ __dpaa2_eth_pull_channel(ch);
-+
-+ do {
-+ /* Refill pool if appropriate */
-+ dpaa2_dpbp_refill(priv, ch, priv->dpbp_attrs.bpid);
-+
-+ store_cleaned = dpaa2_eth_store_consume(ch);
-+ cleaned += store_cleaned;
-+
-+ if (store_cleaned == 0 ||
-+ cleaned > budget - DPAA2_ETH_STORE_SIZE)
-+ break;
-+
-+ /* Try to dequeue some more */
-+ err = __dpaa2_eth_pull_channel(ch);
-+ if (unlikely(err))
-+ break;
-+ } while (1);
-+
-+ if (cleaned < budget) {
-+ napi_complete_done(napi, cleaned);
-+ err = dpaa2_io_service_rearm(NULL, &ch->nctx);
-+ if (unlikely(err))
-+ netdev_err(priv->net_dev,
-+ "Notif rearm failed for channel %d\n",
-+ ch->ch_id);
-+ }
-+
-+ ch->stats.frames += cleaned;
-+
-+ return cleaned;
-+}
-+
-+static void dpaa2_eth_napi_enable(struct dpaa2_eth_priv *priv)
-+{
-+ struct dpaa2_eth_channel *ch;
-+ int i;
-+
-+ for (i = 0; i < priv->num_channels; i++) {
-+ ch = priv->channel[i];
-+ napi_enable(&ch->napi);
-+ }
-+}
-+
-+static void dpaa2_eth_napi_disable(struct dpaa2_eth_priv *priv)
-+{
-+ struct dpaa2_eth_channel *ch;
-+ int i;
-+
-+ for (i = 0; i < priv->num_channels; i++) {
-+ ch = priv->channel[i];
-+ napi_disable(&ch->napi);
-+ }
-+}
-+
-+static int dpaa2_link_state_update(struct dpaa2_eth_priv *priv)
-+{
-+ struct dpni_link_state state;
-+ int err;
-+
-+ err = dpni_get_link_state(priv->mc_io, 0, priv->mc_token, &state);
-+ if (unlikely(err)) {
-+ netdev_err(priv->net_dev,
-+ "dpni_get_link_state() failed\n");
-+ return err;
-+ }
-+
-+	/* Check link state; speed / duplex changes are not treated yet */
-+ if (priv->link_state.up == state.up)
-+ return 0;
-+
-+ priv->link_state = state;
-+ if (state.up) {
-+ netif_carrier_on(priv->net_dev);
-+ netif_tx_start_all_queues(priv->net_dev);
-+ } else {
-+ netif_tx_stop_all_queues(priv->net_dev);
-+ netif_carrier_off(priv->net_dev);
-+ }
-+
-+	netdev_info(priv->net_dev, "Link Event: state %s\n",
-+ state.up ? "up" : "down");
-+
-+ return 0;
-+}
-+
-+static int dpaa2_eth_open(struct net_device *net_dev)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ int err;
-+
-+ err = dpaa2_dpbp_seed(priv, priv->dpbp_attrs.bpid);
-+ if (err) {
-+ /* Not much to do; the buffer pool, though not filled up,
-+ * may still contain some buffers which would enable us
-+ * to limp on.
-+ */
-+ netdev_err(net_dev, "Buffer seeding failed for DPBP %d (bpid=%d)\n",
-+ priv->dpbp_dev->obj_desc.id, priv->dpbp_attrs.bpid);
-+ }
-+
-+ /* We'll only start the txqs when the link is actually ready; make sure
-+ * we don't race against the link up notification, which may come
-+	 * immediately after dpni_enable().
-+ */
-+ netif_tx_stop_all_queues(net_dev);
-+ dpaa2_eth_napi_enable(priv);
-+ /* Also, explicitly set carrier off, otherwise netif_carrier_ok() will
-+ * return true and cause 'ip link show' to report the LOWER_UP flag,
-+ * even though the link notification wasn't even received.
-+ */
-+ netif_carrier_off(net_dev);
-+
-+ err = dpni_enable(priv->mc_io, 0, priv->mc_token);
-+ if (err < 0) {
-+ dev_err(net_dev->dev.parent, "dpni_enable() failed\n");
-+ goto enable_err;
-+ }
-+
-+ /* If the DPMAC object has already processed the link up interrupt,
-+ * we have to learn the link state ourselves.
-+ */
-+ err = dpaa2_link_state_update(priv);
-+ if (err < 0) {
-+ dev_err(net_dev->dev.parent, "Can't update link state\n");
-+ goto link_state_err;
-+ }
-+
-+ return 0;
-+
-+link_state_err:
-+enable_err:
-+ dpaa2_eth_napi_disable(priv);
-+ __dpaa2_dpbp_free(priv);
-+ return err;
-+}
-+
-+static int dpaa2_eth_stop(struct net_device *net_dev)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+
-+ /* Stop Tx and Rx traffic */
-+ netif_tx_stop_all_queues(net_dev);
-+ netif_carrier_off(net_dev);
-+ dpni_disable(priv->mc_io, 0, priv->mc_token);
-+
-+ msleep(500);
-+
-+ dpaa2_eth_napi_disable(priv);
-+ msleep(100);
-+
-+ __dpaa2_dpbp_free(priv);
-+
-+ return 0;
-+}
-+
-+static int dpaa2_eth_init(struct net_device *net_dev)
-+{
-+ u64 supported = 0;
-+ u64 not_supported = 0;
-+ const struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ u32 options = priv->dpni_attrs.options;
-+
-+ /* Capabilities listing */
-+ supported |= IFF_LIVE_ADDR_CHANGE | IFF_PROMISC | IFF_ALLMULTI;
-+
-+ if (options & DPNI_OPT_UNICAST_FILTER)
-+ supported |= IFF_UNICAST_FLT;
-+ else
-+ not_supported |= IFF_UNICAST_FLT;
-+
-+ if (options & DPNI_OPT_MULTICAST_FILTER)
-+ supported |= IFF_MULTICAST;
-+ else
-+ not_supported |= IFF_MULTICAST;
-+
-+ net_dev->priv_flags |= supported;
-+ net_dev->priv_flags &= ~not_supported;
-+
-+ /* Features */
-+ net_dev->features = NETIF_F_RXCSUM |
-+ NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
-+ NETIF_F_SG | NETIF_F_HIGHDMA |
-+ NETIF_F_LLTX;
-+ net_dev->hw_features = net_dev->features;
-+
-+ return 0;
-+}
-+
-+static int dpaa2_eth_set_addr(struct net_device *net_dev, void *addr)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ struct device *dev = net_dev->dev.parent;
-+ int err;
-+
-+ err = eth_mac_addr(net_dev, addr);
-+ if (err < 0) {
-+ dev_err(dev, "eth_mac_addr() failed with error %d\n", err);
-+ return err;
-+ }
-+
-+ err = dpni_set_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
-+ net_dev->dev_addr);
-+ if (err) {
-+ dev_err(dev, "dpni_set_primary_mac_addr() failed (%d)\n", err);
-+ return err;
-+ }
-+
-+ return 0;
-+}
-+
-+/* Fill in counters maintained by the GPP driver. These may be different from
-+ * the hardware counters obtained by ethtool.
-+ */
-+static struct rtnl_link_stats64
-+*dpaa2_eth_get_stats(struct net_device *net_dev,
-+ struct rtnl_link_stats64 *stats)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ struct rtnl_link_stats64 *percpu_stats;
-+ u64 *cpustats;
-+ u64 *netstats = (u64 *)stats;
-+ int i, j;
-+ int num = sizeof(struct rtnl_link_stats64) / sizeof(u64);
-+
-+ for_each_possible_cpu(i) {
-+ percpu_stats = per_cpu_ptr(priv->percpu_stats, i);
-+ cpustats = (u64 *)percpu_stats;
-+ for (j = 0; j < num; j++)
-+ netstats[j] += cpustats[j];
-+ }
-+
-+ return stats;
-+}
-+
-+static int dpaa2_eth_change_mtu(struct net_device *net_dev, int mtu)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ int err;
-+
-+ if (mtu < 68 || mtu > DPAA2_ETH_MAX_MTU) {
-+ netdev_err(net_dev, "Invalid MTU %d. Valid range is: 68..%d\n",
-+ mtu, DPAA2_ETH_MAX_MTU);
-+ return -EINVAL;
-+ }
-+
-+ /* Set the maximum Rx frame length to match the transmit side;
-+ * account for L2 headers when computing the MFL
-+ */
-+ err = dpni_set_max_frame_length(priv->mc_io, 0, priv->mc_token,
-+ (u16)DPAA2_ETH_L2_MAX_FRM(mtu));
-+ if (err) {
-+		netdev_err(net_dev, "dpni_set_max_frame_length() failed\n");
-+ return err;
-+ }
-+
-+ net_dev->mtu = mtu;
-+ return 0;
-+}
-+
-+/* Convenience macro to make code littered with error checking more readable */
-+#define DPAA2_ETH_WARN_IF_ERR(err, netdevp, format, ...) \
-+do { \
-+ if (err) \
-+ netdev_warn(netdevp, format, ##__VA_ARGS__); \
-+} while (0)
-+
-+/* Copy mac unicast addresses from @net_dev to @priv.
-+ * Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
-+ */
-+static void _dpaa2_eth_hw_add_uc_addr(const struct net_device *net_dev,
-+ struct dpaa2_eth_priv *priv)
-+{
-+ struct netdev_hw_addr *ha;
-+ int err;
-+
-+ netdev_for_each_uc_addr(ha, net_dev) {
-+ err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
-+ ha->addr);
-+ DPAA2_ETH_WARN_IF_ERR(err, priv->net_dev,
-+ "Could not add ucast MAC %pM to the filtering table (err %d)\n",
-+ ha->addr, err);
-+ }
-+}
-+
-+/* Copy mac multicast addresses from @net_dev to @priv
-+ * Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
-+ */
-+static void _dpaa2_eth_hw_add_mc_addr(const struct net_device *net_dev,
-+ struct dpaa2_eth_priv *priv)
-+{
-+ struct netdev_hw_addr *ha;
-+ int err;
-+
-+ netdev_for_each_mc_addr(ha, net_dev) {
-+ err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
-+ ha->addr);
-+ DPAA2_ETH_WARN_IF_ERR(err, priv->net_dev,
-+ "Could not add mcast MAC %pM to the filtering table (err %d)\n",
-+ ha->addr, err);
-+ }
-+}
-+
-+static void dpaa2_eth_set_rx_mode(struct net_device *net_dev)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ int uc_count = netdev_uc_count(net_dev);
-+ int mc_count = netdev_mc_count(net_dev);
-+ u8 max_uc = priv->dpni_attrs.max_unicast_filters;
-+ u8 max_mc = priv->dpni_attrs.max_multicast_filters;
-+ u32 options = priv->dpni_attrs.options;
-+ u16 mc_token = priv->mc_token;
-+ struct fsl_mc_io *mc_io = priv->mc_io;
-+ int err;
-+
-+ /* Basic sanity checks; these probably indicate a misconfiguration */
-+ if (!(options & DPNI_OPT_UNICAST_FILTER) && max_uc != 0)
-+ netdev_info(net_dev,
-+ "max_unicast_filters=%d, you must have DPNI_OPT_UNICAST_FILTER in the DPL\n",
-+ max_uc);
-+ if (!(options & DPNI_OPT_MULTICAST_FILTER) && max_mc != 0)
-+ netdev_info(net_dev,
-+ "max_multicast_filters=%d, you must have DPNI_OPT_MULTICAST_FILTER in the DPL\n",
-+ max_mc);
-+
-+ /* Force promiscuous if the uc or mc counts exceed our capabilities. */
-+ if (uc_count > max_uc) {
-+ netdev_info(net_dev,
-+ "Unicast addr count reached %d, max allowed is %d; forcing promisc\n",
-+ uc_count, max_uc);
-+ goto force_promisc;
-+ }
-+ if (mc_count > max_mc) {
-+ netdev_info(net_dev,
-+ "Multicast addr count reached %d, max allowed is %d; forcing promisc\n",
-+ mc_count, max_mc);
-+ goto force_mc_promisc;
-+ }
-+
-+ /* Adjust promisc settings due to flag combinations */
-+ if (net_dev->flags & IFF_PROMISC) {
-+ goto force_promisc;
-+ } else if (net_dev->flags & IFF_ALLMULTI) {
-+ /* First, rebuild unicast filtering table. This should be done
-+ * in promisc mode, in order to avoid frame loss while we
-+ * progressively add entries to the table.
-+ * We don't know whether we had been in promisc already, and
-+ * making an MC call to find it is expensive; so set uc promisc
-+ * nonetheless.
-+ */
-+ err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
-+ DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set uc promisc\n");
-+
-+ /* Actual uc table reconstruction. */
-+ err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 0);
-+ DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear uc filters\n");
-+ _dpaa2_eth_hw_add_uc_addr(net_dev, priv);
-+
-+ /* Finally, clear uc promisc and set mc promisc as requested. */
-+ err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
-+ DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear uc promisc\n");
-+ goto force_mc_promisc;
-+ }
-+
-+ /* Neither unicast, nor multicast promisc will be on... eventually.
-+ * For now, rebuild mac filtering tables while forcing both of them on.
-+ */
-+ err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
-+ DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set uc promisc (%d)\n", err);
-+ err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
-+ DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set mc promisc (%d)\n", err);
-+
-+ /* Actual mac filtering tables reconstruction */
-+ err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 1);
-+ DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear mac filters\n");
-+ _dpaa2_eth_hw_add_mc_addr(net_dev, priv);
-+ _dpaa2_eth_hw_add_uc_addr(net_dev, priv);
-+
-+ /* Now we can clear both ucast and mcast promisc, without risking
-+ * to drop legitimate frames anymore.
-+ */
-+ err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
-+ DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear ucast promisc\n");
-+ err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 0);
-+ DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't clear mcast promisc\n");
-+
-+ return;
-+
-+force_promisc:
-+ err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
-+ DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set ucast promisc\n");
-+force_mc_promisc:
-+ err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
-+ DPAA2_ETH_WARN_IF_ERR(err, net_dev, "Can't set mcast promisc\n");
-+}
-+
-+static int dpaa2_eth_set_features(struct net_device *net_dev,
-+ netdev_features_t features)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ netdev_features_t changed = features ^ net_dev->features;
-+ int err;
-+
-+ if (changed & NETIF_F_RXCSUM) {
-+ bool enable = !!(features & NETIF_F_RXCSUM);
-+
-+ err = dpaa2_eth_set_rx_csum(priv, enable);
-+ if (err)
-+ return err;
-+ }
-+
-+ if (changed & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
-+ bool enable = !!(features &
-+ (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM));
-+ err = dpaa2_eth_set_tx_csum(priv, enable);
-+ if (err)
-+ return err;
-+ }
-+
-+ return 0;
-+}
-+
-+static int dpaa2_eth_ts_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(dev);
-+ struct hwtstamp_config config;
-+
-+ if (copy_from_user(&config, rq->ifr_data, sizeof(config)))
-+ return -EFAULT;
-+
-+ switch (config.tx_type) {
-+ case HWTSTAMP_TX_OFF:
-+ priv->ts_tx_en = false;
-+ break;
-+ case HWTSTAMP_TX_ON:
-+ priv->ts_tx_en = true;
-+ break;
-+ default:
-+ return -ERANGE;
-+ }
-+
-+	if (config.rx_filter == HWTSTAMP_FILTER_NONE) {
-+		priv->ts_rx_en = false;
-+	} else {
-+ priv->ts_rx_en = true;
-+ /* TS is set for all frame types, not only those requested */
-+ config.rx_filter = HWTSTAMP_FILTER_ALL;
-+ }
-+
-+ return copy_to_user(rq->ifr_data, &config, sizeof(config)) ?
-+ -EFAULT : 0;
-+}
-+
-+static int dpaa2_eth_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
-+{
-+ if (cmd == SIOCSHWTSTAMP)
-+ return dpaa2_eth_ts_ioctl(dev, rq, cmd);
-+ else
-+ return -EINVAL;
-+}
-+
-+static const struct net_device_ops dpaa2_eth_ops = {
-+ .ndo_open = dpaa2_eth_open,
-+ .ndo_start_xmit = dpaa2_eth_tx,
-+ .ndo_stop = dpaa2_eth_stop,
-+ .ndo_init = dpaa2_eth_init,
-+ .ndo_set_mac_address = dpaa2_eth_set_addr,
-+ .ndo_get_stats64 = dpaa2_eth_get_stats,
-+ .ndo_change_mtu = dpaa2_eth_change_mtu,
-+ .ndo_set_rx_mode = dpaa2_eth_set_rx_mode,
-+ .ndo_set_features = dpaa2_eth_set_features,
-+ .ndo_do_ioctl = dpaa2_eth_ioctl,
-+};
-+
-+static void dpaa2_eth_cdan_cb(struct dpaa2_io_notification_ctx *ctx)
-+{
-+ struct dpaa2_eth_channel *ch;
-+
-+ ch = container_of(ctx, struct dpaa2_eth_channel, nctx);
-+
-+ /* Update NAPI statistics */
-+ ch->stats.cdan++;
-+
-+ napi_schedule_irqoff(&ch->napi);
-+}
-+
-+static void dpaa2_eth_setup_fqs(struct dpaa2_eth_priv *priv)
-+{
-+ int i;
-+
-+ /* We have one TxConf FQ per Tx flow */
-+ for (i = 0; i < priv->dpni_attrs.max_senders; i++) {
-+ priv->fq[priv->num_fqs].netdev_priv = priv;
-+ priv->fq[priv->num_fqs].type = DPAA2_TX_CONF_FQ;
-+ priv->fq[priv->num_fqs].consume = dpaa2_eth_tx_conf;
-+ priv->fq[priv->num_fqs++].flowid = DPNI_NEW_FLOW_ID;
-+ }
-+
-+ /* The number of Rx queues (Rx distribution width) may be different from
-+ * the number of cores.
-+ * We only support one traffic class for now.
-+ */
-+ for (i = 0; i < dpaa2_queue_count(priv); i++) {
-+ priv->fq[priv->num_fqs].netdev_priv = priv;
-+ priv->fq[priv->num_fqs].type = DPAA2_RX_FQ;
-+ priv->fq[priv->num_fqs].consume = dpaa2_eth_rx;
-+ priv->fq[priv->num_fqs++].flowid = (u16)i;
-+ }
-+
-+#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
-+ /* We have exactly one Rx error queue per DPNI */
-+ priv->fq[priv->num_fqs].netdev_priv = priv;
-+ priv->fq[priv->num_fqs].type = DPAA2_RX_ERR_FQ;
-+ priv->fq[priv->num_fqs++].consume = dpaa2_eth_rx_err;
-+#endif
-+}
-+
-+static int check_obj_version(struct fsl_mc_device *ls_dev, u16 mc_version)
-+{
-+ char *name = ls_dev->obj_desc.type;
-+ struct device *dev = &ls_dev->dev;
-+ u16 supported_version, flib_version;
-+
-+ if (strcmp(name, "dpni") == 0) {
-+ flib_version = DPNI_VER_MAJOR;
-+ supported_version = DPAA2_SUPPORTED_DPNI_VERSION;
-+ } else if (strcmp(name, "dpbp") == 0) {
-+ flib_version = DPBP_VER_MAJOR;
-+ supported_version = DPAA2_SUPPORTED_DPBP_VERSION;
-+ } else if (strcmp(name, "dpcon") == 0) {
-+ flib_version = DPCON_VER_MAJOR;
-+ supported_version = DPAA2_SUPPORTED_DPCON_VERSION;
-+ } else {
-+ dev_err(dev, "invalid object type (%s)\n", name);
-+ return -EINVAL;
-+ }
-+
-+ /* Check that the FLIB-defined version matches the one reported by MC */
-+ if (mc_version != flib_version) {
-+ dev_err(dev,
-+ "%s FLIB version mismatch: MC reports %d, we have %d\n",
-+ name, mc_version, flib_version);
-+ return -EINVAL;
-+ }
-+
-+ /* ... and that we actually support it */
-+ if (mc_version < supported_version) {
-+ dev_err(dev, "Unsupported %s FLIB version (%d)\n",
-+ name, mc_version);
-+ return -EINVAL;
-+ }
-+ dev_dbg(dev, "Using %s FLIB version %d\n", name, mc_version);
-+
-+ return 0;
-+}
-+
-+static struct fsl_mc_device *dpaa2_dpcon_setup(struct dpaa2_eth_priv *priv)
-+{
-+ struct fsl_mc_device *dpcon;
-+ struct device *dev = priv->net_dev->dev.parent;
-+ struct dpcon_attr attrs;
-+ int err;
-+
-+ err = fsl_mc_object_allocate(to_fsl_mc_device(dev),
-+ FSL_MC_POOL_DPCON, &dpcon);
-+ if (err) {
-+ dev_info(dev, "Not enough DPCONs, will go on as-is\n");
-+ return NULL;
-+ }
-+
-+ err = dpcon_open(priv->mc_io, 0, dpcon->obj_desc.id, &dpcon->mc_handle);
-+ if (err) {
-+ dev_err(dev, "dpcon_open() failed\n");
-+ goto err_open;
-+ }
-+
-+ err = dpcon_get_attributes(priv->mc_io, 0, dpcon->mc_handle, &attrs);
-+ if (err) {
-+ dev_err(dev, "dpcon_get_attributes() failed\n");
-+ goto err_get_attr;
-+ }
-+
-+ err = check_obj_version(dpcon, attrs.version.major);
-+ if (err)
-+ goto err_dpcon_ver;
-+
-+ err = dpcon_enable(priv->mc_io, 0, dpcon->mc_handle);
-+ if (err) {
-+ dev_err(dev, "dpcon_enable() failed\n");
-+ goto err_enable;
-+ }
-+
-+ return dpcon;
-+
-+err_enable:
-+err_dpcon_ver:
-+err_get_attr:
-+ dpcon_close(priv->mc_io, 0, dpcon->mc_handle);
-+err_open:
-+ fsl_mc_object_free(dpcon);
-+
-+ return NULL;
-+}
-+
-+static void dpaa2_dpcon_free(struct dpaa2_eth_priv *priv,
-+ struct fsl_mc_device *dpcon)
-+{
-+ dpcon_disable(priv->mc_io, 0, dpcon->mc_handle);
-+ dpcon_close(priv->mc_io, 0, dpcon->mc_handle);
-+ fsl_mc_object_free(dpcon);
-+}
-+
-+static struct dpaa2_eth_channel *
-+dpaa2_alloc_channel(struct dpaa2_eth_priv *priv)
-+{
-+ struct dpaa2_eth_channel *channel;
-+ struct dpcon_attr attr;
-+ struct device *dev = priv->net_dev->dev.parent;
-+ int err;
-+
-+ channel = kzalloc(sizeof(*channel), GFP_ATOMIC);
-+ if (!channel) {
-+ dev_err(dev, "Memory allocation failed\n");
-+ return NULL;
-+ }
-+
-+ channel->dpcon = dpaa2_dpcon_setup(priv);
-+ if (!channel->dpcon)
-+ goto err_setup;
-+
-+ err = dpcon_get_attributes(priv->mc_io, 0, channel->dpcon->mc_handle,
-+ &attr);
-+ if (err) {
-+ dev_err(dev, "dpcon_get_attributes() failed\n");
-+ goto err_get_attr;
-+ }
-+
-+ channel->dpcon_id = attr.id;
-+ channel->ch_id = attr.qbman_ch_id;
-+ channel->priv = priv;
-+
-+ return channel;
-+
-+err_get_attr:
-+ dpaa2_dpcon_free(priv, channel->dpcon);
-+err_setup:
-+ kfree(channel);
-+ return NULL;
-+}
-+
-+static void dpaa2_free_channel(struct dpaa2_eth_priv *priv,
-+ struct dpaa2_eth_channel *channel)
-+{
-+ dpaa2_dpcon_free(priv, channel->dpcon);
-+ kfree(channel);
-+}
-+
-+static int dpaa2_dpio_setup(struct dpaa2_eth_priv *priv)
-+{
-+ struct dpaa2_io_notification_ctx *nctx;
-+ struct dpaa2_eth_channel *channel;
-+ struct dpcon_notification_cfg dpcon_notif_cfg;
-+ struct device *dev = priv->net_dev->dev.parent;
-+ int i, err;
-+
-+ /* Don't allocate more channels than strictly necessary and assign
-+ * them to cores starting from the first one available in
-+ * cpu_online_mask.
-+ * If the number of channels is lower than the number of cores,
-+ * there will be no rx/tx conf processing on the last cores in the mask.
-+ */
-+ cpumask_clear(&priv->dpio_cpumask);
-+ for_each_online_cpu(i) {
-+ /* Try to allocate a channel */
-+ channel = dpaa2_alloc_channel(priv);
-+ if (!channel)
-+ goto err_alloc_ch;
-+
-+ priv->channel[priv->num_channels] = channel;
-+
-+ nctx = &channel->nctx;
-+ nctx->is_cdan = 1;
-+ nctx->cb = dpaa2_eth_cdan_cb;
-+ nctx->id = channel->ch_id;
-+ nctx->desired_cpu = i;
-+
-+ /* Register the new context */
-+ err = dpaa2_io_service_register(NULL, nctx);
-+ if (err) {
-+ dev_info(dev, "No affine DPIO for core %d\n", i);
-+ /* This core doesn't have an affine DPIO, but there's
-+ * a chance another one does, so keep trying
-+ */
-+ dpaa2_free_channel(priv, channel);
-+ continue;
-+ }
-+
-+ /* Register DPCON notification with MC */
-+ dpcon_notif_cfg.dpio_id = nctx->dpio_id;
-+ dpcon_notif_cfg.priority = 0;
-+ dpcon_notif_cfg.user_ctx = nctx->qman64;
-+ err = dpcon_set_notification(priv->mc_io, 0,
-+ channel->dpcon->mc_handle,
-+ &dpcon_notif_cfg);
-+ if (err) {
-+			dev_err(dev, "dpcon_set_notification() failed\n");
-+ goto err_set_cdan;
-+ }
-+
-+ /* If we managed to allocate a channel and also found an affine
-+ * DPIO for this core, add it to the final mask
-+ */
-+ cpumask_set_cpu(i, &priv->dpio_cpumask);
-+ priv->num_channels++;
-+
-+ if (priv->num_channels == dpaa2_max_channels(priv))
-+ break;
-+ }
-+
-+ /* Tx confirmation queues can only be serviced by cpus
-+ * with an affine DPIO/channel
-+ */
-+ cpumask_copy(&priv->txconf_cpumask, &priv->dpio_cpumask);
-+
-+ return 0;
-+
-+err_set_cdan:
-+ dpaa2_io_service_deregister(NULL, nctx);
-+ dpaa2_free_channel(priv, channel);
-+err_alloc_ch:
-+ if (cpumask_empty(&priv->dpio_cpumask)) {
-+ dev_err(dev, "No cpu with an affine DPIO/DPCON\n");
-+ return -ENODEV;
-+ }
-+ cpumask_copy(&priv->txconf_cpumask, &priv->dpio_cpumask);
-+
-+ return 0;
-+}
-+
-+static void dpaa2_dpio_free(struct dpaa2_eth_priv *priv)
-+{
-+ int i;
-+ struct dpaa2_eth_channel *ch;
-+
-+ /* deregister CDAN notifications and free channels */
-+ for (i = 0; i < priv->num_channels; i++) {
-+ ch = priv->channel[i];
-+ dpaa2_io_service_deregister(NULL, &ch->nctx);
-+ dpaa2_free_channel(priv, ch);
-+ }
-+}
-+
-+static struct dpaa2_eth_channel *
-+dpaa2_get_channel_by_cpu(struct dpaa2_eth_priv *priv, int cpu)
-+{
-+ struct device *dev = priv->net_dev->dev.parent;
-+ int i;
-+
-+ for (i = 0; i < priv->num_channels; i++)
-+ if (priv->channel[i]->nctx.desired_cpu == cpu)
-+ return priv->channel[i];
-+
-+ /* We should never get here. Issue a warning and return
-+ * the first channel, because it's still better than nothing
-+ */
-+ dev_warn(dev, "No affine channel found for cpu %d\n", cpu);
-+
-+ return priv->channel[0];
-+}
-+
-+static void dpaa2_set_fq_affinity(struct dpaa2_eth_priv *priv)
-+{
-+ struct device *dev = priv->net_dev->dev.parent;
-+ struct dpaa2_eth_fq *fq;
-+ int rx_cpu, txconf_cpu;
-+ int i;
-+
-+ /* For each FQ, pick one channel/CPU to deliver frames to.
-+ * This may well change at runtime, either through irqbalance or
-+ * through direct user intervention.
-+ */
-+ rx_cpu = cpumask_first(&priv->dpio_cpumask);
-+ txconf_cpu = cpumask_first(&priv->txconf_cpumask);
-+
-+ for (i = 0; i < priv->num_fqs; i++) {
-+ fq = &priv->fq[i];
-+ switch (fq->type) {
-+ case DPAA2_RX_FQ:
-+ case DPAA2_RX_ERR_FQ:
-+ fq->target_cpu = rx_cpu;
-+ cpumask_rr(rx_cpu, &priv->dpio_cpumask);
-+ break;
-+ case DPAA2_TX_CONF_FQ:
-+ fq->target_cpu = txconf_cpu;
-+ cpumask_rr(txconf_cpu, &priv->txconf_cpumask);
-+ break;
-+ default:
-+ dev_err(dev, "Unknown FQ type: %d\n", fq->type);
-+ }
-+ fq->channel = dpaa2_get_channel_by_cpu(priv, fq->target_cpu);
-+ }
-+}
-+
-+static int dpaa2_dpbp_setup(struct dpaa2_eth_priv *priv)
-+{
-+ int err;
-+ struct fsl_mc_device *dpbp_dev;
-+ struct device *dev = priv->net_dev->dev.parent;
-+
-+ err = fsl_mc_object_allocate(to_fsl_mc_device(dev), FSL_MC_POOL_DPBP,
-+ &dpbp_dev);
-+ if (err) {
-+ dev_err(dev, "DPBP device allocation failed\n");
-+ return err;
-+ }
-+
-+ priv->dpbp_dev = dpbp_dev;
-+
-+ err = dpbp_open(priv->mc_io, 0, priv->dpbp_dev->obj_desc.id,
-+ &dpbp_dev->mc_handle);
-+ if (err) {
-+ dev_err(dev, "dpbp_open() failed\n");
-+ goto err_open;
-+ }
-+
-+ err = dpbp_enable(priv->mc_io, 0, dpbp_dev->mc_handle);
-+ if (err) {
-+ dev_err(dev, "dpbp_enable() failed\n");
-+ goto err_enable;
-+ }
-+
-+ err = dpbp_get_attributes(priv->mc_io, 0, dpbp_dev->mc_handle,
-+ &priv->dpbp_attrs);
-+ if (err) {
-+ dev_err(dev, "dpbp_get_attributes() failed\n");
-+ goto err_get_attr;
-+ }
-+
-+ err = check_obj_version(dpbp_dev, priv->dpbp_attrs.version.major);
-+ if (err)
-+ goto err_dpbp_ver;
-+
-+ return 0;
-+
-+err_dpbp_ver:
-+err_get_attr:
-+ dpbp_disable(priv->mc_io, 0, dpbp_dev->mc_handle);
-+err_enable:
-+ dpbp_close(priv->mc_io, 0, dpbp_dev->mc_handle);
-+err_open:
-+ fsl_mc_object_free(dpbp_dev);
-+
-+ return err;
-+}
-+
-+static void dpaa2_dpbp_free(struct dpaa2_eth_priv *priv)
-+{
-+ __dpaa2_dpbp_free(priv);
-+ dpbp_disable(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
-+ dpbp_close(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
-+ fsl_mc_object_free(priv->dpbp_dev);
-+}
-+
-+static int dpaa2_dpni_setup(struct fsl_mc_device *ls_dev)
-+{
-+ struct device *dev = &ls_dev->dev;
-+ struct dpaa2_eth_priv *priv;
-+ struct net_device *net_dev;
-+ void *dma_mem;
-+ int err;
-+
-+ net_dev = dev_get_drvdata(dev);
-+ priv = netdev_priv(net_dev);
-+
-+ priv->dpni_id = ls_dev->obj_desc.id;
-+
-+	/* Get a handle for the DPNI this interface is associated with */
-+ err = dpni_open(priv->mc_io, 0, priv->dpni_id, &priv->mc_token);
-+ if (err) {
-+ dev_err(dev, "dpni_open() failed\n");
-+ goto err_open;
-+ }
-+
-+ ls_dev->mc_io = priv->mc_io;
-+ ls_dev->mc_handle = priv->mc_token;
-+
-+ dma_mem = kzalloc(DPAA2_EXT_CFG_SIZE, GFP_DMA | GFP_KERNEL);
-+	if (!dma_mem) {
-+		err = -ENOMEM;
-+		goto err_alloc;
-+	}
-+
-+ priv->dpni_attrs.ext_cfg_iova = dma_map_single(dev, dma_mem,
-+ DPAA2_EXT_CFG_SIZE,
-+ DMA_FROM_DEVICE);
-+	if (dma_mapping_error(dev, priv->dpni_attrs.ext_cfg_iova)) {
-+		dev_err(dev, "dma mapping for dpni_ext_cfg failed\n");
-+		err = -ENOMEM;
-+		goto err_dma_map;
-+	}
-+
-+ err = dpni_get_attributes(priv->mc_io, 0, priv->mc_token,
-+ &priv->dpni_attrs);
-+ if (err) {
-+ dev_err(dev, "dpni_get_attributes() failed (err=%d)\n", err);
-+ dma_unmap_single(dev, priv->dpni_attrs.ext_cfg_iova,
-+ DPAA2_EXT_CFG_SIZE, DMA_FROM_DEVICE);
-+ goto err_get_attr;
-+ }
-+
-+ err = check_obj_version(ls_dev, priv->dpni_attrs.version.major);
-+ if (err)
-+ goto err_dpni_ver;
-+
-+ dma_unmap_single(dev, priv->dpni_attrs.ext_cfg_iova,
-+ DPAA2_EXT_CFG_SIZE, DMA_FROM_DEVICE);
-+
-+ memset(&priv->dpni_ext_cfg, 0, sizeof(priv->dpni_ext_cfg));
-+ err = dpni_extract_extended_cfg(&priv->dpni_ext_cfg, dma_mem);
-+ if (err) {
-+ dev_err(dev, "dpni_extract_extended_cfg() failed\n");
-+ goto err_extract;
-+ }
-+
-+ /* Configure our buffers' layout */
-+ priv->buf_layout.options = DPNI_BUF_LAYOUT_OPT_PARSER_RESULT |
-+ DPNI_BUF_LAYOUT_OPT_FRAME_STATUS |
-+ DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE |
-+ DPNI_BUF_LAYOUT_OPT_DATA_ALIGN;
-+ priv->buf_layout.pass_parser_result = true;
-+ priv->buf_layout.pass_frame_status = true;
-+ priv->buf_layout.private_data_size = DPAA2_ETH_SWA_SIZE;
-+ /* HW erratum mandates data alignment in multiples of 256 */
-+ priv->buf_layout.data_align = DPAA2_ETH_RX_BUF_ALIGN;
-+ /* ...rx, ... */
-+ err = dpni_set_rx_buffer_layout(priv->mc_io, 0, priv->mc_token,
-+ &priv->buf_layout);
-+ if (err) {
-+ dev_err(dev, "dpni_set_rx_buffer_layout() failed");
-+ goto err_buf_layout;
-+ }
-+ /* ... tx, ... */
-+ /* remove Rx-only options */
-+ priv->buf_layout.options &= ~(DPNI_BUF_LAYOUT_OPT_DATA_ALIGN |
-+ DPNI_BUF_LAYOUT_OPT_PARSER_RESULT);
-+ err = dpni_set_tx_buffer_layout(priv->mc_io, 0, priv->mc_token,
-+ &priv->buf_layout);
-+ if (err) {
-+ dev_err(dev, "dpni_set_tx_buffer_layout() failed");
-+ goto err_buf_layout;
-+ }
-+ /* ... tx-confirm. */
-+ priv->buf_layout.options &= ~DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE;
-+ priv->buf_layout.options |= DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
-+ priv->buf_layout.pass_timestamp = 1;
-+ err = dpni_set_tx_conf_buffer_layout(priv->mc_io, 0, priv->mc_token,
-+ &priv->buf_layout);
-+ if (err) {
-+ dev_err(dev, "dpni_set_tx_conf_buffer_layout() failed");
-+ goto err_buf_layout;
-+ }
-+ /* Now that we've set our tx buffer layout, retrieve the minimum
-+ * required tx data offset.
-+ */
-+ err = dpni_get_tx_data_offset(priv->mc_io, 0, priv->mc_token,
-+ &priv->tx_data_offset);
-+ if (err) {
-+ dev_err(dev, "dpni_get_tx_data_offset() failed\n");
-+ goto err_data_offset;
-+ }
-+
-+ /* Warn in case TX data offset is not a multiple of 64 bytes. */
-+ WARN_ON(priv->tx_data_offset % 64);
-+
-+ /* Accommodate SWA space. */
-+ priv->tx_data_offset += DPAA2_ETH_SWA_SIZE;
-+
-+ /* allocate classification rule space */
-+ priv->cls_rule = kzalloc(sizeof(*priv->cls_rule) *
-+ DPAA2_CLASSIFIER_ENTRY_COUNT, GFP_KERNEL);
-+ if (!priv->cls_rule)
-+ goto err_cls_rule;
-+
-+ kfree(dma_mem);
-+
-+ return 0;
-+
-+err_cls_rule:
-+err_data_offset:
-+err_buf_layout:
-+err_extract:
-+err_dpni_ver:
-+err_get_attr:
-+err_dma_map:
-+ kfree(dma_mem);
-+err_alloc:
-+ dpni_close(priv->mc_io, 0, priv->mc_token);
-+err_open:
-+ return err;
-+}
-+
-+static void dpaa2_dpni_free(struct dpaa2_eth_priv *priv)
-+{
-+ int err;
-+
-+ err = dpni_reset(priv->mc_io, 0, priv->mc_token);
-+ if (err)
-+ netdev_warn(priv->net_dev, "dpni_reset() failed (err %d)\n",
-+ err);
-+
-+ dpni_close(priv->mc_io, 0, priv->mc_token);
-+}
-+
-+static int dpaa2_rx_flow_setup(struct dpaa2_eth_priv *priv,
-+ struct dpaa2_eth_fq *fq)
-+{
-+ struct device *dev = priv->net_dev->dev.parent;
-+ struct dpni_queue_attr rx_queue_attr;
-+ struct dpni_queue_cfg queue_cfg;
-+ int err;
-+
-+ memset(&queue_cfg, 0, sizeof(queue_cfg));
-+ queue_cfg.options = DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST |
-+ DPNI_QUEUE_OPT_TAILDROP_THRESHOLD;
-+ queue_cfg.dest_cfg.dest_type = DPNI_DEST_DPCON;
-+ queue_cfg.dest_cfg.priority = 1;
-+ queue_cfg.user_ctx = (u64)fq;
-+ queue_cfg.dest_cfg.dest_id = fq->channel->dpcon_id;
-+ queue_cfg.tail_drop_threshold = DPAA2_ETH_TAILDROP_THRESH;
-+ err = dpni_set_rx_flow(priv->mc_io, 0, priv->mc_token, 0, fq->flowid,
-+ &queue_cfg);
-+ if (err) {
-+ dev_err(dev, "dpni_set_rx_flow() failed\n");
-+ return err;
-+ }
-+
-+ /* Get the actual FQID that was assigned by MC */
-+ err = dpni_get_rx_flow(priv->mc_io, 0, priv->mc_token, 0, fq->flowid,
-+ &rx_queue_attr);
-+ if (err) {
-+ dev_err(dev, "dpni_get_rx_flow() failed\n");
-+ return err;
-+ }
-+ fq->fqid = rx_queue_attr.fqid;
-+
-+ return 0;
-+}
-+
-+static int dpaa2_tx_flow_setup(struct dpaa2_eth_priv *priv,
-+ struct dpaa2_eth_fq *fq)
-+{
-+ struct device *dev = priv->net_dev->dev.parent;
-+ struct dpni_tx_flow_cfg tx_flow_cfg;
-+ struct dpni_tx_conf_cfg tx_conf_cfg;
-+ struct dpni_tx_conf_attr tx_conf_attr;
-+ int err;
-+
-+ memset(&tx_flow_cfg, 0, sizeof(tx_flow_cfg));
-+ tx_flow_cfg.options = DPNI_TX_FLOW_OPT_TX_CONF_ERROR;
-+ tx_flow_cfg.use_common_tx_conf_queue = 0;
-+ err = dpni_set_tx_flow(priv->mc_io, 0, priv->mc_token,
-+ &fq->flowid, &tx_flow_cfg);
-+ if (err) {
-+ dev_err(dev, "dpni_set_tx_flow() failed\n");
-+ return err;
-+ }
-+
-+ tx_conf_cfg.errors_only = 0;
-+ tx_conf_cfg.queue_cfg.options = DPNI_QUEUE_OPT_USER_CTX |
-+ DPNI_QUEUE_OPT_DEST;
-+ tx_conf_cfg.queue_cfg.user_ctx = (u64)fq;
-+ tx_conf_cfg.queue_cfg.dest_cfg.dest_type = DPNI_DEST_DPCON;
-+ tx_conf_cfg.queue_cfg.dest_cfg.dest_id = fq->channel->dpcon_id;
-+ tx_conf_cfg.queue_cfg.dest_cfg.priority = 0;
-+
-+ err = dpni_set_tx_conf(priv->mc_io, 0, priv->mc_token, fq->flowid,
-+ &tx_conf_cfg);
-+ if (err) {
-+ dev_err(dev, "dpni_set_tx_conf() failed\n");
-+ return err;
-+ }
-+
-+ err = dpni_get_tx_conf(priv->mc_io, 0, priv->mc_token, fq->flowid,
-+ &tx_conf_attr);
-+ if (err) {
-+ dev_err(dev, "dpni_get_tx_conf() failed\n");
-+ return err;
-+ }
-+
-+ fq->fqid = tx_conf_attr.queue_attr.fqid;
-+
-+ return 0;
-+}
-+
-+#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
-+static int dpaa2_rx_err_setup(struct dpaa2_eth_priv *priv,
-+ struct dpaa2_eth_fq *fq)
-+{
-+ struct dpni_queue_attr queue_attr;
-+ struct dpni_queue_cfg queue_cfg;
-+ int err;
-+
-+ /* Configure the Rx error queue to generate CDANs,
-+ * just like the Rx queues */
-+ queue_cfg.options = DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST;
-+ queue_cfg.dest_cfg.dest_type = DPNI_DEST_DPCON;
-+ queue_cfg.dest_cfg.priority = 1;
-+ queue_cfg.user_ctx = (u64)fq;
-+ queue_cfg.dest_cfg.dest_id = fq->channel->dpcon_id;
-+ err = dpni_set_rx_err_queue(priv->mc_io, 0, priv->mc_token, &queue_cfg);
-+ if (err) {
-+ netdev_err(priv->net_dev, "dpni_set_rx_err_queue() failed\n");
-+ return err;
-+ }
-+
-+ /* Get the FQID */
-+ err = dpni_get_rx_err_queue(priv->mc_io, 0, priv->mc_token, &queue_attr);
-+ if (err) {
-+ netdev_err(priv->net_dev, "dpni_get_rx_err_queue() failed\n");
-+ return err;
-+ }
-+ fq->fqid = queue_attr.fqid;
-+
-+ return 0;
-+}
-+#endif
-+
-+static int dpaa2_dpni_bind(struct dpaa2_eth_priv *priv)
-+{
-+ struct net_device *net_dev = priv->net_dev;
-+ struct device *dev = net_dev->dev.parent;
-+ struct dpni_pools_cfg pools_params;
-+ struct dpni_error_cfg err_cfg;
-+ int err = 0;
-+ int i;
-+
-+ pools_params.num_dpbp = 1;
-+ pools_params.pools[0].dpbp_id = priv->dpbp_dev->obj_desc.id;
-+ pools_params.pools[0].backup_pool = 0;
-+ pools_params.pools[0].buffer_size = DPAA2_ETH_RX_BUFFER_SIZE;
-+ err = dpni_set_pools(priv->mc_io, 0, priv->mc_token, &pools_params);
-+ if (err) {
-+ dev_err(dev, "dpni_set_pools() failed\n");
-+ return err;
-+ }
-+
-+ dpaa2_cls_check(net_dev);
-+
-+ /* have the interface implicitly distribute traffic based on supported
-+ * header fields
-+ */
-+ if (dpaa2_eth_hash_enabled(priv)) {
-+ err = dpaa2_set_hash(net_dev, DPAA2_RXH_SUPPORTED);
-+ if (err)
-+ return err;
-+ }
-+
-+ /* Configure handling of error frames */
-+ err_cfg.errors = DPAA2_ETH_RX_ERR_MASK;
-+ err_cfg.set_frame_annotation = 1;
-+#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
-+ err_cfg.error_action = DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE;
-+#else
-+ err_cfg.error_action = DPNI_ERROR_ACTION_DISCARD;
-+#endif
-+ err = dpni_set_errors_behavior(priv->mc_io, 0, priv->mc_token,
-+ &err_cfg);
-+ if (err) {
-+ dev_err(dev, "dpni_set_errors_behavior failed\n");
-+ return err;
-+ }
-+
-+ /* Configure Rx and Tx conf queues to generate CDANs */
-+ for (i = 0; i < priv->num_fqs; i++) {
-+ switch (priv->fq[i].type) {
-+ case DPAA2_RX_FQ:
-+ err = dpaa2_rx_flow_setup(priv, &priv->fq[i]);
-+ break;
-+ case DPAA2_TX_CONF_FQ:
-+ err = dpaa2_tx_flow_setup(priv, &priv->fq[i]);
-+ break;
-+#ifdef CONFIG_FSL_DPAA2_ETH_USE_ERR_QUEUE
-+ case DPAA2_RX_ERR_FQ:
-+ err = dpaa2_rx_err_setup(priv, &priv->fq[i]);
-+ break;
-+#endif
-+ default:
-+ dev_err(dev, "Invalid FQ type %d\n", priv->fq[i].type);
-+ return -EINVAL;
-+ }
-+ if (err)
-+ return err;
-+ }
-+
-+ err = dpni_get_qdid(priv->mc_io, 0, priv->mc_token, &priv->tx_qdid);
-+ if (err) {
-+ dev_err(dev, "dpni_get_qdid() failed\n");
-+ return err;
-+ }
-+
-+ return 0;
-+}
-+
-+static int dpaa2_eth_alloc_rings(struct dpaa2_eth_priv *priv)
-+{
-+ struct net_device *net_dev = priv->net_dev;
-+ struct device *dev = net_dev->dev.parent;
-+ int i;
-+
-+ for (i = 0; i < priv->num_channels; i++) {
-+ priv->channel[i]->store =
-+ dpaa2_io_store_create(DPAA2_ETH_STORE_SIZE, dev);
-+ if (!priv->channel[i]->store) {
-+ netdev_err(net_dev, "dpaa2_io_store_create() failed\n");
-+ goto err_ring;
-+ }
-+ }
-+
-+ return 0;
-+
-+err_ring:
-+ for (i = 0; i < priv->num_channels; i++) {
-+ if (!priv->channel[i]->store)
-+ break;
-+ dpaa2_io_store_destroy(priv->channel[i]->store);
-+ }
-+
-+ return -ENOMEM;
-+}
-+
-+static void dpaa2_eth_free_rings(struct dpaa2_eth_priv *priv)
-+{
-+ int i;
-+
-+ for (i = 0; i < priv->num_channels; i++)
-+ dpaa2_io_store_destroy(priv->channel[i]->store);
-+}
-+
-+static int dpaa2_eth_netdev_init(struct net_device *net_dev)
-+{
-+ int err;
-+ struct device *dev = net_dev->dev.parent;
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ u8 mac_addr[ETH_ALEN];
-+ u8 bcast_addr[ETH_ALEN];
-+
-+ net_dev->netdev_ops = &dpaa2_eth_ops;
-+
-+ /* If the DPL contains all-0 mac_addr, set a random hardware address */
-+ err = dpni_get_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
-+ mac_addr);
-+ if (err) {
-+ dev_err(dev, "dpni_get_primary_mac_addr() failed (%d)", err);
-+ return err;
-+ }
-+ if (is_zero_ether_addr(mac_addr)) {
-+ /* Fills in net_dev->dev_addr, as required by
-+ * register_netdevice()
-+ */
-+ eth_hw_addr_random(net_dev);
-+ /* Make the user aware, without cluttering the boot log */
-+ pr_info_once(KBUILD_MODNAME " device(s) have all-zero hwaddr, replaced with a random one\n");
-+ err = dpni_set_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
-+ net_dev->dev_addr);
-+ if (err) {
-+ dev_err(dev, "dpni_set_primary_mac_addr(): %d\n", err);
-+ return err;
-+ }
-+ /* Override NET_ADDR_RANDOM set by eth_hw_addr_random(); for all
-+ * practical purposes, this will be our "permanent" mac address,
-+ * at least until the next reboot. This move will also permit
-+ * register_netdevice() to properly fill up net_dev->perm_addr.
-+ */
-+ net_dev->addr_assign_type = NET_ADDR_PERM;
-+ } else {
-+ /* NET_ADDR_PERM is default, all we have to do is
-+ * fill in the device addr.
-+ */
-+ memcpy(net_dev->dev_addr, mac_addr, net_dev->addr_len);
-+ }
-+
-+ /* Explicitly add the broadcast address to the MAC filtering table;
-+ * the MC won't do that for us.
-+ */
-+ eth_broadcast_addr(bcast_addr);
-+ err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token, bcast_addr);
-+ if (err) {
-+ dev_warn(dev, "dpni_add_mac_addr() failed (%d)\n", err);
-+ /* Not a fatal error; egress traffic is still possible */
-+ }
-+
-+ /* Reserve enough space to align buffer as per hardware requirement;
-+ * NOTE: priv->tx_data_offset MUST be initialized at this point.
-+ */
-+ net_dev->needed_headroom = DPAA2_ETH_NEEDED_HEADROOM(priv);
-+
-+ /* Our .ndo_init will be called herein */
-+ err = register_netdev(net_dev);
-+ if (err < 0) {
-+ dev_err(dev, "register_netdev() = %d\n", err);
-+ return err;
-+ }
-+
-+ return 0;
-+}
-+
-+#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
-+static int dpaa2_poll_link_state(void *arg)
-+{
-+ struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)arg;
-+ int err;
-+
-+ while (!kthread_should_stop()) {
-+ err = dpaa2_link_state_update(priv);
-+ if (unlikely(err))
-+ return err;
-+
-+ msleep(DPAA2_ETH_LINK_STATE_REFRESH);
-+ }
-+
-+ return 0;
-+}
-+#else
-+static irqreturn_t dpni_irq0_handler(int irq_num, void *arg)
-+{
-+ return IRQ_WAKE_THREAD;
-+}
-+
-+static irqreturn_t dpni_irq0_handler_thread(int irq_num, void *arg)
-+{
-+ u8 irq_index = DPNI_IRQ_INDEX;
-+ u32 status, clear = 0;
-+ struct device *dev = (struct device *)arg;
-+ struct fsl_mc_device *dpni_dev = to_fsl_mc_device(dev);
-+ struct net_device *net_dev = dev_get_drvdata(dev);
-+ int err;
-+
-+ netdev_dbg(net_dev, "IRQ %d received\n", irq_num);
-+ err = dpni_get_irq_status(dpni_dev->mc_io, 0, dpni_dev->mc_handle,
-+ irq_index, &status);
-+ if (unlikely(err)) {
-+ netdev_err(net_dev, "Can't get irq status (err %d)", err);
-+ clear = 0xffffffff;
-+ goto out;
-+ }
-+
-+ if (status & DPNI_IRQ_EVENT_LINK_CHANGED) {
-+ clear |= DPNI_IRQ_EVENT_LINK_CHANGED;
-+ dpaa2_link_state_update(netdev_priv(net_dev));
-+ }
-+
-+out:
-+ dpni_clear_irq_status(dpni_dev->mc_io, 0, dpni_dev->mc_handle,
-+ irq_index, clear);
-+ return IRQ_HANDLED;
-+}
-+
-+static int dpaa2_eth_setup_irqs(struct fsl_mc_device *ls_dev)
-+{
-+ int err = 0;
-+ struct fsl_mc_device_irq *irq;
-+ int irq_count = ls_dev->obj_desc.irq_count;
-+ u8 irq_index = DPNI_IRQ_INDEX;
-+ u32 mask = DPNI_IRQ_EVENT_LINK_CHANGED;
-+
-+ /* The only interrupt supported now is the link state notification. */
-+ if (WARN_ON(irq_count != 1))
-+ return -EINVAL;
-+
-+ irq = ls_dev->irqs[0];
-+ err = devm_request_threaded_irq(&ls_dev->dev, irq->msi_desc->irq,
-+ dpni_irq0_handler,
-+ dpni_irq0_handler_thread,
-+ IRQF_NO_SUSPEND | IRQF_ONESHOT,
-+ dev_name(&ls_dev->dev), &ls_dev->dev);
-+ if (err < 0) {
-+ dev_err(&ls_dev->dev, "devm_request_threaded_irq(): %d", err);
-+ return err;
-+ }
-+
-+ err = dpni_set_irq_mask(ls_dev->mc_io, 0, ls_dev->mc_handle,
-+ irq_index, mask);
-+ if (err < 0) {
-+ dev_err(&ls_dev->dev, "dpni_set_irq_mask(): %d", err);
-+ return err;
-+ }
-+
-+ err = dpni_set_irq_enable(ls_dev->mc_io, 0, ls_dev->mc_handle,
-+ irq_index, 1);
-+ if (err < 0) {
-+ dev_err(&ls_dev->dev, "dpni_set_irq_enable(): %d", err);
-+ return err;
-+ }
-+
-+ return 0;
-+}
-+#endif
-+
-+static void dpaa2_eth_napi_add(struct dpaa2_eth_priv *priv)
-+{
-+ int i;
-+ struct dpaa2_eth_channel *ch;
-+
-+ for (i = 0; i < priv->num_channels; i++) {
-+ ch = priv->channel[i];
-+ /* NAPI weight *MUST* be a multiple of DPAA2_ETH_STORE_SIZE */
-+ netif_napi_add(priv->net_dev, &ch->napi, dpaa2_eth_poll,
-+ NAPI_POLL_WEIGHT);
-+ }
-+}
-+
-+static void dpaa2_eth_napi_del(struct dpaa2_eth_priv *priv)
-+{
-+ int i;
-+ struct dpaa2_eth_channel *ch;
-+
-+ for (i = 0; i < priv->num_channels; i++) {
-+ ch = priv->channel[i];
-+ netif_napi_del(&ch->napi);
-+ }
-+}
-+
-+/* SysFS support */
-+
-+static ssize_t dpaa2_eth_show_tx_shaping(struct device *dev,
-+ struct device_attribute *attr,
-+ char *buf)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(to_net_dev(dev));
-+ /* No MC API for getting the shaping config. We're stateful. */
-+ struct dpni_tx_shaping_cfg *scfg = &priv->shaping_cfg;
-+
-+ return sprintf(buf, "%u %hu\n", scfg->rate_limit, scfg->max_burst_size);
-+}
-+
-+static ssize_t dpaa2_eth_write_tx_shaping(struct device *dev,
-+ struct device_attribute *attr,
-+ const char *buf,
-+ size_t count)
-+{
-+ int err, items;
-+ struct dpaa2_eth_priv *priv = netdev_priv(to_net_dev(dev));
-+ struct dpni_tx_shaping_cfg scfg;
-+
-+ items = sscanf(buf, "%u %hu", &scfg.rate_limit, &scfg.max_burst_size);
-+ if (items != 2) {
-+ pr_err("Expected format: \"rate_limit(Mbps) max_burst_size(bytes)\"\n");
-+ return -EINVAL;
-+ }
-+ /* Size restriction as per MC API documentation */
-+ if (scfg.max_burst_size > 64000) {
-+ pr_err("max_burst_size must be <= 64000\n");
-+ return -EINVAL;
-+ }
-+
-+ err = dpni_set_tx_shaping(priv->mc_io, 0, priv->mc_token, &scfg);
-+ if (err) {
-+ dev_err(dev, "dpni_set_tx_shaping() failed\n");
-+ return -EPERM;
-+ }
-+ /* If successful, save the current configuration for future inquiries */
-+ priv->shaping_cfg = scfg;
-+
-+ return count;
-+}
-+
-+static ssize_t dpaa2_eth_show_txconf_cpumask(struct device *dev,
-+ struct device_attribute *attr,
-+ char *buf)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(to_net_dev(dev));
-+
-+ return cpumap_print_to_pagebuf(1, buf, &priv->txconf_cpumask);
-+}
-+
-+static ssize_t dpaa2_eth_write_txconf_cpumask(struct device *dev,
-+ struct device_attribute *attr,
-+ const char *buf,
-+ size_t count)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(to_net_dev(dev));
-+ struct dpaa2_eth_fq *fq;
-+ bool running = netif_running(priv->net_dev);
-+ int i, err;
-+
-+ err = cpulist_parse(buf, &priv->txconf_cpumask);
-+ if (err)
-+ return err;
-+
-+ /* Only accept CPUs that have an affine DPIO */
-+ if (!cpumask_subset(&priv->txconf_cpumask, &priv->dpio_cpumask)) {
-+ netdev_info(priv->net_dev,
-+ "cpumask must be a subset of 0x%lx\n",
-+ *cpumask_bits(&priv->dpio_cpumask));
-+ cpumask_and(&priv->txconf_cpumask, &priv->dpio_cpumask,
-+ &priv->txconf_cpumask);
-+ }
-+
-+ /* Rewiring the TxConf FQs requires interface shutdown.
-+ */
-+ if (running) {
-+ err = dpaa2_eth_stop(priv->net_dev);
-+ if (err)
-+ return -ENODEV;
-+ }
-+
-+ /* Set the new TxConf FQ affinities */
-+ dpaa2_set_fq_affinity(priv);
-+
-+#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
-+ /* dpaa2_eth_open() below will *stop* the Tx queues until an explicit
-+ * link up notification is received. Give the polling thread enough time
-+ * to detect the link state change, or else we'll end up with the
-+ * transmission side forever shut down.
-+ */
-+ msleep(2 * DPAA2_ETH_LINK_STATE_REFRESH);
-+#endif
-+
-+ for (i = 0; i < priv->num_fqs; i++) {
-+ fq = &priv->fq[i];
-+ if (fq->type != DPAA2_TX_CONF_FQ)
-+ continue;
-+ dpaa2_tx_flow_setup(priv, fq);
-+ }
-+
-+ if (running) {
-+ err = dpaa2_eth_open(priv->net_dev);
-+ if (err)
-+ return -ENODEV;
-+ }
-+
-+ return count;
-+}
-+
-+static struct device_attribute dpaa2_eth_attrs[] = {
-+ __ATTR(txconf_cpumask,
-+ S_IRUSR | S_IWUSR,
-+ dpaa2_eth_show_txconf_cpumask,
-+ dpaa2_eth_write_txconf_cpumask),
-+
-+ __ATTR(tx_shaping,
-+ S_IRUSR | S_IWUSR,
-+ dpaa2_eth_show_tx_shaping,
-+ dpaa2_eth_write_tx_shaping),
-+};
-+
-+void dpaa2_eth_sysfs_init(struct device *dev)
-+{
-+ int i, err;
-+
-+ for (i = 0; i < ARRAY_SIZE(dpaa2_eth_attrs); i++) {
-+ err = device_create_file(dev, &dpaa2_eth_attrs[i]);
-+ if (err) {
-+ dev_err(dev, "ERROR creating sysfs file\n");
-+ goto undo;
-+ }
-+ }
-+ return;
-+
-+undo:
-+ while (i > 0)
-+ device_remove_file(dev, &dpaa2_eth_attrs[--i]);
-+}
-+
-+void dpaa2_eth_sysfs_remove(struct device *dev)
-+{
-+ int i;
-+
-+ for (i = 0; i < ARRAY_SIZE(dpaa2_eth_attrs); i++)
-+ device_remove_file(dev, &dpaa2_eth_attrs[i]);
-+}
-+
-+static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
-+{
-+ struct device *dev;
-+ struct net_device *net_dev = NULL;
-+ struct dpaa2_eth_priv *priv = NULL;
-+ int err = 0;
-+
-+ dev = &dpni_dev->dev;
-+
-+ /* Net device */
-+ net_dev = alloc_etherdev_mq(sizeof(*priv), DPAA2_ETH_MAX_TX_QUEUES);
-+ if (!net_dev) {
-+ dev_err(dev, "alloc_etherdev_mq() failed\n");
-+ return -ENOMEM;
-+ }
-+
-+ SET_NETDEV_DEV(net_dev, dev);
-+ dev_set_drvdata(dev, net_dev);
-+
-+ priv = netdev_priv(net_dev);
-+ priv->net_dev = net_dev;
-+ priv->msg_enable = netif_msg_init(debug, -1);
-+
-+ /* Obtain a MC portal */
-+ err = fsl_mc_portal_allocate(dpni_dev, FSL_MC_IO_ATOMIC_CONTEXT_PORTAL,
-+ &priv->mc_io);
-+ if (err) {
-+ dev_err(dev, "MC portal allocation failed\n");
-+ goto err_portal_alloc;
-+ }
-+
-+#ifndef CONFIG_FSL_DPAA2_ETH_LINK_POLL
-+ err = fsl_mc_allocate_irqs(dpni_dev);
-+ if (err) {
-+ dev_err(dev, "MC irqs allocation failed\n");
-+ goto err_irqs_alloc;
-+ }
-+#endif
-+
-+ /* DPNI initialization */
-+ err = dpaa2_dpni_setup(dpni_dev);
-+ if (err < 0)
-+ goto err_dpni_setup;
-+
-+ /* DPIO */
-+ err = dpaa2_dpio_setup(priv);
-+ if (err)
-+ goto err_dpio_setup;
-+
-+ /* FQs */
-+ dpaa2_eth_setup_fqs(priv);
-+ dpaa2_set_fq_affinity(priv);
-+
-+ /* DPBP */
-+ err = dpaa2_dpbp_setup(priv);
-+ if (err)
-+ goto err_dpbp_setup;
-+
-+ /* DPNI binding to DPIO and DPBPs */
-+ err = dpaa2_dpni_bind(priv);
-+ if (err)
-+ goto err_bind;
-+
-+ dpaa2_eth_napi_add(priv);
-+
-+ /* Percpu statistics */
-+ priv->percpu_stats = alloc_percpu(*priv->percpu_stats);
-+ if (!priv->percpu_stats) {
-+ dev_err(dev, "alloc_percpu(percpu_stats) failed\n");
-+ err = -ENOMEM;
-+ goto err_alloc_percpu_stats;
-+ }
-+ priv->percpu_extras = alloc_percpu(*priv->percpu_extras);
-+ if (!priv->percpu_extras) {
-+ dev_err(dev, "alloc_percpu(percpu_extras) failed\n");
-+ err = -ENOMEM;
-+ goto err_alloc_percpu_extras;
-+ }
-+
-+ snprintf(net_dev->name, IFNAMSIZ, "ni%d", dpni_dev->obj_desc.id);
-+ if (!dev_valid_name(net_dev->name)) {
-+ dev_warn(&net_dev->dev,
-+ "netdevice name \"%s\" cannot be used, reverting to default...\n",
-+ net_dev->name);
-+ dev_alloc_name(net_dev, "eth%d");
-+ dev_warn(&net_dev->dev, "using name \"%s\"\n", net_dev->name);
-+ }
-+
-+ err = dpaa2_eth_netdev_init(net_dev);
-+ if (err)
-+ goto err_netdev_init;
-+
-+ /* Configure checksum offload based on current interface flags */
-+ err = dpaa2_eth_set_rx_csum(priv,
-+ !!(net_dev->features & NETIF_F_RXCSUM));
-+ if (err)
-+ goto err_csum;
-+
-+ err = dpaa2_eth_set_tx_csum(priv,
-+ !!(net_dev->features &
-+ (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)));
-+ if (err)
-+ goto err_csum;
-+
-+ err = dpaa2_eth_alloc_rings(priv);
-+ if (err)
-+ goto err_alloc_rings;
-+
-+ net_dev->ethtool_ops = &dpaa2_ethtool_ops;
-+
-+#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
-+ priv->poll_thread = kthread_run(dpaa2_poll_link_state, priv,
-+ "%s_poll_link", net_dev->name);
-+#else
-+ err = dpaa2_eth_setup_irqs(dpni_dev);
-+ if (err) {
-+ netdev_err(net_dev, "ERROR %d setting up interrupts", err);
-+ goto err_setup_irqs;
-+ }
-+#endif
-+
-+ dpaa2_eth_sysfs_init(&net_dev->dev);
-+ dpaa2_dbg_add(priv);
-+
-+ dev_info(dev, "Probed interface %s\n", net_dev->name);
-+ return 0;
-+
-+#ifndef CONFIG_FSL_DPAA2_ETH_LINK_POLL
-+err_setup_irqs:
-+#endif
-+ dpaa2_eth_free_rings(priv);
-+err_alloc_rings:
-+err_csum:
-+ unregister_netdev(net_dev);
-+err_netdev_init:
-+ free_percpu(priv->percpu_extras);
-+err_alloc_percpu_extras:
-+ free_percpu(priv->percpu_stats);
-+err_alloc_percpu_stats:
-+ dpaa2_eth_napi_del(priv);
-+err_bind:
-+ dpaa2_dpbp_free(priv);
-+err_dpbp_setup:
-+ dpaa2_dpio_free(priv);
-+err_dpio_setup:
-+ kfree(priv->cls_rule);
-+ dpni_close(priv->mc_io, 0, priv->mc_token);
-+err_dpni_setup:
-+#ifndef CONFIG_FSL_DPAA2_ETH_LINK_POLL
-+ fsl_mc_free_irqs(dpni_dev);
-+err_irqs_alloc:
-+#endif
-+ fsl_mc_portal_free(priv->mc_io);
-+err_portal_alloc:
-+ dev_set_drvdata(dev, NULL);
-+ free_netdev(net_dev);
-+
-+ return err;
-+}
-+
-+static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
-+{
-+ struct device *dev;
-+ struct net_device *net_dev;
-+ struct dpaa2_eth_priv *priv;
-+
-+ dev = &ls_dev->dev;
-+ net_dev = dev_get_drvdata(dev);
-+ priv = netdev_priv(net_dev);
-+
-+ dpaa2_dbg_remove(priv);
-+ dpaa2_eth_sysfs_remove(&net_dev->dev);
-+
-+ unregister_netdev(net_dev);
-+ dev_info(net_dev->dev.parent, "Removed interface %s\n", net_dev->name);
-+
-+ dpaa2_dpio_free(priv);
-+ dpaa2_eth_free_rings(priv);
-+ dpaa2_eth_napi_del(priv);
-+ dpaa2_dpbp_free(priv);
-+ dpaa2_dpni_free(priv);
-+
-+ fsl_mc_portal_free(priv->mc_io);
-+
-+ free_percpu(priv->percpu_stats);
-+ free_percpu(priv->percpu_extras);
-+
-+#ifdef CONFIG_FSL_DPAA2_ETH_LINK_POLL
-+ kthread_stop(priv->poll_thread);
-+#else
-+ fsl_mc_free_irqs(ls_dev);
-+#endif
-+
-+ kfree(priv->cls_rule);
-+
-+ dev_set_drvdata(dev, NULL);
-+ free_netdev(net_dev);
-+
-+ return 0;
-+}
-+
-+static const struct fsl_mc_device_match_id dpaa2_eth_match_id_table[] = {
-+ {
-+ .vendor = FSL_MC_VENDOR_FREESCALE,
-+ .obj_type = "dpni",
-+ .ver_major = DPNI_VER_MAJOR,
-+ .ver_minor = DPNI_VER_MINOR
-+ },
-+ { .vendor = 0x0 }
-+};
-+
-+static struct fsl_mc_driver dpaa2_eth_driver = {
-+ .driver = {
-+ .name = KBUILD_MODNAME,
-+ .owner = THIS_MODULE,
-+ },
-+ .probe = dpaa2_eth_probe,
-+ .remove = dpaa2_eth_remove,
-+ .match_id_table = dpaa2_eth_match_id_table
-+};
-+
-+static int __init dpaa2_eth_driver_init(void)
-+{
-+ int err;
-+
-+ dpaa2_eth_dbg_init();
-+
-+ err = fsl_mc_driver_register(&dpaa2_eth_driver);
-+ if (err) {
-+ dpaa2_eth_dbg_exit();
-+ return err;
-+ }
-+
-+ return 0;
-+}
-+
-+static void __exit dpaa2_eth_driver_exit(void)
-+{
-+ fsl_mc_driver_unregister(&dpaa2_eth_driver);
-+ dpaa2_eth_dbg_exit();
-+}
-+
-+module_init(dpaa2_eth_driver_init);
-+module_exit(dpaa2_eth_driver_exit);
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
-@@ -0,0 +1,366 @@
-+/* Copyright 2014-2015 Freescale Semiconductor Inc.
-+ *
-+ * Redistribution and use in source and binary forms, with or without
-+ * modification, are permitted provided that the following conditions are met:
-+ * * Redistributions of source code must retain the above copyright
-+ * notice, this list of conditions and the following disclaimer.
-+ * * Redistributions in binary form must reproduce the above copyright
-+ * notice, this list of conditions and the following disclaimer in the
-+ * documentation and/or other materials provided with the distribution.
-+ * * Neither the name of Freescale Semiconductor nor the
-+ * names of its contributors may be used to endorse or promote products
-+ * derived from this software without specific prior written permission.
-+ *
-+ *
-+ * ALTERNATIVELY, this software may be distributed under the terms of the
-+ * GNU General Public License ("GPL") as published by the Free Software
-+ * Foundation, either version 2 of that License or (at your option) any
-+ * later version.
-+ *
-+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
-+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
-+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-+ */
-+
-+#ifndef __DPAA2_ETH_H
-+#define __DPAA2_ETH_H
-+
-+#include <linux/netdevice.h>
-+#include <linux/if_vlan.h>
-+#include "../../fsl-mc/include/fsl_dpaa2_io.h"
-+#include "../../fsl-mc/include/fsl_dpaa2_fd.h"
-+#include "../../fsl-mc/include/dpbp.h"
-+#include "../../fsl-mc/include/dpbp-cmd.h"
-+#include "../../fsl-mc/include/dpcon.h"
-+#include "../../fsl-mc/include/dpcon-cmd.h"
-+#include "../../fsl-mc/include/dpmng.h"
-+#include "dpni.h"
-+#include "dpni-cmd.h"
-+
-+#include "dpaa2-eth-trace.h"
-+#include "dpaa2-eth-debugfs.h"
-+
-+#define DPAA2_ETH_STORE_SIZE 16
-+
-+/* Maximum receive frame size is 64K */
-+#define DPAA2_ETH_MAX_SG_ENTRIES ((64 * 1024) / DPAA2_ETH_RX_BUFFER_SIZE)
-+
-+/* Maximum acceptable MTU value. It is directly derived from the MC-enforced
-+ * Max Frame Length (currently 10k).
-+ */
-+#define DPAA2_ETH_MFL (10 * 1024)
-+#define DPAA2_ETH_MAX_MTU (DPAA2_ETH_MFL - VLAN_ETH_HLEN)
-+/* Convert L3 MTU to L2 MFL */
-+#define DPAA2_ETH_L2_MAX_FRM(mtu) (mtu + VLAN_ETH_HLEN)
-+
-+/* Set the taildrop threshold (in bytes) to allow the enqueue of several jumbo
-+ * frames in the Rx queues (length of the current frame is not
-+ * taken into account when making the taildrop decision)
-+ */
-+#define DPAA2_ETH_TAILDROP_THRESH (64 * 1024)
-+
-+/* Buffer quota per queue. Must be large enough such that for minimum sized
-+ * frames taildrop kicks in before the bpool gets depleted, so we compute
-+ * how many 64B frames fit inside the taildrop threshold and add a margin
-+ * to accommodate the buffer refill delay.
-+ */
-+#define DPAA2_ETH_MAX_FRAMES_PER_QUEUE (DPAA2_ETH_TAILDROP_THRESH / 64)
-+#define DPAA2_ETH_NUM_BUFS (DPAA2_ETH_MAX_FRAMES_PER_QUEUE + 256)
-+#define DPAA2_ETH_REFILL_THRESH DPAA2_ETH_MAX_FRAMES_PER_QUEUE
-+
-+/* Hardware requires alignment for ingress/egress buffer addresses
-+ * and ingress buffer lengths.
-+ */
-+#define DPAA2_ETH_RX_BUFFER_SIZE 2048
-+#define DPAA2_ETH_TX_BUF_ALIGN 64
-+#define DPAA2_ETH_RX_BUF_ALIGN 256
-+#define DPAA2_ETH_NEEDED_HEADROOM(p_priv) \
-+ ((p_priv)->tx_data_offset + DPAA2_ETH_TX_BUF_ALIGN)
-+
-+#define DPAA2_ETH_BUF_RAW_SIZE \
-+ (DPAA2_ETH_RX_BUFFER_SIZE + \
-+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) + \
-+ DPAA2_ETH_RX_BUF_ALIGN)
-+
-+/* PTP nominal frequency 1MHz */
-+#define DPAA2_PTP_NOMINAL_FREQ_PERIOD_NS 1000
-+
-+/* We are accommodating an skb backpointer and some S/G info
-+ * in the frame's software annotation. The hardware
-+ * options are either 0 or 64, so we choose the latter.
-+ */
-+#define DPAA2_ETH_SWA_SIZE 64
-+
-+/* Must keep this struct smaller than DPAA2_ETH_SWA_SIZE */
-+struct dpaa2_eth_swa {
-+ struct sk_buff *skb;
-+ struct scatterlist *scl;
-+ int num_sg;
-+ int num_dma_bufs;
-+};
-+
-+/* Annotation valid bits in FD FRC */
-+#define DPAA2_FD_FRC_FASV 0x8000
-+#define DPAA2_FD_FRC_FAEADV 0x4000
-+#define DPAA2_FD_FRC_FAPRV 0x2000
-+#define DPAA2_FD_FRC_FAIADV 0x1000
-+#define DPAA2_FD_FRC_FASWOV 0x0800
-+#define DPAA2_FD_FRC_FAICFDV 0x0400
-+
-+/* Annotation bits in FD CTRL */
-+#define DPAA2_FD_CTRL_ASAL 0x00020000 /* ASAL = 128 */
-+#define DPAA2_FD_CTRL_PTA 0x00800000
-+#define DPAA2_FD_CTRL_PTV1 0x00400000
-+
-+/* Frame annotation status */
-+struct dpaa2_fas {
-+ u8 reserved;
-+ u8 ppid;
-+ __le16 ifpid;
-+ __le32 status;
-+} __packed;
-+
-+/* Debug frame, otherwise supposed to be discarded */
-+#define DPAA2_ETH_FAS_DISC 0x80000000
-+/* MACSEC frame */
-+#define DPAA2_ETH_FAS_MS 0x40000000
-+#define DPAA2_ETH_FAS_PTP 0x08000000
-+/* Ethernet multicast frame */
-+#define DPAA2_ETH_FAS_MC 0x04000000
-+/* Ethernet broadcast frame */
-+#define DPAA2_ETH_FAS_BC 0x02000000
-+#define DPAA2_ETH_FAS_KSE 0x00040000
-+#define DPAA2_ETH_FAS_EOFHE 0x00020000
-+#define DPAA2_ETH_FAS_MNLE 0x00010000
-+#define DPAA2_ETH_FAS_TIDE 0x00008000
-+#define DPAA2_ETH_FAS_PIEE 0x00004000
-+/* Frame length error */
-+#define DPAA2_ETH_FAS_FLE 0x00002000
-+/* Frame physical error; our favourite pastime */
-+#define DPAA2_ETH_FAS_FPE 0x00001000
-+#define DPAA2_ETH_FAS_PTE 0x00000080
-+#define DPAA2_ETH_FAS_ISP 0x00000040
-+#define DPAA2_ETH_FAS_PHE 0x00000020
-+#define DPAA2_ETH_FAS_BLE 0x00000010
-+/* L3 csum validation performed */
-+#define DPAA2_ETH_FAS_L3CV 0x00000008
-+/* L3 csum error */
-+#define DPAA2_ETH_FAS_L3CE 0x00000004
-+/* L4 csum validation performed */
-+#define DPAA2_ETH_FAS_L4CV 0x00000002
-+/* L4 csum error */
-+#define DPAA2_ETH_FAS_L4CE 0x00000001
-+/* These bits always signal errors */
-+#define DPAA2_ETH_RX_ERR_MASK (DPAA2_ETH_FAS_KSE | \
-+ DPAA2_ETH_FAS_EOFHE | \
-+ DPAA2_ETH_FAS_MNLE | \
-+ DPAA2_ETH_FAS_TIDE | \
-+ DPAA2_ETH_FAS_PIEE | \
-+ DPAA2_ETH_FAS_FLE | \
-+ DPAA2_ETH_FAS_FPE | \
-+ DPAA2_ETH_FAS_PTE | \
-+ DPAA2_ETH_FAS_ISP | \
-+ DPAA2_ETH_FAS_PHE | \
-+ DPAA2_ETH_FAS_BLE | \
-+ DPAA2_ETH_FAS_L3CE | \
-+ DPAA2_ETH_FAS_L4CE)
-+/* Unsupported features in the ingress */
-+#define DPAA2_ETH_RX_UNSUPP_MASK DPAA2_ETH_FAS_MS
-+/* Tx errors */
-+#define DPAA2_ETH_TXCONF_ERR_MASK (DPAA2_ETH_FAS_KSE | \
-+ DPAA2_ETH_FAS_EOFHE | \
-+ DPAA2_ETH_FAS_MNLE | \
-+ DPAA2_ETH_FAS_TIDE)
-+
-+/* Time in milliseconds between link state updates */
-+#define DPAA2_ETH_LINK_STATE_REFRESH 1000
-+
-+/* Driver statistics, other than those in struct rtnl_link_stats64.
-+ * These are usually collected per-CPU and aggregated by ethtool.
-+ */
-+struct dpaa2_eth_stats {
-+ __u64 tx_conf_frames;
-+ __u64 tx_conf_bytes;
-+ __u64 tx_sg_frames;
-+ __u64 tx_sg_bytes;
-+ __u64 rx_sg_frames;
-+ __u64 rx_sg_bytes;
-+ /* Enqueues retried due to portal busy */
-+ __u64 tx_portal_busy;
-+};
-+
-+/* Per-FQ statistics */
-+struct dpaa2_eth_fq_stats {
-+ /* Number of frames received on this queue */
-+ __u64 frames;
-+};
-+
-+/* Per-channel statistics */
-+struct dpaa2_eth_ch_stats {
-+ /* Volatile dequeues retried due to portal busy */
-+ __u64 dequeue_portal_busy;
-+ /* Number of CDANs; useful to estimate avg NAPI len */
-+ __u64 cdan;
-+ /* Number of frames received on queues from this channel */
-+ __u64 frames;
-+};
-+
-+/* Maximum number of Rx queues associated with a DPNI */
-+#define DPAA2_ETH_MAX_RX_QUEUES 16
-+#define DPAA2_ETH_MAX_TX_QUEUES NR_CPUS
-+#define DPAA2_ETH_MAX_RX_ERR_QUEUES 1
-+#define DPAA2_ETH_MAX_QUEUES (DPAA2_ETH_MAX_RX_QUEUES + \
-+ DPAA2_ETH_MAX_TX_QUEUES + \
-+ DPAA2_ETH_MAX_RX_ERR_QUEUES)
-+
-+#define DPAA2_ETH_MAX_DPCONS NR_CPUS
-+
-+enum dpaa2_eth_fq_type {
-+ DPAA2_RX_FQ = 0,
-+ DPAA2_TX_CONF_FQ,
-+ DPAA2_RX_ERR_FQ
-+};
-+
-+struct dpaa2_eth_priv;
-+
-+struct dpaa2_eth_fq {
-+ u32 fqid;
-+ u16 flowid;
-+ int target_cpu;
-+ struct dpaa2_eth_channel *channel;
-+ enum dpaa2_eth_fq_type type;
-+
-+ void (*consume)(struct dpaa2_eth_priv *,
-+ struct dpaa2_eth_channel *,
-+ const struct dpaa2_fd *,
-+ struct napi_struct *);
-+ struct dpaa2_eth_priv *netdev_priv; /* backpointer */
-+ struct dpaa2_eth_fq_stats stats;
-+};
-+
-+struct dpaa2_eth_channel {
-+ struct dpaa2_io_notification_ctx nctx;
-+ struct fsl_mc_device *dpcon;
-+ int dpcon_id;
-+ int ch_id;
-+ int dpio_id;
-+ struct napi_struct napi;
-+ struct dpaa2_io_store *store;
-+ struct dpaa2_eth_priv *priv;
-+ int buf_count;
-+ struct dpaa2_eth_ch_stats stats;
-+};
-+
-+struct dpaa2_cls_rule {
-+ struct ethtool_rx_flow_spec fs;
-+ bool in_use;
-+};
-+
-+struct dpaa2_eth_priv {
-+ struct net_device *net_dev;
-+
-+ u8 num_fqs;
-+ /* First queue is tx conf, the rest are rx */
-+ struct dpaa2_eth_fq fq[DPAA2_ETH_MAX_QUEUES];
-+
-+ u8 num_channels;
-+ struct dpaa2_eth_channel *channel[DPAA2_ETH_MAX_DPCONS];
-+
-+ int dpni_id;
-+ struct dpni_attr dpni_attrs;
-+ struct dpni_extended_cfg dpni_ext_cfg;
-+ /* Insofar as the MC is concerned, we're using one layout on all 3 types
-+ * of buffers (Rx, Tx, Tx-Conf).
-+ */
-+ struct dpni_buffer_layout buf_layout;
-+ u16 tx_data_offset;
-+
-+ struct fsl_mc_device *dpbp_dev;
-+ struct dpbp_attr dpbp_attrs;
-+
-+ u16 tx_qdid;
-+ struct fsl_mc_io *mc_io;
-+ /* SysFS-controlled affinity mask for TxConf FQs */
-+ struct cpumask txconf_cpumask;
-+ /* Cores which have an affine DPIO/DPCON.
-+ * This is the cpu set on which Rx frames are processed;
-+ * Tx confirmation frames are processed on a subset of this,
-+ * depending on user settings.
-+ */
-+ struct cpumask dpio_cpumask;
-+
-+ /* Standard statistics */
-+ struct rtnl_link_stats64 __percpu *percpu_stats;
-+ /* Extra stats, in addition to the ones known by the kernel */
-+ struct dpaa2_eth_stats __percpu *percpu_extras;
-+ u32 msg_enable; /* net_device message level */
-+
-+ u16 mc_token;
-+
-+ struct dpni_link_state link_state;
-+ struct task_struct *poll_thread;
-+
-+ /* enabled ethtool hashing bits */
-+ u64 rx_hash_fields;
-+
-+#ifdef CONFIG_FSL_DPAA2_ETH_DEBUGFS
-+ struct dpaa2_debugfs dbg;
-+#endif
-+
-+ /* array of classification rules */
-+ struct dpaa2_cls_rule *cls_rule;
-+
-+ struct dpni_tx_shaping_cfg shaping_cfg;
-+
-+ bool ts_tx_en; /* Tx timestamping enabled */
-+ bool ts_rx_en; /* Rx timestamping enabled */
-+};
-+
-+/* default Rx hash options, set during probing */
-+#define DPAA2_RXH_SUPPORTED (RXH_L2DA | RXH_VLAN | RXH_L3_PROTO \
-+ | RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 \
-+ | RXH_L4_B_2_3)
-+
-+#define dpaa2_eth_hash_enabled(priv) \
-+ ((priv)->dpni_attrs.options & DPNI_OPT_DIST_HASH)
-+
-+#define dpaa2_eth_fs_enabled(priv) \
-+ ((priv)->dpni_attrs.options & DPNI_OPT_DIST_FS)
-+
-+#define DPAA2_CLASSIFIER_ENTRY_COUNT 16
-+
-+/* Required by struct dpni_attr::ext_cfg_iova */
-+#define DPAA2_EXT_CFG_SIZE 256
-+
-+extern const struct ethtool_ops dpaa2_ethtool_ops;
-+
-+int dpaa2_set_hash(struct net_device *net_dev, u64 flags);
-+
-+static inline int dpaa2_queue_count(struct dpaa2_eth_priv *priv)
-+{
-+ if (!dpaa2_eth_hash_enabled(priv))
-+ return 1;
-+
-+ return priv->dpni_ext_cfg.tc_cfg[0].max_dist;
-+}
-+
-+static inline int dpaa2_max_channels(struct dpaa2_eth_priv *priv)
-+{
-+ /* Ideally, we want a number of channels large enough
-+ * to accommodate both the Rx distribution size
-+ * and the max number of Tx confirmation queues
-+ */
-+ return max_t(int, dpaa2_queue_count(priv),
-+ priv->dpni_attrs.max_senders);
-+}
-+
-+void dpaa2_cls_check(struct net_device *);
-+
-+#endif /* __DPAA2_H */
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c
-@@ -0,0 +1,882 @@
-+/* Copyright 2014-2015 Freescale Semiconductor Inc.
-+ *
-+ * Redistribution and use in source and binary forms, with or without
-+ * modification, are permitted provided that the following conditions are met:
-+ * * Redistributions of source code must retain the above copyright
-+ * notice, this list of conditions and the following disclaimer.
-+ * * Redistributions in binary form must reproduce the above copyright
-+ * notice, this list of conditions and the following disclaimer in the
-+ * documentation and/or other materials provided with the distribution.
-+ * * Neither the name of Freescale Semiconductor nor the
-+ * names of its contributors may be used to endorse or promote products
-+ * derived from this software without specific prior written permission.
-+ *
-+ *
-+ * ALTERNATIVELY, this software may be distributed under the terms of the
-+ * GNU General Public License ("GPL") as published by the Free Software
-+ * Foundation, either version 2 of that License or (at your option) any
-+ * later version.
-+ *
-+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
-+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
-+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-+ */
-+
-+#include "dpni.h" /* DPNI_LINK_OPT_* */
-+#include "dpaa2-eth.h"
-+
-+/* size of DMA memory used to pass configuration to classifier, in bytes */
-+#define DPAA2_CLASSIFIER_DMA_SIZE 256
-+
-+/* To be kept in sync with 'enum dpni_counter' */
-+char dpaa2_ethtool_stats[][ETH_GSTRING_LEN] = {
-+ "rx frames",
-+ "rx bytes",
-+ "rx frames dropped",
-+ "rx err frames",
-+ "rx mcast frames",
-+ "rx mcast bytes",
-+ "rx bcast frames",
-+ "rx bcast bytes",
-+ "tx frames",
-+ "tx bytes",
-+ "tx err frames",
-+};
-+
-+#define DPAA2_ETH_NUM_STATS ARRAY_SIZE(dpaa2_ethtool_stats)
-+
-+/* To be kept in sync with 'struct dpaa2_eth_stats' */
-+char dpaa2_ethtool_extras[][ETH_GSTRING_LEN] = {
-+ /* per-cpu stats */
-+
-+ "tx conf frames",
-+ "tx conf bytes",
-+ "tx sg frames",
-+ "tx sg bytes",
-+ "rx sg frames",
-+ "rx sg bytes",
-+ /* how many times we had to retry the enqueue command */
-+ "tx portal busy",
-+
-+ /* Channel stats */
-+
-+ /* How many times we had to retry the volatile dequeue command */
-+ "portal busy",
-+ /* Number of notifications received */
-+ "cdan",
-+#ifdef CONFIG_FSL_QBMAN_DEBUG
-+ /* FQ stats */
-+ "rx pending frames",
-+ "rx pending bytes",
-+ "tx conf pending frames",
-+ "tx conf pending bytes",
-+ "buffer count"
-+#endif
-+};
-+
-+#define DPAA2_ETH_NUM_EXTRA_STATS ARRAY_SIZE(dpaa2_ethtool_extras)
-+
-+static void dpaa2_get_drvinfo(struct net_device *net_dev,
-+ struct ethtool_drvinfo *drvinfo)
-+{
-+ struct mc_version mc_ver;
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ char fw_version[ETHTOOL_FWVERS_LEN];
-+ char version[32];
-+ int err;
-+
-+ err = mc_get_version(priv->mc_io, 0, &mc_ver);
-+ if (err) {
-+ strlcpy(drvinfo->fw_version, "Error retrieving MC version",
-+ sizeof(drvinfo->fw_version));
-+ } else {
-+ scnprintf(fw_version, sizeof(fw_version), "%d.%d.%d",
-+ mc_ver.major, mc_ver.minor, mc_ver.revision);
-+ strlcpy(drvinfo->fw_version, fw_version,
-+ sizeof(drvinfo->fw_version));
-+ }
-+
-+ scnprintf(version, sizeof(version), "%d.%d", DPNI_VER_MAJOR,
-+ DPNI_VER_MINOR);
-+ strlcpy(drvinfo->version, version, sizeof(drvinfo->version));
-+
-+ strlcpy(drvinfo->driver, KBUILD_MODNAME, sizeof(drvinfo->driver));
-+ strlcpy(drvinfo->bus_info, dev_name(net_dev->dev.parent->parent),
-+ sizeof(drvinfo->bus_info));
-+}
-+
-+static u32 dpaa2_get_msglevel(struct net_device *net_dev)
-+{
-+ return ((struct dpaa2_eth_priv *)netdev_priv(net_dev))->msg_enable;
-+}
-+
-+static void dpaa2_set_msglevel(struct net_device *net_dev,
-+ u32 msg_enable)
-+{
-+ ((struct dpaa2_eth_priv *)netdev_priv(net_dev))->msg_enable =
-+ msg_enable;
-+}
-+
-+static int dpaa2_get_settings(struct net_device *net_dev,
-+ struct ethtool_cmd *cmd)
-+{
-+ struct dpni_link_state state = {0};
-+ int err = 0;
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+
-+ err = dpni_get_link_state(priv->mc_io, 0, priv->mc_token, &state);
-+ if (err) {
-+ netdev_err(net_dev, "ERROR %d getting link state", err);
-+ goto out;
-+ }
-+
-+ /* At the moment, we have no way of interrogating the DPMAC
-+ * from the DPNI side - and for that matter there may exist
-+ * no DPMAC at all. So for now we just don't report anything
-+ * beyond the DPNI attributes.
-+ */
-+ if (state.options & DPNI_LINK_OPT_AUTONEG)
-+ cmd->autoneg = AUTONEG_ENABLE;
-+ if (!(state.options & DPNI_LINK_OPT_HALF_DUPLEX))
-+ cmd->duplex = DUPLEX_FULL;
-+ ethtool_cmd_speed_set(cmd, state.rate);
-+
-+out:
-+ return err;
-+}
-+
-+static int dpaa2_set_settings(struct net_device *net_dev,
-+ struct ethtool_cmd *cmd)
-+{
-+ struct dpni_link_cfg cfg = {0};
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ int err = 0;
-+
-+ netdev_dbg(net_dev, "Setting link parameters...");
-+
-+ /* Due to a temporary firmware limitation, the DPNI must be down
-+ * in order to be able to change link settings. Taking steps to let
-+ * the user know that.
-+ */
-+ if (netif_running(net_dev)) {
-+ netdev_info(net_dev, "Sorry, interface must be brought down first.\n");
-+ return -EACCES;
-+ }
-+
-+ cfg.rate = ethtool_cmd_speed(cmd);
-+ if (cmd->autoneg == AUTONEG_ENABLE)
-+ cfg.options |= DPNI_LINK_OPT_AUTONEG;
-+ else
-+ cfg.options &= ~DPNI_LINK_OPT_AUTONEG;
-+ if (cmd->duplex == DUPLEX_HALF)
-+ cfg.options |= DPNI_LINK_OPT_HALF_DUPLEX;
-+ else
-+ cfg.options &= ~DPNI_LINK_OPT_HALF_DUPLEX;
-+
-+ err = dpni_set_link_cfg(priv->mc_io, 0, priv->mc_token, &cfg);
-+ if (err)
-+ /* ethtool will be loud enough if we return an error; no point
-+ * in putting our own error message on the console by default
-+ */
-+ netdev_dbg(net_dev, "ERROR %d setting link cfg", err);
-+
-+ return err;
-+}
-+
-+static void dpaa2_get_strings(struct net_device *netdev, u32 stringset,
-+ u8 *data)
-+{
-+ u8 *p = data;
-+ int i;
-+
-+ switch (stringset) {
-+ case ETH_SS_STATS:
-+ for (i = 0; i < DPAA2_ETH_NUM_STATS; i++) {
-+ strlcpy(p, dpaa2_ethtool_stats[i], ETH_GSTRING_LEN);
-+ p += ETH_GSTRING_LEN;
-+ }
-+ for (i = 0; i < DPAA2_ETH_NUM_EXTRA_STATS; i++) {
-+ strlcpy(p, dpaa2_ethtool_extras[i], ETH_GSTRING_LEN);
-+ p += ETH_GSTRING_LEN;
-+ }
-+ break;
-+ }
-+}
-+
-+static int dpaa2_get_sset_count(struct net_device *net_dev, int sset)
-+{
-+ switch (sset) {
-+ case ETH_SS_STATS: /* ethtool_get_stats(), ethtool_get_drvinfo() */
-+ return DPAA2_ETH_NUM_STATS + DPAA2_ETH_NUM_EXTRA_STATS;
-+ default:
-+ return -EOPNOTSUPP;
-+ }
-+}
-+
-+/** Fill in hardware counters, as returned by the MC firmware.
-+ */
-+static void dpaa2_get_ethtool_stats(struct net_device *net_dev,
-+ struct ethtool_stats *stats,
-+ u64 *data)
-+{
-+ int i; /* Current index in the data array */
-+ int j, k, err;
-+
-+#ifdef CONFIG_FSL_QBMAN_DEBUG
-+ u32 fcnt, bcnt;
-+ u32 fcnt_rx_total = 0, fcnt_tx_total = 0;
-+ u32 bcnt_rx_total = 0, bcnt_tx_total = 0;
-+ u32 buf_cnt;
-+#endif
-+ u64 cdan = 0;
-+ u64 portal_busy = 0;
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ struct dpaa2_eth_stats *extras;
-+ struct dpaa2_eth_ch_stats *ch_stats;
-+
-+ memset(data, 0,
-+ sizeof(u64) * (DPAA2_ETH_NUM_STATS + DPAA2_ETH_NUM_EXTRA_STATS));
-+
-+ /* Print standard counters, from DPNI statistics */
-+ for (i = 0; i < DPAA2_ETH_NUM_STATS; i++) {
-+ err = dpni_get_counter(priv->mc_io, 0, priv->mc_token, i,
-+ data + i);
-+ if (err != 0)
-+ netdev_warn(net_dev, "Err %d getting DPNI counter %d",
-+ err, i);
-+ }
-+
-+ /* Print per-cpu extra stats */
-+ for_each_online_cpu(k) {
-+ extras = per_cpu_ptr(priv->percpu_extras, k);
-+ for (j = 0; j < sizeof(*extras) / sizeof(__u64); j++)
-+ *((__u64 *)data + i + j) += *((__u64 *)extras + j);
-+ }
-+ i += j;
-+
-+ /* We may be using fewer DPIOs than actual CPUs */
-+ for_each_cpu(j, &priv->dpio_cpumask) {
-+ ch_stats = &priv->channel[j]->stats;
-+ cdan += ch_stats->cdan;
-+ portal_busy += ch_stats->dequeue_portal_busy;
-+ }
-+
-+ *(data + i++) = portal_busy;
-+ *(data + i++) = cdan;
-+
-+#ifdef CONFIG_FSL_QBMAN_DEBUG
-+ for (j = 0; j < priv->num_fqs; j++) {
-+ /* Print FQ instantaneous counts */
-+ err = dpaa2_io_query_fq_count(NULL, priv->fq[j].fqid,
-+ &fcnt, &bcnt);
-+ if (err) {
-+ netdev_warn(net_dev, "FQ query error %d", err);
-+ return;
-+ }
-+
-+ if (priv->fq[j].type == DPAA2_TX_CONF_FQ) {
-+ fcnt_tx_total += fcnt;
-+ bcnt_tx_total += bcnt;
-+ } else {
-+ fcnt_rx_total += fcnt;
-+ bcnt_rx_total += bcnt;
-+ }
-+ }
-+ *(data + i++) = fcnt_rx_total;
-+ *(data + i++) = bcnt_rx_total;
-+ *(data + i++) = fcnt_tx_total;
-+ *(data + i++) = bcnt_tx_total;
-+
-+ err = dpaa2_io_query_bp_count(NULL, priv->dpbp_attrs.bpid, &buf_cnt);
-+ if (err) {
-+ netdev_warn(net_dev, "Buffer count query error %d\n", err);
-+ return;
-+ }
-+ *(data + i++) = buf_cnt;
-+#endif
-+}
-+
-+static const struct dpaa2_hash_fields {
-+ u64 rxnfc_field;
-+ enum net_prot cls_prot;
-+ int cls_field;
-+ int size;
-+} dpaa2_hash_fields[] = {
-+ {
-+ /* L2 header */
-+ .rxnfc_field = RXH_L2DA,
-+ .cls_prot = NET_PROT_ETH,
-+ .cls_field = NH_FLD_ETH_DA,
-+ .size = 6,
-+ }, {
-+ /* VLAN header */
-+ .rxnfc_field = RXH_VLAN,
-+ .cls_prot = NET_PROT_VLAN,
-+ .cls_field = NH_FLD_VLAN_TCI,
-+ .size = 2,
-+ }, {
-+ /* IP header */
-+ .rxnfc_field = RXH_IP_SRC,
-+ .cls_prot = NET_PROT_IP,
-+ .cls_field = NH_FLD_IP_SRC,
-+ .size = 4,
-+ }, {
-+ .rxnfc_field = RXH_IP_DST,
-+ .cls_prot = NET_PROT_IP,
-+ .cls_field = NH_FLD_IP_DST,
-+ .size = 4,
-+ }, {
-+ .rxnfc_field = RXH_L3_PROTO,
-+ .cls_prot = NET_PROT_IP,
-+ .cls_field = NH_FLD_IP_PROTO,
-+ .size = 1,
-+ }, {
-+ /* Using UDP ports, this is functionally equivalent to raw
-+ * byte pairs from L4 header.
-+ */
-+ .rxnfc_field = RXH_L4_B_0_1,
-+ .cls_prot = NET_PROT_UDP,
-+ .cls_field = NH_FLD_UDP_PORT_SRC,
-+ .size = 2,
-+ }, {
-+ .rxnfc_field = RXH_L4_B_2_3,
-+ .cls_prot = NET_PROT_UDP,
-+ .cls_field = NH_FLD_UDP_PORT_DST,
-+ .size = 2,
-+ },
-+};
-+
-+static int dpaa2_cls_is_enabled(struct net_device *net_dev, u64 flag)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+
-+ return !!(priv->rx_hash_fields & flag);
-+}
-+
-+static int dpaa2_cls_key_off(struct net_device *net_dev, u64 flag)
-+{
-+ int i, off = 0;
-+
-+ for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++) {
-+ if (dpaa2_hash_fields[i].rxnfc_field & flag)
-+ return off;
-+ if (dpaa2_cls_is_enabled(net_dev,
-+ dpaa2_hash_fields[i].rxnfc_field))
-+ off += dpaa2_hash_fields[i].size;
-+ }
-+
-+ return -1;
-+}
-+
-+static u8 dpaa2_cls_key_size(struct net_device *net_dev)
-+{
-+ u8 i, size = 0;
-+
-+ for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++) {
-+ if (!dpaa2_cls_is_enabled(net_dev,
-+ dpaa2_hash_fields[i].rxnfc_field))
-+ continue;
-+ size += dpaa2_hash_fields[i].size;
-+ }
-+
-+ return size;
-+}
-+
-+static u8 dpaa2_cls_max_key_size(struct net_device *net_dev)
-+{
-+ u8 i, size = 0;
-+
-+ for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++)
-+ size += dpaa2_hash_fields[i].size;
-+
-+ return size;
-+}
-+
-+void dpaa2_cls_check(struct net_device *net_dev)
-+{
-+ u8 key_size = dpaa2_cls_max_key_size(net_dev);
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+
-+ if (priv->dpni_attrs.options & DPNI_OPT_DIST_FS &&
-+ priv->dpni_attrs.max_dist_key_size < key_size) {
-+ dev_err(&net_dev->dev,
-+ "max_dist_key_size = %d, expected %d. Steering is disabled\n",
-+ priv->dpni_attrs.max_dist_key_size,
-+ key_size);
-+ priv->dpni_attrs.options &= ~DPNI_OPT_DIST_FS;
-+ }
-+}
-+
-+/* Set RX hash options
-+ * flags is a combination of RXH_ bits
-+ */
-+int dpaa2_set_hash(struct net_device *net_dev, u64 flags)
-+{
-+ struct device *dev = net_dev->dev.parent;
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ struct dpkg_profile_cfg cls_cfg;
-+ struct dpni_rx_tc_dist_cfg dist_cfg;
-+ u8 *dma_mem;
-+ u64 enabled_flags = 0;
-+ int i;
-+ int err = 0;
-+
-+ if (!dpaa2_eth_hash_enabled(priv)) {
-+ dev_err(dev, "Hashing support is not enabled\n");
-+ return -EOPNOTSUPP;
-+ }
-+
-+ if (flags & ~DPAA2_RXH_SUPPORTED) {
-+ /* RXH_DISCARD is not supported */
-+ dev_err(dev, "unsupported option selected, supported options are: mvtsdfn\n");
-+ return -EOPNOTSUPP;
-+ }
-+
-+ memset(&cls_cfg, 0, sizeof(cls_cfg));
-+
-+ for (i = 0; i < ARRAY_SIZE(dpaa2_hash_fields); i++) {
-+ struct dpkg_extract *key =
-+ &cls_cfg.extracts[cls_cfg.num_extracts];
-+
-+ if (!(flags & dpaa2_hash_fields[i].rxnfc_field))
-+ continue;
-+
-+ if (cls_cfg.num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
-+ dev_err(dev, "error adding key extraction rule, too many rules?\n");
-+ return -E2BIG;
-+ }
-+
-+ key->type = DPKG_EXTRACT_FROM_HDR;
-+ key->extract.from_hdr.prot =
-+ dpaa2_hash_fields[i].cls_prot;
-+ key->extract.from_hdr.type = DPKG_FULL_FIELD;
-+ key->extract.from_hdr.field =
-+ dpaa2_hash_fields[i].cls_field;
-+ cls_cfg.num_extracts++;
-+
-+ enabled_flags |= dpaa2_hash_fields[i].rxnfc_field;
-+ }
-+
-+ dma_mem = kzalloc(DPAA2_CLASSIFIER_DMA_SIZE, GFP_DMA | GFP_KERNEL);
-+ if (!dma_mem)
-+ return -ENOMEM;
-+
-+ err = dpni_prepare_key_cfg(&cls_cfg, dma_mem);
-+ if (err) {
-+ dev_err(dev, "dpni_prepare_key_cfg error %d", err);
-+ return err;
-+ }
-+
-+ memset(&dist_cfg, 0, sizeof(dist_cfg));
-+
-+ /* Prepare for setting the rx dist */
-+ dist_cfg.key_cfg_iova = dma_map_single(net_dev->dev.parent, dma_mem,
-+ DPAA2_CLASSIFIER_DMA_SIZE,
-+ DMA_TO_DEVICE);
-+ if (dma_mapping_error(net_dev->dev.parent, dist_cfg.key_cfg_iova)) {
-+ dev_err(dev, "DMA mapping failed\n");
-+ kfree(dma_mem);
-+ return -ENOMEM;
-+ }
-+
-+ dist_cfg.dist_size = dpaa2_queue_count(priv);
-+ if (dpaa2_eth_fs_enabled(priv)) {
-+ dist_cfg.dist_mode = DPNI_DIST_MODE_FS;
-+ dist_cfg.fs_cfg.miss_action = DPNI_FS_MISS_HASH;
-+ } else {
-+ dist_cfg.dist_mode = DPNI_DIST_MODE_HASH;
-+ }
-+
-+ err = dpni_set_rx_tc_dist(priv->mc_io, 0, priv->mc_token, 0, &dist_cfg);
-+ dma_unmap_single(net_dev->dev.parent, dist_cfg.key_cfg_iova,
-+ DPAA2_CLASSIFIER_DMA_SIZE, DMA_TO_DEVICE);
-+ kfree(dma_mem);
-+ if (err) {
-+ dev_err(dev, "dpni_set_rx_tc_dist() error %d\n", err);
-+ return err;
-+ }
-+
-+ priv->rx_hash_fields = enabled_flags;
-+
-+ return 0;
-+}
-+
-+static int dpaa2_cls_prep_rule(struct net_device *net_dev,
-+ struct ethtool_rx_flow_spec *fs,
-+ void *key)
-+{
-+ struct ethtool_tcpip4_spec *l4ip4_h, *l4ip4_m;
-+ struct ethhdr *eth_h, *eth_m;
-+ struct ethtool_flow_ext *ext_h, *ext_m;
-+ const u8 key_size = dpaa2_cls_key_size(net_dev);
-+ void *msk = key + key_size;
-+
-+ memset(key, 0, key_size * 2);
-+
-+ /* This code is a major mess, it has to be cleaned up after the
-+ * classification mask issue is fixed and key format will be made static
-+ */
-+
-+ switch (fs->flow_type & 0xff) {
-+ case TCP_V4_FLOW:
-+ l4ip4_h = &fs->h_u.tcp_ip4_spec;
-+ l4ip4_m = &fs->m_u.tcp_ip4_spec;
-+ /* TODO: ethertype to match IPv4 and protocol to match TCP */
-+ goto l4ip4;
-+
-+ case UDP_V4_FLOW:
-+ l4ip4_h = &fs->h_u.udp_ip4_spec;
-+ l4ip4_m = &fs->m_u.udp_ip4_spec;
-+ goto l4ip4;
-+
-+ case SCTP_V4_FLOW:
-+ l4ip4_h = &fs->h_u.sctp_ip4_spec;
-+ l4ip4_m = &fs->m_u.sctp_ip4_spec;
-+
-+l4ip4:
-+ if (l4ip4_m->tos) {
-+ netdev_err(net_dev,
-+ "ToS is not supported for IPv4 L4\n");
-+ return -EOPNOTSUPP;
-+ }
-+ if (l4ip4_m->ip4src &&
-+ !dpaa2_cls_is_enabled(net_dev, RXH_IP_SRC)) {
-+ netdev_err(net_dev, "IP SRC not supported!\n");
-+ return -EOPNOTSUPP;
-+ }
-+ if (l4ip4_m->ip4dst &&
-+ !dpaa2_cls_is_enabled(net_dev, RXH_IP_DST)) {
-+ netdev_err(net_dev, "IP DST not supported!\n");
-+ return -EOPNOTSUPP;
-+ }
-+ if (l4ip4_m->psrc &&
-+ !dpaa2_cls_is_enabled(net_dev, RXH_L4_B_0_1)) {
-+ netdev_err(net_dev, "PSRC not supported, ignored\n");
-+ return -EOPNOTSUPP;
-+ }
-+ if (l4ip4_m->pdst &&
-+ !dpaa2_cls_is_enabled(net_dev, RXH_L4_B_2_3)) {
-+ netdev_err(net_dev, "PDST not supported, ignored\n");
-+ return -EOPNOTSUPP;
-+ }
-+
-+ if (dpaa2_cls_is_enabled(net_dev, RXH_IP_SRC)) {
-+ *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_IP_SRC))
-+ = l4ip4_h->ip4src;
-+ *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_IP_SRC))
-+ = l4ip4_m->ip4src;
-+ }
-+ if (dpaa2_cls_is_enabled(net_dev, RXH_IP_DST)) {
-+ *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_IP_DST))
-+ = l4ip4_h->ip4dst;
-+ *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_IP_DST))
-+ = l4ip4_m->ip4dst;
-+ }
-+
-+ if (dpaa2_cls_is_enabled(net_dev, RXH_L4_B_0_1)) {
-+ *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_L4_B_0_1))
-+ = l4ip4_h->psrc;
-+ *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_L4_B_0_1))
-+ = l4ip4_m->psrc;
-+ }
-+
-+ if (dpaa2_cls_is_enabled(net_dev, RXH_L4_B_2_3)) {
-+ *(u32 *)(key + dpaa2_cls_key_off(net_dev, RXH_L4_B_2_3))
-+ = l4ip4_h->pdst;
-+ *(u32 *)(msk + dpaa2_cls_key_off(net_dev, RXH_L4_B_2_3))
-+ = l4ip4_m->pdst;
-+ }
-+ break;
-+
-+ case ETHER_FLOW:
-+ eth_h = &fs->h_u.ether_spec;
-+ eth_m = &fs->m_u.ether_spec;
-+
-+ if (eth_m->h_proto) {
-+ netdev_err(net_dev, "Ethertype is not supported!\n");
-+ return -EOPNOTSUPP;
-+ }
-+
-+ if (!is_zero_ether_addr(eth_m->h_source)) {
-+ netdev_err(net_dev, "ETH SRC is not supported!\n");
-+ return -EOPNOTSUPP;
-+ }
-+
-+ if (dpaa2_cls_is_enabled(net_dev, RXH_L2DA)) {
-+ ether_addr_copy(key
-+ + dpaa2_cls_key_off(net_dev, RXH_L2DA),
-+ eth_h->h_dest);
-+ ether_addr_copy(msk
-+ + dpaa2_cls_key_off(net_dev, RXH_L2DA),
-+ eth_m->h_dest);
-+ } else {
-+ if (!is_zero_ether_addr(eth_m->h_dest)) {
-+ netdev_err(net_dev,
-+ "ETH DST is not supported!\n");
-+ return -EOPNOTSUPP;
-+ }
-+ }
-+ break;
-+
-+ default:
-+ /* TODO: IP user flow, AH, ESP */
-+ return -EOPNOTSUPP;
-+ }
-+
-+ if (fs->flow_type & FLOW_EXT) {
-+ /* TODO: ETH data, VLAN ethertype, VLAN TCI .. */
-+ return -EOPNOTSUPP;
-+ }
-+
-+ if (fs->flow_type & FLOW_MAC_EXT) {
-+ ext_h = &fs->h_ext;
-+ ext_m = &fs->m_ext;
-+
-+ if (dpaa2_cls_is_enabled(net_dev, RXH_L2DA)) {
-+ ether_addr_copy(key
-+ + dpaa2_cls_key_off(net_dev, RXH_L2DA),
-+ ext_h->h_dest);
-+ ether_addr_copy(msk
-+ + dpaa2_cls_key_off(net_dev, RXH_L2DA),
-+ ext_m->h_dest);
-+ } else {
-+ if (!is_zero_ether_addr(ext_m->h_dest)) {
-+ netdev_err(net_dev,
-+ "ETH DST is not supported!\n");
-+ return -EOPNOTSUPP;
-+ }
-+ }
-+ }
-+ return 0;
-+}
-+
-+static int dpaa2_do_cls(struct net_device *net_dev,
-+ struct ethtool_rx_flow_spec *fs,
-+ bool add)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ const int rule_cnt = DPAA2_CLASSIFIER_ENTRY_COUNT;
-+ struct dpni_rule_cfg rule_cfg;
-+ void *dma_mem;
-+ int err = 0;
-+
-+ if (!dpaa2_eth_fs_enabled(priv)) {
-+ netdev_err(net_dev, "dev does not support steering!\n");
-+ /* dev doesn't support steering */
-+ return -EOPNOTSUPP;
-+ }
-+
-+ if ((fs->ring_cookie != RX_CLS_FLOW_DISC &&
-+ fs->ring_cookie >= dpaa2_queue_count(priv)) ||
-+ fs->location >= rule_cnt)
-+ return -EINVAL;
-+
-+ memset(&rule_cfg, 0, sizeof(rule_cfg));
-+ rule_cfg.key_size = dpaa2_cls_key_size(net_dev);
-+
-+ /* allocate twice the key size, for the actual key and for mask */
-+ dma_mem = kzalloc(rule_cfg.key_size * 2, GFP_DMA | GFP_KERNEL);
-+ if (!dma_mem)
-+ return -ENOMEM;
-+
-+ err = dpaa2_cls_prep_rule(net_dev, fs, dma_mem);
-+ if (err)
-+ goto err_free_mem;
-+
-+ rule_cfg.key_iova = dma_map_single(net_dev->dev.parent, dma_mem,
-+ rule_cfg.key_size * 2,
-+ DMA_TO_DEVICE);
-+
-+ rule_cfg.mask_iova = rule_cfg.key_iova + rule_cfg.key_size;
-+
-+ if (!(priv->dpni_attrs.options & DPNI_OPT_FS_MASK_SUPPORT)) {
-+ int i;
-+ u8 *mask = dma_mem + rule_cfg.key_size;
-+
-+ /* check that nothing is masked out, otherwise it won't work */
-+ for (i = 0; i < rule_cfg.key_size; i++) {
-+ if (mask[i] == 0xff)
-+ continue;
-+ netdev_err(net_dev, "dev does not support masking!\n");
-+ err = -EOPNOTSUPP;
-+ goto err_free_mem;
-+ }
-+ rule_cfg.mask_iova = 0;
-+ }
-+
-+ /* No way to control rule order in firmware */
-+ if (add)
-+ err = dpni_add_fs_entry(priv->mc_io, 0, priv->mc_token, 0,
-+ &rule_cfg, (u16)fs->ring_cookie);
-+ else
-+ err = dpni_remove_fs_entry(priv->mc_io, 0, priv->mc_token, 0,
-+ &rule_cfg);
-+
-+ dma_unmap_single(net_dev->dev.parent, rule_cfg.key_iova,
-+ rule_cfg.key_size * 2, DMA_TO_DEVICE);
-+	if (err) {
-+		netdev_err(net_dev, "dpaa2_do_cls() error %d\n", err);
-+		goto err_free_mem;
-+	}
-+
-+ priv->cls_rule[fs->location].fs = *fs;
-+ priv->cls_rule[fs->location].in_use = true;
-+
-+err_free_mem:
-+ kfree(dma_mem);
-+
-+ return err;
-+}
-+
-+static int dpaa2_add_cls(struct net_device *net_dev,
-+ struct ethtool_rx_flow_spec *fs)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ int err;
-+
-+ err = dpaa2_do_cls(net_dev, fs, true);
-+ if (err)
-+ return err;
-+
-+ priv->cls_rule[fs->location].in_use = true;
-+ priv->cls_rule[fs->location].fs = *fs;
-+
-+ return 0;
-+}
-+
-+static int dpaa2_del_cls(struct net_device *net_dev, int location)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ int err;
-+
-+ err = dpaa2_do_cls(net_dev, &priv->cls_rule[location].fs, false);
-+ if (err)
-+ return err;
-+
-+ priv->cls_rule[location].in_use = false;
-+
-+ return 0;
-+}
-+
-+static void dpaa2_clear_cls(struct net_device *net_dev)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ int i, err;
-+
-+ for (i = 0; i < DPAA2_CLASSIFIER_ENTRY_COUNT; i++) {
-+ if (!priv->cls_rule[i].in_use)
-+ continue;
-+
-+ err = dpaa2_del_cls(net_dev, i);
-+ if (err)
-+ netdev_warn(net_dev,
-+ "err trying to delete classification entry %d\n",
-+ i);
-+ }
-+}
-+
-+static int dpaa2_set_rxnfc(struct net_device *net_dev,
-+ struct ethtool_rxnfc *rxnfc)
-+{
-+ int err = 0;
-+
-+ switch (rxnfc->cmd) {
-+ case ETHTOOL_SRXFH:
-+		/* first off clear ALL classification rules, changing key
-+		 * composition will break them anyway
-+		 */
-+ dpaa2_clear_cls(net_dev);
-+ /* we purposely ignore cmd->flow_type for now, because the
-+ * classifier only supports a single set of fields for all
-+ * protocols
-+ */
-+ err = dpaa2_set_hash(net_dev, rxnfc->data);
-+ break;
-+ case ETHTOOL_SRXCLSRLINS:
-+ err = dpaa2_add_cls(net_dev, &rxnfc->fs);
-+ break;
-+
-+ case ETHTOOL_SRXCLSRLDEL:
-+ err = dpaa2_del_cls(net_dev, rxnfc->fs.location);
-+ break;
-+
-+ default:
-+ err = -EOPNOTSUPP;
-+ }
-+
-+ return err;
-+}
-+
-+static int dpaa2_get_rxnfc(struct net_device *net_dev,
-+ struct ethtool_rxnfc *rxnfc, u32 *rule_locs)
-+{
-+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-+ const int rule_cnt = DPAA2_CLASSIFIER_ENTRY_COUNT;
-+ int i, j;
-+
-+ switch (rxnfc->cmd) {
-+ case ETHTOOL_GRXFH:
-+ /* we purposely ignore cmd->flow_type for now, because the
-+ * classifier only supports a single set of fields for all
-+ * protocols
-+ */
-+ rxnfc->data = priv->rx_hash_fields;
-+ break;
-+
-+ case ETHTOOL_GRXRINGS:
-+ rxnfc->data = dpaa2_queue_count(priv);
-+ break;
-+
-+ case ETHTOOL_GRXCLSRLCNT:
-+ for (i = 0, rxnfc->rule_cnt = 0; i < rule_cnt; i++)
-+ if (priv->cls_rule[i].in_use)
-+ rxnfc->rule_cnt++;
-+ rxnfc->data = rule_cnt;
-+ break;
-+
-+ case ETHTOOL_GRXCLSRULE:
-+ if (!priv->cls_rule[rxnfc->fs.location].in_use)
-+ return -EINVAL;
-+
-+ rxnfc->fs = priv->cls_rule[rxnfc->fs.location].fs;
-+ break;
-+
-+ case ETHTOOL_GRXCLSRLALL:
-+ for (i = 0, j = 0; i < rule_cnt; i++) {
-+ if (!priv->cls_rule[i].in_use)
-+ continue;
-+ if (j == rxnfc->rule_cnt)
-+ return -EMSGSIZE;
-+ rule_locs[j++] = i;
-+ }
-+ rxnfc->rule_cnt = j;
-+ rxnfc->data = rule_cnt;
-+ break;
-+
-+ default:
-+ return -EOPNOTSUPP;
-+ }
-+
-+ return 0;
-+}
-+
-+const struct ethtool_ops dpaa2_ethtool_ops = {
-+ .get_drvinfo = dpaa2_get_drvinfo,
-+ .get_msglevel = dpaa2_get_msglevel,
-+ .set_msglevel = dpaa2_set_msglevel,
-+ .get_link = ethtool_op_get_link,
-+ .get_settings = dpaa2_get_settings,
-+ .set_settings = dpaa2_set_settings,
-+ .get_sset_count = dpaa2_get_sset_count,
-+ .get_ethtool_stats = dpaa2_get_ethtool_stats,
-+ .get_strings = dpaa2_get_strings,
-+ .get_rxnfc = dpaa2_get_rxnfc,
-+ .set_rxnfc = dpaa2_set_rxnfc,
-+};
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/ethernet/dpkg.h
-@@ -0,0 +1,175 @@
-+/* Copyright 2013-2015 Freescale Semiconductor Inc.
-+ *
-+ * Redistribution and use in source and binary forms, with or without
-+ * modification, are permitted provided that the following conditions are met:
-+ * * Redistributions of source code must retain the above copyright
-+ * notice, this list of conditions and the following disclaimer.
-+ * * Redistributions in binary form must reproduce the above copyright
-+ * notice, this list of conditions and the following disclaimer in the
-+ * documentation and/or other materials provided with the distribution.
-+ * * Neither the name of the above-listed copyright holders nor the
-+ * names of any contributors may be used to endorse or promote products
-+ * derived from this software without specific prior written permission.
-+ *
-+ *
-+ * ALTERNATIVELY, this software may be distributed under the terms of the
-+ * GNU General Public License ("GPL") as published by the Free Software
-+ * Foundation, either version 2 of that License or (at your option) any
-+ * later version.
-+ *
-+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
-+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-+ * POSSIBILITY OF SUCH DAMAGE.
-+ */
-+#ifndef __FSL_DPKG_H_
-+#define __FSL_DPKG_H_
-+
-+#include <linux/types.h>
-+#include "../../fsl-mc/include/net.h"
-+
-+/* Data Path Key Generator API
-+ * Contains initialization APIs and runtime APIs for the Key Generator
-+ */
-+
-+/** Key Generator properties */
-+
-+/**
-+ * Number of masks per key extraction
-+ */
-+#define DPKG_NUM_OF_MASKS 4
-+/**
-+ * Number of extractions per key profile
-+ */
-+#define DPKG_MAX_NUM_OF_EXTRACTS 10
-+
-+/**
-+ * enum dpkg_extract_from_hdr_type - Selecting extraction by header types
-+ * @DPKG_FROM_HDR: Extract selected bytes from header, by offset
-+ * @DPKG_FROM_FIELD: Extract selected bytes from header, by offset from field
-+ * @DPKG_FULL_FIELD: Extract a full field
-+ */
-+enum dpkg_extract_from_hdr_type {
-+ DPKG_FROM_HDR = 0,
-+ DPKG_FROM_FIELD = 1,
-+ DPKG_FULL_FIELD = 2
-+};
-+
-+/**
-+ * enum dpkg_extract_type - Enumeration for selecting extraction type
-+ * @DPKG_EXTRACT_FROM_HDR: Extract from the header
-+ * @DPKG_EXTRACT_FROM_DATA: Extract from data not in specific header
-+ * @DPKG_EXTRACT_FROM_PARSE: Extract from parser-result;
-+ * e.g. can be used to extract header existence;
-+ * please refer to 'Parse Result definition' section in the parser BG
-+ */
-+enum dpkg_extract_type {
-+ DPKG_EXTRACT_FROM_HDR = 0,
-+ DPKG_EXTRACT_FROM_DATA = 1,
-+ DPKG_EXTRACT_FROM_PARSE = 3
-+};
-+
-+/**
-+ * struct dpkg_mask - A structure for defining a single extraction mask
-+ * @mask: Byte mask for the extracted content
-+ * @offset: Offset within the extracted content
-+ */
-+struct dpkg_mask {
-+ uint8_t mask;
-+ uint8_t offset;
-+};
-+
-+/**
-+ * struct dpkg_extract - A structure for defining a single extraction
-+ * @type: Determines how the union below is interpreted:
-+ * DPKG_EXTRACT_FROM_HDR: selects 'from_hdr';
-+ * DPKG_EXTRACT_FROM_DATA: selects 'from_data';
-+ * DPKG_EXTRACT_FROM_PARSE: selects 'from_parse'
-+ * @extract: Selects extraction method
-+ * @num_of_byte_masks: Defines the number of valid entries in the array below;
-+ * This is also the number of bytes to be used as masks
-+ * @masks: Masks parameters
-+ */
-+struct dpkg_extract {
-+ enum dpkg_extract_type type;
-+ /**
-+ * union extract - Selects extraction method
-+ * @from_hdr - Used when 'type = DPKG_EXTRACT_FROM_HDR'
-+ * @from_data - Used when 'type = DPKG_EXTRACT_FROM_DATA'
-+ * @from_parse - Used when 'type = DPKG_EXTRACT_FROM_PARSE'
-+ */
-+ union {
-+ /**
-+ * struct from_hdr - Used when 'type = DPKG_EXTRACT_FROM_HDR'
-+ * @prot: Any of the supported headers
-+ * @type: Defines the type of header extraction:
-+ * DPKG_FROM_HDR: use size & offset below;
-+ * DPKG_FROM_FIELD: use field, size and offset below;
-+ * DPKG_FULL_FIELD: use field below
-+ * @field: One of the supported fields (NH_FLD_)
-+ *
-+ * @size: Size in bytes
-+ * @offset: Byte offset
-+ * @hdr_index: Clear for cases not listed below;
-+ * Used for protocols that may have more than a single
-+ * header, 0 indicates an outer header;
-+ * Supported protocols (possible values):
-+ * NET_PROT_VLAN (0, HDR_INDEX_LAST);
-+ * NET_PROT_MPLS (0, 1, HDR_INDEX_LAST);
-+ * NET_PROT_IP(0, HDR_INDEX_LAST);
-+ * NET_PROT_IPv4(0, HDR_INDEX_LAST);
-+ * NET_PROT_IPv6(0, HDR_INDEX_LAST);
-+ */
-+
-+ struct {
-+ enum net_prot prot;
-+ enum dpkg_extract_from_hdr_type type;
-+ uint32_t field;
-+ uint8_t size;
-+ uint8_t offset;
-+ uint8_t hdr_index;
-+ } from_hdr;
-+ /**
-+ * struct from_data - Used when 'type = DPKG_EXTRACT_FROM_DATA'
-+ * @size: Size in bytes
-+ * @offset: Byte offset
-+ */
-+ struct {
-+ uint8_t size;
-+ uint8_t offset;
-+ } from_data;
-+
-+ /**
-+ * struct from_parse - Used when 'type = DPKG_EXTRACT_FROM_PARSE'
-+ * @size: Size in bytes
-+ * @offset: Byte offset
-+ */
-+ struct {
-+ uint8_t size;
-+ uint8_t offset;
-+ } from_parse;
-+ } extract;
-+
-+ uint8_t num_of_byte_masks;
-+ struct dpkg_mask masks[DPKG_NUM_OF_MASKS];
-+};
-+
-+/**
-+ * struct dpkg_profile_cfg - A structure for defining a full Key Generation
-+ * profile (rule)
-+ * @num_extracts: Defines the number of valid entries in the array below
-+ * @extracts: Array of required extractions
-+ */
-+struct dpkg_profile_cfg {
-+ uint8_t num_extracts;
-+ struct dpkg_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
-+};
-+
-+#endif /* __FSL_DPKG_H_ */
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h
-@@ -0,0 +1,1058 @@
-+/* Copyright 2013-2015 Freescale Semiconductor Inc.
-+ *
-+ * Redistribution and use in source and binary forms, with or without
-+ * modification, are permitted provided that the following conditions are met:
-+ * * Redistributions of source code must retain the above copyright
-+ * notice, this list of conditions and the following disclaimer.
-+ * * Redistributions in binary form must reproduce the above copyright
-+ * notice, this list of conditions and the following disclaimer in the
-+ * documentation and/or other materials provided with the distribution.
-+ * * Neither the name of the above-listed copyright holders nor the
-+ * names of any contributors may be used to endorse or promote products
-+ * derived from this software without specific prior written permission.
-+ *
-+ *
-+ * ALTERNATIVELY, this software may be distributed under the terms of the
-+ * GNU General Public License ("GPL") as published by the Free Software
-+ * Foundation, either version 2 of that License or (at your option) any
-+ * later version.
-+ *
-+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
-+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-+ * POSSIBILITY OF SUCH DAMAGE.
-+ */
-+#ifndef _FSL_DPNI_CMD_H
-+#define _FSL_DPNI_CMD_H
-+
-+/* DPNI Version */
-+#define DPNI_VER_MAJOR 6
-+#define DPNI_VER_MINOR 0
-+
-+/* Command IDs */
-+#define DPNI_CMDID_OPEN 0x801
-+#define DPNI_CMDID_CLOSE 0x800
-+#define DPNI_CMDID_CREATE 0x901
-+#define DPNI_CMDID_DESTROY 0x900
-+
-+#define DPNI_CMDID_ENABLE 0x002
-+#define DPNI_CMDID_DISABLE 0x003
-+#define DPNI_CMDID_GET_ATTR 0x004
-+#define DPNI_CMDID_RESET 0x005
-+#define DPNI_CMDID_IS_ENABLED 0x006
-+
-+#define DPNI_CMDID_SET_IRQ 0x010
-+#define DPNI_CMDID_GET_IRQ 0x011
-+#define DPNI_CMDID_SET_IRQ_ENABLE 0x012
-+#define DPNI_CMDID_GET_IRQ_ENABLE 0x013
-+#define DPNI_CMDID_SET_IRQ_MASK 0x014
-+#define DPNI_CMDID_GET_IRQ_MASK 0x015
-+#define DPNI_CMDID_GET_IRQ_STATUS 0x016
-+#define DPNI_CMDID_CLEAR_IRQ_STATUS 0x017
-+
-+#define DPNI_CMDID_SET_POOLS 0x200
-+#define DPNI_CMDID_GET_RX_BUFFER_LAYOUT 0x201
-+#define DPNI_CMDID_SET_RX_BUFFER_LAYOUT 0x202
-+#define DPNI_CMDID_GET_TX_BUFFER_LAYOUT 0x203
-+#define DPNI_CMDID_SET_TX_BUFFER_LAYOUT 0x204
-+#define DPNI_CMDID_SET_TX_CONF_BUFFER_LAYOUT 0x205
-+#define DPNI_CMDID_GET_TX_CONF_BUFFER_LAYOUT 0x206
-+#define DPNI_CMDID_SET_L3_CHKSUM_VALIDATION 0x207
-+#define DPNI_CMDID_GET_L3_CHKSUM_VALIDATION 0x208
-+#define DPNI_CMDID_SET_L4_CHKSUM_VALIDATION 0x209
-+#define DPNI_CMDID_GET_L4_CHKSUM_VALIDATION 0x20A
-+#define DPNI_CMDID_SET_ERRORS_BEHAVIOR 0x20B
-+#define DPNI_CMDID_SET_TX_CONF_REVOKE 0x20C
-+
-+#define DPNI_CMDID_GET_QDID 0x210
-+#define DPNI_CMDID_GET_SP_INFO 0x211
-+#define DPNI_CMDID_GET_TX_DATA_OFFSET 0x212
-+#define DPNI_CMDID_GET_COUNTER 0x213
-+#define DPNI_CMDID_SET_COUNTER 0x214
-+#define DPNI_CMDID_GET_LINK_STATE 0x215
-+#define DPNI_CMDID_SET_MAX_FRAME_LENGTH 0x216
-+#define DPNI_CMDID_GET_MAX_FRAME_LENGTH 0x217
-+#define DPNI_CMDID_SET_MTU 0x218
-+#define DPNI_CMDID_GET_MTU 0x219
-+#define DPNI_CMDID_SET_LINK_CFG 0x21A
-+#define DPNI_CMDID_SET_TX_SHAPING 0x21B
-+
-+#define DPNI_CMDID_SET_MCAST_PROMISC 0x220
-+#define DPNI_CMDID_GET_MCAST_PROMISC 0x221
-+#define DPNI_CMDID_SET_UNICAST_PROMISC 0x222
-+#define DPNI_CMDID_GET_UNICAST_PROMISC 0x223
-+#define DPNI_CMDID_SET_PRIM_MAC 0x224
-+#define DPNI_CMDID_GET_PRIM_MAC 0x225
-+#define DPNI_CMDID_ADD_MAC_ADDR 0x226
-+#define DPNI_CMDID_REMOVE_MAC_ADDR 0x227
-+#define DPNI_CMDID_CLR_MAC_FILTERS 0x228
-+
-+#define DPNI_CMDID_SET_VLAN_FILTERS 0x230
-+#define DPNI_CMDID_ADD_VLAN_ID 0x231
-+#define DPNI_CMDID_REMOVE_VLAN_ID 0x232
-+#define DPNI_CMDID_CLR_VLAN_FILTERS 0x233
-+
-+#define DPNI_CMDID_SET_RX_TC_DIST 0x235
-+#define DPNI_CMDID_SET_TX_FLOW 0x236
-+#define DPNI_CMDID_GET_TX_FLOW 0x237
-+#define DPNI_CMDID_SET_RX_FLOW 0x238
-+#define DPNI_CMDID_GET_RX_FLOW 0x239
-+#define DPNI_CMDID_SET_RX_ERR_QUEUE 0x23A
-+#define DPNI_CMDID_GET_RX_ERR_QUEUE 0x23B
-+
-+#define DPNI_CMDID_SET_RX_TC_POLICING 0x23E
-+#define DPNI_CMDID_SET_RX_TC_EARLY_DROP 0x23F
-+
-+#define DPNI_CMDID_SET_QOS_TBL 0x240
-+#define DPNI_CMDID_ADD_QOS_ENT 0x241
-+#define DPNI_CMDID_REMOVE_QOS_ENT 0x242
-+#define DPNI_CMDID_CLR_QOS_TBL 0x243
-+#define DPNI_CMDID_ADD_FS_ENT 0x244
-+#define DPNI_CMDID_REMOVE_FS_ENT 0x245
-+#define DPNI_CMDID_CLR_FS_ENT 0x246
-+#define DPNI_CMDID_SET_VLAN_INSERTION 0x247
-+#define DPNI_CMDID_SET_VLAN_REMOVAL 0x248
-+#define DPNI_CMDID_SET_IPR 0x249
-+#define DPNI_CMDID_SET_IPF 0x24A
-+
-+#define DPNI_CMDID_SET_TX_SELECTION 0x250
-+#define DPNI_CMDID_GET_RX_TC_POLICING 0x251
-+#define DPNI_CMDID_GET_RX_TC_EARLY_DROP 0x252
-+#define DPNI_CMDID_SET_RX_TC_CONGESTION_NOTIFICATION 0x253
-+#define DPNI_CMDID_GET_RX_TC_CONGESTION_NOTIFICATION 0x254
-+#define DPNI_CMDID_SET_TX_TC_CONGESTION_NOTIFICATION 0x255
-+#define DPNI_CMDID_GET_TX_TC_CONGESTION_NOTIFICATION 0x256
-+#define DPNI_CMDID_SET_TX_CONF 0x257
-+#define DPNI_CMDID_GET_TX_CONF 0x258
-+#define DPNI_CMDID_SET_TX_CONF_CONGESTION_NOTIFICATION 0x259
-+#define DPNI_CMDID_GET_TX_CONF_CONGESTION_NOTIFICATION 0x25A
-+#define DPNI_CMDID_SET_TX_TC_EARLY_DROP 0x25B
-+#define DPNI_CMDID_GET_TX_TC_EARLY_DROP 0x25C
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_OPEN(cmd, dpni_id) \
-+ MC_CMD_OP(cmd, 0, 0, 32, int, dpni_id)
-+
-+#define DPNI_PREP_EXTENDED_CFG(ext, cfg) \
-+do { \
-+ MC_PREP_OP(ext, 0, 0, 16, uint16_t, cfg->tc_cfg[0].max_dist); \
-+ MC_PREP_OP(ext, 0, 16, 16, uint16_t, cfg->tc_cfg[0].max_fs_entries); \
-+ MC_PREP_OP(ext, 0, 32, 16, uint16_t, cfg->tc_cfg[1].max_dist); \
-+ MC_PREP_OP(ext, 0, 48, 16, uint16_t, cfg->tc_cfg[1].max_fs_entries); \
-+ MC_PREP_OP(ext, 1, 0, 16, uint16_t, cfg->tc_cfg[2].max_dist); \
-+ MC_PREP_OP(ext, 1, 16, 16, uint16_t, cfg->tc_cfg[2].max_fs_entries); \
-+ MC_PREP_OP(ext, 1, 32, 16, uint16_t, cfg->tc_cfg[3].max_dist); \
-+ MC_PREP_OP(ext, 1, 48, 16, uint16_t, cfg->tc_cfg[3].max_fs_entries); \
-+ MC_PREP_OP(ext, 2, 0, 16, uint16_t, cfg->tc_cfg[4].max_dist); \
-+ MC_PREP_OP(ext, 2, 16, 16, uint16_t, cfg->tc_cfg[4].max_fs_entries); \
-+ MC_PREP_OP(ext, 2, 32, 16, uint16_t, cfg->tc_cfg[5].max_dist); \
-+ MC_PREP_OP(ext, 2, 48, 16, uint16_t, cfg->tc_cfg[5].max_fs_entries); \
-+ MC_PREP_OP(ext, 3, 0, 16, uint16_t, cfg->tc_cfg[6].max_dist); \
-+ MC_PREP_OP(ext, 3, 16, 16, uint16_t, cfg->tc_cfg[6].max_fs_entries); \
-+ MC_PREP_OP(ext, 3, 32, 16, uint16_t, cfg->tc_cfg[7].max_dist); \
-+ MC_PREP_OP(ext, 3, 48, 16, uint16_t, cfg->tc_cfg[7].max_fs_entries); \
-+ MC_PREP_OP(ext, 4, 0, 16, uint16_t, \
-+ cfg->ipr_cfg.max_open_frames_ipv4); \
-+ MC_PREP_OP(ext, 4, 16, 16, uint16_t, \
-+ cfg->ipr_cfg.max_open_frames_ipv6); \
-+ MC_PREP_OP(ext, 4, 32, 16, uint16_t, \
-+ cfg->ipr_cfg.max_reass_frm_size); \
-+ MC_PREP_OP(ext, 5, 0, 16, uint16_t, \
-+ cfg->ipr_cfg.min_frag_size_ipv4); \
-+ MC_PREP_OP(ext, 5, 16, 16, uint16_t, \
-+ cfg->ipr_cfg.min_frag_size_ipv6); \
-+} while (0)
-+
-+#define DPNI_EXT_EXTENDED_CFG(ext, cfg) \
-+do { \
-+ MC_EXT_OP(ext, 0, 0, 16, uint16_t, cfg->tc_cfg[0].max_dist); \
-+ MC_EXT_OP(ext, 0, 16, 16, uint16_t, cfg->tc_cfg[0].max_fs_entries); \
-+ MC_EXT_OP(ext, 0, 32, 16, uint16_t, cfg->tc_cfg[1].max_dist); \
-+ MC_EXT_OP(ext, 0, 48, 16, uint16_t, cfg->tc_cfg[1].max_fs_entries); \
-+ MC_EXT_OP(ext, 1, 0, 16, uint16_t, cfg->tc_cfg[2].max_dist); \
-+ MC_EXT_OP(ext, 1, 16, 16, uint16_t, cfg->tc_cfg[2].max_fs_entries); \
-+ MC_EXT_OP(ext, 1, 32, 16, uint16_t, cfg->tc_cfg[3].max_dist); \
-+ MC_EXT_OP(ext, 1, 48, 16, uint16_t, cfg->tc_cfg[3].max_fs_entries); \
-+ MC_EXT_OP(ext, 2, 0, 16, uint16_t, cfg->tc_cfg[4].max_dist); \
-+ MC_EXT_OP(ext, 2, 16, 16, uint16_t, cfg->tc_cfg[4].max_fs_entries); \
-+ MC_EXT_OP(ext, 2, 32, 16, uint16_t, cfg->tc_cfg[5].max_dist); \
-+ MC_EXT_OP(ext, 2, 48, 16, uint16_t, cfg->tc_cfg[5].max_fs_entries); \
-+ MC_EXT_OP(ext, 3, 0, 16, uint16_t, cfg->tc_cfg[6].max_dist); \
-+ MC_EXT_OP(ext, 3, 16, 16, uint16_t, cfg->tc_cfg[6].max_fs_entries); \
-+ MC_EXT_OP(ext, 3, 32, 16, uint16_t, cfg->tc_cfg[7].max_dist); \
-+ MC_EXT_OP(ext, 3, 48, 16, uint16_t, cfg->tc_cfg[7].max_fs_entries); \
-+ MC_EXT_OP(ext, 4, 0, 16, uint16_t, \
-+ cfg->ipr_cfg.max_open_frames_ipv4); \
-+ MC_EXT_OP(ext, 4, 16, 16, uint16_t, \
-+ cfg->ipr_cfg.max_open_frames_ipv6); \
-+ MC_EXT_OP(ext, 4, 32, 16, uint16_t, \
-+ cfg->ipr_cfg.max_reass_frm_size); \
-+ MC_EXT_OP(ext, 5, 0, 16, uint16_t, \
-+ cfg->ipr_cfg.min_frag_size_ipv4); \
-+ MC_EXT_OP(ext, 5, 16, 16, uint16_t, \
-+ cfg->ipr_cfg.min_frag_size_ipv6); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_CREATE(cmd, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 8, uint8_t, cfg->adv.max_tcs); \
-+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, cfg->adv.max_senders); \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, cfg->mac_addr[5]); \
-+ MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->mac_addr[4]); \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->mac_addr[3]); \
-+ MC_CMD_OP(cmd, 0, 40, 8, uint8_t, cfg->mac_addr[2]); \
-+ MC_CMD_OP(cmd, 0, 48, 8, uint8_t, cfg->mac_addr[1]); \
-+ MC_CMD_OP(cmd, 0, 56, 8, uint8_t, cfg->mac_addr[0]); \
-+ MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->adv.options); \
-+ MC_CMD_OP(cmd, 2, 0, 8, uint8_t, cfg->adv.max_unicast_filters); \
-+ MC_CMD_OP(cmd, 2, 8, 8, uint8_t, cfg->adv.max_multicast_filters); \
-+ MC_CMD_OP(cmd, 2, 16, 8, uint8_t, cfg->adv.max_vlan_filters); \
-+ MC_CMD_OP(cmd, 2, 24, 8, uint8_t, cfg->adv.max_qos_entries); \
-+ MC_CMD_OP(cmd, 2, 32, 8, uint8_t, cfg->adv.max_qos_key_size); \
-+ MC_CMD_OP(cmd, 2, 48, 8, uint8_t, cfg->adv.max_dist_key_size); \
-+ MC_CMD_OP(cmd, 2, 56, 8, enum net_prot, cfg->adv.start_hdr); \
-+ MC_CMD_OP(cmd, 4, 48, 8, uint8_t, cfg->adv.max_policers); \
-+ MC_CMD_OP(cmd, 4, 56, 8, uint8_t, cfg->adv.max_congestion_ctrl); \
-+ MC_CMD_OP(cmd, 5, 0, 64, uint64_t, cfg->adv.ext_cfg_iova); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_POOLS(cmd, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 8, uint8_t, cfg->num_dpbp); \
-+ MC_CMD_OP(cmd, 0, 8, 1, int, cfg->pools[0].backup_pool); \
-+ MC_CMD_OP(cmd, 0, 9, 1, int, cfg->pools[1].backup_pool); \
-+ MC_CMD_OP(cmd, 0, 10, 1, int, cfg->pools[2].backup_pool); \
-+ MC_CMD_OP(cmd, 0, 11, 1, int, cfg->pools[3].backup_pool); \
-+ MC_CMD_OP(cmd, 0, 12, 1, int, cfg->pools[4].backup_pool); \
-+ MC_CMD_OP(cmd, 0, 13, 1, int, cfg->pools[5].backup_pool); \
-+ MC_CMD_OP(cmd, 0, 14, 1, int, cfg->pools[6].backup_pool); \
-+ MC_CMD_OP(cmd, 0, 15, 1, int, cfg->pools[7].backup_pool); \
-+ MC_CMD_OP(cmd, 0, 32, 32, int, cfg->pools[0].dpbp_id); \
-+ MC_CMD_OP(cmd, 4, 32, 16, uint16_t, cfg->pools[0].buffer_size);\
-+ MC_CMD_OP(cmd, 1, 0, 32, int, cfg->pools[1].dpbp_id); \
-+ MC_CMD_OP(cmd, 4, 48, 16, uint16_t, cfg->pools[1].buffer_size);\
-+ MC_CMD_OP(cmd, 1, 32, 32, int, cfg->pools[2].dpbp_id); \
-+ MC_CMD_OP(cmd, 5, 0, 16, uint16_t, cfg->pools[2].buffer_size);\
-+ MC_CMD_OP(cmd, 2, 0, 32, int, cfg->pools[3].dpbp_id); \
-+ MC_CMD_OP(cmd, 5, 16, 16, uint16_t, cfg->pools[3].buffer_size);\
-+ MC_CMD_OP(cmd, 2, 32, 32, int, cfg->pools[4].dpbp_id); \
-+ MC_CMD_OP(cmd, 5, 32, 16, uint16_t, cfg->pools[4].buffer_size);\
-+ MC_CMD_OP(cmd, 3, 0, 32, int, cfg->pools[5].dpbp_id); \
-+ MC_CMD_OP(cmd, 5, 48, 16, uint16_t, cfg->pools[5].buffer_size);\
-+ MC_CMD_OP(cmd, 3, 32, 32, int, cfg->pools[6].dpbp_id); \
-+ MC_CMD_OP(cmd, 6, 0, 16, uint16_t, cfg->pools[6].buffer_size);\
-+ MC_CMD_OP(cmd, 4, 0, 32, int, cfg->pools[7].dpbp_id); \
-+ MC_CMD_OP(cmd, 6, 16, 16, uint16_t, cfg->pools[7].buffer_size);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_IS_ENABLED(cmd, en) \
-+ MC_RSP_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_IRQ(cmd, irq_index, irq_cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 32, uint32_t, irq_cfg->val); \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index); \
-+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, irq_cfg->addr); \
-+ MC_CMD_OP(cmd, 2, 0, 32, int, irq_cfg->irq_num); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_GET_IRQ(cmd, irq_index) \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_IRQ(cmd, type, irq_cfg) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 0, 32, uint32_t, irq_cfg->val); \
-+ MC_RSP_OP(cmd, 1, 0, 64, uint64_t, irq_cfg->addr); \
-+ MC_RSP_OP(cmd, 2, 0, 32, int, irq_cfg->irq_num); \
-+ MC_RSP_OP(cmd, 2, 32, 32, int, type); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_IRQ_ENABLE(cmd, irq_index, en) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 8, uint8_t, en); \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_GET_IRQ_ENABLE(cmd, irq_index) \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_IRQ_ENABLE(cmd, en) \
-+ MC_RSP_OP(cmd, 0, 0, 8, uint8_t, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_IRQ_MASK(cmd, irq_index, mask) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 32, uint32_t, mask); \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_GET_IRQ_MASK(cmd, irq_index) \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_IRQ_MASK(cmd, mask) \
-+ MC_RSP_OP(cmd, 0, 0, 32, uint32_t, mask)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_GET_IRQ_STATUS(cmd, irq_index, status) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 32, uint32_t, status);\
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_IRQ_STATUS(cmd, status) \
-+ MC_RSP_OP(cmd, 0, 0, 32, uint32_t, status)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 32, uint32_t, status); \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_GET_ATTR(cmd, attr) \
-+ MC_CMD_OP(cmd, 6, 0, 64, uint64_t, attr->ext_cfg_iova)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_ATTR(cmd, attr) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 0, 32, int, attr->id);\
-+ MC_RSP_OP(cmd, 0, 32, 8, uint8_t, attr->max_tcs); \
-+ MC_RSP_OP(cmd, 0, 40, 8, uint8_t, attr->max_senders); \
-+ MC_RSP_OP(cmd, 0, 48, 8, enum net_prot, attr->start_hdr); \
-+ MC_RSP_OP(cmd, 1, 0, 32, uint32_t, attr->options); \
-+ MC_RSP_OP(cmd, 2, 0, 8, uint8_t, attr->max_unicast_filters); \
-+ MC_RSP_OP(cmd, 2, 8, 8, uint8_t, attr->max_multicast_filters);\
-+ MC_RSP_OP(cmd, 2, 16, 8, uint8_t, attr->max_vlan_filters); \
-+ MC_RSP_OP(cmd, 2, 24, 8, uint8_t, attr->max_qos_entries); \
-+ MC_RSP_OP(cmd, 2, 32, 8, uint8_t, attr->max_qos_key_size); \
-+ MC_RSP_OP(cmd, 2, 40, 8, uint8_t, attr->max_dist_key_size); \
-+ MC_RSP_OP(cmd, 4, 48, 8, uint8_t, attr->max_policers); \
-+ MC_RSP_OP(cmd, 4, 56, 8, uint8_t, attr->max_congestion_ctrl); \
-+ MC_RSP_OP(cmd, 5, 32, 16, uint16_t, attr->version.major);\
-+ MC_RSP_OP(cmd, 5, 48, 16, uint16_t, attr->version.minor);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_ERRORS_BEHAVIOR(cmd, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 32, uint32_t, cfg->errors); \
-+ MC_CMD_OP(cmd, 0, 32, 4, enum dpni_error_action, cfg->error_action); \
-+ MC_CMD_OP(cmd, 0, 36, 1, int, cfg->set_frame_annotation); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_RX_BUFFER_LAYOUT(cmd, layout) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
-+ MC_RSP_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
-+ MC_RSP_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
-+ MC_RSP_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
-+ MC_RSP_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
-+ MC_RSP_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
-+ MC_RSP_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_RX_BUFFER_LAYOUT(cmd, layout) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
-+ MC_CMD_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
-+ MC_CMD_OP(cmd, 0, 32, 32, uint32_t, layout->options); \
-+ MC_CMD_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
-+ MC_CMD_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
-+ MC_CMD_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
-+ MC_CMD_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
-+ MC_CMD_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_TX_BUFFER_LAYOUT(cmd, layout) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
-+ MC_RSP_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
-+ MC_RSP_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
-+ MC_RSP_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
-+ MC_RSP_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
-+ MC_RSP_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
-+ MC_RSP_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_TX_BUFFER_LAYOUT(cmd, layout) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
-+ MC_CMD_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
-+ MC_CMD_OP(cmd, 0, 32, 32, uint32_t, layout->options); \
-+ MC_CMD_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
-+ MC_CMD_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
-+ MC_CMD_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
-+ MC_CMD_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
-+ MC_CMD_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_TX_CONF_BUFFER_LAYOUT(cmd, layout) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
-+ MC_RSP_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
-+ MC_RSP_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
-+ MC_RSP_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
-+ MC_RSP_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
-+ MC_RSP_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
-+ MC_RSP_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_TX_CONF_BUFFER_LAYOUT(cmd, layout) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 16, uint16_t, layout->private_data_size); \
-+ MC_CMD_OP(cmd, 0, 16, 16, uint16_t, layout->data_align); \
-+ MC_CMD_OP(cmd, 0, 32, 32, uint32_t, layout->options); \
-+ MC_CMD_OP(cmd, 1, 0, 1, int, layout->pass_timestamp); \
-+ MC_CMD_OP(cmd, 1, 1, 1, int, layout->pass_parser_result); \
-+ MC_CMD_OP(cmd, 1, 2, 1, int, layout->pass_frame_status); \
-+ MC_CMD_OP(cmd, 1, 16, 16, uint16_t, layout->data_head_room); \
-+ MC_CMD_OP(cmd, 1, 32, 16, uint16_t, layout->data_tail_room); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_L3_CHKSUM_VALIDATION(cmd, en) \
-+ MC_CMD_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_L3_CHKSUM_VALIDATION(cmd, en) \
-+ MC_RSP_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_L4_CHKSUM_VALIDATION(cmd, en) \
-+ MC_CMD_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_L4_CHKSUM_VALIDATION(cmd, en) \
-+ MC_RSP_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_QDID(cmd, qdid) \
-+ MC_RSP_OP(cmd, 0, 0, 16, uint16_t, qdid)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_SP_INFO(cmd, sp_info) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 0, 16, uint16_t, sp_info->spids[0]); \
-+ MC_RSP_OP(cmd, 0, 16, 16, uint16_t, sp_info->spids[1]); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_TX_DATA_OFFSET(cmd, data_offset) \
-+ MC_RSP_OP(cmd, 0, 0, 16, uint16_t, data_offset)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_GET_COUNTER(cmd, counter) \
-+ MC_CMD_OP(cmd, 0, 0, 16, enum dpni_counter, counter)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_COUNTER(cmd, value) \
-+ MC_RSP_OP(cmd, 1, 0, 64, uint64_t, value)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_COUNTER(cmd, counter, value) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 16, enum dpni_counter, counter); \
-+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, value); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_LINK_CFG(cmd, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->rate);\
-+ MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->options);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_LINK_STATE(cmd, state) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 32, 1, int, state->up);\
-+ MC_RSP_OP(cmd, 1, 0, 32, uint32_t, state->rate);\
-+ MC_RSP_OP(cmd, 2, 0, 64, uint64_t, state->options);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_TX_SHAPING(cmd, tx_shaper) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 16, uint16_t, tx_shaper->max_burst_size);\
-+ MC_CMD_OP(cmd, 1, 0, 32, uint32_t, tx_shaper->rate_limit);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_MAX_FRAME_LENGTH(cmd, max_frame_length) \
-+ MC_CMD_OP(cmd, 0, 0, 16, uint16_t, max_frame_length)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_MAX_FRAME_LENGTH(cmd, max_frame_length) \
-+ MC_RSP_OP(cmd, 0, 0, 16, uint16_t, max_frame_length)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_MTU(cmd, mtu) \
-+ MC_CMD_OP(cmd, 0, 0, 16, uint16_t, mtu)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_MTU(cmd, mtu) \
-+ MC_RSP_OP(cmd, 0, 0, 16, uint16_t, mtu)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_MULTICAST_PROMISC(cmd, en) \
-+ MC_CMD_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_MULTICAST_PROMISC(cmd, en) \
-+ MC_RSP_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_UNICAST_PROMISC(cmd, en) \
-+ MC_CMD_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_UNICAST_PROMISC(cmd, en) \
-+ MC_RSP_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_PRIMARY_MAC_ADDR(cmd, mac_addr) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, mac_addr[5]); \
-+ MC_CMD_OP(cmd, 0, 24, 8, uint8_t, mac_addr[4]); \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, mac_addr[3]); \
-+ MC_CMD_OP(cmd, 0, 40, 8, uint8_t, mac_addr[2]); \
-+ MC_CMD_OP(cmd, 0, 48, 8, uint8_t, mac_addr[1]); \
-+ MC_CMD_OP(cmd, 0, 56, 8, uint8_t, mac_addr[0]); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_PRIMARY_MAC_ADDR(cmd, mac_addr) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 16, 8, uint8_t, mac_addr[5]); \
-+ MC_RSP_OP(cmd, 0, 24, 8, uint8_t, mac_addr[4]); \
-+ MC_RSP_OP(cmd, 0, 32, 8, uint8_t, mac_addr[3]); \
-+ MC_RSP_OP(cmd, 0, 40, 8, uint8_t, mac_addr[2]); \
-+ MC_RSP_OP(cmd, 0, 48, 8, uint8_t, mac_addr[1]); \
-+ MC_RSP_OP(cmd, 0, 56, 8, uint8_t, mac_addr[0]); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_ADD_MAC_ADDR(cmd, mac_addr) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, mac_addr[5]); \
-+ MC_CMD_OP(cmd, 0, 24, 8, uint8_t, mac_addr[4]); \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, mac_addr[3]); \
-+ MC_CMD_OP(cmd, 0, 40, 8, uint8_t, mac_addr[2]); \
-+ MC_CMD_OP(cmd, 0, 48, 8, uint8_t, mac_addr[1]); \
-+ MC_CMD_OP(cmd, 0, 56, 8, uint8_t, mac_addr[0]); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_REMOVE_MAC_ADDR(cmd, mac_addr) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, mac_addr[5]); \
-+ MC_CMD_OP(cmd, 0, 24, 8, uint8_t, mac_addr[4]); \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, mac_addr[3]); \
-+ MC_CMD_OP(cmd, 0, 40, 8, uint8_t, mac_addr[2]); \
-+ MC_CMD_OP(cmd, 0, 48, 8, uint8_t, mac_addr[1]); \
-+ MC_CMD_OP(cmd, 0, 56, 8, uint8_t, mac_addr[0]); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_CLEAR_MAC_FILTERS(cmd, unicast, multicast) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 1, int, unicast); \
-+ MC_CMD_OP(cmd, 0, 1, 1, int, multicast); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_VLAN_FILTERS(cmd, en) \
-+ MC_CMD_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_ADD_VLAN_ID(cmd, vlan_id) \
-+ MC_CMD_OP(cmd, 0, 32, 16, uint16_t, vlan_id)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_REMOVE_VLAN_ID(cmd, vlan_id) \
-+ MC_CMD_OP(cmd, 0, 32, 16, uint16_t, vlan_id)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_TX_SELECTION(cmd, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 16, uint16_t, cfg->tc_sched[0].delta_bandwidth);\
-+ MC_CMD_OP(cmd, 0, 16, 4, enum dpni_tx_schedule_mode, \
-+ cfg->tc_sched[0].mode); \
-+ MC_CMD_OP(cmd, 0, 32, 16, uint16_t, cfg->tc_sched[1].delta_bandwidth);\
-+ MC_CMD_OP(cmd, 0, 48, 4, enum dpni_tx_schedule_mode, \
-+ cfg->tc_sched[1].mode); \
-+ MC_CMD_OP(cmd, 1, 0, 16, uint16_t, cfg->tc_sched[2].delta_bandwidth);\
-+ MC_CMD_OP(cmd, 1, 16, 4, enum dpni_tx_schedule_mode, \
-+ cfg->tc_sched[2].mode); \
-+ MC_CMD_OP(cmd, 1, 32, 16, uint16_t, cfg->tc_sched[3].delta_bandwidth);\
-+ MC_CMD_OP(cmd, 1, 48, 4, enum dpni_tx_schedule_mode, \
-+ cfg->tc_sched[3].mode); \
-+ MC_CMD_OP(cmd, 2, 0, 16, uint16_t, cfg->tc_sched[4].delta_bandwidth);\
-+ MC_CMD_OP(cmd, 2, 16, 4, enum dpni_tx_schedule_mode, \
-+ cfg->tc_sched[4].mode); \
-+ MC_CMD_OP(cmd, 2, 32, 16, uint16_t, cfg->tc_sched[5].delta_bandwidth);\
-+ MC_CMD_OP(cmd, 2, 48, 4, enum dpni_tx_schedule_mode, \
-+ cfg->tc_sched[5].mode); \
-+ MC_CMD_OP(cmd, 3, 0, 16, uint16_t, cfg->tc_sched[6].delta_bandwidth);\
-+ MC_CMD_OP(cmd, 3, 16, 4, enum dpni_tx_schedule_mode, \
-+ cfg->tc_sched[6].mode); \
-+ MC_CMD_OP(cmd, 3, 32, 16, uint16_t, cfg->tc_sched[7].delta_bandwidth);\
-+ MC_CMD_OP(cmd, 3, 48, 4, enum dpni_tx_schedule_mode, \
-+ cfg->tc_sched[7].mode); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_RX_TC_DIST(cmd, tc_id, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 16, uint16_t, cfg->dist_size); \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
-+ MC_CMD_OP(cmd, 0, 24, 4, enum dpni_dist_mode, cfg->dist_mode); \
-+ MC_CMD_OP(cmd, 0, 28, 4, enum dpni_fs_miss_action, \
-+ cfg->fs_cfg.miss_action); \
-+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, cfg->fs_cfg.default_flow_id); \
-+ MC_CMD_OP(cmd, 6, 0, 64, uint64_t, cfg->key_cfg_iova); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_TX_FLOW(cmd, flow_id, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 43, 1, int, cfg->l3_chksum_gen);\
-+ MC_CMD_OP(cmd, 0, 44, 1, int, cfg->l4_chksum_gen);\
-+ MC_CMD_OP(cmd, 0, 45, 1, int, cfg->use_common_tx_conf_queue);\
-+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id);\
-+ MC_CMD_OP(cmd, 2, 0, 32, uint32_t, cfg->options);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_SET_TX_FLOW(cmd, flow_id) \
-+ MC_RSP_OP(cmd, 0, 48, 16, uint16_t, flow_id)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_GET_TX_FLOW(cmd, flow_id) \
-+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_TX_FLOW(cmd, attr) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 43, 1, int, attr->l3_chksum_gen);\
-+ MC_RSP_OP(cmd, 0, 44, 1, int, attr->l4_chksum_gen);\
-+ MC_RSP_OP(cmd, 0, 45, 1, int, attr->use_common_tx_conf_queue);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_RX_FLOW(cmd, tc_id, flow_id, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 32, int, cfg->dest_cfg.dest_id); \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->dest_cfg.priority);\
-+ MC_CMD_OP(cmd, 0, 40, 2, enum dpni_dest, cfg->dest_cfg.dest_type);\
-+ MC_CMD_OP(cmd, 0, 42, 1, int, cfg->order_preservation_en);\
-+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
-+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->user_ctx); \
-+ MC_CMD_OP(cmd, 2, 16, 8, uint8_t, tc_id); \
-+ MC_CMD_OP(cmd, 2, 32, 32, uint32_t, cfg->options); \
-+ MC_CMD_OP(cmd, 3, 0, 4, enum dpni_flc_type, cfg->flc_cfg.flc_type); \
-+ MC_CMD_OP(cmd, 3, 4, 4, enum dpni_stash_size, \
-+ cfg->flc_cfg.frame_data_size);\
-+ MC_CMD_OP(cmd, 3, 8, 4, enum dpni_stash_size, \
-+ cfg->flc_cfg.flow_context_size);\
-+ MC_CMD_OP(cmd, 3, 32, 32, uint32_t, cfg->flc_cfg.options);\
-+ MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->flc_cfg.flow_context);\
-+ MC_CMD_OP(cmd, 5, 0, 32, uint32_t, cfg->tail_drop_threshold); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_GET_RX_FLOW(cmd, tc_id, flow_id) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
-+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_RX_FLOW(cmd, attr) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 0, 32, int, attr->dest_cfg.dest_id); \
-+ MC_RSP_OP(cmd, 0, 32, 8, uint8_t, attr->dest_cfg.priority);\
-+ MC_RSP_OP(cmd, 0, 40, 2, enum dpni_dest, attr->dest_cfg.dest_type); \
-+ MC_RSP_OP(cmd, 0, 42, 1, int, attr->order_preservation_en);\
-+ MC_RSP_OP(cmd, 1, 0, 64, uint64_t, attr->user_ctx); \
-+ MC_RSP_OP(cmd, 2, 0, 32, uint32_t, attr->tail_drop_threshold); \
-+ MC_RSP_OP(cmd, 2, 32, 32, uint32_t, attr->fqid); \
-+ MC_RSP_OP(cmd, 3, 0, 4, enum dpni_flc_type, attr->flc_cfg.flc_type); \
-+ MC_RSP_OP(cmd, 3, 4, 4, enum dpni_stash_size, \
-+ attr->flc_cfg.frame_data_size);\
-+ MC_RSP_OP(cmd, 3, 8, 4, enum dpni_stash_size, \
-+ attr->flc_cfg.flow_context_size);\
-+ MC_RSP_OP(cmd, 3, 32, 32, uint32_t, attr->flc_cfg.options);\
-+ MC_RSP_OP(cmd, 4, 0, 64, uint64_t, attr->flc_cfg.flow_context);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_RX_ERR_QUEUE(cmd, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 32, int, cfg->dest_cfg.dest_id); \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->dest_cfg.priority);\
-+ MC_CMD_OP(cmd, 0, 40, 2, enum dpni_dest, cfg->dest_cfg.dest_type);\
-+ MC_CMD_OP(cmd, 0, 42, 1, int, cfg->order_preservation_en);\
-+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->user_ctx); \
-+ MC_CMD_OP(cmd, 2, 0, 32, uint32_t, cfg->options); \
-+ MC_CMD_OP(cmd, 2, 32, 32, uint32_t, cfg->tail_drop_threshold); \
-+ MC_CMD_OP(cmd, 3, 0, 4, enum dpni_flc_type, cfg->flc_cfg.flc_type); \
-+ MC_CMD_OP(cmd, 3, 4, 4, enum dpni_stash_size, \
-+ cfg->flc_cfg.frame_data_size);\
-+ MC_CMD_OP(cmd, 3, 8, 4, enum dpni_stash_size, \
-+ cfg->flc_cfg.flow_context_size);\
-+ MC_CMD_OP(cmd, 3, 32, 32, uint32_t, cfg->flc_cfg.options);\
-+ MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->flc_cfg.flow_context);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_RX_ERR_QUEUE(cmd, attr) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 0, 32, int, attr->dest_cfg.dest_id); \
-+ MC_RSP_OP(cmd, 0, 32, 8, uint8_t, attr->dest_cfg.priority);\
-+ MC_RSP_OP(cmd, 0, 40, 2, enum dpni_dest, attr->dest_cfg.dest_type);\
-+ MC_RSP_OP(cmd, 0, 42, 1, int, attr->order_preservation_en);\
-+ MC_RSP_OP(cmd, 1, 0, 64, uint64_t, attr->user_ctx); \
-+ MC_RSP_OP(cmd, 2, 0, 32, uint32_t, attr->tail_drop_threshold); \
-+ MC_RSP_OP(cmd, 2, 32, 32, uint32_t, attr->fqid); \
-+ MC_RSP_OP(cmd, 3, 0, 4, enum dpni_flc_type, attr->flc_cfg.flc_type); \
-+ MC_RSP_OP(cmd, 3, 4, 4, enum dpni_stash_size, \
-+ attr->flc_cfg.frame_data_size);\
-+ MC_RSP_OP(cmd, 3, 8, 4, enum dpni_stash_size, \
-+ attr->flc_cfg.flow_context_size);\
-+ MC_RSP_OP(cmd, 3, 32, 32, uint32_t, attr->flc_cfg.options);\
-+ MC_RSP_OP(cmd, 4, 0, 64, uint64_t, attr->flc_cfg.flow_context);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_TX_CONF_REVOKE(cmd, revoke) \
-+ MC_CMD_OP(cmd, 0, 0, 1, int, revoke)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_QOS_TABLE(cmd, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->default_tc); \
-+ MC_CMD_OP(cmd, 0, 40, 1, int, cfg->discard_on_miss); \
-+ MC_CMD_OP(cmd, 6, 0, 64, uint64_t, cfg->key_cfg_iova); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_ADD_QOS_ENTRY(cmd, cfg, tc_id) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
-+ MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->key_size); \
-+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->key_iova); \
-+ MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->mask_iova); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_REMOVE_QOS_ENTRY(cmd, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->key_size); \
-+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->key_iova); \
-+ MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->mask_iova); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_ADD_FS_ENTRY(cmd, tc_id, cfg, flow_id) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
-+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
-+ MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->key_size); \
-+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->key_iova); \
-+ MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->mask_iova); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_REMOVE_FS_ENTRY(cmd, tc_id, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
-+ MC_CMD_OP(cmd, 0, 24, 8, uint8_t, cfg->key_size); \
-+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->key_iova); \
-+ MC_CMD_OP(cmd, 2, 0, 64, uint64_t, cfg->mask_iova); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_CLEAR_FS_ENTRIES(cmd, tc_id) \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_VLAN_INSERTION(cmd, en) \
-+ MC_CMD_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_VLAN_REMOVAL(cmd, en) \
-+ MC_CMD_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_IPR(cmd, en) \
-+ MC_CMD_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_IPF(cmd, en) \
-+ MC_CMD_OP(cmd, 0, 0, 1, int, en)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_RX_TC_POLICING(cmd, tc_id, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 4, enum dpni_policer_mode, cfg->mode); \
-+ MC_CMD_OP(cmd, 0, 4, 4, enum dpni_policer_color, cfg->default_color); \
-+ MC_CMD_OP(cmd, 0, 8, 4, enum dpni_policer_unit, cfg->units); \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id); \
-+ MC_CMD_OP(cmd, 0, 32, 32, uint32_t, cfg->options); \
-+ MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->cir); \
-+ MC_CMD_OP(cmd, 1, 32, 32, uint32_t, cfg->cbs); \
-+ MC_CMD_OP(cmd, 2, 0, 32, uint32_t, cfg->eir); \
-+ MC_CMD_OP(cmd, 2, 32, 32, uint32_t, cfg->ebs);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_GET_RX_TC_POLICING(cmd, tc_id) \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, tc_id)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_RSP_GET_RX_TC_POLICING(cmd, cfg) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 0, 4, enum dpni_policer_mode, cfg->mode); \
-+ MC_RSP_OP(cmd, 0, 4, 4, enum dpni_policer_color, cfg->default_color); \
-+ MC_RSP_OP(cmd, 0, 8, 4, enum dpni_policer_unit, cfg->units); \
-+ MC_RSP_OP(cmd, 0, 32, 32, uint32_t, cfg->options); \
-+ MC_RSP_OP(cmd, 1, 0, 32, uint32_t, cfg->cir); \
-+ MC_RSP_OP(cmd, 1, 32, 32, uint32_t, cfg->cbs); \
-+ MC_RSP_OP(cmd, 2, 0, 32, uint32_t, cfg->eir); \
-+ MC_RSP_OP(cmd, 2, 32, 32, uint32_t, cfg->ebs);\
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_PREP_EARLY_DROP(ext, cfg) \
-+do { \
-+ MC_PREP_OP(ext, 0, 0, 2, enum dpni_early_drop_mode, cfg->mode); \
-+ MC_PREP_OP(ext, 0, 2, 2, \
-+ enum dpni_congestion_unit, cfg->units); \
-+ MC_PREP_OP(ext, 0, 32, 32, uint32_t, cfg->tail_drop_threshold); \
-+ MC_PREP_OP(ext, 1, 0, 8, uint8_t, cfg->green.drop_probability); \
-+ MC_PREP_OP(ext, 2, 0, 64, uint64_t, cfg->green.max_threshold); \
-+ MC_PREP_OP(ext, 3, 0, 64, uint64_t, cfg->green.min_threshold); \
-+ MC_PREP_OP(ext, 5, 0, 8, uint8_t, cfg->yellow.drop_probability);\
-+ MC_PREP_OP(ext, 6, 0, 64, uint64_t, cfg->yellow.max_threshold); \
-+ MC_PREP_OP(ext, 7, 0, 64, uint64_t, cfg->yellow.min_threshold); \
-+ MC_PREP_OP(ext, 9, 0, 8, uint8_t, cfg->red.drop_probability); \
-+ MC_PREP_OP(ext, 10, 0, 64, uint64_t, cfg->red.max_threshold); \
-+ MC_PREP_OP(ext, 11, 0, 64, uint64_t, cfg->red.min_threshold); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_EXT_EARLY_DROP(ext, cfg) \
-+do { \
-+ MC_EXT_OP(ext, 0, 0, 2, enum dpni_early_drop_mode, cfg->mode); \
-+ MC_EXT_OP(ext, 0, 2, 2, \
-+ enum dpni_congestion_unit, cfg->units); \
-+ MC_EXT_OP(ext, 0, 32, 32, uint32_t, cfg->tail_drop_threshold); \
-+ MC_EXT_OP(ext, 1, 0, 8, uint8_t, cfg->green.drop_probability); \
-+ MC_EXT_OP(ext, 2, 0, 64, uint64_t, cfg->green.max_threshold); \
-+ MC_EXT_OP(ext, 3, 0, 64, uint64_t, cfg->green.min_threshold); \
-+ MC_EXT_OP(ext, 5, 0, 8, uint8_t, cfg->yellow.drop_probability);\
-+ MC_EXT_OP(ext, 6, 0, 64, uint64_t, cfg->yellow.max_threshold); \
-+ MC_EXT_OP(ext, 7, 0, 64, uint64_t, cfg->yellow.min_threshold); \
-+ MC_EXT_OP(ext, 9, 0, 8, uint8_t, cfg->red.drop_probability); \
-+ MC_EXT_OP(ext, 10, 0, 64, uint64_t, cfg->red.max_threshold); \
-+ MC_EXT_OP(ext, 11, 0, 64, uint64_t, cfg->red.min_threshold); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_RX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
-+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, early_drop_iova); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_GET_RX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
-+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, early_drop_iova); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_SET_TX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
-+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, early_drop_iova); \
-+} while (0)
-+
-+/* cmd, param, offset, width, type, arg_name */
-+#define DPNI_CMD_GET_TX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
-+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, early_drop_iova); \
-+} while (0)
-+
-+#define DPNI_CMD_SET_RX_TC_CONGESTION_NOTIFICATION(cmd, tc_id, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
-+ MC_CMD_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
-+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
-+ MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
-+ MC_CMD_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
-+ MC_CMD_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
-+ MC_CMD_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
-+ MC_CMD_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
-+ MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
-+} while (0)
-+
-+#define DPNI_CMD_GET_RX_TC_CONGESTION_NOTIFICATION(cmd, tc_id) \
-+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id)
-+
-+#define DPNI_RSP_GET_RX_TC_CONGESTION_NOTIFICATION(cmd, cfg) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
-+ MC_RSP_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
-+ MC_RSP_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
-+ MC_RSP_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
-+ MC_RSP_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
-+ MC_RSP_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
-+ MC_RSP_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
-+ MC_RSP_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
-+ MC_RSP_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
-+} while (0)
-+
-+#define DPNI_CMD_SET_TX_TC_CONGESTION_NOTIFICATION(cmd, tc_id, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
-+ MC_CMD_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
-+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id); \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
-+ MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
-+ MC_CMD_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
-+ MC_CMD_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
-+ MC_CMD_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
-+ MC_CMD_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
-+ MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
-+} while (0)
-+
-+#define DPNI_CMD_GET_TX_TC_CONGESTION_NOTIFICATION(cmd, tc_id) \
-+ MC_CMD_OP(cmd, 0, 8, 8, uint8_t, tc_id)
-+
-+#define DPNI_RSP_GET_TX_TC_CONGESTION_NOTIFICATION(cmd, cfg) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
-+ MC_RSP_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
-+ MC_RSP_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
-+ MC_RSP_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
-+ MC_RSP_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
-+ MC_RSP_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
-+ MC_RSP_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
-+ MC_RSP_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
-+ MC_RSP_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
-+} while (0)
-+
-+#define DPNI_CMD_SET_TX_CONF(cmd, flow_id, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->queue_cfg.dest_cfg.priority); \
-+ MC_CMD_OP(cmd, 0, 40, 2, enum dpni_dest, \
-+ cfg->queue_cfg.dest_cfg.dest_type); \
-+ MC_CMD_OP(cmd, 0, 42, 1, int, cfg->errors_only); \
-+ MC_CMD_OP(cmd, 0, 46, 1, int, cfg->queue_cfg.order_preservation_en); \
-+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
-+ MC_CMD_OP(cmd, 1, 0, 64, uint64_t, cfg->queue_cfg.user_ctx); \
-+ MC_CMD_OP(cmd, 2, 0, 32, uint32_t, cfg->queue_cfg.options); \
-+ MC_CMD_OP(cmd, 2, 32, 32, int, cfg->queue_cfg.dest_cfg.dest_id); \
-+ MC_CMD_OP(cmd, 3, 0, 32, uint32_t, \
-+ cfg->queue_cfg.tail_drop_threshold); \
-+ MC_CMD_OP(cmd, 4, 0, 4, enum dpni_flc_type, \
-+ cfg->queue_cfg.flc_cfg.flc_type); \
-+ MC_CMD_OP(cmd, 4, 4, 4, enum dpni_stash_size, \
-+ cfg->queue_cfg.flc_cfg.frame_data_size); \
-+ MC_CMD_OP(cmd, 4, 8, 4, enum dpni_stash_size, \
-+ cfg->queue_cfg.flc_cfg.flow_context_size); \
-+ MC_CMD_OP(cmd, 4, 32, 32, uint32_t, cfg->queue_cfg.flc_cfg.options); \
-+ MC_CMD_OP(cmd, 5, 0, 64, uint64_t, \
-+ cfg->queue_cfg.flc_cfg.flow_context); \
-+} while (0)
-+
-+#define DPNI_CMD_GET_TX_CONF(cmd, flow_id) \
-+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id)
-+
-+#define DPNI_RSP_GET_TX_CONF(cmd, attr) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 32, 8, uint8_t, \
-+ attr->queue_attr.dest_cfg.priority); \
-+ MC_RSP_OP(cmd, 0, 40, 2, enum dpni_dest, \
-+ attr->queue_attr.dest_cfg.dest_type); \
-+ MC_RSP_OP(cmd, 0, 42, 1, int, attr->errors_only); \
-+ MC_RSP_OP(cmd, 0, 46, 1, int, \
-+ attr->queue_attr.order_preservation_en); \
-+ MC_RSP_OP(cmd, 1, 0, 64, uint64_t, attr->queue_attr.user_ctx); \
-+ MC_RSP_OP(cmd, 2, 32, 32, int, attr->queue_attr.dest_cfg.dest_id); \
-+ MC_RSP_OP(cmd, 3, 0, 32, uint32_t, \
-+ attr->queue_attr.tail_drop_threshold); \
-+ MC_RSP_OP(cmd, 3, 32, 32, uint32_t, attr->queue_attr.fqid); \
-+ MC_RSP_OP(cmd, 4, 0, 4, enum dpni_flc_type, \
-+ attr->queue_attr.flc_cfg.flc_type); \
-+ MC_RSP_OP(cmd, 4, 4, 4, enum dpni_stash_size, \
-+ attr->queue_attr.flc_cfg.frame_data_size); \
-+ MC_RSP_OP(cmd, 4, 8, 4, enum dpni_stash_size, \
-+ attr->queue_attr.flc_cfg.flow_context_size); \
-+ MC_RSP_OP(cmd, 4, 32, 32, uint32_t, attr->queue_attr.flc_cfg.options); \
-+ MC_RSP_OP(cmd, 5, 0, 64, uint64_t, \
-+ attr->queue_attr.flc_cfg.flow_context); \
-+} while (0)
-+
-+#define DPNI_CMD_SET_TX_CONF_CONGESTION_NOTIFICATION(cmd, flow_id, cfg) \
-+do { \
-+ MC_CMD_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
-+ MC_CMD_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
-+ MC_CMD_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
-+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id); \
-+ MC_CMD_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
-+ MC_CMD_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
-+ MC_CMD_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
-+ MC_CMD_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
-+ MC_CMD_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
-+ MC_CMD_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
-+} while (0)
-+
-+#define DPNI_CMD_GET_TX_CONF_CONGESTION_NOTIFICATION(cmd, flow_id) \
-+ MC_CMD_OP(cmd, 0, 48, 16, uint16_t, flow_id)
-+
-+#define DPNI_RSP_GET_TX_CONF_CONGESTION_NOTIFICATION(cmd, cfg) \
-+do { \
-+ MC_RSP_OP(cmd, 0, 0, 2, enum dpni_congestion_unit, cfg->units); \
-+ MC_RSP_OP(cmd, 0, 4, 4, enum dpni_dest, cfg->dest_cfg.dest_type); \
-+ MC_RSP_OP(cmd, 0, 16, 8, uint8_t, cfg->dest_cfg.priority); \
-+ MC_RSP_OP(cmd, 1, 0, 32, uint32_t, cfg->threshold_entry); \
-+ MC_RSP_OP(cmd, 1, 32, 32, uint32_t, cfg->threshold_exit); \
-+ MC_RSP_OP(cmd, 2, 0, 16, uint16_t, cfg->options); \
-+ MC_RSP_OP(cmd, 2, 32, 32, int, cfg->dest_cfg.dest_id); \
-+ MC_RSP_OP(cmd, 3, 0, 64, uint64_t, cfg->message_ctx); \
-+ MC_RSP_OP(cmd, 4, 0, 64, uint64_t, cfg->message_iova); \
-+} while (0)
-+
-+#endif /* _FSL_DPNI_CMD_H */
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/ethernet/dpni.c
-@@ -0,0 +1,1907 @@
-+/* Copyright 2013-2015 Freescale Semiconductor Inc.
-+ *
-+ * Redistribution and use in source and binary forms, with or without
-+ * modification, are permitted provided that the following conditions are met:
-+ * * Redistributions of source code must retain the above copyright
-+ * notice, this list of conditions and the following disclaimer.
-+ * * Redistributions in binary form must reproduce the above copyright
-+ * notice, this list of conditions and the following disclaimer in the
-+ * documentation and/or other materials provided with the distribution.
-+ * * Neither the name of the above-listed copyright holders nor the
-+ * names of any contributors may be used to endorse or promote products
-+ * derived from this software without specific prior written permission.
-+ *
-+ *
-+ * ALTERNATIVELY, this software may be distributed under the terms of the
-+ * GNU General Public License ("GPL") as published by the Free Software
-+ * Foundation, either version 2 of that License or (at your option) any
-+ * later version.
-+ *
-+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
-+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-+ * POSSIBILITY OF SUCH DAMAGE.
-+ */
-+#include "../../fsl-mc/include/mc-sys.h"
-+#include "../../fsl-mc/include/mc-cmd.h"
-+#include "dpni.h"
-+#include "dpni-cmd.h"
-+
-+int dpni_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
-+ uint8_t *key_cfg_buf)
-+{
-+ int i, j;
-+ int offset = 0;
-+ int param = 1;
-+ uint64_t *params = (uint64_t *)key_cfg_buf;
-+
-+ if (!key_cfg_buf || !cfg)
-+ return -EINVAL;
-+
-+ params[0] |= mc_enc(0, 8, cfg->num_extracts);
-+ params[0] = cpu_to_le64(params[0]);
-+
-+ if (cfg->num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS)
-+ return -EINVAL;
-+
-+ for (i = 0; i < cfg->num_extracts; i++) {
-+ switch (cfg->extracts[i].type) {
-+ case DPKG_EXTRACT_FROM_HDR:
-+ params[param] |= mc_enc(0, 8,
-+ cfg->extracts[i].extract.from_hdr.prot);
-+ params[param] |= mc_enc(8, 4,
-+ cfg->extracts[i].extract.from_hdr.type);
-+ params[param] |= mc_enc(16, 8,
-+ cfg->extracts[i].extract.from_hdr.size);
-+ params[param] |= mc_enc(24, 8,
-+ cfg->extracts[i].extract.
-+ from_hdr.offset);
-+ params[param] |= mc_enc(32, 32,
-+ cfg->extracts[i].extract.
-+ from_hdr.field);
-+ params[param] = cpu_to_le64(params[param]);
-+ param++;
-+ params[param] |= mc_enc(0, 8,
-+ cfg->extracts[i].extract.
-+ from_hdr.hdr_index);
-+ break;
-+ case DPKG_EXTRACT_FROM_DATA:
-+ params[param] |= mc_enc(16, 8,
-+ cfg->extracts[i].extract.
-+ from_data.size);
-+ params[param] |= mc_enc(24, 8,
-+ cfg->extracts[i].extract.
-+ from_data.offset);
-+ params[param] = cpu_to_le64(params[param]);
-+ param++;
-+ break;
-+ case DPKG_EXTRACT_FROM_PARSE:
-+ params[param] |= mc_enc(16, 8,
-+ cfg->extracts[i].extract.
-+ from_parse.size);
-+ params[param] |= mc_enc(24, 8,
-+ cfg->extracts[i].extract.
-+ from_parse.offset);
-+ params[param] = cpu_to_le64(params[param]);
-+ param++;
-+ break;
-+ default:
-+ return -EINVAL;
-+ }
-+ params[param] |= mc_enc(
-+ 24, 8, cfg->extracts[i].num_of_byte_masks);
-+ params[param] |= mc_enc(32, 4, cfg->extracts[i].type);
-+ params[param] = cpu_to_le64(params[param]);
-+ param++;
-+ for (offset = 0, j = 0;
-+ j < DPKG_NUM_OF_MASKS;
-+ offset += 16, j++) {
-+ params[param] |= mc_enc(
-+ (offset), 8, cfg->extracts[i].masks[j].mask);
-+ params[param] |= mc_enc(
-+ (offset + 8), 8,
-+ cfg->extracts[i].masks[j].offset);
-+ }
-+ params[param] = cpu_to_le64(params[param]);
-+ param++;
-+ }
-+ return 0;
-+}
-+
-+int dpni_prepare_extended_cfg(const struct dpni_extended_cfg *cfg,
-+ uint8_t *ext_cfg_buf)
-+{
-+ uint64_t *ext_params = (uint64_t *)ext_cfg_buf;
-+
-+ DPNI_PREP_EXTENDED_CFG(ext_params, cfg);
-+
-+ return 0;
-+}
-+
-+int dpni_extract_extended_cfg(struct dpni_extended_cfg *cfg,
-+ const uint8_t *ext_cfg_buf)
-+{
-+ uint64_t *ext_params = (uint64_t *)ext_cfg_buf;
-+
-+ DPNI_EXT_EXTENDED_CFG(ext_params, cfg);
-+
-+ return 0;
-+}
-+
-+int dpni_open(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ int dpni_id,
-+ uint16_t *token)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_OPEN,
-+ cmd_flags,
-+ 0);
-+ DPNI_CMD_OPEN(cmd, dpni_id);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ *token = MC_CMD_HDR_READ_TOKEN(cmd.header);
-+
-+ return 0;
-+}
-+
-+int dpni_close(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLOSE,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_create(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ const struct dpni_cfg *cfg,
-+ uint16_t *token)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_CREATE,
-+ cmd_flags,
-+ 0);
-+ DPNI_CMD_CREATE(cmd, cfg);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ *token = MC_CMD_HDR_READ_TOKEN(cmd.header);
-+
-+ return 0;
-+}
-+
-+int dpni_destroy(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_DESTROY,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_pools(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_pools_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_POOLS,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_POOLS(cmd, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_enable(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_ENABLE,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_disable(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_DISABLE,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_is_enabled(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int *en)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_IS_ENABLED, cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_IS_ENABLED(cmd, *en);
-+
-+ return 0;
-+}
-+
-+int dpni_reset(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_RESET,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_irq(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ struct dpni_irq_cfg *irq_cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IRQ,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_IRQ(cmd, irq_index, irq_cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_irq(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ int *type,
-+ struct dpni_irq_cfg *irq_cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_IRQ(cmd, irq_index);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_IRQ(cmd, *type, irq_cfg);
-+
-+ return 0;
-+}
-+
-+int dpni_set_irq_enable(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ uint8_t en)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IRQ_ENABLE,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_IRQ_ENABLE(cmd, irq_index, en);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_irq_enable(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ uint8_t *en)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_ENABLE,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_IRQ_ENABLE(cmd, irq_index);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_IRQ_ENABLE(cmd, *en);
-+
-+ return 0;
-+}
-+
-+int dpni_set_irq_mask(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ uint32_t mask)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IRQ_MASK,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_IRQ_MASK(cmd, irq_index, mask);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_irq_mask(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ uint32_t *mask)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_MASK,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_IRQ_MASK(cmd, irq_index);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_IRQ_MASK(cmd, *mask);
-+
-+ return 0;
-+}
-+
-+int dpni_get_irq_status(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ uint32_t *status)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_STATUS,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_IRQ_STATUS(cmd, irq_index, *status);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_IRQ_STATUS(cmd, *status);
-+
-+ return 0;
-+}
-+
-+int dpni_clear_irq_status(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ uint32_t status)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLEAR_IRQ_STATUS,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_attributes(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_attr *attr)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_ATTR,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_ATTR(cmd, attr);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_ATTR(cmd, attr);
-+
-+ return 0;
-+}
-+
-+int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_error_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_ERRORS_BEHAVIOR,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_ERRORS_BEHAVIOR(cmd, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_rx_buffer_layout(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_buffer_layout *layout)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_BUFFER_LAYOUT,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_RX_BUFFER_LAYOUT(cmd, layout);
-+
-+ return 0;
-+}
-+
-+int dpni_set_rx_buffer_layout(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_buffer_layout *layout)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_BUFFER_LAYOUT,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_RX_BUFFER_LAYOUT(cmd, layout);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_tx_buffer_layout(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_buffer_layout *layout)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_BUFFER_LAYOUT,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_TX_BUFFER_LAYOUT(cmd, layout);
-+
-+ return 0;
-+}
-+
-+int dpni_set_tx_buffer_layout(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_buffer_layout *layout)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_BUFFER_LAYOUT,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_TX_BUFFER_LAYOUT(cmd, layout);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_tx_conf_buffer_layout(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_buffer_layout *layout)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_CONF_BUFFER_LAYOUT,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_TX_CONF_BUFFER_LAYOUT(cmd, layout);
-+
-+ return 0;
-+}
-+
-+int dpni_set_tx_conf_buffer_layout(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_buffer_layout *layout)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_CONF_BUFFER_LAYOUT,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_TX_CONF_BUFFER_LAYOUT(cmd, layout);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_l3_chksum_validation(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int *en)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_L3_CHKSUM_VALIDATION,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_L3_CHKSUM_VALIDATION(cmd, *en);
-+
-+ return 0;
-+}
-+
-+int dpni_set_l3_chksum_validation(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_L3_CHKSUM_VALIDATION,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_L3_CHKSUM_VALIDATION(cmd, en);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_l4_chksum_validation(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int *en)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_L4_CHKSUM_VALIDATION,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_L4_CHKSUM_VALIDATION(cmd, *en);
-+
-+ return 0;
-+}
-+
-+int dpni_set_l4_chksum_validation(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_L4_CHKSUM_VALIDATION,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_L4_CHKSUM_VALIDATION(cmd, en);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_qdid(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t *qdid)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QDID,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_QDID(cmd, *qdid);
-+
-+ return 0;
-+}
-+
-+int dpni_get_sp_info(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_sp_info *sp_info)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_SP_INFO,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_SP_INFO(cmd, sp_info);
-+
-+ return 0;
-+}
-+
-+int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t *data_offset)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_DATA_OFFSET,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_TX_DATA_OFFSET(cmd, *data_offset);
-+
-+ return 0;
-+}
-+
-+int dpni_get_counter(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ enum dpni_counter counter,
-+ uint64_t *value)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_COUNTER,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_COUNTER(cmd, counter);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_COUNTER(cmd, *value);
-+
-+ return 0;
-+}
-+
-+int dpni_set_counter(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ enum dpni_counter counter,
-+ uint64_t value)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_COUNTER,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_COUNTER(cmd, counter, value);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_link_cfg(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_link_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_LINK_CFG,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_LINK_CFG(cmd, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_link_state(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_link_state *state)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_LINK_STATE,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_LINK_STATE(cmd, state);
-+
-+ return 0;
-+}
-+
-+int dpni_set_tx_shaping(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_tx_shaping_cfg *tx_shaper)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_SHAPING,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_TX_SHAPING(cmd, tx_shaper);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_max_frame_length(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t max_frame_length)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_MAX_FRAME_LENGTH,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_MAX_FRAME_LENGTH(cmd, max_frame_length);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_max_frame_length(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t *max_frame_length)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MAX_FRAME_LENGTH,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_MAX_FRAME_LENGTH(cmd, *max_frame_length);
-+
-+ return 0;
-+}
-+
-+int dpni_set_mtu(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t mtu)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_MTU,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_MTU(cmd, mtu);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_mtu(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t *mtu)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MTU,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_MTU(cmd, *mtu);
-+
-+ return 0;
-+}
-+
-+int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_MCAST_PROMISC,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_MULTICAST_PROMISC(cmd, en);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int *en)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MCAST_PROMISC,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_MULTICAST_PROMISC(cmd, *en);
-+
-+ return 0;
-+}
-+
-+int dpni_set_unicast_promisc(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_UNICAST_PROMISC,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_UNICAST_PROMISC(cmd, en);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_unicast_promisc(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int *en)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_UNICAST_PROMISC,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_UNICAST_PROMISC(cmd, *en);
-+
-+ return 0;
-+}
-+
-+int dpni_set_primary_mac_addr(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const uint8_t mac_addr[6])
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_PRIM_MAC,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_PRIMARY_MAC_ADDR(cmd, mac_addr);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_primary_mac_addr(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t mac_addr[6])
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_PRIM_MAC,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_PRIMARY_MAC_ADDR(cmd, mac_addr);
-+
-+ return 0;
-+}
-+
-+int dpni_add_mac_addr(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const uint8_t mac_addr[6])
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_MAC_ADDR,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_ADD_MAC_ADDR(cmd, mac_addr);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_remove_mac_addr(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const uint8_t mac_addr[6])
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_MAC_ADDR,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_REMOVE_MAC_ADDR(cmd, mac_addr);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_clear_mac_filters(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int unicast,
-+ int multicast)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_MAC_FILTERS,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_CLEAR_MAC_FILTERS(cmd, unicast, multicast);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_vlan_filters(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_VLAN_FILTERS,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_VLAN_FILTERS(cmd, en);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_add_vlan_id(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t vlan_id)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_VLAN_ID,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_ADD_VLAN_ID(cmd, vlan_id);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_remove_vlan_id(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t vlan_id)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_VLAN_ID,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_REMOVE_VLAN_ID(cmd, vlan_id);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_VLAN_FILTERS,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_tx_selection(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_tx_selection_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_SELECTION,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_TX_SELECTION(cmd, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ const struct dpni_rx_tc_dist_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_TC_DIST,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_RX_TC_DIST(cmd, tc_id, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_tx_flow(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t *flow_id,
-+ const struct dpni_tx_flow_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_FLOW,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_TX_FLOW(cmd, *flow_id, cfg);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_SET_TX_FLOW(cmd, *flow_id);
-+
-+ return 0;
-+}
-+
-+int dpni_get_tx_flow(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t flow_id,
-+ struct dpni_tx_flow_attr *attr)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_FLOW,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_TX_FLOW(cmd, flow_id);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_TX_FLOW(cmd, attr);
-+
-+ return 0;
-+}
-+
-+int dpni_set_rx_flow(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ uint16_t flow_id,
-+ const struct dpni_queue_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_FLOW,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_RX_FLOW(cmd, tc_id, flow_id, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_rx_flow(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ uint16_t flow_id,
-+ struct dpni_queue_attr *attr)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_FLOW,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_RX_FLOW(cmd, tc_id, flow_id);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_RX_FLOW(cmd, attr);
-+
-+ return 0;
-+}
-+
-+int dpni_set_rx_err_queue(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_queue_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_ERR_QUEUE,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_RX_ERR_QUEUE(cmd, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_rx_err_queue(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_queue_attr *attr)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_ERR_QUEUE,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ /* retrieve response parameters */
-+ DPNI_RSP_GET_RX_ERR_QUEUE(cmd, attr);
-+
-+ return 0;
-+}
-+
-+int dpni_set_tx_conf_revoke(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int revoke)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_CONF_REVOKE,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_TX_CONF_REVOKE(cmd, revoke);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_qos_table(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_qos_tbl_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_QOS_TBL,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_QOS_TABLE(cmd, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_add_qos_entry(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_rule_cfg *cfg,
-+ uint8_t tc_id)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_QOS_ENT,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_ADD_QOS_ENTRY(cmd, cfg, tc_id);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_remove_qos_entry(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_rule_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_QOS_ENT,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_REMOVE_QOS_ENTRY(cmd, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_QOS_TBL,
-+ cmd_flags,
-+ token);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_add_fs_entry(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ const struct dpni_rule_cfg *cfg,
-+ uint16_t flow_id)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_FS_ENT,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_ADD_FS_ENTRY(cmd, tc_id, cfg, flow_id);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_remove_fs_entry(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ const struct dpni_rule_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_FS_ENT,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_REMOVE_FS_ENTRY(cmd, tc_id, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_clear_fs_entries(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_FS_ENT,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_CLEAR_FS_ENTRIES(cmd, tc_id);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_vlan_insertion(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_VLAN_INSERTION,
-+ cmd_flags, token);
-+ DPNI_CMD_SET_VLAN_INSERTION(cmd, en);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_vlan_removal(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_VLAN_REMOVAL,
-+ cmd_flags, token);
-+ DPNI_CMD_SET_VLAN_REMOVAL(cmd, en);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_ipr(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IPR,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_IPR(cmd, en);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_ipf(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IPF,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_IPF(cmd, en);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_rx_tc_policing(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ const struct dpni_rx_tc_policing_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_TC_POLICING,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_RX_TC_POLICING(cmd, tc_id, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_rx_tc_policing(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ struct dpni_rx_tc_policing_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_TC_POLICING,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_RX_TC_POLICING(cmd, tc_id);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ DPNI_RSP_GET_RX_TC_POLICING(cmd, cfg);
-+
-+ return 0;
-+}
-+
-+void dpni_prepare_early_drop(const struct dpni_early_drop_cfg *cfg,
-+ uint8_t *early_drop_buf)
-+{
-+ uint64_t *ext_params = (uint64_t *)early_drop_buf;
-+
-+ DPNI_PREP_EARLY_DROP(ext_params, cfg);
-+}
-+
-+void dpni_extract_early_drop(struct dpni_early_drop_cfg *cfg,
-+ const uint8_t *early_drop_buf)
-+{
-+ uint64_t *ext_params = (uint64_t *)early_drop_buf;
-+
-+ DPNI_EXT_EARLY_DROP(ext_params, cfg);
-+}
-+
-+int dpni_set_rx_tc_early_drop(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ uint64_t early_drop_iova)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_TC_EARLY_DROP,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_RX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_rx_tc_early_drop(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ uint64_t early_drop_iova)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_RX_TC_EARLY_DROP,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_RX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_tx_tc_early_drop(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ uint64_t early_drop_iova)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_TC_EARLY_DROP,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_TX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_tx_tc_early_drop(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ uint64_t early_drop_iova)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_TC_EARLY_DROP,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_TX_TC_EARLY_DROP(cmd, tc_id, early_drop_iova);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_set_rx_tc_congestion_notification(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ const struct dpni_congestion_notification_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(
-+ DPNI_CMDID_SET_RX_TC_CONGESTION_NOTIFICATION,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_RX_TC_CONGESTION_NOTIFICATION(cmd, tc_id, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_rx_tc_congestion_notification(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ struct dpni_congestion_notification_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(
-+ DPNI_CMDID_GET_RX_TC_CONGESTION_NOTIFICATION,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_RX_TC_CONGESTION_NOTIFICATION(cmd, tc_id);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ DPNI_RSP_GET_RX_TC_CONGESTION_NOTIFICATION(cmd, cfg);
-+
-+ return 0;
-+}
-+
-+int dpni_set_tx_tc_congestion_notification(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ const struct dpni_congestion_notification_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(
-+ DPNI_CMDID_SET_TX_TC_CONGESTION_NOTIFICATION,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_TX_TC_CONGESTION_NOTIFICATION(cmd, tc_id, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_tx_tc_congestion_notification(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ struct dpni_congestion_notification_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(
-+ DPNI_CMDID_GET_TX_TC_CONGESTION_NOTIFICATION,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_TX_TC_CONGESTION_NOTIFICATION(cmd, tc_id);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ DPNI_RSP_GET_TX_TC_CONGESTION_NOTIFICATION(cmd, cfg);
-+
-+ return 0;
-+}
-+
-+int dpni_set_tx_conf(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t flow_id,
-+ const struct dpni_tx_conf_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TX_CONF,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_TX_CONF(cmd, flow_id, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_tx_conf(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t flow_id,
-+ struct dpni_tx_conf_attr *attr)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_CONF,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_TX_CONF(cmd, flow_id);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ DPNI_RSP_GET_TX_CONF(cmd, attr);
-+
-+ return 0;
-+}
-+
-+int dpni_set_tx_conf_congestion_notification(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t flow_id,
-+ const struct dpni_congestion_notification_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(
-+ DPNI_CMDID_SET_TX_CONF_CONGESTION_NOTIFICATION,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_SET_TX_CONF_CONGESTION_NOTIFICATION(cmd, flow_id, cfg);
-+
-+ /* send command to mc*/
-+ return mc_send_command(mc_io, &cmd);
-+}
-+
-+int dpni_get_tx_conf_congestion_notification(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t flow_id,
-+ struct dpni_congestion_notification_cfg *cfg)
-+{
-+ struct mc_command cmd = { 0 };
-+ int err;
-+
-+ /* prepare command */
-+ cmd.header = mc_encode_cmd_header(
-+ DPNI_CMDID_GET_TX_CONF_CONGESTION_NOTIFICATION,
-+ cmd_flags,
-+ token);
-+ DPNI_CMD_GET_TX_CONF_CONGESTION_NOTIFICATION(cmd, flow_id);
-+
-+ /* send command to mc*/
-+ err = mc_send_command(mc_io, &cmd);
-+ if (err)
-+ return err;
-+
-+ DPNI_RSP_GET_TX_CONF_CONGESTION_NOTIFICATION(cmd, cfg);
-+
-+ return 0;
-+}
---- /dev/null
-+++ b/drivers/staging/fsl-dpaa2/ethernet/dpni.h
-@@ -0,0 +1,2581 @@
-+/* Copyright 2013-2015 Freescale Semiconductor Inc.
-+ *
-+ * Redistribution and use in source and binary forms, with or without
-+ * modification, are permitted provided that the following conditions are met:
-+ * * Redistributions of source code must retain the above copyright
-+ * notice, this list of conditions and the following disclaimer.
-+ * * Redistributions in binary form must reproduce the above copyright
-+ * notice, this list of conditions and the following disclaimer in the
-+ * documentation and/or other materials provided with the distribution.
-+ * * Neither the name of the above-listed copyright holders nor the
-+ * names of any contributors may be used to endorse or promote products
-+ * derived from this software without specific prior written permission.
-+ *
-+ *
-+ * ALTERNATIVELY, this software may be distributed under the terms of the
-+ * GNU General Public License ("GPL") as published by the Free Software
-+ * Foundation, either version 2 of that License or (at your option) any
-+ * later version.
-+ *
-+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
-+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-+ * POSSIBILITY OF SUCH DAMAGE.
-+ */
-+#ifndef __FSL_DPNI_H
-+#define __FSL_DPNI_H
-+
-+#include "dpkg.h"
-+
-+struct fsl_mc_io;
-+
-+/**
-+ * Data Path Network Interface API
-+ * Contains initialization APIs and runtime control APIs for DPNI
-+ */
-+
-+/** General DPNI macros */
-+
-+/**
-+ * Maximum number of traffic classes
-+ */
-+#define DPNI_MAX_TC 8
-+/**
-+ * Maximum number of buffer pools per DPNI
-+ */
-+#define DPNI_MAX_DPBP 8
-+/**
-+ * Maximum number of storage-profiles per DPNI
-+ */
-+#define DPNI_MAX_SP 2
-+
-+/**
-+ * All traffic classes considered; see dpni_set_rx_flow()
-+ */
-+#define DPNI_ALL_TCS (uint8_t)(-1)
-+/**
-+ * All flows within traffic class considered; see dpni_set_rx_flow()
-+ */
-+#define DPNI_ALL_TC_FLOWS (uint16_t)(-1)
-+/**
-+ * Generate new flow ID; see dpni_set_tx_flow()
-+ */
-+#define DPNI_NEW_FLOW_ID (uint16_t)(-1)
-+/* use for common tx-conf queue; see dpni_set_tx_conf_<x>() */
-+#define DPNI_COMMON_TX_CONF (uint16_t)(-1)
-+
-+/**
-+ * dpni_open() - Open a control session for the specified object
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @dpni_id: DPNI unique ID
-+ * @token: Returned token; use in subsequent API calls
-+ *
-+ * This function can be used to open a control session for an
-+ * already created object; an object may have been declared in
-+ * the DPL or by calling the dpni_create() function.
-+ * This function returns a unique authentication token,
-+ * associated with the specific object ID and the specific MC
-+ * portal; this token must be used in all subsequent commands for
-+ * this specific object.
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_open(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ int dpni_id,
-+ uint16_t *token);
-+
-+/**
-+ * dpni_close() - Close the control session of the object
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ *
-+ * After this function is called, no further operations are
-+ * allowed on the object without opening a new control session.
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_close(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token);
-+
-+/* DPNI configuration options */
-+
-+/**
-+ * Allow different distribution key profiles for different traffic classes;
-+ * if not set, a single key profile is assumed
-+ */
-+#define DPNI_OPT_ALLOW_DIST_KEY_PER_TC 0x00000001
-+
-+/**
-+ * Disable all non-error transmit confirmation; error frames are reported
-+ * back to a common Tx error queue
-+ */
-+#define DPNI_OPT_TX_CONF_DISABLED 0x00000002
-+
-+/**
-+ * Disable per-sender private Tx confirmation/error queue
-+ */
-+#define DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED 0x00000004
-+
-+/**
-+ * Support distribution based on hashed key;
-+ * allows statistical distribution over receive queues in a traffic class
-+ */
-+#define DPNI_OPT_DIST_HASH 0x00000010
-+
-+/**
-+ * DEPRECATED - if this flag is selected and all new 'max_fs_entries' are
-+ * '0' then backward compatibility is preserved;
-+ * Support distribution based on flow steering;
-+ * allows explicit control of distribution over receive queues in a traffic
-+ * class
-+ */
-+#define DPNI_OPT_DIST_FS 0x00000020
-+
-+/**
-+ * Unicast filtering support
-+ */
-+#define DPNI_OPT_UNICAST_FILTER 0x00000080
-+/**
-+ * Multicast filtering support
-+ */
-+#define DPNI_OPT_MULTICAST_FILTER 0x00000100
-+/**
-+ * VLAN filtering support
-+ */
-+#define DPNI_OPT_VLAN_FILTER 0x00000200
-+/**
-+ * Support IP reassembly on received packets
-+ */
-+#define DPNI_OPT_IPR 0x00000800
-+/**
-+ * Support IP fragmentation on transmitted packets
-+ */
-+#define DPNI_OPT_IPF 0x00001000
-+/**
-+ * VLAN manipulation support
-+ */
-+#define DPNI_OPT_VLAN_MANIPULATION 0x00010000
-+/**
-+ * Support masking of QoS lookup keys
-+ */
-+#define DPNI_OPT_QOS_MASK_SUPPORT 0x00020000
-+/**
-+ * Support masking of Flow Steering lookup keys
-+ */
-+#define DPNI_OPT_FS_MASK_SUPPORT 0x00040000
-+
-+/**
-+ * struct dpni_extended_cfg - Structure representing extended DPNI configuration
-+ * @tc_cfg: TCs configuration
-+ * @ipr_cfg: IP reassembly configuration
-+ */
-+struct dpni_extended_cfg {
-+ /**
-+ * struct tc_cfg - TC configuration
-+ * @max_dist: Maximum distribution size for Rx traffic class;
-+ * supported values: 1,2,3,4,6,7,8,12,14,16,24,28,32,48,56,64,96,
-+ * 112,128,192,224,256,384,448,512,768,896,1024;
-+ *		value '0' will be treated as '1';
-+ *		other unsupported values will be rounded down to the nearest
-+ *		supported value.
-+ * @max_fs_entries: Maximum FS entries for Rx traffic class;
-+ * '0' means no support for this TC;
-+ */
-+ struct {
-+ uint16_t max_dist;
-+ uint16_t max_fs_entries;
-+ } tc_cfg[DPNI_MAX_TC];
-+ /**
-+ * struct ipr_cfg - Structure representing IP reassembly configuration
-+ * @max_reass_frm_size: Maximum size of the reassembled frame
-+ * @min_frag_size_ipv4: Minimum fragment size of IPv4 fragments
-+ * @min_frag_size_ipv6: Minimum fragment size of IPv6 fragments
-+ * @max_open_frames_ipv4: Maximum concurrent IPv4 packets in reassembly
-+ * process
-+ * @max_open_frames_ipv6: Maximum concurrent IPv6 packets in reassembly
-+ * process
-+ */
-+ struct {
-+ uint16_t max_reass_frm_size;
-+ uint16_t min_frag_size_ipv4;
-+ uint16_t min_frag_size_ipv6;
-+ uint16_t max_open_frames_ipv4;
-+ uint16_t max_open_frames_ipv6;
-+ } ipr_cfg;
-+};
-+
-+/**
-+ * dpni_prepare_extended_cfg() - Prepare the extended configuration parameters
-+ * @cfg: extended structure
-+ * @ext_cfg_buf: Zeroed 256-byte memory buffer, to be mapped for DMA afterwards
-+ *
-+ * This function has to be called before dpni_create()
-+ */
-+int dpni_prepare_extended_cfg(const struct dpni_extended_cfg *cfg,
-+ uint8_t *ext_cfg_buf);
-+
-+/**
-+ * struct dpni_cfg - Structure representing DPNI configuration
-+ * @mac_addr: Primary MAC address
-+ * @adv: Advanced parameters; default is all zeros;
-+ * use this structure to change default settings
-+ */
-+struct dpni_cfg {
-+ uint8_t mac_addr[6];
-+ /**
-+ * struct adv - Advanced parameters
-+ * @options: Mask of available options; use 'DPNI_OPT_<X>' values
-+ * @start_hdr: Selects the packet starting header for parsing;
-+ * 'NET_PROT_NONE' is treated as default: 'NET_PROT_ETH'
-+ * @max_senders: Maximum number of different senders; used as the number
-+ * of dedicated Tx flows; Non-power-of-2 values are rounded
-+ * up to the next power-of-2 value as hardware demands it;
-+ * '0' will be treated as '1'
-+ * @max_tcs: Maximum number of traffic classes (for both Tx and Rx);
-+ *		'0' will be treated as '1'
-+ * @max_unicast_filters: Maximum number of unicast filters;
-+ * '0' is treated as '16'
-+ * @max_multicast_filters: Maximum number of multicast filters;
-+ * '0' is treated as '64'
-+	 * @max_vlan_filters: Maximum number of VLAN filters;
-+	 *		'0' is treated as '16'
-+ * @max_qos_entries: if 'max_tcs > 1', declares the maximum entries in
-+ * the QoS table; '0' is treated as '64'
-+ * @max_qos_key_size: Maximum key size for the QoS look-up;
-+ * '0' is treated as '24' which is enough for IPv4
-+ * 5-tuple
-+ * @max_dist_key_size: Maximum key size for the distribution;
-+ * '0' is treated as '24' which is enough for IPv4 5-tuple
-+ * @max_policers: Maximum number of policers;
-+ * should be between '0' and max_tcs
-+ * @max_congestion_ctrl: Maximum number of congestion control groups
-+ * (CGs); covers early drop and congestion notification
-+ * requirements;
-+ * should be between '0' and ('max_tcs' + 'max_senders')
-+ * @ext_cfg_iova: I/O virtual address of 256 bytes DMA-able memory
-+ * filled with the extended configuration by calling
-+ * dpni_prepare_extended_cfg()
-+ */
-+ struct {
-+ uint32_t options;
-+ enum net_prot start_hdr;
-+ uint8_t max_senders;
-+ uint8_t max_tcs;
-+ uint8_t max_unicast_filters;
-+ uint8_t max_multicast_filters;
-+ uint8_t max_vlan_filters;
-+ uint8_t max_qos_entries;
-+ uint8_t max_qos_key_size;
-+ uint8_t max_dist_key_size;
-+ uint8_t max_policers;
-+ uint8_t max_congestion_ctrl;
-+ uint64_t ext_cfg_iova;
-+ } adv;
-+};
-+
-+/**
-+ * dpni_create() - Create the DPNI object
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @cfg: Configuration structure
-+ * @token: Returned token; use in subsequent API calls
-+ *
-+ * Create the DPNI object, allocate required resources and
-+ * perform required initialization.
-+ *
-+ * The object can be created either by declaring it in the
-+ * DPL file, or by calling this function.
-+ *
-+ * This function returns a unique authentication token,
-+ * associated with the specific object ID and the specific MC
-+ * portal; this token must be used in all subsequent calls to
-+ * this specific object. For objects that are created using the
-+ * DPL file, call dpni_open() function to get an authentication
-+ * token first.
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_create(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ const struct dpni_cfg *cfg,
-+ uint16_t *token);
-+
-+/**
-+ * dpni_destroy() - Destroy the DPNI object and release all its resources.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_destroy(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token);
-+
-+/**
-+ * struct dpni_pools_cfg - Structure representing buffer pools configuration
-+ * @num_dpbp: Number of DPBPs
-+ * @pools: Array of buffer pools parameters; The number of valid entries
-+ * must match 'num_dpbp' value
-+ */
-+struct dpni_pools_cfg {
-+ uint8_t num_dpbp;
-+ /**
-+ * struct pools - Buffer pools parameters
-+ * @dpbp_id: DPBP object ID
-+ * @buffer_size: Buffer size
-+ * @backup_pool: Backup pool
-+ */
-+ struct {
-+ int dpbp_id;
-+ uint16_t buffer_size;
-+ int backup_pool;
-+ } pools[DPNI_MAX_DPBP];
-+};
-+
-+/**
-+ * dpni_set_pools() - Set buffer pools configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @cfg: Buffer pools configuration
-+ *
-+ * This function is mandatory for DPNI operation.
-+ * Warning: allowed only when the DPNI is disabled
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_pools(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_pools_cfg *cfg);
-+
-+/**
-+ * dpni_enable() - Enable the DPNI, allow sending and receiving frames.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_enable(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token);
-+
-+/**
-+ * dpni_disable() - Disable the DPNI, stop sending and receiving frames.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_disable(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token);
-+
-+/**
-+ * dpni_is_enabled() - Check if the DPNI is enabled.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Returns '1' if object is enabled; '0' otherwise
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_is_enabled(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int *en);
-+
-+/**
-+ * dpni_reset() - Reset the DPNI, returns the object to initial state.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_reset(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token);
-+
-+/**
-+ * DPNI IRQ Index and Events
-+ */
-+
-+/**
-+ * IRQ index
-+ */
-+#define DPNI_IRQ_INDEX 0
-+/**
-+ * IRQ event - indicates a change in link state
-+ */
-+#define DPNI_IRQ_EVENT_LINK_CHANGED 0x00000001
-+
-+/**
-+ * struct dpni_irq_cfg - IRQ configuration
-+ * @addr: Address that must be written to signal a message-based interrupt
-+ * @val: Value to write into irq_addr address
-+ * @irq_num: A user defined number associated with this IRQ
-+ */
-+struct dpni_irq_cfg {
-+ uint64_t addr;
-+ uint32_t val;
-+ int irq_num;
-+};
-+
-+/**
-+ * dpni_set_irq() - Set IRQ information for the DPNI to trigger an interrupt.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @irq_index: Identifies the interrupt index to configure
-+ * @irq_cfg: IRQ configuration
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_irq(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ struct dpni_irq_cfg *irq_cfg);
-+
-+/**
-+ * dpni_get_irq() - Get IRQ information from the DPNI.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @irq_index: The interrupt index to configure
-+ * @type: Interrupt type: 0 represents message interrupt
-+ * type (both irq_addr and irq_val are valid)
-+ * @irq_cfg: IRQ attributes
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_irq(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ int *type,
-+ struct dpni_irq_cfg *irq_cfg);
-+
-+/**
-+ * dpni_set_irq_enable() - Set overall interrupt state.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @irq_index: The interrupt index to configure
-+ * @en: Interrupt state: - enable = 1, disable = 0
-+ *
-+ * Allows GPP software to control when interrupts are generated.
-+ * Each interrupt can have up to 32 causes. The enable/disable controls the
-+ * overall interrupt state: if the interrupt is disabled, no causes can
-+ * assert the interrupt.
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_irq_enable(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ uint8_t en);
-+
-+/**
-+ * dpni_get_irq_enable() - Get overall interrupt state
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @irq_index: The interrupt index to configure
-+ * @en: Returned interrupt state - enable = 1, disable = 0
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_irq_enable(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ uint8_t *en);
-+
-+/**
-+ * dpni_set_irq_mask() - Set interrupt mask.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @irq_index: The interrupt index to configure
-+ * @mask: event mask to trigger interrupt;
-+ * each bit:
-+ * 0 = ignore event
-+ * 1 = consider event for asserting IRQ
-+ *
-+ * Every interrupt can have up to 32 causes and the interrupt model supports
-+ * masking/unmasking each cause independently
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_irq_mask(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ uint32_t mask);
-+
-+/**
-+ * dpni_get_irq_mask() - Get interrupt mask.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @irq_index: The interrupt index to configure
-+ * @mask: Returned event mask to trigger interrupt
-+ *
-+ * Every interrupt can have up to 32 causes and the interrupt model supports
-+ * masking/unmasking each cause independently
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_irq_mask(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ uint32_t *mask);
-+
-+/**
-+ * dpni_get_irq_status() - Get the current status of any pending interrupts.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @irq_index: The interrupt index to configure
-+ * @status: Returned interrupts status - one bit per cause:
-+ * 0 = no interrupt pending
-+ * 1 = interrupt pending
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_irq_status(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ uint32_t *status);
-+
-+/**
-+ * dpni_clear_irq_status() - Clear a pending interrupt's status
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @irq_index: The interrupt index to configure
-+ * @status: bits to clear (W1C) - one bit per cause:
-+ * 0 = don't change
-+ * 1 = clear status bit
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_clear_irq_status(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t irq_index,
-+ uint32_t status);
-+
-+/**
-+ * struct dpni_attr - Structure representing DPNI attributes
-+ * @id: DPNI object ID
-+ * @version: DPNI version
-+ * @start_hdr: Indicates the packet starting header for parsing
-+ * @options: Mask of available options; reflects the value as was given in
-+ * object's creation
-+ * @max_senders: Maximum number of different senders; used as the number
-+ * of dedicated Tx flows;
-+ * @max_tcs: Maximum number of traffic classes (for both Tx and Rx)
-+ * @max_unicast_filters: Maximum number of unicast filters
-+ * @max_multicast_filters: Maximum number of multicast filters
-+ * @max_vlan_filters: Maximum number of VLAN filters
-+ * @max_qos_entries: if 'max_tcs > 1', declares the maximum entries in QoS table
-+ * @max_qos_key_size: Maximum key size for the QoS look-up
-+ * @max_dist_key_size: Maximum key size for the distribution look-up
-+ * @max_policers: Maximum number of policers;
-+ * @max_congestion_ctrl: Maximum number of congestion control groups (CGs);
-+ * @ext_cfg_iova: I/O virtual address of 256 bytes DMA-able memory;
-+ * call dpni_extract_extended_cfg() to extract the extended configuration
-+ */
-+struct dpni_attr {
-+ int id;
-+ /**
-+ * struct version - DPNI version
-+ * @major: DPNI major version
-+ * @minor: DPNI minor version
-+ */
-+ struct {
-+ uint16_t major;
-+ uint16_t minor;
-+ } version;
-+ enum net_prot start_hdr;
-+ uint32_t options;
-+ uint8_t max_senders;
-+ uint8_t max_tcs;
-+ uint8_t max_unicast_filters;
-+ uint8_t max_multicast_filters;
-+ uint8_t max_vlan_filters;
-+ uint8_t max_qos_entries;
-+ uint8_t max_qos_key_size;
-+ uint8_t max_dist_key_size;
-+ uint8_t max_policers;
-+ uint8_t max_congestion_ctrl;
-+ uint64_t ext_cfg_iova;
-+};
-+
-+/**
-+ * dpni_get_attributes() - Retrieve DPNI attributes.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @attr: Object's attributes
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_attributes(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_attr *attr);
-+
-+/**
-+ * dpni_extract_extended_cfg() - extract the extended parameters
-+ * @cfg: extended structure
-+ * @ext_cfg_buf: 256 bytes of DMA-able memory
-+ *
-+ * This function has to be called after dpni_get_attributes()
-+ */
-+int dpni_extract_extended_cfg(struct dpni_extended_cfg *cfg,
-+ const uint8_t *ext_cfg_buf);
-+
-+/**
-+ * DPNI errors
-+ */
-+
-+/**
-+ * Extract out of frame header error
-+ */
-+#define DPNI_ERROR_EOFHE 0x00020000
-+/**
-+ * Frame length error
-+ */
-+#define DPNI_ERROR_FLE 0x00002000
-+/**
-+ * Frame physical error
-+ */
-+#define DPNI_ERROR_FPE 0x00001000
-+/**
-+ * Parsing header error
-+ */
-+#define DPNI_ERROR_PHE 0x00000020
-+/**
-+ * Parser L3 checksum error
-+ */
-+#define DPNI_ERROR_L3CE 0x00000004
-+/**
-+ * Parser L4 checksum error
-+ */
-+#define DPNI_ERROR_L4CE 0x00000001
-+
-+/**
-+ * enum dpni_error_action - Defines DPNI behavior for errors
-+ * @DPNI_ERROR_ACTION_DISCARD: Discard the frame
-+ * @DPNI_ERROR_ACTION_CONTINUE: Continue with the normal flow
-+ * @DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE: Send the frame to the error queue
-+ */
-+enum dpni_error_action {
-+ DPNI_ERROR_ACTION_DISCARD = 0,
-+ DPNI_ERROR_ACTION_CONTINUE = 1,
-+ DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE = 2
-+};
-+
-+/**
-+ * struct dpni_error_cfg - Structure representing DPNI errors treatment
-+ * @errors: Errors mask; use 'DPNI_ERROR_<X>' values
-+ * @error_action: The desired action for the errors mask
-+ * @set_frame_annotation: Set to '1' to mark the errors in frame annotation
-+ * status (FAS); relevant only for the non-discard action
-+ */
-+struct dpni_error_cfg {
-+ uint32_t errors;
-+ enum dpni_error_action error_action;
-+ int set_frame_annotation;
-+};
-+
-+/**
-+ * dpni_set_errors_behavior() - Set errors behavior
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @cfg: Errors configuration
-+ *
-+ * This function may be called numerous times with different
-+ * error masks.
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_error_cfg *cfg);
-+
-+/**
-+ * DPNI buffer layout modification options
-+ */
-+
-+/**
-+ * Select to modify the time-stamp setting
-+ */
-+#define DPNI_BUF_LAYOUT_OPT_TIMESTAMP 0x00000001
-+/**
-+ * Select to modify the parser-result setting; not applicable for Tx
-+ */
-+#define DPNI_BUF_LAYOUT_OPT_PARSER_RESULT 0x00000002
-+/**
-+ * Select to modify the frame-status setting
-+ */
-+#define DPNI_BUF_LAYOUT_OPT_FRAME_STATUS 0x00000004
-+/**
-+ * Select to modify the private-data-size setting
-+ */
-+#define DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE 0x00000008
-+/**
-+ * Select to modify the data-alignment setting
-+ */
-+#define DPNI_BUF_LAYOUT_OPT_DATA_ALIGN 0x00000010
-+/**
-+ * Select to modify the data-head-room setting
-+ */
-+#define DPNI_BUF_LAYOUT_OPT_DATA_HEAD_ROOM 0x00000020
-+/**
-+ * Select to modify the data-tail-room setting
-+ */
-+#define DPNI_BUF_LAYOUT_OPT_DATA_TAIL_ROOM 0x00000040
-+
-+/**
-+ * struct dpni_buffer_layout - Structure representing DPNI buffer layout
-+ * @options: Flags representing the suggested modifications to the buffer
-+ * layout; Use any combination of 'DPNI_BUF_LAYOUT_OPT_<X>' flags
-+ * @pass_timestamp: Pass timestamp value
-+ * @pass_parser_result: Pass parser results
-+ * @pass_frame_status: Pass frame status
-+ * @private_data_size: Size kept for private data (in bytes)
-+ * @data_align: Data alignment
-+ * @data_head_room: Data head room
-+ * @data_tail_room: Data tail room
-+ */
-+struct dpni_buffer_layout {
-+ uint32_t options;
-+ int pass_timestamp;
-+ int pass_parser_result;
-+ int pass_frame_status;
-+ uint16_t private_data_size;
-+ uint16_t data_align;
-+ uint16_t data_head_room;
-+ uint16_t data_tail_room;
-+};
-+
-+/**
-+ * dpni_get_rx_buffer_layout() - Retrieve Rx buffer layout attributes.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @layout: Returns buffer layout attributes
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_rx_buffer_layout(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_buffer_layout *layout);
-+
-+/**
-+ * dpni_set_rx_buffer_layout() - Set Rx buffer layout configuration.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @layout: Buffer layout configuration
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ *
-+ * @warning Allowed only when DPNI is disabled
-+ */
-+int dpni_set_rx_buffer_layout(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_buffer_layout *layout);
-+
-+/**
-+ * dpni_get_tx_buffer_layout() - Retrieve Tx buffer layout attributes.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @layout: Returns buffer layout attributes
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_tx_buffer_layout(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_buffer_layout *layout);
-+
-+/**
-+ * dpni_set_tx_buffer_layout() - Set Tx buffer layout configuration.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @layout: Buffer layout configuration
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ *
-+ * @warning Allowed only when DPNI is disabled
-+ */
-+int dpni_set_tx_buffer_layout(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_buffer_layout *layout);
-+
-+/**
-+ * dpni_get_tx_conf_buffer_layout() - Retrieve Tx confirmation buffer layout
-+ * attributes.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @layout: Returns buffer layout attributes
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_tx_conf_buffer_layout(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_buffer_layout *layout);
-+
-+/**
-+ * dpni_set_tx_conf_buffer_layout() - Set Tx confirmation buffer layout
-+ * configuration.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @layout: Buffer layout configuration
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ *
-+ * @warning Allowed only when DPNI is disabled
-+ */
-+int dpni_set_tx_conf_buffer_layout(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_buffer_layout *layout);
-+
-+/**
-+ * dpni_set_l3_chksum_validation() - Enable/disable L3 checksum validation
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Set to '1' to enable; '0' to disable
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_l3_chksum_validation(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en);
-+
-+/**
-+ * dpni_get_l3_chksum_validation() - Get L3 checksum validation mode
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Returns '1' if enabled; '0' otherwise
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_l3_chksum_validation(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int *en);
-+
-+/**
-+ * dpni_set_l4_chksum_validation() - Enable/disable L4 checksum validation
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Set to '1' to enable; '0' to disable
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_l4_chksum_validation(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en);
-+
-+/**
-+ * dpni_get_l4_chksum_validation() - Get L4 checksum validation mode
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Returns '1' if enabled; '0' otherwise
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_l4_chksum_validation(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int *en);
-+
-+/**
-+ * dpni_get_qdid() - Get the Queuing Destination ID (QDID) that should be used
-+ * for enqueue operations
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @qdid: Returned virtual QDID value that should be used as an argument
-+ * in all enqueue operations
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_qdid(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t *qdid);
-+
-+/**
-+ * struct dpni_sp_info - Structure representing DPNI storage-profile information
-+ * (relevant only for DPNI owned by AIOP)
-+ * @spids: array of storage-profiles
-+ */
-+struct dpni_sp_info {
-+ uint16_t spids[DPNI_MAX_SP];
-+};
-+
-+/**
-+ * dpni_get_spids() - Get the AIOP storage profile IDs associated with the DPNI
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @sp_info: Returned AIOP storage-profile information
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ *
-+ * @warning Only relevant for DPNI that belongs to AIOP container.
-+ */
-+int dpni_get_sp_info(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_sp_info *sp_info);
-+
-+/**
-+ * dpni_get_tx_data_offset() - Get the Tx data offset (from start of buffer)
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @data_offset: Tx data offset (from start of buffer)
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t *data_offset);
-+
-+/**
-+ * enum dpni_counter - DPNI counter types
-+ * @DPNI_CNT_ING_FRAME: Counts ingress frames
-+ * @DPNI_CNT_ING_BYTE: Counts ingress bytes
-+ * @DPNI_CNT_ING_FRAME_DROP: Counts ingress frames dropped due to explicit
-+ * 'drop' setting
-+ * @DPNI_CNT_ING_FRAME_DISCARD: Counts ingress frames discarded due to errors
-+ * @DPNI_CNT_ING_MCAST_FRAME: Counts ingress multicast frames
-+ * @DPNI_CNT_ING_MCAST_BYTE: Counts ingress multicast bytes
-+ * @DPNI_CNT_ING_BCAST_FRAME: Counts ingress broadcast frames
-+ * @DPNI_CNT_ING_BCAST_BYTES: Counts ingress broadcast bytes
-+ * @DPNI_CNT_EGR_FRAME: Counts egress frames
-+ * @DPNI_CNT_EGR_BYTE: Counts egress bytes
-+ * @DPNI_CNT_EGR_FRAME_DISCARD: Counts egress frames discarded due to errors
-+ */
-+enum dpni_counter {
-+ DPNI_CNT_ING_FRAME = 0x0,
-+ DPNI_CNT_ING_BYTE = 0x1,
-+ DPNI_CNT_ING_FRAME_DROP = 0x2,
-+ DPNI_CNT_ING_FRAME_DISCARD = 0x3,
-+ DPNI_CNT_ING_MCAST_FRAME = 0x4,
-+ DPNI_CNT_ING_MCAST_BYTE = 0x5,
-+ DPNI_CNT_ING_BCAST_FRAME = 0x6,
-+ DPNI_CNT_ING_BCAST_BYTES = 0x7,
-+ DPNI_CNT_EGR_FRAME = 0x8,
-+ DPNI_CNT_EGR_BYTE = 0x9,
-+ DPNI_CNT_EGR_FRAME_DISCARD = 0xa
-+};
-+
-+/**
-+ * dpni_get_counter() - Read a specific DPNI counter
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @counter: The requested counter
-+ * @value: Returned counter's current value
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_counter(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ enum dpni_counter counter,
-+ uint64_t *value);
-+
-+/**
-+ * dpni_set_counter() - Set (or clear) a specific DPNI counter
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @counter: The requested counter
-+ * @value: New counter value; typically pass '0' for resetting
-+ * the counter.
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_counter(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ enum dpni_counter counter,
-+ uint64_t value);
-+
-+/**
-+ * Enable auto-negotiation
-+ */
-+#define DPNI_LINK_OPT_AUTONEG 0x0000000000000001ULL
-+/**
-+ * Enable half-duplex mode
-+ */
-+#define DPNI_LINK_OPT_HALF_DUPLEX 0x0000000000000002ULL
-+/**
-+ * Enable pause frames
-+ */
-+#define DPNI_LINK_OPT_PAUSE 0x0000000000000004ULL
-+/**
-+ * Enable asymmetric pause frames
-+ */
-+#define DPNI_LINK_OPT_ASYM_PAUSE 0x0000000000000008ULL
-+
-+/**
-+ * struct dpni_link_cfg - Structure representing DPNI link configuration
-+ * @rate: Rate
-+ * @options: Mask of available options; use 'DPNI_LINK_OPT_<X>' values
-+ */
-+struct dpni_link_cfg {
-+ uint32_t rate;
-+ uint64_t options;
-+};
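A minimal sketch of composing a link configuration from the option masks above: autonegotiation plus symmetric pause. The rate unit is an assumption here (the header does not state it for `dpni_link_cfg`; Mbps is used by analogy with `dpni_tx_shaping_cfg`), and `example_link_cfg` is a name introduced for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Option masks and structure, as declared in the header above. */
#define DPNI_LINK_OPT_AUTONEG		0x0000000000000001ULL
#define DPNI_LINK_OPT_HALF_DUPLEX	0x0000000000000002ULL
#define DPNI_LINK_OPT_PAUSE		0x0000000000000004ULL
#define DPNI_LINK_OPT_ASYM_PAUSE	0x0000000000000008ULL

struct dpni_link_cfg {
	uint32_t rate;
	uint64_t options;
};

/* Build a configuration suitable for passing to dpni_set_link_cfg():
 * autonegotiated, pause-enabled, assumed 1000 Mbps. */
static struct dpni_link_cfg example_link_cfg(void)
{
	struct dpni_link_cfg cfg = {
		.rate = 1000,
		.options = DPNI_LINK_OPT_AUTONEG | DPNI_LINK_OPT_PAUSE,
	};
	return cfg;
}
```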
-+
-+/**
-+ * dpni_set_link_cfg() - set the link configuration.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @cfg: Link configuration
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_link_cfg(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_link_cfg *cfg);
-+
-+/**
-+ * struct dpni_link_state - Structure representing DPNI link state
-+ * @rate: Rate
-+ * @options: Mask of available options; use 'DPNI_LINK_OPT_<X>' values
-+ * @up: Link state; '0' for down, '1' for up
-+ */
-+struct dpni_link_state {
-+ uint32_t rate;
-+ uint64_t options;
-+ int up;
-+};
-+
-+/**
-+ * dpni_get_link_state() - Return the link state (either up or down)
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @state: Returned link state;
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_link_state(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_link_state *state);
-+
-+/**
-+ * struct dpni_tx_shaping_cfg - Structure representing DPNI tx shaping configuration
-+ * @rate_limit: rate in Mbps
-+ * @max_burst_size: burst size in bytes (up to 64KB)
-+ */
-+struct dpni_tx_shaping_cfg {
-+ uint32_t rate_limit;
-+ uint16_t max_burst_size;
-+};
-+
-+/**
-+ * dpni_set_tx_shaping() - Set the transmit shaping
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tx_shaper: tx shaping configuration
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_tx_shaping(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_tx_shaping_cfg *tx_shaper);
-+
-+/**
-+ * dpni_set_max_frame_length() - Set the maximum received frame length.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @max_frame_length: Maximum received frame length (in
-+ * bytes); frame is discarded if its
-+ * length exceeds this value
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_max_frame_length(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t max_frame_length);
-+
-+/**
-+ * dpni_get_max_frame_length() - Get the maximum received frame length.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @max_frame_length: Maximum received frame length (in
-+ * bytes); frame is discarded if its
-+ * length exceeds this value
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_max_frame_length(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t *max_frame_length);
-+
-+/**
-+ * dpni_set_mtu() - Set the MTU for the interface.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @mtu: MTU length (in bytes)
-+ *
-+ * MTU determines the maximum fragment size for performing IP
-+ * fragmentation on egress packets.
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_mtu(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t mtu);
-+
-+/**
-+ * dpni_get_mtu() - Get the MTU.
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @mtu: Returned MTU length (in bytes)
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_mtu(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t *mtu);
-+
-+/**
-+ * dpni_set_multicast_promisc() - Enable/disable multicast promiscuous mode
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Set to '1' to enable; '0' to disable
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en);
-+
-+/**
-+ * dpni_get_multicast_promisc() - Get multicast promiscuous mode
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Returns '1' if enabled; '0' otherwise
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int *en);
-+
-+/**
-+ * dpni_set_unicast_promisc() - Enable/disable unicast promiscuous mode
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Set to '1' to enable; '0' to disable
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_unicast_promisc(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en);
-+
-+/**
-+ * dpni_get_unicast_promisc() - Get unicast promiscuous mode
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Returns '1' if enabled; '0' otherwise
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_unicast_promisc(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int *en);
-+
-+/**
-+ * dpni_set_primary_mac_addr() - Set the primary MAC address
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @mac_addr: MAC address to set as primary address
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_primary_mac_addr(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const uint8_t mac_addr[6]);
-+
-+/**
-+ * dpni_get_primary_mac_addr() - Get the primary MAC address
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @mac_addr: Returned MAC address
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_primary_mac_addr(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t mac_addr[6]);
-+
-+/**
-+ * dpni_add_mac_addr() - Add MAC address filter
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @mac_addr: MAC address to add
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_add_mac_addr(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const uint8_t mac_addr[6]);
-+
-+/**
-+ * dpni_remove_mac_addr() - Remove MAC address filter
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @mac_addr: MAC address to remove
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_remove_mac_addr(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const uint8_t mac_addr[6]);
-+
-+/**
-+ * dpni_clear_mac_filters() - Clear all unicast and/or multicast MAC filters
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @unicast: Set to '1' to clear unicast addresses
-+ * @multicast: Set to '1' to clear multicast addresses
-+ *
-+ * The primary MAC address is not cleared by this operation.
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_clear_mac_filters(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int unicast,
-+ int multicast);
-+
-+/**
-+ * dpni_set_vlan_filters() - Enable/disable VLAN filtering mode
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Set to '1' to enable; '0' to disable
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_vlan_filters(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en);
-+
-+/**
-+ * dpni_add_vlan_id() - Add VLAN ID filter
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @vlan_id: VLAN ID to add
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_add_vlan_id(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t vlan_id);
-+
-+/**
-+ * dpni_remove_vlan_id() - Remove VLAN ID filter
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @vlan_id: VLAN ID to remove
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_remove_vlan_id(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t vlan_id);
-+
-+/**
-+ * dpni_clear_vlan_filters() - Clear all VLAN filters
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_clear_vlan_filters(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token);
-+
-+/**
-+ * enum dpni_tx_schedule_mode - DPNI Tx scheduling mode
-+ * @DPNI_TX_SCHED_STRICT_PRIORITY: strict priority
-+ * @DPNI_TX_SCHED_WEIGHTED: weighted based scheduling
-+ */
-+enum dpni_tx_schedule_mode {
-+ DPNI_TX_SCHED_STRICT_PRIORITY,
-+ DPNI_TX_SCHED_WEIGHTED,
-+};
-+
-+/**
-+ * struct dpni_tx_schedule_cfg - Structure representing Tx
-+ * scheduling configuration
-+ * @mode: scheduling mode
-+ * @delta_bandwidth: Bandwidth represented in weights from 100 to 10000;
-+ * not applicable for 'strict-priority' mode;
-+ */
-+struct dpni_tx_schedule_cfg {
-+ enum dpni_tx_schedule_mode mode;
-+ uint16_t delta_bandwidth;
-+};
-+
-+/**
-+ * struct dpni_tx_selection_cfg - Structure representing transmission
-+ * selection configuration
-+ * @tc_sched: an array of traffic-classes
-+ */
-+struct dpni_tx_selection_cfg {
-+ struct dpni_tx_schedule_cfg tc_sched[DPNI_MAX_TC];
-+};
-+
-+/**
-+ * dpni_set_tx_selection() - Set transmission selection configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @cfg: transmission selection configuration
-+ *
-+ * warning: Allowed only when DPNI is disabled
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_tx_selection(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_tx_selection_cfg *cfg);
-+
-+/**
-+ * enum dpni_dist_mode - DPNI distribution mode
-+ * @DPNI_DIST_MODE_NONE: No distribution
-+ * @DPNI_DIST_MODE_HASH: Use hash distribution; only relevant if
-+ * the 'DPNI_OPT_DIST_HASH' option was set at DPNI creation
-+ * @DPNI_DIST_MODE_FS: Use explicit flow steering; only relevant if
-+ * the 'DPNI_OPT_DIST_FS' option was set at DPNI creation
-+ */
-+enum dpni_dist_mode {
-+ DPNI_DIST_MODE_NONE = 0,
-+ DPNI_DIST_MODE_HASH = 1,
-+ DPNI_DIST_MODE_FS = 2
-+};
-+
-+/**
-+ * enum dpni_fs_miss_action - DPNI Flow Steering miss action
-+ * @DPNI_FS_MISS_DROP: In case of no-match, drop the frame
-+ * @DPNI_FS_MISS_EXPLICIT_FLOWID: In case of no-match, use explicit flow-id
-+ * @DPNI_FS_MISS_HASH: In case of no-match, distribute using hash
-+ */
-+enum dpni_fs_miss_action {
-+ DPNI_FS_MISS_DROP = 0,
-+ DPNI_FS_MISS_EXPLICIT_FLOWID = 1,
-+ DPNI_FS_MISS_HASH = 2
-+};
-+
-+/**
-+ * struct dpni_fs_tbl_cfg - Flow Steering table configuration
-+ * @miss_action: Miss action selection
-+ * @default_flow_id: Used when 'miss_action = DPNI_FS_MISS_EXPLICIT_FLOWID'
-+ */
-+struct dpni_fs_tbl_cfg {
-+ enum dpni_fs_miss_action miss_action;
-+ uint16_t default_flow_id;
-+};
-+
-+/**
-+ * dpni_prepare_key_cfg() - Prepare extract parameters
-+ * @cfg: Key Generation profile (rule) definition
-+ * @key_cfg_buf: Zeroed 256 bytes of memory before mapping it to DMA
-+ *
-+ * This function has to be called before the following functions:
-+ * - dpni_set_rx_tc_dist()
-+ * - dpni_set_qos_table()
-+ */
-+int dpni_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
-+ uint8_t *key_cfg_buf);
-+
-+/**
-+ * struct dpni_rx_tc_dist_cfg - Rx traffic class distribution configuration
-+ * @dist_size: Set the distribution size;
-+ * supported values: 1,2,3,4,6,7,8,12,14,16,24,28,32,48,56,64,96,
-+ * 112,128,192,224,256,384,448,512,768,896,1024
-+ * @dist_mode: Distribution mode
-+ * @key_cfg_iova: I/O virtual address of 256 bytes of DMA-able memory filled
-+ * with the extractions to be used for the distribution key, by calling
-+ * dpni_prepare_key_cfg(); relevant only when
-+ * 'dist_mode != DPNI_DIST_MODE_NONE', otherwise it can be '0'
-+ * @fs_cfg: Flow Steering table configuration; only relevant if
-+ * 'dist_mode = DPNI_DIST_MODE_FS'
-+ */
-+struct dpni_rx_tc_dist_cfg {
-+ uint16_t dist_size;
-+ enum dpni_dist_mode dist_mode;
-+ uint64_t key_cfg_iova;
-+ struct dpni_fs_tbl_cfg fs_cfg;
-+};
-+
-+/**
-+ * dpni_set_rx_tc_dist() - Set Rx traffic class distribution configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @cfg: Traffic class distribution configuration
-+ *
-+ * warning: if 'dist_mode != DPNI_DIST_MODE_NONE', call dpni_prepare_key_cfg()
-+ * first to prepare the key_cfg_iova parameter
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ const struct dpni_rx_tc_dist_cfg *cfg);
-+
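The `dist_size` field of `dpni_rx_tc_dist_cfg` only accepts the discrete values listed in its kernel-doc. A small pre-flight check like the one below can reject unsupported sizes before issuing the MC command; `dpni_dist_size_supported` is a helper name introduced here, not part of the API.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Return non-zero if dist_size is one of the values the kernel-doc for
 * struct dpni_rx_tc_dist_cfg lists as supported. */
static int dpni_dist_size_supported(uint16_t dist_size)
{
	static const uint16_t supported[] = {
		1, 2, 3, 4, 6, 7, 8, 12, 14, 16, 24, 28, 32, 48, 56, 64,
		96, 112, 128, 192, 224, 256, 384, 448, 512, 768, 896, 1024
	};
	size_t i;

	for (i = 0; i < sizeof(supported) / sizeof(supported[0]); i++)
		if (supported[i] == dist_size)
			return 1;
	return 0;
}
```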
-+/**
-+ * Set to select color aware mode (otherwise - color blind)
-+ */
-+#define DPNI_POLICER_OPT_COLOR_AWARE 0x00000001
-+/**
-+ * Set to discard frame with RED color
-+ */
-+#define DPNI_POLICER_OPT_DISCARD_RED 0x00000002
-+
-+/**
-+ * enum dpni_policer_mode - selecting the policer mode
-+ * @DPNI_POLICER_MODE_NONE: Policer is disabled
-+ * @DPNI_POLICER_MODE_PASS_THROUGH: Policer pass through
-+ * @DPNI_POLICER_MODE_RFC_2698: Policer algorithm RFC 2698
-+ * @DPNI_POLICER_MODE_RFC_4115: Policer algorithm RFC 4115
-+ */
-+enum dpni_policer_mode {
-+ DPNI_POLICER_MODE_NONE = 0,
-+ DPNI_POLICER_MODE_PASS_THROUGH,
-+ DPNI_POLICER_MODE_RFC_2698,
-+ DPNI_POLICER_MODE_RFC_4115
-+};
-+
-+/**
-+ * enum dpni_policer_unit - DPNI policer units
-+ * @DPNI_POLICER_UNIT_BYTES: bytes units
-+ * @DPNI_POLICER_UNIT_FRAMES: frames units
-+ */
-+enum dpni_policer_unit {
-+ DPNI_POLICER_UNIT_BYTES = 0,
-+ DPNI_POLICER_UNIT_FRAMES
-+};
-+
-+/**
-+ * enum dpni_policer_color - selecting the policer color
-+ * @DPNI_POLICER_COLOR_GREEN: Green color
-+ * @DPNI_POLICER_COLOR_YELLOW: Yellow color
-+ * @DPNI_POLICER_COLOR_RED: Red color
-+ */
-+enum dpni_policer_color {
-+ DPNI_POLICER_COLOR_GREEN = 0,
-+ DPNI_POLICER_COLOR_YELLOW,
-+ DPNI_POLICER_COLOR_RED
-+};
-+
-+/**
-+ * struct dpni_rx_tc_policing_cfg - Policer configuration
-+ * @options: Mask of available options; use 'DPNI_POLICER_OPT_<X>' values
-+ * @mode: policer mode
-+ * @default_color: In pass-through mode the policer re-colors any incoming
-+ * packet with this color. In color-aware non-pass-through mode the
-+ * policer re-colors with this color all packets with FD[DROPP] > 2.
-+ * @units: Bytes or Packets
-+ * @cir: Committed information rate (CIR) in Kbps or packets/second
-+ * @cbs: Committed burst size (CBS) in bytes or packets
-+ * @eir: Peak information rate (PIR, rfc2698) in Kbps or packets/second
-+ * Excess information rate (EIR, rfc4115) in Kbps or packets/second
-+ * @ebs: Peak burst size (PBS, rfc2698) in bytes or packets
-+ * Excess burst size (EBS, rfc4115) in bytes or packets
-+ */
-+struct dpni_rx_tc_policing_cfg {
-+ uint32_t options;
-+ enum dpni_policer_mode mode;
-+ enum dpni_policer_unit units;
-+ enum dpni_policer_color default_color;
-+ uint32_t cir;
-+ uint32_t cbs;
-+ uint32_t eir;
-+ uint32_t ebs;
-+};
-+
-+/**
-+ * dpni_set_rx_tc_policing() - Set Rx traffic class policing configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @cfg: Traffic class policing configuration
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_set_rx_tc_policing(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ const struct dpni_rx_tc_policing_cfg *cfg);
-+
-+/**
-+ * dpni_get_rx_tc_policing() - Get Rx traffic class policing configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @cfg: Traffic class policing configuration
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_get_rx_tc_policing(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ struct dpni_rx_tc_policing_cfg *cfg);
-+
-+/**
-+ * enum dpni_congestion_unit - DPNI congestion units
-+ * @DPNI_CONGESTION_UNIT_BYTES: bytes units
-+ * @DPNI_CONGESTION_UNIT_FRAMES: frames units
-+ */
-+enum dpni_congestion_unit {
-+ DPNI_CONGESTION_UNIT_BYTES = 0,
-+ DPNI_CONGESTION_UNIT_FRAMES
-+};
-+
-+/**
-+ * enum dpni_early_drop_mode - DPNI early drop mode
-+ * @DPNI_EARLY_DROP_MODE_NONE: early drop is disabled
-+ * @DPNI_EARLY_DROP_MODE_TAIL: early drop in taildrop mode
-+ * @DPNI_EARLY_DROP_MODE_WRED: early drop in WRED mode
-+ */
-+enum dpni_early_drop_mode {
-+ DPNI_EARLY_DROP_MODE_NONE = 0,
-+ DPNI_EARLY_DROP_MODE_TAIL,
-+ DPNI_EARLY_DROP_MODE_WRED
-+};
-+
-+/**
-+ * struct dpni_wred_cfg - WRED configuration
-+ * @max_threshold: maximum threshold at which packets may be discarded; above
-+ * this threshold all packets are discarded. Must be less than 2^39;
-+ * approximated as (x + 256) * 2^(y - 1) due to the HW
-+ * implementation.
-+ * @min_threshold: minimum threshold at which packets may be discarded
-+ * @drop_probability: probability that a packet will be discarded (1-100,
-+ * associated with the max_threshold).
-+ */
-+struct dpni_wred_cfg {
-+ uint64_t max_threshold;
-+ uint64_t min_threshold;
-+ uint8_t drop_probability;
-+};
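The `(x + 256) * 2^(y - 1)` approximation mentioned for the WRED thresholds can be sketched as a mantissa/exponent reduction: shift the value down until the mantissa fits in `x + 256` with `x` in [0, 255], then rebuild the representable value. This is an interpretation of the kernel-doc note, not the hardware's exact rounding; `wred_threshold_approx` is a name introduced here.

```c
#include <assert.h>
#include <stdint.h>

/* Approximate a threshold in the HW form (x + 256) * 2^(y - 1),
 * rounding down; values below 256 clamp to the smallest encodable 256. */
static uint64_t wred_threshold_approx(uint64_t thresh)
{
	uint64_t mant = thresh;
	unsigned int y = 1;

	/* Reduce the mantissa until it fits in [0, 511] (i.e. x + 256
	 * with x in [0, 255], once clamped below). */
	while (mant > 511) {
		mant >>= 1;
		y++;
	}
	if (mant < 256)
		mant = 256;

	return mant << (y - 1);
}
```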
-+
-+/**
-+ * struct dpni_early_drop_cfg - early-drop configuration
-+ * @mode: drop mode
-+ * @units: units type
-+ * @green: WRED - 'green' configuration
-+ * @yellow: WRED - 'yellow' configuration
-+ * @red: WRED - 'red' configuration
-+ * @tail_drop_threshold: tail drop threshold
-+ */
-+struct dpni_early_drop_cfg {
-+ enum dpni_early_drop_mode mode;
-+ enum dpni_congestion_unit units;
-+
-+ struct dpni_wred_cfg green;
-+ struct dpni_wred_cfg yellow;
-+ struct dpni_wred_cfg red;
-+
-+ uint32_t tail_drop_threshold;
-+};
-+
-+/**
-+ * dpni_prepare_early_drop() - prepare an early drop configuration.
-+ * @cfg: Early-drop configuration
-+ * @early_drop_buf: Zeroed 256 bytes of memory before mapping it to DMA
-+ *
-+ * This function has to be called before dpni_set_rx_tc_early_drop or
-+ * dpni_set_tx_tc_early_drop
-+ *
-+ */
-+void dpni_prepare_early_drop(const struct dpni_early_drop_cfg *cfg,
-+ uint8_t *early_drop_buf);
-+
-+/**
-+ * dpni_extract_early_drop() - extract the early drop configuration.
-+ * @cfg: Early-drop configuration
-+ * @early_drop_buf: Zeroed 256 bytes of memory before mapping it to DMA
-+ *
-+ * This function has to be called after dpni_get_rx_tc_early_drop or
-+ * dpni_get_tx_tc_early_drop
-+ *
-+ */
-+void dpni_extract_early_drop(struct dpni_early_drop_cfg *cfg,
-+ const uint8_t *early_drop_buf);
-+
-+/**
-+ * dpni_set_rx_tc_early_drop() - Set Rx traffic class early-drop configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory filled
-+ * with the early-drop configuration by calling dpni_prepare_early_drop()
-+ *
-+ * warning: Before calling this function, call dpni_prepare_early_drop() to
-+ * prepare the early_drop_iova parameter
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_set_rx_tc_early_drop(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ uint64_t early_drop_iova);
-+
-+/**
-+ * dpni_get_rx_tc_early_drop() - Get Rx traffic class early-drop configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory
-+ *
-+ * warning: After calling this function, call dpni_extract_early_drop() to
-+ * get the early drop configuration
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_get_rx_tc_early_drop(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ uint64_t early_drop_iova);
-+
-+/**
-+ * dpni_set_tx_tc_early_drop() - Set Tx traffic class early-drop configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory filled
-+ * with the early-drop configuration by calling dpni_prepare_early_drop()
-+ *
-+ * warning: Before calling this function, call dpni_prepare_early_drop() to
-+ * prepare the early_drop_iova parameter
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_set_tx_tc_early_drop(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ uint64_t early_drop_iova);
-+
-+/**
-+ * dpni_get_tx_tc_early_drop() - Get Tx traffic class early-drop configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @early_drop_iova: I/O virtual address of 256 bytes DMA-able memory
-+ *
-+ * warning: After calling this function, call dpni_extract_early_drop() to
-+ * get the early drop configuration
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_get_tx_tc_early_drop(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ uint64_t early_drop_iova);
-+
-+/**
-+ * enum dpni_dest - DPNI destination types
-+ * @DPNI_DEST_NONE: Unassigned destination; The queue is set in parked mode and
-+ * does not generate FQDAN notifications; user is expected to
-+ * dequeue from the queue based on polling or other user-defined
-+ * method
-+ * @DPNI_DEST_DPIO: The queue is set in schedule mode and generates FQDAN
-+ * notifications to the specified DPIO; user is expected to dequeue
-+ * from the queue only after notification is received
-+ * @DPNI_DEST_DPCON: The queue is set in schedule mode and does not generate
-+ * FQDAN notifications, but is connected to the specified DPCON
-+ * object; user is expected to dequeue from the DPCON channel
-+ */
-+enum dpni_dest {
-+ DPNI_DEST_NONE = 0,
-+ DPNI_DEST_DPIO = 1,
-+ DPNI_DEST_DPCON = 2
-+};
-+
-+/**
-+ * struct dpni_dest_cfg - Structure representing DPNI destination parameters
-+ * @dest_type: Destination type
-+ * @dest_id: Either DPIO ID or DPCON ID, depending on the destination type
-+ * @priority: Priority selection within the DPIO or DPCON channel; valid values
-+ * are 0-1 or 0-7, depending on the number of priorities in that
-+ * channel; not relevant for 'DPNI_DEST_NONE' option
-+ */
-+struct dpni_dest_cfg {
-+ enum dpni_dest dest_type;
-+ int dest_id;
-+ uint8_t priority;
-+};
-+
-+/* DPNI congestion options */
-+
-+/**
-+ * CSCN message is written to message_iova once entering a
-+ * congestion state (see 'threshold_entry')
-+ */
-+#define DPNI_CONG_OPT_WRITE_MEM_ON_ENTER 0x00000001
-+/**
-+ * CSCN message is written to message_iova once exiting a
-+ * congestion state (see 'threshold_exit')
-+ */
-+#define DPNI_CONG_OPT_WRITE_MEM_ON_EXIT 0x00000002
-+/**
-+ * CSCN write will attempt to allocate into a cache (coherent write);
-+ * valid only if 'DPNI_CONG_OPT_WRITE_MEM_<X>' is selected
-+ */
-+#define DPNI_CONG_OPT_COHERENT_WRITE 0x00000004
-+/**
-+ * if 'dest_cfg.dest_type != DPNI_DEST_NONE' CSCN message is sent to
-+ * DPIO/DPCON's WQ channel once entering a congestion state
-+ * (see 'threshold_entry')
-+ */
-+#define DPNI_CONG_OPT_NOTIFY_DEST_ON_ENTER 0x00000008
-+/**
-+ * if 'dest_cfg.dest_type != DPNI_DEST_NONE' CSCN message is sent to
-+ * DPIO/DPCON's WQ channel once exiting a congestion state
-+ * (see 'threshold_exit')
-+ */
-+#define DPNI_CONG_OPT_NOTIFY_DEST_ON_EXIT 0x00000010
-+/**
-+ * if 'dest_cfg.dest_type != DPNI_DEST_NONE' when the CSCN is written to the
-+ * sw-portal's DQRR, the DQRI interrupt is asserted immediately (if enabled)
-+ */
-+#define DPNI_CONG_OPT_INTR_COALESCING_DISABLED 0x00000020
-+
-+/**
-+ * struct dpni_congestion_notification_cfg - congestion notification
-+ * configuration
-+ * @units: units type
-+ * @threshold_entry: above this threshold we enter a congestion state;
-+ * set it to '0' to disable it
-+ * @threshold_exit: below this threshold we exit the congestion state.
-+ * @message_ctx: The context that will be part of the CSCN message
-+ * @message_iova: I/O virtual address (must be in DMA-able memory),
-+ * must be 16B aligned; valid only if 'DPNI_CONG_OPT_WRITE_MEM_<X>' is
-+ * contained in 'options'
-+ * @dest_cfg: CSCN can be sent to either a DPIO or a DPCON WQ channel
-+ * @options: Mask of available options; use 'DPNI_CONG_OPT_<X>' values
-+ */
-+struct dpni_congestion_notification_cfg {
-+ enum dpni_congestion_unit units;
-+ uint32_t threshold_entry;
-+ uint32_t threshold_exit;
-+ uint64_t message_ctx;
-+ uint64_t message_iova;
-+ struct dpni_dest_cfg dest_cfg;
-+ uint16_t options;
-+};
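The option masks above carry a documented dependency: `DPNI_CONG_OPT_COHERENT_WRITE` is valid only if one of the `DPNI_CONG_OPT_WRITE_MEM_<X>` options is also selected. A caller-side sanity check might look like the sketch below; `cong_options_valid` is a helper name introduced for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Congestion option masks, as declared in the header above. */
#define DPNI_CONG_OPT_WRITE_MEM_ON_ENTER	0x00000001
#define DPNI_CONG_OPT_WRITE_MEM_ON_EXIT		0x00000002
#define DPNI_CONG_OPT_COHERENT_WRITE		0x00000004

/* Reject an options mask that requests a coherent write without also
 * requesting a memory write on congestion entry or exit. */
static int cong_options_valid(uint16_t options)
{
	if ((options & DPNI_CONG_OPT_COHERENT_WRITE) &&
	    !(options & (DPNI_CONG_OPT_WRITE_MEM_ON_ENTER |
			 DPNI_CONG_OPT_WRITE_MEM_ON_EXIT)))
		return 0;
	return 1;
}
```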
-+
-+/**
-+ * dpni_set_rx_tc_congestion_notification() - Set Rx traffic class congestion
-+ * notification configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @cfg: congestion notification configuration
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_set_rx_tc_congestion_notification(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ const struct dpni_congestion_notification_cfg *cfg);
-+
-+/**
-+ * dpni_get_rx_tc_congestion_notification() - Get Rx traffic class congestion
-+ * notification configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @cfg: congestion notification configuration
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_get_rx_tc_congestion_notification(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ struct dpni_congestion_notification_cfg *cfg);
-+
-+/**
-+ * dpni_set_tx_tc_congestion_notification() - Set Tx traffic class congestion
-+ * notification configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @cfg: congestion notification configuration
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_set_tx_tc_congestion_notification(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ const struct dpni_congestion_notification_cfg *cfg);
-+
-+/**
-+ * dpni_get_tx_tc_congestion_notification() - Get Tx traffic class congestion
-+ * notification configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @cfg: congestion notification configuration
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_get_tx_tc_congestion_notification(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ struct dpni_congestion_notification_cfg *cfg);
-+
-+/**
-+ * enum dpni_flc_type - DPNI FLC types
-+ * @DPNI_FLC_USER_DEFINED: select the FLC to be used for user defined value
-+ * @DPNI_FLC_STASH: select the FLC to be used for stash control
-+ */
-+enum dpni_flc_type {
-+ DPNI_FLC_USER_DEFINED = 0,
-+ DPNI_FLC_STASH = 1,
-+};
-+
-+/**
-+ * enum dpni_stash_size - DPNI FLC stashing size
-+ * @DPNI_STASH_SIZE_0B: no stash
-+ * @DPNI_STASH_SIZE_64B: stashes 64 bytes
-+ * @DPNI_STASH_SIZE_128B: stashes 128 bytes
-+ * @DPNI_STASH_SIZE_192B: stashes 192 bytes
-+ */
-+enum dpni_stash_size {
-+ DPNI_STASH_SIZE_0B = 0,
-+ DPNI_STASH_SIZE_64B = 1,
-+ DPNI_STASH_SIZE_128B = 2,
-+ DPNI_STASH_SIZE_192B = 3,
-+};
-+
-+/* DPNI FLC stash options */
-+
-+/**
-+ * stashes the whole annotation area (up to 192 bytes)
-+ */
-+#define DPNI_FLC_STASH_FRAME_ANNOTATION 0x00000001
-+
-+/**
-+ * struct dpni_flc_cfg - Structure representing DPNI FLC configuration
-+ * @flc_type: FLC type
-+ * @options: Mask of available options;
-+ * use 'DPNI_FLC_STASH_<X>' values
-+ * @frame_data_size: Size of frame data to be stashed
-+ * @flow_context_size: Size of flow context to be stashed
-+ * @flow_context: 1. In case flc_type is 'DPNI_FLC_USER_DEFINED':
-+ * this value will be provided in the frame descriptor
-+ * (FD[FLC])
-+ * 2. In case flc_type is 'DPNI_FLC_STASH':
-+ * this value will be I/O virtual address of the
-+ * flow-context;
-+ * Must be cacheline-aligned and DMA-able memory
-+ */
-+struct dpni_flc_cfg {
-+ enum dpni_flc_type flc_type;
-+ uint32_t options;
-+ enum dpni_stash_size frame_data_size;
-+ enum dpni_stash_size flow_context_size;
-+ uint64_t flow_context;
-+};
-+
-+/**
-+ * DPNI queue modification options
-+ */
-+
-+/**
-+ * Select to modify the user's context associated with the queue
-+ */
-+#define DPNI_QUEUE_OPT_USER_CTX 0x00000001
-+/**
-+ * Select to modify the queue's destination
-+ */
-+#define DPNI_QUEUE_OPT_DEST 0x00000002
-+/**
-+ * Select to modify the flow-context parameters;
-+ * not applicable for Tx-conf/Err queues as the FD comes from the user
-+ */
-+#define DPNI_QUEUE_OPT_FLC 0x00000004
-+/**
-+ * Select to modify the queue's order preservation
-+ */
-+#define DPNI_QUEUE_OPT_ORDER_PRESERVATION 0x00000008
-+/**
-+ * Select to modify the queue's tail-drop threshold
-+ */
-+#define DPNI_QUEUE_OPT_TAILDROP_THRESHOLD 0x00000010
-+
-+/**
-+ * struct dpni_queue_cfg - Structure representing queue configuration
-+ * @options: Flags representing the suggested modifications to the queue;
-+ * Use any combination of 'DPNI_QUEUE_OPT_<X>' flags
-+ * @user_ctx: User context value provided in the frame descriptor of each
-+ * dequeued frame; valid only if 'DPNI_QUEUE_OPT_USER_CTX'
-+ * is contained in 'options'
-+ * @dest_cfg: Queue destination parameters;
-+ * valid only if 'DPNI_QUEUE_OPT_DEST' is contained in 'options'
-+ * @flc_cfg: Flow context configuration; in case the TC's distribution
-+ * is either NONE or HASH, the FLC settings of flow #0 are used;
-+ * in the case of FS (flow-steering), the flow's own FLC settings
-+ * are used;
-+ * valid only if 'DPNI_QUEUE_OPT_FLC' is contained in 'options'
-+ * @order_preservation_en: enable/disable order preservation;
-+ * valid only if 'DPNI_QUEUE_OPT_ORDER_PRESERVATION' is contained
-+ * in 'options'
-+ * @tail_drop_threshold: set the queue's tail drop threshold in bytes;
-+ * '0' value disable the threshold; maximum value is 0xE000000;
-+ * valid only if 'DPNI_QUEUE_OPT_TAILDROP_THRESHOLD' is contained
-+ * in 'options'
-+ */
-+struct dpni_queue_cfg {
-+ uint32_t options;
-+ uint64_t user_ctx;
-+ struct dpni_dest_cfg dest_cfg;
-+ struct dpni_flc_cfg flc_cfg;
-+ int order_preservation_en;
-+ uint32_t tail_drop_threshold;
-+};
-+
-+/**
-+ * struct dpni_queue_attr - Structure representing queue attributes
-+ * @user_ctx: User context value provided in the frame descriptor of each
-+ * dequeued frame
-+ * @dest_cfg: Queue destination configuration
-+ * @flc_cfg: Flow context configuration
-+ * @order_preservation_en: enable/disable order preservation
-+ * @tail_drop_threshold: queue's tail drop threshold in bytes;
-+ * @fqid: Virtual fqid value to be used for dequeue operations
-+ */
-+struct dpni_queue_attr {
-+ uint64_t user_ctx;
-+ struct dpni_dest_cfg dest_cfg;
-+ struct dpni_flc_cfg flc_cfg;
-+ int order_preservation_en;
-+ uint32_t tail_drop_threshold;
-+
-+ uint32_t fqid;
-+};
-+
-+/**
-+ * DPNI Tx flow modification options
-+ */
-+
-+/**
-+ * Select to modify the settings for dedicate Tx confirmation/error
-+ */
-+#define DPNI_TX_FLOW_OPT_TX_CONF_ERROR 0x00000001
-+/**
-+ * Select to modify the L3 checksum generation setting
-+ */
-+#define DPNI_TX_FLOW_OPT_L3_CHKSUM_GEN 0x00000010
-+/**
-+ * Select to modify the L4 checksum generation setting
-+ */
-+#define DPNI_TX_FLOW_OPT_L4_CHKSUM_GEN 0x00000020
-+
-+/**
-+ * struct dpni_tx_flow_cfg - Structure representing Tx flow configuration
-+ * @options: Flags representing the suggested modifications to the Tx flow;
-+ * Use any combination 'DPNI_TX_FLOW_OPT_<X>' flags
-+ * @use_common_tx_conf_queue: Set to '1' to use the common (default) Tx
-+ * confirmation and error queue; Set to '0' to use the private
-+ * Tx confirmation and error queue; valid only if
-+ * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' wasn't set at DPNI creation
-+ * and 'DPNI_TX_FLOW_OPT_TX_CONF_ERROR' is contained in 'options'
-+ * @l3_chksum_gen: Set to '1' to enable L3 checksum generation; '0' to disable;
-+ * valid only if 'DPNI_TX_FLOW_OPT_L3_CHKSUM_GEN' is contained in 'options'
-+ * @l4_chksum_gen: Set to '1' to enable L4 checksum generation; '0' to disable;
-+ * valid only if 'DPNI_TX_FLOW_OPT_L4_CHKSUM_GEN' is contained in 'options'
-+ */
-+struct dpni_tx_flow_cfg {
-+ uint32_t options;
-+ int use_common_tx_conf_queue;
-+ int l3_chksum_gen;
-+ int l4_chksum_gen;
-+};
-+
-+/**
-+ * dpni_set_tx_flow() - Set Tx flow configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @flow_id: Provides (or returns) the sender's flow ID;
-+ * for each new sender set (*flow_id) to 'DPNI_NEW_FLOW_ID' to generate
-+ * a new flow_id; this ID should be used as the QDBIN argument
-+ * in enqueue operations
-+ * @cfg: Tx flow configuration
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_tx_flow(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t *flow_id,
-+ const struct dpni_tx_flow_cfg *cfg);
-+
-+/**
-+ * struct dpni_tx_flow_attr - Structure representing Tx flow attributes
-+ * @use_common_tx_conf_queue: '1' if using common (default) Tx confirmation and
-+ * error queue; '0' if using private Tx confirmation and error queue
-+ * @l3_chksum_gen: '1' if L3 checksum generation is enabled; '0' if disabled
-+ * @l4_chksum_gen: '1' if L4 checksum generation is enabled; '0' if disabled
-+ */
-+struct dpni_tx_flow_attr {
-+ int use_common_tx_conf_queue;
-+ int l3_chksum_gen;
-+ int l4_chksum_gen;
-+};
-+
-+/**
-+ * dpni_get_tx_flow() - Get Tx flow attributes
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @flow_id: The sender's flow ID, as returned by the
-+ * dpni_set_tx_flow() function
-+ * @attr: Returned Tx flow attributes
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_tx_flow(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t flow_id,
-+ struct dpni_tx_flow_attr *attr);
-+
-+/**
-+ * struct dpni_tx_conf_cfg - Structure representing Tx conf configuration
-+ * @errors_only: Set to '1' to report back only error frames;
-+ * Set to '0' to confirm transmission/error for all transmitted frames;
-+ * @queue_cfg: Queue configuration
-+ */
-+struct dpni_tx_conf_cfg {
-+ int errors_only;
-+ struct dpni_queue_cfg queue_cfg;
-+};
-+
-+/**
-+ * dpni_set_tx_conf() - Set Tx confirmation and error queue configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @flow_id: The sender's flow ID, as returned by the
-+ * dpni_set_tx_flow() function;
-+ * use 'DPNI_COMMON_TX_CONF' for common tx-conf
-+ * @cfg: Queue configuration
-+ *
-+ * If either 'DPNI_OPT_TX_CONF_DISABLED' or
-+ * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' were selected at DPNI creation,
-+ * this function can ONLY be used with 'flow_id == DPNI_COMMON_TX_CONF';
-+ * i.e. only serve the common tx-conf-err queue;
-+ * if 'DPNI_OPT_TX_CONF_DISABLED' was selected, only error frames are reported
-+ * back - successfully transmitted frames are not confirmed. Otherwise, all
-+ * transmitted frames are sent for confirmation.
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_tx_conf(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t flow_id,
-+ const struct dpni_tx_conf_cfg *cfg);
-+
-+/**
-+ * struct dpni_tx_conf_attr - Structure representing Tx conf attributes
-+ * @errors_only: '1' if only error frames are reported back; '0' if all
-+ * transmitted frames are confirmed
-+ * @queue_attr: Queue attributes
-+ */
-+struct dpni_tx_conf_attr {
-+ int errors_only;
-+ struct dpni_queue_attr queue_attr;
-+};
-+
-+/**
-+ * dpni_get_tx_conf() - Get Tx confirmation and error queue attributes
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @flow_id: The sender's flow ID, as returned by the
-+ * dpni_set_tx_flow() function;
-+ * use 'DPNI_COMMON_TX_CONF' for common tx-conf
-+ * @attr: Returned tx-conf attributes
-+ *
-+ * If either 'DPNI_OPT_TX_CONF_DISABLED' or
-+ * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' were selected at DPNI creation,
-+ * this function can ONLY be used with 'flow_id == DPNI_COMMON_TX_CONF';
-+ * i.e. only serve the common tx-conf-err queue;
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_tx_conf(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t flow_id,
-+ struct dpni_tx_conf_attr *attr);
-+
-+/**
-+ * dpni_set_tx_conf_congestion_notification() - Set Tx conf congestion
-+ * notification configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @flow_id: The sender's flow ID, as returned by the
-+ * dpni_set_tx_flow() function;
-+ * use 'DPNI_COMMON_TX_CONF' for common tx-conf
-+ * @cfg: congestion notification configuration
-+ *
-+ * If either 'DPNI_OPT_TX_CONF_DISABLED' or
-+ * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' were selected at DPNI creation,
-+ * this function can ONLY be used with 'flow_id == DPNI_COMMON_TX_CONF';
-+ * i.e. only serve the common tx-conf-err queue;
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_set_tx_conf_congestion_notification(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t flow_id,
-+ const struct dpni_congestion_notification_cfg *cfg);
-+
-+/**
-+ * dpni_get_tx_conf_congestion_notification() - Get Tx conf congestion
-+ * notification configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @flow_id: The sender's flow ID, as returned by the
-+ * dpni_set_tx_flow() function;
-+ * use 'DPNI_COMMON_TX_CONF' for common tx-conf
-+ * @cfg: congestion notification
-+ *
-+ * If either 'DPNI_OPT_TX_CONF_DISABLED' or
-+ * 'DPNI_OPT_PRIVATE_TX_CONF_ERROR_DISABLED' were selected at DPNI creation,
-+ * this function can ONLY be used with 'flow_id == DPNI_COMMON_TX_CONF';
-+ * i.e. only serve the common tx-conf-err queue;
-+ *
-+ * Return: '0' on Success; error code otherwise.
-+ */
-+int dpni_get_tx_conf_congestion_notification(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint16_t flow_id,
-+ struct dpni_congestion_notification_cfg *cfg);
-+
-+/**
-+ * dpni_set_tx_conf_revoke() - Tx confirmation revocation
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @revoke: revoke or not
-+ *
-+ * This function is useful only when 'DPNI_OPT_TX_CONF_DISABLED' is not
-+ * selected at DPNI creation.
-+ * Calling this function with 'revoke' set to '1' disables all transmit
-+ * confirmation (including the private confirmation queues), regardless of
-+ * previous settings; Note that in this case, Tx error frames are still
-+ * enqueued to the general transmit errors queue.
-+ * Calling this function with 'revoke' set to '0' restores the previous
-+ * settings for both general and private transmit confirmation.
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_tx_conf_revoke(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int revoke);
-+
-+/**
-+ * dpni_set_rx_flow() - Set Rx flow configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7);
-+ * use 'DPNI_ALL_TCS' to set all TCs and all flows
-+ * @flow_id: Rx flow id within the traffic class; use
-+ * 'DPNI_ALL_TC_FLOWS' to set all flows within
-+ * this tc_id; ignored if tc_id is set to
-+ * 'DPNI_ALL_TCS';
-+ * @cfg: Rx flow configuration
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_rx_flow(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ uint16_t flow_id,
-+ const struct dpni_queue_cfg *cfg);
-+
-+/**
-+ * dpni_get_rx_flow() - Get Rx flow attributes
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @flow_id: Rx flow id within the traffic class
-+ * @attr: Returned Rx flow attributes
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_rx_flow(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ uint16_t flow_id,
-+ struct dpni_queue_attr *attr);
-+
-+/**
-+ * dpni_set_rx_err_queue() - Set Rx error queue configuration
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @cfg: Queue configuration
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_rx_err_queue(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_queue_cfg *cfg);
-+
-+/**
-+ * dpni_get_rx_err_queue() - Get Rx error queue attributes
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @attr: Returned Queue attributes
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_get_rx_err_queue(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ struct dpni_queue_attr *attr);
-+
-+/**
-+ * struct dpni_qos_tbl_cfg - Structure representing QOS table configuration
-+ * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
-+ * key extractions to be used as the QoS criteria by calling
-+ * dpni_prepare_key_cfg()
-+ * @discard_on_miss: Set to '1' to discard frames in case of no match (miss);
-+ * '0' to use the 'default_tc' in such cases
-+ * @default_tc: Used in case of no-match and 'discard_on_miss'= 0
-+ */
-+struct dpni_qos_tbl_cfg {
-+ uint64_t key_cfg_iova;
-+ int discard_on_miss;
-+ uint8_t default_tc;
-+};
-+
-+/**
-+ * dpni_set_qos_table() - Set QoS mapping table
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @cfg: QoS table configuration
-+ *
-+ * This function and all QoS-related functions require that
-+ *'max_tcs > 1' was set at DPNI creation.
-+ *
-+ * warning: Before calling this function, call dpni_prepare_key_cfg() to
-+ * prepare the key_cfg_iova parameter
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_qos_table(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_qos_tbl_cfg *cfg);
-+
-+/**
-+ * struct dpni_rule_cfg - Rule configuration for table lookup
-+ * @key_iova: I/O virtual address of the key (must be in DMA-able memory)
-+ * @mask_iova: I/O virtual address of the mask (must be in DMA-able memory)
-+ * @key_size: key and mask size (in bytes)
-+ */
-+struct dpni_rule_cfg {
-+ uint64_t key_iova;
-+ uint64_t mask_iova;
-+ uint8_t key_size;
-+};
-+
-+/**
-+ * dpni_add_qos_entry() - Add QoS mapping entry (to select a traffic class)
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @cfg: QoS rule to add
-+ * @tc_id: Traffic class selection (0-7)
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_add_qos_entry(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_rule_cfg *cfg,
-+ uint8_t tc_id);
-+
-+/**
-+ * dpni_remove_qos_entry() - Remove QoS mapping entry
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @cfg: QoS rule to remove
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_remove_qos_entry(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ const struct dpni_rule_cfg *cfg);
-+
-+/**
-+ * dpni_clear_qos_table() - Clear all QoS mapping entries
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ *
-+ * Following this function call, all frames are directed to
-+ * the default traffic class (0)
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_clear_qos_table(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token);
-+
-+/**
-+ * dpni_add_fs_entry() - Add Flow Steering entry for a specific traffic class
-+ * (to select a flow ID)
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @cfg: Flow steering rule to add
-+ * @flow_id: Flow id selection (must be smaller than the
-+ * distribution size of the traffic class)
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_add_fs_entry(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ const struct dpni_rule_cfg *cfg,
-+ uint16_t flow_id);
-+
-+/**
-+ * dpni_remove_fs_entry() - Remove Flow Steering entry from a specific
-+ * traffic class
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ * @cfg: Flow steering rule to remove
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_remove_fs_entry(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id,
-+ const struct dpni_rule_cfg *cfg);
-+
-+/**
-+ * dpni_clear_fs_entries() - Clear all Flow Steering entries of a specific
-+ * traffic class
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @tc_id: Traffic class selection (0-7)
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_clear_fs_entries(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ uint8_t tc_id);
-+
-+/**
-+ * dpni_set_vlan_insertion() - Enable/disable VLAN insertion for egress frames
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Set to '1' to enable; '0' to disable
-+ *
-+ * Requires that the 'DPNI_OPT_VLAN_MANIPULATION' option is set
-+ * at DPNI creation.
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_vlan_insertion(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en);
-+
-+/**
-+ * dpni_set_vlan_removal() - Enable/disable VLAN removal for ingress frames
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Set to '1' to enable; '0' to disable
-+ *
-+ * Requires that the 'DPNI_OPT_VLAN_MANIPULATION' option is set
-+ * at DPNI creation.
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_vlan_removal(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en);
-+
-+/**
-+ * dpni_set_ipr() - Enable/disable IP reassembly of ingress frames
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Set to '1' to enable; '0' to disable
-+ *
-+ * Requires that the 'DPNI_OPT_IPR' option is set at DPNI creation.
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_ipr(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en);
-+
-+/**
-+ * dpni_set_ipf() - Enable/disable IP fragmentation of egress frames
-+ * @mc_io: Pointer to MC portal's I/O object
-+ * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
-+ * @token: Token of DPNI object
-+ * @en: Set to '1' to enable; '0' to disable
-+ *
-+ * Requires that the 'DPNI_OPT_IPF' option is set at DPNI
-+ * creation. Fragmentation is performed according to MTU value
-+ * set by dpni_set_mtu() function
-+ *
-+ * Return: '0' on Success; Error code otherwise.
-+ */
-+int dpni_set_ipf(struct fsl_mc_io *mc_io,
-+ uint32_t cmd_flags,
-+ uint16_t token,
-+ int en);
-+
-+#endif /* __FSL_DPNI_H */
---- a/drivers/staging/fsl-mc/include/mc-cmd.h
-+++ b/drivers/staging/fsl-mc/include/mc-cmd.h
-@@ -103,8 +103,11 @@ enum mc_cmd_status {
- #define MC_CMD_HDR_READ_FLAGS(_hdr) \
- ((u32)mc_dec((_hdr), MC_CMD_HDR_FLAGS_O, MC_CMD_HDR_FLAGS_S))
-
-+#define MC_PREP_OP(_ext, _param, _offset, _width, _type, _arg) \
-+ ((_ext)[_param] |= cpu_to_le64(mc_enc((_offset), (_width), _arg)))
-+
- #define MC_EXT_OP(_ext, _param, _offset, _width, _type, _arg) \
-- ((_ext)[_param] |= mc_enc((_offset), (_width), _arg))
-+ (_arg = (_type)mc_dec(cpu_to_le64(_ext[_param]), (_offset), (_width)))
-
- #define MC_CMD_OP(_cmd, _param, _offset, _width, _type, _arg) \
- ((_cmd).params[_param] |= mc_enc((_offset), (_width), _arg))
---- /dev/null
-+++ b/drivers/staging/fsl-mc/include/net.h
-@@ -0,0 +1,481 @@
-+/* Copyright 2013-2015 Freescale Semiconductor Inc.
-+ *
-+ * Redistribution and use in source and binary forms, with or without
-+ * modification, are permitted provided that the following conditions are met:
-+ * * Redistributions of source code must retain the above copyright
-+ * notice, this list of conditions and the following disclaimer.
-+ * * Redistributions in binary form must reproduce the above copyright
-+ * notice, this list of conditions and the following disclaimer in the
-+ * documentation and/or other materials provided with the distribution.
-+ * * Neither the name of the above-listed copyright holders nor the
-+ * names of any contributors may be used to endorse or promote products
-+ * derived from this software without specific prior written permission.
-+ *
-+ *
-+ * ALTERNATIVELY, this software may be distributed under the terms of the
-+ * GNU General Public License ("GPL") as published by the Free Software
-+ * Foundation, either version 2 of that License or (at your option) any
-+ * later version.
-+ *
-+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
-+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-+ * POSSIBILITY OF SUCH DAMAGE.
-+ */
-+#ifndef __FSL_NET_H
-+#define __FSL_NET_H
-+
-+#define LAST_HDR_INDEX 0xFFFFFFFF
-+
-+/*****************************************************************************/
-+/* Protocol fields */
-+/*****************************************************************************/
-+
-+/************************* Ethernet fields *********************************/
-+#define NH_FLD_ETH_DA (1)
-+#define NH_FLD_ETH_SA (NH_FLD_ETH_DA << 1)
-+#define NH_FLD_ETH_LENGTH (NH_FLD_ETH_DA << 2)
-+#define NH_FLD_ETH_TYPE (NH_FLD_ETH_DA << 3)
-+#define NH_FLD_ETH_FINAL_CKSUM (NH_FLD_ETH_DA << 4)
-+#define NH_FLD_ETH_PADDING (NH_FLD_ETH_DA << 5)
-+#define NH_FLD_ETH_ALL_FIELDS ((NH_FLD_ETH_DA << 6) - 1)
-+
-+#define NH_FLD_ETH_ADDR_SIZE 6
-+
-+/*************************** VLAN fields ***********************************/
-+#define NH_FLD_VLAN_VPRI (1)
-+#define NH_FLD_VLAN_CFI (NH_FLD_VLAN_VPRI << 1)
-+#define NH_FLD_VLAN_VID (NH_FLD_VLAN_VPRI << 2)
-+#define NH_FLD_VLAN_LENGTH (NH_FLD_VLAN_VPRI << 3)
-+#define NH_FLD_VLAN_TYPE (NH_FLD_VLAN_VPRI << 4)
-+#define NH_FLD_VLAN_ALL_FIELDS ((NH_FLD_VLAN_VPRI << 5) - 1)
-+
-+#define NH_FLD_VLAN_TCI (NH_FLD_VLAN_VPRI | \
-+ NH_FLD_VLAN_CFI | \
-+ NH_FLD_VLAN_VID)
-+
-+/************************ IP (generic) fields ******************************/
-+#define NH_FLD_IP_VER (1)
-+#define NH_FLD_IP_DSCP (NH_FLD_IP_VER << 2)
-+#define NH_FLD_IP_ECN (NH_FLD_IP_VER << 3)
-+#define NH_FLD_IP_PROTO (NH_FLD_IP_VER << 4)
-+#define NH_FLD_IP_SRC (NH_FLD_IP_VER << 5)
-+#define NH_FLD_IP_DST (NH_FLD_IP_VER << 6)
-+#define NH_FLD_IP_TOS_TC (NH_FLD_IP_VER << 7)
-+#define NH_FLD_IP_ID (NH_FLD_IP_VER << 8)
-+#define NH_FLD_IP_ALL_FIELDS ((NH_FLD_IP_VER << 9) - 1)
-+
-+#define NH_FLD_IP_PROTO_SIZE 1
-+
-+/***************************** IPV4 fields *********************************/
-+#define NH_FLD_IPV4_VER (1)
-+#define NH_FLD_IPV4_HDR_LEN (NH_FLD_IPV4_VER << 1)
-+#define NH_FLD_IPV4_TOS (NH_FLD_IPV4_VER << 2)
-+#define NH_FLD_IPV4_TOTAL_LEN (NH_FLD_IPV4_VER << 3)
-+#define NH_FLD_IPV4_ID (NH_FLD_IPV4_VER << 4)
-+#define NH_FLD_IPV4_FLAG_D (NH_FLD_IPV4_VER << 5)
-+#define NH_FLD_IPV4_FLAG_M (NH_FLD_IPV4_VER << 6)
-+#define NH_FLD_IPV4_OFFSET (NH_FLD_IPV4_VER << 7)
-+#define NH_FLD_IPV4_TTL (NH_FLD_IPV4_VER << 8)
-+#define NH_FLD_IPV4_PROTO (NH_FLD_IPV4_VER << 9)
-+#define NH_FLD_IPV4_CKSUM (NH_FLD_IPV4_VER << 10)
-+#define NH_FLD_IPV4_SRC_IP (NH_FLD_IPV4_VER << 11)
-+#define NH_FLD_IPV4_DST_IP (NH_FLD_IPV4_VER << 12)
-+#define NH_FLD_IPV4_OPTS (NH_FLD_IPV4_VER << 13)
-+#define NH_FLD_IPV4_OPTS_COUNT (NH_FLD_IPV4_VER << 14)
-+#define NH_FLD_IPV4_ALL_FIELDS ((NH_FLD_IPV4_VER << 15) - 1)
-+
-+#define NH_FLD_IPV4_ADDR_SIZE 4
-+#define NH_FLD_IPV4_PROTO_SIZE 1
-+
-+/***************************** IPV6 fields *********************************/
-+#define NH_FLD_IPV6_VER (1)
-+#define NH_FLD_IPV6_TC (NH_FLD_IPV6_VER << 1)
-+#define NH_FLD_IPV6_SRC_IP (NH_FLD_IPV6_VER << 2)
-+#define NH_FLD_IPV6_DST_IP (NH_FLD_IPV6_VER << 3)
-+#define NH_FLD_IPV6_NEXT_HDR (NH_FLD_IPV6_VER << 4)
-+#define NH_FLD_IPV6_FL (NH_FLD_IPV6_VER << 5)
-+#define NH_FLD_IPV6_HOP_LIMIT (NH_FLD_IPV6_VER << 6)
-+#define NH_FLD_IPV6_ID (NH_FLD_IPV6_VER << 7)
-+#define NH_FLD_IPV6_ALL_FIELDS ((NH_FLD_IPV6_VER << 8) - 1)
-+
-+#define NH_FLD_IPV6_ADDR_SIZE 16
-+#define NH_FLD_IPV6_NEXT_HDR_SIZE 1
-+
-+/***************************** ICMP fields *********************************/
-+#define NH_FLD_ICMP_TYPE (1)
-+#define NH_FLD_ICMP_CODE (NH_FLD_ICMP_TYPE << 1)
-+#define NH_FLD_ICMP_CKSUM (NH_FLD_ICMP_TYPE << 2)
-+#define NH_FLD_ICMP_ID (NH_FLD_ICMP_TYPE << 3)
-+#define NH_FLD_ICMP_SQ_NUM (NH_FLD_ICMP_TYPE << 4)
-+#define NH_FLD_ICMP_ALL_FIELDS ((NH_FLD_ICMP_TYPE << 5) - 1)
-+
-+#define NH_FLD_ICMP_CODE_SIZE 1
-+#define NH_FLD_ICMP_TYPE_SIZE 1
-+
-+/***************************** IGMP fields *********************************/
-+#define NH_FLD_IGMP_VERSION (1)
-+#define NH_FLD_IGMP_TYPE (NH_FLD_IGMP_VERSION << 1)
-+#define NH_FLD_IGMP_CKSUM (NH_FLD_IGMP_VERSION << 2)
-+#define NH_FLD_IGMP_DATA (NH_FLD_IGMP_VERSION << 3)
-+#define NH_FLD_IGMP_ALL_FIELDS ((NH_FLD_IGMP_VERSION << 4) - 1)
-+
-+/***************************** TCP fields **********************************/
-+#define NH_FLD_TCP_PORT_SRC (1)
-+#define NH_FLD_TCP_PORT_DST (NH_FLD_TCP_PORT_SRC << 1)
-+#define NH_FLD_TCP_SEQ (NH_FLD_TCP_PORT_SRC << 2)
-+#define NH_FLD_TCP_ACK (NH_FLD_TCP_PORT_SRC << 3)
-+#define NH_FLD_TCP_OFFSET (NH_FLD_TCP_PORT_SRC << 4)
-+#define NH_FLD_TCP_FLAGS (NH_FLD_TCP_PORT_SRC << 5)
-+#define NH_FLD_TCP_WINDOW (NH_FLD_TCP_PORT_SRC << 6)
-+#define NH_FLD_TCP_CKSUM (NH_FLD_TCP_PORT_SRC << 7)
-+#define NH_FLD_TCP_URGPTR (NH_FLD_TCP_PORT_SRC << 8)
-+#define NH_FLD_TCP_OPTS (NH_FLD_TCP_PORT_SRC << 9)
-+#define NH_FLD_TCP_OPTS_COUNT (NH_FLD_TCP_PORT_SRC << 10)
-+#define NH_FLD_TCP_ALL_FIELDS ((NH_FLD_TCP_PORT_SRC << 11) - 1)
-+
-+#define NH_FLD_TCP_PORT_SIZE 2
-+
-+/***************************** UDP fields **********************************/
-+#define NH_FLD_UDP_PORT_SRC (1)
-+#define NH_FLD_UDP_PORT_DST (NH_FLD_UDP_PORT_SRC << 1)
-+#define NH_FLD_UDP_LEN (NH_FLD_UDP_PORT_SRC << 2)
-+#define NH_FLD_UDP_CKSUM (NH_FLD_UDP_PORT_SRC << 3)
-+#define NH_FLD_UDP_ALL_FIELDS ((NH_FLD_UDP_PORT_SRC << 4) - 1)
-+
-+#define NH_FLD_UDP_PORT_SIZE 2
-+
-+/*************************** UDP-lite fields *******************************/
-+#define NH_FLD_UDP_LITE_PORT_SRC (1)
-+#define NH_FLD_UDP_LITE_PORT_DST (NH_FLD_UDP_LITE_PORT_SRC << 1)
-+#define NH_FLD_UDP_LITE_ALL_FIELDS \
-+ ((NH_FLD_UDP_LITE_PORT_SRC << 2) - 1)
-+
-+#define NH_FLD_UDP_LITE_PORT_SIZE 2
-+
-+/*************************** UDP-encap-ESP fields **************************/
-+#define NH_FLD_UDP_ENC_ESP_PORT_SRC (1)
-+#define NH_FLD_UDP_ENC_ESP_PORT_DST (NH_FLD_UDP_ENC_ESP_PORT_SRC << 1)
-+#define NH_FLD_UDP_ENC_ESP_LEN (NH_FLD_UDP_ENC_ESP_PORT_SRC << 2)
-+#define NH_FLD_UDP_ENC_ESP_CKSUM (NH_FLD_UDP_ENC_ESP_PORT_SRC << 3)
-+#define NH_FLD_UDP_ENC_ESP_SPI (NH_FLD_UDP_ENC_ESP_PORT_SRC << 4)
-+#define NH_FLD_UDP_ENC_ESP_SEQUENCE_NUM (NH_FLD_UDP_ENC_ESP_PORT_SRC << 5)
-+#define NH_FLD_UDP_ENC_ESP_ALL_FIELDS \
-+ ((NH_FLD_UDP_ENC_ESP_PORT_SRC << 6) - 1)
-+
-+#define NH_FLD_UDP_ENC_ESP_PORT_SIZE 2
-+#define NH_FLD_UDP_ENC_ESP_SPI_SIZE 4
-+
-+/***************************** SCTP fields *********************************/
-+#define NH_FLD_SCTP_PORT_SRC (1)
-+#define NH_FLD_SCTP_PORT_DST (NH_FLD_SCTP_PORT_SRC << 1)
-+#define NH_FLD_SCTP_VER_TAG (NH_FLD_SCTP_PORT_SRC << 2)
-+#define NH_FLD_SCTP_CKSUM (NH_FLD_SCTP_PORT_SRC << 3)
-+#define NH_FLD_SCTP_ALL_FIELDS ((NH_FLD_SCTP_PORT_SRC << 4) - 1)
-+
-+#define NH_FLD_SCTP_PORT_SIZE 2
-+
-+/***************************** DCCP fields *********************************/
-+#define NH_FLD_DCCP_PORT_SRC (1)
-+#define NH_FLD_DCCP_PORT_DST (NH_FLD_DCCP_PORT_SRC << 1)
-+#define NH_FLD_DCCP_ALL_FIELDS ((NH_FLD_DCCP_PORT_SRC << 2) - 1)
-+
-+#define NH_FLD_DCCP_PORT_SIZE 2
-+
-+/***************************** IPHC fields *********************************/
-+#define NH_FLD_IPHC_CID (1)
-+#define NH_FLD_IPHC_CID_TYPE (NH_FLD_IPHC_CID << 1)
-+#define NH_FLD_IPHC_HCINDEX (NH_FLD_IPHC_CID << 2)
-+#define NH_FLD_IPHC_GEN (NH_FLD_IPHC_CID << 3)
-+#define NH_FLD_IPHC_D_BIT (NH_FLD_IPHC_CID << 4)
-+#define NH_FLD_IPHC_ALL_FIELDS ((NH_FLD_IPHC_CID << 5) - 1)
-+
-+/***************************** SCTP fields *********************************/
-+#define NH_FLD_SCTP_CHUNK_DATA_TYPE (1)
-+#define NH_FLD_SCTP_CHUNK_DATA_FLAGS (NH_FLD_SCTP_CHUNK_DATA_TYPE << 1)
-+#define NH_FLD_SCTP_CHUNK_DATA_LENGTH (NH_FLD_SCTP_CHUNK_DATA_TYPE << 2)
-+#define NH_FLD_SCTP_CHUNK_DATA_TSN (NH_FLD_SCTP_CHUNK_DATA_TYPE << 3)
-+#define NH_FLD_SCTP_CHUNK_DATA_STREAM_ID (NH_FLD_SCTP_CHUNK_DATA_TYPE << 4)
-+#define NH_FLD_SCTP_CHUNK_DATA_STREAM_SQN (NH_FLD_SCTP_CHUNK_DATA_TYPE << 5)
-+#define NH_FLD_SCTP_CHUNK_DATA_PAYLOAD_PID (NH_FLD_SCTP_CHUNK_DATA_TYPE << 6)
-+#define NH_FLD_SCTP_CHUNK_DATA_UNORDERED (NH_FLD_SCTP_CHUNK_DATA_TYPE << 7)
-+#define NH_FLD_SCTP_CHUNK_DATA_BEGGINING (NH_FLD_SCTP_CHUNK_DATA_TYPE << 8)
-+#define NH_FLD_SCTP_CHUNK_DATA_END (NH_FLD_SCTP_CHUNK_DATA_TYPE << 9)
-+#define NH_FLD_SCTP_CHUNK_DATA_ALL_FIELDS \
-+ ((NH_FLD_SCTP_CHUNK_DATA_TYPE << 10) - 1)
-+
-+/*************************** L2TPV2 fields *********************************/
-+#define NH_FLD_L2TPV2_TYPE_BIT (1)
-+#define NH_FLD_L2TPV2_LENGTH_BIT (NH_FLD_L2TPV2_TYPE_BIT << 1)
-+#define NH_FLD_L2TPV2_SEQUENCE_BIT (NH_FLD_L2TPV2_TYPE_BIT << 2)
-+#define NH_FLD_L2TPV2_OFFSET_BIT (NH_FLD_L2TPV2_TYPE_BIT << 3)
-+#define NH_FLD_L2TPV2_PRIORITY_BIT (NH_FLD_L2TPV2_TYPE_BIT << 4)
-+#define NH_FLD_L2TPV2_VERSION (NH_FLD_L2TPV2_TYPE_BIT << 5)
-+#define NH_FLD_L2TPV2_LEN (NH_FLD_L2TPV2_TYPE_BIT << 6)
-+#define NH_FLD_L2TPV2_TUNNEL_ID (NH_FLD_L2TPV2_TYPE_BIT << 7)
-+#define NH_FLD_L2TPV2_SESSION_ID (NH_FLD_L2TPV2_TYPE_BIT << 8)
-+#define NH_FLD_L2TPV2_NS (NH_FLD_L2TPV2_TYPE_BIT << 9)
-+#define NH_FLD_L2TPV2_NR (NH_FLD_L2TPV2_TYPE_BIT << 10)
-+#define NH_FLD_L2TPV2_OFFSET_SIZE (NH_FLD_L2TPV2_TYPE_BIT << 11)
-+#define NH_FLD_L2TPV2_FIRST_BYTE (NH_FLD_L2TPV2_TYPE_BIT << 12)
-+#define NH_FLD_L2TPV2_ALL_FIELDS \
-+ ((NH_FLD_L2TPV2_TYPE_BIT << 13) - 1)
-+
-+/*************************** L2TPV3 fields *********************************/
-+#define NH_FLD_L2TPV3_CTRL_TYPE_BIT (1)
-+#define NH_FLD_L2TPV3_CTRL_LENGTH_BIT (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 1)
-+#define NH_FLD_L2TPV3_CTRL_SEQUENCE_BIT (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 2)
-+#define NH_FLD_L2TPV3_CTRL_VERSION (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 3)
-+#define NH_FLD_L2TPV3_CTRL_LENGTH (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 4)
-+#define NH_FLD_L2TPV3_CTRL_CONTROL (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 5)
-+#define NH_FLD_L2TPV3_CTRL_SENT (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 6)
-+#define NH_FLD_L2TPV3_CTRL_RECV (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 7)
-+#define NH_FLD_L2TPV3_CTRL_FIRST_BYTE (NH_FLD_L2TPV3_CTRL_TYPE_BIT << 8)
-+#define NH_FLD_L2TPV3_CTRL_ALL_FIELDS \
-+ ((NH_FLD_L2TPV3_CTRL_TYPE_BIT << 9) - 1)
-+
-+#define NH_FLD_L2TPV3_SESS_TYPE_BIT (1)
-+#define NH_FLD_L2TPV3_SESS_VERSION (NH_FLD_L2TPV3_SESS_TYPE_BIT << 1)
-+#define NH_FLD_L2TPV3_SESS_ID (NH_FLD_L2TPV3_SESS_TYPE_BIT << 2)
-+#define NH_FLD_L2TPV3_SESS_COOKIE (NH_FLD_L2TPV3_SESS_TYPE_BIT << 3)
-+#define NH_FLD_L2TPV3_SESS_ALL_FIELDS \
-+ ((NH_FLD_L2TPV3_SESS_TYPE_BIT << 4) - 1)
-+
-+/**************************** PPP fields ***********************************/
-+#define NH_FLD_PPP_PID (1)
-+#define NH_FLD_PPP_COMPRESSED (NH_FLD_PPP_PID << 1)
-+#define NH_FLD_PPP_ALL_FIELDS ((NH_FLD_PPP_PID << 2) - 1)
-+
-+/************************** PPPoE fields ***********************************/
-+#define NH_FLD_PPPOE_VER (1)
-+#define NH_FLD_PPPOE_TYPE (NH_FLD_PPPOE_VER << 1)
-+#define NH_FLD_PPPOE_CODE (NH_FLD_PPPOE_VER << 2)
-+#define NH_FLD_PPPOE_SID (NH_FLD_PPPOE_VER << 3)
-+#define NH_FLD_PPPOE_LEN (NH_FLD_PPPOE_VER << 4)
-+#define NH_FLD_PPPOE_SESSION (NH_FLD_PPPOE_VER << 5)
-+#define NH_FLD_PPPOE_PID (NH_FLD_PPPOE_VER << 6)
-+#define NH_FLD_PPPOE_ALL_FIELDS ((NH_FLD_PPPOE_VER << 7) - 1)
-+
-+/************************* PPP-Mux fields **********************************/
-+#define NH_FLD_PPPMUX_PID (1)
-+#define NH_FLD_PPPMUX_CKSUM (NH_FLD_PPPMUX_PID << 1)
-+#define NH_FLD_PPPMUX_COMPRESSED (NH_FLD_PPPMUX_PID << 2)
-+#define NH_FLD_PPPMUX_ALL_FIELDS ((NH_FLD_PPPMUX_PID << 3) - 1)
-+
-+/*********************** PPP-Mux sub-frame fields **************************/
-+#define NH_FLD_PPPMUX_SUBFRM_PFF (1)
-+#define NH_FLD_PPPMUX_SUBFRM_LXT (NH_FLD_PPPMUX_SUBFRM_PFF << 1)
-+#define NH_FLD_PPPMUX_SUBFRM_LEN (NH_FLD_PPPMUX_SUBFRM_PFF << 2)
-+#define NH_FLD_PPPMUX_SUBFRM_PID (NH_FLD_PPPMUX_SUBFRM_PFF << 3)
-+#define NH_FLD_PPPMUX_SUBFRM_USE_PID (NH_FLD_PPPMUX_SUBFRM_PFF << 4)
-+#define NH_FLD_PPPMUX_SUBFRM_ALL_FIELDS \
-+ ((NH_FLD_PPPMUX_SUBFRM_PFF << 5) - 1)
-+
-+/*************************** LLC fields ************************************/
-+#define NH_FLD_LLC_DSAP (1)
-+#define NH_FLD_LLC_SSAP (NH_FLD_LLC_DSAP << 1)
-+#define NH_FLD_LLC_CTRL (NH_FLD_LLC_DSAP << 2)
-+#define NH_FLD_LLC_ALL_FIELDS ((NH_FLD_LLC_DSAP << 3) - 1)
-+
-+/*************************** NLPID fields **********************************/
-+#define NH_FLD_NLPID_NLPID (1)
-+#define NH_FLD_NLPID_ALL_FIELDS ((NH_FLD_NLPID_NLPID << 1) - 1)
-+
-+/*************************** SNAP fields ***********************************/
-+#define NH_FLD_SNAP_OUI (1)
-+#define NH_FLD_SNAP_PID (NH_FLD_SNAP_OUI << 1)
-+#define NH_FLD_SNAP_ALL_FIELDS ((NH_FLD_SNAP_OUI << 2) - 1)
-+
-+/*************************** LLC SNAP fields *******************************/
-+#define NH_FLD_LLC_SNAP_TYPE (1)
-+#define NH_FLD_LLC_SNAP_ALL_FIELDS ((NH_FLD_LLC_SNAP_TYPE << 1) - 1)
-+
-+#define NH_FLD_ARP_HTYPE (1)
-+#define NH_FLD_ARP_PTYPE (NH_FLD_ARP_HTYPE << 1)
-+#define NH_FLD_ARP_HLEN (NH_FLD_ARP_HTYPE << 2)
-+#define NH_FLD_ARP_PLEN (NH_FLD_ARP_HTYPE << 3)
-+#define NH_FLD_ARP_OPER (NH_FLD_ARP_HTYPE << 4)
-+#define NH_FLD_ARP_SHA (NH_FLD_ARP_HTYPE << 5)
-+#define NH_FLD_ARP_SPA (NH_FLD_ARP_HTYPE << 6)
-+#define NH_FLD_ARP_THA (NH_FLD_ARP_HTYPE << 7)
-+#define NH_FLD_ARP_TPA (NH_FLD_ARP_HTYPE << 8)
-+#define NH_FLD_ARP_ALL_FIELDS ((NH_FLD_ARP_HTYPE << 9) - 1)
-+
-+/*************************** RFC2684 fields ********************************/
-+#define NH_FLD_RFC2684_LLC (1)
-+#define NH_FLD_RFC2684_NLPID (NH_FLD_RFC2684_LLC << 1)
-+#define NH_FLD_RFC2684_OUI (NH_FLD_RFC2684_LLC << 2)
-+#define NH_FLD_RFC2684_PID (NH_FLD_RFC2684_LLC << 3)
-+#define NH_FLD_RFC2684_VPN_OUI (NH_FLD_RFC2684_LLC << 4)
-+#define NH_FLD_RFC2684_VPN_IDX (NH_FLD_RFC2684_LLC << 5)
-+#define NH_FLD_RFC2684_ALL_FIELDS ((NH_FLD_RFC2684_LLC << 6) - 1)
-+
-+/*************************** User defined fields ***************************/
-+#define NH_FLD_USER_DEFINED_SRCPORT (1)
-+#define NH_FLD_USER_DEFINED_PCDID (NH_FLD_USER_DEFINED_SRCPORT << 1)
-+#define NH_FLD_USER_DEFINED_ALL_FIELDS \
-+ ((NH_FLD_USER_DEFINED_SRCPORT << 2) - 1)
-+
-+/*************************** Payload fields ********************************/
-+#define NH_FLD_PAYLOAD_BUFFER (1)
-+#define NH_FLD_PAYLOAD_SIZE (NH_FLD_PAYLOAD_BUFFER << 1)
-+#define NH_FLD_MAX_FRM_SIZE (NH_FLD_PAYLOAD_BUFFER << 2)
-+#define NH_FLD_MIN_FRM_SIZE (NH_FLD_PAYLOAD_BUFFER << 3)
-+#define NH_FLD_PAYLOAD_TYPE (NH_FLD_PAYLOAD_BUFFER << 4)
-+#define NH_FLD_FRAME_SIZE (NH_FLD_PAYLOAD_BUFFER << 5)
-+#define NH_FLD_PAYLOAD_ALL_FIELDS ((NH_FLD_PAYLOAD_BUFFER << 6) - 1)
-+
-+/*************************** GRE fields ************************************/
-+#define NH_FLD_GRE_TYPE (1)
-+#define NH_FLD_GRE_ALL_FIELDS ((NH_FLD_GRE_TYPE << 1) - 1)
-+
-+/*************************** MINENCAP fields *******************************/
-+#define NH_FLD_MINENCAP_SRC_IP (1)
-+#define NH_FLD_MINENCAP_DST_IP (NH_FLD_MINENCAP_SRC_IP << 1)
-+#define NH_FLD_MINENCAP_TYPE (NH_FLD_MINENCAP_SRC_IP << 2)
-+#define NH_FLD_MINENCAP_ALL_FIELDS \
-+ ((NH_FLD_MINENCAP_SRC_IP << 3) - 1)
-+
-+/*************************** IPSEC AH fields *******************************/
-+#define NH_FLD_IPSEC_AH_SPI (1)
-+#define NH_FLD_IPSEC_AH_NH (NH_FLD_IPSEC_AH_SPI << 1)
-+#define NH_FLD_IPSEC_AH_ALL_FIELDS ((NH_FLD_IPSEC_AH_SPI << 2) - 1)
-+
-+/*************************** IPSEC ESP fields ******************************/
-+#define NH_FLD_IPSEC_ESP_SPI (1)
-+#define NH_FLD_IPSEC_ESP_SEQUENCE_NUM (NH_FLD_IPSEC_ESP_SPI << 1)
-+#define NH_FLD_IPSEC_ESP_ALL_FIELDS ((NH_FLD_IPSEC_ESP_SPI << 2) - 1)
-+
-+#define NH_FLD_IPSEC_ESP_SPI_SIZE 4
-+
-+/*************************** MPLS fields ***********************************/
-+#define NH_FLD_MPLS_LABEL_STACK (1)
-+#define NH_FLD_MPLS_LABEL_STACK_ALL_FIELDS \
-+ ((NH_FLD_MPLS_LABEL_STACK << 1) - 1)
-+
-+/*************************** MACSEC fields *********************************/
-+#define NH_FLD_MACSEC_SECTAG (1)
-+#define NH_FLD_MACSEC_ALL_FIELDS ((NH_FLD_MACSEC_SECTAG << 1) - 1)
-+
-+/*************************** GTP fields ************************************/
-+#define NH_FLD_GTP_TEID (1)
-+
-+
-+/* Protocol options */
-+
-+/* Ethernet options */
-+#define NH_OPT_ETH_BROADCAST 1
-+#define NH_OPT_ETH_MULTICAST 2
-+#define NH_OPT_ETH_UNICAST 3
-+#define NH_OPT_ETH_BPDU 4
-+
-+#define NH_ETH_IS_MULTICAST_ADDR(addr) (addr[0] & 0x01)
-+/* also applicable for broadcast */
-+
-+/* VLAN options */
-+#define NH_OPT_VLAN_CFI 1
-+
-+/* IPV4 options */
-+#define NH_OPT_IPV4_UNICAST 1
-+#define NH_OPT_IPV4_MULTICAST 2
-+#define NH_OPT_IPV4_BROADCAST 3
-+#define NH_OPT_IPV4_OPTION 4
-+#define NH_OPT_IPV4_FRAG 5
-+#define NH_OPT_IPV4_INITIAL_FRAG 6
-+
-+/* IPV6 options */
-+#define NH_OPT_IPV6_UNICAST 1
-+#define NH_OPT_IPV6_MULTICAST 2
-+#define NH_OPT_IPV6_OPTION 3
-+#define NH_OPT_IPV6_FRAG 4
-+#define NH_OPT_IPV6_INITIAL_FRAG 5
-+
-+/* General IP options (may be used for any version) */
-+#define NH_OPT_IP_FRAG 1
-+#define NH_OPT_IP_INITIAL_FRAG 2
-+#define NH_OPT_IP_OPTION 3
-+
-+/* Minenc. options */
-+#define NH_OPT_MINENCAP_SRC_ADDR_PRESENT 1
-+
-+/* GRE. options */
-+#define NH_OPT_GRE_ROUTING_PRESENT 1
-+
-+/* TCP options */
-+#define NH_OPT_TCP_OPTIONS 1
-+#define NH_OPT_TCP_CONTROL_HIGH_BITS 2
-+#define NH_OPT_TCP_CONTROL_LOW_BITS 3
-+
-+/* CAPWAP options */
-+#define NH_OPT_CAPWAP_DTLS 1
-+
-+enum net_prot {
-+ NET_PROT_NONE = 0,
-+ NET_PROT_PAYLOAD,
-+ NET_PROT_ETH,
-+ NET_PROT_VLAN,
-+ NET_PROT_IPV4,
-+ NET_PROT_IPV6,
-+ NET_PROT_IP,
-+ NET_PROT_TCP,
-+ NET_PROT_UDP,
-+ NET_PROT_UDP_LITE,
-+ NET_PROT_IPHC,
-+ NET_PROT_SCTP,
-+ NET_PROT_SCTP_CHUNK_DATA,
-+ NET_PROT_PPPOE,
-+ NET_PROT_PPP,
-+ NET_PROT_PPPMUX,
-+ NET_PROT_PPPMUX_SUBFRM,
-+ NET_PROT_L2TPV2,
-+ NET_PROT_L2TPV3_CTRL,
-+ NET_PROT_L2TPV3_SESS,
-+ NET_PROT_LLC,
-+ NET_PROT_LLC_SNAP,
-+ NET_PROT_NLPID,
-+ NET_PROT_SNAP,
-+ NET_PROT_MPLS,
-+ NET_PROT_IPSEC_AH,
-+ NET_PROT_IPSEC_ESP,
-+ NET_PROT_UDP_ENC_ESP, /* RFC 3948 */
-+ NET_PROT_MACSEC,
-+ NET_PROT_GRE,
-+ NET_PROT_MINENCAP,
-+ NET_PROT_DCCP,
-+ NET_PROT_ICMP,
-+ NET_PROT_IGMP,
-+ NET_PROT_ARP,
-+ NET_PROT_CAPWAP_DATA,
-+ NET_PROT_CAPWAP_CTRL,
-+ NET_PROT_RFC2684,
-+ NET_PROT_ICMPV6,
-+ NET_PROT_FCOE,
-+ NET_PROT_FIP,
-+ NET_PROT_ISCSI,
-+ NET_PROT_GTP,
-+ NET_PROT_USER_DEFINED_L2,
-+ NET_PROT_USER_DEFINED_L3,
-+ NET_PROT_USER_DEFINED_L4,
-+ NET_PROT_USER_DEFINED_L5,
-+ NET_PROT_USER_DEFINED_SHIM1,
-+ NET_PROT_USER_DEFINED_SHIM2,
-+
-+ NET_PROT_DUMMY_LAST
-+};
-+
-+/*! IEEE8021.Q */
-+#define NH_IEEE8021Q_ETYPE 0x8100
-+#define NH_IEEE8021Q_HDR(etype, pcp, dei, vlan_id) \
-+ ((((uint32_t)(etype & 0xFFFF)) << 16) | \
-+ (((uint32_t)(pcp & 0x07)) << 13) | \
-+ (((uint32_t)(dei & 0x01)) << 12) | \
-+ (((uint32_t)(vlan_id & 0xFFF))))
-+
-+#endif /* __FSL_NET_H */
---- a/net/core/pktgen.c
-+++ b/net/core/pktgen.c
-@@ -2790,6 +2790,7 @@ static struct sk_buff *pktgen_alloc_skb(
- } else {
- skb = __netdev_alloc_skb(dev, size, GFP_NOWAIT);
- }
-+ skb_reserve(skb, LL_RESERVED_SPACE(dev));
-
- /* the caller pre-fetches from skb->data and reserves for the mac hdr */
- if (likely(skb))