path: root/include/linux
Age  Commit message  Author  Files, Lines
2017-04-26  fs/affs: import amigaffs.h  (Fabian Frederick)  1 file, -144/+0
Having that file in the global include/linux is not needed. Signed-off-by: Fabian Frederick <[email protected]> Signed-off-by: Al Viro <[email protected]>
2017-04-26  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)  1 file, -0/+1
Signed-off-by: David S. Miller <[email protected]>
2017-04-26  srcu: Specify auto-expedite holdoff time  (Paul E. McKenney)  1 file, -0/+1
On small systems, in the absence of readers, expedited SRCU grace periods can complete in less than a microsecond. This means that an eight-CPU system can have all CPUs doing synchronize_srcu() in a tight loop and almost always expedite. This might actually be desirable in some situations, but in general it is a good way to needlessly burn CPU cycles. And in those situations where it is desirable, your friend is the function synchronize_srcu_expedited(). For other situations, this commit adds a kernel parameter that specifies a holdoff between completing the last SRCU grace period and auto-expediting the next. If the next grace period starts before the holdoff expires, auto-expediting is disabled. The holdoff is 50 microseconds by default, and can be tuned to the desired number of nanoseconds. A value of zero disables auto-expediting. Signed-off-by: Paul E. McKenney <[email protected]> Tested-by: Mike Galbraith <[email protected]>
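As a hedged illustration of the distinction drawn above (the srcu_struct and update function here are made up for the example):

    #include <linux/srcu.h>

    DEFINE_SRCU(my_srcu);

    static void my_update(void)
    {
            /* ... unpublish the protected data ... */
            synchronize_srcu_expedited(&my_srcu);   /* explicitly expedited */
            /* ... now safe to reclaim ... */
    }

Callers that rely on auto-expediting instead are subject to the new holdoff, which in Tree SRCU is believed to be exposed as the srcutree.exp_holdoff module/boot parameter (in nanoseconds).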
2017-04-26  srcu: Expedited grace periods with reduced memory contention  (Paul E. McKenney)  1 file, -1/+3
Commit f60d231a87c5 ("srcu: Crude control of expedited grace periods") introduced a per-srcu_struct atomic counter to track outstanding requests for grace periods. This works, but represents a memory-contention bottleneck. This commit therefore uses the srcu_node combining tree to remove this bottleneck.

This commit adds new ->srcu_gp_seq_needed_exp fields to the srcu_data, srcu_node, and srcu_struct structures, which track the farthest-in-the-future grace period that must be expedited, which in turn requires that all nearer-term grace periods also be expedited. Requests for expediting start with the srcu_data structure, run up through the srcu_node tree, and end at the srcu_struct structure. Note that it may be necessary to expedite a grace period that has just now started, and this is handled by a new srcu_funnel_exp_start() function, which is invoked when the grace period itself is already under way but was not marked as expedited.

A new srcu_get_delay() function returns zero if there is at least one expedited SRCU grace period in flight, or SRCU_INTERVAL otherwise. This function is used to calculate delays: normal grace periods are allowed to extend in order to cover more requests with a given grace-period computation, which decreases per-request overhead. Signed-off-by: Paul E. McKenney <[email protected]> Tested-by: Mike Galbraith <[email protected]>
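A hedged sketch of srcu_get_delay() as described, with field and macro names taken from the commit text rather than a verified tree:

    /* Zero delay while an expedited grace period is still wanted;
     * otherwise let normal grace periods stretch by SRCU_INTERVAL. */
    static unsigned long srcu_get_delay(struct srcu_struct *sp)
    {
            if (ULONG_CMP_LT(READ_ONCE(sp->srcu_gp_seq),
                             READ_ONCE(sp->srcu_gp_seq_needed_exp)))
                    return 0;
            return SRCU_INTERVAL;
    }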
2017-04-26  x86, iommu/vt-d: Add an option to disable Intel IOMMU force on  (Shaohua Li)  1 file, -0/+1
The IOMMU harms performance significantly when we run very fast networking workloads; here, 40Gb networking doing an XDP test. The software overhead is almost negligible; based on our analysis, it is the IOTLB misses that kill the performance. We observed the same performance issue even with software passthrough (identity mapping); only hardware passthrough survives. The pps with the IOMMU (with software passthrough) is only about ~30% of that without it. This is a hardware limitation based on our observation, so we'd like to disable the IOMMU force-on, but we do want to use TBOOT and can sacrifice the DMA security bought by the IOMMU. I must admit I know nothing about TBOOT, but the TBOOT guys (cc-ed) think not enabling the IOMMU is totally ok. So introduce a new boot option to disable the force-on. It's kind of silly that we need to run into intel_iommu_init even without force-on, but we need to disable the TBOOT PMR registers. For systems without the boot option, nothing is changed. Signed-off-by: Shaohua Li <[email protected]> Signed-off-by: Joerg Roedel <[email protected]>
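For reference, the option added by this patch is believed to be spelled as follows on the kernel command line (DMA protection is then deliberately given up on TBOOT systems):

    intel_iommu=tboot_noforce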
2017-04-26  ieee80211: fix kernel-doc parsing errors  (Johannes Berg)  1 file, -6/+6
Some of the enum definitions are unnamed but there's still an attempt at documenting them - that doesn't work. Name them to make that work. Signed-off-by: Johannes Berg <[email protected]>
2017-04-26  ieee80211: add FT-802.1X AKM suite selector  (Luca Coelho)  1 file, -0/+1
Add the definition for the FT-802.1X AKM selector as defined in IEEE Std 802.11-2016, table 9-133. Signed-off-by: Luca Coelho <[email protected]> Signed-off-by: Johannes Berg <[email protected]>
2017-04-26  ieee80211: add SUITE_B AKM selectors  (Luca Coelho)  1 file, -12/+14
Add the definitions for SUITE_B and SUITE_B_192 AKM selectors as defined in IEEE802.11REVmc_D5.0, table 9-132. Signed-off-by: Luca Coelho <[email protected]> Signed-off-by: Johannes Berg <[email protected]>
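A hedged sketch of what the selector additions in this and the previous entry look like in include/linux/ieee80211.h, reusing the existing SUITE(oui, id) helper; the suite type numbers follow the cited IEEE tables, but treat the exact macro names as assumptions:

    #define WLAN_AKM_SUITE_FT_8021X          SUITE(0x000FAC, 3)
    #define WLAN_AKM_SUITE_8021X_SUITE_B     SUITE(0x000FAC, 11)
    #define WLAN_AKM_SUITE_8021X_SUITE_B_192 SUITE(0x000FAC, 12)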
2017-04-26  blk-mq: Add blk_mq_ops.show_rq()  (Bart Van Assche)  1 file, -0/+8
This new callback function will be used in the next patch to show more information about SCSI requests. Signed-off-by: Bart Van Assche <[email protected]> Reviewed-by: Omar Sandoval <[email protected]> Cc: Hannes Reinecke <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
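A minimal sketch of a driver wiring up the new callback, assuming the signature implied by the description (a seq_file plus the request); my_show_rq and the ops instance are hypothetical:

    #include <linux/blk-mq.h>
    #include <linux/seq_file.h>

    /* Called from blk-mq debugfs to display driver-private request state. */
    static void my_show_rq(struct seq_file *m, struct request *rq)
    {
            seq_printf(m, "tag=%d\n", rq->tag);
    }

    static const struct blk_mq_ops my_mq_ops = {
            .show_rq = my_show_rq,  /* mandatory ops omitted for brevity */
    };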
2017-04-26  tcp: switch rcv_rtt_est and rcvq_space to high resolution timestamps  (Eric Dumazet)  1 file, -6/+6
Some devices or distributions use HZ=100 or HZ=250, and TCP receive buffer autotuning has poor behavior caused by this choice. Since autotuning happens after 4 ms or 10 ms, short-distance flows get their receive buffer tuned to a very high value, but only after an initial period where it was frozen to a (too small) initial value.

With the tp->tcp_mstamp introduction, we can switch to high resolution timestamps almost for free (at the expense of 8 additional bytes per TCP structure). Note that some TCP stacks use usec TCP timestamps, where this patch makes even more sense: many TCP flows have < 500 usec RTT. Hopefully this finer TS option can be standardized soon.

Tested: HZ=100 kernel
./netperf -H lpaa24 -t TCP_RR -l 1000 -- -r 10000,10000 &

Peer without patch:
lpaa24:~# ss -tmi dst lpaa23 ... skmem:(r0,rb8388608,...) rcv_rtt:10 rcv_space:3210000 minrtt:0.017

Peer with the patch:
lpaa23:~# ss -tmi dst lpaa24 ... skmem:(r0,rb428800,...) rcv_rtt:0.069 rcv_space:30000 minrtt:0.017

We can see a saner RCVBUF, and more precise rcv_rtt information. Signed-off-by: Eric Dumazet <[email protected]> Acked-by: Soheil Hassas Yeganeh <[email protected]> Acked-by: Neal Cardwell <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2017-04-26  tcp: add tp->tcp_mstamp field  (Eric Dumazet)  1 file, -0/+1
We want to use precise timestamps in TCP stack, but we do not want to call possibly expensive kernel time services too often. tp->tcp_mstamp is guaranteed to be updated once per incoming packet. We will use it in the following patches, removing specific skb_mstamp_get() calls, and removing ack_time from struct tcp_sacktag_state. Signed-off-by: Eric Dumazet <[email protected]> Acked-by: Soheil Hassas Yeganeh <[email protected]> Acked-by: Neal Cardwell <[email protected]> Signed-off-by: David S. Miller <[email protected]>
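A hedged sketch of the pattern this enables: refresh the cached timestamp once per incoming packet, then read it wherever a timestamp is needed (tp and start are assumed context):

    skb_mstamp_get(&tp->tcp_mstamp);        /* once per incoming packet */
    /* ... later, instead of another skb_mstamp_get() call ... */
    delta_us = skb_mstamp_us_delta(&tp->tcp_mstamp, &start);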
2017-04-26  rhashtable: remove insecure_max_entries param  (Florian Westphal)  1 file, -4/+2
No users in the tree; insecure_max_entries is always set to ht->p.max_size * 2 in rhashtable_init(). Replace the only spot that uses it with a ht->p.max_size check. Signed-off-by: Florian Westphal <[email protected]> Signed-off-by: David S. Miller <[email protected]>
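A hedged sketch of the one remaining insertion-time limit check, now against p.max_size directly (surrounding context assumed):

    if (ht->p.max_size &&
        atomic_read(&ht->nelems) >= ht->p.max_size)
            return -E2BIG;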
2017-04-26  net: phy: fix auto-negotiation stall due to unavailable interrupt  (Alexander Kochetkov)  1 file, -0/+1
The Ethernet link on an interrupt-driven PHY was not coming up if the Ethernet cable was plugged in before the Ethernet interface was brought up. The patch triggers the PHY state machine to update the link state if the PHY was requested to do auto-negotiation and the auto-negotiation complete flag is already set. During the power-up cycle the PHY does auto-negotiation, generates an interrupt and sets the auto-negotiation complete flag. The interrupt is handled by the PHY state machine but doesn't update the link state because the PHY is in the PHY_READY state. After some time the MAC is brought up, starts and requests the PHY to do auto-negotiation. If there are no new settings to advertise, genphy_config_aneg() doesn't start PHY auto-negotiation. The PHY continues to stay in the auto-negotiation complete state and doesn't fire an interrupt. At the same time, the PHY state machine expects that the PHY started auto-negotiation and is waiting for an interrupt from the PHY, which it won't get. Fixes: 321beec5047a ("net: phy: Use interrupts when available in NOLINK state") Signed-off-by: Alexander Kochetkov <[email protected]> Cc: stable <[email protected]> # v4.9+ Tested-by: Roger Quadros <[email protected]> Tested-by: Alexandre Belloni <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2017-04-26  srcu: Make rcutorture writer stalls print SRCU GP state  (Paul E. McKenney)  3 files, -0/+30
In the past, SRCU was simple enough that there was little point in making the rcutorture writer stall messages print the SRCU grace-period number state. With the advent of Tree SRCU, this has changed. This commit therefore makes Classic, Tiny, and Tree SRCU report this state to rcutorture as needed. Signed-off-by: Paul E. McKenney <[email protected]> Tested-by: Mike Galbraith <[email protected]>
2017-04-26  srcu: Exact tracking of srcu_data structures containing callbacks  (Paul E. McKenney)  1 file, -0/+4
The current Tree SRCU implementation schedules a workqueue for every srcu_data covered by a given leaf srcu_node structure having callbacks, even if only one of those srcu_data structures actually contains callbacks. This is clearly inefficient for workloads that don't feature callbacks everywhere all the time. This commit therefore adds an array of masks that are used by the leaf srcu_node structures to track exactly which srcu_data structures contain callbacks. Signed-off-by: Paul E. McKenney <[email protected]> Tested-by: Mike Galbraith <[email protected]>
2017-04-26  NFSv4: Don't special case "launder"  (Trond Myklebust)  1 file, -13/+1
If the client receives a fatal server error from nfs_pageio_add_request(), then we should always truncate the page on which the error occurred. Signed-off-by: Trond Myklebust <[email protected]>
2017-04-26  CONFIG_ARCH_HAS_RAW_COPY_USER is unconditional now  (Al Viro)  1 file, -5/+2
All architectures have been converted. Signed-off-by: Al Viro <[email protected]>
2017-04-26  Merge branches 'uaccess.alpha', 'uaccess.arc', 'uaccess.arm', 'uaccess.arm64', 'uaccess.avr32', 'uaccess.bfin', 'uaccess.c6x', 'uaccess.cris', 'uaccess.frv', 'uaccess.h8300', 'uaccess.hexagon', 'uaccess.ia64', 'uaccess.m32r', 'uaccess.m68k', 'uaccess.metag', 'uaccess.microblaze', 'uaccess.mips', 'uaccess.mn10300', 'uaccess.nios2', 'uaccess.openrisc', 'uaccess.parisc', 'uaccess.powerpc', 'uaccess.s390', 'uaccess.score', 'uaccess.sh', 'uaccess.sparc', 'uaccess.tile', 'uaccess.um', 'uaccess.unicore32', 'uaccess.x86' and 'uaccess.xtensa' into work.uaccess  (Al Viro)  41 files, -120/+383
2017-04-26  Merge remote-tracking branches 'spi/topic/ti-qspi' and 'spi/topic/xlp' into spi-next  (Mark Brown)  1 file, -0/+4
2017-04-26  Merge remote-tracking branches 'spi/topic/devprop', 'spi/topic/fsl', 'spi/topic/fsl-dspi', 'spi/topic/imx' and 'spi/topic/lantiq' into spi-next  (Mark Brown)  1 file, -0/+4
2017-04-26  Merge remote-tracking branch 'spi/topic/core' into spi-next  (Mark Brown)  1 file, -1/+1
2017-04-26  ebtables: remove nf_hook_register usage  (Florian Westphal)  1 file, -2/+4
Similar to ip_register_table, pass nf_hook_ops to ebt_register_table(). This allows hook registration to also be handled via pernet_ops and lets us avoid the legacy register_hook API. Signed-off-by: Florian Westphal <[email protected]> Signed-off-by: Pablo Neira Ayuso <[email protected]>
2017-04-25  svcrdma: Clean out old XDR encoders  (Chuck Lever)  1 file, -4/+0
Clean up: These have been replaced and are no longer used. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Sagi Grimberg <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2017-04-25  svcrdma: Remove the req_map cache  (Chuck Lever)  1 file, -33/+1
req_maps are no longer used by the send path and can thus be removed. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Sagi Grimberg <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2017-04-25  svcrdma: Remove unused RDMA Write completion handler  (Chuck Lever)  1 file, -1/+0
Clean up. All RDMA Write completions are now handled by svc_rdma_wc_write_ctx. Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2017-04-25  svcrdma: Reduce size of sge array in struct svc_rdma_op_ctxt  (Chuck Lever)  1 file, -2/+7
The sge array in struct svc_rdma_op_ctxt is no longer used for sending RDMA Write WRs. It need only accommodate the construction of Send and Receive WRs. The maximum inline size is the largest payload it needs to handle now. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Sagi Grimberg <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2017-04-25  svcrdma: Clean up RPC-over-RDMA backchannel reply processing  (Chuck Lever)  1 file, -1/+1
Replace C structure-based XDR decoding with pointer arithmetic. Pointer arithmetic is considered more portable. Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2017-04-25  svcrdma: Clean up RDMA_ERROR path  (Chuck Lever)  2 files, -5/+3
Now that svc_rdma_sendto has been renovated, svc_rdma_send_error can be refactored to reduce code duplication and remove C structure-based XDR encoding. It is also relocated to the source file that contains its only caller. This is a refactoring change only. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Sagi Grimberg <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2017-04-25  svcrdma: Use rdma_rw API in RPC reply path  (Chuck Lever)  1 file, -1/+0
The current svcrdma sendto code path posts one RDMA Write WR at a time. Each of these Writes typically carries a small number of pages (for instance, up to 30 pages for mlx4 devices). That means a 1MB NFS READ reply requires 9 ib_post_send() calls for the Write WRs, and one for the Send WR carrying the actual RPC Reply message.

Instead, use the new rdma_rw API. The details of Write WR chain construction and memory registration are taken care of in the RDMA core. svcrdma can focus on the details of the RPC-over-RDMA protocol. This gives three main benefits:

1. All Write WRs for one RDMA segment are posted in a single chain. As few as one ib_post_send() for each Write chunk.

2. The Write path can now use FRWR to register the Write buffers. If the device's maximum page list depth is large, this means a single Write WR is needed for each RPC's Write chunk data.

3. The new code introduces support for RPCs that carry both a Write list and a Reply chunk. This combination can be used for an NFSv4 READ where the data payload is large, and thus is removed from the Payload Stream, but the Payload Stream is still larger than the inline threshold.

Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2017-04-25  svcrdma: Introduce local rdma_rw API helpers  (Chuck Lever)  1 file, -0/+11
The plan is to replace the local bespoke code that constructs and posts RDMA Read and Write Work Requests with calls to the rdma_rw API. This shares with other RDMA-enabled ULPs the code that manages the gory details of buffer registration and posting Work Requests.

Some design notes:

o The structure of RPC-over-RDMA transport headers is flexible, allowing multiple segments per Reply with arbitrary alignment, each with a unique R_key. Write and Send WRs continue to be built and posted in separate code paths. However, one whole chunk (with one or more RDMA segments apiece) gets exactly one ib_post_send and one work completion.

o svc_xprt reference counting is modified, since a chain of rdma_rw_ctx structs generates one completion, no matter how many Write WRs are posted.

o The current code builds the transport header as it is constructing Write WRs. I've replaced that with marshaling of transport header data items in a separate step. This is because the exact structure of client-provided segments may not align with the components of the server's reply xdr_buf, or the pages in the page list. Thus parts of each client-provided segment may be written at different points in the send path.

Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
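A hedged sketch of the rdma_rw calling pattern being adopted (signatures from include/rdma/rw.h; qp, sgl and the remote segment are assumed context, error handling elided):

    struct rdma_rw_ctx ctx;

    /* Register and map the local buffer for one RDMA Write segment. */
    ret = rdma_rw_ctx_init(&ctx, qp, port_num, sgl, sg_cnt, 0,
                           remote_addr, rkey, DMA_TO_DEVICE);

    /* Post the entire Write chain with a single call. */
    ret = rdma_rw_ctx_post(&ctx, qp, port_num, &cqe, NULL);

    /* In the single completion handler for the chain: */
    rdma_rw_ctx_destroy(&ctx, qp, port_num, sgl, sg_cnt, DMA_TO_DEVICE);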
2017-04-25  svcrdma: Eliminate RPCRDMA_SQ_DEPTH_MULT  (Chuck Lever)  1 file, -1/+0
The Send Queue depth is temporarily reduced to 1 SQE per credit. The new rdma_rw API does an internal computation, during QP creation, to increase the depth of the Send Queue to handle RDMA Read and Write operations. This change has to come before the NFSD code paths are updated to use the rdma_rw API. Without this patch, rdma_rw_init_qp() increases the size of the SQ too much, resulting in memory allocation failures during QP creation. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2017-04-25  svcrdma: Add svc_rdma_map_reply_hdr()  (Chuck Lever)  1 file, -0/+3
Introduce a helper to DMA-map a reply's transport header before sending it. This will in part replace the map vector cache. Signed-off-by: Chuck Lever <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2017-04-25  svcrdma: Move send_wr to svc_rdma_op_ctxt  (Chuck Lever)  1 file, -0/+4
Clean up: Move the ib_send_wr off the stack, and move common code to post a Send Work Request into a helper. This is a refactoring change only. Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2017-04-25  nfsd: check for oversized NFSv2/v3 arguments  (J. Bruce Fields)  1 file, -2/+1
A client can append random data to the end of an NFSv2 or NFSv3 RPC call without our complaining; we'll just stop parsing at the end of the expected data and ignore the rest. Encoded arguments and replies are stored together in an array of pages, and if a call is too large it could leave inadequate space for the reply. This is normally OK because NFS RPCs typically have either short arguments and long replies (like READ) or long arguments and short replies (like WRITE). But a client that sends an incorrectly long argument can violate those assumptions. This was observed to cause crashes. So, insist that the argument not be any longer than we expect.

Also, several operations increment rq_next_page in the decode routine before checking the argument size, which can leave rq_next_page pointing well past the end of the page array, causing trouble later in svc_free_pages. As a followup we may also want to rewrite the encoding routines to check more carefully that they aren't running off the end of the page array.

Reported-by: Tuomas Haanpää <[email protected]> Reported-by: Ari Kauppi <[email protected]> Cc: [email protected] Signed-off-by: J. Bruce Fields <[email protected]>
2017-04-25  net: move xdp_prog field in RX cache lines  (Eric Dumazet)  1 file, -1/+1
The xdp_prog field of struct net_device should be moved into the RX cache lines, reducing latency when a single packet is received on an idle host, since netif_elide_gro() needs it. Signed-off-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2017-04-25  x86, dax, pmem: remove indirection around memcpy_from_pmem()  (Dan Williams)  2 files, -23/+8
memcpy_from_pmem() maps directly to memcpy_mcsafe(). The wrapper serves no real benefit aside from affording a more generic function name than the x86-specific 'mcsafe'. However this would not be the first time that x86 terminology leaked into the global namespace. For lack of better name, just use memcpy_mcsafe() directly. This conversion also catches a place where we should have been using plain memcpy, acpi_nfit_blk_single_io(). Cc: <[email protected]> Cc: Jan Kara <[email protected]> Cc: Jeff Moyer <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Ross Zwisler <[email protected]> Acked-by: Tony Luck <[email protected]> Signed-off-by: Dan Williams <[email protected]>
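A minimal sketch of the direct call after this conversion; memcpy_mcsafe() returns non-zero if a machine check interrupted the copy (dst, src and len are assumed context):

    if (memcpy_mcsafe(dst, src, len))
            return -EIO;    /* media error: surface it to the caller */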
2017-04-25  block: remove block_device_operations ->direct_access()  (Dan Williams)  1 file, -17/+0
Now that all the producers and consumers of dax interfaces have been converted to using dax_operations on a dax_device, remove the block device direct_access enabling. Signed-off-by: Dan Williams <[email protected]>
2017-04-25  filesystem-dax: convert to dax_direct_access()  (Dan Williams)  1 file, -2/+4
Now that a dax_device is plumbed through all dax-capable drivers we can switch from block_device_operations to dax_operations for invoking ->direct_access. This also lets us kill off some usages of struct blk_dax_ctl on the way to its eventual removal. Suggested-by: Christoph Hellwig <[email protected]> Signed-off-by: Dan Williams <[email protected]>
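For orientation, a hedged sketch of the dax_operations entry point the filesystems now call; kaddr and pfn are output parameters, and a negative return is an error:

    long avail;
    void *kaddr;
    pfn_t pfn;

    avail = dax_direct_access(dax_dev, pgoff, nr_pages, &kaddr, &pfn);
    if (avail < 0)
            return avail;
    /* avail = number of contiguous pages addressable at kaddr */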
2017-04-25  Revert "block: use DAX for partition table reads"  (Dan Williams)  1 file, -6/+0
commit d1a5f2b4d8a1 ("block: use DAX for partition table reads") was part of a stalled effort to allow dax mappings of block devices. Since then the device-dax mechanism has filled the role of dax-mapping static device ranges. Now that we are moving ->direct_access() from a block_device operation to a dax_inode operation we would need block devices to map and carry their own dax_inode reference. Unless / until we decide to revive dax mapping of raw block devices through the dax_inode scheme, there is no need to carry read_dax_sector(). Its removal in turn allows for the removal of bdev_direct_access() and should have been included in commit 223757016837 ("block_dev: remove DAX leftovers"). Cc: Jeff Moyer <[email protected]> Signed-off-by: Dan Williams <[email protected]>
2017-04-25  ext2, ext4, xfs: retrieve dax_device for iomap operations  (Dan Williams)  1 file, -0/+1
In preparation for converting fs/dax.c to use dax_direct_access() instead of bdev_direct_access(), add the plumbing to retrieve the dax_device associated with a given block_device. Signed-off-by: Dan Williams <[email protected]>
2017-04-25  dm: teach dm-targets to use a dax_device + dax_operations  (Dan Williams)  1 file, -3/+4
Arrange for dm to lookup the dax services available from member devices. Update the dax-capable targets, linear and stripe, to route dax operations to the underlying device. Changes the target-internal ->direct_access() method to more closely align with the dax_operations ->direct_access() calling convention. Cc: Toshi Kani <[email protected]> Reviewed-by: Mike Snitzer <[email protected]> Signed-off-by: Dan Williams <[email protected]>
2017-04-25  net: Generic XDP  (David S. Miller)  1 file, -0/+8
This provides a generic SKB-based non-optimized XDP path which is used if either the driver lacks a specific XDP implementation, or the user requests it via a new IFLA_XDP_FLAGS value named XDP_FLAGS_SKB_MODE. It is arguable that perhaps I should have required something like this as part of the initial XDP feature merge. I believe this is critical for two reasons:

1) Accessibility. More people can play with XDP with fewer dependencies. Yes I know we have XDP support in virtio_net, but that just creates another dependency for learning how to use this facility. I wrote this to make life easier for the XDP newbies.

2) As a model for what the expected semantics are. If there is a pure generic core implementation, it serves as a semantic example for driver folks adding XDP support.

One thing I have not tried to address here is the issue of XDP_PACKET_HEADROOM, thanks to Daniel for spotting that. It seems incredibly expensive to do a skb_cow(skb, XDP_PACKET_HEADROOM) or whatever even if the XDP program doesn't try to push headers at all. I think we really need the verifier to somehow propagate whether certain XDP helpers are used or not.

v5:
- Handle both negative and positive offset after running prog
- Fix mac length in XDP_TX case (Alexei)
- Use rcu_dereference_protected() in free_netdev (kbuild test robot)

v4:
- Fix MAC header adjustment before calling prog (David Ahern)
- Disable LRO when generic XDP is installed (Michael Chan)
- Bypass qdisc et al. on XDP_TX and record the event (Alexei)
- Do not perform generic XDP on reinjected packets (DaveM)

v3:
- Make sure XDP program sees packet at MAC header, push back MAC header if we do XDP_TX. (Alexei)
- Elide GRO when generic XDP is in use. (Alexei)
- Add XDP_FLAGS_SKB_MODE flag which the user can use to request generic XDP even if the driver has an XDP implementation. (Alexei)
- Report whether SKB mode is in use in rtnl_xdp_fill() via XDP_FLAGS attribute. (Daniel)

v2:
- Add some "fall through" comments in switch statements based upon feedback from Andrew Lunn
- Use RCU for generic xdp_prog, thanks to Johannes Berg.

Tested-by: Andy Gospodarek <[email protected]> Tested-by: Jesper Dangaard Brouer <[email protected]> Tested-by: David Ahern <[email protected]> Signed-off-by: David S. Miller <[email protected]>
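A hedged sketch of the uapi side described above; the exact flag values are assumptions based on include/uapi/linux/if_link.h of this era:

    /* Bits for the IFLA_XDP_FLAGS netlink attribute. */
    #define XDP_FLAGS_UPDATE_IF_NOEXIST     (1U << 0)
    #define XDP_FLAGS_SKB_MODE              (1U << 1)   /* force generic XDP */

Userspace sets XDP_FLAGS_SKB_MODE alongside IFLA_XDP_FD when attaching a program to request the generic path even on drivers with native XDP support.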
2017-04-25  qed/qede: Add UDP ports in bulletin board  (Chopra, Manish)  1 file, -0/+1
This patch adds support for UDP ports in the bulletin board, to notify the VFs of UDP port changes. Signed-off-by: Manish Chopra <[email protected]> Signed-off-by: Yuval Mintz <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2017-04-25  qed/qede: Enable tunnel offloads based on hw configuration  (Chopra, Manish)  1 file, -0/+5
This patch enables tunnel feature offloads based on hw configuration at initialization time, instead of always enabling them. Signed-off-by: Manish Chopra <[email protected]> Signed-off-by: Yuval Mintz <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2017-04-25  regulator: arizona-ldo1: Move pdata into a separate structure  (Richard Fitzgerald)  2 files, -2/+26
In preparation for sharing this driver with Madera, move the pdata for the LDO1 regulator out of struct arizona_pdata into a dedicated pdata struct for this driver. As a result the code in arizona_ldo1_of_get_pdata() can be made independent of struct arizona. This patch also updates the definition of struct arizona_pdata and the use of this pdata in mach-crag6410-module.c Signed-off-by: Richard Fitzgerald <[email protected]> Acked-by: Krzysztof Kozlowski <[email protected]> Acked-by: Lee Jones <[email protected]> Signed-off-by: Mark Brown <[email protected]>
2017-04-25  regulator: arizona-micsupp: Move pdata into a separate structure  (Richard Fitzgerald)  2 files, -1/+23
In preparation for sharing this driver with Madera, move the pdata for the micsupp regulator out of struct arizona_pdata into a dedicated pdata struct for this driver. As a result the code in arizona_micsupp_of_get_pdata() can be made independent of struct arizona. This patch also updates the definition of struct arizona_pdata and the use of this pdata in mach-crag6410-module.c Signed-off-by: Richard Fitzgerald <[email protected]> Acked-by: Lee Jones <[email protected]> Signed-off-by: Mark Brown <[email protected]>
2017-04-25  mtd: nand: allow drivers to request minimum alignment for passed buffer  (Masahiro Yamada)  1 file, -0/+2
In some cases, nand_do_{read,write}_ops is passed an unaligned ops->datbuf. Drivers using DMA will be unhappy about an unaligned buffer. The new struct member, buf_align, represents the minimum alignment the driver requires for the buffer. If the buffer passed from the upper MTD layer does not have enough alignment, nand_do_*_ops will use bufpoi. Signed-off-by: Masahiro Yamada <[email protected]> Signed-off-by: Boris Brezillon <[email protected]>
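A minimal sketch of a DMA-based NAND driver declaring its requirement in its probe path; the value 4 is purely illustrative:

    /* Buffers handed down from MTD must be at least 4-byte aligned;
     * otherwise the core bounces the I/O through its internal bufpoi. */
    chip->buf_align = 4;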
2017-04-25  mtd: nand: relax ecc.read_page() return value for uncorrectable ECC  (Masahiro Yamada)  1 file, -1/+1
The comment for ecc.read_page() requires that it should return "0 if bitflips uncorrectable". Actually, drivers could return positive values when uncorrectable bitflips occur. For example, nand_read_page_swecc() is the case. If ecc.correct() returns -EBADMSG for the first ECC sector, and a positive value for the second one, nand_read_page_swecc() returns a positive max_bitflips and increments ecc_stats.failed for the same page. The requirement can be relaxed by tweaking nand_do_read_ops(). Move the max_bitflips calculation below the retry. Signed-off-by: Masahiro Yamada <[email protected]> Suggested-by: Boris Brezillon <[email protected]> Signed-off-by: Boris Brezillon <[email protected]>
2017-04-25  mtd: nand: Remove unused chip->write_page() hook  (Boris Brezillon)  1 file, -4/+0
The last/only user of the chip->write_page() hook (the Atmel NAND controller driver) has been reworked and is no longer specifying a custom ->write_page() implementation. Drop this hook before someone else start abusing it. Signed-off-by: Boris Brezillon <[email protected]> Reviewed-by: Masahiro Yamada <[email protected]>
2017-04-25  can: complete initial namespace support  (Oliver Hartkopp)  1 file, -2/+2
The statistics and their proc output were not implemented as per-net in the initial network namespace support by Mario Kicherer (8e8cda6d737d). This patch adds the missing per-net statistics for the CAN subsystem. Signed-off-by: Oliver Hartkopp <[email protected]> Signed-off-by: Marc Kleine-Budde <[email protected]>