|
Antonio Quartulli says:
====================
Included changes:
- prevent compatibility issue between DAT and speedy join from creating
inconsistencies in the global translation table
- make sure temporary TT entries are purged out if not claimed
- fix comparison function used for TT hash table
- fix invalid stack access in batadv_dat_select_candidates()
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
It was found by Saurabh Sengar that the netlink code tried to allocate
memory with GFP_KERNEL while holding a spinlock. While it is possible
to fix the issue by replacing GFP_KERNEL with GFP_ATOMIC, it is better
to get rid of the spinlock while sending the packet. However, in order
to protect against a race condition in which a quick response may be received
before the request is put on the request list, we need to put the request
on the list first.
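As a minimal sketch of that ordering (hypothetical names: req_lock, req_list,
send_request_pkt; not the actual ib_sa code), the request goes on the list
under the lock, the lock is dropped, and only then is the packet sent, so no
GFP_KERNEL allocation happens under the spinlock and an early response still
finds the request on the list:
  #include <linux/list.h>
  #include <linux/spinlock.h>

  static DEFINE_SPINLOCK(req_lock);            /* hypothetical stand-ins */
  static LIST_HEAD(req_list);

  struct my_req { struct list_head list; };

  static int queue_and_send(struct my_req *req)
  {
      unsigned long flags;
      int ret;

      spin_lock_irqsave(&req_lock, flags);
      list_add_tail(&req->list, &req_list);    /* on the list before sending */
      spin_unlock_irqrestore(&req_lock, flags);

      ret = send_request_pkt(req);             /* assumed sender, may sleep */
      if (ret) {
          /* never sent, so no response can race with this removal */
          spin_lock_irqsave(&req_lock, flags);
          list_del(&req->list);
          spin_unlock_irqrestore(&req_lock, flags);
      }
      return ret;
  }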
Signed-off-by: Kaike Wan <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Reviewed-by: Ira Weiny <[email protected]>
Reported-by: Saurabh Sengar <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
|
|
do_div is the wrong way to divide a sector_t, as it is less
efficient when sector_t is 32-bit wide. With the upcoming
do_div optimizations, the kernel starts warning about this:
drivers/infiniband/ulp/iser/iser_verbs.c:1296:4: note: in expansion of macro 'do_div'
include/asm-generic/div64.h:224:22: warning: passing argument 1 of '__div64_32' from incompatible pointer type
This changes the code to use sector_div instead, which always
produces optimal code.
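For illustration only (not the iSER code itself, and assuming the usual
sector_div() from linux/blkdev.h): sector_div() divides a sector_t in place
and returns the remainder, and stays efficient when sector_t is 32 bits wide:
  #include <linux/blkdev.h>    /* sector_t, sector_div() */
  #include <linux/types.h>

  /* Illustrative helper: split a byte count into 512-byte sectors. */
  static sector_t bytes_to_sectors(sector_t bytes, u32 *rem)
  {
      *rem = sector_div(bytes, 512);   /* divides in place, returns remainder */
      return bytes;                    /* now holds bytes / 512 */
  }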
Signed-off-by: Arnd Bergmann <[email protected]>
Reviewed-by: Sagi Grimberg <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
|
|
The current implementation gets a spin_lock, and at any scale with
qib and hfi1 post send, the lock contention grows exponentially
with the number of QPs.
idr_find() is RCU compatible, so the read doesn't need the lock.
Change to use rcu_read_lock() and rcu_read_unlock() in
__idr_get_uobj().
kfree_rcu() is used to ensure a grace period between the
idr removal and the actual free.
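A minimal sketch of that pattern, with an assumed object type carrying a kref
and an rcu_head (not the real ib_uobject layout):
  #include <linux/idr.h>
  #include <linux/kref.h>
  #include <linux/rcupdate.h>
  #include <linux/slab.h>

  struct my_uobj {
      struct kref ref;
      struct rcu_head rcu;
  };

  static struct my_uobj *my_idr_get(struct idr *idr, int id)
  {
      struct my_uobj *uobj;

      rcu_read_lock();                           /* no spinlock for the read */
      uobj = idr_find(idr, id);
      if (uobj && !kref_get_unless_zero(&uobj->ref))
          uobj = NULL;                           /* object being torn down */
      rcu_read_unlock();
      return uobj;
  }

  static void my_idr_remove(struct idr *idr, int id, struct my_uobj *uobj)
  {
      idr_remove(idr, id);
      kfree_rcu(uobj, rcu);                      /* free after a grace period */
  }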
Reviewed-by: Ira Weiny <[email protected]>
Signed-off-by: Mike Marciniszyn <[email protected]>
Reviewed-By: Jason Gunthorpe <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
|
|
Minor errors found via code inspection during future development.
SFF-8636 defines bit position 2 to hold the status indication of
QSFP memory paging. The mask used to test for the value was
incorrect and is fixed in this patch. Additionally, the dump
function had a mismatch between the field being printed and the
field used to source the data; this is also fixed.
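For illustration (hypothetical macro and function names): bit position 2
corresponds to mask 0x04, not 0x02:
  #include <linux/types.h>

  #define QSFP_STATUS_PAGING_BIT   2
  #define QSFP_STATUS_PAGING_MASK  (1 << QSFP_STATUS_PAGING_BIT)  /* 0x04 */

  static inline bool qsfp_paging_bit_set(u8 status_byte)
  {
      return status_byte & QSFP_STATUS_PAGING_MASK;   /* test bit 2 */
  }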
Reviewed-by: Mitko Haralanov <[email protected]>
Reviewed-by: Mike Marciniszyn <[email protected]>
Reported-by: Easwar Hariharan <[email protected]>
Signed-off-by: Easwar Hariharan <[email protected]>
Signed-off-by: Mike Marciniszyn <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
|
|
Locally generated IPv4 and (probably) IPv6 packets are dropped because
skb->protocol isn't set. We could write wrappers to lwtunnel_output
for IPv4 and IPv6 that set the protocol accordingly and then call
lwtunnel_output, but mpls_output relies on the AF-specific type of dst
anyway to get the via address.
Therefore, make use of dst->dst_ops->family in mpls_output to
determine the type of nexthop and thus protocol of the packet instead
of checking skb->protocol.
Fixes: 61adedf3e3f1 ("route: move lwtunnel state to dst_entry")
Reported-by: Sam Russell <[email protected]>
Signed-off-by: Robert Shearman <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Jiri Benc says:
====================
vxlan: IPv6 fill_metadata_dst support
This adds IPv6 support to ndo_fill_metadata_dst in vxlan. The IPv4 part
needs some restructuring to avoid duplicate code, this will be sent as
a separate patch targeting net-next.
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
Fill the metadata correctly even when tunneling over IPv6. Also, check that
the provided metadata is of an address family that is supported by the
tunnel.
Signed-off-by: Jiri Benc <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Will be used also for ndo_fill_metadata_dst.
Signed-off-by: Jiri Benc <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Commit e622f2f4ad21 ("IB: split struct ib_send_wr")
introduced a regression for HCAs whose user mode post
sends go through ib_uverbs_post_send().
The code didn't account for the fact that the first sge is
offset by an operation dependent length. The allocation did,
but the pointer to the destination sge list is computed without
that knowledge. The sge list copy_from_user() then corrupts
fields in the work request.
Store the operation dependent length in a local variable and
compute the sge list copy_from_user() destination using that length.
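A sketch of that computation with assumed helper and field names (not the
actual ib_uverbs_post_send() code); the key point is that the same
operation-dependent length offsets both the allocation layout and the copy
destination:
  #include <linux/errno.h>
  #include <linux/uaccess.h>
  #include <rdma/ib_verbs.h>          /* struct ib_sge */

  /* Hypothetical helper: copy the user sge list to just past the
   * operation-dependent header of the kernel work request. */
  static int copy_user_sges(void *wr, size_t op_len,
                            const void __user *user_sgl, unsigned int num_sge)
  {
      struct ib_sge *dst = (struct ib_sge *)((char *)wr + op_len);

      if (copy_from_user(dst, user_sgl, num_sge * sizeof(struct ib_sge)))
          return -EFAULT;
      return 0;
  }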
Reviewed-by: Ira Weiny <[email protected]>
Signed-off-by: Mike Marciniszyn <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
|
|
struct qib_mr requires that the mr member be last because struct
qib_mregion contains a dynamic array at the end. Additions of
members should have been placed before this member, as the
comment noted.
Failure to do so was causing random memory corruption. Reproducing
this bug was easy to do by running the client and server of
ib_write_bw -s 8 -n 5 on the same node.
This BUG() was tripped in a slab debug kernel:
kernel BUG at mm/slab.c:2572!
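A simplified illustration of the layout constraint (not the real qib
structures, which use a zero-length array in the same way):
  #include <linux/types.h>

  struct example_region {
      u32 mapsz;
      void *map[0];             /* dynamically sized; writes to it clobber
                                   anything placed after the enclosing member */
  };

  struct example_mr {
      void *new_member;         /* additions belong here ... */
      struct example_region region;   /* ... so this stays the last member */
  };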
Fixes: 38071a461f0a ("IB/qib: Support the new memory registration API")
Reviewed-by: Mike Marciniszyn <[email protected]>
Signed-off-by: Ira Weiny <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
|
|
The NFSv4.1 callback channel is currently broken because the receive
message will keep shrinking because the backchannel receive buffer size
never gets reset.
The easiest solution to this problem is, rather than changing the
receive buffer, to adjust the copied request.
Fixes: 38b7631fbe42 ("nfs4: limit callback decoding to received bytes")
Cc: Benjamin Coddington <[email protected]>
Cc: [email protected]
Signed-off-by: Trond Myklebust <[email protected]>
|
|
Manish Chopra says:
====================
qed: Bug fixes
Please consider applying this series to net.
V2:
- Use available helpers for declaring bitmap
and bitmap operations.
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
When using INTa, the ISR might be called before the device is configured
for INTa [e.g., due to another device asserting the shared interrupt line],
in which case the ISR would read the SISR registers that shouldn't be
read unless HW is already configured for INTa. This might break interrupts
later on. There's also an MSI-X issue due to this difference, although
it's mostly theoretical.
This patch changes the initialization order, calling request_irq() for the
slowpath interrupt only after the chip is configured for working
in the preferred interrupt mode.
Signed-off-by: Sudarsana Kalluru <[email protected]>
Signed-off-by: Manish Chopra <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
We can't rely on PCI config space to discover the BAR size,
as in some environments it returns a wrong, too large value.
Instead, rely on a device register, which contains the value
provided by the MFW at preboot.
Signed-off-by: Ariel Elior <[email protected]>
Signed-off-by: Manish Chopra <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Concurrent non-blocking slowpath ramrods can be completed
out-of-order on the completion chain. Recycling completed elements,
while previously sent elements are still completion pending,
can lead to overriding of active elements on the chain. Furthermore,
sending pending slowpath ramrods currently lacks the update of the
chain element physical pointer.
This patch:
* Ensures that ramrods are sent to the FW with
consecutive echo values.
* Handles out-of-order completions by freeing only first
successive completed entries.
* Updates the chain element physical pointer when copying
a pending element into a free element for sending.
Signed-off-by: Tomer Tayar <[email protected]>
Signed-off-by: Manish Chopra <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The number of chain next-pointer elements between the producer
and the consumer indices depends on which pages they currently
point to. The current calculation is based only on their difference,
and it can yield a free-element count which is higher by 1
than the actual value.
Signed-off-by: Tomer Tayar <[email protected]>
Signed-off-by: Manish Chopra <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
If NO_DMA=y:
ERROR: "dma_map_single" [drivers/net/ethernet/aurora/nb8800.ko] undefined!
ERROR: "dma_unmap_page" [drivers/net/ethernet/aurora/nb8800.ko] undefined!
ERROR: "dma_sync_single_for_cpu" [drivers/net/ethernet/aurora/nb8800.ko] undefined!
ERROR: "dma_unmap_single" [drivers/net/ethernet/aurora/nb8800.ko] undefined!
ERROR: "dma_alloc_coherent" [drivers/net/ethernet/aurora/nb8800.ko] undefined!
ERROR: "dma_mapping_error" [drivers/net/ethernet/aurora/nb8800.ko] undefined!
ERROR: "dma_map_page" [drivers/net/ethernet/aurora/nb8800.ko] undefined!
ERROR: "dma_free_coherent" [drivers/net/ethernet/aurora/nb8800.ko] undefined!
Signed-off-by: Geert Uytterhoeven <[email protected]>
Acked-by: Mans Rullgard <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Pull virtio fixes from Michael Tsirkin:
"This includes some fixes and cleanups in virtio and vhost code.
Most notably, shadowing the index fixes the excessive cacheline
bouncing observed on AMD platforms"
* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
virtio_ring: shadow available ring flags & index
virtio: Do not drop __GFP_HIGH in alloc_indirect
vhost: replace % with & on data path
tools/virtio: fix byteswap logic
tools/virtio: move list macro stubs
virtio: fix memory leak of virtio ida cache layers
vhost: relax log address alignment
virtio-net: Stop doing DMA from the stack
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Pull ext4 fixes from Ted Ts'o:
"Ext4 bug fixes for v4.4, including fixes for post-2038 time encodings,
some endian conversion problems with ext4 encryption, potential memory
leaks after truncate in data=journal mode, and an ocfs2 regression
caused by a jbd2 performance improvement"
* tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
jbd2: fix null committed data return in undo_access
ext4: add "static" to ext4_seq_##name##_fops struct
ext4: fix an endianness bug in ext4_encrypted_follow_link()
ext4: fix an endianness bug in ext4_encrypted_zeroout()
jbd2: Fix unreclaimed pages after truncate in data=journal mode
ext4: Fix handling of extended tv_sec
|
|
Bring the linker script in line with the recent increase of
L1_CACHE_BYTES to 128. Replace the hardcoded value of 64 with the
symbolic constant.
Signed-off-by: Ard Biesheuvel <[email protected]>
Acked-by: Mark Rutland <[email protected]>
[[email protected]: fix up RW_DATA_SECTION as well]
Signed-off-by: Catalin Marinas <[email protected]>
|
|
The LightNVM module exposes a debug interface when CONFIG_NVM_DEBUG is
set. This interface takes a string to configure media managers and
targets. Make sure this interface is only exposed when chosen
deliberately.
Signed-off-by: Matias Bjørling <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
After the gennvm module has been initialized, it might be attached to
one or several devices. In that case, the module is in use. Make sure
that it cannot be unloaded.
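A minimal sketch of that pinning with hypothetical types: take a module
reference when a device attaches, and drop it on detach, so the media manager
cannot be unloaded while in use:
  #include <linux/errno.h>
  #include <linux/module.h>

  struct my_mgr { struct module *owner; };    /* stand-in types */
  struct my_dev { struct my_mgr *mgr; };

  static int attach_to_mgr(struct my_dev *dev, struct my_mgr *mgr)
  {
      if (!try_module_get(mgr->owner))        /* fails while unloading */
          return -ENODEV;
      dev->mgr = mgr;
      return 0;
  }

  static void detach_from_mgr(struct my_dev *dev)
  {
      module_put(dev->mgr->owner);            /* unloading possible again */
      dev->mgr = NULL;
  }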
Signed-off-by: Matias Bjørling <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
This patch fixes two issues during media manager registration.
1. The ppa pool can be used at media manager registration. Allocate the
ppa pool before that.
2. If a media manager can't be found, this should not lead to the
device being unallocated. A media manager that can manage the device
can be registered later. Only warn if a media manager fails
initialization.
Signed-off-by: Matias Bjørling <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
In the case where a request queue is passed to the low level lightnvm
device driver integration, the device driver might pass its admin
commands through another queue. Instead pass nvm_dev, and let the
low level driver pick the appropriate queue.
Reported-by: Christoph Hellwig <[email protected]>
Signed-off-by: Matias Bjørling <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
It is not obvious what NVM_IO_* and NVM_BLK_T_* are used for. Make sure
to comment them appropriately, like the other constants.
Signed-off-by: Matias Bjørling <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
The core may issue I/Os before a media manager is registered with
the lightnvm subsystem. Make sure that we don't call the media manager
->end_io prematurely with a null pointer.
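The guard amounts to something like the following fragment (field names
assumed, mirroring the description above):
  /* Only dispatch the completion if a media manager is registered. */
  if (dev->mt && dev->mt->end_io)
      dev->mt->end_io(rqd, error);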
Signed-off-by: Matias Bjørling <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
The spin_unlock is duplicated multiple times. Jump to a single unlock
to improve the code flow.
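A sketch of the resulting flow with a hypothetical context struct: every
error path jumps to a single unlock label:
  #include <linux/errno.h>
  #include <linux/spinlock.h>
  #include <linux/types.h>

  struct my_ctx {
      spinlock_t lock;
      bool ready;
      bool busy;
  };

  static int claim_ctx(struct my_ctx *ctx)
  {
      int ret = 0;

      spin_lock(&ctx->lock);
      if (!ctx->ready) {
          ret = -EAGAIN;
          goto out_unlock;
      }
      if (ctx->busy) {
          ret = -EBUSY;
          goto out_unlock;
      }
      ctx->busy = true;
  out_unlock:
      spin_unlock(&ctx->lock);    /* one unlock for all paths */
      return ret;
  }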
Signed-off-by: Wenwei Tao <[email protected]>
Signed-off-by: Matias Bjørling <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
Put the allocated blocks back on the free list
when LUN configuration fails, to make these
blocks usable by others.
Signed-off-by: Wenwei Tao <[email protected]>
Signed-off-by: Matias Bjørling <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
rrpc_get_blk uses the constant 0 as the input parameter
of nvm_get_blk; this may cause getting a GC block
to fail unexpectedly.
Signed-off-by: Wenwei Tao <[email protected]>
Signed-off-by: Matias Bjørling <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
Improves cacheline transfer flow of available ring header.
Virtqueues are implemented as a pair of rings, one producer->consumer
avail ring and one consumer->producer used ring; preceding the
avail ring in memory are two contiguous u16 fields -- avail->flags
and avail->idx. A producer posts work by writing to avail->idx and
a consumer reads avail->idx.
The flags and idx fields only need to be written by a producer CPU
and only read by a consumer CPU; when the producer and consumer are
running on different CPUs and the virtio_ring code is structured to
only have source writes/sink reads, we can continuously transfer the
avail header cacheline between 'M' states between cores. This flow
optimizes core -> core bandwidth on certain CPUs.
(see: "Software Optimization Guide for AMD Family 15h Processors",
Section 11.6; similar language appears in the 10h guide and should
apply to CPUs w/ exclusive caches, using LLC as a transfer cache)
Unfortunately the existing virtio_ring code issued reads to the
avail->idx and read-modify-writes to avail->flags on the producer.
This change shadows the flags and index fields in producer memory;
the vring code now reads from the shadows and only ever writes to
avail->flags and avail->idx, allowing the cacheline to transfer
core -> core optimally.
In a concurrent version of vring_bench, the time required for
10,000,000 buffer checkout/returns was reduced by ~2% (average
across many runs) on an AMD Piledriver (15h) CPU:
(w/o shadowing):
Performance counter stats for './vring_bench':
5,451,082,016 L1-dcache-loads
...
2.221477739 seconds time elapsed
(w/ shadowing):
Performance counter stats for './vring_bench':
5,405,701,361 L1-dcache-loads
...
2.168405376 seconds time elapsed
The further away (in a NUMA sense) virtio producers and consumers are
from each other, the more we expect to benefit. Physical implementations
of virtio devices and implementations of virtio where the consumer polls
vring avail indexes (vhost) should also benefit.
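A simplified sketch of the shadowing (not the real vring_virtqueue, with
endianness conversions and memory barriers omitted): the producer reads and
modifies only its private copies and performs write-only stores to the shared
avail ring:
  #include <linux/types.h>
  #include <linux/virtio_ring.h>      /* struct vring_avail */

  struct producer {
      u16 num;                        /* ring size, power of two */
      u16 shadow_avail_idx;           /* private copies, never ... */
      u16 shadow_avail_flags;         /* ... read back from shared memory */
      struct vring_avail *avail;      /* shared with the consumer */
  };

  static void publish(struct producer *p, u16 head)
  {
      p->avail->ring[p->shadow_avail_idx & (p->num - 1)] = head;
      p->shadow_avail_idx++;
      /* a write barrier precedes this store in real code */
      p->avail->idx = p->shadow_avail_idx;
  }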
Signed-off-by: Venkatesh Srinivas <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
|
|
b92b1b89a33c ("virtio: force vring descriptors to be allocated from
lowmem") tried to exclude highmem pages for descriptors so it cleared
__GFP_HIGHMEM from a given gfp mask. The patch also cleared __GFP_HIGH
which doesn't make much sense for this fix because __GFP_HIGH only
controls access to memory reserves and it doesn't have any influence
on the zone selection. Some of the call paths use GFP_ATOMIC, and
dropping __GFP_HIGH would reduce their chances of success because of
the lack of access to memory reserves.
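In sketch form (an illustrative wrapper, not the actual virtio_ring
allocation): clear only __GFP_HIGHMEM, which steers zone selection, and leave
__GFP_HIGH alone so GFP_ATOMIC callers keep their access to memory reserves:
  #include <linux/gfp.h>
  #include <linux/slab.h>

  static void *alloc_desc_table(size_t size, gfp_t gfp)
  {
      gfp &= ~__GFP_HIGHMEM;      /* keep descriptors out of highmem */
      /* __GFP_HIGH is intentionally preserved */
      return kmalloc(size, gfp);
  }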
Signed-off-by: Michal Hocko <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Acked-by: Will Deacon <[email protected]>
Reviewed-by: Mel Gorman <[email protected]>
|
|
We know vring num is a power of 2, so use &
to mask the high bits.
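As a tiny example of the equivalence being relied on: for a power-of-two ring
size, masking with size - 1 gives the same result as the modulo without a
division on the data path:
  #define RING_NUM 256U                    /* must be a power of 2 */

  static inline unsigned int ring_slot(unsigned int idx)
  {
      return idx & (RING_NUM - 1);         /* same as idx % RING_NUM */
  }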
Signed-off-by: Michael S. Tsirkin <[email protected]>
|
|
commit cf561f0d2eb74574ad9985a2feab134267a9d298 ("virtio: introduce
virtio_is_little_endian() helper") changed byteswap logic to
skip feature bit checks for LE platforms, but didn't
update tools/virtio, so vring_bench started failing.
Update the copy under tools/virtio/ (TODO: find a way to avoid this code
duplication).
Cc: Greg Kurz <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
|
|
Makes them more generally available.
Signed-off-by: Michael S. Tsirkin <[email protected]>
|
|
The virtio core uses a static ida named virtio_index_ida for
assigning index numbers to virtio devices during registration.
The ida core may allocate some internal idr cache layers and
an ida bitmap upon any ida allocation, and all these layers are
truly freed only upon the ida destruction. The virtio_index_ida
is not destroyed at present, leading to a memory leak when using
the virtio core as a module and at least one virtio device is
registered and unregistered.
Fix this by invoking ida_destroy() in the virtio core module
exit.
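In sketch form, with a stand-in name for virtio_index_ida:
  #include <linux/idr.h>
  #include <linux/module.h>

  static DEFINE_IDA(example_index_ida);    /* stand-in for virtio_index_ida */

  static void __exit example_core_exit(void)
  {
      ida_destroy(&example_index_ida);     /* frees cached layers and bitmap */
  }
  module_exit(example_core_exit);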
Cc: [email protected]
Signed-off-by: Suman Anna <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
|
|
commit 5d9a07b0de512b77bf28d2401e5fe3351f00a240 ("vhost: relax used
address alignment") fixed the alignment for the used virtual address,
but not for the physical address used for logging.
That's a mistake: alignment should clearly be the same for virtual and
physical addresses.
Cc: [email protected]
Signed-off-by: Michael S. Tsirkin <[email protected]>
|
|
Every attempt to issue a read log page command locks up the controller.
The command is currently sent if the SATA device includes the devslp feature,
in order to read out the timing data.
This attempt to read the data locks up the controller, and the device
is not recognized correctly (failed to set xfermode) and cannot be accessed.
This was found on Freescale P1013/P1022 and T4240 CPUs
using a ATP IG mSATA 4GB with the devslp feature.
fsl-sata ff718000.sata: Sata FSL Platform/CSB Driver init
[ 1.254195] scsi0 : sata_fsl
[ 1.256004] ata1: SATA max UDMA/133 irq 74
[ 1.370666] fsl-gianfar ethernet.3: enabled errata workarounds, flags: 0x4
[ 1.470671] fsl-gianfar ethernet.4: enabled errata workarounds, flags: 0x4
[ 1.775584] ata1: Signature Update detected @ 504 msecs
[ 1.947594] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 1.948366] ata1.00: ATA-8: ATP IG mSATA, 20150311, max UDMA/133
[ 1.948371] ata1.00: 7732368 sectors, multi 0: LBA
[ 1.948843] ata1.00: failed to get Identify Device Data, Emask 0x1
[ 1.948857] ata1.00: failed to set xfermode (err_mask=0x40)
[ 7.467557] ata1: Signature Update detected @ 504 msecs
[ 7.639560] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 7.651320] ata1.00: failed to get Identify Device Data, Emask 0x1
[ 7.651360] ata1.00: failed to set xfermode (err_mask=0x40)
[ 7.655628] ata1: limiting SATA link speed to 1.5 Gbps
[ 7.659458] ata1.00: limiting speed to UDMA/133:PIO3
[ 13.163554] ata1: Signature Update detected @ 504 msecs
[ 13.335558] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 13.347298] ata1.00: failed to get Identify Device Data, Emask 0x1
[ 13.347334] ata1.00: failed to set xfermode (err_mask=0x40)
[ 13.351601] ata1.00: disabled
[ 13.353278] ata1: exception Emask 0x50 SAct 0x0 SErr 0x800 action 0x6 frozen t4
[ 13.359281] ata1: SError: { HostInt }
[ 13.361644] ata1: hard resetting link
Signed-off-by: Andreas Werner <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
|
|
Some controllers lock up on ata_read_log_page.
Add a new ATA port flag, ATA_FLAG_NO_LOG_PAGE, which can be used
to blacklist a controller.
If this flag is set, any attempt to read a log page returns an error
without actually issuing the command.
Signed-off-by: Andreas Werner <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
|
|
The following commit which went into mainline through networking tree
3b13758f51de ("cgroups: Allow dynamically changing net_classid")
conflicts in net/core/netclassid_cgroup.c with the following pending
fix in cgroup/for-4.4-fixes.
1f7dd3e5a6e4 ("cgroup: fix handling of multi-destination migration from subtree_control enabling")
The former separates out update_classid() from cgrp_attach() and
updates it to walk all fds of all tasks in the target css so that it
can be used from both migration and config change paths. The latter
drops @css from cgrp_attach().
Resolve the conflict by making cgrp_attach() call update_classid()
with the css from the first task. We can revive @tset walking in
cgrp_attach() but given that net_cls is v1 only where there always is
only one target css during migration, this is fine.
Signed-off-by: Tejun Heo <[email protected]>
Reported-by: Stephen Rothwell <[email protected]>
Cc: Nina Schiff <[email protected]>
|
|
batadv_dat_select_candidates provides a u32 to batadv_hash_dat, but it
needs a batadv_dat_entry with at least ip and vid filled in.
Fixes: 3e26722bc9f2 ("batman-adv: make the Distributed ARP Table vlan aware")
Signed-off-by: Sven Eckelmann <[email protected]>
Acked-by: Antonio Quartulli <[email protected]>
Signed-off-by: Marek Lindner <[email protected]>
Signed-off-by: Antonio Quartulli <[email protected]>
|
|
The translation table implementation, namely batadv_compare_tt(),
is used to compare two client entries and decide whether they
hold the same information. Each client entry is identified by
its MAC address and its VLAN id (VID).
Consequently, batadv_compare_tt() has to compare not only the MAC
addresses but also the VIDs.
Without this fix, adding a new client entry that possesses the same
MAC address as another client but operates on a different VID will
fail because both client entries will be considered identical.
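A simplified sketch of the comparison (not the actual batadv structures):
two entries match only when both the MAC address and the VID are equal:
  #include <linux/etherdevice.h>   /* ether_addr_equal() */
  #include <linux/if_ether.h>      /* ETH_ALEN */

  struct tt_entry_example {
      u8 addr[ETH_ALEN];
      unsigned short vid;
  };

  static bool tt_entries_match(const struct tt_entry_example *a,
                               const struct tt_entry_example *b)
  {
      return ether_addr_equal(a->addr, b->addr) && a->vid == b->vid;
  }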
Signed-off-by: Marek Lindner <[email protected]>
Signed-off-by: Antonio Quartulli <[email protected]>
|
|
Once virtio starts using the DMA API, we won't be able to safely DMA
from the stack. virtio-net does a couple of config DMA requests
from small stack buffers -- switch to using dynamically-allocated
memory.
This should have no effect on any performance-critical code paths.
Reported-by: Andy Lutomirski <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Tested-by: Andy Lutomirski <[email protected]>
|
|
Current ARC SDP boards cannot reliably handle 1Gbit
Ethernet connections due to limitations in hardware.
To make sure networking is stable on the board we're
limiting phy to 100 Mbit.
Signed-off-by: Alexey Brodkin <[email protected]>
Signed-off-by: Vineet Gupta <[email protected]>
|
|
In the case when a temporary entry is added first and a proper tt entry
is added after that, the temporary tt entry is kept in the orig list.
However the temporary flag is removed at this point, and therefore the
purge function can not find this temporary entry anymore.
Therefore, remove the previous temp entry before adding the new proper
one.
This case can happen if a client behind a given originator moves before
the TT announcement is sent out. Other than that, this case can also be
created by bogus or malicious payload frames for VLANs which do not
exist on the sending originator.
Reported-by: Alessandro Bolletta <[email protected]>
Signed-off-by: Simon Wunderlich <[email protected]>
Acked-by: Antonio Quartulli <[email protected]>
Signed-off-by: Marek Lindner <[email protected]>
Signed-off-by: Antonio Quartulli <[email protected]>
|
|
DAT Cache replies are answered on behalf of other clients which are not
connected to the answering originator. Therefore, we shouldn't add these
clients to the answering originator's TT table through speedy join to
avoid bogus entries.
Reported-by: Alessandro Bolletta <[email protected]>
Signed-off-by: Simon Wunderlich <[email protected]>
Acked-by: Antonio Quartulli <[email protected]>
Signed-off-by: Marek Lindner <[email protected]>
Signed-off-by: Antonio Quartulli <[email protected]>
|
|
On driver detach, devm_phy_release() will put a refcount to
the phy, so get a refcount to it before returning.
Signed-off-by: Chunfeng Yun <[email protected]>
Signed-off-by: Kishon Vijay Abraham I <[email protected]>
|
|
On the internal mic of the Packard Bell DOTS, one channel
has an inverted signal. Add a quirk to fix this up.
Cc: [email protected]
BugLink: https://bugs.launchpad.net/bugs/1523232
Signed-off-by: David Henningsson <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
|
|
In BXT-P A0, HD-Audio DMA requests are issued later than expected,
which makes an audio stream sensitive to system latencies when
24/32 bit samples are playing.
Adjust the DMA FIFO threshold to force DMA requests sooner, improving
latency tolerance at the expense of power.
v2: move Intel specific code to hda_intel.c
Signed-off-by: Lu, Han <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
|
|
This semicolon causes a build error if the function call is wrapped in
parentheses.
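To illustrate the failure mode with a toy macro (not the real
__netdev_alloc_pcpu_stats()): a trailing semicolon inside a
statement-expression macro makes any parenthesized use expand to
"( ... ; )", which is not a valid expression:
  #define BAD_SQUARE(x)   ({ int _v = (x) * (x); _v; });   /* stray ';' */
  #define GOOD_SQUARE(x)  ({ int _v = (x) * (x); _v; })

  static int demo(void)
  {
      int ok = (GOOD_SQUARE(3));      /* fine */
      /* int bad = (BAD_SQUARE(3));   -- expands to "(({ ... });)": error */
      return ok;
  }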
Fixes: aabc92bbe3cf ("net: add __netdev_alloc_pcpu_stats() to indicate gfp flags")
Reported-by: Imre Kaloz <[email protected]>
Signed-off-by: Felix Fietkau <[email protected]>
Acked-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|