|
RFC 8033 suggests an alternative approach to calculate the queue
delay in PIE by using a timestamp on every enqueued packet. This
patch adds an implementation of that approach and sets it as the
default method to calculate queue delay. The previous method (based
on Little's law) to calculate queue delay is set as optional.
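For intuition, a minimal userspace sketch of the two estimators; all
names here are illustrative and are not taken from the sch_pie code:

	#include <stdint.h>

	/* Hypothetical packet record carrying an enqueue timestamp. */
	struct pkt {
		uint64_t enqueue_time_ns;	/* stamped on enqueue */
	};

	/* RFC 8033 alternative: measure each packet's delay directly. */
	static uint64_t qdelay_timestamp(const struct pkt *p, uint64_t now_ns)
	{
		return now_ns - p->enqueue_time_ns;
	}

	/* Previous (now optional) method, Little's law:
	 * queue delay = backlog / average dequeue rate. */
	static uint64_t qdelay_littles_law(uint64_t backlog_bytes,
					   uint64_t avg_dq_rate_bytes_per_sec)
	{
		if (!avg_dq_rate_bytes_per_sec)
			return 0;
		return backlog_bytes * 1000000000ULL / avg_dq_rate_bytes_per_sec;
	}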
Signed-off-by: Gautam Ramakrishnan <[email protected]>
Signed-off-by: Leslie Monis <[email protected]>
Signed-off-by: Mohit P. Tahiliani <[email protected]>
Acked-by: Dave Taht <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Add page_pool_update_nid() to be called by page pool consumers when they
detect numa node changes.
It will update the page pool nid value to start allocating from the new
effective numa node.
This is to mitigate the page pool allocating pages from the wrong NUMA
node (the one where the pool was originally allocated) and holding on
to pages that belong to a different NUMA node, which causes performance
degradation.
For pages that are already being consumed and could be returned to the
pool by the consumer, the next patch will add a per-page check to avoid
recycling them back to the pool and to return them to the page
allocator instead.
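A minimal sketch of the consumer side; the driver context below is
hypothetical, only page_pool_update_nid() itself comes from this patch:

	#include <net/page_pool.h>
	#include <linux/topology.h>

	struct my_rx_ring {			/* hypothetical driver state */
		struct page_pool *pool;
	};

	static void my_rx_numa_check(struct my_rx_ring *ring)
	{
		int nid = numa_mem_id();	/* node of the CPU doing RX */

		/* Re-home the pool if the consumer moved to another node. */
		if (READ_ONCE(ring->pool->p.nid) != nid)
			page_pool_update_nid(ring->pool, nid);
	}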
Signed-off-by: Saeed Mahameed <[email protected]>
Acked-by: Jonathan Lemon <[email protected]>
Reviewed-by: Ilias Apalodimas <[email protected]>
Acked-by: Jesper Dangaard Brouer <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The valid memory address check in dma_capable only makes sense when mapping
normal memory, not when using dma_map_resource to map a device resource.
Add a new boolean argument to dma_capable to exclude that check for the
dma_map_resource case.
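The change is roughly of this shape (a simplified sketch, not a
verbatim copy of the final code):

	static inline bool dma_capable(struct device *dev, dma_addr_t addr,
				       size_t size, bool is_ram)
	{
		dma_addr_t end = addr + size - 1;

		if (!dev->dma_mask)
			return false;

		/* The low-address sanity check only makes sense for RAM,
		 * not for dma_map_resource() mappings. */
		if (is_ram && !IS_ENABLED(CONFIG_ARCH_DMA_ADDR_T_64BIT) &&
		    min(addr, end) < phys_to_dma(dev, PFN_PHYS(min_low_pfn)))
			return false;

		return end <= min_not_zero(*dev->dma_mask, dev->bus_dma_mask);
	}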
Fixes: b12d66278dd6 ("dma-direct: check for overflows on 32 bit DMA addresses")
Reported-by: Marek Szyprowski <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Marek Szyprowski <[email protected]>
Tested-by: Marek Szyprowski <[email protected]>
|
|
Move dma_capable down a bit so that we don't need a forward declaration
for phys_to_dma.
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Nicolas Saenz Julienne <[email protected]>
|
|
Currently each architectures that wants to override dma_to_phys and
phys_to_dma also has to provide dma_capable. But there isn't really
any good reason for that. powerpc and mips just have copies of the
generic one minus the latest fix, and the arm one was the inspiration
for said fix, but misses the bus_dma_mask handling.
Make all architectures use the generic version instead.
Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Michael Ellerman <[email protected]> (powerpc)
Reviewed-by: Nicolas Saenz Julienne <[email protected]>
|
|
These are pure cache maintenance routines, so drop the unused
struct device argument.
Signed-off-by: Christoph Hellwig <[email protected]>
Suggested-by: Daniel Vetter <[email protected]>
|
|
Match on ethertype and set up protocol dependency. Check for protocol
dependency before accessing the tci field. Also allow matching on the
encapsulated ethertype.
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Hardware offload support at this stage assumes an ethernet device in
place. The flow dissector provides the intermediate representation to
express this selector, so extend it to allow storing the interface
type. Flower does not use this, so skb_flow_dissect_meta() is not
extended to match on this new field.
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
This patch constifies the pointer to source register data that is passed
as an input parameter.
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Many PCI and other drivers perform snd_pcm_period_elapsed() simply in
their interrupt handlers, so the sync_stop operation is just a call to
synchronize_irq(). Instead of duplicating this call in each driver,
introduce the common card->sync_irq field. When this field is set,
PCM core performs synchronize_irq() for sync-stop operation. Each
driver just needs to copy its local IRQ number to card->sync_irq, and
that's all we need.
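In a driver, the change then boils down to something like this sketch
(the chip/driver names are hypothetical):

	/* in the driver's probe path */
	err = request_irq(pci->irq, my_interrupt_handler, IRQF_SHARED,
			  KBUILD_MODNAME, chip);
	if (err < 0)
		return err;
	chip->irq = pci->irq;
	card->sync_irq = chip->irq;	/* PCM core now syncs the IRQ on stop */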
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Takashi Iwai <[email protected]>
|
|
The standard programming model of a PCM sound driver is to process
snd_pcm_period_elapsed() from an interrupt handler. When a running
stream is stopped, PCM core calls the trigger-STOP PCM ops, sets the
stream state to SETUP, and moves on to the next step. This is
performed in an atomic manner -- this could be called from the interrupt
context, after all.
The problem is that, if the stream goes further and reaches the
CLOSE state immediately, the stream might still be processed in
snd_pcm_period_elapsed() in the interrupt context and hit a NULL
dereference. Such a crash happens because of the atomic operation,
and we can't wait until the stream-stop finishes.
To address this problem, this commit adds a new PCM op,
sync_stop. It gets called at the appropriate places that need to
synchronize with the stream-stop, i.e. at hw_params, prepare and hw_free.
Some drivers already have a similar mechanism implemented locally, and
we'll refactor the code later.
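A minimal sketch of a driver-side implementation (hypothetical names;
drivers whose handler is a plain IRQ handler can instead rely on the
common card->sync_irq field added in a follow-up patch):

	struct my_chip { int irq; };		/* hypothetical driver data */

	static int my_pcm_sync_stop(struct snd_pcm_substream *substream)
	{
		struct my_chip *chip = snd_pcm_substream_chip(substream);

		/* wait until any in-flight interrupt handler finishes */
		synchronize_irq(chip->irq);
		return 0;
	}

	static const struct snd_pcm_ops my_pcm_ops = {
		/* ... trigger, pointer, etc. ... */
		.sync_stop = my_pcm_sync_stop,
	};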
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Takashi Iwai <[email protected]>
|
|
It should be used only in the PCM core code locally.
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Takashi Iwai <[email protected]>
|
|
This patch adds support for automatically allocating and freeing PCM
buffers, the so-called "managed buffer allocation" mode.
It's set up via new PCM helpers, snd_pcm_set_managed_buffer() and
snd_pcm_set_managed_buffer_all(), both of which correspond to the
existing preallocator helpers, snd_pcm_lib_preallocate_pages() and
snd_pcm_lib_preallocate_pages_for_all(). When the new helper is used,
it not only performs the pre-allocation of buffers, but also makes
PCM core call snd_pcm_lib_malloc_pages() before the PCM hw_params
op and snd_pcm_lib_free_pages() after the PCM hw_free op,
respectively. This allows drivers to drop the explicit calls of the
memory allocation / release functions, and it will amount to a good
deal of code reduction by the end of this patch series.
When the PCM substream is set to the managed buffer allocation mode,
the managed_buffer_alloc flag is set in the substream object. Since
some drivers want to know when a buffer is newly allocated or
re-allocated at hw_params callback (e.g. want to set up the additional
stuff for the given buffer only at allocation time), PCM core now
turns on the buffer_changed flag when the buffer has changed.
The standard conversions to use the new API are straightforward (see
the sketch after this list):
- Replace snd_pcm_lib_preallocate*() calls with the corresponding
snd_pcm_set_managed_buffer*(); the arguments should be unchanged
- Drop superfluous snd_pcm_lib_malloc() and snd_pcm_lib_free() calls;
the check of snd_pcm_lib_malloc() returns should be replaced with
the check of runtime->buffer_changed flag.
- If hw_params or hw_free becomes empty, drop them from PCM ops
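A hypothetical before/after sketch of such a conversion:

	/* Before: explicit preallocation plus buffer calls in the ops */
	snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_DEV,
					      &pci->dev, 64 * 1024, 128 * 1024);

	static int my_hw_params(struct snd_pcm_substream *ss,
				struct snd_pcm_hw_params *hw_params)
	{
		return snd_pcm_lib_malloc_pages(ss, params_buffer_bytes(hw_params));
	}

	static int my_hw_free(struct snd_pcm_substream *ss)
	{
		return snd_pcm_lib_free_pages(ss);
	}

	/* After: PCM core manages the buffer; my_hw_params()/my_hw_free()
	 * can be dropped from the ops if this was all they did. */
	snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV,
				       &pci->dev, 64 * 1024, 128 * 1024);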
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Takashi Iwai <[email protected]>
|
|
Add SND_SOC_BYTES_E to accept getter and putter.
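A sketch of the intended usage, assuming the macro mirrors
SND_SOC_BYTES() with the getter and putter callbacks appended (control
and register names here are made up):

	static const struct snd_kcontrol_new my_controls[] = {
		/* name, register base, number of registers, get, put */
		SND_SOC_BYTES_E("DSP Coefficients", MY_COEF_BASE, MY_COEF_REGS,
				my_coef_get, my_coef_put),
	};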
Signed-off-by: Tzung-Bi Shih <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
|
|
This ASCII string can carry additional information about
soundcard components or configuration. Add the possibility
to set this string via the ASoC card.
Signed-off-by: Jaroslav Kysela <[email protected]>
Cc: Mark Brown <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
|
|
Extend devm_of_pci_get_host_bridge_resources() and
pci_parse_request_of_pci_ranges() helpers to also parse the inbound
addresses from DT 'dma-ranges' and populate a resource list with the
translated addresses. This will help ensure 'dma-ranges' is always
parsed in a consistent way.
Tested-by: Srinath Mannam <[email protected]>
Tested-by: Thomas Petazzoni <[email protected]> # for Aardvark
Signed-off-by: Rob Herring <[email protected]>
Signed-off-by: Lorenzo Pieralisi <[email protected]>
Reviewed-by: Srinath Mannam <[email protected]>
Reviewed-by: Andrew Murray <[email protected]>
Acked-by: Gustavo Pimentel <[email protected]>
Cc: Jingoo Han <[email protected]>
Cc: Gustavo Pimentel <[email protected]>
Cc: Lorenzo Pieralisi <[email protected]>
Cc: Bjorn Helgaas <[email protected]>
Cc: Thomas Petazzoni <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Linus Walleij <[email protected]>
Cc: Toan Le <[email protected]>
Cc: Ley Foon Tan <[email protected]>
Cc: Tom Joseph <[email protected]>
Cc: Ray Jui <[email protected]>
Cc: Scott Branden <[email protected]>
Cc: [email protected]
Cc: Ryder Lee <[email protected]>
Cc: Karthikeyan Mitran <[email protected]>
Cc: Hou Zhiqiang <[email protected]>
Cc: Simon Horman <[email protected]>
Cc: Shawn Lin <[email protected]>
Cc: Heiko Stuebner <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
|
|
This patch adds support for this VMD device which supports the bus
restriction mode.
Signed-off-by: Jon Derrick <[email protected]>
Signed-off-by: Lorenzo Pieralisi <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into irq/core
Pull irqchip updates from Marc Zyngier:
- Qualcomm PDC wakeup interrupt support
- Layerscape external IRQ support
- Broadcom bcm7038 PM and wakeup support
- Ingenic driver cleanup and modernization
- GICv3 ITS preparation for GICv4.1 updates
- GICv4 fixes
- Various cleanups
|
|
Add tap delay nodes for setting SDIO Tap Delays on ZynqMP platform.
Signed-off-by: Manish Narani <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
|
|
Modify cpuidle_use_deepest_state() to take an additional exit latency
limit argument to be passed to find_deepest_idle_state() and make
cpuidle_idle_call() pass dev->forced_idle_latency_limit_ns to it for
forced idle.
Suggested-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Daniel Lezcano <[email protected]>
[ rjw: Rebase and rearrange code, subject & changelog ]
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
In some cases it may be useful to specify an exit latency limit for
the idle state to be used during CPU idle time injection.
Instead of duplicating the information in struct cpuidle_device
or propagating the latency limit in the call stack, replace the
use_deepest_state field with forced_latency_limit_ns to represent
that limit, so that the deepest idle state with exit latency within
that limit is forced (i.e. no governors) when it is set.
A zero exit latency limit for forced idle means to use governors in
the usual way (analogous to use_deepest_state equal to "false" before
this change).
Additionally, add play_idle_precise() taking two arguments, the
duration of forced idle and the idle state exit latency limit, both
in nanoseconds, and redefine play_idle() as a wrapper around that
new function.
This change is preparatory, no functional impact is expected.
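The wrapper then presumably reduces to something like this sketch,
where an effectively unlimited latency bound restores the old
"deepest state" behavior:

	static inline void play_idle(unsigned long duration_us)
	{
		play_idle_precise(duration_us * NSEC_PER_USEC, U64_MAX);
	}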
Suggested-by: Rafael J. Wysocki <[email protected]>
Signed-off-by: Daniel Lezcano <[email protected]>
[ rjw: Subject, changelog, cpuidle_use_deepest_state() kerneldoc, whitespace ]
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
The mutex will be used in subsequent changes to replace the busy looping of
a waiter when the futex owner is currently executing the exit cleanup to
prevent a potential live lock.
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Ingo Molnar <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
Instead of relying on PF_EXITING use an explicit state for the futex exit
and set it in the futex exit function. This moves the smp barrier and the
lock/unlock serialization into the futex code.
As with the DEAD state this is restricted to the exit path as exec
continues to use the same task struct.
This allows us to simplify that logic in a later step.
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Ingo Molnar <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
To allow separate handling of the futex exit state in the futex exit code
for exit and exec, split futex_mm_release() into two functions and invoke
them from the corresponding exit/exec_mm_release() callsites.
Preparatory only, no functional change.
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Ingo Molnar <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
mm_release() contains the futex exit handling. mm_release() is called from
do_exit()->exit_mm() and from exec()->exec_mm().
In the exit_mm() case PF_EXITING and the futex state are updated. In the
exec_mm() case these states are not touched.
As the futex exit code needs further protections against exit races, this
needs to be split into two functions.
Preparatory only, no functional change.
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Ingo Molnar <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
The futex exit handling relies on PF_ flags. That's suboptimal as it
requires a smp_mb() and an ugly lock/unlock of the exiting task's pi_lock in
the middle of do_exit() to enforce the observability of PF_EXITING in the
futex code.
Add a futex_state member to task_struct and convert the PF_EXITPIDONE logic
over to the new state. The PF_EXITING dependency will be cleaned up in a
later step.
This prepares for handling various futex exit issues later.
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Ingo Molnar <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
The futex exit handling is #ifdeffed into mm_release() which is not pretty
to begin with. But upcoming changes to address futex exit races need to add
more functionality to this exit code.
Split it out into a function, move it into futex code and make the various
futex exit functions static.
Preparatory only and no functional change.
Folded build fix from Borislav.
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Ingo Molnar <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
The iSCSI target driver is the only target driver that does not wait for
ongoing commands to finish before freeing a session. Make the iSCSI target
driver wait for ongoing commands to finish before freeing a session. This
patch fixes the following KASAN complaint:
BUG: KASAN: use-after-free in __lock_acquire+0xb1a/0x2710
Read of size 8 at addr ffff8881154eca70 by task kworker/0:2/247
CPU: 0 PID: 247 Comm: kworker/0:2 Not tainted 5.4.0-rc1-dbg+ #6
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
Workqueue: target_completion target_complete_ok_work [target_core_mod]
Call Trace:
dump_stack+0x8a/0xd6
print_address_description.constprop.0+0x40/0x60
__kasan_report.cold+0x1b/0x33
kasan_report+0x16/0x20
__asan_load8+0x58/0x90
__lock_acquire+0xb1a/0x2710
lock_acquire+0xd3/0x200
_raw_spin_lock_irqsave+0x43/0x60
target_release_cmd_kref+0x162/0x7f0 [target_core_mod]
target_put_sess_cmd+0x2e/0x40 [target_core_mod]
lio_check_stop_free+0x12/0x20 [iscsi_target_mod]
transport_cmd_check_stop_to_fabric+0xd8/0xe0 [target_core_mod]
target_complete_ok_work+0x1b0/0x790 [target_core_mod]
process_one_work+0x549/0xa40
worker_thread+0x7a/0x5d0
kthread+0x1bc/0x210
ret_from_fork+0x24/0x30
Allocated by task 889:
save_stack+0x23/0x90
__kasan_kmalloc.constprop.0+0xcf/0xe0
kasan_slab_alloc+0x12/0x20
kmem_cache_alloc+0xf6/0x360
transport_alloc_session+0x29/0x80 [target_core_mod]
iscsi_target_login_thread+0xcd6/0x18f0 [iscsi_target_mod]
kthread+0x1bc/0x210
ret_from_fork+0x24/0x30
Freed by task 1025:
save_stack+0x23/0x90
__kasan_slab_free+0x13a/0x190
kasan_slab_free+0x12/0x20
kmem_cache_free+0x146/0x400
transport_free_session+0x179/0x2f0 [target_core_mod]
transport_deregister_session+0x130/0x180 [target_core_mod]
iscsit_close_session+0x12c/0x350 [iscsi_target_mod]
iscsit_logout_post_handler+0x136/0x380 [iscsi_target_mod]
iscsit_response_queue+0x8de/0xbe0 [iscsi_target_mod]
iscsi_target_tx_thread+0x27f/0x370 [iscsi_target_mod]
kthread+0x1bc/0x210
ret_from_fork+0x24/0x30
The buggy address belongs to the object at ffff8881154ec9c0
which belongs to the cache se_sess_cache of size 352
The buggy address is located 176 bytes inside of
352-byte region [ffff8881154ec9c0, ffff8881154ecb20)
The buggy address belongs to the page:
page:ffffea0004553b00 refcount:1 mapcount:0 mapping:ffff888101755400 index:0x0 compound_mapcount: 0
flags: 0x2fff000000010200(slab|head)
raw: 2fff000000010200 dead000000000100 dead000000000122 ffff888101755400
raw: 0000000000000000 0000000080130013 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
Memory state around the buggy address:
ffff8881154ec900: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ffff8881154ec980: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
>ffff8881154eca00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff8881154eca80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff8881154ecb00: fb fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
Cc: Mike Christie <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Roman Bolshakov <[email protected]>
Signed-off-by: Bart Van Assche <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
|
|
Bring back tls_sw_sendpage_locked. sk_msg redirection into a socket
with TLS_TX takes the following path:
tcp_bpf_sendmsg_redir
tcp_bpf_push_locked
tcp_bpf_push
kernel_sendpage_locked
sock->ops->sendpage_locked
Also update the flags test in tls_sw_sendpage_locked to allow flag
MSG_NO_SHARED_FRAGS. bpf_tcp_sendmsg sets this.
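The updated flags test is then roughly of this shape (a paraphrased
sketch; the internal helper name is assumed):

	int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
				   int offset, size_t size, int flags)
	{
		/* MSG_NO_SHARED_FRAGS is now accepted for the bpf path */
		if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
			      MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY |
			      MSG_NO_SHARED_FRAGS))
			return -EOPNOTSUPP;

		return tls_sw_do_sendpage(sk, page, offset, size, flags);
	}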
Link: https://lore.kernel.org/netdev/CA+FuTSdaAawmZ2N8nfDDKu3XLpXBbMtcCT0q4FntDD2gn8ASUw@mail.gmail.com/T/#t
Link: https://github.com/wdebruij/kerneltools/commits/icept.2
Fixes: 0608c69c9a80 ("bpf: sk_msg, sock{map|hash} redirect through ULP")
Fixes: f3de19af0f5b ("Revert \"net/tls: remove unused function tls_sw_sendpage_locked\"")
Signed-off-by: Willem de Bruijn <[email protected]>
Acked-by: John Fastabend <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
This patch uses rtd instead of pcm as the parameter to
snd_soc_pcm_component_new/free().
This prepares for a dai_link removal bug fix in topology.
Signed-off-by: Kuninori Morimoto <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
|
|
A 'struct device_type' instance can carry default attributes for the
device. Use this facility to remove the export of
nvdimm_bus_attribute_group and put the responsibility on the core rather
than leaf implementations to define this attribute.
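The pattern is roughly the following (a sketch; the concrete nvdimm
identifiers may differ):

	static const struct attribute_group *nvdimm_bus_attribute_groups[] = {
		&nvdimm_bus_attribute_group,
		NULL,
	};

	static const struct device_type nvdimm_bus_dev_type = {
		/* attributes listed here are created by the driver core
		 * for every device of this type */
		.groups = nvdimm_bus_attribute_groups,
	};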
Cc: Ira Weiny <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: "Oliver O'Halloran" <[email protected]>
Cc: Vishal Verma <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Reviewed-by: Aneesh Kumar K.V <[email protected]>
Link: https://lore.kernel.org/r/157309903815.1582359.6418211876315050283.stgit@dwillia2-desk3.amr.corp.intel.com
|
|
A 'struct device_type' instance can carry default attributes for the
device. Use this facility to remove the export of
nvdimm_attribute_group and put the responsibility on the core rather
than leaf implementations to define this attribute.
Cc: Ira Weiny <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: "Oliver O'Halloran" <[email protected]>
Cc: Vishal Verma <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Reviewed-by: Aneesh Kumar K.V <[email protected]>
Link: https://lore.kernel.org/r/157309903201.1582359.10966209746585062329.stgit@dwillia2-desk3.amr.corp.intel.com
|
|
A 'struct device_type' instance can carry default attributes for the
device. Use this facility to remove the export of
nd_mapping_attribute_group and put the responsibility on the core rather
than leaf implementations to define this attribute.
Cc: Ira Weiny <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: "Oliver O'Halloran" <[email protected]>
Cc: Vishal Verma <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Reviewed-by: Aneesh Kumar K.V <[email protected]>
Link: https://lore.kernel.org/r/157309902686.1582359.6749533709859492704.stgit@dwillia2-desk3.amr.corp.intel.com
|
|
A 'struct device_type' instance can carry default attributes for the
device. Use this facility to remove the export of
nd_region_attribute_group and put the responsibility on the core rather
than leaf implementations to define this attribute.
Cc: Ira Weiny <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: "Oliver O'Halloran" <[email protected]>
Cc: Vishal Verma <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Reviewed-by: Aneesh Kumar K.V <[email protected]>
Link: https://lore.kernel.org/r/157309902169.1582359.16828508538444551337.stgit@dwillia2-desk3.amr.corp.intel.com
|
|
A 'struct device_type' instance can carry default attributes for the
device. Use this facility to remove the export of
nd_numa_attribute_group and put the responsibility on the core rather
than leaf implementations to define this attribute.
Cc: Ira Weiny <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: "Oliver O'Halloran" <[email protected]>
Cc: Vishal Verma <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Reviewed-by: Aneesh Kumar K.V <[email protected]>
Link: https://lore.kernel.org/r/157401269537.43284.14411189404186877352.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <[email protected]>
|
|
The EC is in charge of controlling the keyboard backlight on
the Wilco platform. We expose a standard LED class device
named platform::kbd_backlight.
Since the EC will never change the backlight level of its own accord,
we don't need to implement a brightness_get() method.
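A condensed sketch of the registration; the EC access function is
hypothetical, only the LED class API calls are standard:

	static int kbbl_set_brightness(struct led_classdev *cdev,
				       enum led_brightness brightness)
	{
		/* hypothetical: forward the level to the EC mailbox */
		return 0;
	}

	static int kbbl_probe(struct platform_device *pdev)
	{
		struct led_classdev *cdev;

		cdev = devm_kzalloc(&pdev->dev, sizeof(*cdev), GFP_KERNEL);
		if (!cdev)
			return -ENOMEM;

		cdev->name = "platform::kbd_backlight";
		cdev->max_brightness = 100;
		cdev->brightness_set_blocking = kbbl_set_brightness;
		/* no .brightness_get: the EC never changes the level itself */

		return devm_led_classdev_register(&pdev->dev, cdev);
	}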
Signed-off-by: Nick Crews <[email protected]>
Signed-off-by: Daniel Campello <[email protected]>
Reviewed-by: Daniel Campello <[email protected]>
Signed-off-by: Enric Balletbo i Serra <[email protected]>
|
|
Add a device to control the charging algorithm used on Wilco devices,
which will be picked up by the drivers/power/supply/wilco-charger.c
driver. See Documentation/ABI/testing/sysfs-class-power-wilco for the
userspace interface and other info.
Signed-off-by: Nick Crews <[email protected]>
Signed-off-by: Enric Balletbo i Serra <[email protected]>
|
|
Commit 99e98d3fb100 ("cpuidle: Consolidate disabled state checks")
overlooked the fact that the imx6q and tegra20 cpuidle drivers use
the "disabled" field in struct cpuidle_state for quirks which trigger
after the initialization of cpuidle, so reading the initial value of
that field is not sufficient for those drivers.
In order to allow them to implement the quirks without using the
"disabled" field in struct cpuidle_state, introduce a new helper
function and modify them to use it.
Fixes: 99e98d3fb100 ("cpuidle: Consolidate disabled state checks")
Reported-by: Len Brown <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
The MM tracepoint for page free (called kmem:mm_page_free) doesn't provide
the page pointer directly; instead it provides the PFN (Page Frame Number).
This is annoying when writing a page_pool leak detector in BPF.
This patch changes the page_pool tracepoints to also provide the PFN.
The page pointer is still provided to allow other kinds of
troubleshooting from BPF.
Signed-off-by: Jesper Dangaard Brouer <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
When Jonathan changed the page_pool to become responsible for its
own shutdown via a deferred work queue, the disconnect_cnt
counter was removed from the xdp memory model tracepoint.
This patch changes the page_pool_inflight tracepoint name to
page_pool_release, because it reflects the new responsibility
better. It also reintroduces a counter that reflects the number of
times page_pool_release has been tried.
The counter is also used by the code to only empty the alloc
cache once. With a stuck work queue running every second and the
counter being 64-bit, it will overrun in approx 584 billion
years. For comparison, Earth lifetime expectancy is 7.5 billion
years, before the Sun will engulf, and destroy, the Earth.
Signed-off-by: Jesper Dangaard Brouer <[email protected]>
Acked-by: Toke Høiland-Jørgensen <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Add core phylib helpers for supporting SFP sockets on PHYs. This provides
a mechanism to inform the SFP layer about PHY up/down events, and to
unregister the SFP bus when the PHY is going away.
Signed-off-by: Russell King <[email protected]>
Reviewed-by: Andrew Lunn <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Pablo Neira Ayuso says:
====================
Netfilter updates for net-next
The following patchset contains Netfilter updates for net-next:
1) Wildcard support for the net,iface set from Kristian Evensen.
2) Offload support for matching on the input interface.
3) Simplify matching on vlan header fields.
4) Add nft_payload_rebuild_vlan_hdr() function to rebuild the vlan
header from the vlan sk_buff metadata.
5) Pass extack to nft_flow_cls_offload_setup().
6) Add C-VLAN matching support.
7) Use time64_t in xt_time to fix y2038 overflow, from Arnd Bergmann.
8) Use time_t in nft_meta to fix y2038 overflow, also from Arnd.
9) Add flow_action_entry_next() helper function to flowtable offload
infrastructure.
10) Add IPv6 support to the flowtable offload infrastructure.
11) Support for input interface matching from postrouting,
from Phil Sutter.
12) Missing check for ndo callback in flowtable offload, from wenxu.
13) Remove conntrack parameter from flow_offload_fill_dir(), from wenxu.
14) Do not pass flow_rule object for rule removal, cookie is sufficient
to achieve this.
15) Release flow_rule object in case of error from the offload commit
path.
16) Undo offload ruleset updates if transaction fails.
17) Check for error when binding flowtable callbacks, from wenxu.
18) Always unbind flowtable callbacks when unregistering hooks.
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
While checking the results of the :c:func: removal, I noticed that there
was no documentation for request_irq(), and request_threaded_irq() was not
mentioned at all. Add a kerneldoc comment for request_irq() and add
request_threaded_irq() to the list of functions.
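For reference, the canonical usage that the new kerneldoc describes
looks like this (device specifics are made up):

	static irqreturn_t my_irq_handler(int irq, void *dev_id)
	{
		/* dev_id is the cookie passed to request_irq(); a shared
		 * handler must check whether its device really raised the
		 * interrupt and return IRQ_NONE if not. */
		return IRQ_HANDLED;
	}

	/* in probe(), with a per-device cookie 'my_dev' */
	ret = request_irq(irq, my_irq_handler, IRQF_SHARED, "my_device", my_dev);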
Reviewed-by: Thomas Gleixner <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>
|
|
Change to use SPI_CS_HIGH to support SPI CS polarity setting
for chips that support enhance_timing.
Signed-off-by: Luhua Xu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
|
|
The type name is misleading, a single entry is named 'cache' while this
normally means a collection of objects. Rename that everywhere. Also the
identifier was quite long, making function prototypes harder to format.
Suggested-by: Nikolay Borisov <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
|
|
The new raid1c3 and raid1c4 profiles are backward incompatible and the
name shall be 'raid1c34'. The status can be found in the global
supported features in /sys/fs/btrfs/features or in the per-filesystem
directory.
Signed-off-by: David Sterba <[email protected]>
|
|
Add new block group profile to store 4 copies in a similar way that
current RAID1 does. The profile attributes and constraints are defined
in the raid table and used by the same code that already handles the 2-
and 3-copy RAID1.
The minimum number of devices is 4, the maximum number of devices/chunks
that can be lost/damaged is 3. There is no comparable traditional RAID
level, the profile is added for future needs to accompany triple-parity
and beyond.
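The table entry is roughly of this shape (a paraphrased sketch; the
constraint values follow directly from the text above):

	[BTRFS_RAID_RAID1C4] = {
		.sub_stripes		= 1,
		.dev_stripes		= 1,
		.devs_max		= 4,
		.devs_min		= 4,	/* minimum number of devices */
		.tolerated_failures	= 3,	/* devices that can be lost */
		.devs_increment		= 4,
		.ncopies		= 4,	/* four copies of the data */
		.nparity		= 0,
		.raid_name		= "raid1c4",
		.bg_flag		= BTRFS_BLOCK_GROUP_RAID1C4,
	},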
Signed-off-by: David Sterba <[email protected]>
|
|
Add new block group profile to store 3 copies in a similar way that
current RAID1 does. The profile attributes and constraints are defined
in the raid table and used by the same code that already handles the
2-copy RAID1.
The minimum number of devices is 3, the maximum number of devices/chunks
that can be lost/damaged is 2. Like RAID6 but with 33% space
utilization.
Signed-off-by: David Sterba <[email protected]>
|
|
The on-disk format of block group item makes use of the key that stores
the offset and length. This is further used in the code, although this
makes things harder to understand. The key is also packed so the
offset/length is not properly aligned as u64.
Add start (key.objectid) and length (key.offset) members to block group
and remove the embedded key. When the item is searched or written, a
local variable for key is used.
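Schematically (sketch):

	struct btrfs_block_group_cache {
		/* ... */
		u64 start;	/* was key.objectid */
		u64 length;	/* was key.offset */
		/* the embedded struct btrfs_key is gone */
	};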
Reviewed-by: Johannes Thumshirn <[email protected]>
Reviewed-by: Nikolay Borisov <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
|
|
For unknown reasons, the member 'used' in the block group struct is
stored in the b-tree item and accessed everywhere using the special
accessor helper. Let's unify it and make it a regular member and only
update the item before writing it to the tree.
The item is still being used for flags and chunk_objectid, there's some
duplication until the item is removed in following patches.
Reviewed-by: Johannes Thumshirn <[email protected]>
Reviewed-by: Nikolay Borisov <[email protected]>
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
|