| author | Paolo Abeni <pabeni@redhat.com> | 2024-07-02 15:00:14 +0200 |
|---|---|---|
| committer | Paolo Abeni <pabeni@redhat.com> | 2024-07-02 15:00:14 +0200 |
| commit | e27d7168f0c8c024344e9541513aa71d921402a5 | |
| tree | 8a9a45d244c77d07157d08d3aabc32f144e40af9 /tools/perf/scripts/python/task-analyzer.py | |
| parent | e2dd0d0593c17f32c7263e9d6f7554ecaabb0baf | |
| parent | 40eca00ae605d77b6d784824a6ce54c5b42dfce6 | |
Merge branch 'page_pool-bnxt_en-unlink-old-page-pool-in-queue-api-using-helper'
David Wei says:
====================
page_pool: bnxt_en: unlink old page pool in queue api using helper
Commit 56ef27e3 unexported page_pool_unlink_napi() and renamed it to
page_pool_disable_direct_recycling(), because there was no in-tree user
of page_pool_unlink_napi().
Since then, the Rx queue API and an implementation of it in bnxt have
been merged. The bnxt implementation broadly follows these steps:
allocate new queue memory + page pool, stop the old rx queue, swap,
then destroy the old queue memory + page pool.
The existing NAPI instance is reused, so destroying the old page pool,
which is no longer used but is still linked to that shared NAPI
instance, triggers warnings.
My initial patches unlinked the page pool from the NAPI instance
directly. This series instead exports page_pool_disable_direct_recycling()
and calls that, so the driver does not have to touch a core struct
(see the sketch below).
====================
Link: https://patch.msgid.link/20240627030200.3647145-1-dw@davidwei.uk
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
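
For illustration, here is a minimal C sketch of the restart sequence the cover letter describes. It is not the actual bnxt code: every `my_*` identifier is invented, and only page_pool_disable_direct_recycling() and page_pool_destroy() are the real page_pool calls this series relies on.

```c
/* Minimal sketch of the queue restart sequence, not the bnxt code.
 * All my_* names are hypothetical; page_pool_disable_direct_recycling()
 * and page_pool_destroy() are the real page_pool API.
 */
#include <net/page_pool/types.h>

struct my_rxq {
	struct page_pool *pp;	/* per-queue page pool (hypothetical layout) */
	/* ring memory, descriptors, ... */
};

/* Hypothetical helpers standing in for the driver's own queue handling. */
static void my_queue_stop(struct my_rxq *q)
{
	/* quiesce DMA and disable the ring; the NAPI instance stays alive */
}

static void my_queue_swap(struct my_rxq *oldq, struct my_rxq *newq)
{
	/* repoint the hardware and the shared NAPI instance at newq */
}

static int my_queue_restart(struct my_rxq *oldq, struct my_rxq *newq)
{
	/* 1. newq's queue memory + page pool are assumed already allocated */

	/* 2. stop the old rx queue */
	my_queue_stop(oldq);

	/* 3. swap: hardware and the shared NAPI now use the new queue */
	my_queue_swap(oldq, newq);

	/* Unlink the old pool from the still-live NAPI instance first,
	 * so the destroy below does not warn about a pool that is still
	 * set up for direct (NAPI) recycling.
	 */
	page_pool_disable_direct_recycling(oldq->pp);

	/* 4. destroy old queue memory + page pool */
	page_pool_destroy(oldq->pp);
	return 0;
}
```

The ordering is the point: page_pool_disable_direct_recycling() severs the old pool's link to the reused NAPI instance before page_pool_destroy() runs, which avoids exactly the warning case described above.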