| field | value | date |
|---|---|---|
| author | Sean Christopherson <[email protected]> | 2020-02-18 13:07:31 -0800 |
| committer | Paolo Bonzini <[email protected]> | 2020-03-16 17:57:26 +0100 |
| commit | 0577d1abe704c315bb5cdfc71f4ca7b9b5358f59 (patch) | |
| tree | 8040b969d89f86242bc5699dd4ecc1534f47d795 /tools/perf/scripts/python/syscall-counts-by-pid.py | |
| parent | 2a49f61dfcdc25ec06b41f7466ccb94a7a9d2624 (diff) | |

KVM: Terminate memslot walks via used_slots
Refactor memslot handling to treat the number of used slots as the de
facto size of the memslot array, e.g. return NULL from id_to_memslot()
when an invalid index is provided instead of relying on npages==0 to
detect an invalid memslot. Rework the sorting and walking of memslots
in advance of dynamically sizing memslots to aid bisection and debug,
e.g. with luck, a bug in the refactoring will bisect here and/or hit a
WARN instead of randomly corrupting memory.
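As a rough illustration of that refactor, the sketch below uses hypothetical simplified types (sketch_memslots, sketch_memslot, sketch_id_to_memslot — not the kernel's real structures) to show used_slots acting as the de facto array size, with an id lookup returning NULL for an invalid id instead of relying on npages == 0:

```c
/*
 * Minimal sketch with hypothetical simplified types: used_slots is the
 * de facto size of memslots[], and lookups by id return NULL for an
 * unused or out-of-range id instead of handing back a slot whose
 * npages happens to be zero.
 */
#include <stdio.h>

#define SKETCH_MAX_SLOTS 32			/* arbitrary bound for the example */

struct sketch_memslot {
	unsigned long long base_gfn;
	unsigned long npages;
	int id;
};

struct sketch_memslots {
	int used_slots;				/* de facto size of memslots[] */
	int id_to_index[SKETCH_MAX_SLOTS];	/* -1 => id not in use */
	struct sketch_memslot memslots[SKETCH_MAX_SLOTS];
};

static struct sketch_memslot *
sketch_id_to_memslot(struct sketch_memslots *slots, int id)
{
	int index;

	if (id < 0 || id >= SKETCH_MAX_SLOTS)
		return NULL;

	index = slots->id_to_index[id];
	if (index < 0 || index >= slots->used_slots)
		return NULL;			/* invalid id => NULL, not npages == 0 */

	return &slots->memslots[index];
}

int main(void)
{
	struct sketch_memslots slots = { .used_slots = 0 };

	for (int i = 0; i < SKETCH_MAX_SLOTS; i++)
		slots.id_to_index[i] = -1;

	/* Callers now check the returned pointer explicitly. */
	if (!sketch_id_to_memslot(&slots, 5))
		printf("id 5 does not map to a valid memslot\n");

	return 0;
}
```

The explicit NULL check in the caller is the trade-off the next paragraph weighs against returning a global null/invalid slot.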
Alternatively, a global null/invalid memslot could be returned, i.e. so
callers of id_to_memslot() don't have to explicitly check for a NULL
memslot, but that approach runs the risk of introducing difficult-to-
debug issues, e.g. if the global null slot is modified. Constifying
the return from id_to_memslot() to combat such issues is possible, but
would require a massive refactoring of arch specific code and would
still be susceptible to casting shenanigans.

Add function comments to update_memslots() and search_memslots() to
explicitly (and loudly) state how memslots are sorted.
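A rough, self-contained sketch of the kind of invariant such comments describe — assuming, for this example only, an array kept sorted by decreasing base_gfn and walked no further than used_slots — might look like the following (hypothetical sketch_* types, not the kernel's search_memslots()):

```c
/*
 * Sketch of a gfn lookup over an array sorted by decreasing base_gfn,
 * with used_slots bounding the binary search.  Simplified hypothetical
 * types; not the kernel's actual code.
 */
#include <stddef.h>

typedef unsigned long long gfn_t;	/* guest frame number, assumed width */

struct sketch_memslot {
	gfn_t base_gfn;
	unsigned long npages;
};

struct sketch_memslots {
	int used_slots;				/* number of valid entries */
	struct sketch_memslot memslots[32];	/* sorted by decreasing base_gfn */
};

static struct sketch_memslot *
sketch_search_memslots(struct sketch_memslots *slots, gfn_t gfn)
{
	struct sketch_memslot *memslots = slots->memslots;
	int start = 0, end = slots->used_slots;
	int slot;

	if (!slots->used_slots)
		return NULL;		/* empty array terminates the walk at once */

	/* Binary search; entries are sorted by decreasing base_gfn. */
	while (start < end) {
		slot = start + (end - start) / 2;

		if (gfn >= memslots[slot].base_gfn)
			end = slot;
		else
			start = slot + 1;
	}

	if (start < slots->used_slots &&
	    gfn >= memslots[start].base_gfn &&
	    gfn < memslots[start].base_gfn + memslots[start].npages)
		return &memslots[start];

	return NULL;
}

int main(void)
{
	struct sketch_memslots slots = {
		.used_slots = 2,
		/* decreasing base_gfn: 0x200 sorts before 0x100 */
		.memslots = { { 0x200, 0x10 }, { 0x100, 0x10 } },
	};

	return sketch_search_memslots(&slots, 0x105) ? 0 : 1;
}
```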
Opportunistically stuff @hva with a non-canonical value when deleting a
private memslot on x86 to detect bogus usage of the freed slot.
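Purely for illustration, a toy sketch of the non-canonical poisoning idea follows; the specific poison value and the surrounding code are assumptions for this example, not the kernel's actual code:

```c
/*
 * Toy sketch: poison a deleted slot's hva with a non-canonical x86-64
 * address so any stale use of the freed slot faults loudly.  The poison
 * value below is an assumption for the example.
 */
#include <stdio.h>

int main(void)
{
	unsigned long long hva;
	int deleting = 1;	/* pretend the private memslot is being deleted */

	if (deleting) {
		/*
		 * With 48-bit virtual addresses, bits 63:48 of a canonical
		 * address must equal bit 47; 0xdead << 48 violates that, so
		 * any dereference of the stale hva faults instead of quietly
		 * touching freed memory.
		 */
		hva = 0xdeadULL << 48;
	} else {
		hva = 0;	/* would be the real userspace mapping otherwise */
	}

	printf("hva = %#llx (non-canonical poison: %s)\n", hva,
	       deleting ? "yes" : "no");
	return 0;
}
```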
No functional change intended.

Tested-by: Christoffer Dall <[email protected]>
Tested-by: Marc Zyngier <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
Diffstat (limited to 'tools/perf/scripts/python/syscall-counts-by-pid.py')
0 files changed, 0 insertions, 0 deletions