|
x86 systems identify traffic using the CLOSID and RMID. The CLOSID is
used to look up the control policy; the RMID is used for monitoring. For
x86 these are independent numbers.
Arm's MPAM has equivalent features PARTID and PMG, where the PARTID is
used to look up the control policy. The PMG, in contrast, is a small
number of bits that are used to subdivide PARTID when monitoring. The
cache-occupancy monitors require the PARTID to be specified when
monitoring.
This means MPAM's PMG field is not unique. There are multiple PMG-0, one
per allocated CLOSID/PARTID. If PMG is treated as equivalent to RMID, it
cannot be allocated as an independent number. Bitmaps like rmid_busy_llc
need to be sized by the number of unique entries for this resource.
Treat the combined CLOSID and RMID as an index, and provide architecture
helpers to pack and unpack an index. This makes the MPAM values unique.
The domain's rmid_busy_llc and rmid_ptrs[] are then sized by index, as
are domain mbm_local[] and mbm_total[].
x86 can ignore the CLOSID field when packing and unpacking an index, and
report as many indexes as RMID.
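A minimal sketch of such helpers, modelled on the x86 behaviour described
above (the resctrl_arch_rmid_idx_*() names follow this series; treat the
exact signatures as an assumption):

  /* x86: CLOSID and RMID are independent, so the index is just the RMID */
  static inline u32 resctrl_arch_rmid_idx_encode(u32 closid, u32 rmid)
  {
          return rmid;
  }

  static inline void resctrl_arch_rmid_idx_decode(u32 idx, u32 *closid, u32 *rmid)
  {
          *rmid = idx;
          *closid = ~0;   /* x86 encodes no CLOSID in the index */
  }

  /*
   * An MPAM-style encoding would instead be roughly:
   *      idx = (closid << pmg_bits) | pmg;
   * which makes each CLOSID/PMG pair a unique index.
   */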
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Shaopeng Tan <[email protected]>
Reviewed-by: Babu Moger <[email protected]>
Reviewed-by: Reinette Chatre <[email protected]>
Tested-by: Shaopeng Tan <[email protected]>
Tested-by: Peter Newman <[email protected]>
Tested-by: Babu Moger <[email protected]>
Tested-by: Carl Worth <[email protected]> # arm64
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
|
|
x86's RMIDs are independent of the CLOSID. An RMID can be allocated,
used and freed without considering the CLOSID.
MPAM's equivalent feature is PMG, which is not an independent number;
it extends the CLOSID/PARTID space. For MPAM, only PMG-bits worth of
'RMID' can be allocated for a single CLOSID.
i.e. if there is 1 bit of PMG space, then each CLOSID can have two
monitor groups.
To allow resctrl to disambiguate RMID values for different CLOSIDs,
everything in resctrl that keeps an RMID value needs to know the CLOSID
too. This will always be ignored on x86.
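As a sketch, everything that used to carry a bare RMID now carries the
pair (field layout here is illustrative, not the exact patch):

  struct rmid_entry {
          u32                     closid; /* ignored on x86, required for MPAM */
          u32                     rmid;
          struct list_head        list;
  };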
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Shaopeng Tan <[email protected]>
Reviewed-by: Xin Hao <[email protected]>
Reviewed-by: Reinette Chatre <[email protected]>
Tested-by: Shaopeng Tan <[email protected]>
Tested-by: Peter Newman <[email protected]>
Tested-by: Babu Moger <[email protected]>
Tested-by: Carl Worth <[email protected]> # arm64
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
|
|
Now that __num_cores_per_package and __num_threads_per_package are
available, cpuinfo::x86_max_cores and the related math all over the place
can be replaced with the ready-to-consume data.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Move is_vsyscall_vaddr() into asm/vsyscall.h to make it available for
copy_from_kernel_nofault_allowed() in arch/x86/mm/maccess.c.
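For reference, the helper being moved is a one-liner of this shape:

  static inline bool is_vsyscall_vaddr(unsigned long vaddr)
  {
          return unlikely((vaddr & PAGE_MASK) == VSYSCALL_ADDR);
  }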
Reviewed-by: Sohil Mehta <[email protected]>
Signed-off-by: Hou Tao <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
Expose properly accounted information and accessors so the fiddling with
other topology variables can be replaced.
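The accessors are thin readers of the newly accounted values, roughly:

  static inline unsigned int topology_num_cores_per_package(void)
  {
          return __num_cores_per_package;
  }

  static inline unsigned int topology_num_threads_per_package(void)
  {
          return __num_threads_per_package;
  }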
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The plural of die is dies.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
It's really a non-intuitive name. Rename it to __max_threads_per_core,
which is obvious.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Replace the logical package and die management functionality and retrieve
the logical IDs from the topology bitmaps.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
With the topology bitmaps in place, the logical package and die IDs can
trivially be retrieved by determining the bitmap weight of the relevant
topology domain level up to and including the physical ID in question.
Provide a function to that effect.
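The idea in code form (a sketch; the function name is illustrative):

  /* Logical ID = number of set bits up to and including the physical ID,
   * minus one to make it zero based. */
  static int topo_lookup_logical_id(const unsigned long *domain_map, u32 apicid)
  {
          if (!test_bit(apicid, domain_map))
                  return -ENODEV;
          return bitmap_weight(domain_map, apicid + 1) - 1;
  }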
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Now that all possible APIC IDs are tracked in the topology bitmaps, it's
trivial to retrieve the real information from there.
This gets rid of the guesstimates for the maximal packages and dies per
package as the actual numbers can be determined before a single AP has been
brought up.
The number of SMT threads can now be determined correctly from the bitmaps
in all situations. Up to now, a system which has SMT disabled in the BIOS
still claims that it is SMT capable, because the lowest APIC ID bit is
reserved for that and CPUID leaf 0xb/0x1f still enumerates the SMT domain
accordingly. By calculating the bitmap weights of the SMT and the CORE
domains and setting them in relation, the SMT-disabled-in-BIOS case
correctly reports that the system is not SMT capable.
It also handles the situation correctly when a hybrid system's boot CPU
does not have SMT, as it takes the SMT capability of the APs fully into
account.
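Schematically (the map and domain names are assumptions based on this
series' topology bitmaps):

  /*
   * More entries at SMT granularity than at CORE granularity means at
   * least one core registered multiple threads: the system is SMT capable.
   */
  bool smt_capable =
          bitmap_weight(apic_maps[TOPO_SMT_DOMAIN].map, MAX_LOCAL_APIC) >
          bitmap_weight(apic_maps[TOPO_CORE_DOMAIN].map, MAX_LOCAL_APIC);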
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Managing possible CPUs is an unreadable and incomprehensible maze. Aside
from that, it's backwards because it applies the command line limits after
registering all APICs.
Rewrite it so that it:
- Applies the command line limits upfront so that only the allowed number
of APIC IDs can be registered.
- Applies any late restrictions in an understandable way.
- Uses simple min_t() calculations which are trivial to follow.
- Provides a separate function for resetting to UP mode late in the
bringup process.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Now that all external fiddling with num_processors and disabled_cpus is
gone, move the last user, prefill_possible_map(), into the topology code
too and remove the global visibility of these variables.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Aside from switching over to the new interface, record the number of
registered CPUs locally, which allows num_processors and disabled_cpus to
be confined to the topology code.
No functional change intended.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
generic_processor_info(), aside from being a complete misnomer, is used
for both early boot registration and ACPI CPU hotplug.
While it's arguable that these can share some code, the result is code
which is hard to understand and kept around post-init for no real reason.
Also, the call sites do lots of manual fiddling with topology-related
variables instead of having proper interfaces for the purpose which handle
the topology internals correctly.
Provide topology_register_apic(), topology_hotplug_apic() and
topology_hotunplug_apic() which have the extra magic of the call sites
incorporated and for now are wrappers around generic_processor_info().
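In prototype form (signatures as introduced here; the hotplug variant's
return type is best checked against the tree):

  void __init topology_register_apic(u32 apic_id, u32 acpi_id, bool present);
  int topology_hotplug_apic(u32 apic_id, u32 acpi_id);
  void topology_hotunplug_apic(unsigned int cpu);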
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The APIC/CPU registration sits in the middle of the APIC code. In fact this
is a topology evaluation function and has nothing to do with the inner
workings of the local APIC.
Move it out into a file which reflects what this is about.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The ACPI ID for CPUs is preset to U32_MAX, which is completely
non-obvious. Use a proper define for it.
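i.e. something along the lines of:

  #define CPU_ACPIID_INVALID      U32_MAX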
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Paranoia is not wrong, but having an APIC callback which in most
implementations is a complete NOOP, and in one actually checks whether the
APICID of an upcoming CPU has been registered, is over the top: the same
APICID was just used to bring the CPU out of wait for startup.
That's paranoia for paranoia's sake. Remove the voodoo.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
There is absolutely no point in writing the APIC ID, which was read from
the local APIC earlier, back into the local APIC for the 64-bit UP case.
Remove that along with the apic callback which is solely there for this
pointless exercise.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
physid_t is a wrapper around a bitmap. Just remove the onion layer and use
the bitmap functionality directly.
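As an illustration of the onion-layer removal (map name follows the
existing phys_cpu_present_map):

  /* Before: a physid wrapper around set_bit() */
  physid_set(apicid, phys_cpu_present_map);

  /* After: a plain bitmap, operated on directly */
  DECLARE_BITMAP(phys_cpu_present_map, MAX_LOCAL_APIC);
  set_bit(apicid, phys_cpu_present_map);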
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
There is no reason to have the early mptable evaluation conditionally
invoked only from the AMD NUMA topology code.
Make it explicit and invoke it from setup_arch() right after the
corresponding ACPI init call. Remove the pointless wrapper and invoke
x86_init::mpparse::early_parse_smp_config() directly.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Now that all platforms have the new split SMP configuration callbacks set
up, flip the switch and remove the old callback pointer and mop up the
platform code.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
x86_dtb_init() is a misnomer, and it really should be used as an SMP
configuration parser which is selected by the platform via
x86_init::mpparse::parse_smp_config().
Rename it to x86_dtb_parse_smp_config() in preparation for that.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
In preparation for splitting the get_smp_config() callback, rename
default_get_smp_config() to mpparse_get_smp_config() and provide an early
and late wrapper.
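The wrappers are trivial; schematically (wrapper names are an assumption
based on the rename described above):

  void __init mpparse_parse_early_smp_config(void)
  {
          mpparse_get_smp_config(true);
  }

  void __init mpparse_parse_smp_config(void)
  {
          mpparse_get_smp_config(false);
  }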
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The early argument of x86_init::mpparse::get_smp_config() is more than
confusing. Provide two callbacks, one for each purpose.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
MPTABLE is no longer the default SMP configuration mechanism. Rename it to
mpparse_find_mptable() because that's what it does.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
No more users.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Yet another set_bit() operation wrapped in ORing a mask.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
There is no point in doing that. The ATOMs have an XAPIC for which this
function is a pointless exercise.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Detect all possible combinations of mismatch right in the CPUID evaluation
code.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The package shift has already been evaluated by the early CPU init.
Put the mindless copy right next to the original leaf 0xb parser.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Juergen Gross <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Zhang Rui <[email protected]>
Tested-by: Wang Wendy <[email protected]>
Tested-by: K Prateek Nayak <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Now that the core code does not use this monstrosity anymore, it's time to
put it to rest.
The only real purpose was to read the APIC ID on UV and VSMP systems for
the actual evaluation. That's what the core code does now.
For doing the actual shift operation there is truly no APIC callback
required.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Juergen Gross <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Zhang Rui <[email protected]>
Tested-by: Wang Wendy <[email protected]>
Tested-by: K Prateek Nayak <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
No more users.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Juergen Gross <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Zhang Rui <[email protected]>
Tested-by: Wang Wendy <[email protected]>
Tested-by: K Prateek Nayak <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Now that everything is converted, switch it over and remove the
intermediate operation.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Juergen Gross <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Zhang Rui <[email protected]>
Tested-by: Wang Wendy <[email protected]>
Tested-by: K Prateek Nayak <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
cpuinfo::topo::x86_coreid_bits is about to be phased out. Use the core
domain size from the topology information.
Add a comment explaining why the early MPTABLE parsing is required, and
decrapify the loop which sets up the APIC ID to node map.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Juergen Gross <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Zhang Rui <[email protected]>
Tested-by: Wang Wendy <[email protected]>
Tested-by: K Prateek Nayak <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Switch it over to the new topology evaluation mechanism and remove the
random bits and pieces which are sprinkled all over the place.
No functional change intended.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Juergen Gross <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Zhang Rui <[email protected]>
Tested-by: Wang Wendy <[email protected]>
Tested-by: K Prateek Nayak <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
AMD/HYGON uses various methods for topology evaluation:
- Leaf 0x80000008 and 0x8000001e based with an optional leaf 0xb,
which is the preferred variant for modern CPUs.
Leaf 0xb will be superseded by leaf 0x80000026 soon, which is just
another variant of the Intel 0x1f leaf for whatever reasons.
- Subleaf 0x80000008 and NODEID_MSR based
- Legacy fallback
That code follows the principle of random bits and pieces all over the
place, which results in multiple evaluations and impenetrable code flows,
in the same way the Intel parsing did.
Provide a sane implementation by clearly separating the three variants and
bringing them in the proper preference order in one place.
This provides the parsing for both AMD and HYGON because there is no point
in having a separate HYGON parser which only differs by 3 lines of
code. Any further divergence between AMD and HYGON can be handled in
different functions, while still sharing the existing parsers.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Juergen Gross <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Zhang Rui <[email protected]>
Tested-by: Wang Wendy <[email protected]>
Tested-by: K Prateek Nayak <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
AMD (ab)uses topology_die_id() to store the Node ID information and
topology_max_dies_per_pkg to store the number of nodes per package.
This collides with the proper processor die level enumeration which is
coming on AMD with CPUID 8000_0026, unless there is a correlation between
the two. There is zero documentation about that.
So provide new storage and new accessors which, for now, still access
die_id and topology_max_die_per_pkg(). This will be mopped up after AMD
and HYGON are converted over.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Juergen Gross <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Zhang Rui <[email protected]>
Tested-by: Wang Wendy <[email protected]>
Tested-by: K Prateek Nayak <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Topology evaluation is a complete disaster and impenetrable mess. It's
scattered all over the place with some vendor implementations doing early
evaluation and some not. The most horrific part is the permanent
overwriting of smp_num_siblings and __max_die_per_package, instead of
establishing them once on the boot CPU and validating the result on the
APs.
The goals are:
- One topology evaluation entry point
- Proper sharing of pointlessly duplicated code
- Proper structuring of the evaluation logic and preferences.
- Evaluating important system wide information only once on the boot CPU
- Making the 0xb/0x1f leaf parsing less convoluted and actually fixing
the shortcomings of the leaf 0x1f evaluation.
Start to consolidate the topology evaluation code by providing the entry
points for the early boot CPU evaluation and for the final parsing on the
boot CPU and the APs.
Move the trivial pieces into that new code:
- The initialization of cpuinfo_x86::topo
- The evaluation of CPUID leaf 1, which presets topo::initial_apicid
- topo_apicid is set to topo::initial_apicid when invoked from early
boot. When invoked for the final evaluation on the boot CPU it reads
the actual APIC ID, which makes apic_get_initial_apicid() obsolete
once everything is converted over.
Provide a temporary helper function topo_converted() which shields off the
not yet converted CPU vendors from invoking code which would break them.
This shielding covers all vendor CPUs which support SMP, but not the
historical pure UP ones as they only need the topology info init and
eventually the initial APIC initialization.
Provide two new members in cpuinfo_x86::topo to store the maximum number of
SMT siblings and the number of dies per package and add them to the debugfs
readout. These two members will be used to populate this information on the
boot CPU and to validate the APs against it.
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Juergen Gross <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: Zhang Rui <[email protected]>
Tested-by: Wang Wendy <[email protected]>
Tested-by: K Prateek Nayak <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Provide a few helper functions to read CPUID leafs or individual registers
into a data structure without requiring unions.
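Usage then looks roughly like this (struct layout and the
cpuid_leaf()/cpuid_subleaf() names are a sketch of the idea):

  struct leaf_regs {
          u32 eax, ebx, ecx, edx;
  };

  static void read_topology_leafs(void)
  {
          struct leaf_regs regs;

          cpuid_leaf(0xb, &regs);         /* whole leaf, no unions needed */
          cpuid_subleaf(0x1f, 1, &regs);  /* leaf plus subleaf index */
  }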
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Juergen Gross <[email protected]>
Tested-by: Sohil Mehta <[email protected]>
Tested-by: Michael Kelley <[email protected]>
Tested-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: Zhang Rui <[email protected]>
Tested-by: Wang Wendy <[email protected]>
Tested-by: K Prateek Nayak <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/878r3mg570.ffs@tglx
|
|
dependent patches
Merge in pending alternatives patching infrastructure changes, before
applying more patches.
Signed-off-by: Ingo Molnar <[email protected]>
|
|
branch
Conflicts:
arch/x86/include/asm/percpu.h
arch/x86/include/asm/text-patching.h
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Make sure the default return thunk is not used after all return
instructions have been patched by the alternatives because the default
return thunk is insufficient when it comes to mitigating Retbleed or
SRSO.
Fix based on an earlier version by David Kaplan <[email protected]>.
[ bp: Fix the compilation error of warn_thunk_thunk being an invisible
symbol, hoist thunk macro into calling.h ]
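Conceptually, the warning thunk lands in a C helper of this shape (the
warn_thunk_thunk asm glue mentioned above calls into it; sketch only):

  /* Reached only if a return instruction still goes through the
   * default return thunk after alternatives patching. */
  void __warn_thunk(void)
  {
          WARN_ONCE(1, "Unpatched return thunk in use. This should not happen!\n");
  }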
Signed-off-by: Josh Poimboeuf <[email protected]>
Co-developed-by: Borislav Petkov (AMD) <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Link: https://lore.kernel.org/r/20240104132446.GEZZaxnrIgIyat0pqf@fat_crate.local
|
|
We've had issues with gcc and 'asm goto' before, and we created an
'asm_volatile_goto()' macro for that in the past: see commits
3f0116c3238a ("compiler/gcc4: Add quirk for 'asm goto' miscompilation
bug") and a9f180345f53 ("compiler/gcc4: Make quirk for
asm_volatile_goto() unconditional").
Then, much later, we ended up removing the workaround in commit
43c249ea0b1e ("compiler-gcc.h: remove ancient workaround for gcc PR
58670") because we no longer supported building the kernel with the
affected gcc versions, but we left the macro uses around.
Now, Sean Christopherson reports a new version of a very similar
problem, which is fixed by re-applying that ancient workaround. But the
problem in question is limited to only the 'asm goto with outputs'
cases, so instead of re-introducing the old workaround as-is, let's
rename and limit the workaround to just that much less common case.
It looks like there are at least two separate issues that all hit in
this area:
(a) some versions of gcc don't mark the asm goto as 'volatile' when it
has outputs:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98619
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110420
which is easy to work around by just adding the 'volatile' by hand.
(b) Internal compiler errors:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110422
which are worked around by adding the extra empty 'asm' as a
barrier, as in the original workaround.
But the problem Sean sees may be a third thing, since it involves bad
code generation (not an ICE) even with the manually added 'volatile'.
The same old workaround works for this case too, even if this feels a
bit like voodoo programming and may only be hiding the issue.
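The re-applied workaround, now limited to the outputs case, boils down to:

  /* Force 'volatile' and add an empty asm as a barrier afterwards,
   * working around both (a) and (b) above. */
  #define asm_goto_output(x...) \
          do { asm volatile goto(x); asm (""); } while (0)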
Reported-and-tested-by: Sean Christopherson <[email protected]>
Link: https://lore.kernel.org/all/[email protected]/
Cc: Nick Desaulniers <[email protected]>
Cc: Uros Bizjak <[email protected]>
Cc: Jakub Jelinek <[email protected]>
Cc: Andrew Pinski <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Cleanups to Kconfig definitions for KVM
* replace HAVE_KVM with an architecture-dependent symbol, when CONFIG_KVM
may or may not be available depending on CPU capabilities (MIPS)
* replace HAVE_KVM with IS_ENABLED(CONFIG_KVM) for host-side code that is
not part of the KVM module, so that it is completely compiled out
* factor common "select" statements into common code instead of requiring
each architecture to specify them
|
|
It is more accurate to check if KVM is enabled, instead of having the
architecture say so. Architectures always "have" KVM, so for example
checking CONFIG_HAVE_KVM in x86 code is pointless, but if KVM is disabled
in a specific build, there is no need for support code.
Alternatively, many of the #ifdefs could simply be deleted. However,
this would add completely dead code. For example, when KVM is disabled,
there should not be any posted interrupts, i.e. NOT wiring up the "dummy"
handlers and treating IRQs on those vectors as spurious is the right
thing to do.
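A representative conversion, using the posted-interrupt wiring as the
example (IDT entry shown schematically):

  /* Keep the real handler wired up whenever KVM can be built (including
   * as a module); compile it out entirely otherwise. */
  #if IS_ENABLED(CONFIG_KVM)
          INTG(POSTED_INTR_VECTOR, asm_sysvec_kvm_posted_intr_ipi),
  #endif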
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Since all architectures (for historical reasons) have to define
struct kvm_guest_debug_arch, and since userspace has to check
KVM_CHECK_EXTENSION(KVM_CAP_SET_GUEST_DEBUG) anyway, there is
no advantage in masking the capability #define itself. Remove
the #define __KVM_HAVE_GUEST_DEBUG from architecture-specific
headers.
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
KVM uses __KVM_HAVE_* symbols in the architecture-dependent uapi/asm/kvm.h to mask
unused definitions in include/uapi/linux/kvm.h. __KVM_HAVE_READONLY_MEM however
was nothing but a misguided attempt to define KVM_CAP_READONLY_MEM only on
architectures where KVM_CHECK_EXTENSION(KVM_CAP_READONLY_MEM) could possibly
return nonzero. This however does not make sense, and it prevented userspace
from supporting this architecture-independent feature without recompilation.
Therefore, these days __KVM_HAVE_READONLY_MEM does not mask anything and
is only used in virt/kvm/kvm_main.c. Userspace does not need to test it
and there should be no need for it to exist. Remove it and replace it
with a Kconfig symbol within Linux source code.
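The shape of the change (Kconfig symbol name assumed):

  /* Before: keyed off a uapi #define */
  #ifdef __KVM_HAVE_READONLY_MEM
          valid_flags |= KVM_MEM_READONLY;
  #endif

  /* After: a plain Kconfig check inside the kernel */
  if (IS_ENABLED(CONFIG_HAVE_KVM_READONLY_MEM))
          valid_flags |= KVM_MEM_READONLY;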
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Several capabilities that exist only on x86 nevertheless have their
structs defined in include/uapi/linux/kvm.h. Move them to
arch/x86/include/uapi/asm/kvm.h for cleanliness.
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Change uapi header uses of GENMASK to instead use the uapi/linux/bits.h bit
macros, since GENMASK is not defined in uapi headers.
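i.e. conversions of this shape (the field mask is hypothetical):

  #include <linux/bits.h>         /* uapi header: __GENMASK()/__GENMASK_ULL() */

  /* GENMASK(11, 8) is kernel-internal; the uapi spelling is: */
  #define KVM_EXAMPLE_FIELD_MASK  __GENMASK(11, 8)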
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Change uapi header uses of BIT to instead use the uapi/linux/const.h bit
macros, since BIT is not defined in uapi headers.
The PMU mask uses _BITUL since it targets a 32-bit flag field, whereas
the longmode definition is meant for a 64-bit flag field.
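i.e. (flag names hypothetical):

  #include <linux/const.h>        /* uapi header: _BITUL()/_BITULL() */

  #define KVM_EXAMPLE_PMU_FLAG       _BITUL(0)   /* 32-bit flag field */
  #define KVM_EXAMPLE_LONGMODE_FLAG  _BITULL(0)  /* 64-bit flag field */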
Cc: Sean Christopherson <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Signed-off-by: Dionna Glaze <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|