2020-12-02  arm64: add C wrappers for SET_PSTATE_*()  (Mark Rutland, 3 files, -3/+7)

To make callsites easier to read, add trivial C wrappers for the SET_PSTATE_*() helpers, and convert trivial uses over to these. The new wrappers will be used further in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James Morse <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
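As a sketch (the exact macro spellings and the set of PSTATE fields covered are assumptions based on the patch description), the wrappers can be as thin as:

    /* Wrap the asm-emitting SET_PSTATE_*() helpers so callsites read as C. */
    #define set_pstate_pan(x)   asm volatile(SET_PSTATE_PAN(x))
    #define set_pstate_uao(x)   asm volatile(SET_PSTATE_UAO(x))
    #define set_pstate_ssbs(x)  asm volatile(SET_PSTATE_SSBS(x))

A callsite then becomes set_pstate_pan(1) rather than open-coded inline asm.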
2020-12-02  arm64: ensure ERET from kthread is illegal  (Mark Rutland, 1 file, -9/+8)

For consistency, all tasks have a pt_regs reserved at the highest portion of their task stack. Among other things, this ensures that a task's SP is always pointing within its stack rather than pointing immediately past the end.

While it is never legitimate to ERET from a kthread, we take pains to initialize pt_regs for kthreads as if this were legitimate. As this is never legitimate, the effects of an erroneous return are rarely tested.

Let's simplify things by initializing a kthread's pt_regs such that an ERET is caught as an illegal exception return, and removing the explicit initialization of other exception context. Note that as spectre_v4_enable_task_mitigation() only manipulates the PSTATE within the unused regs this is safe to remove.

As user tasks will have their exception context initialized via start_thread() or start_compat_thread(), this should only impact cases where something has gone very wrong and we'd like that to be clearly indicated.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James Morse <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
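A sketch of the idea (field and constant names follow arm64 conventions; the exact initialization in the patch may differ):

    /* copy_thread(), kernel-thread case: */
    memset(childregs, 0, sizeof(struct pt_regs));
    /*
     * PSR_IL turns any ERET from this context into an illegal
     * exception return, so a bogus return from a kthread traps
     * loudly instead of resuming with stale exception state.
     */
    childregs->pstate = PSR_MODE_EL1h | PSR_IL;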
2020-11-30  KVM: arm64: Advertise ID_AA64PFR0_EL1.CSV3=1 if the CPUs are Meltdown-safe  (Marc Zyngier, 3 files, -5/+18)

Cores that predate the introduction of ID_AA64PFR0_EL1.CSV3 to the ARMv8 architecture have this field set to 0, even if some of them are not affected by the vulnerability. The kernel maintains a list of unaffected cores (A53, A55 and a few others) so that it doesn't impose an expensive mitigation unnecessarily.

As we do for CSV2, let's expose the CSV3 property to guests that run on HW that is effectively not vulnerable. This can be reset to zero by writing to the ID register from userspace, ensuring that VMs can be migrated despite the new property being set.

Reported-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-30  KVM: arm64: Delay the polling of the GICR_VPENDBASER.Dirty bit  (Shenming Lu, 6 files, -4/+47)

In order to reduce the impact of the VPT parsing happening on the GIC, we can split the vcpu residency in two phases:

- programming GICR_VPENDBASER: this still happens in vcpu_load()
- checking for the VPT parsing to be complete: this can happen on vcpu entry (in kvm_vgic_flush_hwstate())

This allows the GIC and the CPU to work in parallel, removing some of the entry overhead.

Suggested-by: Marc Zyngier <[email protected]>
Signed-off-by: Shenming Lu <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-28  arm64: Make the Meltdown mitigation state available  (Marc Zyngier, 2 files, -3/+19)

Our Meltdown mitigation state isn't exposed outside of cpufeature.c, contrary to the rest of the Spectre mitigation state. As we are going to use it in KVM, expose an arm64_get_meltdown_state() helper which returns the same possible values as arm64_get_spectre_v?_state().

Signed-off-by: Marc Zyngier <[email protected]>
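A plausible shape for the helper (a sketch; the internal checks are assumptions modelled on the Spectre state getters):

    enum mitigation_state arm64_get_meltdown_state(void)
    {
            if (__meltdown_safe)
                    return SPECTRE_UNAFFECTED;

            /* KPTI counts as the mitigation being in place */
            if (arm64_kernel_unmapped_at_el0())
                    return SPECTRE_MITIGATED;

            return SPECTRE_VULNERABLE;
    }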
2020-11-27  Merge branch 'kvm-arm64/misc-5.11' into kvmarm-master/next  (Marc Zyngier, 5 files, -59/+37)

Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  Merge branch 'kvm-arm64/cache-demux' into kvmarm-master/next  (Marc Zyngier, 2 files, -10/+31)

Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: selftests: Filter out DEMUX registers  (Andrew Jones, 1 file, -9/+30)

DEMUX register presence depends on the host's hardware (the CLIDR_EL1 register to be precise). This means there's no set of them that we can bless and that it's possible to encounter new ones when running on different hardware (which would generate "Consider adding them ..." messages, but we'll never want to add them).

Remove the ones we have in the blessed list and filter them out of the new list, but also provide a new command line switch to list them if one so desires.

Signed-off-by: Andrew Jones <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
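The filtering boils down to matching on the register ID's coproc field (a sketch; the helper name is hypothetical, the masks are the existing uapi ones):

    static bool is_demux_reg(__u64 reg)
    {
            /* DEMUX registers carry KVM_REG_ARM_DEMUX in their coproc field. */
            return (reg & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX;
    }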
2020-11-27  KVM: arm64: CSSELR_EL1 max is 13  (Andrew Jones, 1 file, -1/+1)

Not counting TnD, which KVM doesn't currently consider, CSSELR_EL1 can have a maximum value of 0b1101 (13), which corresponds to an instruction cache at level 7. With CSSELR_MAX set to 12 we can only select up to cache level 6. Change it to 14.

Signed-off-by: Andrew Jones <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
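The constant acts as an exclusive upper bound on the selector, hence 14 to cover values 0..13 (a sketch of the typical usage; the exact check is an assumption):

    #define CSSELR_MAX 14   /* CSSELR_EL1 values 0..13 are selectable */

    if (val >= CSSELR_MAX)
            return -ENOENT; /* reject out-of-range cache selectors */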
2020-11-27  KVM: arm64: Remove unused __extended_idmap_trampoline() prototype  (Will Deacon, 1 file, -1/+0)

__extended_idmap_trampoline() was removed a long time ago by 3421e9d88d7a ("arm64: KVM: Simplify HYP init/teardown") so remove the unused function prototype.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-27  KVM: arm64: Remove kvm_arch_vm_ioctl_check_extension()  (Will Deacon, 4 files, -55/+34)

kvm_arch_vm_ioctl_check_extension() is only called from kvm_vm_ioctl_check_extension(), so we can inline it and remove the extra function.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-27  KVM: arm64: Move 'struct kvm_arch_memory_slot' out of uapi/  (Will Deacon, 2 files, -3/+3)

'struct kvm_arch_memory_slot' isn't part of the user ABI, so move it out of the uapi/ headers in case we start using it in future and accidentally back ourselves into a corner.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-27  KVM: nSVM: set fixed bits by hand  (Paolo Bonzini, 1 file, -4/+5)

SVM generally ignores fixed-1 bits. Set them manually so that we do not accidentally end up without those bits set in struct kvm_vcpu; it is part of the userspace API that KVM always returns the value with those bits set.

Signed-off-by: Paolo Bonzini <[email protected]>
2020-11-27  Merge branch 'kvm-arm64/vector-rework' into kvmarm-master/next  (Marc Zyngier, 13 files, -260/+215)

Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  Merge branch 'kvm-arm64/pmu-undef' into kvmarm-master/next  (Marc Zyngier, 5 files, -53/+32)

Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Get rid of the PMU ready state  (Marc Zyngier, 2 files, -4/+0)

The PMU ready state has no user left. Goodbye.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Gate kvm_pmu_update_state() on the PMU feature  (Marc Zyngier, 1 file, -1/+1)

We currently gate the update of the PMU state on the PMU being "ready". The "ready" state is only set to true when the first vcpu run is successful, and if it isn't, we never reach the update code.

So the "ready" state is never the right thing to check for, and it should instead be the presence of the PMU feature, which makes a bit more sense.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
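The change amounts to swapping the gate (a sketch; the surrounding update code is elided):

    /* Before: if (!vcpu->arch.pmu.ready) return; -- never the right check. */
    if (!kvm_vcpu_has_pmu(vcpu))
            return;         /* gate on the feature bit instead */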
2020-11-27  KVM: arm64: Remove dead PMU sysreg decoding code  (Marc Zyngier, 1 file, -5/+4)

The handling of traps in access_pmu_evcntr() has a couple of ominous "else return false;" statements that don't make any sense: the decoding tree covers all the registers that trap to this handler, and returning false implies that we change PC, which we don't.

Get rid of what is evidently dead code.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Remove PMU RAZ/WI handling  (Marc Zyngier, 1 file, -30/+0)

There is no RAZ/WI handling allowed for the PMU registers in the ARMv8 architecture. Nobody can remember how we came to the conclusion that we could do this, but the ARMv8 ARM is pretty clear that we cannot.

Remove the RAZ/WI handling of the PMU system registers when it is not configured.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Inject UNDEF on PMU access when no PMU configured  (Marc Zyngier, 1 file, -4/+8)

The ARMv8 architecture says that in the absence of FEAT_PMUv3, all the PMU-related registers generate an UNDEF. Let's make sure that all our PMU handlers catch this case by hooking into check_pmu_access_disabled(), and add checks in a couple of other places.

Note that we still cannot deliver an exception into the guest as the offending cases are already caught by the RAZ/WI handling. But this puts us one step closer to architectural compliance.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
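The pattern being hooked in looks roughly like this (a sketch; the helper name here is hypothetical, kvm_inject_undefined() is the existing injection API):

    static bool pmu_access_denied(struct kvm_vcpu *vcpu)
    {
            if (kvm_vcpu_has_pmu(vcpu))
                    return false;

            /* No PMU for this vcpu: architecturally, the access UNDEFs. */
            kvm_inject_undefined(vcpu);
            return true;
    }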
2020-11-27  KVM: arm64: Refuse illegal KVM_ARM_VCPU_PMU_V3 at reset time  (Marc Zyngier, 2 files, -2/+6)

We currently accept configuring a PMU when a vcpu is created, even if the HW (or the host) doesn't support it. This results in failures when attributes get set, which is a bit odd as we should have failed the vcpu creation in the first place.

Move the check to the point where we check the vcpu feature set, and fail early if we cannot support a PMU. This further simplifies the attribute handling.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
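A sketch of the reset-time check (error handling simplified; placement is an assumption):

    /* In kvm_reset_vcpu(): fail early if the PMU feature cannot be backed. */
    if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3())
            return -EINVAL;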
2020-11-27  KVM: arm64: Set ID_AA64DFR0_EL1.PMUVer to 0 when no PMU support  (Marc Zyngier, 1 file, -1/+6)

We always expose the HW view of the PMU in ID_AA64DFR0_EL1.PMUVer, even when the PMU feature is disabled, while the architecture says that FEAT_PMUv3 not being implemented should result in this field being zero.

Let's follow the architecture's guidance.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
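A sketch of the masking (the shift constant is the existing one; the exact sanitisation in the patch may differ):

    /* Hide the PMU from the guest's view of ID_AA64DFR0_EL1. */
    if (!kvm_vcpu_has_pmu(vcpu))
            val &= ~(0xfUL << ID_AA64DFR0_PMUVER_SHIFT);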
2020-11-27  KVM: arm64: Refuse to run VCPU if PMU is not initialized  (Alexandru Elisei, 1 file, -4/+4)

When enabling the PMU in kvm_arm_pmu_v3_enable(), KVM returns early if the PMU "created" flag is false and skips any other checks. Because PMU emulation is gated only on the VCPU feature being set, this makes it possible for userspace to get away with setting the VCPU feature but not doing any initialization for the PMU. Fix it by returning an error when trying to run the VCPU if the PMU hasn't been initialized correctly.

The PMU is marked as created only if the interrupt ID has been set when using an in-kernel irqchip. This means the same check in kvm_arm_pmu_v3_enable() is redundant; remove it.

Signed-off-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-27  KVM: arm64: Add kvm_vcpu_has_pmu() helper  (Marc Zyngier, 2 files, -5/+6)

There are a number of places where we check for the KVM_ARM_VCPU_PMU_V3 feature. Wrap this check into a new kvm_vcpu_has_pmu(), and use it at the existing locations.

No functional change.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
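The helper boils down to a feature-bit test, along the lines of:

    #define kvm_vcpu_has_pmu(vcpu)                                  \
            (test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))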
2020-11-27  Merge branch 'kvm-arm64/host-hvc-table' into kvmarm-master/next  (Marc Zyngier, 8 files, -117/+241)

Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  Merge branch 'kvm-arm64/copro-no-more' into kvmarm-master/next  (Marc Zyngier, 10 files, -274/+151)

Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  Merge branch 'kvm-arm64/el2-pc' into kvmarm-master/next  (Marc Zyngier, 21 files, -737/+666)

Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Avoid repetitive stack access on host EL1 to EL2 exception  (Marc Zyngier, 1 file, -3/+3)

Registers x0/x1 get repeatedly pushed and popped during a host HVC call. Instead, leave the registers on the stack, trading a store instruction on the fast path for an add on the slow path.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Simplify __kvm_enable_ssbs()  (Marc Zyngier, 4 files, -14/+6)

Move the setting of SSBS directly into the HVC handler, using the C helpers rather than the inline assembly code.

Reviewed-by: Alexandru Elisei <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
2020-11-27  KVM: arm64: Patch kimage_voffset instead of loading the EL1 value  (Marc Zyngier, 4 files, -7/+30)

Directly using the kimage_voffset variable is fine for now, but will become more problematic as we start distrusting EL1. Instead, patch the kimage_voffset into the HYP text, ensuring we don't have to load an untrusted value later on.

Signed-off-by: Marc Zyngier <[email protected]>
2020-11-19  kvm: x86/mmu: Add TDP MMU SPTE changed trace point  (Ben Gardon, 2 files, -0/+31)

Add an extremely verbose trace point to the TDP MMU to log all SPTE changes, regardless of callstack / motivation. This is useful when a complete picture of the paging structure is needed or a change cannot be explained with the other, existing trace points.

Tested: ran the demand paging selftest on an Intel Skylake machine with all the trace points used by the TDP MMU enabled and observed them firing with expected values.

This patch can be viewed in Gerrit at: https://linux-review.googlesource.com/c/virt/kvm/kvm/+/3813

Signed-off-by: Ben Gardon <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
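Every SPTE transition in the TDP MMU funnels through one change handler, so a single tracepoint there sees all of them. A sketch of the call site (the argument list is an assumption based on the description):

    /* In the TDP MMU's SPTE change handler: */
    trace_kvm_tdp_mmu_spte_changed(as_id, gfn, level, old_spte, new_spte);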
2020-11-19  kvm: x86/mmu: Add existing trace points to TDP MMU  (Ben Gardon, 1 file, -1/+11)

The TDP MMU was initially implemented without some of the usual tracepoints found in mmu.c. Correct this discrepancy by adding the missing trace points to the TDP MMU.

Tested: ran the demand paging selftest on an Intel Skylake machine with all the trace points used by the TDP MMU enabled and observed them firing with expected values.

This patch can be viewed in Gerrit at: https://linux-review.googlesource.com/c/virt/kvm/kvm/+/3812

Signed-off-by: Ben Gardon <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
2020-11-16  KVM: SVM: check CR4 changes against vcpu->arch  (Paolo Bonzini, 1 file, -1/+1)

Similarly to what vmx/vmx.c does, use vcpu->arch.cr4 to check if CR4 bits PGE, PKE and OSXSAVE have changed. When switching between VMCB01 and VMCB02, CPUID has to be adjusted every time if CR4.PKE or CR4.OSXSAVE change; without this patch, instead, CR4 would be checked against the previous value for L2 on vmentry, and against the previous value for L1 on vmexit, and CPUID would not be updated.

Signed-off-by: Paolo Bonzini <[email protected]>
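A sketch of the pattern (helper names are the existing KVM/SVM ones; exact placement within svm_set_cr4() is an assumption):

    unsigned long old_cr4 = vcpu->arch.cr4; /* not the VMCB's stale copy */

    if (npt_enabled && ((old_cr4 ^ cr4) & X86_CR4_PGE))
            svm_flush_tlb(vcpu);            /* global-page bit changed */

    if ((old_cr4 ^ cr4) & (X86_CR4_OSXSAVE | X86_CR4_PKE))
            kvm_update_cpuid_runtime(vcpu); /* CPUID-visible bits changed */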
2020-11-16  KVM: SVM: Move asid to vcpu_svm  (Cathy Avery, 2 files, -3/+8)

KVM does not have separate ASIDs for L1 and L2; either the nested hypervisor and nested guests share a single ASID, or on older processors the ASID is used only to implement TLB flushing. Either way, ASIDs are handled at the VM level.

In preparation for having different VMCBs passed to VMLOAD/VMRUN/VMSAVE for L1 and L2, store the current ASID to struct vcpu_svm and only move it to the VMCB in svm_vcpu_run. This way, TLB flushes can be applied no matter which VMCB will be active during the next svm_vcpu_run.

Signed-off-by: Cathy Avery <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
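A sketch of the run-time hand-off (field and helper names assumed):

    /* svm_vcpu_run(): publish the vcpu-level ASID into whichever VMCB runs. */
    if (svm->vmcb->control.asid != svm->asid) {
            svm->vmcb->control.asid = svm->asid;
            vmcb_mark_dirty(svm->vmcb, VMCB_ASID);
    }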
2020-11-16  x86/kvm: remove unused macro HV_CLOCK_SIZE  (Alex Shi, 1 file, -1/+0)

This macro is useless, and could cause a gcc warning:

    arch/x86/kernel/kvmclock.c:47:0: warning: macro "HV_CLOCK_SIZE" is not used [-Wunused-macros]

Let's remove it.

Signed-off-by: Alex Shi <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: Vitaly Kuznetsov <[email protected]>
Cc: Wanpeng Li <[email protected]>
Cc: Jim Mattson <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: [email protected]
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: [email protected]
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
2020-11-16  KVM: selftests: x86: Set supported CPUIDs on default VM  (Andrew Jones, 18 files, -26/+30)

Almost all tests do this anyway, and the ones that don't, don't appear to care. Only vmx_set_nested_state_test assumes that a feature (VMX) is disabled until later setting the supported CPUIDs. It's better to disable that explicitly anyway.

Signed-off-by: Andrew Jones <[email protected]>
Message-Id: <[email protected]>
[Restore CPUID_VMX, or vmx_set_nested_state breaks. - Paolo]
Signed-off-by: Paolo Bonzini <[email protected]>
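What most tests did by hand, roughly (a sketch; VCPU_ID stands in for each test's own vcpu-id define):

    struct kvm_cpuid2 *cpuid = kvm_get_supported_cpuid();

    vcpu_set_cpuid(vm, VCPU_ID, cpuid); /* now part of default VM setup */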
2020-11-16  KVM: selftests: Make test skipping consistent  (Andrew Jones, 3 files, -10/+14)

Signed-off-by: Andrew Jones <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
2020-11-16  KVM: arm64: Remove redundant hyp vectors entry  (Will Deacon, 3 files, -3/+9)

The hyp vectors entry corresponding to HYP_VECTOR_DIRECT (i.e. when neither Spectre-v2 nor Spectre-v3a are present) is unused, as we can simply dispatch straight to __kvm_hyp_vector in this case.

Remove the redundant vector, and massage the logic for resolving a slot to a vectors entry.

Reported-by: Marc Zyngier <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  arm64: spectre: Consolidate spectre-v3a detection  (Will Deacon, 3 files, -11/+15)

The spectre-v3a mitigation is split between cpu_errata.c and spectre.c, with the former handling detection of the problem and the latter handling enabling of the workaround.

Move the detection logic alongside the enabling logic, like we do for the other spectre mitigations.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  arm64: spectre: Rename ARM64_HARDEN_EL2_VECTORS to ARM64_SPECTRE_V3A  (Will Deacon, 8 files, -18/+22)

Since ARM64_HARDEN_EL2_VECTORS is really a mitigation for Spectre-v3a, rename it accordingly for consistency with the v2 and v4 mitigations.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  KVM: arm64: Allocate hyp vectors statically  (Will Deacon, 9 files, -183/+126)

The EL2 vectors installed when a guest is running point at one of the following configurations for a given CPU:

- Straight at __kvm_hyp_vector
- A trampoline containing an SMC sequence to mitigate Spectre-v2 and then a direct branch to __kvm_hyp_vector
- A dynamically-allocated trampoline which has an indirect branch to __kvm_hyp_vector
- A dynamically-allocated trampoline containing an SMC sequence to mitigate Spectre-v2 and then an indirect branch to __kvm_hyp_vector

The indirect branches mean that VA randomization at EL2 isn't trivially bypassable using Spectre-v3a (where the vector base is readable by the guest).

Rather than populate these vectors dynamically, configure everything statically and use an enumerated type to identify the vector "slot" corresponding to one of the configurations above. This both simplifies the code and makes it much easier to implement at EL2 later on.

Signed-off-by: Will Deacon <[email protected]>
[maz: fixed double call to kvm_init_vector_slots() on nVHE]
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
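The four configurations above map naturally onto an enumerated slot type (a sketch; the names mirror the listed cases and are assumptions):

    enum arm64_hyp_spectre_vector {
            HYP_VECTOR_DIRECT,           /* straight at __kvm_hyp_vector */
            HYP_VECTOR_SPECTRE_DIRECT,   /* SMC mitigation, direct branch */
            HYP_VECTOR_INDIRECT,         /* trampoline, indirect branch */
            HYP_VECTOR_SPECTRE_INDIRECT, /* SMC mitigation, indirect branch */
    };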
2020-11-16  KVM: arm64: Re-jig logic when patching hardened hyp vectors  (Will Deacon, 1 file, -2/+2)

The hardened hyp vectors are not used on systems running with VHE or CPUs without the ARM64_HARDEN_EL2_VECTORS capability.

Re-jig the checking logic slightly in kvm_patch_vector_branch() so that it's a bit clearer what we're looking for. This is purely cosmetic.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  KVM: arm64: Move BP hardening helpers into spectre.h  (Will Deacon, 4 files, -30/+32)

The BP hardening helpers are an integral part of the Spectre-v2 mitigation, so move them into asm/spectre.h and inline the arm64_get_bp_hardening_data() function at the same time.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  KVM: arm64: Make BP hardening globals static instead  (Will Deacon, 3 files, -6/+8)

Branch predictor hardening of the hyp vectors is partially driven by a couple of global variables ('__kvm_bp_vect_base' and '__kvm_harden_el2_vector_slot'). However, these are only used within a single compilation unit, so internalise them there instead.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  KVM: arm64: Move kvm_get_hyp_vector() out of header file  (Will Deacon, 2 files, -45/+44)

kvm_get_hyp_vector() has only one caller, so move it out of kvm_mmu.h and inline it into a new function, cpu_set_hyp_vector(), for setting the vector.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-16  KVM: arm64: Tidy up kvm_map_vector()  (Will Deacon, 1 file, -14/+14)

The bulk of the work in kvm_map_vector() is conditional on the ARM64_HARDEN_EL2_VECTORS capability, so return early if that is not set and make the code a bit easier to read.

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
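The tidy-up is essentially an early return (a sketch; the remainder of the function is elided):

    if (!cpus_have_const_cap(ARM64_HARDEN_EL2_VECTORS))
            return 0;       /* nothing to map on unaffected systems */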
2020-11-16  KVM: arm64: Remove redundant Spectre-v2 code from kvm_map_vector()  (Will Deacon, 1 file, -5/+0)

'__kvm_bp_vect_base' is only used when dealing with the hardened vectors, so remove the redundant assignments in kvm_map_vectors().

Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Quentin Perret <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
2020-11-15  Linux 5.10-rc4  (Linus Torvalds, 1 file, -1/+1)
2020-11-15  Merge tag 'drm-fixes-2020-11-16' of git://anongit.freedesktop.org/drm/drm  (Linus Torvalds, 3 files, -22/+24)

Pull drm fixes from Dave Airlie:
 "Nouveau fixes:

   - atomic modesetting regression fix
   - ttm pre-nv50 fix
   - connector NULL ptr deref fix"

* tag 'drm-fixes-2020-11-16' of git://anongit.freedesktop.org/drm/drm:
  drm/nouveau/kms/nv50-: Use atomic encoder callbacks everywhere
  drm/nouveau/ttm: avoid using nouveau_drm.ttm.type_vram prior to nv50
  drm/nouveau/kms: Fix NULL pointer dereference in nouveau_connector_detect_depth
2020-11-16  Merge branch 'linux-5.10' of git://github.com/skeggsb/linux into drm-fixes  (Dave Airlie, 3 files, -22/+24)
- atomic modesetting regression fix - ttm pre-nv50 fix - connector NULL ptr deref fix Signed-off-by: Dave Airlie <[email protected]> From: Ben Skeggs <[email protected]> Link: https://patchwork.freedesktop.org/patch/msgid/CACAvsv5D9p78MNN0OxVeRZxN8LDqcadJEGUEFCgWJQ6+_rjPuw@mail.gmail.com