|
Merge tag 'mvebu-soc-suspend-3.19' of git://git.infradead.org/linux-mvebu into next/soc
Pull "mvebu SoC suspend changes for v3.19" from Jason Cooper:
- Armada 370/XP suspend/resume support
- mvebu SoC driver suspend/resume support
  - irqchip
  - clocksource
  - mbus
  - clk
* tag 'mvebu-soc-suspend-3.19' of git://git.infradead.org/linux-mvebu:
ARM: mvebu: add SDRAM controller description for Armada XP
ARM: mvebu: adjust mbus controller description on Armada 370/XP
ARM: mvebu: add suspend/resume DT information for Armada XP GP
ARM: mvebu: synchronize secondary CPU clocks on resume
ARM: mvebu: make sure MMU is disabled in armada_370_xp_cpu_resume
ARM: mvebu: Armada XP GP specific suspend/resume code
ARM: mvebu: reserve the first 10 KB of each memory bank for suspend/resume
ARM: mvebu: implement suspend/resume support for Armada XP
clk: mvebu: add suspend/resume for gatable clocks
bus: mvebu-mbus: provide a mechanism to save SDRAM window configuration
bus: mvebu-mbus: suspend/resume support
clocksource: time-armada-370-xp: add suspend/resume support
irqchip: armada-370-xp: Add suspend/resume support
Documentation: dt-bindings: minimal documentation for MVEBU SDRAM controller
Signed-off-by: Arnd Bergmann <[email protected]>
|
|
This is a squash of several imx_v6_v7_defconfig update patches.
- Enable tlv320aic3x audio codec by default (Phytec PBAB01 board)
- Enable DS1307 rtc and gpio fan by default (TBS2910 board)
- Select thermal related drivers
- Add SNVS power off driver
Signed-off-by: Dmitry Lavnikevich <[email protected]>
Signed-off-by: Soeren Moch <[email protected]>
Signed-off-by: Fabio Estevam <[email protected]>
Signed-off-by: Robin Gong <[email protected]>
Signed-off-by: Shawn Guo <[email protected]>
Signed-off-by: Arnd Bergmann <[email protected]>
|
|
Instead of returning a possibly random or'ed together value, let's
always return -EFAULT if rc is set.
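A minimal sketch of the pattern, with hypothetical structure and field names (the real s390 delivery code differs in detail): rc accumulates the results of several copy helpers, and the function reports a plain -EFAULT whenever any of them failed instead of returning the OR-ed value.
    rc  = put_user(irq.ext_params, &lc->ext_params);    /* 0 or -EFAULT */
    rc |= put_user(irq.ext_params2, &lc->ext_params2);
    rc |= copy_payload_to_guest(vcpu, &irq);             /* hypothetical helper */

    return rc ? -EFAULT : 0;                             /* not "return rc;" */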
Signed-off-by: Jens Freimann <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Acked-by: Cornelia Huck <[email protected]>
Signed-off-by: Christian Borntraeger <[email protected]>
|
|
Currently we use a mixture of atomic/non-atomic bitops
and the local_int spin lock to protect the pending_irqs bitmap
and interrupt payload data.
We need to use atomic bitops for the pending_irqs bitmap everywhere
and in addition acquire the local_int lock where interrupt data needs
to be protected.
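A rough sketch of that rule; IRQ_PEND_EXT and the li layout are illustrative, not the real s390 structures:
    spin_lock(&li->lock);
    li->irq.ext = *ext_info;                   /* payload: local_int lock held   */
    set_bit(IRQ_PEND_EXT, &li->pending_irqs);  /* bitmap: always an atomic bitop */
    spin_unlock(&li->lock);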
Signed-off-by: Jens Freimann <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Signed-off-by: Christian Borntraeger <[email protected]>
|
|
The cpu address of a source cpu (responsible for an external irq) is only to
be stored if bit 6 of the ext irq code is set.
If bit 6 is not set, it is to be zeroed out.
The special external irq code used for virtio and pfault uses the cpu addr as a
parameter field. As bit 6 is set, this implementation is correct.
Reviewed-by: Thomas Huth <[email protected]>
Signed-off-by: David Hildenbrand <[email protected]>
Signed-off-by: Christian Borntraeger <[email protected]>
|
|
Add iommus properties to the device tree nodes for the two display
controllers found on Tegra124. This will allow the display controllers
to map physically non-contiguous buffers to I/O virtual contiguous
address spaces so that they can be used for scan-out.
Signed-off-by: Thierry Reding <[email protected]>
|
|
Add iommus properties to the device tree nodes for the two display
controllers found on Tegra114. This will allow the display controllers
to map physically non-contiguous buffers to I/O virtual contiguous
address spaces so that they can be used for scan-out.
Signed-off-by: Thierry Reding <[email protected]>
|
|
Add iommus properties to the device tree nodes for the two display
controllers found on Tegra30. This will allow the display controllers to
map physically non-contiguous buffers to I/O virtual contiguous address
spaces so that they can be used for scan-out.
Signed-off-by: Thierry Reding <[email protected]>
|
|
Add the memory controller and wire up the interrupt that is used to
report errors. Provide a reference to the memory controller clock and
mark the device as being an IOMMU by adding an #iommu-cells property.
Signed-off-by: Thierry Reding <[email protected]>
|
|
Add the device tree node for the memory controller found on Tegra114
SoCs. The memory controller integrates an IOMMU (called SMMU) as well as
various knobs to tweak memory accesses by the various clients.
The old IOMMU device tree node is collapsed into the memory controller
node to more accurately describe the hardware. While this change is
incompatible, the IOMMU driver has never had any users so the change is
not going to cause any breakage.
Signed-off-by: Thierry Reding <[email protected]>
|
|
Collapses the old memory-controller and IOMMU device tree nodes into a
single node to more accurately describe the hardware.
While this is an incompatible change there are no users of the IOMMU on
Tegra, even though a driver has existed for some time.
Signed-off-by: Thierry Reding <[email protected]>
|
|
This patch adds the APB_MISC_GP_MIPI_PAD_CTRL_0 as a pin-control bank on
Tegra124 so the new MIPI pad control group can be muxed between CSI and
DSI_B.
Signed-off-by: Sean Paul <[email protected]>
Acked-by: Stephen Warren <[email protected]>
Signed-off-by: Thierry Reding <[email protected]>
|
|
While fixing an x2apic bug,
17d68b7 KVM: x86: fix guest-initiated crash with x2apic (CVE-2013-6376)
we've made only one cluster available. This means that the amount of
logically addressable x2APICs was reduced to 16 and VCPUs kept
overwriting themselves in that region, so even the first cluster wasn't
set up correctly.
This patch extends x2APIC support back to the logical_map's limit, and
keeps the CVE fixed as messages for non-present APICs are dropped.
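For reference, a sketch of how the x2APIC logical ID splits into a cluster number and a per-cluster mask (per the Intel SDM), which is why a single cluster caps the logically addressable APICs at 16:
    /* bits 31:16 select the cluster, bits 15:0 are a 16-CPU mask within it */
    static inline u32 x2apic_cluster_id(u32 ldr)   { return ldr >> 16; }
    static inline u16 x2apic_cluster_mask(u32 ldr) { return ldr & 0xffff; }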
Signed-off-by: Radim Krčmář <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
They can't be violated now, but play it safe for the future.
Signed-off-by: Radim Krčmář <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
x2apic allows destinations > 0xff and we don't want them delivered to
lower APICs. They are correctly handled by doing nothing.
Signed-off-by: Radim Krčmář <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Physical mode can't address more than one APIC, but lowest-prio is
allowed, so we just reuse our paths.
SDM 10.6.2.1 Physical Destination:
Also, for any non-broadcast IPI or I/O subsystem initiated interrupt
with lowest priority delivery mode, software must ensure that APICs
defined in the interrupt address are present and enabled to receive
interrupts.
We could warn on top of that.
Signed-off-by: Radim Krčmář <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
False from kvm_irq_delivery_to_apic_fast() means that we don't handle it
in the fast path, but we still return false in cases that were perfectly
handled; fix that.
Signed-off-by: Radim Krčmář <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
0x830 MSR is 0x300 xAPIC MMIO, which is MSR_ICR.
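The mapping follows from the x2APIC register layout: MSR index 0x800 plus the xAPIC MMIO offset divided by 16 (a sketch):
    #define APIC_BASE_MSR 0x800
    /* 0x830 -> (0x830 - 0x800) << 4 = 0x300, the ICR's xAPIC MMIO offset */
    static inline u32 x2apic_msr_to_xapic_offset(u32 msr)
    {
            return (msr - APIC_BASE_MSR) << 4;
    }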
Signed-off-by: Radim Krčmář <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
x2APIC has no registers for DFR and ICR2 (see Intel SDM 10.12.1.2 "x2APIC
Register Address Space"). KVM needs to cause #GP on such accesses.
Fix it (DFR and ICR2 on read, ICR2 on write, DFR already handled on writes).
Signed-off-by: Nadav Amit <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Certain x86 instructions that use modrm operands only allow memory operand
(i.e., mod012), and cause a #UD exception otherwise. KVM ignores this fact.
Currently, the instructions that are such and are emulated by KVM are MOVBE,
MOVNTPS, MOVNTPD and MOVNTI. MOVBE is the most blunt example, since it may be
emulated by the host regardless of MMIO.
The fix introduces a new group for handling such instructions, marking mod3 as
illegal instruction.
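A sketch of the underlying check (not the emulator's actual decode tables): the mod field is the top two bits of the ModRM byte, and mod == 3 selects a register operand, which these instructions must reject with #UD.
    static bool modrm_is_register_form(u8 modrm)
    {
            return (modrm >> 6) == 3;    /* mod == 0b11: register, not memory */
    }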
Signed-off-by: Nadav Amit <[email protected]>
Reviewed-by: Radim Krčmář <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Instead of checking at each call of set_phys_to_machine() whether a
new p2m page has to be allocated due to writing an entry in a large
invalid or identity area, just map those areas read only and react
to a page fault on write by allocating the new page.
This change will make the common path with no allocation much
faster as it only requires a single write of the new mfn instead
of walking the address translation tables and checking for the
special cases.
Suggested-by: David Vrabel <[email protected]>
Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: David Vrabel <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
|
|
At start of the day the Xen hypervisor presents a contiguous mfn list
to a pv-domain. In order to support sparse memory this mfn list is
accessed via a three level p2m tree built early in the boot process.
Whenever the system needs the mfn associated with a pfn this tree is
used to find the mfn.
Instead of using a software walked tree for accessing a specific mfn
list entry this patch is creating a virtual address area for the
entire possible mfn list including memory holes. The holes are
covered by mapping a pre-defined page consisting only of "invalid
mfn" entries. Access to a mfn entry is possible by just using the
virtual base address of the mfn list and the pfn as index into that
list. This speeds up the (hot) path of determining the mfn of a
pfn.
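A sketch of the resulting lookup, using a hypothetical name for the mapped list; the real code keeps additional special cases:
    /* Holes alias a shared read-only page whose entries are all
     * INVALID_P2M_ENTRY, so every pfn can be indexed safely. */
    static inline unsigned long linear_pfn_to_mfn(unsigned long pfn)
    {
            return xen_p2m_list[pfn];    /* one indexed read, no tree walk */
    }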
Kernel build on a Dell Latitude E6440 (2 cores, HT) in 64 bit Dom0
showed the following improvements:
Elapsed time: 32:50 -> 32:35
System: 18:07 -> 17:47
User: 104:00 -> 103:30
Tested with the following configurations:
- 64 bit dom0, 8GB RAM
- 64 bit dom0, 128 GB RAM, PCI-area above 4 GB
- 32 bit domU, 512 MB, 8 GB, 43 GB (more wouldn't work even without
the patch)
- 32 bit domU, ballooning up and down
- 32 bit domU, save and restore
- 32 bit domU with PCI passthrough
- 64 bit domU, 8 GB, 2049 MB, 5000 MB
- 64 bit domU, ballooning up and down
- 64 bit domU, save and restore
- 64 bit domU with PCI passthrough
Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
|
|
Today get_phys_to_machine() is always called when the mfn for a pfn
is to be obtained. Add a wrapper __pfn_to_mfn() as inline function
to be able to avoid calling get_phys_to_machine() when possible as
soon as the switch to a linear mapped p2m list has been done.
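A sketch of the wrapper idea under that assumption (the conditions in the real helper differ): the common case becomes a direct read of the linear list, with get_phys_to_machine() kept as the fallback.
    static inline unsigned long __pfn_to_mfn(unsigned long pfn)
    {
            unsigned long mfn = xen_p2m_list[pfn];    /* hypothetical linear list */

            if (unlikely(mfn == INVALID_P2M_ENTRY))   /* hole or special entry */
                    return get_phys_to_machine(pfn);  /* slow path */

            return mfn;
    }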
Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: David Vrabel <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
|
|
Introduces lookup_pmd_address() to get the address of the pmd entry
related to a virtual address in the current address space. This
function is needed for support of a virtual mapped sparse p2m list
in xen pv domains, as we need the address of the pmd entry, not the
one of the pte in that case.
Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
|
|
When the physical memory configuration is initialized the p2m entries
for unpopulated memory pages are set to "invalid". As those pages
are beyond the hypervisor built p2m list the p2m tree has to be
extended.
This patch delays processing the extra memory related p2m entries
during the boot process until some more basic memory management
functions are callable. This removes the need to create new p2m
entries until virtual memory management is available.
Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: David Vrabel <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
|
|
The m2p overrides are used to be able to find the local pfn for a
foreign mfn mapped into the domain. They are used by driver backends
having to access frontend data.
As this functionality isn't used in early boot it makes no sense to
initialize the m2p override functions very early. It can be done
later without doing any harm, removing the need for allocating memory
via extend_brk().
While at it make some m2p override functions static as they are only
used internally.
Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: David Vrabel <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
|
|
Early in the boot process the memory layout of a pv-domain is changed
to match the E820 map (either the host one for Dom0 or the Xen one)
regarding placement of RAM and PCI holes. This requires removing memory
pages initially located at positions not suitable for RAM and adding
them later at higher addresses where no restrictions apply.
To be able to operate on the hypervisor supported p2m list until a
virtual mapped linear p2m list can be constructed, remapping must
be delayed until virtual memory management is initialized, as the
initial p2m list can't be extended arbitrarily at physical memory
initialization time due to its fixed structure.
A further advantage is the reduction in complexity and code volume as
we don't have to be careful regarding memory restrictions during p2m
updates.
Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: David Vrabel <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
|
|
In arch/x86/xen/p2m.c three different allocation functions for
obtaining a memory page are used: extend_brk(), alloc_bootmem_align()
or __get_free_page(). Which of those functions is used depends on the
progress of the boot process of the system.
Introduce a common allocation routine that selects the underlying
allocator dynamically based on the boot progress. This allows
moving initialization steps without having to care about changing
allocation calls.
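A sketch of such a routine as described; the actual selection logic and allocators used by the kernel may differ:
    static void *xen_alloc_p2m_page(void)
    {
            if (unlikely(!slab_is_available()))          /* early in boot */
                    return alloc_bootmem_align(PAGE_SIZE, PAGE_SIZE);

            return (void *)__get_free_page(GFP_KERNEL);  /* later in boot */
    }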
Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
|
|
Some functions in arch/x86/xen/p2m.c are used locally only. Make them
static. Rearrange the functions in p2m.c to avoid forward declarations.
Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
|
|
The source arch/x86/xen/p2m.c has some coding style issues. Fix them.
Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
|
|
When hardware supports APIC/x2APIC virtualization we don't need to use
pirqs for MSI handling and can use the APIC instead, since most APIC
accesses (MMIO or MSR) will now be processed without VMEXITs.
As an example, netperf on the original code produces this profile
(collected with 'xentrace -e 0x0008ffff -T 5'):
342 cpu_change
260 CPUID
34638 HLT
64067 INJ_VIRQ
28374 INTR
82733 INTR_WINDOW
10 NPF
24337 TRAP
370610 vlapic_accept_pic_intr
307528 VMENTRY
307527 VMEXIT
140998 VMMCALL
127 wrap_buffer
After applying this patch the same test shows
230 cpu_change
260 CPUID
36542 HLT
174 INJ_VIRQ
27250 INTR
222 INTR_WINDOW
20 NPF
24999 TRAP
381812 vlapic_accept_pic_intr
166480 VMENTRY
166479 VMEXIT
77208 VMMCALL
81 wrap_buffer
ApacheBench results (ab -n 10000 -c 200) improve by about 10%.
Signed-off-by: Boris Ostrovsky <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Reviewed-by: Andrew Cooper <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
|
|
If the hardware supports APIC virtualization we may decide not to use
pirqs and instead use APIC/x2APIC directly, meaning that we don't want
to set x86_msi.setup_msi_irqs and x86_msi.teardown_msi_irq to
Xen-specific routines. However, x2APIC is not set up by the time
pci_xen_hvm_init() is called so we need to postpone setting these ops
until later, when we know which APIC mode is used.
(Note that currently x2APIC is never initialized on HVM guests. This
may change in the future)
Signed-off-by: Boris Ostrovsky <[email protected]>
Acked-by: Stefano Stabellini <[email protected]>
Signed-off-by: David Vrabel <[email protected]>
|
|
Introduce support for new hypercall GNTTABOP_cache_flush.
Use it to perform cache flushing on pages used for dma when necessary.
If GNTTABOP_cache_flush is supported by the hypervisor, we don't need to
bounce dma map operations that involve foreign grants and non-coherent
devices.
Signed-off-by: Stefano Stabellini <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Acked-by: Ian Campbell <[email protected]>
|
|
Introduce an arch specific function to find out whether a particular dma
mapping operation needs to bounce on the swiotlb buffer.
On ARM and ARM64, if the page involved is a foreign page and the device
is not coherent, we need to bounce because at unmap time we cannot
execute any required cache maintenance operations (we don't know how to
find the pfn from the mfn).
No change of behaviour for x86.
Signed-off-by: Stefano Stabellini <[email protected]>
Reviewed-by: David Vrabel <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Acked-by: Ian Campbell <[email protected]>
Acked-by: Konrad Rzeszutek Wilk <[email protected]>
|
|
Merge xen/mm32.c into xen/mm.c.
As a consequence the code gets compiled on arm64 too.
Signed-off-by: Stefano Stabellini <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
|
|
In xen_dma_map_page, if the page is a local page, call the native
map_page dma_ops. If the page is foreign, call __xen_dma_map_page that
issues any required cache maintenance operations via hypercall.
The reason for doing this is that the native dma_ops map_page could
allocate buffers that need to be freed. If the page is foreign we don't
call the native unmap_page dma_ops function, resulting in a memory leak.
Suggested-by: Catalin Marinas <[email protected]>
Signed-off-by: Stefano Stabellini <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
|
|
dev_addr is the machine address of the page.
The new parameter can be used by the ARM and ARM64 implementations of
xen_dma_map_page to find out if the page is a local page (pfn == mfn) or
a foreign page (pfn != mfn).
dev_addr could be retrieved again from the physical address, using
pfn_to_mfn, but it requires accessing an rbtree. Since we already have
the dev_addr in our hands at the call site there is no need to get the
mfn twice.
Signed-off-by: Stefano Stabellini <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Acked-by: Konrad Rzeszutek Wilk <[email protected]>
|
|
Use is_device_dma_coherent to check whether we need to issue cache
maintenance operations rather than checking on the existence of a
particular dma_ops function for the device.
This is correct because coherent devices don't need cache maintenance
operations - arm_coherent_dma_ops does not set the hooks that we
were previously checking for existence.
Signed-off-by: Stefano Stabellini <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Acked-by: Ian Campbell <[email protected]>
|
|
Introduce a boolean flag and an accessor function to check whether a
device is dma_coherent. Set the flag from set_arch_dma_coherent_ops.
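A minimal sketch of the flag-plus-accessor pattern; the field placement (dev->archdata) is an assumption for illustration:
    static inline void set_device_dma_coherent(struct device *dev, bool coherent)
    {
            dev->archdata.dma_coherent = coherent;  /* set from set_arch_dma_coherent_ops() */
    }

    static inline bool is_device_dma_coherent(struct device *dev)
    {
            return dev->archdata.dma_coherent;
    }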
Signed-off-by: Stefano Stabellini <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Acked-by: Russell King <[email protected]>
|
|
Introduce a boolean flag and an accessor function to check whether a
device is dma_coherent. Set the flag from set_arch_dma_coherent_ops.
Signed-off-by: Stefano Stabellini <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
|
|
Remove code duplication in mm32.c by calling the native dma_ops if the
page is a local page (not a foreign page). Use a simple pfn_valid(pfn)
check to figure out if the page is local, exploiting the fact that dom0
is mapped 1:1, therefore pfn_valid always returns false when called on a
foreign mfn.
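A sketch of that test; the helper naming is illustrative:
    static bool xen_dma_page_is_local(dma_addr_t handle)
    {
            unsigned long pfn = PFN_DOWN(handle);

            /* dom0 is mapped 1:1, so a foreign machine frame never maps
             * to a valid local pfn and pfn_valid() returns false. */
            return pfn_valid(pfn);
    }
Callers would defer to the native dma_ops when this returns true and fall back to the hypercall-based cache maintenance otherwise.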
Suggested-by: Catalin Marinas <[email protected]>
Signed-off-by: Stefano Stabellini <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
|
|
Dom0 is not actually capable of issuing outer_inv_range or
outer_clean_range calls.
Signed-off-by: Stefano Stabellini <[email protected]>
Acked-by: Ian Campbell <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
|
|
The feature has been removed from Xen. Also Linux cannot use it on ARM32
without CONFIG_ARM_LPAE.
Signed-off-by: Stefano Stabellini <[email protected]>
Acked-by: Ian Campbell <[email protected]>
Reviewed-by: David Vrabel <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
|
|
Currently the kernel patches all necessary instructions once at boot
time, so modules are not covered by this.
Change the apply_alternatives() function to take a beginning and an
end pointer and introduce a new variant (apply_alternatives_all()) to
cover the existing use case for the static kernel image section.
Add a module_finalize() function to arm64 to check for an
alternatives section in a module and patch only the instructions from
that specific area.
Since that module code is not touched before the module
initialization has ended, we don't need to halt the machine before
doing the patching in the module's code.
Signed-off-by: Andre Przywara <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
If the overflow threshold for a counter is set above or near the
0xffffffff boundary then the kernel may lose track of the overflow
causing only events that occur *after* the overflow to be recorded.
Specifically the problem occurs when the value of the performance counter
overtakes its original programmed value due to wrap around.
Typical solutions to this problem are either to avoid programming in
values likely to be overtaken or to treat the overflow bit as the 33rd
bit of the counter.
It's somewhat fiddly to refactor the code to correctly handle the 33rd bit
during irqsave sections (context switches for example) so instead we take
the simpler approach of avoiding values likely to be overtaken.
We set the limit to half of max_period because this matches the limit
imposed in __hw_perf_event_init(). This causes a doubling of the interrupt
rate for large threshold values; however, even with a very fast counter
ticking at 4GHz the interrupt rate would only be ~1Hz.
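A sketch of the clamp applied when (re)programming the counter; variable names follow the description rather than the driver's code:
    if (left > (max_period >> 1))
            left = max_period >> 1;    /* never leave more than half of
                                        * max_period before wrap-around */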
Signed-off-by: Daniel Thompson <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
If I include asm/irq.h at the top of my code and set ARCH=arm64,
I'll get a compile warning; details are below:
warning: ‘struct pt_regs’
declared inside parameter list [enabled by default]
This patch is suggested by Arnd, see:
http://lists.infradead.org/pipermail/linux-arm-kernel/2014-December/308270.html
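A typical fix for this class of warning is a forward declaration in the header ahead of any prototype that mentions the type, e.g. (a sketch, not necessarily the exact patch):
    struct pt_regs;                                /* forward declaration */

    /* hypothetical prototype that previously triggered the warning: */
    extern void arch_handle_irq(struct pt_regs *regs);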
Signed-off-by: Chunyan Zhang <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
Building arm64.allmodconfig leads to the following warning:
usb/gadget/function/f_ncm.c:203:0: warning: "NCAPS" redefined
#define NCAPS (USB_CDC_NCM_NCAP_ETH_FILTER | USB_CDC_NCM_NCAP_CRC_MODE)
^
In file included from /home/build/work/batch/arch/arm64/include/asm/io.h:32:0,
from /home/build/work/batch/include/linux/clocksource.h:19,
from /home/build/work/batch/include/clocksource/arm_arch_timer.h:19,
from /home/build/work/batch/arch/arm64/include/asm/arch_timer.h:27,
from /home/build/work/batch/arch/arm64/include/asm/timex.h:19,
from /home/build/work/batch/include/linux/timex.h:65,
from /home/build/work/batch/include/linux/sched.h:19,
from /home/build/work/batch/arch/arm64/include/asm/compat.h:25,
from /home/build/work/batch/arch/arm64/include/asm/stat.h:23,
from /home/build/work/batch/include/linux/stat.h:5,
from /home/build/work/batch/include/linux/module.h:10,
from /home/build/work/batch/drivers/usb/gadget/function/f_ncm.c:19:
arch/arm64/include/asm/cpufeature.h:27:0: note: this is the location of the previous definition
#define NCAPS 2
So add an ARM64 prefix to avoid this problem.
Reported-by: Olof's autobuilder <[email protected]>
Signed-off-by: Fabio Estevam <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
This is in preparation for clock providers to not have to deal with struct clk.
Signed-off-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Michael Turquette <[email protected]>
|
|
It is not valid to select CONFIG_PM directly without selecting
CONFIG_PM_SLEEP or CONFIG_PM_RUNTIME too, because that breaks
dependencies (ia64 does that) and it is not necessary to select
CONFIG_PM directly if CONFIG_PM_SLEEP or CONFIG_PM_RUNTIME is
selected, because CONFIG_PM will be set automatically in that
case (sh does that).
Fix those mistakes.
Acked-by: Geert Uytterhoeven <[email protected]>
Reviewed-by: Kevin Hilman <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
Instead of using the dev_ops ->stop|start() callbacks for genpd, let's
convert to use genpd's flag field and set it to GENPD_FLAG_PM_CLK.
Signed-off-by: Ulf Hansson <[email protected]>
Acked-by: Geert Uytterhoeven <[email protected]>
Tested-by: Geert Uytterhoeven <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
|