|
When connected to HDMI sources, some DVI monitors de-assert their HPD
signal and TMDS loads for one second every four seconds when there is
no signal present on the connection.
Unfortunately, this behaviour is indistinguishable from a proper HDMI
setup with an AV receiver in the path to the display: the HDMI spec
requires us to detect HPD deassertions as short as 100ms, which indicate
that the EDID has changed.
Since it is possible to connect a DVI monitor to an AV receiver and then
to an HDMI source, merely working around this by detecting the lack of
an HDMI vendor block in the EDID is insufficient - the AV receiver is at
liberty to modify the EDID as it sees fit, and it will place its own
parameters into the EDID including the HDMI vendor block.
DRM has support for forcing the state of a connector, which we should
implement to allow us to work around these broken DVI monitors - we can
tell DRM to force the connection state to indicate that there is always
a device connected to work around this problem. Although this requires
manual configuration, it is better than nothing at all.
When a forced connection state has been set, there is no point handling
our RXSENSE interrupts, so disable them in this circumstance.
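As a hedged illustration of the idea (not the exact patch), the detect
path can short-circuit on a forced connector state and fall back to the
normal HPD/RXSENSE path otherwise; the dw_hdmi helper names below are
assumptions:
/* Illustrative sketch only; helper names are assumed, not from the driver. */
static enum drm_connector_status
dw_hdmi_connector_detect(struct drm_connector *connector, bool force)
{
	struct dw_hdmi *hdmi = connector_to_dw_hdmi(connector);

	if (connector->force == DRM_FORCE_ON ||
	    connector->force == DRM_FORCE_ON_DIGITAL)
		return connector_status_connected;
	if (connector->force == DRM_FORCE_OFF)
		return connector_status_disconnected;

	/* Otherwise use the normal HPD/RXSENSE based detection. */
	return dw_hdmi_read_hpd(hdmi);
}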
Tested-by: Philipp Zabel <[email protected]>
Reviewed-by: Fabio Estevam <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
Add support for interlaced video modes to the dw_hdmi bridge. This
mainly involves halving the vertical parameters to be programmed into
the bridge registers, and setting the interlace_allowed connector flag.
This brings working 1080i support. However, 480i and 576i fail to
work due to the lack of proper pixel repetition support, which is not
trivial to add due to the tabular PLL parameterisation. Hence, we
filter out these modes in our mode_valid() method.
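A rough sketch of the two pieces described above, assuming the usual
drm_display_mode fields; the exact bridge register programming and the
exact mode_valid() filter condition may differ:
/* Halve the vertical timings so per-field values are programmed. */
unsigned int vdisplay = mode->vdisplay;
unsigned int vblank = mode->vtotal - mode->vdisplay;
unsigned int vsync_len = mode->vsync_end - mode->vsync_start;

if (mode->flags & DRM_MODE_FLAG_INTERLACE) {
	vdisplay /= 2;
	vblank /= 2;
	vsync_len /= 2;
}

/* In mode_valid(): no pixel repetition support yet, so reject 480i/576i. */
if ((mode->flags & DRM_MODE_FLAG_INTERLACE) && mode->vdisplay < 720)
	return MODE_NO_INTERLACE;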
Tested-by: Philipp Zabel <[email protected]>
Reviewed-by: Fabio Estevam <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
The support for interlaced video modes seems to be broken; we don't use
anything other than the vtotal/htotal from the timing information to
define the various sync counters.
Freescale patches for interlaced video support contain an alternative
sync counter setup, which we include here. This setup produces the
hsync and vsync via the normal counter 2 and 3, but moves the display
enable signal from counter 5 to counter 6. Therefore, we need to
change the display controller setup as well.
The corresponding Freescale patches for this change are:
iMX6-HDMI-support-interlaced-display-mode.patch
IPU-fine-tuning-the-interlace-display-timing-for-CEA.patch
This produces a working interlace format output from the IPU.
Tested-by: Philipp Zabel <[email protected]>
Reviewed-by: Fabio Estevam <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
Use a function to convert the sync pin to a bit mask for the DI_GENERAL
register, and move this out of the interlace/non-interlace path to the
common path.
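A hedged sketch of what such a helper might look like; the
DI_GEN_POLARITY_n bit names are assumptions matching the description
above:
/* Illustrative: map a sync pin number to its DI_GENERAL polarity bit. */
static u32 di_gen_polarity(int pin)
{
	switch (pin) {
	case 1: return DI_GEN_POLARITY_1;
	case 2: return DI_GEN_POLARITY_2;
	case 3: return DI_GEN_POLARITY_3;
	case 4: return DI_GEN_POLARITY_4;
	case 5: return DI_GEN_POLARITY_5;
	case 6: return DI_GEN_POLARITY_6;
	case 7: return DI_GEN_POLARITY_7;
	case 8: return DI_GEN_POLARITY_8;
	default: return 0;
	}
}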
Tested-by: Philipp Zabel <[email protected]>
Reviewed-by: Fabio Estevam <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
It's possible that the destination can be shadowed in userspace
(as, for example, the perf buffers are now). So we should take
care not to leak data that could be inspected by userspace.
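Conceptually, the fix masks the bytes beyond the terminating NUL before
the final word is stored, so stale source bytes never reach a
destination userspace can observe; a simplified sketch of the idea
(variable names follow the word-at-a-time helpers, the surrounding loop
is omitted):
/* 'c' is the last source word, 'data' the has_zero()/prep_zero_mask() result. */
unsigned long mask = zero_bytemask(data);

/* Bytes past the NUL are forced to zero instead of leaking old data. */
*(unsigned long *)(dst + res) = c & mask;
res += find_zero(data);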
Signed-off-by: Chris Metcalf <[email protected]>
|
|
Both alpha and tile needed implementations of zero_bytemask.
The alpha version is untested.
Signed-off-by: Chris Metcalf <[email protected]>
|
|
arch/tile added word-at-a-time.h after the patch that added generic-y
entries; the generic-y entry is now stale.
arch/h8300 is newer than the generic-y patch for word-at-a-time.h,
and needs a generic-y entry.
arch/powerpc seems to have gotten a generic-y entry by mistake in
the first patch; this change removes it.
Signed-off-by: Chris Metcalf <[email protected]>
|
|
BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
in_atomic(): 0, irqs_disabled(): 128, pid: 342, name: perf
1 lock held by perf/342:
#0: (break_hook_lock){+.+...}, at: [<ffffffc0000851ac>] call_break_hook+0x34/0xd0
irq event stamp: 62224
hardirqs last enabled at (62223): [<ffffffc00010b7bc>] __call_rcu.constprop.59+0x104/0x270
hardirqs last disabled at (62224): [<ffffffc0000fbe20>] vprintk_emit+0x68/0x640
softirqs last enabled at (0): [<ffffffc000097928>] copy_process.part.8+0x428/0x17f8
softirqs last disabled at (0): [< (null)>] (null)
CPU: 0 PID: 342 Comm: perf Not tainted 4.1.6-rt5 #4
Hardware name: linux,dummy-virt (DT)
Call trace:
[<ffffffc000089968>] dump_backtrace+0x0/0x128
[<ffffffc000089ab0>] show_stack+0x20/0x30
[<ffffffc0007030d0>] dump_stack+0x7c/0xa0
[<ffffffc0000c878c>] ___might_sleep+0x174/0x260
[<ffffffc000708ac8>] __rt_spin_lock+0x28/0x40
[<ffffffc000708db0>] rt_read_lock+0x60/0x80
[<ffffffc0000851a8>] call_break_hook+0x30/0xd0
[<ffffffc000085a70>] brk_handler+0x30/0x98
[<ffffffc000082248>] do_debug_exception+0x50/0xb8
Exception stack(0xffffffc00514fe30 to 0xffffffc00514ff50)
fe20: 00000000 00000000 c1594680 0000007f
fe40: ffffffff ffffffff 92063940 0000007f 0550dcd8 ffffffc0 00000000 00000000
fe60: 0514fe70 ffffffc0 000be1f8 ffffffc0 0514feb0 ffffffc0 0008948c ffffffc0
fe80: 00000004 00000000 0514fed0 ffffffc0 ffffffff ffffffff 9282a948 0000007f
fea0: 00000000 00000000 9282b708 0000007f c1592820 0000007f 00083914 ffffffc0
fec0: 00000000 00000000 00000010 00000000 00000064 00000000 00000001 00000000
fee0: 005101e0 00000000 c1594680 0000007f c1594740 0000007f ffffffd8 ffffff80
ff00: 00000000 00000000 00000000 00000000 c1594770 0000007f c1594770 0000007f
ff20: 00665e10 00000000 7f7f7f7f 7f7f7f7f 01010101 01010101 00000000 00000000
ff40: 928e4cc0 0000007f 91ff11e8 0000007f
call_break_hook() is called in atomic context (hard IRQs disabled), so
replace the sleepable rwlock with RCU: convert the relevant list
operations to their RCU variants and call synchronize_rcu() in
unregister_break_hook(). Also, replace the write lock with a spinlock in
{un}register_break_hook().
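A hedged sketch of the resulting scheme, assuming the arm64 break_hook
list and structure layout of that era:
static DEFINE_SPINLOCK(break_hook_lock);
static LIST_HEAD(break_hook);

void register_break_hook(struct break_hook *hook)
{
	spin_lock(&break_hook_lock);
	list_add_rcu(&hook->node, &break_hook);
	spin_unlock(&break_hook_lock);
}

void unregister_break_hook(struct break_hook *hook)
{
	spin_lock(&break_hook_lock);
	list_del_rcu(&hook->node);
	spin_unlock(&break_hook_lock);
	synchronize_rcu();
}

static int call_break_hook(struct pt_regs *regs, unsigned int esr)
{
	struct break_hook *hook;
	int (*fn)(struct pt_regs *regs, unsigned int esr) = NULL;

	rcu_read_lock();
	list_for_each_entry_rcu(hook, &break_hook, node)
		if ((esr & hook->esr_mask) == hook->esr_val)
			fn = hook->fn;
	rcu_read_unlock();

	return fn ? fn(regs, esr) : DBG_HOOK_ERROR;
}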
Signed-off-by: Yang Shi <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
When booting a kernel without an initrd, the kernel reports that it
moves -1 bytes worth, having gone through the motions with initrd_start
equal to initrd_end:
Moving initrd from [4080000000-407fffffff] to [9fff49000-9fff48fff]
Prevent this by bailing out early when the initrd size is zero (i.e. we
have no initrd), avoiding the confusing message and other associated
work.
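A hedged sketch of the guard added near the top of the initrd
relocation path (placement and variable names assumed):
/* Illustrative: nothing to move when no initrd was provided. */
if (initrd_start == initrd_end)
	return 0;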
Fixes: 1570f0d7ab425c1e ("arm64: support initrd outside kernel linear map")
Cc: Mark Salter <[email protected]>
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
These have not been tested very extensively and generally
seem to be problematic. Mark them as experimental for now.
bug:
https://bugs.freedesktop.org/show_bug.cgi?id=92270
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Cc: [email protected]
|
|
So the problem this patch is trying to address is as follows:
CPU0                                    CPU1

context_switch(A, B)
                                        ttwu(A)
                                          LOCK A->pi_lock
                                          A->on_cpu == 0
finish_task_switch(A)
  prev_state = A->state  <-.
  WMB                      |
  A->on_cpu = 0;           |
  UNLOCK rq0->lock         |
                           |    context_switch(C, A)
                           `--  A->state = TASK_DEAD
                                prev_state == TASK_DEAD
                                  put_task_struct(A)
context_switch(A, C)
finish_task_switch(A)
  A->state == TASK_DEAD
  put_task_struct(A)
The argument being that the WMB will allow the load of A->state on CPU0
to cross over and observe CPU1's store of A->state, which will then
result in a double-drop and use-after-free.
Now the comment states (and this was true once upon a long time ago)
that we need to observe A->state while holding rq->lock because that
will order us against the wakeup; however the wakeup will not in fact
acquire (that) rq->lock; it takes A->pi_lock these days.
We can obviously fix this by upgrading the WMB to an MB, but that is
expensive, so we'd rather avoid that.
The alternative this patch takes is: smp_store_release(&A->on_cpu, 0),
which avoids the MB on some archs, but not important ones like ARM.
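Conceptually the change pairs a release store on the switching-out CPU
with an acquire on the waker's spin on ->on_cpu; this is a simplified
sketch of that pairing, not the literal scheduler code:
/* finish_task_switch()/finish_lock_switch() side, simplified: */
smp_store_release(&prev->on_cpu, 0);	/* all prior accesses complete first */

/* try_to_wake_up() side, simplified: */
while (smp_load_acquire(&p->on_cpu))	/* pairs with the release above */
	cpu_relax();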
Reported-by: Oleg Nesterov <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Linus Torvalds <[email protected]>
Cc: <[email protected]> # v3.1+
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Fixes: e4a52bcb9a18 ("sched: Remove rq->lock from the first half of ttwu()")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Make sure we are not the root device before attempting to
read the pcie bridge registers to check the pcie gen speed.
Fixes a crash when the device is passed through to a VM.
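A hedged sketch of the guard; the exact helper used in the radeon code
may differ:
/* Illustrative: a device passed through to a VM sits on a root bus and
 * has no host PCIe bridge whose registers we could read. */
if (pci_is_root_bus(rdev->pdev->bus))
	return;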
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Cc: [email protected]
|
|
end_clone_bio() is an endio callback for the clone bio and should check
and save the clone's bi_error for error reporting. However,
4246a0b63bd8 ("block: add a bi_error field to struct bio") changed
the function to check the original bio's bi_error, which is 0.
Without this fix, the clone's error is ignored and reported to the
original request as success. Thus data corruption will be observed.
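A simplified sketch of the corrected lines in end_clone_bio(); the rest
of the callback is unchanged:
/* Read the completion status from the clone, not the original bio. */
int error = clone->bi_error;		/* was effectively always 0 before */

if (error) {
	tio->error = error;		/* propagate to the original request */
	return;
}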
Fixes: 4246a0b63bd8 ("block: add a bi_error field to struct bio")
Signed-off-by: Jun'ichi Nomura <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull xen bug fixes from David Vrabel:
- Fix VM save performance regression with x86 PV guests
- Make kexec work in x86 PVHVM guests (if Xen has the soft-reset ABI)
- Other minor fixes.
* tag 'for-linus-4.3b-rc4-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
x86/xen/p2m: hint at the last populated P2M entry
x86/xen: Do not clip xen_e820_map to xen_e820_map_entries when sanitizing map
x86/xen: Support kexec/kdump in HVM guests by doing a soft reset
xen/x86: Don't try to write syscall-related MSRs for PV guests
xen: use correct type for HYPERVISOR_memory_op()
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 fixes from Martin Schwidefsky:
"Three bug fixes and an update to the default configuration"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390/defconfig: set SCSI_DH=y
s390/vtime: correct scaled cputime of partially idle CPUs
s390/boot/decompression: disable floating point in decompressor
s390/numa: use correct type for node_to_cpumask_map
|
|
The "fh_len" passed to ->fh_to_* is not guaranteed to be the same as
that returned by encode_fh - it may be larger.
With NFSv2, the filehandle is fixed length, so it may appear longer
than expected and be zero-padded.
So we must test that fh_len is at least some value, not exactly equal
to it.
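Conceptually the equality test becomes a lower bound; a hedged sketch
for the btrfs case (the constant name is an assumption):
/* NFSv2 may hand back a zero-padded handle longer than what we encoded. */
if (fh_len < BTRFS_FID_SIZE_NON_CONNECTABLE)	/* was: fh_len != ... */
	return NULL;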
Signed-off-by: NeilBrown <[email protected]>
Acked-by: David Sterba <[email protected]>
|
|
After reading the root node of the chunk tree or the tree root tree from
disk, if the root node does not have the EXTENT_BUFFER_UPTODATE flag set,
we fail to release the memory used by the root node. Fix this.
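A hedged sketch of the error path, simplified from the chunk-root
handling in open_ctree(); the tree-root case is analogous:
if (!chunk_root->node ||
    !test_bit(EXTENT_BUFFER_UPTODATE, &chunk_root->node->bflags)) {
	printk(KERN_ERR "BTRFS: failed to read chunk root\n");
	free_extent_buffer(chunk_root->node);	/* was leaked before the fix */
	chunk_root->node = NULL;
	goto fail_tree_roots;
}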
Signed-off-by: Chandan Rajendra <[email protected]>
|
|
Pull CIFS fixes from Steve French:
"Two fixes for problems pointed out by automated tools.
Thanks PaX/grsecurity team and Dan Carpenter (and the Smatch tool)"
* 'for-next' of git://git.samba.org/sfrench/cifs-2.6:
[CIFS] Update cifs version number
[SMB3] Do not fall back to SMBWriteX in set_file_size error cases
[SMB3] Missing null tcon check
|
|
With commit 633d6f17cd91ad5bf2370265946f716e42d388c6 (x86/xen: prepare
p2m list for memory hotplug) the P2M may be sized to accommodate a much
larger amount of memory than the domain currently has.
When saving a domain, the toolstack must scan all the P2M looking for
populated pages. This results in a performance regression due to the
unnecessary scanning.
Instead of reporting (via shared_info) the maximum possible size of
the P2M, hint at the last PFN which might be populated. This hint is
increased as new leaves are added to the P2M (in the expectation that
they will be used for populated entries).
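A hedged sketch of how such a hint might be maintained when a new leaf
is attached; the variable and field names here are assumptions:
/* Illustrative: advance the populated-PFN hint as P2M leaves are added. */
if (pfn >= xen_p2m_last_pfn) {
	xen_p2m_last_pfn = ALIGN(pfn + 1, P2M_PER_PAGE);
	HYPERVISOR_shared_info->arch.max_pfn = xen_p2m_last_pfn;
}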
Signed-off-by: David Vrabel <[email protected]>
Cc: <[email protected]> # 4.0+
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas into fixes
Merge "Renesas ARM Based SoC Fixes for v4.3" from Simon Horman
* Add CPG/MSTP Clock Domain for sound on r8a779[01] SoCs.
This allows sound to work once again.
* tag 'renesas-fixes-for-v4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas:
ARM: shmobile: r8a7791 dtsi: Add CPG/MSTP Clock Domain for sound
ARM: shmobile: r8a7790 dtsi: Add CPG/MSTP Clock Domain for sound
|
|
https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux into fixes
Merge "Allwinner fixes for 4.3" from Maxime Ripard:
Two patches, one that fixes one of the DT builds, and the other raising
the voltage of the lowest OPP of the A20 to remain within the SoC
operating boundaries
* tag 'sunxi-fixes-for-4.3' of https://git.kernel.org/pub/scm/linux/kernel/git/mripard/linux:
ARM: dts: Fix Makefile target for sun4i-a10-itead-iteaduino-plus
ARM: dts: sunxi: Raise minimum CPU voltage for sun7i-a20 to meet SoC specifications
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung into fixes
Merge "Samsung fixes for v4.3" from Kukjin Kim:
- fix invalid clock used for FIMD IOMMU
- fix thermal boot issue on smdk5250
- fix S2R on exynos4412 trats2 boards
- fix LEDs on exynos5422-odroidxu3-common
- fix booting of all 8 cores on exynos542x
* tag 'samsung-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung:
ARM: dts: Fix wrong clock binding for sysmmu_fimd1_1 on exynos5420
ARM: dts: Fix bootup thermal issue on smdk5250
ARM: dts: add suspend opp to exynos4412
ARM: dts: Fix LEDs on exynos5422-odroidxu3
ARM: EXYNOS: reset Little cores when cpu is up
|
|
Whilst discussing possible ways to trigger an invalidate_range on a
userptr with an aliased GGTT mmapping (and so cause a struct_mutex
deadlock), the conclusion is that we can, and we must, prevent any
possible deadlock by avoiding taking the mutex at all during
invalidate_range. This has numerous advantages, all of which stem from
avoiding the sleeping function inside the unknown context. In
particular, it simplifies the invalidate_range because we no longer
have to juggle the spinlock/mutex and can just hold the spinlock
for the entire walk. To compensate, we have to make get_pages a bit more
complicated in order to serialise with a pending cancel_userptr worker.
As we hold the struct_mutex, we have no choice but to return EAGAIN and
hope that the worker is then flushed before we retry after reacquiring
the struct_mutex.
The important caveat is that the invalidate_range itself is no longer
synchronous. There exists a small but definite period in time in which
the old PTE's pages remain accessible via the GPU. Note however that the
physical pages themselves are not invalidated by the mmu_notifier, just
the CPU view of the address space. The impact should be limited to a
delay in pages being flushed, rather than a possibility of writing to
the wrong pages. The only race condition that this worsens is remapping
a userptr active on the GPU where fresh work may still reference the
old pages due to struct_mutex contention. Given that userspace is racing
with the GPU, it is fair to say that the results are undefined.
v2: Only queue (and importantly only take one refcnt) the worker once.
Signed-off-by: Chris Wilson <[email protected]>
Cc: Michał Winiarski <[email protected]>
Cc: Tvrtko Ursulin <[email protected]>
Reviewed-by: Tvrtko Ursulin <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
|
|
Michał Winiarski found a really evil way to trigger a struct_mutex
deadlock with userptr. He found that if he allocated a userptr bo and
then GTT mmaped another bo, or even itself, at the same address as the
userptr using MAP_FIXED, he could then cause a deadlock any time we then
had to invalidate the GTT mmappings (so at will). Tvrtko then found by
repeatedly allocating GTT mmappings he could alias with an old userptr
mmap and also trigger the deadlock.
To counteract the deadlock, we make the observation that we only need
to take the struct_mutex if the object has any pages to revoke, and that
before userspace can alias with the userptr address space, it must have
invalidated the userptr->pages. Thus if we can check for those pages
outside of the struct_mutex, we can avoid the deadlock. To do so we
introduce a separate flag for userptr objects that we can inspect from
the mmu-notifier underneath its spinlock.
The patch makes one eye-catching change. That is the removal of serial=0
after detecting a to-be-freed object inside the invalidate walker. I
felt setting serial=0 was a questionable pessimisation: it denies us the
chance to reuse the current iterator for the next loop (before it is
freed) and being explicit makes the reader question the validity of the
locking (since the object-free race could occur elsewhere). The
serialisation of the iterator is through the spinlock, if the object is
freed before the next loop then the notifier.serial will be incremented
and we start the walk from the beginning as we detect the invalid cache.
To try and tame the error paths and interactions with the userptr->active
flag, we have to do a fair amount of rearranging of get_pages_userptr().
v2: Grammar fixes
v3: Reorder set-active so that it is only set when obj->pages is set
(and so needs cancellation). Only the order of setting obj->pages and
the active-flag is crucial. Calling gup after invalidate-range begin
means the userptr sees the new set of backing storage (and so will not
need to invalidate its new pages), but we have to be careful not to set
the active-flag prior to successfully establishing obj->pages.
v4: Take the active->flag early so we know in the mmu-notifier when we
have to cancel a pending gup-worker.
v5: Rearrange the error path so that is not so convoluted
v6: Set pinned to 0 when negative before calling release_pages()
Reported-by: Michał Winiarski <[email protected]>
Testcase: igt/gem_userptr_blits/map-fixed*
Signed-off-by: Chris Wilson <[email protected]>
Cc: Michał Winiarski <[email protected]>
Cc: Tvrtko Ursulin <[email protected]>
Cc: [email protected]
Reviewed-by: Tvrtko Ursulin <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
|
|
The userptr worker allows for a slight race condition whereupon there
may be two or more threads calling get_user_pages for the same object. When
we have the array of pages, then we serialise the update of the object.
However, the worker should only overwrite the obj->userptr.work pointer
if and only if it is the active one. Currently we clear it for a
secondary worker with the effect that we may rarely force a second
lookup.
v2: Rebase and rename a variable to avoid 80cols
v3: Mention v2
Signed-off-by: Chris Wilson <[email protected]>
Reviewed-by: Tvrtko Ursulin <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
|
|
We tried to fix this in commit fdc454c1484a ("drm/i915: Prevent out of
range pt in gen6_for_each_pde").
But the static analyzer still complains that, just before we break due
to "iter < I915_PDES", we do "pt = (pd)->page_table[iter]" with an
iter value that is bigger than I915_PDES. Of course, this isn't really
a problem since no one uses pt outside the macro. Still, every single
new usage of the macro will create a new issue for us to mark as a
false positive.
Also, Paulo re-started the discussion a while ago [1], but it didn't end
up being implemented.
In order to "solve" this "problem", this patch takes the ideas from
Chris and Dave, but that check would change the desired behavior of the
code, because the object (for example pdp->page_directory[iter]) can be
null during init/alloc, and C would take this as false, breaking the for
loop immediately.
This has already been verified with "static analysis tools".
[1]http://lists.freedesktop.org/archives/intel-gfx/2015-June/068548.html
v2: Make it a single statement, while preventing the common subexpression
elimination (Chris)
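A hedged, simplified sketch of the resulting macro shape: the bounds
test and the assignment are folded into one ternary/comma expression so
the dereference only happens while iter is in range, and keeping it a
single statement prevents the compiler from hoisting the load (the real
macro also advances start/length on each pass):
/* Illustrative shape only; the merged macro carries extra advance logic. */
#define gen6_for_each_pde(pt, pd, start, length, iter)			\
	for (iter = gen6_pde_index(start);				\
	     length > 0 && iter < I915_PDES ?				\
			(pt = (pd)->page_table[iter], true) : false;	\
	     iter++)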
Cc: Paulo Zanoni <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Dave Gordon <[email protected]>
Signed-off-by: Michel Thierry <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
|
|
Prevent leaking VMAs and PPGTT VMs when objects are imported
via flink.
Scenario is that any VMAs created by the importer will be left
dangling after the importer exits, or destroys the PPGTT context
with which they are associated.
This is caused by object destruction not running when the
importer closes the buffer object handle due to the reference held
by the exporter. This also leaks the VM since the VMA has a
reference on it.
In practice these leaks can be observed by stopping and starting
the X server on a kernel with fbcon compiled in. Every time
X server exits another VMA will be leaked against the fbcon's
frame buffer object.
Also on systems where flink buffer sharing is used extensively,
like Android, this leak has even more serious consequences.
This version takes a general approach from the earlier work
by Rafael Barbalho (drm/i915: Clean-up PPGTT on context
destruction) and tries to incorporate the subsequent discussion
between Chris Wilson and Daniel Vetter.
v2:
Removed immediate cleanup on object retire - it was causing a
recursive VMA unbind via i915_gem_object_wait_rendering. And
it is in fact not even needed since by definition context
cleanup worker runs only after the last context reference has
been dropped, hence all VMAs against the VM belonging to the
context are already on the inactive list.
v3:
Previous version could deadlock since VMA unbind waits on any
rendering on an object to complete. Objects can be busy in a
different VM which would mean that the cleanup loop would do
the wait with the struct mutex held.
This is an even simpler approach where we just unbind VMAs
without waiting since we know all VMAs belonging to this VM
are idle, and there is nothing in flight, at the point
context destructor runs.
v4:
Double underscore prefix for __i915_vma_unbind_no_wait and a
commit message typo fix. (Michel Thierry)
Note that this is just a partial/interim fix since we have a bit of a
fundamental issue with cleaning up, e.g.
https://bugs.freedesktop.org/show_bug.cgi?id=87729
Signed-off-by: Tvrtko Ursulin <[email protected]>
Testcase: igt/gem_ppgtt.c/flink-and-exit-vma-leak
Reviewed-by: Michel Thierry <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Rafael Barbalho <[email protected]>
Cc: Michel Thierry <[email protected]>
[danvet: Add a note that this isn't everything.]
Signed-off-by: Daniel Vetter <[email protected]>
|
|
Neither Liam nor I am especially interested in this driver any more,
and the devices are already covered by the general ex-Wolfson entry, so
just remove this.
Signed-off-by: Mark Brown <[email protected]>
Acked-by: Liam Girdwood <[email protected]>
|
|
All architectures must now define ioremap_uc(), but MIPS currently
only has ioremap_nocache().
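A hedged sketch of the likely one-line addition to
arch/mips/include/asm/io.h, aliasing the new API onto the existing
uncached mapping:
/* MIPS has no separate strongly-uncached variant, so reuse ioremap_nocache. */
#define ioremap_uc ioremap_nocache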
Fixes: 4c73e8926623 ("arch/*/io.h: Add ioremap_uc() to all architectures")
Signed-off-by: Ben Hutchings <[email protected]>
Cc: Luis R. Rodriguez <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/11263/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
|
This continues the pattern started in commit cc1ef118fc09 ("drm/irq:
Make pipe unsigned and name consistent"). This is applied to the public
APIs and driver callbacks, so pretty much all drivers need to be updated
to match the new prototypes.
Cc: Christian König <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: Russell King <[email protected]>
Cc: Inki Dae <[email protected]>
Cc: Jianwei Wang <[email protected]>
Cc: Alison Wang <[email protected]>
Cc: Patrik Jakobsson <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: Jani Nikula <[email protected]>
Cc: Philipp Zabel <[email protected]>
Cc: David Airlie <[email protected]>
Cc: Rob Clark <[email protected]>
Cc: Ben Skeggs <[email protected]>
Cc: Tomi Valkeinen <[email protected]>
Cc: Laurent Pinchart <[email protected]>
Cc: Mark Yao <[email protected]>
Cc: Benjamin Gaignard <[email protected]>
Cc: Vincent Abriou <[email protected]>
Cc: Thomas Hellstrom <[email protected]>
Signed-off-by: Thierry Reding <[email protected]>
Reviewed-by: Ville Syrjälä <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
|
|
The minimum volume level for the TAS2552 (control register value 0x00)
is -7dB; however, the driver declares it as -0.07dB.
Running amixer before the patch reports:
dBscale-min=-0.07dB,step=1.00dB,mute=0
Running amixer with the patch applied reports:
dBscale-min=-7.00dB,step=1.00dB,mute=0
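ASoC TLV scales are expressed in 0.01 dB units, so -7 means -0.07 dB
while -700 means -7 dB; a hedged sketch of the corrected declaration
(the control name here is an assumption):
/* -700 centi-dB minimum (-7 dB), 1 dB per step, no mute at minimum. */
static const DECLARE_TLV_DB_SCALE(dac_tlv, -700, 100, 0);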
Signed-off-by: Andreas Dannenberg <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
Cc: [email protected]
|
|
Signed-off-by: Fengguang Wu <[email protected]>
Reviewed-by: Jani Nikula <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
|
|
The link training functions had confusing names. The start function
actually does the clock recovery phase of the link training, and the
complete function does the channel equalization. So call them that
instead. Also, every call to intel_dp_start_link_train() was followed
by a call to intel_dp_complete_link_train(), so add a new start
function that calls clock_recovery and channel_equalization.
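A hedged sketch of the resulting wrapper, using the names from the
description above; the two phases are the previously existing functions:
void intel_dp_start_link_train(struct intel_dp *intel_dp)
{
	intel_dp_link_training_clock_recovery(intel_dp);
	intel_dp_link_training_channel_equalization(intel_dp);
}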
Signed-off-by: Ander Conselvan de Oliveira <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
|
|
This patch adds a separate probe function for HDMI
EDID read over DDC channel. This function has been
registered as a .hot_plug handler for HDMI encoder.
The current implementation of the hdmi_detect() function re-sets the
cached HDMI EDID (in connector->detect_edid) on every detect call. This
function gets called many times, sometimes directly from userspace
probes, forcing drivers to read the EDID on every detect call. This
causes several problems, such as:
1. Race conditions between interrupt bottom halves and userspace
detections in multiple hotplug/unplug cases.
2. Many unnecessary EDID reads for a single hotplug/unplug.
3. HDMI compliance failures, which expect only one EDID read per hotplug.
This function serves the purpose of actually reading the EDID by really
probing the DDC channel, and updating the cached EDID.
The plan is to:
1. i915 IRQ handler bottom half function already calls
intel_encoder->hotplug() function. Add this probe function, which
will read the EDID only in case of a hotplug / unplug.
2. During init_connector this probe will be called to read the edid
3. Reuse the cached EDID in hdmi_detect() function.
The "< gen7" check is there because this was tested only for >=gen7
platforms. For older platforms the hotplug/EDID-reading path remains the same.
v2: Calling set_edid instead of hdmi_probe during init.
Also, for platforms having DDI, intel_encoder for DP and HDMI is same
(taken from intel_dig_port), so for DP also, hot_plug function gets called
which is not intended here. So, check for HDMI in intel_hdmi_probe
Rely on HPD for updating edid only for platforms gen > 8 and also for VLV.
v3: Dropping the gen < 8 || !VLV check. Now all platforms should rely on
hotplug or init for updating the edid.(Daniel)
Also, calling hdmi_probe in init instead of set_edid
v4: Renaming intel_hdmi_probe to intel_hdmi_hot_plug.
Also calling this hotplug handler from intel_hpd_init to take care of init
resume scenarios.
v5: Moved the call to encoder hotplug during init to separate patch(Daniel)
Signed-off-by: Shashank Sharma <[email protected]>
Signed-off-by: Sonika Jindal <[email protected]>
[danvet: Mark intel_hdmi_hot_plug as static.]
Signed-off-by: Daniel Vetter <[email protected]>
|
|
This is required to support glDispatchComputeIndirect for gen7.
Signed-off-by: Jordan Justen <[email protected]>
Reviewed-by: Kristian Høgsberg <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
|
|
When using RC6 timeout mode, the timeout value
should be written to GEN6_RC6_THRESHOLD.
v2: Updated commit message. (Tom)
v3: Rebase over whitespace differences. (Daniel)
Cc: Tom O'Rourke <Tom.O'[email protected]>
Signed-off-by: Sagar Arun Kamble <[email protected]>
Reviewed-by: Tom O'Rourke <Tom.O'[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
|
|
Add a host2guc interface to notify the GuC of power state changes when
entering or resuming from a power-saving state.
v3: Move intel_guc_suspend to i915_drm_suspend for consistency.
v2: Add GuC suspend/resume to runtime suspend/resume too
v1: Change to a more flexible way of filling the host-to-GuC scratch
data in order to remove hard coding.
Signed-off-by: Alex Dai <[email protected]>
Reviewed-by: Sagar Arun Kamble <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
|
|
The BIOS can leave the CHV display PHY in some odd state where
some of the LDOs/lanes won't power down fully when unused. This
will trigger a host of asserts that were added in:
30142273a3e83936fd7b45aa5339311a9295ca51 drm/i915: Add CHV PHY LDO power sanity checks
6669e39f95b5530ca8cb9137703ceb5e83e5d648 drm/i915: Add some CHV DPIO lane power state asserts
To avoid that, skip the asserts until the PHY power well has been
disabled at least once. That will fully reset the PHY, and once
brought back up, the dynamic power down features will work correctly.
Signed-off-by: Ville Syrjälä <[email protected]>
Reviewed-by: Deepak S<[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
|
|
The docs are unclear as usual, so it's not clear whether LRC should be
bypassed, performed normally or GRC code should be used as the LRC code.
Some old docs stated that LRC bypass ought to be used, more recent ones
no longer say that. Some docs indicated that we could use GRC as the LRC
code on CHV, but the BIOS doesn't do that, so let's not do it either.
Besides, to enable LRC bypass properly, I believe we should set the bit
already before deasserting cmnreset.
Signed-off-by: Ville Syrjälä <[email protected]>
Reviewed-by: Deepak S<[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
|
|
For all the encoders, call the hot_plug handler if it is registered.
This is required for connected-boot and resume cases to generate a
fake hpd, resulting in the EDID being read.
Also remove the initial sdvo hot_plug call so that it is called just
once, from this loop.
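A hedged sketch of the loop, assuming i915's usual encoder iterator:
/* Let every encoder that registered a handler fake an initial hotplug. */
for_each_intel_encoder(dev, encoder)
	if (encoder->hot_plug)
		encoder->hot_plug(encoder);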
Signed-off-by: Sonika Jindal <[email protected]>
Signed-off-by: Daniel Vetter <[email protected]>
|
|
Josef ran into a deadlock while a transaction handle was finalizing the
creation of its block groups, which produced the following trace:
[260445.593112] fio D ffff88022a9df468 0 8924 4518 0x00000084
[260445.593119] ffff88022a9df468 ffffffff81c134c0 ffff880429693c00 ffff88022a9df488
[260445.593126] ffff88022a9e0000 ffff8803490d7b00 ffff8803490d7b18 ffff88022a9df4b0
[260445.593132] ffff8803490d7af8 ffff88022a9df488 ffffffff8175a437 ffff8803490d7b00
[260445.593137] Call Trace:
[260445.593145] [<ffffffff8175a437>] schedule+0x37/0x80
[260445.593189] [<ffffffffa0850f37>] btrfs_tree_lock+0xa7/0x1f0 [btrfs]
[260445.593197] [<ffffffff810db7c0>] ? prepare_to_wait_event+0xf0/0xf0
[260445.593225] [<ffffffffa07eac44>] btrfs_lock_root_node+0x34/0x50 [btrfs]
[260445.593253] [<ffffffffa07eff6b>] btrfs_search_slot+0x88b/0xa00 [btrfs]
[260445.593295] [<ffffffffa08389df>] ? free_extent_buffer+0x4f/0x90 [btrfs]
[260445.593324] [<ffffffffa07f1a06>] btrfs_insert_empty_items+0x66/0xc0 [btrfs]
[260445.593351] [<ffffffffa07ea94a>] ? btrfs_alloc_path+0x1a/0x20 [btrfs]
[260445.593394] [<ffffffffa08403b9>] btrfs_finish_chunk_alloc+0x1c9/0x570 [btrfs]
[260445.593427] [<ffffffffa08002ab>] btrfs_create_pending_block_groups+0x11b/0x200 [btrfs]
[260445.593459] [<ffffffffa0800964>] do_chunk_alloc+0x2a4/0x2e0 [btrfs]
[260445.593491] [<ffffffffa0803815>] find_free_extent+0xa55/0xd90 [btrfs]
[260445.593524] [<ffffffffa0803c22>] btrfs_reserve_extent+0xd2/0x220 [btrfs]
[260445.593532] [<ffffffff8119fe5d>] ? account_page_dirtied+0xdd/0x170
[260445.593564] [<ffffffffa0803e78>] btrfs_alloc_tree_block+0x108/0x4a0 [btrfs]
[260445.593597] [<ffffffffa080c9de>] ? btree_set_page_dirty+0xe/0x10 [btrfs]
[260445.593626] [<ffffffffa07eb5cd>] __btrfs_cow_block+0x12d/0x5b0 [btrfs]
[260445.593654] [<ffffffffa07ebbff>] btrfs_cow_block+0x11f/0x1c0 [btrfs]
[260445.593682] [<ffffffffa07ef8c7>] btrfs_search_slot+0x1e7/0xa00 [btrfs]
[260445.593724] [<ffffffffa08389df>] ? free_extent_buffer+0x4f/0x90 [btrfs]
[260445.593752] [<ffffffffa07f1a06>] btrfs_insert_empty_items+0x66/0xc0 [btrfs]
[260445.593830] [<ffffffffa07ea94a>] ? btrfs_alloc_path+0x1a/0x20 [btrfs]
[260445.593905] [<ffffffffa08403b9>] btrfs_finish_chunk_alloc+0x1c9/0x570 [btrfs]
[260445.593946] [<ffffffffa08002ab>] btrfs_create_pending_block_groups+0x11b/0x200 [btrfs]
[260445.593990] [<ffffffffa0815798>] btrfs_commit_transaction+0xa8/0xb40 [btrfs]
[260445.594042] [<ffffffffa085abcd>] ? btrfs_log_dentry_safe+0x6d/0x80 [btrfs]
[260445.594089] [<ffffffffa082bc84>] btrfs_sync_file+0x294/0x350 [btrfs]
[260445.594115] [<ffffffff8123e29b>] vfs_fsync_range+0x3b/0xa0
[260445.594133] [<ffffffff81023891>] ? syscall_trace_enter_phase1+0x131/0x180
[260445.594149] [<ffffffff8123e35d>] do_fsync+0x3d/0x70
[260445.594169] [<ffffffff81023bb8>] ? syscall_trace_leave+0xb8/0x110
[260445.594187] [<ffffffff8123e600>] SyS_fsync+0x10/0x20
[260445.594204] [<ffffffff8175de6e>] entry_SYSCALL_64_fastpath+0x12/0x71
This happened because the same transaction handle created a large number
of block groups and while finalizing their creation (inserting new items
and updating existing items in the chunk and device trees) a new metadata
extent had to be allocated and no free space was found in the current
metadata block groups, which made find_free_extent() attempt to allocate
a new block group via do_chunk_alloc(). However at do_chunk_alloc() we
ended up allocating a new system chunk too and exceeded the threshold
of 2Mb of reserved chunk bytes, which makes do_chunk_alloc() enter the
final part of block group creation again (at
btrfs_create_pending_block_groups()) and attempt to lock again the root
of the chunk tree when it's already write locked by the same task.
Similarly we can deadlock on extent tree nodes/leafs if while we are
running delayed references we end up creating a new metadata block group
in order to allocate a new node/leaf for the extent tree (as part of
a CoW operation or growing the tree), as btrfs_create_pending_block_groups
inserts items into the extent tree as well. In this case we get the
following trace:
[14242.773581] fio D ffff880428ca3418 0 3615 3100 0x00000084
[14242.773588] ffff880428ca3418 ffff88042d66b000 ffff88042a03c800 ffff880428ca3438
[14242.773594] ffff880428ca4000 ffff8803e4b20190 ffff8803e4b201a8 ffff880428ca3460
[14242.773600] ffff8803e4b20188 ffff880428ca3438 ffffffff8175a437 ffff8803e4b20190
[14242.773606] Call Trace:
[14242.773613] [<ffffffff8175a437>] schedule+0x37/0x80
[14242.773656] [<ffffffffa057ff07>] btrfs_tree_lock+0xa7/0x1f0 [btrfs]
[14242.773664] [<ffffffff810db7c0>] ? prepare_to_wait_event+0xf0/0xf0
[14242.773692] [<ffffffffa0519c44>] btrfs_lock_root_node+0x34/0x50 [btrfs]
[14242.773720] [<ffffffffa051ef6b>] btrfs_search_slot+0x88b/0xa00 [btrfs]
[14242.773750] [<ffffffffa0520a06>] btrfs_insert_empty_items+0x66/0xc0 [btrfs]
[14242.773758] [<ffffffff811ef4a2>] ? kmem_cache_alloc+0x1d2/0x200
[14242.773786] [<ffffffffa0520ad1>] btrfs_insert_item+0x71/0xf0 [btrfs]
[14242.773818] [<ffffffffa052f292>] btrfs_create_pending_block_groups+0x102/0x200 [btrfs]
[14242.773850] [<ffffffffa052f96e>] do_chunk_alloc+0x2ae/0x2f0 [btrfs]
[14242.773934] [<ffffffffa0532825>] find_free_extent+0xa55/0xd90 [btrfs]
[14242.773998] [<ffffffffa0532c22>] btrfs_reserve_extent+0xc2/0x1d0 [btrfs]
[14242.774041] [<ffffffffa0532e38>] btrfs_alloc_tree_block+0x108/0x4a0 [btrfs]
[14242.774078] [<ffffffffa051a5cd>] __btrfs_cow_block+0x12d/0x5b0 [btrfs]
[14242.774118] [<ffffffffa051abff>] btrfs_cow_block+0x11f/0x1c0 [btrfs]
[14242.774155] [<ffffffffa051e8c7>] btrfs_search_slot+0x1e7/0xa00 [btrfs]
[14242.774194] [<ffffffffa0528021>] ? __btrfs_free_extent.isra.70+0x2e1/0xcb0 [btrfs]
[14242.774235] [<ffffffffa0520a06>] btrfs_insert_empty_items+0x66/0xc0 [btrfs]
[14242.774274] [<ffffffffa051994a>] ? btrfs_alloc_path+0x1a/0x20 [btrfs]
[14242.774318] [<ffffffffa052c433>] __btrfs_run_delayed_refs+0xbb3/0x1020 [btrfs]
[14242.774358] [<ffffffffa052f404>] btrfs_run_delayed_refs.part.78+0x74/0x280 [btrfs]
[14242.774391] [<ffffffffa052f627>] btrfs_run_delayed_refs+0x17/0x20 [btrfs]
[14242.774432] [<ffffffffa05be236>] commit_cowonly_roots+0x8d/0x2bd [btrfs]
[14242.774474] [<ffffffffa059d07f>] ? __btrfs_run_delayed_items+0x1cf/0x210 [btrfs]
[14242.774516] [<ffffffffa05adac3>] ? btrfs_qgroup_account_extents+0x83/0x130 [btrfs]
[14242.774558] [<ffffffffa0544c40>] btrfs_commit_transaction+0x590/0xb40 [btrfs]
[14242.774599] [<ffffffffa0589b9d>] ? btrfs_log_dentry_safe+0x6d/0x80 [btrfs]
[14242.774642] [<ffffffffa055ac54>] btrfs_sync_file+0x294/0x350 [btrfs]
[14242.774650] [<ffffffff8123e29b>] vfs_fsync_range+0x3b/0xa0
[14242.774657] [<ffffffff81023891>] ? syscall_trace_enter_phase1+0x131/0x180
[14242.774663] [<ffffffff8123e35d>] do_fsync+0x3d/0x70
[14242.774669] [<ffffffff81023bb8>] ? syscall_trace_leave+0xb8/0x110
[14242.774675] [<ffffffff8123e600>] SyS_fsync+0x10/0x20
[14242.774681] [<ffffffff8175de6e>] entry_SYSCALL_64_fastpath+0x12/0x71
Fix this by never recursing into the finalization phase of block group
creation and making sure we never trigger the finalization of block group
creation while running delayed references.
Reported-by: Josef Bacik <[email protected]>
Fixes: 00d80e342c0f ("Btrfs: fix quick exhaustion of the system array in the superblock")
Signed-off-by: Filipe Manana <[email protected]>
|
|
My previous fix in commit 005efedf2c7d ("Btrfs: fix read corruption of
compressed and shared extents") was effective only if the compressed
extents cover a file range with a length that is not a multiple of 16
pages. That's because the detection of when we reached a different range
of the file that shares the same compressed extent as the previously
processed range was done at extent_io.c:__do_contiguous_readpages(),
which covers subranges with a length up to 16 pages, because
extent_readpages() groups the pages in clusters no larger than 16 pages.
So fix this by tracking the start of the previously processed file
range's extent map at extent_readpages().
The following test case for fstests reproduces the issue:
seq=`basename $0`
seqres=$RESULT_DIR/$seq
echo "QA output created by $seq"
tmp=/tmp/$$
status=1 # failure is the default!
trap "_cleanup; exit \$status" 0 1 2 3 15
_cleanup()
{
rm -f $tmp.*
}
# get standard environment, filters and checks
. ./common/rc
. ./common/filter
# real QA test starts here
_need_to_be_root
_supported_fs btrfs
_supported_os Linux
_require_scratch
_require_cloner
rm -f $seqres.full
test_clone_and_read_compressed_extent()
{
local mount_opts=$1
_scratch_mkfs >>$seqres.full 2>&1
_scratch_mount $mount_opts
# Create our test file with a single extent of 64Kb that is going to
# be compressed no matter which compression algo is used (zlib/lzo).
$XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 64K" \
$SCRATCH_MNT/foo | _filter_xfs_io
# Now clone the compressed extent into an adjacent file offset.
$CLONER_PROG -s 0 -d $((64 * 1024)) -l $((64 * 1024)) \
$SCRATCH_MNT/foo $SCRATCH_MNT/foo
echo "File digest before unmount:"
md5sum $SCRATCH_MNT/foo | _filter_scratch
# Remount the fs or clear the page cache to trigger the bug in
# btrfs. Because the extent has an uncompressed length that is a
# multiple of 16 pages, all the pages belonging to the second range
# of the file (64K to 128K), which points to the same extent as the
# first range (0K to 64K), had their contents full of zeroes instead
# of the byte 0xaa. This was a bug exclusively in the read path of
# compressed extents, the correct data was stored on disk, btrfs
# just failed to fill in the pages correctly.
_scratch_remount
echo "File digest after remount:"
# Must match the digest we got before.
md5sum $SCRATCH_MNT/foo | _filter_scratch
}
echo -e "\nTesting with zlib compression..."
test_clone_and_read_compressed_extent "-o compress=zlib"
_scratch_unmount
echo -e "\nTesting with lzo compression..."
test_clone_and_read_compressed_extent "-o compress=lzo"
status=0
exit
Cc: [email protected]
Signed-off-by: Filipe Manana <[email protected]>
Tested-by: Timofey Titovets <[email protected]>
|
|
When the inode given to did_overwrite_ref() matches the current progress
and has a reference that collides with the reference of other inode that
has the same number as the current progress, we were always telling our
caller that the inode's reference was overwritten, which is incorrect
because the other inode might be a new inode (different generation number)
in which case we must return false from did_overwrite_ref() so that its
callers don't use an orphanized path for the inode (as it will never be
orphanized, instead it will be unlinked and the new inode created later).
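A hedged sketch of the extra check in did_overwrite_ref(); the variable
names are assumptions based on the description above:
/* A colliding inode with a different generation is a brand new inode,
 * not an overwrite of our reference, so report "not overwritten". */
if (ow_inode == key->objectid && ow_gen != gen) {
	ret = 0;
	goto out;
}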
The following test case for fstests reproduces the issue:
seq=`basename $0`
seqres=$RESULT_DIR/$seq
echo "QA output created by $seq"
tmp=/tmp/$$
status=1 # failure is the default!
trap "_cleanup; exit \$status" 0 1 2 3 15
_cleanup()
{
rm -fr $send_files_dir
rm -f $tmp.*
}
# get standard environment, filters and checks
. ./common/rc
. ./common/filter
# real QA test starts here
_supported_fs btrfs
_supported_os Linux
_require_scratch
_need_to_be_root
send_files_dir=$TEST_DIR/btrfs-test-$seq
rm -f $seqres.full
rm -fr $send_files_dir
mkdir $send_files_dir
_scratch_mkfs >>$seqres.full 2>&1
_scratch_mount
# Create our test file with a single extent of 64K.
mkdir -p $SCRATCH_MNT/foo
$XFS_IO_PROG -f -c "pwrite -S 0xaa 0 64K" $SCRATCH_MNT/foo/bar \
| _filter_xfs_io
_run_btrfs_util_prog subvolume snapshot -r $SCRATCH_MNT \
$SCRATCH_MNT/mysnap1
_run_btrfs_util_prog subvolume snapshot $SCRATCH_MNT \
$SCRATCH_MNT/mysnap2
echo "File digest before being replaced:"
md5sum $SCRATCH_MNT/mysnap1/foo/bar | _filter_scratch
# Remove the file and then create a new one in the same location with
# the same name but with different content. This new file ends up
# getting the same inode number as the previous one, because that inode
# number was the highest inode number used by the snapshot's root and
# therefore when attempting to find a new inode number for the new
# file, we end up reusing the same inode number. This happens because
# currently btrfs uses the highest inode number summed by 1 for the
# first inode created once a snapshot's root is loaded (done at
# fs/btrfs/inode-map.c:btrfs_find_free_objectid in the linux kernel
# tree).
# Having these two different files in the snapshots with the same inode
# number (but different generation numbers) caused the btrfs send code
# to emit an incorrect path for the file when issuing an unlink
# operation because it failed to realize they were different files.
rm -f $SCRATCH_MNT/mysnap2/foo/bar
$XFS_IO_PROG -f -c "pwrite -S 0xbb 0 96K" \
$SCRATCH_MNT/mysnap2/foo/bar | _filter_xfs_io
_run_btrfs_util_prog subvolume snapshot -r $SCRATCH_MNT/mysnap2 \
$SCRATCH_MNT/mysnap2_ro
_run_btrfs_util_prog send $SCRATCH_MNT/mysnap1 -f $send_files_dir/1.snap
_run_btrfs_util_prog send -p $SCRATCH_MNT/mysnap1 \
$SCRATCH_MNT/mysnap2_ro -f $send_files_dir/2.snap
echo "File digest in the original filesystem after being replaced:"
md5sum $SCRATCH_MNT/mysnap2_ro/foo/bar | _filter_scratch
# Now recreate the filesystem by receiving both send streams and verify
# we get the same file contents that the original filesystem had.
_scratch_unmount
_scratch_mkfs >>$seqres.full 2>&1
_scratch_mount
_run_btrfs_util_prog receive -vv $SCRATCH_MNT -f $send_files_dir/1.snap
_run_btrfs_util_prog receive -vv $SCRATCH_MNT -f $send_files_dir/2.snap
echo "File digest in the new filesystem:"
# Must match the digest from the new file.
md5sum $SCRATCH_MNT/mysnap2_ro/foo/bar | _filter_scratch
status=0
exit
Reported-by: Martin Raiber <[email protected]>
Fixes: 8b191a684968 ("Btrfs: incremental send, check if orphanized dir inode needs delayed rename")
Signed-off-by: Filipe Manana <[email protected]>
|
|
When running kprobe test on arm64 rt kernel, it reports the below warning:
root@qemu7:~# modprobe kprobe_example
BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
in_atomic(): 0, irqs_disabled(): 128, pid: 484, name: modprobe
CPU: 0 PID: 484 Comm: modprobe Not tainted 4.1.6-rt5 #2
Hardware name: linux,dummy-virt (DT)
Call trace:
[<ffffffc0000891b8>] dump_backtrace+0x0/0x128
[<ffffffc000089300>] show_stack+0x20/0x30
[<ffffffc00061dae8>] dump_stack+0x1c/0x28
[<ffffffc0000bbad0>] ___might_sleep+0x120/0x198
[<ffffffc0006223e8>] rt_spin_lock+0x28/0x40
[<ffffffc000622b30>] __aarch64_insn_write+0x28/0x78
[<ffffffc000622e48>] aarch64_insn_patch_text_nosync+0x18/0x48
[<ffffffc000622ee8>] aarch64_insn_patch_text_cb+0x70/0xa0
[<ffffffc000622f40>] aarch64_insn_patch_text_sync+0x28/0x48
[<ffffffc0006236e0>] arch_arm_kprobe+0x38/0x48
[<ffffffc00010e6f4>] arm_kprobe+0x34/0x50
[<ffffffc000110374>] register_kprobe+0x4cc/0x5b8
[<ffffffbffc002038>] kprobe_init+0x38/0x7c [kprobe_example]
[<ffffffc000084240>] do_one_initcall+0x90/0x1b0
[<ffffffc00061c498>] do_init_module+0x6c/0x1cc
[<ffffffc0000fd0c0>] load_module+0x17f8/0x1db0
[<ffffffc0000fd8cc>] SyS_finit_module+0xb4/0xc8
Convert patch_lock to a raw lock to avoid this issue.
Although the problem is found on rt kernel, the fix should be applicable to
mainline kernel too.
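A hedged sketch of the conversion (the lock name comes from the trace
above; its placement in arch/arm64/kernel/insn.c is assumed):
static DEFINE_RAW_SPINLOCK(patch_lock);	/* was: DEFINE_SPINLOCK(patch_lock) */

/* ...and its users switch to the raw variants accordingly: */
raw_spin_lock_irqsave(&patch_lock, flags);
/* patch the instruction through the fixmap mapping */
raw_spin_unlock_irqrestore(&patch_lock, flags);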
Acked-by: Steven Rostedt <[email protected]>
Signed-off-by: Yang Shi <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
This is the arm64 portion of commit 45cac65b0fcd ("readahead: fault
retry breaks mmap file read random detection"), which was absent from
the initial port and has since gone unnoticed. The original commit says:
> .fault now can retry. The retry can break state machine of .fault. In
> filemap_fault, if page is miss, ra->mmap_miss is increased. In the second
> try, since the page is in page cache now, ra->mmap_miss is decreased. And
> these are done in one fault, so we can't detect random mmap file access.
>
> Add a new flag to indicate .fault is tried once. In the second try, skip
> ra->mmap_miss decreasing. The filemap_fault state machine is ok with it.
With this change, Mark reports that:
> Random read improves by 250%, sequential read improves by 40%, and
> random write by 400% to an eMMC device with dm crypto wrapped around it.
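A hedged sketch of the corresponding retry handling in the arm64 fault
handler after the port (simplified from do_page_fault()):
if (fault & VM_FAULT_RETRY) {
	if (flags & FAULT_FLAG_ALLOW_RETRY) {
		/* Mark the retry so filemap_fault() skips the
		 * ra->mmap_miss decrement on the second attempt. */
		flags &= ~FAULT_FLAG_ALLOW_RETRY;
		flags |= FAULT_FLAG_TRIED;
		goto retry;
	}
}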
Cc: Shaohua Li <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Wu Fengguang <[email protected]>
Cc: <[email protected]>
Signed-off-by: Mark Salyzyn <[email protected]>
Signed-off-by: Riley Andrews <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
Fix comment typo: s/handers/handlers/
Signed-off-by: Yang Shi <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
When OSS emulation is loaded on ISA SB AWE32 chip, we get now kernel
warnings like:
WARNING: CPU: 0 PID: 2791 at fs/sysfs/dir.c:31 sysfs_warn_dup+0x51/0x80()
sysfs: cannot create duplicate filename '/devices/isa/sbawe.0/sound/card0/seq-oss-0-0'
It's because both emux synth and opl3 drivers try to register their
OSS device object with the same static index number 0. This hasn't
been a big problem until the recent rewrite of device management code
(that exposes sysfs at the same time), but it's been an obvious bug.
This patch works around it just by using a different index number for
the emux synth object. There may be a more elegant way to fix this, but
it's enough for now, as this code won't be touched very often anyway.
Reported-and-tested-by: Michael Shell <[email protected]>
Cc: <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
|
|
The hw only has 3 crtcs; copy/paste typo.
Signed-off-by: Alex Deucher <[email protected]>
Cc: [email protected]
|