path: root/arch/sparc

2016-12-14  sparc: implement watchdog_nmi_enable and watchdog_nmi_disable  (Babu Moger; 1 file, -1/+43)

Implement the functions watchdog_nmi_enable and watchdog_nmi_disable to enable/disable the NMI watchdog. Sparc uses an arch-specific NMI watchdog handler. Currently, we do not have a way to enable/disable the NMI watchdog dynamically. With these patches we can enable or disable the arch-specific NMI watchdog using the proc or sysctl interface.

Example commands:

    To enable:  echo 1 > /proc/sys/kernel/nmi_watchdog
    To disable: echo 0 > /proc/sys/kernel/nmi_watchdog

It can also be achieved using the sysctl parameter kernel.nmi_watchdog.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Babu Moger <[email protected]>
Acked-by: Don Zickus <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jiri Kosina <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Yaowei Bai <[email protected]>
Cc: Aaron Tomlin <[email protected]>
Cc: Ulrich Obergfell <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Hidehiro Kawai <[email protected]>
Cc: Josh Hunt <[email protected]>
Cc: "David S. Miller" <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>

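For context, the override points live in the arch code; a minimal sketch of their shape (the start/stop helper names are assumptions, not taken from the patch):

    /* Arch override of the generic watchdog hooks; invoked per CPU
     * when kernel.nmi_watchdog is written. */
    int watchdog_nmi_enable(unsigned int cpu)
    {
            start_nmi_watchdog(NULL);       /* assumed helper: arm the NMI timer */
            return 0;
    }

    void watchdog_nmi_disable(unsigned int cpu)
    {
            stop_nmi_watchdog(NULL);        /* assumed helper: quiesce it again */
    }
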
2016-12-14  arch/sparc: add option to skip DMA sync as a part of map and unmap  (Alexander Duyck; 2 files, -4/+4)

This change allows us to pass DMA_ATTR_SKIP_CPU_SYNC, which avoids invoking cache line invalidation when the driver will handle it itself via a sync_for_cpu or sync_for_device call.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Alexander Duyck <[email protected]>
Cc: "David S. Miller" <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>

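A driver-side usage sketch (dev, buf and len are assumed to come from the surrounding driver; error handling omitted):

    #include <linux/dma-mapping.h>

    /* Map without the implicit CPU sync... */
    dma_addr_t addr = dma_map_single_attrs(dev, buf, len, DMA_FROM_DEVICE,
                                           DMA_ATTR_SKIP_CPU_SYNC);

    /* ...and sync explicitly only when the CPU actually touches the data. */
    dma_sync_single_for_cpu(dev, addr, len, DMA_FROM_DEVICE);
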
2016-12-12  Merge branch 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 1 file, -36/+9)

Pull smp hotplug updates from Thomas Gleixner:
 "This is the final round of converting the notifier mess to the state machine. The removal of the notifiers and the related infrastructure will happen around rc1, as there are conversions outstanding in other trees.

  The whole exercise removed about 2000 lines of code in total, and in the course of the conversion several dozen bugs got fixed. The new mechanism allows testing almost every hotplug step standalone, so usage sites can exercise all transitions extensively.

  There is more room for improvement, like integrating all the pointlessly different architecture mechanisms of synchronizing, setting cpus online etc into the core code"

* 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (60 commits)
  tracing/rb: Init the CPU mask on allocation
  soc/fsl/qbman: Convert to hotplug state machine
  soc/fsl/qbman: Convert to hotplug state machine
  zram: Convert to hotplug state machine
  KVM/PPC/Book3S HV: Convert to hotplug state machine
  arm64/cpuinfo: Convert to hotplug state machine
  arm64/cpuinfo: Make hotplug notifier symmetric
  mm/compaction: Convert to hotplug state machine
  iommu/vt-d: Convert to hotplug state machine
  mm/zswap: Convert pool to hotplug state machine
  mm/zswap: Convert dst-mem to hotplug state machine
  mm/zsmalloc: Convert to hotplug state machine
  mm/vmstat: Convert to hotplug state machine
  mm/vmstat: Avoid on each online CPU loops
  mm/vmstat: Drop get_online_cpus() from init_cpu_node_state/vmstat_cpu_dead()
  tracing/rb: Convert to hotplug state machine
  oprofile/nmi timer: Convert to hotplug state machine
  net/iucv: Use explicit clean up labels in iucv_init()
  x86/pci/amd-bus: Convert to hotplug state machine
  x86/oprofile/nmi: Convert to hotplug state machine
  ...

2016-12-12  Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 3 files, -3/+0)

Pull locking updates from Ingo Molnar:
 "The tree got pretty big in this development cycle, but the net effect is pretty good:

    115 files changed, 673 insertions(+), 1522 deletions(-)

  The main changes were:

   - Rework and generalize the mutex code to remove per arch mutex primitives. (Peter Zijlstra)

   - Add vCPU preemption support: add an interface to query the preemption status of vCPUs and use it in locking primitives - this optimizes paravirt performance. (Pan Xinhui, Juergen Gross, Christian Borntraeger)

   - Introduce cpu_relax_yield() and remove cpu_relax_lowlatency() to clean up and improve the s390 lock yielding machinery and its core kernel impact. (Christian Borntraeger)

   - Micro-optimize mutexes some more. (Waiman Long)

   - Reluctantly add the to-be-deprecated mutex_trylock_recursive() interface on a temporary basis, to give the DRM code more time to get rid of its locking hacks. Any other users will be NAK-ed on sight. (We turned off the deprecation warning for the time being to not pollute the build log.) (Peter Zijlstra)

   - Improve the rtmutex code a bit, in light of recent long lived bugs/races. (Thomas Gleixner)

   - Misc fixes, cleanups"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (36 commits)
  x86/paravirt: Fix bool return type for PVOP_CALL()
  x86/paravirt: Fix native_patch()
  locking/ww_mutex: Use relaxed atomics
  locking/rtmutex: Explain locking rules for rt_mutex_proxy_unlock()/init_proxy_locked()
  locking/rtmutex: Get rid of RT_MUTEX_OWNER_MASKALL
  x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()
  locking/mutex: Break out of expensive busy-loop on {mutex,rwsem}_spin_on_owner() when owner vCPU is preempted
  locking/osq: Break out of spin-wait busy waiting loop for a preempted vCPU in osq_lock()
  Documentation/virtual/kvm: Support the vCPU preemption check
  x86/xen: Support the vCPU preemption check
  x86/kvm: Support the vCPU preemption check
  x86/kvm: Support the vCPU preemption check
  kvm: Introduce kvm_write_guest_offset_cached()
  locking/core, x86/paravirt: Implement vcpu_is_preempted(cpu) for KVM and Xen guests
  locking/spinlocks, s390: Implement vcpu_is_preempted(cpu)
  locking/core, powerpc: Implement vcpu_is_preempted(cpu)
  sched/core: Introduce the vcpu_is_preempted(cpu) interface
  sched/wake_q: Rename WAKE_Q to DEFINE_WAKE_Q
  locking/core: Provide common cpu_relax_yield() definition
  locking/mutex: Don't mark mutex_trylock_recursive() as deprecated, temporarily
  ...

2016-12-12  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc  (Linus Torvalds; 18 files, -44/+535)

Pull sparc updates from David Miller:
 "Just a bunch of small cleanups and fixes here, and support for user probes from Allen Pais"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
  sparc: fix a building error reported by kbuild
  sparc64: fix typo in pgd_clear()
  sparc64: restore irq in error paths in iommu
  sparc: leon: Fix a retry loop in leon_init_timers()
  sparc64: make string buffers large enough
  sparc64: move dereference after check for NULL
  sparc: kernel: use builtin_platform_driver
  sparc64: Support User Probes for sparc

2016-12-11  sparc: fix a building error reported by kbuild  (Gonglei (Arei); 1 file, -0/+1)

    >> arch/sparc/include/asm/topology_64.h:44:44: error: implicit declaration of function 'cpu_data' [-Werror=implicit-function-declaration]
       #define topology_physical_package_id(cpu)    (cpu_data(cpu).proc_id)
                                                     ^

Let's include cpudata.h in topology_64.h.

Cc: Sam Ravnborg <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: [email protected]
Suggested-by: Sam Ravnborg <[email protected]>
Signed-off-by: Gonglei <[email protected]>
Acked-by: Sam Ravnborg <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

2016-12-11  sparc64: fix typo in pgd_clear()  (Kirill A. Shutemov; 1 file, -1/+1)

It really has to be pgdp, not pgd. It just happened to work since all callers have 'pgd' as an argument.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

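Roughly the shape of the fix, as a sketch rather than the verbatim sparc64 code:

    /* The broken version referenced 'pgd' and compiled only because every
     * caller's argument happened to carry that name; the fix uses the
     * macro's own parameter. */
    #define pgd_clear(pgdp)         (pgd_val(*(pgdp)) = 0UL)
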
2016-12-11  sparc64: restore irq in error paths in iommu  (Dan Carpenter; 1 file, -0/+2)

There are some error paths where we should restore IRQs but we don't.

Fixes: bb620c3d3925 ("sparc: Make sparc64 use scalable lib/iommu-common.c functions")
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

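The pattern being fixed, sketched with illustrative helper and constant names:

    unsigned long flags;

    spin_lock_irqsave(&iommu->lock, flags);
    entry = alloc_dvma_entry(iommu);            /* illustrative helper */
    if (entry == BAD_ENTRY) {
            /* the bug: returning here without the *_irqrestore would
             * leave interrupts disabled on this CPU */
            spin_unlock_irqrestore(&iommu->lock, flags);
            return DMA_ERROR_CODE;
    }
    spin_unlock_irqrestore(&iommu->lock, flags);
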
2016-12-11  sparc: leon: Fix a retry loop in leon_init_timers()  (Dan Carpenter; 1 file, -28/+28)

The original code causes a static checker warning because it has a continue inside a do { } while (0); loop. In that context, a continue and a break are equivalent. The intent was to go back to the start of the loop so the continue was a bug.

I've added a retry label at the start and changed the continue to a goto retry. Then I removed the do { } while (0) loop and pulled the code in one indent level.

Fixes: 2791c1a43900 ("SPARC/LEON: added support for selecting Timer Core and Timer within core")
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

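A generic sketch of the bug and the fix (need_another_pass() is hypothetical):

    /* Before (buggy): inside do { } while (0), 'continue' just falls
     * through to the while(0) test and exits, same as 'break'. */
    do {
            if (need_another_pass())
                    continue;       /* intended to retry, actually exits */
            /* ... */
    } while (0);

    /* After: an explicit label makes the retry unambiguous. */
    retry:
            if (need_another_pass())
                    goto retry;
            /* ... */
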
2016-12-11  sparc64: make string buffers large enough  (Dan Carpenter; 1 file, -2/+2)

My static checker complains that if "lvl" is ULONG_MAX (this is 64 bit) then some of the strings will overflow. I don't know if that's possible but it seems simple enough to make the buffers slightly larger.

Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

2016-12-11  sparc64: move dereference after check for NULL  (Dan Carpenter; 1 file, -3/+2)

We shouldn't dereference "iommu" until after we have checked that it is non-NULL.

Fixes: f08978b0fdbf ("sparc64: Enable sun4v dma ops to use IOMMU v2 APIs")
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

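The pattern, with illustrative member names:

    /* Before: 'iommu' is dereferenced on the first line, so the
     * NULL check below can never help. */
    struct atu *atu = iommu->atu;

    if (unlikely(iommu == NULL))
            return DMA_ERROR_CODE;

    /* After: check first, dereference second. */
    if (unlikely(iommu == NULL))
            return DMA_ERROR_CODE;
    atu = iommu->atu;
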
2016-12-11  sparc: kernel: use builtin_platform_driver  (Geliang Tang; 1 file, -6/+1)

Use the builtin_platform_driver() helper to simplify the code.

Signed-off-by: Geliang Tang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

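What the conversion looks like in general (foo_driver is hypothetical):

    /* Before: hand-rolled registration boilerplate. */
    static int __init foo_init(void)
    {
            return platform_driver_register(&foo_driver);
    }
    device_initcall(foo_init);

    /* After: one line, same effect for built-in-only drivers. */
    builtin_platform_driver(foo_driver);
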
2016-12-11  sparc64: Support User Probes for sparc  (Allen Pais; 12 files, -4/+498)

Signed-off-by: Eric Saint Etienne <[email protected]>
Signed-off-by: Allen Pais <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

2016-11-30  tcp: SOF_TIMESTAMPING_OPT_STATS option for SO_TIMESTAMPING  (Francis Yan; 1 file, -0/+2)

This patch exports the sender chronograph stats via the socket SO_TIMESTAMPING channel. Currently we can instrument how long a particular application unit of data was queued in TCP by tracking SOF_TIMESTAMPING_TX_SOFTWARE and SOF_TIMESTAMPING_TX_SCHED. Having these sender chronograph stats exported simultaneously along with these timestamps allows further breaking down the various sender limitations. For example, a video server can tell whether a particular chunk of video on a connection took a long time to deliver because TCP was experiencing a small receive window. Before this patch, that was not possible to tell without packet traces.

To prepare these stats, the user needs to set the SOF_TIMESTAMPING_OPT_STATS and SOF_TIMESTAMPING_OPT_TSONLY flags while requesting other SOF_TIMESTAMPING TX timestamps. When the timestamps are available in the error queue, the stats are returned in a separate control message of type SCM_TIMESTAMPING_OPT_STATS, as a list of TLVs (struct nlattr) of types TCP_NLA_BUSY_TIME, TCP_NLA_RWND_LIMITED and TCP_NLA_SNDBUF_LIMITED. The unit is microseconds.

Signed-off-by: Francis Yan <[email protected]>
Signed-off-by: Yuchung Cheng <[email protected]>
Signed-off-by: Soheil Hassas Yeganeh <[email protected]>
Acked-by: Neal Cardwell <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

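A userspace sketch of the new knob (fd is an assumed connected TCP socket; buffer setup and error handling omitted):

    #include <linux/net_tstamp.h>
    #include <sys/socket.h>

    int val = SOF_TIMESTAMPING_TX_SCHED | SOF_TIMESTAMPING_TX_SOFTWARE |
              SOF_TIMESTAMPING_SOFTWARE | SOF_TIMESTAMPING_OPT_TSONLY |
              SOF_TIMESTAMPING_OPT_STATS;
    setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &val, sizeof(val));

    /* Later: timestamps and stats arrive on the error queue. */
    recvmsg(fd, &msg, MSG_ERRQUEUE);
    for (struct cmsghdr *cm = CMSG_FIRSTHDR(&msg); cm;
         cm = CMSG_NXTHDR(&msg, cm)) {
            if (cm->cmsg_level == SOL_SOCKET &&
                cm->cmsg_type == SCM_TIMESTAMPING_OPT_STATS) {
                    /* Payload is a list of struct nlattr TLVs:
                     * TCP_NLA_BUSY_TIME, TCP_NLA_RWND_LIMITED,
                     * TCP_NLA_SNDBUF_LIMITED, values in microseconds. */
            }
    }
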
2016-11-22  sparc/sysfs: Convert to hotplug state machine  (Sebastian Andrzej Siewior; 1 file, -36/+9)

Install the callbacks via the state machine and let the core invoke the callbacks on the already online CPUs. The previous convention of keeping the files around until the CPU is dead has not been preserved, as there is no point in keeping them available when the CPU is going down. This makes the hotplug calls symmetric.

Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Acked-by: "David S. Miller" <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>

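The general shape of such a conversion (callback and state names here are illustrative, not the exact sparc/sysfs ones):

    #include <linux/cpuhotplug.h>

    static int sysfs_cpu_online(unsigned int cpu)
    {
            /* create the per-cpu sysfs files */
            return 0;
    }

    static int sysfs_cpu_offline(unsigned int cpu)
    {
            /* remove them again, keeping online/offline symmetric */
            return 0;
    }

    /* Registering also runs the online callback on every CPU
     * that is already up. */
    ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "sparc/sysfs:online",
                            sysfs_cpu_online, sysfs_cpu_offline);
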
2016-11-22  Merge branch 'linus' into locking/core, to pick up fixes  (Ingo Molnar; 11 files, -70/+916)

Signed-off-by: Ingo Molnar <[email protected]>

2016-11-19  sparc: drop duplicate header scatterlist.h  (Geliang Tang; 1 file, -1/+0)

Drop the duplicate header scatterlist.h from iommu_common.h.

Signed-off-by: Geliang Tang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

2016-11-18  config: Adding the new config parameter CONFIG_PROVE_LOCKING_SMALL for sparc  (Babu Moger; 1 file, -0/+1)

This new config parameter limits the space used for "Lock debugging: prove locking correctness" by about 4MB. Current sparc systems have a 32MB size limit for the kernel image, including the .text, .data and .bss sections. With the PROVE_LOCKING feature, the kernel size could grow beyond this limit and cause system boot-up issues. With this option, the kernel limits the size of the lock_chains, stack_trace etc. entries so that the kernel fits in the required size limit. This is not visible to the user and is only used for sparc.

Signed-off-by: Babu Moger <[email protected]>
Acked-by: Sam Ravnborg <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

2016-11-18  sparc64: Enable 64-bit DMA  (Tushar Dave; 2 files, -2/+10)

ATU 64-bit addressing allows PCIe devices with 64-bit DMA capabilities to use the ATU for 64-bit DMA.

Signed-off-by: Tushar Dave <[email protected]>
Reviewed-by: chris hyser <[email protected]>
Acked-by: Sowmini Varadhan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

2016-11-18  sparc64: Enable sun4v dma ops to use IOMMU v2 APIs  (Tushar Dave; 4 files, -58/+211)

Add the Hypervisor IOMMU v2 APIs pci_iotsb_map() and pci_iotsb_demap(), and enable sun4v dma ops to use the IOMMU v2 API for all PCIe devices with a 64-bit DMA mask.

Signed-off-by: Tushar Dave <[email protected]>
Reviewed-by: chris hyser <[email protected]>
Acked-by: Sowmini Varadhan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

2016-11-18  sparc64: Bind PCIe devices to use IOMMU v2 service  (Tushar Dave; 3 files, -0/+60)

In order to use the Hypervisor (HV) IOMMU v2 API for map/demap, each PCIe device has to be bound to an IOTSB using the HV API pci_iotsb_bind().

Signed-off-by: Tushar Dave <[email protected]>
Reviewed-by: chris hyser <[email protected]>
Acked-by: Sowmini Varadhan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

2016-11-18  sparc64: Initialize iommu_map_table and iommu_pool  (Tushar Dave; 2 files, -0/+21)

Like the legacy IOMMU, use the common iommu_map_table and iommu_pool for the ATU. This change initializes the iommu_map_table and iommu_pool for the ATU.

Signed-off-by: Tushar Dave <[email protected]>
Reviewed-by: chris hyser <[email protected]>
Reviewed-by: Sowmini Varadhan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

2016-11-18  sparc64: Add ATU (new IOMMU) support  (Tushar Dave; 6 files, -0/+529)

ATU (Address Translation Unit) is a new IOMMU in SPARC, supported with the Hypervisor IOMMU v2 APIs.

The current SPARC IOMMU supports only 32-bit address ranges and one TSB per PCIe root complex, which limits DVMA space to 2GB per root complex. That limit has become a scalability bottleneck now that a typical 10G/40G NIC can consume 300MB-500MB of DVMA space per instance. When DVMA resources are exhausted, devices become unusable, since the driver can't allocate DVMA.

The ATU removes this bottleneck by allowing the guest OS to create an IOTSB of size 32G (or more) with the 64-bit address ranges available in the ATU hardware. 32G is more than enough DVMA space to be shared by all PCIe devices under a root complex, in contrast to the 2G provided by the legacy IOMMU.

The ATU allows PCIe devices to use 64-bit DMA addressing. Devices which choose to use a 32-bit DMA mask will continue to work with the existing legacy IOMMU.

Signed-off-by: Tushar Dave <[email protected]>
Reviewed-by: chris hyser <[email protected]>
Acked-by: Sowmini Varadhan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

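From a driver's point of view the selection is just the DMA mask, e.g.:

    /* Opting into the 64-bit (ATU) path; devices that stay at 32 bits
     * keep using the legacy IOMMU. */
    if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)))
            dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
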
2016-11-18  sparc64: Add FORCE_MAX_ZONEORDER and default to 13  (Dave Kleikamp; 1 file, -0/+18)

This change allows the ATU (new IOMMU) in SPARC systems to request large (32M) contiguous memory during boot for creating the IOTSB backing store.

Signed-off-by: Dave Kleikamp <[email protected]>
Signed-off-by: Tushar Dave <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

2016-11-17  locking/core: Provide common cpu_relax_yield() definition  (Christian Borntraeger; 2 files, -2/+0)

No need to duplicate the same define everywhere. Since the only user is stop-machine and the only provider is s390, we can use a default implementation of cpu_relax_yield() in sched.h.

Suggested-by: Russell King <[email protected]>
Signed-off-by: Christian Borntraeger <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Acked-by: Russell King <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Noam Camus <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: linux-s390 <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>

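One plausible shape of that fallback (the actual guard mechanism used in sched.h may differ):

    /* Architectures that don't provide their own yield-capable
     * variant get plain cpu_relax(). */
    #ifndef cpu_relax_yield
    #define cpu_relax_yield()       cpu_relax()
    #endif
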
2016-11-16  locking/core, arch: Remove cpu_relax_lowlatency()  (Christian Borntraeger; 2 files, -2/+0)

As there are no users left, we can remove cpu_relax_lowlatency() implementations from every architecture.

Signed-off-by: Christian Borntraeger <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Noam Camus <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Russell King <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>

2016-11-16  locking/core: Introduce cpu_relax_yield()  (Christian Borntraeger; 2 files, -0/+2)

For spinning loops people often use barrier() or cpu_relax(). For most architectures cpu_relax and barrier are the same, but on some architectures cpu_relax can add some latency. For example, on power, sparc64 and arc, cpu_relax can shift the CPU towards other hardware threads in an SMT environment. On s390 cpu_relax does even more: it uses a hypercall to the hypervisor to give up the timeslice. In contrast to the SMT yielding, this can result in larger latencies. In some places this latency is unwanted, so another variant, "cpu_relax_lowlatency", was introduced. Before that is used in more and more places, let's reverse the logic and provide a cpu_relax_yield that can be called in places where yielding is more important than latency. By default this is the same as cpu_relax on all architectures.

Signed-off-by: Christian Borntraeger <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Noam Camus <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Russell King <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>

2016-11-14  sparc64: fix compile warning section mismatch in find_node()  (Thomas Tai; 1 file, -3/+3)

A compile warning was introduced by a commit fixing find_node(). This patch fixes the warning by moving find_node() into the __init section: because find_node() is only used by memblock_nid_range(), which in turn is only used by the __init function add_node_ranges(), both find_node() and memblock_nid_range() should be in the __init section as well.

Signed-off-by: Thomas Tai <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

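Sketched, with simplified signatures:

    /* Moving the whole call chain into .init.text silences the
     * section-mismatch warning: callers and callee are now discarded
     * together after boot. */
    static int __init find_node(unsigned long addr)
    {
            /* ... */
    }

    static u64 __init memblock_nid_range(u64 start, u64 end, int *nid)
    {
            /* ... may call find_node() ... */
    }
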
2016-11-11  Merge branch 'linus' into locking/core, to pick up fixes  (Ingo Molnar; 32 files, -723/+1400)

Signed-off-by: Ingo Molnar <[email protected]>

2016-11-10  sparc32: Fix inverted invalid_frame_pointer checks on sigreturns  (Andreas Larsson; 1 file, -2/+2)

Signed-off-by: Andreas Larsson <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

2016-11-10  sparc64: Fix find_node warning if numa node cannot be found  (Thomas Tai; 1 file, -4/+61)

When booting up LDOM, find_node() warns that a physical address doesn't match a NUMA node:

    WARNING: CPU: 0 PID: 0 at arch/sparc/mm/init_64.c:835 find_node+0xf4/0x120
    find_node: A physical address doesn't match a NUMA node rule. Some physical memory will be owned by node 0.
    Modules linked in:
    CPU: 0 PID: 0 Comm: swapper Not tainted 4.9.0-rc3 #4
    Call Trace:
     [0000000000468ba0] __warn+0xc0/0xe0
     [0000000000468c74] warn_slowpath_fmt+0x34/0x60
     [00000000004592f4] find_node+0xf4/0x120
     [0000000000dd0774] add_node_ranges+0x38/0xe4
     [0000000000dd0b1c] numa_parse_mdesc+0x268/0x2e4
     [0000000000dd0e9c] bootmem_init+0xb8/0x160
     [0000000000dd174c] paging_init+0x808/0x8fc
     [0000000000dcb0d0] setup_arch+0x2c8/0x2f0
     [0000000000dc68a0] start_kernel+0x48/0x424
     [0000000000dcb374] start_early_boot+0x27c/0x28c
     [0000000000a32c08] tlb_fixup_done+0x4c/0x64
     [0000000000027f08] 0x27f08

This is because Linux uses the internal structure node_masks[] to keep only the best memory-latency node. However, an LDOM mdesc can contain a single latency group with multiple memory latency nodes, so if an address doesn't match the best-latency node in node_masks[], the code should check for an alternative via the mdesc. The warning message should only be printed if the address matches neither node_masks[] nor any node in the mdesc. To minimize the impact of searching the mdesc every time, the last matched mask and index are stored in a variable.

Signed-off-by: Thomas Tai <[email protected]>
Reviewed-by: Chris Hyser <[email protected]>
Reviewed-by: Liam Merwick <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

2016-10-27  sparc64: Handle extremely large kernel TLB range flushes more gracefully.  (David S. Miller; 1 file, -55/+228)

When the vmalloc area gets fragmented, and because the firmware mapping area sits between where modules live and the vmalloc area, we can sometimes receive requests for enormous kernel TLB range flushes. When this happens the cpu just spins flushing billions of pages, which triggers the NMI watchdog and other problems.

We took care of this on the TSB side by doing a linear scan of the table once we pass a certain threshold. Do something similar for the TLB flush; however, we are limited by the TLB flush facilities provided by the different chip variants.

First of all we use a (mostly arbitrary) cut-off of 256K, which is about 32 pages. This can be tuned in the future. The huge-range code path for each chip works as follows:

1) On spitfire we flush all non-locked TLB entries using diagnostic accesses.

2) On cheetah we use the "flush all" TLB flush.

3) On sun4v/hypervisor we do a TLB context flush on context 0, which unlike previous chips does not remove "permanent" or locked entries.

We could probably do something better on spitfire, such as limiting the flush to kernel TLB entries or even doing range comparisons. However, that probably isn't worth it since those chips are old and the TLB only had 64 entries.

Reported-by: James Clarke <[email protected]>
Tested-by: James Clarke <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

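A sketch of the dispatch; only the 256K cut-off comes from the text above, the helper names are made up:

    #define HUGE_FLUSH_CUTOFF       (256 * 1024)    /* ~32 base pages */

    if (end - start >= HUGE_FLUSH_CUTOFF)
            huge_flush_all();               /* chip-specific "flush everything" */
    else
            flush_range_per_page(start, end);       /* old per-page loop */
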
2016-10-26  sparc64: Fix illegal relative branches in hypervisor patched TLB cross-call code.  (David S. Miller; 1 file, -12/+30)

Just like the non-cross-call TLB flush handlers, the cross-call ones need to avoid doing PC-relative branches outside of their code blocks.

Signed-off-by: David S. Miller <[email protected]>

2016-10-26  sparc64: Fix instruction count in comment for __hypervisor_flush_tlb_pending.  (David S. Miller; 1 file, -1/+1)

Noticed by James Clarke.

Signed-off-by: David S. Miller <[email protected]>

2016-10-25  sparc64: Handle extremely large kernel TSB range flushes sanely.  (David S. Miller; 1 file, -0/+17)

If the number of pages we are flushing is more than twice the number of entries in the TSB, just scan the TSB table for matches rather than probing each and every page in the range.

Based upon a patch and report by James Clarke.

Signed-off-by: David S. Miller <[email protected]>

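An illustrative version of the threshold test (helper names made up; past twice the table size, one linear scan of the TSB beats probing every page):

    unsigned long nentries = tsb_bytes / sizeof(struct tsb);
    unsigned long npages   = (end - start) >> PAGE_SHIFT;

    if (npages > 2 * nentries)
            scan_tsb_for_matches(tsb, nentries, start, end);
    else
            probe_each_page(tsb, start, end);
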
2016-10-25  sparc: Handle negative offsets in arch_jump_label_transform  (James Clarke; 1 file, -6/+17)

Additionally, if the offset will overflow the immediate for a ba,pt instruction, fall back on a standard ba to get an extra 3 bits.

Signed-off-by: James Clarke <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

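A sketch of the range check; the opcode masks are placeholders, the 19-bit versus 22-bit displacement split is the point:

    /* ba,pt carries a signed 19-bit word displacement, plain ba 22 bits. */
    s32 off = (s32)(target - instr_addr) >> 2;      /* displacement in words */

    if (off >= -0x40000 && off <= 0x3ffff)
            insn = BA_PT_OPCODE | ((u32)off & 0x7ffff);     /* fits in 19 bits */
    else
            insn = BA_OPCODE    | ((u32)off & 0x3fffff);    /* fall back, 22 bits */
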
2016-10-25  sparc64: Fix illegal relative branches in hypervisor patched TLB code.  (David S. Miller; 1 file, -14/+51)

When we copy code over to patch another piece of code, we can only use PC-relative branches that target code within that piece of code. Such PC-relative branches cannot be made to external symbols, because the patch moves the location of the code and thus modifies the relative address of external symbols. Use an absolute jmpl to fix this problem.

Signed-off-by: David S. Miller <[email protected]>

2016-10-25  locking/mutex: Kill arch specific code  (Peter Zijlstra; 1 file, -1/+0)

It's all generic atomic_long_t stuff now.

Tested-by: Jason Low <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Ingo Molnar <[email protected]>

2016-10-24  sparc64: Delete now unused user copy fixup functions.  (David S. Miller; 3 files, -105/+4)

Now that all of the user copy routines are converted to return accurate residual lengths when an exception occurs, we no longer need the broken fixup routines.

Signed-off-by: David S. Miller <[email protected]>

2016-10-24  sparc64: Delete now unused user copy assembler helpers.  (David S. Miller; 1 file, -30/+0)

All of __ret{,l}_mone{_asi,_fp,_asi_fpu} are now unused.

Signed-off-by: David S. Miller <[email protected]>

2016-10-24  sparc64: Convert U3copy_{from,to}_user to accurate exception reporting.  (David S. Miller; 3 files, -81/+162)

Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length.

Signed-off-by: David S. Miller <[email protected]>

2016-10-24  sparc64: Convert NG2copy_{from,to}_user to accurate exception reporting.  (David S. Miller; 3 files, -91/+153)

Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length.

Signed-off-by: David S. Miller <[email protected]>

2016-10-24  sparc64: Convert NGcopy_{from,to}_user to accurate exception reporting.  (David S. Miller; 3 files, -79/+162)

Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length.

Signed-off-by: David S. Miller <[email protected]>

2016-10-24  sparc64: Convert NG4copy_{from,to}_user to accurate exception reporting.  (David S. Miller; 3 files, -79/+231)

Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length.

Signed-off-by: David S. Miller <[email protected]>

2016-10-24  sparc64: Convert U1copy_{from,to}_user to accurate exception reporting.  (David S. Miller; 3 files, -124/+237)

Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length.

Signed-off-by: David S. Miller <[email protected]>

2016-10-24  sparc64: Convert GENcopy_{from,to}_user to accurate exception reporting.  (David S. Miller; 3 files, -18/+38)

Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length.

Signed-off-by: David S. Miller <[email protected]>

2016-10-24  sparc64: Convert copy_in_user to accurate exception reporting.  (David S. Miller; 1 file, -10/+25)

Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length.

Signed-off-by: David S. Miller <[email protected]>

2016-10-24  sparc64: Prepare to move to saner user copy exception handling.  (David S. Miller; 15 files, -39/+47)

The fixup helper function mechanism for handling user copy fault handling is not 100% accurate, and can never be made so.

We are going to transition the code to return the running residual length, which is always kept track of in one or more registers of each of these routines.

In order to convert them one by one, we have to allow the existing behavior to continue functioning. Therefore make all the copy code that wants the fixup helper to be used return negative one.

After all of the user copy routines have been converted, this logic and the fixup helpers themselves can be removed completely.

Signed-off-by: David S. Miller <[email protected]>

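The contract the series converges on, from the caller's side:

    /* The return value is exactly the number of bytes NOT copied,
     * so callers can report precise partial progress. */
    unsigned long left = copy_from_user(dst, src, len);

    if (left)
            pr_debug("copied %lu of %lu bytes\n", len - left, len);
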
2016-10-24  sparc64: Delete __ret_efault.  (David S. Miller; 2 files, -8/+1)

It is completely unused.

Signed-off-by: David S. Miller <[email protected]>

2016-10-24  sparc32: Fix old style declaration GCC warnings  (Tobias Klauser; 1 file, -1/+1)

Fix [-Wold-style-declaration] GCC warnings by moving the inline keyword before the return type.

Signed-off-by: Tobias Klauser <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

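Before/after sketch (sparc_foo() is a hypothetical function):

    /* Before (warns): 'inline' appears after the return type. */
    static int inline sparc_foo(void)
    {
            return 0;
    }

    /* After: specifiers precede the return type. */
    static inline int sparc_foo(void)
    {
            return 0;
    }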