path: root/arch/mips/kernel
Age | Commit message | Author | Files | Lines (-/+)
2019-07-03mips/kprobes: Export kprobe_fault_handler()Anshuman Khandual1-1/+1
Generic kprobe_page_fault() calls into kprobe_fault_handler() which must be available with and without CONFIG_KPROBES. There is one stub implementation for !CONFIG_KPROBES. For CONFIG_KPROBES all subscribing archs must provide a kprobe_fault_handler() definition. Currently mips has an implementation which is defined as 'static inline'. Make it available for generic kprobes to comply with the above new requirement. Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: James Hogan <[email protected]> Cc: Andrew Morton <[email protected]> Cc: [email protected] Cc: [email protected] Reported-by: kbuild test robot <[email protected]> Signed-off-by: Anshuman Khandual <[email protected]> Signed-off-by: Paul Burton <[email protected]> Fixes: 773734b44557 ("mm, kprobes: generalize and rename notify_page_fault() as kprobe_page_fault()") Cc: [email protected]
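A minimal sketch of the visibility change this asks for, assuming the usual asm/kprobes.h layout (not the literal mips diff): generic code can only call the handler if the CONFIG_KPROBES=y build sees a real declaration instead of a file-local static inline.
    /* asm/kprobes.h style sketch */
    #include <asm/ptrace.h>

    #ifdef CONFIG_KPROBES
    /* real handler, defined once in the arch's kprobes.c */
    int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
    #else
    /* stub so generic kprobe_page_fault() compiles with CONFIG_KPROBES=n */
    static inline int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
    {
        return 0;
    }
    #endif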
2019-06-28arch: wire-up pidfd_open()Christian Brauner3-0/+3
This wires up the pidfd_open() syscall into all arches at once. Signed-off-by: Christian Brauner <[email protected]> Reviewed-by: David Howells <[email protected]> Reviewed-by: Oleg Nesterov <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Cc: "Eric W. Biederman" <[email protected]> Cc: Kees Cook <[email protected]> Cc: Joel Fernandes (Google) <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Jann Horn <[email protected]> Cc: Andy Lutomirsky <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Aleksa Sarai <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Al Viro <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected]
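On mips, "wiring up" amounts to one new line in each of the three syscall tables under arch/mips/kernel/syscalls/ (o32, n32 and n64), which matches the 3 files / +3 lines stat above. A hedged sketch of the n64 entry; 434 is the number pidfd_open received in the generic table and is assumed to carry over unchanged here, with the o32 and n32 lines being analogous.
    # arch/mips/kernel/syscalls/syscall_n64.tbl (sketch)
    434    n64    pidfd_open    sys_pidfd_open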
2019-06-24Merge tag 'v5.2-rc6' into sched/core, to refresh the branchIngo Molnar7-27/+7
Signed-off-by: Ingo Molnar <[email protected]>
2019-06-19treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500Thomas Gleixner5-21/+5
Based on 2 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation # extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Enrico Weigelt <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Reviewed-by: Allison Randal <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-06-19treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 230Thomas Gleixner2-6/+2
Based on 2 normalized pattern(s): this source code is licensed under the gnu general public license version 2 see the file copying for more details this source code is licensed under general public license version 2 see extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 52 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Enrico Weigelt <[email protected]> Reviewed-by: Allison Randal <[email protected]> Reviewed-by: Alexios Zavras <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-06-17Merge tag 'v5.2-rc5' into sched/core, to pick up fixesIngo Molnar2-15/+1
Signed-off-by: Ingo Molnar <[email protected]>
2019-06-11MIPS: ftrace: Reword prepare_ftrace_return() comment blockGeert Uytterhoeven1-11/+12
Improve the comment block for prepare_ftrace_return(). Signed-off-by: Geert Uytterhoeven <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: [email protected] Cc: [email protected]
2019-06-08Merge tag 'mips_fixes_5.2_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linuxLinus Torvalds1-3/+0
Pull MIPS fixes from Paul Burton:
 - Declare ginvt() __always_inline due to its use of an argument as an inline asm immediate.
 - A VDSO build fix following Kbuild changes made this cycle.
 - A fix for boot failures on txx9 systems following memory initialization changes made this cycle.
 - Bounds check virt_addr_valid() to prevent it spuriously indicating that bogus addresses are valid, in turn fixing hardened usercopy failures that have been present since v4.12.
 - Build uImage.gz for pistachio systems by default, since this is the image we need in order to actually boot on a board.
 - Remove an unused variable in our uprobes code.
* tag 'mips_fixes_5.2_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  MIPS: uprobes: remove set but not used variable 'epc'
  MIPS: pistachio: Build uImage.gz by default
  MIPS: Make virt_addr_valid() return bool
  MIPS: Bounds check virt_addr_valid
  MIPS: TXx9: Fix boot crash in free_initmem()
  MIPS: remove a space after -I to cope with header search paths for VDSO
  MIPS: mark ginvt() as __always_inline
2019-06-05treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 252Thomas Gleixner1-12/+1
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed as is without any warranty of any kind whether express or implied without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 2 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Richard Fontana <[email protected]> Reviewed-by: Alexios Zavras <[email protected]> Reviewed-by: Allison Randal <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-06-03sched/core: Provide a pointer to the valid CPU maskSebastian Andrzej Siewior2-4/+4
In commit: 4b53a3412d66 ("sched/core: Remove the tsk_nr_cpus_allowed() wrapper") the tsk_nr_cpus_allowed() wrapper was removed. There was not much difference in !RT, but in RT we used this to implement migrate_disable(). Within a migrate_disable() section the CPU mask is restricted to a single CPU while the "normal" CPU mask remains untouched. As an alternative implementation Ingo suggested to use:
    struct task_struct {
        const cpumask_t *cpus_ptr;
        cpumask_t        cpus_mask;
    };
with t->cpus_ptr = &t->cpus_mask; In -RT we can then switch cpus_ptr to t->cpus_ptr = &cpumask_of(task_cpu(p)); in a migration-disabled region. The rules are simple:
 - Code that 'uses' ->cpus_allowed would use the pointer.
 - Code that 'modifies' ->cpus_allowed would use the direct mask.
Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Thomas Gleixner <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
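A minimal sketch of the intended reader/writer split; the helper names can_run_on() and set_allowed() are invented for illustration and are not part of the commit.
    #include <linux/cpumask.h>
    #include <linux/sched.h>

    /* Readers of the affinity go through the pointer, which -RT may
     * temporarily aim at a single-CPU mask inside migrate_disable(). */
    static bool can_run_on(struct task_struct *p, int cpu)
    {
        return cpumask_test_cpu(cpu, p->cpus_ptr);
    }

    /* Writers always update the underlying mask; outside a
     * migrate_disable() section the pointer keeps tracking it. */
    static void set_allowed(struct task_struct *p, const struct cpumask *new_mask)
    {
        cpumask_copy(&p->cpus_mask, new_mask);
    }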
2019-05-30treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 182Thomas Gleixner2-24/+2
Based on 1 normalized pattern(s): this program is free software you can distribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 59 temple place suite 330 boston ma 02111 1307 usa extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 32 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Allison Randal <[email protected]> Reviewed-by: Richard Fontana <[email protected]> Reviewed-by: Steve Winslow <[email protected]> Reviewed-by: Alexios Zavras <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-05-30treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 167Thomas Gleixner1-13/+1
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation version 2 of the license this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 59 temple place suite 330 boston ma 02111 1307 usa extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 83 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Richard Fontana <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Reviewed-by: Allison Randal <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-05-30treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 157Thomas Gleixner3-30/+3
Based on 3 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version [author] [kishon] [vijay] [abraham] [i] [kishon]@[ti] [com] this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version [author] [graeme] [gregory] [gg]@[slimlogic] [co] [uk] [author] [kishon] [vijay] [abraham] [i] [kishon]@[ti] [com] [based] [on] [twl6030]_[usb] [c] [author] [hema] [hk] [hemahk]@[ti] [com] this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 1105 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Allison Randal <[email protected]> Reviewed-by: Richard Fontana <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-05-30treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156Thomas Gleixner5-68/+5
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 59 temple place suite 330 boston ma 02111 1307 usa extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 1334 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Allison Randal <[email protected]> Reviewed-by: Richard Fontana <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-05-30treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152Thomas Gleixner18-89/+18
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 3029 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Allison Randal <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-05-29MIPS: uprobes: remove set but not used variable 'epc'YueHaibing1-3/+0
Fixes gcc '-Wunused-but-set-variable' warning: arch/mips/kernel/uprobes.c: In function 'arch_uprobe_pre_xol': arch/mips/kernel/uprobes.c:115:17: warning: variable 'epc' set but not used [-Wunused-but-set-variable] It's never used since introduction in commit 40e084a506eb ("MIPS: Add uprobes support.") Signed-off-by: YueHaibing <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Cc: <[email protected]>
2019-05-29signal: Remove the task parameter from force_sig_faultEric W. Biederman1-6/+6
As synchronous exceptions really only make sense against the current task (otherwise how are you synchronous?), remove the task parameter from force_sig_fault to make it explicit that this is what is going on. The two known exceptions that deliver a synchronous exception to a stopped ptraced task have already been changed to force_sig_fault_to_task. The callers have been changed with the following emacs regular expression (with obvious variations on the architectures that take more arguments) to avoid typos:
    force_sig_fault[(]\([^,]+\)[,]\([^,]+\)[,]\([^,]+\)[,]\W+current[)]
    -> force_sig_fault(\1,\2,\3)
Signed-off-by: "Eric W. Biederman" <[email protected]>
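A hedged before/after sketch of what a typical caller looks like once the parameter is gone; the signal, si_code and address shown are illustrative rather than taken from a specific mips call site.
    /* before: every caller spelled out 'current' */
    force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)badvaddr, current);

    /* after: the signal is always delivered to current */
    force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)badvaddr);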
2019-05-29signal: Use force_sig_fault_to_task for the two calls that don't deliver to currentEric W. Biederman1-1/+1
In preparation for removing the task parameter from force_sig_fault, introduce force_sig_fault_to_task and use it for the two cases where it matters. On mips force_fcr31_sig calls force_sig_fault and is called on either the current task, or a task that is suspended and is being switched to by the scheduler. This is safe because the task being switched to by the scheduler is guaranteed to be suspended. This ensures that task->sighand is stable while the signal is delivered to it. On parisc user_enable_single_step calls force_sig_fault and is in turn called by ptrace_request. The function ptrace_request always calls user_enable_single_step on a child that is stopped for tracing. The child being traced and not reaped ensures that child->sighand is not NULL, and that the child will not change child->sighand. Signed-off-by: "Eric W. Biederman" <[email protected]>
2019-05-27signal: Remove task parameter from force_sigEric W. Biederman7-48/+48
All of the remaining callers pass current into force_sig so remove the task parameter to make this obvious and to make misuse more difficult in the future. This also makes it clear force_sig passes current into force_sig_info. Signed-off-by: "Eric W. Biederman" <[email protected]>
2019-05-21treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 1Thomas Gleixner4-56/+4
Based on 2 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 51 franklin street fifth floor boston ma 02110 1301 usa this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option [no]_[pad]_[ctrl] any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 51 franklin street fifth floor boston ma 02110 1301 usa extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 176 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Jilayne Lovejoy <[email protected]> Reviewed-by: Steve Winslow <[email protected]> Reviewed-by: Allison Randal <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-05-21treewide: Add SPDX license identifier for missed filesThomas Gleixner1-0/+1
Add SPDX license identifiers to all files which: - Have no license information of any form - Have EXPORT_.*_SYMBOL_GPL inside which was used in the initial scan/conversion to ignore the file These files fall under the project license, GPL v2 only. The resulting SPDX license identifier is: GPL-2.0-only Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-05-19Merge tag 'mips_5.2_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linuxLinus Torvalds2-18/+11
Pull a few more MIPS updates from Paul Burton:
 "Some SGI IP27 specific PCI rework and a batch of fixes:
  - A build fix for BMIPS5000 configurations with CONFIG_HW_PERF_EVENTS=y, which also neatly removes some #ifdefery.
  - A fix to report supported ISAs correctly on older Ingenic SoCs which incorrectly indicate MIPSr2 support in their cop0 Config register.
  - Some PCI modernization for SGI IP27 systems as part of ongoing work to support some other SGI systems.
  - A fix allowing use of appended DTB files with generic kernels.
  - DMA mask fixes for SGI IP22 & Alchemy systems"
* tag 'mips_5.2_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  MIPS: Alchemy: add DMA masks for on-chip ethernet
  MIPS: SGI-IP22: provide missing dma_mask/coherent_dma_mask
  generic: fix appended dtb support
  MIPS: SGI-IP27: abstract chipset irq from bridge
  MIPS: SGI-IP27: use generic PCI driver
  MIPS: Fix Ingenic SoCs sometimes reporting wrong ISA
  MIPS: perf: Fix build with CONFIG_CPU_BMIPS5000 enabled
2019-05-17Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfsLinus Torvalds3-0/+18
Pull more vfs mount updates from Al Viro: "Propagation of new syscalls to other architectures + cosmetic change from Christian (fscontext didn't follow the convention for anon inode names)" * 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: uapi: Wire up the mount API syscalls on non-x86 arches [ver #2] uapi, x86: Fix the syscall numbering of the mount API syscalls [ver #2] uapi, fsopen: use square brackets around "fscontext" [ver #2]
2019-05-16uapi: Wire up the mount API syscalls on non-x86 arches [ver #2]David Howells3-0/+18
Wire up the mount API syscalls on non-x86 arches. Reported-by: Arnd Bergmann <[email protected]> Signed-off-by: David Howells <[email protected]> Reviewed-by: Arnd Bergmann <[email protected]> Signed-off-by: Al Viro <[email protected]>
2019-05-14MIPS: mark mult_sh_align_mod() as __always_inlineMasahiro Yamada1-2/+2
This prepares to move CONFIG_OPTIMIZE_INLINING from x86 to a common place. We need to eliminate potential issues beforehand. If it is enabled for mips, the following error is reported: arch/mips/kernel/cpu-bugs64.c: In function 'mult_sh_align_mod.constprop': arch/mips/kernel/cpu-bugs64.c:33:2: error: asm operand 1 probably doesn't match constraints [-Werror] asm volatile( ^~~ arch/mips/kernel/cpu-bugs64.c:33:2: error: asm operand 1 probably doesn't match constraints [-Werror] asm volatile( ^~~ arch/mips/kernel/cpu-bugs64.c:33:2: error: impossible constraint in 'asm' asm volatile( ^~~ arch/mips/kernel/cpu-bugs64.c:33:2: error: impossible constraint in 'asm' asm volatile( ^~~ Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Masahiro Yamada <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Boris Brezillon <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Norris <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Marek Vasut <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mathieu Malaterre <[email protected]> Cc: Miquel Raynal <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Russell King <[email protected]> Cc: Stefan Agner <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
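A hedged illustration of why plain inline is not enough here, using a simplified stand-in rather than the real mult_sh_align_mod() body: the asm operand uses an immediate ("i") constraint, which only works while the argument is still a compile-time constant, i.e. while the function is guaranteed to be inlined.
    #include <linux/compiler.h>

    /* If the compiler ever emits this out of line (which plain 'inline'
     * permits once CONFIG_OPTIMIZE_INLINING is in play), 'imm' is no
     * longer a compile-time constant and the "i" constraint fails with
     * the "impossible constraint in 'asm'" error quoted above. */
    static __always_inline void emit_ori(const unsigned int imm)
    {
        asm volatile("ori $0, $0, %0" : : "i" (imm));
    }

    void example(void)
    {
        emit_ori(4);    /* constant argument folds into the instruction */
    }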
2019-05-09MIPS: Fix Ingenic SoCs sometimes reporting wrong ISAPaul Cercueil1-0/+8
Xburst CPUs with a processor ID of PRID_COMP_INGENIC_D0 report themselves as MIPS32r2 compatible in their config0 register, but they don't actually support this ISA. Signed-off-by: Paul Cercueil <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected]
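A hedged sketch of the kind of override this implies in the CPU probing code; PRID_COMP_INGENIC_D0 comes from the commit text, while PRID_COMP_MASK and the exact location in cpu_probe() are assumptions made for illustration.
    /* somewhere in the Ingenic branch of cpu_probe() (sketch) */
    if ((c->processor_id & PRID_COMP_MASK) == PRID_COMP_INGENIC_D0) {
        /* Config claims MIPS32r2, but the core only implements r1. */
        c->isa_level &= ~MIPS_CPU_ISA_M32R2;
    }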
2019-05-09MIPS: perf: Fix build with CONFIG_CPU_BMIPS5000 enabledFlorian Fainelli1-18/+3
arch/mips/kernel/perf_event_mipsxx.c: In function 'mipsxx_pmu_enable_event': arch/mips/kernel/perf_event_mipsxx.c:326:21: error: unused variable 'event' [-Werror=unused-variable] struct perf_event *event = container_of(evt, struct perf_event, hw); ^~~~~ Fix this by making use of IS_ENABLED() to simplify the code and avoid unnecessary ifdefery. Fixes: 84002c88599d ("MIPS: perf: Fix perf with MT counting other threads") Signed-off-by: Florian Fainelli <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: [email protected] Cc: Peter Zijlstra <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] # v4.18+
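A hedged sketch of the IS_ENABLED() pattern the fix relies on; the function and helper names below are invented for illustration. Both branches stay visible to the compiler in every configuration, so the local variable is always used and the #ifdef disappears, while the dead branch is still eliminated at compile time.
    #include <linux/kconfig.h>
    #include <linux/kernel.h>
    #include <linux/perf_event.h>

    /* hypothetical helpers standing in for the real per-CPU-type setup */
    static void bmips5000_setup_mt(struct perf_event *event) { }
    static void generic_setup(struct hw_perf_event *evt) { }

    static void pmu_enable_event(struct hw_perf_event *evt)
    {
        struct perf_event *event = container_of(evt, struct perf_event, hw);

        /* compiled (then dead-code eliminated) even when the option is
         * off, so 'event' never becomes an unused variable */
        if (IS_ENABLED(CONFIG_CPU_BMIPS5000))
            bmips5000_setup_mt(event);
        else
            generic_setup(evt);
    }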
2019-05-08Merge tag 'mips_5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linuxLinus Torvalds5-145/+100
Pull MIPS updates from Paul Burton: - A set of memblock initialization improvements thanks to Serge Semin, tidying up after our conversion from bootmem to memblock back in v4.20. - Our eBPF JIT the previously supported only MIPS64r2 through MIPS64r5 is improved to also support MIPS64r6. Support for MIPS32 systems is introduced, with the caveat that it only works for programs that don't use 64 bit registers or operations - those will bail out & need to be interpreted. - Improvements to the allocation & configuration of our exception vector that should fix issues seen on some platforms using recent versions of U-Boot. - Some minor improvements to code generated for jump labels, along with enabling them by default for generic kernels. * tag 'mips_5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux: (27 commits) mips: Manually call fdt_init_reserved_mem() method mips: Make sure dt memory regions are valid mips: Perform early low memory test mips: Dump memblock regions for debugging mips: Add reserve-nomap memory type support mips: Use memblock to reserve the __nosave memory range mips: Discard post-CMA-init foreach loop mips: Reserve memory for the kernel image resources MIPS: Remove duplicate EBase configuration MIPS: Sync icache for whole exception vector MIPS: Always allocate exception vector for MIPSr2+ MIPS: Use memblock_phys_alloc() for exception vector mips: Combine memblock init and memory reservation loops mips: Discard rudiments from bootmem_init mips: Make sure kernel .bss exists in boot mem pool mips: vdso: drop unnecessary cc-ldoption Revert "MIPS: ralink: fix cpu clock of mt7621 and add dt clk devices" MIPS: generic: Enable CONFIG_JUMP_LABEL MIPS: jump_label: Use compact branches for >= r6 MIPS: jump_label: Remove redundant nops ...
2019-05-07Merge tag 'audit-pr-20190507' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/auditLinus Torvalds1-1/+1
Pull audit updates from Paul Moore:
 "We've got a reasonably broad set of audit patches for the v5.2 merge window, the highlights are below:
  - The biggest change, and the source of all the arch/* changes, is the patchset from Dmitry to help enable some of the work he is doing around PTRACE_GET_SYSCALL_INFO. To be honest, including this in the audit tree is a bit of a stretch, but it does help move audit a little further along towards proper syscall auditing for all arches, and everyone else seemed to agree that audit was a "good" spot for this to land (or maybe they just didn't want to merge it? dunno.).
  - We can now audit time/NTP adjustments.
  - We continue the work to connect associated audit records into a single event"
* tag 'audit-pr-20190507' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/audit: (21 commits)
  audit: fix a memory leak bug
  ntp: Audit NTP parameters adjustment
  timekeeping: Audit clock adjustments
  audit: purge unnecessary list_empty calls
  audit: link integrity evm_write_xattrs record to syscall event
  syscall_get_arch: add "struct task_struct *" argument
  unicore32: define syscall_get_arch()
  Move EM_UNICORE to uapi/linux/elf-em.h
  nios2: define syscall_get_arch()
  nds32: define syscall_get_arch()
  Move EM_NDS32 to uapi/linux/elf-em.h
  m68k: define syscall_get_arch()
  hexagon: define syscall_get_arch()
  Move EM_HEXAGON to uapi/linux/elf-em.h
  h8300: define syscall_get_arch()
  c6x: define syscall_get_arch()
  arc: define syscall_get_arch()
  Move EM_ARCOMPACT and EM_ARCV2 to uapi/linux/elf-em.h
  audit: Make audit_log_cap and audit_copy_inode static
  audit: connect LOGIN record to its syscall record
  ...
2019-05-05mips: Manually call fdt_init_reserved_mem() methodSerge Semin1-0/+3
Since memblock-patchset was introduced the reserved-memory nodes are supported being declared in dt-files. So these nodes are actually parsed during the arch setup procedure when the early_init_fdt_scan_reserved_mem() method is called. But due to the arch-specific boot mem_map container utilization we need to manually call the fdt_init_reserved_mem() method after all the available and reserved memory has been moved to memblock. The first function call performed before bootmem_init() by the early_init_fdt_scan_reserved_mem() routine fails due to the lack of any memblock memory regions to allocate from at that stage. Signed-off-by: Serge Semin <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Stefan Agner <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Alexandre Belloni <[email protected]> Cc: Juergen Gross <[email protected]> Cc: Serge Semin <[email protected]> Cc: [email protected] Cc: [email protected]
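A hedged sketch of the resulting ordering in the mips setup path; early_init_fdt_scan_reserved_mem() and fdt_init_reserved_mem() are the functions named in the commit, everything else is abridged placeholder context.
    #include <linux/init.h>
    #include <linux/of_fdt.h>
    #include <linux/of_reserved_mem.h>

    static void __init setup_ordering_sketch(void)
    {
        /* records the /reserved-memory ranges; its own attempt to run
         * fdt_init_reserved_mem() fails, memblock has no memory yet */
        early_init_fdt_scan_reserved_mem();

        /* ... arch code moves the boot_mem_map regions into memblock ... */

        /* now the reserved-memory nodes can really be initialized */
        fdt_init_reserved_mem();
    }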
2019-05-05mips: Make sure dt memory regions are validSerge Semin1-1/+13
There are situations when memory regions coming from dts may be too big for the platform physical address space. This especially concerns XPA-capable systems. Bootloader may determine more than 4GB memory available and pass it to the kernel over dts memory node, while kernel is built without XPA/64BIT support. In this case the region may either simply be truncated by add_memory_region() method or by u64->phys_addr_t type casting. But in worst case the method can even drop the memory region if it exceeds PHYS_ADDR_MAX size. So lets make sure the retrieved from dts memory regions are valid, and if some of them aren't, just manually truncate them with a warning printed out. Signed-off-by: Serge Semin <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Stefan Agner <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Alexandre Belloni <[email protected]> Cc: Juergen Gross <[email protected]> Cc: Serge Semin <[email protected]> Cc: [email protected] Cc: [email protected]
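A minimal sketch of the truncation idea, assuming the add_memory_region()/BOOT_MEM_RAM interface mips used at the time; the helper name and messages are invented for illustration.
    #include <linux/init.h>
    #include <linux/printk.h>
    #include <linux/types.h>
    #include <asm/bootinfo.h>

    static void __init add_dt_region(u64 base, u64 size)
    {
        if (!size)
            return;
        if (base > PHYS_ADDR_MAX) {
            pr_warn("memory region %llx @ %llx not addressable, ignoring\n",
                    (unsigned long long)size, (unsigned long long)base);
            return;
        }
        if (size - 1 > PHYS_ADDR_MAX - base) {
            pr_warn("truncating memory region at %llx\n",
                    (unsigned long long)base);
            size = PHYS_ADDR_MAX - base + 1;
        }
        add_memory_region(base, size, BOOT_MEM_RAM);
    }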
2019-05-03mips: Perform early low memory testSerge Semin1-0/+2
memblock subsystem provides a method to optionally test the passed memory region in case if it was requested via special kernel boot argument. Lets add the function at the bottom of the arch_mem_init() method. Testing at this point in the boot sequence should be safe since all critical areas are now reserved and a minimum of allocations have been done. Reviewed-by: Matt Redfearn <[email protected]> Signed-off-by: Serge Semin <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Stefan Agner <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Alexandre Belloni <[email protected]> Cc: Juergen Gross <[email protected]> Cc: Serge Semin <[email protected]> Cc: [email protected] Cc: [email protected]
2019-05-03mips: Dump memblock regions for debuggingSerge Semin1-0/+2
It is useful to have the whole memblock memory space printed to console when basic memlock initializations are done. It can be performed by ready-to-use method memblock_dump_all(), which prints the available and reserved memory spaces if memblock=debug kernel parameter is specified. Lets call it at the very end of arch_mem_init() function, when all memblock memory and reserved regions are defined, but before any serious allocation is performed. Signed-off-by: Serge Semin <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Stefan Agner <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Alexandre Belloni <[email protected]> Cc: Juergen Gross <[email protected]> Cc: Serge Semin <[email protected]> Cc: [email protected] Cc: [email protected]
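A hedged sketch of what the tail of arch_mem_init() ends up doing after these two patches; the boundaries passed to early_memtest() are illustrative, and both calls are no-ops unless memtest= or memblock=debug is given on the kernel command line.
    #include <linux/init.h>
    #include <linux/memblock.h>
    #include <linux/pfn.h>

    static void __init arch_mem_init_tail_sketch(void)
    {
        /* all critical regions are already reserved at this point */
        memblock_dump_all();    /* prints only with memblock=debug */
        early_memtest(PFN_PHYS(min_low_pfn), PFN_PHYS(max_low_pfn));
    }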
2019-05-02mips: Add reserve-nomap memory type supportSerge Semin2-1/+11
It might be necessary to prevent the virtual mapping creation for a requested memory region. For instance there is a "no-map" property indicating exactly this feature. In this case we need to not only reserve the specified region by pretending it doesn't exist in the memory space, but completely remove the range from system just by removing it from memblock. The same way it's done in default early_init_dt_reserve_memory_arch() method. Signed-off-by: Serge Semin <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: Matt Redfearn <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Stefan Agner <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Alexandre Belloni <[email protected]> Cc: Juergen Gross <[email protected]> Cc: [email protected] Cc: [email protected]
2019-05-02mips: Use memblock to reserve the __nosave memory rangeSerge Semin1-2/+3
Originally before legacy bootmem was removed, the memory for the range was correctly reserved by reserve_bootmem_region(). But since memblock has been selected for early memory allocation the function can be utilized only after paging is fully initialized (as it is done by memblock_free_all() function). So calling it from arch_mem_init() method is prone to errors, and at this stage we need to reserve the memory in the memblock allocator. Signed-off-by: Serge Semin <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: Matt Redfearn <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Stefan Agner <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Alexandre Belloni <[email protected]> Cc: Juergen Gross <[email protected]> Cc: [email protected] Cc: [email protected]
2019-05-02mips: Discard post-CMA-init foreach loopSerge Semin1-5/+0
Really the loop is pointless, since it walks over memblock-reserved memory regions and mark them as reserved in memblock. Before bootmem was removed from the kernel, this loop had been used to map the memory reserved by CMA into the legacy bootmem allocator. But now the early memory allocator is memblock, which is used by CMA for reservation, so we don't need any mapping anymore. Reviewed-by: Matt Redfearn <[email protected]> Signed-off-by: Serge Semin <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Stefan Agner <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Alexandre Belloni <[email protected]> Cc: Juergen Gross <[email protected]> Cc: [email protected] Cc: [email protected]
2019-05-02mips: Reserve memory for the kernel image resourcesSerge Semin1-27/+3
The reserved_end variable had been used by the bootmem_init() code to find the lowest limit of memory available for the memmap blob. The original code just tried to find a free memory space higher than where the kernel was placed. This limitation seems justified for the memmap region search process, but I can't see any obvious reason to reserve the unused space below the kernel, seeing that some platforms place it much higher than the standard 1MB. Moreover the RELOCATION config enables it to be loaded at any memory address. So let's reserve the memory occupied by the kernel only, leaving the region below it free for allocations. After doing this we can now discard the code freeing the space between the kernel _text and VMLINUX_LOAD_ADDRESS symbols, since it's going to be free anyway (unless marked as reserved by platforms). Signed-off-by: Serge Semin <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: Matt Redfearn <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Stefan Agner <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Alexandre Belloni <[email protected]> Cc: Juergen Gross <[email protected]> Cc: [email protected] Cc: [email protected]
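A minimal sketch of the reservation this describes, using the usual kernel linker symbols; the exact rounding in the real patch may differ.
    #include <linux/memblock.h>
    #include <asm/sections.h>

    /* in bootmem_init(): reserve only what the kernel image occupies,
     * leaving memory below the load address free for allocations */
    memblock_reserve(__pa_symbol(&_text),
                     __pa_symbol(&_end) - __pa_symbol(&_text));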
2019-05-02MIPS: Remove duplicate EBase configurationPaul Burton1-17/+3
Clean up our configuration of the EBase register by making configure_exception_vector() write to it unconditionally on systems implementing MIPSr2 or higher, and removing the duplicate code in per_cpu_trap_init(). The latter would have duplicated work on systems with vectored interrupts, and didn't set BEV for safety like the configure_exception_vector() version of the code does. Signed-off-by: Paul Burton <[email protected]> Reviewed-by: Serge Semin <[email protected]> Tested-by: Serge Semin <[email protected]> Cc: [email protected]
2019-05-02MIPS: Sync icache for whole exception vectorPaul Burton1-1/+1
Rather than performing cache flushing for a fixed 0x400 bytes, use the actual size of the vector in order to ensure we cover all emitted code on systems that make use of vectored interrupts. Signed-off-by: Paul Burton <[email protected]> Reviewed-by: Philippe Mathieu-Daudé <[email protected]> Reviewed-by: Serge Semin <[email protected]> Tested-by: Serge Semin <[email protected]> Cc: [email protected]
2019-05-02MIPS: Always allocate exception vector for MIPSr2+Paul Burton1-20/+15
Currently we allocate the exception vector on systems which use a vectored interrupt mode, but otherwise attempt to reuse whatever exception vector the bootloader uses. This can be problematic for a number of reasons:
 1) The memory isn't properly marked reserved in the memblock allocator. We've relied on the fact that EBase is generally in the memory below the kernel image which we don't free, but this is about to change.
 2) Recent versions of U-Boot place their exception vector high in kseg0, in memory which isn't protected by being lower than the kernel anyway & can end up being clobbered.
 3) We are unnecessarily reliant upon there being memory at the address EBase points to upon entry to the kernel. This is often the case, but if the bootloader doesn't configure EBase & leaves it with its default value then we rely upon there being memory at physical address 0 for no good reason.
Improve this situation by allocating the exception vector in all cases when running on MIPSr2 or higher, and reserving the memory for MIPSr1 or lower. This ensures we don't clobber the exception vector in any configuration, and for MIPSr2 & higher removes the need for memory at physical address 0. Signed-off-by: Paul Burton <[email protected]> Reviewed-by: Serge Semin <[email protected]> Tested-by: Serge Semin <[email protected]> Cc: [email protected]
2019-05-02MIPS: Use memblock_phys_alloc() for exception vectorPaul Burton1-4/+4
Allocate the exception vector using memblock_phys_alloc() which gives us a physical address, rather than the previous convoluted setup which obtained a virtual address using memblock_alloc(), converted it to a physical address & then back to a virtual address. Signed-off-by: Paul Burton <[email protected]> Reviewed-by: Philippe Mathieu-Daudé <[email protected]> Reviewed-by: Serge Semin <[email protected]> Tested-by: Serge Semin <[email protected]> Cc: [email protected]
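A hedged sketch combining the last three patches (vector allocation, EBase cleanup and the icache flush); 'vec_size' and the handler-copy step are placeholders for whatever the real trap_init() uses, not literal code.
    #include <linux/memblock.h>
    #include <asm/cacheflush.h>

    /* sketch of the MIPSr2+ path in trap_init() */
    phys_addr_t pa = memblock_phys_alloc(vec_size, PAGE_SIZE);

    if (!pa)
        panic("Failed to allocate exception vector");
    ebase = (unsigned long)phys_to_virt(pa);

    /* ... exception handlers are copied to 'ebase' here ... */

    /* flush the real size of the vector, not a fixed 0x400 bytes */
    local_flush_icache_range(ebase, ebase + vec_size);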
2019-04-24mips: Combine memblock init and memory reservation loopsSerge Semin1-41/+7
Before bootmem was completely removed from the kernel, the last loop in bootmem_init() had been used to reserve the correspondingly marked regions, initialize sparsemem sections and free the low memory pages, which would then be used for early memory allocations. After the bootmem removal patchset was merged, the loop was left to do only the first two things, but it didn't do them quite well.
First of all it leaves the BOOT_MEM_INIT_RAM memory types unreserved, which is definitely a bug (although it isn't noticeable because that type is used by the kernel region only, which is fully marked as reserved).
Secondly the reservation is supposed to be done for any memory, including highmem. (I couldn't figure out why highmem was ignored in the first place, since platforms and dts files may declare any memory region for reservation.)
Thirdly the reserved_end variable had been used here to avoid accidentally freeing memory occupied by the kernel. Since we already reserve the corresponding region higher up in this method, there is no need to use the variable here anymore.
Fourthly sparsemem should be aware of all the memory types in the system, including ROM_DATA, even if it is going to stay reserved for the whole system uptime.
Finally, after all these notes are fixed, the memory reservation loop can be freely merged into the memory installation loop, as is done in this patch. Signed-off-by: Serge Semin <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: Matt Redfearn <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Stefan Agner <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Alexandre Belloni <[email protected]> Cc: Juergen Gross <[email protected]> Cc: [email protected] Cc: [email protected]
2019-04-24mips: Discard rudiments from bootmem_initSerge Semin1-20/+5
There is pointless code left in the bootmem_init() method since the bootmem allocator removal. The first part resides in the PFN ranges calculation loop: the conditional expressions and the continue operator are useless there, since nothing is done after them. The second part is in the RAM ranges installation loop. We can simplify the conditions cascade a bit without redefining much of the logic, to reduce the code length. In particular the end boundary value can be verified, after the possible reduction, to be below max_low_pfn. Signed-off-by: Serge Semin <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: Matt Redfearn <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Stefan Agner <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Alexandre Belloni <[email protected]> Cc: Juergen Gross <[email protected]> Cc: [email protected] Cc: [email protected]
2019-04-24mips: Make sure kernel .bss exists in boot mem poolSerge Semin1-0/+3
Current MIPS platform code makes sure the kernel text, data and init sections are added to the boot memory map pool right after the arch-specific memory setup method has been executed. But for some reason the MIPS platform code skipped the kernel .bss section, which definitely should be in the boot mem pool as well in any case. Let's fix this just by adding the space between __bss_start and __bss_stop. Reviewed-by: Matt Redfearn <[email protected]> Signed-off-by: Serge Semin <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Thomas Bogendoerfer <[email protected]> Cc: Huacai Chen <[email protected]> Cc: Stefan Agner <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Alexandre Belloni <[email protected]> Cc: Juergen Gross <[email protected]> Cc: [email protected] Cc: [email protected]
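A minimal sketch of the addition, assuming the add_memory_region()/BOOT_MEM_RAM interface used for the other kernel sections in this era's setup code; the real patch may round the boundaries to page size.
    #include <asm/bootinfo.h>
    #include <asm/sections.h>

    /* alongside the existing text/data/init ranges */
    add_memory_region(__pa_symbol(&__bss_start),
                      __pa_symbol(&__bss_stop) - __pa_symbol(&__bss_start),
                      BOOT_MEM_RAM);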
2019-04-23Merge tag 'syscalls-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-genericLinus Torvalds3-0/+12
Pull syscall numbering updates from Arnd Bergmann:
 "arch: add pidfd and io_uring syscalls everywhere
  This comes a bit late, but should be in 5.1 anyway: we want the newly added system calls to be synchronized across all architectures in the release.
  I hope that in the future, any newly added system calls can be added to all architectures at the same time, and tested there while they are in linux-next, avoiding dependencies between the architecture maintainer trees and the tree that contains the new system call"
* tag 'syscalls-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic:
  arch: add pidfd and io_uring syscalls everywhere
2019-04-22Merge tag 'mips_fixes_5.1_3' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linuxLinus Torvalds1-1/+1
Pull MIPS fixes from Paul Burton:
 "A couple more MIPS fixes:
  - Fix indirect syscall tracing & seccomp filtering for big endian MIPS64 kernels, which previously loaded the syscall number incorrectly & would always use zero.
  - Fix performance counter IRQ setup for Atheros/ath79 SoCs, allowing perf to function on those systems.
  And not really a fix, but a useful addition:
  - Add a Broadcom mailing list to the MAINTAINERS entry for BMIPS systems to allow relevant engineers to track patch submissions"
* tag 'mips_fixes_5.1_3' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  MIPS: perf: ath79: Fix perfcount IRQ assignment
  MIPS: scall64-o32: Fix indirect syscall number load
  MAINTAINERS: BMIPS: Add internal Broadcom mailing list
2019-04-15MIPS: scall64-o32: Fix indirect syscall number loadAurelien Jarno1-1/+1
Commit 4c21b8fd8f14 (MIPS: seccomp: Handle indirect system calls (o32)) added indirect syscall detection for O32 processes running on MIPS64, but it did not work correctly for big endian kernel/processes. The reason is that the syscall number is loaded from ARG1 using the lw instruction while this is a 64-bit value, so zero is loaded instead of the syscall number. Fix the code by using the ld instruction instead. When running a 32-bit processes on a 64 bit CPU, the values are properly sign-extended, so it ensures the value passed to syscall_trace_enter is correct. Recent systemd versions with seccomp enabled whitelist the getpid syscall for their internal processes (e.g. systemd-journald), but call it through syscall(SYS_getpid). This fix therefore allows O32 big endian systems with a 64-bit kernel to run recent systemd versions. Signed-off-by: Aurelien Jarno <[email protected]> Cc: <[email protected]> # v3.15+ Reviewed-by: Philippe Mathieu-Daudé <[email protected]> Signed-off-by: Paul Burton <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: James Hogan <[email protected]> Cc: [email protected] Cc: [email protected]
2019-04-15arch: add pidfd and io_uring syscalls everywhereArnd Bergmann3-0/+12
Add the io_uring and pidfd_send_signal system calls to all architectures. These system calls are designed to handle both native and compat tasks, so all entries are the same across architectures; only arm-compat and the generic table still use an old format. Acked-by: Michael Ellerman <[email protected]> (powerpc) Acked-by: Heiko Carstens <[email protected]> (s390) Acked-by: Geert Uytterhoeven <[email protected]> Signed-off-by: Arnd Bergmann <[email protected]>
2019-04-09Merge tag 'mips_fixes_5.1_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linuxLinus Torvalds1-1/+2
Pull MIPS fixes from Paul Burton:
 "A few minor MIPS fixes:
  - Provide struct pt_regs * from get_irq_regs() to kgdb_nmicallback() when handling an IPI triggered by kgdb_roundup_cpus(), matching the behavior of other architectures & resolving kgdb issues for SMP systems.
  - Defer a pointer dereference until after a NULL check in the irq_shutdown callback for SGI IP27 HUB interrupts.
  - A defconfig update for the MSCC Ocelot to enable some necessary drivers"
* tag 'mips_fixes_5.1_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  MIPS: generic: Add switchdev, pinctrl and fit to ocelot_defconfig
  MIPS: SGI-IP27: Fix use of unchecked pointer in shutdown_bridge_irq
  MIPS: KGDB: fix kgdb support for SMP platforms.
2019-04-09MIPS: jump_label: Use compact branches for >= r6Paul Burton1-5/+25
MIPSr6 introduced compact branches which have no delay slots. Make use of them for jump labels in order to avoid the need for a nop to fill the branch or jump delay slot, saving 4 bytes of code for each static branch. Signed-off-by: Paul Burton <[email protected]> Cc: [email protected]