path: root/arch/powerpc/sysdev/msi_bitmap.c
Age | Commit message | Author | Files | Lines
2019-06-05 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 441 | Thomas Gleixner | 1 | -6/+1
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation version 2 of the license extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 315 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Allison Randal <[email protected]> Reviewed-by: Armijn Hemel <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
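For illustration, a hedged sketch of what this rule amounts to in a C source file (the exact wording of the removed notice varies from file to file):

/*
 * Before: a multi-line GPLv2 notice near the top of the file, e.g.
 *
 *   This program is free software; you can redistribute it and/or modify
 *   it under the terms of the GNU General Public License as published by
 *   the Free Software Foundation; version 2 of the License.
 */

// After: one machine-readable tag as the first line of the file
// SPDX-License-Identifier: GPL-2.0-only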
2019-03-12 | treewide: add checks for the return value of memblock_alloc*() | Mike Rapoport | 1 | -0/+3
Add check for the return value of memblock_alloc*() functions and call panic() in case of error. The panic message repeats the one used by panicing memblock allocators with adjustment of parameters to include only relevant ones. The replacement was mostly automated with semantic patches like the one below with manual massaging of format strings. @@ expression ptr, size, align; @@ ptr = memblock_alloc(size, align); + if (!ptr) + panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__, size, align); [[email protected]: use '%pa' with 'phys_addr_t' type] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: fix format strings for panics after memblock_alloc] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: don't panic if the allocation in sparse_buffer_init fails] Link: http://lkml.kernel.org/r/20190131074018.GD28876@rapoport-lnx [[email protected]: fix xtensa printk warning] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Anders Roxell <[email protected]> Reviewed-by: Guo Ren <[email protected]> [c-sky] Acked-by: Paul Burton <[email protected]> [MIPS] Acked-by: Heiko Carstens <[email protected]> [s390] Reviewed-by: Juergen Gross <[email protected]> [Xen] Reviewed-by: Geert Uytterhoeven <[email protected]> [m68k] Acked-by: Max Filippov <[email protected]> [xtensa] Cc: Catalin Marinas <[email protected]> Cc: Christophe Leroy <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Guo Ren <[email protected]> Cc: Mark Salter <[email protected]> Cc: Matt Turner <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Rob Herring <[email protected]> Cc: Rob Herring <[email protected]> Cc: Russell King <[email protected]> Cc: Stafford Horne <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
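Applied to a boot-time allocation like the one in this file, the generated pattern looks roughly like the sketch below (function and variable names are illustrative, not the actual msi_bitmap.c code):

#include <linux/bitops.h>
#include <linux/cache.h>
#include <linux/kernel.h>
#include <linux/memblock.h>

/* Illustrative only: allocate a boot-time bitmap and panic on failure,
 * mirroring what the semantic patch above inserts after memblock_alloc(). */
static unsigned long *alloc_irq_bitmap_early(unsigned int irq_count)
{
	size_t size = BITS_TO_LONGS(irq_count) * sizeof(unsigned long);
	unsigned long *bitmap = memblock_alloc(size, SMP_CACHE_BYTES);

	if (!bitmap)
		panic("%s: Failed to allocate %zu bytes\n", __func__, size);

	return bitmap;
}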
2018-10-31 | memblock: stop using implicit alignment to SMP_CACHE_BYTES | Mike Rapoport | 1 | -1/+1
When memblock allocation APIs are called with align = 0, the alignment is implicitly set to SMP_CACHE_BYTES. Implicit alignment is done deep in the memblock allocator and it can come as a surprise. Not that such an alignment would be wrong even when used incorrectly but it is better to be explicit for the sake of clarity and the principle of least surprise.

Replace all such uses of memblock APIs with the 'align' parameter explicitly set to SMP_CACHE_BYTES and stop implicit alignment assignment in the memblock internal allocation functions. For the case when memblock APIs are used via helper functions, e.g. like iommu_arena_new_node() in Alpha, the helper functions were detected with Coccinelle's help and then manually examined and updated where appropriate. The direct memblock API users were updated using the semantic patch below:

@@
expression size, min_addr, max_addr, nid;
@@
(
|
- memblock_alloc_try_nid_raw(size, 0, min_addr, max_addr, nid)
+ memblock_alloc_try_nid_raw(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
|
- memblock_alloc_try_nid_nopanic(size, 0, min_addr, max_addr, nid)
+ memblock_alloc_try_nid_nopanic(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
|
- memblock_alloc_try_nid(size, 0, min_addr, max_addr, nid)
+ memblock_alloc_try_nid(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
|
- memblock_alloc(size, 0)
+ memblock_alloc(size, SMP_CACHE_BYTES)
|
- memblock_alloc_raw(size, 0)
+ memblock_alloc_raw(size, SMP_CACHE_BYTES)
|
- memblock_alloc_from(size, 0, min_addr)
+ memblock_alloc_from(size, SMP_CACHE_BYTES, min_addr)
|
- memblock_alloc_nopanic(size, 0)
+ memblock_alloc_nopanic(size, SMP_CACHE_BYTES)
|
- memblock_alloc_low(size, 0)
+ memblock_alloc_low(size, SMP_CACHE_BYTES)
|
- memblock_alloc_low_nopanic(size, 0)
+ memblock_alloc_low_nopanic(size, SMP_CACHE_BYTES)
|
- memblock_alloc_from_nopanic(size, 0, min_addr)
+ memblock_alloc_from_nopanic(size, SMP_CACHE_BYTES, min_addr)
|
- memblock_alloc_node(size, 0, nid)
+ memblock_alloc_node(size, SMP_CACHE_BYTES, nid)
)

[[email protected]: changelog update] [[email protected]: coding-style fixes] [[email protected]: fix missed uses of implicit alignment] Link: http://lkml.kernel.org/r/20181016133656.GA10925@rapoport-lnx Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Suggested-by: Michal Hocko <[email protected]> Acked-by: Paul Burton <[email protected]> [MIPS] Acked-by: Michael Ellerman <[email protected]> [powerpc] Acked-by: Michal Hocko <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Matt Turner <[email protected]> Cc: Michal Simek <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Russell King <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-10-31 | mm: remove include/linux/bootmem.h | Mike Rapoport | 1 | -1/+1
Move remaining definitions and declarations from include/linux/bootmem.h into include/linux/memblock.h and remove the redundant header. The includes were replaced with the semantic patch below and then semi-automated removal of duplicated '#include <linux/memblock.h> @@ @@ - #include <linux/bootmem.h> + #include <linux/memblock.h> [[email protected]: dma-direct: fix up for the removal of linux/bootmem.h] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: powerpc: fix up for removal of linux/bootmem.h] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal] Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Stephen Rothwell <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Matt Turner <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Burton <[email protected]> Cc: Richard Kuo <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Serge Semin <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-10-31 | memblock: remove _virt from APIs returning virtual address | Mike Rapoport | 1 | -1/+1
The conversion is done using sed -i 's@memblock_virt_alloc@memblock_alloc@g' \ $(git grep -l memblock_virt_alloc) Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Matt Turner <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Michal Simek <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Burton <[email protected]> Cc: Richard Kuo <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Serge Semin <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2018-07-04 | powerpc/msi: Remove VLA usage | Kees Cook | 1 | -7/+8
In the quest to remove all stack VLA usage from the kernel[1], this switches from an unchanging variable to a constant expression to eliminate the VLA generation. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
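The general shape of such a change, as a hedged sketch rather than the actual msi_bitmap.c hunk (the names and sizes below are made up): an on-stack array sized by a variable is a VLA even if the variable never changes, and switching the size to a compile-time constant removes it.

#include <linux/bitmap.h>

/* Before: nbits is a variable, so the bitmap is a stack VLA even though
 * its value never changes at runtime. */
static void selftest_before(void)
{
	int nbits = 128;			/* illustrative */
	DECLARE_BITMAP(bmp, nbits);

	bitmap_zero(bmp, nbits);
}

/* After: a constant expression, so the array size is fixed at compile
 * time and no VLA is generated. */
#define TEST_IRQS	128			/* illustrative */

static void selftest_after(void)
{
	DECLARE_BITMAP(bmp, TEST_IRQS);

	bitmap_zero(bmp, TEST_IRQS);
}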
2018-04-05 | headers: untangle kmemleak.h from mm.h | Randy Dunlap | 1 | -0/+1
Currently <linux/slab.h> #includes <linux/kmemleak.h> for no obvious reason. It looks like it's only a convenience, so remove kmemleak.h from slab.h and add <linux/kmemleak.h> to any users of kmemleak_* that don't already #include it. Also remove <linux/kmemleak.h> from source files that do not use it. This is tested on i386 allmodconfig and x86_64 allmodconfig. It would be good to run it through the 0day bot for other $ARCHes. I have neither the horsepower nor the storage space for the other $ARCHes. Update: This patch has been extensively build-tested by both the 0day bot & kisskb/ozlabs build farms. Both of them reported 2 build failures for which patches are included here (in v2). [ slab.h is the second most used header file after module.h; kernel.h is right there with slab.h. There could be some minor error in the counting due to some #includes having comments after them and I didn't combine all of those. ] [[email protected]: security/keys/big_key.c needs vmalloc.h, per sfr] Link: http://lkml.kernel.org/r/[email protected] Link: http://kisskb.ellerman.id.au/kisskb/head/13396/ Signed-off-by: Randy Dunlap <[email protected]> Reviewed-by: Ingo Molnar <[email protected]> Reported-by: Michael Ellerman <[email protected]> [2 build failures] Reported-by: Fengguang Wu <[email protected]> [2 build failures] Reviewed-by: Andrew Morton <[email protected]> Cc: Wei Yongjun <[email protected]> Cc: Luis R. Rodriguez <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Mimi Zohar <[email protected]> Cc: John Johansen <[email protected]> Cc: Stephen Rothwell <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-08-23 | powerpc: Convert to using %pOF instead of full_name | Rob Herring | 1 | -2/+2
Now that we have a custom printf format specifier, convert users of full_name to use %pOF instead. This is preparation to remove storing of the full path string for each node. Signed-off-by: Rob Herring <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Anatolij Gustschin <[email protected]> Cc: Scott Wood <[email protected]> Cc: Kumar Gala <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: [email protected] Reviewed-by: Tyrel Datwyler <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
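A hedged sketch of what such a call-site conversion looks like (not the literal msi_bitmap.c diff): %pOF asks the printk core to format the node's path, so the code no longer needs the stored full_name string.

#include <linux/of.h>
#include <linux/printk.h>

static void report_msi_node(struct device_node *np, int count)
{
	/* Before: relied on the full path string stored in the node:
	 *	pr_debug("Found %d MSIs for %s\n", count, np->full_name);
	 */

	/* After: let the printk core format the node path via %pOF. */
	pr_debug("Found %d MSIs for %pOF\n", count, np);
}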
2016-08-02 | treewide: replace obsolete _refok by __ref | Fabian Frederick | 1 | -1/+1
There was only one use of __initdata_refok and __exit_refok. __init_refok was used 46 times against 82 for __ref. Those definitions are obsolete since commit 312b1485fb50 ("Introduce new section reference annotations tags: __ref, __refdata, __refconst"). This patch removes the following compatibility definitions and replaces them treewide:

/* compatibility defines */
#define __init_refok __ref
#define __initdata_refok __refdata
#define __exit_refok __ref

I can also provide separate patches if necessary. (One patch per tree and check in 1 month or 2 to remove old definitions) [[email protected]: coding-style fixes] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Fabian Frederick <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Sam Ravnborg <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-10-28 | powerpc/msi: Fix section mismatch warning in msi_bitmap_alloc() | Denis Kirjanov | 1 | -1/+1
Building with CONFIG_DEBUG_SECTION_MISMATCH gives the following warning:

  The function .msi_bitmap_alloc() references the function __init .memblock_virt_alloc_try_nid().

Memory allocation in msi_bitmap_alloc() uses either slab allocator or memblock boot time allocator depending on slab_is_available(). So the section mismatch warning is correct, but in practice there is no bug so mark msi_bitmap_alloc() as __init_refok. Signed-off-by: Denis Kirjanov <[email protected]> [mpe: Flesh out change log a bit] Signed-off-by: Michael Ellerman <[email protected]>
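A rough sketch of the pattern the warning is about (not the exact msi_bitmap.c code, and using present-day annotation and allocator names): the same function runs both before and after slab is up, so its reference to a boot-time (__init) allocator is deliberate, and the annotation tells modpost so.

#include <linux/cache.h>
#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/slab.h>

/* __ref (spelled __init_refok at the time of this commit) suppresses the
 * section mismatch warning for the intentional call into the boot-time
 * allocator below. */
static void *__ref alloc_early_or_late(size_t size)
{
	if (slab_is_available())
		return kzalloc(size, GFP_KERNEL);

	/* Only reached during early boot, before slab exists. */
	return memblock_alloc(size, SMP_CACHE_BYTES);
}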
2015-10-05 | powerpc/msi: Free the bitmap if it was slab allocated | Denis Kirjanov | 1 | -4/+12
During the MSI bitmap test on boot kmemleak spews the following trace:

unreferenced object 0xc00000016e86c900 (size 64):
  comm "swapper/0", pid 1, jiffies 4294893173 (age 518.024s)
  hex dump (first 32 bytes):
    00 00 01 ff 7f ff 7f 37 00 00 00 00 00 00 00 00  .......7........
    ff ff ff ff ff ff ff ff 01 ff ff ff ff ff ff ff  ................
  backtrace:
    [<c00000000003eebc>] .zalloc_maybe_bootmem+0x3c/0x380
    [<c000000000042d6c>] .msi_bitmap_alloc+0x3c/0xb0
    [<c000000000a9aff8>] .msi_bitmap_selftest+0x30/0x2b4
    [<c0000000000090f4>] .do_one_initcall+0xd4/0x270
    [<c000000000a8e250>] .kernel_init_freeable+0x1a0/0x280
    [<c000000000009b5c>] .kernel_init+0x1c/0x120
    [<c000000000007fbc>] .ret_from_kernel_thread+0x58/0x9c

Add a flag to msi_bitmap for tracking allocations from slab and memblock so we can properly free/handle memory in msi_bitmap_free(). Signed-off-by: Denis Kirjanov <[email protected]> Reviewed-by: Catalin Marinas <[email protected]> [mpe: Reword changelog & use bitmap_from_slab in the if] Signed-off-by: Michael Ellerman <[email protected]>
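A hedged sketch of the approach described above; the struct, field and function names are illustrative rather than the actual msi_bitmap ones.

#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/memblock.h>
#include <linux/slab.h>
#include <linux/types.h>

struct irq_bitmap_sketch {
	unsigned long	*bitmap;
	unsigned int	irq_count;
	bool		bitmap_from_slab;	/* where did the memory come from? */
};

static int sketch_alloc(struct irq_bitmap_sketch *bmp)
{
	size_t size = BITS_TO_LONGS(bmp->irq_count) * sizeof(unsigned long);

	bmp->bitmap_from_slab = slab_is_available();
	if (bmp->bitmap_from_slab)
		bmp->bitmap = kzalloc(size, GFP_KERNEL);
	else
		bmp->bitmap = memblock_alloc(size, SMP_CACHE_BYTES);

	return bmp->bitmap ? 0 : -ENOMEM;
}

static void sketch_free(struct irq_bitmap_sketch *bmp)
{
	/* Only slab-backed memory is returned with kfree(); a bitmap that
	 * came from memblock during early boot is left in place. */
	if (bmp->bitmap_from_slab)
		kfree(bmp->bitmap);
	bmp->bitmap = NULL;
}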
2014-10-15 | powerpc/msi: Use WARN_ON() in msi bitmap selftests | Michael Ellerman | 1 | -30/+24
As demonstrated in the previous commit, the failure message from the msi bitmap selftests is a bit subtle, it's easy to miss a failure in a busy boot log. So drop our check() macro and use WARN_ON() instead. This necessitates inverting all the conditions as well. Signed-off-by: Michael Ellerman <[email protected]>
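Concretely, the conversion has this shape; a hedged sketch in which the check() macro and the selftest body are simplified stand-ins for the real ones.

#include <linux/bug.h>
#include <linux/init.h>
#include <linux/printk.h>
#include <asm/msi_bitmap.h>

/* Old style (illustrative): print a quiet FAIL line that is easy to
 * overlook in a busy boot log. */
#define check(x) \
	do { if (!(x)) pr_err("msi bitmap selftest FAIL: %s\n", #x); } while (0)

static void __init selftest_style_demo(struct msi_bitmap *bmp)
{
	/* Before: assert the condition that should hold. */
	check(msi_bitmap_alloc_hwirqs(bmp, 1) >= 0);

	/* After: WARN_ON() prints a full backtrace, and it fires when its
	 * argument is true, so the condition is inverted. */
	WARN_ON(msi_bitmap_alloc_hwirqs(bmp, 1) < 0);
}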
2014-10-15 | powerpc/msi: Fix the msi bitmap alignment tests | Michael Ellerman | 1 | -8/+18
When we added the alignment tests recently we failed to check they were actually passing - oops. They weren't passing, because the bitmap was full. We should also be a bit more careful when checking the return code, a negative error return could be divisible by our alignment value. Fixes: b0345bbc6d09 ("powerpc/msi: Improve IRQ bitmap allocator") Signed-off-by: Michael Ellerman <[email protected]>
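The return-code subtlety, sketched below (illustrative, not the actual selftest hunk): an error such as -ENOMEM (-12) happens to be divisible by a typical alignment like 4, so testing alignment alone could mask a failed allocation.

#include <linux/bug.h>
#include <linux/init.h>
#include <asm/msi_bitmap.h>

static void __init check_aligned_alloc(struct msi_bitmap *bmp)
{
	/* Ask for 4 IRQs, which should come back naturally aligned. */
	int rc = msi_bitmap_alloc_hwirqs(bmp, 4);

	/* "rc % 4 == 0" alone would pass for -ENOMEM (-12) even though the
	 * allocation failed, so check for an error as well. */
	WARN_ON(rc < 0 || rc % 4 != 0);
}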
2014-10-08 | powerpc/msi: Improve IRQ bitmap allocator | Ian Munsie | 1 | -11/+25
Currently msi_bitmap_alloc_hwirqs() will round up any IRQ allocation requests to the nearest power of 2. eg. ask for 5 IRQs and you'll get 8. This wastes a lot of IRQs which can be a scarce resource. For cxl we may require multiple IRQs for every context that is attached to the accelerator. There may be 1000s of contexts attached, hence we can easily run out of IRQs, especially if we are needlessly wasting them. This changes the msi_bitmap_alloc_hwirqs() to allocate only the required number of IRQs, hence avoiding this wastage. It keeps the natural alignment requirement though. Signed-off-by: Ian Munsie <[email protected]> Signed-off-by: Michael Neuling <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
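A hedged sketch of the allocation strategy described (not the actual implementation, which among other things also takes the bitmap lock): find a run of exactly num free bits whose start is naturally aligned, and mark only those num bits used, so nothing is rounded up and wasted.

#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/errno.h>

static int alloc_hwirqs_sketch(unsigned long *bitmap, unsigned int nbits, int num)
{
	/* Natural alignment: the start is aligned to the next power of two
	 * at or above num (e.g. a 5-IRQ request starts on a multiple of 8),
	 * but only num bits are actually consumed. */
	unsigned long align_mask = (1UL << get_count_order(num)) - 1;
	unsigned long offset;

	offset = bitmap_find_next_zero_area(bitmap, nbits, 0, num, align_mask);
	if (offset >= nbits)
		return -ENOMEM;		/* no suitably aligned free run */

	bitmap_set(bitmap, offset, num);
	return offset;
}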
2014-09-25 | powerpc: Make a bunch of things static | Anton Blanchard | 1 | -3/+3
Signed-off-by: Anton Blanchard <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2014-04-09 | powerpc: Use of_node_init() for the fakenode in msi_bitmap.c | Li Zhong | 1 | -1/+1
This patch uses of_node_init() to initialize the kobject in the fake node used in test_of_node(), to avoid the following kobject warning:

[ 0.897654] kobject: '(null)' (c0000007ca183a08): is not initialized, yet kobject_put() is being called.
[ 0.897682] ------------[ cut here ]------------
[ 0.897688] WARNING: at lib/kobject.c:670
[ 0.897692] Modules linked in:
[ 0.897701] CPU: 4 PID: 1 Comm: swapper/0 Not tainted 3.14.0+ #1
[ 0.897708] task: c0000007ca100000 ti: c0000007ca180000 task.ti: c0000007ca180000
[ 0.897715] NIP: c00000000046a1f0 LR: c00000000046a1ec CTR: 0000000001704660
[ 0.897721] REGS: c0000007ca1835c0 TRAP: 0700 Not tainted (3.14.0+)
[ 0.897727] MSR: 8000000000029032 <SF,EE,ME,IR,DR,RI> CR: 28000024 XER: 0000000d
[ 0.897749] CFAR: c0000000008ef4ec SOFTE: 1
GPR00: c00000000046a1ec c0000007ca183840 c0000000014c59b8 000000000000005c
GPR04: 0000000000000001 c000000000129770 0000000000000000 0000000000000001
GPR08: 0000000000000000 0000000000000000 0000000000000000 0000000000003fef
GPR12: 0000000000000000 c00000000f221200 c00000000000c350 0000000000000000
GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
GPR24: 0000000000000000 c00000000144e808 c000000000c56f20 00000000000000d8
GPR28: c000000000cd5058 0000000000000000 c000000001454ca8 c0000007ca183a08
[ 0.897856] NIP [c00000000046a1f0] .kobject_put+0xa0/0xb0
[ 0.897863] LR [c00000000046a1ec] .kobject_put+0x9c/0xb0
[ 0.897868] Call Trace:
[ 0.897874] [c0000007ca183840] [c00000000046a1ec] .kobject_put+0x9c/0xb0 (unreliable)
[ 0.897885] [c0000007ca1838c0] [c000000000743f9c] .of_node_put+0x2c/0x50
[ 0.897894] [c0000007ca183940] [c000000000c83954] .test_of_node+0x1dc/0x208
[ 0.897902] [c0000007ca183b80] [c000000000c839a4] .msi_bitmap_selftest+0x24/0x38
[ 0.897913] [c0000007ca183bf0] [c00000000000bb34] .do_one_initcall+0x144/0x200
[ 0.897922] [c0000007ca183ce0] [c000000000c748e4] .kernel_init_freeable+0x2b4/0x394
[ 0.897931] [c0000007ca183db0] [c00000000000c374] .kernel_init+0x24/0x130
[ 0.897940] [c0000007ca183e30] [c00000000000a2f4] .ret_from_kernel_thread+0x5c/0x68
[ 0.897947] Instruction dump:
[ 0.897952] 7fe3fb78 38210080 e8010010 ebe1fff8 7c0803a6 4800014c e89f0000 3c62ff6e
[ 0.897971] 7fe5fb78 3863a950 48485279 60000000 <0fe00000> 39000000 393f0038 4bffff80
[ 0.897992] ---[ end trace 1eeffdb9f825a556 ]---

Signed-off-by: Li Zhong <[email protected]> Signed-off-by: Benjamin Herrenschmidt <[email protected]>
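For reference, a minimal sketch of the fix pattern (illustrative, not the selftest's actual code): initializing the on-stack fake node with of_node_init() sets up its embedded kobject, so a later of_node_put() no longer trips the "not initialized, yet kobject_put() is being called" warning.

#include <linux/init.h>
#include <linux/of.h>
#include <linux/string.h>

static void __init fake_node_demo(void)
{
	struct device_node of_node;

	memset(&of_node, 0, sizeof(of_node));
	of_node_init(&of_node);		/* initialize the embedded kobject */

	/* ... hand the fake node to code that takes and drops references ... */

	of_node_put(&of_node);		/* safe now that the kobject is set up */
}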
2014-03-11 | of: Make device nodes kobjects so they show up in sysfs | Grant Likely | 1 | -1/+1
Device tree nodes are already treated as objects, and we already want to expose them to userspace which is done using the /proc filesystem today. Right now the kernel has to do a lot of work to keep the /proc view in sync with the in-kernel representation. If device_nodes are switched to be kobjects then the device tree code can be a whole lot simpler. It also turns out that switching to using /sysfs from /proc results in smaller code and data size, and the userspace ABI won't change if /proc/device-tree symlinks to /sys/firmware/devicetree/base.

v7: Add missing sysfs_bin_attr_init()
v6: Add __of_add_property() early init fixes from Pantelis
v5: Rename firmware/ofw to firmware/devicetree
    Fix updating property values in sysfs
v4: Fixed build error on Powerpc
    Fixed handling of dynamic nodes on powerpc
v3: Fixed handling of duplicate attribute and child node names
v2: switch to using sysfs bin_attributes which solve the problem of reporting incorrect property size.

Signed-off-by: Grant Likely <[email protected]> Tested-by: Sascha Hauer <[email protected]> Cc: Rob Herring <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: David S. Miller <[email protected]> Cc: Nathan Fontenot <[email protected]> Cc: Pantelis Antoniou <[email protected]>
2012-03-28 | Disintegrate asm/system.h for PowerPC | David Howells | 1 | -0/+1
Disintegrate asm/system.h for PowerPC. Signed-off-by: David Howells <[email protected]> Acked-by: Benjamin Herrenschmidt <[email protected]> cc: [email protected]
2010-03-30 | include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h | Tejun Heo | 1 | -0/+1
percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h making everything defined by the two files universally available and complicating inclusion dependencies. percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion: http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, ie. if only gfp is used, gfp.h, if slab is used, slab.h.
* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surrounding. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree or at the end if there doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.
2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition while adding it to implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.
3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed. e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically editing them as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq).
   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as bisection point.

Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch. Signed-off-by: Tejun Heo <[email protected]> Guess-its-ok-by: Christoph Lameter <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Lee Schermerhorn <[email protected]>
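The end state for an affected .c file is simply that it includes what it uses, instead of relying on percpu.h (pulled in via sched.h or module.h) to drag slab.h and gfp.h along. A tiny illustrative example:

#include <linux/gfp.h>		/* GFP_KERNEL */
#include <linux/slab.h>		/* kmalloc(), kfree() */

static void *grab_buffer(size_t len)
{
	/* The allocation no longer depends on an implicit include chain. */
	return kmalloc(len, GFP_KERNEL);
}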
2009-03-24 | powerpc/msi: Mark the MSI bitmap selftest code as __init | Michael Ellerman | 1 | -3/+3
Signed-off-by: Michael Ellerman <[email protected]> Signed-off-by: Benjamin Herrenschmidt <[email protected]>
2008-08-20 | powerpc: Split-out common MSI bitmap logic into msi_bitmap.c | Michael Ellerman | 1 | -0/+247
There are now two almost identical implementations of an MSI bitmap allocator, one in mpic_msi.c and the other in fsl_msi.c. Merge them together and put the result in msi_bitmap.c. Some of the MPIC bits will remain to provide a nicer interface for the MPIC users.

In the process we fix two buglets. The first is that the allocation routines, now msi_bitmap_alloc_hwirqs(), returned an unsigned result, even though they use -1 to indicate allocation failure. Although all the callers were checking correctly, it is much better for the routine to just return an int. At least until someone wants > ~2 billion MSIs.

The second buglet is that the device tree reservation logic only allowed power-of-two reservations. AFAICT that didn't affect any existing code but it's nicer if we can reserve arbitrary irqs from MSI use.

We also add some selftests, which exposed the two buglets and now test for them, as well as some basic sanity tests. The tests are only built when CONFIG_DEBUG_KERNEL=y.

Signed-off-by: Michael Ellerman <[email protected]> Signed-off-by: Paul Mackerras <[email protected]>
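A hedged sketch of the first buglet's fix: with a signed return type, -1 can genuinely signal failure and callers simply test for a negative value (the "before" prototype is a rough rendering of the pre-merge allocators, not an exact quote).

#include <asm/msi_bitmap.h>

/* Before (roughly): an unsigned return, yet -1 meant "no free IRQs", so
 * the failure value wrapped to a huge positive number:
 *	unsigned int alloc_hwirqs(struct msi_bitmap *bmp, int num);
 * After: plain int, so callers just check for < 0. */

static int example_caller(struct msi_bitmap *bmp)
{
	int hwirq = msi_bitmap_alloc_hwirqs(bmp, 5);

	if (hwirq < 0)
		return hwirq;		/* allocation failed */

	/* ... wire up 5 hardware IRQs starting at hwirq ... */
	return 0;
}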