path: root/arch/sparc/mm/init_64.c
Age | Commit message | Author | Files | Lines
2016-12-24Replace <asm/uaccess.h> with <linux/uaccess.h> globallyLinus Torvalds1-1/+1
This was entirely automated, using the script by Al:

    PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
    sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
        $(git grep -l "$PATT" | grep -v ^include/linux/uaccess.h)

to do the replacement at the end of the merge window.

Requested-by: Al Viro <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2016-11-14sparc64: fix compile warning section mismatch in find_node()Thomas Tai1-3/+3
A compile warning was introduced by the commit that fixed find_node(). This patch fixes the warning by moving find_node() into the __init section: find_node() is only used by memblock_nid_range(), which in turn is only used by add_node_ranges(), itself an __init function, so both find_node() and memblock_nid_range() belong in the __init section.

Signed-off-by: Thomas Tai <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2016-11-10sparc64: Fix find_node warning if numa node cannot be foundThomas Tai1-4/+61
When booting up an LDOM, find_node() warns that a physical address doesn't match a NUMA node:

WARNING: CPU: 0 PID: 0 at arch/sparc/mm/init_64.c:835 find_node+0xf4/0x120
find_node: A physical address doesn't match a NUMA node rule. Some physical memory will be owned by node 0.
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 4.9.0-rc3 #4
Call Trace:
 [0000000000468ba0] __warn+0xc0/0xe0
 [0000000000468c74] warn_slowpath_fmt+0x34/0x60
 [00000000004592f4] find_node+0xf4/0x120
 [0000000000dd0774] add_node_ranges+0x38/0xe4
 [0000000000dd0b1c] numa_parse_mdesc+0x268/0x2e4
 [0000000000dd0e9c] bootmem_init+0xb8/0x160
 [0000000000dd174c] paging_init+0x808/0x8fc
 [0000000000dcb0d0] setup_arch+0x2c8/0x2f0
 [0000000000dc68a0] start_kernel+0x48/0x424
 [0000000000dcb374] start_early_boot+0x27c/0x28c
 [0000000000a32c08] tlb_fixup_done+0x4c/0x64
 [0000000000027f08] 0x27f08

This is because Linux uses an internal structure, node_masks[], to keep only the best-latency memory node. However, an LDOM machine description (mdesc) can contain a single latency group with multiple memory latency nodes. If the address doesn't match the best-latency node within node_masks[], the code should check for an alternative via the mdesc. The warning message should only be printed if the address matches neither node_masks[] nor anything within the mdesc. To minimize the impact of searching the mdesc every time, the last matched mask and index are stored in a variable.

Signed-off-by: Thomas Tai <[email protected]>
Reviewed-by: Chris Hyser <[email protected]>
Reviewed-by: Liam Merwick <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
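A minimal sketch of the lookup order this fix establishes. node_masks[] mirrors the structure named above, but the cache variable and the mdesc fallback comment are illustrative, not the patch's exact code:

    /* Check the cached rule first, then the best-latency rules, and
     * only then fall back to scanning the machine description. */
    struct node_mem_mask { unsigned long mask; unsigned long match; };
    static struct node_mem_mask node_masks[8];
    static int num_node_masks;
    static int last_match_idx = -1;            /* hypothetical cache */

    static int find_node_sketch(unsigned long addr)
    {
        int i;

        if (last_match_idx >= 0 &&
            (addr & node_masks[last_match_idx].mask) ==
             node_masks[last_match_idx].match)
            return last_match_idx;             /* cached fast path */

        for (i = 0; i < num_node_masks; i++) {
            if ((addr & node_masks[i].mask) == node_masks[i].match) {
                last_match_idx = i;
                return i;
            }
        }

        /* A scan of the mdesc latency groups would go here; only if
         * that also fails is the warning printed and node 0 assumed. */
        return -1;
    }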
2016-10-06sparc: migrate exception table users off module.h and onto extable.hPaul Gortmaker1-1/+1
These files were only including module.h for exception table related functions. We've now separated that content out into its own file "extable.h" so now move over to that and avoid all the extra header content in module.h that we don't really need to compile these files. Cc: "David S. Miller" <[email protected]> Cc: [email protected] Signed-off-by: Paul Gortmaker <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2016-09-28sparc64: Fix irq stack bootmem allocation.Atish Patra1-16/+0
Currently, irq stack bootmem is allocated for all possible cpus before the nr_cpus value trims the list of possible cpus. As a result, boot memory is wasted unnecessarily. Move the irq stack bootmem allocation so that it happens after the possible cpu list has been modified based on the nr_cpus value.

Signed-off-by: Atish Patra <[email protected]>
Reviewed-by: Bob Picco <[email protected]>
Reviewed-by: Vijay Kumar <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2016-09-28sparc64: fix section mismatch in find_numa_latencies_for_groupPaul Gortmaker1-3/+3
To fix:

WARNING: vmlinux.o(.text.unlikely+0x580): Section mismatch in reference from the function find_numa_latencies_for_group() to the function .init.text:find_mlgroup()

The function find_numa_latencies_for_group() references the function __init find_mlgroup(). This is often because find_numa_latencies_for_group lacks a __init annotation or the annotation of find_mlgroup is wrong.

It turns out find_numa_latencies_for_group is only called from:

    static int __init numa_parse_mdesc(void)

and hence we can tag find_numa_latencies_for_group with __init. In doing so we see that find_best_numa_node_for_mlgroup is only called from within __init and hence can also be marked with __init.

Cc: "David S. Miller" <[email protected]> Cc: Nitin Gupta <[email protected]> Cc: Chris Hyser <[email protected]> Cc: Santosh Shilimkar <[email protected]> Cc: [email protected]
Signed-off-by: Paul Gortmaker <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2016-07-29sparc64: Trim page tables for 8M hugepagesNitin Gupta1-2/+4
For PMD-aligned (8M) hugepages, we currently allocate all four page table levels, which is wasteful. We now allocate only down to the PMD level, which saves the memory used by page tables. Also, when freeing the page tables for an 8M hugepage backed region, make sure we don't try to access the non-existent PTE level.

Orabug: 22630259

Signed-off-by: Nitin Gupta <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2016-07-28sparc64 mm: Fix base TSB sizing when hugetlb pages are usedMike Kravetz1-1/+2
do_sparc64_fault() calculates both the base and huge page RSS sizes and uses this information in calls to tsb_grow(). The calculation for base page TSB size is not correct if the task uses hugetlb pages. hugetlb pages are not accounted for in RSS, therefore the call to get_mm_rss(mm) does not include hugetlb pages. However, the number of pages based on huge_pte_count (which does include hugetlb pages) is subtracted from this value. This will result in an artificially small and often negative RSS calculation. The base TSB size is then often set to max_tsb_size as the passed RSS is unsigned, so a negative value looks really big. THP pages are also accounted for in huge_pte_count, and THP pages are accounted for in RSS so the calculation in do_sparc64_fault() is correct if a task only uses THP pages. A single huge_pte_count is not sufficient for TSB sizing if both hugetlb and THP pages can be used. Instead of a single counter, use two: one for hugetlb and one for THP. Signed-off-by: Mike Kravetz <[email protected]> Signed-off-by: David S. Miller <[email protected]>
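A hedged sketch of the corrected base-TSB sizing in do_sparc64_fault(), assuming the split counters this patch introduces (hugetlb_pte_count and thp_pte_count in mm->context); only THP pages are subtracted because only they appear in get_mm_rss():

    unsigned long mm_rss = get_mm_rss(mm);
    #ifdef CONFIG_TRANSPARENT_HUGEPAGE
    /* hugetlb pages are not in RSS, so they must not be subtracted */
    mm_rss -= mm->context.thp_pte_count * (HPAGE_SIZE / PAGE_SIZE);
    #endif
    if (unlikely(mm_rss > mm->context.tsb_block[MM_TSB_BASE].tsb_rss_limit))
        tsb_grow(mm, MM_TSB_BASE, mm_rss);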
2016-06-24tree wide: get rid of __GFP_REPEAT for order-0 allocations part IMichal Hocko1-4/+2
This is the third version of the patchset previously sent [1]. I have basically only rebased it on top of the 4.7-rc1 tree and dropped "dm: get rid of superfluous gfp flags" which went through the dm tree. I am sending it now because it is tree wide and chances for conflicts are reduced considerably when we want to target rc2. I plan to send the next step and rename the flag and move to a better semantic later during this release cycle so we will have a new semantic ready for the 4.8 merge window hopefully.

Motivation:

While working on something unrelated I've checked the current usage of __GFP_REPEAT in the tree. It seems that a majority of the usage is and always has been bogus because __GFP_REPEAT has always been about costly high order allocations while we are using it for order-0 or very small orders very often. It seems that a big pile of them is just a copy&paste when code has been adapted from one arch to another.

I think it makes some sense to get rid of them because they are just making the semantic more unclear. Please note that __GFP_REPEAT is documented as

    * __GFP_REPEAT: Try hard to allocate the memory, but the allocation attempt
    * _might_ fail. This depends upon the particular VM implementation.

while !costly requests have basically nofail semantic. So one could reasonably expect that an order-0 request with __GFP_REPEAT will not loop for ever. This is not implemented right now though.

I would like to move on with __GFP_REPEAT and define a better semantic for it.

    $ git grep __GFP_REPEAT origin/master | wc -l
    111
    $ git grep __GFP_REPEAT | wc -l
    36

So we are down to a third after this patch series. The remaining places really seem to be relying on __GFP_REPEAT due to large allocation requests. This still needs some double checking which I will do later after all the simple ones are sorted out.

I am touching a lot of arch specific code here and I hope I got it right but as a matter of fact I even didn't compile test for some archs as I do not have cross compilers for them. Patches should be quite trivial to review for stupid compile mistakes though. The tricky parts are usually hidden by macro definitions and that's where I would appreciate help from arch maintainers.

[1] http://lkml.kernel.org/r/[email protected]

This patch (of 19):

__GFP_REPEAT has a rather weak semantic but since it has been introduced around 2.6.12 it has been ignored for low order allocations. Yet we have the full kernel tree with its usage for apparently order-0 allocations. This is really confusing because __GFP_REPEAT is explicitly documented to allow allocation failures, which is a weaker semantic than what current order-0 requests have (basically nofail).

Let's simply drop __GFP_REPEAT from those places. This would allow us to identify places which really need the allocator to retry harder and formulate a more specific semantic for what the flag is actually supposed to do.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Michal Hocko <[email protected]>
Cc: "David S. Miller" <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: "Theodore Ts'o" <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chen Liqin <[email protected]> Cc: Chris Metcalf <[email protected]> [for tile] Cc: Guan Xuetao <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Helge Deller <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jan Kara <[email protected]> Cc: John Crispin <[email protected]> Cc: Lennox Wu <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Matt Fleming <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Will Deacon <[email protected]> Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2016-05-25sparc64: Take ctx_alloc_lock properly in hugetlb_setup().David S. Miller1-3/+7
On cheetahplus chips we take the ctx_alloc_lock in order to modify the TLB lookup parameters for the indexed TLBs, which are stored in the context register. This is called with interrupts disabled; however, ctx_alloc_lock is an IRQ safe lock, therefore we must acquire/release it properly with spin_{lock,unlock}_irq().

Reported-by: Meelis Roos <[email protected]>
Tested-by: Meelis Roos <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
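The shape of the fix, per the description above (the parameter update between lock and unlock is elided):

    spin_lock_irq(&ctx_alloc_lock);     /* was a plain spin_lock() */
    /* ... modify the indexed-TLB lookup parameters held in the
     *     context register ... */
    spin_unlock_irq(&ctx_alloc_lock);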
2016-05-20sparc64: Reduce TLB flushes during hugepte changesNitin Gupta1-12/+0
During hugepage map/unmap, TSB and TLB flushes are currently issued at every PAGE_SIZE'd boundary which is unnecessary. We now issue the flush at REAL_HPAGE_SIZE boundaries only. Without this patch workloads which unmap a large hugepage backed VMA region get CPU lockups due to excessive TLB flush calls.

Orabug: 22365539, 22643230, 22995196

Signed-off-by: Nitin Gupta <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
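A minimal sketch of the new flush stride; flush_tsb_and_tlb() is a hypothetical stand-in for the actual flush calls:

    /* One flush per REAL_HPAGE_SIZE chunk (a 4MB hardware entry)
     * instead of one per 8KB base page. */
    unsigned long addr;
    for (addr = start; addr < end; addr += REAL_HPAGE_SIZE)
        flush_tsb_and_tlb(mm, addr);    /* hypothetical helper */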
2016-04-21sparc64: recognize and support Sonoma CPU typeKhalid Aziz1-0/+3
Add code to recognize SPARC-Sonoma cpu correctly and update cpu hardware caps and cpu distribution map. SPARC-Sonoma is based upon SPARC-M7 core along with additional PCI functions added on and is reported by firmware as "SPARC-SN". Signed-off-by: Khalid Aziz <[email protected]> Acked-by: Allen Pais <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2016-01-30arch: Set IORESOURCE_SYSTEM_RAM flag for System RAMToshi Kani1-4/+4
Set IORESOURCE_SYSTEM_RAM in flags of resource ranges with "System RAM", "Kernel code", "Kernel data", and "Kernel bss".

Note that:

 - IORESOURCE_SYSRAM (i.e. the modifier bit) is set in flags when IORESOURCE_MEM is already set. IORESOURCE_SYSTEM_RAM is defined as (IORESOURCE_MEM|IORESOURCE_SYSRAM).

 - Some archs do not set 'flags' for children nodes, such as "Kernel code". This patch does not change 'flags' in this case.

Signed-off-by: Toshi Kani <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Cc: Andrew Morton <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Luis R. Rodriguez <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Toshi Kani <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: linux-mm <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
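The change reduces to one line per resource, sketched here against a "System RAM" registration like the sparc one added in the 2014-07-21 entry below:

    /* before */
    res->flags = IORESOURCE_BUSY | IORESOURCE_MEM;
    /* after: IORESOURCE_SYSTEM_RAM == IORESOURCE_MEM | IORESOURCE_SYSRAM */
    res->flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM;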
2016-01-14sparc64: Fix numa node distance initializationNitin Gupta1-7/+8
Orabug: 22495713

Currently, NUMA node distance matrix is initialized only when a machine descriptor (MD) exists. However, sun4u machines (e.g. Sun Blade 2500) do not have an MD and thus distance values were left uninitialized. The initialization is now moved such that it happens on both sun4u and sun4v.

Signed-off-by: Nitin Gupta <[email protected]>
Tested-by: Mikael Pettersson <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2015-11-04sparc64: Fix numa distance valuesNitin Gupta1-1/+69
Orabug: 21896119

Use machine descriptor (MD) to get node latency values instead of just using default values.

Testing:

On a T5-8 system with:
 - total nodes = 8
 - self latencies = 0x26d18
 - latency to other nodes = 0x3a598
   => latency ratio = ~1.5

output of numactl --hardware

 - before fix:

node distances:
node   0   1   2   3   4   5   6   7
  0:  10  20  20  20  20  20  20  20
  1:  20  10  20  20  20  20  20  20
  2:  20  20  10  20  20  20  20  20
  3:  20  20  20  10  20  20  20  20
  4:  20  20  20  20  10  20  20  20
  5:  20  20  20  20  20  10  20  20
  6:  20  20  20  20  20  20  10  20
  7:  20  20  20  20  20  20  20  10

 - after fix:

node distances:
node   0   1   2   3   4   5   6   7
  0:  10  15  15  15  15  15  15  15
  1:  15  10  15  15  15  15  15  15
  2:  15  15  10  15  15  15  15  15
  3:  15  15  15  10  15  15  15  15
  4:  15  15  15  15  10  15  15  15
  5:  15  15  15  15  15  10  15  15
  6:  15  15  15  15  15  15  10  15
  7:  15  15  15  15  15  15  15  10

Signed-off-by: Nitin Gupta <[email protected]>
Reviewed-by: Chris Hyser <[email protected]>
Reviewed-by: Santosh Shilimkar <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2015-06-24mm/memblock: add extra "flags" to memblock to allow selection of memory based on attributeTony Luck1-2/+4
Some high end Intel Xeon systems report uncorrectable memory errors as a recoverable machine check. Linux has included code for some time to process these and just signal the affected processes (or even recover completely if the error was in a read only page that can be replaced by reading from disk). But we have no recovery path for errors encountered during kernel code execution; except for some very specific cases, we are unlikely to ever be able to recover.

Enter memory mirroring. Actually the 3rd generation of memory mirroring:

Gen1: All memory is mirrored
 Pro: No s/w enabling - h/w just gets good data from other side of the mirror
 Con: Halves effective memory capacity available to OS/applications

Gen2: Partial memory mirror - just mirror memory behind some memory controllers
 Pro: Keep more of the capacity
 Con: Nightmare to enable. Have to choose between allocating from mirrored memory for safety vs. NUMA local memory for performance

Gen3: Address range partial memory mirror - some mirror on each memory controller
 Pro: Can tune the amount of mirror and keep NUMA performance
 Con: I have to write memory management code to implement

The current plan is just to use mirrored memory for kernel allocations. This has been broken into two phases:

1) This patch series - find the mirrored memory, use it for boot time allocations

2) Wade into mm/page_alloc.c and define a ZONE_MIRROR to pick up the unused mirrored memory from mm/memblock.c and only give it out to select kernel allocations (this is still being scoped because page_alloc.c is scary).

This patch (of 3):

Add extra "flags" to memblock to allow selection of memory based on attribute. No functional changes.

Signed-off-by: Tony Luck <[email protected]>
Cc: Xishi Qiu <[email protected]> Cc: Hanjun Guo <[email protected]> Cc: Xiexiuqi <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Yinghai Lu <[email protected]> Cc: Naoya Horiguchi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
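A sketch of the extended API from this first patch; existing callers pass the "no attribute" flag and behave exactly as before (signature as described by the series):

    /* memblock ranges now carry attribute flags; MEMBLOCK_NONE keeps
     * the old behavior, and a later patch adds a mirror attribute. */
    memblock_add_range(&memblock.memory, base, size, nid, MEMBLOCK_NONE);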
2015-06-02Merge branch 'linus' into sched/core, to resolve conflictIngo Molnar1-21/+53
Conflicts: arch/sparc/include/asm/topology_64.h Signed-off-by: Ingo Molnar <[email protected]>
2015-05-31sparc: Resolve conflict between sparc v9 and M7 on usage of bit 9 of TTEKhalid Aziz1-21/+53
Bit 9 of TTE is CV (Cacheable in V-cache) on sparc v9 processors, while the same bit 9 is MCDE (Memory Corruption Detection Enable) on the M7 processor. This creates a conflicting usage of the same bit.

The kernel sets the TTE.cv bit on all pages for the sun4v architecture, which works well for sparc v9 but enables memory corruption detection on the M7 processor, which is not the intent. This patch adds code to determine if the kernel is running on an M7 processor and takes steps to not enable memory corruption detection in the TTE erroneously.

Signed-off-by: Khalid Aziz <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2015-05-19mm/fault, arch: Use pagefault_disable() to check for disabled pagefaults in the handlerDavid Hildenbrand1-1/+1
Introduce faulthandler_disabled() and use it to check for irq context and disabled pagefaults (via pagefault_disable()) in the pagefault handlers.

Please note that we keep the in_atomic() checks in place - to detect whether in irq context (in which case preemption is always properly disabled). In contrast, preempt_disable() should never be used to disable pagefaults. With !CONFIG_PREEMPT_COUNT, preempt_disable() doesn't modify the preempt counter, and therefore the result of in_atomic() differs. We validate that condition by using might_fault() checks when calling might_sleep(). Therefore, add a comment to faulthandler_disabled(), describing why this is needed.

faulthandler_disabled() and pagefault_disable() are defined in linux/uaccess.h, so let's properly add that include to all relevant files.

This patch is based on a patch from Thomas Gleixner.

Reviewed-and-tested-by: Thomas Gleixner <[email protected]>
Signed-off-by: David Hildenbrand <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: [email protected] Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2015-03-18sparc: Fix /proc/kcoreDavid S. Miller1-1/+1
/proc/kcore investigates the "System RAM" elements in /proc/iomem to initialize its memory tables. Therefore we have to register them before it tries to do so. kcore uses device_initcall(), so let's use arch_initcall() for the registration. Also we need ARCH_PROC_KCORE_TEXT to get the virtual addresses of the kernel image correct.

Reported-by: David Ahern <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2014-12-13mm/debug-pagealloc: make debug-pagealloc boottime configurableJoonsoo Kim1-1/+1
Now, we have prepared to avoid using debug-pagealloc at boot time. So introduce a new kernel parameter to disable debug-pagealloc at boot, and disable the related functions in this case. The only non-intuitive part is the change to the guard page functions: because guard pages are effective only if debug-pagealloc is enabled, turning them off along with debug-pagealloc is the reasonable thing to do.

Signed-off-by: Joonsoo Kim <[email protected]>
Cc: Mel Gorman <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Michal Nazarewicz <[email protected]> Cc: Jungsoo Son <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Joonsoo Kim <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2014-10-05sparc64: Kill unnecessary tables and increase MAX_BANKS.David S. Miller1-23/+2
swapper_low_pmd_dir and swapper_pud_dir are actually completely useless and unnecessary. We just need swapper_pg_dir[]. Naturally the other page table chunks will be allocated on an as-needed basis. Since the kernel actually accesses these tables in the PAGE_OFFSET view, there is not even a TLB locality advantage of placing them in the kernel image. Use the hard coded vmlinux.ld.S slot for swapper_pg_dir which is naturally page aligned. Increase MAX_BANKS to 1024 in order to handle heavily fragmented virtual guests. Even with this MAX_BANKS increase, the kernel is 20K+ smaller. Signed-off-by: David S. Miller <[email protected]> Acked-by: Bob Picco <[email protected]>
2014-10-05sparc64: Adjust vmalloc region size based upon available virtual address bits.David S. Miller1-11/+19
In order to accommodate embedded per-cpu allocation with large numbers of cpus and numa nodes, we have to use as much virtual address space as possible for the vmalloc region. Otherwise we can get things like:

PERCPU: max_distance=0x380001c10000 too large for vmalloc space 0xff00000000

So, once we select a value for PAGE_OFFSET, derive the size of the vmalloc region based upon that.

Signed-off-by: David S. Miller <[email protected]>
Acked-by: Bob Picco <[email protected]>
2014-10-05sparc64: Increase MAX_PHYS_ADDRESS_BITS to 53.David S. Miller1-1/+8
Make sure, at compile time, that the kernel can properly support whatever MAX_PHYS_ADDRESS_BITS is defined to. On M7 chips, use a max_phys_bits value of 49. Based upon a patch by Bob Picco. Signed-off-by: David S. Miller <[email protected]> Acked-by: Bob Picco <[email protected]>
2014-10-05sparc64: Use kernel page tables for vmemmap.David S. Miller1-38/+34
For sparse memory configurations, the vmemmap array behaves terribly and it takes up an inordinate amount of space in the BSS section of the kernel image unconditionally. Just build huge PMDs and look them up just like we do for TLB misses in the vmalloc area. Kernel BSS shrinks by about 2MB. Signed-off-by: David S. Miller <[email protected]> Acked-by: Bob Picco <[email protected]>
2014-10-05sparc64: Fix physical memory management regressions with large max_phys_bits.David S. Miller1-225/+168
If max_phys_bits needs to be > 43 (f.e. for T4 chips), things like DEBUG_PAGEALLOC stop working because the 3-level page tables can only cover up to 43 bits.

Another problem is that when we increased MAX_PHYS_ADDRESS_BITS up to 47, several statically allocated tables became enormous. Compounding this is that we will need to support up to 49 bits of physical addressing for M7 chips.

The two tables in question are sparc64_valid_addr_bitmap and kpte_linear_bitmap. The first holds a bitmap, with 1 bit for each 4MB chunk of physical memory, indicating whether that chunk actually exists in the machine and is valid. The second table is a set of 2-bit values which tell how large of a mapping (4MB, 256MB, 2GB, 16GB, respectively) we can use at each 256MB chunk of ram in the system.

These tables are huge and take up an enormous amount of the BSS section of the sparc64 kernel image. Specifically, the sparc64_valid_addr_bitmap is 4MB, and the kpte_linear_bitmap is 128K.

So let's solve the space wastage and the DEBUG_PAGEALLOC problem at the same time, by using the kernel page tables (as designed) to manage this information. We have to keep using large mappings when DEBUG_PAGEALLOC is disabled, and we do this by encoding huge PMDs and PUDs.

On a T4-2 with 256GB of ram the kernel page table takes up 16K with DEBUG_PAGEALLOC disabled and 256MB with it enabled. Furthermore, this memory is dynamically allocated at run time rather than coded statically into the kernel image.

Signed-off-by: David S. Miller <[email protected]>
Acked-by: Bob Picco <[email protected]>
2014-10-05sparc64: Adjust KTSB assembler to support larger physical addresses.David S. Miller1-3/+25
As currently coded the KTSB accesses in the kernel only support up to 47 bits of physical addressing. Adjust the instruction and patching sequence in order to support arbitrary 64 bits addresses. Signed-off-by: David S. Miller <[email protected]> Acked-by: Bob Picco <[email protected]>
2014-10-05sparc64: Define VA hole at run time, rather than at compile time.David S. Miller1-0/+21
Now that we use 4-level page tables, we can provide up to 53-bits of virtual address space to the user. Adjust the VA hole based upon the capabilities of the cpu type probed. Signed-off-by: David S. Miller <[email protected]> Acked-by: Bob Picco <[email protected]>
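A sketch of the idea: the hole bounds become variables set once the cpu type is probed (names and default values follow the series as best understood; treat them as illustrative):

    unsigned long sparc64_va_hole_top    = 0xfffff80000000000UL;
    unsigned long sparc64_va_hole_bottom = 0x0000080000000000UL;

    /* An address is unusable if it falls inside the probed VA hole. */
    static bool va_in_hole(unsigned long va)
    {
        return va >= sparc64_va_hole_bottom && va < sparc64_va_hole_top;
    }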
2014-10-05sparc64: Switch to 4-level page tables.David S. Miller1-4/+27
This has become necessary with chips that support more than 43-bits of physical addressing. Based almost entirely upon a patch by Bob Picco. Signed-off-by: David S. Miller <[email protected]> Acked-by: Bob Picco <[email protected]>
2014-10-04sparc64: Fix reversed start/end in flush_tlb_kernel_range()David S. Miller1-2/+2
When we have to split up a flush request into multiple pieces (in order to avoid the firmware range) we don't specify the arguments in the right order for the second piece.

Fix the order, or else we get hangs as the code tries to flush "a lot" of entries and we get lockups like this:

[ 4422.981276] NMI watchdog: BUG: soft lockup - CPU#12 stuck for 23s! [expect:117032]
[ 4422.996130] Modules linked in: ipv6 loop usb_storage igb ptp sg sr_mod ehci_pci ehci_hcd pps_core n2_rng rng_core
[ 4423.016617] CPU: 12 PID: 117032 Comm: expect Not tainted 3.17.0-rc4+ #1608
[ 4423.030331] task: fff8003cc730e220 ti: fff8003d99d54000 task.ti: fff8003d99d54000
[ 4423.045282] TSTATE: 0000000011001602 TPC: 00000000004521e8 TNPC: 00000000004521ec Y: 00000000    Not tainted
[ 4423.064905] TPC: <__flush_tlb_kernel_range+0x28/0x40>
[ 4423.074964] g0: 000000000052fd10 g1: 00000001295a8000 g2: ffffff7176ffc000 g3: 0000000000002000
[ 4423.092324] g4: fff8003cc730e220 g5: fff8003dfedcc000 g6: fff8003d99d54000 g7: 0000000000000006
[ 4423.109687] o0: 0000000000000000 o1: 0000000000000000 o2: 0000000000000003 o3: 00000000f0000000
[ 4423.127058] o4: 0000000000000080 o5: 00000001295a8000 sp: fff8003d99d56d01 ret_pc: 000000000052ff54
[ 4423.145121] RPC: <__purge_vmap_area_lazy+0x314/0x3a0>
[ 4423.155185] l0: 0000000000000000 l1: 0000000000000000 l2: 0000000000a38040 l3: 0000000000000000
[ 4423.172559] l4: fff8003dae8965e0 l5: ffffffffffffffff l6: 0000000000000000 l7: 00000000f7e2b138
[ 4423.189913] i0: fff8003d99d576a0 i1: fff8003d99d576a8 i2: fff8003d99d575e8 i3: 0000000000000000
[ 4423.207284] i4: 0000000000008008 i5: fff8003d99d575c8 i6: fff8003d99d56df1 i7: 0000000000530c24
[ 4423.224640] I7: <free_vmap_area_noflush+0x64/0x80>
[ 4423.234193] Call Trace:
[ 4423.239051]  [0000000000530c24] free_vmap_area_noflush+0x64/0x80
[ 4423.251029]  [0000000000531a7c] remove_vm_area+0x5c/0x80
[ 4423.261628]  [0000000000531b80] __vunmap+0x20/0x120
[ 4423.271352]  [000000000071cf18] n_tty_close+0x18/0x40
[ 4423.281423]  [00000000007222b0] tty_ldisc_close+0x30/0x60
[ 4423.292183]  [00000000007225a4] tty_ldisc_reinit+0x24/0xa0
[ 4423.303120]  [0000000000722ab4] tty_ldisc_hangup+0xd4/0x1e0
[ 4423.314232]  [0000000000719aa0] __tty_hangup+0x280/0x3c0
[ 4423.324835]  [0000000000724cb4] pty_close+0x134/0x1a0
[ 4423.334905]  [000000000071aa24] tty_release+0x104/0x500
[ 4423.345316]  [00000000005511d0] __fput+0x90/0x1e0
[ 4423.354701]  [000000000047fa54] task_work_run+0x94/0xe0
[ 4423.365126]  [0000000000404b44] __handle_signal+0xc/0x2c

Fixes: 4ca9a23765da ("sparc64: Guard against flushing openfirmware mappings.")
Signed-off-by: David S. Miller <[email protected]>
2014-09-16sparc64: mem boot option correctionbob picco1-1/+48
The "mem" boot option can result in many unexpected consequences. This patch attempts to prevent boot hangs which have been experienced on T4-4 and T5-8. Basically the boot loader allocates vmlinuz and initrd higher in available OBP physical memory. For example, on a 2Tb T5-8 it isn't possible to boot with mem=20G. The patch utilizes memblock to avoid reserved regions and trim memory which is only free. Other improvements are possible for a multi-node machine. This is a snippet of the boot log with mem=20G on T5-8 with the patch applied: MEMBLOCK configuration: <- before memory reduction memory size = 0x1ffad6ce000 reserved size = 0xa1adf44 memory.cnt = 0xb memory[0x0] [0x00000030400000-0x00003fdde47fff], 0x3fada48000 bytes memory[0x1] [0x00003fdde4e000-0x00003fdde4ffff], 0x2000 bytes memory[0x2] [0x00080000000000-0x00083fffffffff], 0x4000000000 bytes memory[0x3] [0x00100000000000-0x00103fffffffff], 0x4000000000 bytes memory[0x4] [0x00180000000000-0x00183fffffffff], 0x4000000000 bytes memory[0x5] [0x00200000000000-0x00203fffffffff], 0x4000000000 bytes memory[0x6] [0x00280000000000-0x00283fffffffff], 0x4000000000 bytes memory[0x7] [0x00300000000000-0x00303fffffffff], 0x4000000000 bytes memory[0x8] [0x00380000000000-0x00383fffc71fff], 0x3fffc72000 bytes memory[0x9] [0x00383fffc92000-0x00383fffca1fff], 0x10000 bytes memory[0xa] [0x00383fffcb4000-0x00383fffcb5fff], 0x2000 bytes reserved.cnt = 0x2 reserved[0x0] [0x00380000000000-0x0038000117e7f8], 0x117e7f9 bytes reserved[0x1] [0x00380004000000-0x0038000d02f74a], 0x902f74b bytes ... MEMBLOCK configuration: <- after reduction of memory memory size = 0x50a1adf44 reserved size = 0xa1adf44 memory.cnt = 0x4 memory[0x0] [0x00380000000000-0x0038000117e7f8], 0x117e7f9 bytes memory[0x1] [0x00380004000000-0x0038050d01d74a], 0x50901d74b bytes memory[0x2] [0x00383fffc92000-0x00383fffca1fff], 0x10000 bytes memory[0x3] [0x00383fffcb4000-0x00383fffcb5fff], 0x2000 bytes reserved.cnt = 0x2 reserved[0x0] [0x00380000000000-0x0038000117e7f8], 0x117e7f9 bytes reserved[0x1] [0x00380004000000-0x0038000d02f74a], 0x902f74b bytes ... Early memory node ranges node 7: [mem 0x380000000000-0x38000117dfff] node 7: [mem 0x380004000000-0x380f0d01bfff] node 7: [mem 0x383fffc92000-0x383fffca1fff] node 7: [mem 0x383fffcb4000-0x383fffcb5fff] Could not find start_pfn for node 0 Could not find start_pfn for node 1 Could not find start_pfn for node 2 Could not find start_pfn for node 3 Could not find start_pfn for node 4 Could not find start_pfn for node 5 Could not find start_pfn for node 6 . The patch was tested on T4-1, T5-8 and Jalap?no. Cc: [email protected] Signed-off-by: Bob Picco <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2014-09-16sparc64: find_node adjustmentbob picco1-1/+4
We have seen an issue with guest boot into LDOM that causes early boot failures because of no matching rules for node identity of the memory. I analyzed this on my T4 and concluded there might not be a solution. I saw the issue in mainline too when booting into the control/primary domain - with guests configured. Note, this could be a firmware bug on some older machines.

I'll provide a full explanation of the issues below. Should we not find a matching BEST latency group for a real address (RA) then we will assume node 0. On the T4-2 here with the information provided I can't see an alternative. Technically the LDOM shown below should match the MBLOCK to the favorable latency group. However other factors must be considered too. Were the memory controllers configured "fine" grained interleave or "coarse" grain interleaved - T4. Also should a "group" MD node be considered a NUMA node?

There has to be at least one Machine Description (MD) "group" and hence one NUMA node. The group can have one or more latency groups (lg) - more than one memory controller. The current code chooses the smallest latency as the most favorable per group. The latency and lg information is in MLGROUP below. MBLOCK is the base and size of the RAs for the machine as fetched from the OBP /memory "available" property. My machine has one MBLOCK but more would be possible - with holes?

For a T4-2 the following information has been gathered:

with LDOM guest
MEMBLOCK configuration:
 memory size = 0x27f870000
 memory.cnt  = 0x3
 memory[0x0]  [0x00000020400000-0x0000029fc67fff], 0x27f868000 bytes
 memory[0x1]  [0x0000029fd8a000-0x0000029fd8bfff], 0x2000 bytes
 memory[0x2]  [0x0000029fd92000-0x0000029fd97fff], 0x6000 bytes
 reserved.cnt  = 0x2
 reserved[0x0]  [0x00000020800000-0x000000216c15c0], 0xec15c1 bytes
 reserved[0x1]  [0x00000024800000-0x0000002c180c1e], 0x7980c1f bytes
MBLOCK[0]: base[20000000] size[280000000] offset[0]

(note: "base" and "size" reported in "MBLOCK" encompass the "memory[X]" values)
(note: (RA + offset) & mask = val is the formula to detect a match for the memory controller. Should there be no match in find_node, a return value of -1 resulted for the node - BAD)

There is one group. It has these forward links:
MLGROUP[1]: node[545] latency[1f7e8] match[200000000] mask[200000000]
MLGROUP[2]: node[54d] latency[2de60] match[0] mask[200000000]
NUMA NODE[0]: node[545] mask[200000000] val[200000000] (latency[1f7e8])
(note: "val" is the best lg's (smallest latency) "match")

no LDOM guest - bare metal
MEMBLOCK configuration:
 memory size = 0xfdf2d0000
 memory.cnt  = 0x3
 memory[0x0]  [0x00000020400000-0x00000fff6adfff], 0xfdf2ae000 bytes
 memory[0x1]  [0x00000fff6d2000-0x00000fff6e7fff], 0x16000 bytes
 memory[0x2]  [0x00000fff766000-0x00000fff771fff], 0xc000 bytes
 reserved.cnt  = 0x2
 reserved[0x0]  [0x00000020800000-0x00000021a04580], 0x1204581 bytes
 reserved[0x1]  [0x00000024800000-0x0000002c7d29fc], 0x7fd29fd bytes
MBLOCK[0]: base[20000000] size[fe0000000] offset[0]

there are two groups
group node[16d5]
MLGROUP[0]: node[1765] latency[1f7e8] match[0] mask[200000000]
MLGROUP[3]: node[177d] latency[2de60] match[200000000] mask[200000000]
NUMA NODE[0]: node[1765] mask[200000000] val[0] (latency[1f7e8])
group node[171d]
MLGROUP[2]: node[1775] latency[2de60] match[0] mask[200000000]
MLGROUP[1]: node[176d] latency[1f7e8] match[200000000] mask[200000000]
NUMA NODE[1]: node[176d] mask[200000000] val[200000000] (latency[1f7e8])
(note: for this two "group" bare metal machine, 1/2 memory is in group one's lg and 1/2 memory is in group two's lg)

Cc: [email protected]
Signed-off-by: Bob Picco <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2014-08-05sparc64: Fix up merge thinko.David S. Miller1-1/+0
Signed-off-by: David S. Miller <[email protected]>
2014-08-05Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparcDavid S. Miller1-0/+32
Conflicts: arch/sparc/mm/init_64.c Conflict was simple non-overlapping additions. Signed-off-by: David S. Miller <[email protected]>
2014-08-04sparc64: Guard against flushing openfirmware mappings.David S. Miller1-0/+23
Based almost entirely upon a patch by Christopher Alexander Tobias Schulze.

In commit db64fe02258f1507e13fe5212a989922323685ce ("mm: rewrite vmap layer") lazy VMAP tlb flushing was added to the vmalloc layer. This causes problems on sparc64.

Sparc64 has two VMAP mapped regions and they are not contiguous with each other. First we have the malloc mapping area, then another unrelated region, then the vmalloc region. This "another unrelated region" is where the firmware is mapped.

If the lazy TLB flushing logic in the vmalloc code triggers after we've had both a module unload and a vfree or similar, it will pass an address range that goes from somewhere inside the malloc region to somewhere inside the vmalloc region, and thus covering the openfirmware area entirely.

The sparc64 kernel learns about openfirmware's dynamic mappings in this region early in the boot, and then services TLB misses in this area. But openfirmware has some locked TLB entries which are not mentioned in those dynamic mappings and we should thus not disturb them. These huge lazy TLB flush ranges cause those openfirmware locked TLB entries to be removed, resulting in all kinds of problems including hard hangs and crashes during reboot/reset.

Besides causing problems like this, such huge TLB flush ranges are also incredibly inefficient. A plea has been made with the author of the VMAP lazy TLB flushing code, but for now we'll put a safety guard into our flush_tlb_kernel_range() implementation.

Since the implementation has become non-trivial, stop defining it as a macro and instead make it a function in a C source file.

Signed-off-by: David S. Miller <[email protected]>
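Since the implementation is now a C function, the guard can be sketched as follows (LOW_OBP_ADDRESS/HI_OBP_ADDRESS bound the firmware region; the argument order shown is the corrected one from the 2014-10-04 fix above, and do_flush_tlb_kernel_range stands for the underlying flush):

    void flush_tlb_kernel_range(unsigned long start, unsigned long end)
    {
        if (start < LOW_OBP_ADDRESS && end > HI_OBP_ADDRESS) {
            /* Split around the firmware mappings so OBP's locked
             * TLB entries are never flushed. */
            do_flush_tlb_kernel_range(start, LOW_OBP_ADDRESS);
            do_flush_tlb_kernel_range(HI_OBP_ADDRESS, end);
        } else {
            do_flush_tlb_kernel_range(start, end);
        }
    }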
2014-08-04sparc64: Do not insert non-valid PTEs into the TSB hash table.David S. Miller1-0/+8
The assumption was that update_mmu_cache() (and the equivalent for PMDs) would only be called when the PTE being installed will be accessible by the user. This is not true for code paths originating from remove_migration_pte().

There are dire consequences for placing a non-valid PTE into the TSB. The TLB miss framework assumes that when a TSB entry matches we can just load it into the TLB and return from the TLB miss trap. So if a non-valid PTE is in there, we will deadlock taking the TLB miss over and over, never satisfying the miss.

Just exit early from update_mmu_cache() and friends in this situation.

Based upon a report and patch from Christopher Alexander Tobias Schulze.

Signed-off-by: David S. Miller <[email protected]>
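The early exit, sketched (pte_accessible() is the existing predicate for this kind of test; exact placement is per the description above):

    void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
                          pte_t *ptep)
    {
        struct mm_struct *mm = vma->vm_mm;
        pte_t pte = *ptep;

        /* A non-accessible PTE (e.g. from remove_migration_pte())
         * must never reach the TSB, or the TLB miss handler will
         * re-load it forever without ever satisfying the miss. */
        if (!pte_accessible(mm, pte))
            return;

        /* ... existing TSB insertion ... */
    }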
2014-07-21sparc64 - add mem to iomem resourcebob picco1-0/+65
This patch adds sparc RAM to /proc/iomem. It also identifies the code, data and bss regions of the kernel. Signed-off-by: Bob Picco <[email protected]> Signed-off-by: David S. Miller <[email protected]>
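A minimal sketch of the registration this describes (generic resource API; kernel code/data/bss become children of the matching "System RAM" range):

    struct resource *res = kzalloc(sizeof(*res), GFP_KERNEL);

    res->name  = "System RAM";
    res->start = start_pfn << PAGE_SHIFT;
    res->end   = (end_pfn << PAGE_SHIFT) - 1;
    res->flags = IORESOURCE_BUSY | IORESOURCE_MEM;
    request_resource(&iomem_resource, res);
    /* "Kernel code"/"Kernel data"/"Kernel bss" are then requested as
     * child resources of the range covering the kernel image. */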
2014-06-19Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-nextLinus Torvalds1-2/+7
Pull sparc fixes from David Miller:
 "Sparc sparse fixes from Sam Ravnborg"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next: (67 commits)
  sparc64: fix sparse warnings in int_64.c
  sparc64: fix sparse warning in ftrace.c
  sparc64: fix sparse warning in kprobes.c
  sparc64: fix sparse warning in kgdb_64.c
  sparc64: fix sparse warnings in compat_audit.c
  sparc64: fix sparse warnings in init_64.c
  sparc64: fix sparse warnings in aes_glue.c
  sparc: fix sparse warnings in smp_32.c + smp_64.c
  sparc64: fix sparse warnings in perf_event.c
  sparc64: fix sparse warnings in kprobes.c
  sparc64: fix sparse warning in tsb.c
  sparc64: clean up compat_sigset_t.seta handling
  sparc64: fix sparse "Should it be static?" warnings in signal32.c
  sparc64: fix sparse warnings in sys_sparc32.c
  sparc64: fix sparse warning in pci.c
  sparc64: fix sparse warnings in smp_64.c
  sparc64: fix sparse warning in prom_64.c
  sparc64: fix sparse warning in btext.c
  sparc64: fix sparse warnings in sys_sparc_64.c + unaligned_64.c
  sparc64: fix sparse warning in process_64.c
  ...

Conflicts:
	arch/sparc/include/asm/pgtable_64.h
2014-05-18sparc64: fix sparse warnings in int_64.cSam Ravnborg1-2/+6
Fix following warnings:

init_64.c:798:5: warning: symbol 'numa_cpu_lookup_table' was not declared. Should it be static?
init_64.c:799:11: warning: symbol 'numa_cpumask_lookup_table' was not declared. Should it be static?

The warnings were present with an allnoconfig build. Fix so the variables are only declared if CONFIG_NEED_MULTIPLE_NODES is defined.

Signed-off-by: Sam Ravnborg <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2014-05-18sparc64: fix sparse warnings in init_64.cSam Ravnborg1-0/+1
Fix following warnings:

init_64.c:191:10: warning: symbol 'dcpage_flushes' was not declared. Should it be static?
init_64.c:193:10: warning: symbol 'dcpage_flushes_xcall' was not declared. Should it be static?

Add extern declaration to asm/setup.h and drop local declaration in smp_64.h.

Signed-off-by: Sam Ravnborg <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
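The shape of the fix, sketched (assuming the atomic_t type these DC-flush statistics use):

    /* arch/sparc/include/asm/setup.h: one shared declaration ... */
    extern atomic_t dcpage_flushes;
    extern atomic_t dcpage_flushes_xcall;

    /* ... matching the definitions in arch/sparc/mm/init_64.c */
    atomic_t dcpage_flushes = ATOMIC_INIT(0);
    atomic_t dcpage_flushes_xcall = ATOMIC_INIT(0);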
2014-05-03sparc64: Use 'ILOG2_4MB' instead of constant '22'.David S. Miller1-6/+6
Signed-off-by: David S. Miller <[email protected]>
2014-01-21memblock: make memblock_set_node() support different memblock_typeTang Chen1-2/+3
[[email protected]: fix powerpc build] Signed-off-by: Tang Chen <[email protected]> Reviewed-by: Zhang Yanfei <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: "Rafael J . Wysocki" <[email protected]> Cc: Chen Tang <[email protected]> Cc: Gong Chen <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiang Liu <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Lai Jiangshan <[email protected]> Cc: Larry Woodman <[email protected]> Cc: Len Brown <[email protected]> Cc: Liu Jiang <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Michal Nazarewicz <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Prarit Bhargava <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Taku Izumi <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Thomas Renninger <[email protected]> Cc: Toshi Kani <[email protected]> Cc: Vasilis Liaskovitis <[email protected]> Cc: Wanpeng Li <[email protected]> Cc: Wen Congyang <[email protected]> Cc: Yasuaki Ishimatsu <[email protected]> Cc: Yinghai Lu <[email protected]> Signed-off-by: Stephen Rothwell <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
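The resulting call shape, per the subject (the memblock "memory" type shown for illustration):

    /* memblock_set_node() now takes the memblock_type explicitly. */
    memblock_set_node(base, size, &memblock.memory, nid);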
2013-11-18sparc64: merge fixStephen Rothwell1-2/+0
After merging the final tree, today's linux-next build (sparc64 defconfig) failed like this:

arch/sparc/mm/init_64.c: In function 'pte_alloc_one':
arch/sparc/mm/init_64.c:2568:9: error: unused variable 'pte' [-Werror=unused-variable]

Caused by the merge between commit 37b3a8ff3e08 ("sparc64: Move from 4MB to 8MB huge pages") and commit 1ae9ae5f7df7 ("sparc: handle pgtable_page_ctor() fail") (I had the following merge fix in linux-next, but it didn't seem to propagate upstream - may have forgotten to point it out :-().

Signed-off-by: Stephen Rothwell <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2013-11-15Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-nextLinus Torvalds1-167/+115
Pull sparc update from David Miller:

 1) Implement support for up to 47-bit physical addresses on sparc64.

 2) Support HAVE_CONTEXT_TRACKING on sparc64, from Kirill Tkhai.

 3) Fix Simba bridge window calculations, from Kjetil Oftedal.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next:
  sparc64: Implement HAVE_CONTEXT_TRACKING
  sparc64: Add self-IPI support for smp_send_reschedule()
  sparc: PCI: Fix incorrect address calculation of PCI Bridge windows on Simba-bridges
  sparc64: Encode huge PMDs using PTE encoding.
  sparc64: Move to 64-bit PGDs and PMDs.
  sparc64: Move from 4MB to 8MB huge pages.
  sparc64: Make PAGE_OFFSET variable.
  sparc64: Fix inconsistent max-physical-address defines.
  sparc64: Document the shift counts used to validate linear kernel addresses.
  sparc64: Define PAGE_OFFSET in terms of physical address bits.
  sparc64: Use PAGE_OFFSET instead of a magic constant.
  sparc64: Clean up 64-bit mmap exclusion defines.
2013-11-15sparc: handle pgtable_page_ctor() failKirill A. Shutemov1-5/+6
Signed-off-by: Kirill A. Shutemov <[email protected]> Acked-by: David S. Miller <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2013-11-13sparc64: Encode huge PMDs using PTE encoding.David S. Miller1-101/+1
Now that we have 64-bits for PMDs we can stop using special encodings for the huge PMD values, and just put real PTEs in there.

We allocate a _PAGE_PMD_HUGE bit to distinguish between plain PMDs and huge ones. It is the same for both 4U and 4V PTE layouts. We also use _PAGE_SPECIAL to indicate the splitting state, since a huge PMD cannot also be special.

All of the PMD --> PTE translation code disappears, and most of the huge PMD bit modifications and tests just degenerate into the PTE operations. In particular USER_PGTABLE_CHECK_PMD_HUGE becomes trivial.

As a side effect, normal PMDs don't shift the physical address around. This also speeds up the page table walks in the TLB miss paths since they don't have to do the shifts any more.

Another non-trivial aspect is that pte_modify() has to be changed to preserve the _PAGE_PMD_HUGE bits as well as the page size field of the pte.

Signed-off-by: David S. Miller <[email protected]>
2013-11-12sparc64: Move to 64-bit PGDs and PMDs.David S. Miller1-1/+1
To make the page tables compact, we were using 32-bit PGDs and PMDs. We only had to support <= 43 bits of physical addresses so this was quite feasible.

In order to support larger physical addresses we have to move to 64-bit PGDs and PMDs.

Most of the changes are straight-forward:

1) {pgd,pmd}_t --> unsigned long

2) Anything that tries to use plain "unsigned int" types with pgd/pmd values needs to be adjusted. In particular things like "0U" become "0UL".

3) {PGDIR,PMD}_BITS decrease by one.

4) In the assembler page table walkers, use "ldxa" instead of "lduwa" and adjust the low bit masks to clear out the low 3 bits instead of just the low 2 bits during pgd/pmd address formation. Also, use PTRS_PER_PGD and PTRS_PER_PMD in the sizing of the swapper_{pg_dir,low_pmd_dir} arrays.

This patch does not try to take advantage of having 64-bits in the PMDs to simplify the hugepage code, that will come in a subsequent change.

Signed-off-by: David S. Miller <[email protected]>
2013-11-12sparc64: Move from 4MB to 8MB huge pages.David S. Miller1-59/+15
The impetus for this is that we would like to move to 64-bit PMDs and PGDs, but that would result in only supporting a 42-bit address space with the current page table layout. It'd be nice to support at least 43-bits.

The reason we'd end up with only 42-bits after making PMDs and PGDs 64-bit is that we only use half-page sized PTE tables in order to make PMDs line up to 4MB, the hardware huge page size we use.

So what we do here is we make huge pages 8MB, and fabricate them using 4MB hw TLB entries.

Facilitate this by providing a "REAL_HPAGE_SHIFT" which is used in places that really need to operate on hardware 4MB pages.

Use full pages (512 entries) for PTE tables, and adjust PMD_SHIFT, PGD_SHIFT, and the build time CPP test as needed. Use a CPP test to make sure REAL_HPAGE_SHIFT and the _PAGE_SZHUGE_* we use match up.

This makes the pgtable cache completely unused, so remove the code managing it and the state used in mm_context_t. Now we have fewer spinlocks taken in the page table allocation path.

The technique we use to fabricate the 8MB pages is to transfer bit 22 from the missing virtual address into the PTEs physical address field. That takes care of the transparent huge pages case.

For hugetlb, we fill things in at the PTE level and that code already puts the sub huge page physical bits into the PTEs, based upon the offset, so there is nothing special we need to do. It all just works out.

So, a small amount of complexity in the THP case, but this code is about to get much simpler when we move to the 64-bit PMDs, as we can move away from the fancy 32-bit huge PMD encoding and just put a real PTE value in there.

With bug fixes and help from Bob Picco.

Signed-off-by: David S. Miller <[email protected]>
2013-11-12sparc64: Make PAGE_OFFSET variable.David S. Miller1-0/+92
Choose PAGE_OFFSET dynamically based upon cpu type.

Original UltraSPARC-I (spitfire) chips only supported a 44-bit virtual address space. Newer chips (T4 and later) support 52-bit virtual addresses and up to 47-bits of physical memory space. Therefore we have to adjust PAGE_OFFSET dynamically based upon the capabilities of the chip.

Note that this change alone does not allow us to support > 43-bit physical memory; to do that we need to re-arrange our page table support. The current encodings of the pmd_t and pgd_t pointers restrict us to "32 + 11" == 43 bits.

This change can waste quite a bit of memory for the various tables. In particular, a future change should work to size and allocate kern_linear_bitmap[] and sparc64_valid_addr_bitmap[] dynamically. This isn't easy as we really cannot take a TLB miss when accessing kern_linear_bitmap[]. We'd have to lock it into the TLB or similar.

Signed-off-by: David S. Miller <[email protected]>
Acked-by: Bob Picco <[email protected]>
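A sketch of the dynamic selection (the probe values shown are illustrative; the real patch derives them from the detected cpu type):

    unsigned long PAGE_OFFSET;
    EXPORT_SYMBOL(PAGE_OFFSET);

    static void __init setup_page_offset(void)
    {
        /* Illustrative capability probe: fewer physical address bits
         * on older chips, more on sun4v T4-and-later. */
        unsigned long max_phys_bits = 42;

        if (tlb_type == hypervisor)
            max_phys_bits = 47;

        PAGE_OFFSET = (unsigned long)-(1UL << max_phys_bits);
    }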
2013-11-12sparc64: Use PAGE_OFFSET instead of a magic constant.David S. Miller1-7/+7
This pertains to all of the computations of the kernel fast TLB miss xor values. Based upon a patch by Bob Picco. Signed-off-by: David S. Miller <[email protected]> Acked-by: Bob Picco <[email protected]>