Diffstat (limited to 'Documentation/core-api')
| -rw-r--r-- | Documentation/core-api/atomic_ops.rst         |  13 |
| -rw-r--r-- | Documentation/core-api/cachetlb.rst           | 415 |
| -rw-r--r-- | Documentation/core-api/circular-buffers.rst   | 237 |
| -rw-r--r-- | Documentation/core-api/gfp_mask-from-fs-io.rst |  66 |
| -rw-r--r-- | Documentation/core-api/index.rst              |   3 |
| -rw-r--r-- | Documentation/core-api/kernel-api.rst         |  62 |
| -rw-r--r-- | Documentation/core-api/printk-formats.rst     |   3 |
| -rw-r--r-- | Documentation/core-api/refcount-vs-atomic.rst |   2 |
8 files changed, 761 insertions(+), 40 deletions(-)
| diff --git a/Documentation/core-api/atomic_ops.rst b/Documentation/core-api/atomic_ops.rst index fce929144ccd..2e7165f86f55 100644 --- a/Documentation/core-api/atomic_ops.rst +++ b/Documentation/core-api/atomic_ops.rst @@ -111,7 +111,6 @@ If the compiler can prove that do_something() does not store to the  variable a, then the compiler is within its rights transforming this to  the following:: -	tmp = a;  	if (a > 0)  		for (;;)  			do_something(); @@ -119,7 +118,7 @@ the following::  If you don't want the compiler to do this (and you probably don't), then  you should use something like the following:: -	while (READ_ONCE(a) < 0) +	while (READ_ONCE(a) > 0)  		do_something();  Alternatively, you could place a barrier() call in the loop. @@ -467,10 +466,12 @@ Like the above, except that these routines return a boolean which  indicates whether the changed bit was set _BEFORE_ the atomic bit  operation. -WARNING! It is incredibly important that the value be a boolean, -ie. "0" or "1".  Do not try to be fancy and save a few instructions by -declaring the above to return "long" and just returning something like -"old_val & mask" because that will not work. + +.. warning:: +        It is incredibly important that the value be a boolean, ie. "0" or "1". +        Do not try to be fancy and save a few instructions by declaring the +        above to return "long" and just returning something like "old_val & +        mask" because that will not work.  For one thing, this return value gets truncated to int in many code  paths using these interfaces, so on 64-bit if the bit is set in the diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst new file mode 100644 index 000000000000..6eb9d3f090cd --- /dev/null +++ b/Documentation/core-api/cachetlb.rst @@ -0,0 +1,415 @@ +================================== +Cache and TLB Flushing Under Linux +================================== + +:Author: David S. Miller <[email protected]> + +This document describes the cache/tlb flushing interfaces called +by the Linux VM subsystem.  It enumerates over each interface, +describes its intended purpose, and what side effect is expected +after the interface is invoked. + +The side effects described below are stated for a uniprocessor +implementation, and what is to happen on that single processor.  The +SMP cases are a simple extension, in that you just extend the +definition such that the side effect for a particular interface occurs +on all processors in the system.  Don't let this scare you into +thinking SMP cache/tlb flushing must be so inefficient, this is in +fact an area where many optimizations are possible.  For example, +if it can be proven that a user address space has never executed +on a cpu (see mm_cpumask()), one need not perform a flush +for this address space on that cpu. + +First, the TLB flushing interfaces, since they are the simplest.  The +"TLB" is abstracted under Linux as something the cpu uses to cache +virtual-->physical address translations obtained from the software +page tables.  Meaning that if the software page tables change, it is +possible for stale translations to exist in this "TLB" cache. +Therefore when software page table changes occur, the kernel will +invoke one of the following flush methods _after_ the page table +changes occur: + +1) ``void flush_tlb_all(void)`` + +	The most severe flush of all.  After this interface runs, +	any previous page table modification whatsoever will be +	visible to the cpu. 
+ +	This is usually invoked when the kernel page tables are +	changed, since such translations are "global" in nature. + +2) ``void flush_tlb_mm(struct mm_struct *mm)`` + +	This interface flushes an entire user address space from +	the TLB.  After running, this interface must make sure that +	any previous page table modifications for the address space +	'mm' will be visible to the cpu.  That is, after running, +	there will be no entries in the TLB for 'mm'. + +	This interface is used to handle whole address space +	page table operations such as what happens during +	fork, and exec. + +3) ``void flush_tlb_range(struct vm_area_struct *vma, +   unsigned long start, unsigned long end)`` + +	Here we are flushing a specific range of (user) virtual +	address translations from the TLB.  After running, this +	interface must make sure that any previous page table +	modifications for the address space 'vma->vm_mm' in the range +	'start' to 'end-1' will be visible to the cpu.  That is, after +	running, there will be no entries in the TLB for 'mm' for +	virtual addresses in the range 'start' to 'end-1'. + +	The "vma" is the backing store being used for the region. +	Primarily, this is used for munmap() type operations. + +	The interface is provided in hopes that the port can find +	a suitably efficient method for removing multiple page +	sized translations from the TLB, instead of having the kernel +	call flush_tlb_page (see below) for each entry which may be +	modified. + +4) ``void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)`` + +	This time we need to remove the PAGE_SIZE sized translation +	from the TLB.  The 'vma' is the backing structure used by +	Linux to keep track of mmap'd regions for a process, the +	address space is available via vma->vm_mm.  Also, one may +	test (vma->vm_flags & VM_EXEC) to see if this region is +	executable (and thus could be in the 'instruction TLB' in +	split-tlb type setups). + +	After running, this interface must make sure that any previous +	page table modification for address space 'vma->vm_mm' for +	user virtual address 'addr' will be visible to the cpu.  That +	is, after running, there will be no entries in the TLB for +	'vma->vm_mm' for virtual address 'addr'. + +	This is used primarily during fault processing. + +5) ``void update_mmu_cache(struct vm_area_struct *vma, +   unsigned long address, pte_t *ptep)`` + +	At the end of every page fault, this routine is invoked to +	tell the architecture specific code that a translation +	now exists at virtual address "address" for address space +	"vma->vm_mm", in the software page tables. + +	A port may use this information in any way it so chooses. +	For example, it could use this event to pre-load TLB +	translations for software managed TLB configurations. +	The sparc64 port currently does this. + +6) ``void tlb_migrate_finish(struct mm_struct *mm)`` + +	This interface is called at the end of an explicit +	process migration. This interface provides a hook +	to allow a platform to update TLB or context-specific +	information for the address space. + +	The ia64 sn2 platform is one example of a platform +	that uses this interface. + +Next, we have the cache flushing interfaces.  
In general, when Linux +is changing an existing virtual-->physical mapping to a new value, +the sequence will be in one of the following forms:: + +	1) flush_cache_mm(mm); +	   change_all_page_tables_of(mm); +	   flush_tlb_mm(mm); + +	2) flush_cache_range(vma, start, end); +	   change_range_of_page_tables(mm, start, end); +	   flush_tlb_range(vma, start, end); + +	3) flush_cache_page(vma, addr, pfn); +	   set_pte(pte_pointer, new_pte_val); +	   flush_tlb_page(vma, addr); + +The cache level flush will always be first, because this allows +us to properly handle systems whose caches are strict and require +a virtual-->physical translation to exist for a virtual address +when that virtual address is flushed from the cache.  The HyperSparc +cpu is one such cpu with this attribute. + +The cache flushing routines below need only deal with cache flushing +to the extent that it is necessary for a particular cpu.  Mostly, +these routines must be implemented for cpus which have virtually +indexed caches which must be flushed when virtual-->physical +translations are changed or removed.  So, for example, the physically +indexed physically tagged caches of IA32 processors have no need to +implement these interfaces since the caches are fully synchronized +and have no dependency on translation information. + +Here are the routines, one by one: + +1) ``void flush_cache_mm(struct mm_struct *mm)`` + +	This interface flushes an entire user address space from +	the caches.  That is, after running, there will be no cache +	lines associated with 'mm'. + +	This interface is used to handle whole address space +	page table operations such as what happens during exit and exec. + +2) ``void flush_cache_dup_mm(struct mm_struct *mm)`` + +	This interface flushes an entire user address space from +	the caches.  That is, after running, there will be no cache +	lines associated with 'mm'. + +	This interface is used to handle whole address space +	page table operations such as what happens during fork. + +	This option is separate from flush_cache_mm to allow some +	optimizations for VIPT caches. + +3) ``void flush_cache_range(struct vm_area_struct *vma, +   unsigned long start, unsigned long end)`` + +	Here we are flushing a specific range of (user) virtual +	addresses from the cache.  After running, there will be no +	entries in the cache for 'vma->vm_mm' for virtual addresses in +	the range 'start' to 'end-1'. + +	The "vma" is the backing store being used for the region. +	Primarily, this is used for munmap() type operations. + +	The interface is provided in hopes that the port can find +	a suitably efficient method for removing multiple page +	sized regions from the cache, instead of having the kernel +	call flush_cache_page (see below) for each entry which may be +	modified. + +4) ``void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)`` + +	This time we need to remove a PAGE_SIZE sized range +	from the cache.  The 'vma' is the backing structure used by +	Linux to keep track of mmap'd regions for a process, the +	address space is available via vma->vm_mm.  Also, one may +	test (vma->vm_flags & VM_EXEC) to see if this region is +	executable (and thus could be in the 'instruction cache' in +	"Harvard" type cache layouts). + +	The 'pfn' indicates the physical page frame (shift this value +	left by PAGE_SHIFT to get the physical address) that 'addr' +	translates to.  It is this mapping which should be removed from +	the cache. 
+ +	After running, there will be no entries in the cache for +	'vma->vm_mm' for virtual address 'addr' which translates +	to 'pfn'. + +	This is used primarily during fault processing. + +5) ``void flush_cache_kmaps(void)`` + +	This routine need only be implemented if the platform utilizes +	highmem.  It will be called right before all of the kmaps +	are invalidated. + +	After running, there will be no entries in the cache for +	the kernel virtual address range PKMAP_ADDR(0) to +	PKMAP_ADDR(LAST_PKMAP). + +	This routing should be implemented in asm/highmem.h + +6) ``void flush_cache_vmap(unsigned long start, unsigned long end)`` +   ``void flush_cache_vunmap(unsigned long start, unsigned long end)`` + +	Here in these two interfaces we are flushing a specific range +	of (kernel) virtual addresses from the cache.  After running, +	there will be no entries in the cache for the kernel address +	space for virtual addresses in the range 'start' to 'end-1'. + +	The first of these two routines is invoked after map_vm_area() +	has installed the page table entries.  The second is invoked +	before unmap_kernel_range() deletes the page table entries. + +There exists another whole class of cpu cache issues which currently +require a whole different set of interfaces to handle properly. +The biggest problem is that of virtual aliasing in the data cache +of a processor. + +Is your port susceptible to virtual aliasing in its D-cache? +Well, if your D-cache is virtually indexed, is larger in size than +PAGE_SIZE, and does not prevent multiple cache lines for the same +physical address from existing at once, you have this problem. + +If your D-cache has this problem, first define asm/shmparam.h SHMLBA +properly, it should essentially be the size of your virtually +addressed D-cache (or if the size is variable, the largest possible +size).  This setting will force the SYSv IPC layer to only allow user +processes to mmap shared memory at address which are a multiple of +this value. + +.. note:: + +  This does not fix shared mmaps, check out the sparc64 port for +  one way to solve this (in particular SPARC_FLAG_MMAPSHARED). + +Next, you have to solve the D-cache aliasing issue for all +other cases.  Please keep in mind that fact that, for a given page +mapped into some user address space, there is always at least one more +mapping, that of the kernel in its linear mapping starting at +PAGE_OFFSET.  So immediately, once the first user maps a given +physical page into its address space, by implication the D-cache +aliasing problem has the potential to exist since the kernel already +maps this page at its virtual address. + +  ``void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)`` +  ``void clear_user_page(void *to, unsigned long addr, struct page *page)`` + +	These two routines store data in user anonymous or COW +	pages.  It allows a port to efficiently avoid D-cache alias +	issues between userspace and the kernel. + +	For example, a port may temporarily map 'from' and 'to' to +	kernel virtual addresses during the copy.  The virtual address +	for these two pages is chosen in such a way that the kernel +	load/store instructions happen to virtual addresses which are +	of the same "color" as the user mapping of the page.  Sparc64 +	for example, uses this technique. + +	The 'addr' parameter tells the virtual address where the +	user will ultimately have this page mapped, and the 'page' +	parameter gives a pointer to the struct page of the target. 
+ +	If D-cache aliasing is not an issue, these two routines may +	simply call memcpy/memset directly and do nothing more. + +  ``void flush_dcache_page(struct page *page)`` + +	Any time the kernel writes to a page cache page, _OR_ +	the kernel is about to read from a page cache page and +	user space shared/writable mappings of this page potentially +	exist, this routine is called. + +	.. note:: + +	      This routine need only be called for page cache pages +	      which can potentially ever be mapped into the address +	      space of a user process.  So for example, VFS layer code +	      handling vfs symlinks in the page cache need not call +	      this interface at all. + +	The phrase "kernel writes to a page cache page" means, +	specifically, that the kernel executes store instructions +	that dirty data in that page at the page->virtual mapping +	of that page.  It is important to flush here to handle +	D-cache aliasing, to make sure these kernel stores are +	visible to user space mappings of that page. + +	The corollary case is just as important, if there are users +	which have shared+writable mappings of this file, we must make +	sure that kernel reads of these pages will see the most recent +	stores done by the user. + +	If D-cache aliasing is not an issue, this routine may +	simply be defined as a nop on that architecture. + +        There is a bit set aside in page->flags (PG_arch_1) as +	"architecture private".  The kernel guarantees that, +	for pagecache pages, it will clear this bit when such +	a page first enters the pagecache. + +	This allows these interfaces to be implemented much more +	efficiently.  It allows one to "defer" (perhaps indefinitely) +	the actual flush if there are currently no user processes +	mapping this page.  See sparc64's flush_dcache_page and +	update_mmu_cache implementations for an example of how to go +	about doing this. + +	The idea is, first at flush_dcache_page() time, if +	page->mapping->i_mmap is an empty tree, just mark the architecture +	private page flag bit.  Later, in update_mmu_cache(), a check is +	made of this flag bit, and if set the flush is done and the flag +	bit is cleared. + +	.. important:: + +			It is often important, if you defer the flush, +			that the actual flush occurs on the same CPU +			as did the cpu stores into the page to make it +			dirty.  Again, see sparc64 for examples of how +			to deal with this. + +  ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page, +  unsigned long user_vaddr, void *dst, void *src, int len)`` +  ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page, +  unsigned long user_vaddr, void *dst, void *src, int len)`` + +	When the kernel needs to copy arbitrary data in and out +	of arbitrary user pages (f.e. for ptrace()) it will use +	these two routines. + +	Any necessary cache flushing or other coherency operations +	that need to occur should happen here.  If the processor's +	instruction cache does not snoop cpu stores, it is very +	likely that you will need to flush the instruction cache +	for copy_to_user_page(). + +  ``void flush_anon_page(struct vm_area_struct *vma, struct page *page, +  unsigned long vmaddr)`` + +  	When the kernel needs to access the contents of an anonymous +	page, it calls this function (currently only +	get_user_pages()).  Note: flush_dcache_page() deliberately +	doesn't work for an anonymous page.  The default +	implementation is a nop (and should remain so for all coherent +	architectures).  
For incoherent architectures, it should flush +	the cache of the page at vmaddr. + +  ``void flush_kernel_dcache_page(struct page *page)`` + +	When the kernel needs to modify a user page is has obtained +	with kmap, it calls this function after all modifications are +	complete (but before kunmapping it) to bring the underlying +	page up to date.  It is assumed here that the user has no +	incoherent cached copies (i.e. the original page was obtained +	from a mechanism like get_user_pages()).  The default +	implementation is a nop and should remain so on all coherent +	architectures.  On incoherent architectures, this should flush +	the kernel cache for page (using page_address(page)). + + +  ``void flush_icache_range(unsigned long start, unsigned long end)`` + +  	When the kernel stores into addresses that it will execute +	out of (eg when loading modules), this function is called. + +	If the icache does not snoop stores then this routine will need +	to flush it. + +  ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)`` + +	All the functionality of flush_icache_page can be implemented in +	flush_dcache_page and update_mmu_cache. In the future, the hope +	is to remove this interface completely. + +The final category of APIs is for I/O to deliberately aliased address +ranges inside the kernel.  Such aliases are set up by use of the +vmap/vmalloc API.  Since kernel I/O goes via physical pages, the I/O +subsystem assumes that the user mapping and kernel offset mapping are +the only aliases.  This isn't true for vmap aliases, so anything in +the kernel trying to do I/O to vmap areas must manually manage +coherency.  It must do this by flushing the vmap range before doing +I/O and invalidating it after the I/O returns. + +  ``void flush_kernel_vmap_range(void *vaddr, int size)`` + +       flushes the kernel cache for a given virtual address range in +       the vmap area.  This is to make sure that any data the kernel +       modified in the vmap range is made visible to the physical +       page.  The design is to make this area safe to perform I/O on. +       Note that this API does *not* also flush the offset map alias +       of the area. + +  ``void invalidate_kernel_vmap_range(void *vaddr, int size) invalidates`` + +       the cache for a given virtual address range in the vmap area +       which prevents the processor from making the cache stale by +       speculatively reading data while the I/O was occurring to the +       physical pages.  This is only necessary for data reads into the +       vmap area. diff --git a/Documentation/core-api/circular-buffers.rst b/Documentation/core-api/circular-buffers.rst new file mode 100644 index 000000000000..53e51caa3347 --- /dev/null +++ b/Documentation/core-api/circular-buffers.rst @@ -0,0 +1,237 @@ +================ +Circular Buffers +================ + +:Author: David Howells <[email protected]> +:Author: Paul E. McKenney <[email protected]> + + +Linux provides a number of features that can be used to implement circular +buffering.  There are two sets of such features: + + (1) Convenience functions for determining information about power-of-2 sized +     buffers. + + (2) Memory barriers for when the producer and the consumer of objects in the +     buffer don't want to share a lock. + +To use these facilities, as discussed below, there needs to be just one +producer and just one consumer.  It is possible to handle multiple producers by +serialising them, and to handle multiple consumers by serialising them. + + +.. 
Contents: + + (*) What is a circular buffer? + + (*) Measuring power-of-2 buffers. + + (*) Using memory barriers with circular buffers. +     - The producer. +     - The consumer. + + + +What is a circular buffer? +========================== + +First of all, what is a circular buffer?  A circular buffer is a buffer of +fixed, finite size into which there are two indices: + + (1) A 'head' index - the point at which the producer inserts items into the +     buffer. + + (2) A 'tail' index - the point at which the consumer finds the next item in +     the buffer. + +Typically when the tail pointer is equal to the head pointer, the buffer is +empty; and the buffer is full when the head pointer is one less than the tail +pointer. + +The head index is incremented when items are added, and the tail index when +items are removed.  The tail index should never jump the head index, and both +indices should be wrapped to 0 when they reach the end of the buffer, thus +allowing an infinite amount of data to flow through the buffer. + +Typically, items will all be of the same unit size, but this isn't strictly +required to use the techniques below.  The indices can be increased by more +than 1 if multiple items or variable-sized items are to be included in the +buffer, provided that neither index overtakes the other.  The implementer must +be careful, however, as a region more than one unit in size may wrap the end of +the buffer and be broken into two segments. + +Measuring power-of-2 buffers +============================ + +Calculation of the occupancy or the remaining capacity of an arbitrarily sized +circular buffer would normally be a slow operation, requiring the use of a +modulus (divide) instruction.  However, if the buffer is of a power-of-2 size, +then a much quicker bitwise-AND instruction can be used instead. + +Linux provides a set of macros for handling power-of-2 circular buffers.  These +can be made use of by:: + +	#include <linux/circ_buf.h> + +The macros are: + + (#) Measure the remaining capacity of a buffer:: + +	CIRC_SPACE(head_index, tail_index, buffer_size); + +     This returns the amount of space left in the buffer[1] into which items +     can be inserted. + + + (#) Measure the maximum consecutive immediate space in a buffer:: + +	CIRC_SPACE_TO_END(head_index, tail_index, buffer_size); + +     This returns the amount of consecutive space left in the buffer[1] into +     which items can be immediately inserted without having to wrap back to the +     beginning of the buffer. + + + (#) Measure the occupancy of a buffer:: + +	CIRC_CNT(head_index, tail_index, buffer_size); + +     This returns the number of items currently occupying a buffer[2]. + + + (#) Measure the non-wrapping occupancy of a buffer:: + +	CIRC_CNT_TO_END(head_index, tail_index, buffer_size); + +     This returns the number of consecutive items[2] that can be extracted from +     the buffer without having to wrap back to the beginning of the buffer. + + +Each of these macros will nominally return a value between 0 and buffer_size-1, +however: + + (1) CIRC_SPACE*() are intended to be used in the producer.  To the producer +     they will return a lower bound as the producer controls the head index, +     but the consumer may still be depleting the buffer on another CPU and +     moving the tail index. + +     To the consumer it will show an upper bound as the producer may be busy +     depleting the space. + + (2) CIRC_CNT*() are intended to be used in the consumer.  
To the consumer they +     will return a lower bound as the consumer controls the tail index, but the +     producer may still be filling the buffer on another CPU and moving the +     head index. + +     To the producer it will show an upper bound as the consumer may be busy +     emptying the buffer. + + (3) To a third party, the order in which the writes to the indices by the +     producer and consumer become visible cannot be guaranteed as they are +     independent and may be made on different CPUs - so the result in such a +     situation will merely be a guess, and may even be negative. + +Using memory barriers with circular buffers +=========================================== + +By using memory barriers in conjunction with circular buffers, you can avoid +the need to: + + (1) use a single lock to govern access to both ends of the buffer, thus +     allowing the buffer to be filled and emptied at the same time; and + + (2) use atomic counter operations. + +There are two sides to this: the producer that fills the buffer, and the +consumer that empties it.  Only one thing should be filling a buffer at any one +time, and only one thing should be emptying a buffer at any one time, but the +two sides can operate simultaneously. + + +The producer +------------ + +The producer will look something like this:: + +	spin_lock(&producer_lock); + +	unsigned long head = buffer->head; +	/* The spin_unlock() and next spin_lock() provide needed ordering. */ +	unsigned long tail = READ_ONCE(buffer->tail); + +	if (CIRC_SPACE(head, tail, buffer->size) >= 1) { +		/* insert one item into the buffer */ +		struct item *item = buffer[head]; + +		produce_item(item); + +		smp_store_release(buffer->head, +				  (head + 1) & (buffer->size - 1)); + +		/* wake_up() will make sure that the head is committed before +		 * waking anyone up */ +		wake_up(consumer); +	} + +	spin_unlock(&producer_lock); + +This will instruct the CPU that the contents of the new item must be written +before the head index makes it available to the consumer and then instructs the +CPU that the revised head index must be written before the consumer is woken. + +Note that wake_up() does not guarantee any sort of barrier unless something +is actually awakened.  We therefore cannot rely on it for ordering.  However, +there is always one element of the array left empty.  Therefore, the +producer must produce two elements before it could possibly corrupt the +element currently being read by the consumer.  Therefore, the unlock-lock +pair between consecutive invocations of the consumer provides the necessary +ordering between the read of the index indicating that the consumer has +vacated a given element and the write by the producer to that same element. + + +The Consumer +------------ + +The consumer will look something like this:: + +	spin_lock(&consumer_lock); + +	/* Read index before reading contents at that index. */ +	unsigned long head = smp_load_acquire(buffer->head); +	unsigned long tail = buffer->tail; + +	if (CIRC_CNT(head, tail, buffer->size) >= 1) { + +		/* extract one item from the buffer */ +		struct item *item = buffer[tail]; + +		consume_item(item); + +		/* Finish reading descriptor before incrementing tail. 
*/ +		smp_store_release(buffer->tail, +				  (tail + 1) & (buffer->size - 1)); +	} + +	spin_unlock(&consumer_lock); + +This will instruct the CPU to make sure the index is up to date before reading +the new item, and then it shall make sure the CPU has finished reading the item +before it writes the new tail pointer, which will erase the item. + +Note the use of READ_ONCE() and smp_load_acquire() to read the +opposition index.  This prevents the compiler from discarding and +reloading its cached value.  This isn't strictly needed if you can +be sure that the opposition index will _only_ be used the once. +The smp_load_acquire() additionally forces the CPU to order against +subsequent memory references.  Similarly, smp_store_release() is used +in both algorithms to write the thread's index.  This documents the +fact that we are writing to something that can be read concurrently, +prevents the compiler from tearing the store, and enforces ordering +against previous accesses. + + +Further reading +=============== + +See also Documentation/memory-barriers.txt for a description of Linux's memory +barrier facilities. diff --git a/Documentation/core-api/gfp_mask-from-fs-io.rst b/Documentation/core-api/gfp_mask-from-fs-io.rst new file mode 100644 index 000000000000..e0df8f416582 --- /dev/null +++ b/Documentation/core-api/gfp_mask-from-fs-io.rst @@ -0,0 +1,66 @@ +================================= +GFP masks used from FS/IO context +================================= + +:Date: May, 2018 +:Author: Michal Hocko <[email protected]> + +Introduction +============ + +Code paths in the filesystem and IO stacks must be careful when +allocating memory to prevent recursion deadlocks caused by direct +memory reclaim calling back into the FS or IO paths and blocking on +already held resources (e.g. locks - most commonly those used for the +transaction context). + +The traditional way to avoid this deadlock problem is to clear __GFP_FS +respectively __GFP_IO (note the latter implies clearing the first as well) in +the gfp mask when calling an allocator. GFP_NOFS respectively GFP_NOIO can be +used as shortcut. It turned out though that above approach has led to +abuses when the restricted gfp mask is used "just in case" without a +deeper consideration which leads to problems because an excessive use +of GFP_NOFS/GFP_NOIO can lead to memory over-reclaim or other memory +reclaim issues. + +New API +======== + +Since 4.12 we do have a generic scope API for both NOFS and NOIO context +``memalloc_nofs_save``, ``memalloc_nofs_restore`` respectively ``memalloc_noio_save``, +``memalloc_noio_restore`` which allow to mark a scope to be a critical +section from a filesystem or I/O point of view. Any allocation from that +scope will inherently drop __GFP_FS respectively __GFP_IO from the given +mask so no memory allocation can recurse back in the FS/IO. + +.. kernel-doc:: include/linux/sched/mm.h +   :functions: memalloc_nofs_save memalloc_nofs_restore +.. kernel-doc:: include/linux/sched/mm.h +   :functions: memalloc_noio_save memalloc_noio_restore + +FS/IO code then simply calls the appropriate save function before +any critical section with respect to the reclaim is started - e.g. +lock shared with the reclaim context or when a transaction context +nesting would be possible via reclaim. The restore function should be +called when the critical section ends. All that ideally along with an +explanation what is the reclaim context for easier maintenance. 
+ +Please note that the proper pairing of save/restore functions +allows nesting so it is safe to call ``memalloc_noio_save`` or +``memalloc_noio_restore`` respectively from an existing NOIO or NOFS +scope. + +What about __vmalloc(GFP_NOFS) +============================== + +vmalloc doesn't support GFP_NOFS semantic because there are hardcoded +GFP_KERNEL allocations deep inside the allocator which are quite non-trivial +to fix up. That means that calling ``vmalloc`` with GFP_NOFS/GFP_NOIO is +almost always a bug. The good news is that the NOFS/NOIO semantic can be +achieved by the scope API. + +In the ideal world, upper layers should already mark dangerous contexts +and so no special care is required and vmalloc should be called without +any problems. Sometimes if the context is not really clear or there are +layering violations then the recommended way around that is to wrap ``vmalloc`` +by the scope API with a comment explaining the problem. diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst index c670a8031786..f5a66b72f984 100644 --- a/Documentation/core-api/index.rst +++ b/Documentation/core-api/index.rst @@ -14,6 +14,7 @@ Core utilities     kernel-api     assoc_array     atomic_ops +   cachetlb     refcount-vs-atomic     cpu_hotplug     idr @@ -25,6 +26,8 @@ Core utilities     genalloc     errseq     printk-formats +   circular-buffers +   gfp_mask-from-fs-io  Interfaces for kernel debugging  =============================== diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst index 92f30006adae..76fe2d0f5e7d 100644 --- a/Documentation/core-api/kernel-api.rst +++ b/Documentation/core-api/kernel-api.rst @@ -39,17 +39,17 @@ String Manipulation  .. kernel-doc:: lib/string.c     :export: +Basic Kernel Library Functions +============================== + +The Linux kernel provides more basic utility functions. +  Bit Operations  --------------  .. kernel-doc:: arch/x86/include/asm/bitops.h     :internal: -Basic Kernel Library Functions -============================== - -The Linux kernel provides more basic utility functions. -  Bitmap Operations  ----------------- @@ -80,6 +80,31 @@ Command-line Parsing  .. kernel-doc:: lib/cmdline.c     :export: +Sorting +------- + +.. kernel-doc:: lib/sort.c +   :export: + +.. kernel-doc:: lib/list_sort.c +   :export: + +Text Searching +-------------- + +.. kernel-doc:: lib/textsearch.c +   :doc: ts_intro + +.. kernel-doc:: lib/textsearch.c +   :export: + +.. kernel-doc:: include/linux/textsearch.h +   :functions: textsearch_find textsearch_next \ +               textsearch_get_pattern textsearch_get_pattern_len + +CRC and Math Functions in Linux +=============================== +  CRC Functions  ------------- @@ -103,9 +128,6 @@ CRC Functions  .. kernel-doc:: lib/crc-itu-t.c     :export: -Math Functions in Linux -======================= -  Base 2 log and power Functions  ------------------------------ @@ -127,28 +149,6 @@ Division Functions  .. kernel-doc:: lib/gcd.c     :export: -Sorting -------- - -.. kernel-doc:: lib/sort.c -   :export: - -.. kernel-doc:: lib/list_sort.c -   :export: - -Text Searching --------------- - -.. kernel-doc:: lib/textsearch.c -   :doc: ts_intro - -.. kernel-doc:: lib/textsearch.c -   :export: - -.. kernel-doc:: include/linux/textsearch.h -   :functions: textsearch_find textsearch_next \ -               textsearch_get_pattern textsearch_get_pattern_len -  UUID/GUID  --------- @@ -284,7 +284,7 @@ Resources Management  MTRR Handling  ------------- -.. 
kernel-doc:: arch/x86/kernel/cpu/mtrr/main.c +.. kernel-doc:: arch/x86/kernel/cpu/mtrr/mtrr.c     :export:  Security Framework diff --git a/Documentation/core-api/printk-formats.rst b/Documentation/core-api/printk-formats.rst index eb30efdd2e78..25dc591cb110 100644 --- a/Documentation/core-api/printk-formats.rst +++ b/Documentation/core-api/printk-formats.rst @@ -419,11 +419,10 @@ struct clk  	%pC	pll1  	%pCn	pll1 -	%pCr	1560000000  For printing struct clk structures. %pC and %pCn print the name  (Common Clock Framework) or address (legacy clock framework) of the -structure; %pCr prints the current clock rate. +structure.  Passed by reference. diff --git a/Documentation/core-api/refcount-vs-atomic.rst b/Documentation/core-api/refcount-vs-atomic.rst index 83351c258cdb..322851bada16 100644 --- a/Documentation/core-api/refcount-vs-atomic.rst +++ b/Documentation/core-api/refcount-vs-atomic.rst @@ -17,7 +17,7 @@ in order to help maintainers validate their code against the change in  these memory ordering guarantees.  The terms used through this document try to follow the formal LKMM defined in -github.com/aparri/memory-model/blob/master/Documentation/explanation.txt +tools/memory-model/Documentation/explanation.txt.  memory-barriers.txt and atomic_t.txt provide more background to the  memory ordering in general and for atomic operations specifically. |