path: root/include/linux/rwsem.h
Age  Commit message  (Author, files, lines -/+)
2016-06-08  locking/rwsem: Convert sem->count to 'atomic_long_t'  (Jason Low, 1 file, -3/+5)
Convert the rwsem count variable to an atomic_long_t since we use it as an atomic variable. This also allows us to remove the rwsem_atomic_{add,update}() "abstraction", which would now be an unnecessary level of indirection. In follow-up patches, we also remove the rwsem_atomic_{add,update}() definitions across the various architectures. Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Jason Low <[email protected]> [ Build warning fixes on various architectures. ] Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Fenghua Yu <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Jason Low <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Paul E. McKenney <[email protected]> Cc: Peter Hurley <[email protected]> Cc: Terry Rudd <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tim Chen <[email protected]> Cc: Tony Luck <[email protected]> Cc: Waiman Long <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
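For illustration, a minimal sketch of the resulting shape (not the verbatim kernel diff; surrounding members elided):

    #include <linux/atomic.h>

    /*
     * Sketch: the count member becomes an atomic_long_t, so generic
     * atomic_long_*() operations replace the per-arch
     * rwsem_atomic_{add,update}() helpers at the use sites.
     */
    struct rw_semaphore {
            atomic_long_t count;    /* previously: long count */
            /* wait_list, wait_lock, osq, owner: unchanged */
    };

    /* before: rwsem_atomic_add(RWSEM_ACTIVE_READ_BIAS, sem);        */
    /* after:  atomic_long_add(RWSEM_ACTIVE_READ_BIAS, &sem->count); */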
2016-05-26  add down_write_killable_nested()  (Al Viro, 1 file, -0/+2)
Signed-off-by: Al Viro <[email protected]>
2016-04-22  locking/rwsem: Provide down_write_killable()  (Michal Hocko, 1 file, -0/+1)
Now that all the architectures implement the necessary glue code we can introduce down_write_killable(). The only difference wrt. regular down_write() is that the slow path waits in TASK_KILLABLE state and the interruption by the fatal signal is reported as -EINTR to the caller. Signed-off-by: Michal Hocko <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Chris Zankel <[email protected]> Cc: David S. Miller <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul E. McKenney <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Signed-off-by: Davidlohr Bueso <[email protected]> Cc: Signed-off-by: Jason Low <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
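A short usage sketch (the semaphore and function here are illustrative, not a specific kernel call site); the killable variant returns 0 on success and -EINTR when interrupted:

    #include <linux/rwsem.h>

    static int do_protected_update(struct rw_semaphore *sem)
    {
            /* sleeps in TASK_KILLABLE in the slow path */
            if (down_write_killable(sem))
                    return -EINTR;  /* lock is NOT held on this path */

            /* ... modify the protected state ... */

            up_write(sem);
            return 0;
    }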
2016-04-13  locking/rwsem: Introduce basis for down_write_killable()  (Michal Hocko, 1 file, -0/+2)
Introduce a generic implementation necessary for down_write_killable(). This is a trivial extension of the already existing down_write() call which can be interrupted by SIGKILL. This patch doesn't provide down_write_killable() yet because arches have to provide the necessary pieces first. rwsem_down_write_failed(), the generic slow path for the write lock, is extended to take a task state and renamed to __rwsem_down_write_failed_common(). The return value is either a valid semaphore pointer or ERR_PTR(-EINTR). rwsem_down_write_failed_killable() is exported as a new way to wait for the lock and be killable. For the rwsem-spinlock implementation, the current __down_write() is updated in a similar way to __rwsem_down_write_failed_common(), except that it doesn't need new exports, just a visible __down_write_killable(). Architectures which are not using the generic rwsem implementation are supposed to provide their own __down_write_killable() implementation and use rwsem_down_write_failed_killable() for the slow path. Signed-off-by: Michal Hocko <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Chris Zankel <[email protected]> Cc: David S. Miller <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Max Filippov <[email protected]> Cc: Paul E. McKenney <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Signed-off-by: Davidlohr Bueso <[email protected]> Cc: Signed-off-by: Jason Low <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
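The shape of the refactoring is roughly this (a sketch; the body of the common helper is elided):

    /*
     * Common slow path, parameterized by the task state used while
     * sleeping; returns 'sem' on success or ERR_PTR(-EINTR) when a
     * killable wait is interrupted by a fatal signal.
     */
    static struct rw_semaphore *
    __rwsem_down_write_failed_common(struct rw_semaphore *sem, int state);

    struct rw_semaphore *rwsem_down_write_failed(struct rw_semaphore *sem)
    {
            return __rwsem_down_write_failed_common(sem, TASK_UNINTERRUPTIBLE);
    }

    struct rw_semaphore *rwsem_down_write_failed_killable(struct rw_semaphore *sem)
    {
            return __rwsem_down_write_failed_common(sem, TASK_KILLABLE);
    }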
2014-08-13  locking/Documentation: Move locking related docs into Documentation/locking/  (Davidlohr Bueso, 1 file, -1/+1)
Specifically:
  Documentation/locking/lockdep-design.txt
  Documentation/locking/lockstat.txt
  Documentation/locking/mutex-design.txt
  Documentation/locking/rt-mutex-design.txt
  Documentation/locking/rt-mutex.txt
  Documentation/locking/spinlocks.txt
  Documentation/locking/ww-mutex-design.txt
Signed-off-by: Davidlohr Bueso <[email protected]> Acked-by: Randy Dunlap <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: [email protected] Cc: [email protected] Cc: Alexei Starovoitov <[email protected]> Cc: Al Viro <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Chris Mason <[email protected]> Cc: Dan Streetman <[email protected]> Cc: David Airlie <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: David S. Miller <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Jason Low <[email protected]> Cc: Josef Bacik <[email protected]> Cc: Kees Cook <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Lubomir Rintel <[email protected]> Cc: Masanari Iida <[email protected]> Cc: Paul E. McKenney <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Tim Chen <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2014-07-16  locking/rwsem: Add CONFIG_RWSEM_SPIN_ON_OWNER  (Davidlohr Bueso, 1 file, -2/+4)
Just like with mutexes (CONFIG_MUTEX_SPIN_ON_OWNER), encapsulate the dependencies for rwsem optimistic spinning. No logical changes here as it continues to depend on both SMP and the XADD algorithm variant. Signed-off-by: Davidlohr Bueso <[email protected]> Acked-by: Jason Low <[email protected]> [ Also make it depend on ARCH_SUPPORTS_ATOMIC_RMW. ] Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Cc: [email protected] Cc: Chris Mason <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Josef Bacik <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Waiman Long <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2014-07-16  locking/rwsem: Reduce the size of struct rw_semaphore  (Jason Low, 1 file, -14/+11)
Recent optimistic spinning additions to rwsem provide significant performance benefits on many workloads on large machines. The cost of it was increasing the size of the rwsem structure by up to 128 bits. However, now that the previous patches in this series bring the overhead of struct optimistic_spin_queue to 32 bits, this patch reorders some fields in struct rw_semaphore such that we can reduce the overhead of the rwsem structure by 64 bits (on 64 bit systems). The extra overhead required for rwsem optimistic spinning would now be up to 8 additional bytes instead of up to 16 bytes. Additionally, the size of rwsem would now be more in line with mutexes. Signed-off-by: Jason Low <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Scott Norton <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Waiman Long <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Andrew Morton <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Tim Chen <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Aswin Chandramouleeswaran <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Chris Mason <[email protected]> Cc: Josef Bacik <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
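A sketch of the reordered layout on 64-bit with optimistic spinning configured in (member annotations are approximate, not the verbatim header):

    struct rw_semaphore {
            long                    count;
            struct list_head        wait_list;
            raw_spinlock_t          wait_lock;       /* 4 bytes */
            struct optimistic_spin_queue osq;        /* 4 bytes; now shares
                                                        an 8-byte slot with
                                                        wait_lock */
            struct task_struct      *owner;          /* spin-on-owner target */
    #ifdef CONFIG_DEBUG_LOCK_ALLOC
            struct lockdep_map      dep_map;
    #endif
    };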
2014-07-16  locking/spinlocks/mcs: Introduce and use init macro and function for osq locks  (Jason Low, 1 file, -1/+1)
Currently, we initialize the osq lock by directly setting the lock's values. It would be preferable if we used an init macro to do the initialization, like we do with other locks. This patch introduces and uses a macro and function for initializing the osq lock. Signed-off-by: Jason Low <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Scott Norton <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Waiman Long <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Andrew Morton <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Tim Chen <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Aswin Chandramouleeswaran <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Chris Mason <[email protected]> Cc: Josef Bacik <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
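A sketch of the introduced initializers, assuming the atomic_t-based optimistic_spin_queue from the companion patch in this series:

    #define OSQ_UNLOCKED_VAL        (0)

    /* static initialization, e.g. from a rwsem/mutex initializer macro */
    #define OSQ_LOCK_UNLOCKED       { ATOMIC_INIT(OSQ_UNLOCKED_VAL) }

    /* runtime initialization, e.g. from init_rwsem() */
    static inline void osq_lock_init(struct optimistic_spin_queue *lock)
    {
            atomic_set(&lock->tail, OSQ_UNLOCKED_VAL);
    }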
2014-07-16  locking/spinlocks/mcs: Convert osq lock to atomic_t to reduce overhead  (Jason Low, 1 file, -4/+3)
The cancellable MCS spinlock is currently used to queue threads that are doing optimistic spinning. It uses per-cpu nodes, where a thread obtaining the lock would access and queue the local node corresponding to the CPU that it's running on. Currently, the cancellable MCS lock is implemented by using pointers to these nodes. In this patch, instead of operating on pointers to the per-cpu nodes, we store in an atomic_t the CPU numbers to which the per-cpu nodes correspond. A similar concept is used with the qspinlock. By operating on the CPU numbers of the nodes using an atomic_t instead of pointers to those nodes, this can reduce the overhead of the cancellable MCS spinlock by 32 bits (on 64-bit systems). Signed-off-by: Jason Low <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Scott Norton <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Waiman Long <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Andrew Morton <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Tim Chen <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Aswin Chandramouleeswaran <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Chris Mason <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Josef Bacik <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
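The new handle thus reduces to a single atomic_t; a sketch consistent with the description above:

    struct optimistic_spin_queue {
            /*
             * Stores an encoded value of the CPU number of the tail node
             * in the queue, or OSQ_UNLOCKED_VAL (0) when the queue is
             * empty; the lock/unlock paths map the CPU number back to the
             * per-cpu optimistic_spin_node with per_cpu_ptr().
             */
            atomic_t tail;
    };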
2014-07-16  locking/spinlocks/mcs: Rename optimistic_spin_queue() to optimistic_spin_node()  (Jason Low, 1 file, -2/+2)
Currently, the per-cpu nodes structure for the cancellable MCS spinlock is named "optimistic_spin_queue". However, in a follow-up patch in the series we will be introducing a new structure that serves as the new "handle" for the lock. It would make more sense if that structure were named "optimistic_spin_queue". Additionally, since the current uses of the "optimistic_spin_queue" structure are as "nodes", it might be better to rename them to "node" anyway. This preparatory patch renames all current "optimistic_spin_queue" instances to "optimistic_spin_node". Signed-off-by: Jason Low <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Scott Norton <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Waiman Long <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Andrew Morton <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Tim Chen <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Aswin Chandramouleeswaran <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Chris Mason <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Josef Bacik <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2014-06-05  locking/rwsem: Fix warnings for CONFIG_RWSEM_GENERIC_SPINLOCK  (Davidlohr Bueso, 1 file, -1/+1)
Optimistic spinning is only used by the xadd variant of rw-semaphores. Make sure that we use the old version of the __RWSEM_INITIALIZER macro for systems that rely on the spinlock one, otherwise warnings can be triggered, such as the following reported on an arm box:
  ipc/ipcns_notifier.c:22:8: warning: excess elements in struct initializer [enabled by default]
  ipc/ipcns_notifier.c:22:8: warning: (near initialization for 'ipcns_chain.rwsem') [enabled by default]
  ipc/ipcns_notifier.c:22:8: warning: excess elements in struct initializer [enabled by default]
  ipc/ipcns_notifier.c:22:8: warning: (near initialization for 'ipcns_chain.rwsem') [enabled by default]
Signed-off-by: Davidlohr Bueso <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Tim Chen <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Paul McKenney <[email protected]> Cc: Michel Lespinasse <[email protected]> Cc: Peter Hurley <[email protected]> Cc: Alex Shi <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Jason Low <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Chris Mason <[email protected]> Cc: Josef Bacik <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2014-06-05  locking/rwsem: Support optimistic spinning  (Davidlohr Bueso, 1 file, -3/+22)
We have reached the point where our mutexes are quite fine tuned for a number of situations. This includes the use of heuristics and optimistic spinning, based on MCS locking techniques. Exclusive ownership of read-write semaphores is, conceptually, just about the same as mutexes, making them close cousins. To this end we need to make them both perform similarly, and right now, rwsems are simply not up to it. This was discovered by both reverting commit 4fc3f1d6 (mm/rmap, migration: Make rmap_walk_anon() and try_to_unmap_anon() more scalable) and, similarly, converting some other mutexes (i.e., i_mmap_mutex) to rwsems. This creates a situation where users have to choose between a rwsem and a mutex taking into account this important performance difference. Specifically, the biggest difference between the two locks is that when we fail to acquire a mutex in the fastpath, optimistic spinning comes into play and we can avoid a large amount of unnecessary sleeping and the overhead of moving tasks in and out of the wait queue. Rwsems have no such logic. This patch, based on work from Tim Chen and me, adds support for write-side optimistic spinning when the lock is contended. It also includes support for the recently added cancelable MCS locking for adaptive spinning. Note that this is only applicable to the xadd method; the spinlock rwsem variant remains intact. Allowing optimistic spinning before putting the writer on the wait queue reduces wait-queue contention and provides a greater chance for the rwsem to get acquired. With these changes, rwsem is on par with mutex. The performance benefits can be seen on a number of workloads. For instance, on an 8-socket, 80-core 64-bit Westmere box, aim7 shows the following improvements in throughput:
  +--------------+---------------------+-----------------+
  | Workload     | throughput-increase | number of users |
  +--------------+---------------------+-----------------+
  | alltests     | 20%                 | >1000           |
  | custom       | 27%, 60%            | 10-100, >1000   |
  | high_systime | 36%, 30%            | >100, >1000     |
  | shared       | 58%, 29%            | 10-100, >1000   |
  +--------------+---------------------+-----------------+
There was also improvement on smaller systems, such as a quad-core x86-64 laptop running a 30Gb PostgreSQL (pgbench) workload, of up to +60% in throughput for over 50 clients. Additionally, benefits were also noticed in exim (mail server) workloads. Furthermore, no performance regressions have been seen at all. Based-on-work-from: Tim Chen <[email protected]> Signed-off-by: Davidlohr Bueso <[email protected]> [peterz: rej fixup due to comment patches, sched/rt.h header] Signed-off-by: Peter Zijlstra <[email protected]> Cc: Alex Shi <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Michel Lespinasse <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Peter Hurley <[email protected]> Cc: "Paul E.McKenney" <[email protected]> Cc: Jason Low <[email protected]> Cc: Aswin Chandramouleeswaran <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: "Scott J Norton" <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Chris Mason <[email protected]> Cc: Josef Bacik <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
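A heavily condensed sketch of the write-side spinning described above (helper names follow the patch, bodies elided; treat this as an outline, not the merged code):

    static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
    {
            bool taken = false;

            preempt_disable();
            if (!osq_lock(&sem->osq))               /* one spinner at a time */
                    goto done;

            while (rwsem_spin_on_owner(sem)) {      /* owner still on-CPU? */
                    if (rwsem_try_write_lock_unqueued(sem)) {
                            taken = true;           /* acquired, never slept */
                            break;
                    }
                    if (need_resched())             /* don't hog the CPU */
                            break;
                    cpu_relax();
            }
            osq_unlock(&sem->osq);
    done:
            preempt_enable();
            return taken;
    }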
2014-01-28  rwsem: add rwsem_is_contended  (Josef Bacik, 1 file, -0/+11)
Btrfs needs a simple way to know if it needs to let go of its read lock on a rwsem. Introduce rwsem_is_contended to check whether there are any waiters on this rwsem currently. This is just a heuristic; it is meant to be light and not 100% accurate, and to be called by somebody already holding on to the rwsem in either read or write. Thanks, Signed-off-by: Josef Bacik <[email protected]> Signed-off-by: Chris Mason <[email protected]> Acked-by: Ingo Molnar <[email protected]>
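The helper itself amounts to a one-line wait-list check:

    static inline int rwsem_is_contended(struct rw_semaphore *sem)
    {
            /* heuristic: is any task queued on the semaphore right now? */
            return !list_empty(&sem->wait_list);
    }

A holder can poll it and briefly drop the lock to let waiters through, e.g. if (rwsem_is_contended(sem)) { up_read(sem); cond_resched(); down_read(sem); }.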
2013-03-23  Revert "rw_semaphore: remove up/down_read_non_owner"  (Kent Overstreet, 1 file, -0/+10)
This reverts commit 11b80f459adaf91a712f95e7734a17655a36bf30. Bcache needs rw semaphores for cache coherency in writeback mode - writes have to take a read lock on a per cache device rw sem, and release it when the bio completes. But since this is for bios it's naturally not in the context of the process that originally took the lock. Signed-off-by: Kent Overstreet <[email protected]> CC: Christoph Hellwig <[email protected]> CC: David Howells <[email protected]>
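A usage sketch of the pattern described above (names and the bio wiring are illustrative and assume the 3.9-era block API, not bcache's actual code):

    /* completion runs in a context other than the task that locked */
    static void example_write_endio(struct bio *bio, int error)
    {
            struct rw_semaphore *sem = bio->bi_private;

            up_read_non_owner(sem);         /* released by a non-owner */
            bio_put(bio);
    }

    static void example_submit_write(struct rw_semaphore *sem, struct bio *bio)
    {
            down_read_non_owner(sem);       /* held for the bio's lifetime */
            bio->bi_private = sem;
            bio->bi_end_io = example_write_endio;
            submit_bio(WRITE, bio);
    }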
2013-01-16  lockdep, rwsem: fix down_write_nest_lock() if !CONFIG_DEBUG_LOCK_ALLOC  (Jiri Kosina, 1 file, -1/+1)
Commit 1b963c81b145 ("lockdep, rwsem: provide down_write_nest_lock()") contains a bug in a codepath when CONFIG_DEBUG_LOCK_ALLOC is disabled, which causes down_read() to be called instead of down_write() by mistake on such configurations. Fix that. Reported-and-tested-by: Andrew Clayton <[email protected]> Reported-and-tested-by: Zlatko Calusic <[email protected]> Signed-off-by: Jiri Kosina <[email protected]> Reviewed-by: Rik van Riel <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
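The gist of the fix, with the macro shape as found in that era's header (the lockdep branch unchanged, the fallback corrected):

    #ifdef CONFIG_DEBUG_LOCK_ALLOC
    # define down_write_nest_lock(sem, nest_lock)                   \
    do {                                                            \
            typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \
            _down_write_nest_lock(sem, &(nest_lock)->dep_map);      \
    } while (0)
    #else
    /* previously expanded to down_read(sem) by mistake */
    # define down_write_nest_lock(sem, nest_lock)   down_write(sem)
    #endif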
2013-01-11  lockdep, rwsem: provide down_write_nest_lock()  (Jiri Kosina, 1 file, -0/+9)
down_write_nest_lock() provides a means to annotate a locking scenario where an outer lock is guaranteed to serialize the order in which nested locks are acquired. This is analogous to the already existing mutex_lock_nest_lock() and spin_lock_nest_lock(). Signed-off-by: Jiri Kosina <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Mel Gorman <[email protected]> Tested-by: Sedat Dilek <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
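An illustrative annotation sketch, assuming an outer mutex that is always held while the child rwsems are taken (all names here are hypothetical):

    struct mutex outer_mutex;               /* serializes all acquisitions */
    struct rw_semaphore child_a, child_b;   /* same lock class */

    static void take_all_children(void)
    {
            mutex_lock(&outer_mutex);
            /* lockdep: ordering among children is vouched for by outer_mutex */
            down_write_nest_lock(&child_a, &outer_mutex);
            down_write_nest_lock(&child_b, &outer_mutex);
            /* ... */
            up_write(&child_b);
            up_write(&child_a);
            mutex_unlock(&outer_mutex);
    }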
2012-03-28  Remove all #inclusions of asm/system.h  (David Howells, 1 file, -1/+0)
Remove all #inclusions of asm/system.h preparatory to splitting and killing it. Performed with the following command: perl -p -i -e 's!^#\s*include\s*<asm/system[.]h>.*\n!!' `grep -Irl '^#\s*include\s*<asm/system[.]h>' *` Signed-off-by: David Howells <[email protected]>
2011-09-13  locking, rwsem: Annotate inner lock as raw  (Thomas Gleixner, 1 file, -4/+6)
There is no reason to allow the lock protecting rwsems (the ownerless variant) to be preemptible on -rt. Convert it to raw. In mainline this change documents the low level nature of the lock - otherwise there's no functional difference. Lockdep and Sparse checking will work as usual. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
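The member change in question, sketched against the then-current structure:

    struct rw_semaphore {
            long                    count;
            raw_spinlock_t          wait_lock;      /* was: spinlock_t */
            struct list_head        wait_list;
    #ifdef CONFIG_DEBUG_LOCK_ALLOC
            struct lockdep_map      dep_map;
    #endif
    };

    /* internal lock sites change accordingly, e.g.
     * spin_lock_irq(&sem->wait_lock) -> raw_spin_lock_irq(&sem->wait_lock) */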
2011-07-26  atomic: use <linux/atomic.h>  (Arun Sharma, 1 file, -1/+1)
This allows us to move duplicated code in <asm/atomic.h> (atomic_inc_not_zero() for now) to <linux/atomic.h> Signed-off-by: Arun Sharma <[email protected]> Reviewed-by: Eric Dumazet <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: David Miller <[email protected]> Cc: Eric Dumazet <[email protected]> Acked-by: Mike Frysinger <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2011-07-20  rw_semaphore: remove up/down_read_non_owner  (Christoph Hellwig, 1 file, -10/+0)
Now that the last user is gone, these can be removed. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-01-27  rwsem: Remove redundant asmregparm annotation  (Thomas Gleixner, 1 file, -4/+4)
Peter Zijlstra pointed out, that the only user of asmregparm (x86) is compiling the kernel already with -mregparm=3. So the annotation of the rwsem functions is redundant. Remove it. Signed-off-by: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: David Howells <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Matt Turner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Paul Mundt <[email protected]> Cc: David Miller <[email protected]> Cc: Chris Zankel <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]>
2011-01-27  rwsem: Move duplicate function prototypes to linux/rwsem.h  (Thomas Gleixner, 1 file, -0/+5)
All architecture specific rwsem headers carry the same function prototypes. Just x86 adds asmregparm, which is an empty define on all other architectures. S390 has a stale rwsem_downgrade_write() prototype. Remove the duplicates and add the prototypes to linux/rwsem.h Signed-off-by: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: David Howells <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Richard Henderson <[email protected]> Acked-by: Tony Luck <[email protected]> Acked-by: Heiko Carstens <[email protected]> Cc: Paul Mundt <[email protected]> Acked-by: David Miller <[email protected]> Cc: Chris Zankel <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]>
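The four slow-path entry points every implementation provides, now declared once in linux/rwsem.h (shown after the asmregparm removal listed above):

    extern struct rw_semaphore *rwsem_down_read_failed(struct rw_semaphore *sem);
    extern struct rw_semaphore *rwsem_down_write_failed(struct rw_semaphore *sem);
    extern struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem);
    extern struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem);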
2011-01-27  rwsem: Unify the duplicate rwsem_is_locked() inlines  (Thomas Gleixner, 1 file, -0/+7)
Instead of having the same implementation in each architecture, move it to linux/rwsem.h and remove the duplicates. It's unlikely that an arch will ever implement something different, but we can deal with that when it happens. Signed-off-by: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: David Howells <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Matt Turner <[email protected]> Acked-by: Tony Luck <[email protected]> Acked-by: Heiko Carstens <[email protected]> Cc: Paul Mundt <[email protected]> Acked-by: David Miller <[email protected]> Cc: Chris Zankel <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]>
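The unified inline is the obvious one, given a signed count where zero means unlocked:

    static inline int rwsem_is_locked(struct rw_semaphore *sem)
    {
            return sem->count != 0;         /* 0 iff nobody holds the sem */
    }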
2011-01-27  rwsem: Move duplicate init macros and functions to linux/rwsem.h  (Thomas Gleixner, 1 file, -0/+25)
The rwsem initializers and related macros and functions are mostly the same. Some of them lack the lockdep initializer, but having it in place does not matter for architectures which do not support lockdep.
  powerpc, sparc, x86: No functional change
  sh, s390: Removes the duplicate init_rwsem (inline and #define)
  alpha, ia64, xtensa: Use the lockdep capable init function in lib/rwsem.c, which is just uninlining the init function for the LOCKDEP=n case
Signed-off-by: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: David Howells <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Matt Turner <[email protected]> Acked-by: Tony Luck <[email protected]> Acked-by: Heiko Carstens <[email protected]> Cc: Paul Mundt <[email protected]> Acked-by: David Miller <[email protected]> Cc: Chris Zankel <[email protected]> LKML-Reference: <[email protected]>
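A sketch of the consolidated initializer (the dep-map part compiles away without lockdep; this predates the later raw_spinlock_t conversion):

    #ifdef CONFIG_DEBUG_LOCK_ALLOC
    # define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname }
    #else
    # define __RWSEM_DEP_MAP_INIT(lockname)
    #endif

    #define __RWSEM_INITIALIZER(name)                               \
            { RWSEM_UNLOCKED_VALUE,                                 \
              __SPIN_LOCK_UNLOCKED(name.wait_lock),                 \
              LIST_HEAD_INIT((name).wait_list)                      \
              __RWSEM_DEP_MAP_INIT(name) }

    #define DECLARE_RWSEM(name) \
            struct rw_semaphore name = __RWSEM_INITIALIZER(name)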
2011-01-27  rwsem: Move duplicate struct rwsem declaration to linux/rwsem.h  (Thomas Gleixner, 1 file, -1/+12)
The difference between these declarations is the data type of the count member and the lack of lockdep in some architectures. long is equivalent to signed long, and the #ifdef-guarded dep_map member does not hurt anyone. Signed-off-by: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: David Howells <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Matt Turner <[email protected]> Acked-by: Tony Luck <[email protected]> Acked-by: Heiko Carstens <[email protected]> Cc: Paul Mundt <[email protected]> Acked-by: David Miller <[email protected]> Cc: Chris Zankel <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]>
2011-01-27  rwsem: Cleanup includes  (Thomas Gleixner, 1 file, -0/+3)
All rwsem implementations include the same headers. Include them from include/linux/rwsem.h Signed-off-by: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: David Howells <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Matt Turner <[email protected]> Acked-by: Tony Luck <[email protected]> Acked-by: Heiko Carstens <[email protected]> Cc: Paul Mundt <[email protected]> Acked-by: David Miller <[email protected]> Cc: Chris Zankel <[email protected]> LKML-Reference: <[email protected]>
2008-04-30  Remove "#ifdef __KERNEL__" checks from unexported headers  (Robert P. J. Day, 1 file, -3/+0)
Remove the "#ifdef __KERNEL__" tests from unexported header files in linux/include whose entire contents are wrapped in that preprocessor test. Signed-off-by: Robert P. J. Day <[email protected]> Cc: David Woodhouse <[email protected]> Cc: Sam Ravnborg <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-07-10  [PATCH] lockdep: add more rwsem.h documentation  (Ingo Molnar, 1 file, -2/+15)
Add more documentation to rwsem.h. Signed-off-by: Ingo Molnar <[email protected]> Cc: David Howells <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-07-03  [PATCH] lockdep: prove rwsem locking correctness  (Ingo Molnar, 1 file, -34/+25)
Use the lock validator framework to prove rwsem locking correctness. Signed-off-by: Ingo Molnar <[email protected]> Signed-off-by: Arjan van de Ven <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-07-03  [PATCH] lockdep: clean up rwsems  (Ingo Molnar, 1 file, -24/+0)
Clean up rwsems. Signed-off-by: Ingo Molnar <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2006-04-26  Don't include linux/config.h from anywhere else in include/  (David Woodhouse, 1 file, -1/+0)
Signed-off-by: David Woodhouse <[email protected]>
2005-04-16  Linux-2.6.12-rc2  (Linus Torvalds, 1 file, -0/+115)
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!