Age  Commit message  (Author, Files, Lines)
2015-07-01  fuse: separate pqueue for clones  (Miklos Szeredi, 3 files, -30/+45)
Make each fuse device clone refer to a separate processing queue. The only constraint on userspace code is that the request answer must be written to the same device clone as it was read off. Signed-off-by: Miklos Szeredi <[email protected]>
2015-07-01  fuse: introduce per-instance fuse_dev structure  (Miklos Szeredi, 4 files, -34/+114)
Allow fuse device clones to be distinguished from one another. This patch just adds the infrastructure by associating a separate "struct fuse_dev" with each clone. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: device fd clone  (Miklos Szeredi, 3 files, -0/+44)
Allow an open fuse device to be "cloned". Userspace can create a clone by: newfd = open("/dev/fuse", O_RDWR) ioctl(newfd, FUSE_DEV_IOC_CLONE, &oldfd); At this point newfd will refer to the same fuse connection as oldfd. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: abort: no fc->lock needed for request ending  (Miklos Szeredi, 1 file, -9/+5)
In fuse_abort_conn() when all requests are on private lists we no longer need fc->lock protection. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: no fc->lock for pqueue parts  (Miklos Szeredi, 1 file, -14/+2)
Remove fc->lock protection from processing queue members, now protected by fpq->lock. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: no fc->lock in request_end()  (Miklos Szeredi, 1 file, -7/+8)
No longer need to call request_end() with the connection lock held. We still protect the background counters and queue with fc->lock, so acquire it if necessary. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: cleanup request_end()  (Miklos Szeredi, 1 file, -4/+2)
Now that we atomically test whether the request has already been ended, no other protection is needed. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: request_end(): do once  (Miklos Szeredi, 1 file, -2/+6)
When the connection is aborted it is possible that request_end() will be called twice. Use atomic test and set to do the actual ending only once. test_and_set_bit() also provides the necessary barrier semantics so no explicit smp_wmb() is necessary. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: add req flag for private list  (Miklos Szeredi, 2 files, -3/+9)
When an unlocked request is aborted, it is moved from fpq->io to a private list. Then, after unlocking fpq->lock, the private list is processed and the requests are finished off. To protect the private list, we need to mark the request with a flag, so if in the meantime the request is unlocked the list is not corrupted. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: pqueue locking  (Miklos Szeredi, 3 files, -2/+21)
Add a fpq->lock for protecting members of struct fuse_pqueue and FR_LOCKED request flag. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: abort: group pqueue accesses  (Miklos Szeredi, 1 file, -1/+1)
Rearrange fuse_abort_conn() so that processing queue accesses are grouped together. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: cleanup fuse_dev_do_read()  (Miklos Szeredi, 1 file, -20/+20)
- locked list_add() + list_del_init() cancel out
- common handling of case when request is ended here in the read phase

Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: move list_del_init() from request_end() into callers  (Miklos Szeredi, 1 file, -1/+7)
Signed-off-by: Miklos Szeredi <[email protected]>
2015-07-01  fuse: duplicate ->connected in pqueue  (Miklos Szeredi, 3 files, -3/+8)
This will allow checking ->connected just with the processing queue lock. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: separate out processing queue  (Miklos Szeredi, 3 files, -16/+30)
This is just two fields: fc->io and fc->processing. This patch just rearranges the fields, no functional change. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: simplify request_wait()  (Miklos Szeredi, 1 file, -25/+5)
wait_event_interruptible_exclusive_locked() will do everything request_wait() does, so replace it. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: no fc->lock for iqueue parts  (Miklos Szeredi, 1 file, -51/+20)
Remove fc->lock protection from input queue members, now protected by fiq->waitq.lock. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: allow interrupt queuing without fc->lock  (Miklos Szeredi, 1 file, -3/+9)
Interrupt is only queued after the request has been sent to userspace. This is done either in request_wait_answer() or in fuse_dev_do_read(), depending on which state the request is in at the time of the interrupt. If it's not yet sent, then queuing the interrupt is postponed until the request is read. Otherwise (the request has already been read and is waiting for an answer) the interrupt is queued immediately. We want to call queue_interrupt() without fc->lock protection, in which case there can be a race between the two functions:

- neither of them queues the interrupt (each thinking the other one has already done it)
- both of them queue the interrupt

The first is prevented by adding memory barriers, the second is prevented by checking (under fiq->waitq.lock) if the interrupt has already been queued. Signed-off-by: Miklos Szeredi <[email protected]>
2015-07-01  fuse: iqueue locking  (Miklos Szeredi, 1 file, -6/+45)
Use fiq->waitq.lock for protecting members of struct fuse_iqueue and FR_PENDING request flag, previously protected by fc->lock. Following patches will remove fc->lock protection from these members. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: dev read: split list_move  (Miklos Szeredi, 1 file, -1/+2)
Different lists will need different locks. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: abort: group iqueue accesses  (Miklos Szeredi, 1 file, -5/+7)
Rearrange fuse_abort_conn() so that input queue accesses are grouped together. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: duplicate ->connected in iqueue  (Miklos Szeredi, 4 files, -11/+15)
This will allow checking ->connected just with the input queue lock. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: separate out input queue  (Miklos Szeredi, 3 files, -84/+111)
The input queue contains normal requests (fc->pending), forgets (fc->forget_*) and interrupts (fc->interrupts). There's also fc->waitq and fc->fasync for waking up the readers of the fuse device when a request is available. The fc->reqctr is also moved to the input queue (assigned to the request when the request is added to the input queue). This patch just rearranges the fields, no functional change. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: req state use flags  (Miklos Szeredi, 3 files, -21/+21)
Use flags for representing the state in fuse_req. This is needed since req->list will be protected by different locks in different states, hence we'll want the state itself to be split into distinct bits, each protected with the relevant lock in that state. Signed-off-by: Miklos Szeredi <[email protected]>
2015-07-01  fuse: simplify req states  (Miklos Szeredi, 3 files, -9/+5)
FUSE_REQ_INIT is actually the same state as FUSE_REQ_PENDING, and FUSE_REQ_READING and FUSE_REQ_WRITING can be merged into a common FUSE_REQ_IO state. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: don't hold lock over request_wait_answer()  (Miklos Szeredi, 1 file, -25/+20)
Only hold fc->lock over sections of request_wait_answer() that actually need it. If wait_event_interruptible() returns zero, it means that the request finished. Need to add memory barriers, though, to make sure that all relevant data in the request is synchronized. Signed-off-by: Miklos Szeredi <[email protected]>
2015-07-01  fuse: simplify unique ctr  (Miklos Szeredi, 2 files, -7/+1)
Since it's a 64-bit counter, it's never going to wrap around. Remove the code dealing with that possibility. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: rework abort  (Miklos Szeredi, 1 file, -11/+10)
Splice fc->pending and fc->processing lists into a common kill list while holding fc->lock. By the time we release fc->lock, pending and processing lists are empty and the io list contains only locked requests. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: fold helpers into abort  (Miklos Szeredi, 1 file, -55/+38)
Fold end_io_requests() and end_queued_requests() into fuse_abort_conn(). Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: use per req lock for lock/unlock_request()  (Miklos Szeredi, 2 files, -22/+24)
Reuse req->waitq.lock for protecting FR_ABORTED and FR_LOCKED flags. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: req use bitops  (Miklos Szeredi, 4 files, -72/+71)
Finer grained locking will mean there's no single lock to protect modification of bitfields in fuse_req. So move to using bitops. Can use the non-atomic variants for those which happen while the request definitely has only one reference. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: simplify request abort  (Miklos Szeredi, 1 file, -73/+46)
- don't end the request while req->locked is true
- make unlock_request() return an error if the connection was aborted

Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: call fuse_abort_conn() in dev release  (Miklos Szeredi, 1 file, -8/+3)
fuse_abort_conn() does all the work done by fuse_dev_release() and more. "More" consists of: end_io_requests(fc); wake_up_all(&fc->waitq); kill_fasync(&fc->fasync, SIGIO, POLL_IN); All of which should be no-ops (WARN_ONs added). Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: fold fuse_request_send_nowait() into single caller  (Miklos Szeredi, 1 file, -22/+10)
And the same with fuse_request_send_nowait_locked(). Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: check conn_error earlier  (Miklos Szeredi, 1 file, -2/+4)
fc->conn_error is set once in FUSE_INIT reply and never cleared. Check it in request allocation, there's no sense in doing all the preparation if sending will surely fail. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: account as waiting before queuing for background  (Miklos Szeredi, 1 file, -4/+8)
Move accounting of fc->num_waiting to the point where the request actually starts waiting. This is earlier than the current queue_request() for background requests, since they might be waiting on the fc->bg_queue before being queued on fc->pending. Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: reset waiting  (Miklos Szeredi, 1 file, -1/+3)
Reset req->waiting in fuse_put_request(). This is needed for correct accounting in fc->num_waiting for reserved requests. Signed-off-by: Miklos Szeredi <[email protected]>
2015-07-01  fuse: fix background request if not connected  (Miklos Szeredi, 1 file, -1/+4)
request_end() expects fc->num_background and fc->active_background to have been incremented, which is not the case in fuse_request_send_nowait() failure path. So instead just call the ->end() callback (which is actually set by all callers). Signed-off-by: Miklos Szeredi <[email protected]> Reviewed-by: Ashish Samant <[email protected]>
2015-07-01  fuse: initialize fc->release before calling it  (Miklos Szeredi, 1 file, -1/+1)
fc->release is called from fuse_conn_put() which was used in the error cleanup before fc->release was initialized. [Jeremiah Mahler <[email protected]>: assign fc->release after calling fuse_conn_init(fc) instead of before.] Signed-off-by: Miklos Szeredi <[email protected]> Fixes: a325f9b92273 ("fuse: update fuse_conn_init() and separate out fuse_conn_kill()") Cc: <[email protected]> #v2.6.31+
2015-07-01  arm64: Don't report clear pmds and puds as huge  (Christoffer Dall, 1 file, -2/+2)
The current pmd_huge() and pud_huge() functions simply check if the table bit is not set and reports the entries as huge in that case. This is counter-intuitive as a clear pmd/pud cannot also be a huge pmd/pud, and it is inconsistent with at least arm and x86. To prevent others from making the same mistake as me in looking at code that calls these functions and to fix an issue with KVM on arm64 that causes memory corruption due to incorrect page reference counting resulting from this mistake, let's change the behavior. Signed-off-by: Christoffer Dall <[email protected]> Reviewed-by: Steve Capper <[email protected]> Acked-by: Marc Zyngier <[email protected]> Fixes: 084bd29810a5 ("ARM64: mm: HugeTLB support.") Cc: <[email protected]> # 3.11+ Signed-off-by: Catalin Marinas <[email protected]>
2015-07-01  hwspinlock: qcom: Correct msb in regmap_field  (Bjorn Andersson, 1 file, -1/+1)
msb of the regmap_field was mistakenly given the value 32, to set all bits in the regmap update mask; although incorrect this worked until 921cc294, where the mask calculation was corrected. Signed-off-by: Bjorn Andersson <[email protected]> Signed-off-by: Ohad Ben-Cohen <[email protected]>
2015-07-01  printk: Increase maximum CONFIG_LOG_BUF_SHIFT from 21 to 25  (Ingo Molnar, 1 file, -1/+1)
So I tried to do some kernel debugging that produced a ton of kernel messages on a big box, and wanted to save them all: but CONFIG_LOG_BUF_SHIFT maxes out at 21 (2 MB). Increase it to 25 (32 MB). This does not affect any existing config or defaults. Cc: Linus Torvalds <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-07-01  time: Remove development rules from Kbuild/Makefile  (Thomas Gleixner, 2 files, -3/+0)
time.o gets rebuilt unconditionally due to a leftover Makefile rule which was placed there for development purposes. Remove it along with the commented-out always rule in the toplevel Kbuild file. Fixes: 0a227985d4a9 'time: Move timeconst.h into include/generated' Reported-by: Stephen Boyd <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: Nicholas Mc Guire <[email protected]>
2015-07-01  iommu/amd: Introduce protection_domain_init() function  (Joerg Roedel, 1 file, -10/+16)
This function contains the common parts between the initialization of dma_ops_domains and usual protection domains. This also fixes a long-standing bug which was uncovered by recent changes, in which the api_lock was not initialized for dma_ops_domains. Reported-by: George Wang <[email protected]> Tested-by: George Wang <[email protected]> Signed-off-by: Joerg Roedel <[email protected]>
2015-07-01  fs/file.c: __fget() and dup2() atomicity rules  (Eric Dumazet, 1 file, -2/+8)
__fget() does lockless fetch of pointer from the descriptor table, attempts to grab a reference and treats "it was already zero" as "it's already gone from the table, we just hadn't seen the store, let's fail". Unfortunately, that breaks the atomicity of dup2() - __fget() might see the old pointer, notice that it's been already dropped and treat that as "it's closed". What we should be getting is either the old file or new one, depending whether we come before or after dup2(). Dmitry had the following test failing sometimes:

int fd;

void *Thread(void *x)
{
	char buf;
	int n = read(fd, &buf, 1);
	if (n != 1)
		exit(printf("read failed: n=%d errno=%d\n", n, errno));
	return 0;
}

int main()
{
	fd = open("/dev/urandom", O_RDONLY);
	int fd2 = open("/dev/urandom", O_RDONLY);
	if (fd == -1 || fd2 == -1)
		exit(printf("open failed\n"));
	pthread_t th;
	pthread_create(&th, 0, Thread, 0);
	if (dup2(fd2, fd) == -1)
		exit(printf("dup2 failed\n"));
	pthread_join(th, 0);
	if (close(fd) == -1)
		exit(printf("close failed\n"));
	if (close(fd2) == -1)
		exit(printf("close failed\n"));
	printf("DONE\n");
	return 0;
}

Signed-off-by: Eric Dumazet <[email protected]> Reported-by: Dmitry Vyukov <[email protected]> Signed-off-by: Al Viro <[email protected]>
2015-07-01  fs/file.c: don't acquire files->file_lock in fd_install()  (Eric Dumazet, 3 files, -19/+55)
Mateusz Guzik reported: Currently obtaining a new file descriptor results in locking fdtable twice - once in order to reserve a slot and a second time to fill it. Holding the spinlock in __fd_install() is needed in case a resize is done, or to prevent a resize. Mateusz provided an RFC patch and a micro benchmark: http://people.redhat.com/~mguzik/pipebench.c A resize is an unlikely operation in a process lifetime, as table size is at least doubled at every resize. We can use RCU instead of the spinlock. __fd_install() must wait if a resize is in progress. The resize must block new __fd_install() callers from starting, and wait until ongoing installs are finished (synchronize_sched()). A resize should be attempted by a single thread so as not to waste resources. The rcu_sched variant is used, as __fd_install() and expand_fdtable() run from process context. It gives us a ~30% speedup using pipebench on a dual Intel(R) Xeon(R) CPU E5-2696 v2 @ 2.50GHz. Signed-off-by: Eric Dumazet <[email protected]> Reported-by: Mateusz Guzik <[email protected]> Acked-by: Mateusz Guzik <[email protected]> Tested-by: Mateusz Guzik <[email protected]> Signed-off-by: Al Viro <[email protected]>
2015-07-01  fs:super:get_anon_bdev: fix race condition that could cause dev to exceed its upper limitation  (Wang YanQing, 1 file, -1/+1)
Concurrent execution of get_anon_bdev(), or preemption in the kernel, can produce a race condition: it isn't enough to check dev against its upper limitation with the equality operator only. This patch fixes it. Signed-off-by: Wang YanQing <[email protected]> Signed-off-by: Al Viro <[email protected]>
2015-06-30  Merge git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile  (Linus Torvalds, 27 files, -254/+402)
Pull arch/tile updates from Chris Metcalf: "These are a grab bag of changes to improve debugging and respond to a variety of issues raised on LKML over the last couple of months"

* git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
  tile: avoid a "label not used" warning in do_page_fault()
  tile: vdso: use raw_read_seqcount_begin() in vdso
  tile: force CONFIG_TILEGX if ARCH != tilepro
  tile: improve stack backtrace
  tile: fix "odd fault" warning for stack backtraces
  tile: set up initial stack top to honor STACK_TOP_DELTA
  tile: support delivering NMIs for multicore backtrace
  drivers/tty/hvc/hvc_tile.c: properly return -EAGAIN
  tile: add <asm/word-at-a-time.h> and enable support functions
  tile: use READ_ONCE() in arch_spin_is_locked()
  tile: modify arch_spin_unlock_wait() semantics
2015-06-30  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux  (Linus Torvalds, 19 files, -357/+401)
Pull more s390 updates from Martin Schwidefsky: "There is one larger patch for the AP bus code to make it work with the longer reset periods of the latest crypto cards. A new default configuration, a naming cleanup for SMP and a few fixes"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/kdump: fix compile for !SMP
  s390/kdump: fix nosmt kernel parameter
  s390: new default configuration
  s390/smp: cleanup core vs. cpu in the SCLP interface
  s390/smp: fix sigp cpu detection loop
  s390/zcrypt: Fixed reset and interrupt handling of AP queues
  s390/kdump: fix REGSET_VX_LOW vector register ELF notes
  s390/bpf: Fix backward jumps
2015-06-30  Merge branch 'for-next' of git://git.samba.org/sfrench/cifs-2.6  (Linus Torvalds, 10 files, -18/+392)
Pull CIFS/SMB3 updates from Steve French: "Includes two bug fixes, as well as (minimal) support for the new protocol dialect (SMB3.1.1), and support for two ioctls including reflink (duplicate extents) file copy and set integrity"

* 'for-next' of git://git.samba.org/sfrench/cifs-2.6:
  cifs: Unset CIFS_MOUNT_POSIX_PATHS flag when following dfs mounts
  Update negotiate protocol for SMB3.11 dialect
  Add ioctl to set integrity
  Add Get/Set Integrity Information structure definitions
  Add reflink copy over SMB3.11 with new FSCTL_DUPLICATE_EXTENTS
  Add SMB3.11 mount option synonym for new dialect
  add struct FILE_STANDARD_INFO
  Make dialect negotiation warning message easier to read
  Add defines and structs for smb3.1 dialect
  Allow parsing vers=3.11 on cifs mount
  client MUST ignore EncryptionKeyLength if CAP_EXTENDED_SECURITY is set