2016-01-21  libceph: fix authorizer invalidation, take 2  (Ilya Dryomov; 2 files, -5/+23)

Back in 2013, commit 4b8e8b5d78b8 ("libceph: fix authorizer invalidation") tried to fix authorizer invalidation issues by clearing the validity field. However, nothing ever consults this field, so it doesn't force us to request any new secrets in any way, and therefore we never get out of the exponential backoff mode:

    [ 129.973812] libceph: osd2 192.168.122.1:6810 connect authorization failure
    [ 130.706785] libceph: osd2 192.168.122.1:6810 connect authorization failure
    [ 131.710088] libceph: osd2 192.168.122.1:6810 connect authorization failure
    [ 133.708321] libceph: osd2 192.168.122.1:6810 connect authorization failure
    [ 137.706598] libceph: osd2 192.168.122.1:6810 connect authorization failure
    ...

AFAICT this was the case at the time 4b8e8b5d78b8 was merged, too.

Using a timespec solely as a bool isn't nice, so introduce a new have_key flag, specifically for this purpose.

Signed-off-by: Ilya Dryomov <[email protected]>
Reviewed-by: Sage Weil <[email protected]>

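The shape of the fix, in sketch form (structure and helper names here are illustrative, not the actual libceph code):

    /* Track key validity with an explicit flag instead of abusing a
     * timespec as a boolean. */
    struct auth_sketch {
        bool have_key;                  /* a usable secret is held */
    };

    static void invalidate_authorizer(struct auth_sketch *auth)
    {
        auth->have_key = false;         /* consulted on the next attempt */
    }

    static bool need_new_secret(const struct auth_sketch *auth)
    {
        return !auth->have_key;         /* forces a fresh key request */
    }
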
2016-01-21  libceph: clear messenger auth_retry flag if we fault  (Ilya Dryomov; 1 file, -3/+7)

Commit 20e55c4cc758 ("libceph: clear messenger auth_retry flag when we authenticate") got us only half way there. We clear the flag if the second attempt succeeds, but it also needs to be cleared if that attempt fails, to allow for the exponential backoff to kick in. Otherwise, if ->should_authenticate() thinks our keys are valid, we will busy loop, incrementing auth_retry to no avail:

    process_connect ffff880079a63830 got BADAUTHORIZER attempt 1
    process_connect ffff880079a63830 got BADAUTHORIZER attempt 2
    process_connect ffff880079a63830 got BADAUTHORIZER attempt 3
    process_connect ffff880079a63830 got BADAUTHORIZER attempt 4
    process_connect ffff880079a63830 got BADAUTHORIZER attempt 5
    ...

Signed-off-by: Ilya Dryomov <[email protected]>
Reviewed-by: Sage Weil <[email protected]>

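In sketch form (placeholder type, not the actual struct ceph_connection), the fix clears the counter on the fault path too:

    struct con_sketch {
        int auth_retry;                 /* BADAUTHORIZER retry count */
    };

    static void con_fault(struct con_sketch *con)
    {
        con->auth_retry = 0;            /* previously cleared only when
                                         * the retry succeeded */
        /* ... queue delayed reconnect, doubling the backoff ... */
    }
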
2016-01-21  libceph: fix ceph_msg_revoke()  (Ilya Dryomov; 2 files, -19/+59)

There are a number of problems with revoking a "was sending" message:

(1) We never make any attempt to revoke data - only kvecs contribute to con->out_skip. However, once the header (envelope) is written to the socket, our peer learns data_len and sets itself to expect at least data_len bytes to follow front or front+middle. If ceph_msg_revoke() is called while the messenger is sending the message's data portion, anything we send after that call is counted by the OSD towards the now revoked message's data portion. The effects vary; the most common one is the eventual hang - higher layers get stuck waiting for the reply to the message that was sent out after ceph_msg_revoke() returned and treated by the OSD as a bunch of data bytes. This is what Matt ran into.

(2) Flat out zeroing con->out_kvec_bytes worth of bytes to handle kvecs is wrong. If ceph_msg_revoke() is called before the tag is sent out or while the messenger is sending the header, we will get a connection reset, either due to a bad tag (0 is not a valid tag) or a bad header CRC, which kind of defeats the purpose of revoke. Currently the kernel client refuses to work with header CRCs disabled, but that will likely change in the future, making this even worse.

(3) con->out_skip is not reset on connection reset, leading to one or more spurious connection resets if we happen to get a real one between the time con->out_skip is set in ceph_msg_revoke() and the time it's cleared in write_partial_skip().

Fixing (1) and (3) is trivial. The idea behind fixing (2) is to never zero the tag or the header, i.e. send out tag+header regardless of when ceph_msg_revoke() is called. That way the header is always correct, no unnecessary resets are induced and revoke stands ready for disabled CRCs. Since ceph_msg_revoke() rips out con->out_msg, introduce a new "message out temp" and copy the header into it before sending.

Cc: [email protected] # 4.0+
Reported-by: Matt Conner <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
Tested-by: Matt Conner <[email protected]>
Reviewed-by: Sage Weil <[email protected]>

2016-01-21  libceph: use list_for_each_entry_safe  (Geliang Tang; 1 file, -9/+3)

Use list_for_each_entry_safe() instead of list_for_each_safe() to simplify the code.

Signed-off-by: Geliang Tang <[email protected]>
[[email protected]: nuke call to list_splice_init() as well]
Signed-off-by: Ilya Dryomov <[email protected]>

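The general shape of the conversion (struct foo and its node member are placeholders):

    #include <linux/list.h>
    #include <linux/slab.h>

    struct foo {
        struct list_head node;
    };
    LIST_HEAD(head);

    /* Before: list_for_each_safe() plus a hand-rolled list_entry(). */
    struct list_head *pos, *n;
    list_for_each_safe(pos, n, &head) {
        struct foo *f = list_entry(pos, struct foo, node);
        list_del(&f->node);
        kfree(f);
    }

    /* After: list_for_each_entry_safe() does the entry lookup itself. */
    struct foo *f, *tmp;
    list_for_each_entry_safe(f, tmp, &head, node) {
        list_del(&f->node);
        kfree(f);
    }
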
2016-01-21  ceph: use i_size_{read,write} to get/set i_size  (Yan, Zheng; 4 files, -26/+25)

A cap message from the MDS can update i_size. In that case, we don't hold i_mutex. So it's unsafe to directly access inode->i_size, even while holding i_mutex.

Signed-off-by: Yan, Zheng <[email protected]>

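The VFS accessors in question:

    loff_t size = i_size_read(inode);   /* safe against a concurrent
                                         * update (seqcount-protected
                                         * on 32-bit SMP) */

    i_size_write(inode, new_size);      /* writer side; the caller is
                                         * expected to serialize writers */
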
2016-01-21  ceph: re-send AIO write request when getting -EOLDSNAP error  (Yan, Zheng; 1 file, -4/+86)

When receiving -EOLDSNAP from the OSD, we need to re-send the corresponding write request. Due to a locking issue, we cannot send a new request inside another OSD request's complete callback. So we use a worker to re-send the request for AIO writes.

Signed-off-by: Yan, Zheng <[email protected]>

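A sketch of the deferral described above (all names hypothetical, not the actual fs/ceph code):

    /* The OSD completion callback cannot issue a new OSD request
     * directly because of lock nesting, so hand the re-send off to a
     * workqueue and do it from process context. */
    static void aio_write_done(struct req_sketch *req, int err)
    {
        if (err == -EOLDSNAP) {         /* the error named above */
            queue_work(resend_wq, &req->resend_work);
            return;
        }
        complete_request(req, err);
    }
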
2016-01-21  ceph: Asynchronous IO support  (Yan, Zheng; 1 file, -119/+278)

The basic idea of AIO support is simple: just call kiocb::ki_complete() in the OSD request's complete callback. But there are several special cases.

When an IO spans multiple objects, we need to wait until all OSD requests are complete, then call kiocb::ki_complete(). Error handling in this case is tricky too. To keep things simple, AIO that both spans multiple objects and extends i_size is not allowed.

Another special case is checking EOF for reads (another client can write to the file and extend i_size concurrently). To keep things simple, the direct-IO/AIO code path does the check and falls back to a normal sync read instead.

Signed-off-by: Yan, Zheng <[email protected]>

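A sketch of the multi-object completion accounting (hypothetical structure; the real code is in fs/ceph/file.c):

    struct aio_sketch {
        struct kiocb *iocb;
        atomic_t pending;               /* outstanding OSD sub-requests */
        int error;
        long bytes;
    };

    static void subreq_done(struct aio_sketch *aio, int err, long bytes)
    {
        if (err)
            aio->error = err;           /* remember any sub-request error */
        else
            aio->bytes += bytes;        /* sketch only; the real code must
                                         * make this race-free */
        if (atomic_dec_and_test(&aio->pending))
            aio->iocb->ki_complete(aio->iocb,
                                   aio->error ?: aio->bytes, 0);
    }
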
2016-01-21  ceph: Avoid to propagate the invalid page point  (Minfei Huang; 1 file, -1/+0)

The variable pagep will still hold an invalid page pointer when ceph fails in ceph_update_writeable_page(). To fix this issue, assign the page to pagep only once ceph_update_writeable_page() has succeeded.

Signed-off-by: Minfei Huang <[email protected]>
Signed-off-by: Yan, Zheng <[email protected]>

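The shape of the fix, simplified (the surrounding write_begin logic is elided):

    page = grab_cache_page_write_begin(mapping, index, flags);
    r = ceph_update_writeable_page(file, pos, len, page);
    if (r == 0)
        *pagep = page;  /* publish only on success; previously the
                         * assignment happened before the call */
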
2016-01-21  ceph: fix double page_unlock() in page_mkwrite()  (Yan, Zheng; 1 file, -4/+4)

ceph_update_writeable_page() unlocks the page on errors, so page_mkwrite() should not unlock the page again.

Signed-off-by: Yan, Zheng <[email protected]>

2016-01-21  rbd: delete an unnecessary check before rbd_dev_destroy()  (Markus Elfring; 1 file, -2/+1)

The rbd_dev_destroy() function tests whether its argument is NULL and then returns immediately. Thus the test around the call is not needed.

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>

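The pattern, before and after:

    /* Before: a redundant NULL test around the call. */
    if (rbd_dev)
        rbd_dev_destroy(rbd_dev);

    /* After: rbd_dev_destroy() already returns early on NULL. */
    rbd_dev_destroy(rbd_dev);
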
2016-01-21  libceph: use list_next_entry instead of list_entry_next  (Geliang Tang; 1 file, -5/+2)

list_next_entry has been defined in list.h, so I replace list_entry_next with it.

Signed-off-by: Geliang Tang <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>

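The helper, as defined in include/linux/list.h:

    #define list_next_entry(pos, member) \
        list_entry((pos)->member.next, typeof(*(pos)), member)
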
2016-01-21  ceph: ceph_frag_contains_value can be boolean  (Yaowei Bai; 1 file, -1/+1)

This patch makes ceph_frag_contains_value() return bool to improve readability, since the function only ever returns one or zero. No functional change.

Signed-off-by: Yaowei Bai <[email protected]>
Signed-off-by: Yan, Zheng <[email protected]>

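The change amounts to flipping the return type; the body (shown here as a sketch of include/linux/ceph/ceph_frag.h) stays the same:

    static inline bool ceph_frag_contains_value(__u32 f, __u32 v)
    {
        return (v & ceph_frag_mask(f)) == ceph_frag_value(f);
    }
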
2016-01-21  ceph: remove unused functions in ceph_frag.h  (Yaowei Bai; 1 file, -35/+0)

These functions were introduced in commit 3d14c5d2b ("ceph: factor out libceph from Ceph file system"). However, there's been no user of these functions since then, so remove them for simplicity.

Signed-off-by: Yaowei Bai <[email protected]>
Signed-off-by: Yan, Zheng <[email protected]>

2016-01-21  btrfs: synchronize incompat feature bits with sysfs files  (David Sterba; 4 files, -0/+17)

The files under /sys/fs/UUID/features get out of sync with the actual incompat bits set for the filesystem if they change after mount (eg. the LZO compression). Synchronize the feature bits with the sysfs files representing them right after we set/clear them.

Signed-off-by: David Sterba <[email protected]>

2016-01-21  perf: Synchronously free aux pages in case of allocation failure  (Alexander Shishkin; 1 file, -20/+20)

We are currently using asynchronous deallocation in the error path in the AUX mmap code, which is unnecessary and also presents a problem for users that wish to probe for the biggest possible buffer size they can get: they'll get -EINVAL on all subsequent attempts to allocate a smaller buffer before the asynchronous deallocation callback frees up the pages from the previous unsuccessful attempt. Currently, gdb does that for allocating AUX buffers for Intel PT traces.

More specifically, overwrite mode of AUX pmus that don't support hardware sg (some implementations of Intel PT, for instance) is limited to only one contiguous high order allocation for its buffer and there is no way of knowing its size without trying.

This patch changes error path freeing to be synchronous as there won't be any contenders for the AUX pages at that point.

Reported-by: Markus Metzger <[email protected]>
Signed-off-by: Alexander Shishkin <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/1453216469-9509-1-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf/x86: add Intel SkyLake uncore IMC PMU support  (Stephane Eranian; 3 files, -0/+24)

This patch enables the uncore_imc PMU for Intel SkyLake Desktop processors (Core i7-6700, model 94). It is possible to compute memory read/write bandwidth using:

    $ perf stat -a -e uncore_imc/data_reads/,uncore_imc/data_writes/ ....

Signed-off-by: Stephane Eranian <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf: Fix perf_event_exit_task() race  (Peter Zijlstra; 1 file, -66/+85)

There is a race between perf_event_exit_task() and event_function_call(), find_get_context(), perf_install_in_context() (iow, everyone). Since there is no permanent marker on a context that it's dead, it is quite possible that we access (and even modify) a context after it has passed through perf_event_exit_task().

For instance, find_get_context() might find the context still installed, but by the time we get to perf_install_in_context() it might already have passed through perf_event_exit_task() and be considered dead; we will however still add the event to it.

Solve this by marking a ctx dead by setting its ctx->task value to -1; it must be !0 so we still know it's a (former) task context.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

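The series marks dead contexts with a tombstone pointer along these lines (sketch of the mechanism, not the full patch):

    #define TASK_TOMBSTONE ((void *)-1L)    /* non-NULL, so the ctx is
                                             * still recognizably a
                                             * (former) task context */

    /* in perf_event_exit_task(), under ctx->lock: */
    ctx->task = TASK_TOMBSTONE;

    /* callers re-check before touching the context: */
    if (ctx->task == TASK_TOMBSTONE)
        return -ESRCH;                      /* context is dead, bail */
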
2016-01-21  perf: Add more assertions  (Peter Zijlstra; 1 file, -0/+9)

Try to trigger warnings before races do damage.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf: Collapse and fix event_function_call() users  (Peter Zijlstra; 3 files, -201/+168)

There is one common bug left in all the event_function_call() users: between loading ctx->task and getting to the remote_function(), ctx->task can already have been changed. Therefore we need to double check and retry if ctx->task != current.

Insert another trampoline specific to event_function_call() that checks for this and further validates state. This also allows getting rid of the active/inactive functions.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf: Specialize perf_event_exit_task()  (Peter Zijlstra; 1 file, -7/+11)

The perf_remove_from_context() usage in __perf_event_exit_task() is different from the other usages in that this site has already detached and scheduled out the task context.

This will stand in the way of stronger assertions checking the (task) context scheduling invariants.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf: Fix task context scheduling  (Peter Zijlstra; 1 file, -64/+91)

There is a very nasty problem wrt disabling the perf task scheduling hooks. Currently we {set,clear} ctx->is_active on every __perf_event_task_sched_{in,out}; _however_ this means that if we disable these calls we'll have task contexts with ->is_active set that are not active, and 'active' task contexts without ->is_active set.

This can result in event_function_call() looping on the ctx->is_active condition basically indefinitely.

Resolve this by changing things such that contexts without events do not set ->is_active like we used to. From this invariant it trivially follows that if there are no (task) events, every task ctx is inactive and disabling the context switch hooks is harmless.

This leaves two places that need attention (and had already accumulated weird and wonderful hacks to work around, without recognising this actual problem), namely:

 - perf_install_in_context() will need to deal with installing events in an inactive context, meaning it cannot rely on ctx->is_active for its IPIs.

 - perf_remove_from_context() will have to mark a context as inactive when it removes the last event.

For specific detail, see the patch/comments.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf: Make ctx->is_active and cpuctx->task_ctx consistent  (Peter Zijlstra; 1 file, -7/+14)

For no apparent reason and to great confusion, the rules for ctx->is_active and cpuctx->task_ctx are different. This means that it's not always possible to find all active (task) contexts.

Fix this such that if ctx->is_active gets set, we also set (or verify) cpuctx->task_ctx.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf: Optimize perf_sched_events() usage  (Peter Zijlstra; 1 file, -6/+16)

It doesn't make sense to take up-to _4_ references on perf_sched_events() per event; avoid doing this.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf: Simplify/fix perf_event_enable() event scheduling  (Peter Zijlstra; 1 file, -26/+5)

Like perf_enable_on_exec(), perf_event_enable() event scheduling has problems respecting the context hierarchy when trying to schedule events (for example, it will try and add a pinned event without first removing existing flexible events).

So simplify it by using the new ctx_resched() call which will DTRT.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf: Use task_ctx_sched_out()  (Peter Zijlstra; 1 file, -2/+1)

We have a function that does exactly what we want here, use it. This reduces the amount of cpuctx->task_ctx muckery.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf: Fix perf_enable_on_exec() event scheduling  (Peter Zijlstra; 1 file, -20/+27)

There are two problems with the current perf_enable_on_exec() event scheduling:

 - the newly enabled events will be immediately scheduled irrespective of their ctx event list order;

 - there's a hole in the ctx->lock between scheduling the events out and putting them back on.

Esp. the latter issue is a real problem because a hole in event scheduling leaves the thing in an observable inconsistent state, confusing things.

Fix both issues by first doing the enable iteration and at the end, when there are newly enabled events, reschedule the ctx in one go.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf: Remove stale comment  (Peter Zijlstra; 1 file, -7/+0)

The comment here is horribly out of date, remove it.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf: Fix cgroup scheduling in perf_enable_on_exec()  (Peter Zijlstra; 1 file, -24/+7)

There is a comment that states that perf_event_context_sched_in() will also switch in the cgroup events, but I cannot find that it does so. Therefore all the resulting logic goes out the window too.

Clean that up.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf: Fix cgroup event scheduling  (Peter Zijlstra; 1 file, -7/+10)

There appears to be a problem in __perf_event_task_sched_in() wrt cgroup event scheduling.

The normal event scheduling order is:

    CPU pinned
    Task pinned
    CPU flexible
    Task flexible

And since perf_cgroup_sched*() only schedules the cpu context, we must call this _before_ adding the task events.

Note: double check what happens on the ctx switch optimization where the task ctx isn't scheduled.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

2016-01-21  perf: Add lockdep assertions  (Peter Zijlstra; 1 file, -2/+8)

Make various bugs easier to see.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vince Weaver <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

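Such assertions typically take this form (generic lockdep API):

    lockdep_assert_held(&ctx->lock);    /* WARNs once if ctx->lock is
                                         * not held at this point */
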
2016-01-21  btrfs: sysfs: introduce helper for syncing bits with sysfs files  (David Sterba; 2 files, -0/+33)

The files under /sys/fs/UUID/features get out of sync with the actual incompat bits set for the filesystem if they change after mount. We're going to sync them and need a helper to do that.

Signed-off-by: David Sterba <[email protected]>

2016-01-21  btrfs: sysfs: add free-space-tree bit attribute  (David Sterba; 1 file, -0/+2)

The incompat bit representing the newly added free space tree feature is missing. Right now it will be listed only among features supported by the module, not per-fs.

Signed-off-by: David Sterba <[email protected]>

2016-01-21  IB/mlx5: Unify CQ create flags check  (Leon Romanovsky; 2 files, -9/+3)

create_cq() can receive creation flags, which were used differently by the two commits that added the create_cq extended command and cross-channel support. The merged code ended up accepting no flags at all.

This patch unifies the check into one function and one return error code.

Fixes: 972ecb821379 ("IB/mlx5: Add create_cq extended command")
Fixes: 051f263098a9 ("IB/mlx5: Add driver cross-channel support")
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  IB/mlx5: Expose Raw Packet QP to user space consumers  ([email protected]; 1 file, -12/+127)

Added Raw Packet QP modify functionality, which will enable user space consumers to use it.

Since a Raw Packet QP is built of SQ and RQ sub-objects, Raw Packet QP state changes are implemented by changing the state of the sub-objects.

Signed-off-by: Majd Dibbiny <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  {IB, net}/mlx5: Move the modify QP operation table to mlx5_ib  ([email protected]; 3 files, -54/+50)

When modifying a QP, the desired operation was determined in the mlx5_core using a transition table that takes the current state, the final state, and returns the desired operation.

Since this logic will be used for Raw Packet QP, move the operation table to the mlx5_ib.

Signed-off-by: Majd Dibbiny <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  IB/mlx5: Support setting Ethernet priority for Raw Packet QPs  ([email protected]; 4 files, -4/+57)

When the user changes the Address Vector (AV) in the modify QP, he provides an SL. This SL should be translated to an Ethernet priority by taking its 3 LSB bits, and the QP's TIS should be modified according to this Ethernet priority.

Signed-off-by: Majd Dibbiny <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  IB/mlx5: Add Raw Packet QP query functionality  ([email protected]; 4 files, -23/+205)

Since Raw Packet QP is composed of RQ and SQ, the IB QP's state is derived from the sub-objects. Therefore we need to query each one of the sub-objects, and decide on the IB QP's state.

Signed-off-by: Majd Dibbiny <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  IB/mlx5: Add create and destroy functionality for Raw Packet QP  ([email protected]; 3 files, -18/+365)

This patch adds support for Raw Packet QP for the mlx5 device.

Raw Packet QP, unlike other QP types, has no matching mlx5_core_qp object but rather is built of RQ/SQ/TIR/TIS/TD mlx5_core objects.

Since the SQ and RQ work-queue (WQ) buffers are not contiguous like other QPs, we allocate separate buffers in user-space and pass the address of each one of them separately to the kernel.

Signed-off-by: Majd Dibbiny <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  IB/mlx5: Refactor mlx5_ib_qp to accommodate other QP types  ([email protected]; 3 files, -95/+147)

Extract specific IB QP fields to the mlx5_ib_qp_trans structure. The mlx5_core QP object resides in mlx5_ib_qp_base, which all QP types inherit from. When we need to find mlx5_ib_qp using mlx5_core QP (event handling and co), we use a pointer that resides in mlx5_ib_qp_base.

In addition, we delete all redundant fields that weren't used anywhere in the code:

 - doorbell_qpn
 - sq_max_wqes_per_wr
 - sq_spare_wqes

Signed-off-by: Majd Dibbiny <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  IB/mlx5: Allocate a Transport Domain for each ucontext  ([email protected]; 2 files, -1/+18)

A Transport Domain groups several TIS and TIR objects. By grouping these objects, it defines whether local loopback packets that are sent from the TIS objects in the group are received by the TIR objects in the same group.

Allocate a Transport Domain (TD) for each user context, to be used in the future by Raw Packet QP for Self-Loopback Control.

Signed-off-by: Majd Dibbiny <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  net/mlx5_core: Warn on unsupported events of QP/RQ/SQ  ([email protected]; 1 file, -0/+52)

When an event arrives on QP/RQ/SQ, check whether it's supported, and print a warning message otherwise.

Signed-off-by: Majd Dibbiny <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  net/mlx5_core: Add RQ and SQ event handling  ([email protected]; 5 files, -23/+132)

RQ/SQ will be used to implement IB verbs QPs, so the IB QP affiliated events are affiliated also with SQs and RQs.

Since SQ, RQ and QP resource numbers do not share the same namespace, a queue type field was added to the event data to specify the SW object that the event is affiliated with.

Signed-off-by: Majd Dibbiny <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  net/mlx5_core: Export transport objects  ([email protected]; 5 files, -10/+20)

To be used by mlx5_ib in the following patches for implementing RAW PACKET QP.

Add the mlx5_core_ prefix to the alloc and dealloc transport_domain functions since they are exposed now.

Signed-off-by: Majd Dibbiny <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  IB/mlx5: Expose CQE version to user-space  (Haggai Abramovsky; 2 files, -5/+18)

Per user context, work with the CQE version that both the user-space and the kernel support. Report this CQE version via the response of the alloc_ucontext command.

Signed-off-by: Haggai Abramovsky <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  IB/mlx5: Add CQE version 1 support to user QPs and SRQs  (Haggai Abramovsky; 7 files, -21/+114)

Enforce working with CQE version 1 when the user supports CQE version 1 and asked to work this way.

If the user still works with CQE version 0, then use the default CQE version to tell the firmware that the user still works in the older mode.

After this patch, the kernel still reports CQE version 0.

Signed-off-by: Haggai Abramovsky <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  IB/mlx5: Fix data validation in mlx5_ib_alloc_ucontext  (Haggai Abramovsky; 1 file, -1/+4)

The wrong buffer size was passed to ib_is_udata_cleared.

Signed-off-by: Haggai Abramovsky <[email protected]>
Reviewed-by: Matan Barak <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  IB/sa: Fix netlink local service GFP crash  (Kaike Wan; 1 file, -2/+6)

The rdma netlink local service registers a handler to handle RESOLVE responses and another handler to handle SET_TIMEOUT requests. The first thing these handlers do is to call netlink_capable() to check the access rights of the received skb, to make sure that the sender has root access. Under normal conditions, such responses and requests will be directly forwarded to the handlers without going through the netlink_dump pathway (see ibnl_rcv_msg() in drivers/infiniband/core/netlink.c).

However, a user application could send a RESOLVE request (not response) to the local service, which will fall into the netlink_dump pathway, where a new skb will be created without initializing the control block. This new skb will eventually be forwarded to the local service RESOLVE response handler. Unfortunately, netlink_capable() will cause a general protection fault if the skb's control block is not initialized. This patch addresses the problem by checking the skb first.

Signed-off-by: Kaike Wan <[email protected]>
Reported-by: Dmitry Vyukov <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>

2016-01-21  ALSA: timer: Introduce disconnect op to snd_timer_instance  (Takashi Iwai; 2 files, -12/+12)

Instead of the previous ugly hack, introduce a new op, disconnect, to the snd_timer_instance object for handling the wake up of pending tasks more cleanly.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=109431
Signed-off-by: Takashi Iwai <[email protected]>

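A sketch of the op and how disconnection would invoke it (abridged; the real structures live in include/sound/timer.h and sound/core/timer.c):

    struct snd_timer_instance {
        /* ... */
        void (*disconnect)(struct snd_timer_instance *timeri);
    };

    /* On device disconnect, let each open instance wake its pending
     * tasks cleanly instead of the previous open-coded hack: */
    list_for_each_entry(ti, &timer->open_list_head, open_list)
        if (ti->disconnect)
            ti->disconnect(ti);
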
2016-01-21  ALSA: timer: Handle disconnection more safely  (Takashi Iwai; 1 file, -0/+48)

Currently the ALSA timer device doesn't take disconnection into account very well; it merely unlinks the timer device at the disconnection callback but does nothing else. Because of this, when an application accessing the timer device is disconnected, it may release the resource before it is actually closed. In most cases, this results in a warning message indicating a leftover timer instance like:

    ALSA: timer xxxx is busy?

But basically this is an open race.

This patch tries to address it. The strategy is like other ALSA devices, namely:

 - Manage the card's refcount at each open/close
 - Wake up the pending tasks at disconnection
 - Check the shutdown flag appropriately at each possible call

Note that this patch has one ugly hack to handle the wakeup of pending tasks. It'd be cleaner to introduce a new disconnect op to snd_timer_instance ops. But since that would lead to internal ABI breakage and would eventually increase my own work when backporting to stable kernels, I took a different path and implemented it locally in timer.c. A cleanup patch will follow next for the 4.5 kernel.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=109431
Cc: <[email protected]> # v3.15+
Signed-off-by: Takashi Iwai <[email protected]>

2016-01-21  pwm: Mark all devices as "might sleep"  (Thierry Reding; 1 file, -1/+1)

Commit d1cd21427747 ("pwm: Set enable state properly on failed call to enable") introduced a mutex that is needed to protect internal state of PWM devices. Since that mutex is acquired in pwm_set_polarity() and in pwm_enable() and might potentially block, all PWM devices effectively become "might sleep".

It's rather pointless to keep the .can_sleep field around, but given that there are external users let's postpone the removal for the next release cycle.

Signed-off-by: Thierry Reding <[email protected]>

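Given the one-line diffstat, the change plausibly reduces to making pwm_can_sleep() unconditionally true; a sketch:

    bool pwm_can_sleep(struct pwm_device *pwm)
    {
        return true;    /* every PWM op may now take the internal mutex */
    }
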