2014-10-14  x86: bpf_jit: fix two bugs in eBPF JIT compiler  (Alexei Starovoitov; 1 file, -6/+19)
1. The JIT compiler uses a multi-pass approach to converge to the final image size, since x86 instructions are variable length. It starts with large gaps between instructions (so some jumps may use imm32 instead of imm8) and iterates until the total program size is the same as in the previous pass. This algorithm works only if the program size is strictly decreasing. Programs that use the LD_ABS insn need additional code in the prologue, but it was not emitted during the 1st pass, so there was a chance that the 2nd pass would shrink imm32->imm8 jump offsets by the same number of bytes as the prologue grew, which could cause the algorithm to erroneously decide that the size had converged. Fix it by always emitting the largest prologue in the first pass, which is detected by the oldproglen==0 check. Also change the error check condition 'proglen != oldproglen' to fail gracefully.

2. While staring at the code, realized that a 64-byte buffer may not be enough when the 1st insn is large, so increase it to 128 to avoid a buffer overflow (the theoretical maximum size of prologue+div is 109) and add a runtime check.

Fixes: 622582786c9e ("net: filter: x86: internal BPF JIT") Reported-by: Darrick J. Wong <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Tested-by: Darrick J. Wong <[email protected]> Signed-off-by: David S. Miller <[email protected]>
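[ Illustration: the convergence loop described above, in simplified form. This is a sketch, not the actual arch/x86/net/bpf_jit_comp.c code; do_jit() and the surrounding error handling are condensed and the allocation call is only indicative. ]

	int proglen, oldproglen = 0, pass;
	u8 *image = NULL;
	u8 temp[128];   /* was 64; prologue + div emulation can need up to 109 bytes */

	for (pass = 0; pass < 10; pass++) {
		/* on the first pass oldproglen == 0, so the largest possible
		 * prologue is emitted; later passes may only shrink the image
		 */
		proglen = do_jit(prog, addrs, image, oldproglen, temp);
		if (image) {
			if (proglen != oldproglen)      /* must not happen after convergence */
				goto out;               /* fail gracefully, keep the old program */
			break;                          /* final image written */
		}
		if (proglen == oldproglen)              /* size converged */
			image = module_alloc(proglen);  /* next pass writes the real image */
		oldproglen = proglen;
	}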
2014-10-14  tcp: fix ooo_okay setting vs Small Queues  (Eric Dumazet; 1 file, -2/+6)
TCP Small Queues (tcp_tsq_handler()) can hold one reference on sk->sk_wmem_alloc, preventing skb->ooo_okay from being set. We should relax the test done to set skb->ooo_okay to take this extra reference into account. The minimal truesize of an skb containing one byte of payload is SKB_TRUESIZE(1). Without this fix, we have a higher chance of locking flows onto the wrong transmit queue. Signed-off-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>
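[ Illustration: the relaxed test, roughly as described above. sk_wmem_alloc_get() and SKB_TRUESIZE() are the real helpers; the exact hunk in tcp_output.c may differ. ]

	/* the old test was effectively sk_wmem_alloc_get(sk) == 0; with TSQ
	 * holding one extra reference that can never be true, so compare
	 * against the smallest possible truesize instead
	 */
	skb->ooo_okay = sk_wmem_alloc_get(sk) < SKB_TRUESIZE(1);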
2014-10-14  skbuff: fix ftrace handling in skb_unshare  (Alexander Aring; 1 file, -1/+6)
If the skb is not dropped afterwards we should call consume_skb instead of kfree_skb. Inside skb_unshare we currently always call kfree_skb, regardless of whether skb_copy failed or succeeded. This patch makes the behaviour match skb_share_check: if allocation of the sk_buff failed we use kfree_skb, otherwise consume_skb. Signed-off-by: Alexander Aring <[email protected]> Signed-off-by: David S. Miller <[email protected]>
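[ Illustration: the resulting skb_unshare() logic, simplified from include/linux/skbuff.h; surrounding details may differ. ]

	static inline struct sk_buff *skb_unshare(struct sk_buff *skb, gfp_t pri)
	{
		if (skb_cloned(skb)) {
			struct sk_buff *nskb = skb_copy(skb, pri);

			/* free our shared copy: consume_skb() on success so the
			 * drop tracepoint does not fire, kfree_skb() on failure
			 */
			if (likely(nskb))
				consume_skb(skb);
			else
				kfree_skb(skb);
			skb = nskb;
		}
		return skb;
	}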
2014-10-14  fm10k: Add skb->xmit_more support  (Alexander Duyck; 1 file, -31/+34)
This change adds support for skb->xmit_more based on the changes that were made to igb to support the feature. The main changes are moving up the check for maybe_stop_tx so that we can check netif_xmit_stopped to determine if we must write the tail because we can add no further buffers. Acked-by: Matthew Vick <[email protected]> Signed-off-by: Alexander Duyck <[email protected]> Acked-by: Jeff Kirsher <[email protected]> Signed-off-by: David S. Miller <[email protected]>
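[ Illustration of the usual xmit_more pattern; the ring/queue field names below are assumptions, not the literal fm10k code. The tail write is skipped while the stack promises more packets, unless the queue has just been stopped. ]

	/* make descriptor writes visible to the device before bumping the tail */
	wmb();

	if (netif_xmit_stopped(netdev_get_tx_queue(netdev, tx_ring->queue_index)) ||
	    !skb->xmit_more)
		writel(tx_ring->next_to_use, tx_ring->tail);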
2014-10-14  ceph: remove redundant code for max file size verification  (Chao Yu; 2 files, -15/+0)
Both ceph_update_writeable_page and ceph_setattr verify the file size against the max size ceph supports. There are two callers of ceph_update_writeable_page: ceph_write_begin and ceph_page_mkwrite. For ceph_write_begin, we have already verified the size in generic_write_checks of ceph_write_iter; for ceph_page_mkwrite, we have no chance to change the file size when mmapping. Likewise we have already verified the size in inode_change_ok when we call ceph_setattr. So let's remove the redundant code for max file size verification. Signed-off-by: Chao Yu <[email protected]> Reviewed-by: Yan, Zheng <[email protected]>
2014-10-14  ceph: remove redundant io_iter_advance()  (Yan, Zheng; 1 file, -1/+0)
ceph_sync_read and generic_file_read_iter() have already advanced the IO iterator. Signed-off-by: Yan, Zheng <[email protected]>
2014-10-14  ceph: move ceph_find_inode() outside the s_mutex  (Yan, Zheng; 2 files, -8/+10)
ceph_find_inode() may wait on a freeing inode; using it inside the s_mutex may cause a deadlock (the freeing inode is waiting for an OSD read reply, but the dispatch thread is blocked by the s_mutex). Signed-off-by: Yan, Zheng <[email protected]> Reviewed-by: Sage Weil <[email protected]>
2014-10-14  ceph: request xattrs if xattr_version is zero  (Yan, Zheng; 5 files, -30/+20)
The following sequence of events can happen:
 - Client releases an inode and queues a cap release message.
 - A 'lookup' reply brings the same inode back, but the reply doesn't contain xattrs because the MDS didn't receive the cap release message and thought the client already has up-to-date xattrs.

The fix is to force sending a getattr request to the MDS if xattrs_version is 0. The getattr mask is set to CEPH_STAT_CAP_XATTR, so the MDS knows the client does not have the xattrs. Signed-off-by: Yan, Zheng <[email protected]>
2014-10-14  rbd: set the remaining discard properties to enable support  (Josh Durgin; 1 file, -0/+2)
max_discard_sectors must be set for the queue to support discard. Operations implementing discard for rbd zero data, so report that. Signed-off-by: Josh Durgin <[email protected]>
2014-10-14  rbd: use helpers to handle discard for layered images correctly  (Josh Durgin; 1 file, -32/+22)
Only allocate two osd ops for discard requests, since the preallocation hint is only added for regular writes. Use rbd_img_obj_request_fill() to recreate the original write or discard osd operations, isolating that logic to one place, and change the assert in rbd_osd_req_create_copyup() to accept discard requests as well. Signed-off-by: Josh Durgin <[email protected]>
2014-10-14  rbd: extract a method for adding object operations  (Josh Durgin; 1 file, -55/+78)
rbd_img_request_fill() creates a ceph_osd_request and has logic for adding the appropriate osd ops to it based on the request type and image properties. For layered images, the original rbd_obj_request is resent with a copyup operation in front, using a new ceph_osd_request. The logic for adding the original operations should be the same as when first sending them, so move it to a helper function. op_type only needs to be checked once, so create a helper for that as well and call it outside the loop in rbd_img_request_fill(). Signed-off-by: Josh Durgin <[email protected]>
2014-10-14  rbd: make discard trigger copy-on-write  (Josh Durgin; 1 file, -1/+2)
Discard requests are a form of write, so they should go through the same process as plain write requests and trigger copy-on-write for layered images. Signed-off-by: Josh Durgin <[email protected]>
2014-10-14  rbd: tolerate -ENOENT for discard operations  (Josh Durgin; 1 file, -0/+3)
Discard may try to delete an object from a non-layered image that does not exist. If this occurs, the image already has no data in that range, so change the result to success. Signed-off-by: Josh Durgin <[email protected]>
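[ Illustration only; the op-type enum and field names are assumptions rather than the exact rbd.c symbols. ]

	/* a discard of an object that was never created is already a no-op */
	if (op_type == OBJ_OP_DISCARD && result == -ENOENT)
		result = 0;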
2014-10-14  rbd: fix snapshot context reference count for discards  (Josh Durgin; 1 file, -1/+2)
Discards take a reference to the snapshot context of an image when they are created. This reference needs to be cleaned up when the request is done just as it is for regular writes. Signed-off-by: Josh Durgin <[email protected]>
2014-10-14  rbd: read image size for discard check safely  (Josh Durgin; 1 file, -6/+12)
In rbd_img_request_fill() the image size is only checked to determine whether we can truncate an object instead of zeroing it for discard requests. Take rbd_dev->header_rwsem while reading the image size, and move this read into the discard check, so that non-discard ops don't need to take the semaphore in this function. Signed-off-by: Josh Durgin <[email protected]>
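[ Illustration of the guarded read described above; field names follow the commit text and are not the literal diff. ]

	u64 img_size;

	if (op_type == OBJ_OP_DISCARD) {
		down_read(&rbd_dev->header_rwsem);
		img_size = rbd_dev->mapping.size;   /* one consistent snapshot of the size */
		up_read(&rbd_dev->header_rwsem);
	}
	/* img_size is then used to decide between truncating and zeroing the object */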
2014-10-14  rbd: initial discard bits from Guangliang Zhao  (Guangliang Zhao; 1 file, -15/+89)
This patch adds discard support for the rbd driver. There are three types of operations in the driver:
 1. Objects are removed if they are completely contained within the discard range.
 2. Objects are truncated if they are partly contained within the discard range and the range is aligned with their boundary.
 3. Everything else is zeroed.

A discard request from blkdev_issue_discard() has both REQ_WRITE and REQ_DISCARD marked and carries no data, so we must check REQ_DISCARD first when determining the request type. This resolves: http://tracker.ceph.com/issues/190 [ Ilya Dryomov: This is incomplete and somewhat buggy, see follow-up commits by Josh Durgin for refinements and fixes which weren't folded in to preserve authorship. ] Signed-off-by: Guangliang Zhao <[email protected]> Reviewed-by: Josh Durgin <[email protected]> Reviewed-by: Alex Elder <[email protected]>
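[ Illustration: the three cases map naturally onto existing OSD op codes. This is a sketch; the exact conditions and helper usage in the submitted code differ. ]

	/* pick an OSD op for the part of this object covered by the discard */
	if (offset == 0 && length == rbd_obj_bytes(&rbd_dev->header))
		opcode = CEPH_OSD_OP_DELETE;     /* whole object covered: remove it */
	else if (offset + length == rbd_obj_bytes(&rbd_dev->header))
		opcode = CEPH_OSD_OP_TRUNCATE;   /* tail of the object covered: truncate */
	else
		opcode = CEPH_OSD_OP_ZERO;       /* middle of the object: just zero the range */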
2014-10-14  rbd: extend the operation type  (Guangliang Zhao; 1 file, -34/+63)
It can only handle read and write operations now; extend it for the coming discard support. Signed-off-by: Guangliang Zhao <[email protected]> Reviewed-by: Josh Durgin <[email protected]> Reviewed-by: Alex Elder <[email protected]>
2014-10-14  rbd: skip the copyup when an entire object writing  (Guangliang Zhao; 1 file, -0/+8)
A layered write needs to copy up the parent's content, but a write covering an entire object would overwrite it anyway, so skip the copyup in that case. Signed-off-by: Guangliang Zhao <[email protected]> Reviewed-by: Josh Durgin <[email protected]> Reviewed-by: Alex Elder <[email protected]>
2014-10-14  rbd: add img_obj_request_simple() helper  (Ilya Dryomov; 1 file, -16/+28)
To clarify the conditions and make it easier to add new ones. Signed-off-by: Ilya Dryomov <[email protected]>
2014-10-14  rbd: access snapshot context and mapping size safely  (Josh Durgin; 1 file, -13/+21)
These fields may both change while the image is mapped if a snapshot is created or deleted or the image is resized. They are guarded by rbd_dev->header_rwsem, so hold that while reading them, and store a local copy to refer to outside of the critical section. The local copy will stay consistent since the snapshot context is reference counted, and the mapping size is just a u64. This prevents torn loads from giving us inconsistent values. Move reading header.snapc into the caller of rbd_img_request_create() so that we only need to take the semaphore once. The read-only caller, rbd_parent_request_create() can just pass NULL for snapc, since the snapshot context is only relevant for writes. Signed-off-by: Josh Durgin <[email protected]>
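[ Illustration of the pattern described above; ceph_get_snap_context() is the real refcount helper, the surrounding code is simplified. ]

	struct ceph_snap_context *snapc;
	u64 size;

	down_read(&rbd_dev->header_rwsem);
	snapc = rbd_dev->header.snapc;
	ceph_get_snap_context(snapc);      /* local reference keeps the copy stable */
	size = rbd_dev->mapping.size;
	up_read(&rbd_dev->header_rwsem);

	/* snapc and size can now be used outside the critical section; snapc is
	 * passed to rbd_img_request_create() (NULL for read-only requests)
	 */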
2014-10-14  rbd: do not return -ERANGE on auth failures  (Ilya Dryomov; 1 file, -3/+1)
Trying to map an image out of a pool for which we don't have an 'x' permission bit fails with -ERANGE from ceph_extract_encoded_string() due to an unsigned vs signed bug. Fix it and get rid of the -EINVAL sink, thus propagating rbd::get_id cls method errors. (I've seen a bunch of unexplained -ERANGE reports, I bet this is it). Signed-off-by: Ilya Dryomov <[email protected]> Reviewed-by: Alex Elder <[email protected]>
2014-10-14  libceph: don't try checking queue_work() return value  (Ilya Dryomov; 1 file, -10/+5)
queue_work() doesn't "fail to queue", it returns false if work was already on a queue, which can't happen here since we allocate event_work right before we queue it. So don't bother at all. Signed-off-by: Ilya Dryomov <[email protected]> Reviewed-by: Alex Elder <[email protected]>
2014-10-14  ceph: make sure request isn't in any waiting list when kicking request.  (Yan, Zheng; 1 file, -0/+1)
We may corrupt the waiting list if a request that is on the waiting list is kicked. Signed-off-by: Yan, Zheng <[email protected]> Reviewed-by: Sage Weil <[email protected]>
2014-10-14  ceph: protect kick_requests() with mdsc->mutex  (Yan, Zheng; 1 file, -2/+3)
Signed-off-by: Yan, Zheng <[email protected]> Reviewed-by: Sage Weil <[email protected]>
2014-10-14  libceph: Convert pr_warning to pr_warn  (Joe Perches; 5 files, -37/+39)
Use the more common pr_warn. Other miscellanea:
 o Coalesce formats
 o Realign arguments
Signed-off-by: Joe Perches <[email protected]> Signed-off-by: Ilya Dryomov <[email protected]>
2014-10-14  ceph: trim unused inodes before reconnecting to recovering MDS  (Yan, Zheng; 1 file, -10/+13)
So the recovering MDS does not need to fetch these unused inodes during cache rejoin. This may reduce MDS recovery time. Signed-off-by: Yan, Zheng <[email protected]>
2014-10-14  libceph: fix a use after free issue in osdmap_set_max_osd  (Li RongQing; 1 file, -16/+16)
If the state variable is krealloc'ed successfully, the old map->osd_state has been freed. If either of the two following reallocations then fails and we exit the function without resetting map->osd_state, map->osd_state becomes a dangling pointer. Fix it by resetting the map fields right after each successful krealloc. Signed-off-by: Li RongQing <[email protected]> Signed-off-by: Ilya Dryomov <[email protected]>
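[ Illustration of the safe krealloc pattern, not the verbatim osdmap_set_max_osd() hunk. ]

	u8 *state;

	state = krealloc(map->osd_state, max * sizeof(*state), GFP_NOFS);
	if (!state)
		return -ENOMEM;
	map->osd_state = state;   /* commit immediately: a later allocation failure
				   * must not leave map->osd_state pointing at freed memory
				   */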
2014-10-14  libceph: select CRYPTO_CBC in addition to CRYPTO_AES  (Ilya Dryomov; 1 file, -0/+1)
We want "cbc(aes)" algorithm, so select CRYPTO_CBC too, not just CRYPTO_AES. Otherwise on !CRYPTO_CBC kernels we fail rbd map/mount with libceph: error -2 building auth method x request Signed-off-by: Ilya Dryomov <[email protected]>
2014-10-14  libceph: resend lingering requests with a new tid  (Ilya Dryomov; 1 file, -19/+54)
Both not yet registered (r_linger && list_empty(&r_linger_item)) and registered linger requests should use the new tid on resend to avoid the dup op detection logic on the OSDs, yet we were doing this only for "registered" case. Factor out and simplify the "registered" logic and use the new helper for "not registered" case as well. Fixes: http://tracker.ceph.com/issues/8806 Signed-off-by: Ilya Dryomov <[email protected]> Reviewed-by: Alex Elder <[email protected]>
2014-10-14  libceph: abstract out ceph_osd_request enqueue logic  (Ilya Dryomov; 1 file, -7/+17)
Introduce __enqueue_request() and switch to it. Signed-off-by: Ilya Dryomov <[email protected]> Reviewed-by: Alex Elder <[email protected]>
2014-10-14  net: fec: Fix sparse warnings with different lock contexts for basic block  (Nimrod Andy; 1 file, -10/+14)
Reproduce:
	make ARCH=arm C=1 2>fec.txt drivers/net/ethernet/freescale/fec_main.o
	cat fec.txt

sparse warnings:
	drivers/net/ethernet/freescale/fec_main.c:2916:12: warning: context imbalance in 'fec_set_features' - different lock contexts for basic block

Christopher Li suggested changing it as below:
	if (need_lock) {
		lock();
		do_something_real();
		unlock();
	} else {
		do_something_real();
	}

Reported-by: Fabio Estevam <[email protected]> Suggested-by: Christopher Li <[email protected]> Signed-off-by: Fugang Duan <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2014-10-14  MAINTAINERS: Update contact information for Vince Bridgers  (Vince Bridgers; 1 file, -1/+1)
Signed-off-by: Vince Bridgers <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2014-10-14  Merge branch 'sctp'  (David S. Miller; 6 files, -89/+77)
Daniel Borkmann says: ==================== Here are some SCTP fixes. [ Note, immediate workaround would be to disable ASCONF (it is sysctl disabled by default). It is actually only used together with chunk authentication. ] ==================== Signed-off-by: David S. Miller <[email protected]>
2014-10-14  net: sctp: fix remote memory pressure from excessive queueing  (Daniel Borkmann; 2 files, -26/+10)
This scenario is not limited to ASCONF, just taken as one example triggering the issue. When receiving ASCONF probes in the form of ...

	-------------- INIT[ASCONF; ASCONF_ACK] ------------->
	<----------- INIT-ACK[ASCONF; ASCONF_ACK] ------------
	-------------------- COOKIE-ECHO -------------------->
	<-------------------- COOKIE-ACK ---------------------
	---- ASCONF_a; [ASCONF_b; ...; ASCONF_n;] JUNK ------>
	[...]
	---- ASCONF_m; [ASCONF_o; ...; ASCONF_z;] JUNK ------>

... where ASCONF_a, ASCONF_b, ..., ASCONF_z are good-formed ASCONFs and have increasing serial numbers, we process such ASCONF chunk(s) marked with !end_of_packet and !singleton, since we have not yet reached the SCTP packet end. SCTP does only do verification on a chunk by chunk basis, as an SCTP packet is nothing more than just a container of a stream of chunks which it eats up one by one.

We could run into the case that we receive a packet with a malformed tail, above marked as trailing JUNK. All previous chunks are here good-formed, so the stack will eat up all previous chunks up to this point. In case JUNK does not fit into a chunk header and there are no more other chunks in the input queue, or in case JUNK contains a garbage chunk header, but the encoded chunk length would exceed the skb tail, or we came here from an entirely different scenario and the chunk has pdiscard=1 mark (without having had a flush point), it will happen, that we will excessively queue up the association's output queue (a correct final chunk may then turn it into a response flood when flushing the queue ;)): I ran a simple script with incremental ASCONF serial numbers and could see the server side consuming excessive amount of RAM [before/after: up to 2GB and more].

The issue at heart is that the chunk train basically ends with !end_of_packet and !singleton markers and since commit 2e3216cd54b1 ("sctp: Follow security requirement of responding with 1 packet") therefore preventing an output queue flush point in sctp_do_sm() -> sctp_cmd_interpreter() on the input chunk (chunk = event_arg) even though local_cork is set, but its precedence has changed since then. In the normal case, the last chunk with end_of_packet=1 would trigger the queue flush to accommodate possible outgoing bundling.

In the input queue, sctp_inq_pop() seems to do the right thing in terms of discarding invalid chunks. So, above JUNK will not enter the state machine and instead be released and exit the sctp_assoc_bh_rcv() chunk processing loop. It's simply the flush point being missing at loop exit. Adding a try-flush approach on the output queue might not work as the underlying infrastructure might be long gone at this point due to the side-effect interpreter run. One possibility, albeit a bit of a kludge, would be to defer invalid chunk freeing into the state machine in order to possibly trigger packet discards and thus indirectly a queue flush on error. It would surely be better to discard chunks as in the current, perhaps better controlled environment, but going back and forth, it's simply architecturally not possible. I tried various trailing JUNK attack cases and it seems to look good now.

Joint work with Vlad Yasevich. Fixes: 2e3216cd54b1 ("sctp: Follow security requirement of responding with 1 packet") Signed-off-by: Daniel Borkmann <[email protected]> Signed-off-by: Vlad Yasevich <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2014-10-14  net: sctp: fix panic on duplicate ASCONF chunks  (Daniel Borkmann; 2 files, -0/+7)
When receiving a e.g. semi-good formed connection scan in the form of ...

	-------------- INIT[ASCONF; ASCONF_ACK] ------------->
	<----------- INIT-ACK[ASCONF; ASCONF_ACK] ------------
	-------------------- COOKIE-ECHO -------------------->
	<-------------------- COOKIE-ACK ---------------------
	---------------- ASCONF_a; ASCONF_b ----------------->

... where ASCONF_a equals ASCONF_b chunk (at least both serials need to be equal), we panic an SCTP server!

The problem is that good-formed ASCONF chunks that we reply with ASCONF_ACK chunks are cached per serial. Thus, when we receive a same ASCONF chunk twice (e.g. through a lost ASCONF_ACK), we do not need to process them again on the server side (that was the idea, also proposed in the RFC). Instead, we know it was cached and we just resend the cached chunk instead. So far, so good.

Where things get nasty is in SCTP's side effect interpreter, that is, sctp_cmd_interpreter(): While incoming ASCONF_a (chunk = event_arg) is being marked !end_of_packet and !singleton, and we have an association context, we do not flush the outqueue the first time after processing the ASCONF_ACK singleton chunk via SCTP_CMD_REPLY. Instead, we keep it queued up, although we set local_cork to 1. Commit 2e3216cd54b1 changed the precedence, so that as long as we get bundled, incoming chunks we try possible bundling on outgoing queue as well. Before this commit, we would just flush the output queue.

Now, while ASCONF_a's ASCONF_ACK sits in the corked outq, we continue to process the same ASCONF_b chunk from the packet. As we have cached the previous ASCONF_ACK, we find it, grab it and do another SCTP_CMD_REPLY command on it. So, effectively, we rip the chunk->list pointers and requeue the same ASCONF_ACK chunk another time. Since we process ASCONF_b, it's correctly marked with end_of_packet and we enforce an uncork, and thus flush, thus crashing the kernel.

Fix it by testing if the ASCONF_ACK is currently pending and if that is the case, do not requeue it. When flushing the output queue we may relink the chunk for preparing an outgoing packet, but eventually unlink it when it's copied into the skb right before transmission.

Joint work with Vlad Yasevich. Fixes: 2e3216cd54b1 ("sctp: Follow security requirement of responding with 1 packet") Signed-off-by: Daniel Borkmann <[email protected]> Signed-off-by: Vlad Yasevich <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2014-10-14  net: sctp: fix skb_over_panic when receiving malformed ASCONF chunks  (Daniel Borkmann; 3 files, -63/+60)
Commit 6f4c618ddb0 ("SCTP : Add paramters validity check for ASCONF chunk") added basic verification of ASCONF chunks, however, it is still possible to remotely crash a server by sending a special crafted ASCONF chunk, even up to pre 2.6.12 kernels:

	skb_over_panic: text:ffffffffa01ea1c3 len:31056 put:30768
	  head:ffff88011bd81800 data:ffff88011bd81800 tail:0x7950 end:0x440 dev:<NULL>
	------------[ cut here ]------------
	kernel BUG at net/core/skbuff.c:129!
	[...]
	Call Trace:
	<IRQ>
	[<ffffffff8144fb1c>] skb_put+0x5c/0x70
	[<ffffffffa01ea1c3>] sctp_addto_chunk+0x63/0xd0 [sctp]
	[<ffffffffa01eadaf>] sctp_process_asconf+0x1af/0x540 [sctp]
	[<ffffffff8152d025>] ? _read_unlock_bh+0x15/0x20
	[<ffffffffa01e0038>] sctp_sf_do_asconf+0x168/0x240 [sctp]
	[<ffffffffa01e3751>] sctp_do_sm+0x71/0x1210 [sctp]
	[<ffffffff8147645d>] ? fib_rules_lookup+0xad/0xf0
	[<ffffffffa01e6b22>] ? sctp_cmp_addr_exact+0x32/0x40 [sctp]
	[<ffffffffa01e8393>] sctp_assoc_bh_rcv+0xd3/0x180 [sctp]
	[<ffffffffa01ee986>] sctp_inq_push+0x56/0x80 [sctp]
	[<ffffffffa01fcc42>] sctp_rcv+0x982/0xa10 [sctp]
	[<ffffffffa01d5123>] ? ipt_local_in_hook+0x23/0x28 [iptable_filter]
	[<ffffffff8148bdc9>] ? nf_iterate+0x69/0xb0
	[<ffffffff81496d10>] ? ip_local_deliver_finish+0x0/0x2d0
	[<ffffffff8148bf86>] ? nf_hook_slow+0x76/0x120
	[<ffffffff81496d10>] ? ip_local_deliver_finish+0x0/0x2d0
	[<ffffffff81496ded>] ip_local_deliver_finish+0xdd/0x2d0
	[<ffffffff81497078>] ip_local_deliver+0x98/0xa0
	[<ffffffff8149653d>] ip_rcv_finish+0x12d/0x440
	[<ffffffff81496ac5>] ip_rcv+0x275/0x350
	[<ffffffff8145c88b>] __netif_receive_skb+0x4ab/0x750
	[<ffffffff81460588>] netif_receive_skb+0x58/0x60

This can be triggered e.g., through a simple scripted nmap connection scan injecting the chunk after the handshake, for example, ...

	-------------- INIT[ASCONF; ASCONF_ACK] ------------->
	<----------- INIT-ACK[ASCONF; ASCONF_ACK] ------------
	-------------------- COOKIE-ECHO -------------------->
	<-------------------- COOKIE-ACK ---------------------
	------------------ ASCONF; UNKNOWN ------------------>

... where ASCONF chunk of length 280 contains 2 parameters ...

	1) Add IP address parameter (param length: 16)
	2) Add/del IP address parameter (param length: 255)

... followed by an UNKNOWN chunk of e.g. 4 bytes. Here, the Address Parameter in the ASCONF chunk is even missing, too. This is just an example and similarly-crafted ASCONF chunks could be used just as well.

The ASCONF chunk passes through sctp_verify_asconf() as all parameters passed sanity checks, and after walking, we ended up successfully at the chunk end boundary, and thus may invoke sctp_process_asconf(). Parameter walking is done with WORD_ROUND() to take padding into account.

In sctp_process_asconf()'s TLV processing, we may fail in sctp_process_asconf_param() e.g., due to removal of the IP address that is also the source address of the packet containing the ASCONF chunk, and thus we need to add all TLVs after the failure to our ASCONF response to remote via helper function sctp_add_asconf_response(), which basically invokes a sctp_addto_chunk() adding the error parameters to the given skb.

When walking to the next parameter this time, we proceed with ...

	length = ntohs(asconf_param->param_hdr.length);
	asconf_param = (void *)asconf_param + length;

... instead of the WORD_ROUND()'ed length, thus resulting here in an off-by-one that leads to reading the follow-up garbage parameter length of 12336, and thus throwing an skb_over_panic for the reply when trying to sctp_addto_chunk() next time, which implicitly calls the skb_put() with that length.

Fix it by using the sctp_walk_params() macro [ which is also used in INIT parameter processing ] in the verification *and* in ASCONF processing: it will make sure we don't spill over, that we walk parameters WORD_ROUND()'ed. Moreover, we're being more defensive and guard against unknown parameter types and missized addresses.

Joint work with Vlad Yasevich. Fixes: b896b82be4ae ("[SCTP] ADDIP: Support for processing incoming ASCONF_ACK chunks.") Signed-off-by: Daniel Borkmann <[email protected]> Signed-off-by: Vlad Yasevich <[email protected]> Acked-by: Neil Horman <[email protected]> Signed-off-by: David S. Miller <[email protected]>
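[ Illustration of the contrast between the buggy and the padded advance; the real fix converts both sctp_verify_asconf() and sctp_process_asconf() to the sctp_walk_params() macro. ]

	/* buggy: advance by the raw TLV length, ignoring 4-byte padding */
	length = ntohs(asconf_param->param_hdr.length);
	asconf_param = (void *)asconf_param + length;

	/* safe: advance by the padded length, which is what
	 * sctp_walk_params() does for every parameter it visits
	 */
	asconf_param = (void *)asconf_param + WORD_ROUND(length);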
2014-10-14  phy/micrel: KSZ8031RNL RMII clock reconfiguration bug  (Bruno Thomsen; 1 file, -1/+3)
Bug: Unable to send and receive Ethernet packets with Micrel PHY.

Affected devices:
	KSZ8031RNL (commercial temp)
	KSZ8031RNLI (industrial temp)

Description: The PHY device is correctly detected during probe. The PHY power-up default is a 25MHz crystal clock input and a 50MHz RMII clock output to the MAC. Reconfiguring the PHY to input a 50MHz RMII clock from the MAC causes the PHY to become unresponsive if the clock source is changed after the Operation Mode Strap Override (OMSO) register setup.

Cause: Long lead times on parts where the clock setup matches the circuit design force the usage of similar parts with the wrong default setup.

Solution: Swapped the KSZ8031 register setup and added phy_write return code validation. Tested with the Freescale i.MX28 Fast Ethernet Controller (fec).

Signed-off-by: Bruno Thomsen <[email protected]> Signed-off-by: David S. Miller <[email protected]>
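[ Illustration of the reordered setup with error checking; the register names and values below are assumptions for illustration, not the exact driver diff. ]

	/* change the reference clock source first, then apply the strap
	 * override; check phy_write() so a failure is not silently ignored
	 */
	ret = phy_write(phydev, MII_KSZPHY_CTRL, clk_sel);
	if (ret < 0)
		return ret;

	ret = phy_write(phydev, MII_KSZPHY_OMSO, omso_val);
	if (ret < 0)
		return ret;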
2014-10-14  block: Remove REQ_KERNEL  (Martin K. Petersen; 5 files, -15/+9)
REQ_KERNEL is no longer used. Remove it and drop the redundant uio argument to nfs_file_direct_{read,write}. Signed-off-by: Martin K. Petersen <[email protected]> Cc: Christoph Hellwig <[email protected]> Reported-by: Dan Carpenter <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2014-10-14  perf session: Remove last reference to hists struct  (Arnaldo Carvalho de Melo; 6 files, -8/+19)
Now perf_session doesn't require that the evsels in its evlist are ones containing hists. Tools that are hists based and want to do per evsel events_stats updates, if at some point this turns into a necessity, should do it in the tool specific code, keeping the session class hists agnostic. Cc: Adrian Hunter <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: David Ahern <[email protected]> Cc: Don Zickus <[email protected]> Cc: Frederic Weisbecker <[email protected]> Cc: Jean Pihet <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lkml.kernel.org/n/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2014-10-14  arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE  (Christoffer Dall; 1 file, -0/+11)
When creating or moving a memslot, make sure the IPA space is within the addressable range of the guest. Otherwise, user space can create too large a memslot and KVM would try to access potentially unallocated page table entries when inserting entries in the Stage-2 page tables. Acked-by: Catalin Marinas <[email protected]> Acked-by: Marc Zyngier <[email protected]> Signed-off-by: Christoffer Dall <[email protected]>
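[ Illustration of the range check, modeled on kvm_arch_prepare_memory_region(); exact placement and error code may differ. ]

	/* reject a memslot whose guest-physical range exceeds what Stage-2 can map */
	if ((memslot->base_gfn + memslot->npages) > (KVM_PHYS_SIZE >> PAGE_SHIFT))
		return -EFAULT;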
2014-10-14  arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2  (Christoffer Dall; 4 files, -40/+249)
This patch adds the necessary support for all host kernel PGSIZE and VA_SPACE configuration options for both EL2 and the Stage-2 page tables. However, for 40bit and 42bit PARange systems, the architecture mandates that VTCR_EL2.SL0 is maximum 1, resulting in fewer levels of stage-2 page tables than levels of host kernel page tables. At the same time, on systems with a PARange > 42bit, we limit the IPA range by always setting VTCR_EL2.T0SZ to 24.

To solve the situation with different levels of page tables for Stage-2 translation than the host kernel page tables, we allocate a dummy PGD with pointers to our actual initial level Stage-2 page table, in order for us to reuse the kernel pgtable manipulation primitives. Reproducing all these in KVM does not look pretty and unnecessarily complicates the 32-bit side.

Systems with a PARange < 40bits are not yet supported.

[ I have reworked this patch from its original form submitted by Jungseok to take the architecture constraints into consideration. There were too many changes from the original patch for me to preserve the authorship. Thanks to Catalin Marinas for his help in figuring out a good solution to this challenge. I have also fixed various bugs and missing error code handling from the original patch. - Christoffer ]

Reviewed-by: Catalin Marinas <[email protected]> Acked-by: Marc Zyngier <[email protected]> Signed-off-by: Jungseok Lee <[email protected]> Signed-off-by: Christoffer Dall <[email protected]>
2014-10-14  crypto: LLVMLinux: Remove VLAIS usage from crypto/testmgr.c  (Jan-Simon Möller; 1 file, -8/+6)
Replaced the use of a Variable Length Array In Struct (VLAIS) with a C99 compliant equivalent. This patch allocates the appropriate amount of memory in a char array via the SHASH_DESC_ON_STACK macro. The new code can be compiled with both gcc and clang. Signed-off-by: Jan-Simon Möller <[email protected]> Signed-off-by: Behan Webster <[email protected]> Reviewed-by: Mark Charlebois <[email protected]> Acked-by: Herbert Xu <[email protected]> Cc: [email protected]
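[ Illustration of the before/after pattern shared by this and the following VLAIS-removal patches; SHASH_DESC_ON_STACK is the real helper from crypto/hash.h, the tfm variable and digest call are just context. ]

	/* before: VLAIS, a gcc extension that clang does not accept */
	struct {
		struct shash_desc shash;
		char ctx[crypto_shash_descsize(tfm)];
	} sdesc;

	/* after: C99-compliant stack allocation via the macro */
	SHASH_DESC_ON_STACK(shash, tfm);

	shash->tfm = tfm;
	shash->flags = 0;
	crypto_shash_digest(shash, data, len, out);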
2014-10-14  security, crypto: LLVMLinux: Remove VLAIS from ima_crypto.c  (Behan Webster; 1 file, -28/+19)
Replaced the use of a Variable Length Array In Struct (VLAIS) with a C99 compliant equivalent. This patch allocates the appropriate amount of memory in a char array via the SHASH_DESC_ON_STACK macro. The new code can be compiled with both gcc and clang. Signed-off-by: Behan Webster <[email protected]> Reviewed-by: Mark Charlebois <[email protected]> Reviewed-by: Jan-Simon Möller <[email protected]> Acked-by: Herbert Xu <[email protected]> Acked-by: Dmitry Kasatkin <[email protected]> Cc: [email protected]
2014-10-14  crypto: LLVMLinux: Remove VLAIS usage from libcrc32c.c  (Jan-Simon Möller; 1 file, -9/+7)
Replaced the use of a Variable Length Array In Struct (VLAIS) with a C99 compliant equivalent. This patch allocates the appropriate amount of memory in a char array via the SHASH_DESC_ON_STACK macro. The new code can be compiled with both gcc and clang. Signed-off-by: Jan-Simon Möller <[email protected]> Signed-off-by: Behan Webster <[email protected]> Reviewed-by: Mark Charlebois <[email protected]> Acked-by: Herbert Xu <[email protected]> Cc: [email protected] Cc: "David S. Miller" <[email protected]>
2014-10-14  crypto: LLVMLinux: Remove VLAIS usage from crypto/hmac.c  (Jan-Simon Möller; 1 file, -14/+11)
Replaced the use of a Variable Length Array In Struct (VLAIS) with a C99 compliant equivalent. This patch allocates the appropriate amount of memory in a char array via the SHASH_DESC_ON_STACK macro. The new code can be compiled with both gcc and clang. Signed-off-by: Jan-Simon Möller <[email protected]> Signed-off-by: Behan Webster <[email protected]> Reviewed-by: Mark Charlebois <[email protected]> Acked-by: Herbert Xu <[email protected]> Cc: [email protected]
2014-10-14  crypto, dm: LLVMLinux: Remove VLAIS usage from dm-crypt  (Jan-Simon Möller; 1 file, -20/+14)
Replaced the use of a Variable Length Array In Struct (VLAIS) with a C99 compliant equivalent. This patch allocates the appropriate amount of memory in a char array via the SHASH_DESC_ON_STACK macro. The new code can be compiled with both gcc and clang. Signed-off-by: Jan-Simon Möller <[email protected]> Signed-off-by: Behan Webster <[email protected]> Reviewed-by: Mark Charlebois <[email protected]> Acked-by: Herbert Xu <[email protected]> Cc: [email protected] Cc: [email protected] Cc: "David S. Miller" <[email protected]>
2014-10-14  crypto: LLVMLinux: Remove VLAIS from crypto/.../qat_algs.c  (Behan Webster; 1 file, -17/+14)
Replaced the use of a Variable Length Array In Struct (VLAIS) with a C99 compliant equivalent. This patch allocates the appropriate amount of memory in a char array via the SHASH_DESC_ON_STACK macro. The new code can be compiled with both gcc and clang. Signed-off-by: Behan Webster <[email protected]> Reviewed-by: Mark Charlebois <[email protected]> Reviewed-by: Jan-Simon Möller <[email protected]> Acked-by: Herbert Xu <[email protected]>
2014-10-14  crypto: LLVMLinux: Remove VLAIS from crypto/omap_sham.c  (Behan Webster; 1 file, -17/+11)
Replaced the use of a Variable Length Array In Struct (VLAIS) with a C99 compliant equivalent. This patch allocates the appropriate amount of memory in a char array via the SHASH_DESC_ON_STACK macro. The new code can be compiled with both gcc and clang. Signed-off-by: Behan Webster <[email protected]> Reviewed-by: Mark Charlebois <[email protected]> Reviewed-by: Jan-Simon Möller <[email protected]> Acked-by: Herbert Xu <[email protected]>
2014-10-14  crypto: LLVMLinux: Remove VLAIS from crypto/n2_core.c  (Behan Webster; 1 file, -7/+4)
Replaced the use of a Variable Length Array In Struct (VLAIS) with a C99 compliant equivalent. This patch allocates the appropriate amount of memory in a char array via the SHASH_DESC_ON_STACK macro. The new code can be compiled with both gcc and clang. Signed-off-by: Behan Webster <[email protected]> Reviewed-by: Mark Charlebois <[email protected]> Reviewed-by: Jan-Simon Möller <[email protected]> Acked-by: Herbert Xu <[email protected]>
2014-10-14  crypto: LLVMLinux: Remove VLAIS from crypto/mv_cesa.c  (Behan Webster; 1 file, -23/+18)
Replaced the use of a Variable Length Array In Struct (VLAIS) with a C99 compliant equivalent. This patch allocates the appropriate amount of memory in a char array via the SHASH_DESC_ON_STACK macro. The new code can be compiled with both gcc and clang. Signed-off-by: Behan Webster <[email protected]> Reviewed-by: Mark Charlebois <[email protected]> Reviewed-by: Jan-Simon Möller <[email protected]> Acked-by: Herbert Xu <[email protected]>