To avoid sleep-while-atomic bugs, allocate the qh before calling
dwc2_hcd_urb_enqueue(). The qh pointer can now be used directly instead
of passing ep->hcpriv as a double pointer.
Acked-by: John Youn <[email protected]>
Tested-by: Heiko Stuebner <[email protected]>
Tested-by: Doug Anderson <[email protected]>
Signed-off-by: Mian Yousaf Kaukab <[email protected]>
Signed-off-by: Felipe Balbi <[email protected]>
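
A minimal sketch of the allocate-before-atomic pattern described above;
my_qh and my_urb_enqueue() are hypothetical stand-ins for the driver's QH
type and enqueue path, not the actual dwc2 code:

#include <linux/slab.h>
#include <linux/spinlock.h>

struct my_qh {
	int placeholder;		/* stands in for the real QH fields */
};

static int my_urb_enqueue(spinlock_t *lock, struct my_qh **ep_hcpriv)
{
	struct my_qh *qh;
	unsigned long flags;

	/* Allocate in process context, where sleeping is allowed. */
	qh = kzalloc(sizeof(*qh), GFP_KERNEL);
	if (!qh)
		return -ENOMEM;

	spin_lock_irqsave(lock, flags);
	/*
	 * Only the pre-allocated pointer is used under the lock, so no
	 * allocation (and therefore no sleep) happens in atomic context.
	 */
	*ep_hcpriv = qh;
	spin_unlock_irqrestore(lock, flags);

	return 0;
}
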
The align buffer must be allocated using kmalloc() since IRQs are
disabled. Coherency is handled through dma_map_single(), which can be
used with IRQs disabled.
Reviewed-by: Julius Werner <[email protected]>
Acked-by: John Youn <[email protected]>
Signed-off-by: Gregory Herrero <[email protected]>
Signed-off-by: Felipe Balbi <[email protected]>
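
A hedged sketch of the allocation scheme described above;
example_alloc_align_buf() and its parameters are hypothetical, while
kmalloc(), dma_map_single() and dma_mapping_error() are the real kernel
APIs:

#include <linux/slab.h>
#include <linux/dma-mapping.h>

static void *example_alloc_align_buf(struct device *dev, size_t len,
				     enum dma_data_direction dir,
				     dma_addr_t *dma)
{
	void *buf;

	/* kmalloc() with GFP_ATOMIC is safe while IRQs are disabled. */
	buf = kmalloc(len, GFP_ATOMIC);
	if (!buf)
		return NULL;

	/*
	 * A streaming DMA mapping handles coherency and can also be set
	 * up with IRQs disabled.
	 */
	*dma = dma_map_single(dev, buf, len, dir);
	if (dma_mapping_error(dev, *dma)) {
		kfree(buf);
		return NULL;
	}

	return buf;
}
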
During urb_enqueue, if the urb can't be queued to the endpoint,
the urb is freed without any spinlock protection.
This leads to memory corruption when a concurrent urb_dequeue tries to
free the same urb->hcpriv.
Thus, ensure the whole urb_enqueue is spinlocked.
Acked-by: John Youn <[email protected]>
Signed-off-by: Gregory Herrero <[email protected]>
Signed-off-by: Felipe Balbi <[email protected]>
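
A minimal sketch of the locking fix described above, with a hypothetical
example_hsotg structure and example_try_queue() helper standing in for
the real driver internals:

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/usb.h>

struct example_hsotg {
	spinlock_t lock;
};

static int example_try_queue(struct example_hsotg *hs, struct urb *urb)
{
	return -ENODEV;		/* placeholder for the real queuing work */
}

static int example_urb_enqueue(struct example_hsotg *hs, struct urb *urb)
{
	unsigned long flags;
	int ret;

	spin_lock_irqsave(&hs->lock, flags);

	urb->hcpriv = kzalloc(sizeof(unsigned long), GFP_ATOMIC);
	if (!urb->hcpriv) {
		ret = -ENOMEM;
		goto out;
	}

	ret = example_try_queue(hs, urb);
	if (ret) {
		/*
		 * Free while still holding the lock: a concurrent
		 * urb_dequeue takes the same lock before touching
		 * urb->hcpriv, so it cannot free it at the same time.
		 */
		kfree(urb->hcpriv);
		urb->hcpriv = NULL;
	}

out:
	spin_unlock_irqrestore(&hs->lock, flags);
	return ret;
}
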
The driver's handling of DMA buffers for non-aligned transfers
was kind of nuts. For IN transfers, it left the URB DMA buffer
mapped until the transfer completed, then synced it, copied the
data from the bounce buffer, then synced it again.
Instead of that, just call usb_hcd_unmap_urb_for_dma() to unmap
the buffer before starting the transfer. Then no syncing is
required when doing the copy. This should also allow handling of
other types of mappings besides just dma_map_single() ones.
Also reduce the size of the bounce buffer allocation for Isoc
endpoints to 3K, since that's the largest possible transfer size.
Tested on Raspberry Pi and Altera SOCFPGA.
Signed-off-by: Paul Zimmerman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
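
A sketch of the revised bounce-buffer handling for IN transfers described
above; the example_* helpers and the way the bounce buffer is stored are
hypothetical, while usb_hcd_unmap_urb_for_dma() is the real USB core
helper:

#include <linux/slab.h>
#include <linux/string.h>
#include <linux/usb.h>
#include <linux/usb/hcd.h>

/* Largest possible isochronous transfer, per the commit message above. */
#define EXAMPLE_ISOC_BOUNCE_SIZE 3072

static void *example_start_bounced_in(struct usb_hcd *hcd, struct urb *urb,
				      gfp_t mem_flags)
{
	/*
	 * Drop the core's DMA mapping of the URB buffer before starting
	 * the transfer; the controller DMAs into the driver's own bounce
	 * buffer instead, so no sync of the URB buffer is needed later.
	 */
	usb_hcd_unmap_urb_for_dma(hcd, urb);

	return kmalloc(EXAMPLE_ISOC_BOUNCE_SIZE, mem_flags);
}

static void example_complete_bounced_in(struct urb *urb, void *bounce,
					size_t actual)
{
	/* The URB buffer is unmapped, so a plain copy is sufficient. */
	memcpy(urb->transfer_buffer, bounce, actual);
	urb->actual_length = actual;
	kfree(bounce);
}
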
The DWC2 driver should now be in good enough shape to move out of
staging. I have stress tested it overnight on RPI running mass
storage and Ethernet transfers in parallel, and for several days
on our proprietary PCI-based platform.
Signed-off-by: Paul Zimmerman <[email protected]>
Cc: Felipe Balbi <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>