2011-07-20  Ext4: handle SEEK_HOLE/SEEK_DATA generically  (Josef Bacik, 1 file, +21/-0)
Since Ext4 has its own lseek, we need to make sure it handles SEEK_HOLE/SEEK_DATA. For now, just do the same thing that is done in the generic case; somebody else can come along and make it do fancy things later. Thanks, Signed-off-by: Josef Bacik <[email protected]> Signed-off-by: Al Viro <[email protected]>
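For reference, that generic behaviour amounts to roughly the following (a sketch with a hypothetical helper name, not the exact ext4 diff; error conventions simplified):

    /* Sketch of the generic SEEK_HOLE/SEEK_DATA fallback: the whole
     * file is data, with a virtual hole at i_size. */
    static loff_t generic_seek_hole_data(struct inode *inode, loff_t offset, int whence)
    {
        loff_t isize = i_size_read(inode);

        if (offset >= isize)
            return -ENXIO;      /* past EOF: neither data nor hole */

        switch (whence) {
        case SEEK_HOLE:
            return isize;       /* nearest hole is the implicit one at EOF */
        case SEEK_DATA:
            return offset;      /* everything before EOF counts as data */
        }
        return -EINVAL;
    }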
2011-07-20  Btrfs: implement our own ->llseek  (Josef Bacik, 2 files, +150/-1)
In order to handle SEEK_HOLE/SEEK_DATA we need to implement our own llseek. Basically for the normal SEEK_*'s we will just defer to the generic helper, and for SEEK_HOLE/SEEK_DATA we will use our fiemap helper to figure out the nearest hole or data. Currently this helper doesn't check for delalloc bytes for prealloc space, so for now treat prealloc as data until that is fixed. Thanks, Signed-off-by: Josef Bacik <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-07-20  fs: add SEEK_HOLE and SEEK_DATA flags  (Josef Bacik, 3 files, +54/-4)
This just gets us ready to support the SEEK_HOLE and SEEK_DATA flags. Turns out using fiemap in things like cp causes more problems than it solves, so let's try and give userspace an interface that doesn't suck. We need to match Solaris here, and the definitions are:

  o If whence is SEEK_HOLE, the offset of the start of the next hole greater than or equal to the supplied offset is returned. The definition of a hole is provided near the end of the DESCRIPTION.

  o If whence is SEEK_DATA, the file pointer is set to the start of the next non-hole file region greater than or equal to the supplied offset.

So in the generic case the entire file is data and there is a virtual hole at the end. That means we will just return i_size for SEEK_HOLE and will return the same offset for SEEK_DATA. This is how Solaris does it, so we have to do it the same way. Thanks, Signed-off-by: Josef Bacik <[email protected]> Signed-off-by: Al Viro <[email protected]>
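For reference, the new whence values can be probed from userspace like this (a minimal sketch, assuming a libc that exposes SEEK_HOLE/SEEK_DATA; on the generic implementation it will report a single data region spanning the whole file):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(int argc, char **argv)
    {
        int fd;
        off_t data = 0, hole;

        if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
            return 1;
        /* Walk the file, alternating between data starts and holes;
         * lseek() fails with ENXIO once we pass the last data region. */
        while ((data = lseek(fd, data, SEEK_DATA)) >= 0) {
            hole = lseek(fd, data, SEEK_HOLE);
            printf("data: %lld..%lld\n", (long long)data, (long long)hole);
            data = hole;
        }
        close(fd);
        return 0;
    }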
2011-07-20  reiserfs: make reiserfs default to barrier=flush  (Christoph Hellwig, 1 file, +1/-0)
Change the default reiserfs mount option to barrier=flush. Based on a patch from Jeff Mahoney in the SuSE tree. Signed-off-by: Jeff Mahoney <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-07-20  ext3: make ext3 mount default to barrier=1  (Christoph Hellwig, 1 file, +2/-0)
This patch turns on barriers by default for ext3. mount -o barrier=0 will turn them off. Based on a patch from Chris Mason in the SuSE tree. Signed-off-by: Chris Mason <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Eric Sandeen <[email protected]> Acked-by: Jan Kara <[email protected]> Acked-by: Jeff Mahoney <[email protected]> Acked-by: Ted Ts'o <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-07-20  don't open-code parent_ino() in assorted ->readdir()  (Al Viro, 3 files, +3/-3)
Signed-off-by: Al Viro <[email protected]>
2011-07-20  minix_getattr(): don't bother with ->d_parent  (Al Viro, 1 file, +1/-2)
we can find the superblock more easily, TYVM... Signed-off-by: Al Viro <[email protected]>
2011-07-20  coda_venus_readdir(): use offsetof()  (Al Viro, 1 file, +1/-2)
Signed-off-by: Al Viro <[email protected]>
2011-07-20  arm: don't create useless copies to pass into debugfs_create_dir()  (Al Viro, 2 files, +5/-12)
its first argument is const char * and it's really not modified... Signed-off-by: Al Viro <[email protected]>
2011-07-20  switch assorted clock drivers to debugfs_remove_recursive()  (Al Viro, 6 files, +13/-41)
Signed-off-by: Al Viro <[email protected]>
2011-07-20  fs: seq_file - add event counter to simplify poll() support  (Kay Sievers, 6 files, +20/-43)
Moving the event counter into the dynamically allocated 'struct seq_file' allows poll() support without the need to allocate its own tracking structure. All current users are switched over to use the new counter. Requested-by: Andrew Morton <[email protected]> Acked-by: NeilBrown <[email protected]> Tested-by: Lucas De Marchi <[email protected]> Signed-off-by: Kay Sievers <[email protected]> Signed-off-by: Al Viro <[email protected]>
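On the consumer side the pattern looks roughly like this (a sketch; example_wait and example_event are hypothetical stand-ins for the data source's waitqueue and change counter):

    static DECLARE_WAIT_QUEUE_HEAD(example_wait);   /* hypothetical */
    static atomic_t example_event;                  /* bumped on each change */

    static unsigned int example_poll(struct file *file, poll_table *wait)
    {
        struct seq_file *m = file->private_data;
        unsigned int res = POLLIN | POLLRDNORM;

        poll_wait(file, &example_wait, wait);
        /* Compare the counter cached in seq_file against the source's
         * current count to detect changes since the last read. */
        if (m->poll_event != atomic_read(&example_event)) {
            m->poll_event = atomic_read(&example_event);
            res |= POLLERR | POLLPRI;
        }
        return res;
    }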
2011-07-20  fs: move inode_dio_done to the end_io handler  (Christoph Hellwig, 4 files, +13/-3)
For filesystems that delay their end_io processing we should keep our i_dio_count until the processing is done. Enable this by moving the inode_dio_done call to the end_io handler if one exists. Note that the actual move to the workqueue for ext4 and XFS is not done in this patch yet, but left to the filesystem maintainers. At least for XFS it's not needed yet either as XFS has an internal equivalent to i_dio_count. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-07-20  fs: simplify the blockdev_direct_IO prototype  (Christoph Hellwig, 10 files, +26/-29)
Simple filesystems always pass inode->i_sb->s_bdev as the block device argument, and never need an end_io handler. Let's simplify things for them and for my grepping activity by dropping these arguments. The only thing not falling into that scheme is ext4, which passes an end_io handler without needing special flags (yet), but given how messy the direct I/O code there is, use of __blockdev_direct_IO in one instead of two out of three cases isn't going to make a large difference anyway. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
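After the change a simple filesystem's ->direct_IO reduces to something like this (a sketch; foo_direct_IO and foo_get_block stand in for the filesystem's own names):

    /* Sketch: ->direct_IO for a simple blockdev-backed filesystem.
     * The bdev and end_io arguments are gone from the wrapper. */
    static ssize_t foo_direct_IO(int rw, struct kiocb *iocb,
                                 const struct iovec *iov, loff_t offset,
                                 unsigned long nr_segs)
    {
        struct inode *inode = iocb->ki_filp->f_mapping->host;

        return blockdev_direct_IO(rw, iocb, inode, iov, offset,
                                  nr_segs, foo_get_block);
    }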
2011-07-20  fs: always maintain i_dio_count  (Christoph Hellwig, 3 files, +17/-24)
Maintain i_dio_count for all filesystems, not just those using DIO_LOCKING. This allows these filesystems to also protect truncate against direct I/O requests by using common code. Right now the only non-DIO_LOCKING filesystem that appears to do so is XFS, which uses an opencoded variant of the i_dio_count scheme. Behaviour doesn't change for filesystems never calling inode_dio_wait. For ext4, behaviour changes when using the dioread_nolock option, which previously was missing any protection between truncate and direct I/O reads. For ocfs2, the handcrafted i_dio_count manipulations are replaced with the common code now enabled. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-07-20  fs: move inode_dio_wait calls into ->setattr  (Christoph Hellwig, 12 files, +24/-3)
Let filesystems handle waiting for direct I/O requests themselves instead of doing it beforehand. This means filesystem-specific locks to prevent new dio references from appearing can be held. This is important to allow generalizing i_dio_count to non-DIO_LOCKING filesystems. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
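The resulting shape of a filesystem's ->setattr is roughly (a sketch with hypothetical names; on-disk truncation details elided):

    /* Sketch: wait for direct I/O under the filesystem's own locks
     * before changing the size, instead of relying on the VFS caller. */
    static int foo_setattr(struct dentry *dentry, struct iattr *attr)
    {
        struct inode *inode = dentry->d_inode;
        int error = inode_change_ok(inode, attr);

        if (error)
            return error;
        if (attr->ia_valid & ATTR_SIZE) {
            inode_dio_wait(inode);  /* i_mutex blocks new dio references */
            truncate_setsize(inode, attr->ia_size);
        }
        setattr_copy(inode, attr);
        mark_inode_dirty(inode);
        return 0;
    }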
2011-07-20  rw_semaphore: remove up/down_read_non_owner  (Christoph Hellwig, 2 files, +0/-26)
Now that the last user is gone these can be removed. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-07-20  fs: kill i_alloc_sem  (Christoph Hellwig, 13 files, +78/-53)
i_alloc_sem is a rather special rw_semaphore. It's the last one that may be released by a non-owner, and its write side is always mirrored by real exclusion. Its intended use is to wait for all pending direct I/O requests to finish before starting a truncate. Replace it with a hand-grown construct:

 - exclusion for truncates is already guaranteed by i_mutex, so it can simply fall away

 - the reader side is replaced by an i_dio_count member in struct inode that counts the number of pending direct I/O requests. Truncate can't proceed as long as it's non-zero

 - when i_dio_count reaches zero we wake up a pending truncate using wake_up_bit on a new bit in i_flags

 - new references to i_dio_count can't appear while we are waiting for it to reach zero because the direct I/O count always needs i_mutex (or an equivalent like XFS's i_iolock) for starting a new operation

This scheme is much simpler, and saves the space of a spinlock_t and a struct list_head in struct inode (typically 160 bits on a non-debug 64-bit system). Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
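In outline the construct looks something like this (a sketch; the exact field the wakeup bit lives in and the bit name are illustrative assumptions, not the literal patch):

    /* Sketch: dio reference counting replacing i_alloc_sem. */
    void inode_dio_done(struct inode *inode)
    {
        /* last pending dio: wake a truncate sleeping in inode_dio_wait() */
        if (atomic_dec_and_test(&inode->i_dio_count))
            wake_up_bit(&inode->i_state, __I_DIO_WAKEUP);
    }

    void inode_dio_wait(struct inode *inode)
    {
        wait_queue_head_t *wq = bit_waitqueue(&inode->i_state, __I_DIO_WAKEUP);
        DEFINE_WAIT_BIT(q, &inode->i_state, __I_DIO_WAKEUP);

        while (atomic_read(&inode->i_dio_count)) {
            prepare_to_wait(wq, &q.wait, TASK_UNINTERRUPTIBLE);
            /* re-check after queuing ourselves to avoid a lost wakeup */
            if (atomic_read(&inode->i_dio_count))
                schedule();
            finish_wait(wq, &q.wait);
        }
    }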
2011-07-20  fs: simplify handling of zero sized reads in __blockdev_direct_IO  (Christoph Hellwig, 1 file, +5/-2)
Reject zero sized reads as soon as we know our I/O length, and don't bother with locks or allocations that might have to be cleaned up otherwise. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-07-20  ext4: Rewrite ext4_page_mkwrite() to use generic helpers  (Jan Kara, 1 file, +55/-51)
Rewrite ext4_page_mkwrite() to use __block_page_mkwrite() helper. This removes the need of using i_alloc_sem to avoid races with truncate which seems to be the wrong locking order according to lock ordering documented in mm/rmap.c. Also calling ext4_da_write_begin() as used by the old code seems to be problematic because we can decide to flush delay-allocated blocks which will acquire s_umount semaphore - again creating unpleasant lock dependency if not directly a deadlock. Also add a check for frozen filesystem so that we don't busyloop in page fault when the filesystem is frozen. Signed-off-by: Jan Kara <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-07-20  fat: remove i_alloc_sem abuse  (Christoph Hellwig, 3 files, +7/-2)
Add a new rw_semaphore to protect bmap against truncate. Previously, i_alloc_sem was abused for this, but it's going away in this series. Note that we can't simply use i_mutex, given that the swapon code calls ->bmap under it. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-07-20  VFS: Fixup kerneldoc for generic_permission()  (Tobias Klauser, 1 file, +0/-1)
The flags parameter went away in d749519b444db985e40b897f73ce1898b11f997e Signed-off-by: Tobias Klauser <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-07-20  anonfd: fix missing declaration  (Tomasz Stanislawski, 1 file, +2/-0)
The forward declaration of struct file_operations is added to avoid compilation warnings. Signed-off-by: Tomasz Stanislawski <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-07-20  xfs: make use of new shrinker callout for the inode cache  (Dave Chinner, 3 files, +46/-56)
Convert the inode reclaim shrinker to use the new per-sb shrinker operations. This allows much bigger reclaim batches to be used, and allows the XFS inode cache to be shrunk in proportion with the VFS dentry and inode caches. This avoids the problem of the VFS caches being shrunk significantly before the XFS inode cache is shrunk, resulting in imbalances in the caches during reclaim. Signed-off-by: Dave Chinner <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-07-20  vfs: increase shrinker batch size  (Dave Chinner, 2 files, +7/-0)
Now that the per-sb shrinker is responsible for shrinking 2 or more caches, increase the batch size to keep economies of scale for shrinking each cache. Increase the shrinker batch size to 1024 objects. To allow for a large increase in batch size, add a conditional reschedule to prune_icache_sb() so that we don't hold the LRU spin lock for too long. This mirrors the behaviour of the __shrink_dcache_sb(), and allows us to increase the batch size without needing to worry about problems caused by long lock hold times. To ensure that filesystems using the per-sb shrinker callouts don't cause problems, document that the object freeing method must reschedule appropriately inside loops. Signed-off-by: Dave Chinner <[email protected]> Signed-off-by: Al Viro <[email protected]>
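The shape of the change in prune_icache_sb() is roughly (a sketch; field and lock names are assumptions, loop body elided):

    /* Sketch: conditional reschedule inside the inode LRU walk so a
     * 1024-object batch can't monopolize the spinlock. */
    spin_lock(&lru_lock);
    for (nr_scanned = 0; nr_scanned < nr_to_scan; nr_scanned++) {
        if (list_empty(&sb->s_inode_lru))
            break;
        if (need_resched()) {           /* mirrors __shrink_dcache_sb() */
            spin_unlock(&lru_lock);
            cond_resched();
            spin_lock(&lru_lock);
        }
        /* ... pick the tail inode off the LRU and try to free it ... */
    }
    spin_unlock(&lru_lock);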
2011-07-20  superblock: add filesystem shrinker operations  (Dave Chinner, 3 files, +51/-12)
Now we have a per-superblock shrinker implementation, we can add a filesystem specific callout to it to allow filesystem internal caches to be shrunk by the superblock shrinker. Rather than perpetuate the multipurpose shrinker callback API (i.e. nr_to_scan == 0 meaning "tell me how many objects are freeable in the cache"), two operations will be added. The first will return the number of objects that are freeable, the second is the actual shrinker call. Signed-off-by: Dave Chinner <[email protected]> Signed-off-by: Al Viro <[email protected]>
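In super_operations terms the two callouts look like this (a sketch of the described interface):

    /* Sketch: split "how many could you free?" from "go free N",
     * rather than overloading nr_to_scan == 0 as a counting request. */
    struct super_operations {
        /* ... existing members ... */
        int  (*nr_cached_objects)(struct super_block *sb);
        void (*free_cached_objects)(struct super_block *sb, int nr_to_scan);
    };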
2011-07-20  inode: remove iprune_sem  (Dave Chinner, 1 file, +0/-21)
Now that we have per-sb shrinkers with a lifecycle that is a subset of the superblock lifecycle and can reliably detect a filesystem being unmounted, there is no longer any race condition for the iprune_sem to protect against. Hence we can remove it. Signed-off-by: Dave Chinner <[email protected]> Signed-off-by: Al Viro <[email protected]>
2011-07-20  superblock: introduce per-sb cache shrinker infrastructure  (Dave Chinner, 6 files, +121/-257)
With context based shrinkers, we can implement a per-superblock shrinker that shrinks the caches attached to the superblock. We currently have global shrinkers for the inode and dentry caches that split up into per-superblock operations via a coarse proportioning method that does not batch very well. The global shrinkers also have a dependency - dentries pin inodes - so we have to be very careful about how we register the global shrinkers so that the implicit call order is always correct. With a per-sb shrinker callout, we can encode this dependency directly into the per-sb shrinker, hence avoiding the need for strictly ordering shrinker registrations. We also have no need for any proportioning code, as the shrinker subsystem already provides this functionality across all shrinkers. Allowing the shrinker to operate on a single superblock at a time means that we do fewer superblock list traversals and less locking, and reclaim should batch more effectively. This should result in less CPU overhead for reclaim and potentially faster reclaim of items from each filesystem. Signed-off-by: Dave Chinner <[email protected]> Signed-off-by: Al Viro <[email protected]>
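Schematically, the per-sb shrinker encodes the dcache-before-icache ordering like this (a sketch; counter fields and the per-cache proportioning are assumptions or elided):

    /* Sketch: one shrinker per superblock; "dentries pin inodes" is
     * encoded by always pruning the dcache before the icache. */
    static int prune_super(struct shrinker *shrink, struct shrink_control *sc)
    {
        struct super_block *sb =
            container_of(shrink, struct super_block, s_shrink);

        if (sc->nr_to_scan) {
            /* per-cache proportioning elided for brevity */
            prune_dcache_sb(sb, sc->nr_to_scan);
            prune_icache_sb(sb, sc->nr_to_scan);
        }
        return (sb->s_nr_dentry_unused + sb->s_nr_inodes_unused)
                * sysctl_vfs_cache_pressure / 100;
    }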
2011-07-20  xfs: add size update tracepoint to IO completion  (Dave Chinner, 2 files, +9/-4)
For improving insight into IO completion behaviour. Signed-off-by: Dave Chinner <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Alex Elder <[email protected]>
2011-07-20  xfs: convert AIL cursors to use struct list_head  (Dave Chinner, 2 files, +28/-55)
The list of active AIL cursors uses a roll-your-own linked list with special casing for the AIL push cursor. Simplify this code by replacing the list with standard struct list_head lists, and use a separate list_head to track the active cursors. This allows us to treat the AIL push cursor as a generic cursor rather than as a special case, further simplifying the code. Further, fix the duplicate push cursor initialisation that the special case handling was hiding, and clean up all the comments around the active cursor list handling. Signed-off-by: Dave Chinner <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Alex Elder <[email protected]>
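After the conversion the cursor bookkeeping reduces to standard lists, roughly (a sketch of the idea):

    /* Sketch: AIL cursors as plain list_heads instead of a
     * roll-your-own chain with a special-cased push cursor. */
    struct xfs_ail_cursor {
        struct list_head    list;   /* entry on the active cursor list */
        struct xfs_log_item *item;  /* next item for this traversal */
    };

    struct xfs_ail {
        /* ... */
        struct list_head    xa_ail;     /* the AIL items themselves */
        struct list_head    xa_cursors; /* active cursors, push included */
    };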
2011-07-20  xfs: remove confusing ail cursor wrapper  (Dave Chinner, 1 file, +19/-31)
xfs_trans_ail_cursor_set() doesn't set the cursor to the current log item, it sets it to the next item. There is already a function for doing this - xfs_trans_ail_cursor_next() - and the _set function is simply a two line wrapper. Remove it and open code the setting of the cursor in the two locations that call it to remove the confusion. Signed-off-by: Dave Chinner <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Alex Elder <[email protected]>
2011-07-20  xfs: use a cursor for bulk AIL insertion  (Dave Chinner, 3 files, +118/-28)
Delayed logging can insert tens of thousands of log items into the AIL at the same LSN. When the committing of log commit records occurs, we can get insertions occurring at an LSN that is not at the end of the AIL. If there are thousands of items in the AIL on the tail LSN, each insertion has to walk the AIL to find the correct place to insert the new item into the AIL. This can consume large amounts of CPU time and block other operations from occurring while the traversals are in progress. To avoid this repeated walk, use an AIL cursor to record where we should be inserting the new items into the AIL without having to repeat the walk. The cursor infrastructure already provides this functionality for push walks, so this is a simple extension of existing code. While this will not avoid the initial walk, it will avoid repeating it tens of thousands of times during a single checkpoint commit. This version includes logic improvements from Christoph Hellwig. Signed-off-by: Dave Chinner <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Alex Elder <[email protected]>
2011-07-20  xfs: failure mapping nfs fh to inode should return ESTALE  (J. Bruce Fields, 1 file, +2/-2)
On xfs exports, nfsd is incorrectly returning ENOENT instead of ESTALE on attempts to use a filehandle of a deleted file (spotted with pynfs test PUTFH3). The ENOENT was coming from xfs_iget. (It's tempting to wonder whether we should just map all xfs_iget errors to ESTALE, but I don't believe so--xfs_iget can also return ENOMEM at least, which we wouldn't want mapped to ESTALE.) While we're at it, the other return of ENOENT in xfs_nfs_get_inode() also looks wrong. Signed-off-by: J. Bruce Fields <[email protected]> Signed-off-by: Alex Elder <[email protected]>
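The fix amounts to translating the lookup failure at the export boundary, along these lines (a sketch of the idea, not the literal diff):

    /* Sketch: in xfs_nfs_get_inode(), a lookup miss on a stale file
     * handle must become ESTALE for nfsd; other errors (e.g. ENOMEM)
     * pass through untranslated. */
    error = xfs_iget(mp, NULL, ino, XFS_IGET_UNTRUSTED, 0, &ip);
    if (error)
        return ERR_PTR(error == ENOENT ? -ESTALE : -error);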
2011-07-20  xfs: Remove the second parameter to xfs_sb_count()  (Chandra Seetharaman, 3 files, +7/-12)
Remove the second parameter to xfs_sb_count() since all callers of the function set them. Also, fix the header comment regarding it being called periodically. Signed-off-by: Chandra Seetharaman <[email protected]> Signed-off-by: Alex Elder <[email protected]>
2011-07-20  Merge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds, 5 files, +103/-28)
* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  signal: align __lock_task_sighand() irq disabling and RCU
  softirq,rcu: Inform RCU of irq_exit() activity
  sched: Add irq_{enter,exit}() to scheduler_ipi()
  rcu: protect __rcu_read_unlock() against scheduler-using irq handlers
  rcu: Streamline code produced by __rcu_read_unlock()
  rcu: Fix RCU_BOOST race handling current->rcu_read_unlock_special
  rcu: decrease rcu_report_exp_rnp coupling with scheduler
2011-07-20  Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds, 4 files, +190/-61)
* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  sched: Avoid creating superfluous NUMA domains on non-NUMA systems
  sched: Allow for overlapping sched_domain spans
  sched: Break out cpu_power from the sched_group structure
2011-07-20  time: Fix stupid KERN_WARN compile issue  (John Stultz, 1 file, +1/-1)
Terribly embarrassing. Don't know how I committed this, but it's KERN_WARNING not KERN_WARN. This fixes the following compile error:

  kernel/time/timekeeping.c: In function ‘__timekeeping_inject_sleeptime’:
  kernel/time/timekeeping.c:608: error: ‘KERN_WARN’ undeclared (first use in this function)
  kernel/time/timekeeping.c:608: error: (Each undeclared identifier is reported only once
  kernel/time/timekeeping.c:608: error: for each function it appears in.)
  kernel/time/timekeeping.c:608: error: expected ‘)’ before string constant
  make[2]: *** [kernel/time/timekeeping.o] Error 1

Reported-by: Ingo Molnar <[email protected]> Signed-off-by: John Stultz <[email protected]>
2011-07-20  Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds, 3 files, +10/-2)
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86, reboot: Make Dell Latitude E6320 use reboot=pci
  x86, doc only: Correct real-mode kernel header offset for init_size
  x86: Disable AMD_NUMA for 32bit for now
2011-07-20  mmc: omap_hsmmc: Remove unused iclk  (Balaji T K, 1 file, +0/-10)
After the runtime PM conversion of clock handling, the iclk node is no longer used. However, the fclk node is still used to get the clock rate. Signed-off-by: Balaji T K <[email protected]> Signed-off-by: Chris Ball <[email protected]>
2011-07-20  mmc: omap_hsmmc: add runtime pm support  (Balaji T K, 1 file, +56/-55)
* Add runtime pm support to HSMMC host controller.
* Use runtime pm API to enable/disable HSMMC clock.
* Use runtime autosuspend APIs to enable auto suspend delay.

Based on OMAP HSMMC runtime implementation by Kevin Hilman and Kishore Kadiyala. Signed-off-by: Balaji T K <[email protected]> Signed-off-by: Chris Ball <[email protected]>
2011-07-20  mmc: omap_hsmmc: Remove lazy_disable  (Balaji T K, 1 file, +2/-244)
The lazy_disable framework in OMAP HSMMC manages multiple low power states, and the card is powered off after an inactivity time of 8 seconds. Based on previous discussion on the list, card power (regulator) handling (when to power OFF/ON) should ideally be handled by the core layer. Remove usage of lazy disable to allow the core layer _only_ to handle card power. With the removal of the lazy disable framework, MMC regulators are left ON until MMC_POWER_OFF via set_ios. Signed-off-by: Balaji T K <[email protected]> Signed-off-by: Chris Ball <[email protected]>
2011-07-20  mmc: core: Set non-default Drive Strength via platform hook  (Philip Rakity, 2 files, +40/-29)
Non-default Drive Strength cannot be set automatically. It is a function of the board design and only if there is a specific platform handler can it be set. The platform handler needs to take into account the board design. Pass to the platform code the necessary information. For example: The card and host controller may indicate they support HIGH and LOW drive strength. There is no way to know what should be chosen without specific board knowledge. Setting HIGH may lead to reflections and setting LOW may not suffice. There is no mechanism (like ethernet duplex or speed pulses) to determine what should be done automatically. If no platform handler is defined -- use the default value. Signed-off-by: Philip Rakity <[email protected]> Reviewed-by: Arindam Nath <[email protected]> Signed-off-by: Chris Ball <[email protected]>
2011-07-20  mmc: block: add handling for two parallel block requests in issue_rw_rq  (Per Forlin, 3 files, +84/-20)
Change mmc_blk_issue_rw_rq() to become asynchronous. The execution flow looks like this:

* The mmc-queue calls issue_rw_rq(), which sends the request to the host and returns back to the mmc-queue.
* The mmc-queue calls issue_rw_rq() again with a new request.
* This new request is prepared in issue_rw_rq(), then it waits for the active request to complete before pushing it to the host.
* When the mmc-queue is empty it will call issue_rw_rq() with a NULL req to finish off the active request without starting a new request.

Signed-off-by: Per Forlin <[email protected]> Acked-by: Kyungmin Park <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Reviewed-by: Venkatraman S <[email protected]> Tested-by: Sourav Poddar <[email protected]> Tested-by: Linus Walleij <[email protected]> Signed-off-by: Chris Ball <[email protected]>
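Schematically the new flow looks like this (a sketch; prep_mqrq/finish_mqrq/start_mqrq are hypothetical helper names, not the driver's API):

    /* Sketch: double-buffered issue. 'cur' is prepared while 'prev'
     * is active on the host; a NULL req just drains the active one. */
    static int issue_rw_rq(struct mmc_queue *mq, struct request *req)
    {
        if (req)
            prep_mqrq(mq->mqrq_cur, req);       /* dma_map_sg etc. */

        if (mq->mqrq_prev->req)
            finish_mqrq(mq->mqrq_prev);         /* wait for active request */

        if (req) {
            start_mqrq(mq->mqrq_cur);           /* push prepared request */
            swap(mq->mqrq_cur, mq->mqrq_prev);  /* flip the roles */
        }
        return 0;
    }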
2011-07-20  mmc: queue: add a second mmc queue request member  (Per Forlin, 2 files, +44/-3)
Add an additional mmc queue request instance to make way for two active block requests. One request may be active while the other request is being prepared. Signed-off-by: Per Forlin <[email protected]> Acked-by: Kyungmin Park <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Reviewed-by: Venkatraman S <[email protected]> Tested-by: Sourav Poddar <[email protected]> Tested-by: Linus Walleij <[email protected]> Signed-off-by: Chris Ball <[email protected]>
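The queue ends up carrying two request slots, roughly (a sketch):

    /* Sketch: two slots so one request can be prepared while the
     * other is active; the cur/prev pointers flip between them. */
    struct mmc_queue {
        /* ... */
        struct mmc_queue_req    mqrq[2];
        struct mmc_queue_req    *mqrq_cur;   /* being prepared */
        struct mmc_queue_req    *mqrq_prev;  /* active on the host */
    };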
2011-07-20  mmc: block: move error path in issue_rw_rq to a separate function.  (Per Forlin, 1 file, +131/-89)
Break out code without functional changes. This simplifies the code and makes way for handling two parallel requests. Signed-off-by: Per Forlin <[email protected]> Acked-by: Kyungmin Park <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Reviewed-by: Venkatraman S <[email protected]> Tested-by: Sourav Poddar <[email protected]> Tested-by: Linus Walleij <[email protected]> Signed-off-by: Chris Ball <[email protected]>
2011-07-20  mmc: block: add a block request prepare function  (Per Forlin, 1 file, +114/-104)
Break out code from mmc_blk_issue_rw_rq to create a block request prepare function. This doesn't change any functionality. This helps when handling more than one active block request. Signed-off-by: Per Forlin <[email protected]> Acked-by: Kyungmin Park <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Reviewed-by: Venkatraman S <[email protected]> Tested-by: Sourav Poddar <[email protected]> Tested-by: Linus Walleij <[email protected]> Signed-off-by: Chris Ball <[email protected]>
2011-07-20  mmc: block: add member in mmc queue struct to hold request data  (Per Forlin, 3 files, +141/-128)
The way the request data is organized in the mmc queue struct only allows processing of one request at a time. This patch adds a new struct to hold mmc queue request data such as sg list, request, blk request and bounce buffers, and updates any functions depending on the mmc queue struct. This prepares for using multiple active requests in one mmc queue. Signed-off-by: Per Forlin <[email protected]> Acked-by: Kyungmin Park <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Reviewed-by: Venkatraman S <[email protected]> Tested-by: Sourav Poddar <[email protected]> Tested-by: Linus Walleij <[email protected]> Signed-off-by: Chris Ball <[email protected]>
2011-07-20  mmc: mmc_test: test to measure how sg_len affect performance  (Per Forlin, 1 file, +139/-12)
Add a test that measures how the mmc bandwidth depends on the number of sg elements in the sg list. The transfer size is fixed and the sg length goes from a few elements up to 512. The purpose is to measure the overhead caused by multiple sg elements. Signed-off-by: Per Forlin <[email protected]> Acked-by: Kyungmin Park <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Reviewed-by: Venkatraman S <[email protected]> Tested-by: Sourav Poddar <[email protected]> Tested-by: Linus Walleij <[email protected]> Signed-off-by: Chris Ball <[email protected]>
2011-07-20  mmc: mmc_test: add test for non-blocking transfers  (Per Forlin, 1 file, +310/-8)
Add four tests for read and write performance per different transfer size, 4k to 4M:

* Read using blocking mmc request
* Read using non-blocking mmc request
* Write using blocking mmc request
* Write using non-blocking mmc request

The host driver must support pre_req() and post_req() in order to run the non-blocking test cases. Signed-off-by: Per Forlin <[email protected]> Acked-by: Kyungmin Park <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Reviewed-by: Venkatraman S <[email protected]> Tested-by: Sourav Poddar <[email protected]> Tested-by: Linus Walleij <[email protected]> Signed-off-by: Chris Ball <[email protected]>
2011-07-20  mmc: mmc_test: add debugfs file to list all tests  (Per Forlin, 1 file, +38/-1)
Add a debugfs file "testlist" to print all available tests. Signed-off-by: Per Forlin <[email protected]> Acked-by: Kyungmin Park <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Reviewed-by: Venkatraman S <[email protected]> Tested-by: Sourav Poddar <[email protected]> Tested-by: Linus Walleij <[email protected]> Signed-off-by: Chris Ball <[email protected]>
2011-07-20  mmc: mmci: implement pre_req() and post_req()  (Per Forlin, 2 files, +142/-13)
pre_req() runs dma_map_sg() and prepares the dma descriptor for the next mmc data transfer. post_req() runs dma_unmap_sg(). If pre_req() is not called before mmci_request(), mmci_request() will prepare the cache and dma just like it did before. It is optional to use pre_req() and post_req() for mmci. Signed-off-by: Per Forlin <[email protected]> Tested-by: Linus Walleij <[email protected]> Signed-off-by: Chris Ball <[email protected]>
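The two hooks boil down to mapping and unmapping the sg list around the transfer, roughly (a sketch; the real driver also builds and caches the dma descriptor, and error handling is elided):

    /* Sketch: prepare the sg list for DMA ahead of time, tear it
     * down after completion. */
    static void mmci_pre_request(struct mmc_host *mmc, struct mmc_request *mrq,
                                 bool is_first_req)
    {
        struct mmc_data *data = mrq->data;

        if (data)
            dma_map_sg(mmc_dev(mmc), data->sg, data->sg_len,
                       data->flags & MMC_DATA_READ ?
                       DMA_FROM_DEVICE : DMA_TO_DEVICE);
    }

    static void mmci_post_request(struct mmc_host *mmc, struct mmc_request *mrq,
                                  int err)
    {
        struct mmc_data *data = mrq->data;

        if (data)
            dma_unmap_sg(mmc_dev(mmc), data->sg, data->sg_len,
                         data->flags & MMC_DATA_READ ?
                         DMA_FROM_DEVICE : DMA_TO_DEVICE);
    }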