path: root/drivers/md/raid1.c
2011-07-28  md/raid1: avoid reading known bad blocks during resync  (NeilBrown, 1 file, -22/+75)
When performing resync/etc, keep the size of the request small enough that it doesn't overlap any known bad blocks. Devices with bad blocks at the start of the request are completely excluded. If there is nowhere to read from due to bad blocks, record a bad block on each target device. Now that we never read from known bad blocks we can allow devices with known bad blocks into a RAID1.
Signed-off-by: NeilBrown <[email protected]>
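A minimal sketch of the clamping idea in kernel-style C (the helper name and signature are hypothetical; the real logic lives in raid1.c's sync path and consults md's bad-block table for the first bad sector in range):

    /*
     * Sketch (hypothetical helper): shrink a resync read so it ends
     * before the first known bad block on this device.  Returns 0 if
     * the device must be excluded because the bad range starts at or
     * before the request.
     */
    static int clamp_resync_read(sector_t sector, int *sectors, sector_t first_bad)
    {
            if (first_bad <= sector)
                    return 0;                       /* bad block at the start: skip device */
            if (sector + *sectors > first_bad)
                    *sectors = first_bad - sector;  /* stop short of the bad range */
            return 1;
    }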
2011-07-28  md/raid1: avoid reading from known bad blocks.  (NeilBrown, 1 file, -29/+179)
Now that we have a bad block list, we should not read from those blocks. There are several main parts to this:
1/ read_balance needs to check for bad blocks, and return not only the chosen device, but also how many good blocks are available there.
2/ fix_read_error needs to avoid trying to read from bad blocks.
3/ read submission must be ready to issue multiple reads to different devices, as different bad blocks on different devices could mean that a single large read cannot be served by any one device, but can still be served by the array. This requires keeping count of the number of outstanding requests per bio. This count is stored in 'bi_phys_segments'.
4/ retrying a read needs to also be ready to submit a smaller read and queue another request for the rest.
This does not yet handle bad blocks when reading to perform resync, recovery, or check. 'md_trim_bio' will also be used for RAID10, so put it in md.c and export it.
Signed-off-by: NeilBrown <[email protected]>
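A rough sketch of that per-bio counting, with hypothetical submit/complete helpers and a spinlock standing in for raid1's conf->device_lock (bio_endio in its 2011-era two-argument form):

    /* Sketch only: helper names are hypothetical. */
    static void submit_partial_read(struct bio *master, struct bio *part, spinlock_t *lock)
    {
            spin_lock_irq(lock);
            master->bi_phys_segments++;             /* one more outstanding sub-read */
            spin_unlock_irq(lock);
            generic_make_request(part);
    }

    static void partial_read_done(struct bio *master, spinlock_t *lock)
    {
            int done;

            spin_lock_irq(lock);
            done = --master->bi_phys_segments == 0;
            spin_unlock_irq(lock);
            if (done)
                    bio_endio(master, 0);           /* every sub-read has completed */
    }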
2011-07-28  md: don't allow arrays to contain devices with bad blocks.  (NeilBrown, 1 file, -0/+7)
As no personality understands bad block lists yet, we must reject any device that is known to contain bad blocks. As the personalities get taught, these tests can be removed. This only applies to raid1/raid5/raid10. For linear/raid0/multipath/faulty the whole concept of bad blocks doesn't mean anything, so there is no point adding the checks.
Signed-off-by: NeilBrown <[email protected]>
Reviewed-by: Namhyung Kim <[email protected]>
2011-07-27  MD: raid1 s/sysfs_notify_dirent/sysfs_notify_dirent_safe  (Jonathan Brassow, 1 file, -1/+1)
If device-mapper creates a RAID1 array that includes devices to be rebuilt, it will deref a NULL pointer when finished, because sysfs is not used by device-mapper instantiated RAID devices.
Signed-off-by: Jonathan Brassow <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
2011-07-27  md/raid1: move rdev->corrected_errors counting  (Namhyung Kim, 1 file, -11/+6)
Read errors are considered corrected if the write-back and re-read cycle finishes without further problems. Thus moving the rdev->corrected_errors counting after the re-reading looks more reasonable IMHO. Also included a couple of whitespace fixes on sync_page_io().
Signed-off-by: Namhyung Kim <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
2011-07-27  md: change management of recovery_disabled.  (NeilBrown, 1 file, -2/+5)
If we hit a read error while recovering a mirror, we want to abort the recovery without necessarily failing the disk, as having a disk with a read error is better than not having an array at all. Currently this is managed with a per-array flag "recovery_disabled" and is only implemented for RAID1. For RAID10 we will need finer-grained control, as we might want to disable recovery for individual devices separately. So push more of the decision making into the personality. 'recovery_disabled' is now a 'cookie' which is copied when the personality wants to disable recovery, and is changed when a device is added to the array, as this is used as a trigger to 'try recovery again'. This will allow RAID10 to get the control that it needs.
Signed-off-by: NeilBrown <[email protected]>
2011-07-27  md: introduce link/unlink_rdev() helpers  (Namhyung Kim, 1 file, -10/+5)
There are places where sysfs links to rdev are handled in the same way. Add helper functions to consolidate them.
Signed-off-by: Namhyung Kim <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
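The duplicated pattern being folded away looks roughly like this - a sketch with approximate signatures, using the mddev_t/mdk_rdev_t typedefs of the time; the real helpers' exact form may differ:

    /* Sketch: signatures approximate. */
    static int link_rdev(mddev_t *mddev, mdk_rdev_t *rdev)
    {
            char nm[20];

            /* Every caller used to build the "rd%d" name by hand. */
            sprintf(nm, "rd%d", rdev->raid_disk);
            return sysfs_create_link(&mddev->kobj, &rdev->kobj, nm);
    }

    static void unlink_rdev(mddev_t *mddev, mdk_rdev_t *rdev)
    {
            char nm[20];

            sprintf(nm, "rd%d", rdev->raid_disk);
            sysfs_remove_link(&mddev->kobj, nm);
    }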
2011-07-27  md/raid: use printk_ratelimited instead of printk_ratelimit  (Christian Dietrich, 1 file, -10/+15)
As per the printk_ratelimit comment, it should not be used.
Signed-off-by: Christian Dietrich <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
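An illustrative fragment of the conversion (not the exact raid1.c hunk; printk_ratelimited keeps per-call-site ratelimit state instead of the single global state behind printk_ratelimit):

    /* Illustrative only; message text is made up. */
    static void report_read_error(mddev_t *mddev)
    {
            /* Before:
             *      if (printk_ratelimit())
             *              printk(KERN_ERR "md/raid1:%s: read error\n", mdname(mddev));
             */
            printk_ratelimited(KERN_ERR "md/raid1:%s: read error\n",
                               mdname(mddev));
    }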
2011-06-08  MD: raid1 changes to allow use by device mapper  (Jonathan Brassow, 1 file, -7/+17)
MD RAID1: changes to allow RAID1 to be used by device-mapper (dm-raid.c). Added the necessary congestion function and conditionalized calls requiring an array 'queue' or 'gendisk'.
Signed-off-by: Jonathan Brassow <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
2011-05-11  md: allow resync_start to be set while an array is active.  (NeilBrown, 1 file, -1/+1)
The sysfs attribute 'resync_start' (known internally as recovery_cp) records where a resync is up to. A value of 0 means the array is not known to be in-sync at all. A value of MaxSector means the array is believed to be fully in-sync. When the size of member devices of an array (RAID1, RAID4/5/6) is increased, the array can be increased to match. This process sets resync_start to the old end-of-device offset so that the new part of the array gets resynced. However with RAID1 (and RAID6) a resync is not technically necessary and may be undesirable. So it would be good if the implied resync after the array is resized could be avoided. So: change 'resync_start' so the value can be changed while the array is active, and as a precaution only allow it to be changed while resync/recovery is 'frozen'. Changing it once resync has started is not going to be useful anyway. This allows the array to be resized without a resync by:
1/ write 'frozen' to 'sync_action'
2/ write new size to 'component_size' (this will set resync_start)
3/ write 'none' to 'resync_start'
4/ write 'idle' to 'sync_action'
Also slightly improve some tests on recovery_cp when resizing raid1/raid5. Now that an arbitrary value can be set, we should be more careful in our tests.
Signed-off-by: NeilBrown <[email protected]>
2011-05-11  md/raid1: improve handling of pages allocated for write-behind.  (NeilBrown, 1 file, -29/+26)
The current handling and freeing of these pages is a bit fragile. We only keep the list of allocated pages in each bio, so we need to still have a valid bio when freeing the pages, which is a bit clumsy. So simply store the allocated page list in the r1_bio so it can easily be found and freed when we are finished with the r1_bio.
Signed-off-by: NeilBrown <[email protected]>
2011-05-11  md/raid1: try fix_sync_read_error before process_checks.  (NeilBrown, 1 file, -14/+5)
If we get a read error during resync/recovery we currently repeat with single-page reads to find out just where the error is, and possibly read each page from a different device. With check/repair we don't currently do that; we just fail. However it is possible that while all devices fail on the large 64K read, we might be able to satisfy each 4K from one device or another. So call fix_sync_read_error before process_checks to maximise the chance of finding good data and writing it out to the devices with read errors. For this to work, we need to set the 'uptodate' flags properly after fix_sync_read_error has succeeded.
Signed-off-by: NeilBrown <[email protected]>
2011-05-11  md/raid1: tidy up new functions: process_checks and fix_sync_read_error.  (NeilBrown, 1 file, -89/+95)
These changes are mostly cosmetic:
1/ change mddev->raid_disks to conf->raid_disks, because the latter is technically safer, though in current practice it doesn't matter in this particular context.
2/ rearrange two for/if loops to have an early 'continue' so the body of the 'if' doesn't need to be indented so much.
Signed-off-by: NeilBrown <[email protected]>
2011-05-11  md/raid1: split out two sub-functions from sync_request_write  (NeilBrown, 1 file, -173/+192)
sync_request_write is too big and too deep. So split out two self-contained bits of functionality into separate functions.
Signed-off-by: NeilBrown <[email protected]>
2011-05-11  md/raid1: clean up read_balance.  (NeilBrown, 1 file, -49/+34)
read_balance has two loops which both look for a 'best' device based on slightly different criteria. This is clumsy and makes it hard to add extra criteria. So replace it all with a single loop that combines everything.
Signed-off-by: NeilBrown <[email protected]>
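The shape of the combined loop, reduced to one criterion (head-position distance) for illustration; the real function weighs several more, and the types are raid1's 2011-era conf_t/mdk_rdev_t:

    /* Sketch: function name hypothetical, criteria heavily simplified. */
    static int pick_mirror(conf_t *conf, sector_t this_sector)
    {
            int best_disk = -1;
            sector_t best_dist = MaxSector;
            int disk;

            for (disk = 0; disk < conf->raid_disks; disk++) {
                    mdk_rdev_t *rdev = conf->mirrors[disk].rdev;
                    sector_t dist;

                    if (!rdev || test_bit(Faulty, &rdev->flags))
                            continue;       /* not a usable mirror */
                    dist = abs(this_sector - conf->mirrors[disk].head_position);
                    if (dist < best_dist) {
                            best_dist = dist;
                            best_disk = disk;
                    }
            }
            return best_disk;
    }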
2011-04-18  md: fix up raid1/raid10 unplugging.  (NeilBrown, 1 file, -14/+10)
We just need to make sure that an unplug event wakes up the md thread, which is exactly what mddev_check_plugged does. Also remove some plug-related code that is no longer needed.
Signed-off-by: NeilBrown <[email protected]>
2011-04-18  md: use new plugging interface for RAID IO.  (NeilBrown, 1 file, -1/+4)
md/raid submits a lot of IO from the various raid threads. So add start/finish plug calls to those threads so that some plugging happens.
Signed-off-by: NeilBrown <[email protected]>
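The on-stack plugging API this refers to is small; a sketch of the shape of the change in a raid thread (the submission loop is elided):

    #include <linux/blkdev.h>

    /* Sketch: function name hypothetical. */
    static void raid_thread_body(void)
    {
            struct blk_plug plug;

            blk_start_plug(&plug);
            /* ... drain the pending-bio list with generic_make_request() ... */
            blk_finish_plug(&plug);
    }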
2011-03-17  block: Require subsystems to explicitly allocate bio_set integrity mempool  (Martin K. Petersen, 1 file, -3/+2)
MD and DM create a new bio_set for every metadevice. Each bio_set has an integrity mempool attached regardless of whether the metadevice is capable of passing integrity metadata. This is a waste of memory. Instead we defer the allocation decision to MD and DM, since we know at metadevice creation time whether integrity passthrough is needed or not. Automatic integrity mempool allocation can then be removed from bioset_create() and we make an explicit integrity allocation for the fs_bio_set.
Signed-off-by: Martin K. Petersen <[email protected]>
Reported-by: Zdenek Kabelac <[email protected]>
Acked-by: Mike Snitzer <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
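After this change a subsystem opts in explicitly; a sketch (pool sizes illustrative, error handling abbreviated):

    #include <linux/bio.h>

    /* Sketch: pool sizes are made-up values. */
    static struct bio_set *create_bs(bool wants_integrity)
    {
            struct bio_set *bs = bioset_create(16, 0);

            if (!bs)
                    return NULL;
            /* Only pay for the integrity mempool when passthrough is needed. */
            if (wants_integrity && bioset_integrity_create(bs, 16)) {
                    bioset_free(bs);
                    return NULL;
            }
            return bs;
    }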
2011-03-10  Merge branch 'for-2.6.39/stack-plug' into for-2.6.39/core  (Jens Axboe, 1 file, -69/+17)
Conflicts:
	block/blk-core.c
	block/blk-flush.c
	drivers/md/raid1.c
	drivers/md/raid10.c
	drivers/md/raid5.c
	fs/nilfs2/btnode.c
	fs/nilfs2/mdt.c
Signed-off-by: Jens Axboe <[email protected]>
2011-03-10  block: remove per-queue plugging  (Jens Axboe, 1 file, -66/+17)
Code has been converted over to the new explicit on-stack plugging, and delay users have been converted to use the new API for that. So let's kill off the old plugging along with aops->sync_page().
Signed-off-by: Jens Axboe <[email protected]>
2011-02-21  md: avoid spinlock problem in blk_throtl_exit  (NeilBrown, 1 file, -2/+4)
blk_throtl_exit assumes that ->queue_lock still exists, so make sure that it does. To do this, we stop redirecting ->queue_lock to conf->device_lock and leave it pointing where it is initialised - __queue_lock. As the blk_plug functions check that ->queue_lock is held, we now take that spinlock explicitly around the plug functions. We don't need the locking, just the warning removal. This is needed for any kernel with the blk_throtl code, which is 2.6.37 and later.
Cc: [email protected]
Signed-off-by: NeilBrown <[email protected]>
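A sketch of what "take that spinlock explicitly around the plug functions" amounts to (the wrapper name is hypothetical; blk_plug_device is the old per-queue plug helper):

    /* Sketch: wrapper name hypothetical. */
    static void md_plug_queue(struct request_queue *q)
    {
            /* Held only to satisfy the plug helpers' lock check. */
            spin_lock_irq(q->queue_lock);
            blk_plug_device(q);
            spin_unlock_irq(q->queue_lock);
    }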
2011-01-14  md-new-param-to_sync_page_io  (Jonathan Brassow, 1 file, -16/+12)
Add new parameter to 'sync_page_io'. The new parameter allows us to distinguish between metadata and data operations. This becomes important later when we add the ability to use separate devices for data and metadata.
Signed-off-by: Jonathan Brassow <[email protected]>
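A caller after the change might look like this (a sketch; the sector and size are illustrative, and the trailing bool is the new metadata/data distinction):

    /* Sketch: caller name hypothetical, sector/size illustrative. */
    static int read_metadata_page(mdk_rdev_t *rdev, struct page *page)
    {
            /* 'true' marks this as a metadata operation, not array data. */
            return sync_page_io(rdev, 0, PAGE_SIZE, page, READ, true);
    }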
2011-01-14  md: Fix single printks with multiple KERN_<level>s  (Joe Perches, 1 file, -2/+3)
Noticed-by: Russell King <[email protected]>
Signed-off-by: Joe Perches <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
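The bug class being fixed, in an illustrative fragment (not the exact raid1.c hunk): a KERN_<level> prefix is just a string, and only the one at the very start of a printk is interpreted, so a second one mid-format ends up in the log as literal text:

    /* Illustrative only; messages are made up. */
    static void report_failure(const char *name)
    {
            /* Broken: one printk, two level prefixes:
             *      printk(KERN_ALERT "md: %s failed\n"
             *             KERN_ALERT "md: recovery aborted\n", name);
             */
            printk(KERN_ALERT "md: %s failed\n", name);
            printk(KERN_ALERT "md: recovery aborted\n");
    }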
2010-11-24  md/raid1: really fix recovery looping when single good device fails.  (NeilBrown, 1 file, -0/+1)
Commit 4044ba58dd15cb01797c4fd034f39ef4a75f7cc3 supposedly fixed a problem where if a raid1 with just one good device gets a read-error during recovery, the recovery would abort and immediately restart in an infinite loop. However it depended on raid1_remove_disk removing the spare device from the array. But that does not happen in this case. So add a test so that in the 'recovery_disabled' case, the device will be removed. This is suitable for any kernel since 2.6.29, which is when recovery_disabled was introduced.
Cc: [email protected]
Reported-by: Sebastian Färber <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
2010-10-29  md: tidy up device searches in read_balance.  (NeilBrown, 1 file, -56/+36)
The code for searching through the device list to read-balance in raid1 is rather clumsy and hard to follow. Try to simplify it a bit. No important functionality change here.
Signed-off-by: NeilBrown <[email protected]>
2010-10-29  md/raid1: fix some typos in comments.  (NeilBrown, 1 file, -3/+3)
Signed-off-by: NeilBrown <[email protected]>
2010-10-29  md/raid1: discard unused variable.  (NeilBrown, 1 file, -1/+0)
This structure field (flushing_bio_list) is never used, so remove it.
Signed-off-by: NeilBrown <[email protected]>
2010-10-28  md: use separate bio pool for each md device.  (NeilBrown, 1 file, -3/+4)
bio_clone and bio_alloc allocate from a common bio pool. If an md device is stacked with other devices that use this pool, or under something like swap which uses the pool, then the multiple calls on the pool can cause deadlocks. So allocate a local bio pool for each md array and use that rather than the common pool. This pool is used both for regular IO and metadata updates.
Signed-off-by: NeilBrown <[email protected]>
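A sketch of the idea (assuming the mddev grows a bio_set pointer, as this series does; pool size illustrative):

    #include <linux/bio.h>

    /* Sketch: assumes an mddev->bio_set field; helper names hypothetical. */
    static int md_create_bioset(mddev_t *mddev)
    {
            /* Array-local pool: never contended with fs_bio_set users. */
            mddev->bio_set = bioset_create(BIO_POOL_SIZE, 0);
            return mddev->bio_set ? 0 : -ENOMEM;
    }

    static struct bio *md_bio_alloc(mddev_t *mddev, int nr_iovecs)
    {
            return bio_alloc_bioset(GFP_NOIO, nr_iovecs, mddev->bio_set);
    }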
2010-10-28  md: change type of first arg to sync_page_io.  (NeilBrown, 1 file, -6/+6)
Currently sync_page_io takes a 'bdev'. Every caller passes 'rdev->bdev'. We will soon want another field out of the rdev in sync_page_io, so just pass the rdev instead of the bdev taken out of it.
Signed-off-by: NeilBrown <[email protected]>
2010-10-28  md/raid1: perform mem allocation before disabling writes during resync.  (NeilBrown, 1 file, -1/+1)
Though this mem alloc is GFP_NOIO and so will not deadlock, it seems better to do the allocation before 'raise_barrier', which stops any IO requests while the resync proceeds. raid10 always uses this order, so it is at least consistent to do the same in raid1.
Signed-off-by: NeilBrown <[email protected]>
2010-10-28  md: use bio_kmalloc rather than bio_alloc when failure is acceptable.  (NeilBrown, 1 file, -1/+1)
bio_alloc can never fail (as it uses a mempool) but can block indefinitely, especially if the caller is holding a reference to a previously allocated bio. So these two places, which both handle failure and hold multiple bios, should not use bio_alloc; they should use bio_kmalloc.
Signed-off-by: NeilBrown <[email protected]>
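A sketch of the substitution (illustrative; unlike bio_alloc, bio_kmalloc allocates with kmalloc and can return NULL instead of blocking on the shared mempool):

    #include <linux/bio.h>

    /* Illustrative fragment only. */
    static struct bio *try_alloc_bio(int nr_pages)
    {
            struct bio *bio = bio_kmalloc(GFP_NOIO, nr_pages);

            if (!bio)
                    return NULL;    /* caller handles failure, e.g. frees held bios */
            return bio;
    }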
2010-10-28  md: Fix possible deadlock with multiple mempool allocations.  (NeilBrown, 1 file, -52/+46)
It is not safe to allocate from a mempool while holding an item previously allocated from that mempool, as that can deadlock when the mempool is close to exhaustion. So don't use a bio list to collect the bios to write to multiple devices in raid1 and raid10. Instead queue each bio as it becomes available, so an unplug will activate all previously allocated bios and a new bio has a chance of being allocated. This means we must set the 'remaining' count to '1' before submitting any requests, then when all are submitted, decrement 'remaining' and possibly handle the write completion at that point.
Reported-by: Torsten Kaiser <[email protected]>
Tested-by: Torsten Kaiser <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
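A sketch of the submit-as-you-allocate pattern (alloc_write_bio() and raid_end_write() are hypothetical stand-ins for the real allocation and completion paths; 'remaining' is r1_bio's existing atomic counter):

    /* Sketch: alloc_write_bio() and raid_end_write() are hypothetical. */
    static void submit_writes(r1bio_t *r1_bio, conf_t *conf)
    {
            int i;

            /* Bias the count so completion can't fire mid-loop. */
            atomic_set(&r1_bio->remaining, 1);
            for (i = 0; i < conf->raid_disks; i++) {
                    struct bio *mbio = alloc_write_bio(r1_bio, i);  /* may block in mempool */

                    if (!mbio)
                            continue;
                    atomic_inc(&r1_bio->remaining);
                    generic_make_request(mbio);     /* submit immediately, don't hoard */
            }
            /* Drop the bias; finish now if every write already completed. */
            if (atomic_dec_and_test(&r1_bio->remaining))
                    raid_end_write(r1_bio);
    }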
2010-10-28  md: use sector_t in bitmap_get_counter  (NeilBrown, 1 file, -2/+2)
bitmap_get_counter returns the number of sectors covered by the counter in a pass-by-reference variable. In some cases this can be very large, so make it a sector_t for safety.
Signed-off-by: NeilBrown <[email protected]>
2010-10-19  Merge branch 'v2.6.36-rc8' into for-2.6.37/barrier  (Jens Axboe, 1 file, -1/+3)
Conflicts:
	block/blk-core.c
	drivers/block/loop.c
	mm/swapfile.c
Signed-off-by: Jens Axboe <[email protected]>
2010-10-07  md/raid1: minor bio initialisation improvements.  (NeilBrown, 1 file, -0/+2)
When performing a resync we pre-allocate some bios and repeatedly use them. This requires us to re-initialise them each time. One field (bi_comp_cpu) and some flags weren't being initialised reliably.
Signed-off-by: NeilBrown <[email protected]>
2010-10-07  md/raid1: avoid overflow in raid1 resync when bitmap is in use.  (NeilBrown, 1 file, -1/+1)
bitmap_start_sync returns - via a pass-by-reference variable - the number of sectors before we need to check with the bitmap again. Since commit ef4256733506f245 this number can be substantially larger; 2^27 is a common value. Unfortunately it is an 'int', and so when raid1.c:sync_request shifts it 9 places to the left it becomes 0. This results in a zero-length read, which the scsi layer justifiably complains about. This patch just removes the shift, so the common case becomes safe with a trivially-correct patch. In the next merge window we will convert this 'int' to a 'sector_t'.
Reported-by: "George Spelvin" <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
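The arithmetic: 2^27 sectors shifted left by 9 (sectors to bytes) needs 37 bits, so a 32-bit int yields 0. This patch simply drops the shift; the general defence, as the later sector_t conversion provides, is to widen before shifting. A sketch:

    /* Illustrative helper, not from the patch. */
    static u64 sectors_to_bytes(int sync_blocks)
    {
            /* Widen first: shifting (1 << 27) left by 9 in 'int' overflows to 0. */
            return (u64)sync_blocks << 9;
    }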
2010-09-10  md: implement REQ_FLUSH/FUA support  (Tejun Heo, 1 file, -117/+59)
This patch converts md to support REQ_FLUSH/FUA instead of the now deprecated REQ_HARDBARRIER. In the core part (md.c), the following changes are notable.
* Unlike REQ_HARDBARRIER, REQ_FLUSH/FUA don't interfere with processing of other requests and thus there is no reason to mark the queue congested while FLUSH/FUA is in progress.
* REQ_FLUSH/FUA failures are final and its users don't need retry logic. Retry logic is removed.
* Preflush needs to be issued to all member devices but FUA writes can be handled the same way as other writes - their processing can be deferred to the request_queue of member devices. md_barrier_request() is renamed to md_flush_request() and simplified accordingly.
For linear, raid0 and multipath, the core changes are enough. raid1, 5 and 10 need the following conversions.
* raid1: Handling of FLUSH/FUA bios can simply be deferred to the request_queues of member devices. Barrier-related logic removed.
* raid5: Queue draining logic dropped. FUA bit is propagated through bio drain and stripe reconstruction such that all the updated parts of the stripe are written out with FUA writes if any of the dirtying writes was FUA. preread_active_stripes handling in make_request() is updated as suggested by Neil Brown.
* raid10: FUA bit needs to be propagated to write clones.
linear, raid0, 1, 5 and 10 tested.
Signed-off-by: Tejun Heo <[email protected]>
Reviewed-by: Neil Brown <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
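For raid1 the conversion boils down to copying the FLUSH/FUA bits from the incoming bio onto each per-device write clone, so the member queues do the real work; a sketch (clone setup elided, helper name hypothetical):

    /* Sketch: helper name hypothetical. */
    static void copy_flush_fua(struct bio *master, struct bio *mbio)
    {
            unsigned long do_flush_fua = master->bi_rw & (REQ_FLUSH | REQ_FUA);

            /* The member device's request_queue handles the actual flush/FUA. */
            mbio->bi_rw = WRITE | do_flush_fua;
    }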
2010-08-18  md raid-1/10 Fix bio_rw bit manipulations again  (NeilBrown, 1 file, -4/+4)
commit 7b6d91daee5cac6402186ff224c3af39d79f4a0e changed the behaviour of a few variables in raid1 and raid10 from flags to bit-sets, but left them as type 'bool' so they did not work. Change them (back) to unsigned long. (historical note: see 1ef04fefe2241087d9db7e9615c3f11b516e36cf)
Signed-off-by: NeilBrown <[email protected]>
Reported-by: Jiri Slaby <[email protected]> and many others
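Why 'bool' broke: flags like REQ_SYNC are now high bits of bi_rw, and storing a masked flag in a bool collapses it to 0 or 1, so OR-ing it back into a clone's bi_rw sets bit 0 rather than the intended flag. A sketch of the difference:

    /* Illustrative fragment only. */
    static void set_sync_flag(struct bio *src, struct bio *clone)
    {
            /* bool do_sync = src->bi_rw & REQ_SYNC;  collapses to 0/1 - broken */
            unsigned long do_sync = src->bi_rw & REQ_SYNC;  /* keeps the real bit */

            clone->bi_rw = WRITE | do_sync;
    }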
2010-08-18  md: provide appropriate return value for spare_active functions.  (NeilBrown, 1 file, -5/+7)
md_check_recovery expects ->spare_active to return 'true' if any spares were activated, but none of them do, so the consequent change in 'degraded' is not notified through sysfs. So count the number of spares activated, subtract it from 'degraded' just once, and return it.
Reported-by: Adrian Drzewiecki <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
2010-08-18  md: Notify sysfs when RAID1/5/10 disk is In_sync.  (Adrian Drzewiecki, 1 file, -0/+1)
When RAID1 is done syncing disks, it'll update the state of synced rdevs to In_sync. But it neglected to notify sysfs that the attribute changed. So any programs that are waiting for an rdev's state to change will not be woken. (raid5/raid10 added by neilb)
Signed-off-by: Adrian Drzewiecki <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
2010-08-07  block: unify flags for struct bio and struct request  (Christoph Hellwig, 1 file, -12/+10)
Remove the current bio flags and reuse the request flags for the bio, too. This makes it easier to trace the type of I/O from the filesystem down to the block driver. There were two flags in the bio that were missing in the requests: BIO_RW_UNPLUG and BIO_RW_AHEAD. Also I've renamed two request flags that had a superfluous RW in them. Note that the flags are in bio.h despite having the REQ_ name - as blkdev.h includes bio.h, that is the only way to go for now.
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
2010-05-22  Merge commit '3ff195b011d7decf501a4d55aeed312731094796' into for-linus  (NeilBrown, 1 file, -0/+1)
Conflicts:
	drivers/md/md.c
- Resolved conflict in md_update_sb
- Added extra 'NULL' arg to new instance of sysfs_get_dirent.
Signed-off-by: NeilBrown <[email protected]>
2010-05-18  md: Fix read balancing in RAID1 and RAID10 on drives > 2TB  (NeilBrown, 1 file, -2/+2)
read_balance uses an "unsigned long" for a sector number, which will get truncated beyond 2TB: on 32-bit hosts an unsigned long holds only 2^32 sectors, and 2^32 sectors x 512 bytes = 2 TiB. This will cause read-balancing to be non-optimal, and can cause data to be read from the 'wrong' branch during a resync. This has a very small chance of returning wrong data.
Reported-by: Jordan Russell <[email protected]>
Cc: [email protected]
Signed-off-by: NeilBrown <[email protected]>
2010-05-18  md/raid1: improve printk messages  (NeilBrown, 1 file, -28/+29)
Make sure the array name is included in a uniform way in all printk messages.
Signed-off-by: NeilBrown <[email protected]>
2010-05-18  md/raid1: delay reads that could overtake behind-writes.  (NeilBrown, 1 file, -7/+18)
When a raid1 array is configured to support write-behind on some devices, it normally only reads from other devices. If all devices are write-behind (because the rest have failed) it is possible for a read request to be serviced before a behind-write request, which would appear as data corruption. So when forced to read from a WriteMostly device, wait for any write-behind to complete, and don't start any more behind-writes.
Signed-off-by: NeilBrown <[email protected]>
2010-05-18  md/raid1: fix confusing 'redirect sector' message.  (NeilBrown, 1 file, -4/+4)
This message seems to suggest the named device is the one on which a read failed; however, it is actually the device that the read will be redirected to. So make the message a little clearer.
Reported-by: Tim Burgess <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
2010-05-18  md: pass mddev to make_request functions rather than request_queue  (NeilBrown, 1 file, -2/+1)
We used to pass the personality make_request function direct to the block layer, so the first argument had to be a queue. But now we have the intermediary md_make_request, so it makes a lot more sense to pass a struct mddev_s. It also makes it possible to have an mddev without its own queue.
Signed-off-by: NeilBrown <[email protected]>
2010-05-18  md: remove ->changed and related code.  (NeilBrown, 1 file, -1/+0)
We set ->changed to 1 and call check_disk_change at the end of md_open so that bd_invalidated would be set and thus partition rescan would happen appropriately. Now that we call revalidate_disk directly, which sets bd_invalidated, that indirection is no longer needed and can be removed.
Signed-off-by: NeilBrown <[email protected]>
2010-05-18  md: move io accounting out of personalities into md_make_request  (NeilBrown, 1 file, -7/+0)
While I generally prefer letting personalities do as much as possible, given that we have a central md_make_request anyway we may as well use it to simplify code. Also this centralises knowledge of ->gendisk, which will help later.
Signed-off-by: NeilBrown <[email protected]>
2010-05-18  drivers/md: Remove unnecessary casts of void *  (H Hartley Sweeten, 1 file, -4/+4)
void pointers do not need to be cast to other pointer types.
Signed-off-by: H Hartley Sweeten <[email protected]>
Signed-off-by: NeilBrown <[email protected]>