|
Try to improve some of the naming to be more consistent, and also
improve the flow of control in request_write() a bit.
Signed-off-by: Kent Overstreet <[email protected]>
|
|
More random refactoring.
Signed-off-by: Kent Overstreet <[email protected]>
|
|
Some refactoring - better to explicitly pass stuff around instead of
having it all in the "big bag of state", struct btree_op. Going to prune
struct btree_op quite a bit over time.
Signed-off-by: Kent Overstreet <[email protected]>
|
|
This was the main point of all this refactoring - now,
btree_insert_check_key() won't fail just because the leaf node happened
to be full.
Signed-off-by: Kent Overstreet <[email protected]>
|
|
We'll often end up with a list of adjacent keys to insert -
because bch_data_insert() may have to fragment the data it writes.
Originally, to simplify things and avoid having to deal with corner
cases, bch_btree_insert() would pass keys from this list one at a time to
btree_insert_recurse() - mainly because the list of keys might span leaf
nodes, so it was easier this way.
With the btree_insert_node() refactoring, it's now a lot easier to just
pass down the whole list and have btree_insert_recurse() iterate over
leaf nodes until it's done.
Signed-off-by: Kent Overstreet <[email protected]>
|
|
The flow of control in the old btree insertion code was rather
backwards; we'd recurse down the btree (in btree_insert_recurse()), and
then if we needed to split the keys to be inserted into the parent node
would be effectively returned up to btree_insert_recurse(), which would
notice there was more work to do and finish the insertion.
The main problem with this was that the full logic for btree insertion
could only be used by calling btree_insert_recurse(); if you'd gotten to a
btree leaf some other way and had a key to insert, you were SOL if it
turned out that node needed to be split.
This inverts the flow of control so btree_insert_node() does _full_
btree insertion, including splitting - and takes a (leaf) btree node to
insert into as a parameter.
This means we can now _correctly_ handle cache misses - for cache
misses, we need to insert a fake "check" key into the btree when we
discover we have a cache miss - while we still have the btree locked.
Previously, if the btree node was full, inserting a cache miss would just
fail.
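A rough sketch of the new shape, with stand-in helper names and signatures (the real bcache functions carry much more state, e.g. struct btree_op):
  #include <linux/types.h>

  struct btree;
  struct keylist;

  /* Stubs for illustration only - these are not the real bcache helpers. */
  bool leaf_has_room(struct btree *b, struct keylist *keys);
  int insert_keys(struct btree *b, struct keylist *keys);
  int split_and_insert_upwards(struct btree *b, struct keylist *keys);

  int btree_insert_node_example(struct btree *b, struct keylist *keys)
  {
          if (leaf_has_room(b, keys))
                  return insert_keys(b, keys);

          /* No room: split b here and push the new node pointers into the parent. */
          return split_and_insert_upwards(b, keys);
  }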
Signed-off-by: Kent Overstreet <[email protected]>
|
|
This is prep work for the reworked btree insertion code.
The way we set b->parent is ugly and hacky... the problem is, when
btree_split() or garbage collection splits or rewrites a btree node, the
parent changes for all its (potentially already cached) children.
I may change this later and add some code to look through the btree node
cache and find all our cached child nodes and change the parent pointer
then...
Signed-off-by: Kent Overstreet <[email protected]>
|
|
Checking i->seq was redundant, because for ages now we have always
initialized the new bset when advancing b->written.
Signed-off-by: Kent Overstreet <[email protected]>
|
|
Originally I got this right... except that the divides didn't use
do_div(), which broke 32-bit kernels. When I went to fix that, I forgot
that the raid stripe size usually isn't a power of two... doh
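A minimal sketch of the resulting pattern, with an illustrative helper name (not the actual bcache code):
  #include <linux/types.h>
  #include <asm/div64.h>

  /* Divide a 64-bit offset by a (not necessarily power-of-two) stripe size.
   * do_div() keeps this working on 32-bit kernels, where a plain '/' on a u64
   * would need __udivdi3, and shift/mask tricks only work for powers of two. */
  static inline u64 offset_to_stripe_example(u64 offset, unsigned int stripe_sectors)
  {
          do_div(offset, stripe_sectors);        /* offset now holds the quotient */
          return offset;
  }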
Signed-off-by: Kent Overstreet <[email protected]>
|
|
Works kind of like the ext4 setting, to panic or remount read-only on
errors.
Signed-off-by: Kent Overstreet <[email protected]>
|
|
The old asynchronous discard code was really a relic from when all the
allocation code was asynchronous - now that allocation runs out of a
dedicated thread there's no point in keeping around all that complicated
machinery.
Signed-off-by: Kent Overstreet <[email protected]>
|
|
bch_keybuf_del() takes a spinlock that can't be taken in interrupt context -
whoops. Fortunately, this code isn't enabled by default (you have to toggle a
sysfs thing).
Signed-off-by: Kent Overstreet <[email protected]>
|
|
|
|
Dirty data accounting wasn't quite right: firstly, we were adding the key we're
inserting after it could have merged with another dirty key already in the
btree; and secondly, we could sometimes pass the wrong offset to
bcache_dev_sectors_dirty_add() for the dirty data we were overwriting - which is
important when tracking dirty data by stripe.
Signed-off-by: Kent Overstreet <[email protected]>
Cc: linux-stable <[email protected]> # >= v3.10
|
|
The max_vfs parameter has a limit of 63 and silently fails (adding 0 VFs) when
it is out of range. This patch adds a warning so that the user knows something
went wrong. It also moves the warning in ixgbe_enable_sriov() to where
max_vfs is checked, so that even an out-of-range value will show the
deprecation warning. Previously, an out-of-range parameter didn't even warn
the user to use the new sysfs interface instead.
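A minimal sketch of such a range check (the constant and message text are illustrative; the ixgbe code differs in detail):
  #include <linux/device.h>

  #define EXAMPLE_MAX_VFS 63

  static unsigned int check_max_vfs_example(struct device *dev, unsigned int max_vfs)
  {
          if (max_vfs > EXAMPLE_MAX_VFS) {
                  /* Warn instead of silently allocating 0 VFs. */
                  dev_warn(dev, "max_vfs %u out of range [0-%u], using 0; the sysfs interface is preferred\n",
                           max_vfs, EXAMPLE_MAX_VFS);
                  return 0;
          }
          return max_vfs;
  }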
Signed-off-by: Jacob Keller <[email protected]>
Tested-by: Phil Schmitt <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
This patch fixes multiple problems in the link modes display in ethtool.
Newer parts have more complicated methods to determine actual link
capabilities. Older parts cannot communicate with their SFP modules.
Finally, not all the available defines are displayed by ethtool. This
updates the link modes to be as accurate as possible depending on what data
is available to the driver at any given time.
Signed-off-by: Carolyn Wyborny <[email protected]>
Tested-by: Jeff Pieper <[email protected]>
Signed-off-by: Jeff Kirsher <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
If virtqueue_get_buf() returns a NULL pointer, avoid a possibly
endless loop by checking for a broken virtqueue.
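A sketch of the pattern, assuming an illustrative wrapper function (not the actual caller):
  #include <linux/virtio.h>

  static void *get_buf_or_give_up(struct virtqueue *vq, unsigned int *len)
  {
          void *buf;

          while (!(buf = virtqueue_get_buf(vq, len))) {
                  /* A broken device will never return a buffer - stop looping. */
                  if (virtqueue_is_broken(vq))
                          return NULL;
                  cpu_relax();
          }
          return buf;
  }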
Signed-off-by: Heinz Graalfs <[email protected]>
Signed-off-by: Rusty Russell <[email protected]>
|
|
SDVO support for minnowboard
* 'gma500-next' of git://github.com/patjak/drm-gma500:
drm/gma500/mrst: Add SDVO to output init
drm/gma500/mrst: Don't blindly guess a mode for LVDS
drm/gma500/mrst: Setup GMBUS for oaktrail/mrst
drm/gma500/mrst: Replace WMs and chickenbits with values from EMGD
drm/gma500/mrst: Add aux register writes to SDVO
drm/gma500/mrst: Properly route oaktrail hdmi hooks
drm/gma500/mrst: Add aux register writes when programming pipe
drm/gma500/mrst: Add SDVO clock calculation
drm/gma500: Add aux device support for gmbus
drm/gma500: Add support for aux pci vdc device
drm/gma500: Add chip specific sdvo masks
drm/gma500: Add Minnowboard to the IS_MRST() macro
|
|
Turn clk_enable() and clk_disable() calls into clk_prepare_enable() and
clk_disable_unprepare() to get ready for the migration to the common
clock framework.
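A sketch of the conversion, with illustrative function names (not the shmobile drm code itself):
  #include <linux/clk.h>

  static int panel_power_on_example(struct clk *clk)
  {
          return clk_prepare_enable(clk);        /* was: clk_enable(clk); */
  }

  static void panel_power_off_example(struct clk *clk)
  {
          clk_disable_unprepare(clk);            /* was: clk_disable(clk); */
  }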
Cc: David Airlie <[email protected]>
Cc: [email protected]
Signed-off-by: Laurent Pinchart <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
|
|
git://people.freedesktop.org/~danvet/drm-intel into drm-next
So here's the Broadwell pull request. From a kernel driver pov there's
two areas with big changes in Broadwell:
- Completely new enumerated interrupt bits. On the plus side it now looks
fairly uniform and sane.
- Completely new pagetable layout.
To ensure minimal impact on existing platforms we've refactored both the
irq and low-level gtt handling code a lot in anticipation of the bdw push.
So now bdw enabling in these areas just plugs in a bunch of vfuncs.
Otherwise it's all fairly harmless adjusting of switch cases and
if-ladders to shovel bdw into the right blocks. So minimized impact on
existing platforms. I've also merged the bdw-stage1 branch into our
-nightly integration branch for the past week to make sure we don't break
anything.
Note that there's still quite a flurry of patches floating around, but
I've figured I'll push this out. I plan to keep the bdw fixes separate
from my usual -fixes stream so that you can reject them easily in case it
still looks like too much churn. Also, bdw is for now hidden behind the
preliminary hw enabling module option. So there's no real pressure to get
follow-up patches all into 3.13.
* tag 'bdw-stage1-2013-11-08-v2' of git://people.freedesktop.org/~danvet/drm-intel: (75 commits)
drm/i915: Mask the vblank interrupt on bdw by default
drm/i915: Wire up cpu fifo underrun reporting support for bdw
drm/i915: Optimize gen8_enable|disable_vblank functions
drm/i915: Wire up pipe CRC support for bdw
drm/i915: Wire up PCH interrupts for bdw
drm/i915: Wire up port A aux channel
drm/i915: Fix up the bdw pipe interrupt enable lists
drm/i915: Optimize pipe irq handling on bdw
drm/i915/bdw: Take render error interrupt out of the mask
drm/i915/bdw: Add BDW PCH check first
drm/i915: Use hsw_crt_get_config on BDW
drm/i915/bdw: Change dp aux timeout to 600us on DDIA
drm/i915/bdw: Enable trickle feed on Broadwell
drm/i915/bdw: WaSingleSubspanDispatchOnAALinesAndPoints
drm/i915/bdw: conservative SBE VUE cache mode
drm/i915/bdw: Limit SDE poly depth FIFO to 2
drm/i915/bdw: Sampler power bypass disable
drm/i915/bdw: Disable centroid pixel perf optimization
drm/i915/bdw: BWGTLB clock gate disable
drm/i915/bdw: Implement edp PSR workarounds
...
|
|
into drm-next
A few more patches for 3.13. The big one here is Hawaii support.
I wanted to get that out sooner, but was sick earlier this week. That
said, it's mostly self-contained, so it shouldn't impact other ASICs.
The rest are just bug fixes and a merge fix.
* 'drm-next-3.13' of git://people.freedesktop.org/~agd5f/linux: (23 commits)
Revert "drm/radeon/audio: don't set speaker allocation on DCE4+"
drm/radeon/audio: improve ACR calculation
drm/radeon/audio: correct ACR table
drm/radeon: fix mismerge of drm-next with 3.12
drm/radeon: add pci ids for hawaii
drm/radeon: fill in radeon_asic_init for hawaii
drm/radeon: modesetting updates for hawaii
drm/radeon: atombios.h updates for hawaii
drm/radeon: update cik_get_csb_buffer for hawaii
drm/radeon: add hawaii dpm support
drm/radeon/cik: add hawaii UVD support
drm/radeon: update firmware loading for hawaii
drm/radeon: update rb setup for hawaii
drm/radeon: add golden register settings for hawaii
drm/radeon: update cik_tiling_mode_table_init() for hawaii
drm/radeon: minor updates to cik.c for hawaii
drm/radeon: update cik_gpu_init() for hawaii
drm/radeon: add Hawaii chip family
drm/radeon: fix-up some float to fixed conversion thinkos
drm/radeon: use HDP_MEM_COHERENCY_FLUSH_CNTL for sdma as well
...
|
|
drm-next
prime support, inactive rework, render nodes
* 'msm-next' of git://people.freedesktop.org/~robclark/linux:
drm/msm/mdp4: page_flip cleanups/fixes
drm/msm: EBUSY status handling in msm_gem_fault()
drm/msm: rework inactive-work
drm/msm: add plane support
drm/msm: resync generated headers
drm/msm: support render nodes
drm/msm: prime support
|
|
Pull Request for 3.13 for FCOE tree.
Signed-off-by: James Bottomley <[email protected]>
|
|
This uses the proper div macro.
Signed-off-by: Dave Airlie <[email protected]>
|
|
If a write block triggers promotion and covers a whole block, we can
avoid a copy.
Introduce dm_{hook,unhook}_bio to simplify saving and restoring bio
fields (bi_private is now used by overwrite). Switch writethrough
support over to using these helpers too.
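A rough sketch of the helper pair (struct and function names are illustrative; dm-cache keeps this state in its per-bio data):
  #include <linux/bio.h>

  struct hook_info_example {
          bio_end_io_t *saved_bi_end_io;
          void *saved_bi_private;
  };

  static void dm_hook_bio_example(struct hook_info_example *h, struct bio *bio,
                                  bio_end_io_t *bi_end_io, void *bi_private)
  {
          /* Save the original completion hook and private data, then override. */
          h->saved_bi_end_io = bio->bi_end_io;
          h->saved_bi_private = bio->bi_private;
          bio->bi_end_io = bi_end_io;
          bio->bi_private = bi_private;
  }

  static void dm_unhook_bio_example(struct hook_info_example *h, struct bio *bio)
  {
          bio->bi_end_io = h->saved_bi_end_io;
          bio->bi_private = h->saved_bi_private;
  }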
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Previously these promotions only got priority if there were unused cache
blocks. Now we give them priority if there are any clean blocks in the
cache.
The fio_soak_test in the device-mapper-test-suite now gives uniform
performance across subvolumes (~16 seconds).
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
There are now two multiqueues for in-cache blocks: a clean one and a
dirty one.
writeback_work comes from the dirty one. Demotions come from the clean
one.
There are two benefits:
- Performance improvement, since demoting a clean block is a noop.
- The cache cleans itself when io load is light.
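Conceptually (field names illustrative; the real policy uses levelled multiqueues rather than plain lists):
  #include <linux/list.h>

  struct mq_example {
          struct list_head cache_clean;        /* demotion candidates - eviction is a noop */
          struct list_head cache_dirty;        /* feeds writeback_work */
  };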
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Heinz Mauelshagen <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Check the commit_requested flag _before_ calling
dm_cache_changed_this_transaction(), to avoid calling it superfluously.
Also, be sure to set last_commit_jiffies _after_ dm_cache_commit()
completes.
Signed-off-by: Heinz Mauelshagen <[email protected]>
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Don't waste time spotting blocks that have been allocated and then freed
in the same transaction.
The extra lookup is expensive, and I don't think it really gives us much.
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
The option DM_LOG_USERSPACE is a sub-option of DM_MIRROR, so place it
right after DM_MIRROR. Doing so fixes various other Device mapper
targets/features to be properly nested under "Device mapper support".
Signed-off-by: Mikulas Patocka <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
This patch allows the removal of an open device to be deferred until
it is closed. (Previously such a removal attempt would fail.)
The deferred remove functionality is enabled by setting the flag
DM_DEFERRED_REMOVE in the ioctl structure on DM_DEV_REMOVE or
DM_REMOVE_ALL ioctl.
On return from DM_DEV_REMOVE, the flag DM_DEFERRED_REMOVE indicates if
the device was removed immediately or flagged to be removed on close -
if the flag is clear, the device was removed.
On return from DM_DEV_STATUS and other ioctls, the flag
DM_DEFERRED_REMOVE is set if the device is scheduled to be removed on
closure.
A device that is scheduled to be deleted can be revived using the
message "@cancel_deferred_remove". This message clears the
DMF_DEFERRED_REMOVE flag so that the device won't be deleted on close.
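A conceptual sketch of the DM_DEV_REMOVE decision (helper names are stubs for illustration; only the flag names come from the description above):
  #include <linux/errno.h>
  #include <linux/types.h>
  #include <uapi/linux/dm-ioctl.h>

  struct mapped_device;

  /* Stubs for illustration only. */
  bool device_still_open(struct mapped_device *md);
  void mark_deferred_remove(struct mapped_device *md);        /* sets DMF_DEFERRED_REMOVE */
  void remove_device_now(struct mapped_device *md);

  static int dev_remove_example(struct mapped_device *md, struct dm_ioctl *param)
  {
          if (device_still_open(md)) {
                  if (!(param->flags & DM_DEFERRED_REMOVE))
                          return -EBUSY;                       /* old behaviour */
                  mark_deferred_remove(md);                    /* remove on last close */
                  return 0;                                    /* flag stays set in the reply */
          }

          remove_device_now(md);
          param->flags &= ~DM_DEFERRED_REMOVE;                 /* removed immediately */
          return 0;
  }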
Signed-off-by: Mikulas Patocka <[email protected]>
Signed-off-by: Alasdair G Kergon <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
If preresume fails it is worth logging an error given that a device is
left suspended due to the failure.
This change was motivated by local preresume error logging that was
added to the cache target ("preresume failed"). Elevating this to
target-agnostic logging, which makes clear where the target-specific error
occurred relative to the DM core's callouts, makes sense.
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Joe Thornber <[email protected]>
|
|
dm-crypt can already activate TCRYPT (TrueCrypt compatible) containers
in LRW or XTS block encryption mode.
TCRYPT containers prior to version 4.1 use CBC mode with some additional
tweaks; this patch adds support for these containers.
This new mode is implemented using a special IV generator named TCW
(TrueCrypt IV with whitening). TCW IV only supports containers that are
encrypted with one cipher (tested with AES, Twofish, Serpent, CAST5 and
TripleDES).
While this mode is legacy and is known to be vulnerable to some
watermarking attacks (e.g. revealing the existence of a hidden disk), it
can still be useful for activating old containers without 3rd party
software or for independent forensic analysis of such containers.
(Both the userspace and kernel code are an independent implementation
based on the format documentation, completely avoiding use of the
original source code.)
The TCW IV generator uses two additional keys: Kw (whitening seed, whose
size is always 16 bytes - TCW_WHITENING_SIZE) and Kiv (IV seed, whose size
is always the IV size of the selected cipher). These keys are concatenated
at the end of the main encryption key provided in the mapping table.
While whitening is completely independent of the IV, it is implemented
inside the IV generator for simplification.
The whitening value is always 16 bytes long and is calculated per sector
from the provided Kw as the initial seed, xored with the sector number and
mixed with the CRC32 algorithm. The resulting value is xored with the
ciphertext sector content.
The IV is calculated from the provided Kiv as the initial IV seed, xored
with the sector number.
The detailed calculation can be found in the TrueCrypt documentation for
versions < 4.1 and will also be described on the dm-crypt site, see:
http://code.google.com/p/cryptsetup/wiki/DMCrypt
Experimental support for activating these containers is already present
in the git devel branch of cryptsetup.
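An illustrative sketch of the whitening step as described above (a simplification, not the dm-crypt implementation; the helper name, the exact CRC32 mixing and how the value is applied are assumptions):
  #include <linux/crc32.h>
  #include <linux/string.h>
  #include <linux/types.h>

  #define TCW_WHITENING_SIZE 16

  static void tcw_whiten_sector_example(const u8 *kw, u64 sector,
                                        u8 *data, unsigned int sector_size)
  {
          u8 w[TCW_WHITENING_SIZE];
          u32 crc;
          unsigned int i;

          /* Start from Kw and xor in the sector number. */
          memcpy(w, kw, TCW_WHITENING_SIZE);
          for (i = 0; i < 8; i++) {
                  w[i] ^= ((u8 *)&sector)[i];
                  w[i + 8] ^= ((u8 *)&sector)[i];
          }

          /* Mix with CRC32 (simplified). */
          crc = crc32(0, w, TCW_WHITENING_SIZE);
          for (i = 0; i < 4; i++)
                  w[i] ^= ((u8 *)&crc)[i];

          /* Xor the whitening value over the ciphertext sector content. */
          for (i = 0; i < sector_size; i++)
                  data[i] ^= w[i % TCW_WHITENING_SIZE];
  }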
Signed-off-by: Milan Broz <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Some encryption modes use extra keys (e.g. loopAES has an IV seed) which
are not used in block cipher initialization but are part of the key string
in the table constructor.
This patch adds an additional field that describes the length of the
extra key(s) and subtracts it before the real encryption key is set.
key_size always includes the size, in bytes, of the key provided in the
mapping table.
key_parts describes how many parts (usually keys) are contained in the
whole key buffer. And key_extra_size contains the size, in bytes, of the
additional key part (this number of bytes must be subtracted because it
is processed by the IV generator).
| K1 | K2 | .... | K64 | Kiv |
|----------- key_size ----------------- |
|                      |-key_extra_size-|
| [64 keys]            | [1 key]        | => key_parts = 65
Example where key string contains main key K, whitening key Kw and IV seed Kiv:
| K       | Kiv     | Kw      |
|--------- key_size ----------|
|         |--key_extra_size---|
| [1 key] | [1 key] | [1 key] | => key_parts = 3
Because key_extra_size is calculated during IV mode setting, key
initialization is moved after this step.
For now, this change has no effect on supported modes (thanks to ilog2
rounding), but it is required by the following patch.
Also, fix a sparse warning in crypt_iv_lmk_one().
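As a worked instance of the TCW layout above (sizes assume AES-256 with a 16-byte IV; purely illustrative arithmetic, not dm-crypt code):
  unsigned int key_size        = 32 + 16 + 16;   /* K + Kiv + Kw, as provided in the table */
  unsigned int key_extra_size  = 16 + 16;        /* Kiv + Kw, consumed by the IV generator */
  unsigned int key_parts       = 3;              /* K, Kiv, Kw                             */
  unsigned int cipher_key_size = 32;             /* key_size - key_extra_size              */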
Signed-off-by: Milan Broz <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
A migration failure should be logged (albeit limited).
Signed-off-by: Heinz Mauelshagen <[email protected]>
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Fix a few cell_defer() calls that weren't passing a bool.
Signed-off-by: Heinz Mauelshagen <[email protected]>
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Return -EINVAL when the specified cache policy is unknown rather than
returning -ENOMEM.
Signed-off-by: Mikulas Patocka <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Rename takeout_queue to concat_queue.
Fix a harmless bug in the mq policy's pop() function. Currently pop()
always succeeds; with upcoming changes this won't be the case.
Fix a typo in the comment above the pre_cache_to_cache prototype.
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
No need to return from a void function.
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Make the quiescing flag an atomic_t and stop protecting it with a spin
lock.
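A sketch of the change, with illustrative names (not the dm-cache fields):
  #include <linux/atomic.h>
  #include <linux/types.h>

  static atomic_t quiescing_example = ATOMIC_INIT(0);

  /* No spinlock needed around a simple flag once it is an atomic_t. */
  static void start_quiescing_example(void) { atomic_set(&quiescing_example, 1); }
  static void stop_quiescing_example(void)  { atomic_set(&quiescing_example, 0); }
  static bool is_quiescing_example(void)    { return atomic_read(&quiescing_example); }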
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
for a shutdown
The code that was trying to do this was inadequate. The postsuspend
method (in ioctl context) needs to wait for the worker thread to
acknowledge the request to quiesce. Otherwise the migration count may
drop to zero temporarily before the worker thread realises we're
quiescing. In this case the target will be taken down, but the worker
thread may have issued a new migration, which will cause an oops when
it completes.
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Cc: [email protected] # 3.9+
|
|
Previously only origin bios could trigger ticks, which meant that if all
the io was destined for the cache, no ticks were generated. If no ticks
are generated then multiple hits, and movements in general, are
attributed to the same tick.
This is only a stopgap fix; we need a better solution.
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
It is safe to use a mutex in mq_residency() at this point since it is
only called from ioctl context. But future-proof mq_residency() by
using might_sleep() to catch new contexts that cannot sleep.
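A sketch of the future-proofing pattern, with illustrative names (not the mq policy code):
  #include <linux/kernel.h>
  #include <linux/mutex.h>

  static DEFINE_MUTEX(residency_lock_example);
  static unsigned long residency_count_example;

  static unsigned long residency_example(void)
  {
          unsigned long r;

          might_sleep();        /* warn immediately if a future caller is in atomic context */
          mutex_lock(&residency_lock_example);
          r = residency_count_example;
          mutex_unlock(&residency_lock_example);
          return r;
  }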
Signed-off-by: Joe Thornber <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Catch a missing regulator-type property in DT and return an error
gracefully instead of dereferencing a NULL pointer and crashing.
Signed-off-by: Laurent Pinchart <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
|
|
The driver is missing the iio_buffer_attach() call. As such it will attempt
to free the buffer twice on removal.
Introduced in commit 9e69c9 ("iio: Add reference counting for buffers").
Signed-off-by: Lars-Peter Clausen <[email protected]>
Reported-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
|
|
Trigger/buffer handling was introduced in changeset cb9b9a82
("staging:iio:hmc5843: Add trigger handling").
Signed-off-by: Peter Meerwald <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
|
|
Signed-off-by: Peter Meerwald <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
|
|
Signed-off-by: Peter Meerwald <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
|
|
The last argument of IIO_ST is shift, not endianness.
Signed-off-by: Peter Meerwald <[email protected]>
Signed-off-by: Jonathan Cameron <[email protected]>
|