path: root/include/linux/timekeeper_internal.h
2017-06-20  time: Clean up CLOCK_MONOTONIC_RAW time handling  (John Stultz, 1 file, +2/-2)

Now that we fixed the sub-ns handling for CLOCK_MONOTONIC_RAW, remove the duplicative tk->raw_time.tv_nsec, which can be stored in tk->tkr_raw.xtime_nsec (similarly to how it's handled for monotonic time).

Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Miroslav Lichvar <[email protected]>
Cc: Richard Cochran <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Cc: Stephen Boyd <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Daniel Mentz <[email protected]>
Tested-by: Daniel Mentz <[email protected]>
Signed-off-by: John Stultz <[email protected]>
2017-06-20  time: Fix CLOCK_MONOTONIC_RAW sub-nanosecond accounting  (John Stultz, 1 file, +2/-2)

Due to how the MONOTONIC_RAW accumulation logic was handled, there is the potential for a 1ns discontinuity when we do accumulations. This small discontinuity has for the most part gone unnoticed, but since ARM64 enabled CLOCK_MONOTONIC_RAW in their vDSO clock_gettime implementation, we've seen failures with the inconsistency-check test in kselftest.

This patch addresses the issue by using the same sub-ns accumulation handling that CLOCK_MONOTONIC uses, which avoids the issue for in-kernel users. Since the ARM64 vDSO implementation has its own clock_gettime calculation logic, this patch only reduces the frequency of errors there; failures are still seen. The ARM64 vDSO will need to be updated to include the sub-nanosecond xtime_nsec values in its calculation for this issue to be completely fixed.

Signed-off-by: John Stultz <[email protected]>
Tested-by: Daniel Mentz <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Richard Cochran <[email protected]>
Cc: Stephen Boyd <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: stable <[email protected]> # 4.8+
Cc: Miroslav Lichvar <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
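A minimal sketch of the sub-ns scheme this message describes, assuming the tkr_raw/raw_sec field names that the cleanup patch above settles on: raw time is accumulated in (ns << shift) units in xtime_nsec, so the truncation that produced the 1ns discontinuity never happens.

    /* Sketch only: accumulate CLOCK_MONOTONIC_RAW in shifted sub-ns
     * units, mirroring the tkr_mono.xtime_nsec handling. The tkr_raw
     * and raw_sec names are assumptions taken from the cleanup above.
     */
    static void raw_accumulate(struct timekeeper *tk, u64 cycle_interval)
    {
        /* one second, expressed in shifted (sub-)nanosecond units */
        u64 snsec_per_sec = (u64)NSEC_PER_SEC << tk->tkr_raw.shift;

        /* keep the full (cycles * mult) product: nothing is truncated */
        tk->tkr_raw.xtime_nsec += cycle_interval * tk->tkr_raw.mult;

        while (tk->tkr_raw.xtime_nsec >= snsec_per_sec) {
            tk->tkr_raw.xtime_nsec -= snsec_per_sec;
            tk->raw_sec++;
        }
    }

Readers then recover plain nanoseconds as xtime_nsec >> shift, exactly as the monotonic readout already does.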
2017-06-20  time: Fix clock->read(clock) race around clocksource changes  (John Stultz, 1 file, +0/-1)

In tests which exercise switching of clocksources, a NULL pointer dereference can be observed on ARM64 platforms in the clocksource read() function:

    u64 clocksource_mmio_readl_down(struct clocksource *c)
    {
        return ~(u64)readl_relaxed(to_mmio_clksrc(c)->reg) & c->mask;
    }

This is called from the core timekeeping code via:

    cycle_now = tkr->read(tkr->clock);

tkr->read is the cached tkr->clock->read() function pointer. When the clocksource is changed, tkr->clock and tkr->read are updated sequentially. The code above results in a sequential load operation of tkr->read and tkr->clock as well. If the store to tkr->clock hits between the loads of tkr->read and tkr->clock, then the old read() function is called with the new clock pointer. As a consequence the read() function dereferences a different data structure and the resulting 'reg' pointer can point anywhere, including NULL.

This problem was introduced when the timekeeping code was switched over to use struct tk_read_base. Before that, it was theoretically possible as well when the compiler decided to reload clock in the code sequence:

    now = tk->clock->read(tk->clock);

Add a helper function which avoids the issue by reading tk_read_base->clock once into a local variable clk and then issuing the read function via clk->read(clk). This guarantees that the read() function always gets the proper clocksource pointer handed in.

Since there is now no use for the tkr.read pointer, this patch also removes it, and to address stopping the fast timekeeper during suspend/resume, it introduces a dummy clocksource to use rather than just a dummy read function.

Signed-off-by: John Stultz <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Cc: Richard Cochran <[email protected]>
Cc: Stephen Boyd <[email protected]>
Cc: stable <[email protected]>
Cc: Miroslav Lichvar <[email protected]>
Cc: Daniel Mentz <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
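The helper has this shape (a sketch consistent with the description above; the READ_ONCE() is the detail that matters, since it pins down which clocksource the indirect call sees):

    /* Read the clocksource pointer exactly once, so read() is always
     * invoked on the clocksource it belongs to, even while the
     * timekeeper is concurrently switched to a different clocksource.
     */
    static inline u64 tk_clock_read(const struct tk_read_base *tkr)
    {
        struct clocksource *clock = READ_ONCE(tkr->clock);

        return clock->read(clock);
    }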
2016-12-25  clocksource: Use a plain u64 instead of cycle_t  (Thomas Gleixner, 1 file, +5/-5)

There is no point in having an extra type for extra confusion. u64 is unambiguous.

Conversion was done with the following coccinelle script:

    @rem@
    @@
    -typedef u64 cycle_t;

    @fix@
    typedef cycle_t;
    @@
    -cycle_t
    +u64

Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: John Stultz <[email protected]>
2016-03-02  time: Add history to cross timestamp interface supporting slower devices  (Christopher S. Hall, 1 file, +2/-0)

Another representative use case of time sync and the correlated clocksource (in addition to PTP noted above) is PTP synchronized audio.

In a streaming application, as an example, samples will be sent and/or received by multiple devices with a presentation time that is in terms of the PTP master clock. Synchronizing the audio output on these devices requires correlating the audio clock with the PTP master clock. The more precise this correlation is, the better the audio quality (i.e. out of sync audio sounds bad).

From an application standpoint, to correlate the PTP master clock with the audio device clock, the system clock is used as an intermediate timebase. The transforms such an application would perform are:

    System Clock <-> Audio clock
    System Clock <-> Network Device Clock [<-> PTP Master Clock]

Modern Intel platforms can perform a more accurate cross timestamp in hardware (ART, audio device clock). The audio driver requires ART->system time transforms -- the same as required for the network driver. These platforms offload audio processing (including cross-timestamps) to a DSP which, to ensure uninterrupted audio processing, communicates and responds to the host only once every millisecond. As a result, it can take up to a millisecond for the DSP to receive a request; the request is then processed by the DSP, the audio output hardware is polled for completion, the result is copied into shared memory, and the host is notified. All of these operations occur on a millisecond cadence. This transaction requires about 2 ms, but under heavier workloads it may take up to 4 ms.

Adding a history allows these slow devices the option of providing an ART value outside of the current interval. In this case, the callback provided is an accessor function for the previously obtained counter value. If get_system_device_crosststamp() receives a counter value previous to cycle_last, it consults the history provided as an argument in history_ref and interpolates the realtime and monotonic raw system time using the provided counter value. If there are any clock discontinuities, e.g. from calling settimeofday(), the monotonic raw time is interpolated in the usual way, but the realtime clock time is adjusted by scaling the monotonic raw adjustment.

When an accessor function is used, a history argument *must* be provided. The history is initialized using ktime_get_snapshot() and must be called before the counter values are read.

Cc: Prarit Bhargava <[email protected]>
Cc: Richard Cochran <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Reviewed-by: Thomas Gleixner <[email protected]>
Signed-off-by: Christopher S. Hall <[email protected]>
[jstultz: Fixed up cycles_t/cycle_t type confusion]
Signed-off-by: John Stultz <[email protected]>
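A sketch of the history-assisted flow, under stated assumptions: ktime_get_snapshot() and get_system_device_crosststamp() are the interfaces named above, while struct dsp and its fields are hypothetical driver state standing in for a real audio driver.

    #include <linux/timekeeping.h>

    /* Hypothetical driver state holding the DSP's last reported pair */
    struct dsp {
        ktime_t last_device_time;              /* audio clock timestamp */
        struct system_counterval_t last_art;   /* matching ART counter */
    };

    /* Accessor callback: hands back a previously obtained counter value */
    static int read_dsp_crosststamp(ktime_t *device_time,
                                    struct system_counterval_t *system_counterval,
                                    void *ctx)
    {
        struct dsp *dsp = ctx;

        *device_time = dsp->last_device_time;
        *system_counterval = dsp->last_art;
        return 0;
    }

    static int correlate_audio_clock(struct dsp *dsp,
                                     struct system_device_crosststamp *xtstamp)
    {
        struct system_time_snapshot history;

        /* Snapshot *before* the DSP round trip, so its (up to ~4 ms
         * old) counter value still falls inside the recorded history.
         */
        ktime_get_snapshot(&history);

        /* ... issue the request and wait for the DSP to respond ... */

        return get_system_device_crosststamp(read_dsp_crosststamp, dsp,
                                             &history, xtstamp);
    }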
2015-06-12  time: Prevent early expiry of hrtimers[CLOCK_REALTIME] at the leap second edge  (John Stultz, 1 file, +2/-0)

Currently, leapsecond adjustments are done at tick time. As a result, the leapsecond was applied at the first timer tick *after* the leapsecond (~1-10ms late depending on HZ), rather than exactly on the second edge.

This was in part historical from back when we were always tick based, but correcting it since has been avoided because it adds extra conditional checks in the gettime fastpath, which has performance overhead.

However, it was recently pointed out that ABS_TIME CLOCK_REALTIME timers set for right after the leapsecond could fire a second early, since some timers may be expired before we trigger the timekeeping timer, which then applies the leapsecond.

This isn't quite as bad as it sounds, since behaviorally it is similar to what is possible with ntpd making leapsecond adjustments without using the kernel discipline, where due to latencies, timers may fire just prior to the settimeofday call. (Also, one should note that all applications using CLOCK_REALTIME timers should always be careful, since they are prone to quirks from settimeofday() disturbances.)

However, the purpose of having the kernel do the leap adjustment is to avoid such latencies, so I think this is worth fixing.

So in order to properly keep those timers from firing a second early, this patch modifies the ntp and timekeeping logic so that we keep enough state that the update_base_offsets_now accessor, which provides the hrtimer core the current time, can check and apply the leapsecond adjustment on the second edge. This prevents the hrtimer core from expiring timers too early.

This patch does not modify any other time read path, so no additional overhead is incurred. However, this also means that the leap-second continues to be applied at tick time for all other read-paths.

Apologies to Richard Cochran, who pushed for similar changes years ago, which I resisted due to the concerns about the performance overhead.

While I suspect this isn't extremely critical, folks who care about strict leap-second correctness will likely want to watch this. Potentially a -stable candidate eventually.

Originally-suggested-by: Richard Cochran <[email protected]>
Reported-by: Daniel Bristot de Oliveira <[email protected]>
Reported-by: Prarit Bhargava <[email protected]>
Signed-off-by: John Stultz <[email protected]>
Cc: Richard Cochran <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Jiri Bohac <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Ingo Molnar <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
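The check lands in the hrtimer base-offset accessor. A sketch of its shape, assuming a precomputed tk->next_leap_ktime field holding the upcoming leap edge (the exact field name is an assumption):

    /* In the hrtimer base-offset accessor: if the just-read monotonic
     * base has crossed the precomputed leap edge, hand out the
     * post-leap realtime offset (one second less, for an inserted leap
     * second) so CLOCK_REALTIME timers cannot expire a second early.
     */
    if (unlikely(base >= tk->next_leap_ktime))
        *offs_real = ktime_sub(tk->offs_real, ktime_set(1, 0));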
2015-05-22  time: Rework debugging variables so they aren't global  (John Stultz, 1 file, +15/-0)

Ingo suggested that the timekeeping debugging variables recently added should not be global, and should be tied to the timekeeper's read_base. Thus this patch implements that suggestion.

This version is different from the earlier versions as it keeps the variables in the timekeeper structure rather than in the tkr.

Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Cc: Richard Cochran <[email protected]>
Signed-off-by: John Stultz <[email protected]>
2015-04-22  hrtimer: Make offset update smarter  (Thomas Gleixner, 1 file, +2/-0)

On every tick/hrtimer interrupt we update the offset variables of the clock bases. That's silly because these offsets change very seldom.

Add a sequence counter to the time keeping code which keeps track of the offset updates (clock_was_set()). Have a sequence cache in the hrtimer cpu bases to evaluate whether the offsets must be updated or not. This allows us later to avoid pointless cacheline pollution.

Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Preeti U Murthy <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: Viresh Kumar <[email protected]>
Cc: Marcelo Tosatti <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: John Stultz <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
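In sketch form (the clock_was_set_seq field names on both sides are assumptions consistent with the description): the hrtimer cpu base refreshes its offset copies only when the timekeeper's sequence has moved.

    /* Refresh the per-cpu offset copies only when clock_was_set()
     * bumped the timekeeper's sequence; otherwise the cached offsets
     * are current and their cachelines stay untouched.
     */
    if (cpu_base->clock_was_set_seq != tk->clock_was_set_seq) {
        cpu_base->clock_was_set_seq = tk->clock_was_set_seq;
        /* re-read the offs_real / offs_boot / offs_tai copies here */
    }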
2015-03-27  time: Add timekeeper::tkr_raw  (Peter Zijlstra, 1 file, +2/-2)

Introduce tkr_raw and make use of it:

    base_raw -> tkr_raw.base
    clock->{mult,shift} -> tkr_raw.{mult,shift}

Kill timekeeping_get_ns_raw() in favour of timekeeping_get_ns(&tkr_raw); this removes all mono_raw special casing.

Duplicate the updates to tkr_mono.cycle_last into tkr_raw.cycle_last; both need the same value.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: John Stultz <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2015-03-27  time: Rename timekeeper::tkr to timekeeper::tkr_mono  (Peter Zijlstra, 1 file, +6/-6)

In preparation of adding another tkr field, rename this one to tkr_mono. Also rename tk_read_base::base_mono to tk_read_base::base, since the structure is not specific to CLOCK_MONOTONIC and the mono name got added to the tk_read_base instance. Lots of trivial churn.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: John Stultz <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
2014-10-29  timekeeping: Provide fast accessor to the seconds part of CLOCK_MONOTONIC  (Heena Sirwani, 1 file, +2/-0)

This is the counterpart to get_seconds() based on CLOCK_MONOTONIC. The use case for this interface is kernel internal coarse grained timestamps which require neither the nanoseconds fraction of current time nor the CLOCK_REALTIME properties. Such timestamps can currently only be retrieved by calling ktime_get_ts64() and using the tv_sec field of the returned timespec64. That's inefficient as it involves the read of the clocksource, math operations and must be protected by the timekeeper sequence counter.

To avoid the sequence counter protection we restrict the return value to unsigned 32bit on 32bit machines. This covers ~136 years of uptime and therefore an overflow is not expected to hit anytime soon.

To avoid math in the function we calculate the current seconds portion of CLOCK_MONOTONIC when the timekeeper gets updated in tk_update_ktime_data(), similar to the CLOCK_REALTIME counterpart xtime_sec.

[ tglx: Massaged changelog, simplified and commented the update function, added docbook comment ]

Signed-off-by: Heena Sirwani <[email protected]>
Reviewed-by: Arnd Bergman <[email protected]>
Cc: John Stultz <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/da0b63f4bdf3478909f92becb35861197da3a905.1414578445.git.heenasirwani@gmail.com
Signed-off-by: Thomas Gleixner <[email protected]>
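A sketch of what the accessor reduces to once tk->ktime_sec is maintained in tk_update_ktime_data() (the function body and exact signature are assumptions consistent with the message): a single word-sized load, so no seqcount loop and no clocksource read.

    /* Coarse seconds part of CLOCK_MONOTONIC. tk->ktime_sec is an
     * unsigned long: 32 bits on 32-bit machines (~136 years of
     * uptime), so the load is naturally atomic and needs no seqcount
     * protection.
     */
    time64_t ktime_get_seconds(void)
    {
        struct timekeeper *tk = &tk_core.timekeeper;

        return tk->ktime_sec;
    }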
2014-07-30  timekeeping: Fixup typo in update_vsyscall_old definition  (John Stultz, 1 file, +1/-1)

In commit 4a0e637738f0 ("clocksource: Get rid of cycle_last"), currently in the -tip tree, there was a small typo where cycles_t was used instead of cycle_t. This broke ppc64 builds.

Fix this by using the proper cycle_t type for this usage, in both the definition and the ia64 implementation.

Now, having both cycle_t and cycles_t types seems like a very bad idea just asking for these sorts of issues. But that will be a cleanup for another day.

Reported-by: Stephen Rothwell <[email protected]>
Signed-off-by: John Stultz <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
2014-07-23  timekeeping: Use cached ntp_tick_length when accumulating error  (John Stultz, 1 file, +9/-0)

By caching the ntp_tick_length() when we correct the frequency error, and then using that cached value to accumulate error, we avoid large initial errors when the tick length is changed. This makes convergence happen much faster in the simulator, since the initial error doesn't have to be slowly whittled away.

This initially seems like an accounting error, but Miroslav pointed out that ntp_tick_length() can change mid-tick, so when we apply it in the error accumulation, we are applying any recent change to the entire tick. This approach chooses to apply changes in the ntp_tick_length() only to the next tick, which allows us to calculate the freq correction before using the new tick length, which avoids accumulating error.

Credit to Miroslav for pointing this out and providing the original patch this functionality has been pulled out from, along with the rationale.

Cc: Miroslav Lichvar <[email protected]>
Cc: Richard Cochran <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Reported-by: Miroslav Lichvar <[email protected]>
Signed-off-by: John Stultz <[email protected]>
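The pattern, sketched (tk->ntp_tick as the cache field is an assumption consistent with the message): snapshot the live NTP tick length once at frequency-correction time and accumulate phase error against that snapshot, so a mid-tick change only takes effect from the next tick on.

    /* At frequency-correction time: snapshot the live value once. */
    tk->ntp_tick = ntp_tick_length();

    /* At accumulation time: measure the error against the snapshot,
     * not against a value that may have changed mid-tick.
     */
    tk->ntp_error += tk->ntp_tick << shift;
    tk->ntp_error -= (tk->xtime_interval + tk->xtime_remainder) <<
                                    (tk->ntp_error_shift + shift);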
2014-07-23  timekeeping: Rework frequency adjustments to work better w/ nohz  (John Stultz, 1 file, +1/-0)

The existing timekeeping_adjust logic has always been complicated to understand. Further, since it was developed prior to NOHZ becoming common, it's not surprising it performs poorly when NOHZ is enabled.

Since Miroslav pointed out the problematic nature of the existing code in the NOHZ case, I've tried to refactor the code to perform better.

The problem with the previous approach was that it tried to adjust for the total cumulative error using a scaled dampening factor. This resulted in large errors being corrected slowly, while small errors were corrected quickly. With NOHZ the timekeeping code doesn't know how far out the next tick will be, so this results in bad over-correction to small errors, and insufficient correction to large errors.

Inspired by Miroslav's patch, I've refactored the code to try to address the correction in two steps:

1) Check the future freq error for the next tick, and if the frequency error is large, try to make sure we correct it so it doesn't cause much accumulated error.

2) Then make a small single unit adjustment to correct any cumulative error that has collected over time.

This method performs fairly well in the simulator Miroslav created.

Major credit to Miroslav for pointing out the issue, providing the original patch to resolve this, a simulator for testing, as well as helping debug and resolve issues in my implementation so that it performed closer to his original implementation.

Cc: Miroslav Lichvar <[email protected]>
Cc: Richard Cochran <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Reported-by: Miroslav Lichvar <[email protected]>
Signed-off-by: John Stultz <[email protected]>
2014-07-23  timekeeping: Create struct tk_read_base and use it in struct timekeeper  (Thomas Gleixner, 1 file, +55/-48)

The members of the new struct are the required ones for the new NMI safe accessor to clock monotonic. In order to reuse the existing timekeeping code and to make the update of the fast NMI safe timekeepers a simple memcpy, use the struct for the timekeeper as well and convert all users.

Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Signed-off-by: John Stultz <[email protected]>
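For orientation, a sketch of the struct as introduced (the cycle-typed fields originally used cycle_t rather than u64, base_mono is later renamed to base, and the cached read pointer is removed again by the 2017 race fix above):

    /* Sketch of struct tk_read_base as introduced in 2014. */
    struct tk_read_base {
        struct clocksource  *clock;       /* current clocksource */
        u64                 (*read)(struct clocksource *cs);
        u64                 mask;         /* for 2's-complement cycle subtraction */
        u64                 cycle_last;   /* cycle value at last update */
        u32                 mult;         /* cycle -> ns multiplier (NTP adjusted) */
        u32                 shift;        /* cycle -> ns shift */
        u64                 xtime_nsec;   /* shifted (sub-)nanoseconds */
        ktime_t             base_mono;    /* ktime_t base for fast readout */
    };

Keeping all of this in one flat struct is what lets the fast NMI safe timekeepers be updated with a single memcpy, as the message notes.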
2014-07-23  timekeeping: Restructure the timekeeper some more  (Thomas Gleixner, 1 file, +4/-0)

Access to time requires touching two cachelines at minimum:

1) The timekeeper data structure

2) The clocksource data structure

The access to the clocksource data structure can be avoided as almost all clocksource implementations ignore the argument to the read callback, which is a pointer to the clocksource. But the core needs to touch it to access the members @read and @mask. So we are better off by copying the @read function pointer and the @mask from the clocksource to the core data structure itself.

For the most used ktime_get() access, all required data including the @read and @mask copies fits together with the sequence counter into a single 64 byte cacheline.

For the other time access functions we touch in the current code three cache lines in the worst case. But with the clocksource data copies we can reduce that to two adjacent cachelines, which is more efficient than disjunct cache lines.

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: John Stultz <[email protected]>
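A sketch of the hot path this layout serves (a simplified cycle-to-nanosecond readout using the cached @read and @mask; the 2017 fix above later replaces the cached read pointer with a tk_clock_read() helper):

    /* Convert the cycle delta since the last update into nanoseconds,
     * touching only fields that now live in the core's own cachelines.
     */
    static inline u64 get_ns_sketch(struct tk_read_base *tkr)
    {
        u64 cycle_now = tkr->read(tkr->clock);  /* cached function pointer */
        u64 delta = (cycle_now - tkr->cycle_last) & tkr->mask;

        return (delta * tkr->mult + tkr->xtime_nsec) >> tkr->shift;
    }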
2014-07-23  clocksource: Get rid of cycle_last  (Thomas Gleixner, 1 file, +4/-3)

cycle_last was added to the clocksource to support the TSC validation. We moved that to the core code, so we can get rid of the extra copy.

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: John Stultz <[email protected]>
2014-07-23  timekeeping: Provide ktime_get_raw()  (Thomas Gleixner, 1 file, +3/-0)

Provide a ktime_t based interface for raw monotonic time.

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: John Stultz <[email protected]>
2014-07-23  timekeeping: Remove timekeeper.total_sleep_time  (Thomas Gleixner, 1 file, +2/-4)

No more users. Remove it.

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: John Stultz <[email protected]>
2014-07-23  timekeeping: Provide internal ktime_t based data  (Thomas Gleixner, 1 file, +3/-0)

The ktime_t based interfaces are used a lot in performance critical code paths. Add ktime_t based data so the interfaces don't have to convert from the xtime/timespec based data.

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: John Stultz <[email protected]>
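A sketch of what this buys the readout path: with a pre-computed ktime_t base in the timekeeper, ktime_get() becomes base plus fresh nanoseconds, with no timespec conversion. (The tkr_mono name follows the 2015 rename above; at this point the field was still the timekeeper's base_mono.)

    ktime_t ktime_get(void)
    {
        struct timekeeper *tk = &tk_core.timekeeper;
        unsigned int seq;
        ktime_t base;
        u64 nsecs;

        do {
            seq = read_seqcount_begin(&tk_core.seq);
            base = tk->tkr_mono.base;               /* pre-computed ktime_t */
            nsecs = timekeeping_get_ns(&tk->tkr_mono);
        } while (read_seqcount_retry(&tk_core.seq, seq));

        return ktime_add_ns(base, nsecs);
    }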
2014-07-23  timekeeping: Cache optimize struct timekeeper  (Thomas Gleixner, 1 file, +46/-38)

struct timekeeper is quite badly sorted for the hot readout path. Most time access functions need to load two cache lines. Rearrange it so ktime_get() and getnstimeofday() are happy with a single cache line.

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: John Stultz <[email protected]>
2014-07-23  timekeeper: Move tk_xtime to core code  (Thomas Gleixner, 1 file, +0/-18)

No users outside of the core.

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: John Stultz <[email protected]>
2014-07-23  timekeeping: Convert timekeeping core to use timespec64s  (John Stultz, 1 file, +5/-5)

Convert the core timekeeping logic to use timespec64s. This moves the 2038 issues out of the core logic and into all of the accessor functions.

Future changes will need to push the timespec64s out to all timekeeping users, but that can be done interface by interface.

Signed-off-by: John Stultz <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: John Stultz <[email protected]>
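The 2038-safe type in question, as defined in linux/time64.h; on 64-bit builds it has the same layout as struct timespec (the header simply aliases the two), so the conversion only changes 32-bit behavior:

    struct timespec64 {
        time64_t    tv_sec;     /* seconds: 64 bit even on 32-bit archs */
        long        tv_nsec;    /* nanoseconds */
    };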
2013-04-04  timekeeping: Store cycle_last value in timekeeper struct as well  (Thomas Gleixner, 1 file, +2/-0)

For implementing a shadow timekeeper and a split calculation/update region we need to store the cycle_last value in the timekeeper and update the value in the clocksource struct only in the update region. Add the extra storage to the timekeeper.

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: John Stultz <[email protected]>
2013-03-22  timekeeping: Move lock out of timekeeper struct  (Thomas Gleixner, 1 file, +0/-2)

Make the lock a separate entity. Preparatory patch for the shadow timekeeper structure.

Signed-off-by: Thomas Gleixner <[email protected]>
[Merged with CLOCK_TAI changes]
Signed-off-by: John Stultz <[email protected]>
2013-03-22  hrtimer: Add hrtimer support for CLOCK_TAI  (John Stultz, 1 file, +2/-0)

Add hrtimer support for CLOCK_TAI, as well as posix timer interfaces.

Signed-off-by: John Stultz <[email protected]>
2013-03-22  timekeeping: Move TAI management into timekeeping core from ntp  (John Stultz, 1 file, +3/-0)

Currently NTP manages the TAI offset. Since there are plans for a CLOCK_TAI clockid, push the TAI management into the timekeeping core.

CC: Thomas Gleixner <[email protected]>
CC: Eric Dumazet <[email protected]>
CC: Richard Cochran <[email protected]>
Signed-off-by: John Stultz <[email protected]>
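A sketch of the core-side state this introduces: a plain seconds offset (TAI minus UTC) plus, together with the CLOCK_TAI hrtimer support above, a pre-built ktime_t offset for the timer paths. The helper name matches the description; treat the body as an illustration.

    /* Update the TAI state in one place: the raw second offset for
     * timekeeping, and the ktime_t offset consumed by the hrtimer core.
     */
    static void __timekeeping_set_tai_offset(struct timekeeper *tk,
                                             s32 tai_offset)
    {
        tk->tai_offset = tai_offset;
        tk->offs_tai = ktime_add(tk->offs_real, ktime_set(tai_offset, 0));
    }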
2012-09-24  time: Introduce new GENERIC_TIME_VSYSCALL  (John Stultz, 1 file, +29/-7)

Now that we moved everyone over to GENERIC_TIME_VSYSCALL_OLD, introduce the new declaration and config option for the new update_vsyscall method.

Cc: Tony Luck <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Paul Turner <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Richard Cochran <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: John Stultz <[email protected]>
2012-09-24  time: Convert CONFIG_GENERIC_TIME_VSYSCALL to CONFIG_GENERIC_TIME_VSYSCALL_OLD  (John Stultz, 1 file, +4/-3)

To help migrate architectures over to the new update_vsyscall method, redefine CONFIG_GENERIC_TIME_VSYSCALL as CONFIG_GENERIC_TIME_VSYSCALL_OLD.

Cc: Tony Luck <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Paul Turner <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Richard Cochran <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: John Stultz <[email protected]>
2012-09-24  time: Move update_vsyscall definitions to timekeeper_internal.h  (John Stultz, 1 file, +17/-0)

Since users will need to include timekeeper_internal.h, move the update_vsyscall definitions to timekeeper_internal.h.

Cc: Tony Luck <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Paul Turner <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Richard Cochran <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: John Stultz <[email protected]>
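The shape of what moves into the header, sketched: the new-style prototype takes the timekeeper directly, while the _OLD variant keeps the legacy argument list (whose exact parameters at this point are an assumption; the 2014 entries above later add a cycle_last argument to it).

    #ifdef CONFIG_GENERIC_TIME_VSYSCALL
    extern void update_vsyscall(struct timekeeper *tk);
    extern void update_vsyscall_tz(void);
    #elif defined(CONFIG_GENERIC_TIME_VSYSCALL_OLD)
    extern void update_vsyscall_old(struct timespec *ts, struct timespec *wtm,
                                    struct clocksource *c, u32 mult);
    extern void update_vsyscall_tz(void);
    #else
    static inline void update_vsyscall(struct timekeeper *tk) { }
    static inline void update_vsyscall_tz(void) { }
    #endif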
2012-09-24  time: Move timekeeper structure to timekeeper_internal.h for vsyscall changes  (John Stultz, 1 file, +68/-0)

We're going to need to access the timekeeper in update_vsyscall, so make the structure available for those who need it.

Cc: Tony Luck <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Paul Turner <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Richard Cochran <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: John Stultz <[email protected]>