1168 files changed, 21405 insertions, 13674 deletions
@@ -55,6 +55,8 @@ Bart Van Assche <[email protected]> <[email protected]> Ben Gardner <[email protected]> Ben M Cahill <[email protected]> Björn Steinbrink <[email protected]> +Björn Töpel <[email protected]> <[email protected]> +Björn Töpel <[email protected]> <[email protected]> Boris Brezillon <[email protected]> <[email protected]> Boris Brezillon <[email protected]> <[email protected]> Boris Brezillon <[email protected]> <[email protected]> @@ -710,6 +710,10 @@ S: Las Cuevas 2385 - Bo Guemes S: Las Heras, Mendoza CP 5539 S: Argentina +N: Jay Cliburn +D: ATLX Ethernet drivers + N: Steven P. Cole @@ -1284,6 +1288,10 @@ D: Major kbuild rework during the 2.5 cycle D: ISDN Maintainer S: USA +N: Gerrit Renker +D: DCCP protocol support. + N: Philip Gladstone D: Kernel / timekeeping stuff @@ -2138,6 +2146,10 @@ E: [email protected] D: Original author of software suspend +N: Alexey Kuznetsov +D: Author and maintainer of large parts of the networking stack + N: Jaroslav Kysela W: https://www.perex.cz @@ -2696,6 +2708,10 @@ N: Wolfgang Muees D: Auerswald USB driver +N: Shrijeet Mukherjee +D: Network routing domains (VRF). + N: Paul Mundt D: SuperH maintainer @@ -4110,6 +4126,10 @@ S: B-1206 Jingmao Guojigongyu S: 16 Baliqiao Nanjie, Beijing 101100 S: People's Repulic of China +N: Aviad Yehezkel +D: Kernel TLS implementation and offload support. + N: Victor Yodaiken D: RTLinux (RealTime Linux) @@ -4167,6 +4187,10 @@ S: 1507 145th Place SE #B5 S: Bellevue, Washington 98007 S: USA +N: Wensong Zhang +D: IP virtual server (IPVS). + N: Haojian Zhuang D: MMP support diff --git a/Documentation/ABI/testing/sysfs-class-devlink b/Documentation/ABI/testing/sysfs-class-devlink index b662f747c83e..8a21ce515f61 100644 --- a/Documentation/ABI/testing/sysfs-class-devlink +++ b/Documentation/ABI/testing/sysfs-class-devlink @@ -5,8 +5,8 @@ Description: Provide a place in sysfs for the device link objects in the kernel at any given time. The name of a device link directory, denoted as ... above, is of the form <supplier>--<consumer> - where <supplier> is the supplier device name and <consumer> is - the consumer device name. + where <supplier> is the supplier bus:device name and <consumer> + is the consumer bus:device name. What: /sys/class/devlink/.../auto_remove_on Date: May 2020 diff --git a/Documentation/ABI/testing/sysfs-devices-consumer b/Documentation/ABI/testing/sysfs-devices-consumer index 1f06d74d1c3c..0809fda092e6 100644 --- a/Documentation/ABI/testing/sysfs-devices-consumer +++ b/Documentation/ABI/testing/sysfs-devices-consumer @@ -4,5 +4,6 @@ Contact: Saravana Kannan <[email protected]> Description: The /sys/devices/.../consumer:<consumer> are symlinks to device links where this device is the supplier. <consumer> denotes the - name of the consumer in that device link. There can be zero or - more of these symlinks for a given device. + name of the consumer in that device link and is of the form + bus:device name. There can be zero or more of these symlinks + for a given device. diff --git a/Documentation/ABI/testing/sysfs-devices-supplier b/Documentation/ABI/testing/sysfs-devices-supplier index a919e0db5e90..207f5972e98d 100644 --- a/Documentation/ABI/testing/sysfs-devices-supplier +++ b/Documentation/ABI/testing/sysfs-devices-supplier @@ -4,5 +4,6 @@ Contact: Saravana Kannan <[email protected]> Description: The /sys/devices/.../supplier:<supplier> are symlinks to device links where this device is the consumer. <supplier> denotes the - name of the supplier in that device link. 
There can be zero or - more of these symlinks for a given device. + name of the supplier in that device link and is of the form + bus:device name. There can be zero or more of these symlinks + for a given device. diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs index adc0d0e91607..75ccc5c62b3c 100644 --- a/Documentation/ABI/testing/sysfs-driver-ufs +++ b/Documentation/ABI/testing/sysfs-driver-ufs @@ -916,21 +916,25 @@ Date: September 2014 Contact: Subhash Jadavani <[email protected]> Description: This entry could be used to set or show the UFS device runtime power management level. The current driver - implementation supports 6 levels with next target states: + implementation supports 7 levels with next target states: == ==================================================== - 0 an UFS device will stay active, an UIC link will + 0 UFS device will stay active, UIC link will stay active - 1 an UFS device will stay active, an UIC link will + 1 UFS device will stay active, UIC link will hibernate - 2 an UFS device will moved to sleep, an UIC link will + 2 UFS device will be moved to sleep, UIC link will stay active - 3 an UFS device will moved to sleep, an UIC link will + 3 UFS device will be moved to sleep, UIC link will hibernate - 4 an UFS device will be powered off, an UIC link will + 4 UFS device will be powered off, UIC link will hibernate - 5 an UFS device will be powered off, an UIC link will + 5 UFS device will be powered off, UIC link will be powered off + 6 UFS device will be moved to deep sleep, UIC link + will be powered off. Note, deep sleep might not be + supported in which case this value will not be + accepted == ==================================================== What: /sys/bus/platform/drivers/ufshcd/*/rpm_target_dev_state @@ -954,21 +958,25 @@ Date: September 2014 Contact: Subhash Jadavani <[email protected]> Description: This entry could be used to set or show the UFS device system power management level. The current driver - implementation supports 6 levels with next target states: + implementation supports 7 levels with next target states: == ==================================================== - 0 an UFS device will stay active, an UIC link will + 0 UFS device will stay active, UIC link will stay active - 1 an UFS device will stay active, an UIC link will + 1 UFS device will stay active, UIC link will hibernate - 2 an UFS device will moved to sleep, an UIC link will + 2 UFS device will be moved to sleep, UIC link will stay active - 3 an UFS device will moved to sleep, an UIC link will + 3 UFS device will be moved to sleep, UIC link will hibernate - 4 an UFS device will be powered off, an UIC link will + 4 UFS device will be powered off, UIC link will hibernate - 5 an UFS device will be powered off, an UIC link will + 5 UFS device will be powered off, UIC link will be powered off + 6 UFS device will be moved to deep sleep, UIC link + will be powered off. 
Note, deep sleep might not be + supported in which case this value will not be + accepted == ==================================================== What: /sys/bus/platform/drivers/ufshcd/*/spm_target_dev_state diff --git a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst index 83ae3b79a643..a648b423ba0e 100644 --- a/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst +++ b/Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst @@ -473,7 +473,7 @@ read-side critical sections that follow the idle period (the oval near the bottom of the diagram above). Plumbing this into the full grace-period execution is described -`below <#Forcing%20Quiescent%20States>`__. +`below <Forcing Quiescent States_>`__. CPU-Hotplug Interface ^^^^^^^^^^^^^^^^^^^^^ @@ -494,7 +494,7 @@ mask to detect CPUs having gone offline since the beginning of this grace period. Plumbing this into the full grace-period execution is described -`below <#Forcing%20Quiescent%20States>`__. +`below <Forcing Quiescent States_>`__. Forcing Quiescent States ^^^^^^^^^^^^^^^^^^^^^^^^ @@ -532,7 +532,7 @@ from other CPUs. | RCU. But this diagram is complex enough as it is, so simplicity | | overrode accuracy. You can think of it as poetic license, or you can | | think of it as misdirection that is resolved in the | -| `stitched-together diagram <#Putting%20It%20All%20Together>`__. | +| `stitched-together diagram <Putting It All Together_>`__. | +-----------------------------------------------------------------------+ Grace-Period Cleanup @@ -596,7 +596,7 @@ maintain ordering. For example, if the callback function wakes up a task that runs on some other CPU, proper ordering must in place in both the callback function and the task being awakened. To see why this is important, consider the top half of the `grace-period -cleanup <#Grace-Period%20Cleanup>`__ diagram. The callback might be +cleanup`_ diagram. The callback might be running on a CPU corresponding to the leftmost leaf ``rcu_node`` structure, and awaken a task that is to run on a CPU corresponding to the rightmost leaf ``rcu_node`` structure, and the grace-period kernel diff --git a/Documentation/RCU/Design/Requirements/Requirements.rst b/Documentation/RCU/Design/Requirements/Requirements.rst index e8c84fcc0507..d4c9a016074b 100644 --- a/Documentation/RCU/Design/Requirements/Requirements.rst +++ b/Documentation/RCU/Design/Requirements/Requirements.rst @@ -45,7 +45,7 @@ requirements: #. `Other RCU Flavors`_ #. `Possible Future Changes`_ -This is followed by a `summary <#Summary>`__, however, the answers to +This is followed by a summary_, however, the answers to each quick quiz immediately follows the quiz. Select the big white space with your mouse to see the answer. @@ -1096,7 +1096,7 @@ memory barriers. | case, voluntary context switch) within an RCU read-side critical | | section. However, sleeping locks may be used within userspace RCU | | read-side critical sections, and also within Linux-kernel sleepable | -| RCU `(SRCU) <#Sleepable%20RCU>`__ read-side critical sections. In | +| RCU `(SRCU) <Sleepable RCU_>`__ read-side critical sections. 
In | | addition, the -rt patchset turns spinlocks into a sleeping locks so | | that the corresponding critical sections can be preempted, which also | | means that these sleeplockified spinlocks (but not other sleeping | @@ -1186,7 +1186,7 @@ non-preemptible (``CONFIG_PREEMPT=n``) kernels, and thus `tiny RCU <https://lkml.kernel.org/g/[email protected]>`__ was born. Josh Triplett has since taken over the small-memory banner with his `Linux kernel tinification <https://tiny.wiki.kernel.org/>`__ -project, which resulted in `SRCU <#Sleepable%20RCU>`__ becoming optional +project, which resulted in `SRCU <Sleepable RCU_>`__ becoming optional for those kernels not needing it. The remaining performance requirements are, for the most part, @@ -1457,8 +1457,8 @@ will vary as the value of ``HZ`` varies, and can also be changed using the relevant Kconfig options and kernel boot parameters. RCU currently does not do much sanity checking of these parameters, so please use caution when changing them. Note that these forward-progress measures -are provided only for RCU, not for `SRCU <#Sleepable%20RCU>`__ or `Tasks -RCU <#Tasks%20RCU>`__. +are provided only for RCU, not for `SRCU <Sleepable RCU_>`__ or `Tasks +RCU`_. RCU takes the following steps in ``call_rcu()`` to encourage timely invocation of callbacks when any given non-\ ``rcu_nocbs`` CPU has @@ -1477,8 +1477,8 @@ encouragement was provided: Again, these are default values when running at ``HZ=1000``, and can be overridden. Again, these forward-progress measures are provided only for -RCU, not for `SRCU <#Sleepable%20RCU>`__ or `Tasks -RCU <#Tasks%20RCU>`__. Even for RCU, callback-invocation forward +RCU, not for `SRCU <Sleepable RCU_>`__ or `Tasks +RCU`_. Even for RCU, callback-invocation forward progress for ``rcu_nocbs`` CPUs is much less well-developed, in part because workloads benefiting from ``rcu_nocbs`` CPUs tend to invoke ``call_rcu()`` relatively infrequently. If workloads emerge that need @@ -1920,7 +1920,7 @@ Hotplug CPU The Linux kernel supports CPU hotplug, which means that CPUs can come and go. It is of course illegal to use any RCU API member from an -offline CPU, with the exception of `SRCU <#Sleepable%20RCU>`__ read-side +offline CPU, with the exception of `SRCU <Sleepable RCU_>`__ read-side critical sections. This requirement was present from day one in DYNIX/ptx, but on the other hand, the Linux kernel's CPU-hotplug implementation is “interesting.” @@ -2177,7 +2177,7 @@ handles these states differently: However, RCU must be reliably informed as to whether any given CPU is currently in the idle loop, and, for ``NO_HZ_FULL``, also whether that CPU is executing in usermode, as discussed -`earlier <#Energy%20Efficiency>`__. It also requires that the +`earlier <Energy Efficiency_>`__. It also requires that the scheduling-clock interrupt be enabled when RCU needs it to be: #. If a CPU is either idle or executing in usermode, and RCU believes it @@ -2294,7 +2294,7 @@ Performance, Scalability, Response Time, and Reliability ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Expanding on the `earlier -discussion <#Performance%20and%20Scalability>`__, RCU is used heavily by +discussion <Performance and Scalability_>`__, RCU is used heavily by hot code paths in performance-critical portions of the Linux kernel's networking, security, virtualization, and scheduling code paths. 
RCU must therefore use efficient implementations, especially in its diff --git a/Documentation/admin-guide/binfmt-misc.rst b/Documentation/admin-guide/binfmt-misc.rst index 7a864131e5ea..59cd902e3549 100644 --- a/Documentation/admin-guide/binfmt-misc.rst +++ b/Documentation/admin-guide/binfmt-misc.rst @@ -23,7 +23,7 @@ Here is what the fields mean: - ``name`` is an identifier string. A new /proc file will be created with this - ``name below /proc/sys/fs/binfmt_misc``; cannot contain slashes ``/`` for + name below ``/proc/sys/fs/binfmt_misc``; cannot contain slashes ``/`` for obvious reasons. - ``type`` is the type of recognition. Give ``M`` for magic and ``E`` for extension. @@ -83,7 +83,7 @@ Here is what the fields mean: ``F`` - fix binary The usual behaviour of binfmt_misc is to spawn the binary lazily when the misc format file is invoked. However, - this doesn``t work very well in the face of mount namespaces and + this doesn't work very well in the face of mount namespaces and changeroots, so the ``F`` mode opens the binary as soon as the emulation is installed and uses the opened image to spawn the emulator, meaning it is always available once installed, diff --git a/Documentation/admin-guide/bootconfig.rst b/Documentation/admin-guide/bootconfig.rst index 9b90efcc3a35..452b7dcd7f6b 100644 --- a/Documentation/admin-guide/bootconfig.rst +++ b/Documentation/admin-guide/bootconfig.rst @@ -154,7 +154,7 @@ get the boot configuration data. Because of this "piggyback" method, there is no need to change or update the boot loader and the kernel image itself as long as the boot loader passes the correct initrd file size. If by any chance, the boot -loader passes a longer size, the kernel feils to find the bootconfig data. +loader passes a longer size, the kernel fails to find the bootconfig data. To do this operation, Linux kernel provides "bootconfig" command under tools/bootconfig, which allows admin to apply or delete the config file diff --git a/Documentation/admin-guide/device-mapper/dm-integrity.rst b/Documentation/admin-guide/device-mapper/dm-integrity.rst index 4e6f504474ac..2cc5488acbd9 100644 --- a/Documentation/admin-guide/device-mapper/dm-integrity.rst +++ b/Documentation/admin-guide/device-mapper/dm-integrity.rst @@ -177,14 +177,20 @@ bitmap_flush_interval:number The bitmap flush interval in milliseconds. The metadata buffers are synchronized when this interval expires. +allow_discards + Allow block discard requests (a.k.a. TRIM) for the integrity device. + Discards are only allowed to devices using internal hash. + fix_padding Use a smaller padding of the tag area that is more space-efficient. If this option is not present, large padding is used - that is for compatibility with older kernels. -allow_discards - Allow block discard requests (a.k.a. TRIM) for the integrity device. - Discards are only allowed to devices using internal hash. +legacy_recalculate + Allow recalculating of volumes with HMAC keys. This is disabled by + default for security reasons - an attacker could modify the volume, + set recalc_sector to zero, and the kernel would not detect the + modification. 
The journal mode (D/J), buffer_sectors, journal_watermark, commit_time and allow_discards can be changed when reloading the target (load an inactive diff --git a/Documentation/admin-guide/kernel-parameters.rst b/Documentation/admin-guide/kernel-parameters.rst index 06fb1b4aa849..682ab28b5c94 100644 --- a/Documentation/admin-guide/kernel-parameters.rst +++ b/Documentation/admin-guide/kernel-parameters.rst @@ -3,8 +3,8 @@ The kernel's command-line parameters ==================================== -The following is a consolidated list of the kernel parameters as -implemented by the __setup(), core_param() and module_param() macros +The following is a consolidated list of the kernel parameters as implemented +by the __setup(), early_param(), core_param() and module_param() macros and sorted into English Dictionary order (defined as ignoring all punctuation and sorting digits before letters in a case insensitive manner), and with descriptions where known. diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index c722ec19cd00..a10b545c2070 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -1385,7 +1385,7 @@ ftrace_filter=[function-list] [FTRACE] Limit the functions traced by the function - tracer at boot up. function-list is a comma separated + tracer at boot up. function-list is a comma-separated list of functions. This list can be changed at run time by the set_ftrace_filter file in the debugfs tracing directory. @@ -1399,13 +1399,13 @@ ftrace_graph_filter=[function-list] [FTRACE] Limit the top level callers functions traced by the function graph tracer at boot up. - function-list is a comma separated list of functions + function-list is a comma-separated list of functions that can be changed at run time by the set_graph_function file in the debugfs tracing directory. ftrace_graph_notrace=[function-list] [FTRACE] Do not trace from the functions specified in - function-list. This list is a comma separated list of + function-list. This list is a comma-separated list of functions that can be changed at run time by the set_graph_notrace file in the debugfs tracing directory. @@ -2421,7 +2421,7 @@ when set. Format: <int> - libata.force= [LIBATA] Force configurations. The format is comma + libata.force= [LIBATA] Force configurations. The format is comma- separated list of "[ID:]VAL" where ID is PORT[.DEVICE]. PORT and DEVICE are decimal numbers matching port, link or device. Basically, it matches @@ -5145,7 +5145,7 @@ stacktrace_filter=[function-list] [FTRACE] Limit the functions that the stack tracer - will trace at boot up. function-list is a comma separated + will trace at boot up. function-list is a comma-separated list of functions. This list can be changed at run time by the stack_trace_filter file in the debugfs tracing directory. Note, this enables stack tracing @@ -5348,7 +5348,7 @@ trace_event=[event-list] [FTRACE] Set and start specified trace events in order to facilitate early boot debugging. The event-list is a - comma separated list of trace events to enable. See + comma-separated list of trace events to enable. See also Documentation/trace/events.rst trace_options=[option-list] @@ -5972,6 +5972,10 @@ This option is obsoleted by the "nopv" option, which has equivalent effect for XEN platform. + xen_no_vector_callback + [KNL,X86,XEN] Disable the vector callback for Xen + event channel interrupts. 
+ xen_scrub_pages= [XEN] Boolean option to control scrubbing pages before giving them back to Xen, for use by other domains. Can be also changed at runtime diff --git a/Documentation/admin-guide/mm/concepts.rst b/Documentation/admin-guide/mm/concepts.rst index fa0974fbeae7..b966fcff993b 100644 --- a/Documentation/admin-guide/mm/concepts.rst +++ b/Documentation/admin-guide/mm/concepts.rst @@ -184,7 +184,7 @@ pages either asynchronously or synchronously, depending on the state of the system. When the system is not loaded, most of the memory is free and allocation requests will be satisfied immediately from the free pages supply. As the load increases, the amount of the free pages goes -down and when it reaches a certain threshold (high watermark), an +down and when it reaches a certain threshold (low watermark), an allocation request will awaken the ``kswapd`` daemon. It will asynchronously scan memory pages and either just free them if the data they contain is available elsewhere, or evict to the backing storage diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst index 69171b1799f2..f1c9d20bd42d 100644 --- a/Documentation/core-api/index.rst +++ b/Documentation/core-api/index.rst @@ -53,7 +53,6 @@ How Linux keeps everything from happening at the same time. See .. toctree:: :maxdepth: 1 - atomic_ops refcount-vs-atomic irq/index local_ops diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst index 0fc3fb1860c4..1651d961f06a 100644 --- a/Documentation/dev-tools/kasan.rst +++ b/Documentation/dev-tools/kasan.rst @@ -160,29 +160,14 @@ intended for use in production as a security mitigation. Therefore it supports boot parameters that allow to disable KASAN competely or otherwise control particular KASAN features. -The things that can be controlled are: +- ``kasan=off`` or ``=on`` controls whether KASAN is enabled (default: ``on``). -1. Whether KASAN is enabled at all. -2. Whether KASAN collects and saves alloc/free stacks. -3. Whether KASAN panics on a detected bug or not. +- ``kasan.stacktrace=off`` or ``=on`` disables or enables alloc and free stack + traces collection (default: ``on`` for ``CONFIG_DEBUG_KERNEL=y``, otherwise + ``off``). -The ``kasan.mode`` boot parameter allows to choose one of three main modes: - -- ``kasan.mode=off`` - KASAN is disabled, no tag checks are performed -- ``kasan.mode=prod`` - only essential production features are enabled -- ``kasan.mode=full`` - all KASAN features are enabled - -The chosen mode provides default control values for the features mentioned -above. However it's also possible to override the default values by providing: - -- ``kasan.stacktrace=off`` or ``=on`` - enable alloc/free stack collection - (default: ``on`` for ``mode=full``, - otherwise ``off``) -- ``kasan.fault=report`` or ``=panic`` - only print KASAN report or also panic - (default: ``report``) - -If ``kasan.mode`` parameter is not provided, it defaults to ``full`` when -``CONFIG_DEBUG_KERNEL`` is enabled, and to ``prod`` otherwise. +- ``kasan.fault=report`` or ``=panic`` controls whether to only print a KASAN + report or also panic the kernel (default: ``report``). For developers ~~~~~~~~~~~~~~ diff --git a/Documentation/dev-tools/kunit/usage.rst b/Documentation/dev-tools/kunit/usage.rst index d9fdc14f0677..650f99590df5 100644 --- a/Documentation/dev-tools/kunit/usage.rst +++ b/Documentation/dev-tools/kunit/usage.rst @@ -522,6 +522,63 @@ There's more boilerplate involved, but it can: * E.g. 
if we wanted to also test ``sha256sum``, we could add a ``sha256`` field and reuse ``cases``. +* be converted to a "parameterized test", see below. + +Parameterized Testing +~~~~~~~~~~~~~~~~~~~~~ + +The table-driven testing pattern is common enough that KUnit has special +support for it. + +Reusing the same ``cases`` array from above, we can write the test as a +"parameterized test" with the following. + +.. code-block:: c + + // This is copy-pasted from above. + struct sha1_test_case { + const char *str; + const char *sha1; + }; + struct sha1_test_case cases[] = { + { + .str = "hello world", + .sha1 = "2aae6c35c94fcfb415dbe95f408b9ce91ee846ed", + }, + { + .str = "hello world!", + .sha1 = "430ce34d020724ed75a196dfc2ad67c77772d169", + }, + }; + + // Need a helper function to generate a name for each test case. + static void case_to_desc(const struct sha1_test_case *t, char *desc) + { + strcpy(desc, t->str); + } + // Creates `sha1_gen_params()` to iterate over `cases`. + KUNIT_ARRAY_PARAM(sha1, cases, case_to_desc); + + // Looks no different from a normal test. + static void sha1_test(struct kunit *test) + { + // This function can just contain the body of the for-loop. + // The former `cases[i]` is accessible under test->param_value. + char out[40]; + struct sha1_test_case *test_param = (struct sha1_test_case *)(test->param_value); + + sha1sum(test_param->str, out); + KUNIT_EXPECT_STREQ_MSG(test, (char *)out, test_param->sha1, + "sha1sum(%s)", test_param->str); + } + + // Instead of KUNIT_CASE, we use KUNIT_CASE_PARAM and pass in the + // function declared by KUNIT_ARRAY_PARAM. + static struct kunit_case sha1_test_cases[] = { + KUNIT_CASE_PARAM(sha1_test, sha1_gen_params), + {} + }; + .. _kunit-on-non-uml: KUnit on non-UML architectures diff --git a/Documentation/devicetree/bindings/display/mediatek/mediatek,disp.txt b/Documentation/devicetree/bindings/display/mediatek/mediatek,disp.txt index 33977e15bebd..b47e1a0597c2 100644 --- a/Documentation/devicetree/bindings/display/mediatek/mediatek,disp.txt +++ b/Documentation/devicetree/bindings/display/mediatek/mediatek,disp.txt @@ -37,13 +37,14 @@ Required properties (all function blocks): "mediatek,<chip>-disp-aal" - adaptive ambient light controller "mediatek,<chip>-disp-gamma" - gamma correction "mediatek,<chip>-disp-merge" - merge streams from two RDMA sources + "mediatek,<chip>-disp-postmask" - control round corner for display frame "mediatek,<chip>-disp-split" - split stream to two encoders "mediatek,<chip>-disp-ufoe" - data compression engine "mediatek,<chip>-dsi" - DSI controller, see mediatek,dsi.txt "mediatek,<chip>-dpi" - DPI controller, see mediatek,dpi.txt "mediatek,<chip>-disp-mutex" - display mutex "mediatek,<chip>-disp-od" - overdrive - the supported chips are mt2701, mt7623, mt2712, mt8167 and mt8173. + the supported chips are mt2701, mt7623, mt2712, mt8167, mt8173, mt8183 and mt8192. - reg: Physical base address and length of the function block register space - interrupts: The interrupt signal from the function block (required, except for merge and split function blocks). @@ -66,6 +67,14 @@ Required properties (DMA function blocks): argument, see Documentation/devicetree/bindings/iommu/mediatek,iommu.txt for details. +Optional properties (RDMA function blocks): +- mediatek,rdma-fifo-size: rdma fifo size may be different even in same SOC, add this + property to the corresponding rdma + the value is the Max value which defined in hardware data sheet. 
+ mediatek,rdma-fifo-size of mt8173-rdma0 is 8K + mediatek,rdma-fifo-size of mt8183-rdma0 is 5K + mediatek,rdma-fifo-size of mt8183-rdma1 is 2K + Examples: mmsys: clock-controller@14000000 { @@ -103,6 +112,7 @@ rdma0: rdma@1400e000 { clocks = <&mmsys CLK_MM_DISP_RDMA0>; iommus = <&iommu M4U_PORT_DISP_RDMA0>; mediatek,larb = <&larb0>; + mediatek,rdma-fifosize = <8192>; }; rdma1: rdma@1400f000 { diff --git a/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml b/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml index b15f68c499cb..df29d59d13a8 100644 --- a/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml +++ b/Documentation/devicetree/bindings/dma/ti/k3-bcdma.yaml @@ -1,4 +1,6 @@ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +# Copyright (C) 2020 Texas Instruments Incorporated +# Author: Peter Ujfalusi <[email protected]> %YAML 1.2 --- $id: http://devicetree.org/schemas/dma/ti/k3-bcdma.yaml# @@ -7,7 +9,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml# title: Texas Instruments K3 DMSS BCDMA Device Tree Bindings maintainers: - - Peter Ujfalusi <[email protected]> + - Peter Ujfalusi <[email protected]> description: | The Block Copy DMA (BCDMA) is intended to perform similar functions as the TR diff --git a/Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml b/Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml index b13ab60cd740..ea19d12a9337 100644 --- a/Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml +++ b/Documentation/devicetree/bindings/dma/ti/k3-pktdma.yaml @@ -1,4 +1,6 @@ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +# Copyright (C) 2020 Texas Instruments Incorporated +# Author: Peter Ujfalusi <[email protected]> %YAML 1.2 --- $id: http://devicetree.org/schemas/dma/ti/k3-pktdma.yaml# @@ -7,7 +9,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml# title: Texas Instruments K3 DMSS PKTDMA Device Tree Bindings maintainers: - - Peter Ujfalusi <[email protected]> + - Peter Ujfalusi <[email protected]> description: | The Packet DMA (PKTDMA) is intended to perform similar functions as the packet diff --git a/Documentation/devicetree/bindings/dma/ti/k3-udma.yaml b/Documentation/devicetree/bindings/dma/ti/k3-udma.yaml index 9a87fd9041eb..6a09bbf83d46 100644 --- a/Documentation/devicetree/bindings/dma/ti/k3-udma.yaml +++ b/Documentation/devicetree/bindings/dma/ti/k3-udma.yaml @@ -1,4 +1,6 @@ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +# Copyright (C) 2019 Texas Instruments Incorporated +# Author: Peter Ujfalusi <[email protected]> %YAML 1.2 --- $id: http://devicetree.org/schemas/dma/ti/k3-udma.yaml# @@ -7,7 +9,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml# title: Texas Instruments K3 NAVSS Unified DMA Device Tree Bindings maintainers: - - Peter Ujfalusi <[email protected]> + - Peter Ujfalusi <[email protected]> description: | The UDMA-P is intended to perform similar (but significantly upgraded) diff --git a/Documentation/devicetree/bindings/iio/accel/bosch,bma255.yaml b/Documentation/devicetree/bindings/iio/accel/bosch,bma255.yaml index 6eef3480ea8f..c2efbb813ca2 100644 --- a/Documentation/devicetree/bindings/iio/accel/bosch,bma255.yaml +++ b/Documentation/devicetree/bindings/iio/accel/bosch,bma255.yaml @@ -16,8 +16,8 @@ description: properties: compatible: enum: - - bosch,bmc150 - - bosch,bmi055 + - bosch,bmc150_accel + - bosch,bmi055_accel - bosch,bma255 - bosch,bma250e - bosch,bma222 diff --git a/Documentation/devicetree/bindings/net/renesas,etheravb.yaml 
b/Documentation/devicetree/bindings/net/renesas,etheravb.yaml index 244befb6402a..de9dd574a2f9 100644 --- a/Documentation/devicetree/bindings/net/renesas,etheravb.yaml +++ b/Documentation/devicetree/bindings/net/renesas,etheravb.yaml @@ -163,6 +163,7 @@ allOf: enum: - renesas,etheravb-r8a774a1 - renesas,etheravb-r8a774b1 + - renesas,etheravb-r8a774e1 - renesas,etheravb-r8a7795 - renesas,etheravb-r8a7796 - renesas,etheravb-r8a77961 diff --git a/Documentation/devicetree/bindings/net/snps,dwmac.yaml b/Documentation/devicetree/bindings/net/snps,dwmac.yaml index b2f6083f556a..dfbf5fe4547a 100644 --- a/Documentation/devicetree/bindings/net/snps,dwmac.yaml +++ b/Documentation/devicetree/bindings/net/snps,dwmac.yaml @@ -161,7 +161,8 @@ properties: * snps,route-dcbcp, DCB Control Packets * snps,route-up, Untagged Packets * snps,route-multi-broad, Multicast & Broadcast Packets - * snps,priority, RX queue priority (Range 0x0 to 0xF) + * snps,priority, bitmask of the tagged frames priorities assigned to + the queue snps,mtl-tx-config: $ref: /schemas/types.yaml#/definitions/phandle @@ -188,7 +189,10 @@ properties: * snps,idle_slope, unlock on WoL * snps,high_credit, max write outstanding req. limit * snps,low_credit, max read outstanding req. limit - * snps,priority, TX queue priority (Range 0x0 to 0xF) + * snps,priority, bitmask of the priorities assigned to the queue. + When a PFC frame is received with priorities matching the bitmask, + the queue is blocked from transmitting for the pause time specified + in the PFC frame. snps,reset-gpio: deprecated: true diff --git a/Documentation/devicetree/bindings/regulator/nxp,pf8x00-regulator.yaml b/Documentation/devicetree/bindings/regulator/nxp,pf8x00-regulator.yaml index a6c259ce9785..956156fe52a3 100644 --- a/Documentation/devicetree/bindings/regulator/nxp,pf8x00-regulator.yaml +++ b/Documentation/devicetree/bindings/regulator/nxp,pf8x00-regulator.yaml @@ -19,7 +19,9 @@ description: | properties: compatible: enum: - - nxp,pf8x00 + - nxp,pf8100 + - nxp,pf8121a + - nxp,pf8200 reg: maxItems: 1 @@ -118,7 +120,7 @@ examples: #size-cells = <0>; pmic@8 { - compatible = "nxp,pf8x00"; + compatible = "nxp,pf8100"; reg = <0x08>; regulators { diff --git a/Documentation/devicetree/bindings/regulator/qcom,rpmh-regulator.txt b/Documentation/devicetree/bindings/regulator/qcom,rpmh-regulator.txt index b8f0b7809c02..7d462b899473 100644 --- a/Documentation/devicetree/bindings/regulator/qcom,rpmh-regulator.txt +++ b/Documentation/devicetree/bindings/regulator/qcom,rpmh-regulator.txt @@ -44,6 +44,7 @@ First Level Nodes - PMIC Definition: Must be one of below: "qcom,pm8005-rpmh-regulators" "qcom,pm8009-rpmh-regulators" + "qcom,pm8009-1-rpmh-regulators" "qcom,pm8150-rpmh-regulators" "qcom,pm8150l-rpmh-regulators" "qcom,pm8350-rpmh-regulators" diff --git a/Documentation/devicetree/bindings/sound/ti,j721e-cpb-audio.yaml b/Documentation/devicetree/bindings/sound/ti,j721e-cpb-audio.yaml index 805da4d6a88e..ec06789b21df 100644 --- a/Documentation/devicetree/bindings/sound/ti,j721e-cpb-audio.yaml +++ b/Documentation/devicetree/bindings/sound/ti,j721e-cpb-audio.yaml @@ -1,4 +1,6 @@ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +# Copyright (C) 2020 Texas Instruments Incorporated +# Author: Peter Ujfalusi <[email protected]> %YAML 1.2 --- $id: http://devicetree.org/schemas/sound/ti,j721e-cpb-audio.yaml# @@ -7,7 +9,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml# title: Texas Instruments J721e Common Processor Board Audio Support maintainers: - - Peter Ujfalusi 
<[email protected]> + - Peter Ujfalusi <[email protected]> description: | The audio support on the board is using pcm3168a codec connected to McASP10 diff --git a/Documentation/devicetree/bindings/sound/ti,j721e-cpb-ivi-audio.yaml b/Documentation/devicetree/bindings/sound/ti,j721e-cpb-ivi-audio.yaml index bb780f621628..ee9f960de36b 100644 --- a/Documentation/devicetree/bindings/sound/ti,j721e-cpb-ivi-audio.yaml +++ b/Documentation/devicetree/bindings/sound/ti,j721e-cpb-ivi-audio.yaml @@ -1,4 +1,6 @@ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +# Copyright (C) 2020 Texas Instruments Incorporated +# Author: Peter Ujfalusi <[email protected]> %YAML 1.2 --- $id: http://devicetree.org/schemas/sound/ti,j721e-cpb-ivi-audio.yaml# @@ -7,7 +9,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml# title: Texas Instruments J721e Common Processor Board Audio Support maintainers: - - Peter Ujfalusi <[email protected]> + - Peter Ujfalusi <[email protected]> description: | The Infotainment board plugs into the Common Processor Board, the support of the diff --git a/Documentation/devicetree/bindings/usb/ti,j721e-usb.yaml b/Documentation/devicetree/bindings/usb/ti,j721e-usb.yaml index 388245b91a55..148b3fb4ceaf 100644 --- a/Documentation/devicetree/bindings/usb/ti,j721e-usb.yaml +++ b/Documentation/devicetree/bindings/usb/ti,j721e-usb.yaml @@ -11,8 +11,12 @@ maintainers: properties: compatible: - items: + oneOf: - const: ti,j721e-usb + - const: ti,am64-usb + - items: + - const: ti,j721e-usb + - const: ti,am64-usb reg: description: module registers diff --git a/Documentation/doc-guide/sphinx.rst b/Documentation/doc-guide/sphinx.rst index 2fb2ff297d69..36ac2166ad67 100644 --- a/Documentation/doc-guide/sphinx.rst +++ b/Documentation/doc-guide/sphinx.rst @@ -48,12 +48,12 @@ or ``virtualenv``, depending on how your distribution packaged Python 3. those versions, you should run ``pip install 'docutils==0.12'``. #) It is recommended to use the RTD theme for html output. Depending - on the Sphinx version, it should be installed in separate, + on the Sphinx version, it should be installed separately, with ``pip install sphinx_rtd_theme``. - #) Some ReST pages contain math expressions. Due to the way Sphinx work, + #) Some ReST pages contain math expressions. Due to the way Sphinx works, those expressions are written using LaTeX notation. It needs texlive - installed with amdfonts and amsmath in order to evaluate them. + installed with amsfonts and amsmath in order to evaluate them. In summary, if you want to install Sphinx version 1.7.9, you should do:: @@ -128,7 +128,7 @@ Sphinx Build ============ The usual way to generate the documentation is to run ``make htmldocs`` or -``make pdfdocs``. There are also other formats available, see the documentation +``make pdfdocs``. There are also other formats available: see the documentation section of ``make help``. The generated documentation is placed in format-specific subdirectories under ``Documentation/output``. @@ -303,17 +303,17 @@ and *targets* (e.g. a ref to ``:ref:`last row <last row>``` / :ref:`last row - head col 3 - head col 4 - * - column 1 + * - row 1 - field 1.1 - field 1.2 with autospan - * - column 2 + * - row 2 - field 2.1 - :rspan:`1` :cspan:`1` field 2.2 - 3.3 * .. _`last row`: - - column 3 + - row 3 Rendered as: @@ -325,17 +325,17 @@ Rendered as: - head col 3 - head col 4 - * - column 1 + * - row 1 - field 1.1 - field 1.2 with autospan - * - column 2 + * - row 2 - field 2.1 - :rspan:`1` :cspan:`1` field 2.2 - 3.3 * .. 
_`last row`: - - column 3 + - row 3 Cross-referencing ----------------- @@ -361,7 +361,7 @@ Figures & Images If you want to add an image, you should use the ``kernel-figure`` and ``kernel-image`` directives. E.g. to insert a figure with a scalable -image format use SVG (:ref:`svg_image_example`):: +image format, use SVG (:ref:`svg_image_example`):: .. kernel-figure:: svg_image.svg :alt: simple SVG image @@ -375,7 +375,7 @@ image format use SVG (:ref:`svg_image_example`):: SVG image example -The kernel figure (and image) directive support **DOT** formatted files, see +The kernel figure (and image) directive supports **DOT** formatted files, see * DOT: http://graphviz.org/pdf/dotguide.pdf * Graphviz: http://www.graphviz.org/content/dot-language @@ -394,7 +394,7 @@ A simple example (:ref:`hello_dot_file`):: DOT's hello world example -Embed *render* markups (or languages) like Graphviz's **DOT** is provided by the +Embedded *render* markups (or languages) like Graphviz's **DOT** are provided by the ``kernel-render`` directives.:: .. kernel-render:: DOT @@ -406,7 +406,7 @@ Embed *render* markups (or languages) like Graphviz's **DOT** is provided by the } How this will be rendered depends on the installed tools. If Graphviz is -installed, you will see an vector image. If not the raw markup is inserted as +installed, you will see a vector image. If not, the raw markup is inserted as *literal-block* (:ref:`hello_dot_render`). .. _hello_dot_render: @@ -421,8 +421,8 @@ installed, you will see an vector image. If not the raw markup is inserted as The *render* directive has all the options known from the *figure* directive, plus option ``caption``. If ``caption`` has a value, a *figure* node is -inserted. If not, a *image* node is inserted. A ``caption`` is also needed, if -you want to refer it (:ref:`hello_svg_render`). +inserted. If not, an *image* node is inserted. A ``caption`` is also needed, if +you want to refer to it (:ref:`hello_svg_render`). Embedded **SVG**:: diff --git a/Documentation/firmware-guide/acpi/apei/einj.rst b/Documentation/firmware-guide/acpi/apei/einj.rst index e588bccf5158..c042176e1707 100644 --- a/Documentation/firmware-guide/acpi/apei/einj.rst +++ b/Documentation/firmware-guide/acpi/apei/einj.rst @@ -50,8 +50,8 @@ The following files belong to it: 0x00000010 Memory Uncorrectable non-fatal 0x00000020 Memory Uncorrectable fatal 0x00000040 PCI Express Correctable - 0x00000080 PCI Express Uncorrectable fatal - 0x00000100 PCI Express Uncorrectable non-fatal + 0x00000080 PCI Express Uncorrectable non-fatal + 0x00000100 PCI Express Uncorrectable fatal 0x00000200 Platform Correctable 0x00000400 Platform Uncorrectable non-fatal 0x00000800 Platform Uncorrectable fatal diff --git a/Documentation/hwmon/sbtsi_temp.rst b/Documentation/hwmon/sbtsi_temp.rst index 922b3c8db666..749f518389c3 100644 --- a/Documentation/hwmon/sbtsi_temp.rst +++ b/Documentation/hwmon/sbtsi_temp.rst @@ -1,7 +1,7 @@ .. SPDX-License-Identifier: GPL-2.0-or-later Kernel driver sbtsi_temp -================== +======================== Supported hardware: diff --git a/Documentation/kbuild/makefiles.rst b/Documentation/kbuild/makefiles.rst index d36768cf1250..9f6a11881951 100644 --- a/Documentation/kbuild/makefiles.rst +++ b/Documentation/kbuild/makefiles.rst @@ -598,7 +598,7 @@ more details, with real examples. explicitly added to $(targets). Assignments to $(targets) are without $(obj)/ prefix. if_changed may be - used in conjunction with custom rules as defined in "3.9 Custom Rules". 
+ used in conjunction with custom rules as defined in "3.11 Custom Rules". Note: It is a typical mistake to forget the FORCE prerequisite. Another common pitfall is that whitespace is sometimes significant; for diff --git a/Documentation/kernel-hacking/locking.rst b/Documentation/kernel-hacking/locking.rst index 6ed806e6061b..c3448929a824 100644 --- a/Documentation/kernel-hacking/locking.rst +++ b/Documentation/kernel-hacking/locking.rst @@ -118,11 +118,11 @@ spinlock, but you may block holding a mutex. If you can't lock a mutex, your task will suspend itself, and be woken up when the mutex is released. This means the CPU can do something else while you are waiting. There are many cases when you simply can't sleep (see -`What Functions Are Safe To Call From Interrupts? <#sleeping-things>`__), +`What Functions Are Safe To Call From Interrupts?`_), and so have to use a spinlock instead. Neither type of lock is recursive: see -`Deadlock: Simple and Advanced <#deadlock>`__. +`Deadlock: Simple and Advanced`_. Locks and Uniprocessor Kernels ------------------------------ @@ -179,7 +179,7 @@ perfect world). Note that you can also use spin_lock_irq() or spin_lock_irqsave() here, which stop hardware interrupts -as well: see `Hard IRQ Context <#hard-irq-context>`__. +as well: see `Hard IRQ Context`_. This works perfectly for UP as well: the spin lock vanishes, and this macro simply becomes local_bh_disable() @@ -230,7 +230,7 @@ The Same Softirq ~~~~~~~~~~~~~~~~ The same softirq can run on the other CPUs: you can use a per-CPU array -(see `Per-CPU Data <#per-cpu-data>`__) for better performance. If you're +(see `Per-CPU Data`_) for better performance. If you're going so far as to use a softirq, you probably care about scalable performance enough to justify the extra complexity. diff --git a/Documentation/networking/device_drivers/ethernet/marvell/octeontx2.rst b/Documentation/networking/device_drivers/ethernet/marvell/octeontx2.rst index d3fcf536d14e..61e850460e18 100644 --- a/Documentation/networking/device_drivers/ethernet/marvell/octeontx2.rst +++ b/Documentation/networking/device_drivers/ethernet/marvell/octeontx2.rst @@ -164,46 +164,56 @@ Devlink health reporters NPA Reporters ------------- -The NPA reporters are responsible for reporting and recovering the following group of errors +The NPA reporters are responsible for reporting and recovering the following group of errors: + 1. GENERAL events + - Error due to operation of unmapped PF. - Error due to disabled alloc/free for other HW blocks (NIX, SSO, TIM, DPI and AURA). + 2. ERROR events + - Fault due to NPA_AQ_INST_S read or NPA_AQ_RES_S write. - AQ Doorbell Error. + 3. RAS events + - RAS Error Reporting for NPA_AQ_INST_S/NPA_AQ_RES_S. + 4. RVU events + - Error due to unmapped slot. 
-Sample Output -------------- -~# devlink health -pci/0002:01:00.0: - reporter hw_npa_intr - state healthy error 2872 recover 2872 last_dump_date 2020-12-10 last_dump_time 09:39:09 grace_period 0 auto_recover true auto_dump true - reporter hw_npa_gen - state healthy error 2872 recover 2872 last_dump_date 2020-12-11 last_dump_time 04:43:04 grace_period 0 auto_recover true auto_dump true - reporter hw_npa_err - state healthy error 2871 recover 2871 last_dump_date 2020-12-10 last_dump_time 09:39:17 grace_period 0 auto_recover true auto_dump true - reporter hw_npa_ras - state healthy error 0 recover 0 last_dump_date 2020-12-10 last_dump_time 09:32:40 grace_period 0 auto_recover true auto_dump true +Sample Output:: + + ~# devlink health + pci/0002:01:00.0: + reporter hw_npa_intr + state healthy error 2872 recover 2872 last_dump_date 2020-12-10 last_dump_time 09:39:09 grace_period 0 auto_recover true auto_dump true + reporter hw_npa_gen + state healthy error 2872 recover 2872 last_dump_date 2020-12-11 last_dump_time 04:43:04 grace_period 0 auto_recover true auto_dump true + reporter hw_npa_err + state healthy error 2871 recover 2871 last_dump_date 2020-12-10 last_dump_time 09:39:17 grace_period 0 auto_recover true auto_dump true + reporter hw_npa_ras + state healthy error 0 recover 0 last_dump_date 2020-12-10 last_dump_time 09:32:40 grace_period 0 auto_recover true auto_dump true Each reporter dumps the + - Error Type - Error Register value - Reason in words -For eg: -~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_gen - NPA_AF_GENERAL: - NPA General Interrupt Reg : 1 - NIX0: free disabled RX -~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_intr - NPA_AF_RVU: - NPA RVU Interrupt Reg : 1 - Unmap Slot Error -~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_err - NPA_AF_ERR: - NPA Error Interrupt Reg : 4096 - AQ Doorbell Error +For example:: + + ~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_gen + NPA_AF_GENERAL: + NPA General Interrupt Reg : 1 + NIX0: free disabled RX + ~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_intr + NPA_AF_RVU: + NPA RVU Interrupt Reg : 1 + Unmap Slot Error + ~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_err + NPA_AF_ERR: + NPA Error Interrupt Reg : 4096 + AQ Doorbell Error diff --git a/Documentation/networking/netdev-FAQ.rst b/Documentation/networking/netdev-FAQ.rst index 4b9ed5874d5a..ae2ae37cd921 100644 --- a/Documentation/networking/netdev-FAQ.rst +++ b/Documentation/networking/netdev-FAQ.rst @@ -6,9 +6,9 @@ netdev FAQ ========== -Q: What is netdev? ------------------- -A: It is a mailing list for all network-related Linux stuff. This +What is netdev? +--------------- +It is a mailing list for all network-related Linux stuff. This includes anything found under net/ (i.e. core code like IPv6) and drivers/net (i.e. hardware specific drivers) in the Linux source tree. @@ -25,9 +25,9 @@ Aside from subsystems like that mentioned above, all network-related Linux development (i.e. RFC, review, comments, etc.) takes place on netdev. -Q: How do the changes posted to netdev make their way into Linux? ------------------------------------------------------------------ -A: There are always two trees (git repositories) in play. Both are +How do the changes posted to netdev make their way into Linux? +-------------------------------------------------------------- +There are always two trees (git repositories) in play. Both are driven by David Miller, the main network maintainer. 
There is the ``net`` tree, and the ``net-next`` tree. As you can probably guess from the names, the ``net`` tree is for fixes to existing code already in the @@ -37,9 +37,9 @@ for the future release. You can find the trees here: - https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git - https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git -Q: How often do changes from these trees make it to the mainline Linus tree? ----------------------------------------------------------------------------- -A: To understand this, you need to know a bit of background information on +How often do changes from these trees make it to the mainline Linus tree? +------------------------------------------------------------------------- +To understand this, you need to know a bit of background information on the cadence of Linux development. Each new release starts off with a two week "merge window" where the main maintainers feed their new stuff to Linus for merging into the mainline tree. After the two weeks, the @@ -81,7 +81,8 @@ focus for ``net`` is on stabilization and bug fixes. Finally, the vX.Y gets released, and the whole cycle starts over. -Q: So where are we now in this cycle? +So where are we now in this cycle? +---------------------------------- Load the mainline (Linus) page here: @@ -91,9 +92,9 @@ and note the top of the "tags" section. If it is rc1, it is early in the dev cycle. If it was tagged rc7 a week ago, then a release is probably imminent. -Q: How do I indicate which tree (net vs. net-next) my patch should be in? -------------------------------------------------------------------------- -A: Firstly, think whether you have a bug fix or new "next-like" content. +How do I indicate which tree (net vs. net-next) my patch should be in? +---------------------------------------------------------------------- +Firstly, think whether you have a bug fix or new "next-like" content. Then once decided, assuming that you use git, use the prefix flag, i.e. :: @@ -105,48 +106,45 @@ in the above is just the subject text of the outgoing e-mail, and you can manually change it yourself with whatever MUA you are comfortable with. -Q: I sent a patch and I'm wondering what happened to it? --------------------------------------------------------- -Q: How can I tell whether it got merged? -A: Start by looking at the main patchworks queue for netdev: +I sent a patch and I'm wondering what happened to it - how can I tell whether it got merged? +-------------------------------------------------------------------------------------------- +Start by looking at the main patchworks queue for netdev: https://patchwork.kernel.org/project/netdevbpf/list/ The "State" field will tell you exactly where things are at with your patch. -Q: The above only says "Under Review". How can I find out more? ----------------------------------------------------------------- -A: Generally speaking, the patches get triaged quickly (in less than +The above only says "Under Review". How can I find out more? +------------------------------------------------------------- +Generally speaking, the patches get triaged quickly (in less than 48h). So be patient. Asking the maintainer for status updates on your patch is a good way to ensure your patch is ignored or pushed to the bottom of the priority list. -Q: I submitted multiple versions of the patch series ----------------------------------------------------- -Q: should I directly update patchwork for the previous versions of these -patch series? 
-A: No, please don't interfere with the patch status on patchwork, leave +I submitted multiple versions of the patch series. Should I directly update patchwork for the previous versions of these patch series? +-------------------------------------------------------------------------------------------------------------------------------------- +No, please don't interfere with the patch status on patchwork, leave it to the maintainer to figure out what is the most recent and current version that should be applied. If there is any doubt, the maintainer will reply and ask what should be done. -Q: I made changes to only a few patches in a patch series should I resend only those changed? ---------------------------------------------------------------------------------------------- -A: No, please resend the entire patch series and make sure you do number your +I made changes to only a few patches in a patch series should I resend only those changed? +------------------------------------------------------------------------------------------ +No, please resend the entire patch series and make sure you do number your patches such that it is clear this is the latest and greatest set of patches that can be applied. -Q: I submitted multiple versions of a patch series and it looks like a version other than the last one has been accepted, what should I do? -------------------------------------------------------------------------------------------------------------------------------------------- -A: There is no revert possible, once it is pushed out, it stays like that. +I submitted multiple versions of a patch series and it looks like a version other than the last one has been accepted, what should I do? +---------------------------------------------------------------------------------------------------------------------------------------- +There is no revert possible, once it is pushed out, it stays like that. Please send incremental versions on top of what has been merged in order to fix the patches the way they would look like if your latest patch series was to be merged. -Q: How can I tell what patches are queued up for backporting to the various stable releases? --------------------------------------------------------------------------------------------- -A: Normally Greg Kroah-Hartman collects stable commits himself, but for +How can I tell what patches are queued up for backporting to the various stable releases? +----------------------------------------------------------------------------------------- +Normally Greg Kroah-Hartman collects stable commits himself, but for networking, Dave collects up patches he deems critical for the networking subsystem, and then hands them off to Greg. @@ -169,11 +167,9 @@ simply clone the repo, and then git grep the mainline commit ID, e.g. releases/3.9.8/ipv6-fix-possible-crashes-in-ip6_cork_release.patch stable/stable-queue$ -Q: I see a network patch and I think it should be backported to stable. ------------------------------------------------------------------------ -Q: Should I request it via [email protected] like the references in -the kernel's Documentation/process/stable-kernel-rules.rst file say? -A: No, not for networking. Check the stable queues as per above first +I see a network patch and I think it should be backported to stable. Should I request it via [email protected] like the references in the kernel's Documentation/process/stable-kernel-rules.rst file say? 
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +No, not for networking. Check the stable queues as per above first to see if it is already queued. If not, then send a mail to netdev, listing the upstream commit ID and why you think it should be a stable candidate. @@ -190,11 +186,9 @@ mainline, the better the odds that it is an OK candidate for stable. So scrambling to request a commit be added the day after it appears should be avoided. -Q: I have created a network patch and I think it should be backported to stable. --------------------------------------------------------------------------------- -Q: Should I add a Cc: [email protected] like the references in the -kernel's Documentation/ directory say? -A: No. See above answer. In short, if you think it really belongs in +I have created a network patch and I think it should be backported to stable. Should I add a Cc: [email protected] like the references in the kernel's Documentation/ directory say? +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +No. See above answer. In short, if you think it really belongs in stable, then ensure you write a decent commit log that describes who gets impacted by the bug fix and how it manifests itself, and when the bug was introduced. If you do that properly, then the commit will get @@ -207,18 +201,18 @@ marker line as described in :ref:`Documentation/process/submitting-patches.rst <the_canonical_patch_format>` to temporarily embed that information into the patch that you send. -Q: Are all networking bug fixes backported to all stable releases? ------------------------------------------------------------------- -A: Due to capacity, Dave could only take care of the backports for the +Are all networking bug fixes backported to all stable releases? +--------------------------------------------------------------- +Due to capacity, Dave could only take care of the backports for the last two stable releases. For earlier stable releases, each stable branch maintainer is supposed to take care of them. If you find any patch is missing from an earlier stable branch, please notify [email protected] with either a commit ID or a formal patch backported, and CC Dave and other relevant networking developers. -Q: Is the comment style convention different for the networking content? ------------------------------------------------------------------------- -A: Yes, in a largely trivial way. Instead of this:: +Is the comment style convention different for the networking content? +--------------------------------------------------------------------- +Yes, in a largely trivial way. Instead of this:: /* * foobar blah blah blah @@ -231,32 +225,30 @@ it is requested that you make it look like this:: * another line of text */ -Q: I am working in existing code that has the former comment style and not the latter. --------------------------------------------------------------------------------------- -Q: Should I submit new code in the former style or the latter? -A: Make it the latter style, so that eventually all code in the domain +I am working in existing code that has the former comment style and not the latter. Should I submit new code in the former style or the latter? 
+----------------------------------------------------------------------------------------------------------------------------------------------- +Make it the latter style, so that eventually all code in the domain of netdev is of this format. -Q: I found a bug that might have possible security implications or similar. ---------------------------------------------------------------------------- -Q: Should I mail the main netdev maintainer off-list?** -A: No. The current netdev maintainer has consistently requested that +I found a bug that might have possible security implications or similar. Should I mail the main netdev maintainer off-list? +--------------------------------------------------------------------------------------------------------------------------- +No. The current netdev maintainer has consistently requested that people use the mailing lists and not reach out directly. If you aren't OK with that, then perhaps consider mailing [email protected] or reading about http://oss-security.openwall.org/wiki/mailing-lists/distros as possible alternative mechanisms. -Q: What level of testing is expected before I submit my change? ---------------------------------------------------------------- -A: If your changes are against ``net-next``, the expectation is that you +What level of testing is expected before I submit my change? +------------------------------------------------------------ +If your changes are against ``net-next``, the expectation is that you have tested by layering your changes on top of ``net-next``. Ideally you will have done run-time testing specific to your change, but at a minimum, your changes should survive an ``allyesconfig`` and an ``allmodconfig`` build without new warnings or failures. -Q: How do I post corresponding changes to user space components? ----------------------------------------------------------------- -A: User space code exercising kernel features should be posted +How do I post corresponding changes to user space components? +------------------------------------------------------------- +User space code exercising kernel features should be posted alongside kernel patches. This gives reviewers a chance to see how any new interface is used and how well it works. @@ -280,9 +272,9 @@ to the mailing list, e.g.:: Posting as one thread is discouraged because it confuses patchwork (as of patchwork 2.2.2). -Q: Any other tips to help ensure my net/net-next patch gets OK'd? ------------------------------------------------------------------ -A: Attention to detail. Re-read your own work as if you were the +Any other tips to help ensure my net/net-next patch gets OK'd? +-------------------------------------------------------------- +Attention to detail. Re-read your own work as if you were the reviewer. You can start with using ``checkpatch.pl``, perhaps even with the ``--strict`` flag. But do not be mindlessly robotic in doing so. If your change is a bug fix, make sure your commit log indicates the diff --git a/Documentation/networking/netdevices.rst b/Documentation/networking/netdevices.rst index 5a85fcc80c76..17bdcb746dcf 100644 --- a/Documentation/networking/netdevices.rst +++ b/Documentation/networking/netdevices.rst @@ -10,18 +10,177 @@ Introduction The following is a random collection of documentation regarding network devices. 
-struct net_device allocation rules
-==================================
+struct net_device lifetime rules
+================================
 
 Network device structures need to persist even after module is unloaded and
 must be allocated with alloc_netdev_mqs() and friends.
 If device has registered successfully, it will be freed on last use
-by free_netdev(). This is required to handle the pathologic case cleanly
-(example: rmmod mydriver </sys/class/net/myeth/mtu )
+by free_netdev(). This is required to handle the pathological case cleanly
+(example: ``rmmod mydriver </sys/class/net/myeth/mtu``)
 
-alloc_netdev_mqs()/alloc_netdev() reserve extra space for driver
+alloc_netdev_mqs() / alloc_netdev() reserve extra space for driver
 private data which gets freed when the network device is freed. If
 separately allocated data is attached to the network device
-(netdev_priv(dev)) then it is up to the module exit handler to free that.
+(netdev_priv()) then it is up to the module exit handler to free that.
+
+There are two groups of APIs for registering struct net_device.
+The first group can be used in normal contexts where ``rtnl_lock`` is not
+already held: register_netdev(), unregister_netdev().
+The second group can be used when ``rtnl_lock`` is already held:
+register_netdevice(), unregister_netdevice(), free_netdev().
+
+Simple drivers
+--------------
+
+Most drivers (especially device drivers) handle the lifetime of struct
+net_device in contexts where ``rtnl_lock`` is not held (e.g. driver probe
+and remove paths).
+
+In that case the struct net_device registration is done using
+the register_netdev() and unregister_netdev() functions:
+
+.. code-block:: c
+
+    int probe()
+    {
+        struct net_device *dev;
+        struct my_device_priv *priv;
+        int err;
+
+        dev = alloc_netdev_mqs(...);
+        if (!dev)
+            return -ENOMEM;
+        priv = netdev_priv(dev);
+
+        /* ... do all device setup before calling register_netdev() ...
+         */
+
+        err = register_netdev(dev);
+        if (err)
+            goto err_undo;
+
+        /* net_device is visible to the user! */
+
+      err_undo:
+        /* ... undo the device setup ... */
+        free_netdev(dev);
+        return err;
+    }
+
+    void remove()
+    {
+        /* dev recovered from e.g. the driver's private data */
+        unregister_netdev(dev);
+        free_netdev(dev);
+    }
+
+Note that after calling register_netdev() the device is visible in the system.
+Users can open it and start sending / receiving traffic immediately,
+or run any other callback, so all initialization must be done prior to
+registration.
+
+unregister_netdev() closes the device and waits for all users to be done
+with it. The memory of struct net_device itself may still be referenced
+by sysfs but all operations on that device will fail.
+
+free_netdev() can be called after unregister_netdev() returns or when
+register_netdev() failed.
+
+Device management under RTNL
+----------------------------
+
+Registering struct net_device while in a context which already holds
+the ``rtnl_lock`` requires extra care. In those scenarios most drivers
+will want to make use of struct net_device's ``needs_free_netdev``
+and ``priv_destructor`` members for freeing of state.
+
+Example flow of netdev handling under ``rtnl_lock``:
+
+.. code-block:: c
+
+    static void my_setup(struct net_device *dev)
+    {
+        dev->needs_free_netdev = true;
+    }
+
+    static void my_destructor(struct net_device *dev)
+    {
+        struct my_device_priv *priv = netdev_priv(dev);
+
+        some_obj_destroy(priv->obj);
+        some_uninit(priv);
+    }
+
+    int create_link()
+    {
+        struct net_device *dev;
+        struct my_device_priv *priv;
+        int err;
+
+        ASSERT_RTNL();
+
+        dev = alloc_netdev(sizeof(*priv), "net%d", NET_NAME_UNKNOWN, my_setup);
+        if (!dev)
+            return -ENOMEM;
+        priv = netdev_priv(dev);
+
+        /* Implicit constructor */
+        err = some_init(priv);
+        if (err)
+            goto err_free_dev;
+
+        priv->obj = some_obj_create();
+        if (!priv->obj) {
+            err = -ENOMEM;
+            goto err_some_uninit;
+        }
+        /* End of constructor, set the destructor: */
+        dev->priv_destructor = my_destructor;
+
+        err = register_netdevice(dev);
+        if (err)
+            /* register_netdevice() calls destructor on failure */
+            goto err_free_dev;
+
+        /* If anything fails now, unregister_netdevice() (or unregister_netdev())
+         * will take care of calling my_destructor and free_netdev().
+         */
+
+        return 0;
+
+      err_some_uninit:
+        some_uninit(priv);
+      err_free_dev:
+        free_netdev(dev);
+        return err;
+    }
+
+If struct net_device.priv_destructor is set it will be called by the core
+some time after unregister_netdevice(); it will also be called if
+register_netdevice() fails. The callback may be invoked with or without
+``rtnl_lock`` held.
+
+There is no explicit constructor callback; the driver "constructs" the private
+netdev state after allocating it and before registration.
+
+Setting struct net_device.needs_free_netdev makes the core call free_netdev()
+automatically after unregister_netdevice() when all references to the device
+are gone. It only takes effect after a successful call to register_netdevice(),
+so if register_netdevice() fails the driver is responsible for calling
+free_netdev().
+
+free_netdev() is safe to call on error paths right after unregister_netdevice()
+or when register_netdevice() fails. Parts of the netdev (de)registration
+process happen only after ``rtnl_lock`` is released, so when called under
+``rtnl_lock`` free_netdev() will defer some of that processing until
+``rtnl_lock`` is released.
+
+Devices spawned from struct rtnl_link_ops should never free the
+struct net_device directly.
+
+.ndo_init and .ndo_uninit
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``.ndo_init`` and ``.ndo_uninit`` callbacks are called during net_device
+registration and de-registration, under ``rtnl_lock``. Drivers can use
+those e.g. when parts of their init process need to run under ``rtnl_lock``.
+
+``.ndo_init`` runs before the device is visible in the system; ``.ndo_uninit``
+runs during de-registration, after the device is closed, but other subsystems
+may still have outstanding references to the netdevice.
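+
+As an illustration, a minimal sketch of wiring these callbacks up, reusing
+the hypothetical ``some_obj_*`` helpers from the example above (this is not
+a real in-tree driver):
+
+.. code-block:: c
+
+    static int my_ndo_init(struct net_device *dev)
+    {
+        struct my_device_priv *priv = netdev_priv(dev);
+
+        /* Runs under rtnl_lock, before the device is visible. */
+        priv->obj = some_obj_create();
+        return priv->obj ? 0 : -ENOMEM;
+    }
+
+    static void my_ndo_uninit(struct net_device *dev)
+    {
+        struct my_device_priv *priv = netdev_priv(dev);
+
+        /* Runs under rtnl_lock during de-registration; other subsystems
+         * may still hold references to the netdevice at this point.
+         */
+        some_obj_destroy(priv->obj);
+    }
+
+    static const struct net_device_ops my_netdev_ops = {
+        .ndo_init   = my_ndo_init,
+        .ndo_uninit = my_ndo_uninit,
+    };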
 
 MTU
 ===
@@ -64,8 +223,8 @@ ndo_do_ioctl:
 	Context: process
 
 ndo_get_stats:
-	Synchronization: dev_base_lock rwlock.
-	Context: nominally process, but don't sleep inside an rwlock
+	Synchronization: rtnl_lock() semaphore, dev_base_lock rwlock, or RCU.
+	Context: atomic (can't sleep under rwlock or RCU)
 
 ndo_start_xmit:
 	Synchronization: __netif_tx_lock spinlock.
diff --git a/Documentation/networking/packet_mmap.rst b/Documentation/networking/packet_mmap.rst
index 6c009ceb1183..500ef60b1b82 100644
--- a/Documentation/networking/packet_mmap.rst
+++ b/Documentation/networking/packet_mmap.rst
@@ -8,7 +8,7 @@ Abstract
 ========
 
 This file documents the mmap() facility available with the PACKET
-socket interface on 2.4/2.6/3.x kernels. This type of sockets is used for
+socket interface. This type of socket is used for
 
 i)   capture network traffic with utilities like tcpdump,
 ii)  transmit network traffic, or any other that needs raw
@@ -25,12 +25,12 @@ Please send your comments to
 Why use PACKET_MMAP
 ===================
 
-In Linux 2.4/2.6/3.x if PACKET_MMAP is not enabled, the capture process is very
+A non PACKET_MMAP capture process (plain AF_PACKET) is very
 inefficient. It uses very limited buffers and requires one system call to
 capture each packet, it requires two if you want to get packet's timestamp
 (like libpcap always does).
 
-In the other hand PACKET_MMAP is very efficient. PACKET_MMAP provides a size
+On the other hand PACKET_MMAP is very efficient. PACKET_MMAP provides a size
 configurable circular buffer mapped in user space that can be used to either
 send or receive packets. This way reading packets just needs to wait for them,
 most of the time there is no need to issue a single system call. Concerning
@@ -252,8 +252,7 @@ PACKET_MMAP setting constraints
 
 In kernel versions prior to 2.4.26 (for the 2.4 branch) and 2.6.5 (2.6 branch),
 the PACKET_MMAP buffer could hold only 32768 frames in a 32 bit architecture or
-16384 in a 64 bit architecture. For information on these kernel versions
-see http://pusa.uv.es/~ulisses/packet_mmap/packet_mmap.pre-2.4.26_2.6.5.txt
+16384 in a 64 bit architecture.
 
 Block size limit
 ----------------
@@ -437,7 +436,7 @@ and the following flags apply:
 Capture process
 ^^^^^^^^^^^^^^^
 
-     from include/linux/if_packet.h
+From include/linux/if_packet.h::
 
      #define TP_STATUS_COPY          (1 << 1)
      #define TP_STATUS_LOSING        (1 << 2)
diff --git a/Documentation/networking/tls-offload.rst b/Documentation/networking/tls-offload.rst
index 0f55c6d540f9..5f0dea3d571e 100644
--- a/Documentation/networking/tls-offload.rst
+++ b/Documentation/networking/tls-offload.rst
@@ -530,7 +530,10 @@ TLS device feature flags only control adding of new TLS connection
 offloads, old connections will remain active after flags are cleared.
 
 TLS encryption cannot be offloaded to devices without checksum calculation
-offload. Hence, TLS TX device feature flag requires NETIF_F_HW_CSUM being set.
+offload. Hence, the TLS TX device feature flag requires TX csum offload to be set.
 Disabling the latter implies clearing the former. Disabling TX checksum offload
 should not affect old connections, and drivers should make sure checksum
 calculation does not break for them.
+Similarly, device-offloaded TLS decryption implies enabling RXCSUM. If the user
+does not want to enable RX csum offload, the TLS RX device feature is disabled
+as well.
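+
+As an illustration, this dependency can be expressed as a feature-fixup
+check in the style of an ``ndo_fix_features`` handler. This is a sketch,
+not necessarily the exact in-tree logic, and ``tls_fix_features`` is a
+made-up name:
+
+.. code-block:: c
+
+    static netdev_features_t tls_fix_features(struct net_device *dev,
+                                              netdev_features_t features)
+    {
+        netdev_features_t tx_csum = NETIF_F_HW_CSUM | NETIF_F_IP_CSUM |
+                                    NETIF_F_IPV6_CSUM;
+
+        /* TLS TX offload depends on TX checksum offload. */
+        if ((features & NETIF_F_HW_TLS_TX) && !(features & tx_csum))
+            features &= ~NETIF_F_HW_TLS_TX;
+
+        /* TLS RX offload depends on receive checksum offload. */
+        if ((features & NETIF_F_HW_TLS_RX) && !(features & NETIF_F_RXCSUM))
+            features &= ~NETIF_F_HW_TLS_RX;
+
+        return features;
+    }
diff --git a/Documentation/process/4.Coding.rst b/Documentation/process/4.Coding.rst
index c27e59d2f702..0825dc496f22 100644
--- a/Documentation/process/4.Coding.rst
+++ b/Documentation/process/4.Coding.rst
@@ -249,10 +249,8 @@ features; most of these are found in the "kernel hacking" submenu. Several
 of these options should be turned on for any kernel used for development or
 testing purposes. In particular, you should turn on:
 
- - ENABLE_MUST_CHECK and FRAME_WARN to get an
-   extra set of warnings for problems like the use of deprecated interfaces
-   or ignoring an important return value from a function. The output
-   generated by these warnings can be verbose, but one need not worry about
+ - FRAME_WARN to get warnings for stack frames larger than a given amount.
+   The output generated can be verbose, but one need not worry about
    warnings from other parts of the kernel.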
- DEBUG_OBJECTS will add code to track the lifetime of various objects diff --git a/Documentation/sound/alsa-configuration.rst b/Documentation/sound/alsa-configuration.rst index fe52c314b763..b36af65a08ed 100644 --- a/Documentation/sound/alsa-configuration.rst +++ b/Documentation/sound/alsa-configuration.rst @@ -1501,7 +1501,7 @@ Module for Digigram miXart8 sound cards. This module supports multiple cards. Note: One miXart8 board will be represented as 4 alsa cards. -See MIXART.txt for details. +See Documentation/sound/cards/mixart.rst for details. When the driver is compiled as a module and the hotplug firmware is supported, the firmware data is loaded via hotplug automatically. diff --git a/Documentation/sound/kernel-api/writing-an-alsa-driver.rst b/Documentation/sound/kernel-api/writing-an-alsa-driver.rst index 73bbd59afc33..e6365836fa8b 100644 --- a/Documentation/sound/kernel-api/writing-an-alsa-driver.rst +++ b/Documentation/sound/kernel-api/writing-an-alsa-driver.rst @@ -71,7 +71,7 @@ core/oss The codes for PCM and mixer OSS emulation modules are stored in this directory. The rawmidi OSS emulation is included in the ALSA rawmidi code since it's quite small. The sequencer code is stored in -``core/seq/oss`` directory (see `below <#core-seq-oss>`__). +``core/seq/oss`` directory (see `below <core/seq/oss_>`__). core/seq ~~~~~~~~ @@ -382,7 +382,7 @@ where ``enable[dev]`` is the module option. Each time the ``probe`` callback is called, check the availability of the device. If not available, simply increment the device index and returns. dev will be incremented also later (`step 7 -<#set-the-pci-driver-data-and-return-zero>`__). +<7) Set the PCI driver data and return zero._>`__). 2) Create a card instance ~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -450,10 +450,10 @@ field contains the information shown in ``/proc/asound/cards``. 5) Create other components, such as mixer, MIDI, etc. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Here you define the basic components such as `PCM <#PCM-Interface>`__, -mixer (e.g. `AC97 <#API-for-AC97-Codec>`__), MIDI (e.g. -`MPU-401 <#MIDI-MPU401-UART-Interface>`__), and other interfaces. -Also, if you want a `proc file <#Proc-Interface>`__, define it here, +Here you define the basic components such as `PCM <PCM Interface_>`__, +mixer (e.g. `AC97 <API for AC97 Codec_>`__), MIDI (e.g. +`MPU-401 <MIDI (MPU401-UART) Interface_>`__), and other interfaces. +Also, if you want a `proc file <Proc Interface_>`__, define it here, too. 6) Register the card instance. @@ -941,7 +941,7 @@ The allocation of an interrupt source is done like this: chip->irq = pci->irq; where :c:func:`snd_mychip_interrupt()` is the interrupt handler -defined `later <#pcm-interface-interrupt-handler>`__. Note that +defined `later <PCM Interrupt Handler_>`__. Note that ``chip->irq`` should be defined only when :c:func:`request_irq()` succeeded. @@ -3104,7 +3104,7 @@ processing the output stream in the irq handler. If the MPU-401 interface shares its interrupt with the other logical devices on the card, set ``MPU401_INFO_IRQ_HOOK`` (see -`below <#MIDI-Interrupt-Handler>`__). +`below <MIDI Interrupt Handler_>`__). Usually, the port address corresponds to the command port and port + 1 corresponds to the data port. If not, you may change the ``cport`` diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 70254eaa5229..c136e254b496 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -392,9 +392,14 @@ This ioctl is obsolete and has been removed. 
Errors: - ===== ============================= + ======= ============================================================== EINTR an unmasked signal is pending - ===== ============================= + ENOEXEC the vcpu hasn't been initialized or the guest tried to execute + instructions from device memory (arm64) + ENOSYS data abort outside memslots with no syndrome info and + KVM_CAP_ARM_NISV_TO_USER not enabled (arm64) + EPERM SVE feature set but not finalized (arm64) + ======= ============================================================== This ioctl is used to run a guest virtual cpu. While there are no explicit parameters, there is an implicit parameter block that can be diff --git a/MAINTAINERS b/MAINTAINERS index fd386791e267..d6dd31cb5ad1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -203,8 +203,8 @@ F: include/uapi/linux/nl80211.h F: net/wireless/ 8169 10/100/1000 GIGABIT ETHERNET DRIVER -M: Realtek linux nic maintainers <[email protected]> M: Heiner Kallweit <[email protected]> S: Maintained F: drivers/net/ethernet/realtek/r8169* @@ -820,7 +820,6 @@ M: Netanel Belgazal <[email protected]> M: Arthur Kiyanovski <[email protected]> R: Guy Tzalik <[email protected]> R: Saeed Bishara <[email protected]> -R: Zorik Machulsky <[email protected]> S: Supported F: Documentation/networking/device_drivers/ethernet/amazon/ena.rst @@ -2119,7 +2118,7 @@ N: atmel ARM/Microchip Sparx5 SoC support M: Lars Povlsen <[email protected]> M: Steen Hegelund <[email protected]> -M: Microchip Linux Driver Support <[email protected]> L: [email protected] (moderated for non-subscribers) S: Supported T: git git://github.com/microchip-ung/linux-upstream.git @@ -2942,7 +2941,6 @@ S: Maintained F: drivers/hwmon/asus_atk0110.c ATLX ETHERNET DRIVERS -M: Jay Cliburn <[email protected]> M: Chris Snook <[email protected]> S: Maintained @@ -3336,7 +3334,7 @@ F: arch/riscv/net/ X: arch/riscv/net/bpf_jit_comp64.c BPF JIT for RISC-V (64-bit) -M: Björn Töpel <[email protected]> +M: Björn Töpel <[email protected]> S: Maintained @@ -3556,7 +3554,7 @@ S: Supported F: drivers/net/ethernet/broadcom/bnxt/ BROADCOM BRCM80211 IEEE802.11n WIRELESS DRIVER -M: Arend van Spriel <[email protected]> +M: Arend van Spriel <[email protected]> M: Franky Lin <[email protected]> M: Hante Meuleman <[email protected]> M: Chi-hsien Lin <[email protected]> @@ -3881,9 +3879,9 @@ F: Documentation/devicetree/bindings/mtd/cadence-nand-controller.txt F: drivers/mtd/nand/raw/cadence-nand-controller.c CADENCE USB3 DRD IP DRIVER -M: Peter Chen <[email protected]> +M: Peter Chen <[email protected]> M: Pawel Laszczak <[email protected]> -M: Roger Quadros <[email protected]> +R: Roger Quadros <[email protected]> R: Aswath Govindraju <[email protected]> S: Maintained @@ -3961,7 +3959,7 @@ F: net/can/ CAN-J1939 NETWORK LAYER M: Robin van der Gracht <[email protected]> M: Oleksij Rempel <[email protected]> -R: Pengutronix Kernel Team <[email protected]> S: Maintained F: Documentation/networking/j1939.rst @@ -4163,7 +4161,7 @@ S: Maintained F: Documentation/translations/zh_CN/ CHIPIDEA USB HIGH SPEED DUAL ROLE CONTROLLER -M: Peter Chen <[email protected]> +M: Peter Chen <[email protected]> S: Maintained T: git git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb.git @@ -4313,7 +4311,9 @@ W: https://clangbuiltlinux.github.io/ B: https://github.com/ClangBuiltLinux/linux/issues C: irc://chat.freenode.net/clangbuiltlinux F: Documentation/kbuild/llvm.rst +F: include/linux/compiler-clang.h F: scripts/clang-tools/ +F: scripts/clang-version.sh F: scripts/lld-version.sh 
K: \b(?i:clang|llvm)\b @@ -4922,9 +4922,8 @@ F: Documentation/scsi/dc395x.rst F: drivers/scsi/dc395x.* DCCP PROTOCOL -M: Gerrit Renker <[email protected]> -S: Maintained +S: Orphan W: http://www.linuxfoundation.org/collaborate/workgroups/networking/dccp F: include/linux/dccp.h F: include/linux/tfrc.h @@ -7364,7 +7363,6 @@ L: [email protected] S: Maintained F: Documentation/kbuild/gcc-plugins.rst F: scripts/Makefile.gcc-plugins -F: scripts/gcc-plugin.sh F: scripts/gcc-plugins/ GCOV BASED KERNEL PROFILING @@ -9241,7 +9239,7 @@ F: tools/testing/selftests/sgx/* K: \bSGX_ INTERCONNECT API -M: Georgi Djakov <[email protected]> +M: Georgi Djakov <[email protected]> S: Maintained F: Documentation/devicetree/bindings/interconnect/ @@ -9274,7 +9272,7 @@ F: drivers/net/ethernet/sgi/ioc3-eth.c IOMAP FILESYSTEM LIBRARY M: Christoph Hellwig <[email protected]> -M: Darrick J. Wong <[email protected]> +M: Darrick J. Wong <[email protected]> @@ -9328,7 +9326,6 @@ W: http://www.adaptec.com/ F: drivers/scsi/ips* IPVS -M: Wensong Zhang <[email protected]> M: Simon Horman <[email protected]> M: Julian Anastasov <[email protected]> @@ -9777,7 +9774,7 @@ F: tools/testing/selftests/kvm/s390x/ KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86) M: Paolo Bonzini <[email protected]> -R: Sean Christopherson <[email protected]> +R: Sean Christopherson <[email protected]> R: Vitaly Kuznetsov <[email protected]> R: Wanpeng Li <[email protected]> R: Jim Mattson <[email protected]> @@ -10261,7 +10258,6 @@ S: Supported T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev F: Documentation/atomic_bitops.txt F: Documentation/atomic_t.txt -F: Documentation/core-api/atomic_ops.rst F: Documentation/core-api/refcount-vs-atomic.rst F: Documentation/litmus-tests/ F: Documentation/memory-barriers.txt @@ -10848,7 +10844,7 @@ F: drivers/media/radio/radio-maxiradio* MCAN MMIO DEVICE DRIVER M: Dan Murphy <[email protected]> -M: Sriram Dash <[email protected]> +M: Pankaj Sharma <[email protected]> S: Maintained F: Documentation/devicetree/bindings/net/can/bosch,m_can.yaml @@ -11668,7 +11664,7 @@ F: drivers/media/platform/atmel/atmel-isi.h MICROCHIP KSZ SERIES ETHERNET SWITCH DRIVER M: Woojung Huh <[email protected]> -M: Microchip Linux Driver Support <[email protected]> S: Maintained F: Documentation/devicetree/bindings/net/dsa/microchip,ksz.yaml @@ -11678,7 +11674,7 @@ F: net/dsa/tag_ksz.c MICROCHIP LAN743X ETHERNET DRIVER M: Bryan Whitehead <[email protected]> -M: Microchip Linux Driver Support <[email protected]> S: Maintained F: drivers/net/ethernet/microchip/lan743x_* @@ -11772,7 +11768,7 @@ F: drivers/net/wireless/microchip/wilc1000/ MICROSEMI MIPS SOCS M: Alexandre Belloni <[email protected]> -M: Microchip Linux Driver Support <[email protected]> S: Supported F: Documentation/devicetree/bindings/mips/mscc.txt @@ -12419,7 +12415,6 @@ F: tools/testing/selftests/net/ipsec.c NETWORKING [IPv4/IPv6] M: "David S. 
Miller" <[email protected]> -M: Alexey Kuznetsov <[email protected]> M: Hideaki YOSHIFUJI <[email protected]> S: Maintained @@ -12476,7 +12471,6 @@ F: net/ipv6/tcp*.c NETWORKING [TLS] M: Boris Pismenny <[email protected]> -M: Aviad Yehezkel <[email protected]> M: John Fastabend <[email protected]> M: Daniel Borkmann <[email protected]> M: Jakub Kicinski <[email protected]> @@ -12826,10 +12820,10 @@ F: tools/objtool/ F: include/linux/objtool.h OCELOT ETHERNET SWITCH DRIVER -M: Microchip Linux Driver Support <[email protected]> M: Vladimir Oltean <[email protected]> M: Claudiu Manoil <[email protected]> M: Alexandre Belloni <[email protected]> S: Supported F: drivers/net/dsa/ocelot/* @@ -12851,7 +12845,7 @@ F: include/misc/ocxl* F: include/uapi/misc/ocxl.h OMAP AUDIO SUPPORT -M: Peter Ujfalusi <[email protected]> +M: Peter Ujfalusi <[email protected]> M: Jarkko Nikula <[email protected]> L: [email protected] (moderated for non-subscribers) @@ -13891,7 +13885,7 @@ F: drivers/platform/x86/peaq-wmi.c PENSANDO ETHERNET DRIVERS M: Shannon Nelson <[email protected]> -M: Pensando Drivers <[email protected]> S: Supported F: Documentation/networking/device_drivers/ethernet/pensando/ionic.rst @@ -14513,10 +14507,18 @@ S: Supported F: drivers/crypto/qat/ QCOM AUDIO (ASoC) DRIVERS -M: Patrick Lai <[email protected]> +M: Srinivas Kandagatla <[email protected]> M: Banajit Goswami <[email protected]> L: [email protected] (moderated for non-subscribers) S: Supported +F: sound/soc/codecs/lpass-va-macro.c +F: sound/soc/codecs/lpass-wsa-macro.* +F: sound/soc/codecs/msm8916-wcd-analog.c +F: sound/soc/codecs/msm8916-wcd-digital.c +F: sound/soc/codecs/wcd9335.* +F: sound/soc/codecs/wcd934x.c +F: sound/soc/codecs/wcd-clsh-v2.* +F: sound/soc/codecs/wsa881x.c F: sound/soc/qcom/ QCOM IPA DRIVER @@ -14670,7 +14672,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git F: drivers/net/wireless/ath/ath11k/ QUALCOMM ATHEROS ATH9K WIRELESS DRIVER -M: QCA ath9k Development <[email protected]> S: Supported W: https://wireless.wiki.kernel.org/en/users/Drivers/ath9k @@ -16322,6 +16324,7 @@ M: Pekka Enberg <[email protected]> M: David Rientjes <[email protected]> M: Joonsoo Kim <[email protected]> M: Andrew Morton <[email protected]> +M: Vlastimil Babka <[email protected]> S: Maintained F: include/linux/sl?b*.h @@ -16711,6 +16714,8 @@ M: Samuel Thibault <[email protected]> S: Odd Fixes W: http://www.linux-speakup.org/ +W: https://github.com/linux-speakup/speakup +B: https://github.com/linux-speakup/speakup/issues F: drivers/accessibility/speakup/ SPEAR CLOCK FRAMEWORK SUPPORT @@ -16965,7 +16970,7 @@ M: Olivier Moysan <[email protected]> M: Arnaud Pouliquen <[email protected]> L: [email protected] (moderated for non-subscribers) S: Maintained -F: Documentation/devicetree/bindings/sound/st,stm32-*.txt +F: Documentation/devicetree/bindings/iio/adc/st,stm32-*.yaml F: sound/soc/stm/ STM32 TIMER/LPTIMER DRIVERS @@ -17542,7 +17547,7 @@ F: arch/xtensa/ F: drivers/irqchip/irq-xtensa-* TEXAS INSTRUMENTS ASoC DRIVERS -M: Peter Ujfalusi <[email protected]> +M: Peter Ujfalusi <[email protected]> L: [email protected] (moderated for non-subscribers) S: Maintained F: sound/soc/ti/ @@ -17554,6 +17559,19 @@ S: Supported F: Documentation/devicetree/bindings/iio/dac/ti,dac7612.txt F: drivers/iio/dac/ti-dac7612.c +TEXAS INSTRUMENTS DMA DRIVERS +M: Peter Ujfalusi <[email protected]> +S: Maintained +F: Documentation/devicetree/bindings/dma/ti-dma-crossbar.txt +F: Documentation/devicetree/bindings/dma/ti-edma.txt +F: 
Documentation/devicetree/bindings/dma/ti/ +F: drivers/dma/ti/ +X: drivers/dma/ti/cppi41.c +F: include/linux/dma/k3-udma-glue.h +F: include/linux/dma/ti-cppi5.h +F: include/linux/dma/k3-psil.h + TEXAS INSTRUMENTS' SYSTEM CONTROL INTERFACE (TISCI) PROTOCOL DRIVER M: Nishanth Menon <[email protected]> M: Tero Kristo <[email protected]> @@ -17839,7 +17857,7 @@ F: Documentation/devicetree/bindings/net/nfc/trf7970a.txt F: drivers/nfc/trf7970a.c TI TWL4030 SERIES SOC CODEC DRIVER -M: Peter Ujfalusi <[email protected]> +M: Peter Ujfalusi <[email protected]> L: [email protected] (moderated for non-subscribers) S: Maintained F: sound/soc/codecs/twl4030* @@ -18371,7 +18389,7 @@ F: include/linux/usb/isp116x.h USB LAN78XX ETHERNET DRIVER M: Woojung Huh <[email protected]> -M: Microchip Linux Driver Support <[email protected]> S: Maintained F: Documentation/devicetree/bindings/net/microchip,lan78xx.txt @@ -18405,7 +18423,7 @@ F: Documentation/usb/ohci.rst F: drivers/usb/host/ohci* USB OTG FSM (Finite State Machine) -M: Peter Chen <[email protected]> +M: Peter Chen <[email protected]> S: Maintained T: git git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb.git @@ -18485,7 +18503,7 @@ F: drivers/net/usb/smsc75xx.* USB SMSC95XX ETHERNET DRIVER M: Steve Glendinning <[email protected]> -M: Microchip Linux Driver Support <[email protected]> S: Maintained F: drivers/net/usb/smsc95xx.* @@ -19032,7 +19050,7 @@ F: drivers/input/mouse/vmmouse.h VMWARE VMXNET3 ETHERNET DRIVER M: Ronak Doshi <[email protected]> -M: "VMware, Inc." <[email protected]> S: Maintained F: drivers/net/vmxnet3/ @@ -19059,7 +19077,6 @@ K: regulator_get_optional VRF M: David Ahern <[email protected]> -M: Shrijeet Mukherjee <[email protected]> S: Maintained F: Documentation/networking/vrf.rst @@ -19410,7 +19427,7 @@ F: drivers/net/ethernet/*/*/*xdp* K: (?:\b|_)xdp(?:\b|_) XDP SOCKETS (AF_XDP) -M: Björn Töpel <[email protected]> +M: Björn Töpel <[email protected]> M: Magnus Karlsson <[email protected]> R: Jonathan Lemon <[email protected]> @@ -19506,7 +19523,7 @@ F: arch/x86/xen/*swiotlb* F: drivers/xen/*swiotlb* XFS FILESYSTEM -M: Darrick J. Wong <[email protected]> +M: Darrick J. Wong <[email protected]> S: Supported @@ -2,7 +2,7 @@ VERSION = 5 PATCHLEVEL = 11 SUBLEVEL = 0 -EXTRAVERSION = -rc2 +EXTRAVERSION = -rc5 NAME = Kleptomaniac Octopus # *DOCUMENTATION* diff --git a/arch/Kconfig b/arch/Kconfig index 78c6f05b10f9..24862d15f3a3 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -1105,6 +1105,12 @@ config HAVE_ARCH_PFN_VALID config ARCH_SUPPORTS_DEBUG_PAGEALLOC bool +config ARCH_SPLIT_ARG64 + bool + help + If a 32-bit architecture requires 64-bit arguments to be split into + pairs of 32-bit arguments, select this option. + source "kernel/gcov/Kconfig" source "scripts/gcc-plugins/Kconfig" diff --git a/arch/arc/Makefile b/arch/arc/Makefile index 0c6bf0d1df7a..578bdbbb0fa7 100644 --- a/arch/arc/Makefile +++ b/arch/arc/Makefile @@ -102,16 +102,22 @@ libs-y += arch/arc/lib/ $(LIBGCC) boot := arch/arc/boot -#default target for make without any arguments. 
-KBUILD_IMAGE := $(boot)/bootpImage - -all: bootpImage -bootpImage: vmlinux - -boot_targets += uImage uImage.bin uImage.gz +boot_targets := uImage.bin uImage.gz uImage.lzma +PHONY += $(boot_targets) $(boot_targets): vmlinux $(Q)$(MAKE) $(build)=$(boot) $(boot)/$@ +uimage-default-y := uImage.bin +uimage-default-$(CONFIG_KERNEL_GZIP) := uImage.gz +uimage-default-$(CONFIG_KERNEL_LZMA) := uImage.lzma + +PHONY += uImage +uImage: $(uimage-default-y) + @ln -sf $< $(boot)/uImage + @$(kecho) ' Image $(boot)/uImage is ready' + +CLEAN_FILES += $(boot)/uImage + archclean: $(Q)$(MAKE) $(clean)=$(boot) diff --git a/arch/arc/boot/Makefile b/arch/arc/boot/Makefile index 538b92f4dd25..5648748c285f 100644 --- a/arch/arc/boot/Makefile +++ b/arch/arc/boot/Makefile @@ -1,5 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 -targets := vmlinux.bin vmlinux.bin.gz uImage # uImage build relies on mkimage being availble on your host for ARC target # You will need to build u-boot for ARC, rename mkimage to arc-elf32-mkimage @@ -7,23 +6,18 @@ targets := vmlinux.bin vmlinux.bin.gz uImage OBJCOPYFLAGS= -O binary -R .note -R .note.gnu.build-id -R .comment -S -LINUX_START_TEXT = $$(readelf -h vmlinux | \ +LINUX_START_TEXT = $$($(READELF) -h vmlinux | \ grep "Entry point address" | grep -o 0x.*) UIMAGE_LOADADDR = $(CONFIG_LINUX_LINK_BASE) UIMAGE_ENTRYADDR = $(LINUX_START_TEXT) -suffix-y := bin -suffix-$(CONFIG_KERNEL_GZIP) := gz -suffix-$(CONFIG_KERNEL_LZMA) := lzma - -targets += uImage +targets += vmlinux.bin +targets += vmlinux.bin.gz +targets += vmlinux.bin.lzma targets += uImage.bin targets += uImage.gz targets += uImage.lzma -extra-y += vmlinux.bin -extra-y += vmlinux.bin.gz -extra-y += vmlinux.bin.lzma $(obj)/vmlinux.bin: vmlinux FORCE $(call if_changed,objcopy) @@ -42,7 +36,3 @@ $(obj)/uImage.gz: $(obj)/vmlinux.bin.gz FORCE $(obj)/uImage.lzma: $(obj)/vmlinux.bin.lzma FORCE $(call if_changed,uimage,lzma) - -$(obj)/uImage: $(obj)/uImage.$(suffix-y) - @ln -sf $(notdir $<) $@ - @echo ' Image $@ is ready' diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h index 23e41e890eda..ad9b7fe4dba3 100644 --- a/arch/arc/include/asm/page.h +++ b/arch/arc/include/asm/page.h @@ -10,6 +10,7 @@ #ifndef __ASSEMBLY__ #define clear_page(paddr) memset((paddr), 0, PAGE_SIZE) +#define copy_user_page(to, from, vaddr, pg) copy_page(to, from) #define copy_page(to, from) memcpy((to), (from), PAGE_SIZE) struct vm_area_struct; diff --git a/arch/arc/kernel/entry.S b/arch/arc/kernel/entry.S index 1f5308abf36d..1743506081da 100644 --- a/arch/arc/kernel/entry.S +++ b/arch/arc/kernel/entry.S @@ -307,7 +307,7 @@ resume_user_mode_begin: mov r0, sp ; pt_regs for arg to do_signal()/do_notify_resume() GET_CURR_THR_INFO_FLAGS r9 - and.f 0, r9, TIF_SIGPENDING|TIF_NOTIFY_SIGNAL + and.f 0, r9, _TIF_SIGPENDING|_TIF_NOTIFY_SIGNAL bz .Lchk_notify_resume ; Normal Trap/IRQ entry only saves Scratch (caller-saved) regs diff --git a/arch/arc/plat-hsdk/Kconfig b/arch/arc/plat-hsdk/Kconfig index 6b5c54576f54..a2d10c29fbcc 100644 --- a/arch/arc/plat-hsdk/Kconfig +++ b/arch/arc/plat-hsdk/Kconfig @@ -7,6 +7,7 @@ menuconfig ARC_SOC_HSDK depends on ISA_ARCV2 select ARC_HAS_ACCL_REGS select ARC_IRQ_NO_AUTOSAVE + select ARC_FPU_SAVE_RESTORE select CLK_HSDK select RESET_CONTROLLER select RESET_HSDK diff --git a/arch/arm/boot/dts/omap3-n950-n9.dtsi b/arch/arm/boot/dts/omap3-n950-n9.dtsi index 11d41e86f814..7dde9fbb06d3 100644 --- a/arch/arm/boot/dts/omap3-n950-n9.dtsi +++ b/arch/arm/boot/dts/omap3-n950-n9.dtsi @@ -494,3 +494,11 @@ clock-names = "sysclk"; }; }; + 
+&aes1_target { + status = "disabled"; +}; + +&aes2_target { + status = "disabled"; +}; diff --git a/arch/arm/boot/dts/picoxcell-pc3x2.dtsi b/arch/arm/boot/dts/picoxcell-pc3x2.dtsi index c4c6c7e9e37b..5898879a3038 100644 --- a/arch/arm/boot/dts/picoxcell-pc3x2.dtsi +++ b/arch/arm/boot/dts/picoxcell-pc3x2.dtsi @@ -45,18 +45,21 @@ emac: gem@30000 { compatible = "cadence,gem"; reg = <0x30000 0x10000>; + interrupt-parent = <&vic0>; interrupts = <31>; }; dmac1: dmac@40000 { compatible = "snps,dw-dmac"; reg = <0x40000 0x10000>; + interrupt-parent = <&vic0>; interrupts = <25>; }; dmac2: dmac@50000 { compatible = "snps,dw-dmac"; reg = <0x50000 0x10000>; + interrupt-parent = <&vic0>; interrupts = <26>; }; @@ -233,6 +236,7 @@ axi2pico@c0000000 { compatible = "picochip,axi2pico-pc3x2"; reg = <0xc0000000 0x10000>; + interrupt-parent = <&vic0>; interrupts = <13 14 15 16 17 18 19 20 21>; }; }; diff --git a/arch/arm/boot/dts/ste-ux500-samsung-golden.dts b/arch/arm/boot/dts/ste-ux500-samsung-golden.dts index 496f9d3ba7b7..60fe6189e728 100644 --- a/arch/arm/boot/dts/ste-ux500-samsung-golden.dts +++ b/arch/arm/boot/dts/ste-ux500-samsung-golden.dts @@ -329,6 +329,7 @@ panel@0 { compatible = "samsung,s6e63m0"; reg = <0>; + max-brightness = <15>; vdd3-supply = <&panel_reg_3v0>; vci-supply = <&panel_reg_1v8>; reset-gpios = <&gpio4 11 GPIO_ACTIVE_LOW>; diff --git a/arch/arm/configs/omap2plus_defconfig b/arch/arm/configs/omap2plus_defconfig index e449a80d1143..20eb381e9761 100644 --- a/arch/arm/configs/omap2plus_defconfig +++ b/arch/arm/configs/omap2plus_defconfig @@ -279,6 +279,7 @@ CONFIG_SERIAL_OMAP_CONSOLE=y CONFIG_SERIAL_DEV_BUS=y CONFIG_I2C_CHARDEV=y CONFIG_SPI=y +CONFIG_SPI_GPIO=m CONFIG_SPI_OMAP24XX=y CONFIG_SPI_TI_QSPI=m CONFIG_HSI=m @@ -296,7 +297,6 @@ CONFIG_GPIO_TWL4030=y CONFIG_W1=m CONFIG_HDQ_MASTER_OMAP=m CONFIG_W1_SLAVE_DS250X=m -CONFIG_POWER_AVS=y CONFIG_POWER_RESET=y CONFIG_POWER_RESET_GPIO=y CONFIG_BATTERY_BQ27XXX=m diff --git a/arch/arm/crypto/chacha-glue.c b/arch/arm/crypto/chacha-glue.c index 7b5cf8430c6d..cdde8fd01f8f 100644 --- a/arch/arm/crypto/chacha-glue.c +++ b/arch/arm/crypto/chacha-glue.c @@ -60,6 +60,7 @@ static void chacha_doneon(u32 *state, u8 *dst, const u8 *src, chacha_block_xor_neon(state, d, s, nrounds); if (d != dst) memcpy(dst, buf, bytes); + state[12]++; } } diff --git a/arch/arm/mach-omap2/omap_device.c b/arch/arm/mach-omap2/omap_device.c index f3191704cab9..56d6814bec26 100644 --- a/arch/arm/mach-omap2/omap_device.c +++ b/arch/arm/mach-omap2/omap_device.c @@ -230,10 +230,12 @@ static int _omap_device_notifier_call(struct notifier_block *nb, break; case BUS_NOTIFY_BIND_DRIVER: od = to_omap_device(pdev); - if (od && (od->_state == OMAP_DEVICE_STATE_ENABLED) && - pm_runtime_status_suspended(dev)) { + if (od) { od->_driver_status = BUS_NOTIFY_BIND_DRIVER; - pm_runtime_set_active(dev); + if (od->_state == OMAP_DEVICE_STATE_ENABLED && + pm_runtime_status_suspended(dev)) { + pm_runtime_set_active(dev); + } } break; case BUS_NOTIFY_ADD_DEVICE: diff --git a/arch/arm/mach-omap2/pmic-cpcap.c b/arch/arm/mach-omap2/pmic-cpcap.c index eab281a5fc9f..09076ad0576d 100644 --- a/arch/arm/mach-omap2/pmic-cpcap.c +++ b/arch/arm/mach-omap2/pmic-cpcap.c @@ -71,7 +71,7 @@ static struct omap_voltdm_pmic omap_cpcap_iva = { .vp_vstepmin = OMAP4_VP_VSTEPMIN_VSTEPMIN, .vp_vstepmax = OMAP4_VP_VSTEPMAX_VSTEPMAX, .vddmin = 900000, - .vddmax = 1350000, + .vddmax = 1375000, .vp_timeout_us = OMAP4_VP_VLIMITTO_TIMEOUT_US, .i2c_slave_addr = 0x44, .volt_reg_addr = 0x0, diff --git a/arch/arm/xen/enlighten.c 
b/arch/arm/xen/enlighten.c index 60e901cd0de6..5a957a9a0984 100644 --- a/arch/arm/xen/enlighten.c +++ b/arch/arm/xen/enlighten.c @@ -371,7 +371,7 @@ static int __init xen_guest_init(void) } gnttab_init(); if (!xen_initial_domain()) - xenbus_probe(NULL); + xenbus_probe(); /* * Making sure board specific code will not set up ops for diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 05e17351e4f3..f39568b28ec1 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -174,8 +174,6 @@ config ARM64 select HAVE_NMI select HAVE_PATA_PLATFORM select HAVE_PERF_EVENTS - select HAVE_PERF_EVENTS_NMI if ARM64_PSEUDO_NMI && HW_PERF_EVENTS - select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI select HAVE_PERF_REGS select HAVE_PERF_USER_STACK_DUMP select HAVE_REGS_AND_STACK_ACCESS_API diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index 6be9b3750250..90309208bb28 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -10,7 +10,7 @@ # # Copyright (C) 1995-2001 by Russell King -LDFLAGS_vmlinux :=--no-undefined -X -z norelro +LDFLAGS_vmlinux :=--no-undefined -X ifeq ($(CONFIG_RELOCATABLE), y) # Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour @@ -115,16 +115,20 @@ KBUILD_CPPFLAGS += -mbig-endian CHECKFLAGS += -D__AARCH64EB__ # Prefer the baremetal ELF build target, but not all toolchains include # it so fall back to the standard linux version if needed. -KBUILD_LDFLAGS += -EB $(call ld-option, -maarch64elfb, -maarch64linuxb) +KBUILD_LDFLAGS += -EB $(call ld-option, -maarch64elfb, -maarch64linuxb -z norelro) UTS_MACHINE := aarch64_be else KBUILD_CPPFLAGS += -mlittle-endian CHECKFLAGS += -D__AARCH64EL__ # Same as above, prefer ELF but fall back to linux target if needed. -KBUILD_LDFLAGS += -EL $(call ld-option, -maarch64elf, -maarch64linux) +KBUILD_LDFLAGS += -EL $(call ld-option, -maarch64elf, -maarch64linux -z norelro) UTS_MACHINE := aarch64 endif +ifeq ($(CONFIG_LD_IS_LLD), y) +KBUILD_LDFLAGS += -z norelro +endif + CHECKFLAGS += -D__aarch64__ ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_REGS),y) diff --git a/arch/arm64/boot/dts/bitmain/bm1880.dtsi b/arch/arm64/boot/dts/bitmain/bm1880.dtsi index fa6e6905f588..53a9b76057aa 100644 --- a/arch/arm64/boot/dts/bitmain/bm1880.dtsi +++ b/arch/arm64/boot/dts/bitmain/bm1880.dtsi @@ -127,7 +127,7 @@ compatible = "snps,dw-apb-gpio-port"; gpio-controller; #gpio-cells = <2>; - snps,nr-gpios = <32>; + ngpios = <32>; reg = <0>; interrupt-controller; #interrupt-cells = <2>; @@ -145,7 +145,7 @@ compatible = "snps,dw-apb-gpio-port"; gpio-controller; #gpio-cells = <2>; - snps,nr-gpios = <32>; + ngpios = <32>; reg = <0>; interrupt-controller; #interrupt-cells = <2>; @@ -163,7 +163,7 @@ compatible = "snps,dw-apb-gpio-port"; gpio-controller; #gpio-cells = <2>; - snps,nr-gpios = <8>; + ngpios = <8>; reg = <0>; interrupt-controller; #interrupt-cells = <2>; diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h index 015ddffaf6ca..b56a4b2bc248 100644 --- a/arch/arm64/include/asm/atomic.h +++ b/arch/arm64/include/asm/atomic.h @@ -17,7 +17,7 @@ #include <asm/lse.h> #define ATOMIC_OP(op) \ -static inline void arch_##op(int i, atomic_t *v) \ +static __always_inline void arch_##op(int i, atomic_t *v) \ { \ __lse_ll_sc_body(op, i, v); \ } @@ -32,7 +32,7 @@ ATOMIC_OP(atomic_sub) #undef ATOMIC_OP #define ATOMIC_FETCH_OP(name, op) \ -static inline int arch_##op##name(int i, atomic_t *v) \ +static __always_inline int arch_##op##name(int i, atomic_t *v) \ { \ return __lse_ll_sc_body(op##name, i, v); \ } @@ -56,7 
+56,7 @@ ATOMIC_FETCH_OPS(atomic_sub_return) #undef ATOMIC_FETCH_OPS #define ATOMIC64_OP(op) \ -static inline void arch_##op(long i, atomic64_t *v) \ +static __always_inline void arch_##op(long i, atomic64_t *v) \ { \ __lse_ll_sc_body(op, i, v); \ } @@ -71,7 +71,7 @@ ATOMIC64_OP(atomic64_sub) #undef ATOMIC64_OP #define ATOMIC64_FETCH_OP(name, op) \ -static inline long arch_##op##name(long i, atomic64_t *v) \ +static __always_inline long arch_##op##name(long i, atomic64_t *v) \ { \ return __lse_ll_sc_body(op##name, i, v); \ } @@ -94,7 +94,7 @@ ATOMIC64_FETCH_OPS(atomic64_sub_return) #undef ATOMIC64_FETCH_OP #undef ATOMIC64_FETCH_OPS -static inline long arch_atomic64_dec_if_positive(atomic64_t *v) +static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v) { return __lse_ll_sc_body(atomic64_dec_if_positive, v); } diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 11beda85ee7e..8fcfab0c2567 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -17,6 +17,7 @@ #include <linux/jump_label.h> #include <linux/kvm_types.h> #include <linux/percpu.h> +#include <linux/psci.h> #include <asm/arch_gicv3.h> #include <asm/barrier.h> #include <asm/cpufeature.h> @@ -240,6 +241,28 @@ struct kvm_host_data { struct kvm_pmu_events pmu_events; }; +struct kvm_host_psci_config { + /* PSCI version used by host. */ + u32 version; + + /* Function IDs used by host if version is v0.1. */ + struct psci_0_1_function_ids function_ids_0_1; + + bool psci_0_1_cpu_suspend_implemented; + bool psci_0_1_cpu_on_implemented; + bool psci_0_1_cpu_off_implemented; + bool psci_0_1_migrate_implemented; +}; + +extern struct kvm_host_psci_config kvm_nvhe_sym(kvm_host_psci_config); +#define kvm_host_psci_config CHOOSE_NVHE_SYM(kvm_host_psci_config) + +extern s64 kvm_nvhe_sym(hyp_physvirt_offset); +#define hyp_physvirt_offset CHOOSE_NVHE_SYM(hyp_physvirt_offset) + +extern u64 kvm_nvhe_sym(hyp_cpu_logical_map)[NR_CPUS]; +#define hyp_cpu_logical_map CHOOSE_NVHE_SYM(hyp_cpu_logical_map) + struct vcpu_reset_state { unsigned long pc; unsigned long r0; diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h index 6f986e09a781..f0fe0cc6abe0 100644 --- a/arch/arm64/include/asm/uaccess.h +++ b/arch/arm64/include/asm/uaccess.h @@ -176,10 +176,21 @@ static inline void __uaccess_enable_hw_pan(void) * The Tag check override (TCO) bit disables temporarily the tag checking * preventing the issue. 
*/ -static inline void uaccess_disable_privileged(void) +static inline void __uaccess_disable_tco(void) { asm volatile(ALTERNATIVE("nop", SET_PSTATE_TCO(0), ARM64_MTE, CONFIG_KASAN_HW_TAGS)); +} + +static inline void __uaccess_enable_tco(void) +{ + asm volatile(ALTERNATIVE("nop", SET_PSTATE_TCO(1), + ARM64_MTE, CONFIG_KASAN_HW_TAGS)); +} + +static inline void uaccess_disable_privileged(void) +{ + __uaccess_disable_tco(); if (uaccess_ttbr0_disable()) return; @@ -189,8 +200,7 @@ static inline void uaccess_disable_privileged(void) static inline void uaccess_enable_privileged(void) { - asm volatile(ALTERNATIVE("nop", SET_PSTATE_TCO(1), - ARM64_MTE, CONFIG_KASAN_HW_TAGS)); + __uaccess_enable_tco(); if (uaccess_ttbr0_enable()) return; diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index f42fd9e33981..301784463587 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -75,7 +75,7 @@ int main(void) DEFINE(S_SDEI_TTBR1, offsetof(struct pt_regs, sdei_ttbr1)); DEFINE(S_PMR_SAVE, offsetof(struct pt_regs, pmr_save)); DEFINE(S_STACKFRAME, offsetof(struct pt_regs, stackframe)); - DEFINE(S_FRAME_SIZE, sizeof(struct pt_regs)); + DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs)); BLANK(); #ifdef CONFIG_COMPAT DEFINE(COMPAT_SIGFRAME_REGS_OFFSET, offsetof(struct compat_sigframe, uc.uc_mcontext.arm_r0)); diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 7ffb5f1d8b68..e99eddec0a46 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -2568,7 +2568,7 @@ static void verify_hyp_capabilities(void) int parange, ipa_max; unsigned int safe_vmid_bits, vmid_bits; - if (!IS_ENABLED(CONFIG_KVM) || !IS_ENABLED(CONFIG_KVM_ARM_HOST)) + if (!IS_ENABLED(CONFIG_KVM)) return; safe_mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S index a338f40e64d3..b3e4f9a088b1 100644 --- a/arch/arm64/kernel/entry-ftrace.S +++ b/arch/arm64/kernel/entry-ftrace.S @@ -35,7 +35,7 @@ */ .macro ftrace_regs_entry, allregs=0 /* Make room for pt_regs, plus a callee frame */ - sub sp, sp, #(S_FRAME_SIZE + 16) + sub sp, sp, #(PT_REGS_SIZE + 16) /* Save function arguments (and x9 for simplicity) */ stp x0, x1, [sp, #S_X0] @@ -61,15 +61,15 @@ .endif /* Save the callsite's SP and LR */ - add x10, sp, #(S_FRAME_SIZE + 16) + add x10, sp, #(PT_REGS_SIZE + 16) stp x9, x10, [sp, #S_LR] /* Save the PC after the ftrace callsite */ str x30, [sp, #S_PC] /* Create a frame record for the callsite above pt_regs */ - stp x29, x9, [sp, #S_FRAME_SIZE] - add x29, sp, #S_FRAME_SIZE + stp x29, x9, [sp, #PT_REGS_SIZE] + add x29, sp, #PT_REGS_SIZE /* Create our frame record within pt_regs. 
*/ stp x29, x30, [sp, #S_STACKFRAME] @@ -120,7 +120,7 @@ ftrace_common_return: ldr x9, [sp, #S_PC] /* Restore the callsite's SP */ - add sp, sp, #S_FRAME_SIZE + 16 + add sp, sp, #PT_REGS_SIZE + 16 ret x9 SYM_CODE_END(ftrace_common) @@ -130,7 +130,7 @@ SYM_CODE_START(ftrace_graph_caller) ldr x0, [sp, #S_PC] sub x0, x0, #AARCH64_INSN_SIZE // ip (callsite's BL insn) add x1, sp, #S_LR // parent_ip (callsite's LR) - ldr x2, [sp, #S_FRAME_SIZE] // parent fp (callsite's FP) + ldr x2, [sp, #PT_REGS_SIZE] // parent fp (callsite's FP) bl prepare_ftrace_return b ftrace_common_return SYM_CODE_END(ftrace_graph_caller) diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 2a93fa5f4e49..c9bae73f2621 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -75,7 +75,7 @@ alternative_else_nop_endif .endif #endif - sub sp, sp, #S_FRAME_SIZE + sub sp, sp, #PT_REGS_SIZE #ifdef CONFIG_VMAP_STACK /* * Test whether the SP has overflowed, without corrupting a GPR. @@ -96,7 +96,7 @@ alternative_else_nop_endif * userspace, and can clobber EL0 registers to free up GPRs. */ - /* Stash the original SP (minus S_FRAME_SIZE) in tpidr_el0. */ + /* Stash the original SP (minus PT_REGS_SIZE) in tpidr_el0. */ msr tpidr_el0, x0 /* Recover the original x0 value and stash it in tpidrro_el0 */ @@ -182,7 +182,6 @@ alternative_else_nop_endif mrs_s \tmp2, SYS_GCR_EL1 bfi \tmp2, \tmp, #0, #16 msr_s SYS_GCR_EL1, \tmp2 - isb #endif .endm @@ -194,6 +193,7 @@ alternative_else_nop_endif ldr_l \tmp, gcr_kernel_excl mte_set_gcr \tmp, \tmp2 + isb 1: #endif .endm @@ -253,7 +253,7 @@ alternative_else_nop_endif scs_load tsk, x20 .else - add x21, sp, #S_FRAME_SIZE + add x21, sp, #PT_REGS_SIZE get_current_task tsk .endif /* \el == 0 */ mrs x22, elr_el1 @@ -377,7 +377,7 @@ alternative_else_nop_endif ldp x26, x27, [sp, #16 * 13] ldp x28, x29, [sp, #16 * 14] ldr lr, [sp, #S_LR] - add sp, sp, #S_FRAME_SIZE // restore sp + add sp, sp, #PT_REGS_SIZE // restore sp .if \el == 0 alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0 @@ -580,12 +580,12 @@ __bad_stack: /* * Store the original GPRs to the new stack. The orginal SP (minus - * S_FRAME_SIZE) was stashed in tpidr_el0 by kernel_ventry. + * PT_REGS_SIZE) was stashed in tpidr_el0 by kernel_ventry. */ - sub sp, sp, #S_FRAME_SIZE + sub sp, sp, #PT_REGS_SIZE kernel_entry 1 mrs x0, tpidr_el0 - add x0, x0, #S_FRAME_SIZE + add x0, x0, #PT_REGS_SIZE str x0, [sp, #S_SP] /* Stash the regs for handle_bad_stack */ diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c index 38bb07eff872..3605f77ad4df 100644 --- a/arch/arm64/kernel/perf_event.c +++ b/arch/arm64/kernel/perf_event.c @@ -23,8 +23,6 @@ #include <linux/platform_device.h> #include <linux/sched_clock.h> #include <linux/smp.h> -#include <linux/nmi.h> -#include <linux/cpufreq.h> /* ARMv8 Cortex-A53 specific event types. */ #define ARMV8_A53_PERFCTR_PREF_LINEFILL 0xC2 @@ -1250,21 +1248,10 @@ static struct platform_driver armv8_pmu_driver = { static int __init armv8_pmu_driver_init(void) { - int ret; - if (acpi_disabled) - ret = platform_driver_register(&armv8_pmu_driver); + return platform_driver_register(&armv8_pmu_driver); else - ret = arm_pmu_acpi_probe(armv8_pmuv3_init); - - /* - * Try to re-initialize lockup detector after PMU init in - * case PMU events are triggered via NMIs. 
- */ - if (ret == 0 && arm_pmu_irq_is_nmi()) - lockup_detector_init(); - - return ret; + return arm_pmu_acpi_probe(armv8_pmuv3_init); } device_initcall(armv8_pmu_driver_init) @@ -1322,27 +1309,3 @@ void arch_perf_update_userpage(struct perf_event *event, userpg->cap_user_time_zero = 1; userpg->cap_user_time_short = 1; } - -#ifdef CONFIG_HARDLOCKUP_DETECTOR_PERF -/* - * Safe maximum CPU frequency in case a particular platform doesn't implement - * cpufreq driver. Although, architecture doesn't put any restrictions on - * maximum frequency but 5 GHz seems to be safe maximum given the available - * Arm CPUs in the market which are clocked much less than 5 GHz. On the other - * hand, we can't make it much higher as it would lead to a large hard-lockup - * detection timeout on parts which are running slower (eg. 1GHz on - * Developerbox) and doesn't possess a cpufreq driver. - */ -#define SAFE_MAX_CPU_FREQ 5000000000UL // 5 GHz -u64 hw_nmi_get_sample_period(int watchdog_thresh) -{ - unsigned int cpu = smp_processor_id(); - unsigned long max_cpu_freq; - - max_cpu_freq = cpufreq_get_hw_max_freq(cpu) * 1000UL; - if (!max_cpu_freq) - max_cpu_freq = SAFE_MAX_CPU_FREQ; - - return (u64)max_cpu_freq * watchdog_thresh; -} -#endif diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c index 89c64ada8732..66aac2881ba8 100644 --- a/arch/arm64/kernel/probes/kprobes.c +++ b/arch/arm64/kernel/probes/kprobes.c @@ -352,8 +352,8 @@ kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned int esr) unsigned long addr = instruction_pointer(regs); struct kprobe *cur = kprobe_running(); - if (cur && (kcb->kprobe_status == KPROBE_HIT_SS) - && ((unsigned long)&cur->ainsn.api.insn[1] == addr)) { + if (cur && (kcb->kprobe_status & (KPROBE_HIT_SS | KPROBE_REENTER)) && + ((unsigned long)&cur->ainsn.api.insn[1] == addr)) { kprobes_restore_local_irqflag(kcb, regs); post_kprobe_handler(cur, kcb, regs); diff --git a/arch/arm64/kernel/probes/kprobes_trampoline.S b/arch/arm64/kernel/probes/kprobes_trampoline.S index 890ca72c5a51..288a84e253cc 100644 --- a/arch/arm64/kernel/probes/kprobes_trampoline.S +++ b/arch/arm64/kernel/probes/kprobes_trampoline.S @@ -25,7 +25,7 @@ stp x24, x25, [sp, #S_X24] stp x26, x27, [sp, #S_X26] stp x28, x29, [sp, #S_X28] - add x0, sp, #S_FRAME_SIZE + add x0, sp, #PT_REGS_SIZE stp lr, x0, [sp, #S_LR] /* * Construct a useful saved PSTATE @@ -62,7 +62,7 @@ .endm SYM_CODE_START(kretprobe_trampoline) - sub sp, sp, #S_FRAME_SIZE + sub sp, sp, #PT_REGS_SIZE save_all_base_regs @@ -76,7 +76,7 @@ SYM_CODE_START(kretprobe_trampoline) restore_all_base_regs - add sp, sp, #S_FRAME_SIZE + add sp, sp, #PT_REGS_SIZE ret SYM_CODE_END(kretprobe_trampoline) diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index f71d6ce4673f..6237486ff6bb 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -914,13 +914,6 @@ static void do_signal(struct pt_regs *regs) asmlinkage void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags) { - /* - * The assembly code enters us with IRQs off, but it hasn't - * informed the tracing code of that for efficiency reasons. - * Update the trace code with the current status. 
- */ - trace_hardirqs_off(); - do { if (thread_flags & _TIF_NEED_RESCHED) { /* Unmask Debug and SError for the next task */ diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index 6bc3a3698c3d..ad00f99ee9b0 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -434,7 +434,7 @@ static void __init hyp_mode_check(void) "CPU: CPUs started in inconsistent modes"); else pr_info("CPU: All CPU(s) started at EL1\n"); - if (IS_ENABLED(CONFIG_KVM)) + if (IS_ENABLED(CONFIG_KVM) && !is_kernel_in_hyp_mode()) kvm_compute_layout(); } @@ -807,7 +807,6 @@ int arch_show_interrupts(struct seq_file *p, int prec) unsigned int cpu, i; for (i = 0; i < NR_IPI; i++) { - unsigned int irq = irq_desc_get_irq(ipi_desc[i]); seq_printf(p, "%*s%u:%s", prec - 1, "IPI", i, prec >= 4 ? " " : ""); for_each_online_cpu(cpu) diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c index f61e9d8cc55a..c2877c332f2d 100644 --- a/arch/arm64/kernel/syscall.c +++ b/arch/arm64/kernel/syscall.c @@ -9,6 +9,7 @@ #include <asm/daifflags.h> #include <asm/debug-monitors.h> +#include <asm/exception.h> #include <asm/fpsimd.h> #include <asm/syscall.h> #include <asm/thread_info.h> @@ -165,15 +166,8 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr, if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) { local_daif_mask(); flags = current_thread_info()->flags; - if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP)) { - /* - * We're off to userspace, where interrupts are - * always enabled after we restore the flags from - * the SPSR. - */ - trace_hardirqs_on(); + if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP)) return; - } local_daif_restore(DAIF_PROCCTX); } diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c index 08156be75569..6895ce777e7f 100644 --- a/arch/arm64/kernel/traps.c +++ b/arch/arm64/kernel/traps.c @@ -42,7 +42,6 @@ #include <asm/smp.h> #include <asm/stack_pointer.h> #include <asm/stacktrace.h> -#include <asm/exception.h> #include <asm/system_misc.h> #include <asm/sysreg.h> diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile index a8f8e409e2bf..cd9c3fa25902 100644 --- a/arch/arm64/kernel/vdso/Makefile +++ b/arch/arm64/kernel/vdso/Makefile @@ -24,8 +24,7 @@ btildflags-$(CONFIG_ARM64_BTI_KERNEL) += -z force-bti # routines, as x86 does (see 6f121e548f83 ("x86, vdso: Reimplement vdso.so # preparation in build-time C")). 
ldflags-y := -shared -nostdlib -soname=linux-vdso.so.1 --hash-style=sysv \ - -Bsymbolic $(call ld-option, --no-eh-frame-hdr) --build-id=sha1 -n \ - $(btildflags-y) -T + -Bsymbolic --build-id=sha1 -n $(btildflags-y) -T ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18 ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO diff --git a/arch/arm64/kernel/vdso/vdso.lds.S b/arch/arm64/kernel/vdso/vdso.lds.S index d808ad31e01f..61dbb4c838ef 100644 --- a/arch/arm64/kernel/vdso/vdso.lds.S +++ b/arch/arm64/kernel/vdso/vdso.lds.S @@ -40,9 +40,6 @@ SECTIONS PROVIDE (_etext = .); PROVIDE (etext = .); - .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr - .eh_frame : { KEEP (*(.eh_frame)) } :text - .dynamic : { *(.dynamic) } :text :dynamic .rodata : { *(.rodata*) } :text @@ -54,6 +51,7 @@ SECTIONS *(.note.GNU-stack) *(.data .data.* .gnu.linkonce.d.* .sdata*) *(.bss .sbss .dynbss .dynsbss) + *(.eh_frame .eh_frame_hdr) } } @@ -66,7 +64,6 @@ PHDRS text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */ dynamic PT_DYNAMIC FLAGS(4); /* PF_R */ note PT_NOTE FLAGS(4); /* PF_R */ - eh_frame_hdr PT_GNU_EH_FRAME; } /* diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig index 043756db8f6e..3964acf5451e 100644 --- a/arch/arm64/kvm/Kconfig +++ b/arch/arm64/kvm/Kconfig @@ -49,14 +49,6 @@ if KVM source "virt/kvm/Kconfig" -config KVM_ARM_PMU - bool "Virtual Performance Monitoring Unit (PMU) support" - depends on HW_PERF_EVENTS - default y - help - Adds support for a virtual Performance Monitoring Unit (PMU) in - virtual machines. - endif # KVM endif # VIRTUALIZATION diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile index 60fd181df624..13b017284bf9 100644 --- a/arch/arm64/kvm/Makefile +++ b/arch/arm64/kvm/Makefile @@ -24,4 +24,4 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \ vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \ vgic/vgic-its.o vgic/vgic-debug.o -kvm-$(CONFIG_KVM_ARM_PMU) += pmu-emul.o +kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c index 32ba6fbc3814..74e0699661e9 100644 --- a/arch/arm64/kvm/arch_timer.c +++ b/arch/arm64/kvm/arch_timer.c @@ -1129,9 +1129,10 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu) if (!irqchip_in_kernel(vcpu->kvm)) goto no_vgic; - if (!vgic_initialized(vcpu->kvm)) - return -ENODEV; - + /* + * At this stage, we have the guarantee that the vgic is both + * available and initialized. + */ if (!timer_irqs_are_valid(vcpu)) { kvm_debug("incorrectly configured timer irqs\n"); return -EINVAL; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 6e637d2b4cfb..04c44853b103 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -65,10 +65,6 @@ static bool vgic_present; static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled); DEFINE_STATIC_KEY_FALSE(userspace_irqchip_in_use); -extern u64 kvm_nvhe_sym(__cpu_logical_map)[NR_CPUS]; -extern u32 kvm_nvhe_sym(kvm_host_psci_version); -extern struct psci_0_1_function_ids kvm_nvhe_sym(kvm_host_psci_0_1_function_ids); - int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu) { return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE; @@ -584,11 +580,9 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu) * Map the VGIC hardware resources before running a vcpu the * first time on this VM. 
*/ - if (unlikely(!vgic_ready(kvm))) { - ret = kvm_vgic_map_resources(kvm); - if (ret) - return ret; - } + ret = kvm_vgic_map_resources(kvm); + if (ret) + return ret; } else { /* * Tell the rest of the code that there are userspace irqchip @@ -1574,12 +1568,12 @@ static struct notifier_block hyp_init_cpu_pm_nb = { .notifier_call = hyp_init_cpu_pm_notifier, }; -static void __init hyp_cpu_pm_init(void) +static void hyp_cpu_pm_init(void) { if (!is_protected_kvm_enabled()) cpu_pm_register_notifier(&hyp_init_cpu_pm_nb); } -static void __init hyp_cpu_pm_exit(void) +static void hyp_cpu_pm_exit(void) { if (!is_protected_kvm_enabled()) cpu_pm_unregister_notifier(&hyp_init_cpu_pm_nb); @@ -1604,9 +1598,12 @@ static void init_cpu_logical_map(void) * allow any other CPUs from the `possible` set to boot. */ for_each_online_cpu(cpu) - kvm_nvhe_sym(__cpu_logical_map)[cpu] = cpu_logical_map(cpu); + hyp_cpu_logical_map[cpu] = cpu_logical_map(cpu); } +#define init_psci_0_1_impl_state(config, what) \ + config.psci_0_1_ ## what ## _implemented = psci_ops.what + static bool init_psci_relay(void) { /* @@ -1618,8 +1615,15 @@ static bool init_psci_relay(void) return false; } - kvm_nvhe_sym(kvm_host_psci_version) = psci_ops.get_version(); - kvm_nvhe_sym(kvm_host_psci_0_1_function_ids) = get_psci_0_1_function_ids(); + kvm_host_psci_config.version = psci_ops.get_version(); + + if (kvm_host_psci_config.version == PSCI_VERSION(0, 1)) { + kvm_host_psci_config.function_ids_0_1 = get_psci_0_1_function_ids(); + init_psci_0_1_impl_state(kvm_host_psci_config, cpu_suspend); + init_psci_0_1_impl_state(kvm_host_psci_config, cpu_on); + init_psci_0_1_impl_state(kvm_host_psci_config, cpu_off); + init_psci_0_1_impl_state(kvm_host_psci_config, migrate); + } return true; } diff --git a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h index b1f60923a8fe..61716359035d 100644 --- a/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h +++ b/arch/arm64/kvm/hyp/include/hyp/adjust_pc.h @@ -59,4 +59,13 @@ static inline void __adjust_pc(struct kvm_vcpu *vcpu) } } +/* + * Skip an instruction while host sysregs are live. + * Assumes host is always 64-bit. + */ +static inline void kvm_skip_host_instr(void) +{ + write_sysreg_el2(read_sysreg_el2(SYS_ELR) + 4, SYS_ELR); +} + #endif diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index bde658d51404..a906f9e2ff34 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -157,11 +157,6 @@ static void default_host_smc_handler(struct kvm_cpu_context *host_ctxt) __kvm_hyp_host_forward_smc(host_ctxt); } -static void skip_host_instruction(void) -{ - write_sysreg_el2(read_sysreg_el2(SYS_ELR) + 4, SYS_ELR); -} - static void handle_host_smc(struct kvm_cpu_context *host_ctxt) { bool handled; @@ -170,11 +165,8 @@ static void handle_host_smc(struct kvm_cpu_context *host_ctxt) if (!handled) default_host_smc_handler(host_ctxt); - /* - * Unlike HVC, the return address of an SMC is the instruction's PC. - * Move the return address past the instruction. - */ - skip_host_instruction(); + /* SMC was trapped, move ELR past the current PC. 
*/ + kvm_skip_host_instr(); } void handle_trap(struct kvm_cpu_context *host_ctxt) diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c index cbab0c6246e2..2997aa156d8e 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-smp.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-smp.c @@ -14,14 +14,14 @@ * Other CPUs should not be allowed to boot because their features were * not checked against the finalized system capabilities. */ -u64 __ro_after_init __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = INVALID_HWID }; +u64 __ro_after_init hyp_cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = INVALID_HWID }; u64 cpu_logical_map(unsigned int cpu) { - if (cpu >= ARRAY_SIZE(__cpu_logical_map)) + if (cpu >= ARRAY_SIZE(hyp_cpu_logical_map)) hyp_panic(); - return __cpu_logical_map[cpu]; + return hyp_cpu_logical_map[cpu]; } unsigned long __hyp_per_cpu_offset(unsigned int cpu) diff --git a/arch/arm64/kvm/hyp/nvhe/psci-relay.c b/arch/arm64/kvm/hyp/nvhe/psci-relay.c index 08dc9de69314..e3947846ffcb 100644 --- a/arch/arm64/kvm/hyp/nvhe/psci-relay.c +++ b/arch/arm64/kvm/hyp/nvhe/psci-relay.c @@ -7,11 +7,8 @@ #include <asm/kvm_asm.h> #include <asm/kvm_hyp.h> #include <asm/kvm_mmu.h> -#include <kvm/arm_hypercalls.h> #include <linux/arm-smccc.h> #include <linux/kvm_host.h> -#include <linux/psci.h> -#include <kvm/arm_psci.h> #include <uapi/linux/psci.h> #include <nvhe/trap_handler.h> @@ -22,9 +19,8 @@ void kvm_hyp_cpu_resume(unsigned long r0); void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt); /* Config options set by the host. */ -__ro_after_init u32 kvm_host_psci_version; -__ro_after_init struct psci_0_1_function_ids kvm_host_psci_0_1_function_ids; -__ro_after_init s64 hyp_physvirt_offset; +struct kvm_host_psci_config __ro_after_init kvm_host_psci_config; +s64 __ro_after_init hyp_physvirt_offset; #define __hyp_pa(x) ((phys_addr_t)((x)) + hyp_physvirt_offset) @@ -47,19 +43,16 @@ struct psci_boot_args { static DEFINE_PER_CPU(struct psci_boot_args, cpu_on_args) = PSCI_BOOT_ARGS_INIT; static DEFINE_PER_CPU(struct psci_boot_args, suspend_args) = PSCI_BOOT_ARGS_INIT; -static u64 get_psci_func_id(struct kvm_cpu_context *host_ctxt) -{ - DECLARE_REG(u64, func_id, host_ctxt, 0); - - return func_id; -} +#define is_psci_0_1(what, func_id) \ + (kvm_host_psci_config.psci_0_1_ ## what ## _implemented && \ + (func_id) == kvm_host_psci_config.function_ids_0_1.what) static bool is_psci_0_1_call(u64 func_id) { - return (func_id == kvm_host_psci_0_1_function_ids.cpu_suspend) || - (func_id == kvm_host_psci_0_1_function_ids.cpu_on) || - (func_id == kvm_host_psci_0_1_function_ids.cpu_off) || - (func_id == kvm_host_psci_0_1_function_ids.migrate); + return (is_psci_0_1(cpu_suspend, func_id) || + is_psci_0_1(cpu_on, func_id) || + is_psci_0_1(cpu_off, func_id) || + is_psci_0_1(migrate, func_id)); } static bool is_psci_0_2_call(u64 func_id) @@ -69,16 +62,6 @@ static bool is_psci_0_2_call(u64 func_id) (PSCI_0_2_FN64(0) <= func_id && func_id <= PSCI_0_2_FN64(31)); } -static bool is_psci_call(u64 func_id) -{ - switch (kvm_host_psci_version) { - case PSCI_VERSION(0, 1): - return is_psci_0_1_call(func_id); - default: - return is_psci_0_2_call(func_id); - } -} - static unsigned long psci_call(unsigned long fn, unsigned long arg0, unsigned long arg1, unsigned long arg2) { @@ -248,15 +231,14 @@ asmlinkage void __noreturn kvm_host_psci_cpu_entry(bool is_cpu_on) static unsigned long psci_0_1_handler(u64 func_id, struct kvm_cpu_context *host_ctxt) { - if ((func_id == kvm_host_psci_0_1_function_ids.cpu_off) || - (func_id == 
kvm_host_psci_0_1_function_ids.migrate)) + if (is_psci_0_1(cpu_off, func_id) || is_psci_0_1(migrate, func_id)) return psci_forward(host_ctxt); - else if (func_id == kvm_host_psci_0_1_function_ids.cpu_on) + if (is_psci_0_1(cpu_on, func_id)) return psci_cpu_on(func_id, host_ctxt); - else if (func_id == kvm_host_psci_0_1_function_ids.cpu_suspend) + if (is_psci_0_1(cpu_suspend, func_id)) return psci_cpu_suspend(func_id, host_ctxt); - else - return PSCI_RET_NOT_SUPPORTED; + + return PSCI_RET_NOT_SUPPORTED; } static unsigned long psci_0_2_handler(u64 func_id, struct kvm_cpu_context *host_ctxt) @@ -298,20 +280,23 @@ static unsigned long psci_1_0_handler(u64 func_id, struct kvm_cpu_context *host_ bool kvm_host_psci_handler(struct kvm_cpu_context *host_ctxt) { - u64 func_id = get_psci_func_id(host_ctxt); + DECLARE_REG(u64, func_id, host_ctxt, 0); unsigned long ret; - if (!is_psci_call(func_id)) - return false; - - switch (kvm_host_psci_version) { + switch (kvm_host_psci_config.version) { case PSCI_VERSION(0, 1): + if (!is_psci_0_1_call(func_id)) + return false; ret = psci_0_1_handler(func_id, host_ctxt); break; case PSCI_VERSION(0, 2): + if (!is_psci_0_2_call(func_id)) + return false; ret = psci_0_2_handler(func_id, host_ctxt); break; default: + if (!is_psci_0_2_call(func_id)) + return false; ret = psci_1_0_handler(func_id, host_ctxt); break; } diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c index 398f6df1bbe4..4ad66a532e38 100644 --- a/arch/arm64/kvm/pmu-emul.c +++ b/arch/arm64/kvm/pmu-emul.c @@ -850,8 +850,6 @@ int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu) return -EINVAL; } - kvm_pmu_vcpu_reset(vcpu); - return 0; } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 3313dedfa505..42ccc27fb684 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -594,6 +594,10 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { u64 pmcr, val; + /* No PMU available, PMCR_EL0 may UNDEF... */ + if (!kvm_arm_support_pmu_v3()) + return; + pmcr = read_sysreg(pmcr_el0); /* * Writable bits of PMCR_EL0 (ARMV8_PMU_PMCR_MASK) are reset to UNKNOWN @@ -919,7 +923,7 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, #define reg_to_encoding(x) \ sys_reg((u32)(x)->Op0, (u32)(x)->Op1, \ - (u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2); + (u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2) /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */ #define DBG_BCR_BVR_WCR_WVR_EL1(n) \ diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c index d8cc51bd60bf..70fcd6a12fe1 100644 --- a/arch/arm64/kvm/va_layout.c +++ b/arch/arm64/kvm/va_layout.c @@ -34,17 +34,16 @@ static u64 __early_kern_hyp_va(u64 addr) } /* - * Store a hyp VA <-> PA offset into a hyp-owned variable. + * Store a hyp VA <-> PA offset into an EL2-owned variable. */ static void init_hyp_physvirt_offset(void) { - extern s64 kvm_nvhe_sym(hyp_physvirt_offset); u64 kern_va, hyp_va; /* Compute the offset from the hyp VA and PA of a random symbol. 
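Both halves of the offset's life cycle appear in this patch; pulled together from the hunks above and lightly condensed:

/* va_layout.c (host, EL1): compute the constant once at init time. */
hyp_physvirt_offset = (s64)__pa(kern_va) - (s64)hyp_va;

/* psci-relay.c (EL2): VA -> PA then costs a single addition. */
#define __hyp_pa(x) ((phys_addr_t)((x)) + hyp_physvirt_offset)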
*/ - kern_va = (u64)kvm_ksym_ref(__hyp_text_start); + kern_va = (u64)lm_alias(__hyp_text_start); hyp_va = __early_kern_hyp_va(kern_va); - CHOOSE_NVHE_SYM(hyp_physvirt_offset) = (s64)__pa(kern_va) - (s64)hyp_va; + hyp_physvirt_offset = (s64)__pa(kern_va) - (s64)hyp_va; } /* diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c index 32e32d67a127..052917deb149 100644 --- a/arch/arm64/kvm/vgic/vgic-init.c +++ b/arch/arm64/kvm/vgic/vgic-init.c @@ -419,7 +419,8 @@ int vgic_lazy_init(struct kvm *kvm) * Map the MMIO regions depending on the VGIC model exposed to the guest * called on the first VCPU run. * Also map the virtual CPU interface into the VM. - * v2/v3 derivatives call vgic_init if not already done. + * v2 calls vgic_init() if not already done. + * v3 and derivatives return an error if the VGIC is not initialized. * vgic_ready() returns true if this function has succeeded. * @kvm: kvm struct pointer */ @@ -428,7 +429,13 @@ int kvm_vgic_map_resources(struct kvm *kvm) struct vgic_dist *dist = &kvm->arch.vgic; int ret = 0; + if (likely(vgic_ready(kvm))) + return 0; + mutex_lock(&kvm->lock); + if (vgic_ready(kvm)) + goto out; + if (!irqchip_in_kernel(kvm)) goto out; @@ -439,6 +446,8 @@ int kvm_vgic_map_resources(struct kvm *kvm) if (ret) __kvm_vgic_destroy(kvm); + else + dist->ready = true; out: mutex_unlock(&kvm->lock); diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c index ebf53a4e1296..11934c2af2f4 100644 --- a/arch/arm64/kvm/vgic/vgic-v2.c +++ b/arch/arm64/kvm/vgic/vgic-v2.c @@ -306,20 +306,15 @@ int vgic_v2_map_resources(struct kvm *kvm) struct vgic_dist *dist = &kvm->arch.vgic; int ret = 0; - if (vgic_ready(kvm)) - goto out; - if (IS_VGIC_ADDR_UNDEF(dist->vgic_dist_base) || IS_VGIC_ADDR_UNDEF(dist->vgic_cpu_base)) { kvm_err("Need to set vgic cpu and dist addresses first\n"); - ret = -ENXIO; - goto out; + return -ENXIO; } if (!vgic_v2_check_base(dist->vgic_dist_base, dist->vgic_cpu_base)) { kvm_err("VGIC CPU and dist frames overlap\n"); - ret = -EINVAL; - goto out; + return -EINVAL; } /* @@ -329,13 +324,13 @@ int vgic_v2_map_resources(struct kvm *kvm) ret = vgic_init(kvm); if (ret) { kvm_err("Unable to initialize VGIC dynamic data structures\n"); - goto out; + return ret; } ret = vgic_register_dist_iodev(kvm, dist->vgic_dist_base, VGIC_V2); if (ret) { kvm_err("Unable to register VGIC MMIO regions\n"); - goto out; + return ret; } if (!static_branch_unlikely(&vgic_v2_cpuif_trap)) { @@ -344,14 +339,11 @@ int vgic_v2_map_resources(struct kvm *kvm) KVM_VGIC_V2_CPU_SIZE, true); if (ret) { kvm_err("Unable to remap VGIC CPU to VCPU\n"); - goto out; + return ret; } } - dist->ready = true; - -out: - return ret; + return 0; } DEFINE_STATIC_KEY_FALSE(vgic_v2_cpuif_trap); diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c index 9cdf39a94a63..52915b342351 100644 --- a/arch/arm64/kvm/vgic/vgic-v3.c +++ b/arch/arm64/kvm/vgic/vgic-v3.c @@ -500,29 +500,23 @@ int vgic_v3_map_resources(struct kvm *kvm) int ret = 0; int c; - if (vgic_ready(kvm)) - goto out; - kvm_for_each_vcpu(c, vcpu, kvm) { struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; if (IS_VGIC_ADDR_UNDEF(vgic_cpu->rd_iodev.base_addr)) { kvm_debug("vcpu %d redistributor base not set\n", c); - ret = -ENXIO; - goto out; + return -ENXIO; } } if (IS_VGIC_ADDR_UNDEF(dist->vgic_dist_base)) { kvm_err("Need to set vgic distributor addresses first\n"); - ret = -ENXIO; - goto out; + return -ENXIO; } if (!vgic_v3_check_base(kvm)) { kvm_err("VGIC redist and dist frames overlap\n"); - 
ret = -EINVAL; - goto out; + return -EINVAL; } /* @@ -530,22 +524,19 @@ int vgic_v3_map_resources(struct kvm *kvm) * the VGIC before we need to use it. */ if (!vgic_initialized(kvm)) { - ret = -EBUSY; - goto out; + return -EBUSY; } ret = vgic_register_dist_iodev(kvm, dist->vgic_dist_base, VGIC_V3); if (ret) { kvm_err("Unable to register VGICv3 dist MMIO regions\n"); - goto out; + return ret; } if (kvm_vgic_global_state.has_gicv4_1) vgic_v4_configure_vsgis(kvm); - dist->ready = true; -out: - return ret; + return 0; } DEFINE_STATIC_KEY_FALSE(vgic_v3_cpuif_trap); diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 3c40da479899..35d75c60e2b8 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -709,10 +709,11 @@ static int do_tag_check_fault(unsigned long far, unsigned int esr, struct pt_regs *regs) { /* - * The architecture specifies that bits 63:60 of FAR_EL1 are UNKNOWN for tag - * check faults. Mask them out now so that userspace doesn't see them. + * The architecture specifies that bits 63:60 of FAR_EL1 are UNKNOWN + * for tag check faults. Set them to the corresponding bits in the untagged + * address. */ - far &= (1UL << 60) - 1; + far = (__untagged_addr(far) & ~MTE_TAG_MASK) | (far & MTE_TAG_MASK); do_bad_area(far, esr, regs); return 0; } diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 75addb36354a..709d98fea90c 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -53,13 +53,13 @@ s64 memstart_addr __ro_after_init = -1; EXPORT_SYMBOL(memstart_addr); /* - * We create both ZONE_DMA and ZONE_DMA32. ZONE_DMA covers the first 1G of - * memory as some devices, namely the Raspberry Pi 4, have peripherals with - * this limited view of the memory. ZONE_DMA32 will cover the rest of the 32 - * bit addressable memory area. + * If the corresponding config options are enabled, we create both ZONE_DMA + * and ZONE_DMA32. By default ZONE_DMA covers the 32-bit addressable memory + * unless restricted on specific platforms (e.g. 30-bit on Raspberry Pi 4). + * In that case, ZONE_DMA32 covers the rest of the 32-bit addressable memory; + * otherwise it is empty. 
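The fallback chain described here is implemented by the zone_sizes_init() and bootmem_init() hunks below; condensed and not verbatim, it amounts to:

#ifdef CONFIG_ZONE_DMA
    /* arm64_dma_phys_limit derived from the DT/ACPI zone_dma_bits */
#endif
#ifdef CONFIG_ZONE_DMA32
    if (!arm64_dma_phys_limit)
        arm64_dma_phys_limit = dma32_phys_limit;    /* no ZONE_DMA configured */
#endif
    if (!arm64_dma_phys_limit)
        arm64_dma_phys_limit = PHYS_MASK + 1;       /* neither zone: all memory */

    /* CMA must be reserved only after the limit is final: */
    dma_contiguous_reserve(arm64_dma_phys_limit);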
*/ phys_addr_t arm64_dma_phys_limit __ro_after_init; -static phys_addr_t arm64_dma32_phys_limit __ro_after_init; #ifdef CONFIG_KEXEC_CORE /* @@ -84,7 +84,7 @@ static void __init reserve_crashkernel(void) if (crash_base == 0) { /* Current arm64 boot protocol requires 2MB alignment */ - crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit, + crash_base = memblock_find_in_range(0, arm64_dma_phys_limit, crash_size, SZ_2M); if (crash_base == 0) { pr_warn("cannot allocate crashkernel (size:0x%llx)\n", @@ -196,6 +196,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max) unsigned long max_zone_pfns[MAX_NR_ZONES] = {0}; unsigned int __maybe_unused acpi_zone_dma_bits; unsigned int __maybe_unused dt_zone_dma_bits; + phys_addr_t __maybe_unused dma32_phys_limit = max_zone_phys(32); #ifdef CONFIG_ZONE_DMA acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address()); @@ -205,8 +206,12 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max) max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit); #endif #ifdef CONFIG_ZONE_DMA32 - max_zone_pfns[ZONE_DMA32] = PFN_DOWN(arm64_dma32_phys_limit); + max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit); + if (!arm64_dma_phys_limit) + arm64_dma_phys_limit = dma32_phys_limit; #endif + if (!arm64_dma_phys_limit) + arm64_dma_phys_limit = PHYS_MASK + 1; max_zone_pfns[ZONE_NORMAL] = max; free_area_init(max_zone_pfns); @@ -394,16 +399,9 @@ void __init arm64_memblock_init(void) early_init_fdt_scan_reserved_mem(); - if (IS_ENABLED(CONFIG_ZONE_DMA32)) - arm64_dma32_phys_limit = max_zone_phys(32); - else - arm64_dma32_phys_limit = PHYS_MASK + 1; - reserve_elfcorehdr(); high_memory = __va(memblock_end_of_DRAM() - 1) + 1; - - dma_contiguous_reserve(arm64_dma32_phys_limit); } void __init bootmem_init(void) @@ -439,6 +437,11 @@ void __init bootmem_init(void) zone_sizes_init(min, max); /* + * Reserve the CMA area after arm64_dma_phys_limit was initialised. + */ + dma_contiguous_reserve(arm64_dma_phys_limit); + + /* * request_standard_resources() depends on crashkernel's memory being * reserved, so do it here. */ @@ -455,7 +458,7 @@ void __init bootmem_init(void) void __init mem_init(void) { if (swiotlb_force == SWIOTLB_FORCE || - max_pfn > PFN_DOWN(arm64_dma_phys_limit ? 
: arm64_dma32_phys_limit)) + max_pfn > PFN_DOWN(arm64_dma_phys_limit)) swiotlb_init(1); else swiotlb_force = SWIOTLB_NO_FORCE; diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 37a54b57178a..1f7ee8c8b7b8 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -46,7 +46,7 @@ #endif #ifdef CONFIG_KASAN_HW_TAGS -#define TCR_KASAN_HW_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 +#define TCR_KASAN_HW_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 | TCR_TBID1 #else #define TCR_KASAN_HW_FLAGS 0 #endif diff --git a/arch/ia64/include/asm/sparsemem.h b/arch/ia64/include/asm/sparsemem.h index dd8c166ffd7b..42ed5248fae9 100644 --- a/arch/ia64/include/asm/sparsemem.h +++ b/arch/ia64/include/asm/sparsemem.h @@ -3,6 +3,7 @@ #define _ASM_IA64_SPARSEMEM_H #ifdef CONFIG_SPARSEMEM +#include <asm/page.h> /* * SECTION_SIZE_BITS 2^N: how big each section will be * MAX_PHYSMEM_BITS 2^N: how much memory we can have in that space diff --git a/arch/mips/boot/compressed/decompress.c b/arch/mips/boot/compressed/decompress.c index c61c641674e6..e3946b06e840 100644 --- a/arch/mips/boot/compressed/decompress.c +++ b/arch/mips/boot/compressed/decompress.c @@ -13,6 +13,7 @@ #include <linux/libfdt.h> #include <asm/addrspace.h> +#include <asm/unaligned.h> /* * These two variables specify the free mem region @@ -117,7 +118,7 @@ void decompress_kernel(unsigned long boot_heap_start) dtb_size = fdt_totalsize((void *)&__appended_dtb); /* last four bytes is always image size in little endian */ - image_size = le32_to_cpup((void *)&__image_end - 4); + image_size = get_unaligned_le32((void *)&__image_end - 4); /* copy dtb to where the booted kernel will expect it */ memcpy((void *)VMLINUX_LOAD_ADDRESS_ULL + image_size, diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c index bd47e15d02c7..be5d4afcd30f 100644 --- a/arch/mips/cavium-octeon/octeon-irq.c +++ b/arch/mips/cavium-octeon/octeon-irq.c @@ -1444,7 +1444,7 @@ static void octeon_irq_setup_secondary_ciu2(void) static int __init octeon_irq_init_ciu( struct device_node *ciu_node, struct device_node *parent) { - unsigned int i, r; + int i, r; struct irq_chip *chip; struct irq_chip *chip_edge; struct irq_chip *chip_mbox; diff --git a/arch/mips/include/asm/highmem.h b/arch/mips/include/asm/highmem.h index 19edf8e69971..292d0425717f 100644 --- a/arch/mips/include/asm/highmem.h +++ b/arch/mips/include/asm/highmem.h @@ -51,6 +51,7 @@ extern void kmap_flush_tlb(unsigned long addr); #define flush_cache_kmaps() BUG_ON(cpu_has_dc_aliases) +#define arch_kmap_local_set_pte(mm, vaddr, ptep, ptev) set_pte(ptep, ptev) #define arch_kmap_local_post_map(vaddr, pteval) local_flush_tlb_one(vaddr) #define arch_kmap_local_post_unmap(vaddr) local_flush_tlb_one(vaddr) diff --git a/arch/mips/kernel/binfmt_elfn32.c b/arch/mips/kernel/binfmt_elfn32.c index 6ee3f7218c67..c4441416e96b 100644 --- a/arch/mips/kernel/binfmt_elfn32.c +++ b/arch/mips/kernel/binfmt_elfn32.c @@ -103,4 +103,11 @@ jiffies_to_old_timeval32(unsigned long jiffies, struct old_timeval32 *value) #undef ns_to_kernel_old_timeval #define ns_to_kernel_old_timeval ns_to_old_timeval32 +/* + * Some data types as stored in coredump. 
+ */ +#define user_long_t compat_long_t +#define user_siginfo_t compat_siginfo_t +#define copy_siginfo_to_external copy_siginfo_to_external32 + #include "../../../fs/binfmt_elf.c" diff --git a/arch/mips/kernel/binfmt_elfo32.c b/arch/mips/kernel/binfmt_elfo32.c index 6dd103d3cebb..7b2a23f48c1a 100644 --- a/arch/mips/kernel/binfmt_elfo32.c +++ b/arch/mips/kernel/binfmt_elfo32.c @@ -106,4 +106,11 @@ jiffies_to_old_timeval32(unsigned long jiffies, struct old_timeval32 *value) #undef ns_to_kernel_old_timeval #define ns_to_kernel_old_timeval ns_to_old_timeval32 +/* + * Some data types as stored in coredump. + */ +#define user_long_t compat_long_t +#define user_siginfo_t compat_siginfo_t +#define copy_siginfo_to_external copy_siginfo_to_external32 + #include "../../../fs/binfmt_elf.c" diff --git a/arch/mips/kernel/relocate.c b/arch/mips/kernel/relocate.c index 47aeb3350a76..0e365b7c742d 100644 --- a/arch/mips/kernel/relocate.c +++ b/arch/mips/kernel/relocate.c @@ -187,8 +187,14 @@ static int __init relocate_exception_table(long offset) static inline __init unsigned long rotate_xor(unsigned long hash, const void *area, size_t size) { - size_t i; - unsigned long *ptr = (unsigned long *)area; + const typeof(hash) *ptr = PTR_ALIGN(area, sizeof(hash)); + size_t diff, i; + + diff = (void *)ptr - area; + if (unlikely(size < diff + sizeof(hash))) + return hash; + + size = ALIGN_DOWN(size - diff, sizeof(hash)); for (i = 0; i < size / sizeof(hash); i++) { /* Rotate by odd number of bits and XOR. */ diff --git a/arch/openrisc/include/asm/io.h b/arch/openrisc/include/asm/io.h index 7d6b4a77b379..c298061c70a7 100644 --- a/arch/openrisc/include/asm/io.h +++ b/arch/openrisc/include/asm/io.h @@ -31,7 +31,7 @@ void __iomem *ioremap(phys_addr_t offset, unsigned long size); #define iounmap iounmap -extern void iounmap(void *addr); +extern void iounmap(void __iomem *addr); #include <asm-generic/io.h> diff --git a/arch/openrisc/mm/ioremap.c b/arch/openrisc/mm/ioremap.c index 5aed97a18bac..daae13a76743 100644 --- a/arch/openrisc/mm/ioremap.c +++ b/arch/openrisc/mm/ioremap.c @@ -77,7 +77,7 @@ void __iomem *__ref ioremap(phys_addr_t addr, unsigned long size) } EXPORT_SYMBOL(ioremap); -void iounmap(void *addr) +void iounmap(void __iomem *addr) { /* If the page is from the fixmap pool then we just clear out * the fixmap mapping. 
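The openrisc change above is purely a sparse annotation fix: iounmap() now takes void __iomem *, the same address space that ioremap() returns. A short hedged usage example; the MMIO base and register offset are made up:

static u32 read_probe_reg(void)
{
    void __iomem *regs = ioremap(0xf0000000, SZ_4K);    /* hypothetical base */
    u32 val = 0;

    if (regs) {
        val = readl(regs + 0x10);   /* accessors preserve the __iomem tag */
        iounmap(regs);              /* now type-checks against ioremap() */
    }
    return val;
}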
diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h index 1d32b174ab6a..c1a8aac01cf9 100644 --- a/arch/powerpc/include/asm/exception-64s.h +++ b/arch/powerpc/include/asm/exception-64s.h @@ -63,6 +63,12 @@ nop; \ nop; +#define SCV_ENTRY_FLUSH_SLOT \ + SCV_ENTRY_FLUSH_FIXUP_SECTION; \ + nop; \ + nop; \ + nop; + /* * r10 must be free to use, r13 must be paca */ @@ -71,6 +77,13 @@ ENTRY_FLUSH_SLOT /* + * r10, ctr must be free to use, r13 must be paca + */ +#define SCV_INTERRUPT_TO_KERNEL \ + STF_ENTRY_BARRIER_SLOT; \ + SCV_ENTRY_FLUSH_SLOT + +/* * Macros for annotating the expected destination of (h)rfid * * The nop instructions allow us to insert one or more instructions to flush the diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h index f6d2acb57425..ac605fc369c4 100644 --- a/arch/powerpc/include/asm/feature-fixups.h +++ b/arch/powerpc/include/asm/feature-fixups.h @@ -240,6 +240,14 @@ label##3: \ FTR_ENTRY_OFFSET 957b-958b; \ .popsection; +#define SCV_ENTRY_FLUSH_FIXUP_SECTION \ +957: \ + .pushsection __scv_entry_flush_fixup,"a"; \ + .align 2; \ +958: \ + FTR_ENTRY_OFFSET 957b-958b; \ + .popsection; + #define RFI_FLUSH_FIXUP_SECTION \ 951: \ .pushsection __rfi_flush_fixup,"a"; \ @@ -273,10 +281,12 @@ label##3: \ extern long stf_barrier_fallback; extern long entry_flush_fallback; +extern long scv_entry_flush_fallback; extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup; extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup; extern long __start___uaccess_flush_fixup, __stop___uaccess_flush_fixup; extern long __start___entry_flush_fixup, __stop___entry_flush_fixup; +extern long __start___scv_entry_flush_fixup, __stop___scv_entry_flush_fixup; extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup; extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup; extern long __start__btb_flush_fixup, __stop__btb_flush_fixup; diff --git a/arch/powerpc/include/asm/highmem.h b/arch/powerpc/include/asm/highmem.h index 80a5ae771c65..c0fcd1bbdba9 100644 --- a/arch/powerpc/include/asm/highmem.h +++ b/arch/powerpc/include/asm/highmem.h @@ -58,6 +58,8 @@ extern pte_t *pkmap_page_table; #define flush_cache_kmaps() flush_cache_all() +#define arch_kmap_local_set_pte(mm, vaddr, ptep, ptev) \ + __set_pte_at(mm, vaddr, ptep, ptev, 1) #define arch_kmap_local_post_map(vaddr, pteval) \ local_flush_tlb_page(NULL, vaddr) #define arch_kmap_local_post_unmap(vaddr) \ diff --git a/arch/powerpc/include/asm/vdso/gettimeofday.h b/arch/powerpc/include/asm/vdso/gettimeofday.h index 81671aa365b3..77c635c2c90d 100644 --- a/arch/powerpc/include/asm/vdso/gettimeofday.h +++ b/arch/powerpc/include/asm/vdso/gettimeofday.h @@ -103,6 +103,8 @@ int gettimeofday_fallback(struct __kernel_old_timeval *_tv, struct timezone *_tz return do_syscall_2(__NR_gettimeofday, (unsigned long)_tv, (unsigned long)_tz); } +#ifdef __powerpc64__ + static __always_inline int clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts) { @@ -115,11 +117,23 @@ int clock_getres_fallback(clockid_t _clkid, struct __kernel_timespec *_ts) return do_syscall_2(__NR_clock_getres, _clkid, (unsigned long)_ts); } -#ifdef CONFIG_VDSO32 +#else #define BUILD_VDSO32 1 static __always_inline +int clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts) +{ + return do_syscall_2(__NR_clock_gettime64, _clkid, (unsigned long)_ts); +} + +static __always_inline +int 
clock_getres_fallback(clockid_t _clkid, struct __kernel_timespec *_ts) +{ + return do_syscall_2(__NR_clock_getres_time64, _clkid, (unsigned long)_ts); +} + +static __always_inline int clock_gettime32_fallback(clockid_t _clkid, struct old_timespec32 *_ts) { return do_syscall_2(__NR_clock_gettime, _clkid, (unsigned long)_ts); diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S index aa1af139d947..33ddfeef4fe9 100644 --- a/arch/powerpc/kernel/entry_64.S +++ b/arch/powerpc/kernel/entry_64.S @@ -75,7 +75,7 @@ BEGIN_FTR_SECTION bne .Ltabort_syscall END_FTR_SECTION_IFSET(CPU_FTR_TM) #endif - INTERRUPT_TO_KERNEL + SCV_INTERRUPT_TO_KERNEL mr r10,r1 ld r1,PACAKSAVE(r13) std r10,0(r1) diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index e02ad6fefa46..6e53f7638737 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -2993,6 +2993,25 @@ TRAMP_REAL_BEGIN(entry_flush_fallback) ld r11,PACA_EXRFI+EX_R11(r13) blr +/* + * The SCV entry flush happens with interrupts enabled, so it must disable them + * to prevent EXRFI from being clobbered by NMIs (e.g., soft_nmi_common). r10 + * (containing LR) does not need to be preserved here because scv entry + * puts 0 in the pt_regs; CTR can be clobbered for the same reason. + */ +TRAMP_REAL_BEGIN(scv_entry_flush_fallback) + li r10,0 + mtmsrd r10,1 + lbz r10,PACAIRQHAPPENED(r13) + ori r10,r10,PACA_IRQ_HARD_DIS + stb r10,PACAIRQHAPPENED(r13) + std r11,PACA_EXRFI+EX_R11(r13) + L1D_DISPLACEMENT_FLUSH + ld r11,PACA_EXRFI+EX_R11(r13) + li r10,MSR_RI + mtmsrd r10,1 + blr + TRAMP_REAL_BEGIN(rfi_flush_fallback) SET_SCRATCH0(r13); GET_PACA(r13); diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S index 349bf3f0c3af..858fbc8b19f3 100644 --- a/arch/powerpc/kernel/head_book3s_32.S +++ b/arch/powerpc/kernel/head_book3s_32.S @@ -260,10 +260,19 @@ __secondary_hold_acknowledge: MachineCheck: EXCEPTION_PROLOG_0 #ifdef CONFIG_PPC_CHRP +#ifdef CONFIG_VMAP_STACK + mtspr SPRN_SPRG_SCRATCH2,r1 + mfspr r1, SPRN_SPRG_THREAD + lwz r1, RTAS_SP(r1) + cmpwi cr1, r1, 0 + bne cr1, 7f + mfspr r1, SPRN_SPRG_SCRATCH2 +#else mfspr r11, SPRN_SPRG_THREAD lwz r11, RTAS_SP(r11) cmpwi cr1, r11, 0 bne cr1, 7f +#endif #endif /* CONFIG_PPC_CHRP */ EXCEPTION_PROLOG_1 for_rtas=1 7: EXCEPTION_PROLOG_2 diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S index 0318ba436f34..72fa3c00229a 100644 --- a/arch/powerpc/kernel/vmlinux.lds.S +++ b/arch/powerpc/kernel/vmlinux.lds.S @@ -85,7 +85,7 @@ SECTIONS ALIGN_FUNCTION(); #endif /* careful! __ftr_alt_* sections need to be close to .text */ - *(.text.hot TEXT_MAIN .text.fixup .text.unlikely .fixup __ftr_alt_* .ref.text); + *(.text.hot .text.hot.* TEXT_MAIN .text.fixup .text.unlikely .text.unlikely.* .fixup __ftr_alt_* .ref.text); #ifdef CONFIG_PPC64 *(.tramp.ftrace.text); #endif @@ -146,6 +146,13 @@ SECTIONS } . = ALIGN(8); + __scv_entry_flush_fixup : AT(ADDR(__scv_entry_flush_fixup) - LOAD_OFFSET) { + __start___scv_entry_flush_fixup = .; + *(__scv_entry_flush_fixup) + __stop___scv_entry_flush_fixup = .; + } + + . = ALIGN(8); __stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) { __start___stf_exit_barrier_fixup = .; *(__stf_exit_barrier_fixup) @@ -187,6 +194,12 @@ SECTIONS .init.text : AT(ADDR(.init.text) - LOAD_OFFSET) { _sinittext = .; INIT_TEXT + + /* + * .init.text might be RO so we must ensure this section ends on + * a page boundary. + */ + . 
= ALIGN(PAGE_SIZE); _einittext = .; #ifdef CONFIG_PPC64 *(.tramp.ftrace.init); @@ -200,6 +213,8 @@ SECTIONS EXIT_TEXT } + . = ALIGN(PAGE_SIZE); + INIT_DATA_SECTION(16) . = ALIGN(8); diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c index 47821055b94c..1fd31b4b0e13 100644 --- a/arch/powerpc/lib/feature-fixups.c +++ b/arch/powerpc/lib/feature-fixups.c @@ -290,9 +290,6 @@ void do_entry_flush_fixups(enum l1d_flush_type types) long *start, *end; int i; - start = PTRRELOC(&__start___entry_flush_fixup); - end = PTRRELOC(&__stop___entry_flush_fixup); - instrs[0] = 0x60000000; /* nop */ instrs[1] = 0x60000000; /* nop */ instrs[2] = 0x60000000; /* nop */ @@ -312,6 +309,8 @@ void do_entry_flush_fixups(enum l1d_flush_type types) if (types & L1D_FLUSH_MTTRIG) instrs[i++] = 0x7c12dba6; /* mtspr TRIG2,r0 (SPR #882) */ + start = PTRRELOC(&__start___entry_flush_fixup); + end = PTRRELOC(&__stop___entry_flush_fixup); for (i = 0; start < end; start++, i++) { dest = (void *)start + *start; @@ -328,6 +327,25 @@ void do_entry_flush_fixups(enum l1d_flush_type types) patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2])); } + start = PTRRELOC(&__start___scv_entry_flush_fixup); + end = PTRRELOC(&__stop___scv_entry_flush_fixup); + for (; start < end; start++, i++) { + dest = (void *)start + *start; + + pr_devel("patching dest %lx\n", (unsigned long)dest); + + patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0])); + + if (types == L1D_FLUSH_FALLBACK) + patch_branch((struct ppc_inst *)(dest + 1), (unsigned long)&scv_entry_flush_fallback, + BRANCH_SET_LINK); + else + patch_instruction((struct ppc_inst *)(dest + 1), ppc_inst(instrs[1])); + + patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2])); + } + + printk(KERN_DEBUG "entry-flush: patched %d locations (%s flush)\n", i, (types == L1D_FLUSH_NONE) ? "no" : (types == L1D_FLUSH_FALLBACK) ? 
"fallback displacement" : diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 81b76d44725d..e9e2c1f0a690 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -137,7 +137,7 @@ config PA_BITS config PAGE_OFFSET hex - default 0xC0000000 if 32BIT && MAXPHYSMEM_2GB + default 0xC0000000 if 32BIT && MAXPHYSMEM_1GB default 0x80000000 if 64BIT && !MMU default 0xffffffff80000000 if 64BIT && MAXPHYSMEM_2GB default 0xffffffe000000000 if 64BIT && MAXPHYSMEM_128GB @@ -247,10 +247,12 @@ config MODULE_SECTIONS choice prompt "Maximum Physical Memory" - default MAXPHYSMEM_2GB if 32BIT + default MAXPHYSMEM_1GB if 32BIT default MAXPHYSMEM_2GB if 64BIT && CMODEL_MEDLOW default MAXPHYSMEM_128GB if 64BIT && CMODEL_MEDANY + config MAXPHYSMEM_1GB + bool "1GiB" config MAXPHYSMEM_2GB bool "2GiB" config MAXPHYSMEM_128GB diff --git a/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts b/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts index 4a2729f5ca3f..24d75a146e02 100644 --- a/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts +++ b/arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts @@ -88,7 +88,9 @@ phy-mode = "gmii"; phy-handle = <&phy0>; phy0: ethernet-phy@0 { + compatible = "ethernet-phy-id0007.0771"; reg = <0>; + reset-gpios = <&gpio 12 GPIO_ACTIVE_LOW>; }; }; diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig index d222d353d86d..8c3d1e451703 100644 --- a/arch/riscv/configs/defconfig +++ b/arch/riscv/configs/defconfig @@ -64,6 +64,8 @@ CONFIG_HW_RANDOM=y CONFIG_HW_RANDOM_VIRTIO=y CONFIG_SPI=y CONFIG_SPI_SIFIVE=y +CONFIG_GPIOLIB=y +CONFIG_GPIO_SIFIVE=y # CONFIG_PTP_1588_CLOCK is not set CONFIG_POWER_RESET=y CONFIG_DRM=y diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index 41a72861987c..251e1db088fa 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -99,7 +99,6 @@ | _PAGE_DIRTY) #define PAGE_KERNEL __pgprot(_PAGE_KERNEL) -#define PAGE_KERNEL_EXEC __pgprot(_PAGE_KERNEL | _PAGE_EXEC) #define PAGE_KERNEL_READ __pgprot(_PAGE_KERNEL & ~_PAGE_WRITE) #define PAGE_KERNEL_EXEC __pgprot(_PAGE_KERNEL | _PAGE_EXEC) #define PAGE_KERNEL_READ_EXEC __pgprot((_PAGE_KERNEL & ~_PAGE_WRITE) \ diff --git a/arch/riscv/include/asm/vdso.h b/arch/riscv/include/asm/vdso.h index 8454f746bbfd..1453a2f563bc 100644 --- a/arch/riscv/include/asm/vdso.h +++ b/arch/riscv/include/asm/vdso.h @@ -10,7 +10,7 @@ #include <linux/types.h> -#ifndef GENERIC_TIME_VSYSCALL +#ifndef CONFIG_GENERIC_TIME_VSYSCALL struct vdso_data { }; #endif diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c index de59dd457b41..d86781357044 100644 --- a/arch/riscv/kernel/cacheinfo.c +++ b/arch/riscv/kernel/cacheinfo.c @@ -26,7 +26,16 @@ cache_get_priv_group(struct cacheinfo *this_leaf) static struct cacheinfo *get_cacheinfo(u32 level, enum cache_type type) { - struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(smp_processor_id()); + /* + * Using raw_smp_processor_id() elides a preemptability check, but this + * is really indicative of a larger problem: the cacheinfo UABI assumes + * that cores have a homonogenous view of the cache hierarchy. That + * happens to be the case for the current set of RISC-V systems, but + * likely won't be true in general. Since there's no way to provide + * correct information for these systems via the current UABI we're + * just eliding the check for now. 
+ */ + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(raw_smp_processor_id()); struct cacheinfo *this_leaf; int index; diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S index 524d918f3601..744f3209c48d 100644 --- a/arch/riscv/kernel/entry.S +++ b/arch/riscv/kernel/entry.S @@ -124,15 +124,15 @@ skip_context_tracking: REG_L a1, (a1) jr a1 1: -#ifdef CONFIG_TRACE_IRQFLAGS - call trace_hardirqs_on -#endif /* * Exceptions run with interrupts enabled or disabled depending on the * state of SR_PIE in m/sstatus. */ andi t0, s1, SR_PIE beqz t0, 1f +#ifdef CONFIG_TRACE_IRQFLAGS + call trace_hardirqs_on +#endif csrs CSR_STATUS, SR_IE 1: @@ -155,6 +155,15 @@ skip_context_tracking: tail do_trap_unknown handle_syscall: +#ifdef CONFIG_RISCV_M_MODE + /* + * When running in M-mode (no MMU config), MPIE does not get set. + * As a result, we need to force-enable interrupts here because + * handle_exception did not set SR_IE, as it always sees SR_PIE + * being cleared. + */ + csrs CSR_STATUS, SR_IE +#endif #if defined(CONFIG_TRACE_IRQFLAGS) || defined(CONFIG_CONTEXT_TRACKING) /* Recover a0 - a7 for system calls */ REG_L a0, PT_A0(sp) @@ -186,14 +195,7 @@ check_syscall_nr: * Syscall number held in a7. * If syscall number is above allowed value, redirect to ni_syscall. */ - bge a7, t0, 1f - /* - * Check if syscall is rejected by tracer, i.e., a7 == -1. - * If yes, we pretend it was executed. - */ - li t1, -1 - beq a7, t1, ret_from_syscall_rejected - blt a7, t1, 1f + bgeu a7, t0, 1f /* Call syscall */ la s0, sys_call_table slli t0, a7, RISCV_LGPTR diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c index 1d85e9bf783c..3fa3f26dde85 100644 --- a/arch/riscv/kernel/setup.c +++ b/arch/riscv/kernel/setup.c @@ -127,7 +127,9 @@ static void __init init_resources(void) { struct memblock_region *region = NULL; struct resource *res = NULL; - int ret = 0; + struct resource *mem_res = NULL; + size_t mem_res_sz = 0; + int ret = 0, i = 0; code_res.start = __pa_symbol(_text); code_res.end = __pa_symbol(_etext) - 1; @@ -145,16 +147,17 @@ static void __init init_resources(void) bss_res.end = __pa_symbol(__bss_stop) - 1; bss_res.flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY; + mem_res_sz = (memblock.memory.cnt + memblock.reserved.cnt) * sizeof(*mem_res); + mem_res = memblock_alloc(mem_res_sz, SMP_CACHE_BYTES); + if (!mem_res) + panic("%s: Failed to allocate %zu bytes\n", __func__, mem_res_sz); /* * Start by adding the reserved regions, if they overlap * with /memory regions, insert_resource later on will take * care of it. */ for_each_reserved_mem_region(region) { - res = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES); - if (!res) - panic("%s: Failed to allocate %zu bytes\n", __func__, - sizeof(struct resource)); + res = &mem_res[i++]; res->name = "Reserved"; res->flags = IORESOURCE_MEM | IORESOURCE_BUSY; @@ -171,8 +174,10 @@ static void __init init_resources(void) * Ignore any other reserved regions within * system memory. 
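The init_resources() rework above trades one memblock_alloc() per region for a single worst-case array. Stripped to its skeleton (not verbatim):

    mem_res_sz = (memblock.memory.cnt + memblock.reserved.cnt) * sizeof(*mem_res);
    mem_res = memblock_alloc(mem_res_sz, SMP_CACHE_BYTES);
    if (!mem_res)
        panic("%s: Failed to allocate %zu bytes\n", __func__, mem_res_sz);

    for_each_reserved_mem_region(region) {
        res = &mem_res[i++];    /* hand out a slot; no per-region alloc */
        /* ... fill in res, then add_resource() ... */
    }
    /* On error, a single memblock_free(mem_res, mem_res_sz) unwinds it all. */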
*/ - if (memblock_is_memory(res->start)) + if (memblock_is_memory(res->start)) { + memblock_free((phys_addr_t) res, sizeof(struct resource)); continue; + } ret = add_resource(&iomem_resource, res); if (ret < 0) @@ -181,10 +186,7 @@ /* Add /memory regions to the resource tree */ for_each_mem_region(region) { - res = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES); - if (!res) - panic("%s: Failed to allocate %zu bytes\n", __func__, - sizeof(struct resource)); + res = &mem_res[i++]; if (unlikely(memblock_is_nomap(region))) { res->name = "Reserved"; @@ -205,9 +207,9 @@ return; error: - memblock_free((phys_addr_t) res, sizeof(struct resource)); /* Better an empty resource tree than an inconsistent one */ release_child_resources(&iomem_resource); + memblock_free((phys_addr_t) mem_res, mem_res_sz); } diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c index 48b870a685b3..df5d2da7c40b 100644 --- a/arch/riscv/kernel/stacktrace.c +++ b/arch/riscv/kernel/stacktrace.c @@ -14,7 +14,7 @@ #include <asm/stacktrace.h> -register unsigned long sp_in_global __asm__("sp"); +register const unsigned long sp_in_global __asm__("sp"); #ifdef CONFIG_FRAME_POINTER @@ -28,9 +28,8 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs, sp = user_stack_pointer(regs); pc = instruction_pointer(regs); } else if (task == NULL || task == current) { - const register unsigned long current_sp = sp_in_global; fp = (unsigned long)__builtin_frame_address(0); - sp = current_sp; + sp = sp_in_global; pc = (unsigned long)walk_stackframe; } else { /* task blocked in __switch_to */ diff --git a/arch/riscv/kernel/time.c b/arch/riscv/kernel/time.c index 4d3a1048ad8b..8a5cf99c0776 100644 --- a/arch/riscv/kernel/time.c +++ b/arch/riscv/kernel/time.c @@ -4,6 +4,7 @@ * Copyright (C) 2017 SiFive */ +#include <linux/of_clk.h> #include <linux/clocksource.h> #include <linux/delay.h> #include <asm/sbi.h> @@ -24,6 +25,8 @@ void __init time_init(void) riscv_timebase = prop; lpj_fine = riscv_timebase / HZ; + + of_clk_init(NULL); timer_probe(); } diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c index 678204231700..3f1d35e7c98a 100644 --- a/arch/riscv/kernel/vdso.c +++ b/arch/riscv/kernel/vdso.c @@ -12,7 +12,7 @@ #include <linux/binfmts.h> #include <linux/err.h> #include <asm/page.h> -#ifdef GENERIC_TIME_VSYSCALL +#ifdef CONFIG_GENERIC_TIME_VSYSCALL #include <vdso/datapage.h> #else #include <asm/vdso.h> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index bf5379135e39..7cd4993f4ff2 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -157,9 +157,10 @@ disable: void __init setup_bootmem(void) { phys_addr_t mem_start = 0; - phys_addr_t start, end = 0; + phys_addr_t start, dram_end, end = 0; phys_addr_t vmlinux_end = __pa_symbol(&_end); phys_addr_t vmlinux_start = __pa_symbol(&_start); + phys_addr_t max_mapped_addr = __pa(~(ulong)0); u64 i; /* Find the memory region containing the kernel */ @@ -181,7 +182,18 @@ void __init setup_bootmem(void) /* Reserve from the start of the kernel to the end of the kernel */ memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start); - max_pfn = PFN_DOWN(memblock_end_of_DRAM()); + dram_end = memblock_end_of_DRAM(); + + /* + * The memblock allocator is not aware that the last 4K bytes of + * the addressable memory cannot be mapped because of the IS_ERR_VALUE + * macro. 
Make sure that the last 4K bytes are not usable by memblock + if the end of DRAM is equal to the maximum addressable memory. + */ + if (max_mapped_addr == (dram_end - 1)) + memblock_set_current_limit(max_mapped_addr - 4096); + + max_pfn = PFN_DOWN(dram_end); max_low_pfn = max_pfn; dma32_phys_limit = min(4UL * SZ_1G, (unsigned long)PFN_PHYS(max_low_pfn)); set_max_mapnr(max_low_pfn); diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c index 12ddd1f6bf70..a8a2ffd9114a 100644 --- a/arch/riscv/mm/kasan_init.c +++ b/arch/riscv/mm/kasan_init.c @@ -93,8 +93,8 @@ void __init kasan_init(void) VMALLOC_END)); for_each_mem_range(i, &_start, &_end) { - void *start = (void *)_start; - void *end = (void *)_end; + void *start = (void *)__va(_start); + void *end = (void *)__va(_end); if (start >= end) break; diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig index 5fa580219a86..52646f52f130 100644 --- a/arch/sh/Kconfig +++ b/arch/sh/Kconfig @@ -29,7 +29,6 @@ config SUPERH select HAVE_ARCH_KGDB select HAVE_ARCH_SECCOMP_FILTER select HAVE_ARCH_TRACEHOOK - select HAVE_COPY_THREAD_TLS select HAVE_DEBUG_BUGVERBOSE select HAVE_DEBUG_KMEMLEAK select HAVE_DYNAMIC_FTRACE diff --git a/arch/sh/boards/mach-sh03/rtc.c b/arch/sh/boards/mach-sh03/rtc.c index 8b23ed7c201c..7fb474844a2d 100644 --- a/arch/sh/boards/mach-sh03/rtc.c +++ b/arch/sh/boards/mach-sh03/rtc.c @@ -11,7 +11,6 @@ #include <linux/sched.h> #include <linux/time.h> #include <linux/bcd.h> -#include <linux/rtc.h> #include <linux/spinlock.h> #include <linux/io.h> #include <linux/rtc.h> diff --git a/arch/sh/configs/landisk_defconfig b/arch/sh/configs/landisk_defconfig index ba6ec042606f..e6c5ddf070c0 100644 --- a/arch/sh/configs/landisk_defconfig +++ b/arch/sh/configs/landisk_defconfig @@ -27,13 +27,12 @@ CONFIG_NETFILTER=y CONFIG_ATALK=m CONFIG_BLK_DEV_LOOP=y CONFIG_BLK_DEV_RAM=y -CONFIG_IDE=y -CONFIG_BLK_DEV_IDECD=y -CONFIG_BLK_DEV_OFFBOARD=y -CONFIG_BLK_DEV_GENERIC=y -CONFIG_BLK_DEV_AEC62XX=y +CONFIG_ATA=y +CONFIG_ATA_GENERIC=y +CONFIG_PATA_ATP867X=y CONFIG_SCSI=y CONFIG_BLK_DEV_SD=y +CONFIG_BLK_DEV_SR=y CONFIG_SCSI_MULTI_LUN=y CONFIG_MD=y CONFIG_BLK_DEV_MD=m diff --git a/arch/sh/configs/microdev_defconfig b/arch/sh/configs/microdev_defconfig index c65667d00313..e9825196dd66 100644 --- a/arch/sh/configs/microdev_defconfig +++ b/arch/sh/configs/microdev_defconfig @@ -20,8 +20,6 @@ CONFIG_IP_PNP=y # CONFIG_IPV6 is not set # CONFIG_FW_LOADER is not set CONFIG_BLK_DEV_RAM=y -CONFIG_IDE=y -CONFIG_BLK_DEV_IDECD=y CONFIG_NETDEVICES=y CONFIG_NET_ETHERNET=y CONFIG_SMC91X=y diff --git a/arch/sh/configs/sdk7780_defconfig b/arch/sh/configs/sdk7780_defconfig index d10a0414123a..d00376eb044f 100644 --- a/arch/sh/configs/sdk7780_defconfig +++ b/arch/sh/configs/sdk7780_defconfig @@ -44,16 +44,14 @@ CONFIG_NET_SCHED=y CONFIG_PARPORT=y CONFIG_BLK_DEV_LOOP=y CONFIG_BLK_DEV_RAM=y -CONFIG_IDE=y -CONFIG_BLK_DEV_IDECD=y -CONFIG_BLK_DEV_PLATFORM=y -CONFIG_BLK_DEV_GENERIC=y CONFIG_BLK_DEV_SD=y CONFIG_BLK_DEV_SR=y CONFIG_CHR_DEV_SG=y CONFIG_SCSI_SPI_ATTRS=y CONFIG_SCSI_FC_ATTRS=y CONFIG_ATA=y +CONFIG_ATA_GENERIC=y +CONFIG_PATA_PLATFORM=y CONFIG_MD=y CONFIG_BLK_DEV_DM=y CONFIG_NETDEVICES=y diff --git a/arch/sh/configs/sdk7786_defconfig b/arch/sh/configs/sdk7786_defconfig index 61bec46ebd66..4a44cac640bc 100644 --- a/arch/sh/configs/sdk7786_defconfig +++ b/arch/sh/configs/sdk7786_defconfig @@ -116,9 +116,6 @@ CONFIG_MTD_UBI_GLUEBI=m CONFIG_BLK_DEV_LOOP=y CONFIG_BLK_DEV_CRYPTOLOOP=y CONFIG_BLK_DEV_RAM=y -CONFIG_IDE=y -CONFIG_BLK_DEV_IDECD=y -CONFIG_BLK_DEV_PLATFORM=y 
CONFIG_BLK_DEV_SD=y CONFIG_BLK_DEV_SR=y CONFIG_SCSI_MULTI_LUN=y diff --git a/arch/sh/configs/se7750_defconfig b/arch/sh/configs/se7750_defconfig index 3f1c13799d79..4defc7628a49 100644 --- a/arch/sh/configs/se7750_defconfig +++ b/arch/sh/configs/se7750_defconfig @@ -29,7 +29,6 @@ CONFIG_MTD_BLOCK=y CONFIG_MTD_CFI=y CONFIG_MTD_CFI_AMDSTD=y CONFIG_MTD_ROM=y -CONFIG_IDE=y CONFIG_SCSI=y CONFIG_NETDEVICES=y CONFIG_NET_ETHERNET=y diff --git a/arch/sh/configs/sh03_defconfig b/arch/sh/configs/sh03_defconfig index f0073ed39947..48b457d59e79 100644 --- a/arch/sh/configs/sh03_defconfig +++ b/arch/sh/configs/sh03_defconfig @@ -39,9 +39,6 @@ CONFIG_IP_PNP_RARP=y CONFIG_BLK_DEV_LOOP=y CONFIG_BLK_DEV_NBD=y CONFIG_BLK_DEV_RAM=y -CONFIG_IDE=y -CONFIG_BLK_DEV_IDECD=m -CONFIG_BLK_DEV_IDETAPE=m CONFIG_SCSI=m CONFIG_BLK_DEV_SD=m CONFIG_BLK_DEV_SR=m diff --git a/arch/sh/drivers/dma/Kconfig b/arch/sh/drivers/dma/Kconfig index d0de378beefe..7d54f284ce10 100644 --- a/arch/sh/drivers/dma/Kconfig +++ b/arch/sh/drivers/dma/Kconfig @@ -63,8 +63,7 @@ config PVR2_DMA config G2_DMA tristate "G2 Bus DMA support" - depends on SH_DREAMCAST - select SH_DMA_API + depends on SH_DREAMCAST && SH_DMA_API help This enables support for the DMA controller for the Dreamcast's G2 bus. Drivers that want this will generally enable this on diff --git a/arch/sh/include/asm/gpio.h b/arch/sh/include/asm/gpio.h index 351918894e86..d643250f0a0f 100644 --- a/arch/sh/include/asm/gpio.h +++ b/arch/sh/include/asm/gpio.h @@ -16,7 +16,6 @@ #include <cpu/gpio.h> #endif -#define ARCH_NR_GPIOS 512 #include <asm-generic/gpio.h> #ifdef CONFIG_GPIOLIB diff --git a/arch/sh/kernel/cpu/sh3/entry.S b/arch/sh/kernel/cpu/sh3/entry.S index 25eb80905416..e48b3dd996f5 100644 --- a/arch/sh/kernel/cpu/sh3/entry.S +++ b/arch/sh/kernel/cpu/sh3/entry.S @@ -14,7 +14,6 @@ #include <cpu/mmu_context.h> #include <asm/page.h> #include <asm/cache.h> -#include <asm/thread_info.h> ! NOTE: ! GNU as (as of 2.9.1) changes bf/s into bt/s and bra, when the address diff --git a/arch/sh/mm/Kconfig b/arch/sh/mm/Kconfig index 703d3069997c..77aa2f802d8d 100644 --- a/arch/sh/mm/Kconfig +++ b/arch/sh/mm/Kconfig @@ -105,7 +105,7 @@ config VSYSCALL (the default value) say Y. 
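The sh debugfs cleanups a few hunks below (asids-debugfs.c, cache-debugfs.c, pmb.c) each replace open-coded single_open() boilerplate with DEFINE_SHOW_ATTRIBUTE(). From include/linux/seq_file.h, the macro expands to roughly the code being deleted; shown for orientation, not verbatim:

static int name_open(struct inode *inode, struct file *file)
{
    return single_open(file, name_show, inode->i_private);
}

static const struct file_operations name_fops = {
    .owner   = THIS_MODULE,
    .open    = name_open,
    .read    = seq_read,
    .llseek  = seq_lseek,
    .release = single_release,
};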
config NUMA - bool "Non Uniform Memory Access (NUMA) Support" + bool "Non-Uniform Memory Access (NUMA) Support" depends on MMU && SYS_SUPPORTS_NUMA select ARCH_WANT_NUMA_VARIABLE_LOCALITY default n diff --git a/arch/sh/mm/asids-debugfs.c b/arch/sh/mm/asids-debugfs.c index 4c1ca197e9c5..d16d6f5ec774 100644 --- a/arch/sh/mm/asids-debugfs.c +++ b/arch/sh/mm/asids-debugfs.c @@ -26,7 +26,7 @@ #include <asm/processor.h> #include <asm/mmu_context.h> -static int asids_seq_show(struct seq_file *file, void *iter) +static int asids_debugfs_show(struct seq_file *file, void *iter) { struct task_struct *p; @@ -48,18 +48,7 @@ static int asids_seq_show(struct seq_file *file, void *iter) return 0; } -static int asids_debugfs_open(struct inode *inode, struct file *file) -{ - return single_open(file, asids_seq_show, inode->i_private); -} - -static const struct file_operations asids_debugfs_fops = { - .owner = THIS_MODULE, - .open = asids_debugfs_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(asids_debugfs); static int __init asids_debugfs_init(void) { diff --git a/arch/sh/mm/cache-debugfs.c b/arch/sh/mm/cache-debugfs.c index 17d780794497..b0f185169dfa 100644 --- a/arch/sh/mm/cache-debugfs.c +++ b/arch/sh/mm/cache-debugfs.c @@ -22,7 +22,7 @@ enum cache_type { CACHE_TYPE_UNIFIED, }; -static int cache_seq_show(struct seq_file *file, void *iter) +static int cache_debugfs_show(struct seq_file *file, void *iter) { unsigned int cache_type = (unsigned int)file->private; struct cache_info *cache; @@ -94,18 +94,7 @@ static int cache_seq_show(struct seq_file *file, void *iter) return 0; } -static int cache_debugfs_open(struct inode *inode, struct file *file) -{ - return single_open(file, cache_seq_show, inode->i_private); -} - -static const struct file_operations cache_debugfs_fops = { - .owner = THIS_MODULE, - .open = cache_debugfs_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(cache_debugfs); static int __init cache_debugfs_init(void) { diff --git a/arch/sh/mm/pmb.c b/arch/sh/mm/pmb.c index b20aba6e1b37..68eb7cc6e564 100644 --- a/arch/sh/mm/pmb.c +++ b/arch/sh/mm/pmb.c @@ -812,7 +812,7 @@ bool __in_29bit_mode(void) return (__raw_readl(PMB_PASCR) & PASCR_SE) == 0; } -static int pmb_seq_show(struct seq_file *file, void *iter) +static int pmb_debugfs_show(struct seq_file *file, void *iter) { int i; @@ -846,18 +846,7 @@ static int pmb_seq_show(struct seq_file *file, void *iter) return 0; } -static int pmb_debugfs_open(struct inode *inode, struct file *file) -{ - return single_open(file, pmb_seq_show, NULL); -} - -static const struct file_operations pmb_debugfs_fops = { - .owner = THIS_MODULE, - .open = pmb_debugfs_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(pmb_debugfs); static int __init pmb_debugfs_init(void) { diff --git a/arch/sparc/include/asm/highmem.h b/arch/sparc/include/asm/highmem.h index 875116209ec1..c7b2e208328b 100644 --- a/arch/sparc/include/asm/highmem.h +++ b/arch/sparc/include/asm/highmem.h @@ -50,10 +50,11 @@ extern pte_t *pkmap_page_table; #define flush_cache_kmaps() flush_cache_all() -/* FIXME: Use __flush_tlb_one(vaddr) instead of flush_cache_all() -- Anton */ -#define arch_kmap_local_post_map(vaddr, pteval) flush_cache_all() -#define arch_kmap_local_post_unmap(vaddr) flush_cache_all() - +/* FIXME: Use __flush_*_one(vaddr) instead of flush_*_all() -- Anton */ +#define arch_kmap_local_pre_map(vaddr, pteval) flush_cache_all() 
+#define arch_kmap_local_pre_unmap(vaddr) flush_cache_all() +#define arch_kmap_local_post_map(vaddr, pteval) flush_tlb_all() +#define arch_kmap_local_post_unmap(vaddr) flush_tlb_all() #endif /* __KERNEL__ */ diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 7b6dd10b162a..21f851179ff0 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -19,6 +19,7 @@ config X86_32 select KMAP_LOCAL select MODULES_USE_ELF_REL select OLD_SIGACTION + select ARCH_SPLIT_ARG64 config X86_64 def_bool y diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c index 18d8f17f755c..0904f5676e4d 100644 --- a/arch/x86/entry/common.c +++ b/arch/x86/entry/common.c @@ -73,10 +73,8 @@ static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs, unsigned int nr) { if (likely(nr < IA32_NR_syscalls)) { - instrumentation_begin(); nr = array_index_nospec(nr, IA32_NR_syscalls); regs->ax = ia32_sys_call_table[nr](regs); - instrumentation_end(); } } @@ -91,8 +89,11 @@ __visible noinstr void do_int80_syscall_32(struct pt_regs *regs) * or may not be necessary, but it matches the old asm behavior. */ nr = (unsigned int)syscall_enter_from_user_mode(regs, nr); + instrumentation_begin(); do_syscall_32_irqs_on(regs, nr); + + instrumentation_end(); syscall_exit_to_user_mode(regs); } @@ -121,11 +122,12 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs) res = get_user(*(u32 *)®s->bp, (u32 __user __force *)(unsigned long)(u32)regs->sp); } - instrumentation_end(); if (res) { /* User code screwed up. */ regs->ax = -EFAULT; + + instrumentation_end(); syscall_exit_to_user_mode(regs); return false; } @@ -135,6 +137,8 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs) /* Now this is just like a normal syscall. */ do_syscall_32_irqs_on(regs, nr); + + instrumentation_end(); syscall_exit_to_user_mode(regs); return true; } diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c index e04d90af4c27..6375967a8244 100644 --- a/arch/x86/hyperv/hv_init.c +++ b/arch/x86/hyperv/hv_init.c @@ -16,6 +16,7 @@ #include <asm/hyperv-tlfs.h> #include <asm/mshyperv.h> #include <asm/idtentry.h> +#include <linux/kexec.h> #include <linux/version.h> #include <linux/vmalloc.h> #include <linux/mm.h> @@ -26,6 +27,8 @@ #include <linux/syscore_ops.h> #include <clocksource/hyperv_timer.h> +int hyperv_init_cpuhp; + void *hv_hypercall_pg; EXPORT_SYMBOL_GPL(hv_hypercall_pg); @@ -312,6 +315,25 @@ static struct syscore_ops hv_syscore_ops = { .resume = hv_resume, }; +static void (* __initdata old_setup_percpu_clockev)(void); + +static void __init hv_stimer_setup_percpu_clockev(void) +{ + /* + * Ignore any errors in setting up stimer clockevents + * as we can run with the LAPIC timer as a fallback. + */ + (void)hv_stimer_alloc(); + + /* + * Still register the LAPIC timer, because the direct-mode STIMER is + * not supported by old versions of Hyper-V. This also allows users + * to switch to LAPIC timer via /sys, if they want to. + */ + if (old_setup_percpu_clockev) + old_setup_percpu_clockev(); +} + /* * This function is to be invoked early in the boot sequence after the * hypervisor has been detected. @@ -390,10 +412,14 @@ void __init hyperv_init(void) wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64); /* - * Ignore any errors in setting up stimer clockevents - * as we can run with the LAPIC timer as a fallback. + * hyperv_init() is called before LAPIC is initialized: see + * apic_intr_mode_init() -> x86_platform.apic_post_init() and + * apic_bsp_setup() -> setup_local_APIC(). 
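Because of that ordering, hv_stimer_alloc() cannot simply run from hyperv_init(); the patch chains into the x86_init hook instead. The save-and-wrap pattern, condensed from the hunks around this comment:

static void (*old_setup_percpu_clockev)(void);

static void __init hv_stimer_setup_percpu_clockev(void)
{
    (void)hv_stimer_alloc();    /* errors tolerated: LAPIC timer remains a fallback */
    if (old_setup_percpu_clockev)
        old_setup_percpu_clockev();    /* still register the LAPIC timer */
}

/* In hyperv_init(), which runs before percpu clockevent setup: */
old_setup_percpu_clockev = x86_init.timers.setup_percpu_clockev;
x86_init.timers.setup_percpu_clockev = hv_stimer_setup_percpu_clockev;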
The direct-mode STIMER + * depends on LAPIC, so hv_stimer_alloc() should be called from + * x86_init.timers.setup_percpu_clockev. */ - (void)hv_stimer_alloc(); + old_setup_percpu_clockev = x86_init.timers.setup_percpu_clockev; + x86_init.timers.setup_percpu_clockev = hv_stimer_setup_percpu_clockev; hv_apic_init(); @@ -401,6 +427,7 @@ void __init hyperv_init(void) register_syscore_ops(&hv_syscore_ops); + hyperv_init_cpuhp = cpuhp; return; remove_cpuhp_state: diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c index 5208ba49c89a..2c87350c1fb0 100644 --- a/arch/x86/hyperv/mmu.c +++ b/arch/x86/hyperv/mmu.c @@ -66,11 +66,17 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus, if (!hv_hypercall_pg) goto do_native; - if (cpumask_empty(cpus)) - return; - local_irq_save(flags); + /* + * Only check the mask _after_ interrupts have been disabled to avoid the + * mask changing under our feet. + */ + if (cpumask_empty(cpus)) { + local_irq_restore(flags); + return; + } + flush_pcpu = (struct hv_tlb_flush **) this_cpu_ptr(hyperv_pcpu_input_arg); diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h index a5aba4ab0224..67a4f1cb2aac 100644 --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -16,14 +16,25 @@ * Use kernel_fpu_begin/end() if you intend to use FPU in kernel context. It * disables preemption so be careful if you intend to use it for long periods * of time. - * If you intend to use the FPU in softirq you need to check first with + * If you intend to use the FPU in irq/softirq you need to check first with * irq_fpu_usable() if it is possible. */ -extern void kernel_fpu_begin(void); + +/* Kernel FPU states to initialize in kernel_fpu_begin_mask() */ +#define KFPU_387 _BITUL(0) /* 387 state will be initialized */ +#define KFPU_MXCSR _BITUL(1) /* MXCSR will be initialized */ + +extern void kernel_fpu_begin_mask(unsigned int kfpu_mask); extern void kernel_fpu_end(void); extern bool irq_fpu_usable(void); extern void fpregs_mark_activate(void); +/* Code that is unaware of kernel_fpu_begin_mask() can use this */ +static inline void kernel_fpu_begin(void) +{ + kernel_fpu_begin_mask(KFPU_387 | KFPU_MXCSR); +} + /* * Use fpregs_lock() while editing CPU's FPU registers or fpu->state. * A context switch will (and softirq might) save CPU's FPU registers to diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h index 5e658ba2654a..9abe842dbd84 100644 --- a/arch/x86/include/asm/intel-family.h +++ b/arch/x86/include/asm/intel-family.h @@ -97,6 +97,7 @@ #define INTEL_FAM6_LAKEFIELD 0x8A #define INTEL_FAM6_ALDERLAKE 0x97 +#define INTEL_FAM6_ALDERLAKE_L 0x9A /* "Small Core" Processors (Atom) */ diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 3ab7b46087b7..3d6616f6f6ef 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1010,9 +1010,21 @@ struct kvm_arch { */ bool tdp_mmu_enabled; - /* List of struct tdp_mmu_pages being used as roots */ + /* + * List of struct kvm_mmu_pages being used as roots. + * All struct kvm_mmu_pages in the list should have + * tdp_mmu_page set. + * All struct kvm_mmu_pages in the list should have a positive + * root_count except when a thread holds the MMU lock and is removing + * an entry from the list. + */ struct list_head tdp_mmu_roots; - /* List of struct tdp_mmu_pages not being used as roots */ + + /* + * List of struct kvm_mmu_pages not being used as roots. 
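Stepping back to the fpu/api.h hunk above: existing kernel_fpu_begin() callers keep both initializations through the inline wrapper, while legacy x87-only users can now skip the MXCSR load. A hedged usage sketch:

    kernel_fpu_begin();                 /* wrapper: KFPU_387 | KFPU_MXCSR */
    /* ... SSE/AVX work ... */
    kernel_fpu_end();

    kernel_fpu_begin_mask(KFPU_387);    /* x87-only user skips MXCSR init */
    /* ... x87 work ... */
    kernel_fpu_end();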
+ * All struct kvm_mmu_pages in the list should have + * tdp_mmu_page set and a root_count of 0. + */ struct list_head tdp_mmu_pages; }; @@ -1287,6 +1299,8 @@ struct kvm_x86_ops { void (*migrate_timers)(struct kvm_vcpu *vcpu); void (*msr_filter_changed)(struct kvm_vcpu *vcpu); int (*complete_emulated_msr)(struct kvm_vcpu *vcpu, int err); + + void (*vcpu_deliver_sipi_vector)(struct kvm_vcpu *vcpu, u8 vector); }; struct kvm_x86_nested_ops { @@ -1468,6 +1482,7 @@ int kvm_fast_pio(struct kvm_vcpu *vcpu, int size, unsigned short port, int in); int kvm_emulate_cpuid(struct kvm_vcpu *vcpu); int kvm_emulate_halt(struct kvm_vcpu *vcpu); int kvm_vcpu_halt(struct kvm_vcpu *vcpu); +int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu); int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu); void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg); diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index ffc289992d1b..30f76b966857 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -74,6 +74,8 @@ static inline void hv_disable_stimer0_percpu_irq(int irq) {} #if IS_ENABLED(CONFIG_HYPERV) +extern int hyperv_init_cpuhp; + extern void *hv_hypercall_pg; extern void __percpu **hyperv_pcpu_input_arg; diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h index 0b4920a7238e..e16cccdd0420 100644 --- a/arch/x86/include/asm/msr.h +++ b/arch/x86/include/asm/msr.h @@ -86,7 +86,7 @@ static inline void do_trace_rdpmc(unsigned int msr, u64 val, int failed) {} * think of extending them - you will be slapped with a stinking trout or a frozen * shark will reach you, wherever you are! You've been warned. */ -static inline unsigned long long notrace __rdmsr(unsigned int msr) +static __always_inline unsigned long long __rdmsr(unsigned int msr) { DECLARE_ARGS(val, low, high); @@ -98,7 +98,7 @@ static inline unsigned long long notrace __rdmsr(unsigned int msr) return EAX_EDX_VAL(val, low, high); } -static inline void notrace __wrmsr(unsigned int msr, u32 low, u32 high) +static __always_inline void __wrmsr(unsigned int msr, u32 low, u32 high) { asm volatile("1: wrmsr\n" "2:\n" diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h index 488a8e848754..9239399e5491 100644 --- a/arch/x86/include/asm/topology.h +++ b/arch/x86/include/asm/topology.h @@ -110,6 +110,8 @@ extern const struct cpumask *cpu_coregroup_mask(int cpu); #define topology_die_id(cpu) (cpu_data(cpu).cpu_die_id) #define topology_core_id(cpu) (cpu_data(cpu).cpu_core_id) +extern unsigned int __max_die_per_package; + #ifdef CONFIG_SMP #define topology_die_cpumask(cpu) (per_cpu(cpu_die_map, cpu)) #define topology_core_cpumask(cpu) (per_cpu(cpu_core_map, cpu)) @@ -118,8 +120,6 @@ extern const struct cpumask *cpu_coregroup_mask(int cpu); extern unsigned int __max_logical_packages; #define topology_max_packages() (__max_logical_packages) -extern unsigned int __max_die_per_package; - static inline int topology_max_die_per_package(void) { return __max_die_per_package; diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c index f8ca66f3d861..347a956f71ca 100644 --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -542,12 +542,12 @@ static void bsp_init_amd(struct cpuinfo_x86 *c) u32 ecx; ecx = cpuid_ecx(0x8000001e); - nodes_per_socket = ((ecx >> 8) & 7) + 1; + __max_die_per_package = nodes_per_socket = ((ecx >> 8) & 7) + 1; } else if (boot_cpu_has(X86_FEATURE_NODEID_MSR)) { u64 value; rdmsrl(MSR_FAM10H_NODE_ID, value); - nodes_per_socket 
= ((value >> 3) & 7) + 1; + __max_die_per_package = nodes_per_socket = ((value >> 3) & 7) + 1; } if (!boot_cpu_has(X86_FEATURE_AMD_SSBD) && diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 13d3f1cbda17..e133ce1e562b 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -1992,10 +1992,9 @@ static __always_inline void exc_machine_check_kernel(struct pt_regs *regs) * that out because it's an indirect call. Annotate it. */ instrumentation_begin(); - trace_hardirqs_off_finish(); + machine_check_vector(regs); - if (regs->flags & X86_EFLAGS_IF) - trace_hardirqs_on_prepare(); + instrumentation_end(); irqentry_nmi_exit(regs, irq_state); } @@ -2004,7 +2003,9 @@ static __always_inline void exc_machine_check_user(struct pt_regs *regs) { irqentry_enter_from_user_mode(regs); instrumentation_begin(); + machine_check_vector(regs); + instrumentation_end(); irqentry_exit_to_user_mode(regs); } diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index f628e3dc150f..43b54bef5448 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -135,14 +135,32 @@ static void hv_machine_shutdown(void) { if (kexec_in_progress && hv_kexec_handler) hv_kexec_handler(); + + /* + * Call hv_cpu_die() on all the CPUs, otherwise later the hypervisor + * corrupts the old VP Assist Pages and can crash the kexec kernel. + */ + if (kexec_in_progress && hyperv_init_cpuhp > 0) + cpuhp_remove_state(hyperv_init_cpuhp); + + /* The function calls stop_other_cpus(). */ native_machine_shutdown(); + + /* Disable the hypercall page when there is only 1 active CPU. */ + if (kexec_in_progress) + hyperv_cleanup(); } static void hv_machine_crash_shutdown(struct pt_regs *regs) { if (hv_crash_handler) hv_crash_handler(regs); + + /* The function calls crash_smp_send_stop(). */ native_machine_crash_shutdown(regs); + + /* Disable the hypercall page when there is only 1 active CPU. 
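+ * hyperv_cleanup() clears the guest OS ID and the hypercall page MSR so the kdump kernel can register its own.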
*/ + hyperv_cleanup(); } #endif /* CONFIG_KEXEC_CORE */ #endif /* CONFIG_HYPERV */ diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c index 23ad8e953dfb..a29997e6cf9e 100644 --- a/arch/x86/kernel/cpu/mtrr/generic.c +++ b/arch/x86/kernel/cpu/mtrr/generic.c @@ -167,9 +167,6 @@ static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end, *repeat = 0; *uniform = 1; - /* Make end inclusive instead of exclusive */ - end--; - prev_match = MTRR_TYPE_INVALID; for (i = 0; i < num_var_ranges; ++i) { unsigned short start_state, end_state, inclusive; @@ -261,6 +258,9 @@ u8 mtrr_type_lookup(u64 start, u64 end, u8 *uniform) int repeat; u64 partial_end; + /* Make end inclusive instead of exclusive */ + end--; + if (!mtrr_state_set) return MTRR_TYPE_INVALID; diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c index 29ffb95b25ff..460f3e0df106 100644 --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c @@ -525,89 +525,70 @@ static void rdtgroup_remove(struct rdtgroup *rdtgrp) kfree(rdtgrp); } -struct task_move_callback { - struct callback_head work; - struct rdtgroup *rdtgrp; -}; - -static void move_myself(struct callback_head *head) +static void _update_task_closid_rmid(void *task) { - struct task_move_callback *callback; - struct rdtgroup *rdtgrp; - - callback = container_of(head, struct task_move_callback, work); - rdtgrp = callback->rdtgrp; - /* - * If resource group was deleted before this task work callback - * was invoked, then assign the task to root group and free the - * resource group. + * If the task is still current on this CPU, update PQR_ASSOC MSR. + * Otherwise, the MSR is updated when the task is scheduled in. */ - if (atomic_dec_and_test(&rdtgrp->waitcount) && - (rdtgrp->flags & RDT_DELETED)) { - current->closid = 0; - current->rmid = 0; - rdtgroup_remove(rdtgrp); - } - - if (unlikely(current->flags & PF_EXITING)) - goto out; - - preempt_disable(); - /* update PQR_ASSOC MSR to make resource group go into effect */ - resctrl_sched_in(); - preempt_enable(); + if (task == current) + resctrl_sched_in(); +} -out: - kfree(callback); +static void update_task_closid_rmid(struct task_struct *t) +{ + if (IS_ENABLED(CONFIG_SMP) && task_curr(t)) + smp_call_function_single(task_cpu(t), _update_task_closid_rmid, t, 1); + else + _update_task_closid_rmid(t); } static int __rdtgroup_move_task(struct task_struct *tsk, struct rdtgroup *rdtgrp) { - struct task_move_callback *callback; - int ret; - - callback = kzalloc(sizeof(*callback), GFP_KERNEL); - if (!callback) - return -ENOMEM; - callback->work.func = move_myself; - callback->rdtgrp = rdtgrp; + /* If the task is already in rdtgrp, no need to move the task. */ + if ((rdtgrp->type == RDTCTRL_GROUP && tsk->closid == rdtgrp->closid && + tsk->rmid == rdtgrp->mon.rmid) || + (rdtgrp->type == RDTMON_GROUP && tsk->rmid == rdtgrp->mon.rmid && + tsk->closid == rdtgrp->mon.parent->closid)) + return 0; /* - * Take a refcount, so rdtgrp cannot be freed before the - * callback has been invoked. + * Set the task's closid/rmid before the PQR_ASSOC MSR can be + * updated by them. + * + * For ctrl_mon groups, move both closid and rmid. + * For monitor groups, can move the tasks only from + * their parent CTRL group. */ - atomic_inc(&rdtgrp->waitcount); - ret = task_work_add(tsk, &callback->work, TWA_RESUME); - if (ret) { - /* - * Task is exiting. Drop the refcount and free the callback. 
- * No need to check the refcount as the group cannot be - * deleted before the write function unlocks rdtgroup_mutex. - */ - atomic_dec(&rdtgrp->waitcount); - kfree(callback); - rdt_last_cmd_puts("Task exited\n"); - } else { - /* - * For ctrl_mon groups move both closid and rmid. - * For monitor groups, can move the tasks only from - * their parent CTRL group. - */ - if (rdtgrp->type == RDTCTRL_GROUP) { - tsk->closid = rdtgrp->closid; + + if (rdtgrp->type == RDTCTRL_GROUP) { + tsk->closid = rdtgrp->closid; + tsk->rmid = rdtgrp->mon.rmid; + } else if (rdtgrp->type == RDTMON_GROUP) { + if (rdtgrp->mon.parent->closid == tsk->closid) { tsk->rmid = rdtgrp->mon.rmid; - } else if (rdtgrp->type == RDTMON_GROUP) { - if (rdtgrp->mon.parent->closid == tsk->closid) { - tsk->rmid = rdtgrp->mon.rmid; - } else { - rdt_last_cmd_puts("Can't move task to different control group\n"); - ret = -EINVAL; - } + } else { + rdt_last_cmd_puts("Can't move task to different control group\n"); + return -EINVAL; } } - return ret; + + /* + * Ensure the task's closid and rmid are written before determining if + * the task is current that will decide if it will be interrupted. + */ + barrier(); + + /* + * By now, the task's closid and rmid are set. If the task is current + * on a CPU, the PQR_ASSOC MSR needs to be updated to make the resource + * group go into effect. If the task is not current, the MSR will be + * updated when the task is scheduled in. + */ + update_task_closid_rmid(tsk); + + return 0; } static bool is_closid_match(struct task_struct *t, struct rdtgroup *r) diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c index 1068002c8532..8678864ce712 100644 --- a/arch/x86/kernel/cpu/topology.c +++ b/arch/x86/kernel/cpu/topology.c @@ -25,10 +25,10 @@ #define BITS_SHIFT_NEXT_LEVEL(eax) ((eax) & 0x1f) #define LEVEL_MAX_SIBLINGS(ebx) ((ebx) & 0xffff) -#ifdef CONFIG_SMP unsigned int __max_die_per_package __read_mostly = 1; EXPORT_SYMBOL(__max_die_per_package); +#ifdef CONFIG_SMP /* * Check if given CPUID extended toplogy "leaf" is implemented */ diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c index eb86a2b831b1..571220ac8bea 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -121,7 +121,7 @@ int copy_fpregs_to_fpstate(struct fpu *fpu) } EXPORT_SYMBOL(copy_fpregs_to_fpstate); -void kernel_fpu_begin(void) +void kernel_fpu_begin_mask(unsigned int kfpu_mask) { preempt_disable(); @@ -141,13 +141,14 @@ void kernel_fpu_begin(void) } __cpu_invalidate_fpregs_state(); - if (boot_cpu_has(X86_FEATURE_XMM)) + /* Put sane initial values into the control registers. */ + if (likely(kfpu_mask & KFPU_MXCSR) && boot_cpu_has(X86_FEATURE_XMM)) ldmxcsr(MXCSR_DEFAULT); - if (boot_cpu_has(X86_FEATURE_FPU)) + if (unlikely(kfpu_mask & KFPU_387) && boot_cpu_has(X86_FEATURE_FPU)) asm volatile ("fninit"); } -EXPORT_SYMBOL_GPL(kernel_fpu_begin); +EXPORT_SYMBOL_GPL(kernel_fpu_begin_mask); void kernel_fpu_end(void) { diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index 740f3bdb3f61..3412c4595efd 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -661,17 +661,6 @@ static void __init trim_platform_memory_ranges(void) static void __init trim_bios_range(void) { /* - * A special case is the first 4Kb of memory; - * This is a BIOS owned area, not kernel ram, but generally - * not listed as such in the E820 table. - * - * This typically reserves additional memory (64KiB by default) - * since some BIOSes are known to corrupt low memory. 
See the - * Kconfig help text for X86_RESERVE_LOW. - */ - e820__range_update(0, PAGE_SIZE, E820_TYPE_RAM, E820_TYPE_RESERVED); - - /* * special case: Some BIOSes report the PC BIOS * area (640Kb -> 1Mb) as RAM even though it is not. * take them out. @@ -728,6 +717,15 @@ early_param("reservelow", parse_reservelow); static void __init trim_low_memory_range(void) { + /* + * A special case is the first 4Kb of memory; + * This is a BIOS owned area, not kernel ram, but generally + * not listed as such in the E820 table. + * + * This typically reserves additional memory (64KiB by default) + * since some BIOSes are known to corrupt low memory. See the + * Kconfig help text for X86_RESERVE_LOW. + */ memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE)); } diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c index 7d04b356d44d..cdc04d091242 100644 --- a/arch/x86/kernel/sev-es-shared.c +++ b/arch/x86/kernel/sev-es-shared.c @@ -305,14 +305,14 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo) case 0xe4: case 0xe5: *exitinfo |= IOIO_TYPE_IN; - *exitinfo |= (u64)insn->immediate.value << 16; + *exitinfo |= (u8)insn->immediate.value << 16; break; /* OUT immediate opcodes */ case 0xe6: case 0xe7: *exitinfo |= IOIO_TYPE_OUT; - *exitinfo |= (u64)insn->immediate.value << 16; + *exitinfo |= (u8)insn->immediate.value << 16; break; /* IN register opcodes */ diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c index 0bd1a0fc587e..84c1821819af 100644 --- a/arch/x86/kernel/sev-es.c +++ b/arch/x86/kernel/sev-es.c @@ -225,7 +225,7 @@ static inline u64 sev_es_rd_ghcb_msr(void) return __rdmsr(MSR_AMD64_SEV_ES_GHCB); } -static inline void sev_es_wr_ghcb_msr(u64 val) +static __always_inline void sev_es_wr_ghcb_msr(u64 val) { u32 low, high; @@ -286,6 +286,12 @@ static enum es_result vc_write_mem(struct es_em_ctxt *ctxt, u16 d2; u8 d1; + /* If instruction ran in kernel mode and the I/O buffer is in kernel space */ + if (!user_mode(ctxt->regs) && !access_ok(target, size)) { + memcpy(dst, buf, size); + return ES_OK; + } + switch (size) { case 1: memcpy(&d1, buf, 1); @@ -335,6 +341,12 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt, u16 d2; u8 d1; + /* If instruction ran in kernel mode and the I/O buffer is in kernel space */ + if (!user_mode(ctxt->regs) && !access_ok(s, size)) { + memcpy(buf, src, size); + return ES_OK; + } + switch (size) { case 1: if (get_user(d1, s)) diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c index 8ca66af96a54..117e24fbfd8a 100644 --- a/arch/x86/kernel/smpboot.c +++ b/arch/x86/kernel/smpboot.c @@ -56,6 +56,7 @@ #include <linux/numa.h> #include <linux/pgtable.h> #include <linux/overflow.h> +#include <linux/syscore_ops.h> #include <asm/acpi.h> #include <asm/desc.h> @@ -2083,6 +2084,23 @@ static void init_counter_refs(void) this_cpu_write(arch_prev_mperf, mperf); } +#ifdef CONFIG_PM_SLEEP +static struct syscore_ops freq_invariance_syscore_ops = { + .resume = init_counter_refs, +}; + +static void register_freq_invariance_syscore_ops(void) +{ + /* Bail out if registered already. 
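+ * node.prev is NULL until register_syscore_ops() links this entry into the syscore list, so it doubles as a "registered" flag.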
*/ + if (freq_invariance_syscore_ops.node.prev) + return; + + register_syscore_ops(&freq_invariance_syscore_ops); +} +#else +static inline void register_freq_invariance_syscore_ops(void) {} +#endif + static void init_freq_invariance(bool secondary, bool cppc_ready) { bool ret = false; @@ -2109,6 +2127,7 @@ static void init_freq_invariance(bool secondary, bool cppc_ready) if (ret) { init_counter_refs(); static_branch_enable(&arch_scale_freq_key); + register_freq_invariance_syscore_ops(); pr_info("Estimated ratio of average max frequency by base frequency (times 1024): %llu\n", arch_max_freq_ratio); } else { pr_debug("Couldn't determine max cpu frequency, necessary for scale-invariant accounting.\n"); diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index 3136e05831cf..43cceadd073e 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -674,7 +674,7 @@ static bool pv_eoi_get_pending(struct kvm_vcpu *vcpu) (unsigned long long)vcpu->arch.pv_eoi.msr_val); return false; } - return val & 0x1; + return val & KVM_PV_EOI_ENABLED; } static void pv_eoi_set_pending(struct kvm_vcpu *vcpu) @@ -2898,7 +2898,7 @@ void kvm_apic_accept_events(struct kvm_vcpu *vcpu) /* evaluate pending_events before reading the vector */ smp_rmb(); sipi_vector = apic->sipi_vector; - kvm_vcpu_deliver_sipi_vector(vcpu, sipi_vector); + kvm_x86_ops.vcpu_deliver_sipi_vector(vcpu, sipi_vector); vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE; } } diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 9c4a9c8e43d9..581925e476d6 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -49,7 +49,7 @@ static inline u64 rsvd_bits(int s, int e) if (e < s) return 0; - return ((1ULL << (e - s + 1)) - 1) << s; + return ((2ULL << (e - s)) - 1) << s; } void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 access_mask); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index c478904af518..6d16481aa29d 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3493,26 +3493,25 @@ static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct) * Return the level of the lowest level SPTE added to sptes. * That SPTE may be non-present. */ -static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes) +static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level) { struct kvm_shadow_walk_iterator iterator; - int leaf = vcpu->arch.mmu->root_level; + int leaf = -1; u64 spte; - walk_shadow_page_lockless_begin(vcpu); - for (shadow_walk_init(&iterator, vcpu, addr); + for (shadow_walk_init(&iterator, vcpu, addr), + *root_level = iterator.level; shadow_walk_okay(&iterator); __shadow_walk_next(&iterator, spte)) { leaf = iterator.level; spte = mmu_spte_get_lockless(iterator.sptep); - sptes[leaf - 1] = spte; + sptes[leaf] = spte; if (!is_shadow_present_pte(spte)) break; - } walk_shadow_page_lockless_end(vcpu); @@ -3520,14 +3519,12 @@ static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes) return leaf; } -/* return true if reserved bit is detected on spte. */ +/* return true if reserved bit(s) are detected on a valid, non-MMIO SPTE. 
*/ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep) { - u64 sptes[PT64_ROOT_MAX_LEVEL]; + u64 sptes[PT64_ROOT_MAX_LEVEL + 1]; struct rsvd_bits_validate *rsvd_check; - int root = vcpu->arch.mmu->shadow_root_level; - int leaf; - int level; + int root, leaf, level; bool reserved = false; if (!VALID_PAGE(vcpu->arch.mmu->root_hpa)) { @@ -3536,35 +3533,45 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep) } if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa)) - leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes); + leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes, &root); else - leaf = get_walk(vcpu, addr, sptes); + leaf = get_walk(vcpu, addr, sptes, &root); + + if (unlikely(leaf < 0)) { + *sptep = 0ull; + return reserved; + } + + *sptep = sptes[leaf]; + + /* + * Skip reserved bits checks on the terminal leaf if it's not a valid + * SPTE. Note, this also (intentionally) skips MMIO SPTEs, which, by + * design, always have reserved bits set. The purpose of the checks is + * to detect reserved bits on non-MMIO SPTEs. i.e. buggy SPTEs. + */ + if (!is_shadow_present_pte(sptes[leaf])) + leaf++; rsvd_check = &vcpu->arch.mmu->shadow_zero_check; - for (level = root; level >= leaf; level--) { - if (!is_shadow_present_pte(sptes[level - 1])) - break; + for (level = root; level >= leaf; level--) /* * Use a bitwise-OR instead of a logical-OR to aggregate the * reserved bit and EPT's invalid memtype/XWR checks to avoid * adding a Jcc in the loop. */ - reserved |= __is_bad_mt_xwr(rsvd_check, sptes[level - 1]) | - __is_rsvd_bits_set(rsvd_check, sptes[level - 1], - level); - } + reserved |= __is_bad_mt_xwr(rsvd_check, sptes[level]) | + __is_rsvd_bits_set(rsvd_check, sptes[level], level); if (reserved) { pr_err("%s: detect reserved bits on spte, addr 0x%llx, dump hierarchy:\n", __func__, addr); for (level = root; level >= leaf; level--) pr_err("------ spte 0x%llx level %d.\n", - sptes[level - 1], level); + sptes[level], level); } - *sptep = sptes[leaf - 1]; - return reserved; } diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 4bd2f1dc0172..2ef8615f9dba 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -44,7 +44,48 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots)); } -#define for_each_tdp_mmu_root(_kvm, _root) \ +static void tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root) +{ + if (kvm_mmu_put_root(kvm, root)) + kvm_tdp_mmu_free_root(kvm, root); +} + +static inline bool tdp_mmu_next_root_valid(struct kvm *kvm, + struct kvm_mmu_page *root) +{ + lockdep_assert_held(&kvm->mmu_lock); + + if (list_entry_is_head(root, &kvm->arch.tdp_mmu_roots, link)) + return false; + + kvm_mmu_get_root(kvm, root); + return true; + +} + +static inline struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, + struct kvm_mmu_page *root) +{ + struct kvm_mmu_page *next_root; + + next_root = list_next_entry(root, link); + tdp_mmu_put_root(kvm, root); + return next_root; +} + +/* + * Note: this iterator gets and puts references to the roots it iterates over. + * This makes it safe to release the MMU lock and yield within the loop, but + * if exiting the loop early, the caller must drop the reference to the most + * recent root. (Unless keeping a live reference is desirable.) 
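+ * for_each_tdp_mmu_root_yield_safe() below takes and drops the reference automatically, so callers no longer open-code the kvm_mmu_get_root()/kvm_mmu_put_root() pair around the loop body.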
+ */ +#define for_each_tdp_mmu_root_yield_safe(_kvm, _root) \ + for (_root = list_first_entry(&_kvm->arch.tdp_mmu_roots, \ + typeof(*_root), link); \ + tdp_mmu_next_root_valid(_kvm, _root); \ + _root = tdp_mmu_next_root(_kvm, _root)) + +#define for_each_tdp_mmu_root(_kvm, _root) \ list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link) bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa) @@ -447,18 +488,9 @@ bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end) struct kvm_mmu_page *root; bool flush = false; - for_each_tdp_mmu_root(kvm, root) { - /* - * Take a reference on the root so that it cannot be freed if - * this thread releases the MMU lock and yields in this loop. - */ - kvm_mmu_get_root(kvm, root); - + for_each_tdp_mmu_root_yield_safe(kvm, root) flush |= zap_gfn_range(kvm, root, start, end, true); - kvm_mmu_put_root(kvm, root); - } - return flush; } @@ -619,13 +651,7 @@ static int kvm_tdp_mmu_handle_hva_range(struct kvm *kvm, unsigned long start, int ret = 0; int as_id; - for_each_tdp_mmu_root(kvm, root) { - /* - * Take a reference on the root so that it cannot be freed if - * this thread releases the MMU lock and yields in this loop. - */ - kvm_mmu_get_root(kvm, root); - + for_each_tdp_mmu_root_yield_safe(kvm, root) { as_id = kvm_mmu_page_as_id(root); slots = __kvm_memslots(kvm, as_id); kvm_for_each_memslot(memslot, slots) { @@ -647,8 +673,6 @@ static int kvm_tdp_mmu_handle_hva_range(struct kvm *kvm, unsigned long start, ret |= handler(kvm, memslot, root, gfn_start, gfn_end, data); } - - kvm_mmu_put_root(kvm, root); } return ret; @@ -838,21 +862,13 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm, struct kvm_memory_slot *slot, int root_as_id; bool spte_set = false; - for_each_tdp_mmu_root(kvm, root) { + for_each_tdp_mmu_root_yield_safe(kvm, root) { root_as_id = kvm_mmu_page_as_id(root); if (root_as_id != slot->as_id) continue; - /* - * Take a reference on the root so that it cannot be freed if - * this thread releases the MMU lock and yields in this loop. - */ - kvm_mmu_get_root(kvm, root); - spte_set |= wrprot_gfn_range(kvm, root, slot->base_gfn, slot->base_gfn + slot->npages, min_level); - - kvm_mmu_put_root(kvm, root); } return spte_set; @@ -906,21 +922,13 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm, struct kvm_memory_slot *slot) int root_as_id; bool spte_set = false; - for_each_tdp_mmu_root(kvm, root) { + for_each_tdp_mmu_root_yield_safe(kvm, root) { root_as_id = kvm_mmu_page_as_id(root); if (root_as_id != slot->as_id) continue; - /* - * Take a reference on the root so that it cannot be freed if - * this thread releases the MMU lock and yields in this loop. - */ - kvm_mmu_get_root(kvm, root); - spte_set |= clear_dirty_gfn_range(kvm, root, slot->base_gfn, slot->base_gfn + slot->npages); - - kvm_mmu_put_root(kvm, root); } return spte_set; @@ -1029,21 +1037,13 @@ bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot) int root_as_id; bool spte_set = false; - for_each_tdp_mmu_root(kvm, root) { + for_each_tdp_mmu_root_yield_safe(kvm, root) { root_as_id = kvm_mmu_page_as_id(root); if (root_as_id != slot->as_id) continue; - /* - * Take a reference on the root so that it cannot be freed if - * this thread releases the MMU lock and yields in this loop. 
- */ - kvm_mmu_get_root(kvm, root); - spte_set |= set_dirty_gfn_range(kvm, root, slot->base_gfn, slot->base_gfn + slot->npages); - - kvm_mmu_put_root(kvm, root); } return spte_set; } @@ -1089,21 +1089,13 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, struct kvm_mmu_page *root; int root_as_id; - for_each_tdp_mmu_root(kvm, root) { + for_each_tdp_mmu_root_yield_safe(kvm, root) { root_as_id = kvm_mmu_page_as_id(root); if (root_as_id != slot->as_id) continue; - /* - * Take a reference on the root so that it cannot be freed if - * this thread releases the MMU lock and yields in this loop. - */ - kvm_mmu_get_root(kvm, root); - zap_collapsible_spte_range(kvm, root, slot->base_gfn, slot->base_gfn + slot->npages); - - kvm_mmu_put_root(kvm, root); } } @@ -1160,16 +1152,19 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm, * Return the level of the lowest level SPTE added to sptes. * That SPTE may be non-present. */ -int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes) +int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, + int *root_level) { struct tdp_iter iter; struct kvm_mmu *mmu = vcpu->arch.mmu; - int leaf = vcpu->arch.mmu->shadow_root_level; gfn_t gfn = addr >> PAGE_SHIFT; + int leaf = -1; + + *root_level = vcpu->arch.mmu->shadow_root_level; tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) { leaf = iter.level; - sptes[leaf - 1] = iter.old_spte; + sptes[leaf] = iter.old_spte; } return leaf; diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 556e065503f6..cbbdbadd1526 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -44,5 +44,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn); -int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes); +int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, + int *root_level); + #endif /* __KVM_X86_MMU_TDP_MMU_H */ diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index b0b667456b2e..cb4c6ee10029 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -199,6 +199,7 @@ static bool nested_svm_vmrun_msrpm(struct vcpu_svm *svm) static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); + if (!nested_svm_vmrun_msrpm(svm)) { vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR; vcpu->run->internal.suberror = @@ -595,6 +596,8 @@ int nested_svm_vmexit(struct vcpu_svm *svm) svm->nested.vmcb12_gpa = 0; WARN_ON_ONCE(svm->nested.nested_run_pending); + kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, &svm->vcpu); + /* in case we halted in L2 */ svm->vcpu.arch.mp_state = KVM_MP_STATE_RUNNABLE; @@ -754,6 +757,7 @@ void svm_leave_nested(struct vcpu_svm *svm) leave_guest_mode(&svm->vcpu); copy_vmcb_control_area(&vmcb->control, &hsave->control); nested_svm_uninit_mmu_context(&svm->vcpu); + vmcb_mark_all_dirty(svm->vmcb); } kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, &svm->vcpu); @@ -1194,6 +1198,10 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu, * in the registers, the save area of the nested state instead * contains saved L1 state. 
*/ + + svm->nested.nested_run_pending = + !!(kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING); + copy_vmcb_control_area(&hsave->control, &svm->vmcb->control); hsave->save = *save; diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 9858d5ae9ddd..c8ffdbc81709 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -1563,6 +1563,7 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm) goto vmgexit_err; break; case SVM_VMGEXIT_NMI_COMPLETE: + case SVM_VMGEXIT_AP_HLT_LOOP: case SVM_VMGEXIT_AP_JUMP_TABLE: case SVM_VMGEXIT_UNSUPPORTED_EVENT: break; @@ -1888,6 +1889,9 @@ int sev_handle_vmgexit(struct vcpu_svm *svm) case SVM_VMGEXIT_NMI_COMPLETE: ret = svm_invoke_exit_handler(svm, SVM_EXIT_IRET); break; + case SVM_VMGEXIT_AP_HLT_LOOP: + ret = kvm_emulate_ap_reset_hold(&svm->vcpu); + break; case SVM_VMGEXIT_AP_JUMP_TABLE: { struct kvm_sev_info *sev = &to_kvm_svm(svm->vcpu.kvm)->sev_info; @@ -2001,7 +2005,7 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu) * of which one step is to perform a VMLOAD. Since hardware does not * perform a VMSAVE on VMRUN, the host savearea must be updated. */ - asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory"); + asm volatile(__ex("vmsave %0") : : "a" (__sme_page_pa(sd->save_area)) : "memory"); /* * Certain MSRs are restored on VMEXIT, only save ones that aren't @@ -2040,3 +2044,21 @@ void sev_es_vcpu_put(struct vcpu_svm *svm) wrmsrl(host_save_user_msrs[i].index, svm->host_user_msrs[i]); } } + +void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector) +{ + struct vcpu_svm *svm = to_svm(vcpu); + + /* First SIPI: Use the values as initially set by the VMM */ + if (!svm->received_first_sipi) { + svm->received_first_sipi = true; + return; + } + + /* + * Subsequent SIPI: Return from an AP Reset Hold VMGEXIT, where + * the guest will set the CS and RIP. Set SW_EXIT_INFO_2 to a + * non-zero value. 
+ */ + ghcb_set_sw_exit_info_2(svm->ghcb, 1); +} diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index cce0143a6f80..7ef171790d02 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -3677,8 +3677,6 @@ static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu) return EXIT_FASTPATH_NONE; } -void __svm_vcpu_run(unsigned long vmcb_pa, unsigned long *regs); - static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, struct vcpu_svm *svm) { @@ -4384,6 +4382,14 @@ static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu) (vmcb_is_intercept(&svm->vmcb->control, INTERCEPT_INIT)); } +static void svm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector) +{ + if (!sev_es_guest(vcpu->kvm)) + return kvm_vcpu_deliver_sipi_vector(vcpu, vector); + + sev_vcpu_deliver_sipi_vector(vcpu, vector); +} + static void svm_vm_destroy(struct kvm *kvm) { avic_vm_destroy(kvm); @@ -4526,6 +4532,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = { .msr_filter_changed = svm_msr_filter_changed, .complete_emulated_msr = svm_complete_emulated_msr, + + .vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector, }; static struct kvm_x86_init_ops svm_init_ops __initdata = { diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 5431e6335e2e..0fe874ae5498 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -185,6 +185,7 @@ struct vcpu_svm { struct vmcb_save_area *vmsa; struct ghcb *ghcb; struct kvm_host_map ghcb_map; + bool received_first_sipi; /* SEV-ES scratch area support */ void *ghcb_sa; @@ -591,6 +592,7 @@ void sev_es_init_vmcb(struct vcpu_svm *svm); void sev_es_create_vcpu(struct vcpu_svm *svm); void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu); void sev_es_vcpu_put(struct vcpu_svm *svm); +void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector); /* vmenter.S */ diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index e2f26564a12d..0fbb46990dfc 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -4442,6 +4442,8 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason, /* trying to cancel vmlaunch/vmresume is a bug */ WARN_ON_ONCE(vmx->nested.nested_run_pending); + kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu); + /* Service the TLB flush request for L2 before switching to L1. 
*/ if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu)) kvm_vcpu_flush_tlb_current(vcpu); diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 75c9c6a0a3a4..2af05d3b0590 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -7707,6 +7707,8 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = { .msr_filter_changed = vmx_msr_filter_changed, .complete_emulated_msr = kvm_complete_insn_gp, .cpu_dirty_log_size = vmx_cpu_dirty_log_size, + + .vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector, }; static __init int hardware_setup(void) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 3f7c1fc7a3ce..9a8969a6dd06 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -7976,17 +7976,22 @@ void kvm_arch_exit(void) kmem_cache_destroy(x86_fpu_cache); } -int kvm_vcpu_halt(struct kvm_vcpu *vcpu) +static int __kvm_vcpu_halt(struct kvm_vcpu *vcpu, int state, int reason) { ++vcpu->stat.halt_exits; if (lapic_in_kernel(vcpu)) { - vcpu->arch.mp_state = KVM_MP_STATE_HALTED; + vcpu->arch.mp_state = state; return 1; } else { - vcpu->run->exit_reason = KVM_EXIT_HLT; + vcpu->run->exit_reason = reason; return 0; } } + +int kvm_vcpu_halt(struct kvm_vcpu *vcpu) +{ + return __kvm_vcpu_halt(vcpu, KVM_MP_STATE_HALTED, KVM_EXIT_HLT); +} EXPORT_SYMBOL_GPL(kvm_vcpu_halt); int kvm_emulate_halt(struct kvm_vcpu *vcpu) @@ -8000,6 +8005,14 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu) } EXPORT_SYMBOL_GPL(kvm_emulate_halt); +int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu) +{ + int ret = kvm_skip_emulated_instruction(vcpu); + + return __kvm_vcpu_halt(vcpu, KVM_MP_STATE_AP_RESET_HOLD, KVM_EXIT_AP_RESET_HOLD) && ret; +} +EXPORT_SYMBOL_GPL(kvm_emulate_ap_reset_hold); + #ifdef CONFIG_X86_64 static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr, unsigned long clock_type) @@ -8789,7 +8802,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) if (kvm_request_pending(vcpu)) { if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) { - if (unlikely(!kvm_x86_ops.nested_ops->get_nested_state_pages(vcpu))) { + if (WARN_ON_ONCE(!is_guest_mode(vcpu))) + ; + else if (unlikely(!kvm_x86_ops.nested_ops->get_nested_state_pages(vcpu))) { r = 0; goto out; } @@ -9094,6 +9109,7 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu) kvm_apic_accept_events(vcpu); switch(vcpu->arch.mp_state) { case KVM_MP_STATE_HALTED: + case KVM_MP_STATE_AP_RESET_HOLD: vcpu->arch.pv.pv_unhalted = false; vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE; @@ -9520,8 +9536,9 @@ int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu, kvm_load_guest_fpu(vcpu); kvm_apic_accept_events(vcpu); - if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED && - vcpu->arch.pv.pv_unhalted) + if ((vcpu->arch.mp_state == KVM_MP_STATE_HALTED || + vcpu->arch.mp_state == KVM_MP_STATE_AP_RESET_HOLD) && + vcpu->arch.pv.pv_unhalted) mp_state->mp_state = KVM_MP_STATE_RUNNABLE; else mp_state->mp_state = vcpu->arch.mp_state; @@ -10152,6 +10169,7 @@ void kvm_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector) kvm_set_segment(vcpu, &cs, VCPU_SREG_CS); kvm_rip_write(vcpu, 0); } +EXPORT_SYMBOL_GPL(kvm_vcpu_deliver_sipi_vector); int kvm_arch_hardware_enable(void) { diff --git a/arch/x86/lib/mmx_32.c b/arch/x86/lib/mmx_32.c index 4321fa02e18d..419365c48b2a 100644 --- a/arch/x86/lib/mmx_32.c +++ b/arch/x86/lib/mmx_32.c @@ -26,6 +26,16 @@ #include <asm/fpu/api.h> #include <asm/asm.h> +/* + * Use KFPU_387. 
MMX instructions are not affected by MXCSR, + * but both AMD and Intel documentation states that even integer MMX + * operations will result in #MF if an exception is pending in FCW. + * + * EMMS is not needed afterwards because, after calling kernel_fpu_end(), + * any subsequent user of the 387 stack will reinitialize it using + * KFPU_387. + */ + void *_mmx_memcpy(void *to, const void *from, size_t len) { void *p; @@ -37,7 +47,7 @@ void *_mmx_memcpy(void *to, const void *from, size_t len) p = to; i = len >> 6; /* len/64 */ - kernel_fpu_begin(); + kernel_fpu_begin_mask(KFPU_387); __asm__ __volatile__ ( "1: prefetch (%0)\n" /* This set is 28 bytes */ @@ -127,7 +137,7 @@ static void fast_clear_page(void *page) { int i; - kernel_fpu_begin(); + kernel_fpu_begin_mask(KFPU_387); __asm__ __volatile__ ( " pxor %%mm0, %%mm0\n" : : @@ -160,7 +170,7 @@ static void fast_copy_page(void *to, void *from) { int i; - kernel_fpu_begin(); + kernel_fpu_begin_mask(KFPU_387); /* * maybe the prefetch stuff can go before the expensive fnsave... @@ -247,7 +257,7 @@ static void fast_clear_page(void *page) { int i; - kernel_fpu_begin(); + kernel_fpu_begin_mask(KFPU_387); __asm__ __volatile__ ( " pxor %%mm0, %%mm0\n" : : @@ -282,7 +292,7 @@ static void fast_copy_page(void *to, void *from) { int i; - kernel_fpu_begin(); + kernel_fpu_begin_mask(KFPU_387); __asm__ __volatile__ ( "1: prefetch (%0)\n" diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index dfd82f51ba66..f6a9e2e36642 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -829,6 +829,8 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr) } free_page((unsigned long)pmd_sv); + + pgtable_pmd_page_dtor(virt_to_page(pmd)); free_page((unsigned long)pmd); return 1; diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c index 9e87ab010c82..e68ea5f4ad1c 100644 --- a/arch/x86/xen/enlighten_hvm.c +++ b/arch/x86/xen/enlighten_hvm.c @@ -164,10 +164,10 @@ static int xen_cpu_up_prepare_hvm(unsigned int cpu) else per_cpu(xen_vcpu_id, cpu) = cpu; rc = xen_vcpu_setup(cpu); - if (rc) + if (rc || !xen_have_vector_callback) return rc; - if (xen_have_vector_callback && xen_feature(XENFEAT_hvm_safe_pvclock)) + if (xen_feature(XENFEAT_hvm_safe_pvclock)) xen_setup_timer(cpu); rc = xen_smp_intr_init(cpu); @@ -188,6 +188,8 @@ static int xen_cpu_dead_hvm(unsigned int cpu) return 0; } +static bool no_vector_callback __initdata; + static void __init xen_hvm_guest_init(void) { if (xen_pv_domain()) @@ -207,7 +209,7 @@ static void __init xen_hvm_guest_init(void) xen_panic_handler_init(); - if (xen_feature(XENFEAT_hvm_callback_vector)) + if (!no_vector_callback && xen_feature(XENFEAT_hvm_callback_vector)) xen_have_vector_callback = 1; xen_hvm_smp_init(); @@ -233,6 +235,13 @@ static __init int xen_parse_nopv(char *arg) } early_param("xen_nopv", xen_parse_nopv); +static __init int xen_parse_no_vector_callback(char *arg) +{ + no_vector_callback = true; + return 0; +} +early_param("xen_no_vector_callback", xen_parse_no_vector_callback); + bool __init xen_hvm_need_lapic(void) { if (xen_pv_domain()) diff --git a/arch/x86/xen/smp_hvm.c b/arch/x86/xen/smp_hvm.c index f5e7db4f82ab..6ff3c887e0b9 100644 --- a/arch/x86/xen/smp_hvm.c +++ b/arch/x86/xen/smp_hvm.c @@ -33,9 +33,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus) int cpu; native_smp_prepare_cpus(max_cpus); - WARN_ON(xen_smp_intr_init(0)); - xen_init_lock_cpu(0); + if (xen_have_vector_callback) { + WARN_ON(xen_smp_intr_init(0)); + xen_init_lock_cpu(0); + } 
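+ /* The interrupt and spinlock setup above relies on event channels, which are only delivered through the callback vector; it is skipped without one. */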
for_each_possible_cpu(cpu) { if (cpu == 0) @@ -50,9 +52,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus) static void xen_hvm_cpu_die(unsigned int cpu) { if (common_cpu_die(cpu) == 0) { - xen_smp_intr_free(cpu); - xen_uninit_lock_cpu(cpu); - xen_teardown_timer(cpu); + if (xen_have_vector_callback) { + xen_smp_intr_free(cpu); + xen_uninit_lock_cpu(cpu); + xen_teardown_timer(cpu); + } } } #else @@ -64,14 +68,19 @@ static void xen_hvm_cpu_die(unsigned int cpu) void __init xen_hvm_smp_init(void) { - if (!xen_have_vector_callback) + smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu; + smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus; + smp_ops.smp_cpus_done = xen_smp_cpus_done; + smp_ops.cpu_die = xen_hvm_cpu_die; + + if (!xen_have_vector_callback) { +#ifdef CONFIG_PARAVIRT_SPINLOCKS + nopvspin = true; +#endif return; + } - smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus; smp_ops.smp_send_reschedule = xen_smp_send_reschedule; - smp_ops.cpu_die = xen_hvm_cpu_die; smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi; smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi; - smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu; - smp_ops.smp_cpus_done = xen_smp_cpus_done; } diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 9e81d1052091..9e4eb0fc1c16 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -6332,13 +6332,13 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd, * limit 'something'. */ /* no more than 50% of tags for async I/O */ - bfqd->word_depths[0][0] = max((1U << bt->sb.shift) >> 1, 1U); + bfqd->word_depths[0][0] = max(bt->sb.depth >> 1, 1U); /* * no more than 75% of tags for sync writes (25% extra tags * w.r.t. async I/O, to prevent async I/O from starving sync * writes) */ - bfqd->word_depths[0][1] = max(((1U << bt->sb.shift) * 3) >> 2, 1U); + bfqd->word_depths[0][1] = max((bt->sb.depth * 3) >> 2, 1U); /* * In-word depths in case some bfq_queue is being weight- @@ -6348,9 +6348,9 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd, * shortage. 
*/ /* no more than ~18% of tags for async I/O */ - bfqd->word_depths[1][0] = max(((1U << bt->sb.shift) * 3) >> 4, 1U); + bfqd->word_depths[1][0] = max((bt->sb.depth * 3) >> 4, 1U); /* no more than ~37% of tags for sync writes (~20% extra tags) */ - bfqd->word_depths[1][1] = max(((1U << bt->sb.shift) * 6) >> 4, 1U); + bfqd->word_depths[1][1] = max((bt->sb.depth * 6) >> 4, 1U); for (i = 0; i < 2; i++) for (j = 0; j < 2; j++) diff --git a/block/blk-iocost.c b/block/blk-iocost.c index ac6078a34939..98d656bdb42b 100644 --- a/block/blk-iocost.c +++ b/block/blk-iocost.c @@ -2551,8 +2551,8 @@ static void ioc_rqos_throttle(struct rq_qos *rqos, struct bio *bio) bool use_debt, ioc_locked; unsigned long flags; - /* bypass IOs if disabled or for root cgroup */ - if (!ioc->enabled || !iocg->level) + /* bypass IOs if disabled, still initializing, or for root cgroup */ + if (!ioc->enabled || !iocg || !iocg->level) return; /* calculate the absolute vtime cost */ @@ -2679,14 +2679,14 @@ static void ioc_rqos_merge(struct rq_qos *rqos, struct request *rq, struct bio *bio) { struct ioc_gq *iocg = blkg_to_iocg(bio->bi_blkg); - struct ioc *ioc = iocg->ioc; + struct ioc *ioc = rqos_to_ioc(rqos); sector_t bio_end = bio_end_sector(bio); struct ioc_now now; u64 vtime, abs_cost, cost; unsigned long flags; - /* bypass if disabled or for root cgroup */ - if (!ioc->enabled || !iocg->level) + /* bypass if disabled, still initializing, or for root cgroup */ + if (!ioc->enabled || !iocg || !iocg->level) return; abs_cost = calc_vtime_cost(bio, iocg, true); @@ -2863,6 +2863,12 @@ static int blk_iocost_init(struct request_queue *q) ioc_refresh_params(ioc, true); spin_unlock_irq(&ioc->lock); + /* + * rqos must be added before activation to allow iocg_pd_init() to + * lookup the ioc from q. This means that the rqos methods may get + * called before policy activation completion, can't assume that the + * target bio has an iocg associated and need to test for NULL iocg. + */ rq_qos_add(q, rqos); ret = blkcg_activate_policy(q, &blkcg_policy_iocost); if (ret) { diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c index 4d6e83e5b442..4de03da9a624 100644 --- a/block/blk-mq-debugfs.c +++ b/block/blk-mq-debugfs.c @@ -246,6 +246,7 @@ static const char *const hctx_flag_name[] = { HCTX_FLAG_NAME(BLOCKING), HCTX_FLAG_NAME(NO_SCHED), HCTX_FLAG_NAME(STACKING), + HCTX_FLAG_NAME(TAG_HCTX_SHARED), }; #undef HCTX_FLAG_NAME diff --git a/block/genhd.c b/block/genhd.c index 73faec438e49..419548e92d82 100644 --- a/block/genhd.c +++ b/block/genhd.c @@ -246,15 +246,18 @@ struct block_device *disk_part_iter_next(struct disk_part_iter *piter) part = rcu_dereference(ptbl->part[piter->idx]); if (!part) continue; + piter->part = bdgrab(part); + if (!piter->part) + continue; if (!bdev_nr_sectors(part) && !(piter->flags & DISK_PITER_INCL_EMPTY) && !(piter->flags & DISK_PITER_INCL_EMPTY_PART0 && - piter->idx == 0)) + piter->idx == 0)) { + bdput(piter->part); + piter->part = NULL; continue; + } - piter->part = bdgrab(part); - if (!piter->part) - continue; piter->idx += inc; break; } diff --git a/crypto/asymmetric_keys/asym_tpm.c b/crypto/asymmetric_keys/asym_tpm.c index 511932aa94a6..0959613560b9 100644 --- a/crypto/asymmetric_keys/asym_tpm.c +++ b/crypto/asymmetric_keys/asym_tpm.c @@ -354,7 +354,7 @@ static uint32_t derive_pub_key(const void *pub_key, uint32_t len, uint8_t *buf) memcpy(cur, e, sizeof(e)); cur += sizeof(e); /* Zero parameters to satisfy set_pub_key ABI. 
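* memzero_explicit() is used below so the compiler cannot optimize the zeroing away.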
*/ - memset(cur, 0, SETKEY_PARAMS_SIZE); + memzero_explicit(cur, SETKEY_PARAMS_SIZE); return cur - buf; } diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c index 8892908ad58c..788a4ba1e2e7 100644 --- a/crypto/asymmetric_keys/public_key.c +++ b/crypto/asymmetric_keys/public_key.c @@ -356,7 +356,8 @@ int public_key_verify_signature(const struct public_key *pkey, if (ret) goto error_free_key; - if (strcmp(sig->pkey_algo, "sm2") == 0 && sig->data_size) { + if (sig->pkey_algo && strcmp(sig->pkey_algo, "sm2") == 0 && + sig->data_size) { ret = cert_sig_digest_update(sig, tfm); if (ret) goto error_free_key; diff --git a/crypto/ecdh.c b/crypto/ecdh.c index d56b8603dec9..96f80c8f8e30 100644 --- a/crypto/ecdh.c +++ b/crypto/ecdh.c @@ -39,7 +39,8 @@ static int ecdh_set_secret(struct crypto_kpp *tfm, const void *buf, struct ecdh params; unsigned int ndigits; - if (crypto_ecdh_decode_key(buf, len, ¶ms) < 0) + if (crypto_ecdh_decode_key(buf, len, ¶ms) < 0 || + params.key_size > sizeof(ctx->private_key)) return -EINVAL; ndigits = ecdh_supported_curve(params.curve_id); diff --git a/crypto/xor.c b/crypto/xor.c index eacbf4f93990..8f899f898ec9 100644 --- a/crypto/xor.c +++ b/crypto/xor.c @@ -107,6 +107,8 @@ do_xor_speed(struct xor_block_template *tmpl, void *b1, void *b2) preempt_enable(); // bytes/ns == GB/s, multiply by 1000 to get MB/s [not MiB/s] + if (!min) + min = 1; speed = (1000 * REPS * BENCH_SIZE) / (unsigned int)ktime_to_ns(min); tmpl->speed = speed; diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig index edf1558c1105..ebcf534514be 100644 --- a/drivers/acpi/Kconfig +++ b/drivers/acpi/Kconfig @@ -395,9 +395,6 @@ config ACPI_CONTAINER This helps support hotplug of nodes, CPUs, and memory. - To compile this driver as a module, choose M here: - the module will be called container. - config ACPI_HOTPLUG_MEMORY bool "Memory Hotplug" depends on MEMORY_HOTPLUG @@ -411,9 +408,6 @@ config ACPI_HOTPLUG_MEMORY removing memory devices at runtime, you need not enable this driver. - To compile this driver as a module, choose M here: - the module will be called acpi_memhotplug. 
- config ACPI_HOTPLUG_IOAPIC bool depends on PCI diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h index cb229e24c563..e6a5d997241c 100644 --- a/drivers/acpi/internal.h +++ b/drivers/acpi/internal.h @@ -97,7 +97,7 @@ void acpi_scan_table_handler(u32 event, void *table, void *context); extern struct list_head acpi_bus_id_list; struct acpi_device_bus_id { - char bus_id[15]; + const char *bus_id; unsigned int instance_no; struct list_head node; }; diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c index 80b668c80073..1db063b02f63 100644 --- a/drivers/acpi/scan.c +++ b/drivers/acpi/scan.c @@ -486,6 +486,7 @@ static void acpi_device_del(struct acpi_device *device) acpi_device_bus_id->instance_no--; else { list_del(&acpi_device_bus_id->node); + kfree_const(acpi_device_bus_id->bus_id); kfree(acpi_device_bus_id); } break; @@ -585,6 +586,8 @@ static int acpi_get_device_data(acpi_handle handle, struct acpi_device **device, if (!device) return -EINVAL; + *device = NULL; + status = acpi_get_data_full(handle, acpi_scan_drop_device, (void **)device, callback); if (ACPI_FAILURE(status) || !*device) { @@ -674,7 +677,14 @@ int acpi_device_add(struct acpi_device *device, } if (!found) { acpi_device_bus_id = new_bus_id; - strcpy(acpi_device_bus_id->bus_id, acpi_device_hid(device)); + acpi_device_bus_id->bus_id = + kstrdup_const(acpi_device_hid(device), GFP_KERNEL); + if (!acpi_device_bus_id->bus_id) { + pr_err(PREFIX "Memory allocation error for bus id\n"); + result = -ENOMEM; + goto err_free_new_bus_id; + } + acpi_device_bus_id->instance_no = 0; list_add_tail(&acpi_device_bus_id->node, &acpi_bus_id_list); } @@ -709,6 +719,11 @@ int acpi_device_add(struct acpi_device *device, if (device->parent) list_del(&device->node); list_del(&device->wakeup_list); + + err_free_new_bus_id: + if (!found) + kfree(new_bus_id); + mutex_unlock(&acpi_device_lock); err_detach: diff --git a/drivers/acpi/x86/s2idle.c b/drivers/acpi/x86/s2idle.c index 25fea34b544c..2b69536cdccb 100644 --- a/drivers/acpi/x86/s2idle.c +++ b/drivers/acpi/x86/s2idle.c @@ -105,18 +105,8 @@ static void lpi_device_get_constraints_amd(void) for (i = 0; i < out_obj->package.count; i++) { union acpi_object *package = &out_obj->package.elements[i]; - struct lpi_device_info_amd info = { }; - if (package->type == ACPI_TYPE_INTEGER) { - switch (i) { - case 0: - info.revision = package->integer.value; - break; - case 1: - info.count = package->integer.value; - break; - } - } else if (package->type == ACPI_TYPE_PACKAGE) { + if (package->type == ACPI_TYPE_PACKAGE) { lpi_constraints_table = kcalloc(package->package.count, sizeof(*lpi_constraints_table), GFP_KERNEL); @@ -135,12 +125,10 @@ static void lpi_device_get_constraints_amd(void) for (k = 0; k < info_obj->package.count; ++k) { union acpi_object *obj = &info_obj->package.elements[k]; - union acpi_object *obj_new; list = &lpi_constraints_table[lpi_constraints_table_size]; list->min_dstate = -1; - obj_new = &obj[k]; switch (k) { case 0: dev_info.enabled = obj->integer.value; diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c index 65a3886f68c9..5f0472c18bcb 100644 --- a/drivers/atm/idt77252.c +++ b/drivers/atm/idt77252.c @@ -3607,7 +3607,7 @@ static int idt77252_init_one(struct pci_dev *pcidev, if ((err = dma_set_mask_and_coherent(&pcidev->dev, DMA_BIT_MASK(32)))) { printk("idt77252: can't enable DMA for PCI device at %s\n", pci_name(pcidev)); - return err; + goto err_out_disable_pdev; } card = kzalloc(sizeof(struct idt77252_dev), GFP_KERNEL); diff --git a/drivers/base/core.c 
b/drivers/base/core.c index 25e08e5f40bd..6eb4c7a904c5 100644 --- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -208,6 +208,16 @@ int device_links_read_lock_held(void) #endif #endif /* !CONFIG_SRCU */ +static bool device_is_ancestor(struct device *dev, struct device *target) +{ + while (target->parent) { + target = target->parent; + if (dev == target) + return true; + } + return false; +} + /** * device_is_dependent - Check if one device depends on another one * @dev: Device to check dependencies for. @@ -221,7 +231,12 @@ int device_is_dependent(struct device *dev, void *target) struct device_link *link; int ret; - if (dev == target) + /* + * The "ancestors" check is needed to catch the case when the target + * device has not been completely initialized yet and it is still + * missing from the list of children of its parent device. + */ + if (dev == target || device_is_ancestor(dev, target)) return 1; ret = device_for_each_child(dev, target, device_is_dependent); @@ -456,7 +471,9 @@ static int devlink_add_symlinks(struct device *dev, struct device *con = link->consumer; char *buf; - len = max(strlen(dev_name(sup)), strlen(dev_name(con))); + len = max(strlen(dev_bus_name(sup)) + strlen(dev_name(sup)), + strlen(dev_bus_name(con)) + strlen(dev_name(con))); + len += strlen(":"); len += strlen("supplier:") + 1; buf = kzalloc(len, GFP_KERNEL); if (!buf) @@ -470,12 +487,12 @@ static int devlink_add_symlinks(struct device *dev, if (ret) goto err_con; - snprintf(buf, len, "consumer:%s", dev_name(con)); + snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con)); ret = sysfs_create_link(&sup->kobj, &link->link_dev.kobj, buf); if (ret) goto err_con_dev; - snprintf(buf, len, "supplier:%s", dev_name(sup)); + snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup)); ret = sysfs_create_link(&con->kobj, &link->link_dev.kobj, buf); if (ret) goto err_sup_dev; @@ -483,7 +500,7 @@ static int devlink_add_symlinks(struct device *dev, goto out; err_sup_dev: - snprintf(buf, len, "consumer:%s", dev_name(con)); + snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con)); sysfs_remove_link(&sup->kobj, buf); err_con_dev: sysfs_remove_link(&link->link_dev.kobj, "consumer"); @@ -506,7 +523,9 @@ static void devlink_remove_symlinks(struct device *dev, sysfs_remove_link(&link->link_dev.kobj, "consumer"); sysfs_remove_link(&link->link_dev.kobj, "supplier"); - len = max(strlen(dev_name(sup)), strlen(dev_name(con))); + len = max(strlen(dev_bus_name(sup)) + strlen(dev_name(sup)), + strlen(dev_bus_name(con)) + strlen(dev_name(con))); + len += strlen(":"); len += strlen("supplier:") + 1; buf = kzalloc(len, GFP_KERNEL); if (!buf) { @@ -514,9 +533,9 @@ static void devlink_remove_symlinks(struct device *dev, return; } - snprintf(buf, len, "supplier:%s", dev_name(sup)); + snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup)); sysfs_remove_link(&con->kobj, buf); - snprintf(buf, len, "consumer:%s", dev_name(con)); + snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con)); sysfs_remove_link(&sup->kobj, buf); kfree(buf); } @@ -737,8 +756,9 @@ struct device_link *device_link_add(struct device *consumer, link->link_dev.class = &devlink_class; device_set_pm_not_required(&link->link_dev); - dev_set_name(&link->link_dev, "%s--%s", - dev_name(supplier), dev_name(consumer)); + dev_set_name(&link->link_dev, "%s:%s--%s:%s", + dev_bus_name(supplier), dev_name(supplier), + dev_bus_name(consumer), dev_name(consumer)); if (device_register(&link->link_dev)) { 
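/* Registration failed: drop the references held on the two endpoint devices. */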
put_device(consumer); put_device(supplier); @@ -1808,9 +1828,7 @@ const char *dev_driver_string(const struct device *dev) * never change once they are set, so they don't need special care. */ drv = READ_ONCE(dev->driver); - return drv ? drv->name : - (dev->bus ? dev->bus->name : - (dev->class ? dev->class->name : "")); + return drv ? drv->name : dev_bus_name(dev); } EXPORT_SYMBOL(dev_driver_string); @@ -4414,6 +4432,12 @@ static inline bool fwnode_is_primary(struct fwnode_handle *fwnode) * * Set the device's firmware node pointer to @fwnode, but if a secondary * firmware node of the device is present, preserve it. + * + * Valid fwnode cases are: + * - primary --> secondary --> -ENODEV + * - primary --> NULL + * - secondary --> -ENODEV + * - NULL */ void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode) { @@ -4432,8 +4456,9 @@ void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode) } else { if (fwnode_is_primary(fn)) { dev->fwnode = fn->secondary; + /* Set fn->secondary = NULL, so fn remains the primary fwnode */ if (!(parent && fn == parent->fwnode)) - fn->secondary = ERR_PTR(-ENODEV); + fn->secondary = NULL; } else { dev->fwnode = NULL; } diff --git a/drivers/base/dd.c b/drivers/base/dd.c index 2f32f38a11ed..9179825ff646 100644 --- a/drivers/base/dd.c +++ b/drivers/base/dd.c @@ -371,13 +371,6 @@ static void driver_bound(struct device *dev) device_pm_check_callbacks(dev); /* - * Reorder successfully probed devices to the end of the device list. - * This ensures that suspend/resume order matches probe order, which - * is usually what drivers rely on. - */ - device_pm_move_to_tail(dev); - - /* * Make sure the device is no longer in one of the deferred lists and * kick off retrying all pending devices */ @@ -619,6 +612,8 @@ dev_groups_failed: else if (drv->remove) drv->remove(dev); probe_failed: + kfree(dev->dma_range_map); + dev->dma_range_map = NULL; if (dev->bus) blocking_notifier_call_chain(&dev->bus->p->bus_notifier, BUS_NOTIFY_DRIVER_NOT_BOUND, dev); diff --git a/drivers/base/platform.c b/drivers/base/platform.c index 95fd1549f87d..8456d8384ac8 100644 --- a/drivers/base/platform.c +++ b/drivers/base/platform.c @@ -366,6 +366,8 @@ int devm_platform_get_irqs_affinity(struct platform_device *dev, return -ERANGE; nvec = platform_irq_count(dev); + if (nvec < 0) + return nvec; if (nvec < minvec) return -ENOSPC; diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c index 8dfac7f3ed7a..ff2ee87987c7 100644 --- a/drivers/base/regmap/regmap-debugfs.c +++ b/drivers/base/regmap/regmap-debugfs.c @@ -582,8 +582,12 @@ void regmap_debugfs_init(struct regmap *map) devname = dev_name(map->dev); if (name) { - map->debugfs_name = kasprintf(GFP_KERNEL, "%s-%s", + if (!map->debugfs_name) { + map->debugfs_name = kasprintf(GFP_KERNEL, "%s-%s", devname, name); + if (!map->debugfs_name) + return; + } name = map->debugfs_name; } else { name = devname; @@ -591,9 +595,10 @@ void regmap_debugfs_init(struct regmap *map) if (!strcmp(name, "dummy")) { kfree(map->debugfs_name); - map->debugfs_name = kasprintf(GFP_KERNEL, "dummy%d", dummy_index); + if (!map->debugfs_name) + return; name = map->debugfs_name; dummy_index++; } diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig index 262326973ee0..583b671b1d2d 100644 --- a/drivers/block/Kconfig +++ b/drivers/block/Kconfig @@ -445,6 +445,7 @@ config BLK_DEV_RBD config BLK_DEV_RSXX tristate "IBM Flash Adapter 900GB Full Height PCIe Device Driver" depends on PCI + select CRC32 help Device driver for 
IBM's high speed PCIe SSD storage device: Flash Adapter 900GB Full Height. diff --git a/drivers/block/rnbd/Kconfig b/drivers/block/rnbd/Kconfig index 4b6d3d816d1f..2ff05a0d2646 100644 --- a/drivers/block/rnbd/Kconfig +++ b/drivers/block/rnbd/Kconfig @@ -7,6 +7,7 @@ config BLK_DEV_RNBD_CLIENT tristate "RDMA Network Block Device driver client" depends on INFINIBAND_RTRS_CLIENT select BLK_DEV_RNBD + select SG_POOL help RNBD client is a network block device driver using rdma transport. diff --git a/drivers/block/rnbd/README b/drivers/block/rnbd/README index 1773c0aa0bd4..080f58a5400a 100644 --- a/drivers/block/rnbd/README +++ b/drivers/block/rnbd/README @@ -90,3 +90,4 @@ Kleber Souza <[email protected]> Lutz Pogrell <[email protected]> Milind Dumbare <[email protected]> Roman Penyaev <[email protected]> +Swapnil Ingle <[email protected]> diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c index 96e3f9fe8241..45a470076652 100644 --- a/drivers/block/rnbd/rnbd-clt.c +++ b/drivers/block/rnbd/rnbd-clt.c @@ -375,12 +375,19 @@ static struct rnbd_iu *rnbd_get_iu(struct rnbd_clt_session *sess, init_waitqueue_head(&iu->comp.wait); iu->comp.errno = INT_MAX; + if (sg_alloc_table(&iu->sgt, 1, GFP_KERNEL)) { + rnbd_put_permit(sess, permit); + kfree(iu); + return NULL; + } + return iu; } static void rnbd_put_iu(struct rnbd_clt_session *sess, struct rnbd_iu *iu) { if (atomic_dec_and_test(&iu->refcount)) { + sg_free_table(&iu->sgt); rnbd_put_permit(sess, iu->permit); kfree(iu); } @@ -487,8 +494,6 @@ static int send_msg_close(struct rnbd_clt_dev *dev, u32 device_id, bool wait) iu->buf = NULL; iu->dev = dev; - sg_alloc_table(&iu->sgt, 1, GFP_KERNEL); - msg.hdr.type = cpu_to_le16(RNBD_MSG_CLOSE); msg.device_id = cpu_to_le32(device_id); @@ -502,7 +507,6 @@ static int send_msg_close(struct rnbd_clt_dev *dev, u32 device_id, bool wait) err = errno; } - sg_free_table(&iu->sgt); rnbd_put_iu(sess, iu); return err; } @@ -575,7 +579,6 @@ static int send_msg_open(struct rnbd_clt_dev *dev, bool wait) iu->buf = rsp; iu->dev = dev; - sg_alloc_table(&iu->sgt, 1, GFP_KERNEL); sg_init_one(iu->sgt.sgl, rsp, sizeof(*rsp)); msg.hdr.type = cpu_to_le16(RNBD_MSG_OPEN); @@ -594,7 +597,6 @@ static int send_msg_open(struct rnbd_clt_dev *dev, bool wait) err = errno; } - sg_free_table(&iu->sgt); rnbd_put_iu(sess, iu); return err; } @@ -622,8 +624,6 @@ static int send_msg_sess_info(struct rnbd_clt_session *sess, bool wait) iu->buf = rsp; iu->sess = sess; - - sg_alloc_table(&iu->sgt, 1, GFP_KERNEL); sg_init_one(iu->sgt.sgl, rsp, sizeof(*rsp)); msg.hdr.type = cpu_to_le16(RNBD_MSG_SESS_INFO); @@ -650,7 +650,6 @@ put_iu: } else { err = errno; } - sg_free_table(&iu->sgt); rnbd_put_iu(sess, iu); return err; } @@ -1698,7 +1697,8 @@ static void rnbd_destroy_sessions(void) */ list_for_each_entry_safe(sess, sn, &sess_list, list) { - WARN_ON(!rnbd_clt_get_sess(sess)); + if (!rnbd_clt_get_sess(sess)) + continue; close_rtrs(sess); list_for_each_entry_safe(dev, tn, &sess->devs_list, list) { /* diff --git a/drivers/block/rnbd/rnbd-srv.c b/drivers/block/rnbd/rnbd-srv.c index b8e44331e494..a6a68d44f517 100644 --- a/drivers/block/rnbd/rnbd-srv.c +++ b/drivers/block/rnbd/rnbd-srv.c @@ -338,10 +338,12 @@ static int rnbd_srv_link_ev(struct rtrs_srv *rtrs, void rnbd_srv_sess_dev_force_close(struct rnbd_srv_sess_dev *sess_dev) { - mutex_lock(&sess_dev->sess->lock); - rnbd_srv_destroy_dev_session_sysfs(sess_dev); - mutex_unlock(&sess_dev->sess->lock); + struct rnbd_srv_session *sess = sess_dev->sess; + sess_dev->keep_id = true; + 
mutex_lock(&sess->lock); + rnbd_srv_destroy_dev_session_sysfs(sess_dev); + mutex_unlock(&sess->lock); } static int process_msg_close(struct rtrs_srv *rtrs, diff --git a/drivers/clk/tegra/clk-tegra30.c b/drivers/clk/tegra/clk-tegra30.c index 37244a7e68c2..9cf249c344d9 100644 --- a/drivers/clk/tegra/clk-tegra30.c +++ b/drivers/clk/tegra/clk-tegra30.c @@ -1256,6 +1256,8 @@ static struct tegra_clk_init_table init_table[] __initdata = { { TEGRA30_CLK_I2S3_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 }, { TEGRA30_CLK_I2S4_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 }, { TEGRA30_CLK_VIMCLK_SYNC, TEGRA30_CLK_CLK_MAX, 24000000, 0 }, + { TEGRA30_CLK_HDA, TEGRA30_CLK_PLL_P, 102000000, 0 }, + { TEGRA30_CLK_HDA2CODEC_2X, TEGRA30_CLK_PLL_P, 48000000, 0 }, /* must be the last entry */ { TEGRA30_CLK_CLK_MAX, TEGRA30_CLK_CLK_MAX, 0, 0 }, }; diff --git a/drivers/counter/ti-eqep.c b/drivers/counter/ti-eqep.c index a60aee1a1a29..65df9ef5b5bc 100644 --- a/drivers/counter/ti-eqep.c +++ b/drivers/counter/ti-eqep.c @@ -235,36 +235,6 @@ static ssize_t ti_eqep_position_ceiling_write(struct counter_device *counter, return len; } -static ssize_t ti_eqep_position_floor_read(struct counter_device *counter, - struct counter_count *count, - void *ext_priv, char *buf) -{ - struct ti_eqep_cnt *priv = counter->priv; - u32 qposinit; - - regmap_read(priv->regmap32, QPOSINIT, &qposinit); - - return sprintf(buf, "%u\n", qposinit); -} - -static ssize_t ti_eqep_position_floor_write(struct counter_device *counter, - struct counter_count *count, - void *ext_priv, const char *buf, - size_t len) -{ - struct ti_eqep_cnt *priv = counter->priv; - int err; - u32 res; - - err = kstrtouint(buf, 0, &res); - if (err < 0) - return err; - - regmap_write(priv->regmap32, QPOSINIT, res); - - return len; -} - static ssize_t ti_eqep_position_enable_read(struct counter_device *counter, struct counter_count *count, void *ext_priv, char *buf) @@ -302,11 +272,6 @@ static struct counter_count_ext ti_eqep_position_ext[] = { .write = ti_eqep_position_ceiling_write, }, { - .name = "floor", - .read = ti_eqep_position_floor_read, - .write = ti_eqep_position_floor_write, - }, - { .name = "enable", .read = ti_eqep_position_enable_read, .write = ti_eqep_position_enable_write, diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c index 1a660466dd75..be05e038d956 100644 --- a/drivers/cpufreq/intel_pstate.c +++ b/drivers/cpufreq/intel_pstate.c @@ -76,11 +76,6 @@ static inline int ceiling_fp(int32_t x) return ret; } -static inline int32_t percent_fp(int percent) -{ - return div_fp(percent, 100); -} - static inline u64 mul_ext_fp(u64 x, u64 y) { return (x * y) >> EXT_FRAC_BITS; @@ -91,11 +86,6 @@ static inline u64 div_ext_fp(u64 x, u64 y) return div64_u64(x << EXT_FRAC_BITS, y); } -static inline int32_t percent_ext_fp(int percent) -{ - return div_ext_fp(percent, 100); -} - /** * struct sample - Store performance sample * @core_avg_perf: Ratio of APERF/MPERF which is the actual average @@ -2653,12 +2643,13 @@ static void intel_cpufreq_adjust_perf(unsigned int cpunum, unsigned long capacity) { struct cpudata *cpu = all_cpu_data[cpunum]; + u64 hwp_cap = READ_ONCE(cpu->hwp_cap_cached); int old_pstate = cpu->pstate.current_pstate; int cap_pstate, min_pstate, max_pstate, target_pstate; update_turbo_state(); - cap_pstate = global.turbo_disabled ? cpu->pstate.max_pstate : - cpu->pstate.turbo_pstate; + cap_pstate = global.turbo_disabled ? HWP_GUARANTEED_PERF(hwp_cap) : + HWP_HIGHEST_PERF(hwp_cap); /* Optimization: Avoid unnecessary divisions. 
*/ diff --git a/drivers/cpufreq/powernow-k8.c b/drivers/cpufreq/powernow-k8.c index 0acc9e241cd7..b9ccb6a3dad9 100644 --- a/drivers/cpufreq/powernow-k8.c +++ b/drivers/cpufreq/powernow-k8.c @@ -878,9 +878,9 @@ static int get_transition_latency(struct powernow_k8_data *data) /* Take a frequency, and issue the fid/vid transition command */ static int transition_frequency_fidvid(struct powernow_k8_data *data, - unsigned int index) + unsigned int index, + struct cpufreq_policy *policy) { - struct cpufreq_policy *policy; u32 fid = 0; u32 vid = 0; int res; @@ -912,9 +912,6 @@ static int transition_frequency_fidvid(struct powernow_k8_data *data, freqs.old = find_khz_freq_from_fid(data->currfid); freqs.new = find_khz_freq_from_fid(fid); - policy = cpufreq_cpu_get(smp_processor_id()); - cpufreq_cpu_put(policy); - cpufreq_freq_transition_begin(policy, &freqs); res = transition_fid_vid(data, fid, vid); cpufreq_freq_transition_end(policy, &freqs, res); @@ -969,7 +966,7 @@ static long powernowk8_target_fn(void *arg) powernow_k8_acpi_pst_values(data, newstate); - ret = transition_frequency_fidvid(data, newstate); + ret = transition_frequency_fidvid(data, newstate, pol); if (ret) { pr_err("transition frequency failed\n"); diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index bbd51703e738..e535f28a8028 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -366,6 +366,7 @@ if CRYPTO_DEV_OMAP config CRYPTO_DEV_OMAP_SHAM tristate "Support for OMAP MD5/SHA1/SHA2 hw accelerator" depends on ARCH_OMAP2PLUS + select CRYPTO_ENGINE select CRYPTO_SHA1 select CRYPTO_MD5 select CRYPTO_SHA256 diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 3948b88aa123..f264b70c383e 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -76,10 +76,6 @@ static void dma_buf_release(struct dentry *dentry) dmabuf->ops->release(dmabuf); - mutex_lock(&db_list.lock); - list_del(&dmabuf->list_node); - mutex_unlock(&db_list.lock); - if (dmabuf->resv == (struct dma_resv *)&dmabuf[1]) dma_resv_fini(dmabuf->resv); @@ -88,6 +84,22 @@ static void dma_buf_release(struct dentry *dentry) kfree(dmabuf); } +static int dma_buf_file_release(struct inode *inode, struct file *file) +{ + struct dma_buf *dmabuf; + + if (!is_dma_buf_file(file)) + return -EINVAL; + + dmabuf = file->private_data; + + mutex_lock(&db_list.lock); + list_del(&dmabuf->list_node); + mutex_unlock(&db_list.lock); + + return 0; +} + static const struct dentry_operations dma_buf_dentry_ops = { .d_dname = dmabuffs_dname, .d_release = dma_buf_release, @@ -413,6 +425,7 @@ static void dma_buf_show_fdinfo(struct seq_file *m, struct file *file) } static const struct file_operations dma_buf_fops = { + .release = dma_buf_file_release, .mmap = dma_buf_mmap_internal, .llseek = dma_buf_llseek, .poll = dma_buf_poll, diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c index 3c4e34301172..364fc2f3e499 100644 --- a/drivers/dma-buf/heaps/cma_heap.c +++ b/drivers/dma-buf/heaps/cma_heap.c @@ -251,6 +251,9 @@ static void cma_heap_dma_buf_release(struct dma_buf *dmabuf) buffer->vaddr = NULL; } + /* free page list */ + kfree(buffer->pages); + /* release memory */ cma_release(cma_heap->cma, buffer->cma_pages, buffer->pagecount); kfree(buffer); } diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c index b971505b8715..08d71dafa001 100644 --- a/drivers/dma/dw-edma/dw-edma-core.c +++ b/drivers/dma/dw-edma/dw-edma-core.c @@ -86,12 +86,12 @@ static struct dw_edma_chunk 
*dw_edma_alloc_chunk(struct dw_edma_desc *desc) if (desc->chunk) { /* Create and add new element into the linked list */ - desc->chunks_alloc++; - list_add_tail(&chunk->list, &desc->chunk->list); if (!dw_edma_alloc_burst(chunk)) { kfree(chunk); return NULL; } + desc->chunks_alloc++; + list_add_tail(&chunk->list, &desc->chunk->list); } else { /* List head */ chunk->burst = NULL; diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c index 266423a2cabc..4dbb03c545e4 100644 --- a/drivers/dma/idxd/sysfs.c +++ b/drivers/dma/idxd/sysfs.c @@ -434,7 +434,7 @@ int idxd_register_driver(void) return 0; drv_fail: - for (; i > 0; i--) + while (--i >= 0) driver_unregister(&idxd_drvs[i]->drv); return rc; } @@ -1840,7 +1840,7 @@ int idxd_register_bus_type(void) return 0; bus_err: - for (; i > 0; i--) + while (--i >= 0) bus_unregister(idxd_bus_types[i]); return rc; } diff --git a/drivers/dma/mediatek/mtk-hsdma.c b/drivers/dma/mediatek/mtk-hsdma.c index f133ae8dece1..6ad8afbb95f2 100644 --- a/drivers/dma/mediatek/mtk-hsdma.c +++ b/drivers/dma/mediatek/mtk-hsdma.c @@ -1007,6 +1007,7 @@ static int mtk_hsdma_probe(struct platform_device *pdev) return 0; err_free: + mtk_hsdma_hw_deinit(hsdma); of_dma_controller_free(pdev->dev.of_node); err_unregister: dma_async_device_unregister(dd); diff --git a/drivers/dma/milbeaut-xdmac.c b/drivers/dma/milbeaut-xdmac.c index 584c931e807a..d29d01e730aa 100644 --- a/drivers/dma/milbeaut-xdmac.c +++ b/drivers/dma/milbeaut-xdmac.c @@ -350,7 +350,7 @@ static int milbeaut_xdmac_probe(struct platform_device *pdev) ret = dma_async_device_register(ddev); if (ret) - return ret; + goto disable_xdmac; ret = of_dma_controller_register(dev->of_node, of_dma_simple_xlate, mdev); @@ -363,6 +363,8 @@ static int milbeaut_xdmac_probe(struct platform_device *pdev) unregister_dmac: dma_async_device_unregister(ddev); +disable_xdmac: + disable_xdmac(mdev); return ret; } diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c index d5773d474d8f..88579857ca1d 100644 --- a/drivers/dma/qcom/bam_dma.c +++ b/drivers/dma/qcom/bam_dma.c @@ -630,7 +630,7 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan, GFP_NOWAIT); if (!async_desc) - goto err_out; + return NULL; if (flags & DMA_PREP_FENCE) async_desc->flags |= DESC_FLAG_NWD; @@ -670,10 +670,6 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan, } return vchan_tx_prep(&bchan->vc, &async_desc->vd, flags); - -err_out: - kfree(async_desc); - return NULL; } /** diff --git a/drivers/dma/qcom/gpi.c b/drivers/dma/qcom/gpi.c index d2334f535de2..1a0bf6b0567a 100644 --- a/drivers/dma/qcom/gpi.c +++ b/drivers/dma/qcom/gpi.c @@ -1416,7 +1416,7 @@ static int gpi_alloc_ring(struct gpi_ring *ring, u32 elements, len = 1 << bit; ring->alloc_size = (len + (len - 1)); dev_dbg(gpii->gpi_dev->dev, - "#el:%u el_size:%u len:%u actual_len:%llu alloc_size:%lu\n", + "#el:%u el_size:%u len:%u actual_len:%llu alloc_size:%zu\n", elements, el_size, (elements * el_size), len, ring->alloc_size); @@ -1424,7 +1424,7 @@ static int gpi_alloc_ring(struct gpi_ring *ring, u32 elements, ring->alloc_size, &ring->dma_handle, GFP_KERNEL); if (!ring->pre_aligned) { - dev_err(gpii->gpi_dev->dev, "could not alloc size:%lu mem for ring\n", + dev_err(gpii->gpi_dev->dev, "could not alloc size:%zu mem for ring\n", ring->alloc_size); return -ENOMEM; } @@ -1444,8 +1444,8 @@ static int gpi_alloc_ring(struct gpi_ring *ring, u32 elements, smp_wmb(); dev_dbg(gpii->gpi_dev->dev, - "phy_pre:0x%0llx phy_alig:0x%0llx len:%u 
el_size:%u elements:%u\n", - ring->dma_handle, ring->phys_addr, ring->len, + "phy_pre:%pad phy_alig:%pa len:%u el_size:%u elements:%u\n", + &ring->dma_handle, &ring->phys_addr, ring->len, ring->el_size, ring->elements); return 0; @@ -1948,7 +1948,7 @@ static int gpi_ch_init(struct gchan *gchan) return ret; error_start_chan: - for (i = i - 1; i >= 0; i++) { + for (i = i - 1; i >= 0; i--) { gpi_stop_chan(&gpii->gchan[i]); gpi_send_cmd(gpii, gchan, GPI_CH_CMD_RESET); } diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c index e4637ec786d3..36ba8b43e78d 100644 --- a/drivers/dma/stm32-mdma.c +++ b/drivers/dma/stm32-mdma.c @@ -199,7 +199,7 @@ #define STM32_MDMA_MAX_CHANNELS 63 #define STM32_MDMA_MAX_REQUESTS 256 #define STM32_MDMA_MAX_BURST 128 -#define STM32_MDMA_VERY_HIGH_PRIORITY 0x11 +#define STM32_MDMA_VERY_HIGH_PRIORITY 0x3 enum stm32_mdma_trigger_mode { STM32_MDMA_BUFFER, diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c index 87157cbae1b8..298460438bb4 100644 --- a/drivers/dma/ti/k3-udma.c +++ b/drivers/dma/ti/k3-udma.c @@ -4698,9 +4698,9 @@ static int pktdma_setup_resources(struct udma_dev *ud) ud->tchan_tpl.levels = 1; } - ud->tchan_tpl.levels = ud->tchan_tpl.levels; - ud->tchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[0]; - ud->tchan_tpl.start_idx[1] = ud->tchan_tpl.start_idx[1]; + ud->rchan_tpl.levels = ud->tchan_tpl.levels; + ud->rchan_tpl.start_idx[0] = ud->tchan_tpl.start_idx[0]; + ud->rchan_tpl.start_idx[1] = ud->tchan_tpl.start_idx[1]; ud->tchan_map = devm_kmalloc_array(dev, BITS_TO_LONGS(ud->tchan_cnt), sizeof(unsigned long), GFP_KERNEL); diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c index 22faea653ea8..79777550a6ff 100644 --- a/drivers/dma/xilinx/xilinx_dma.c +++ b/drivers/dma/xilinx/xilinx_dma.c @@ -2781,7 +2781,7 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev, has_dre = false; if (!has_dre) - xdev->common.copy_align = fls(width - 1); + xdev->common.copy_align = (enum dmaengine_alignment)fls(width - 1); if (of_device_is_compatible(node, "xlnx,axi-vdma-mm2s-channel") || of_device_is_compatible(node, "xlnx,axi-dma-mm2s-channel") || @@ -2900,7 +2900,8 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev, static int xilinx_dma_child_probe(struct xilinx_dma_device *xdev, struct device_node *node) { - int ret, i, nr_channels = 1; + int ret, i; + u32 nr_channels = 1; ret = of_property_read_u32(node, "dma-channels", &nr_channels); if (xdev->dma_config->dmatype == XDMA_TYPE_AXIMCDMA && ret < 0) @@ -3112,7 +3113,11 @@ static int xilinx_dma_probe(struct platform_device *pdev) } /* Register the DMA engine with the core */ - dma_async_device_register(&xdev->common); + err = dma_async_device_register(&xdev->common); + if (err) { + dev_err(xdev->dev, "failed to register the dma device\n"); + goto error; + } err = of_dma_controller_register(node, of_dma_xilinx_xlate, xdev); diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig index c70f46e80a3b..dea65d85594f 100644 --- a/drivers/gpio/Kconfig +++ b/drivers/gpio/Kconfig @@ -521,7 +521,8 @@ config GPIO_SAMA5D2_PIOBU config GPIO_SIFIVE bool "SiFive GPIO support" - depends on OF_GPIO && IRQ_DOMAIN_HIERARCHY + depends on OF_GPIO + select IRQ_DOMAIN_HIERARCHY select GPIO_GENERIC select GPIOLIB_IRQCHIP select REGMAP_MMIO @@ -597,6 +598,8 @@ config GPIO_TEGRA default ARCH_TEGRA depends on ARCH_TEGRA || COMPILE_TEST depends on OF_GPIO + select GPIOLIB_IRQCHIP + select IRQ_DOMAIN_HIERARCHY help Say yes here to support GPIO pins on NVIDIA Tegra SoCs. 
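The idxd_register_driver()/idxd_register_bus_type() hunks and the gpi_ch_init() hunk above all fix the same class of bug: an error-path unwind loop that iterates the wrong way, tearing down the entry that just failed while leaking entry 0 (or, in the gpi.c case, incrementing instead of decrementing and walking past the end of the array). A minimal user-space sketch of the corrected idiom, using hypothetical register_item()/unregister_item() helpers rather than the kernel APIs:

#include <stdio.h>

#define NR_ITEMS 4

static int register_item(int i)
{
	return (i == 2) ? -1 : 0;	/* simulate a failure at index 2 */
}

static void unregister_item(int i)
{
	printf("unregister %d\n", i);	/* stand-in for driver_unregister() etc. */
}

int main(void)
{
	int i, rc = 0;

	for (i = 0; i < NR_ITEMS; i++) {
		rc = register_item(i);
		if (rc)
			break;
	}

	if (rc) {
		/*
		 * Unwind exactly what succeeded: indices i-1 down to 0.
		 * The old "for (; i > 0; i--)" form unregistered the entry
		 * that had just failed and never reached entry 0.
		 */
		while (--i >= 0)
			unregister_item(i);
	}

	return rc ? 1 : 0;
}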
diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c index 672681a976f5..a912a8fed197 100644 --- a/drivers/gpio/gpio-mvebu.c +++ b/drivers/gpio/gpio-mvebu.c @@ -676,20 +676,17 @@ static void mvebu_pwm_get_state(struct pwm_chip *chip, else state->duty_cycle = 1; + val = (unsigned long long) u; /* on duration */ regmap_read(mvpwm->regs, mvebu_pwmreg_blink_off_duration(mvpwm), &u); - val = (unsigned long long) u * NSEC_PER_SEC; + val += (unsigned long long) u; /* period = on + off duration */ + val *= NSEC_PER_SEC; do_div(val, mvpwm->clk_rate); - if (val < state->duty_cycle) { + if (val > UINT_MAX) + state->period = UINT_MAX; + else if (val) + state->period = val; + else state->period = 1; - } else { - val -= state->duty_cycle; - if (val > UINT_MAX) - state->period = UINT_MAX; - else if (val) - state->period = val; - else - state->period = 1; - } regmap_read(mvchip->regs, GPIO_BLINK_EN_OFF + mvchip->offset, &u); if (u) diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c index 12b679ca552c..1a7b51163528 100644 --- a/drivers/gpio/gpiolib-cdev.c +++ b/drivers/gpio/gpiolib-cdev.c @@ -1979,6 +1979,21 @@ struct gpio_chardev_data { #endif }; +static int chipinfo_get(struct gpio_chardev_data *cdev, void __user *ip) +{ + struct gpio_device *gdev = cdev->gdev; + struct gpiochip_info chipinfo; + + memset(&chipinfo, 0, sizeof(chipinfo)); + + strscpy(chipinfo.name, dev_name(&gdev->dev), sizeof(chipinfo.name)); + strscpy(chipinfo.label, gdev->label, sizeof(chipinfo.label)); + chipinfo.lines = gdev->ngpio; + if (copy_to_user(ip, &chipinfo, sizeof(chipinfo))) + return -EFAULT; + return 0; +} + #ifdef CONFIG_GPIO_CDEV_V1 /* * returns 0 if the versions match, else the previously selected ABI version @@ -1993,6 +2008,41 @@ static int lineinfo_ensure_abi_version(struct gpio_chardev_data *cdata, return abiv; } + +static int lineinfo_get_v1(struct gpio_chardev_data *cdev, void __user *ip, + bool watch) +{ + struct gpio_desc *desc; + struct gpioline_info lineinfo; + struct gpio_v2_line_info lineinfo_v2; + + if (copy_from_user(&lineinfo, ip, sizeof(lineinfo))) + return -EFAULT; + + /* this doubles as a range check on line_offset */ + desc = gpiochip_get_desc(cdev->gdev->chip, lineinfo.line_offset); + if (IS_ERR(desc)) + return PTR_ERR(desc); + + if (watch) { + if (lineinfo_ensure_abi_version(cdev, 1)) + return -EPERM; + + if (test_and_set_bit(lineinfo.line_offset, cdev->watched_lines)) + return -EBUSY; + } + + gpio_desc_to_lineinfo(desc, &lineinfo_v2); + gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo); + + if (copy_to_user(ip, &lineinfo, sizeof(lineinfo))) { + if (watch) + clear_bit(lineinfo.line_offset, cdev->watched_lines); + return -EFAULT; + } + + return 0; +} #endif static int lineinfo_get(struct gpio_chardev_data *cdev, void __user *ip, @@ -2030,6 +2080,22 @@ static int lineinfo_get(struct gpio_chardev_data *cdev, void __user *ip, return 0; } +static int lineinfo_unwatch(struct gpio_chardev_data *cdev, void __user *ip) +{ + __u32 offset; + + if (copy_from_user(&offset, ip, sizeof(offset))) + return -EFAULT; + + if (offset >= cdev->gdev->ngpio) + return -EINVAL; + + if (!test_and_clear_bit(offset, cdev->watched_lines)) + return -EBUSY; + + return 0; +} + /* * gpio_ioctl() - ioctl handler for the GPIO chardev */ @@ -2037,80 +2103,24 @@ static long gpio_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { struct gpio_chardev_data *cdev = file->private_data; struct gpio_device *gdev = cdev->gdev; - struct gpio_chip *gc = gdev->chip; void __user *ip = (void __user *)arg; 
- __u32 offset; /* We fail any subsequent ioctl():s when the chip is gone */ - if (!gc) + if (!gdev->chip) return -ENODEV; /* Fill in the struct and pass to userspace */ if (cmd == GPIO_GET_CHIPINFO_IOCTL) { - struct gpiochip_info chipinfo; - - memset(&chipinfo, 0, sizeof(chipinfo)); - - strscpy(chipinfo.name, dev_name(&gdev->dev), - sizeof(chipinfo.name)); - strscpy(chipinfo.label, gdev->label, - sizeof(chipinfo.label)); - chipinfo.lines = gdev->ngpio; - if (copy_to_user(ip, &chipinfo, sizeof(chipinfo))) - return -EFAULT; - return 0; + return chipinfo_get(cdev, ip); #ifdef CONFIG_GPIO_CDEV_V1 - } else if (cmd == GPIO_GET_LINEINFO_IOCTL) { - struct gpio_desc *desc; - struct gpioline_info lineinfo; - struct gpio_v2_line_info lineinfo_v2; - - if (copy_from_user(&lineinfo, ip, sizeof(lineinfo))) - return -EFAULT; - - /* this doubles as a range check on line_offset */ - desc = gpiochip_get_desc(gc, lineinfo.line_offset); - if (IS_ERR(desc)) - return PTR_ERR(desc); - - gpio_desc_to_lineinfo(desc, &lineinfo_v2); - gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo); - - if (copy_to_user(ip, &lineinfo, sizeof(lineinfo))) - return -EFAULT; - return 0; } else if (cmd == GPIO_GET_LINEHANDLE_IOCTL) { return linehandle_create(gdev, ip); } else if (cmd == GPIO_GET_LINEEVENT_IOCTL) { return lineevent_create(gdev, ip); - } else if (cmd == GPIO_GET_LINEINFO_WATCH_IOCTL) { - struct gpio_desc *desc; - struct gpioline_info lineinfo; - struct gpio_v2_line_info lineinfo_v2; - - if (copy_from_user(&lineinfo, ip, sizeof(lineinfo))) - return -EFAULT; - - /* this doubles as a range check on line_offset */ - desc = gpiochip_get_desc(gc, lineinfo.line_offset); - if (IS_ERR(desc)) - return PTR_ERR(desc); - - if (lineinfo_ensure_abi_version(cdev, 1)) - return -EPERM; - - if (test_and_set_bit(lineinfo.line_offset, cdev->watched_lines)) - return -EBUSY; - - gpio_desc_to_lineinfo(desc, &lineinfo_v2); - gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo); - - if (copy_to_user(ip, &lineinfo, sizeof(lineinfo))) { - clear_bit(lineinfo.line_offset, cdev->watched_lines); - return -EFAULT; - } - - return 0; + } else if (cmd == GPIO_GET_LINEINFO_IOCTL || + cmd == GPIO_GET_LINEINFO_WATCH_IOCTL) { + return lineinfo_get_v1(cdev, ip, + cmd == GPIO_GET_LINEINFO_WATCH_IOCTL); #endif /* CONFIG_GPIO_CDEV_V1 */ } else if (cmd == GPIO_V2_GET_LINEINFO_IOCTL || cmd == GPIO_V2_GET_LINEINFO_WATCH_IOCTL) { @@ -2119,16 +2129,7 @@ static long gpio_ioctl(struct file *file, unsigned int cmd, unsigned long arg) } else if (cmd == GPIO_V2_GET_LINE_IOCTL) { return linereq_create(gdev, ip); } else if (cmd == GPIO_GET_LINEINFO_UNWATCH_IOCTL) { - if (copy_from_user(&offset, ip, sizeof(offset))) - return -EFAULT; - - if (offset >= cdev->gdev->ngpio) - return -EINVAL; - - if (!test_and_clear_bit(offset, cdev->watched_lines)) - return -EBUSY; - - return 0; + return lineinfo_unwatch(cdev, ip); } return -EINVAL; } diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c index b02cc2abd3b6..b78a634cca24 100644 --- a/drivers/gpio/gpiolib.c +++ b/drivers/gpio/gpiolib.c @@ -1489,6 +1489,9 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc, type = IRQ_TYPE_NONE; } + if (gc->to_irq) + chip_warn(gc, "to_irq is redefined in %s and you shouldn't rely on it\n", __func__); + gc->to_irq = gpiochip_to_irq; gc->irq.default_type = type; gc->irq.lock_key = lock_key; diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c index a84dc427cf82..560aaecba31b 100644 --- a/drivers/gpu/drm/drm_atomic_helper.c +++ 
b/drivers/gpu/drm/drm_atomic_helper.c @@ -3030,7 +3030,7 @@ int drm_atomic_helper_set_config(struct drm_mode_set *set, ret = handle_conflicting_encoders(state, true); if (ret) - return ret; + goto fail; ret = drm_atomic_commit(state); diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c index 3c5b04abb66f..eedbb48815b7 100644 --- a/drivers/gpu/drm/drm_dp_helper.c +++ b/drivers/gpu/drm/drm_dp_helper.c @@ -1236,7 +1236,7 @@ bool drm_dp_read_sink_count_cap(struct drm_connector *connector, return connector->connector_type != DRM_MODE_CONNECTOR_eDP && dpcd[DP_DPCD_REV] >= DP_DPCD_REV_11 && dpcd[DP_DOWNSTREAMPORT_PRESENT] & DP_DWN_STRM_PORT_PRESENT && - !drm_dp_has_quirk(desc, 0, DP_DPCD_QUIRK_NO_SINK_COUNT); + !drm_dp_has_quirk(desc, DP_DPCD_QUIRK_NO_SINK_COUNT); } EXPORT_SYMBOL(drm_dp_read_sink_count_cap); @@ -1957,87 +1957,6 @@ drm_dp_get_quirks(const struct drm_dp_dpcd_ident *ident, bool is_branch) #undef DEVICE_ID_ANY #undef DEVICE_ID -struct edid_quirk { - u8 mfg_id[2]; - u8 prod_id[2]; - u32 quirks; -}; - -#define MFG(first, second) { (first), (second) } -#define PROD_ID(first, second) { (first), (second) } - -/* - * Some devices have unreliable OUIDs where they don't set the device ID - * correctly, and as a result we need to use the EDID for finding additional - * DP quirks in such cases. - */ -static const struct edid_quirk edid_quirk_list[] = { - /* Optional 4K AMOLED panel in the ThinkPad X1 Extreme 2nd Generation - * only supports DPCD backlight controls - */ - { MFG(0x4c, 0x83), PROD_ID(0x41, 0x41), BIT(DP_QUIRK_FORCE_DPCD_BACKLIGHT) }, - /* - * Some Dell CML 2020 systems have panels support both AUX and PWM - * backlight control, and some only support AUX backlight control. All - * said panels start up in AUX mode by default, and we don't have any - * support for disabling HDR mode on these panels which would be - * required to switch to PWM backlight control mode (plus, I'm not - * even sure we want PWM backlight controls over DPCD backlight - * controls anyway...). Until we have a better way of detecting these, - * force DPCD backlight mode on all of them. - */ - { MFG(0x06, 0xaf), PROD_ID(0x9b, 0x32), BIT(DP_QUIRK_FORCE_DPCD_BACKLIGHT) }, - { MFG(0x06, 0xaf), PROD_ID(0xeb, 0x41), BIT(DP_QUIRK_FORCE_DPCD_BACKLIGHT) }, - { MFG(0x4d, 0x10), PROD_ID(0xc7, 0x14), BIT(DP_QUIRK_FORCE_DPCD_BACKLIGHT) }, - { MFG(0x4d, 0x10), PROD_ID(0xe6, 0x14), BIT(DP_QUIRK_FORCE_DPCD_BACKLIGHT) }, - { MFG(0x4c, 0x83), PROD_ID(0x47, 0x41), BIT(DP_QUIRK_FORCE_DPCD_BACKLIGHT) }, - { MFG(0x09, 0xe5), PROD_ID(0xde, 0x08), BIT(DP_QUIRK_FORCE_DPCD_BACKLIGHT) }, -}; - -#undef MFG -#undef PROD_ID - -/** - * drm_dp_get_edid_quirks() - Check the EDID of a DP device to find additional - * DP-specific quirks - * @edid: The EDID to check - * - * While OUIDs are meant to be used to recognize a DisplayPort device, a lot - * of manufacturers don't seem to like following standards and neglect to fill - * the dev-ID in, making it impossible to only use OUIDs for determining - * quirks in some cases. This function can be used to check the EDID and look - * up any additional DP quirks. The bits returned by this function correspond - * to the quirk bits in &drm_dp_quirk. - * - * Returns: a bitmask of quirks, if any. The driver can check this using - * drm_dp_has_quirk(). 
- */ -u32 drm_dp_get_edid_quirks(const struct edid *edid) -{ - const struct edid_quirk *quirk; - u32 quirks = 0; - int i; - - if (!edid) - return 0; - - for (i = 0; i < ARRAY_SIZE(edid_quirk_list); i++) { - quirk = &edid_quirk_list[i]; - if (memcmp(quirk->mfg_id, edid->mfg_id, - sizeof(edid->mfg_id)) == 0 && - memcmp(quirk->prod_id, edid->prod_code, - sizeof(edid->prod_code)) == 0) - quirks |= quirk->quirks; - } - - DRM_DEBUG_KMS("DP sink: EDID mfg %*phD prod-ID %*phD quirks: 0x%04x\n", - (int)sizeof(edid->mfg_id), edid->mfg_id, - (int)sizeof(edid->prod_code), edid->prod_code, quirks); - - return quirks; -} -EXPORT_SYMBOL(drm_dp_get_edid_quirks); - /** * drm_dp_read_desc - read sink/branch descriptor from DPCD * @aux: DisplayPort AUX channel diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c index 18b15a4aee2d..e82b596d646c 100644 --- a/drivers/gpu/drm/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/drm_dp_mst_topology.c @@ -3629,14 +3629,26 @@ static int drm_dp_send_up_ack_reply(struct drm_dp_mst_topology_mgr *mgr, return 0; } -static int drm_dp_get_vc_payload_bw(u8 dp_link_bw, u8 dp_link_count) +/** + * drm_dp_get_vc_payload_bw - get the VC payload BW for an MST link + * @link_rate: link rate in 10kbits/s units + * @link_lane_count: lane count + * + * Calculate the total bandwidth of a MultiStream Transport link. The returned + * value is in units of PBNs/(timeslots/1 MTP). This value can be used to + * convert the number of PBNs required for a given stream to the number of + * timeslots this stream requires in each MTP. + */ +int drm_dp_get_vc_payload_bw(int link_rate, int link_lane_count) { - if (dp_link_bw == 0 || dp_link_count == 0) - DRM_DEBUG_KMS("invalid link bandwidth in DPCD: %x (link count: %d)\n", - dp_link_bw, dp_link_count); + if (link_rate == 0 || link_lane_count == 0) + DRM_DEBUG_KMS("invalid link rate/lane count: (%d / %d)\n", + link_rate, link_lane_count); - return dp_link_bw * dp_link_count / 2; + /* See DP v2.0 2.6.4.2, VCPayload_Bandwidth_for_OneTimeSlotPer_MTP_Allocation */ + return link_rate * link_lane_count / 54000; } +EXPORT_SYMBOL(drm_dp_get_vc_payload_bw); /** * drm_dp_read_mst_cap() - check whether or not a sink supports MST @@ -3692,7 +3704,7 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms goto out_unlock; } - mgr->pbn_div = drm_dp_get_vc_payload_bw(mgr->dpcd[1], + mgr->pbn_div = drm_dp_get_vc_payload_bw(drm_dp_bw_code_to_link_rate(mgr->dpcd[1]), mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK); if (mgr->pbn_div == 0) { ret = -EINVAL; @@ -5824,8 +5836,7 @@ struct drm_dp_aux *drm_dp_mst_dsc_aux_for_port(struct drm_dp_mst_port *port) if (drm_dp_read_desc(port->mgr->aux, &desc, true)) return NULL; - if (drm_dp_has_quirk(&desc, 0, - DP_DPCD_QUIRK_DSC_WITHOUT_VIRTUAL_DPCD) && + if (drm_dp_has_quirk(&desc, DP_DPCD_QUIRK_DSC_WITHOUT_VIRTUAL_DPCD) && port->mgr->dpcd[DP_DPCD_REV] >= DP_DPCD_REV_14 && port->parent == port->mgr->mst_primary) { u8 downstreamport; diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 02ca22e90290..0b232a73c1b7 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -387,9 +387,16 @@ static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, if (gbo->vmap_use_count > 0) goto out; - ret = ttm_bo_vmap(&gbo->bo, &gbo->map); - if (ret) - return ret; + /* + * VRAM helpers unmap the BO only on demand. So the previous + * page mapping might still be around. 
Only vmap if there's + no mapping present. + */ + if (dma_buf_map_is_null(&gbo->map)) { + ret = ttm_bo_vmap(&gbo->bo, &gbo->map); + if (ret) + return ret; + } out: ++gbo->vmap_use_count; @@ -577,6 +584,7 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo, return; ttm_bo_vunmap(bo, &gbo->map); + dma_buf_map_clear(&gbo->map); /* explicitly clear mapping for next vmap call */ } static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo, diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c index bf6e525bb116..338650abd267 100644 --- a/drivers/gpu/drm/drm_plane.c +++ b/drivers/gpu/drm/drm_plane.c @@ -1259,7 +1259,14 @@ retry: if (ret) goto out; - if (old_fb->format != fb->format) { + /* + * Only check the FOURCC format code, excluding modifiers. This is + * enough for all legacy drivers. Atomic drivers have their own + * checks in their ->atomic_check implementation, which will + * return -EINVAL if any hw or driver constraint is violated due + * to modifier changes. + */ + if (old_fb->format->format != fb->format->format) { DRM_DEBUG_KMS("Page flip is not allowed to change frame buffer format.\n"); ret = -EINVAL; goto out; } diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c index 6e74e6745eca..349146049849 100644 --- a/drivers/gpu/drm/drm_syncobj.c +++ b/drivers/gpu/drm/drm_syncobj.c @@ -388,19 +388,18 @@ int drm_syncobj_find_fence(struct drm_file *file_private, return -ENOENT; *fence = drm_syncobj_fence_get(syncobj); - drm_syncobj_put(syncobj); if (*fence) { ret = dma_fence_chain_find_seqno(fence, point); if (!ret) - return 0; + goto out; dma_fence_put(*fence); } else { ret = -EINVAL; } if (!(flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT)) - return ret; + goto out; memset(&wait, 0, sizeof(wait)); wait.task = current; @@ -432,6 +431,9 @@ int drm_syncobj_find_fence(struct drm_file *file_private, if (wait.node.next) drm_syncobj_remove_wait(syncobj, &wait); +out: + drm_syncobj_put(syncobj); + return ret; } EXPORT_SYMBOL(drm_syncobj_find_fence); diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug index 25cd9788a4d5..be76054c01d8 100644 --- a/drivers/gpu/drm/i915/Kconfig.debug +++ b/drivers/gpu/drm/i915/Kconfig.debug @@ -31,10 +31,13 @@ config DRM_I915_DEBUG select DRM_DEBUG_SELFTEST select DMABUF_SELFTESTS select SW_SYNC # signaling validation framework (igt/syncobj*) + select DRM_I915_WERROR + select DRM_I915_DEBUG_GEM + select DRM_I915_DEBUG_GEM_ONCE + select DRM_I915_DEBUG_MMIO + select DRM_I915_DEBUG_RUNTIME_PM select DRM_I915_SW_FENCE_DEBUG_OBJECTS select DRM_I915_SELFTEST - select DRM_I915_DEBUG_RUNTIME_PM - select DRM_I915_DEBUG_MMIO default n help Choose this option to turn on extra driver debugging that may affect @@ -69,6 +72,21 @@ config DRM_I915_DEBUG_GEM If in doubt, say "N". +config DRM_I915_DEBUG_GEM_ONCE + bool "Make a GEM debug failure fatal" + default n + depends on DRM_I915_DEBUG_GEM + help + During development, we often only want the very first failure + as that would otherwise be lost in the deluge of subsequent + failures. However, more casual testers may not want to trigger + a hard BUG_ON and hope that the system remains sufficiently usable + to capture a bug report in situ. + + Recommended for driver developers only. + + If in doubt, say "N".
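Unit check for the drm_dp_get_vc_payload_bw() change further above: the function now takes the link rate in 10 kbit/s units (what drm_dp_bw_code_to_link_rate() returns, i.e. the DPCD bandwidth code times 27000) instead of the raw code, so the divisor grows from 2 to 54000 and the result is unchanged. A standalone arithmetic sketch (example numbers, not part of the patch):

#include <stdio.h>

int main(void)
{
	int bw_code = 0x14;			/* DP_LINK_BW_5_4: HBR2, 5.4 Gbit/s per lane */
	int link_rate = bw_code * 27000;	/* 540000, in 10 kbit/s units */
	int lanes = 4;

	/* old: dp_link_bw * dp_link_count / 2 */
	printf("old formula: %d PBN per timeslot\n", bw_code * lanes / 2);

	/* new: link_rate * link_lane_count / 54000 (DP v2.0, 2.6.4.2) */
	printf("new formula: %d PBN per timeslot\n", link_rate * lanes / 54000);

	return 0;	/* both print 40 */
}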
+ config DRM_I915_ERRLOG_GEM bool "Insert extra logging (very verbose) for common GEM errors" default n diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile index d6ac946d0407..32bd1fdffffe 100644 --- a/drivers/gpu/drm/i915/Makefile +++ b/drivers/gpu/drm/i915/Makefile @@ -136,6 +136,7 @@ gem-y += \ gem/i915_gem_clflush.o \ gem/i915_gem_client_blt.o \ gem/i915_gem_context.o \ + gem/i915_gem_create.o \ gem/i915_gem_dmabuf.o \ gem/i915_gem_domain.o \ gem/i915_gem_execbuffer.o \ @@ -200,14 +201,17 @@ i915-y += \ display/intel_color.o \ display/intel_combo_phy.o \ display/intel_connector.o \ + display/intel_crtc.o \ display/intel_csr.o \ display/intel_cursor.o \ display/intel_display.o \ display/intel_display_power.o \ display/intel_dpio_phy.o \ + display/intel_dpll.o \ display/intel_dpll_mgr.o \ display/intel_dsb.o \ display/intel_fbc.o \ + display/intel_fdi.o \ display/intel_fifo_underrun.o \ display/intel_frontbuffer.o \ display/intel_global_state.o \ @@ -239,6 +243,7 @@ i915-y += \ display/intel_crt.o \ display/intel_ddi.o \ display/intel_dp.o \ + display/intel_dp_aux.o \ display/intel_dp_aux_backlight.o \ display/intel_dp_hdcp.o \ display/intel_dp_link_training.o \ @@ -252,9 +257,11 @@ i915-y += \ display/intel_lspcon.o \ display/intel_lvds.o \ display/intel_panel.o \ + display/intel_pps.o \ display/intel_sdvo.o \ display/intel_tv.o \ display/intel_vdsc.o \ + display/intel_vrr.o \ display/vlv_dsi.o \ display/vlv_dsi_pll.o diff --git a/drivers/gpu/drm/i915/display/i9xx_plane.c b/drivers/gpu/drm/i915/display/i9xx_plane.c index b78985c855a5..d30374df67f0 100644 --- a/drivers/gpu/drm/i915/display/i9xx_plane.c +++ b/drivers/gpu/drm/i915/display/i9xx_plane.c @@ -276,6 +276,13 @@ int i9xx_check_plane_surface(struct intel_plane_state *plane_state) } } + if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) { + drm_WARN_ON(&dev_priv->drm, src_x > 8191 || src_y > 4095); + } else if (INTEL_GEN(dev_priv) >= 4 && + fb->modifier == I915_FORMAT_MOD_X_TILED) { + drm_WARN_ON(&dev_priv->drm, src_x > 4095 || src_y > 4095); + } + plane_state->color_plane[0].offset = offset; plane_state->color_plane[0].x = src_x; plane_state->color_plane[0].y = src_y; @@ -488,6 +495,129 @@ static void i9xx_disable_plane(struct intel_plane *plane, spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags); } +static void +g4x_primary_async_flip(struct intel_plane *plane, + const struct intel_crtc_state *crtc_state, + const struct intel_plane_state *plane_state, + bool async_flip) +{ + struct drm_i915_private *dev_priv = to_i915(plane->base.dev); + u32 dspcntr = plane_state->ctl | i9xx_plane_ctl_crtc(crtc_state); + u32 dspaddr_offset = plane_state->color_plane[0].offset; + enum i9xx_plane_id i9xx_plane = plane->i9xx_plane; + unsigned long irqflags; + + if (async_flip) + dspcntr |= DISPPLANE_ASYNC_FLIP; + + spin_lock_irqsave(&dev_priv->uncore.lock, irqflags); + intel_de_write_fw(dev_priv, DSPCNTR(i9xx_plane), dspcntr); + intel_de_write_fw(dev_priv, DSPSURF(i9xx_plane), + intel_plane_ggtt_offset(plane_state) + dspaddr_offset); + spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags); +} + +static void +vlv_primary_async_flip(struct intel_plane *plane, + const struct intel_crtc_state *crtc_state, + const struct intel_plane_state *plane_state, + bool async_flip) +{ + struct drm_i915_private *dev_priv = to_i915(plane->base.dev); + u32 dspaddr_offset = plane_state->color_plane[0].offset; + enum i9xx_plane_id i9xx_plane = plane->i9xx_plane; + unsigned long irqflags; + + spin_lock_irqsave(&dev_priv->uncore.lock, 
irqflags); + intel_de_write_fw(dev_priv, DSPADDR_VLV(i9xx_plane), + intel_plane_ggtt_offset(plane_state) + dspaddr_offset); + spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags); +} + +static void +bdw_primary_enable_flip_done(struct intel_plane *plane) +{ + struct drm_i915_private *i915 = to_i915(plane->base.dev); + enum pipe pipe = plane->pipe; + + spin_lock_irq(&i915->irq_lock); + bdw_enable_pipe_irq(i915, pipe, GEN8_PIPE_PRIMARY_FLIP_DONE); + spin_unlock_irq(&i915->irq_lock); +} + +static void +bdw_primary_disable_flip_done(struct intel_plane *plane) +{ + struct drm_i915_private *i915 = to_i915(plane->base.dev); + enum pipe pipe = plane->pipe; + + spin_lock_irq(&i915->irq_lock); + bdw_disable_pipe_irq(i915, pipe, GEN8_PIPE_PRIMARY_FLIP_DONE); + spin_unlock_irq(&i915->irq_lock); +} + +static void +ivb_primary_enable_flip_done(struct intel_plane *plane) +{ + struct drm_i915_private *i915 = to_i915(plane->base.dev); + + spin_lock_irq(&i915->irq_lock); + ilk_enable_display_irq(i915, DE_PLANE_FLIP_DONE_IVB(plane->i9xx_plane)); + spin_unlock_irq(&i915->irq_lock); +} + +static void +ivb_primary_disable_flip_done(struct intel_plane *plane) +{ + struct drm_i915_private *i915 = to_i915(plane->base.dev); + + spin_lock_irq(&i915->irq_lock); + ilk_disable_display_irq(i915, DE_PLANE_FLIP_DONE_IVB(plane->i9xx_plane)); + spin_unlock_irq(&i915->irq_lock); +} + +static void +ilk_primary_enable_flip_done(struct intel_plane *plane) +{ + struct drm_i915_private *i915 = to_i915(plane->base.dev); + + spin_lock_irq(&i915->irq_lock); + ilk_enable_display_irq(i915, DE_PLANE_FLIP_DONE(plane->i9xx_plane)); + spin_unlock_irq(&i915->irq_lock); +} + +static void +ilk_primary_disable_flip_done(struct intel_plane *plane) +{ + struct drm_i915_private *i915 = to_i915(plane->base.dev); + + spin_lock_irq(&i915->irq_lock); + ilk_disable_display_irq(i915, DE_PLANE_FLIP_DONE(plane->i9xx_plane)); + spin_unlock_irq(&i915->irq_lock); +} + +static void +vlv_primary_enable_flip_done(struct intel_plane *plane) +{ + struct drm_i915_private *i915 = to_i915(plane->base.dev); + enum pipe pipe = plane->pipe; + + spin_lock_irq(&i915->irq_lock); + i915_enable_pipestat(i915, pipe, PLANE_FLIP_DONE_INT_STATUS_VLV); + spin_unlock_irq(&i915->irq_lock); +} + +static void +vlv_primary_disable_flip_done(struct intel_plane *plane) +{ + struct drm_i915_private *i915 = to_i915(plane->base.dev); + enum pipe pipe = plane->pipe; + + spin_lock_irq(&i915->irq_lock); + i915_disable_pipestat(i915, pipe, PLANE_FLIP_DONE_INT_STATUS_VLV); + spin_unlock_irq(&i915->irq_lock); +} + static bool i9xx_plane_get_hw_state(struct intel_plane *plane, enum pipe *pipe) { @@ -523,21 +653,56 @@ static bool i9xx_plane_get_hw_state(struct intel_plane *plane, return ret; } +static unsigned int +hsw_primary_max_stride(struct intel_plane *plane, + u32 pixel_format, u64 modifier, + unsigned int rotation) +{ + const struct drm_format_info *info = drm_format_info(pixel_format); + int cpp = info->cpp[0]; + + /* Limit to 8k pixels to guarantee OFFSET.x doesn't get too big. */ + return min(8192 * cpp, 32 * 1024); +} + +static unsigned int +ilk_primary_max_stride(struct intel_plane *plane, + u32 pixel_format, u64 modifier, + unsigned int rotation) +{ + const struct drm_format_info *info = drm_format_info(pixel_format); + int cpp = info->cpp[0]; + + /* Limit to 4k pixels to guarantee TILEOFF.x doesn't get too big. 
*/ + if (modifier == I915_FORMAT_MOD_X_TILED) + return min(4096 * cpp, 32 * 1024); + else + return 32 * 1024; +} + unsigned int +i965_plane_max_stride(struct intel_plane *plane, + u32 pixel_format, u64 modifier, + unsigned int rotation) +{ + const struct drm_format_info *info = drm_format_info(pixel_format); + int cpp = info->cpp[0]; + + /* Limit to 4k pixels to guarantee TILEOFF.x doesn't get too big. */ + if (modifier == I915_FORMAT_MOD_X_TILED) + return min(4096 * cpp, 16 * 1024); + else + return 32 * 1024; +} + +static unsigned int i9xx_plane_max_stride(struct intel_plane *plane, u32 pixel_format, u64 modifier, unsigned int rotation) { struct drm_i915_private *dev_priv = to_i915(plane->base.dev); - if (!HAS_GMCH(dev_priv)) { - return 32*1024; - } else if (INTEL_GEN(dev_priv) >= 4) { - if (modifier == I915_FORMAT_MOD_X_TILED) - return 16*1024; - else - return 32*1024; - } else if (INTEL_GEN(dev_priv) >= 3) { + if (INTEL_GEN(dev_priv) >= 3) { if (modifier == I915_FORMAT_MOD_X_TILED) return 8*1024; else @@ -649,12 +814,42 @@ intel_primary_plane_create(struct drm_i915_private *dev_priv, enum pipe pipe) else plane->min_cdclk = i9xx_plane_min_cdclk; - plane->max_stride = i9xx_plane_max_stride; + if (HAS_GMCH(dev_priv)) { + if (INTEL_GEN(dev_priv) >= 4) + plane->max_stride = i965_plane_max_stride; + else + plane->max_stride = i9xx_plane_max_stride; + } else { + if (IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv)) + plane->max_stride = hsw_primary_max_stride; + else + plane->max_stride = ilk_primary_max_stride; + } + plane->update_plane = i9xx_update_plane; plane->disable_plane = i9xx_disable_plane; plane->get_hw_state = i9xx_plane_get_hw_state; plane->check_plane = i9xx_plane_check; + if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) { + plane->async_flip = vlv_primary_async_flip; + plane->enable_flip_done = vlv_primary_enable_flip_done; + plane->disable_flip_done = vlv_primary_disable_flip_done; + } else if (IS_BROADWELL(dev_priv)) { + plane->need_async_flip_disable_wa = true; + plane->async_flip = g4x_primary_async_flip; + plane->enable_flip_done = bdw_primary_enable_flip_done; + plane->disable_flip_done = bdw_primary_disable_flip_done; + } else if (INTEL_GEN(dev_priv) >= 7) { + plane->async_flip = g4x_primary_async_flip; + plane->enable_flip_done = ivb_primary_enable_flip_done; + plane->disable_flip_done = ivb_primary_disable_flip_done; + } else if (INTEL_GEN(dev_priv) >= 5) { + plane->async_flip = g4x_primary_async_flip; + plane->enable_flip_done = ilk_primary_enable_flip_done; + plane->disable_flip_done = ilk_primary_disable_flip_done; + } + if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv)) ret = drm_universal_plane_init(&dev_priv->drm, &plane->base, 0, plane_funcs, diff --git a/drivers/gpu/drm/i915/display/i9xx_plane.h b/drivers/gpu/drm/i915/display/i9xx_plane.h index bc2834a62735..ca963c2a8457 100644 --- a/drivers/gpu/drm/i915/display/i9xx_plane.h +++ b/drivers/gpu/drm/i915/display/i9xx_plane.h @@ -13,7 +13,7 @@ struct drm_i915_private; struct intel_plane; struct intel_plane_state; -unsigned int i9xx_plane_max_stride(struct intel_plane *plane, +unsigned int i965_plane_max_stride(struct intel_plane *plane, u32 pixel_format, u64 modifier, unsigned int rotation); int i9xx_check_plane_surface(struct intel_plane_state *plane_state); diff --git a/drivers/gpu/drm/i915/display/intel_atomic_plane.c b/drivers/gpu/drm/i915/display/intel_atomic_plane.c index b5e1ee99535c..4683f98f7e54 100644 --- a/drivers/gpu/drm/i915/display/intel_atomic_plane.c +++ 
b/drivers/gpu/drm/i915/display/intel_atomic_plane.c @@ -452,7 +452,7 @@ void intel_update_plane(struct intel_plane *plane, trace_intel_update_plane(&plane->base, crtc); if (crtc_state->uapi.async_flip && plane->async_flip) - plane->async_flip(plane, crtc_state, plane_state); + plane->async_flip(plane, crtc_state, plane_state, true); else plane->update_plane(plane, crtc_state, plane_state); } diff --git a/drivers/gpu/drm/i915/display/intel_bw.c b/drivers/gpu/drm/i915/display/intel_bw.c index bd060404d249..4b5a30ac84bc 100644 --- a/drivers/gpu/drm/i915/display/intel_bw.c +++ b/drivers/gpu/drm/i915/display/intel_bw.c @@ -20,76 +20,9 @@ struct intel_qgv_point { struct intel_qgv_info { struct intel_qgv_point points[I915_NUM_QGV_POINTS]; u8 num_points; - u8 num_channels; u8 t_bl; - enum intel_dram_type dram_type; }; -static int icl_pcode_read_mem_global_info(struct drm_i915_private *dev_priv, - struct intel_qgv_info *qi) -{ - u32 val = 0; - int ret; - - ret = sandybridge_pcode_read(dev_priv, - ICL_PCODE_MEM_SUBSYSYSTEM_INFO | - ICL_PCODE_MEM_SS_READ_GLOBAL_INFO, - &val, NULL); - if (ret) - return ret; - - if (IS_GEN(dev_priv, 12)) { - switch (val & 0xf) { - case 0: - qi->dram_type = INTEL_DRAM_DDR4; - break; - case 3: - qi->dram_type = INTEL_DRAM_LPDDR4; - break; - case 4: - qi->dram_type = INTEL_DRAM_DDR3; - break; - case 5: - qi->dram_type = INTEL_DRAM_LPDDR3; - break; - default: - MISSING_CASE(val & 0xf); - break; - } - } else if (IS_GEN(dev_priv, 11)) { - switch (val & 0xf) { - case 0: - qi->dram_type = INTEL_DRAM_DDR4; - break; - case 1: - qi->dram_type = INTEL_DRAM_DDR3; - break; - case 2: - qi->dram_type = INTEL_DRAM_LPDDR3; - break; - case 3: - qi->dram_type = INTEL_DRAM_LPDDR4; - break; - default: - MISSING_CASE(val & 0xf); - break; - } - } else { - MISSING_CASE(INTEL_GEN(dev_priv)); - qi->dram_type = INTEL_DRAM_LPDDR3; /* Conservative default */ - } - - qi->num_channels = (val & 0xf0) >> 4; - qi->num_points = (val & 0xf00) >> 8; - - if (IS_GEN(dev_priv, 12)) - qi->t_bl = qi->dram_type == INTEL_DRAM_DDR4 ? 4 : 16; - else if (IS_GEN(dev_priv, 11)) - qi->t_bl = qi->dram_type == INTEL_DRAM_DDR4 ? 4 : 8; - - return 0; -} - static int icl_pcode_read_qgv_point_info(struct drm_i915_private *dev_priv, struct intel_qgv_point *sp, int point) @@ -139,11 +72,15 @@ int icl_pcode_restrict_qgv_points(struct drm_i915_private *dev_priv, static int icl_get_qgv_points(struct drm_i915_private *dev_priv, struct intel_qgv_info *qi) { + const struct dram_info *dram_info = &dev_priv->dram_info; int i, ret; - ret = icl_pcode_read_mem_global_info(dev_priv, qi); - if (ret) - return ret; + qi->num_points = dram_info->num_qgv_points; + + if (IS_GEN(dev_priv, 12)) + qi->t_bl = dev_priv->dram_info.type == INTEL_DRAM_DDR4 ? 4 : 16; + else if (IS_GEN(dev_priv, 11)) + qi->t_bl = dev_priv->dram_info.type == INTEL_DRAM_DDR4 ? 4 : 8; if (drm_WARN_ON(&dev_priv->drm, qi->num_points > ARRAY_SIZE(qi->points))) @@ -209,7 +146,7 @@ static int icl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel { struct intel_qgv_info qi = {}; bool is_y_tile = true; /* assume y tile may be used */ - int num_channels; + int num_channels = dev_priv->dram_info.num_channels; int deinterleave; int ipqdepth, ipqdepthpch; int dclk_max; @@ -222,7 +159,6 @@ static int icl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel "Failed to get memory subsystem information, ignoring bandwidth limits"); return ret; } - num_channels = qi.num_channels; deinterleave = DIV_ROUND_UP(num_channels, is_y_tile ? 
4 : 2); dclk_max = icl_sagv_max_dclk(&qi); diff --git a/drivers/gpu/drm/i915/display/intel_color.c b/drivers/gpu/drm/i915/display/intel_color.c index 172d398081ee..ff7dcb7088bf 100644 --- a/drivers/gpu/drm/i915/display/intel_color.c +++ b/drivers/gpu/drm/i915/display/intel_color.c @@ -1485,6 +1485,7 @@ static u32 ivb_csc_mode(const struct intel_crtc_state *crtc_state) static int ivb_color_check(struct intel_crtc_state *crtc_state) { + struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev); bool limited_color_range = ilk_csc_limited_range(crtc_state); int ret; @@ -1492,6 +1493,13 @@ static int ivb_color_check(struct intel_crtc_state *crtc_state) if (ret) return ret; + if (crtc_state->output_format != INTEL_OUTPUT_FORMAT_RGB && + crtc_state->hw.ctm) { + drm_dbg_kms(&dev_priv->drm, + "YCBCR and CTM together are not possible\n"); + return -EINVAL; + } + crtc_state->gamma_enable = (crtc_state->hw.gamma_lut || crtc_state->hw.degamma_lut) && @@ -1525,12 +1533,20 @@ static u32 glk_gamma_mode(const struct intel_crtc_state *crtc_state) static int glk_color_check(struct intel_crtc_state *crtc_state) { + struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev); int ret; ret = check_luts(crtc_state); if (ret) return ret; + if (crtc_state->output_format != INTEL_OUTPUT_FORMAT_RGB && + crtc_state->hw.ctm) { + drm_dbg_kms(&dev_priv->drm, + "YCBCR and CTM together are not possible\n"); + return -EINVAL; + } + crtc_state->gamma_enable = crtc_state->hw.gamma_lut && !crtc_state->c8_planes; diff --git a/drivers/gpu/drm/i915/display/intel_crtc.c b/drivers/gpu/drm/i915/display/intel_crtc.c new file mode 100644 index 000000000000..57b0a3ebe908 --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_crtc.c @@ -0,0 +1,325 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2020 Intel Corporation + */ +#include <linux/kernel.h> +#include <linux/slab.h> + +#include <drm/drm_atomic_helper.h> +#include <drm/drm_fourcc.h> +#include <drm/drm_plane.h> +#include <drm/drm_plane_helper.h> + +#include "intel_atomic.h" +#include "intel_atomic_plane.h" +#include "intel_color.h" +#include "intel_crtc.h" +#include "intel_cursor.h" +#include "intel_display_debugfs.h" +#include "intel_display_types.h" +#include "intel_pipe_crc.h" +#include "intel_sprite.h" +#include "i9xx_plane.h" + +static void assert_vblank_disabled(struct drm_crtc *crtc) +{ + if (I915_STATE_WARN_ON(drm_crtc_vblank_get(crtc) == 0)) + drm_crtc_vblank_put(crtc); +} + +u32 intel_crtc_get_vblank_counter(struct intel_crtc *crtc) +{ + struct drm_device *dev = crtc->base.dev; + struct drm_vblank_crtc *vblank = &dev->vblank[drm_crtc_index(&crtc->base)]; + + if (!vblank->max_vblank_count) + return (u32)drm_crtc_accurate_vblank_count(&crtc->base); + + return crtc->base.funcs->get_vblank_counter(&crtc->base); +} + +u32 intel_crtc_max_vblank_count(const struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev); + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); + u32 mode_flags = crtc->mode_flags; + + /* + * From Gen 11, in case of DSI command mode, the frame counter wouldn't + * have updated at the beginning of TE; if we want to use + * the hw counter, we would find it updated only at + * the next TE, hence switching to the sw counter.
+ */ + if (mode_flags & (I915_MODE_FLAG_DSI_USE_TE0 | I915_MODE_FLAG_DSI_USE_TE1)) + return 0; + + /* + * On i965gm the hardware frame counter reads + * zero when the TV encoder is enabled :( + */ + if (IS_I965GM(dev_priv) && + (crtc_state->output_types & BIT(INTEL_OUTPUT_TVOUT))) + return 0; + + if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv)) + return 0xffffffff; /* full 32 bit counter */ + else if (INTEL_GEN(dev_priv) >= 3) + return 0xffffff; /* only 24 bits of frame count */ + else + return 0; /* Gen2 doesn't have a hardware frame counter */ +} + +void intel_crtc_vblank_on(const struct intel_crtc_state *crtc_state) +{ + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); + + assert_vblank_disabled(&crtc->base); + drm_crtc_set_max_vblank_count(&crtc->base, + intel_crtc_max_vblank_count(crtc_state)); + drm_crtc_vblank_on(&crtc->base); +} + +void intel_crtc_vblank_off(const struct intel_crtc_state *crtc_state) +{ + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); + + drm_crtc_vblank_off(&crtc->base); + assert_vblank_disabled(&crtc->base); +} + +struct intel_crtc_state *intel_crtc_state_alloc(struct intel_crtc *crtc) +{ + struct intel_crtc_state *crtc_state; + + crtc_state = kmalloc(sizeof(*crtc_state), GFP_KERNEL); + + if (crtc_state) + intel_crtc_state_reset(crtc_state, crtc); + + return crtc_state; +} + +void intel_crtc_state_reset(struct intel_crtc_state *crtc_state, + struct intel_crtc *crtc) +{ + memset(crtc_state, 0, sizeof(*crtc_state)); + + __drm_atomic_helper_crtc_state_reset(&crtc_state->uapi, &crtc->base); + + crtc_state->cpu_transcoder = INVALID_TRANSCODER; + crtc_state->master_transcoder = INVALID_TRANSCODER; + crtc_state->hsw_workaround_pipe = INVALID_PIPE; + crtc_state->output_format = INTEL_OUTPUT_FORMAT_INVALID; + crtc_state->scaler_state.scaler_id = -1; + crtc_state->mst_master_transcoder = INVALID_TRANSCODER; +} + +static struct intel_crtc *intel_crtc_alloc(void) +{ + struct intel_crtc_state *crtc_state; + struct intel_crtc *crtc; + + crtc = kzalloc(sizeof(*crtc), GFP_KERNEL); + if (!crtc) + return ERR_PTR(-ENOMEM); + + crtc_state = intel_crtc_state_alloc(crtc); + if (!crtc_state) { + kfree(crtc); + return ERR_PTR(-ENOMEM); + } + + crtc->base.state = &crtc_state->uapi; + crtc->config = crtc_state; + + return crtc; +} + +static void intel_crtc_free(struct intel_crtc *crtc) +{ + intel_crtc_destroy_state(&crtc->base, crtc->base.state); + kfree(crtc); +} + +static void intel_crtc_destroy(struct drm_crtc *crtc) +{ + struct intel_crtc *intel_crtc = to_intel_crtc(crtc); + + drm_crtc_cleanup(crtc); + kfree(intel_crtc); +} + +static int intel_crtc_late_register(struct drm_crtc *crtc) +{ + intel_crtc_debugfs_add(crtc); + return 0; +} + +#define INTEL_CRTC_FUNCS \ + .set_config = drm_atomic_helper_set_config, \ + .destroy = intel_crtc_destroy, \ + .page_flip = drm_atomic_helper_page_flip, \ + .atomic_duplicate_state = intel_crtc_duplicate_state, \ + .atomic_destroy_state = intel_crtc_destroy_state, \ + .set_crc_source = intel_crtc_set_crc_source, \ + .verify_crc_source = intel_crtc_verify_crc_source, \ + .get_crc_sources = intel_crtc_get_crc_sources, \ + .late_register = intel_crtc_late_register + +static const struct drm_crtc_funcs bdw_crtc_funcs = { + INTEL_CRTC_FUNCS, + + .get_vblank_counter = g4x_get_vblank_counter, + .enable_vblank = bdw_enable_vblank, + .disable_vblank = bdw_disable_vblank, + .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, +}; + +static const struct drm_crtc_funcs ilk_crtc_funcs = { + INTEL_CRTC_FUNCS, + + 
.get_vblank_counter = g4x_get_vblank_counter, + .enable_vblank = ilk_enable_vblank, + .disable_vblank = ilk_disable_vblank, + .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, +}; + +static const struct drm_crtc_funcs g4x_crtc_funcs = { + INTEL_CRTC_FUNCS, + + .get_vblank_counter = g4x_get_vblank_counter, + .enable_vblank = i965_enable_vblank, + .disable_vblank = i965_disable_vblank, + .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, +}; + +static const struct drm_crtc_funcs i965_crtc_funcs = { + INTEL_CRTC_FUNCS, + + .get_vblank_counter = i915_get_vblank_counter, + .enable_vblank = i965_enable_vblank, + .disable_vblank = i965_disable_vblank, + .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, +}; + +static const struct drm_crtc_funcs i915gm_crtc_funcs = { + INTEL_CRTC_FUNCS, + + .get_vblank_counter = i915_get_vblank_counter, + .enable_vblank = i915gm_enable_vblank, + .disable_vblank = i915gm_disable_vblank, + .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, +}; + +static const struct drm_crtc_funcs i915_crtc_funcs = { + INTEL_CRTC_FUNCS, + + .get_vblank_counter = i915_get_vblank_counter, + .enable_vblank = i8xx_enable_vblank, + .disable_vblank = i8xx_disable_vblank, + .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, +}; + +static const struct drm_crtc_funcs i8xx_crtc_funcs = { + INTEL_CRTC_FUNCS, + + /* no hw vblank counter */ + .enable_vblank = i8xx_enable_vblank, + .disable_vblank = i8xx_disable_vblank, + .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, +}; + +int intel_crtc_init(struct drm_i915_private *dev_priv, enum pipe pipe) +{ + struct intel_plane *primary, *cursor; + const struct drm_crtc_funcs *funcs; + struct intel_crtc *crtc; + int sprite, ret; + + crtc = intel_crtc_alloc(); + if (IS_ERR(crtc)) + return PTR_ERR(crtc); + + crtc->pipe = pipe; + crtc->num_scalers = RUNTIME_INFO(dev_priv)->num_scalers[pipe]; + + primary = intel_primary_plane_create(dev_priv, pipe); + if (IS_ERR(primary)) { + ret = PTR_ERR(primary); + goto fail; + } + crtc->plane_ids_mask |= BIT(primary->id); + + for_each_sprite(dev_priv, pipe, sprite) { + struct intel_plane *plane; + + plane = intel_sprite_plane_create(dev_priv, pipe, sprite); + if (IS_ERR(plane)) { + ret = PTR_ERR(plane); + goto fail; + } + crtc->plane_ids_mask |= BIT(plane->id); + } + + cursor = intel_cursor_plane_create(dev_priv, pipe); + if (IS_ERR(cursor)) { + ret = PTR_ERR(cursor); + goto fail; + } + crtc->plane_ids_mask |= BIT(cursor->id); + + if (HAS_GMCH(dev_priv)) { + if (IS_CHERRYVIEW(dev_priv) || + IS_VALLEYVIEW(dev_priv) || IS_G4X(dev_priv)) + funcs = &g4x_crtc_funcs; + else if (IS_GEN(dev_priv, 4)) + funcs = &i965_crtc_funcs; + else if (IS_I945GM(dev_priv) || IS_I915GM(dev_priv)) + funcs = &i915gm_crtc_funcs; + else if (IS_GEN(dev_priv, 3)) + funcs = &i915_crtc_funcs; + else + funcs = &i8xx_crtc_funcs; + } else { + if (INTEL_GEN(dev_priv) >= 8) + funcs = &bdw_crtc_funcs; + else + funcs = &ilk_crtc_funcs; + } + + ret = drm_crtc_init_with_planes(&dev_priv->drm, &crtc->base, + &primary->base, &cursor->base, + funcs, "pipe %c", pipe_name(pipe)); + if (ret) + goto fail; + + BUG_ON(pipe >= ARRAY_SIZE(dev_priv->pipe_to_crtc_mapping) || + dev_priv->pipe_to_crtc_mapping[pipe] != NULL); + dev_priv->pipe_to_crtc_mapping[pipe] = crtc; + + if (INTEL_GEN(dev_priv) < 9) { + enum i9xx_plane_id i9xx_plane = primary->i9xx_plane; + + BUG_ON(i9xx_plane >= ARRAY_SIZE(dev_priv->plane_to_crtc_mapping) || + dev_priv->plane_to_crtc_mapping[i9xx_plane] != NULL); + dev_priv->plane_to_crtc_mapping[i9xx_plane] = 
crtc; + } + + if (INTEL_GEN(dev_priv) >= 10) + drm_crtc_create_scaling_filter_property(&crtc->base, + BIT(DRM_SCALING_FILTER_DEFAULT) | + BIT(DRM_SCALING_FILTER_NEAREST_NEIGHBOR)); + + intel_color_init(crtc); + + intel_crtc_crc_init(crtc); + + drm_WARN_ON(&dev_priv->drm, drm_crtc_index(&crtc->base) != crtc->pipe); + + return 0; + +fail: + intel_crtc_free(crtc); + + return ret; +} diff --git a/drivers/gpu/drm/i915/display/intel_crtc.h b/drivers/gpu/drm/i915/display/intel_crtc.h new file mode 100644 index 000000000000..08112d557411 --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_crtc.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2020 Intel Corporation + */ + +#ifndef _INTEL_CRTC_H_ +#define _INTEL_CRTC_H_ + +#include <linux/types.h> + +enum pipe; +struct drm_i915_private; +struct intel_crtc; +struct intel_crtc_state; + +u32 intel_crtc_max_vblank_count(const struct intel_crtc_state *crtc_state); +int intel_crtc_init(struct drm_i915_private *dev_priv, enum pipe pipe); +struct intel_crtc_state *intel_crtc_state_alloc(struct intel_crtc *crtc); +void intel_crtc_state_reset(struct intel_crtc_state *crtc_state, + struct intel_crtc *crtc); + +#endif diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c index ef9848001418..fbd857f2bdb2 100644 --- a/drivers/gpu/drm/i915/display/intel_ddi.c +++ b/drivers/gpu/drm/i915/display/intel_ddi.c @@ -46,10 +46,12 @@ #include "intel_hotplug.h" #include "intel_lspcon.h" #include "intel_panel.h" +#include "intel_pps.h" #include "intel_psr.h" #include "intel_sprite.h" #include "intel_tc.h" #include "intel_vdsc.h" +#include "intel_vrr.h" struct ddi_buf_trans { u32 trans1; /* balance leg enable, de-emph level */ @@ -2099,9 +2101,9 @@ void intel_ddi_disable_transcoder_func(const struct intel_crtc_state *crtc_state } } -int intel_ddi_toggle_hdcp_signalling(struct intel_encoder *intel_encoder, - enum transcoder cpu_transcoder, - bool enable) +int intel_ddi_toggle_hdcp_bits(struct intel_encoder *intel_encoder, + enum transcoder cpu_transcoder, + bool enable, u32 hdcp_mask) { struct drm_device *dev = intel_encoder->base.dev; struct drm_i915_private *dev_priv = to_i915(dev); @@ -2116,9 +2118,9 @@ int intel_ddi_toggle_hdcp_signalling(struct intel_encoder *intel_encoder, tmp = intel_de_read(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder)); if (enable) - tmp |= TRANS_DDI_HDCP_SIGNALLING; + tmp |= hdcp_mask; else - tmp &= ~TRANS_DDI_HDCP_SIGNALLING; + tmp &= ~hdcp_mask; intel_de_write(dev_priv, TRANS_DDI_FUNC_CTL(cpu_transcoder), tmp); intel_display_power_put(dev_priv, intel_encoder->power_domain, wakeref); return ret; @@ -2701,15 +2703,11 @@ static void icl_ddi_combo_vswing_program(struct intel_encoder *encoder, ddi_translations = ehl_get_combo_buf_trans(encoder, crtc_state, &n_entries); else ddi_translations = icl_get_combo_buf_trans(encoder, crtc_state, &n_entries); - if (!ddi_translations) - return; - if (level >= n_entries) { - drm_dbg_kms(&dev_priv->drm, - "DDI translation not found for level %d. 
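Editor's aside: the vswing hunks above swap a debug message for drm_WARN_ON_ONCE() and then clamp the level to the last table entry. A compilable sketch of that warn-once-and-clamp idiom, with warn_once() standing in for the DRM helper:

#include <stdbool.h>
#include <stdio.h>

static bool warn_once(bool cond, const char *msg)
{
        static bool warned;

        if (cond && !warned) {
                warned = true;
                fprintf(stderr, "WARN: %s\n", msg);
        }
        return cond;
}

/* Out-of-range levels are reported once, then clamped. */
static int clamp_level(int level, int n_entries)
{
        if (warn_once(level >= n_entries, "level out of range"))
                level = n_entries - 1;
        return level;
}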
Using %d instead.", - level, n_entries - 1); + if (drm_WARN_ON_ONCE(&dev_priv->drm, !ddi_translations)) + return; + if (drm_WARN_ON_ONCE(&dev_priv->drm, level >= n_entries)) level = n_entries - 1; - } if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_EDP)) { struct intel_dp *intel_dp = enc_to_intel_dp(encoder); @@ -2830,13 +2828,11 @@ static void icl_mg_phy_ddi_vswing_sequence(struct intel_encoder *encoder, u32 val; ddi_translations = icl_get_mg_buf_trans(encoder, crtc_state, &n_entries); - /* The table does not have values for level 3 and level 9. */ - if (level >= n_entries || level == 3 || level == 9) { - drm_dbg_kms(&dev_priv->drm, - "DDI translation not found for level %d. Using %d instead.", - level, n_entries - 2); - level = n_entries - 2; - } + + if (drm_WARN_ON_ONCE(&dev_priv->drm, !ddi_translations)) + return; + if (drm_WARN_ON_ONCE(&dev_priv->drm, level >= n_entries)) + level = n_entries - 1; /* Set MG_TX_LINK_PARAMS cri_use_fs32 to 0. */ for (ln = 0; ln < 2; ln++) { @@ -2968,7 +2964,9 @@ tgl_dkl_phy_ddi_vswing_sequence(struct intel_encoder *encoder, ddi_translations = tgl_get_dkl_buf_trans(encoder, crtc_state, &n_entries); - if (level >= n_entries) + if (drm_WARN_ON_ONCE(&dev_priv->drm, !ddi_translations)) + return; + if (drm_WARN_ON_ONCE(&dev_priv->drm, level >= n_entries)) level = n_entries - 1; dpcnt_mask = (DKL_TX_PRESHOOT_COEFF_MASK | @@ -3555,6 +3553,22 @@ i915_reg_t dp_tp_status_reg(struct intel_encoder *encoder, return DP_TP_STATUS(encoder->port); } +static void intel_dp_sink_set_msa_timing_par_ignore_state(struct intel_dp *intel_dp, + const struct intel_crtc_state *crtc_state, + bool enable) +{ + struct drm_i915_private *i915 = dp_to_i915(intel_dp); + + if (!crtc_state->vrr.enable) + return; + + if (drm_dp_dpcd_writeb(&intel_dp->aux, DP_DOWNSPREAD_CTRL, + enable ? DP_MSA_TIMING_PAR_IGNORE_EN : 0) <= 0) + drm_dbg_kms(&i915->drm, + "Failed to set MSA_TIMING_PAR_IGNORE %s in the sink\n", + enable ? "enable" : "disable"); +} + static void intel_dp_sink_set_fec_ready(struct intel_dp *intel_dp, const struct intel_crtc_state *crtc_state) { @@ -3625,7 +3639,7 @@ static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state, */ /* 2. Enable Panel Power if PPS is required */ - intel_edp_panel_on(intel_dp); + intel_pps_on(intel_dp); /* * 3. 
For non-TBT Type-C ports, set FIA lane count @@ -3768,7 +3782,7 @@ static void hsw_ddi_pre_enable_dp(struct intel_atomic_state *state, crtc_state->port_clock, crtc_state->lane_count); - intel_edp_panel_on(intel_dp); + intel_pps_on(intel_dp); intel_ddi_clk_select(encoder, crtc_state); @@ -4010,8 +4024,8 @@ static void intel_ddi_post_disable_dp(struct intel_atomic_state *state, if (INTEL_GEN(dev_priv) >= 12) intel_ddi_disable_pipe_clock(old_crtc_state); - intel_edp_panel_vdd_on(intel_dp); - intel_edp_panel_off(intel_dp); + intel_pps_vdd_on(intel_dp); + intel_pps_off(intel_dp); if (!intel_phy_is_tc(dev_priv, phy) || dig_port->tc_mode != TC_PORT_TBT_ALT) @@ -4062,6 +4076,8 @@ static void intel_ddi_post_disable(struct intel_atomic_state *state, intel_disable_pipe(old_crtc_state); + intel_vrr_disable(old_crtc_state); + intel_ddi_disable_transcoder_func(old_crtc_state); intel_dsc_disable(old_crtc_state); @@ -4313,6 +4329,8 @@ static void intel_enable_ddi(struct intel_atomic_state *state, if (!crtc_state->bigjoiner_slave) intel_ddi_enable_transcoder_func(encoder, crtc_state); + intel_vrr_enable(encoder, crtc_state); + intel_enable_pipe(crtc_state); intel_crtc_vblank_on(crtc_state); @@ -4326,7 +4344,7 @@ static void intel_enable_ddi(struct intel_atomic_state *state, if (conn_state->content_protection == DRM_MODE_CONTENT_PROTECTION_DESIRED) intel_hdcp_enable(to_intel_connector(conn_state->connector), - crtc_state->cpu_transcoder, + crtc_state, (u8)conn_state->hdcp_content_type); } @@ -4349,6 +4367,9 @@ static void intel_disable_ddi_dp(struct intel_atomic_state *state, /* Disable the decompression in DP Sink */ intel_dp_sink_set_decompression_state(intel_dp, old_crtc_state, false); + /* Disable Ignore_MSA bit in DP Sink */ + intel_dp_sink_set_msa_timing_par_ignore_state(intel_dp, old_crtc_state, + false); } static void intel_disable_ddi_hdmi(struct intel_atomic_state *state, @@ -5039,6 +5060,8 @@ static void intel_ddi_encoder_destroy(struct drm_encoder *encoder) intel_dp_encoder_flush_work(encoder); drm_encoder_cleanup(encoder); + if (dig_port) + kfree(dig_port->hdcp_port_data.streams); kfree(dig_port); } @@ -5192,12 +5215,20 @@ intel_ddi_hotplug(struct intel_encoder *encoder, { struct drm_i915_private *i915 = to_i915(encoder->base.dev); struct intel_digital_port *dig_port = enc_to_dig_port(encoder); + struct intel_dp *intel_dp = &dig_port->dp; enum phy phy = intel_port_to_phy(i915, encoder->port); bool is_tc = intel_phy_is_tc(i915, phy); struct drm_modeset_acquire_ctx ctx; enum intel_hotplug_state state; int ret; + if (intel_dp->compliance.test_active && + intel_dp->compliance.test_type == DP_TEST_LINK_PHY_TEST_PATTERN) { + intel_dp_phy_test(encoder); + /* just do the PHY test and nothing else */ + return INTEL_HOTPLUG_UNCHANGED; + } + state = intel_encoder_hotplug(encoder, connector); drm_modeset_acquire_init(&ctx, 0); diff --git a/drivers/gpu/drm/i915/display/intel_ddi.h b/drivers/gpu/drm/i915/display/intel_ddi.h index dcc711cfe4fe..a4dd815c0000 100644 --- a/drivers/gpu/drm/i915/display/intel_ddi.h +++ b/drivers/gpu/drm/i915/display/intel_ddi.h @@ -50,9 +50,9 @@ u32 bxt_signal_levels(struct intel_dp *intel_dp, const struct intel_crtc_state *crtc_state); u32 ddi_signal_levels(struct intel_dp *intel_dp, const struct intel_crtc_state *crtc_state); -int intel_ddi_toggle_hdcp_signalling(struct intel_encoder *intel_encoder, - enum transcoder cpu_transcoder, - bool enable); +int intel_ddi_toggle_hdcp_bits(struct intel_encoder *intel_encoder, + enum transcoder cpu_transcoder, + bool enable, u32 
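Editor's aside: the intel_ddi_toggle_hdcp_bits() rename above generalizes the helper from one hard-coded HDCP signalling bit to a caller-supplied mask, so HDCP 1.4 and 2.2 bits can share a single read-modify-write path. The core of that pattern, as a self-contained sketch:

#include <stdint.h>

/* Set or clear an arbitrary mask in a register value; the caller
 * picks the mask instead of the helper hard-coding one bit. */
static uint32_t toggle_bits(uint32_t val, uint32_t mask, int enable)
{
        if (enable)
                val |= mask;
        else
                val &= ~mask;
        return val;
}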
hdcp_mask); void icl_sanitize_encoder_pll_mapping(struct intel_encoder *encoder); #endif /* __INTEL_DDI_H__ */ diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index 0189d379a55e..9843a0f202a0 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -48,6 +48,7 @@ #include "display/intel_display_debugfs.h" #include "display/intel_dp.h" #include "display/intel_dp_mst.h" +#include "display/intel_dpll.h" #include "display/intel_dpll_mgr.h" #include "display/intel_dsi.h" #include "display/intel_dvo.h" @@ -57,6 +58,9 @@ #include "display/intel_sdvo.h" #include "display/intel_tv.h" #include "display/intel_vdsc.h" +#include "display/intel_vrr.h" + +#include "gem/i915_gem_object.h" #include "gt/intel_rps.h" @@ -68,11 +72,12 @@ #include "intel_bw.h" #include "intel_cdclk.h" #include "intel_color.h" +#include "intel_crtc.h" #include "intel_csr.h" -#include "intel_cursor.h" #include "intel_display_types.h" #include "intel_dp_link_training.h" #include "intel_fbc.h" +#include "intel_fdi.h" #include "intel_fbdev.h" #include "intel_fifo_underrun.h" #include "intel_frontbuffer.h" @@ -81,6 +86,7 @@ #include "intel_overlay.h" #include "intel_pipe_crc.h" #include "intel_pm.h" +#include "intel_pps.h" #include "intel_psr.h" #include "intel_quirks.h" #include "intel_sideband.h" @@ -114,18 +120,6 @@ static void skl_pfit_enable(const struct intel_crtc_state *crtc_state); static void ilk_pfit_enable(const struct intel_crtc_state *crtc_state); static void intel_modeset_setup_hw_state(struct drm_device *dev, struct drm_modeset_acquire_ctx *ctx); -static struct intel_crtc_state *intel_crtc_state_alloc(struct intel_crtc *crtc); - -struct intel_limit { - struct { - int min, max; - } dot, vco, n, m, m1, m2, p, p1; - - struct { - int dot_limit; - int p2_slow, p2_fast; - } p2; -}; /* returns HPLL frequency in kHz */ int vlv_get_hpll_vco(struct drm_i915_private *dev_priv) @@ -184,281 +178,6 @@ static void intel_update_czclk(struct drm_i915_private *dev_priv) dev_priv->czclk_freq); } -/* units of 100MHz */ -static u32 intel_fdi_link_freq(struct drm_i915_private *dev_priv, - const struct intel_crtc_state *pipe_config) -{ - if (HAS_DDI(dev_priv)) - return pipe_config->port_clock; /* SPLL */ - else - return dev_priv->fdi_pll_freq; -} - -static const struct intel_limit intel_limits_i8xx_dac = { - .dot = { .min = 25000, .max = 350000 }, - .vco = { .min = 908000, .max = 1512000 }, - .n = { .min = 2, .max = 16 }, - .m = { .min = 96, .max = 140 }, - .m1 = { .min = 18, .max = 26 }, - .m2 = { .min = 6, .max = 16 }, - .p = { .min = 4, .max = 128 }, - .p1 = { .min = 2, .max = 33 }, - .p2 = { .dot_limit = 165000, - .p2_slow = 4, .p2_fast = 2 }, -}; - -static const struct intel_limit intel_limits_i8xx_dvo = { - .dot = { .min = 25000, .max = 350000 }, - .vco = { .min = 908000, .max = 1512000 }, - .n = { .min = 2, .max = 16 }, - .m = { .min = 96, .max = 140 }, - .m1 = { .min = 18, .max = 26 }, - .m2 = { .min = 6, .max = 16 }, - .p = { .min = 4, .max = 128 }, - .p1 = { .min = 2, .max = 33 }, - .p2 = { .dot_limit = 165000, - .p2_slow = 4, .p2_fast = 4 }, -}; - -static const struct intel_limit intel_limits_i8xx_lvds = { - .dot = { .min = 25000, .max = 350000 }, - .vco = { .min = 908000, .max = 1512000 }, - .n = { .min = 2, .max = 16 }, - .m = { .min = 96, .max = 140 }, - .m1 = { .min = 18, .max = 26 }, - .m2 = { .min = 6, .max = 16 }, - .p = { .min = 4, .max = 128 }, - .p1 = { .min = 1, .max = 6 }, - .p2 = { .dot_limit = 165000, - 
.p2_slow = 14, .p2_fast = 7 }, -}; - -static const struct intel_limit intel_limits_i9xx_sdvo = { - .dot = { .min = 20000, .max = 400000 }, - .vco = { .min = 1400000, .max = 2800000 }, - .n = { .min = 1, .max = 6 }, - .m = { .min = 70, .max = 120 }, - .m1 = { .min = 8, .max = 18 }, - .m2 = { .min = 3, .max = 7 }, - .p = { .min = 5, .max = 80 }, - .p1 = { .min = 1, .max = 8 }, - .p2 = { .dot_limit = 200000, - .p2_slow = 10, .p2_fast = 5 }, -}; - -static const struct intel_limit intel_limits_i9xx_lvds = { - .dot = { .min = 20000, .max = 400000 }, - .vco = { .min = 1400000, .max = 2800000 }, - .n = { .min = 1, .max = 6 }, - .m = { .min = 70, .max = 120 }, - .m1 = { .min = 8, .max = 18 }, - .m2 = { .min = 3, .max = 7 }, - .p = { .min = 7, .max = 98 }, - .p1 = { .min = 1, .max = 8 }, - .p2 = { .dot_limit = 112000, - .p2_slow = 14, .p2_fast = 7 }, -}; - - -static const struct intel_limit intel_limits_g4x_sdvo = { - .dot = { .min = 25000, .max = 270000 }, - .vco = { .min = 1750000, .max = 3500000}, - .n = { .min = 1, .max = 4 }, - .m = { .min = 104, .max = 138 }, - .m1 = { .min = 17, .max = 23 }, - .m2 = { .min = 5, .max = 11 }, - .p = { .min = 10, .max = 30 }, - .p1 = { .min = 1, .max = 3}, - .p2 = { .dot_limit = 270000, - .p2_slow = 10, - .p2_fast = 10 - }, -}; - -static const struct intel_limit intel_limits_g4x_hdmi = { - .dot = { .min = 22000, .max = 400000 }, - .vco = { .min = 1750000, .max = 3500000}, - .n = { .min = 1, .max = 4 }, - .m = { .min = 104, .max = 138 }, - .m1 = { .min = 16, .max = 23 }, - .m2 = { .min = 5, .max = 11 }, - .p = { .min = 5, .max = 80 }, - .p1 = { .min = 1, .max = 8}, - .p2 = { .dot_limit = 165000, - .p2_slow = 10, .p2_fast = 5 }, -}; - -static const struct intel_limit intel_limits_g4x_single_channel_lvds = { - .dot = { .min = 20000, .max = 115000 }, - .vco = { .min = 1750000, .max = 3500000 }, - .n = { .min = 1, .max = 3 }, - .m = { .min = 104, .max = 138 }, - .m1 = { .min = 17, .max = 23 }, - .m2 = { .min = 5, .max = 11 }, - .p = { .min = 28, .max = 112 }, - .p1 = { .min = 2, .max = 8 }, - .p2 = { .dot_limit = 0, - .p2_slow = 14, .p2_fast = 14 - }, -}; - -static const struct intel_limit intel_limits_g4x_dual_channel_lvds = { - .dot = { .min = 80000, .max = 224000 }, - .vco = { .min = 1750000, .max = 3500000 }, - .n = { .min = 1, .max = 3 }, - .m = { .min = 104, .max = 138 }, - .m1 = { .min = 17, .max = 23 }, - .m2 = { .min = 5, .max = 11 }, - .p = { .min = 14, .max = 42 }, - .p1 = { .min = 2, .max = 6 }, - .p2 = { .dot_limit = 0, - .p2_slow = 7, .p2_fast = 7 - }, -}; - -static const struct intel_limit pnv_limits_sdvo = { - .dot = { .min = 20000, .max = 400000}, - .vco = { .min = 1700000, .max = 3500000 }, - /* Pineview's Ncounter is a ring counter */ - .n = { .min = 3, .max = 6 }, - .m = { .min = 2, .max = 256 }, - /* Pineview only has one combined m divider, which we treat as m2. 
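Editor's aside: these intel_limit tables being moved out of intel_display.c are nothing more than per-divisor min/max ranges that the PLL search later validates against. A minimal sketch of the same table-plus-check shape (field set trimmed, names illustrative):

#include <stdbool.h>

struct range { int min, max; };
struct pll_limits { struct range n, m1, m2, p1; };

static bool in_range(struct range r, int v)
{
        return v >= r.min && v <= r.max;
}

static bool pll_is_valid(const struct pll_limits *l,
                         int n, int m1, int m2, int p1)
{
        return in_range(l->n, n) && in_range(l->m1, m1) &&
               in_range(l->m2, m2) && in_range(l->p1, p1);
}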
*/ - .m1 = { .min = 0, .max = 0 }, - .m2 = { .min = 0, .max = 254 }, - .p = { .min = 5, .max = 80 }, - .p1 = { .min = 1, .max = 8 }, - .p2 = { .dot_limit = 200000, - .p2_slow = 10, .p2_fast = 5 }, -}; - -static const struct intel_limit pnv_limits_lvds = { - .dot = { .min = 20000, .max = 400000 }, - .vco = { .min = 1700000, .max = 3500000 }, - .n = { .min = 3, .max = 6 }, - .m = { .min = 2, .max = 256 }, - .m1 = { .min = 0, .max = 0 }, - .m2 = { .min = 0, .max = 254 }, - .p = { .min = 7, .max = 112 }, - .p1 = { .min = 1, .max = 8 }, - .p2 = { .dot_limit = 112000, - .p2_slow = 14, .p2_fast = 14 }, -}; - -/* Ironlake / Sandybridge - * - * We calculate clock using (register_value + 2) for N/M1/M2, so here - * the range value for them is (actual_value - 2). - */ -static const struct intel_limit ilk_limits_dac = { - .dot = { .min = 25000, .max = 350000 }, - .vco = { .min = 1760000, .max = 3510000 }, - .n = { .min = 1, .max = 5 }, - .m = { .min = 79, .max = 127 }, - .m1 = { .min = 12, .max = 22 }, - .m2 = { .min = 5, .max = 9 }, - .p = { .min = 5, .max = 80 }, - .p1 = { .min = 1, .max = 8 }, - .p2 = { .dot_limit = 225000, - .p2_slow = 10, .p2_fast = 5 }, -}; - -static const struct intel_limit ilk_limits_single_lvds = { - .dot = { .min = 25000, .max = 350000 }, - .vco = { .min = 1760000, .max = 3510000 }, - .n = { .min = 1, .max = 3 }, - .m = { .min = 79, .max = 118 }, - .m1 = { .min = 12, .max = 22 }, - .m2 = { .min = 5, .max = 9 }, - .p = { .min = 28, .max = 112 }, - .p1 = { .min = 2, .max = 8 }, - .p2 = { .dot_limit = 225000, - .p2_slow = 14, .p2_fast = 14 }, -}; - -static const struct intel_limit ilk_limits_dual_lvds = { - .dot = { .min = 25000, .max = 350000 }, - .vco = { .min = 1760000, .max = 3510000 }, - .n = { .min = 1, .max = 3 }, - .m = { .min = 79, .max = 127 }, - .m1 = { .min = 12, .max = 22 }, - .m2 = { .min = 5, .max = 9 }, - .p = { .min = 14, .max = 56 }, - .p1 = { .min = 2, .max = 8 }, - .p2 = { .dot_limit = 225000, - .p2_slow = 7, .p2_fast = 7 }, -}; - -/* LVDS 100mhz refclk limits. */ -static const struct intel_limit ilk_limits_single_lvds_100m = { - .dot = { .min = 25000, .max = 350000 }, - .vco = { .min = 1760000, .max = 3510000 }, - .n = { .min = 1, .max = 2 }, - .m = { .min = 79, .max = 126 }, - .m1 = { .min = 12, .max = 22 }, - .m2 = { .min = 5, .max = 9 }, - .p = { .min = 28, .max = 112 }, - .p1 = { .min = 2, .max = 8 }, - .p2 = { .dot_limit = 225000, - .p2_slow = 14, .p2_fast = 14 }, -}; - -static const struct intel_limit ilk_limits_dual_lvds_100m = { - .dot = { .min = 25000, .max = 350000 }, - .vco = { .min = 1760000, .max = 3510000 }, - .n = { .min = 1, .max = 3 }, - .m = { .min = 79, .max = 126 }, - .m1 = { .min = 12, .max = 22 }, - .m2 = { .min = 5, .max = 9 }, - .p = { .min = 14, .max = 42 }, - .p1 = { .min = 2, .max = 6 }, - .p2 = { .dot_limit = 225000, - .p2_slow = 7, .p2_fast = 7 }, -}; - -static const struct intel_limit intel_limits_vlv = { - /* - * These are the data rate limits (measured in fast clocks) - * since those are the strictest limits we have. The fast - * clock and actual rate limits are more relaxed, so checking - * them would make no difference. 
- */ - .dot = { .min = 25000 * 5, .max = 270000 * 5 }, - .vco = { .min = 4000000, .max = 6000000 }, - .n = { .min = 1, .max = 7 }, - .m1 = { .min = 2, .max = 3 }, - .m2 = { .min = 11, .max = 156 }, - .p1 = { .min = 2, .max = 3 }, - .p2 = { .p2_slow = 2, .p2_fast = 20 }, /* slow=min, fast=max */ -}; - -static const struct intel_limit intel_limits_chv = { - /* - * These are the data rate limits (measured in fast clocks) - * since those are the strictest limits we have. The fast - * clock and actual rate limits are more relaxed, so checking - * them would make no difference. - */ - .dot = { .min = 25000 * 5, .max = 540000 * 5}, - .vco = { .min = 4800000, .max = 6480000 }, - .n = { .min = 1, .max = 1 }, - .m1 = { .min = 2, .max = 2 }, - .m2 = { .min = 24 << 22, .max = 175 << 22 }, - .p1 = { .min = 2, .max = 4 }, - .p2 = { .p2_slow = 1, .p2_fast = 14 }, -}; - -static const struct intel_limit intel_limits_bxt = { - /* FIXME: find real dot limits */ - .dot = { .min = 0, .max = INT_MAX }, - .vco = { .min = 4800000, .max = 6700000 }, - .n = { .min = 1, .max = 1 }, - .m1 = { .min = 2, .max = 2 }, - /* FIXME: find real m2 limits */ - .m2 = { .min = 2 << 22, .max = 255 << 22 }, - .p1 = { .min = 2, .max = 4 }, - .p2 = { .p2_slow = 1, .p2_fast = 20 }, -}; - /* WA Display #0827: Gen9:all */ static void skl_wa_827(struct drm_i915_private *dev_priv, enum pipe pipe, bool enable) @@ -503,483 +222,6 @@ is_trans_port_sync_mode(const struct intel_crtc_state *crtc_state) is_trans_port_sync_slave(crtc_state); } -/* - * Platform specific helpers to calculate the port PLL loopback- (clock.m), - * and post-divider (clock.p) values, pre- (clock.vco) and post-divided fast - * (clock.dot) clock rates. This fast dot clock is fed to the port's IO logic. - * The helpers' return value is the rate of the clock that is fed to the - * display engine's pipe which can be the above fast dot clock rate or a - * divided-down version of it. 
- */ -/* m1 is reserved as 0 in Pineview, n is a ring counter */ -static int pnv_calc_dpll_params(int refclk, struct dpll *clock) -{ - clock->m = clock->m2 + 2; - clock->p = clock->p1 * clock->p2; - if (WARN_ON(clock->n == 0 || clock->p == 0)) - return 0; - clock->vco = DIV_ROUND_CLOSEST(refclk * clock->m, clock->n); - clock->dot = DIV_ROUND_CLOSEST(clock->vco, clock->p); - - return clock->dot; -} - -static u32 i9xx_dpll_compute_m(struct dpll *dpll) -{ - return 5 * (dpll->m1 + 2) + (dpll->m2 + 2); -} - -static int i9xx_calc_dpll_params(int refclk, struct dpll *clock) -{ - clock->m = i9xx_dpll_compute_m(clock); - clock->p = clock->p1 * clock->p2; - if (WARN_ON(clock->n + 2 == 0 || clock->p == 0)) - return 0; - clock->vco = DIV_ROUND_CLOSEST(refclk * clock->m, clock->n + 2); - clock->dot = DIV_ROUND_CLOSEST(clock->vco, clock->p); - - return clock->dot; -} - -static int vlv_calc_dpll_params(int refclk, struct dpll *clock) -{ - clock->m = clock->m1 * clock->m2; - clock->p = clock->p1 * clock->p2; - if (WARN_ON(clock->n == 0 || clock->p == 0)) - return 0; - clock->vco = DIV_ROUND_CLOSEST(refclk * clock->m, clock->n); - clock->dot = DIV_ROUND_CLOSEST(clock->vco, clock->p); - - return clock->dot / 5; -} - -int chv_calc_dpll_params(int refclk, struct dpll *clock) -{ - clock->m = clock->m1 * clock->m2; - clock->p = clock->p1 * clock->p2; - if (WARN_ON(clock->n == 0 || clock->p == 0)) - return 0; - clock->vco = DIV_ROUND_CLOSEST_ULL(mul_u32_u32(refclk, clock->m), - clock->n << 22); - clock->dot = DIV_ROUND_CLOSEST(clock->vco, clock->p); - - return clock->dot / 5; -} - -/* - * Returns whether the given set of divisors are valid for a given refclk with - * the given connectors. - */ -static bool intel_pll_is_valid(struct drm_i915_private *dev_priv, - const struct intel_limit *limit, - const struct dpll *clock) -{ - if (clock->n < limit->n.min || limit->n.max < clock->n) - return false; - if (clock->p1 < limit->p1.min || limit->p1.max < clock->p1) - return false; - if (clock->m2 < limit->m2.min || limit->m2.max < clock->m2) - return false; - if (clock->m1 < limit->m1.min || limit->m1.max < clock->m1) - return false; - - if (!IS_PINEVIEW(dev_priv) && !IS_VALLEYVIEW(dev_priv) && - !IS_CHERRYVIEW(dev_priv) && !IS_GEN9_LP(dev_priv)) - if (clock->m1 <= clock->m2) - return false; - - if (!IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv) && - !IS_GEN9_LP(dev_priv)) { - if (clock->p < limit->p.min || limit->p.max < clock->p) - return false; - if (clock->m < limit->m.min || limit->m.max < clock->m) - return false; - } - - if (clock->vco < limit->vco.min || limit->vco.max < clock->vco) - return false; - /* XXX: We may need to be checking "Dot clock" depending on the multiplier, - * connector, etc., rather than just a single range. - */ - if (clock->dot < limit->dot.min || limit->dot.max < clock->dot) - return false; - - return true; -} - -static int -i9xx_select_p2_div(const struct intel_limit *limit, - const struct intel_crtc_state *crtc_state, - int target) -{ - struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev); - - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { - /* - * For LVDS just rely on its current settings for dual-channel. - * We haven't figured out how to reliably set up different - * single/dual channel state, if we even can. 
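Editor's aside: to make the clock equation in i9xx_calc_dpll_params() concrete: m = 5 * (m1 + 2) + (m2 + 2), vco = refclk * m / (n + 2), dot = vco / (p1 * p2). A runnable example with one arbitrary in-range set of divisors (a 96 MHz refclk is assumed):

#include <stdio.h>

int main(void)
{
        int refclk = 96000;                     /* kHz */
        int m1 = 18, m2 = 6, n = 2, p1 = 2, p2 = 4;
        int m = 5 * (m1 + 2) + (m2 + 2);        /* 108 */
        int vco = refclk * m / (n + 2);         /* 2592000 kHz */
        int dot = vco / (p1 * p2);              /* 324000 kHz */

        printf("vco=%d kHz, dot=%d kHz\n", vco, dot);
        return 0;
}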
- */ - if (intel_is_dual_link_lvds(dev_priv)) - return limit->p2.p2_fast; - else - return limit->p2.p2_slow; - } else { - if (target < limit->p2.dot_limit) - return limit->p2.p2_slow; - else - return limit->p2.p2_fast; - } -} - -/* - * Returns a set of divisors for the desired target clock with the given - * refclk, or FALSE. The returned values represent the clock equation: - * reflck * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2. - * - * Target and reference clocks are specified in kHz. - * - * If match_clock is provided, then best_clock P divider must match the P - * divider from @match_clock used for LVDS downclocking. - */ -static bool -i9xx_find_best_dpll(const struct intel_limit *limit, - struct intel_crtc_state *crtc_state, - int target, int refclk, struct dpll *match_clock, - struct dpll *best_clock) -{ - struct drm_device *dev = crtc_state->uapi.crtc->dev; - struct dpll clock; - int err = target; - - memset(best_clock, 0, sizeof(*best_clock)); - - clock.p2 = i9xx_select_p2_div(limit, crtc_state, target); - - for (clock.m1 = limit->m1.min; clock.m1 <= limit->m1.max; - clock.m1++) { - for (clock.m2 = limit->m2.min; - clock.m2 <= limit->m2.max; clock.m2++) { - if (clock.m2 >= clock.m1) - break; - for (clock.n = limit->n.min; - clock.n <= limit->n.max; clock.n++) { - for (clock.p1 = limit->p1.min; - clock.p1 <= limit->p1.max; clock.p1++) { - int this_err; - - i9xx_calc_dpll_params(refclk, &clock); - if (!intel_pll_is_valid(to_i915(dev), - limit, - &clock)) - continue; - if (match_clock && - clock.p != match_clock->p) - continue; - - this_err = abs(clock.dot - target); - if (this_err < err) { - *best_clock = clock; - err = this_err; - } - } - } - } - } - - return (err != target); -} - -/* - * Returns a set of divisors for the desired target clock with the given - * refclk, or FALSE. The returned values represent the clock equation: - * reflck * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2. - * - * Target and reference clocks are specified in kHz. - * - * If match_clock is provided, then best_clock P divider must match the P - * divider from @match_clock used for LVDS downclocking. - */ -static bool -pnv_find_best_dpll(const struct intel_limit *limit, - struct intel_crtc_state *crtc_state, - int target, int refclk, struct dpll *match_clock, - struct dpll *best_clock) -{ - struct drm_device *dev = crtc_state->uapi.crtc->dev; - struct dpll clock; - int err = target; - - memset(best_clock, 0, sizeof(*best_clock)); - - clock.p2 = i9xx_select_p2_div(limit, crtc_state, target); - - for (clock.m1 = limit->m1.min; clock.m1 <= limit->m1.max; - clock.m1++) { - for (clock.m2 = limit->m2.min; - clock.m2 <= limit->m2.max; clock.m2++) { - for (clock.n = limit->n.min; - clock.n <= limit->n.max; clock.n++) { - for (clock.p1 = limit->p1.min; - clock.p1 <= limit->p1.max; clock.p1++) { - int this_err; - - pnv_calc_dpll_params(refclk, &clock); - if (!intel_pll_is_valid(to_i915(dev), - limit, - &clock)) - continue; - if (match_clock && - clock.p != match_clock->p) - continue; - - this_err = abs(clock.dot - target); - if (this_err < err) { - *best_clock = clock; - err = this_err; - } - } - } - } - } - - return (err != target); -} - -/* - * Returns a set of divisors for the desired target clock with the given - * refclk, or FALSE. The returned values represent the clock equation: - * reflck * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2. - * - * Target and reference clocks are specified in kHz. 
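Editor's aside: both finders above are exhaustive searches: walk every in-range divisor combination, compute the resulting dot clock, and keep the candidate with the smallest absolute error (err starts at target, so "err != target" doubles as the found flag). A stripped-down sketch with two loops instead of four:

#include <stdlib.h>

struct best { int m, p, err; };

static struct best find_best(int refclk, int target)
{
        struct best b = { 0, 0, target };       /* err starts at target */

        for (int m = 70; m <= 120; m++) {
                for (int p = 5; p <= 80; p++) {
                        int dot = refclk * m / 4 / p;   /* n + 2 fixed at 4 */
                        int err = abs(dot - target);

                        if (err < b.err) {
                                b.m = m;
                                b.p = p;
                                b.err = err;
                        }
                }
        }
        return b;       /* b.err != target means something was found */
}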
- * - * If match_clock is provided, then best_clock P divider must match the P - * divider from @match_clock used for LVDS downclocking. - */ -static bool -g4x_find_best_dpll(const struct intel_limit *limit, - struct intel_crtc_state *crtc_state, - int target, int refclk, struct dpll *match_clock, - struct dpll *best_clock) -{ - struct drm_device *dev = crtc_state->uapi.crtc->dev; - struct dpll clock; - int max_n; - bool found = false; - /* approximately equals target * 0.00585 */ - int err_most = (target >> 8) + (target >> 9); - - memset(best_clock, 0, sizeof(*best_clock)); - - clock.p2 = i9xx_select_p2_div(limit, crtc_state, target); - - max_n = limit->n.max; - /* based on hardware requirement, prefer smaller n to precision */ - for (clock.n = limit->n.min; clock.n <= max_n; clock.n++) { - /* based on hardware requirement, prefere larger m1,m2 */ - for (clock.m1 = limit->m1.max; - clock.m1 >= limit->m1.min; clock.m1--) { - for (clock.m2 = limit->m2.max; - clock.m2 >= limit->m2.min; clock.m2--) { - for (clock.p1 = limit->p1.max; - clock.p1 >= limit->p1.min; clock.p1--) { - int this_err; - - i9xx_calc_dpll_params(refclk, &clock); - if (!intel_pll_is_valid(to_i915(dev), - limit, - &clock)) - continue; - - this_err = abs(clock.dot - target); - if (this_err < err_most) { - *best_clock = clock; - err_most = this_err; - max_n = clock.n; - found = true; - } - } - } - } - } - return found; -} - -/* - * Check if the calculated PLL configuration is more optimal compared to the - * best configuration and error found so far. Return the calculated error. - */ -static bool vlv_PLL_is_optimal(struct drm_device *dev, int target_freq, - const struct dpll *calculated_clock, - const struct dpll *best_clock, - unsigned int best_error_ppm, - unsigned int *error_ppm) -{ - /* - * For CHV ignore the error and consider only the P value. - * Prefer a bigger P value based on HW requirements. - */ - if (IS_CHERRYVIEW(to_i915(dev))) { - *error_ppm = 0; - - return calculated_clock->p > best_clock->p; - } - - if (drm_WARN_ON_ONCE(dev, !target_freq)) - return false; - - *error_ppm = div_u64(1000000ULL * - abs(target_freq - calculated_clock->dot), - target_freq); - /* - * Prefer a better P value over a better (smaller) error if the error - * is small. Ensure this preference for future configurations too by - * setting the error to 0. - */ - if (*error_ppm < 100 && calculated_clock->p > best_clock->p) { - *error_ppm = 0; - - return true; - } - - return *error_ppm + 10 < best_error_ppm; -} - -/* - * Returns a set of divisors for the desired target clock with the given - * refclk, or FALSE. The returned values represent the clock equation: - * reflck * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2. - */ -static bool -vlv_find_best_dpll(const struct intel_limit *limit, - struct intel_crtc_state *crtc_state, - int target, int refclk, struct dpll *match_clock, - struct dpll *best_clock) -{ - struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); - struct drm_device *dev = crtc->base.dev; - struct dpll clock; - unsigned int bestppm = 1000000; - /* min update 19.2 MHz */ - int max_n = min(limit->n.max, refclk / 19200); - bool found = false; - - target *= 5; /* fast clock */ - - memset(best_clock, 0, sizeof(*best_clock)); - - /* based on hardware requirement, prefer smaller n to precision */ - for (clock.n = limit->n.min; clock.n <= max_n; clock.n++) { - for (clock.p1 = limit->p1.max; clock.p1 >= limit->p1.min; clock.p1--) { - for (clock.p2 = limit->p2.p2_fast; clock.p2 >= limit->p2.p2_slow; - clock.p2 -= clock.p2 > 10 ? 
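Editor's aside: vlv_PLL_is_optimal() above scores candidates in parts per million, error_ppm = 10^6 * |target - dot| / target, and below 100 ppm prefers a larger P divider over a marginally smaller error. The metric in isolation (the driver WARNs on target == 0 before dividing):

#include <stdlib.h>

static unsigned int error_ppm(int target, int dot)
{
        /* e.g. target = 270000 kHz, dot = 270027 kHz -> 100 ppm */
        return (unsigned int)(1000000ULL * (unsigned int)abs(target - dot) /
                              (unsigned int)target);
}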
2 : 1) { - clock.p = clock.p1 * clock.p2; - /* based on hardware requirement, prefer bigger m1,m2 values */ - for (clock.m1 = limit->m1.min; clock.m1 <= limit->m1.max; clock.m1++) { - unsigned int ppm; - - clock.m2 = DIV_ROUND_CLOSEST(target * clock.p * clock.n, - refclk * clock.m1); - - vlv_calc_dpll_params(refclk, &clock); - - if (!intel_pll_is_valid(to_i915(dev), - limit, - &clock)) - continue; - - if (!vlv_PLL_is_optimal(dev, target, - &clock, - best_clock, - bestppm, &ppm)) - continue; - - *best_clock = clock; - bestppm = ppm; - found = true; - } - } - } - } - - return found; -} - -/* - * Returns a set of divisors for the desired target clock with the given - * refclk, or FALSE. The returned values represent the clock equation: - * reflck * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2. - */ -static bool -chv_find_best_dpll(const struct intel_limit *limit, - struct intel_crtc_state *crtc_state, - int target, int refclk, struct dpll *match_clock, - struct dpll *best_clock) -{ - struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); - struct drm_device *dev = crtc->base.dev; - unsigned int best_error_ppm; - struct dpll clock; - u64 m2; - int found = false; - - memset(best_clock, 0, sizeof(*best_clock)); - best_error_ppm = 1000000; - - /* - * Based on hardware doc, the n always set to 1, and m1 always - * set to 2. If requires to support 200Mhz refclk, we need to - * revisit this because n may not 1 anymore. - */ - clock.n = 1; - clock.m1 = 2; - target *= 5; /* fast clock */ - - for (clock.p1 = limit->p1.max; clock.p1 >= limit->p1.min; clock.p1--) { - for (clock.p2 = limit->p2.p2_fast; - clock.p2 >= limit->p2.p2_slow; - clock.p2 -= clock.p2 > 10 ? 2 : 1) { - unsigned int error_ppm; - - clock.p = clock.p1 * clock.p2; - - m2 = DIV_ROUND_CLOSEST_ULL(mul_u32_u32(target, clock.p * clock.n) << 22, - refclk * clock.m1); - - if (m2 > INT_MAX/clock.m1) - continue; - - clock.m2 = m2; - - chv_calc_dpll_params(refclk, &clock); - - if (!intel_pll_is_valid(to_i915(dev), limit, &clock)) - continue; - - if (!vlv_PLL_is_optimal(dev, target, &clock, best_clock, - best_error_ppm, &error_ppm)) - continue; - - *best_clock = clock; - best_error_ppm = error_ppm; - found = true; - } - } - - return found; -} - -bool bxt_find_best_dpll(struct intel_crtc_state *crtc_state, - struct dpll *best_clock) -{ - int refclk = 100000; - const struct intel_limit *limit = &intel_limits_bxt; - - return chv_find_best_dpll(limit, crtc_state, - crtc_state->port_clock, refclk, - NULL, best_clock); -} - static bool pipe_scanline_is_moving(struct drm_i915_private *dev_priv, enum pipe pipe) { @@ -1253,12 +495,6 @@ static void assert_planes_disabled(struct intel_crtc *crtc) assert_plane_disabled(plane); } -static void assert_vblank_disabled(struct drm_crtc *crtc) -{ - if (I915_STATE_WARN_ON(drm_crtc_vblank_get(crtc) == 0)) - drm_crtc_vblank_put(crtc); -} - void assert_pch_transcoder_disabled(struct drm_i915_private *dev_priv, enum pipe pipe) { @@ -1610,7 +846,7 @@ static void ilk_enable_pch_transcoder(const struct intel_crtc_state *crtc_state) val |= TRANS_CHICKEN2_TIMING_OVERRIDE; /* Configure frame start delay to match the CPU */ val &= ~TRANS_CHICKEN2_FRAME_START_DELAY_MASK; - val |= TRANS_CHICKEN2_FRAME_START_DELAY(0); + val |= TRANS_CHICKEN2_FRAME_START_DELAY(dev_priv->framestart_delay - 1); intel_de_write(dev_priv, reg, val); } @@ -1621,7 +857,7 @@ static void ilk_enable_pch_transcoder(const struct intel_crtc_state *crtc_state) if (HAS_PCH_IBX(dev_priv)) { /* Configure frame start delay to match the CPU */ val &= 
~TRANS_FRAME_START_DELAY_MASK; - val |= TRANS_FRAME_START_DELAY(0); + val |= TRANS_FRAME_START_DELAY(dev_priv->framestart_delay - 1); /* * Make the BPC in transcoder be consistent with @@ -1666,7 +902,7 @@ static void lpt_enable_pch_transcoder(struct drm_i915_private *dev_priv, val |= TRANS_CHICKEN2_TIMING_OVERRIDE; /* Configure frame start delay to match the CPU */ val &= ~TRANS_CHICKEN2_FRAME_START_DELAY_MASK; - val |= TRANS_CHICKEN2_FRAME_START_DELAY(0); + val |= TRANS_CHICKEN2_FRAME_START_DELAY(dev_priv->framestart_delay - 1); intel_de_write(dev_priv, TRANS_CHICKEN2(PIPE_A), val); val = TRANS_ENABLE; @@ -1743,55 +979,6 @@ enum pipe intel_crtc_pch_transcoder(struct intel_crtc *crtc) return crtc->pipe; } -static u32 intel_crtc_max_vblank_count(const struct intel_crtc_state *crtc_state) -{ - struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev); - struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); - u32 mode_flags = crtc->mode_flags; - - /* - * From Gen 11, In case of dsi cmd mode, frame counter wouldnt - * have updated at the beginning of TE, if we want to use - * the hw counter, then we would find it updated in only - * the next TE, hence switching to sw counter. - */ - if (mode_flags & (I915_MODE_FLAG_DSI_USE_TE0 | I915_MODE_FLAG_DSI_USE_TE1)) - return 0; - - /* - * On i965gm the hardware frame counter reads - * zero when the TV encoder is enabled :( - */ - if (IS_I965GM(dev_priv) && - (crtc_state->output_types & BIT(INTEL_OUTPUT_TVOUT))) - return 0; - - if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv)) - return 0xffffffff; /* full 32 bit counter */ - else if (INTEL_GEN(dev_priv) >= 3) - return 0xffffff; /* only 24 bits of frame count */ - else - return 0; /* Gen2 doesn't have a hardware frame counter */ -} - -void intel_crtc_vblank_on(const struct intel_crtc_state *crtc_state) -{ - struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); - - assert_vblank_disabled(&crtc->base); - drm_crtc_set_max_vblank_count(&crtc->base, - intel_crtc_max_vblank_count(crtc_state)); - drm_crtc_vblank_on(&crtc->base); -} - -void intel_crtc_vblank_off(const struct intel_crtc_state *crtc_state) -{ - struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); - - drm_crtc_vblank_off(&crtc->base); - assert_vblank_disabled(&crtc->base); -} - void intel_enable_pipe(const struct intel_crtc_state *new_crtc_state) { struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc); @@ -1906,8 +1093,8 @@ static bool is_ccs_plane(const struct drm_framebuffer *fb, int plane) static bool is_gen12_ccs_modifier(u64 modifier) { return modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS || + modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC || modifier == I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS; - } static bool is_gen12_ccs_plane(const struct drm_framebuffer *fb, int plane) @@ -1915,6 +1102,12 @@ static bool is_gen12_ccs_plane(const struct drm_framebuffer *fb, int plane) return is_gen12_ccs_modifier(fb->modifier) && is_ccs_plane(fb, plane); } +static bool is_gen12_ccs_cc_plane(const struct drm_framebuffer *fb, int plane) +{ + return fb->modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC && + plane == 2; +} + static bool is_aux_plane(const struct drm_framebuffer *fb, int plane) { if (is_ccs_modifier(fb->modifier)) @@ -1936,6 +1129,9 @@ static int ccs_to_main_plane(const struct drm_framebuffer *fb, int ccs_plane) drm_WARN_ON(fb->dev, !is_ccs_modifier(fb->modifier) || ccs_plane < fb->format->num_planes / 2); + if (is_gen12_ccs_cc_plane(fb, ccs_plane)) + return 0; + return ccs_plane - 
fb->format->num_planes / 2; } @@ -1954,7 +1150,7 @@ int intel_main_to_aux_plane(const struct drm_framebuffer *fb, int main_plane) bool intel_format_info_is_yuv_semiplanar(const struct drm_format_info *info, - uint64_t modifier) + u64 modifier) { return info->is_yuv && info->num_planes == (is_ccs_modifier(modifier) ? 4 : 2); @@ -1986,6 +1182,7 @@ intel_tile_width_bytes(const struct drm_framebuffer *fb, int color_plane) return 128; fallthrough; case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS: + case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC: case I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS: if (is_ccs_plane(fb, color_plane)) return 64; @@ -2120,6 +1317,11 @@ static unsigned int intel_linear_alignment(const struct drm_i915_private *dev_pr return 0; } +static bool has_async_flips(struct drm_i915_private *i915) +{ + return INTEL_GEN(i915) >= 5; +} + static unsigned int intel_surf_alignment(const struct drm_framebuffer *fb, int color_plane) { @@ -2134,7 +1336,7 @@ static unsigned int intel_surf_alignment(const struct drm_framebuffer *fb, case DRM_FORMAT_MOD_LINEAR: return intel_linear_alignment(dev_priv); case I915_FORMAT_MOD_X_TILED: - if (INTEL_GEN(dev_priv) >= 9) + if (has_async_flips(dev_priv)) return 256 * 1024; return 0; case I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS: @@ -2142,6 +1344,7 @@ static unsigned int intel_surf_alignment(const struct drm_framebuffer *fb, return intel_tile_row_size(fb, color_plane); fallthrough; case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS: + case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC: return 16 * 1024; case I915_FORMAT_MOD_Y_TILED_CCS: case I915_FORMAT_MOD_Yf_TILED_CCS: @@ -2247,7 +1450,7 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb, */ ret = i915_vma_pin_fence(vma); if (ret != 0 && INTEL_GEN(dev_priv) < 4) { - i915_gem_object_unpin_from_display_plane(vma); + i915_vma_unpin(vma); vma = ERR_PTR(ret); goto err; } @@ -2265,12 +1468,9 @@ err: void intel_unpin_fb_vma(struct i915_vma *vma, unsigned long flags) { - i915_gem_object_lock(vma->obj, NULL); if (flags & PLANE_HAS_FENCE) i915_vma_unpin_fence(vma); - i915_gem_object_unpin_from_display_plane(vma); - i915_gem_object_unlock(vma->obj); - + i915_vma_unpin(vma); i915_vma_put(vma); } @@ -2546,6 +1746,7 @@ static unsigned int intel_fb_modifier_to_tiling(u64 fb_modifier) case I915_FORMAT_MOD_Y_TILED: case I915_FORMAT_MOD_Y_TILED_CCS: case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS: + case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC: case I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS: return I915_TILING_Y; default: @@ -2624,6 +1825,25 @@ static const struct drm_format_info gen12_ccs_formats[] = { .hsub = 2, .vsub = 2, .is_yuv = true }, }; +/* + * Same as gen12_ccs_formats[] above, but with additional surface used + * to pass Clear Color information in plane 2 with 64 bits of data. 
+ */ +static const struct drm_format_info gen12_ccs_cc_formats[] = { + { .format = DRM_FORMAT_XRGB8888, .depth = 24, .num_planes = 3, + .char_per_block = { 4, 1, 0 }, .block_w = { 1, 2, 2 }, .block_h = { 1, 1, 1 }, + .hsub = 1, .vsub = 1, }, + { .format = DRM_FORMAT_XBGR8888, .depth = 24, .num_planes = 3, + .char_per_block = { 4, 1, 0 }, .block_w = { 1, 2, 2 }, .block_h = { 1, 1, 1 }, + .hsub = 1, .vsub = 1, }, + { .format = DRM_FORMAT_ARGB8888, .depth = 32, .num_planes = 3, + .char_per_block = { 4, 1, 0 }, .block_w = { 1, 2, 2 }, .block_h = { 1, 1, 1 }, + .hsub = 1, .vsub = 1, .has_alpha = true }, + { .format = DRM_FORMAT_ABGR8888, .depth = 32, .num_planes = 3, + .char_per_block = { 4, 1, 0 }, .block_w = { 1, 2, 2 }, .block_h = { 1, 1, 1 }, + .hsub = 1, .vsub = 1, .has_alpha = true }, +}; + static const struct drm_format_info * lookup_format_info(const struct drm_format_info formats[], int num_formats, u32 format) @@ -2652,6 +1872,10 @@ intel_get_format_info(const struct drm_mode_fb_cmd2 *cmd) return lookup_format_info(gen12_ccs_formats, ARRAY_SIZE(gen12_ccs_formats), cmd->pixel_format); + case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC: + return lookup_format_info(gen12_ccs_cc_formats, + ARRAY_SIZE(gen12_ccs_cc_formats), + cmd->pixel_format); default: return NULL; } @@ -2660,6 +1884,7 @@ intel_get_format_info(const struct drm_mode_fb_cmd2 *cmd) bool is_ccs_modifier(u64 modifier) { return modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS || + modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC || modifier == I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS || modifier == I915_FORMAT_MOD_Y_TILED_CCS || modifier == I915_FORMAT_MOD_Yf_TILED_CCS; @@ -2878,7 +2103,7 @@ intel_fb_check_ccs_xy(struct drm_framebuffer *fb, int ccs_plane, int x, int y) int ccs_x, ccs_y; int main_x, main_y; - if (!is_ccs_plane(fb, ccs_plane)) + if (!is_ccs_plane(fb, ccs_plane) || is_gen12_ccs_cc_plane(fb, ccs_plane)) return 0; intel_tile_dims(fb, ccs_plane, &tile_width, &tile_height); @@ -3005,6 +2230,18 @@ intel_fill_fb_info(struct drm_i915_private *dev_priv, int x, y; int ret; + /* + * Plane 2 of Render Compression with Clear Color fb modifier + * is consumed by the driver and not passed to DE. Skip the + * arithmetic related to alignment and offset calculation. 
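Editor's aside: since plane 2 of the new RC_CCS_CC layout only ferries the 64-bit clear color to the driver and never reaches the display engine, the fb->offsets[] check in the hunk that follows reduces to page alignment. An equivalent predicate, with PAGE_SIZE assumed to be 4 KiB for this sketch:

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u         /* assumption for this sketch */

static bool cc_plane_offset_ok(uint32_t offset)
{
        /* IS_ALIGNED(offset, PAGE_SIZE) without the kernel macro */
        return (offset & (PAGE_SIZE - 1)) == 0;
}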
+ */ + if (is_gen12_ccs_cc_plane(fb, i)) { + if (IS_ALIGNED(fb->offsets[i], PAGE_SIZE)) + continue; + else + return -EINVAL; + } + cpp = fb->format->cpp[i]; intel_fb_plane_dims(&width, &height, fb, i); @@ -3854,6 +3091,8 @@ static int skl_check_main_surface(struct intel_plane_state *plane_state) } } + drm_WARN_ON(&dev_priv->drm, x > 8191 || y > 8191); + plane_state->color_plane[0].offset = offset; plane_state->color_plane[0].x = x; plane_state->color_plane[0].y = y; @@ -3926,6 +3165,8 @@ static int skl_check_nv12_aux_surface(struct intel_plane_state *plane_state) } } + drm_WARN_ON(&i915->drm, x > 8191 || y > 8191); + plane_state->color_plane[uv_plane].offset = offset; plane_state->color_plane[uv_plane].x = x; plane_state->color_plane[uv_plane].y = y; @@ -3946,7 +3187,8 @@ static int skl_check_ccs_aux_surface(struct intel_plane_state *plane_state) int hsub, vsub; int x, y; - if (!is_ccs_plane(fb, ccs_plane)) + if (!is_ccs_plane(fb, ccs_plane) || + is_gen12_ccs_cc_plane(fb, ccs_plane)) continue; intel_fb_plane_get_subsampling(&main_hsub, &main_vsub, fb, @@ -4186,6 +3428,7 @@ static u32 skl_plane_ctl_tiling(u64 fb_modifier) case I915_FORMAT_MOD_Y_TILED: return PLANE_CTL_TILED_Y; case I915_FORMAT_MOD_Y_TILED_CCS: + case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC: return PLANE_CTL_TILED_Y | PLANE_CTL_RENDER_DECOMPRESSION_ENABLE; case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS: return PLANE_CTL_TILED_Y | @@ -4246,9 +3489,6 @@ u32 skl_plane_ctl_crtc(const struct intel_crtc_state *crtc_state) struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev); u32 plane_ctl = 0; - if (crtc_state->uapi.async_flip) - plane_ctl |= PLANE_CTL_ASYNC_FLIP; - if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) return plane_ctl; @@ -4346,6 +3586,8 @@ u32 glk_plane_color_ctl(const struct intel_crtc_state *crtc_state, plane_color_ctl |= PLANE_COLOR_YUV_RANGE_CORRECTION_DISABLE; } else if (fb->format->is_yuv) { plane_color_ctl |= PLANE_COLOR_INPUT_CSC_ENABLE; + if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE) + plane_color_ctl |= PLANE_COLOR_YUV_RANGE_CORRECTION_DISABLE; } return plane_color_ctl; @@ -4535,532 +3777,6 @@ static void icl_set_pipe_chicken(struct intel_crtc *crtc) intel_de_write(dev_priv, PIPE_CHICKEN(pipe), tmp); } -static void intel_fdi_normal_train(struct intel_crtc *crtc) -{ - struct drm_device *dev = crtc->base.dev; - struct drm_i915_private *dev_priv = to_i915(dev); - enum pipe pipe = crtc->pipe; - i915_reg_t reg; - u32 temp; - - /* enable normal train */ - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - if (IS_IVYBRIDGE(dev_priv)) { - temp &= ~FDI_LINK_TRAIN_NONE_IVB; - temp |= FDI_LINK_TRAIN_NONE_IVB | FDI_TX_ENHANCE_FRAME_ENABLE; - } else { - temp &= ~FDI_LINK_TRAIN_NONE; - temp |= FDI_LINK_TRAIN_NONE | FDI_TX_ENHANCE_FRAME_ENABLE; - } - intel_de_write(dev_priv, reg, temp); - - reg = FDI_RX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - if (HAS_PCH_CPT(dev_priv)) { - temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT; - temp |= FDI_LINK_TRAIN_NORMAL_CPT; - } else { - temp &= ~FDI_LINK_TRAIN_NONE; - temp |= FDI_LINK_TRAIN_NONE; - } - intel_de_write(dev_priv, reg, temp | FDI_RX_ENHANCE_FRAME_ENABLE); - - /* wait one idle pattern time */ - intel_de_posting_read(dev_priv, reg); - udelay(1000); - - /* IVB wants error correction enabled */ - if (IS_IVYBRIDGE(dev_priv)) - intel_de_write(dev_priv, reg, - intel_de_read(dev_priv, reg) | FDI_FS_ERRC_ENABLE | FDI_FE_ERRC_ENABLE); -} - -/* The FDI link training functions for ILK/Ibexpeak. 
*/ -static void ilk_fdi_link_train(struct intel_crtc *crtc, - const struct intel_crtc_state *crtc_state) -{ - struct drm_device *dev = crtc->base.dev; - struct drm_i915_private *dev_priv = to_i915(dev); - enum pipe pipe = crtc->pipe; - i915_reg_t reg; - u32 temp, tries; - - /* FDI needs bits from pipe first */ - assert_pipe_enabled(dev_priv, crtc_state->cpu_transcoder); - - /* Train 1: umask FDI RX Interrupt symbol_lock and bit_lock bit - for train result */ - reg = FDI_RX_IMR(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_RX_SYMBOL_LOCK; - temp &= ~FDI_RX_BIT_LOCK; - intel_de_write(dev_priv, reg, temp); - intel_de_read(dev_priv, reg); - udelay(150); - - /* enable CPU FDI TX and PCH FDI RX */ - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_DP_PORT_WIDTH_MASK; - temp |= FDI_DP_PORT_WIDTH(crtc_state->fdi_lanes); - temp &= ~FDI_LINK_TRAIN_NONE; - temp |= FDI_LINK_TRAIN_PATTERN_1; - intel_de_write(dev_priv, reg, temp | FDI_TX_ENABLE); - - reg = FDI_RX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_LINK_TRAIN_NONE; - temp |= FDI_LINK_TRAIN_PATTERN_1; - intel_de_write(dev_priv, reg, temp | FDI_RX_ENABLE); - - intel_de_posting_read(dev_priv, reg); - udelay(150); - - /* Ironlake workaround, enable clock pointer after FDI enable*/ - intel_de_write(dev_priv, FDI_RX_CHICKEN(pipe), - FDI_RX_PHASE_SYNC_POINTER_OVR); - intel_de_write(dev_priv, FDI_RX_CHICKEN(pipe), - FDI_RX_PHASE_SYNC_POINTER_OVR | FDI_RX_PHASE_SYNC_POINTER_EN); - - reg = FDI_RX_IIR(pipe); - for (tries = 0; tries < 5; tries++) { - temp = intel_de_read(dev_priv, reg); - drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR 0x%x\n", temp); - - if ((temp & FDI_RX_BIT_LOCK)) { - drm_dbg_kms(&dev_priv->drm, "FDI train 1 done.\n"); - intel_de_write(dev_priv, reg, temp | FDI_RX_BIT_LOCK); - break; - } - } - if (tries == 5) - drm_err(&dev_priv->drm, "FDI train 1 fail!\n"); - - /* Train 2 */ - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_LINK_TRAIN_NONE; - temp |= FDI_LINK_TRAIN_PATTERN_2; - intel_de_write(dev_priv, reg, temp); - - reg = FDI_RX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_LINK_TRAIN_NONE; - temp |= FDI_LINK_TRAIN_PATTERN_2; - intel_de_write(dev_priv, reg, temp); - - intel_de_posting_read(dev_priv, reg); - udelay(150); - - reg = FDI_RX_IIR(pipe); - for (tries = 0; tries < 5; tries++) { - temp = intel_de_read(dev_priv, reg); - drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR 0x%x\n", temp); - - if (temp & FDI_RX_SYMBOL_LOCK) { - intel_de_write(dev_priv, reg, - temp | FDI_RX_SYMBOL_LOCK); - drm_dbg_kms(&dev_priv->drm, "FDI train 2 done.\n"); - break; - } - } - if (tries == 5) - drm_err(&dev_priv->drm, "FDI train 2 fail!\n"); - - drm_dbg_kms(&dev_priv->drm, "FDI train done\n"); - -} - -static const int snb_b_fdi_train_param[] = { - FDI_LINK_TRAIN_400MV_0DB_SNB_B, - FDI_LINK_TRAIN_400MV_6DB_SNB_B, - FDI_LINK_TRAIN_600MV_3_5DB_SNB_B, - FDI_LINK_TRAIN_800MV_0DB_SNB_B, -}; - -/* The FDI link training functions for SNB/Cougarpoint. 
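Editor's aside: each training stage in ilk_fdi_link_train() above has the same shape: program the pattern, then poll an FDI_RX_IIR status bit a bounded number of times before logging failure. The poll skeleton, with read_iir() as a stub for the register read:

#include <stdbool.h>
#include <stdint.h>

#define BIT_LOCK (1u << 1)      /* illustrative bit, not the real layout */

static uint32_t read_iir(void)
{
        return BIT_LOCK;        /* stub: pretend the link locked */
}

static bool wait_for_bit_lock(void)
{
        for (int tries = 0; tries < 5; tries++) {
                if (read_iir() & BIT_LOCK)
                        return true;    /* "FDI train 1 done." */
        }
        return false;                   /* caller logs "FDI train 1 fail!" */
}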
*/ -static void gen6_fdi_link_train(struct intel_crtc *crtc, - const struct intel_crtc_state *crtc_state) -{ - struct drm_device *dev = crtc->base.dev; - struct drm_i915_private *dev_priv = to_i915(dev); - enum pipe pipe = crtc->pipe; - i915_reg_t reg; - u32 temp, i, retry; - - /* Train 1: umask FDI RX Interrupt symbol_lock and bit_lock bit - for train result */ - reg = FDI_RX_IMR(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_RX_SYMBOL_LOCK; - temp &= ~FDI_RX_BIT_LOCK; - intel_de_write(dev_priv, reg, temp); - - intel_de_posting_read(dev_priv, reg); - udelay(150); - - /* enable CPU FDI TX and PCH FDI RX */ - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_DP_PORT_WIDTH_MASK; - temp |= FDI_DP_PORT_WIDTH(crtc_state->fdi_lanes); - temp &= ~FDI_LINK_TRAIN_NONE; - temp |= FDI_LINK_TRAIN_PATTERN_1; - temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK; - /* SNB-B */ - temp |= FDI_LINK_TRAIN_400MV_0DB_SNB_B; - intel_de_write(dev_priv, reg, temp | FDI_TX_ENABLE); - - intel_de_write(dev_priv, FDI_RX_MISC(pipe), - FDI_RX_TP1_TO_TP2_48 | FDI_RX_FDI_DELAY_90); - - reg = FDI_RX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - if (HAS_PCH_CPT(dev_priv)) { - temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT; - temp |= FDI_LINK_TRAIN_PATTERN_1_CPT; - } else { - temp &= ~FDI_LINK_TRAIN_NONE; - temp |= FDI_LINK_TRAIN_PATTERN_1; - } - intel_de_write(dev_priv, reg, temp | FDI_RX_ENABLE); - - intel_de_posting_read(dev_priv, reg); - udelay(150); - - for (i = 0; i < 4; i++) { - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK; - temp |= snb_b_fdi_train_param[i]; - intel_de_write(dev_priv, reg, temp); - - intel_de_posting_read(dev_priv, reg); - udelay(500); - - for (retry = 0; retry < 5; retry++) { - reg = FDI_RX_IIR(pipe); - temp = intel_de_read(dev_priv, reg); - drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR 0x%x\n", temp); - if (temp & FDI_RX_BIT_LOCK) { - intel_de_write(dev_priv, reg, - temp | FDI_RX_BIT_LOCK); - drm_dbg_kms(&dev_priv->drm, - "FDI train 1 done.\n"); - break; - } - udelay(50); - } - if (retry < 5) - break; - } - if (i == 4) - drm_err(&dev_priv->drm, "FDI train 1 fail!\n"); - - /* Train 2 */ - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_LINK_TRAIN_NONE; - temp |= FDI_LINK_TRAIN_PATTERN_2; - if (IS_GEN(dev_priv, 6)) { - temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK; - /* SNB-B */ - temp |= FDI_LINK_TRAIN_400MV_0DB_SNB_B; - } - intel_de_write(dev_priv, reg, temp); - - reg = FDI_RX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - if (HAS_PCH_CPT(dev_priv)) { - temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT; - temp |= FDI_LINK_TRAIN_PATTERN_2_CPT; - } else { - temp &= ~FDI_LINK_TRAIN_NONE; - temp |= FDI_LINK_TRAIN_PATTERN_2; - } - intel_de_write(dev_priv, reg, temp); - - intel_de_posting_read(dev_priv, reg); - udelay(150); - - for (i = 0; i < 4; i++) { - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK; - temp |= snb_b_fdi_train_param[i]; - intel_de_write(dev_priv, reg, temp); - - intel_de_posting_read(dev_priv, reg); - udelay(500); - - for (retry = 0; retry < 5; retry++) { - reg = FDI_RX_IIR(pipe); - temp = intel_de_read(dev_priv, reg); - drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR 0x%x\n", temp); - if (temp & FDI_RX_SYMBOL_LOCK) { - intel_de_write(dev_priv, reg, - temp | FDI_RX_SYMBOL_LOCK); - drm_dbg_kms(&dev_priv->drm, - "FDI train 2 done.\n"); - break; - } - udelay(50); - } - if (retry < 5) - break; - } - if (i == 4) - drm_err(&dev_priv->drm, "FDI train 2 
fail!\n"); - - drm_dbg_kms(&dev_priv->drm, "FDI train done.\n"); -} - -/* Manual link training for Ivy Bridge A0 parts */ -static void ivb_manual_fdi_link_train(struct intel_crtc *crtc, - const struct intel_crtc_state *crtc_state) -{ - struct drm_device *dev = crtc->base.dev; - struct drm_i915_private *dev_priv = to_i915(dev); - enum pipe pipe = crtc->pipe; - i915_reg_t reg; - u32 temp, i, j; - - /* Train 1: umask FDI RX Interrupt symbol_lock and bit_lock bit - for train result */ - reg = FDI_RX_IMR(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_RX_SYMBOL_LOCK; - temp &= ~FDI_RX_BIT_LOCK; - intel_de_write(dev_priv, reg, temp); - - intel_de_posting_read(dev_priv, reg); - udelay(150); - - drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR before link train 0x%x\n", - intel_de_read(dev_priv, FDI_RX_IIR(pipe))); - - /* Try each vswing and preemphasis setting twice before moving on */ - for (j = 0; j < ARRAY_SIZE(snb_b_fdi_train_param) * 2; j++) { - /* disable first in case we need to retry */ - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~(FDI_LINK_TRAIN_AUTO | FDI_LINK_TRAIN_NONE_IVB); - temp &= ~FDI_TX_ENABLE; - intel_de_write(dev_priv, reg, temp); - - reg = FDI_RX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_LINK_TRAIN_AUTO; - temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT; - temp &= ~FDI_RX_ENABLE; - intel_de_write(dev_priv, reg, temp); - - /* enable CPU FDI TX and PCH FDI RX */ - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_DP_PORT_WIDTH_MASK; - temp |= FDI_DP_PORT_WIDTH(crtc_state->fdi_lanes); - temp |= FDI_LINK_TRAIN_PATTERN_1_IVB; - temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK; - temp |= snb_b_fdi_train_param[j/2]; - temp |= FDI_COMPOSITE_SYNC; - intel_de_write(dev_priv, reg, temp | FDI_TX_ENABLE); - - intel_de_write(dev_priv, FDI_RX_MISC(pipe), - FDI_RX_TP1_TO_TP2_48 | FDI_RX_FDI_DELAY_90); - - reg = FDI_RX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp |= FDI_LINK_TRAIN_PATTERN_1_CPT; - temp |= FDI_COMPOSITE_SYNC; - intel_de_write(dev_priv, reg, temp | FDI_RX_ENABLE); - - intel_de_posting_read(dev_priv, reg); - udelay(1); /* should be 0.5us */ - - for (i = 0; i < 4; i++) { - reg = FDI_RX_IIR(pipe); - temp = intel_de_read(dev_priv, reg); - drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR 0x%x\n", temp); - - if (temp & FDI_RX_BIT_LOCK || - (intel_de_read(dev_priv, reg) & FDI_RX_BIT_LOCK)) { - intel_de_write(dev_priv, reg, - temp | FDI_RX_BIT_LOCK); - drm_dbg_kms(&dev_priv->drm, - "FDI train 1 done, level %i.\n", - i); - break; - } - udelay(1); /* should be 0.5us */ - } - if (i == 4) { - drm_dbg_kms(&dev_priv->drm, - "FDI train 1 fail on vswing %d\n", j / 2); - continue; - } - - /* Train 2 */ - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_LINK_TRAIN_NONE_IVB; - temp |= FDI_LINK_TRAIN_PATTERN_2_IVB; - intel_de_write(dev_priv, reg, temp); - - reg = FDI_RX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT; - temp |= FDI_LINK_TRAIN_PATTERN_2_CPT; - intel_de_write(dev_priv, reg, temp); - - intel_de_posting_read(dev_priv, reg); - udelay(2); /* should be 1.5us */ - - for (i = 0; i < 4; i++) { - reg = FDI_RX_IIR(pipe); - temp = intel_de_read(dev_priv, reg); - drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR 0x%x\n", temp); - - if (temp & FDI_RX_SYMBOL_LOCK || - (intel_de_read(dev_priv, reg) & FDI_RX_SYMBOL_LOCK)) { - intel_de_write(dev_priv, reg, - temp | FDI_RX_SYMBOL_LOCK); - drm_dbg_kms(&dev_priv->drm, - "FDI train 2 done, level %i.\n", - i); - goto 
train_done; - } - udelay(2); /* should be 1.5us */ - } - if (i == 4) - drm_dbg_kms(&dev_priv->drm, - "FDI train 2 fail on vswing %d\n", j / 2); - } - -train_done: - drm_dbg_kms(&dev_priv->drm, "FDI train done.\n"); -} - -static void ilk_fdi_pll_enable(const struct intel_crtc_state *crtc_state) -{ - struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->uapi.crtc); - struct drm_i915_private *dev_priv = to_i915(intel_crtc->base.dev); - enum pipe pipe = intel_crtc->pipe; - i915_reg_t reg; - u32 temp; - - /* enable PCH FDI RX PLL, wait warmup plus DMI latency */ - reg = FDI_RX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~(FDI_DP_PORT_WIDTH_MASK | (0x7 << 16)); - temp |= FDI_DP_PORT_WIDTH(crtc_state->fdi_lanes); - temp |= (intel_de_read(dev_priv, PIPECONF(pipe)) & PIPECONF_BPC_MASK) << 11; - intel_de_write(dev_priv, reg, temp | FDI_RX_PLL_ENABLE); - - intel_de_posting_read(dev_priv, reg); - udelay(200); - - /* Switch from Rawclk to PCDclk */ - temp = intel_de_read(dev_priv, reg); - intel_de_write(dev_priv, reg, temp | FDI_PCDCLK); - - intel_de_posting_read(dev_priv, reg); - udelay(200); - - /* Enable CPU FDI TX PLL, always on for Ironlake */ - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - if ((temp & FDI_TX_PLL_ENABLE) == 0) { - intel_de_write(dev_priv, reg, temp | FDI_TX_PLL_ENABLE); - - intel_de_posting_read(dev_priv, reg); - udelay(100); - } -} - -static void ilk_fdi_pll_disable(struct intel_crtc *intel_crtc) -{ - struct drm_device *dev = intel_crtc->base.dev; - struct drm_i915_private *dev_priv = to_i915(dev); - enum pipe pipe = intel_crtc->pipe; - i915_reg_t reg; - u32 temp; - - /* Switch from PCDclk to Rawclk */ - reg = FDI_RX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - intel_de_write(dev_priv, reg, temp & ~FDI_PCDCLK); - - /* Disable CPU FDI TX PLL */ - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - intel_de_write(dev_priv, reg, temp & ~FDI_TX_PLL_ENABLE); - - intel_de_posting_read(dev_priv, reg); - udelay(100); - - reg = FDI_RX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - intel_de_write(dev_priv, reg, temp & ~FDI_RX_PLL_ENABLE); - - /* Wait for the clocks to turn off. 
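Editor's aside: note how these PLL sequences pair the final register write with a posting read before each delay: the read forces the write out to the hardware so the udelay() actually times the hardware state change. A sketch with stubbed accessors (mmio_write32/mmio_read32/delay_us are placeholders, not kernel APIs):

#include <stdint.h>

static volatile uint32_t fake_reg;

static void mmio_write32(uint32_t v) { fake_reg = v; }
static uint32_t mmio_read32(void) { return fake_reg; }
static void delay_us(unsigned int us) { (void)us; }     /* stub */

static void pll_disable(uint32_t enable_bit)
{
        mmio_write32(mmio_read32() & ~enable_bit);
        (void)mmio_read32();    /* posting read: flush before the wait */
        delay_us(100);
}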
*/ - intel_de_posting_read(dev_priv, reg); - udelay(100); -} - -static void ilk_fdi_disable(struct intel_crtc *crtc) -{ - struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); - enum pipe pipe = crtc->pipe; - i915_reg_t reg; - u32 temp; - - /* disable CPU FDI tx and PCH FDI rx */ - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - intel_de_write(dev_priv, reg, temp & ~FDI_TX_ENABLE); - intel_de_posting_read(dev_priv, reg); - - reg = FDI_RX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~(0x7 << 16); - temp |= (intel_de_read(dev_priv, PIPECONF(pipe)) & PIPECONF_BPC_MASK) << 11; - intel_de_write(dev_priv, reg, temp & ~FDI_RX_ENABLE); - - intel_de_posting_read(dev_priv, reg); - udelay(100); - - /* Ironlake workaround, disable clock pointer after downing FDI */ - if (HAS_PCH_IBX(dev_priv)) - intel_de_write(dev_priv, FDI_RX_CHICKEN(pipe), - FDI_RX_PHASE_SYNC_POINTER_OVR); - - /* still set train pattern 1 */ - reg = FDI_TX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - temp &= ~FDI_LINK_TRAIN_NONE; - temp |= FDI_LINK_TRAIN_PATTERN_1; - intel_de_write(dev_priv, reg, temp); - - reg = FDI_RX_CTL(pipe); - temp = intel_de_read(dev_priv, reg); - if (HAS_PCH_CPT(dev_priv)) { - temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT; - temp |= FDI_LINK_TRAIN_PATTERN_1_CPT; - } else { - temp &= ~FDI_LINK_TRAIN_NONE; - temp |= FDI_LINK_TRAIN_PATTERN_1; - } - /* BPC in FDI rx is consistent with that in PIPECONF */ - temp &= ~(0x07 << 16); - temp |= (intel_de_read(dev_priv, PIPECONF(pipe)) & PIPECONF_BPC_MASK) << 11; - intel_de_write(dev_priv, reg, temp); - - intel_de_posting_read(dev_priv, reg); - udelay(100); -} - bool intel_has_pending_fb_unpin(struct drm_i915_private *dev_priv) { struct drm_crtc *crtc; @@ -5291,7 +4007,7 @@ static void ivb_update_fdi_bc_bifurcation(const struct intel_crtc_state *crtc_st * Finds the encoder associated with the given CRTC. This can only be * used when we know that the CRTC isn't feeding multiple encoders! 
*/ -static struct intel_encoder * +struct intel_encoder * intel_get_crtc_new_encoder(const struct intel_atomic_state *state, const struct intel_crtc_state *crtc_state) { @@ -5810,7 +4526,7 @@ static void cnl_program_nearest_filter_coefs(struct drm_i915_private *dev_priv, intel_de_write_fw(dev_priv, CNL_PS_COEF_INDEX_SET(pipe, id, set), 0); } -inline u32 skl_scaler_get_filter_select(enum drm_scaling_filter filter, int set) +u32 skl_scaler_get_filter_select(enum drm_scaling_filter filter, int set) { if (filter == DRM_SCALING_FILTER_NEAREST_NEIGHBOR) { return (PS_FILTER_PROGRAMMED | @@ -6129,41 +4845,72 @@ static void intel_post_plane_update(struct intel_atomic_state *state, icl_wa_scalerclkgating(dev_priv, pipe, false); } -static void skl_disable_async_flip_wa(struct intel_atomic_state *state, - struct intel_crtc *crtc, - const struct intel_crtc_state *new_crtc_state) +static void intel_crtc_enable_flip_done(struct intel_atomic_state *state, + struct intel_crtc *crtc) { - struct drm_i915_private *dev_priv = to_i915(state->base.dev); + const struct intel_crtc_state *crtc_state = + intel_atomic_get_new_crtc_state(state, crtc); + u8 update_planes = crtc_state->update_planes; + const struct intel_plane_state *plane_state; struct intel_plane *plane; - struct intel_plane_state *new_plane_state; int i; - for_each_new_intel_plane_in_state(state, plane, new_plane_state, i) { - u32 update_mask = new_crtc_state->update_planes; - u32 plane_ctl, surf_addr; - enum plane_id plane_id; - unsigned long irqflags; - enum pipe pipe; - - if (crtc->pipe != plane->pipe || - !(update_mask & BIT(plane->id))) - continue; + for_each_new_intel_plane_in_state(state, plane, plane_state, i) { + if (plane->enable_flip_done && + plane->pipe == crtc->pipe && + update_planes & BIT(plane->id)) + plane->enable_flip_done(plane); + } +} - plane_id = plane->id; - pipe = plane->pipe; +static void intel_crtc_disable_flip_done(struct intel_atomic_state *state, + struct intel_crtc *crtc) +{ + const struct intel_crtc_state *crtc_state = + intel_atomic_get_new_crtc_state(state, crtc); + u8 update_planes = crtc_state->update_planes; + const struct intel_plane_state *plane_state; + struct intel_plane *plane; + int i; - spin_lock_irqsave(&dev_priv->uncore.lock, irqflags); - plane_ctl = intel_de_read_fw(dev_priv, PLANE_CTL(pipe, plane_id)); - surf_addr = intel_de_read_fw(dev_priv, PLANE_SURF(pipe, plane_id)); + for_each_new_intel_plane_in_state(state, plane, plane_state, i) { + if (plane->disable_flip_done && + plane->pipe == crtc->pipe && + update_planes & BIT(plane->id)) + plane->disable_flip_done(plane); + } +} - plane_ctl &= ~PLANE_CTL_ASYNC_FLIP; +static void intel_crtc_async_flip_disable_wa(struct intel_atomic_state *state, + struct intel_crtc *crtc) +{ + struct drm_i915_private *i915 = to_i915(state->base.dev); + const struct intel_crtc_state *old_crtc_state = + intel_atomic_get_old_crtc_state(state, crtc); + const struct intel_crtc_state *new_crtc_state = + intel_atomic_get_new_crtc_state(state, crtc); + u8 update_planes = new_crtc_state->update_planes; + const struct intel_plane_state *old_plane_state; + struct intel_plane *plane; + bool need_vbl_wait = false; + int i; - intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), plane_ctl); - intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id), surf_addr); - spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags); + for_each_old_intel_plane_in_state(state, plane, old_plane_state, i) { + if (plane->need_async_flip_disable_wa && + plane->pipe == crtc->pipe && + update_planes & 
BIT(plane->id)) { + /* + * Apart from the async flip bit we want to + * preserve the old state for the plane. + */ + plane->async_flip(plane, old_crtc_state, + old_plane_state, false); + need_vbl_wait = true; + } } - intel_wait_for_vblank(dev_priv, crtc->pipe); + if (need_vbl_wait) + intel_wait_for_vblank(i915, crtc->pipe); } static void intel_pre_plane_update(struct intel_atomic_state *state, @@ -6256,10 +5003,8 @@ static void intel_pre_plane_update(struct intel_atomic_state *state, * WA for platforms where async address update enable bit * is double buffered and only latched at start of vblank. */ - if (old_crtc_state->uapi.async_flip && - !new_crtc_state->uapi.async_flip && - IS_GEN_RANGE(dev_priv, 9, 10)) - skl_disable_async_flip_wa(state, crtc, new_crtc_state); + if (old_crtc_state->uapi.async_flip && !new_crtc_state->uapi.async_flip) + intel_crtc_async_flip_disable_wa(state, crtc); } static void intel_crtc_disable_planes(struct intel_atomic_state *state, @@ -6679,7 +5424,7 @@ static void hsw_set_frame_start_delay(const struct intel_crtc_state *crtc_state) val = intel_de_read(dev_priv, reg); val &= ~HSW_FRAME_START_DELAY_MASK; - val |= HSW_FRAME_START_DELAY(0); + val |= HSW_FRAME_START_DELAY(dev_priv->framestart_delay - 1); intel_de_write(dev_priv, reg, val); } @@ -7467,143 +6212,6 @@ static void intel_connector_verify_state(struct intel_crtc_state *crtc_state, } } -static int pipe_required_fdi_lanes(struct intel_crtc_state *crtc_state) -{ - if (crtc_state->hw.enable && crtc_state->has_pch_encoder) - return crtc_state->fdi_lanes; - - return 0; -} - -static int ilk_check_fdi_lanes(struct drm_device *dev, enum pipe pipe, - struct intel_crtc_state *pipe_config) -{ - struct drm_i915_private *dev_priv = to_i915(dev); - struct drm_atomic_state *state = pipe_config->uapi.state; - struct intel_crtc *other_crtc; - struct intel_crtc_state *other_crtc_state; - - drm_dbg_kms(&dev_priv->drm, - "checking fdi config on pipe %c, lanes %i\n", - pipe_name(pipe), pipe_config->fdi_lanes); - if (pipe_config->fdi_lanes > 4) { - drm_dbg_kms(&dev_priv->drm, - "invalid fdi lane config on pipe %c: %i lanes\n", - pipe_name(pipe), pipe_config->fdi_lanes); - return -EINVAL; - } - - if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) { - if (pipe_config->fdi_lanes > 2) { - drm_dbg_kms(&dev_priv->drm, - "only 2 lanes on haswell, required: %i lanes\n", - pipe_config->fdi_lanes); - return -EINVAL; - } else { - return 0; - } - } - - if (INTEL_NUM_PIPES(dev_priv) == 2) - return 0; - - /* Ivybridge 3 pipe is really complicated */ - switch (pipe) { - case PIPE_A: - return 0; - case PIPE_B: - if (pipe_config->fdi_lanes <= 2) - return 0; - - other_crtc = intel_get_crtc_for_pipe(dev_priv, PIPE_C); - other_crtc_state = - intel_atomic_get_crtc_state(state, other_crtc); - if (IS_ERR(other_crtc_state)) - return PTR_ERR(other_crtc_state); - - if (pipe_required_fdi_lanes(other_crtc_state) > 0) { - drm_dbg_kms(&dev_priv->drm, - "invalid shared fdi lane config on pipe %c: %i lanes\n", - pipe_name(pipe), pipe_config->fdi_lanes); - return -EINVAL; - } - return 0; - case PIPE_C: - if (pipe_config->fdi_lanes > 2) { - drm_dbg_kms(&dev_priv->drm, - "only 2 lanes on pipe %c: required %i lanes\n", - pipe_name(pipe), pipe_config->fdi_lanes); - return -EINVAL; - } - - other_crtc = intel_get_crtc_for_pipe(dev_priv, PIPE_B); - other_crtc_state = - intel_atomic_get_crtc_state(state, other_crtc); - if (IS_ERR(other_crtc_state)) - return PTR_ERR(other_crtc_state); - - if (pipe_required_fdi_lanes(other_crtc_state) > 2) { - 
drm_dbg_kms(&dev_priv->drm, - "fdi link B uses too many lanes to enable link C\n"); - return -EINVAL; - } - return 0; - default: - BUG(); - } -} - -#define RETRY 1 -static int ilk_fdi_compute_config(struct intel_crtc *intel_crtc, - struct intel_crtc_state *pipe_config) -{ - struct drm_device *dev = intel_crtc->base.dev; - struct drm_i915_private *i915 = to_i915(dev); - const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode; - int lane, link_bw, fdi_dotclock, ret; - bool needs_recompute = false; - -retry: - /* FDI is a binary signal running at ~2.7GHz, encoding - * each output octet as 10 bits. The actual frequency - * is stored as a divider into a 100MHz clock, and the - * mode pixel clock is stored in units of 1KHz. - * Hence the bw of each lane in terms of the mode signal - * is: - */ - link_bw = intel_fdi_link_freq(i915, pipe_config); - - fdi_dotclock = adjusted_mode->crtc_clock; - - lane = ilk_get_lanes_required(fdi_dotclock, link_bw, - pipe_config->pipe_bpp); - - pipe_config->fdi_lanes = lane; - - intel_link_compute_m_n(pipe_config->pipe_bpp, lane, fdi_dotclock, - link_bw, &pipe_config->fdi_m_n, false, false); - - ret = ilk_check_fdi_lanes(dev, intel_crtc->pipe, pipe_config); - if (ret == -EDEADLK) - return ret; - - if (ret == -EINVAL && pipe_config->pipe_bpp > 6*3) { - pipe_config->pipe_bpp -= 2*3; - drm_dbg_kms(&i915->drm, - "fdi link bw constraint, reducing pipe bpp to %i\n", - pipe_config->pipe_bpp); - needs_recompute = true; - pipe_config->bw_constrained = true; - - goto retry; - } - - if (needs_recompute) - return RETRY; - - return ret; -} - bool hsw_crtc_state_ips_capable(const struct intel_crtc_state *crtc_state) { struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); @@ -7835,19 +6443,6 @@ static int intel_crtc_compute_config(struct intel_crtc *crtc, return -EINVAL; } - if ((pipe_config->output_format == INTEL_OUTPUT_FORMAT_YCBCR420 || - pipe_config->output_format == INTEL_OUTPUT_FORMAT_YCBCR444) && - pipe_config->hw.ctm) { - /* - * There is only one pipe CSC unit per pipe, and we need that - * for output conversion from RGB->YCBCR. So if CTM is already - * applied we can't support YCBCR420 output. 
- */ - drm_dbg_kms(&dev_priv->drm, - "YCBCR420 and CTM together are not possible\n"); - return -EINVAL; - } - /* * Pipe horizontal size must be even in: * - DVO ganged mode @@ -7959,51 +6554,6 @@ static void intel_panel_sanitize_ssc(struct drm_i915_private *dev_priv) } } -static bool intel_panel_use_ssc(struct drm_i915_private *dev_priv) -{ - if (dev_priv->params.panel_use_ssc >= 0) - return dev_priv->params.panel_use_ssc != 0; - return dev_priv->vbt.lvds_use_ssc - && !(dev_priv->quirks & QUIRK_LVDS_SSC_DISABLE); -} - -static u32 pnv_dpll_compute_fp(struct dpll *dpll) -{ - return (1 << dpll->n) << 16 | dpll->m2; -} - -static u32 i9xx_dpll_compute_fp(struct dpll *dpll) -{ - return dpll->n << 16 | dpll->m1 << 8 | dpll->m2; -} - -static void i9xx_update_pll_dividers(struct intel_crtc *crtc, - struct intel_crtc_state *crtc_state, - struct dpll *reduced_clock) -{ - struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); - u32 fp, fp2 = 0; - - if (IS_PINEVIEW(dev_priv)) { - fp = pnv_dpll_compute_fp(&crtc_state->dpll); - if (reduced_clock) - fp2 = pnv_dpll_compute_fp(reduced_clock); - } else { - fp = i9xx_dpll_compute_fp(&crtc_state->dpll); - if (reduced_clock) - fp2 = i9xx_dpll_compute_fp(reduced_clock); - } - - crtc_state->dpll_hw_state.fp0 = fp; - - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS) && - reduced_clock) { - crtc_state->dpll_hw_state.fp1 = fp2; - } else { - crtc_state->dpll_hw_state.fp1 = fp; - } -} - static void vlv_pllb_recal_opamp(struct drm_i915_private *dev_priv, enum pipe pipe) { @@ -8128,39 +6678,6 @@ void intel_dp_set_m_n(const struct intel_crtc_state *crtc_state, enum link_m_n_s intel_cpu_transcoder_set_m_n(crtc_state, dp_m_n, dp_m2_n2); } -static void vlv_compute_dpll(struct intel_crtc *crtc, - struct intel_crtc_state *pipe_config) -{ - pipe_config->dpll_hw_state.dpll = DPLL_INTEGRATED_REF_CLK_VLV | - DPLL_REF_CLK_ENABLE_VLV | DPLL_VGA_MODE_DIS; - if (crtc->pipe != PIPE_A) - pipe_config->dpll_hw_state.dpll |= DPLL_INTEGRATED_CRI_CLK_VLV; - - /* DPLL not used with DSI, but still need the rest set up */ - if (!intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DSI)) - pipe_config->dpll_hw_state.dpll |= DPLL_VCO_ENABLE | - DPLL_EXT_BUFFER_ENABLE_VLV; - - pipe_config->dpll_hw_state.dpll_md = - (pipe_config->pixel_multiplier - 1) << DPLL_MD_UDI_MULTIPLIER_SHIFT; -} - -static void chv_compute_dpll(struct intel_crtc *crtc, - struct intel_crtc_state *pipe_config) -{ - pipe_config->dpll_hw_state.dpll = DPLL_SSC_REF_CLK_CHV | - DPLL_REF_CLK_ENABLE_VLV | DPLL_VGA_MODE_DIS; - if (crtc->pipe != PIPE_A) - pipe_config->dpll_hw_state.dpll |= DPLL_INTEGRATED_CRI_CLK_VLV; - - /* DPLL not used with DSI, but still need the rest set up */ - if (!intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DSI)) - pipe_config->dpll_hw_state.dpll |= DPLL_VCO_ENABLE; - - pipe_config->dpll_hw_state.dpll_md = - (pipe_config->pixel_multiplier - 1) << DPLL_MD_UDI_MULTIPLIER_SHIFT; -} - static void vlv_prepare_pll(struct intel_crtc *crtc, const struct intel_crtc_state *pipe_config) { @@ -8420,128 +6937,7 @@ void vlv_force_pll_off(struct drm_i915_private *dev_priv, enum pipe pipe) vlv_disable_pll(dev_priv, pipe); } -static void i9xx_compute_dpll(struct intel_crtc *crtc, - struct intel_crtc_state *crtc_state, - struct dpll *reduced_clock) -{ - struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); - u32 dpll; - struct dpll *clock = &crtc_state->dpll; - - i9xx_update_pll_dividers(crtc, crtc_state, reduced_clock); - - dpll = DPLL_VGA_MODE_DIS; - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) - 
dpll |= DPLLB_MODE_LVDS; - else - dpll |= DPLLB_MODE_DAC_SERIAL; - - if (IS_I945G(dev_priv) || IS_I945GM(dev_priv) || - IS_G33(dev_priv) || IS_PINEVIEW(dev_priv)) { - dpll |= (crtc_state->pixel_multiplier - 1) - << SDVO_MULTIPLIER_SHIFT_HIRES; - } - - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_SDVO) || - intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) - dpll |= DPLL_SDVO_HIGH_SPEED; - - if (intel_crtc_has_dp_encoder(crtc_state)) - dpll |= DPLL_SDVO_HIGH_SPEED; - - /* compute bitmask from p1 value */ - if (IS_PINEVIEW(dev_priv)) - dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT_PINEVIEW; - else { - dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT; - if (IS_G4X(dev_priv) && reduced_clock) - dpll |= (1 << (reduced_clock->p1 - 1)) << DPLL_FPA1_P1_POST_DIV_SHIFT; - } - switch (clock->p2) { - case 5: - dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_5; - break; - case 7: - dpll |= DPLLB_LVDS_P2_CLOCK_DIV_7; - break; - case 10: - dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_10; - break; - case 14: - dpll |= DPLLB_LVDS_P2_CLOCK_DIV_14; - break; - } - if (INTEL_GEN(dev_priv) >= 4) - dpll |= (6 << PLL_LOAD_PULSE_PHASE_SHIFT); - - if (crtc_state->sdvo_tv_clock) - dpll |= PLL_REF_INPUT_TVCLKINBC; - else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS) && - intel_panel_use_ssc(dev_priv)) - dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN; - else - dpll |= PLL_REF_INPUT_DREFCLK; - - dpll |= DPLL_VCO_ENABLE; - crtc_state->dpll_hw_state.dpll = dpll; - - if (INTEL_GEN(dev_priv) >= 4) { - u32 dpll_md = (crtc_state->pixel_multiplier - 1) - << DPLL_MD_UDI_MULTIPLIER_SHIFT; - crtc_state->dpll_hw_state.dpll_md = dpll_md; - } -} - -static void i8xx_compute_dpll(struct intel_crtc *crtc, - struct intel_crtc_state *crtc_state, - struct dpll *reduced_clock) -{ - struct drm_device *dev = crtc->base.dev; - struct drm_i915_private *dev_priv = to_i915(dev); - u32 dpll; - struct dpll *clock = &crtc_state->dpll; - - i9xx_update_pll_dividers(crtc, crtc_state, reduced_clock); - - dpll = DPLL_VGA_MODE_DIS; - - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { - dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT; - } else { - if (clock->p1 == 2) - dpll |= PLL_P1_DIVIDE_BY_TWO; - else - dpll |= (clock->p1 - 2) << DPLL_FPA01_P1_POST_DIV_SHIFT; - if (clock->p2 == 4) - dpll |= PLL_P2_DIVIDE_BY_4; - } - - /* - * Bspec: - * "[Almador Errata]: For the correct operation of the muxed DVO pins - * (GDEVSELB/I2Cdata, GIRDBY/I2CClk) and (GFRAMEB/DVI_Data, - * GTRDYB/DVI_Clk): Bit 31 (DPLL VCO Enable) and Bit 30 (2X Clock - * Enable) must be set to “1” in both the DPLL A Control Register - * (06014h-06017h) and DPLL B Control Register (06018h-0601Bh)." - * - * For simplicity we simply keep both bits always enabled in - * both DPLLs. The spec says we should disable the DVO 2X clock - * when not needed, but this seems to work fine in practice.
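(The "compute bitmask from p1 value" step in the removed i9xx_compute_dpll() above encodes the P1 post divider one-hot: divider value N sets bit N-1 of the field. A small illustration; the shift value 16 stands in for DPLL_FPA01_P1_POST_DIV_SHIFT and is an assumption for the sketch:)

#include <stdio.h>

/* One-hot P1 post divider encoding, as used above. */
#define P1_SHIFT 16	/* placeholder for DPLL_FPA01_P1_POST_DIV_SHIFT */

int main(void)
{
	for (int p1 = 1; p1 <= 8; p1++)
		printf("p1=%d -> P1 field 0x%08x\n",
		       p1, (1u << (p1 - 1)) << P1_SHIFT);
	return 0;
}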
- */ - if (IS_I830(dev_priv) || - intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DVO)) - dpll |= DPLL_DVO_2X_MODE; - - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS) && - intel_panel_use_ssc(dev_priv)) - dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN; - else - dpll |= PLL_REF_INPUT_DREFCLK; - - dpll |= DPLL_VCO_ENABLE; - crtc_state->dpll_hw_state.dpll = dpll; -} static void intel_set_transcoder_timings(const struct intel_crtc_state *crtc_state) { @@ -8741,214 +7137,12 @@ static void i9xx_set_pipeconf(const struct intel_crtc_state *crtc_state) pipeconf |= PIPECONF_GAMMA_MODE(crtc_state->gamma_mode); - pipeconf |= PIPECONF_FRAME_START_DELAY(0); + pipeconf |= PIPECONF_FRAME_START_DELAY(dev_priv->framestart_delay - 1); intel_de_write(dev_priv, PIPECONF(crtc->pipe), pipeconf); intel_de_posting_read(dev_priv, PIPECONF(crtc->pipe)); } -static int i8xx_crtc_compute_clock(struct intel_crtc *crtc, - struct intel_crtc_state *crtc_state) -{ - struct drm_device *dev = crtc->base.dev; - struct drm_i915_private *dev_priv = to_i915(dev); - const struct intel_limit *limit; - int refclk = 48000; - - memset(&crtc_state->dpll_hw_state, 0, - sizeof(crtc_state->dpll_hw_state)); - - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { - if (intel_panel_use_ssc(dev_priv)) { - refclk = dev_priv->vbt.lvds_ssc_freq; - drm_dbg_kms(&dev_priv->drm, - "using SSC reference clock of %d kHz\n", - refclk); - } - - limit = &intel_limits_i8xx_lvds; - } else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DVO)) { - limit = &intel_limits_i8xx_dvo; - } else { - limit = &intel_limits_i8xx_dac; - } - - if (!crtc_state->clock_set && - !i9xx_find_best_dpll(limit, crtc_state, crtc_state->port_clock, - refclk, NULL, &crtc_state->dpll)) { - drm_err(&dev_priv->drm, - "Couldn't find PLL settings for mode!\n"); - return -EINVAL; - } - - i8xx_compute_dpll(crtc, crtc_state, NULL); - - return 0; -} - -static int g4x_crtc_compute_clock(struct intel_crtc *crtc, - struct intel_crtc_state *crtc_state) -{ - struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); - const struct intel_limit *limit; - int refclk = 96000; - - memset(&crtc_state->dpll_hw_state, 0, - sizeof(crtc_state->dpll_hw_state)); - - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { - if (intel_panel_use_ssc(dev_priv)) { - refclk = dev_priv->vbt.lvds_ssc_freq; - drm_dbg_kms(&dev_priv->drm, - "using SSC reference clock of %d kHz\n", - refclk); - } - - if (intel_is_dual_link_lvds(dev_priv)) - limit = &intel_limits_g4x_dual_channel_lvds; - else - limit = &intel_limits_g4x_single_channel_lvds; - } else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI) || - intel_crtc_has_type(crtc_state, INTEL_OUTPUT_ANALOG)) { - limit = &intel_limits_g4x_hdmi; - } else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_SDVO)) { - limit = &intel_limits_g4x_sdvo; - } else { - /* The option is for other outputs */ - limit = &intel_limits_i9xx_sdvo; - } - - if (!crtc_state->clock_set && - !g4x_find_best_dpll(limit, crtc_state, crtc_state->port_clock, - refclk, NULL, &crtc_state->dpll)) { - drm_err(&dev_priv->drm, - "Couldn't find PLL settings for mode!\n"); - return -EINVAL; - } - - i9xx_compute_dpll(crtc, crtc_state, NULL); - - return 0; -} - -static int pnv_crtc_compute_clock(struct intel_crtc *crtc, - struct intel_crtc_state *crtc_state) -{ - struct drm_device *dev = crtc->base.dev; - struct drm_i915_private *dev_priv = to_i915(dev); - const struct intel_limit *limit; - int refclk = 96000; - - memset(&crtc_state->dpll_hw_state, 0, - sizeof(crtc_state->dpll_hw_state)); - - if 
(intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { - if (intel_panel_use_ssc(dev_priv)) { - refclk = dev_priv->vbt.lvds_ssc_freq; - drm_dbg_kms(&dev_priv->drm, - "using SSC reference clock of %d kHz\n", - refclk); - } - - limit = &pnv_limits_lvds; - } else { - limit = &pnv_limits_sdvo; - } - - if (!crtc_state->clock_set && - !pnv_find_best_dpll(limit, crtc_state, crtc_state->port_clock, - refclk, NULL, &crtc_state->dpll)) { - drm_err(&dev_priv->drm, - "Couldn't find PLL settings for mode!\n"); - return -EINVAL; - } - - i9xx_compute_dpll(crtc, crtc_state, NULL); - - return 0; -} - -static int i9xx_crtc_compute_clock(struct intel_crtc *crtc, - struct intel_crtc_state *crtc_state) -{ - struct drm_device *dev = crtc->base.dev; - struct drm_i915_private *dev_priv = to_i915(dev); - const struct intel_limit *limit; - int refclk = 96000; - - memset(&crtc_state->dpll_hw_state, 0, - sizeof(crtc_state->dpll_hw_state)); - - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { - if (intel_panel_use_ssc(dev_priv)) { - refclk = dev_priv->vbt.lvds_ssc_freq; - drm_dbg_kms(&dev_priv->drm, - "using SSC reference clock of %d kHz\n", - refclk); - } - - limit = &intel_limits_i9xx_lvds; - } else { - limit = &intel_limits_i9xx_sdvo; - } - - if (!crtc_state->clock_set && - !i9xx_find_best_dpll(limit, crtc_state, crtc_state->port_clock, - refclk, NULL, &crtc_state->dpll)) { - drm_err(&dev_priv->drm, - "Couldn't find PLL settings for mode!\n"); - return -EINVAL; - } - - i9xx_compute_dpll(crtc, crtc_state, NULL); - - return 0; -} - -static int chv_crtc_compute_clock(struct intel_crtc *crtc, - struct intel_crtc_state *crtc_state) -{ - int refclk = 100000; - const struct intel_limit *limit = &intel_limits_chv; - struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev); - - memset(&crtc_state->dpll_hw_state, 0, - sizeof(crtc_state->dpll_hw_state)); - - if (!crtc_state->clock_set && - !chv_find_best_dpll(limit, crtc_state, crtc_state->port_clock, - refclk, NULL, &crtc_state->dpll)) { - drm_err(&i915->drm, "Couldn't find PLL settings for mode!\n"); - return -EINVAL; - } - - chv_compute_dpll(crtc, crtc_state); - - return 0; -} - -static int vlv_crtc_compute_clock(struct intel_crtc *crtc, - struct intel_crtc_state *crtc_state) -{ - int refclk = 100000; - const struct intel_limit *limit = &intel_limits_vlv; - struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev); - - memset(&crtc_state->dpll_hw_state, 0, - sizeof(crtc_state->dpll_hw_state)); - - if (!crtc_state->clock_set && - !vlv_find_best_dpll(limit, crtc_state, crtc_state->port_clock, - refclk, NULL, &crtc_state->dpll)) { - drm_err(&i915->drm, "Couldn't find PLL settings for mode!\n"); - return -EINVAL; - } - - vlv_compute_dpll(crtc, crtc_state); - - return 0; -} - static bool i9xx_has_pfit(struct drm_i915_private *dev_priv) { if (IS_I830(dev_priv)) @@ -9850,7 +8044,7 @@ static void ilk_set_pipeconf(const struct intel_crtc_state *crtc_state) val |= PIPECONF_GAMMA_MODE(crtc_state->gamma_mode); - val |= PIPECONF_FRAME_START_DELAY(0); + val |= PIPECONF_FRAME_START_DELAY(dev_priv->framestart_delay - 1); intel_de_write(dev_priv, PIPECONF(pipe), val); intel_de_posting_read(dev_priv, PIPECONF(pipe)); @@ -9958,172 +8152,6 @@ int ilk_get_lanes_required(int target_clock, int link_bw, int bpp) return DIV_ROUND_UP(bps, link_bw * 8); } -static bool ilk_needs_fb_cb_tune(struct dpll *dpll, int factor) -{ - return i9xx_dpll_compute_m(dpll) < factor * dpll->n; -} - -static void ilk_compute_dpll(struct intel_crtc *crtc, - struct intel_crtc_state 
*crtc_state, - struct dpll *reduced_clock) -{ - struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); - u32 dpll, fp, fp2; - int factor; - - /* Enable autotuning of the PLL clock (if permissible) */ - factor = 21; - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { - if ((intel_panel_use_ssc(dev_priv) && - dev_priv->vbt.lvds_ssc_freq == 100000) || - (HAS_PCH_IBX(dev_priv) && - intel_is_dual_link_lvds(dev_priv))) - factor = 25; - } else if (crtc_state->sdvo_tv_clock) { - factor = 20; - } - - fp = i9xx_dpll_compute_fp(&crtc_state->dpll); - - if (ilk_needs_fb_cb_tune(&crtc_state->dpll, factor)) - fp |= FP_CB_TUNE; - - if (reduced_clock) { - fp2 = i9xx_dpll_compute_fp(reduced_clock); - - if (reduced_clock->m < factor * reduced_clock->n) - fp2 |= FP_CB_TUNE; - } else { - fp2 = fp; - } - - dpll = 0; - - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) - dpll |= DPLLB_MODE_LVDS; - else - dpll |= DPLLB_MODE_DAC_SERIAL; - - dpll |= (crtc_state->pixel_multiplier - 1) - << PLL_REF_SDVO_HDMI_MULTIPLIER_SHIFT; - - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_SDVO) || - intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) - dpll |= DPLL_SDVO_HIGH_SPEED; - - if (intel_crtc_has_dp_encoder(crtc_state)) - dpll |= DPLL_SDVO_HIGH_SPEED; - - /* - * The high speed IO clock is only really required for - * SDVO/HDMI/DP, but we also enable it for CRT to make it - * possible to share the DPLL between CRT and HDMI. Enabling - * the clock needlessly does no real harm, except use up a - * bit of power potentially. - * - * We'll limit this to IVB with 3 pipes, since it has only two - * DPLLs and so DPLL sharing is the only way to get three pipes - * driving PCH ports at the same time. On SNB we could do this, - * and potentially avoid enabling the second DPLL, but it's not - * clear if it's a win or loss power wise. No point in doing - * this on ILK at all since it has a fixed DPLL<->pipe mapping. - */ - if (INTEL_NUM_PIPES(dev_priv) == 3 && - intel_crtc_has_type(crtc_state, INTEL_OUTPUT_ANALOG)) - dpll |= DPLL_SDVO_HIGH_SPEED; - - /* compute bitmask from p1 value */ - dpll |= (1 << (crtc_state->dpll.p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT; - /* also FPA1 */ - dpll |= (1 << (crtc_state->dpll.p1 - 1)) << DPLL_FPA1_P1_POST_DIV_SHIFT; - - switch (crtc_state->dpll.p2) { - case 5: - dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_5; - break; - case 7: - dpll |= DPLLB_LVDS_P2_CLOCK_DIV_7; - break; - case 10: - dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_10; - break; - case 14: - dpll |= DPLLB_LVDS_P2_CLOCK_DIV_14; - break; - } - - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS) && - intel_panel_use_ssc(dev_priv)) - dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN; - else - dpll |= PLL_REF_INPUT_DREFCLK; - - dpll |= DPLL_VCO_ENABLE; - - crtc_state->dpll_hw_state.dpll = dpll; - crtc_state->dpll_hw_state.fp0 = fp; - crtc_state->dpll_hw_state.fp1 = fp2; -} - -static int ilk_crtc_compute_clock(struct intel_crtc *crtc, - struct intel_crtc_state *crtc_state) -{ - struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); - struct intel_atomic_state *state = - to_intel_atomic_state(crtc_state->uapi.state); - const struct intel_limit *limit; - int refclk = 120000; - - memset(&crtc_state->dpll_hw_state, 0, - sizeof(crtc_state->dpll_hw_state)); - - /* CPU eDP is the only output that doesn't need a PCH PLL of its own.
*/ - if (!crtc_state->has_pch_encoder) - return 0; - - if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { - if (intel_panel_use_ssc(dev_priv)) { - drm_dbg_kms(&dev_priv->drm, - "using SSC reference clock of %d kHz\n", - dev_priv->vbt.lvds_ssc_freq); - refclk = dev_priv->vbt.lvds_ssc_freq; - } - - if (intel_is_dual_link_lvds(dev_priv)) { - if (refclk == 100000) - limit = &ilk_limits_dual_lvds_100m; - else - limit = &ilk_limits_dual_lvds; - } else { - if (refclk == 100000) - limit = &ilk_limits_single_lvds_100m; - else - limit = &ilk_limits_single_lvds; - } - } else { - limit = &ilk_limits_dac; - } - - if (!crtc_state->clock_set && - !g4x_find_best_dpll(limit, crtc_state, crtc_state->port_clock, - refclk, NULL, &crtc_state->dpll)) { - drm_err(&dev_priv->drm, - "Couldn't find PLL settings for mode!\n"); - return -EINVAL; - } - - ilk_compute_dpll(crtc, crtc_state, NULL); - - if (!intel_reserve_shared_dplls(state, crtc, NULL)) { - drm_dbg_kms(&dev_priv->drm, - "failed to find PLL for pipe %c\n", - pipe_name(crtc->pipe)); - return -EINVAL; - } - - return 0; -} - static void intel_pch_transcoder_get_m_n(struct intel_crtc *crtc, struct intel_link_m_n *m_n) { @@ -10536,29 +8564,6 @@ out: return ret; } -static int hsw_crtc_compute_clock(struct intel_crtc *crtc, - struct intel_crtc_state *crtc_state) -{ - struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); - struct intel_atomic_state *state = - to_intel_atomic_state(crtc_state->uapi.state); - - if (!intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DSI) || - INTEL_GEN(dev_priv) >= 11) { - struct intel_encoder *encoder = - intel_get_crtc_new_encoder(state, crtc_state); - - if (!intel_reserve_shared_dplls(state, crtc, encoder)) { - drm_dbg_kms(&dev_priv->drm, - "failed to find PLL for pipe %c\n", - pipe_name(crtc->pipe)); - return -EINVAL; - } - } - - return 0; -} - static void dg1_get_ddi_pll(struct drm_i915_private *dev_priv, enum port port, struct intel_crtc_state *pipe_config) { @@ -10952,8 +8957,6 @@ static bool hsw_get_pipe_config(struct intel_crtc *crtc, bool active; u32 tmp; - pipe_config->master_transcoder = INVALID_TRANSCODER; - if (!intel_display_power_get_in_set_if_enabled(dev_priv, &power_domain_set, POWER_DOMAIN_PIPE(crtc->pipe))) return false; @@ -10986,6 +8989,9 @@ static bool hsw_get_pipe_config(struct intel_crtc *crtc, intel_get_transcoder_timings(crtc, pipe_config); } + if (HAS_VRR(dev_priv) && !transcoder_is_dsi(pipe_config->cpu_transcoder)) + intel_vrr_get_config(crtc, pipe_config); + intel_get_pipe_src_size(crtc, pipe_config); if (IS_HASWELL(dev_priv)) { @@ -11464,33 +9470,6 @@ static void ilk_pch_clock_get(struct intel_crtc *crtc, &pipe_config->fdi_m_n); } -static void intel_crtc_state_reset(struct intel_crtc_state *crtc_state, - struct intel_crtc *crtc) -{ - memset(crtc_state, 0, sizeof(*crtc_state)); - - __drm_atomic_helper_crtc_state_reset(&crtc_state->uapi, &crtc->base); - - crtc_state->cpu_transcoder = INVALID_TRANSCODER; - crtc_state->master_transcoder = INVALID_TRANSCODER; - crtc_state->hsw_workaround_pipe = INVALID_PIPE; - crtc_state->output_format = INTEL_OUTPUT_FORMAT_INVALID; - crtc_state->scaler_state.scaler_id = -1; - crtc_state->mst_master_transcoder = INVALID_TRANSCODER; -} - -static struct intel_crtc_state *intel_crtc_state_alloc(struct intel_crtc *crtc) -{ - struct intel_crtc_state *crtc_state; - - crtc_state = kmalloc(sizeof(*crtc_state), GFP_KERNEL); - - if (crtc_state) - intel_crtc_state_reset(crtc_state, crtc); - - return crtc_state; -} - /* Returns the currently programmed mode of the given 
encoder. */ struct drm_display_mode * intel_encoder_current_mode(struct intel_encoder *encoder) @@ -11531,14 +9510,6 @@ intel_encoder_current_mode(struct intel_encoder *encoder) return mode; } -static void intel_crtc_destroy(struct drm_crtc *crtc) -{ - struct intel_crtc *intel_crtc = to_intel_crtc(crtc); - - drm_crtc_cleanup(crtc); - kfree(intel_crtc); -} - /** * intel_wm_need_update - Check whether watermarks need updating * @cur: current plane state @@ -12367,6 +10338,13 @@ static void intel_dump_pipe_config(const struct intel_crtc_state *pipe_config, intel_hdmi_infoframe_enable(DP_SDP_VSC)) intel_dump_dp_vsc_sdp(dev_priv, &pipe_config->infoframes.vsc); + drm_dbg_kms(&dev_priv->drm, "vrr: %s, vmin: %d, vmax: %d, pipeline full: %d, flipline: %d, vmin vblank: %d, vmax vblank: %d\n", + yesno(pipe_config->vrr.enable), + pipe_config->vrr.vmin, pipe_config->vrr.vmax, + pipe_config->vrr.pipeline_full, pipe_config->vrr.flipline, + intel_vrr_vmin_vblank_start(pipe_config), + intel_vrr_vmax_vblank_start(pipe_config)); + drm_dbg_kms(&dev_priv->drm, "requested mode:\n"); drm_mode_debug_printmodeline(&pipe_config->hw.mode); drm_dbg_kms(&dev_priv->drm, "adjusted mode:\n"); @@ -12754,7 +10732,7 @@ encoder_retry: return ret; } - if (ret == RETRY) { + if (ret == I915_DISPLAY_CONFIG_RETRY) { if (drm_WARN(&i915->drm, !retry, "loop in pipe configuration computation\n")) return -EINVAL; @@ -13360,6 +11338,12 @@ intel_pipe_config_compare(const struct intel_crtc_state *current_config, PIPE_CONF_CHECK_I(mst_master_transcoder); + PIPE_CONF_CHECK_BOOL(vrr.enable); + PIPE_CONF_CHECK_I(vrr.vmin); + PIPE_CONF_CHECK_I(vrr.vmax); + PIPE_CONF_CHECK_I(vrr.flipline); + PIPE_CONF_CHECK_I(vrr.pipeline_full); + #undef PIPE_CONF_CHECK_X #undef PIPE_CONF_CHECK_I #undef PIPE_CONF_CHECK_BOOL @@ -13818,10 +11802,17 @@ intel_crtc_update_active_timings(const struct intel_crtc_state *crtc_state) { struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); - const struct drm_display_mode *adjusted_mode = - &crtc_state->hw.adjusted_mode; + struct drm_display_mode adjusted_mode = + crtc_state->hw.adjusted_mode; + + if (crtc_state->vrr.enable) { + adjusted_mode.crtc_vtotal = crtc_state->vrr.vmax; + adjusted_mode.crtc_vblank_end = crtc_state->vrr.vmax; + adjusted_mode.crtc_vblank_start = intel_vrr_vmin_vblank_start(crtc_state); + crtc->vmax_vblank_start = intel_vrr_vmax_vblank_start(crtc_state); + } - drm_calc_timestamping_constants(&crtc->base, adjusted_mode); + drm_calc_timestamping_constants(&crtc->base, &adjusted_mode); crtc->mode_flags = crtc_state->mode_flags; @@ -13855,8 +11846,8 @@ intel_crtc_update_active_timings(const struct intel_crtc_state *crtc_state) if (IS_GEN(dev_priv, 2)) { int vtotal; - vtotal = adjusted_mode->crtc_vtotal; - if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) + vtotal = adjusted_mode.crtc_vtotal; + if (adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE) vtotal /= 2; crtc->scanline_offset = vtotal - 1; @@ -14329,7 +12320,7 @@ static void kill_bigjoiner_slave(struct intel_atomic_state *state, * Async flip can only change the plane surface address, so anything else * changing is rejected from the intel_atomic_check_async() function. * Once this check is cleared, flip done interrupt is enabled using - * the skl_enable_flip_done() function. + * the intel_crtc_enable_flip_done() function. 
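(For reference, the userspace side of the async flip flow described in the comment here is an async page flip request. A hedged libdrm sketch; fd, crtc_id and fb_id are assumed to already exist, and error handling is omitted:)

#include <stddef.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Queue an async flip to fb_id; only the scanout address may differ
 * from the current state, anything else is rejected by the check
 * described above. */
static int queue_async_flip(int fd, uint32_t crtc_id, uint32_t fb_id)
{
	return drmModePageFlip(fd, crtc_id, fb_id,
			       DRM_MODE_PAGE_FLIP_EVENT |
			       DRM_MODE_PAGE_FLIP_ASYNC,
			       NULL /* user_data returned with the event */);
}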
* * As soon as the surface address register is written, flip done interrupt is * generated and the requested events are sent to the userspace in the interrupt @@ -14373,7 +12364,6 @@ static int intel_atomic_check_async(struct intel_atomic_state *state) * this(vlv/chv and icl+) should be added when async flip is * enabled in the atomic IOCTL path. */ - if (plane->id != PLANE_PRIMARY) + if (!plane->async_flip) return -EINVAL; /* @@ -14517,6 +12508,8 @@ static int intel_atomic_check(struct drm_device *dev, new_crtc_state->uapi.mode_changed = true; } + intel_vrr_check_modeset(state); + ret = drm_atomic_helper_check_modeset(dev, &state->base); if (ret) goto fail; @@ -14643,20 +12636,6 @@ static int intel_atomic_check(struct drm_device *dev, if (ret) goto fail; - /* - * distrust_bios_wm will force a full dbuf recomputation - * but the hardware state will only get updated accordingly - * if state->modeset==true. Hence distrust_bios_wm==true && - * state->modeset==false is an invalid combination which - * would cause the hardware and software dbuf state to get - * out of sync. We must prevent that. - * - * FIXME clean up this mess and introduce better - * state tracking for dbuf. - */ - if (dev_priv->wm.distrust_bios_wm) - any_ms = true; - intel_fbc_choose_crtc(dev_priv, state); ret = calc_watermark_data(state); if (ret) @@ -14742,17 +12721,6 @@ static int intel_atomic_prepare_commit(struct intel_atomic_state *state) return 0; } -u32 intel_crtc_get_vblank_counter(struct intel_crtc *crtc) -{ - struct drm_device *dev = crtc->base.dev; - struct drm_vblank_crtc *vblank = &dev->vblank[drm_crtc_index(&crtc->base)]; - - if (!vblank->max_vblank_count) - return (u32)drm_crtc_accurate_vblank_count(&crtc->base); - - return crtc->base.funcs->get_vblank_counter(&crtc->base); -} - void intel_crtc_arm_fifo_underrun(struct intel_crtc *crtc, struct intel_crtc_state *crtc_state) { @@ -15220,6 +13188,43 @@ static void intel_atomic_cleanup_work(struct work_struct *work) intel_atomic_helper_free_state(i915); } +static void intel_atomic_prepare_plane_clear_colors(struct intel_atomic_state *state) +{ + struct drm_i915_private *i915 = to_i915(state->base.dev); + struct intel_plane *plane; + struct intel_plane_state *plane_state; + int i; + + for_each_new_intel_plane_in_state(state, plane, plane_state, i) { + struct drm_framebuffer *fb = plane_state->hw.fb; + int ret; + + if (!fb || + fb->modifier != I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC) + continue; + + /* + * The layout of the fast clear color value expected by HW + * (the DRM ABI requiring this value to be located in fb at offset 0 of plane#2): + * - 4 x 4 bytes per-channel value + * (in surface type specific float/int format provided by the fb user) + * - 8 bytes native color value used by the display + * (converted/written by GPU during a fast clear operation using the + * above per-channel values) + * + * The commit's FB prepare hook already ensured that FB obj is pinned and the + * caller made sure that the object is synced wrt. the related color clear value + * GPU write on it. + */ + ret = i915_gem_object_read_from_page(intel_fb_obj(fb), + fb->offsets[2] + 16, + &plane_state->ccval, + sizeof(plane_state->ccval)); + /* The above could only fail if the FB obj has an unexpected backing store type.
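(The clear color plane layout spelled out in the comment just above can be pictured as the following illustrative struct; the type name is ours and this is not a kernel definition. The i915_gem_object_read_from_page() call above pulls the 8-byte display-native value at byte offset 16 into plane_state->ccval:)

#include <linux/types.h>

/* Illustrative layout of plane #2 of a GEN12 RC_CCS_CC framebuffer,
 * mirroring the comment above; not an actual kernel struct. */
struct gen12_cc_plane_layout {
	u32 channel[4];	/* offset 0: per-channel clear values from the fb user */
	u64 native;	/* offset 16: display-native value written by the GPU */
};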
*/ + drm_WARN_ON(&i915->drm, ret); + } +} + static void intel_atomic_commit_tail(struct intel_atomic_state *state) { struct drm_device *dev = state->base.dev; @@ -15237,6 +13242,8 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state) if (state->modeset) wakeref = intel_display_power_get(dev_priv, POWER_DOMAIN_MODESET); + intel_atomic_prepare_plane_clear_colors(state); + for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { if (intel_crtc_needs_modeset(new_crtc_state) || @@ -15285,7 +13292,7 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state) for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { if (new_crtc_state->uapi.async_flip) - skl_enable_flip_done(crtc); + intel_crtc_enable_flip_done(state, crtc); } /* Now enable the clocks, plane, pipe, and connectors that we set up. */ @@ -15310,7 +13317,7 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state) for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { if (new_crtc_state->uapi.async_flip) - skl_disable_flip_done(crtc); + intel_crtc_disable_flip_done(state, crtc); if (new_crtc_state->hw.active && !intel_crtc_needs_modeset(new_crtc_state) && @@ -15513,7 +13520,6 @@ static int intel_atomic_commit(struct drm_device *dev, intel_runtime_pm_put(&dev_priv->runtime_pm, state->wakeref); return ret; } - dev_priv->wm.distrust_bios_wm = false; intel_shared_dpll_swap_state(state); intel_atomic_track_fbs(state); @@ -15631,15 +13637,6 @@ void intel_plane_unpin_fb(struct intel_plane_state *old_plane_state) intel_unpin_fb_vma(vma, old_plane_state->flags); } -static void fb_obj_bump_render_priority(struct drm_i915_gem_object *obj) -{ - struct i915_sched_attr attr = { - .priority = I915_USER_PRIORITY(I915_PRIORITY_DISPLAY), - }; - - i915_gem_object_wait_priority(obj, 0, &attr); -} - /** * intel_prepare_plane_fb - Prepare fb for usage on plane * @_plane: drm plane to prepare for @@ -15656,6 +13653,9 @@ int intel_prepare_plane_fb(struct drm_plane *_plane, struct drm_plane_state *_new_plane_state) { + struct i915_sched_attr attr = { + .priority = I915_USER_PRIORITY(I915_PRIORITY_DISPLAY), + }; struct intel_plane *plane = to_intel_plane(_plane); struct intel_plane_state *new_plane_state = to_intel_plane_state(_new_plane_state); @@ -15695,6 +13695,8 @@ intel_prepare_plane_fb(struct drm_plane *_plane, } if (new_plane_state->uapi.fence) { /* explicit fencing */ + i915_gem_fence_wait_priority(new_plane_state->uapi.fence, + &attr); ret = i915_sw_fence_await_dma_fence(&state->commit_ready, new_plane_state->uapi.fence, i915_fence_timeout(dev_priv), @@ -15716,7 +13718,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane, if (ret) return ret; - fb_obj_bump_render_priority(obj); + i915_gem_object_wait_priority(obj, 0, &attr); i915_gem_object_flush_frontbuffer(obj, ORIGIN_DIRTYFB); if (!new_plane_state->uapi.fence) { /* implicit fencing */ @@ -15805,113 +13807,6 @@ void intel_plane_destroy(struct drm_plane *plane) kfree(to_intel_plane(plane)); } -static int intel_crtc_late_register(struct drm_crtc *crtc) -{ - intel_crtc_debugfs_add(crtc); - return 0; -} - -#define INTEL_CRTC_FUNCS \ - .set_config = drm_atomic_helper_set_config, \ - .destroy = intel_crtc_destroy, \ - .page_flip = drm_atomic_helper_page_flip, \ - .atomic_duplicate_state = intel_crtc_duplicate_state, \ - .atomic_destroy_state = intel_crtc_destroy_state, \ - .set_crc_source = intel_crtc_set_crc_source, \ - .verify_crc_source = intel_crtc_verify_crc_source, \ - .get_crc_sources = 
intel_crtc_get_crc_sources, \ - .late_register = intel_crtc_late_register - -static const struct drm_crtc_funcs bdw_crtc_funcs = { - INTEL_CRTC_FUNCS, - - .get_vblank_counter = g4x_get_vblank_counter, - .enable_vblank = bdw_enable_vblank, - .disable_vblank = bdw_disable_vblank, - .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, -}; - -static const struct drm_crtc_funcs ilk_crtc_funcs = { - INTEL_CRTC_FUNCS, - - .get_vblank_counter = g4x_get_vblank_counter, - .enable_vblank = ilk_enable_vblank, - .disable_vblank = ilk_disable_vblank, - .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, -}; - -static const struct drm_crtc_funcs g4x_crtc_funcs = { - INTEL_CRTC_FUNCS, - - .get_vblank_counter = g4x_get_vblank_counter, - .enable_vblank = i965_enable_vblank, - .disable_vblank = i965_disable_vblank, - .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, -}; - -static const struct drm_crtc_funcs i965_crtc_funcs = { - INTEL_CRTC_FUNCS, - - .get_vblank_counter = i915_get_vblank_counter, - .enable_vblank = i965_enable_vblank, - .disable_vblank = i965_disable_vblank, - .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, -}; - -static const struct drm_crtc_funcs i915gm_crtc_funcs = { - INTEL_CRTC_FUNCS, - - .get_vblank_counter = i915_get_vblank_counter, - .enable_vblank = i915gm_enable_vblank, - .disable_vblank = i915gm_disable_vblank, - .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, -}; - -static const struct drm_crtc_funcs i915_crtc_funcs = { - INTEL_CRTC_FUNCS, - - .get_vblank_counter = i915_get_vblank_counter, - .enable_vblank = i8xx_enable_vblank, - .disable_vblank = i8xx_disable_vblank, - .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, -}; - -static const struct drm_crtc_funcs i8xx_crtc_funcs = { - INTEL_CRTC_FUNCS, - - /* no hw vblank counter */ - .enable_vblank = i8xx_enable_vblank, - .disable_vblank = i8xx_disable_vblank, - .get_vblank_timestamp = intel_crtc_get_vblank_timestamp, -}; - -static struct intel_crtc *intel_crtc_alloc(void) -{ - struct intel_crtc_state *crtc_state; - struct intel_crtc *crtc; - - crtc = kzalloc(sizeof(*crtc), GFP_KERNEL); - if (!crtc) - return ERR_PTR(-ENOMEM); - - crtc_state = intel_crtc_state_alloc(crtc); - if (!crtc_state) { - kfree(crtc); - return ERR_PTR(-ENOMEM); - } - - crtc->base.state = &crtc_state->uapi; - crtc->config = crtc_state; - - return crtc; -} - -static void intel_crtc_free(struct intel_crtc *crtc) -{ - intel_crtc_destroy_state(&crtc->base, crtc->base.state); - kfree(crtc); -} - static void intel_plane_possible_crtcs_init(struct drm_i915_private *dev_priv) { struct intel_plane *plane; @@ -15924,100 +13819,6 @@ static void intel_plane_possible_crtcs_init(struct drm_i915_private *dev_priv) } } -static int intel_crtc_init(struct drm_i915_private *dev_priv, enum pipe pipe) -{ - struct intel_plane *primary, *cursor; - const struct drm_crtc_funcs *funcs; - struct intel_crtc *crtc; - int sprite, ret; - - crtc = intel_crtc_alloc(); - if (IS_ERR(crtc)) - return PTR_ERR(crtc); - - crtc->pipe = pipe; - crtc->num_scalers = RUNTIME_INFO(dev_priv)->num_scalers[pipe]; - - primary = intel_primary_plane_create(dev_priv, pipe); - if (IS_ERR(primary)) { - ret = PTR_ERR(primary); - goto fail; - } - crtc->plane_ids_mask |= BIT(primary->id); - - for_each_sprite(dev_priv, pipe, sprite) { - struct intel_plane *plane; - - plane = intel_sprite_plane_create(dev_priv, pipe, sprite); - if (IS_ERR(plane)) { - ret = PTR_ERR(plane); - goto fail; - } - crtc->plane_ids_mask |= BIT(plane->id); - } - - cursor = 
intel_cursor_plane_create(dev_priv, pipe); - if (IS_ERR(cursor)) { - ret = PTR_ERR(cursor); - goto fail; - } - crtc->plane_ids_mask |= BIT(cursor->id); - - if (HAS_GMCH(dev_priv)) { - if (IS_CHERRYVIEW(dev_priv) || - IS_VALLEYVIEW(dev_priv) || IS_G4X(dev_priv)) - funcs = &g4x_crtc_funcs; - else if (IS_GEN(dev_priv, 4)) - funcs = &i965_crtc_funcs; - else if (IS_I945GM(dev_priv) || IS_I915GM(dev_priv)) - funcs = &i915gm_crtc_funcs; - else if (IS_GEN(dev_priv, 3)) - funcs = &i915_crtc_funcs; - else - funcs = &i8xx_crtc_funcs; - } else { - if (INTEL_GEN(dev_priv) >= 8) - funcs = &bdw_crtc_funcs; - else - funcs = &ilk_crtc_funcs; - } - - ret = drm_crtc_init_with_planes(&dev_priv->drm, &crtc->base, - &primary->base, &cursor->base, - funcs, "pipe %c", pipe_name(pipe)); - if (ret) - goto fail; - - BUG_ON(pipe >= ARRAY_SIZE(dev_priv->pipe_to_crtc_mapping) || - dev_priv->pipe_to_crtc_mapping[pipe] != NULL); - dev_priv->pipe_to_crtc_mapping[pipe] = crtc; - - if (INTEL_GEN(dev_priv) < 9) { - enum i9xx_plane_id i9xx_plane = primary->i9xx_plane; - - BUG_ON(i9xx_plane >= ARRAY_SIZE(dev_priv->plane_to_crtc_mapping) || - dev_priv->plane_to_crtc_mapping[i9xx_plane] != NULL); - dev_priv->plane_to_crtc_mapping[i9xx_plane] = crtc; - } - - if (INTEL_GEN(dev_priv) >= 10) - drm_crtc_create_scaling_filter_property(&crtc->base, - BIT(DRM_SCALING_FILTER_DEFAULT) | - BIT(DRM_SCALING_FILTER_NEAREST_NEIGHBOR)); - - intel_color_init(crtc); - - intel_crtc_crc_init(crtc); - - drm_WARN_ON(&dev_priv->drm, drm_crtc_index(&crtc->base) != crtc->pipe); - - return 0; - -fail: - intel_crtc_free(crtc); - - return ret; -} int intel_get_pipe_from_crtc_id_ioctl(struct drm_device *dev, void *data, struct drm_file *file) @@ -16100,48 +13901,12 @@ static bool intel_ddi_crt_present(struct drm_i915_private *dev_priv) return true; } -void intel_pps_unlock_regs_wa(struct drm_i915_private *dev_priv) -{ - int pps_num; - int pps_idx; - - if (HAS_DDI(dev_priv)) - return; - /* - * This w/a is needed at least on CPT/PPT, but to be sure apply it - * everywhere where registers can be write protected. 
- */ - if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) - pps_num = 2; - else - pps_num = 1; - - for (pps_idx = 0; pps_idx < pps_num; pps_idx++) { - u32 val = intel_de_read(dev_priv, PP_CONTROL(pps_idx)); - - val = (val & ~PANEL_UNLOCK_MASK) | PANEL_UNLOCK_REGS; - intel_de_write(dev_priv, PP_CONTROL(pps_idx), val); - } -} - -static void intel_pps_init(struct drm_i915_private *dev_priv) -{ - if (HAS_PCH_SPLIT(dev_priv) || IS_GEN9_LP(dev_priv)) - dev_priv->pps_mmio_base = PCH_PPS_BASE; - else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) - dev_priv->pps_mmio_base = VLV_PPS_BASE; - else - dev_priv->pps_mmio_base = PPS_BASE; - - intel_pps_unlock_regs_wa(dev_priv); -} - static void intel_setup_outputs(struct drm_i915_private *dev_priv) { struct intel_encoder *encoder; bool dpd_is_edp = false; - intel_pps_init(dev_priv); + intel_pps_unlock_regs_wa(dev_priv); if (!HAS_DISPLAY(dev_priv)) return; @@ -16542,7 +14307,7 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb, goto err; } - if (is_gen12_ccs_plane(fb, i)) { + if (is_gen12_ccs_plane(fb, i) && !is_gen12_ccs_cc_plane(fb, i)) { int ccs_aux_stride = gen12_ccs_aux_stride(fb, i); if (fb->pitches[i] != ccs_aux_stride) { @@ -16740,86 +14505,40 @@ void intel_init_display_hooks(struct drm_i915_private *dev_priv) { intel_init_cdclk_hooks(dev_priv); + intel_dpll_init_clock_hook(dev_priv); + if (INTEL_GEN(dev_priv) >= 9) { dev_priv->display.get_pipe_config = hsw_get_pipe_config; - dev_priv->display.get_initial_plane_config = - skl_get_initial_plane_config; - dev_priv->display.crtc_compute_clock = hsw_crtc_compute_clock; dev_priv->display.crtc_enable = hsw_crtc_enable; dev_priv->display.crtc_disable = hsw_crtc_disable; } else if (HAS_DDI(dev_priv)) { dev_priv->display.get_pipe_config = hsw_get_pipe_config; - dev_priv->display.get_initial_plane_config = - i9xx_get_initial_plane_config; - dev_priv->display.crtc_compute_clock = - hsw_crtc_compute_clock; dev_priv->display.crtc_enable = hsw_crtc_enable; dev_priv->display.crtc_disable = hsw_crtc_disable; } else if (HAS_PCH_SPLIT(dev_priv)) { dev_priv->display.get_pipe_config = ilk_get_pipe_config; - dev_priv->display.get_initial_plane_config = - i9xx_get_initial_plane_config; - dev_priv->display.crtc_compute_clock = - ilk_crtc_compute_clock; dev_priv->display.crtc_enable = ilk_crtc_enable; dev_priv->display.crtc_disable = ilk_crtc_disable; - } else if (IS_CHERRYVIEW(dev_priv)) { + } else if (IS_CHERRYVIEW(dev_priv) || + IS_VALLEYVIEW(dev_priv)) { dev_priv->display.get_pipe_config = i9xx_get_pipe_config; - dev_priv->display.get_initial_plane_config = - i9xx_get_initial_plane_config; - dev_priv->display.crtc_compute_clock = chv_crtc_compute_clock; dev_priv->display.crtc_enable = valleyview_crtc_enable; dev_priv->display.crtc_disable = i9xx_crtc_disable; - } else if (IS_VALLEYVIEW(dev_priv)) { - dev_priv->display.get_pipe_config = i9xx_get_pipe_config; - dev_priv->display.get_initial_plane_config = - i9xx_get_initial_plane_config; - dev_priv->display.crtc_compute_clock = vlv_crtc_compute_clock; - dev_priv->display.crtc_enable = valleyview_crtc_enable; - dev_priv->display.crtc_disable = i9xx_crtc_disable; - } else if (IS_G4X(dev_priv)) { - dev_priv->display.get_pipe_config = i9xx_get_pipe_config; - dev_priv->display.get_initial_plane_config = - i9xx_get_initial_plane_config; - dev_priv->display.crtc_compute_clock = g4x_crtc_compute_clock; - dev_priv->display.crtc_enable = i9xx_crtc_enable; - dev_priv->display.crtc_disable = i9xx_crtc_disable; - } else if (IS_PINEVIEW(dev_priv)) 
{ - dev_priv->display.get_pipe_config = i9xx_get_pipe_config; - dev_priv->display.get_initial_plane_config = - i9xx_get_initial_plane_config; - dev_priv->display.crtc_compute_clock = pnv_crtc_compute_clock; - dev_priv->display.crtc_enable = i9xx_crtc_enable; - dev_priv->display.crtc_disable = i9xx_crtc_disable; - } else if (!IS_GEN(dev_priv, 2)) { - dev_priv->display.get_pipe_config = i9xx_get_pipe_config; - dev_priv->display.get_initial_plane_config = - i9xx_get_initial_plane_config; - dev_priv->display.crtc_compute_clock = i9xx_crtc_compute_clock; - dev_priv->display.crtc_enable = i9xx_crtc_enable; - dev_priv->display.crtc_disable = i9xx_crtc_disable; } else { dev_priv->display.get_pipe_config = i9xx_get_pipe_config; - dev_priv->display.get_initial_plane_config = - i9xx_get_initial_plane_config; - dev_priv->display.crtc_compute_clock = i8xx_crtc_compute_clock; dev_priv->display.crtc_enable = i9xx_crtc_enable; dev_priv->display.crtc_disable = i9xx_crtc_disable; } - if (IS_GEN(dev_priv, 5)) { - dev_priv->display.fdi_link_train = ilk_fdi_link_train; - } else if (IS_GEN(dev_priv, 6)) { - dev_priv->display.fdi_link_train = gen6_fdi_link_train; - } else if (IS_IVYBRIDGE(dev_priv)) { - /* FIXME: detect B0+ stepping and use auto training */ - dev_priv->display.fdi_link_train = ivb_manual_fdi_link_train; - } + intel_fdi_init_hook(dev_priv); - if (INTEL_GEN(dev_priv) >= 9) + if (INTEL_GEN(dev_priv) >= 9) { dev_priv->display.commit_modeset_enables = skl_commit_modeset_enables; - else + dev_priv->display.get_initial_plane_config = skl_get_initial_plane_config; + } else { dev_priv->display.commit_modeset_enables = intel_commit_modeset_enables; + dev_priv->display.get_initial_plane_config = i9xx_get_initial_plane_config; + } } @@ -16827,14 +14546,10 @@ void intel_modeset_init_hw(struct drm_i915_private *i915) { struct intel_cdclk_state *cdclk_state = to_intel_cdclk_state(i915->cdclk.obj.state); - struct intel_dbuf_state *dbuf_state = - to_intel_dbuf_state(i915->dbuf.obj.state); intel_update_cdclk(i915); intel_dump_cdclk_config(&i915->cdclk.hw, "Current CDCLK"); cdclk_state->logical = cdclk_state->actual = i915->cdclk.hw; - - dbuf_state->enabled_slices = i915->dbuf.enabled_slices; } static int sanitize_watermarks_add_affected(struct drm_atomic_state *state) @@ -17067,8 +14782,7 @@ static void intel_mode_config_init(struct drm_i915_private *i915) mode_config->funcs = &intel_mode_funcs; - if (INTEL_GEN(i915) >= 9) - mode_config->async_page_flip = true; + mode_config->async_page_flip = has_async_flips(i915); /* * Maximum framebuffer dimensions, chosen to match @@ -17153,6 +14867,8 @@ int intel_modeset_init_noirq(struct drm_i915_private *i915) i915->flip_wq = alloc_workqueue("i915_flip", WQ_HIGHPRI | WQ_UNBOUND, WQ_UNBOUND_MAX_ACTIVE); + i915->framestart_delay = 1; /* 1-4 */ + intel_mode_config_init(i915); ret = intel_cdclk_init(i915); @@ -17199,6 +14915,8 @@ int intel_modeset_init_nogem(struct drm_i915_private *i915) intel_panel_sanitize_ssc(i915); + intel_pps_setup(i915); + intel_gmbus_setup(i915); drm_dbg_kms(&i915->drm, "%d display pipe%s available.\n", @@ -17487,7 +15205,7 @@ static void intel_sanitize_frame_start_delay(const struct intel_crtc_state *crtc val = intel_de_read(dev_priv, reg); val &= ~HSW_FRAME_START_DELAY_MASK; - val |= HSW_FRAME_START_DELAY(0); + val |= HSW_FRAME_START_DELAY(dev_priv->framestart_delay - 1); intel_de_write(dev_priv, reg, val); } else { i915_reg_t reg = PIPECONF(cpu_transcoder); @@ -17495,7 +15213,7 @@ static void intel_sanitize_frame_start_delay(const struct 
intel_crtc_state *crtc val = intel_de_read(dev_priv, reg); val &= ~PIPECONF_FRAME_START_DELAY_MASK; - val |= PIPECONF_FRAME_START_DELAY(0); + val |= PIPECONF_FRAME_START_DELAY(dev_priv->framestart_delay - 1); intel_de_write(dev_priv, reg, val); } @@ -17508,7 +15226,7 @@ static void intel_sanitize_frame_start_delay(const struct intel_crtc_state *crtc val = intel_de_read(dev_priv, reg); val &= ~TRANS_FRAME_START_DELAY_MASK; - val |= TRANS_FRAME_START_DELAY(0); + val |= TRANS_FRAME_START_DELAY(dev_priv->framestart_delay - 1); intel_de_write(dev_priv, reg, val); } else { enum pipe pch_transcoder = intel_crtc_pch_transcoder(crtc); @@ -17517,7 +15235,7 @@ static void intel_sanitize_frame_start_delay(const struct intel_crtc_state *crtc val = intel_de_read(dev_priv, reg); val &= ~TRANS_CHICKEN2_FRAME_START_DELAY_MASK; - val |= TRANS_CHICKEN2_FRAME_START_DELAY(0); + val |= TRANS_CHICKEN2_FRAME_START_DELAY(dev_priv->framestart_delay - 1); intel_de_write(dev_priv, reg, val); } } diff --git a/drivers/gpu/drm/i915/display/intel_display.h b/drivers/gpu/drm/i915/display/intel_display.h index 7ddbc00a0f41..64ffa34544a7 100644 --- a/drivers/gpu/drm/i915/display/intel_display.h +++ b/drivers/gpu/drm/i915/display/intel_display.h @@ -546,7 +546,6 @@ unsigned int intel_rotation_info_size(const struct intel_rotation_info *rot_info unsigned int intel_remapped_info_size(const struct intel_remapped_info *rem_info); bool intel_has_pending_fb_unpin(struct drm_i915_private *dev_priv); int intel_display_suspend(struct drm_device *dev); -void intel_pps_unlock_regs_wa(struct drm_i915_private *dev_priv); void intel_encoder_destroy(struct drm_encoder *encoder); struct drm_display_mode * intel_encoder_current_mode(struct intel_encoder *encoder); @@ -643,7 +642,7 @@ void intel_display_print_error_state(struct drm_i915_error_state_buf *e, bool intel_format_info_is_yuv_semiplanar(const struct drm_format_info *info, - uint64_t modifier); + u64 modifier); int intel_plane_compute_gtt(struct intel_plane_state *plane_state); u32 intel_plane_compute_aligned_offset(int *x, int *y, @@ -651,6 +650,9 @@ u32 intel_plane_compute_aligned_offset(int *x, int *y, int color_plane); int intel_plane_pin_fb(struct intel_plane_state *plane_state); void intel_plane_unpin_fb(struct intel_plane_state *old_plane_state); +struct intel_encoder * +intel_get_crtc_new_encoder(const struct intel_atomic_state *state, + const struct intel_crtc_state *crtc_state); /* modesetting */ void intel_modeset_init_hw(struct drm_i915_private *i915); diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c b/drivers/gpu/drm/i915/display/intel_display_debugfs.c index cd7e5519ee7d..d62b18d5ecd8 100644 --- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c +++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c @@ -1139,7 +1139,6 @@ static ssize_t i915_ipc_status_write(struct file *file, const char __user *ubuf, if (!dev_priv->ipc_enabled && enable) drm_info(&dev_priv->drm, "Enabling IPC: WM will be proper only after next commit\n"); - dev_priv->wm.distrust_bios_wm = true; dev_priv->ipc_enabled = enable; intel_enable_ipc(dev_priv); } @@ -2155,13 +2154,13 @@ static int i915_panel_show(struct seq_file *m, void *data) return -ENODEV; seq_printf(m, "Panel power up delay: %d\n", - intel_dp->panel_power_up_delay); + intel_dp->pps.panel_power_up_delay); seq_printf(m, "Panel power down delay: %d\n", - intel_dp->panel_power_down_delay); + intel_dp->pps.panel_power_down_delay); seq_printf(m, "Backlight on delay: %d\n", - intel_dp->backlight_on_delay); + 
intel_dp->pps.backlight_on_delay); seq_printf(m, "Backlight off delay: %d\n", - intel_dp->backlight_off_delay); + intel_dp->pps.backlight_off_delay); return 0; } diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c index d52374f01316..c11c37c65d86 100644 --- a/drivers/gpu/drm/i915/display/intel_display_power.c +++ b/drivers/gpu/drm/i915/display/intel_display_power.c @@ -4,7 +4,6 @@ */ #include "display/intel_crt.h" -#include "display/intel_dp.h" #include "i915_drv.h" #include "i915_irq.h" @@ -16,6 +15,7 @@ #include "intel_dpio_phy.h" #include "intel_hotplug.h" #include "intel_pm.h" +#include "intel_pps.h" #include "intel_sideband.h" #include "intel_tc.h" #include "intel_vga.h" @@ -936,7 +936,7 @@ static void bxt_enable_dc9(struct drm_i915_private *dev_priv) * because PPS registers are always on. */ if (!HAS_PCH_SPLIT(dev_priv)) - intel_power_sequencer_reset(dev_priv); + intel_pps_reset_all(dev_priv); gen9_set_dc_state(dev_priv, DC_STATE_EN_DC9); } @@ -1446,7 +1446,7 @@ static void vlv_display_power_well_deinit(struct drm_i915_private *dev_priv) /* make sure we're done processing display irqs */ intel_synchronize_irq(dev_priv); - intel_power_sequencer_reset(dev_priv); + intel_pps_reset_all(dev_priv); /* Prevent us from re-enabling polling on accident in late suspend */ if (!dev_priv->drm.dev->power.is_suspended) diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h index 9dfad41eb3dc..39397748b4b0 100644 --- a/drivers/gpu/drm/i915/display/intel_display_types.h +++ b/drivers/gpu/drm/i915/display/intel_display_types.h @@ -228,7 +228,7 @@ struct intel_encoder { struct intel_panel_bl_funcs { /* Connector and platform specific backlight functions */ int (*setup)(struct intel_connector *connector, enum pipe pipe); - u32 (*get)(struct intel_connector *connector); + u32 (*get)(struct intel_connector *connector, enum pipe pipe); void (*set)(const struct drm_connector_state *conn_state, u32 level); void (*disable)(const struct drm_connector_state *conn_state, u32 level); void (*enable)(const struct intel_crtc_state *crtc_state, @@ -252,17 +252,28 @@ struct intel_panel { bool alternate_pwm_increment; /* lpt+ */ /* PWM chip */ + u32 pwm_level_min; + u32 pwm_level_max; + bool pwm_enabled; bool util_pin_active_low; /* bxt+ */ u8 controller; /* bxt+ only */ struct pwm_device *pwm; struct pwm_state pwm_state; /* DPCD backlight */ - u8 pwmgen_bit_count; + union { + struct { + u8 pwmgen_bit_count; + } vesa; + struct { + bool sdr_uses_aux; + } intel; + } edp; struct backlight_device *device; const struct intel_panel_bl_funcs *funcs; + const struct intel_panel_bl_funcs *pwm_funcs; void (*power)(struct intel_connector *, bool enable); } backlight; }; @@ -343,6 +354,10 @@ struct intel_hdcp_shim { enum transcoder cpu_transcoder, bool enable); + /* Enable/Disable stream encryption on DP MST Transport Link */ + int (*stream_encryption)(struct intel_connector *connector, + bool enable); + /* Ensures the link is still protected */ bool (*check_link)(struct intel_digital_port *dig_port, struct intel_connector *connector); @@ -374,8 +389,13 @@ struct intel_hdcp_shim { int (*config_stream_type)(struct intel_digital_port *dig_port, bool is_repeater, u8 type); + /* Enable/Disable HDCP 2.2 stream encryption on DP MST Transport Link */ + int (*stream_2_2_encryption)(struct intel_connector *connector, + bool enable); + /* HDCP2.2 Link Integrity Check */ - int (*check_2_2_link)(struct 
intel_digital_port *dig_port); + int (*check_2_2_link)(struct intel_digital_port *dig_port, + struct intel_connector *connector); }; struct intel_hdcp { @@ -402,7 +422,6 @@ struct intel_hdcp { * content can flow only through a link protected by HDCP2.2. */ u8 content_type; - struct hdcp_port_data port_data; bool is_paired; bool is_repeater; @@ -436,6 +455,8 @@ struct intel_hdcp { * Hence caching the transcoder here. */ enum transcoder cpu_transcoder; + /* Only used for DP MST stream encryption */ + enum transcoder stream_transcoder; }; struct intel_connector { @@ -535,7 +556,7 @@ struct intel_plane_state { struct drm_framebuffer *fb; u16 alpha; - uint16_t pixel_blend_mode; + u16 pixel_blend_mode; unsigned int rotation; enum drm_color_encoding color_encoding; enum drm_color_range color_range; @@ -610,6 +631,9 @@ struct intel_plane_state { struct drm_intel_sprite_colorkey ckey; struct drm_rect psr2_sel_fetch_area; + + /* Clear Color Value */ + u64 ccval; }; struct intel_initial_plane_config { @@ -669,6 +693,8 @@ struct intel_crtc_scaler_state { #define I915_MODE_FLAG_DSI_USE_TE1 (1<<4) /* Flag to indicate mipi dsi periodic command mode where we do not get TE */ #define I915_MODE_FLAG_DSI_PERIODIC_CMD_MODE (1<<5) +/* Do tricks to make vblank timestamps sane with VRR? */ +#define I915_MODE_FLAG_VRR (1<<6) struct intel_wm_level { bool enable; @@ -1127,6 +1153,13 @@ struct intel_crtc_state { struct intel_dsb *dsb; u32 psr2_man_track_ctl; + + /* Variable Refresh Rate state */ + struct { + bool enable; + u8 pipeline_full; + u16 flipline, vmin, vmax; + } vrr; }; enum intel_pipe_crc_source { @@ -1169,6 +1202,8 @@ struct intel_crtc { /* I915_MODE_FLAG_* */ u8 mode_flags; + u16 vmax_vblank_start; + struct intel_display_power_domain_set enabled_power_domains; struct intel_overlay *overlay; @@ -1221,6 +1256,7 @@ struct intel_plane { enum pipe pipe; bool has_fbc; bool has_ccs; + bool need_async_flip_disable_wa; u32 frontbuffer_bit; struct { @@ -1257,7 +1293,10 @@ struct intel_plane { const struct intel_plane_state *plane_state); void (*async_flip)(struct intel_plane *plane, const struct intel_crtc_state *crtc_state, - const struct intel_plane_state *plane_state); + const struct intel_plane_state *plane_state, + bool async_flip); + void (*enable_flip_done)(struct intel_plane *plane); + void (*disable_flip_done)(struct intel_plane *plane); }; struct intel_watermark_params { @@ -1344,6 +1383,38 @@ struct intel_dp_pcon_frl { int trained_rate_gbps; }; +struct intel_pps { + int panel_power_up_delay; + int panel_power_down_delay; + int panel_power_cycle_delay; + int backlight_on_delay; + int backlight_off_delay; + struct delayed_work panel_vdd_work; + bool want_panel_vdd; + unsigned long last_power_on; + unsigned long last_backlight_off; + ktime_t panel_power_off_time; + intel_wakeref_t vdd_wakeref; + + /* + * Pipe whose power sequencer is currently locked into + * this port. Only relevant on VLV/CHV. + */ + enum pipe pps_pipe; + /* + * Pipe currently driving the port. Used for preventing + * the use of the PPS for any pipe currently driving + * external DP as that will mess things up on VLV. + */ + enum pipe active_pipe; + /* + * Set if the sequencer may be reset due to a power transition, + * requiring a reinitialization. Only relevant on BXT. 
+ */ + bool pps_reset; + struct edp_power_seq pps_delays; +}; + struct intel_dp { i915_reg_t output_reg; u32 DP; @@ -1380,39 +1451,11 @@ struct intel_dp { int max_link_rate; /* sink or branch descriptor */ struct drm_dp_desc desc; - u32 edid_quirks; struct drm_dp_aux aux; u32 aux_busy_last_status; u8 train_set[4]; - int panel_power_up_delay; - int panel_power_down_delay; - int panel_power_cycle_delay; - int backlight_on_delay; - int backlight_off_delay; - struct delayed_work panel_vdd_work; - bool want_panel_vdd; - unsigned long last_power_on; - unsigned long last_backlight_off; - ktime_t panel_power_off_time; - intel_wakeref_t vdd_wakeref; - /* - * Pipe whose power sequencer is currently locked into - * this port. Only relevant on VLV/CHV. - */ - enum pipe pps_pipe; - /* - * Pipe currently driving the port. Used for preventing - * the use of the PPS for any pipe currentrly driving - * external DP as that will mess things up on VLV. - */ - enum pipe active_pipe; - /* - * Set if the sequencer may be reset due to a power transition, - * requiring a reinitialization. Only relevant on BXT. - */ - bool pps_reset; - struct edp_power_seq pps_delays; + struct intel_pps pps; bool can_mst; /* this port supports mst */ bool is_mst; @@ -1511,10 +1554,14 @@ struct intel_digital_port { enum phy_fia tc_phy_fia; u8 tc_phy_fia_idx; - /* protects num_hdcp_streams reference count */ + /* protects num_hdcp_streams reference count, hdcp_port_data and hdcp_auth_status */ struct mutex hdcp_mutex; /* the number of pipes using HDCP signalling out of this port */ unsigned int num_hdcp_streams; + /* port HDCP auth status */ + bool hdcp_auth_status; + /* HDCP port data need to pass to security f/w */ + struct hdcp_port_data hdcp_port_data; void (*write_infoframe)(struct intel_encoder *encoder, const struct intel_crtc_state *crtc_state, @@ -1825,4 +1872,26 @@ to_intel_frontbuffer(struct drm_framebuffer *fb) return fb ? 
to_intel_framebuffer(fb)->frontbuffer : NULL; } +static inline bool intel_panel_use_ssc(struct drm_i915_private *dev_priv) +{ + if (dev_priv->params.panel_use_ssc >= 0) + return dev_priv->params.panel_use_ssc != 0; + return dev_priv->vbt.lvds_use_ssc + && !(dev_priv->quirks & QUIRK_LVDS_SSC_DISABLE); +} + +static inline u32 i9xx_dpll_compute_fp(struct dpll *dpll) +{ + return dpll->n << 16 | dpll->m1 << 8 | dpll->m2; +} + +static inline u32 intel_fdi_link_freq(struct drm_i915_private *dev_priv, + const struct intel_crtc_state *pipe_config) +{ + if (HAS_DDI(dev_priv)) + return pipe_config->port_clock; /* SPLL */ + else + return dev_priv->fdi_pll_freq; +} + #endif /* __INTEL_DISPLAY_TYPES_H__ */ diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c index 469e765a1b7b..8c12d5375607 100644 --- a/drivers/gpu/drm/i915/display/intel_dp.c +++ b/drivers/gpu/drm/i915/display/intel_dp.c @@ -41,13 +41,13 @@ #include "i915_debugfs.h" #include "i915_drv.h" -#include "i915_trace.h" #include "intel_atomic.h" #include "intel_audio.h" #include "intel_connector.h" #include "intel_ddi.h" #include "intel_display_types.h" #include "intel_dp.h" +#include "intel_dp_aux.h" #include "intel_dp_link_training.h" #include "intel_dp_mst.h" #include "intel_dpio_phy.h" @@ -58,10 +58,12 @@ #include "intel_lspcon.h" #include "intel_lvds.h" #include "intel_panel.h" +#include "intel_pps.h" #include "intel_psr.h" #include "intel_sideband.h" #include "intel_tc.h" #include "intel_vdsc.h" +#include "intel_vrr.h" #define DP_DPRX_ESI_LEN 14 @@ -121,6 +123,11 @@ static const struct dp_link_dpll chv_dpll[] = { { .p1 = 4, .p2 = 1, .n = 1, .m1 = 2, .m2 = 0x6c00000 } }, }; +const struct dpll *vlv_get_dpll(struct drm_i915_private *i915) +{ + return IS_CHERRYVIEW(i915) ? 
&chv_dpll[0].dpll : &vlv_dpll[0].dpll; +} + /* Constants for DP DSC configurations */ static const u8 valid_dsc_bpp[] = {6, 8, 10, 12, 15}; @@ -145,12 +152,6 @@ bool intel_dp_is_edp(struct intel_dp *intel_dp) static void intel_dp_link_down(struct intel_encoder *encoder, const struct intel_crtc_state *old_crtc_state); -static bool edp_panel_vdd_on(struct intel_dp *intel_dp); -static void edp_panel_vdd_off(struct intel_dp *intel_dp, bool sync); -static void vlv_init_panel_power_sequencer(struct intel_encoder *encoder, - const struct intel_crtc_state *crtc_state); -static void vlv_steal_power_sequencer(struct drm_i915_private *dev_priv, - enum pipe pipe); static void intel_dp_unset_edid(struct intel_dp *intel_dp); /* update sink rates from dpcd */ @@ -162,8 +163,7 @@ static void intel_dp_set_sink_rates(struct intel_dp *intel_dp) int i, max_rate; int max_lttpr_rate; - if (drm_dp_has_quirk(&intel_dp->desc, 0, - DP_DPCD_QUIRK_CAN_DO_MAX_LINK_RATE_3_24_GBPS)) { + if (drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_CAN_DO_MAX_LINK_RATE_3_24_GBPS)) { /* Needed, e.g., for Apple MBP 2017, 15 inch eDP Retina panel */ static const int quirk_rates[] = { 162000, 270000, 324000 }; @@ -863,1125 +863,6 @@ intel_dp_mode_valid(struct drm_connector *connector, return intel_mode_valid_max_plane_size(dev_priv, mode, bigjoiner); } -u32 intel_dp_pack_aux(const u8 *src, int src_bytes) -{ - int i; - u32 v = 0; - - if (src_bytes > 4) - src_bytes = 4; - for (i = 0; i < src_bytes; i++) - v |= ((u32)src[i]) << ((3 - i) * 8); - return v; -} - -static void intel_dp_unpack_aux(u32 src, u8 *dst, int dst_bytes) -{ - int i; - if (dst_bytes > 4) - dst_bytes = 4; - for (i = 0; i < dst_bytes; i++) - dst[i] = src >> ((3-i) * 8); -} - -static void -intel_dp_init_panel_power_sequencer(struct intel_dp *intel_dp); -static void -intel_dp_init_panel_power_sequencer_registers(struct intel_dp *intel_dp, - bool force_disable_vdd); -static void -intel_dp_pps_init(struct intel_dp *intel_dp); - -static intel_wakeref_t -pps_lock(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - intel_wakeref_t wakeref; - - /* - * See intel_power_sequencer_reset() why we need - * a power domain reference here. 
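The intel_dp_pack_aux()/intel_dp_unpack_aux() helpers removed above (the AUX code is being split out, note the new intel_dp_aux.h include) marshal up to four message bytes into a 32-bit data register, most significant byte first. A self-contained userspace restatement of the same logic, for reference:

#include <stdint.h>

static uint32_t pack_aux(const uint8_t *src, int src_bytes)
{
        uint32_t v = 0;
        int i;

        if (src_bytes > 4)
                src_bytes = 4;  /* one data register holds at most 4 bytes */
        for (i = 0; i < src_bytes; i++)
                v |= ((uint32_t)src[i]) << ((3 - i) * 8);  /* MSB first */
        return v;
}

static void unpack_aux(uint32_t src, uint8_t *dst, int dst_bytes)
{
        int i;

        if (dst_bytes > 4)
                dst_bytes = 4;
        for (i = 0; i < dst_bytes; i++)
                dst[i] = src >> ((3 - i) * 8);  /* mirror of pack_aux() */
}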
- */ - wakeref = intel_display_power_get(dev_priv, POWER_DOMAIN_DISPLAY_CORE); - mutex_lock(&dev_priv->pps_mutex); - - return wakeref; -} - -static intel_wakeref_t -pps_unlock(struct intel_dp *intel_dp, intel_wakeref_t wakeref) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - - mutex_unlock(&dev_priv->pps_mutex); - intel_display_power_put(dev_priv, POWER_DOMAIN_DISPLAY_CORE, wakeref); - return 0; -} - -#define with_pps_lock(dp, wf) \ - for ((wf) = pps_lock(dp); (wf); (wf) = pps_unlock((dp), (wf))) - -static void -vlv_power_sequencer_kick(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - enum pipe pipe = intel_dp->pps_pipe; - bool pll_enabled, release_cl_override = false; - enum dpio_phy phy = DPIO_PHY(pipe); - enum dpio_channel ch = vlv_pipe_to_channel(pipe); - u32 DP; - - if (drm_WARN(&dev_priv->drm, - intel_de_read(dev_priv, intel_dp->output_reg) & DP_PORT_EN, - "skipping pipe %c power sequencer kick due to [ENCODER:%d:%s] being active\n", - pipe_name(pipe), dig_port->base.base.base.id, - dig_port->base.base.name)) - return; - - drm_dbg_kms(&dev_priv->drm, - "kicking pipe %c power sequencer for [ENCODER:%d:%s]\n", - pipe_name(pipe), dig_port->base.base.base.id, - dig_port->base.base.name); - - /* Preserve the BIOS-computed detected bit. This is - * supposed to be read-only. - */ - DP = intel_de_read(dev_priv, intel_dp->output_reg) & DP_DETECTED; - DP |= DP_VOLTAGE_0_4 | DP_PRE_EMPHASIS_0; - DP |= DP_PORT_WIDTH(1); - DP |= DP_LINK_TRAIN_PAT_1; - - if (IS_CHERRYVIEW(dev_priv)) - DP |= DP_PIPE_SEL_CHV(pipe); - else - DP |= DP_PIPE_SEL(pipe); - - pll_enabled = intel_de_read(dev_priv, DPLL(pipe)) & DPLL_VCO_ENABLE; - - /* - * The DPLL for the pipe must be enabled for this to work. - * So enable temporarily it if it's not already enabled. - */ - if (!pll_enabled) { - release_cl_override = IS_CHERRYVIEW(dev_priv) && - !chv_phy_powergate_ch(dev_priv, phy, ch, true); - - if (vlv_force_pll_on(dev_priv, pipe, IS_CHERRYVIEW(dev_priv) ? - &chv_dpll[0].dpll : &vlv_dpll[0].dpll)) { - drm_err(&dev_priv->drm, - "Failed to force on pll for pipe %c!\n", - pipe_name(pipe)); - return; - } - } - - /* - * Similar magic as in intel_dp_enable_port(). - * We _must_ do this port enable + disable trick - * to make this power sequencer lock onto the port. - * Otherwise even VDD force bit won't work. - */ - intel_de_write(dev_priv, intel_dp->output_reg, DP); - intel_de_posting_read(dev_priv, intel_dp->output_reg); - - intel_de_write(dev_priv, intel_dp->output_reg, DP | DP_PORT_EN); - intel_de_posting_read(dev_priv, intel_dp->output_reg); - - intel_de_write(dev_priv, intel_dp->output_reg, DP & ~DP_PORT_EN); - intel_de_posting_read(dev_priv, intel_dp->output_reg); - - if (!pll_enabled) { - vlv_force_pll_off(dev_priv, pipe); - - if (release_cl_override) - chv_phy_powergate_ch(dev_priv, phy, ch, false); - } -} - -static enum pipe vlv_find_free_pps(struct drm_i915_private *dev_priv) -{ - struct intel_encoder *encoder; - unsigned int pipes = (1 << PIPE_A) | (1 << PIPE_B); - - /* - * We don't have power sequencer currently. - * Pick one that's not used by other ports. 
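The selection logic in the removed vlv_find_free_pps() is a small bitmask exercise: start from the candidate pipes, clear every pipe already claimed by a sequencer or an active port, and take the lowest bit left. A standalone sketch of the same idea (identifiers here are illustrative, not the driver's own):

#include <stdio.h>
#include <strings.h>    /* ffs() */

enum pipe { PIPE_A, PIPE_B, INVALID_PIPE = -1 };

static int find_free_pps(unsigned int used_mask)
{
        unsigned int pipes = (1 << PIPE_A) | (1 << PIPE_B); /* two PPS on VLV/CHV */

        pipes &= ~used_mask;            /* drop sequencers already spoken for */
        if (pipes == 0)
                return INVALID_PIPE;    /* caller falls back to stealing one */
        return ffs(pipes) - 1;          /* lowest free pipe wins */
}

int main(void)
{
        printf("%d\n", find_free_pps(1 << PIPE_A)); /* prints 1, i.e. PIPE_B */
        return 0;
}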
- */ - for_each_intel_dp(&dev_priv->drm, encoder) { - struct intel_dp *intel_dp = enc_to_intel_dp(encoder); - - if (encoder->type == INTEL_OUTPUT_EDP) { - drm_WARN_ON(&dev_priv->drm, - intel_dp->active_pipe != INVALID_PIPE && - intel_dp->active_pipe != - intel_dp->pps_pipe); - - if (intel_dp->pps_pipe != INVALID_PIPE) - pipes &= ~(1 << intel_dp->pps_pipe); - } else { - drm_WARN_ON(&dev_priv->drm, - intel_dp->pps_pipe != INVALID_PIPE); - - if (intel_dp->active_pipe != INVALID_PIPE) - pipes &= ~(1 << intel_dp->active_pipe); - } - } - - if (pipes == 0) - return INVALID_PIPE; - - return ffs(pipes) - 1; -} - -static enum pipe -vlv_power_sequencer_pipe(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - enum pipe pipe; - - lockdep_assert_held(&dev_priv->pps_mutex); - - /* We should never land here with regular DP ports */ - drm_WARN_ON(&dev_priv->drm, !intel_dp_is_edp(intel_dp)); - - drm_WARN_ON(&dev_priv->drm, intel_dp->active_pipe != INVALID_PIPE && - intel_dp->active_pipe != intel_dp->pps_pipe); - - if (intel_dp->pps_pipe != INVALID_PIPE) - return intel_dp->pps_pipe; - - pipe = vlv_find_free_pps(dev_priv); - - /* - * Didn't find one. This should not happen since there - * are two power sequencers and up to two eDP ports. - */ - if (drm_WARN_ON(&dev_priv->drm, pipe == INVALID_PIPE)) - pipe = PIPE_A; - - vlv_steal_power_sequencer(dev_priv, pipe); - intel_dp->pps_pipe = pipe; - - drm_dbg_kms(&dev_priv->drm, - "picked pipe %c power sequencer for [ENCODER:%d:%s]\n", - pipe_name(intel_dp->pps_pipe), - dig_port->base.base.base.id, - dig_port->base.base.name); - - /* init power sequencer on this pipe and port */ - intel_dp_init_panel_power_sequencer(intel_dp); - intel_dp_init_panel_power_sequencer_registers(intel_dp, true); - - /* - * Even vdd force doesn't work until we've made - * the power sequencer lock in on the port. - */ - vlv_power_sequencer_kick(intel_dp); - - return intel_dp->pps_pipe; -} - -static int -bxt_power_sequencer_idx(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - int backlight_controller = dev_priv->vbt.backlight.controller; - - lockdep_assert_held(&dev_priv->pps_mutex); - - /* We should never land here with regular DP ports */ - drm_WARN_ON(&dev_priv->drm, !intel_dp_is_edp(intel_dp)); - - if (!intel_dp->pps_reset) - return backlight_controller; - - intel_dp->pps_reset = false; - - /* - * Only the HW needs to be reprogrammed, the SW state is fixed and - * has been setup during connector init. 
- */ - intel_dp_init_panel_power_sequencer_registers(intel_dp, false); - - return backlight_controller; -} - -typedef bool (*vlv_pipe_check)(struct drm_i915_private *dev_priv, - enum pipe pipe); - -static bool vlv_pipe_has_pp_on(struct drm_i915_private *dev_priv, - enum pipe pipe) -{ - return intel_de_read(dev_priv, PP_STATUS(pipe)) & PP_ON; -} - -static bool vlv_pipe_has_vdd_on(struct drm_i915_private *dev_priv, - enum pipe pipe) -{ - return intel_de_read(dev_priv, PP_CONTROL(pipe)) & EDP_FORCE_VDD; -} - -static bool vlv_pipe_any(struct drm_i915_private *dev_priv, - enum pipe pipe) -{ - return true; -} - -static enum pipe -vlv_initial_pps_pipe(struct drm_i915_private *dev_priv, - enum port port, - vlv_pipe_check pipe_check) -{ - enum pipe pipe; - - for (pipe = PIPE_A; pipe <= PIPE_B; pipe++) { - u32 port_sel = intel_de_read(dev_priv, PP_ON_DELAYS(pipe)) & - PANEL_PORT_SELECT_MASK; - - if (port_sel != PANEL_PORT_SELECT_VLV(port)) - continue; - - if (!pipe_check(dev_priv, pipe)) - continue; - - return pipe; - } - - return INVALID_PIPE; -} - -static void -vlv_initial_power_sequencer_setup(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - enum port port = dig_port->base.port; - - lockdep_assert_held(&dev_priv->pps_mutex); - - /* try to find a pipe with this port selected */ - /* first pick one where the panel is on */ - intel_dp->pps_pipe = vlv_initial_pps_pipe(dev_priv, port, - vlv_pipe_has_pp_on); - /* didn't find one? pick one where vdd is on */ - if (intel_dp->pps_pipe == INVALID_PIPE) - intel_dp->pps_pipe = vlv_initial_pps_pipe(dev_priv, port, - vlv_pipe_has_vdd_on); - /* didn't find one? pick one with just the correct port */ - if (intel_dp->pps_pipe == INVALID_PIPE) - intel_dp->pps_pipe = vlv_initial_pps_pipe(dev_priv, port, - vlv_pipe_any); - - /* didn't find one? just let vlv_power_sequencer_pipe() pick one when needed */ - if (intel_dp->pps_pipe == INVALID_PIPE) { - drm_dbg_kms(&dev_priv->drm, - "no initial power sequencer for [ENCODER:%d:%s]\n", - dig_port->base.base.base.id, - dig_port->base.base.name); - return; - } - - drm_dbg_kms(&dev_priv->drm, - "initial power sequencer for [ENCODER:%d:%s]: pipe %c\n", - dig_port->base.base.base.id, - dig_port->base.base.name, - pipe_name(intel_dp->pps_pipe)); - - intel_dp_init_panel_power_sequencer(intel_dp); - intel_dp_init_panel_power_sequencer_registers(intel_dp, false); -} - -void intel_power_sequencer_reset(struct drm_i915_private *dev_priv) -{ - struct intel_encoder *encoder; - - if (drm_WARN_ON(&dev_priv->drm, - !(IS_VALLEYVIEW(dev_priv) || - IS_CHERRYVIEW(dev_priv) || - IS_GEN9_LP(dev_priv)))) - return; - - /* - * We can't grab pps_mutex here due to deadlock with power_domain - * mutex when power_domain functions are called while holding pps_mutex. - * That also means that in order to use pps_pipe the code needs to - * hold both a power domain reference and pps_mutex, and the power domain - * reference get/put must be done while _not_ holding pps_mutex. - * pps_{lock,unlock}() do these steps in the correct order, so one - * should use them always. 
- */ - - for_each_intel_dp(&dev_priv->drm, encoder) { - struct intel_dp *intel_dp = enc_to_intel_dp(encoder); - - drm_WARN_ON(&dev_priv->drm, - intel_dp->active_pipe != INVALID_PIPE); - - if (encoder->type != INTEL_OUTPUT_EDP) - continue; - - if (IS_GEN9_LP(dev_priv)) - intel_dp->pps_reset = true; - else - intel_dp->pps_pipe = INVALID_PIPE; - } -} - -struct pps_registers { - i915_reg_t pp_ctrl; - i915_reg_t pp_stat; - i915_reg_t pp_on; - i915_reg_t pp_off; - i915_reg_t pp_div; -}; - -static void intel_pps_get_registers(struct intel_dp *intel_dp, - struct pps_registers *regs) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - int pps_idx = 0; - - memset(regs, 0, sizeof(*regs)); - - if (IS_GEN9_LP(dev_priv)) - pps_idx = bxt_power_sequencer_idx(intel_dp); - else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) - pps_idx = vlv_power_sequencer_pipe(intel_dp); - - regs->pp_ctrl = PP_CONTROL(pps_idx); - regs->pp_stat = PP_STATUS(pps_idx); - regs->pp_on = PP_ON_DELAYS(pps_idx); - regs->pp_off = PP_OFF_DELAYS(pps_idx); - - /* Cycle delay moved from PP_DIVISOR to PP_CONTROL */ - if (IS_GEN9_LP(dev_priv) || INTEL_PCH_TYPE(dev_priv) >= PCH_CNP) - regs->pp_div = INVALID_MMIO_REG; - else - regs->pp_div = PP_DIVISOR(pps_idx); -} - -static i915_reg_t -_pp_ctrl_reg(struct intel_dp *intel_dp) -{ - struct pps_registers regs; - - intel_pps_get_registers(intel_dp, ®s); - - return regs.pp_ctrl; -} - -static i915_reg_t -_pp_stat_reg(struct intel_dp *intel_dp) -{ - struct pps_registers regs; - - intel_pps_get_registers(intel_dp, ®s); - - return regs.pp_stat; -} - -static bool edp_have_panel_power(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - - lockdep_assert_held(&dev_priv->pps_mutex); - - if ((IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) && - intel_dp->pps_pipe == INVALID_PIPE) - return false; - - return (intel_de_read(dev_priv, _pp_stat_reg(intel_dp)) & PP_ON) != 0; -} - -static bool edp_have_panel_vdd(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - - lockdep_assert_held(&dev_priv->pps_mutex); - - if ((IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) && - intel_dp->pps_pipe == INVALID_PIPE) - return false; - - return intel_de_read(dev_priv, _pp_ctrl_reg(intel_dp)) & EDP_FORCE_VDD; -} - -static void -intel_dp_check_edp(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - - if (!intel_dp_is_edp(intel_dp)) - return; - - if (!edp_have_panel_power(intel_dp) && !edp_have_panel_vdd(intel_dp)) { - drm_WARN(&dev_priv->drm, 1, - "eDP powered off while attempting aux channel communication.\n"); - drm_dbg_kms(&dev_priv->drm, "Status 0x%08x Control 0x%08x\n", - intel_de_read(dev_priv, _pp_stat_reg(intel_dp)), - intel_de_read(dev_priv, _pp_ctrl_reg(intel_dp))); - } -} - -static u32 -intel_dp_aux_wait_done(struct intel_dp *intel_dp) -{ - struct drm_i915_private *i915 = dp_to_i915(intel_dp); - i915_reg_t ch_ctl = intel_dp->aux_ch_ctl_reg(intel_dp); - const unsigned int timeout_ms = 10; - u32 status; - bool done; - -#define C (((status = intel_uncore_read_notrace(&i915->uncore, ch_ctl)) & DP_AUX_CH_CTL_SEND_BUSY) == 0) - done = wait_event_timeout(i915->gmbus_wait_queue, C, - msecs_to_jiffies_timeout(timeout_ms)); - - /* just trace the final value */ - trace_i915_reg_rw(false, ch_ctl, status, sizeof(status), true); - - if (!done) - drm_err(&i915->drm, - "%s: did not complete or timeout within %ums (status 0x%08x)\n", - intel_dp->aux.name, timeout_ms, status); -#undef C - 
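The removed intel_dp_aux_wait_done() wraps wait_event_timeout() around a "SEND_BUSY cleared" condition. Stripped of the kernel wait-queue machinery, the control flow is a bounded poll; a simplified userspace model, assuming a memory-mapped status register:

#include <stdbool.h>
#include <stdint.h>
#include <time.h>

static bool wait_aux_done(const volatile uint32_t *status_reg,
                          uint32_t busy_bit, int timeout_ms)
{
        struct timespec one_ms = { .tv_nsec = 1000 * 1000 };

        while (timeout_ms-- > 0) {
                if ((*status_reg & busy_bit) == 0)
                        return true;            /* transfer completed */
                nanosleep(&one_ms, NULL);       /* back off before re-reading */
        }
        return false;                           /* caller logs the timeout */
}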
- return status; -} - -static u32 g4x_get_aux_clock_divider(struct intel_dp *intel_dp, int index) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - - if (index) - return 0; - - /* - * The clock divider is based off the hrawclk, and would like to run at - * 2MHz. So, take the hrawclk value and divide by 2000 and use that - */ - return DIV_ROUND_CLOSEST(RUNTIME_INFO(dev_priv)->rawclk_freq, 2000); -} - -static u32 ilk_get_aux_clock_divider(struct intel_dp *intel_dp, int index) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - u32 freq; - - if (index) - return 0; - - /* - * The clock divider is based off the cdclk or PCH rawclk, and would - * like to run at 2MHz. So, take the cdclk or PCH rawclk value and - * divide by 2000 and use that - */ - if (dig_port->aux_ch == AUX_CH_A) - freq = dev_priv->cdclk.hw.cdclk; - else - freq = RUNTIME_INFO(dev_priv)->rawclk_freq; - return DIV_ROUND_CLOSEST(freq, 2000); -} - -static u32 hsw_get_aux_clock_divider(struct intel_dp *intel_dp, int index) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - - if (dig_port->aux_ch != AUX_CH_A && HAS_PCH_LPT_H(dev_priv)) { - /* Workaround for non-ULT HSW */ - switch (index) { - case 0: return 63; - case 1: return 72; - default: return 0; - } - } - - return ilk_get_aux_clock_divider(intel_dp, index); -} - -static u32 skl_get_aux_clock_divider(struct intel_dp *intel_dp, int index) -{ - /* - * SKL doesn't need us to program the AUX clock divider (Hardware will - * derive the clock from CDCLK automatically). We still implement the - * get_aux_clock_divider vfunc to plug-in into the existing code. - */ - return index ? 
0 : 1; -} - -static u32 g4x_get_aux_send_ctl(struct intel_dp *intel_dp, - int send_bytes, - u32 aux_clock_divider) -{ - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - struct drm_i915_private *dev_priv = - to_i915(dig_port->base.base.dev); - u32 precharge, timeout; - - if (IS_GEN(dev_priv, 6)) - precharge = 3; - else - precharge = 5; - - if (IS_BROADWELL(dev_priv)) - timeout = DP_AUX_CH_CTL_TIME_OUT_600us; - else - timeout = DP_AUX_CH_CTL_TIME_OUT_400us; - - return DP_AUX_CH_CTL_SEND_BUSY | - DP_AUX_CH_CTL_DONE | - DP_AUX_CH_CTL_INTERRUPT | - DP_AUX_CH_CTL_TIME_OUT_ERROR | - timeout | - DP_AUX_CH_CTL_RECEIVE_ERROR | - (send_bytes << DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT) | - (precharge << DP_AUX_CH_CTL_PRECHARGE_2US_SHIFT) | - (aux_clock_divider << DP_AUX_CH_CTL_BIT_CLOCK_2X_SHIFT); -} - -static u32 skl_get_aux_send_ctl(struct intel_dp *intel_dp, - int send_bytes, - u32 unused) -{ - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - struct drm_i915_private *i915 = - to_i915(dig_port->base.base.dev); - enum phy phy = intel_port_to_phy(i915, dig_port->base.port); - u32 ret; - - ret = DP_AUX_CH_CTL_SEND_BUSY | - DP_AUX_CH_CTL_DONE | - DP_AUX_CH_CTL_INTERRUPT | - DP_AUX_CH_CTL_TIME_OUT_ERROR | - DP_AUX_CH_CTL_TIME_OUT_MAX | - DP_AUX_CH_CTL_RECEIVE_ERROR | - (send_bytes << DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT) | - DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(32) | - DP_AUX_CH_CTL_SYNC_PULSE_SKL(32); - - if (intel_phy_is_tc(i915, phy) && - dig_port->tc_mode == TC_PORT_TBT_ALT) - ret |= DP_AUX_CH_CTL_TBT_IO; - - return ret; -} - -static int -intel_dp_aux_xfer(struct intel_dp *intel_dp, - const u8 *send, int send_bytes, - u8 *recv, int recv_size, - u32 aux_send_ctl_flags) -{ - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - struct drm_i915_private *i915 = - to_i915(dig_port->base.base.dev); - struct intel_uncore *uncore = &i915->uncore; - enum phy phy = intel_port_to_phy(i915, dig_port->base.port); - bool is_tc_port = intel_phy_is_tc(i915, phy); - i915_reg_t ch_ctl, ch_data[5]; - u32 aux_clock_divider; - enum intel_display_power_domain aux_domain; - intel_wakeref_t aux_wakeref; - intel_wakeref_t pps_wakeref; - int i, ret, recv_bytes; - int try, clock = 0; - u32 status; - bool vdd; - - ch_ctl = intel_dp->aux_ch_ctl_reg(intel_dp); - for (i = 0; i < ARRAY_SIZE(ch_data); i++) - ch_data[i] = intel_dp->aux_ch_data_reg(intel_dp, i); - - if (is_tc_port) - intel_tc_port_lock(dig_port); - - aux_domain = intel_aux_power_domain(dig_port); - - aux_wakeref = intel_display_power_get(i915, aux_domain); - pps_wakeref = pps_lock(intel_dp); - - /* - * We will be called with VDD already enabled for dpcd/edid/oui reads. - * In such cases we want to leave VDD enabled and it's up to upper layers - * to turn it off. But for eg. i2c-dev access we need to turn it on/off - * ourselves. - */ - vdd = edp_panel_vdd_on(intel_dp); - - /* dp aux is extremely sensitive to irq latency, hence request the - * lowest possible wakeup latency and so prevent the cpu from going into - * deep sleep states. 
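The g4x/ilk clock-divider helpers removed above aim the AUX bit clock at 2 MHz: the source clock is known in kHz, so the divider is that value over 2000, rounded to nearest. A quick arithmetic check (DIV_ROUND_CLOSEST simplified here for unsigned operands):

#include <stdio.h>

#define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))

int main(void)
{
        printf("%u\n", DIV_ROUND_CLOSEST(24000u, 2000u)); /* 24 MHz rawclk -> 12 */
        printf("%u\n", DIV_ROUND_CLOSEST(19200u, 2000u)); /* 19.2 MHz rawclk -> 10 */
        return 0;
}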
- */ - cpu_latency_qos_update_request(&intel_dp->pm_qos, 0); - - intel_dp_check_edp(intel_dp); - - /* Try to wait for any previous AUX channel activity */ - for (try = 0; try < 3; try++) { - status = intel_uncore_read_notrace(uncore, ch_ctl); - if ((status & DP_AUX_CH_CTL_SEND_BUSY) == 0) - break; - msleep(1); - } - /* just trace the final value */ - trace_i915_reg_rw(false, ch_ctl, status, sizeof(status), true); - - if (try == 3) { - const u32 status = intel_uncore_read(uncore, ch_ctl); - - if (status != intel_dp->aux_busy_last_status) { - drm_WARN(&i915->drm, 1, - "%s: not started (status 0x%08x)\n", - intel_dp->aux.name, status); - intel_dp->aux_busy_last_status = status; - } - - ret = -EBUSY; - goto out; - } - - /* Only 5 data registers! */ - if (drm_WARN_ON(&i915->drm, send_bytes > 20 || recv_size > 20)) { - ret = -E2BIG; - goto out; - } - - while ((aux_clock_divider = intel_dp->get_aux_clock_divider(intel_dp, clock++))) { - u32 send_ctl = intel_dp->get_aux_send_ctl(intel_dp, - send_bytes, - aux_clock_divider); - - send_ctl |= aux_send_ctl_flags; - - /* Must try at least 3 times according to DP spec */ - for (try = 0; try < 5; try++) { - /* Load the send data into the aux channel data registers */ - for (i = 0; i < send_bytes; i += 4) - intel_uncore_write(uncore, - ch_data[i >> 2], - intel_dp_pack_aux(send + i, - send_bytes - i)); - - /* Send the command and wait for it to complete */ - intel_uncore_write(uncore, ch_ctl, send_ctl); - - status = intel_dp_aux_wait_done(intel_dp); - - /* Clear done status and any errors */ - intel_uncore_write(uncore, - ch_ctl, - status | - DP_AUX_CH_CTL_DONE | - DP_AUX_CH_CTL_TIME_OUT_ERROR | - DP_AUX_CH_CTL_RECEIVE_ERROR); - - /* DP CTS 1.2 Core Rev 1.1, 4.2.1.1 & 4.2.1.2 - * 400us delay required for errors and timeouts - * Timeout errors from the HW already meet this - * requirement so skip to next iteration - */ - if (status & DP_AUX_CH_CTL_TIME_OUT_ERROR) - continue; - - if (status & DP_AUX_CH_CTL_RECEIVE_ERROR) { - usleep_range(400, 500); - continue; - } - if (status & DP_AUX_CH_CTL_DONE) - goto done; - } - } - - if ((status & DP_AUX_CH_CTL_DONE) == 0) { - drm_err(&i915->drm, "%s: not done (status 0x%08x)\n", - intel_dp->aux.name, status); - ret = -EBUSY; - goto out; - } - -done: - /* Check for timeout or receive error. - * Timeouts occur when the sink is not connected - */ - if (status & DP_AUX_CH_CTL_RECEIVE_ERROR) { - drm_err(&i915->drm, "%s: receive error (status 0x%08x)\n", - intel_dp->aux.name, status); - ret = -EIO; - goto out; - } - - /* Timeouts occur when the device isn't connected, so they're - * "normal" -- don't fill the kernel log with these */ - if (status & DP_AUX_CH_CTL_TIME_OUT_ERROR) { - drm_dbg_kms(&i915->drm, "%s: timeout (status 0x%08x)\n", - intel_dp->aux.name, status); - ret = -ETIMEDOUT; - goto out; - } - - /* Unload any bytes sent back from the other side */ - recv_bytes = ((status & DP_AUX_CH_CTL_MESSAGE_SIZE_MASK) >> - DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT); - - /* - * By BSpec: "Message sizes of 0 or >20 are not allowed." - * We have no idea of what happened so we return -EBUSY so - * drm layer takes care for the necessary retries. 
- */ - if (recv_bytes == 0 || recv_bytes > 20) { - drm_dbg_kms(&i915->drm, - "%s: Forbidden recv_bytes = %d on aux transaction\n", - intel_dp->aux.name, recv_bytes); - ret = -EBUSY; - goto out; - } - - if (recv_bytes > recv_size) - recv_bytes = recv_size; - - for (i = 0; i < recv_bytes; i += 4) - intel_dp_unpack_aux(intel_uncore_read(uncore, ch_data[i >> 2]), - recv + i, recv_bytes - i); - - ret = recv_bytes; -out: - cpu_latency_qos_update_request(&intel_dp->pm_qos, PM_QOS_DEFAULT_VALUE); - - if (vdd) - edp_panel_vdd_off(intel_dp, false); - - pps_unlock(intel_dp, pps_wakeref); - intel_display_power_put_async(i915, aux_domain, aux_wakeref); - - if (is_tc_port) - intel_tc_port_unlock(dig_port); - - return ret; -} - -#define BARE_ADDRESS_SIZE 3 -#define HEADER_SIZE (BARE_ADDRESS_SIZE + 1) - -static void -intel_dp_aux_header(u8 txbuf[HEADER_SIZE], - const struct drm_dp_aux_msg *msg) -{ - txbuf[0] = (msg->request << 4) | ((msg->address >> 16) & 0xf); - txbuf[1] = (msg->address >> 8) & 0xff; - txbuf[2] = msg->address & 0xff; - txbuf[3] = msg->size - 1; -} - -static u32 intel_dp_aux_xfer_flags(const struct drm_dp_aux_msg *msg) -{ - /* - * If we're trying to send the HDCP Aksv, we need to set a the Aksv - * select bit to inform the hardware to send the Aksv after our header - * since we can't access that data from software. - */ - if ((msg->request & ~DP_AUX_I2C_MOT) == DP_AUX_NATIVE_WRITE && - msg->address == DP_AUX_HDCP_AKSV) - return DP_AUX_CH_CTL_AUX_AKSV_SELECT; - - return 0; -} - -static ssize_t -intel_dp_aux_transfer(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg) -{ - struct intel_dp *intel_dp = container_of(aux, struct intel_dp, aux); - struct drm_i915_private *i915 = dp_to_i915(intel_dp); - u8 txbuf[20], rxbuf[20]; - size_t txsize, rxsize; - u32 flags = intel_dp_aux_xfer_flags(msg); - int ret; - - intel_dp_aux_header(txbuf, msg); - - switch (msg->request & ~DP_AUX_I2C_MOT) { - case DP_AUX_NATIVE_WRITE: - case DP_AUX_I2C_WRITE: - case DP_AUX_I2C_WRITE_STATUS_UPDATE: - txsize = msg->size ? HEADER_SIZE + msg->size : BARE_ADDRESS_SIZE; - rxsize = 2; /* 0 or 1 data bytes */ - - if (drm_WARN_ON(&i915->drm, txsize > 20)) - return -E2BIG; - - drm_WARN_ON(&i915->drm, !msg->buffer != !msg->size); - - if (msg->buffer) - memcpy(txbuf + HEADER_SIZE, msg->buffer, msg->size); - - ret = intel_dp_aux_xfer(intel_dp, txbuf, txsize, - rxbuf, rxsize, flags); - if (ret > 0) { - msg->reply = rxbuf[0] >> 4; - - if (ret > 1) { - /* Number of bytes written in a short write. */ - ret = clamp_t(int, rxbuf[1], 0, msg->size); - } else { - /* Return payload size. */ - ret = msg->size; - } - } - break; - - case DP_AUX_NATIVE_READ: - case DP_AUX_I2C_READ: - txsize = msg->size ? HEADER_SIZE : BARE_ADDRESS_SIZE; - rxsize = msg->size + 1; - - if (drm_WARN_ON(&i915->drm, rxsize > 20)) - return -E2BIG; - - ret = intel_dp_aux_xfer(intel_dp, txbuf, txsize, - rxbuf, rxsize, flags); - if (ret > 0) { - msg->reply = rxbuf[0] >> 4; - /* - * Assume happy day, and copy the data. The caller is - * expected to check msg->reply before touching it. - * - * Return payload size. 
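The removed intel_dp_aux_header() packs the four-byte native-AUX header: the request nibble plus the top four address bits, then the middle and low address bytes, then length minus one, as the DP spec encodes it. Restated standalone:

#include <stddef.h>
#include <stdint.h>

static void aux_header(uint8_t txbuf[4], uint8_t request,
                       uint32_t address, size_t size)
{
        txbuf[0] = (request << 4) | ((address >> 16) & 0xf); /* request + addr[19:16] */
        txbuf[1] = (address >> 8) & 0xff;                    /* addr[15:8] */
        txbuf[2] = address & 0xff;                           /* addr[7:0] */
        txbuf[3] = size - 1;                                 /* DP encodes length - 1 */
}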
- */ - ret--; - memcpy(msg->buffer, rxbuf + 1, ret); - } - break; - - default: - ret = -EINVAL; - break; - } - - return ret; -} - - -static i915_reg_t g4x_aux_ctl_reg(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - enum aux_ch aux_ch = dig_port->aux_ch; - - switch (aux_ch) { - case AUX_CH_B: - case AUX_CH_C: - case AUX_CH_D: - return DP_AUX_CH_CTL(aux_ch); - default: - MISSING_CASE(aux_ch); - return DP_AUX_CH_CTL(AUX_CH_B); - } -} - -static i915_reg_t g4x_aux_data_reg(struct intel_dp *intel_dp, int index) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - enum aux_ch aux_ch = dig_port->aux_ch; - - switch (aux_ch) { - case AUX_CH_B: - case AUX_CH_C: - case AUX_CH_D: - return DP_AUX_CH_DATA(aux_ch, index); - default: - MISSING_CASE(aux_ch); - return DP_AUX_CH_DATA(AUX_CH_B, index); - } -} - -static i915_reg_t ilk_aux_ctl_reg(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - enum aux_ch aux_ch = dig_port->aux_ch; - - switch (aux_ch) { - case AUX_CH_A: - return DP_AUX_CH_CTL(aux_ch); - case AUX_CH_B: - case AUX_CH_C: - case AUX_CH_D: - return PCH_DP_AUX_CH_CTL(aux_ch); - default: - MISSING_CASE(aux_ch); - return DP_AUX_CH_CTL(AUX_CH_A); - } -} - -static i915_reg_t ilk_aux_data_reg(struct intel_dp *intel_dp, int index) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - enum aux_ch aux_ch = dig_port->aux_ch; - - switch (aux_ch) { - case AUX_CH_A: - return DP_AUX_CH_DATA(aux_ch, index); - case AUX_CH_B: - case AUX_CH_C: - case AUX_CH_D: - return PCH_DP_AUX_CH_DATA(aux_ch, index); - default: - MISSING_CASE(aux_ch); - return DP_AUX_CH_DATA(AUX_CH_A, index); - } -} - -static i915_reg_t skl_aux_ctl_reg(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - enum aux_ch aux_ch = dig_port->aux_ch; - - switch (aux_ch) { - case AUX_CH_A: - case AUX_CH_B: - case AUX_CH_C: - case AUX_CH_D: - case AUX_CH_E: - case AUX_CH_F: - return DP_AUX_CH_CTL(aux_ch); - default: - MISSING_CASE(aux_ch); - return DP_AUX_CH_CTL(AUX_CH_A); - } -} - -static i915_reg_t skl_aux_data_reg(struct intel_dp *intel_dp, int index) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - enum aux_ch aux_ch = dig_port->aux_ch; - - switch (aux_ch) { - case AUX_CH_A: - case AUX_CH_B: - case AUX_CH_C: - case AUX_CH_D: - case AUX_CH_E: - case AUX_CH_F: - return DP_AUX_CH_DATA(aux_ch, index); - default: - MISSING_CASE(aux_ch); - return DP_AUX_CH_DATA(AUX_CH_A, index); - } -} - -static i915_reg_t tgl_aux_ctl_reg(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - enum aux_ch aux_ch = dig_port->aux_ch; - - switch (aux_ch) { - case AUX_CH_A: - case AUX_CH_B: - case AUX_CH_C: - case AUX_CH_USBC1: - case AUX_CH_USBC2: - case AUX_CH_USBC3: - case AUX_CH_USBC4: - case AUX_CH_USBC5: - case AUX_CH_USBC6: - return DP_AUX_CH_CTL(aux_ch); - default: - MISSING_CASE(aux_ch); - return DP_AUX_CH_CTL(AUX_CH_A); - } -} - -static i915_reg_t tgl_aux_data_reg(struct intel_dp *intel_dp, int index) -{ - struct 
drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - enum aux_ch aux_ch = dig_port->aux_ch; - - switch (aux_ch) { - case AUX_CH_A: - case AUX_CH_B: - case AUX_CH_C: - case AUX_CH_USBC1: - case AUX_CH_USBC2: - case AUX_CH_USBC3: - case AUX_CH_USBC4: - case AUX_CH_USBC5: - case AUX_CH_USBC6: - return DP_AUX_CH_DATA(aux_ch, index); - default: - MISSING_CASE(aux_ch); - return DP_AUX_CH_DATA(AUX_CH_A, index); - } -} - -static void -intel_dp_aux_fini(struct intel_dp *intel_dp) -{ - if (cpu_latency_qos_request_active(&intel_dp->pm_qos)) - cpu_latency_qos_remove_request(&intel_dp->pm_qos); - - kfree(intel_dp->aux.name); -} - -static void -intel_dp_aux_init(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - struct intel_encoder *encoder = &dig_port->base; - enum aux_ch aux_ch = dig_port->aux_ch; - - if (INTEL_GEN(dev_priv) >= 12) { - intel_dp->aux_ch_ctl_reg = tgl_aux_ctl_reg; - intel_dp->aux_ch_data_reg = tgl_aux_data_reg; - } else if (INTEL_GEN(dev_priv) >= 9) { - intel_dp->aux_ch_ctl_reg = skl_aux_ctl_reg; - intel_dp->aux_ch_data_reg = skl_aux_data_reg; - } else if (HAS_PCH_SPLIT(dev_priv)) { - intel_dp->aux_ch_ctl_reg = ilk_aux_ctl_reg; - intel_dp->aux_ch_data_reg = ilk_aux_data_reg; - } else { - intel_dp->aux_ch_ctl_reg = g4x_aux_ctl_reg; - intel_dp->aux_ch_data_reg = g4x_aux_data_reg; - } - - if (INTEL_GEN(dev_priv) >= 9) - intel_dp->get_aux_clock_divider = skl_get_aux_clock_divider; - else if (IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv)) - intel_dp->get_aux_clock_divider = hsw_get_aux_clock_divider; - else if (HAS_PCH_SPLIT(dev_priv)) - intel_dp->get_aux_clock_divider = ilk_get_aux_clock_divider; - else - intel_dp->get_aux_clock_divider = g4x_get_aux_clock_divider; - - if (INTEL_GEN(dev_priv) >= 9) - intel_dp->get_aux_send_ctl = skl_get_aux_send_ctl; - else - intel_dp->get_aux_send_ctl = g4x_get_aux_send_ctl; - - drm_dp_aux_init(&intel_dp->aux); - - /* Failure to allocate our preferred name is not critical */ - if (INTEL_GEN(dev_priv) >= 12 && aux_ch >= AUX_CH_USBC1) - intel_dp->aux.name = kasprintf(GFP_KERNEL, "AUX USBC%c/%s", - aux_ch - AUX_CH_USBC1 + '1', - encoder->base.name); - else - intel_dp->aux.name = kasprintf(GFP_KERNEL, "AUX %c/%s", - aux_ch_name(aux_ch), - encoder->base.name); - - intel_dp->aux.transfer = intel_dp_aux_transfer; - cpu_latency_qos_add_request(&intel_dp->pm_qos, PM_QOS_DEFAULT_VALUE); -} - bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp) { int max_rate = intel_dp->source_rates[intel_dp->num_source_rates - 1]; @@ -2844,6 +1725,9 @@ intel_dp_drrs_compute_config(struct intel_dp *intel_dp, struct intel_connector *intel_connector = intel_dp->attached_connector; struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + if (pipe_config->vrr.enable) + return; + /* * DRRS and PSR can't be enable together, so giving preference to PSR * as it allows more power-savings by complete shutting down display, @@ -2876,8 +1760,7 @@ intel_dp_compute_config(struct intel_encoder *encoder, struct intel_connector *intel_connector = intel_dp->attached_connector; struct intel_digital_connector_state *intel_conn_state = to_intel_digital_connector_state(conn_state); - bool constant_n = drm_dp_has_quirk(&intel_dp->desc, 0, - DP_DPCD_QUIRK_CONSTANT_N); + bool constant_n = drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_CONSTANT_N); int ret = 0, output_bpp; if (HAS_PCH_SPLIT(dev_priv) && !HAS_DDI(dev_priv) 
&& port != PORT_A) @@ -2947,6 +1830,7 @@ intel_dp_compute_config(struct intel_encoder *encoder, if (!HAS_DDI(dev_priv)) intel_dp_set_clock(encoder, pipe_config); + intel_vrr_compute_config(pipe_config, conn_state); intel_psr_compute_config(intel_dp, pipe_config); intel_dp_drrs_compute_config(intel_dp, pipe_config, output_bpp, constant_n); @@ -3047,431 +1931,6 @@ static void intel_dp_prepare(struct intel_encoder *encoder, } } -#define IDLE_ON_MASK (PP_ON | PP_SEQUENCE_MASK | 0 | PP_SEQUENCE_STATE_MASK) -#define IDLE_ON_VALUE (PP_ON | PP_SEQUENCE_NONE | 0 | PP_SEQUENCE_STATE_ON_IDLE) - -#define IDLE_OFF_MASK (PP_ON | PP_SEQUENCE_MASK | 0 | 0) -#define IDLE_OFF_VALUE (0 | PP_SEQUENCE_NONE | 0 | 0) - -#define IDLE_CYCLE_MASK (PP_ON | PP_SEQUENCE_MASK | PP_CYCLE_DELAY_ACTIVE | PP_SEQUENCE_STATE_MASK) -#define IDLE_CYCLE_VALUE (0 | PP_SEQUENCE_NONE | 0 | PP_SEQUENCE_STATE_OFF_IDLE) - -static void intel_pps_verify_state(struct intel_dp *intel_dp); - -static void wait_panel_status(struct intel_dp *intel_dp, - u32 mask, - u32 value) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - i915_reg_t pp_stat_reg, pp_ctrl_reg; - - lockdep_assert_held(&dev_priv->pps_mutex); - - intel_pps_verify_state(intel_dp); - - pp_stat_reg = _pp_stat_reg(intel_dp); - pp_ctrl_reg = _pp_ctrl_reg(intel_dp); - - drm_dbg_kms(&dev_priv->drm, - "mask %08x value %08x status %08x control %08x\n", - mask, value, - intel_de_read(dev_priv, pp_stat_reg), - intel_de_read(dev_priv, pp_ctrl_reg)); - - if (intel_de_wait_for_register(dev_priv, pp_stat_reg, - mask, value, 5000)) - drm_err(&dev_priv->drm, - "Panel status timeout: status %08x control %08x\n", - intel_de_read(dev_priv, pp_stat_reg), - intel_de_read(dev_priv, pp_ctrl_reg)); - - drm_dbg_kms(&dev_priv->drm, "Wait complete\n"); -} - -static void wait_panel_on(struct intel_dp *intel_dp) -{ - struct drm_i915_private *i915 = dp_to_i915(intel_dp); - - drm_dbg_kms(&i915->drm, "Wait for panel power on\n"); - wait_panel_status(intel_dp, IDLE_ON_MASK, IDLE_ON_VALUE); -} - -static void wait_panel_off(struct intel_dp *intel_dp) -{ - struct drm_i915_private *i915 = dp_to_i915(intel_dp); - - drm_dbg_kms(&i915->drm, "Wait for panel power off time\n"); - wait_panel_status(intel_dp, IDLE_OFF_MASK, IDLE_OFF_VALUE); -} - -static void wait_panel_power_cycle(struct intel_dp *intel_dp) -{ - struct drm_i915_private *i915 = dp_to_i915(intel_dp); - ktime_t panel_power_on_time; - s64 panel_power_off_duration; - - drm_dbg_kms(&i915->drm, "Wait for panel power cycle\n"); - - /* take the difference of currrent time and panel power off time - * and then make panel wait for t11_t12 if needed. */ - panel_power_on_time = ktime_get_boottime(); - panel_power_off_duration = ktime_ms_delta(panel_power_on_time, intel_dp->panel_power_off_time); - - /* When we disable the VDD override bit last we have to do the manual - * wait. 
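The power-cycle wait being removed here boils down to: measure how long the panel has already been off, and sleep only for whatever is left of the required cycle delay (the eDP t11_t12 parameter). A minimal model of that remainder computation, with plain millisecond integers standing in for ktime:

#include <stdint.h>
#include <stdio.h>

static int64_t remaining_cycle_wait(int64_t now_ms, int64_t power_off_ms,
                                    int64_t cycle_delay_ms)
{
        int64_t off_duration = now_ms - power_off_ms;   /* time already spent off */

        return off_duration < cycle_delay_ms ? cycle_delay_ms - off_duration : 0;
}

int main(void)
{
        /* panel went off 210 ms ago, 500 ms cycle required -> wait 290 ms more */
        printf("%lld\n", (long long)remaining_cycle_wait(1210, 1000, 500));
        return 0;
}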
*/ - if (panel_power_off_duration < (s64)intel_dp->panel_power_cycle_delay) - wait_remaining_ms_from_jiffies(jiffies, - intel_dp->panel_power_cycle_delay - panel_power_off_duration); - - wait_panel_status(intel_dp, IDLE_CYCLE_MASK, IDLE_CYCLE_VALUE); -} - -static void wait_backlight_on(struct intel_dp *intel_dp) -{ - wait_remaining_ms_from_jiffies(intel_dp->last_power_on, - intel_dp->backlight_on_delay); -} - -static void edp_wait_backlight_off(struct intel_dp *intel_dp) -{ - wait_remaining_ms_from_jiffies(intel_dp->last_backlight_off, - intel_dp->backlight_off_delay); -} - -/* Read the current pp_control value, unlocking the register if it - * is locked - */ - -static u32 ilk_get_pp_control(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - u32 control; - - lockdep_assert_held(&dev_priv->pps_mutex); - - control = intel_de_read(dev_priv, _pp_ctrl_reg(intel_dp)); - if (drm_WARN_ON(&dev_priv->drm, !HAS_DDI(dev_priv) && - (control & PANEL_UNLOCK_MASK) != PANEL_UNLOCK_REGS)) { - control &= ~PANEL_UNLOCK_MASK; - control |= PANEL_UNLOCK_REGS; - } - return control; -} - -/* - * Must be paired with edp_panel_vdd_off(). - * Must hold pps_mutex around the whole on/off sequence. - * Can be nested with intel_edp_panel_vdd_{on,off}() calls. - */ -static bool edp_panel_vdd_on(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - u32 pp; - i915_reg_t pp_stat_reg, pp_ctrl_reg; - bool need_to_disable = !intel_dp->want_panel_vdd; - - lockdep_assert_held(&dev_priv->pps_mutex); - - if (!intel_dp_is_edp(intel_dp)) - return false; - - cancel_delayed_work(&intel_dp->panel_vdd_work); - intel_dp->want_panel_vdd = true; - - if (edp_have_panel_vdd(intel_dp)) - return need_to_disable; - - drm_WARN_ON(&dev_priv->drm, intel_dp->vdd_wakeref); - intel_dp->vdd_wakeref = intel_display_power_get(dev_priv, - intel_aux_power_domain(dig_port)); - - drm_dbg_kms(&dev_priv->drm, "Turning [ENCODER:%d:%s] VDD on\n", - dig_port->base.base.base.id, - dig_port->base.base.name); - - if (!edp_have_panel_power(intel_dp)) - wait_panel_power_cycle(intel_dp); - - pp = ilk_get_pp_control(intel_dp); - pp |= EDP_FORCE_VDD; - - pp_stat_reg = _pp_stat_reg(intel_dp); - pp_ctrl_reg = _pp_ctrl_reg(intel_dp); - - intel_de_write(dev_priv, pp_ctrl_reg, pp); - intel_de_posting_read(dev_priv, pp_ctrl_reg); - drm_dbg_kms(&dev_priv->drm, "PP_STATUS: 0x%08x PP_CONTROL: 0x%08x\n", - intel_de_read(dev_priv, pp_stat_reg), - intel_de_read(dev_priv, pp_ctrl_reg)); - /* - * If the panel wasn't on, delay before accessing aux channel - */ - if (!edp_have_panel_power(intel_dp)) { - drm_dbg_kms(&dev_priv->drm, - "[ENCODER:%d:%s] panel power wasn't enabled\n", - dig_port->base.base.base.id, - dig_port->base.base.name); - msleep(intel_dp->panel_power_up_delay); - } - - return need_to_disable; -} - -/* - * Must be paired with intel_edp_panel_vdd_off() or - * intel_edp_panel_off(). - * Nested calls to these functions are not allowed since - * we drop the lock. Caller must use some higher level - * locking to prevent nested calls from other threads. 
- */ -void intel_edp_panel_vdd_on(struct intel_dp *intel_dp) -{ - intel_wakeref_t wakeref; - bool vdd; - - if (!intel_dp_is_edp(intel_dp)) - return; - - vdd = false; - with_pps_lock(intel_dp, wakeref) - vdd = edp_panel_vdd_on(intel_dp); - I915_STATE_WARN(!vdd, "[ENCODER:%d:%s] VDD already requested on\n", - dp_to_dig_port(intel_dp)->base.base.base.id, - dp_to_dig_port(intel_dp)->base.base.name); -} - -static void edp_panel_vdd_off_sync(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = - dp_to_dig_port(intel_dp); - u32 pp; - i915_reg_t pp_stat_reg, pp_ctrl_reg; - - lockdep_assert_held(&dev_priv->pps_mutex); - - drm_WARN_ON(&dev_priv->drm, intel_dp->want_panel_vdd); - - if (!edp_have_panel_vdd(intel_dp)) - return; - - drm_dbg_kms(&dev_priv->drm, "Turning [ENCODER:%d:%s] VDD off\n", - dig_port->base.base.base.id, - dig_port->base.base.name); - - pp = ilk_get_pp_control(intel_dp); - pp &= ~EDP_FORCE_VDD; - - pp_ctrl_reg = _pp_ctrl_reg(intel_dp); - pp_stat_reg = _pp_stat_reg(intel_dp); - - intel_de_write(dev_priv, pp_ctrl_reg, pp); - intel_de_posting_read(dev_priv, pp_ctrl_reg); - - /* Make sure sequencer is idle before allowing subsequent activity */ - drm_dbg_kms(&dev_priv->drm, "PP_STATUS: 0x%08x PP_CONTROL: 0x%08x\n", - intel_de_read(dev_priv, pp_stat_reg), - intel_de_read(dev_priv, pp_ctrl_reg)); - - if ((pp & PANEL_POWER_ON) == 0) - intel_dp->panel_power_off_time = ktime_get_boottime(); - - intel_display_power_put(dev_priv, - intel_aux_power_domain(dig_port), - fetch_and_zero(&intel_dp->vdd_wakeref)); -} - -static void edp_panel_vdd_work(struct work_struct *__work) -{ - struct intel_dp *intel_dp = - container_of(to_delayed_work(__work), - struct intel_dp, panel_vdd_work); - intel_wakeref_t wakeref; - - with_pps_lock(intel_dp, wakeref) { - if (!intel_dp->want_panel_vdd) - edp_panel_vdd_off_sync(intel_dp); - } -} - -static void edp_panel_vdd_schedule_off(struct intel_dp *intel_dp) -{ - unsigned long delay; - - /* - * Queue the timer to fire a long time from now (relative to the power - * down delay) to keep the panel power up across a sequence of - * operations. - */ - delay = msecs_to_jiffies(intel_dp->panel_power_cycle_delay * 5); - schedule_delayed_work(&intel_dp->panel_vdd_work, delay); -} - -/* - * Must be paired with edp_panel_vdd_on(). - * Must hold pps_mutex around the whole on/off sequence. - * Can be nested with intel_edp_panel_vdd_{on,off}() calls. 
- */ -static void edp_panel_vdd_off(struct intel_dp *intel_dp, bool sync) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - - lockdep_assert_held(&dev_priv->pps_mutex); - - if (!intel_dp_is_edp(intel_dp)) - return; - - I915_STATE_WARN(!intel_dp->want_panel_vdd, "[ENCODER:%d:%s] VDD not forced on", - dp_to_dig_port(intel_dp)->base.base.base.id, - dp_to_dig_port(intel_dp)->base.base.name); - - intel_dp->want_panel_vdd = false; - - if (sync) - edp_panel_vdd_off_sync(intel_dp); - else - edp_panel_vdd_schedule_off(intel_dp); -} - -static void edp_panel_on(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - u32 pp; - i915_reg_t pp_ctrl_reg; - - lockdep_assert_held(&dev_priv->pps_mutex); - - if (!intel_dp_is_edp(intel_dp)) - return; - - drm_dbg_kms(&dev_priv->drm, "Turn [ENCODER:%d:%s] panel power on\n", - dp_to_dig_port(intel_dp)->base.base.base.id, - dp_to_dig_port(intel_dp)->base.base.name); - - if (drm_WARN(&dev_priv->drm, edp_have_panel_power(intel_dp), - "[ENCODER:%d:%s] panel power already on\n", - dp_to_dig_port(intel_dp)->base.base.base.id, - dp_to_dig_port(intel_dp)->base.base.name)) - return; - - wait_panel_power_cycle(intel_dp); - - pp_ctrl_reg = _pp_ctrl_reg(intel_dp); - pp = ilk_get_pp_control(intel_dp); - if (IS_GEN(dev_priv, 5)) { - /* ILK workaround: disable reset around power sequence */ - pp &= ~PANEL_POWER_RESET; - intel_de_write(dev_priv, pp_ctrl_reg, pp); - intel_de_posting_read(dev_priv, pp_ctrl_reg); - } - - pp |= PANEL_POWER_ON; - if (!IS_GEN(dev_priv, 5)) - pp |= PANEL_POWER_RESET; - - intel_de_write(dev_priv, pp_ctrl_reg, pp); - intel_de_posting_read(dev_priv, pp_ctrl_reg); - - wait_panel_on(intel_dp); - intel_dp->last_power_on = jiffies; - - if (IS_GEN(dev_priv, 5)) { - pp |= PANEL_POWER_RESET; /* restore panel reset bit */ - intel_de_write(dev_priv, pp_ctrl_reg, pp); - intel_de_posting_read(dev_priv, pp_ctrl_reg); - } -} - -void intel_edp_panel_on(struct intel_dp *intel_dp) -{ - intel_wakeref_t wakeref; - - if (!intel_dp_is_edp(intel_dp)) - return; - - with_pps_lock(intel_dp, wakeref) - edp_panel_on(intel_dp); -} - - -static void edp_panel_off(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - u32 pp; - i915_reg_t pp_ctrl_reg; - - lockdep_assert_held(&dev_priv->pps_mutex); - - if (!intel_dp_is_edp(intel_dp)) - return; - - drm_dbg_kms(&dev_priv->drm, "Turn [ENCODER:%d:%s] panel power off\n", - dig_port->base.base.base.id, dig_port->base.base.name); - - drm_WARN(&dev_priv->drm, !intel_dp->want_panel_vdd, - "Need [ENCODER:%d:%s] VDD to turn off panel\n", - dig_port->base.base.base.id, dig_port->base.base.name); - - pp = ilk_get_pp_control(intel_dp); - /* We need to switch off panel power _and_ force vdd, for otherwise some - * panels get very unhappy and cease to work. */ - pp &= ~(PANEL_POWER_ON | PANEL_POWER_RESET | EDP_FORCE_VDD | - EDP_BLC_ENABLE); - - pp_ctrl_reg = _pp_ctrl_reg(intel_dp); - - intel_dp->want_panel_vdd = false; - - intel_de_write(dev_priv, pp_ctrl_reg, pp); - intel_de_posting_read(dev_priv, pp_ctrl_reg); - - wait_panel_off(intel_dp); - intel_dp->panel_power_off_time = ktime_get_boottime(); - - /* We got a reference when we enabled the VDD. 
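The intel_display_power_put() call just below hands back the wakeref taken at VDD-on via fetch_and_zero(), which reads the stored cookie and clears it in a single expression so the reference cannot be released twice. A simplified restatement of that i915 helper (a GCC/Clang statement expression; the real definition lives in i915_utils.h):

/* simplified form of the i915 macro; __typeof__ keeps it type-generic */
#define fetch_and_zero(ptr) ({                  \
        __typeof__(*(ptr)) __val = *(ptr);      \
        *(ptr) = (__typeof__(*(ptr)))0;         \
        __val;                                  \
})

/* usage sketch: release exactly once, even if the path is re-entered */
/* intel_wakeref_t ref = fetch_and_zero(&pps->vdd_wakeref); */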
*/ - intel_display_power_put(dev_priv, - intel_aux_power_domain(dig_port), - fetch_and_zero(&intel_dp->vdd_wakeref)); -} - -void intel_edp_panel_off(struct intel_dp *intel_dp) -{ - intel_wakeref_t wakeref; - - if (!intel_dp_is_edp(intel_dp)) - return; - - with_pps_lock(intel_dp, wakeref) - edp_panel_off(intel_dp); -} - -/* Enable backlight in the panel power control. */ -static void _intel_edp_backlight_on(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - intel_wakeref_t wakeref; - - /* - * If we enable the backlight right away following a panel power - * on, we may see slight flicker as the panel syncs with the eDP - * link. So delay a bit to make sure the image is solid before - * allowing it to appear. - */ - wait_backlight_on(intel_dp); - - with_pps_lock(intel_dp, wakeref) { - i915_reg_t pp_ctrl_reg = _pp_ctrl_reg(intel_dp); - u32 pp; - - pp = ilk_get_pp_control(intel_dp); - pp |= EDP_BLC_ENABLE; - - intel_de_write(dev_priv, pp_ctrl_reg, pp); - intel_de_posting_read(dev_priv, pp_ctrl_reg); - } -} /* Enable backlight PWM and backlight PP control. */ void intel_edp_backlight_on(const struct intel_crtc_state *crtc_state, @@ -3486,31 +1945,7 @@ void intel_edp_backlight_on(const struct intel_crtc_state *crtc_state, drm_dbg_kms(&i915->drm, "\n"); intel_panel_enable_backlight(crtc_state, conn_state); - _intel_edp_backlight_on(intel_dp); -} - -/* Disable backlight in the panel power control. */ -static void _intel_edp_backlight_off(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - intel_wakeref_t wakeref; - - if (!intel_dp_is_edp(intel_dp)) - return; - - with_pps_lock(intel_dp, wakeref) { - i915_reg_t pp_ctrl_reg = _pp_ctrl_reg(intel_dp); - u32 pp; - - pp = ilk_get_pp_control(intel_dp); - pp &= ~EDP_BLC_ENABLE; - - intel_de_write(dev_priv, pp_ctrl_reg, pp); - intel_de_posting_read(dev_priv, pp_ctrl_reg); - } - - intel_dp->last_backlight_off = jiffies; - edp_wait_backlight_off(intel_dp); + intel_pps_backlight_on(intel_dp); } /* Disable backlight PP control and backlight PWM. */ @@ -3524,37 +1959,10 @@ void intel_edp_backlight_off(const struct drm_connector_state *old_conn_state) drm_dbg_kms(&i915->drm, "\n"); - _intel_edp_backlight_off(intel_dp); + intel_pps_backlight_off(intel_dp); intel_panel_disable_backlight(old_conn_state); } -/* - * Hook for controlling the panel power control backlight through the bl_power - * sysfs attribute. Take care to handle multiple calls. - */ -static void intel_edp_backlight_power(struct intel_connector *connector, - bool enable) -{ - struct drm_i915_private *i915 = to_i915(connector->base.dev); - struct intel_dp *intel_dp = intel_attached_dp(connector); - intel_wakeref_t wakeref; - bool is_enabled; - - is_enabled = false; - with_pps_lock(intel_dp, wakeref) - is_enabled = ilk_get_pp_control(intel_dp) & EDP_BLC_ENABLE; - if (is_enabled == enable) - return; - - drm_dbg_kms(&i915->drm, "panel power control backlight %s\n", - enable ? "enable" : "disable"); - - if (enable) - _intel_edp_backlight_on(intel_dp); - else - _intel_edp_backlight_off(intel_dp); -} - static void assert_dp_port(struct intel_dp *intel_dp, bool state) { struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); @@ -3975,10 +2383,10 @@ static void intel_disable_dp(struct intel_atomic_state *state, /* Make sure the panel is off before trying to change the mode. But also * ensure that we have vdd while we switch off the panel. 
*/ - intel_edp_panel_vdd_on(intel_dp); + intel_pps_vdd_on(intel_dp); intel_edp_backlight_off(old_conn_state); intel_dp_set_power(intel_dp, DP_SET_POWER_D3); - intel_edp_panel_off(intel_dp); + intel_pps_off(intel_dp); intel_dp->frl.is_trained = false; intel_dp->frl.trained_rate_gbps = 0; } @@ -4425,8 +2833,8 @@ void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp, drm_dbg_kms(&i915->drm, "Failed to set protocol converter HDMI mode to %s\n", enableddisabled(intel_dp->has_hdmi_sink)); - tmp = intel_dp->dfp.ycbcr_444_to_420 ? - DP_CONVERSION_TO_YCBCR420_ENABLE : 0; + tmp = crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR444 && + intel_dp->dfp.ycbcr_444_to_420 ? DP_CONVERSION_TO_YCBCR420_ENABLE : 0; if (drm_dp_dpcd_writeb(&intel_dp->aux, DP_PROTOCOL_CONVERTER_CONTROL_1, tmp) != 1) @@ -4488,15 +2896,15 @@ static void intel_enable_dp(struct intel_atomic_state *state, if (drm_WARN_ON(&dev_priv->drm, dp_reg & DP_PORT_EN)) return; - with_pps_lock(intel_dp, wakeref) { + with_intel_pps_lock(intel_dp, wakeref) { if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) - vlv_init_panel_power_sequencer(encoder, pipe_config); + vlv_pps_init(encoder, pipe_config); intel_dp_enable_port(intel_dp, pipe_config); - edp_panel_vdd_on(intel_dp); - edp_panel_on(intel_dp); - edp_panel_vdd_off(intel_dp, true); + intel_pps_vdd_on_unlocked(intel_dp); + intel_pps_on_unlocked(intel_dp); + intel_pps_vdd_off_unlocked(intel_dp, true); } if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) { @@ -4555,112 +2963,6 @@ static void g4x_pre_enable_dp(struct intel_atomic_state *state, ilk_edp_pll_on(intel_dp, pipe_config); } -static void vlv_detach_power_sequencer(struct intel_dp *intel_dp) -{ - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev); - enum pipe pipe = intel_dp->pps_pipe; - i915_reg_t pp_on_reg = PP_ON_DELAYS(pipe); - - drm_WARN_ON(&dev_priv->drm, intel_dp->active_pipe != INVALID_PIPE); - - if (drm_WARN_ON(&dev_priv->drm, pipe != PIPE_A && pipe != PIPE_B)) - return; - - edp_panel_vdd_off_sync(intel_dp); - - /* - * VLV seems to get confused when multiple power sequencers - * have the same port selected (even if only one has power/vdd - * enabled). The failure manifests as vlv_wait_port_ready() failing - * CHV on the other hand doesn't seem to mind having the same port - * selected in multiple power sequencers, but let's clear the - * port select always when logically disconnecting a power sequencer - * from a port. 
- */ - drm_dbg_kms(&dev_priv->drm, - "detaching pipe %c power sequencer from [ENCODER:%d:%s]\n", - pipe_name(pipe), dig_port->base.base.base.id, - dig_port->base.base.name); - intel_de_write(dev_priv, pp_on_reg, 0); - intel_de_posting_read(dev_priv, pp_on_reg); - - intel_dp->pps_pipe = INVALID_PIPE; -} - -static void vlv_steal_power_sequencer(struct drm_i915_private *dev_priv, - enum pipe pipe) -{ - struct intel_encoder *encoder; - - lockdep_assert_held(&dev_priv->pps_mutex); - - for_each_intel_dp(&dev_priv->drm, encoder) { - struct intel_dp *intel_dp = enc_to_intel_dp(encoder); - - drm_WARN(&dev_priv->drm, intel_dp->active_pipe == pipe, - "stealing pipe %c power sequencer from active [ENCODER:%d:%s]\n", - pipe_name(pipe), encoder->base.base.id, - encoder->base.name); - - if (intel_dp->pps_pipe != pipe) - continue; - - drm_dbg_kms(&dev_priv->drm, - "stealing pipe %c power sequencer from [ENCODER:%d:%s]\n", - pipe_name(pipe), encoder->base.base.id, - encoder->base.name); - - /* make sure vdd is off before we steal it */ - vlv_detach_power_sequencer(intel_dp); - } -} - -static void vlv_init_panel_power_sequencer(struct intel_encoder *encoder, - const struct intel_crtc_state *crtc_state) -{ - struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); - struct intel_dp *intel_dp = enc_to_intel_dp(encoder); - struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); - - lockdep_assert_held(&dev_priv->pps_mutex); - - drm_WARN_ON(&dev_priv->drm, intel_dp->active_pipe != INVALID_PIPE); - - if (intel_dp->pps_pipe != INVALID_PIPE && - intel_dp->pps_pipe != crtc->pipe) { - /* - * If another power sequencer was being used on this - * port previously make sure to turn off vdd there while - * we still have control of it. - */ - vlv_detach_power_sequencer(intel_dp); - } - - /* - * We may be stealing the power - * sequencer from another port. - */ - vlv_steal_power_sequencer(dev_priv, crtc->pipe); - - intel_dp->active_pipe = crtc->pipe; - - if (!intel_dp_is_edp(intel_dp)) - return; - - /* now it's all ours */ - intel_dp->pps_pipe = crtc->pipe; - - drm_dbg_kms(&dev_priv->drm, - "initializing pipe %c power sequencer for [ENCODER:%d:%s]\n", - pipe_name(intel_dp->pps_pipe), encoder->base.base.id, - encoder->base.name); - - /* init power sequencer on this pipe and port */ - intel_dp_init_panel_power_sequencer(intel_dp); - intel_dp_init_panel_power_sequencer_registers(intel_dp, true); -} - static void vlv_pre_enable_dp(struct intel_atomic_state *state, struct intel_encoder *encoder, const struct intel_crtc_state *pipe_config, @@ -5060,22 +3362,19 @@ ivb_cpu_edp_set_signal_levels(struct intel_dp *intel_dp, intel_de_posting_read(dev_priv, intel_dp->output_reg); } -void intel_dp_set_signal_levels(struct intel_dp *intel_dp, - const struct intel_crtc_state *crtc_state) +static char dp_training_pattern_name(u8 train_pat) { - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - u8 train_set = intel_dp->train_set[0]; - - drm_dbg_kms(&dev_priv->drm, "Using vswing level %d%s\n", - train_set & DP_TRAIN_VOLTAGE_SWING_MASK, - train_set & DP_TRAIN_MAX_SWING_REACHED ? " (max)" : ""); - drm_dbg_kms(&dev_priv->drm, "Using pre-emphasis level %d%s\n", - (train_set & DP_TRAIN_PRE_EMPHASIS_MASK) >> - DP_TRAIN_PRE_EMPHASIS_SHIFT, - train_set & DP_TRAIN_MAX_PRE_EMPHASIS_REACHED ? 
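/*
 * A note on dp_training_pattern_name() just below: in the DPCD encoding
 * from drm_dp_helper.h, DP_TRAINING_PATTERN_1..3 have the literal values
 * 1..3, so '0' + train_pat maps them straight to the characters '1'..'3'.
 * TPS4 is the odd one out -- DP_TRAINING_PATTERN_4 is encoded as 7 in the
 * TRAINING_PATTERN_SET register -- hence the dedicated case returning '4'.
 */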
- " (max)" : ""); - - intel_dp->set_signal_levels(intel_dp, crtc_state); + switch (train_pat) { + case DP_TRAINING_PATTERN_1: + case DP_TRAINING_PATTERN_2: + case DP_TRAINING_PATTERN_3: + return '0' + train_pat; + case DP_TRAINING_PATTERN_4: + return '4'; + default: + MISSING_CASE(train_pat); + return '?'; + } } void @@ -5083,13 +3382,15 @@ intel_dp_program_link_training_pattern(struct intel_dp *intel_dp, const struct intel_crtc_state *crtc_state, u8 dp_train_pat) { - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base; + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + u8 train_pat = intel_dp_training_pattern_symbol(dp_train_pat); - if ((intel_dp_training_pattern_symbol(dp_train_pat)) != - DP_TRAINING_PATTERN_DISABLE) + if (train_pat != DP_TRAINING_PATTERN_DISABLE) drm_dbg_kms(&dev_priv->drm, - "Using DP training pattern TPS%d\n", - intel_dp_training_pattern_symbol(dp_train_pat)); + "[ENCODER:%d:%s] Using DP training pattern TPS%c\n", + encoder->base.base.id, encoder->base.name, + dp_training_pattern_name(train_pat)); intel_dp->set_link_train(intel_dp, crtc_state, dp_train_pat); } @@ -5155,15 +3456,15 @@ intel_dp_link_down(struct intel_encoder *encoder, intel_set_pch_fifo_underrun_reporting(dev_priv, PIPE_A, true); } - msleep(intel_dp->panel_power_down_delay); + msleep(intel_dp->pps.panel_power_down_delay); intel_dp->DP = DP; if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) { intel_wakeref_t wakeref; - with_pps_lock(intel_dp, wakeref) - intel_dp->active_pipe = INVALID_PIPE; + with_intel_pps_lock(intel_dp, wakeref) + intel_dp->pps.active_pipe = INVALID_PIPE; } } @@ -6132,7 +4433,7 @@ static void intel_dp_process_phy_request(struct intel_dp *intel_dp, intel_dp_autotest_phy_ddi_disable(intel_dp, crtc_state); - intel_dp_set_signal_levels(intel_dp, crtc_state); + intel_dp_set_signal_levels(intel_dp, crtc_state, DP_PHY_DPRX); intel_dp_phy_pattern_update(intel_dp, crtc_state); @@ -6205,6 +4506,17 @@ update_status: "Could not write test response to sink\n"); } +static void +intel_dp_mst_hpd_irq(struct intel_dp *intel_dp, u8 *esi, bool *handled) +{ + drm_dp_mst_hpd_irq(&intel_dp->mst_mgr, esi, handled); + + if (esi[1] & DP_CP_IRQ) { + intel_hdcp_handle_cp_irq(intel_dp->attached_connector); + *handled = true; + } +} + /** * intel_dp_check_mst_status - service any pending MST interrupts, check link status * @intel_dp: Intel DP struct @@ -6249,7 +4561,8 @@ intel_dp_check_mst_status(struct intel_dp *intel_dp) drm_dbg_kms(&i915->drm, "got esi %3ph\n", esi); - drm_dp_mst_hpd_irq(&intel_dp->mst_mgr, esi, &handled); + intel_dp_mst_hpd_irq(intel_dp, esi, &handled); + if (!handled) break; @@ -6573,7 +4886,7 @@ static int intel_dp_do_phy_test(struct intel_encoder *encoder, return 0; } -static void intel_dp_phy_test(struct intel_encoder *encoder) +void intel_dp_phy_test(struct intel_encoder *encoder) { struct drm_modeset_acquire_ctx ctx; int ret; @@ -7008,8 +5321,8 @@ intel_dp_update_420(struct intel_dp *intel_dp) intel_dp->downstream_ports); rgb_to_ycbcr = drm_dp_downstream_rgb_to_ycbcr_conversion(intel_dp->dpcd, intel_dp->downstream_ports, - DP_DS_HDMI_BT601_RGB_YCBCR_CONV || - DP_DS_HDMI_BT709_RGB_YCBCR_CONV || + DP_DS_HDMI_BT601_RGB_YCBCR_CONV | + DP_DS_HDMI_BT709_RGB_YCBCR_CONV | DP_DS_HDMI_BT2020_RGB_YCBCR_CONV); if (INTEL_GEN(i915) >= 11) { @@ -7060,7 +5373,6 @@ intel_dp_set_edid(struct intel_dp *intel_dp) } drm_dp_cec_set_edid(&intel_dp->aux, edid); - intel_dp->edid_quirks = drm_dp_get_edid_quirks(edid); } 
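/*
 * A short illustration of the || to | fix in the intel_dp_update_420()
 * hunk above: the DP_DS_HDMI_*_RGB_YCBCR_CONV macros are single-bit
 * flags, so the logical OR collapsed the argument to the integer 1 and
 * the helper never saw the intended capability mask; the bitwise OR
 * combines the flags as expected. With illustrative bit values:
 *
 *	(1 << 5) || (1 << 6) || (1 << 7)  ==  0x01
 *	(1 << 5) |  (1 << 6) |  (1 << 7)  ==  0xe0
 */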
static void @@ -7074,7 +5386,6 @@ intel_dp_unset_edid(struct intel_dp *intel_dp) intel_dp->has_hdmi_sink = false; intel_dp->has_audio = false; - intel_dp->edid_quirks = 0; intel_dp->dfp.max_bpc = 0; intel_dp->dfp.max_dotclock = 0; @@ -7241,6 +5552,10 @@ static int intel_dp_get_modes(struct drm_connector *connector) edid = intel_connector->detect_edid; if (edid) { int ret = intel_connector_update_modes(connector, edid); + + if (intel_vrr_is_capable(connector)) + drm_connector_set_vrr_capable_property(connector, + true); if (ret) return ret; } @@ -7329,17 +5644,8 @@ void intel_dp_encoder_flush_work(struct drm_encoder *encoder) struct intel_dp *intel_dp = &dig_port->dp; intel_dp_mst_encoder_cleanup(dig_port); - if (intel_dp_is_edp(intel_dp)) { - intel_wakeref_t wakeref; - cancel_delayed_work_sync(&intel_dp->panel_vdd_work); - /* - * vdd might still be enabled do to the delayed vdd off. - * Make sure vdd is actually turned off here. - */ - with_pps_lock(intel_dp, wakeref) - edp_panel_vdd_off_sync(intel_dp); - } + intel_pps_vdd_off_sync(intel_dp); intel_dp_aux_fini(intel_dp); } @@ -7355,55 +5661,15 @@ static void intel_dp_encoder_destroy(struct drm_encoder *encoder) void intel_dp_encoder_suspend(struct intel_encoder *intel_encoder) { struct intel_dp *intel_dp = enc_to_intel_dp(intel_encoder); - intel_wakeref_t wakeref; - if (!intel_dp_is_edp(intel_dp)) - return; - - /* - * vdd might still be enabled do to the delayed vdd off. - * Make sure vdd is actually turned off here. - */ - cancel_delayed_work_sync(&intel_dp->panel_vdd_work); - with_pps_lock(intel_dp, wakeref) - edp_panel_vdd_off_sync(intel_dp); + intel_pps_vdd_off_sync(intel_dp); } void intel_dp_encoder_shutdown(struct intel_encoder *intel_encoder) { struct intel_dp *intel_dp = enc_to_intel_dp(intel_encoder); - intel_wakeref_t wakeref; - - if (!intel_dp_is_edp(intel_dp)) - return; - - with_pps_lock(intel_dp, wakeref) - wait_panel_power_cycle(intel_dp); -} - -static void intel_edp_panel_vdd_sanitize(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); - lockdep_assert_held(&dev_priv->pps_mutex); - - if (!edp_have_panel_vdd(intel_dp)) - return; - - /* - * The VDD bit needs a power domain reference, so if the bit is - * already enabled when we boot or resume, grab this reference and - * schedule a vdd off, so we don't hold on to the reference - * indefinitely. 
- */ - drm_dbg_kms(&dev_priv->drm, - "VDD left on by BIOS, adjusting state tracking\n"); - drm_WARN_ON(&dev_priv->drm, intel_dp->vdd_wakeref); - intel_dp->vdd_wakeref = intel_display_power_get(dev_priv, - intel_aux_power_domain(dig_port)); - - edp_panel_vdd_schedule_off(intel_dp); + intel_pps_wait_power_cycle(intel_dp); } static enum pipe vlv_active_pipe(struct intel_dp *intel_dp) @@ -7423,30 +5689,20 @@ void intel_dp_encoder_reset(struct drm_encoder *encoder) { struct drm_i915_private *dev_priv = to_i915(encoder->dev); struct intel_dp *intel_dp = enc_to_intel_dp(to_intel_encoder(encoder)); - intel_wakeref_t wakeref; if (!HAS_DDI(dev_priv)) intel_dp->DP = intel_de_read(dev_priv, intel_dp->output_reg); intel_dp->reset_link_params = true; - if (!IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv) && - !intel_dp_is_edp(intel_dp)) - return; - - with_pps_lock(intel_dp, wakeref) { - if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) - intel_dp->active_pipe = vlv_active_pipe(intel_dp); + if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) { + intel_wakeref_t wakeref; - if (intel_dp_is_edp(intel_dp)) { - /* - * Reinit the power sequencer, in case BIOS did - * something nasty with it. - */ - intel_dp_pps_init(intel_dp); - intel_edp_panel_vdd_sanitize(intel_dp); - } + with_intel_pps_lock(intel_dp, wakeref) + intel_dp->pps.active_pipe = vlv_active_pipe(intel_dp); } + + intel_pps_encoder_reset(intel_dp); } static int intel_modeset_tile_group(struct intel_atomic_state *state, @@ -7611,19 +5867,6 @@ static const struct drm_encoder_funcs intel_dp_enc_funcs = { .destroy = intel_dp_encoder_destroy, }; -static bool intel_edp_have_power(struct intel_dp *intel_dp) -{ - intel_wakeref_t wakeref; - bool have_power = false; - - with_pps_lock(intel_dp, wakeref) { - have_power = edp_have_panel_power(intel_dp) && - edp_have_panel_vdd(intel_dp); - } - - return have_power; -} - enum irqreturn intel_dp_hpd_pulse(struct intel_digital_port *dig_port, bool long_hpd) { @@ -7631,7 +5874,7 @@ intel_dp_hpd_pulse(struct intel_digital_port *dig_port, bool long_hpd) struct intel_dp *intel_dp = &dig_port->dp; if (dig_port->base.type == INTEL_OUTPUT_EDP && - (long_hpd || !intel_edp_have_power(intel_dp))) { + (long_hpd || !intel_pps_have_power(intel_dp))) { /* * vdd off can generate a long/short pulse on eDP which * would require vdd on to handle it, and thus we @@ -7725,277 +5968,9 @@ intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *connect connector->state->scaling_mode = DRM_MODE_SCALE_ASPECT; } -} -static void intel_dp_init_panel_power_timestamps(struct intel_dp *intel_dp) -{ - intel_dp->panel_power_off_time = ktime_get_boottime(); - intel_dp->last_power_on = jiffies; - intel_dp->last_backlight_off = jiffies; -} - -static void -intel_pps_readout_hw_state(struct intel_dp *intel_dp, struct edp_power_seq *seq) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - u32 pp_on, pp_off, pp_ctl; - struct pps_registers regs; - - intel_pps_get_registers(intel_dp, ®s); - - pp_ctl = ilk_get_pp_control(intel_dp); - - /* Ensure PPS is unlocked */ - if (!HAS_DDI(dev_priv)) - intel_de_write(dev_priv, regs.pp_ctrl, pp_ctl); - - pp_on = intel_de_read(dev_priv, regs.pp_on); - pp_off = intel_de_read(dev_priv, regs.pp_off); - - /* Pull timing values out of registers */ - seq->t1_t3 = REG_FIELD_GET(PANEL_POWER_UP_DELAY_MASK, pp_on); - seq->t8 = REG_FIELD_GET(PANEL_LIGHT_ON_DELAY_MASK, pp_on); - seq->t9 = REG_FIELD_GET(PANEL_LIGHT_OFF_DELAY_MASK, pp_off); - seq->t10 = 
REG_FIELD_GET(PANEL_POWER_DOWN_DELAY_MASK, pp_off); - - if (i915_mmio_reg_valid(regs.pp_div)) { - u32 pp_div; - - pp_div = intel_de_read(dev_priv, regs.pp_div); - - seq->t11_t12 = REG_FIELD_GET(PANEL_POWER_CYCLE_DELAY_MASK, pp_div) * 1000; - } else { - seq->t11_t12 = REG_FIELD_GET(BXT_POWER_CYCLE_DELAY_MASK, pp_ctl) * 1000; - } -} - -static void -intel_pps_dump_state(const char *state_name, const struct edp_power_seq *seq) -{ - DRM_DEBUG_KMS("%s t1_t3 %d t8 %d t9 %d t10 %d t11_t12 %d\n", - state_name, - seq->t1_t3, seq->t8, seq->t9, seq->t10, seq->t11_t12); -} - -static void -intel_pps_verify_state(struct intel_dp *intel_dp) -{ - struct edp_power_seq hw; - struct edp_power_seq *sw = &intel_dp->pps_delays; - - intel_pps_readout_hw_state(intel_dp, &hw); - - if (hw.t1_t3 != sw->t1_t3 || hw.t8 != sw->t8 || hw.t9 != sw->t9 || - hw.t10 != sw->t10 || hw.t11_t12 != sw->t11_t12) { - DRM_ERROR("PPS state mismatch\n"); - intel_pps_dump_state("sw", sw); - intel_pps_dump_state("hw", &hw); - } -} - -static void -intel_dp_init_panel_power_sequencer(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - struct edp_power_seq cur, vbt, spec, - *final = &intel_dp->pps_delays; - - lockdep_assert_held(&dev_priv->pps_mutex); - - /* already initialized? */ - if (final->t11_t12 != 0) - return; - - intel_pps_readout_hw_state(intel_dp, &cur); - - intel_pps_dump_state("cur", &cur); - - vbt = dev_priv->vbt.edp.pps; - /* On Toshiba Satellite P50-C-18C system the VBT T12 delay - * of 500ms appears to be too short. Ocassionally the panel - * just fails to power back on. Increasing the delay to 800ms - * seems sufficient to avoid this problem. - */ - if (dev_priv->quirks & QUIRK_INCREASE_T12_DELAY) { - vbt.t11_t12 = max_t(u16, vbt.t11_t12, 1300 * 10); - drm_dbg_kms(&dev_priv->drm, - "Increasing T12 panel delay as per the quirk to %d\n", - vbt.t11_t12); - } - /* T11_T12 delay is special and actually in units of 100ms, but zero - * based in the hw (so we need to add 100 ms). But the sw vbt - * table multiplies it with 1000 to make it in units of 100usec, - * too. */ - vbt.t11_t12 += 100 * 10; - - /* Upper limits from eDP 1.3 spec. Note that we use the clunky units of - * our hw here, which are all in 100usec. */ - spec.t1_t3 = 210 * 10; - spec.t8 = 50 * 10; /* no limit for t8, use t7 instead */ - spec.t9 = 50 * 10; /* no limit for t9, make it symmetric with t8 */ - spec.t10 = 500 * 10; - /* This one is special and actually in units of 100ms, but zero - * based in the hw (so we need to add 100 ms). But the sw vbt - * table multiplies it with 1000 to make it in units of 100usec, - * too. */ - spec.t11_t12 = (510 + 100) * 10; - - intel_pps_dump_state("vbt", &vbt); - - /* Use the max of the register settings and vbt. If both are - * unset, fall back to the spec limits. */ -#define assign_final(field) final->field = (max(cur.field, vbt.field) == 0 ? 
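/*
 * A worked example of the unit handling above, using the eDP 1.3 spec
 * limits as input: t11_t12 = (510 + 100) * 10 = 6100 in the hardware's
 * 100usec units, i.e. 610ms of zero-based power cycle delay; the
 * get_delay() step below then divides by 10 and stores
 * panel_power_cycle_delay = 610, in milliseconds.
 */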
\ - spec.field : \ - max(cur.field, vbt.field)) - assign_final(t1_t3); - assign_final(t8); - assign_final(t9); - assign_final(t10); - assign_final(t11_t12); -#undef assign_final - -#define get_delay(field) (DIV_ROUND_UP(final->field, 10)) - intel_dp->panel_power_up_delay = get_delay(t1_t3); - intel_dp->backlight_on_delay = get_delay(t8); - intel_dp->backlight_off_delay = get_delay(t9); - intel_dp->panel_power_down_delay = get_delay(t10); - intel_dp->panel_power_cycle_delay = get_delay(t11_t12); -#undef get_delay - - drm_dbg_kms(&dev_priv->drm, - "panel power up delay %d, power down delay %d, power cycle delay %d\n", - intel_dp->panel_power_up_delay, - intel_dp->panel_power_down_delay, - intel_dp->panel_power_cycle_delay); - - drm_dbg_kms(&dev_priv->drm, "backlight on delay %d, off delay %d\n", - intel_dp->backlight_on_delay, - intel_dp->backlight_off_delay); - - /* - * We override the HW backlight delays to 1 because we do manual waits - * on them. For T8, even BSpec recommends doing it. For T9, if we - * don't do this, we'll end up waiting for the backlight off delay - * twice: once when we do the manual sleep, and once when we disable - * the panel and wait for the PP_STATUS bit to become zero. - */ - final->t8 = 1; - final->t9 = 1; - - /* - * HW has only a 100msec granularity for t11_t12 so round it up - * accordingly. - */ - final->t11_t12 = roundup(final->t11_t12, 100 * 10); -} - -static void -intel_dp_init_panel_power_sequencer_registers(struct intel_dp *intel_dp, - bool force_disable_vdd) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - u32 pp_on, pp_off, port_sel = 0; - int div = RUNTIME_INFO(dev_priv)->rawclk_freq / 1000; - struct pps_registers regs; - enum port port = dp_to_dig_port(intel_dp)->base.port; - const struct edp_power_seq *seq = &intel_dp->pps_delays; - - lockdep_assert_held(&dev_priv->pps_mutex); - - intel_pps_get_registers(intel_dp, ®s); - - /* - * On some VLV machines the BIOS can leave the VDD - * enabled even on power sequencers which aren't - * hooked up to any port. This would mess up the - * power domain tracking the first time we pick - * one of these power sequencers for use since - * edp_panel_vdd_on() would notice that the VDD was - * already on and therefore wouldn't grab the power - * domain reference. Disable VDD first to avoid this. - * This also avoids spuriously turning the VDD on as - * soon as the new power sequencer gets initialized. - */ - if (force_disable_vdd) { - u32 pp = ilk_get_pp_control(intel_dp); - - drm_WARN(&dev_priv->drm, pp & PANEL_POWER_ON, - "Panel power already on\n"); - - if (pp & EDP_FORCE_VDD) - drm_dbg_kms(&dev_priv->drm, - "VDD already on, disabling first\n"); - - pp &= ~EDP_FORCE_VDD; - - intel_de_write(dev_priv, regs.pp_ctrl, pp); - } - - pp_on = REG_FIELD_PREP(PANEL_POWER_UP_DELAY_MASK, seq->t1_t3) | - REG_FIELD_PREP(PANEL_LIGHT_ON_DELAY_MASK, seq->t8); - pp_off = REG_FIELD_PREP(PANEL_LIGHT_OFF_DELAY_MASK, seq->t9) | - REG_FIELD_PREP(PANEL_POWER_DOWN_DELAY_MASK, seq->t10); - - /* Haswell doesn't have any port selection bits for the panel - * power sequencer any more. 
*/ - if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) { - port_sel = PANEL_PORT_SELECT_VLV(port); - } else if (HAS_PCH_IBX(dev_priv) || HAS_PCH_CPT(dev_priv)) { - switch (port) { - case PORT_A: - port_sel = PANEL_PORT_SELECT_DPA; - break; - case PORT_C: - port_sel = PANEL_PORT_SELECT_DPC; - break; - case PORT_D: - port_sel = PANEL_PORT_SELECT_DPD; - break; - default: - MISSING_CASE(port); - break; - } - } - - pp_on |= port_sel; - - intel_de_write(dev_priv, regs.pp_on, pp_on); - intel_de_write(dev_priv, regs.pp_off, pp_off); - - /* - * Compute the divisor for the pp clock, simply match the Bspec formula. - */ - if (i915_mmio_reg_valid(regs.pp_div)) { - intel_de_write(dev_priv, regs.pp_div, - REG_FIELD_PREP(PP_REFERENCE_DIVIDER_MASK, (100 * div) / 2 - 1) | REG_FIELD_PREP(PANEL_POWER_CYCLE_DELAY_MASK, DIV_ROUND_UP(seq->t11_t12, 1000))); - } else { - u32 pp_ctl; - - pp_ctl = intel_de_read(dev_priv, regs.pp_ctrl); - pp_ctl &= ~BXT_POWER_CYCLE_DELAY_MASK; - pp_ctl |= REG_FIELD_PREP(BXT_POWER_CYCLE_DELAY_MASK, DIV_ROUND_UP(seq->t11_t12, 1000)); - intel_de_write(dev_priv, regs.pp_ctrl, pp_ctl); - } - - drm_dbg_kms(&dev_priv->drm, - "panel power sequencer register settings: PP_ON %#x, PP_OFF %#x, PP_DIV %#x\n", - intel_de_read(dev_priv, regs.pp_on), - intel_de_read(dev_priv, regs.pp_off), - i915_mmio_reg_valid(regs.pp_div) ? - intel_de_read(dev_priv, regs.pp_div) : - (intel_de_read(dev_priv, regs.pp_ctrl) & BXT_POWER_CYCLE_DELAY_MASK)); -} - -static void intel_dp_pps_init(struct intel_dp *intel_dp) -{ - struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); - - if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) { - vlv_initial_power_sequencer_setup(intel_dp); - } else { - intel_dp_init_panel_power_sequencer(intel_dp); - intel_dp_init_panel_power_sequencer_registers(intel_dp, false); - } + if (HAS_VRR(dev_priv)) + drm_connector_attach_vrr_capable_property(connector); } /** @@ -8434,14 +6409,11 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp, struct drm_display_mode *downclock_mode = NULL; bool has_dpcd; enum pipe pipe = INVALID_PIPE; - intel_wakeref_t wakeref; struct edid *edid; if (!intel_dp_is_edp(intel_dp)) return true; - INIT_DELAYED_WORK(&intel_dp->panel_vdd_work, edp_panel_vdd_work); - /* * On IBX/CPT we may get here with LVDS already registered. Since the * driver uses the only internal power sequencer available for both @@ -8457,11 +6429,7 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp, return false; } - with_pps_lock(intel_dp, wakeref) { - intel_dp_init_panel_power_timestamps(intel_dp); - intel_dp_pps_init(intel_dp); - intel_edp_panel_vdd_sanitize(intel_dp); - } + intel_pps_init(intel_dp); /* Cache DPCD and EDID for edp. 
*/ has_dpcd = intel_edp_init_dpcd(intel_dp); @@ -8478,7 +6446,6 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp, if (edid) { if (drm_add_edid_modes(connector, edid)) { drm_connector_update_edid_property(connector, edid); - intel_dp->edid_quirks = drm_dp_get_edid_quirks(edid); } else { kfree(edid); edid = ERR_PTR(-EINVAL); @@ -8506,7 +6473,7 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp, pipe = vlv_active_pipe(intel_dp); if (pipe != PIPE_A && pipe != PIPE_B) - pipe = intel_dp->pps_pipe; + pipe = intel_dp->pps.pps_pipe; if (pipe != PIPE_A && pipe != PIPE_B) pipe = PIPE_A; @@ -8517,7 +6484,7 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp, } intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode); - intel_connector->panel.backlight.power = intel_edp_backlight_power; + intel_connector->panel.backlight.power = intel_pps_backlight_power; intel_panel_setup_backlight(connector, pipe); if (fixed_mode) { @@ -8529,13 +6496,7 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp, return true; out_vdd_off: - cancel_delayed_work_sync(&intel_dp->panel_vdd_work); - /* - * vdd might still be enabled do to the delayed vdd off. - * Make sure vdd is actually turned off here. - */ - with_pps_lock(intel_dp, wakeref) - edp_panel_vdd_off_sync(intel_dp); + intel_pps_vdd_off_sync(intel_dp); return false; } @@ -8589,8 +6550,8 @@ intel_dp_init_connector(struct intel_digital_port *dig_port, intel_dp_set_source_rates(intel_dp); intel_dp->reset_link_params = true; - intel_dp->pps_pipe = INVALID_PIPE; - intel_dp->active_pipe = INVALID_PIPE; + intel_dp->pps.pps_pipe = INVALID_PIPE; + intel_dp->pps.active_pipe = INVALID_PIPE; /* Preserve the current hw state. */ intel_dp->DP = intel_de_read(dev_priv, intel_dp->output_reg); @@ -8608,7 +6569,7 @@ intel_dp_init_connector(struct intel_digital_port *dig_port, } if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) - intel_dp->active_pipe = vlv_active_pipe(intel_dp); + intel_dp->pps.active_pipe = vlv_active_pipe(intel_dp); /* * For eDP we always set the encoder type to INTEL_OUTPUT_EDP, but diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h index 4280a09fd8fd..d80839139bfb 100644 --- a/drivers/gpu/drm/i915/display/intel_dp.h +++ b/drivers/gpu/drm/i915/display/intel_dp.h @@ -70,16 +70,11 @@ enum irqreturn intel_dp_hpd_pulse(struct intel_digital_port *dig_port, void intel_edp_backlight_on(const struct intel_crtc_state *crtc_state, const struct drm_connector_state *conn_state); void intel_edp_backlight_off(const struct drm_connector_state *conn_state); -void intel_edp_panel_vdd_on(struct intel_dp *intel_dp); -void intel_edp_panel_on(struct intel_dp *intel_dp); -void intel_edp_panel_off(struct intel_dp *intel_dp); void intel_dp_mst_suspend(struct drm_i915_private *dev_priv); void intel_dp_mst_resume(struct drm_i915_private *dev_priv); int intel_dp_max_link_rate(struct intel_dp *intel_dp); int intel_dp_max_lane_count(struct intel_dp *intel_dp); int intel_dp_rate_select(struct intel_dp *intel_dp, int rate); -void intel_power_sequencer_reset(struct drm_i915_private *dev_priv); -u32 intel_dp_pack_aux(const u8 *src, int src_bytes); void intel_edp_drrs_enable(struct intel_dp *intel_dp, const struct intel_crtc_state *crtc_state); @@ -96,9 +91,6 @@ void intel_dp_program_link_training_pattern(struct intel_dp *intel_dp, const struct intel_crtc_state *crtc_state, u8 dp_train_pat); -void -intel_dp_set_signal_levels(struct intel_dp *intel_dp, - const struct 
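/*
 * A worked example for intel_dp_pack_aux() in the new intel_dp_aux.c
 * below: packing the three header bytes { 0x12, 0x34, 0x56 } yields
 * 0x12345600 -- bytes fill the u32 from the most significant end, which
 * is the layout the AUX data registers expect.
 */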
intel_crtc_state *crtc_state); void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock, u8 *link_bw, u8 *rate_select); bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp); @@ -144,9 +136,11 @@ bool intel_dp_initial_fastset_check(struct intel_encoder *encoder, struct intel_crtc_state *crtc_state); void intel_dp_sync_state(struct intel_encoder *encoder, const struct intel_crtc_state *crtc_state); +const struct dpll *vlv_get_dpll(struct drm_i915_private *i915); void intel_dp_check_frl_training(struct intel_dp *intel_dp); void intel_dp_pcon_dsc_configure(struct intel_dp *intel_dp, const struct intel_crtc_state *crtc_state); +void intel_dp_phy_test(struct intel_encoder *encoder); #endif /* __INTEL_DP_H__ */ diff --git a/drivers/gpu/drm/i915/display/intel_dp_aux.c b/drivers/gpu/drm/i915/display/intel_dp_aux.c new file mode 100644 index 000000000000..eaebf123310a --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_dp_aux.c @@ -0,0 +1,692 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2020-2021 Intel Corporation + */ + +#include "i915_drv.h" +#include "i915_trace.h" +#include "intel_display_types.h" +#include "intel_dp_aux.h" +#include "intel_pps.h" +#include "intel_tc.h" + +u32 intel_dp_pack_aux(const u8 *src, int src_bytes) +{ + int i; + u32 v = 0; + + if (src_bytes > 4) + src_bytes = 4; + for (i = 0; i < src_bytes; i++) + v |= ((u32)src[i]) << ((3 - i) * 8); + return v; +} + +static void intel_dp_unpack_aux(u32 src, u8 *dst, int dst_bytes) +{ + int i; + + if (dst_bytes > 4) + dst_bytes = 4; + for (i = 0; i < dst_bytes; i++) + dst[i] = src >> ((3 - i) * 8); +} + +static u32 +intel_dp_aux_wait_done(struct intel_dp *intel_dp) +{ + struct drm_i915_private *i915 = dp_to_i915(intel_dp); + i915_reg_t ch_ctl = intel_dp->aux_ch_ctl_reg(intel_dp); + const unsigned int timeout_ms = 10; + u32 status; + bool done; + +#define C (((status = intel_uncore_read_notrace(&i915->uncore, ch_ctl)) & DP_AUX_CH_CTL_SEND_BUSY) == 0) + done = wait_event_timeout(i915->gmbus_wait_queue, C, + msecs_to_jiffies_timeout(timeout_ms)); + + /* just trace the final value */ + trace_i915_reg_rw(false, ch_ctl, status, sizeof(status), true); + + if (!done) + drm_err(&i915->drm, + "%s: did not complete or timeout within %ums (status 0x%08x)\n", + intel_dp->aux.name, timeout_ms, status); +#undef C + + return status; +} + +static u32 g4x_get_aux_clock_divider(struct intel_dp *intel_dp, int index) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + + if (index) + return 0; + + /* + * The clock divider is based off the hrawclk, and would like to run at + * 2MHz. So, take the hrawclk value and divide by 2000 and use that + */ + return DIV_ROUND_CLOSEST(RUNTIME_INFO(dev_priv)->rawclk_freq, 2000); +} + +static u32 ilk_get_aux_clock_divider(struct intel_dp *intel_dp, int index) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + u32 freq; + + if (index) + return 0; + + /* + * The clock divider is based off the cdclk or PCH rawclk, and would + * like to run at 2MHz. 
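 * (A worked example, assuming a hypothetical 24,000 kHz rawclk as the
 * input: DIV_ROUND_CLOSEST(24000, 2000) = 12; the hardware then derives
 * an AUX bit clock of roughly the desired 2MHz as 24MHz / 12.)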
So, take the cdclk or PCH rawclk value and + * divide by 2000 and use that + */ + if (dig_port->aux_ch == AUX_CH_A) + freq = dev_priv->cdclk.hw.cdclk; + else + freq = RUNTIME_INFO(dev_priv)->rawclk_freq; + return DIV_ROUND_CLOSEST(freq, 2000); +} + +static u32 hsw_get_aux_clock_divider(struct intel_dp *intel_dp, int index) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + + if (dig_port->aux_ch != AUX_CH_A && HAS_PCH_LPT_H(dev_priv)) { + /* Workaround for non-ULT HSW */ + switch (index) { + case 0: return 63; + case 1: return 72; + default: return 0; + } + } + + return ilk_get_aux_clock_divider(intel_dp, index); +} + +static u32 skl_get_aux_clock_divider(struct intel_dp *intel_dp, int index) +{ + /* + * SKL doesn't need us to program the AUX clock divider (Hardware will + * derive the clock from CDCLK automatically). We still implement the + * get_aux_clock_divider vfunc to plug-in into the existing code. + */ + return index ? 0 : 1; +} + +static u32 g4x_get_aux_send_ctl(struct intel_dp *intel_dp, + int send_bytes, + u32 aux_clock_divider) +{ + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + struct drm_i915_private *dev_priv = + to_i915(dig_port->base.base.dev); + u32 precharge, timeout; + + if (IS_GEN(dev_priv, 6)) + precharge = 3; + else + precharge = 5; + + if (IS_BROADWELL(dev_priv)) + timeout = DP_AUX_CH_CTL_TIME_OUT_600us; + else + timeout = DP_AUX_CH_CTL_TIME_OUT_400us; + + return DP_AUX_CH_CTL_SEND_BUSY | + DP_AUX_CH_CTL_DONE | + DP_AUX_CH_CTL_INTERRUPT | + DP_AUX_CH_CTL_TIME_OUT_ERROR | + timeout | + DP_AUX_CH_CTL_RECEIVE_ERROR | + (send_bytes << DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT) | + (precharge << DP_AUX_CH_CTL_PRECHARGE_2US_SHIFT) | + (aux_clock_divider << DP_AUX_CH_CTL_BIT_CLOCK_2X_SHIFT); +} + +static u32 skl_get_aux_send_ctl(struct intel_dp *intel_dp, + int send_bytes, + u32 unused) +{ + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + struct drm_i915_private *i915 = + to_i915(dig_port->base.base.dev); + enum phy phy = intel_port_to_phy(i915, dig_port->base.port); + u32 ret; + + ret = DP_AUX_CH_CTL_SEND_BUSY | + DP_AUX_CH_CTL_DONE | + DP_AUX_CH_CTL_INTERRUPT | + DP_AUX_CH_CTL_TIME_OUT_ERROR | + DP_AUX_CH_CTL_TIME_OUT_MAX | + DP_AUX_CH_CTL_RECEIVE_ERROR | + (send_bytes << DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT) | + DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(32) | + DP_AUX_CH_CTL_SYNC_PULSE_SKL(32); + + if (intel_phy_is_tc(i915, phy) && + dig_port->tc_mode == TC_PORT_TBT_ALT) + ret |= DP_AUX_CH_CTL_TBT_IO; + + return ret; +} + +static int +intel_dp_aux_xfer(struct intel_dp *intel_dp, + const u8 *send, int send_bytes, + u8 *recv, int recv_size, + u32 aux_send_ctl_flags) +{ + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + struct drm_i915_private *i915 = + to_i915(dig_port->base.base.dev); + struct intel_uncore *uncore = &i915->uncore; + enum phy phy = intel_port_to_phy(i915, dig_port->base.port); + bool is_tc_port = intel_phy_is_tc(i915, phy); + i915_reg_t ch_ctl, ch_data[5]; + u32 aux_clock_divider; + enum intel_display_power_domain aux_domain; + intel_wakeref_t aux_wakeref; + intel_wakeref_t pps_wakeref; + int i, ret, recv_bytes; + int try, clock = 0; + u32 status; + bool vdd; + + ch_ctl = intel_dp->aux_ch_ctl_reg(intel_dp); + for (i = 0; i < ARRAY_SIZE(ch_data); i++) + ch_data[i] = intel_dp->aux_ch_data_reg(intel_dp, i); + + if (is_tc_port) + intel_tc_port_lock(dig_port); + + aux_domain = intel_aux_power_domain(dig_port); + + aux_wakeref = 
intel_display_power_get(i915, aux_domain); + pps_wakeref = intel_pps_lock(intel_dp); + + /* + * We will be called with VDD already enabled for dpcd/edid/oui reads. + * In such cases we want to leave VDD enabled and it's up to upper layers + * to turn it off. But for eg. i2c-dev access we need to turn it on/off + * ourselves. + */ + vdd = intel_pps_vdd_on_unlocked(intel_dp); + + /* + * dp aux is extremely sensitive to irq latency, hence request the + * lowest possible wakeup latency and so prevent the cpu from going into + * deep sleep states. + */ + cpu_latency_qos_update_request(&intel_dp->pm_qos, 0); + + intel_pps_check_power_unlocked(intel_dp); + + /* Try to wait for any previous AUX channel activity */ + for (try = 0; try < 3; try++) { + status = intel_uncore_read_notrace(uncore, ch_ctl); + if ((status & DP_AUX_CH_CTL_SEND_BUSY) == 0) + break; + msleep(1); + } + /* just trace the final value */ + trace_i915_reg_rw(false, ch_ctl, status, sizeof(status), true); + + if (try == 3) { + const u32 status = intel_uncore_read(uncore, ch_ctl); + + if (status != intel_dp->aux_busy_last_status) { + drm_WARN(&i915->drm, 1, + "%s: not started (status 0x%08x)\n", + intel_dp->aux.name, status); + intel_dp->aux_busy_last_status = status; + } + + ret = -EBUSY; + goto out; + } + + /* Only 5 data registers! */ + if (drm_WARN_ON(&i915->drm, send_bytes > 20 || recv_size > 20)) { + ret = -E2BIG; + goto out; + } + + while ((aux_clock_divider = intel_dp->get_aux_clock_divider(intel_dp, clock++))) { + u32 send_ctl = intel_dp->get_aux_send_ctl(intel_dp, + send_bytes, + aux_clock_divider); + + send_ctl |= aux_send_ctl_flags; + + /* Must try at least 3 times according to DP spec */ + for (try = 0; try < 5; try++) { + /* Load the send data into the aux channel data registers */ + for (i = 0; i < send_bytes; i += 4) + intel_uncore_write(uncore, + ch_data[i >> 2], + intel_dp_pack_aux(send + i, + send_bytes - i)); + + /* Send the command and wait for it to complete */ + intel_uncore_write(uncore, ch_ctl, send_ctl); + + status = intel_dp_aux_wait_done(intel_dp); + + /* Clear done status and any errors */ + intel_uncore_write(uncore, + ch_ctl, + status | + DP_AUX_CH_CTL_DONE | + DP_AUX_CH_CTL_TIME_OUT_ERROR | + DP_AUX_CH_CTL_RECEIVE_ERROR); + + /* + * DP CTS 1.2 Core Rev 1.1, 4.2.1.1 & 4.2.1.2 + * 400us delay required for errors and timeouts + * Timeout errors from the HW already meet this + * requirement so skip to next iteration + */ + if (status & DP_AUX_CH_CTL_TIME_OUT_ERROR) + continue; + + if (status & DP_AUX_CH_CTL_RECEIVE_ERROR) { + usleep_range(400, 500); + continue; + } + if (status & DP_AUX_CH_CTL_DONE) + goto done; + } + } + + if ((status & DP_AUX_CH_CTL_DONE) == 0) { + drm_err(&i915->drm, "%s: not done (status 0x%08x)\n", + intel_dp->aux.name, status); + ret = -EBUSY; + goto out; + } + +done: + /* + * Check for timeout or receive error. Timeouts occur when the sink is + * not connected. 
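 *
 * For context, a usage sketch of how transfers reach this function (the
 * drm_dp_dpcd_readb() helper and DP_DPCD_REV are the generic drm names;
 * the sketch itself is illustrative only): once intel_dp_aux_init() has
 * installed intel_dp_aux_transfer() as intel_dp->aux.transfer, a DPCD
 * access such as
 *
 *	u8 rev;
 *	drm_dp_dpcd_readb(&intel_dp->aux, DP_DPCD_REV, &rev);
 *
 * lands in intel_dp_aux_transfer() and, from there, here.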
+ */
+	if (status & DP_AUX_CH_CTL_RECEIVE_ERROR) {
+		drm_err(&i915->drm, "%s: receive error (status 0x%08x)\n",
+			intel_dp->aux.name, status);
+		ret = -EIO;
+		goto out;
+	}
+
+	/*
+	 * Timeouts occur when the device isn't connected, so they're "normal"
+	 * -- don't fill the kernel log with these
+	 */
+	if (status & DP_AUX_CH_CTL_TIME_OUT_ERROR) {
+		drm_dbg_kms(&i915->drm, "%s: timeout (status 0x%08x)\n",
+			    intel_dp->aux.name, status);
+		ret = -ETIMEDOUT;
+		goto out;
+	}
+
+	/* Unload any bytes sent back from the other side */
+	recv_bytes = ((status & DP_AUX_CH_CTL_MESSAGE_SIZE_MASK) >>
+		      DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT);
+
+	/*
+	 * By BSpec: "Message sizes of 0 or >20 are not allowed."
+	 * We have no idea of what happened so we return -EBUSY so
+	 * drm layer takes care for the necessary retries.
+	 */
+	if (recv_bytes == 0 || recv_bytes > 20) {
+		drm_dbg_kms(&i915->drm,
+			    "%s: Forbidden recv_bytes = %d on aux transaction\n",
+			    intel_dp->aux.name, recv_bytes);
+		ret = -EBUSY;
+		goto out;
+	}
+
+	if (recv_bytes > recv_size)
+		recv_bytes = recv_size;
+
+	for (i = 0; i < recv_bytes; i += 4)
+		intel_dp_unpack_aux(intel_uncore_read(uncore, ch_data[i >> 2]),
+				    recv + i, recv_bytes - i);
+
+	ret = recv_bytes;
+out:
+	cpu_latency_qos_update_request(&intel_dp->pm_qos, PM_QOS_DEFAULT_VALUE);
+
+	if (vdd)
+		intel_pps_vdd_off_unlocked(intel_dp, false);
+
+	intel_pps_unlock(intel_dp, pps_wakeref);
+	intel_display_power_put_async(i915, aux_domain, aux_wakeref);
+
+	if (is_tc_port)
+		intel_tc_port_unlock(dig_port);
+
+	return ret;
+}
+
+#define BARE_ADDRESS_SIZE 3
+#define HEADER_SIZE (BARE_ADDRESS_SIZE + 1)
+
+static void
+intel_dp_aux_header(u8 txbuf[HEADER_SIZE],
+		    const struct drm_dp_aux_msg *msg)
+{
+	txbuf[0] = (msg->request << 4) | ((msg->address >> 16) & 0xf);
+	txbuf[1] = (msg->address >> 8) & 0xff;
+	txbuf[2] = msg->address & 0xff;
+	txbuf[3] = msg->size - 1;
+}
+
+static u32 intel_dp_aux_xfer_flags(const struct drm_dp_aux_msg *msg)
+{
+	/*
+	 * If we're trying to send the HDCP Aksv, we need to set the Aksv
+	 * select bit to inform the hardware to send the Aksv after our header
+	 * since we can't access that data from software.
+	 */
+	if ((msg->request & ~DP_AUX_I2C_MOT) == DP_AUX_NATIVE_WRITE &&
+	    msg->address == DP_AUX_HDCP_AKSV)
+		return DP_AUX_CH_CTL_AUX_AKSV_SELECT;
+
+	return 0;
+}
+
+static ssize_t
+intel_dp_aux_transfer(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
+{
+	struct intel_dp *intel_dp = container_of(aux, struct intel_dp, aux);
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+	u8 txbuf[20], rxbuf[20];
+	size_t txsize, rxsize;
+	u32 flags = intel_dp_aux_xfer_flags(msg);
+	int ret;
+
+	intel_dp_aux_header(txbuf, msg);
+
+	switch (msg->request & ~DP_AUX_I2C_MOT) {
+	case DP_AUX_NATIVE_WRITE:
+	case DP_AUX_I2C_WRITE:
+	case DP_AUX_I2C_WRITE_STATUS_UPDATE:
+		txsize = msg->size ? HEADER_SIZE + msg->size : BARE_ADDRESS_SIZE;
+		rxsize = 2; /* 0 or 1 data bytes */
+
+		if (drm_WARN_ON(&i915->drm, txsize > 20))
+			return -E2BIG;
+
+		drm_WARN_ON(&i915->drm, !msg->buffer != !msg->size);
+
+		if (msg->buffer)
+			memcpy(txbuf + HEADER_SIZE, msg->buffer, msg->size);
+
+		ret = intel_dp_aux_xfer(intel_dp, txbuf, txsize,
+					rxbuf, rxsize, flags);
+		if (ret > 0) {
+			msg->reply = rxbuf[0] >> 4;
+
+			if (ret > 1) {
+				/* Number of bytes written in a short write. */
+				ret = clamp_t(int, rxbuf[1], 0, msg->size);
+			} else {
+				/* Return payload size. */
+				ret = msg->size;
+			}
+		}
+		break;
+
+	case DP_AUX_NATIVE_READ:
+	case DP_AUX_I2C_READ:
+		txsize = msg->size ?
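/*
 * A worked example of the header layout that intel_dp_aux_header()
 * builds above: a one-byte native read of DPCD address 0x00000
 * (DP_DPCD_REV) has msg->request = DP_AUX_NATIVE_READ (0x9) and
 * msg->size = 1, so
 *
 *	txbuf[0] = (0x9 << 4) | 0x0 = 0x90
 *	txbuf[1] = 0x00
 *	txbuf[2] = 0x00
 *	txbuf[3] = msg->size - 1 = 0x00
 */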
HEADER_SIZE : BARE_ADDRESS_SIZE; + rxsize = msg->size + 1; + + if (drm_WARN_ON(&i915->drm, rxsize > 20)) + return -E2BIG; + + ret = intel_dp_aux_xfer(intel_dp, txbuf, txsize, + rxbuf, rxsize, flags); + if (ret > 0) { + msg->reply = rxbuf[0] >> 4; + /* + * Assume happy day, and copy the data. The caller is + * expected to check msg->reply before touching it. + * + * Return payload size. + */ + ret--; + memcpy(msg->buffer, rxbuf + 1, ret); + } + break; + + default: + ret = -EINVAL; + break; + } + + return ret; +} + +static i915_reg_t g4x_aux_ctl_reg(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + enum aux_ch aux_ch = dig_port->aux_ch; + + switch (aux_ch) { + case AUX_CH_B: + case AUX_CH_C: + case AUX_CH_D: + return DP_AUX_CH_CTL(aux_ch); + default: + MISSING_CASE(aux_ch); + return DP_AUX_CH_CTL(AUX_CH_B); + } +} + +static i915_reg_t g4x_aux_data_reg(struct intel_dp *intel_dp, int index) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + enum aux_ch aux_ch = dig_port->aux_ch; + + switch (aux_ch) { + case AUX_CH_B: + case AUX_CH_C: + case AUX_CH_D: + return DP_AUX_CH_DATA(aux_ch, index); + default: + MISSING_CASE(aux_ch); + return DP_AUX_CH_DATA(AUX_CH_B, index); + } +} + +static i915_reg_t ilk_aux_ctl_reg(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + enum aux_ch aux_ch = dig_port->aux_ch; + + switch (aux_ch) { + case AUX_CH_A: + return DP_AUX_CH_CTL(aux_ch); + case AUX_CH_B: + case AUX_CH_C: + case AUX_CH_D: + return PCH_DP_AUX_CH_CTL(aux_ch); + default: + MISSING_CASE(aux_ch); + return DP_AUX_CH_CTL(AUX_CH_A); + } +} + +static i915_reg_t ilk_aux_data_reg(struct intel_dp *intel_dp, int index) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + enum aux_ch aux_ch = dig_port->aux_ch; + + switch (aux_ch) { + case AUX_CH_A: + return DP_AUX_CH_DATA(aux_ch, index); + case AUX_CH_B: + case AUX_CH_C: + case AUX_CH_D: + return PCH_DP_AUX_CH_DATA(aux_ch, index); + default: + MISSING_CASE(aux_ch); + return DP_AUX_CH_DATA(AUX_CH_A, index); + } +} + +static i915_reg_t skl_aux_ctl_reg(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + enum aux_ch aux_ch = dig_port->aux_ch; + + switch (aux_ch) { + case AUX_CH_A: + case AUX_CH_B: + case AUX_CH_C: + case AUX_CH_D: + case AUX_CH_E: + case AUX_CH_F: + return DP_AUX_CH_CTL(aux_ch); + default: + MISSING_CASE(aux_ch); + return DP_AUX_CH_CTL(AUX_CH_A); + } +} + +static i915_reg_t skl_aux_data_reg(struct intel_dp *intel_dp, int index) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + enum aux_ch aux_ch = dig_port->aux_ch; + + switch (aux_ch) { + case AUX_CH_A: + case AUX_CH_B: + case AUX_CH_C: + case AUX_CH_D: + case AUX_CH_E: + case AUX_CH_F: + return DP_AUX_CH_DATA(aux_ch, index); + default: + MISSING_CASE(aux_ch); + return DP_AUX_CH_DATA(AUX_CH_A, index); + } +} + +static i915_reg_t tgl_aux_ctl_reg(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + enum aux_ch aux_ch = dig_port->aux_ch; + + switch 
(aux_ch) { + case AUX_CH_A: + case AUX_CH_B: + case AUX_CH_C: + case AUX_CH_USBC1: + case AUX_CH_USBC2: + case AUX_CH_USBC3: + case AUX_CH_USBC4: + case AUX_CH_USBC5: + case AUX_CH_USBC6: + return DP_AUX_CH_CTL(aux_ch); + default: + MISSING_CASE(aux_ch); + return DP_AUX_CH_CTL(AUX_CH_A); + } +} + +static i915_reg_t tgl_aux_data_reg(struct intel_dp *intel_dp, int index) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + enum aux_ch aux_ch = dig_port->aux_ch; + + switch (aux_ch) { + case AUX_CH_A: + case AUX_CH_B: + case AUX_CH_C: + case AUX_CH_USBC1: + case AUX_CH_USBC2: + case AUX_CH_USBC3: + case AUX_CH_USBC4: + case AUX_CH_USBC5: + case AUX_CH_USBC6: + return DP_AUX_CH_DATA(aux_ch, index); + default: + MISSING_CASE(aux_ch); + return DP_AUX_CH_DATA(AUX_CH_A, index); + } +} + +void intel_dp_aux_fini(struct intel_dp *intel_dp) +{ + if (cpu_latency_qos_request_active(&intel_dp->pm_qos)) + cpu_latency_qos_remove_request(&intel_dp->pm_qos); + + kfree(intel_dp->aux.name); +} + +void intel_dp_aux_init(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + struct intel_encoder *encoder = &dig_port->base; + enum aux_ch aux_ch = dig_port->aux_ch; + + if (INTEL_GEN(dev_priv) >= 12) { + intel_dp->aux_ch_ctl_reg = tgl_aux_ctl_reg; + intel_dp->aux_ch_data_reg = tgl_aux_data_reg; + } else if (INTEL_GEN(dev_priv) >= 9) { + intel_dp->aux_ch_ctl_reg = skl_aux_ctl_reg; + intel_dp->aux_ch_data_reg = skl_aux_data_reg; + } else if (HAS_PCH_SPLIT(dev_priv)) { + intel_dp->aux_ch_ctl_reg = ilk_aux_ctl_reg; + intel_dp->aux_ch_data_reg = ilk_aux_data_reg; + } else { + intel_dp->aux_ch_ctl_reg = g4x_aux_ctl_reg; + intel_dp->aux_ch_data_reg = g4x_aux_data_reg; + } + + if (INTEL_GEN(dev_priv) >= 9) + intel_dp->get_aux_clock_divider = skl_get_aux_clock_divider; + else if (IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv)) + intel_dp->get_aux_clock_divider = hsw_get_aux_clock_divider; + else if (HAS_PCH_SPLIT(dev_priv)) + intel_dp->get_aux_clock_divider = ilk_get_aux_clock_divider; + else + intel_dp->get_aux_clock_divider = g4x_get_aux_clock_divider; + + if (INTEL_GEN(dev_priv) >= 9) + intel_dp->get_aux_send_ctl = skl_get_aux_send_ctl; + else + intel_dp->get_aux_send_ctl = g4x_get_aux_send_ctl; + + drm_dp_aux_init(&intel_dp->aux); + + /* Failure to allocate our preferred name is not critical */ + if (INTEL_GEN(dev_priv) >= 12 && aux_ch >= AUX_CH_USBC1) + intel_dp->aux.name = kasprintf(GFP_KERNEL, "AUX USBC%c/%s", + aux_ch - AUX_CH_USBC1 + '1', + encoder->base.name); + else + intel_dp->aux.name = kasprintf(GFP_KERNEL, "AUX %c/%s", + aux_ch_name(aux_ch), + encoder->base.name); + + intel_dp->aux.transfer = intel_dp_aux_transfer; + cpu_latency_qos_add_request(&intel_dp->pm_qos, PM_QOS_DEFAULT_VALUE); +} diff --git a/drivers/gpu/drm/i915/display/intel_dp_aux.h b/drivers/gpu/drm/i915/display/intel_dp_aux.h new file mode 100644 index 000000000000..4afbe76217b9 --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_dp_aux.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2020-2021 Intel Corporation + */ + +#ifndef __INTEL_DP_AUX_H__ +#define __INTEL_DP_AUX_H__ + +#include <linux/types.h> + +struct intel_dp; + +u32 intel_dp_pack_aux(const u8 *src, int src_bytes); + +void intel_dp_aux_fini(struct intel_dp *intel_dp); +void intel_dp_aux_init(struct intel_dp *intel_dp); + +#endif /* __INTEL_DP_AUX_H__ */ diff --git 
a/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c b/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
index 9775f33d1aac..651884390137 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
@@ -22,8 +22,26 @@ * */
+/*
+ * Laptops with Intel GPUs which have panels that support controlling the
+ * backlight through DP AUX can actually use two different interfaces: Intel's
+ * proprietary DP AUX backlight interface, and the standard VESA backlight
+ * interface. Unfortunately, at the time of writing this, a lot of laptops will
+ * advertise support for the standard VESA backlight interface when they
+ * don't properly support it. However, on these systems the Intel backlight
+ * interface generally does work properly. Additionally, these systems will
+ * usually just indicate that they use PWM backlight controls in their VBIOS
+ * for some reason.
+ */
+
 #include "intel_display_types.h"
 #include "intel_dp_aux_backlight.h"
+#include "intel_panel.h"
+
+/* TODO:
+ * Implement HDR; right now we just implement the bare minimum to bring us back into SDR mode so we
+ * can make people's backlights work in the meantime
+ */
 /*
  * DP AUX registers for Intel's proprietary HDR backlight interface. We define
@@ -77,6 +95,178 @@ #define INTEL_EDP_BRIGHTNESS_OPTIMIZATION_1 0x359
+/* Intel EDP backlight callbacks */
+static bool
+intel_dp_aux_supports_hdr_backlight(struct intel_connector *connector)
+{
+	struct drm_i915_private *i915 = to_i915(connector->base.dev);
+	struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
+	struct drm_dp_aux *aux = &intel_dp->aux;
+	struct intel_panel *panel = &connector->panel;
+	int ret;
+	u8 tcon_cap[4];
+
+	ret = drm_dp_dpcd_read(aux, INTEL_EDP_HDR_TCON_CAP0, tcon_cap, sizeof(tcon_cap));
+	if (ret < 0)
+		return false;
+
+	if (!(tcon_cap[1] & INTEL_EDP_HDR_TCON_BRIGHTNESS_NITS_CAP))
+		return false;
+
+	if (tcon_cap[0] >= 1) {
+		drm_dbg_kms(&i915->drm, "Detected Intel HDR backlight interface version %d\n",
+			    tcon_cap[0]);
+	} else {
+		drm_dbg_kms(&i915->drm, "Detected unsupported HDR backlight interface version %d\n",
+			    tcon_cap[0]);
+		return false;
+	}
+
+	panel->backlight.edp.intel.sdr_uses_aux =
+		tcon_cap[2] & INTEL_EDP_SDR_TCON_BRIGHTNESS_AUX_CAP;
+
+	return true;
+}
+
+static u32
+intel_dp_aux_hdr_get_backlight(struct intel_connector *connector, enum pipe pipe)
+{
+	struct drm_i915_private *i915 = to_i915(connector->base.dev);
+	struct intel_panel *panel = &connector->panel;
+	struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
+	u8 tmp;
+	u8 buf[2] = { 0 };
+
+	if (drm_dp_dpcd_readb(&intel_dp->aux, INTEL_EDP_HDR_GETSET_CTRL_PARAMS, &tmp) < 0) {
+		drm_err(&i915->drm, "Failed to read current backlight mode from DPCD\n");
+		return 0;
+	}
+
+	if (!(tmp & INTEL_EDP_HDR_TCON_BRIGHTNESS_AUX_ENABLE)) {
+		if (!panel->backlight.edp.intel.sdr_uses_aux) {
+			u32 pwm_level = panel->backlight.pwm_funcs->get(connector, pipe);
+
+			return intel_panel_backlight_level_from_pwm(connector, pwm_level);
+		}
+
+		/* Assume 100% brightness if backlight controls aren't enabled yet */
+		return panel->backlight.max;
+	}
+
+	if (drm_dp_dpcd_read(&intel_dp->aux, INTEL_EDP_BRIGHTNESS_NITS_LSB, buf, sizeof(buf)) < 0) {
+		drm_err(&i915->drm, "Failed to read brightness from DPCD\n");
+		return 0;
+	}
+
+	return (buf[1] << 8 | buf[0]);
+}
+
+static void
+intel_dp_aux_hdr_set_aux_backlight(const struct drm_connector_state *conn_state, u32 level)
+{
+	struct intel_connector *connector =
to_intel_connector(conn_state->connector); + struct drm_device *dev = connector->base.dev; + struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder); + u8 buf[4] = { 0 }; + + buf[0] = level & 0xFF; + buf[1] = (level & 0xFF00) >> 8; + + if (drm_dp_dpcd_write(&intel_dp->aux, INTEL_EDP_BRIGHTNESS_NITS_LSB, buf, 4) < 0) + drm_err(dev, "Failed to write brightness level to DPCD\n"); +} + +static void +intel_dp_aux_hdr_set_backlight(const struct drm_connector_state *conn_state, u32 level) +{ + struct intel_connector *connector = to_intel_connector(conn_state->connector); + struct intel_panel *panel = &connector->panel; + + if (panel->backlight.edp.intel.sdr_uses_aux) { + intel_dp_aux_hdr_set_aux_backlight(conn_state, level); + } else { + const u32 pwm_level = intel_panel_backlight_level_to_pwm(connector, level); + + intel_panel_set_pwm_level(conn_state, pwm_level); + } +} + +static void +intel_dp_aux_hdr_enable_backlight(const struct intel_crtc_state *crtc_state, + const struct drm_connector_state *conn_state, u32 level) +{ + struct intel_connector *connector = to_intel_connector(conn_state->connector); + struct intel_panel *panel = &connector->panel; + struct drm_i915_private *i915 = to_i915(connector->base.dev); + struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder); + int ret; + u8 old_ctrl, ctrl; + + ret = drm_dp_dpcd_readb(&intel_dp->aux, INTEL_EDP_HDR_GETSET_CTRL_PARAMS, &old_ctrl); + if (ret < 0) { + drm_err(&i915->drm, "Failed to read current backlight control mode: %d\n", ret); + return; + } + + ctrl = old_ctrl; + if (panel->backlight.edp.intel.sdr_uses_aux) { + ctrl |= INTEL_EDP_HDR_TCON_BRIGHTNESS_AUX_ENABLE; + intel_dp_aux_hdr_set_aux_backlight(conn_state, level); + } else { + u32 pwm_level = intel_panel_backlight_level_to_pwm(connector, level); + + panel->backlight.pwm_funcs->enable(crtc_state, conn_state, pwm_level); + + ctrl &= ~INTEL_EDP_HDR_TCON_BRIGHTNESS_AUX_ENABLE; + } + + if (ctrl != old_ctrl) + if (drm_dp_dpcd_writeb(&intel_dp->aux, INTEL_EDP_HDR_GETSET_CTRL_PARAMS, ctrl) < 0) + drm_err(&i915->drm, "Failed to configure DPCD brightness controls\n"); +} + +static void +intel_dp_aux_hdr_disable_backlight(const struct drm_connector_state *conn_state, u32 level) +{ + struct intel_connector *connector = to_intel_connector(conn_state->connector); + struct intel_panel *panel = &connector->panel; + + /* Nothing to do for AUX based backlight controls */ + if (panel->backlight.edp.intel.sdr_uses_aux) + return; + + /* Note we want the actual pwm_level to be 0, regardless of pwm_min */ + panel->backlight.pwm_funcs->disable(conn_state, intel_panel_invert_pwm_level(connector, 0)); +} + +static int +intel_dp_aux_hdr_setup_backlight(struct intel_connector *connector, enum pipe pipe) +{ + struct drm_i915_private *i915 = to_i915(connector->base.dev); + struct intel_panel *panel = &connector->panel; + int ret; + + if (panel->backlight.edp.intel.sdr_uses_aux) { + drm_dbg_kms(&i915->drm, "SDR backlight is controlled through DPCD\n"); + } else { + drm_dbg_kms(&i915->drm, "SDR backlight is controlled through PWM\n"); + + ret = panel->backlight.pwm_funcs->setup(connector, pipe); + if (ret < 0) { + drm_err(&i915->drm, + "Failed to setup SDR backlight controls through PWM: %d\n", ret); + return ret; + } + } + + panel->backlight.max = 512; + panel->backlight.min = 0; + panel->backlight.level = intel_dp_aux_hdr_get_backlight(connector, pipe); + panel->backlight.enabled = panel->backlight.level != 0; + + return 0; +} + /* VESA backlight callbacks */ static void 
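/*
 * (A worked example of the nits encoding used by the HDR callbacks
 * above: a requested level of 300 nits is written as
 * buf[0] = 300 & 0xFF = 0x2C and buf[1] = (300 & 0xFF00) >> 8 = 0x01,
 * and the read path reconstructs (0x01 << 8) | 0x2C = 300.)
 */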
set_vesa_backlight_enable(struct intel_dp *intel_dp, bool enable)
{
@@ -128,7 +318,7 @@ static bool intel_dp_aux_vesa_backlight_dpcd_mode(struct intel_connector *connec
  * Read the current backlight value from DPCD register(s) based
  * on if 8-bit(MSB) or 16-bit(MSB and LSB) values are supported
  */
-static u32 intel_dp_aux_vesa_get_backlight(struct intel_connector *connector)
+static u32 intel_dp_aux_vesa_get_backlight(struct intel_connector *connector, enum pipe unused)
 {
 	struct intel_dp *intel_dp = intel_attached_dp(connector);
 	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
@@ -195,7 +385,7 @@ static bool intel_dp_aux_vesa_set_pwm_freq(struct intel_connector *connector)
 {
 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
 	struct intel_dp *intel_dp = intel_attached_dp(connector);
-	const u8 pn = connector->panel.backlight.pwmgen_bit_count;
+	const u8 pn = connector->panel.backlight.edp.vesa.pwmgen_bit_count;
 	int freq, fxp, f, fxp_actual, fxp_min, fxp_max;
 
 	freq = dev_priv->vbt.backlight.pwm_freq_hz;
@@ -236,6 +426,7 @@ intel_dp_aux_vesa_enable_backlight(const struct intel_crtc_state *crtc_state,
 	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 	struct intel_panel *panel = &connector->panel;
 	u8 dpcd_buf, new_dpcd_buf, edp_backlight_mode;
+	u8 pwmgen_bit_count = panel->backlight.edp.vesa.pwmgen_bit_count;
 
 	if (drm_dp_dpcd_readb(&intel_dp->aux, DP_EDP_BACKLIGHT_MODE_SET_REGISTER,
 			      &dpcd_buf) != 1) {
@@ -256,7 +447,7 @@ intel_dp_aux_vesa_enable_backlight(const struct intel_crtc_state *crtc_state,
 
 		if (drm_dp_dpcd_writeb(&intel_dp->aux,
 				       DP_EDP_PWMGEN_BIT_COUNT,
-				       panel->backlight.pwmgen_bit_count) < 0)
+				       pwmgen_bit_count) < 0)
 			drm_dbg_kms(&i915->drm,
 				    "Failed to write aux pwmgen bit count\n");
 
@@ -364,7 +555,7 @@ static u32 intel_dp_aux_vesa_calc_max_backlight(struct intel_connector *connecto
 			    "Failed to write aux pwmgen bit count\n");
 		return max_backlight;
 	}
-	panel->backlight.pwmgen_bit_count = pn;
+	panel->backlight.edp.vesa.pwmgen_bit_count = pn;
 
 	max_backlight = (1 << pn) - 1;
 
@@ -381,7 +572,7 @@ static int intel_dp_aux_vesa_setup_backlight(struct intel_connector *connector,
 		return -ENODEV;
 
 	panel->backlight.min = 0;
-	panel->backlight.level = intel_dp_aux_vesa_get_backlight(connector);
+	panel->backlight.level = intel_dp_aux_vesa_get_backlight(connector, pipe);
 	panel->backlight.enabled = intel_dp_aux_vesa_backlight_dpcd_mode(connector) &&
 				   panel->backlight.level != 0;
 
@@ -395,9 +586,14 @@ intel_dp_aux_supports_vesa_backlight(struct intel_connector *connector)
 	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 
 	/* Check the eDP Display control capabilities registers to determine if
-	 * the panel can support backlight control over the aux channel
+	 * the panel can support backlight control over the aux channel.
+	 *
+	 * TODO: We currently only support AUX-only backlight configurations, not backlights which
+	 * require a mix of PWM and AUX controls to work. In the meantime, these machines typically
+	 * work just fine using normal PWM controls anyway.
*/ if (intel_dp->edp_dpcd[1] & DP_EDP_TCON_BACKLIGHT_ADJUSTMENT_CAP && + (intel_dp->edp_dpcd[1] & DP_EDP_BACKLIGHT_AUX_ENABLE_CAP) && (intel_dp->edp_dpcd[2] & DP_EDP_BACKLIGHT_BRIGHTNESS_AUX_SET_CAP)) { drm_dbg_kms(&i915->drm, "AUX Backlight Control Supported!\n"); return true; @@ -405,6 +601,14 @@ intel_dp_aux_supports_vesa_backlight(struct intel_connector *connector) return false; } +static const struct intel_panel_bl_funcs intel_dp_hdr_bl_funcs = { + .setup = intel_dp_aux_hdr_setup_backlight, + .enable = intel_dp_aux_hdr_enable_backlight, + .disable = intel_dp_aux_hdr_disable_backlight, + .set = intel_dp_aux_hdr_set_backlight, + .get = intel_dp_aux_hdr_get_backlight, +}; + static const struct intel_panel_bl_funcs intel_dp_vesa_bl_funcs = { .setup = intel_dp_aux_vesa_setup_backlight, .enable = intel_dp_aux_vesa_enable_backlight, @@ -413,36 +617,73 @@ static const struct intel_panel_bl_funcs intel_dp_vesa_bl_funcs = { .get = intel_dp_aux_vesa_get_backlight, }; -int intel_dp_aux_init_backlight_funcs(struct intel_connector *intel_connector) +enum intel_dp_aux_backlight_modparam { + INTEL_DP_AUX_BACKLIGHT_AUTO = -1, + INTEL_DP_AUX_BACKLIGHT_OFF = 0, + INTEL_DP_AUX_BACKLIGHT_ON = 1, + INTEL_DP_AUX_BACKLIGHT_FORCE_VESA = 2, + INTEL_DP_AUX_BACKLIGHT_FORCE_INTEL = 3, +}; + +int intel_dp_aux_init_backlight_funcs(struct intel_connector *connector) { - struct intel_panel *panel = &intel_connector->panel; - struct intel_dp *intel_dp = enc_to_intel_dp(intel_connector->encoder); + struct drm_device *dev = connector->base.dev; + struct intel_panel *panel = &connector->panel; + struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder); struct drm_i915_private *i915 = dp_to_i915(intel_dp); + bool try_intel_interface = false, try_vesa_interface = false; - if (i915->params.enable_dpcd_backlight == 0 || - !intel_dp_aux_supports_vesa_backlight(intel_connector)) + /* Check the VBT and user's module parameters to figure out which + * interfaces to probe + */ + switch (i915->params.enable_dpcd_backlight) { + case INTEL_DP_AUX_BACKLIGHT_OFF: return -ENODEV; + case INTEL_DP_AUX_BACKLIGHT_AUTO: + switch (i915->vbt.backlight.type) { + case INTEL_BACKLIGHT_VESA_EDP_AUX_INTERFACE: + try_vesa_interface = true; + break; + case INTEL_BACKLIGHT_DISPLAY_DDI: + try_intel_interface = true; + try_vesa_interface = true; + break; + default: + return -ENODEV; + } + break; + case INTEL_DP_AUX_BACKLIGHT_ON: + if (i915->vbt.backlight.type != INTEL_BACKLIGHT_VESA_EDP_AUX_INTERFACE) + try_intel_interface = true; + + try_vesa_interface = true; + break; + case INTEL_DP_AUX_BACKLIGHT_FORCE_VESA: + try_vesa_interface = true; + break; + case INTEL_DP_AUX_BACKLIGHT_FORCE_INTEL: + try_intel_interface = true; + break; + } /* - * There are a lot of machines that don't advertise the backlight - * control interface to use properly in their VBIOS, :\ + * A lot of eDP panels in the wild will report supporting both the + * Intel proprietary backlight control interface, and the VESA + * backlight control interface. Many of these panels are liars though, + * and will only work with the Intel interface. So, always probe for + * that first. */ - if (i915->vbt.backlight.type != - INTEL_BACKLIGHT_VESA_EDP_AUX_INTERFACE && - i915->params.enable_dpcd_backlight != 1 && - !drm_dp_has_quirk(&intel_dp->desc, intel_dp->edid_quirks, - DP_QUIRK_FORCE_DPCD_BACKLIGHT)) { - drm_info(&i915->drm, - "Panel advertises DPCD backlight support, but " - "VBT disagrees. If your backlight controls " - "don't work try booting with " - "i915.enable_dpcd_backlight=1. 
If your machine " - "needs this, please file a _new_ bug report on " - "drm/i915, see " FDO_BUG_URL " for details.\n"); - return -ENODEV; + if (try_intel_interface && intel_dp_aux_supports_hdr_backlight(connector)) { + drm_dbg_kms(dev, "Using Intel proprietary eDP backlight controls\n"); + panel->backlight.funcs = &intel_dp_hdr_bl_funcs; + return 0; } - panel->backlight.funcs = &intel_dp_vesa_bl_funcs; + if (try_vesa_interface && intel_dp_aux_supports_vesa_backlight(connector)) { + drm_dbg_kms(dev, "Using VESA eDP backlight controls\n"); + panel->backlight.funcs = &intel_dp_vesa_bl_funcs; + return 0; + } - return 0; + return -ENODEV; } diff --git a/drivers/gpu/drm/i915/display/intel_dp_hdcp.c b/drivers/gpu/drm/i915/display/intel_dp_hdcp.c index 03424d20e9f7..4dba5bb15af5 100644 --- a/drivers/gpu/drm/i915/display/intel_dp_hdcp.c +++ b/drivers/gpu/drm/i915/display/intel_dp_hdcp.c @@ -16,6 +16,30 @@ #include "intel_dp.h" #include "intel_hdcp.h" +static unsigned int transcoder_to_stream_enc_status(enum transcoder cpu_transcoder) +{ + u32 stream_enc_mask; + + switch (cpu_transcoder) { + case TRANSCODER_A: + stream_enc_mask = HDCP_STATUS_STREAM_A_ENC; + break; + case TRANSCODER_B: + stream_enc_mask = HDCP_STATUS_STREAM_B_ENC; + break; + case TRANSCODER_C: + stream_enc_mask = HDCP_STATUS_STREAM_C_ENC; + break; + case TRANSCODER_D: + stream_enc_mask = HDCP_STATUS_STREAM_D_ENC; + break; + default: + stream_enc_mask = 0; + } + + return stream_enc_mask; +} + static void intel_dp_hdcp_wait_for_cp_irq(struct intel_hdcp *hdcp, int timeout) { long ret; @@ -561,7 +585,8 @@ int intel_dp_hdcp2_config_stream_type(struct intel_digital_port *dig_port, } static -int intel_dp_hdcp2_check_link(struct intel_digital_port *dig_port) +int intel_dp_hdcp2_check_link(struct intel_digital_port *dig_port, + struct intel_connector *connector) { u8 rx_status; int ret; @@ -622,48 +647,143 @@ static const struct intel_hdcp_shim intel_dp_hdcp_shim = { }; static int -intel_dp_mst_hdcp_toggle_signalling(struct intel_digital_port *dig_port, - enum transcoder cpu_transcoder, - bool enable) +intel_dp_mst_toggle_hdcp_stream_select(struct intel_connector *connector, + bool enable) { - struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct drm_i915_private *i915 = to_i915(connector->base.dev); + struct intel_hdcp *hdcp = &connector->hdcp; int ret; - if (!enable) - usleep_range(6, 60); /* Bspec says >= 6us */ - - ret = intel_ddi_toggle_hdcp_signalling(&dig_port->base, - cpu_transcoder, enable); + ret = intel_ddi_toggle_hdcp_bits(&dig_port->base, + hdcp->stream_transcoder, enable, + TRANS_DDI_HDCP_SELECT); if (ret) - drm_dbg_kms(&i915->drm, "%s HDCP signalling failed (%d)\n", - enable ? "Enable" : "Disable", ret); + drm_err(&i915->drm, "%s HDCP stream select failed (%d)\n", + enable ? 
"Enable" : "Disable", ret); return ret; } -static -bool intel_dp_mst_hdcp_check_link(struct intel_digital_port *dig_port, - struct intel_connector *connector) +static int +intel_dp_mst_hdcp_stream_encryption(struct intel_connector *connector, + bool enable) +{ + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct drm_i915_private *i915 = to_i915(connector->base.dev); + struct intel_hdcp *hdcp = &connector->hdcp; + enum port port = dig_port->base.port; + enum transcoder cpu_transcoder = hdcp->stream_transcoder; + u32 stream_enc_status; + int ret; + + ret = intel_dp_mst_toggle_hdcp_stream_select(connector, enable); + if (ret) + return ret; + + stream_enc_status = transcoder_to_stream_enc_status(cpu_transcoder); + if (!stream_enc_status) + return -EINVAL; + + /* Wait for encryption confirmation */ + if (intel_de_wait_for_register(i915, + HDCP_STATUS(i915, cpu_transcoder, port), + stream_enc_status, + enable ? stream_enc_status : 0, + HDCP_ENCRYPT_STATUS_CHANGE_TIMEOUT_MS)) { + drm_err(&i915->drm, "Timed out waiting for transcoder: %s stream encryption %s\n", + transcoder_name(cpu_transcoder), enable ? "enabled" : "disabled"); + return -ETIMEDOUT; + } + + return 0; +} + +static bool intel_dp_mst_get_qses_status(struct intel_digital_port *dig_port, + struct intel_connector *connector) { struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); - struct intel_dp *intel_dp = &dig_port->dp; struct drm_dp_query_stream_enc_status_ack_reply reply; + struct intel_dp *intel_dp = &dig_port->dp; int ret; - if (!intel_dp_hdcp_check_link(dig_port, connector)) - return false; - ret = drm_dp_send_query_stream_enc_status(&intel_dp->mst_mgr, connector->port, &reply); if (ret) { drm_dbg_kms(&i915->drm, - "[CONNECTOR:%d:%s] failed QSES ret=%d\n", - connector->base.base.id, connector->base.name, ret); + "[%s:%d] failed QSES ret=%d\n", + connector->base.name, connector->base.base.id, ret); return false; } + drm_dbg_kms(&i915->drm, "[%s:%d] QSES stream auth: %d stream enc: %d\n", + connector->base.name, connector->base.base.id, + reply.auth_completed, reply.encryption_enabled); + return reply.auth_completed && reply.encryption_enabled; } +static int +intel_dp_mst_hdcp2_stream_encryption(struct intel_connector *connector, + bool enable) +{ + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct drm_i915_private *i915 = to_i915(connector->base.dev); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; + struct intel_hdcp *hdcp = &connector->hdcp; + enum transcoder cpu_transcoder = hdcp->stream_transcoder; + enum pipe pipe = (enum pipe)cpu_transcoder; + enum port port = dig_port->base.port; + int ret; + + drm_WARN_ON(&i915->drm, enable && + !!(intel_de_read(i915, HDCP2_AUTH_STREAM(i915, cpu_transcoder, port)) + & AUTH_STREAM_TYPE) != data->streams[0].stream_type); + + ret = intel_dp_mst_toggle_hdcp_stream_select(connector, enable); + if (ret) + return ret; + + /* Wait for encryption confirmation */ + if (intel_de_wait_for_register(i915, + HDCP2_STREAM_STATUS(i915, cpu_transcoder, pipe), + STREAM_ENCRYPTION_STATUS, + enable ? STREAM_ENCRYPTION_STATUS : 0, + HDCP_ENCRYPT_STATUS_CHANGE_TIMEOUT_MS)) { + drm_err(&i915->drm, "Timed out waiting for transcoder: %s stream encryption %s\n", + transcoder_name(cpu_transcoder), enable ? "enabled" : "disabled"); + return -ETIMEDOUT; + } + + return 0; +} + +/* + * DP v2.0 I.3.3 ignore the stream signature L' in QSES reply msg reply. 
+ * I.3.5 MST source device may use a QSES msg to query downstream status + * for a particular stream. + */ +static +int intel_dp_mst_hdcp2_check_link(struct intel_digital_port *dig_port, + struct intel_connector *connector) +{ + struct intel_hdcp *hdcp = &connector->hdcp; + int ret; + + /* + * We only need to do the link check for the connector involved in + * HDCP port authentication and encryption. We can reuse the + * hdcp->is_repeater flag to identify that connector. + */ + if (hdcp->is_repeater) { + ret = intel_dp_hdcp2_check_link(dig_port, connector); + if (ret) + return ret; + } + + return intel_dp_mst_get_qses_status(dig_port, connector) ? 0 : -EINVAL; +} + static const struct intel_hdcp_shim intel_dp_mst_hdcp_shim = { .write_an_aksv = intel_dp_hdcp_write_an_aksv, .read_bksv = intel_dp_hdcp_read_bksv, @@ -673,10 +793,16 @@ static const struct intel_hdcp_shim intel_dp_mst_hdcp_shim = { .read_ksv_ready = intel_dp_hdcp_read_ksv_ready, .read_ksv_fifo = intel_dp_hdcp_read_ksv_fifo, .read_v_prime_part = intel_dp_hdcp_read_v_prime_part, - .toggle_signalling = intel_dp_mst_hdcp_toggle_signalling, - .check_link = intel_dp_mst_hdcp_check_link, + .toggle_signalling = intel_dp_hdcp_toggle_signalling, + .stream_encryption = intel_dp_mst_hdcp_stream_encryption, + .check_link = intel_dp_hdcp_check_link, .hdcp_capable = intel_dp_hdcp_capable, - + .write_2_2_msg = intel_dp_hdcp2_write_msg, + .read_2_2_msg = intel_dp_hdcp2_read_msg, + .config_stream_type = intel_dp_hdcp2_config_stream_type, + .stream_2_2_encryption = intel_dp_mst_hdcp2_stream_encryption, + .check_2_2_link = intel_dp_mst_hdcp2_check_link, + .hdcp_2_2_capable = intel_dp_hdcp2_capable, .protocol = HDCP_PROTOCOL_DP, }; @@ -693,10 +819,10 @@ int intel_dp_init_hdcp(struct intel_digital_port *dig_port, return 0; if (intel_connector->mst_port) - return intel_hdcp_init(intel_connector, port, + return intel_hdcp_init(intel_connector, dig_port, &intel_dp_mst_hdcp_shim); else if (!intel_dp_is_edp(intel_dp)) - return intel_hdcp_init(intel_connector, port, + return intel_hdcp_init(intel_connector, dig_port, &intel_dp_hdcp_shim); return 0; diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c index 91d3979902d0..892d7db7d94f 100644 --- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c +++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c @@ -34,18 +34,6 @@ intel_dp_dump_link_status(const u8 link_status[DP_LINK_STATUS_SIZE]) link_status[3], link_status[4], link_status[5]); } -static int intel_dp_lttpr_count(struct intel_dp *intel_dp) -{ - int count = drm_dp_lttpr_count(intel_dp->lttpr_common_caps); - - /* - * Pretend no LTTPRs in case of LTTPR detection error, or - * if too many (>8) LTTPRs are detected. This translates to link - * training in transparent mode. - */ - return count <= 0 ? 0 : count; -} - static void intel_dp_reset_lttpr_count(struct intel_dp *intel_dp) { intel_dp->lttpr_common_caps[DP_PHY_REPEATER_CNT - @@ -142,6 +130,17 @@ int intel_dp_lttpr_init(struct intel_dp *intel_dp) return 0; ret = intel_dp_read_lttpr_common_caps(intel_dp); + if (!ret) + return 0; + + lttpr_count = drm_dp_lttpr_count(intel_dp->lttpr_common_caps); + /* + * Prevent setting LTTPR transparent mode explicitly if no LTTPRs are + * detected, as this breaks link training at least on the Dell WD19TB + * dock. + */ + if (lttpr_count == 0) + return 0; /* * See DP Standard v2.0 3.6.6.1 
about the explicit disabling of @@ -150,17 +149,12 @@ int intel_dp_lttpr_init(struct intel_dp *intel_dp) */ intel_dp_set_lttpr_transparent_mode(intel_dp, true); - if (!ret) - return 0; - - lttpr_count = intel_dp_lttpr_count(intel_dp); - /* * In case of an unsupported number of LTTPRs, or on failure to switch to * non-transparent mode, fall back to transparent link training mode, * still taking into account any LTTPR common lane rate/count limits. */ - if (lttpr_count == 0) + if (lttpr_count < 0) return 0; if (!intel_dp_set_lttpr_transparent_mode(intel_dp, false)) { @@ -222,11 +216,11 @@ intel_dp_phy_is_downstream_of_source(struct intel_dp *intel_dp, enum drm_dp_phy dp_phy) { struct drm_i915_private *i915 = dp_to_i915(intel_dp); - int lttpr_count = intel_dp_lttpr_count(intel_dp); + int lttpr_count = drm_dp_lttpr_count(intel_dp->lttpr_common_caps); - drm_WARN_ON_ONCE(&i915->drm, lttpr_count == 0 && dp_phy != DP_PHY_DPRX); + drm_WARN_ON_ONCE(&i915->drm, lttpr_count <= 0 && dp_phy != DP_PHY_DPRX); - return lttpr_count == 0 || dp_phy == DP_PHY_LTTPR(lttpr_count - 1); + return lttpr_count <= 0 || dp_phy == DP_PHY_LTTPR(lttpr_count - 1); } static u8 intel_dp_phy_voltage_max(struct intel_dp *intel_dp, @@ -334,6 +328,27 @@ intel_dp_set_link_train(struct intel_dp *intel_dp, return drm_dp_dpcd_write(&intel_dp->aux, reg, buf, len) == len; } +void intel_dp_set_signal_levels(struct intel_dp *intel_dp, + const struct intel_crtc_state *crtc_state, + enum drm_dp_phy dp_phy) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + u8 train_set = intel_dp->train_set[0]; + char phy_name[10]; + + drm_dbg_kms(&dev_priv->drm, "Using vswing level %d%s, pre-emphasis level %d%s, at %s\n", + train_set & DP_TRAIN_VOLTAGE_SWING_MASK, + train_set & DP_TRAIN_MAX_SWING_REACHED ? " (max)" : "", + (train_set & DP_TRAIN_PRE_EMPHASIS_MASK) >> + DP_TRAIN_PRE_EMPHASIS_SHIFT, + train_set & DP_TRAIN_MAX_PRE_EMPHASIS_REACHED ? + " (max)" : "", + intel_dp_phy_name(dp_phy, phy_name, sizeof(phy_name))); + + if (intel_dp_phy_is_downstream_of_source(intel_dp, dp_phy)) + intel_dp->set_signal_levels(intel_dp, crtc_state); +} + static bool intel_dp_reset_link_train(struct intel_dp *intel_dp, const struct intel_crtc_state *crtc_state, @@ -341,7 +356,7 @@ intel_dp_reset_link_train(struct intel_dp *intel_dp, u8 dp_train_pat) { memset(intel_dp->train_set, 0, sizeof(intel_dp->train_set)); - intel_dp_set_signal_levels(intel_dp, crtc_state); + intel_dp_set_signal_levels(intel_dp, crtc_state, dp_phy); return intel_dp_set_link_train(intel_dp, crtc_state, dp_phy, dp_train_pat); } @@ -355,7 +370,7 @@ intel_dp_update_link_train(struct intel_dp *intel_dp, DP_TRAINING_LANE0_SET_PHY_REPEATER(dp_phy); int ret; - intel_dp_set_signal_levels(intel_dp, crtc_state); + intel_dp_set_signal_levels(intel_dp, crtc_state, dp_phy); ret = drm_dp_dpcd_write(&intel_dp->aux, reg, intel_dp->train_set, crtc_state->lane_count); @@ -413,7 +428,7 @@ intel_dp_prepare_link_train(struct intel_dp *intel_dp, drm_dp_dpcd_write(&intel_dp->aux, DP_LINK_RATE_SET, &rate_select, 1); - link_config[0] = 0; + link_config[0] = crtc_state->vrr.enable ? 
DP_MSA_TIMING_PAR_IGNORE_EN : 0; link_config[1] = DP_SET_ANSI_8B10B; drm_dp_dpcd_write(&intel_dp->aux, DP_DOWNSPREAD_CTRL, link_config, 2); @@ -676,9 +691,9 @@ static bool intel_dp_disable_dpcd_training_pattern(struct intel_dp *intel_dp, * @intel_dp: DP struct * @crtc_state: state for CRTC attached to the encoder * - * Stop the link training of the @intel_dp port, disabling the test pattern - * symbol generation on the port and disabling the training pattern in - * the sink's DPCD. + * Stop the link training of the @intel_dp port, disabling the training + * pattern in the sink's DPCD, and disabling the test pattern symbol + * generation on the port. * * What symbols are output on the port after this point is * platform specific: On DDI/VLV/CHV platforms it will be the idle pattern @@ -692,10 +707,9 @@ void intel_dp_stop_link_train(struct intel_dp *intel_dp, { intel_dp->link_trained = true; - intel_dp_program_link_training_pattern(intel_dp, - crtc_state, - DP_TRAINING_PATTERN_DISABLE); intel_dp_disable_dpcd_training_pattern(intel_dp, DP_PHY_DPRX); + intel_dp_program_link_training_pattern(intel_dp, crtc_state, + DP_TRAINING_PATTERN_DISABLE); } static bool diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.h b/drivers/gpu/drm/i915/display/intel_dp_link_training.h index 86905aa24db7..6a1f76bd8c75 100644 --- a/drivers/gpu/drm/i915/display/intel_dp_link_training.h +++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.h @@ -17,6 +17,9 @@ void intel_dp_get_adjust_train(struct intel_dp *intel_dp, const struct intel_crtc_state *crtc_state, enum drm_dp_phy dp_phy, const u8 link_status[DP_LINK_STATUS_SIZE]); +void intel_dp_set_signal_levels(struct intel_dp *intel_dp, + const struct intel_crtc_state *crtc_state, + enum drm_dp_phy dp_phy); void intel_dp_start_link_train(struct intel_dp *intel_dp, const struct intel_crtc_state *crtc_state); void intel_dp_stop_link_train(struct intel_dp *intel_dp, diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c index 27f04aed8764..b4621ed0127e 100644 --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c @@ -53,8 +53,7 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder, struct drm_i915_private *i915 = to_i915(connector->base.dev); const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode; - bool constant_n = drm_dp_has_quirk(&intel_dp->desc, 0, - DP_DPCD_QUIRK_CONSTANT_N); + bool constant_n = drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_CONSTANT_N); int bpp, slots = -EINVAL; crtc_state->lane_count = limits->max_lane_count; @@ -69,7 +68,9 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder, slots = drm_dp_atomic_find_vcpi_slots(state, &intel_dp->mst_mgr, connector->port, - crtc_state->pbn, 0); + crtc_state->pbn, + drm_dp_get_vc_payload_bw(crtc_state->port_clock, + crtc_state->lane_count)); if (slots == -EDEADLK) return slots; if (slots >= 0) @@ -569,7 +570,7 @@ static void intel_mst_enable_dp(struct intel_atomic_state *state, if (conn_state->content_protection == DRM_MODE_CONTENT_PROTECTION_DESIRED) intel_hdcp_enable(to_intel_connector(conn_state->connector), - pipe_config->cpu_transcoder, + pipe_config, (u8)conn_state->hdcp_content_type); } @@ -829,12 +830,11 @@ static struct drm_connector *intel_dp_add_mst_connector(struct drm_dp_mst_topolo intel_attach_force_audio_property(connector); intel_attach_broadcast_rgb_property(connector); - - /* TODO: Figure out how to make HDCP work on 
GEN12+ */ - if (INTEL_GEN(dev_priv) < 12) { + if (INTEL_GEN(dev_priv) <= 12) { ret = intel_dp_init_hdcp(dig_port, intel_connector); if (ret) - DRM_DEBUG_KMS("HDCP init failed, skipping.\n"); + drm_dbg_kms(&dev_priv->drm, "[%s:%d] HDCP MST init failed, skipping.\n", + connector->name, connector->base.id); } /* diff --git a/drivers/gpu/drm/i915/display/intel_dpll.c b/drivers/gpu/drm/i915/display/intel_dpll.c new file mode 100644 index 000000000000..7ba7f315aaee --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_dpll.c @@ -0,0 +1,1363 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2020 Intel Corporation + */ +#include <linux/kernel.h> +#include "intel_display_types.h" +#include "intel_display.h" +#include "intel_dpll.h" +#include "intel_lvds.h" +#include "intel_panel.h" + +struct intel_limit { + struct { + int min, max; + } dot, vco, n, m, m1, m2, p, p1; + + struct { + int dot_limit; + int p2_slow, p2_fast; + } p2; +}; +static const struct intel_limit intel_limits_i8xx_dac = { + .dot = { .min = 25000, .max = 350000 }, + .vco = { .min = 908000, .max = 1512000 }, + .n = { .min = 2, .max = 16 }, + .m = { .min = 96, .max = 140 }, + .m1 = { .min = 18, .max = 26 }, + .m2 = { .min = 6, .max = 16 }, + .p = { .min = 4, .max = 128 }, + .p1 = { .min = 2, .max = 33 }, + .p2 = { .dot_limit = 165000, + .p2_slow = 4, .p2_fast = 2 }, +}; + +static const struct intel_limit intel_limits_i8xx_dvo = { + .dot = { .min = 25000, .max = 350000 }, + .vco = { .min = 908000, .max = 1512000 }, + .n = { .min = 2, .max = 16 }, + .m = { .min = 96, .max = 140 }, + .m1 = { .min = 18, .max = 26 }, + .m2 = { .min = 6, .max = 16 }, + .p = { .min = 4, .max = 128 }, + .p1 = { .min = 2, .max = 33 }, + .p2 = { .dot_limit = 165000, + .p2_slow = 4, .p2_fast = 4 }, +}; + +static const struct intel_limit intel_limits_i8xx_lvds = { + .dot = { .min = 25000, .max = 350000 }, + .vco = { .min = 908000, .max = 1512000 }, + .n = { .min = 2, .max = 16 }, + .m = { .min = 96, .max = 140 }, + .m1 = { .min = 18, .max = 26 }, + .m2 = { .min = 6, .max = 16 }, + .p = { .min = 4, .max = 128 }, + .p1 = { .min = 1, .max = 6 }, + .p2 = { .dot_limit = 165000, + .p2_slow = 14, .p2_fast = 7 }, +}; + +static const struct intel_limit intel_limits_i9xx_sdvo = { + .dot = { .min = 20000, .max = 400000 }, + .vco = { .min = 1400000, .max = 2800000 }, + .n = { .min = 1, .max = 6 }, + .m = { .min = 70, .max = 120 }, + .m1 = { .min = 8, .max = 18 }, + .m2 = { .min = 3, .max = 7 }, + .p = { .min = 5, .max = 80 }, + .p1 = { .min = 1, .max = 8 }, + .p2 = { .dot_limit = 200000, + .p2_slow = 10, .p2_fast = 5 }, +}; + +static const struct intel_limit intel_limits_i9xx_lvds = { + .dot = { .min = 20000, .max = 400000 }, + .vco = { .min = 1400000, .max = 2800000 }, + .n = { .min = 1, .max = 6 }, + .m = { .min = 70, .max = 120 }, + .m1 = { .min = 8, .max = 18 }, + .m2 = { .min = 3, .max = 7 }, + .p = { .min = 7, .max = 98 }, + .p1 = { .min = 1, .max = 8 }, + .p2 = { .dot_limit = 112000, + .p2_slow = 14, .p2_fast = 7 }, +}; + + +static const struct intel_limit intel_limits_g4x_sdvo = { + .dot = { .min = 25000, .max = 270000 }, + .vco = { .min = 1750000, .max = 3500000}, + .n = { .min = 1, .max = 4 }, + .m = { .min = 104, .max = 138 }, + .m1 = { .min = 17, .max = 23 }, + .m2 = { .min = 5, .max = 11 }, + .p = { .min = 10, .max = 30 }, + .p1 = { .min = 1, .max = 3}, + .p2 = { .dot_limit = 270000, + .p2_slow = 10, + .p2_fast = 10 + }, +}; + +static const struct intel_limit intel_limits_g4x_hdmi = { + .dot = { .min = 22000, .max = 400000 }, + .vco = { .min = 1750000, 
.max = 3500000}, + .n = { .min = 1, .max = 4 }, + .m = { .min = 104, .max = 138 }, + .m1 = { .min = 16, .max = 23 }, + .m2 = { .min = 5, .max = 11 }, + .p = { .min = 5, .max = 80 }, + .p1 = { .min = 1, .max = 8}, + .p2 = { .dot_limit = 165000, + .p2_slow = 10, .p2_fast = 5 }, +}; + +static const struct intel_limit intel_limits_g4x_single_channel_lvds = { + .dot = { .min = 20000, .max = 115000 }, + .vco = { .min = 1750000, .max = 3500000 }, + .n = { .min = 1, .max = 3 }, + .m = { .min = 104, .max = 138 }, + .m1 = { .min = 17, .max = 23 }, + .m2 = { .min = 5, .max = 11 }, + .p = { .min = 28, .max = 112 }, + .p1 = { .min = 2, .max = 8 }, + .p2 = { .dot_limit = 0, + .p2_slow = 14, .p2_fast = 14 + }, +}; + +static const struct intel_limit intel_limits_g4x_dual_channel_lvds = { + .dot = { .min = 80000, .max = 224000 }, + .vco = { .min = 1750000, .max = 3500000 }, + .n = { .min = 1, .max = 3 }, + .m = { .min = 104, .max = 138 }, + .m1 = { .min = 17, .max = 23 }, + .m2 = { .min = 5, .max = 11 }, + .p = { .min = 14, .max = 42 }, + .p1 = { .min = 2, .max = 6 }, + .p2 = { .dot_limit = 0, + .p2_slow = 7, .p2_fast = 7 + }, +}; + +static const struct intel_limit pnv_limits_sdvo = { + .dot = { .min = 20000, .max = 400000}, + .vco = { .min = 1700000, .max = 3500000 }, + /* Pineview's Ncounter is a ring counter */ + .n = { .min = 3, .max = 6 }, + .m = { .min = 2, .max = 256 }, + /* Pineview only has one combined m divider, which we treat as m2. */ + .m1 = { .min = 0, .max = 0 }, + .m2 = { .min = 0, .max = 254 }, + .p = { .min = 5, .max = 80 }, + .p1 = { .min = 1, .max = 8 }, + .p2 = { .dot_limit = 200000, + .p2_slow = 10, .p2_fast = 5 }, +}; + +static const struct intel_limit pnv_limits_lvds = { + .dot = { .min = 20000, .max = 400000 }, + .vco = { .min = 1700000, .max = 3500000 }, + .n = { .min = 3, .max = 6 }, + .m = { .min = 2, .max = 256 }, + .m1 = { .min = 0, .max = 0 }, + .m2 = { .min = 0, .max = 254 }, + .p = { .min = 7, .max = 112 }, + .p1 = { .min = 1, .max = 8 }, + .p2 = { .dot_limit = 112000, + .p2_slow = 14, .p2_fast = 14 }, +}; + +/* Ironlake / Sandybridge + * + * We calculate clock using (register_value + 2) for N/M1/M2, so here + * the range value for them is (actual_value - 2). + */ +static const struct intel_limit ilk_limits_dac = { + .dot = { .min = 25000, .max = 350000 }, + .vco = { .min = 1760000, .max = 3510000 }, + .n = { .min = 1, .max = 5 }, + .m = { .min = 79, .max = 127 }, + .m1 = { .min = 12, .max = 22 }, + .m2 = { .min = 5, .max = 9 }, + .p = { .min = 5, .max = 80 }, + .p1 = { .min = 1, .max = 8 }, + .p2 = { .dot_limit = 225000, + .p2_slow = 10, .p2_fast = 5 }, +}; + +static const struct intel_limit ilk_limits_single_lvds = { + .dot = { .min = 25000, .max = 350000 }, + .vco = { .min = 1760000, .max = 3510000 }, + .n = { .min = 1, .max = 3 }, + .m = { .min = 79, .max = 118 }, + .m1 = { .min = 12, .max = 22 }, + .m2 = { .min = 5, .max = 9 }, + .p = { .min = 28, .max = 112 }, + .p1 = { .min = 2, .max = 8 }, + .p2 = { .dot_limit = 225000, + .p2_slow = 14, .p2_fast = 14 }, +}; + +static const struct intel_limit ilk_limits_dual_lvds = { + .dot = { .min = 25000, .max = 350000 }, + .vco = { .min = 1760000, .max = 3510000 }, + .n = { .min = 1, .max = 3 }, + .m = { .min = 79, .max = 127 }, + .m1 = { .min = 12, .max = 22 }, + .m2 = { .min = 5, .max = 9 }, + .p = { .min = 14, .max = 56 }, + .p1 = { .min = 2, .max = 8 }, + .p2 = { .dot_limit = 225000, + .p2_slow = 7, .p2_fast = 7 }, +}; + +/* LVDS 100mhz refclk limits. 
*/ +static const struct intel_limit ilk_limits_single_lvds_100m = { + .dot = { .min = 25000, .max = 350000 }, + .vco = { .min = 1760000, .max = 3510000 }, + .n = { .min = 1, .max = 2 }, + .m = { .min = 79, .max = 126 }, + .m1 = { .min = 12, .max = 22 }, + .m2 = { .min = 5, .max = 9 }, + .p = { .min = 28, .max = 112 }, + .p1 = { .min = 2, .max = 8 }, + .p2 = { .dot_limit = 225000, + .p2_slow = 14, .p2_fast = 14 }, +}; + +static const struct intel_limit ilk_limits_dual_lvds_100m = { + .dot = { .min = 25000, .max = 350000 }, + .vco = { .min = 1760000, .max = 3510000 }, + .n = { .min = 1, .max = 3 }, + .m = { .min = 79, .max = 126 }, + .m1 = { .min = 12, .max = 22 }, + .m2 = { .min = 5, .max = 9 }, + .p = { .min = 14, .max = 42 }, + .p1 = { .min = 2, .max = 6 }, + .p2 = { .dot_limit = 225000, + .p2_slow = 7, .p2_fast = 7 }, +}; + +static const struct intel_limit intel_limits_vlv = { + /* + * These are the data rate limits (measured in fast clocks) + * since those are the strictest limits we have. The fast + * clock and actual rate limits are more relaxed, so checking + * them would make no difference. + */ + .dot = { .min = 25000 * 5, .max = 270000 * 5 }, + .vco = { .min = 4000000, .max = 6000000 }, + .n = { .min = 1, .max = 7 }, + .m1 = { .min = 2, .max = 3 }, + .m2 = { .min = 11, .max = 156 }, + .p1 = { .min = 2, .max = 3 }, + .p2 = { .p2_slow = 2, .p2_fast = 20 }, /* slow=min, fast=max */ +}; + +static const struct intel_limit intel_limits_chv = { + /* + * These are the data rate limits (measured in fast clocks) + * since those are the strictest limits we have. The fast + * clock and actual rate limits are more relaxed, so checking + * them would make no difference. + */ + .dot = { .min = 25000 * 5, .max = 540000 * 5}, + .vco = { .min = 4800000, .max = 6480000 }, + .n = { .min = 1, .max = 1 }, + .m1 = { .min = 2, .max = 2 }, + .m2 = { .min = 24 << 22, .max = 175 << 22 }, + .p1 = { .min = 2, .max = 4 }, + .p2 = { .p2_slow = 1, .p2_fast = 14 }, +}; + +static const struct intel_limit intel_limits_bxt = { + /* FIXME: find real dot limits */ + .dot = { .min = 0, .max = INT_MAX }, + .vco = { .min = 4800000, .max = 6700000 }, + .n = { .min = 1, .max = 1 }, + .m1 = { .min = 2, .max = 2 }, + /* FIXME: find real m2 limits */ + .m2 = { .min = 2 << 22, .max = 255 << 22 }, + .p1 = { .min = 2, .max = 4 }, + .p2 = { .p2_slow = 1, .p2_fast = 20 }, +}; + +/* + * Platform specific helpers to calculate the port PLL loopback- (clock.m), + * and post-divider (clock.p) values, pre- (clock.vco) and post-divided fast + * (clock.dot) clock rates. This fast dot clock is fed to the port's IO logic. + * The helpers' return value is the rate of the clock that is fed to the + * display engine's pipe which can be the above fast dot clock rate or a + * divided-down version of it. 
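 */

A worked example (numbers chosen for this note, not taken from the patch) of the i9xx clock equation implemented below, with refclk = 96000 kHz and the divisors m1 = 10, m2 = 5, n = 2, p1 = 2, p2 = 10:

        struct dpll clock = { .m1 = 10, .m2 = 5, .n = 2, .p1 = 2, .p2 = 10 };
        int dot = i9xx_calc_dpll_params(96000, &clock);
        /* m = 5 * (10 + 2) + (5 + 2) = 67
         * vco = 96000 * 67 / (2 + 2) = 1608000 kHz
         * dot = 1608000 / (2 * 10) = 80400 kHz, i.e. roughly 80.4 MHz */

/*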
+ */ +/* m1 is reserved as 0 in Pineview, n is a ring counter */ +int pnv_calc_dpll_params(int refclk, struct dpll *clock) +{ + clock->m = clock->m2 + 2; + clock->p = clock->p1 * clock->p2; + if (WARN_ON(clock->n == 0 || clock->p == 0)) + return 0; + clock->vco = DIV_ROUND_CLOSEST(refclk * clock->m, clock->n); + clock->dot = DIV_ROUND_CLOSEST(clock->vco, clock->p); + + return clock->dot; +} + +static u32 i9xx_dpll_compute_m(struct dpll *dpll) +{ + return 5 * (dpll->m1 + 2) + (dpll->m2 + 2); +} + +int i9xx_calc_dpll_params(int refclk, struct dpll *clock) +{ + clock->m = i9xx_dpll_compute_m(clock); + clock->p = clock->p1 * clock->p2; + if (WARN_ON(clock->n + 2 == 0 || clock->p == 0)) + return 0; + clock->vco = DIV_ROUND_CLOSEST(refclk * clock->m, clock->n + 2); + clock->dot = DIV_ROUND_CLOSEST(clock->vco, clock->p); + + return clock->dot; +} + +int vlv_calc_dpll_params(int refclk, struct dpll *clock) +{ + clock->m = clock->m1 * clock->m2; + clock->p = clock->p1 * clock->p2; + if (WARN_ON(clock->n == 0 || clock->p == 0)) + return 0; + clock->vco = DIV_ROUND_CLOSEST(refclk * clock->m, clock->n); + clock->dot = DIV_ROUND_CLOSEST(clock->vco, clock->p); + + return clock->dot / 5; +} + +int chv_calc_dpll_params(int refclk, struct dpll *clock) +{ + clock->m = clock->m1 * clock->m2; + clock->p = clock->p1 * clock->p2; + if (WARN_ON(clock->n == 0 || clock->p == 0)) + return 0; + clock->vco = DIV_ROUND_CLOSEST_ULL(mul_u32_u32(refclk, clock->m), + clock->n << 22); + clock->dot = DIV_ROUND_CLOSEST(clock->vco, clock->p); + + return clock->dot / 5; +} + +/* + * Returns whether the given set of divisors are valid for a given refclk with + * the given connectors. + */ +static bool intel_pll_is_valid(struct drm_i915_private *dev_priv, + const struct intel_limit *limit, + const struct dpll *clock) +{ + if (clock->n < limit->n.min || limit->n.max < clock->n) + return false; + if (clock->p1 < limit->p1.min || limit->p1.max < clock->p1) + return false; + if (clock->m2 < limit->m2.min || limit->m2.max < clock->m2) + return false; + if (clock->m1 < limit->m1.min || limit->m1.max < clock->m1) + return false; + + if (!IS_PINEVIEW(dev_priv) && !IS_VALLEYVIEW(dev_priv) && + !IS_CHERRYVIEW(dev_priv) && !IS_GEN9_LP(dev_priv)) + if (clock->m1 <= clock->m2) + return false; + + if (!IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv) && + !IS_GEN9_LP(dev_priv)) { + if (clock->p < limit->p.min || limit->p.max < clock->p) + return false; + if (clock->m < limit->m.min || limit->m.max < clock->m) + return false; + } + + if (clock->vco < limit->vco.min || limit->vco.max < clock->vco) + return false; + /* XXX: We may need to be checking "Dot clock" depending on the multiplier, + * connector, etc., rather than just a single range. + */ + if (clock->dot < limit->dot.min || limit->dot.max < clock->dot) + return false; + + return true; +} + +static int +i9xx_select_p2_div(const struct intel_limit *limit, + const struct intel_crtc_state *crtc_state, + int target) +{ + struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev); + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { + /* + * For LVDS just rely on its current settings for dual-channel. + * We haven't figured out how to reliably set up different + * single/dual channel state, if we even can. 
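 */

For non-LVDS outputs the choice below reduces to a threshold test against the limit table's dot_limit. A hedged sketch (hypothetical helper, mirroring the else-branch that follows):

        static int example_select_p2(const struct intel_limit *limit, int target)
        {
                /* slow divider below the dot-clock threshold, fast at or above it */
                return target < limit->p2.dot_limit ? limit->p2.p2_slow
                                                    : limit->p2.p2_fast;
        }

With intel_limits_g4x_hdmi above (dot_limit = 165000, p2_slow = 10, p2_fast = 5), a 148500 kHz target selects p2 = 10 and a 297000 kHz target selects p2 = 5.

/*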
+ */ + if (intel_is_dual_link_lvds(dev_priv)) + return limit->p2.p2_fast; + else + return limit->p2.p2_slow; + } else { + if (target < limit->p2.dot_limit) + return limit->p2.p2_slow; + else + return limit->p2.p2_fast; + } +} + +/* + * Returns a set of divisors for the desired target clock with the given + * refclk, or FALSE. The returned values represent the clock equation: + * refclk * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2. + * + * Target and reference clocks are specified in kHz. + * + * If match_clock is provided, then best_clock P divider must match the P + * divider from @match_clock used for LVDS downclocking. + */ +static bool +i9xx_find_best_dpll(const struct intel_limit *limit, + struct intel_crtc_state *crtc_state, + int target, int refclk, struct dpll *match_clock, + struct dpll *best_clock) +{ + struct drm_device *dev = crtc_state->uapi.crtc->dev; + struct dpll clock; + int err = target; + + memset(best_clock, 0, sizeof(*best_clock)); + + clock.p2 = i9xx_select_p2_div(limit, crtc_state, target); + + for (clock.m1 = limit->m1.min; clock.m1 <= limit->m1.max; + clock.m1++) { + for (clock.m2 = limit->m2.min; + clock.m2 <= limit->m2.max; clock.m2++) { + if (clock.m2 >= clock.m1) + break; + for (clock.n = limit->n.min; + clock.n <= limit->n.max; clock.n++) { + for (clock.p1 = limit->p1.min; + clock.p1 <= limit->p1.max; clock.p1++) { + int this_err; + + i9xx_calc_dpll_params(refclk, &clock); + if (!intel_pll_is_valid(to_i915(dev), + limit, + &clock)) + continue; + if (match_clock && + clock.p != match_clock->p) + continue; + + this_err = abs(clock.dot - target); + if (this_err < err) { + *best_clock = clock; + err = this_err; + } + } + } + } + } + + return (err != target); +} + +/* + * Returns a set of divisors for the desired target clock with the given + * refclk, or FALSE. The returned values represent the clock equation: + * refclk * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2. + * + * Target and reference clocks are specified in kHz. + * + * If match_clock is provided, then best_clock P divider must match the P + * divider from @match_clock used for LVDS downclocking. + */ +static bool +pnv_find_best_dpll(const struct intel_limit *limit, + struct intel_crtc_state *crtc_state, + int target, int refclk, struct dpll *match_clock, + struct dpll *best_clock) +{ + struct drm_device *dev = crtc_state->uapi.crtc->dev; + struct dpll clock; + int err = target; + + memset(best_clock, 0, sizeof(*best_clock)); + + clock.p2 = i9xx_select_p2_div(limit, crtc_state, target); + + for (clock.m1 = limit->m1.min; clock.m1 <= limit->m1.max; + clock.m1++) { + for (clock.m2 = limit->m2.min; + clock.m2 <= limit->m2.max; clock.m2++) { + for (clock.n = limit->n.min; + clock.n <= limit->n.max; clock.n++) { + for (clock.p1 = limit->p1.min; + clock.p1 <= limit->p1.max; clock.p1++) { + int this_err; + + pnv_calc_dpll_params(refclk, &clock); + if (!intel_pll_is_valid(to_i915(dev), + limit, + &clock)) + continue; + if (match_clock && + clock.p != match_clock->p) + continue; + + this_err = abs(clock.dot - target); + if (this_err < err) { + *best_clock = clock; + err = this_err; + } + } + } + } + } + + return (err != target); +} + +/* + * Returns a set of divisors for the desired target clock with the given + * refclk, or FALSE. The returned values represent the clock equation: + * refclk * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2. + * + * Target and reference clocks are specified in kHz. 
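 */

A hedged usage sketch (hypothetical call site; the real callers are the *_crtc_compute_clock() helpers later in this file): the find_best_dpll helpers return true and fill *best_clock when some divisor combination lands close enough to the target.

        struct dpll best;

        if (!g4x_find_best_dpll(&intel_limits_g4x_hdmi, crtc_state,
                                148500 /* kHz */, 96000 /* kHz */,
                                NULL, &best))
                return -EINVAL; /* no usable divisors for this mode */

/*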
+ * + * If match_clock is provided, then best_clock P divider must match the P + * divider from @match_clock used for LVDS downclocking. + */ +static bool +g4x_find_best_dpll(const struct intel_limit *limit, + struct intel_crtc_state *crtc_state, + int target, int refclk, struct dpll *match_clock, + struct dpll *best_clock) +{ + struct drm_device *dev = crtc_state->uapi.crtc->dev; + struct dpll clock; + int max_n; + bool found = false; + /* approximately equals target * 0.00585 */ + int err_most = (target >> 8) + (target >> 9); + + memset(best_clock, 0, sizeof(*best_clock)); + + clock.p2 = i9xx_select_p2_div(limit, crtc_state, target); + + max_n = limit->n.max; + /* based on hardware requirement, prefer smaller n for precision */ + for (clock.n = limit->n.min; clock.n <= max_n; clock.n++) { + /* based on hardware requirement, prefer larger m1, m2 */ + for (clock.m1 = limit->m1.max; + clock.m1 >= limit->m1.min; clock.m1--) { + for (clock.m2 = limit->m2.max; + clock.m2 >= limit->m2.min; clock.m2--) { + for (clock.p1 = limit->p1.max; + clock.p1 >= limit->p1.min; clock.p1--) { + int this_err; + + i9xx_calc_dpll_params(refclk, &clock); + if (!intel_pll_is_valid(to_i915(dev), + limit, + &clock)) + continue; + + this_err = abs(clock.dot - target); + if (this_err < err_most) { + *best_clock = clock; + err_most = this_err; + max_n = clock.n; + found = true; + } + } + } + } + } + return found; +} + +/* + * Check if the calculated PLL configuration is more optimal than the best + * configuration and error found so far. The calculated error is returned + * via @error_ppm. + */ +static bool vlv_PLL_is_optimal(struct drm_device *dev, int target_freq, + const struct dpll *calculated_clock, + const struct dpll *best_clock, + unsigned int best_error_ppm, + unsigned int *error_ppm) +{ + /* + * For CHV ignore the error and consider only the P value. + * Prefer a bigger P value based on HW requirements. + */ + if (IS_CHERRYVIEW(to_i915(dev))) { + *error_ppm = 0; + + return calculated_clock->p > best_clock->p; + } + + if (drm_WARN_ON_ONCE(dev, !target_freq)) + return false; + + *error_ppm = div_u64(1000000ULL * + abs(target_freq - calculated_clock->dot), + target_freq); + /* + * Prefer a better P value over a better (smaller) error if the error + * is small. Ensure this preference for future configurations too by + * setting the error to 0. + */ + if (*error_ppm < 100 && calculated_clock->p > best_clock->p) { + *error_ppm = 0; + + return true; + } + + return *error_ppm + 10 < best_error_ppm; +} + +/* + * Returns a set of divisors for the desired target clock with the given + * refclk, or FALSE. The returned values represent the clock equation: + * refclk * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2. + */ +static bool +vlv_find_best_dpll(const struct intel_limit *limit, + struct intel_crtc_state *crtc_state, + int target, int refclk, struct dpll *match_clock, + struct dpll *best_clock) +{ + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); + struct drm_device *dev = crtc->base.dev; + struct dpll clock; + unsigned int bestppm = 1000000; + /* min update 19.2 MHz */ + int max_n = min(limit->n.max, refclk / 19200); + bool found = false; + + target *= 5; /* fast clock */ + + memset(best_clock, 0, sizeof(*best_clock)); + + /* based on hardware requirement, prefer smaller n for precision */ + for (clock.n = limit->n.min; clock.n <= max_n; clock.n++) { + for (clock.p1 = limit->p1.max; clock.p1 >= limit->p1.min; clock.p1--) { + for (clock.p2 = limit->p2.p2_fast; clock.p2 >= limit->p2.p2_slow; + clock.p2 -= clock.p2 > 10 ? 
2 : 1) { + clock.p = clock.p1 * clock.p2; + /* based on hardware requirement, prefer bigger m1, m2 values */ + for (clock.m1 = limit->m1.min; clock.m1 <= limit->m1.max; clock.m1++) { + unsigned int ppm; + + clock.m2 = DIV_ROUND_CLOSEST(target * clock.p * clock.n, + refclk * clock.m1); + + vlv_calc_dpll_params(refclk, &clock); + + if (!intel_pll_is_valid(to_i915(dev), + limit, + &clock)) + continue; + + if (!vlv_PLL_is_optimal(dev, target, + &clock, + best_clock, + bestppm, &ppm)) + continue; + + *best_clock = clock; + bestppm = ppm; + found = true; + } + } + } + } + + return found; +} + +/* + * Returns a set of divisors for the desired target clock with the given + * refclk, or FALSE. The returned values represent the clock equation: + * refclk * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2. + */ +static bool +chv_find_best_dpll(const struct intel_limit *limit, + struct intel_crtc_state *crtc_state, + int target, int refclk, struct dpll *match_clock, + struct dpll *best_clock) +{ + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); + struct drm_device *dev = crtc->base.dev; + unsigned int best_error_ppm; + struct dpll clock; + u64 m2; + int found = false; + + memset(best_clock, 0, sizeof(*best_clock)); + best_error_ppm = 1000000; + + /* + * Based on the hardware doc, n is always set to 1 and m1 is always + * set to 2. If we need to support a 200 MHz refclk, we will have to + * revisit this because n may not be 1 anymore. + */ + clock.n = 1; + clock.m1 = 2; + target *= 5; /* fast clock */ + + for (clock.p1 = limit->p1.max; clock.p1 >= limit->p1.min; clock.p1--) { + for (clock.p2 = limit->p2.p2_fast; + clock.p2 >= limit->p2.p2_slow; + clock.p2 -= clock.p2 > 10 ? 2 : 1) { + unsigned int error_ppm; + + clock.p = clock.p1 * clock.p2; + + m2 = DIV_ROUND_CLOSEST_ULL(mul_u32_u32(target, clock.p * clock.n) << 22, + refclk * clock.m1); + + if (m2 > INT_MAX/clock.m1) + continue; + + clock.m2 = m2; + + chv_calc_dpll_params(refclk, &clock); + + if (!intel_pll_is_valid(to_i915(dev), limit, &clock)) + continue; + + if (!vlv_PLL_is_optimal(dev, target, &clock, best_clock, + best_error_ppm, &error_ppm)) + continue; + + *best_clock = clock; + best_error_ppm = error_ppm; + found = true; + } + } + + return found; +} + +bool bxt_find_best_dpll(struct intel_crtc_state *crtc_state, + struct dpll *best_clock) +{ + int refclk = 100000; + const struct intel_limit *limit = &intel_limits_bxt; + + return chv_find_best_dpll(limit, crtc_state, + crtc_state->port_clock, refclk, + NULL, best_clock); +} + +static u32 pnv_dpll_compute_fp(struct dpll *dpll) +{ + return (1 << dpll->n) << 16 | dpll->m2; +} + +static void i9xx_update_pll_dividers(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state, + struct dpll *reduced_clock) +{ + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); + u32 fp, fp2 = 0; + + if (IS_PINEVIEW(dev_priv)) { + fp = pnv_dpll_compute_fp(&crtc_state->dpll); + if (reduced_clock) + fp2 = pnv_dpll_compute_fp(reduced_clock); + } else { + fp = i9xx_dpll_compute_fp(&crtc_state->dpll); + if (reduced_clock) + fp2 = i9xx_dpll_compute_fp(reduced_clock); + } + + crtc_state->dpll_hw_state.fp0 = fp; + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS) && + reduced_clock) { + crtc_state->dpll_hw_state.fp1 = fp2; + } else { + crtc_state->dpll_hw_state.fp1 = fp; + } +} + +static void i9xx_compute_dpll(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state, + struct dpll *reduced_clock) +{ + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); + u32 dpll; + struct dpll *clock = 
&crtc_state->dpll; + + i9xx_update_pll_dividers(crtc, crtc_state, reduced_clock); + + dpll = DPLL_VGA_MODE_DIS; + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) + dpll |= DPLLB_MODE_LVDS; + else + dpll |= DPLLB_MODE_DAC_SERIAL; + + if (IS_I945G(dev_priv) || IS_I945GM(dev_priv) || + IS_G33(dev_priv) || IS_PINEVIEW(dev_priv)) { + dpll |= (crtc_state->pixel_multiplier - 1) + << SDVO_MULTIPLIER_SHIFT_HIRES; + } + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_SDVO) || + intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) + dpll |= DPLL_SDVO_HIGH_SPEED; + + if (intel_crtc_has_dp_encoder(crtc_state)) + dpll |= DPLL_SDVO_HIGH_SPEED; + + /* compute bitmask from p1 value */ + if (IS_PINEVIEW(dev_priv)) + dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT_PINEVIEW; + else { + dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT; + if (IS_G4X(dev_priv) && reduced_clock) + dpll |= (1 << (reduced_clock->p1 - 1)) << DPLL_FPA1_P1_POST_DIV_SHIFT; + } + switch (clock->p2) { + case 5: + dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_5; + break; + case 7: + dpll |= DPLLB_LVDS_P2_CLOCK_DIV_7; + break; + case 10: + dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_10; + break; + case 14: + dpll |= DPLLB_LVDS_P2_CLOCK_DIV_14; + break; + } + if (INTEL_GEN(dev_priv) >= 4) + dpll |= (6 << PLL_LOAD_PULSE_PHASE_SHIFT); + + if (crtc_state->sdvo_tv_clock) + dpll |= PLL_REF_INPUT_TVCLKINBC; + else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS) && + intel_panel_use_ssc(dev_priv)) + dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN; + else + dpll |= PLL_REF_INPUT_DREFCLK; + + dpll |= DPLL_VCO_ENABLE; + crtc_state->dpll_hw_state.dpll = dpll; + + if (INTEL_GEN(dev_priv) >= 4) { + u32 dpll_md = (crtc_state->pixel_multiplier - 1) + << DPLL_MD_UDI_MULTIPLIER_SHIFT; + crtc_state->dpll_hw_state.dpll_md = dpll_md; + } +} + +static void i8xx_compute_dpll(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state, + struct dpll *reduced_clock) +{ + struct drm_device *dev = crtc->base.dev; + struct drm_i915_private *dev_priv = to_i915(dev); + u32 dpll; + struct dpll *clock = &crtc_state->dpll; + + i9xx_update_pll_dividers(crtc, crtc_state, reduced_clock); + + dpll = DPLL_VGA_MODE_DIS; + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { + dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT; + } else { + if (clock->p1 == 2) + dpll |= PLL_P1_DIVIDE_BY_TWO; + else + dpll |= (clock->p1 - 2) << DPLL_FPA01_P1_POST_DIV_SHIFT; + if (clock->p2 == 4) + dpll |= PLL_P2_DIVIDE_BY_4; + } + + /* + * Bspec: + * "[Almador Errata]: For the correct operation of the muxed DVO pins + * (GDEVSELB/I2Cdata, GIRDBY/I2CClk) and (GFRAMEB/DVI_Data, + * GTRDYB/DVI_Clk): Bit 31 (DPLL VCO Enable) and Bit 30 (2X Clock + * Enable) must be set to “1” in both the DPLL A Control Register + * (06014h-06017h) and DPLL B Control Register (06018h-0601Bh)." + * + * For simplicity we simply keep both bits always enabled in + * both DPLLs. The spec says we should disable the DVO 2X clock + * when not needed, but this seems to work fine in practice. 
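 */

A small sketch (illustrative, reusing the register field names from this file) of the "compute bitmask from p1 value" encoding used by i9xx_compute_dpll() and i8xx_compute_dpll(): the P1 post divider is programmed one-hot, so divider N sets bit (N - 1) within the field.

        static u32 example_p1_bits(int p1)
        {
                /* e.g. p1 = 3 becomes (1 << 2) = 0x4 before the field shift */
                return (1 << (p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
        }

/*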
+ */ + if (IS_I830(dev_priv) || + intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DVO)) + dpll |= DPLL_DVO_2X_MODE; + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS) && + intel_panel_use_ssc(dev_priv)) + dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN; + else + dpll |= PLL_REF_INPUT_DREFCLK; + + dpll |= DPLL_VCO_ENABLE; + crtc_state->dpll_hw_state.dpll = dpll; +} + +static int hsw_crtc_compute_clock(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); + struct intel_atomic_state *state = + to_intel_atomic_state(crtc_state->uapi.state); + + if (!intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DSI) || + INTEL_GEN(dev_priv) >= 11) { + struct intel_encoder *encoder = + intel_get_crtc_new_encoder(state, crtc_state); + + if (!intel_reserve_shared_dplls(state, crtc, encoder)) { + drm_dbg_kms(&dev_priv->drm, + "failed to find PLL for pipe %c\n", + pipe_name(crtc->pipe)); + return -EINVAL; + } + } + + return 0; +} + +static bool ilk_needs_fb_cb_tune(struct dpll *dpll, int factor) +{ + return i9xx_dpll_compute_m(dpll) < factor * dpll->n; +} + + +static void ilk_compute_dpll(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state, + struct dpll *reduced_clock) +{ + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); + u32 dpll, fp, fp2; + int factor; + + /* Enable autotuning of the PLL clock (if permissible) */ + factor = 21; + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { + if ((intel_panel_use_ssc(dev_priv) && + dev_priv->vbt.lvds_ssc_freq == 100000) || + (HAS_PCH_IBX(dev_priv) && + intel_is_dual_link_lvds(dev_priv))) + factor = 25; + } else if (crtc_state->sdvo_tv_clock) { + factor = 20; + } + + fp = i9xx_dpll_compute_fp(&crtc_state->dpll); + + if (ilk_needs_fb_cb_tune(&crtc_state->dpll, factor)) + fp |= FP_CB_TUNE; + + if (reduced_clock) { + fp2 = i9xx_dpll_compute_fp(reduced_clock); + + if (reduced_clock->m < factor * reduced_clock->n) + fp2 |= FP_CB_TUNE; + } else { + fp2 = fp; + } + + dpll = 0; + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) + dpll |= DPLLB_MODE_LVDS; + else + dpll |= DPLLB_MODE_DAC_SERIAL; + + dpll |= (crtc_state->pixel_multiplier - 1) + << PLL_REF_SDVO_HDMI_MULTIPLIER_SHIFT; + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_SDVO) || + intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) + dpll |= DPLL_SDVO_HIGH_SPEED; + + if (intel_crtc_has_dp_encoder(crtc_state)) + dpll |= DPLL_SDVO_HIGH_SPEED; + + /* + * The high speed IO clock is only really required for + * SDVO/HDMI/DP, but we also enable it for CRT to make it + * possible to share the DPLL between CRT and HDMI. Enabling + * the clock needlessly does no real harm, except use up a + * bit of power potentially. + * + * We'll limit this to IVB with 3 pipes, since it has only two + * DPLLs and so DPLL sharing is the only way to get three pipes + * driving PCH ports at the same time. On SNB we could do this, + * and potentially avoid enabling the second DPLL, but it's not + * clear if it's a win or loss power-wise. No point in doing + * this on ILK at all since it has a fixed DPLL<->pipe mapping. 
+ */ + if (INTEL_NUM_PIPES(dev_priv) == 3 && + intel_crtc_has_type(crtc_state, INTEL_OUTPUT_ANALOG)) + dpll |= DPLL_SDVO_HIGH_SPEED; + + /* compute bitmask from p1 value */ + dpll |= (1 << (crtc_state->dpll.p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT; + /* also FPA1 */ + dpll |= (1 << (crtc_state->dpll.p1 - 1)) << DPLL_FPA1_P1_POST_DIV_SHIFT; + + switch (crtc_state->dpll.p2) { + case 5: + dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_5; + break; + case 7: + dpll |= DPLLB_LVDS_P2_CLOCK_DIV_7; + break; + case 10: + dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_10; + break; + case 14: + dpll |= DPLLB_LVDS_P2_CLOCK_DIV_14; + break; + } + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS) && + intel_panel_use_ssc(dev_priv)) + dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN; + else + dpll |= PLL_REF_INPUT_DREFCLK; + + dpll |= DPLL_VCO_ENABLE; + + crtc_state->dpll_hw_state.dpll = dpll; + crtc_state->dpll_hw_state.fp0 = fp; + crtc_state->dpll_hw_state.fp1 = fp2; +} + +static int ilk_crtc_compute_clock(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); + struct intel_atomic_state *state = + to_intel_atomic_state(crtc_state->uapi.state); + const struct intel_limit *limit; + int refclk = 120000; + + memset(&crtc_state->dpll_hw_state, 0, + sizeof(crtc_state->dpll_hw_state)); + + /* CPU eDP is the only output that doesn't need a PCH PLL of its own. */ + if (!crtc_state->has_pch_encoder) + return 0; + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { + if (intel_panel_use_ssc(dev_priv)) { + drm_dbg_kms(&dev_priv->drm, + "using SSC reference clock of %d kHz\n", + dev_priv->vbt.lvds_ssc_freq); + refclk = dev_priv->vbt.lvds_ssc_freq; + } + + if (intel_is_dual_link_lvds(dev_priv)) { + if (refclk == 100000) + limit = &ilk_limits_dual_lvds_100m; + else + limit = &ilk_limits_dual_lvds; + } else { + if (refclk == 100000) + limit = &ilk_limits_single_lvds_100m; + else + limit = &ilk_limits_single_lvds; + } + } else { + limit = &ilk_limits_dac; + } + + if (!crtc_state->clock_set && + !g4x_find_best_dpll(limit, crtc_state, crtc_state->port_clock, + refclk, NULL, &crtc_state->dpll)) { + drm_err(&dev_priv->drm, + "Couldn't find PLL settings for mode!\n"); + return -EINVAL; + } + + ilk_compute_dpll(crtc, crtc_state, NULL); + + if (!intel_reserve_shared_dplls(state, crtc, NULL)) { + drm_dbg_kms(&dev_priv->drm, + "failed to find PLL for pipe %c\n", + pipe_name(crtc->pipe)); + return -EINVAL; + } + + return 0; +} + +void vlv_compute_dpll(struct intel_crtc *crtc, + struct intel_crtc_state *pipe_config) +{ + pipe_config->dpll_hw_state.dpll = DPLL_INTEGRATED_REF_CLK_VLV | + DPLL_REF_CLK_ENABLE_VLV | DPLL_VGA_MODE_DIS; + if (crtc->pipe != PIPE_A) + pipe_config->dpll_hw_state.dpll |= DPLL_INTEGRATED_CRI_CLK_VLV; + + /* DPLL not used with DSI, but still need the rest set up */ + if (!intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DSI)) + pipe_config->dpll_hw_state.dpll |= DPLL_VCO_ENABLE | + DPLL_EXT_BUFFER_ENABLE_VLV; + + pipe_config->dpll_hw_state.dpll_md = + (pipe_config->pixel_multiplier - 1) << DPLL_MD_UDI_MULTIPLIER_SHIFT; +} + +void chv_compute_dpll(struct intel_crtc *crtc, + struct intel_crtc_state *pipe_config) +{ + pipe_config->dpll_hw_state.dpll = DPLL_SSC_REF_CLK_CHV | + DPLL_REF_CLK_ENABLE_VLV | DPLL_VGA_MODE_DIS; + if (crtc->pipe != PIPE_A) + pipe_config->dpll_hw_state.dpll |= DPLL_INTEGRATED_CRI_CLK_VLV; + + /* DPLL not used with DSI, but still need the rest set up */ + if (!intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DSI)) + 
pipe_config->dpll_hw_state.dpll |= DPLL_VCO_ENABLE; + + pipe_config->dpll_hw_state.dpll_md = + (pipe_config->pixel_multiplier - 1) << DPLL_MD_UDI_MULTIPLIER_SHIFT; +} + +static int chv_crtc_compute_clock(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state) +{ + int refclk = 100000; + const struct intel_limit *limit = &intel_limits_chv; + struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev); + + memset(&crtc_state->dpll_hw_state, 0, + sizeof(crtc_state->dpll_hw_state)); + + if (!crtc_state->clock_set && + !chv_find_best_dpll(limit, crtc_state, crtc_state->port_clock, + refclk, NULL, &crtc_state->dpll)) { + drm_err(&i915->drm, "Couldn't find PLL settings for mode!\n"); + return -EINVAL; + } + + chv_compute_dpll(crtc, crtc_state); + + return 0; +} + +static int vlv_crtc_compute_clock(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state) +{ + int refclk = 100000; + const struct intel_limit *limit = &intel_limits_vlv; + struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev); + + memset(&crtc_state->dpll_hw_state, 0, + sizeof(crtc_state->dpll_hw_state)); + + if (!crtc_state->clock_set && + !vlv_find_best_dpll(limit, crtc_state, crtc_state->port_clock, + refclk, NULL, &crtc_state->dpll)) { + drm_err(&i915->drm, "Couldn't find PLL settings for mode!\n"); + return -EINVAL; + } + + vlv_compute_dpll(crtc, crtc_state); + + return 0; +} + +static int g4x_crtc_compute_clock(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); + const struct intel_limit *limit; + int refclk = 96000; + + memset(&crtc_state->dpll_hw_state, 0, + sizeof(crtc_state->dpll_hw_state)); + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { + if (intel_panel_use_ssc(dev_priv)) { + refclk = dev_priv->vbt.lvds_ssc_freq; + drm_dbg_kms(&dev_priv->drm, + "using SSC reference clock of %d kHz\n", + refclk); + } + + if (intel_is_dual_link_lvds(dev_priv)) + limit = &intel_limits_g4x_dual_channel_lvds; + else + limit = &intel_limits_g4x_single_channel_lvds; + } else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI) || + intel_crtc_has_type(crtc_state, INTEL_OUTPUT_ANALOG)) { + limit = &intel_limits_g4x_hdmi; + } else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_SDVO)) { + limit = &intel_limits_g4x_sdvo; + } else { + /* The option is for other outputs */ + limit = &intel_limits_i9xx_sdvo; + } + + if (!crtc_state->clock_set && + !g4x_find_best_dpll(limit, crtc_state, crtc_state->port_clock, + refclk, NULL, &crtc_state->dpll)) { + drm_err(&dev_priv->drm, + "Couldn't find PLL settings for mode!\n"); + return -EINVAL; + } + + i9xx_compute_dpll(crtc, crtc_state, NULL); + + return 0; +} + +static int pnv_crtc_compute_clock(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state) +{ + struct drm_device *dev = crtc->base.dev; + struct drm_i915_private *dev_priv = to_i915(dev); + const struct intel_limit *limit; + int refclk = 96000; + + memset(&crtc_state->dpll_hw_state, 0, + sizeof(crtc_state->dpll_hw_state)); + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { + if (intel_panel_use_ssc(dev_priv)) { + refclk = dev_priv->vbt.lvds_ssc_freq; + drm_dbg_kms(&dev_priv->drm, + "using SSC reference clock of %d kHz\n", + refclk); + } + + limit = &pnv_limits_lvds; + } else { + limit = &pnv_limits_sdvo; + } + + if (!crtc_state->clock_set && + !pnv_find_best_dpll(limit, crtc_state, crtc_state->port_clock, + refclk, NULL, &crtc_state->dpll)) { + drm_err(&dev_priv->drm, + "Couldn't find PLL settings for 
mode!\n"); + return -EINVAL; + } + + i9xx_compute_dpll(crtc, crtc_state, NULL); + + return 0; +} + +static int i9xx_crtc_compute_clock(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state) +{ + struct drm_device *dev = crtc->base.dev; + struct drm_i915_private *dev_priv = to_i915(dev); + const struct intel_limit *limit; + int refclk = 96000; + + memset(&crtc_state->dpll_hw_state, 0, + sizeof(crtc_state->dpll_hw_state)); + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { + if (intel_panel_use_ssc(dev_priv)) { + refclk = dev_priv->vbt.lvds_ssc_freq; + drm_dbg_kms(&dev_priv->drm, + "using SSC reference clock of %d kHz\n", + refclk); + } + + limit = &intel_limits_i9xx_lvds; + } else { + limit = &intel_limits_i9xx_sdvo; + } + + if (!crtc_state->clock_set && + !i9xx_find_best_dpll(limit, crtc_state, crtc_state->port_clock, + refclk, NULL, &crtc_state->dpll)) { + drm_err(&dev_priv->drm, + "Couldn't find PLL settings for mode!\n"); + return -EINVAL; + } + + i9xx_compute_dpll(crtc, crtc_state, NULL); + + return 0; +} + +static int i8xx_crtc_compute_clock(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state) +{ + struct drm_device *dev = crtc->base.dev; + struct drm_i915_private *dev_priv = to_i915(dev); + const struct intel_limit *limit; + int refclk = 48000; + + memset(&crtc_state->dpll_hw_state, 0, + sizeof(crtc_state->dpll_hw_state)); + + if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_LVDS)) { + if (intel_panel_use_ssc(dev_priv)) { + refclk = dev_priv->vbt.lvds_ssc_freq; + drm_dbg_kms(&dev_priv->drm, + "using SSC reference clock of %d kHz\n", + refclk); + } + + limit = &intel_limits_i8xx_lvds; + } else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DVO)) { + limit = &intel_limits_i8xx_dvo; + } else { + limit = &intel_limits_i8xx_dac; + } + + if (!crtc_state->clock_set && + !i9xx_find_best_dpll(limit, crtc_state, crtc_state->port_clock, + refclk, NULL, &crtc_state->dpll)) { + drm_err(&dev_priv->drm, + "Couldn't find PLL settings for mode!\n"); + return -EINVAL; + } + + i8xx_compute_dpll(crtc, crtc_state, NULL); + + return 0; +} + +void +intel_dpll_init_clock_hook(struct drm_i915_private *dev_priv) +{ + if (INTEL_GEN(dev_priv) >= 9 || HAS_DDI(dev_priv)) + dev_priv->display.crtc_compute_clock = hsw_crtc_compute_clock; + else if (HAS_PCH_SPLIT(dev_priv)) + dev_priv->display.crtc_compute_clock = ilk_crtc_compute_clock; + else if (IS_CHERRYVIEW(dev_priv)) + dev_priv->display.crtc_compute_clock = chv_crtc_compute_clock; + else if (IS_VALLEYVIEW(dev_priv)) + dev_priv->display.crtc_compute_clock = vlv_crtc_compute_clock; + else if (IS_G4X(dev_priv)) + dev_priv->display.crtc_compute_clock = g4x_crtc_compute_clock; + else if (IS_PINEVIEW(dev_priv)) + dev_priv->display.crtc_compute_clock = pnv_crtc_compute_clock; + else if (!IS_GEN(dev_priv, 2)) + dev_priv->display.crtc_compute_clock = i9xx_crtc_compute_clock; + else + dev_priv->display.crtc_compute_clock = i8xx_crtc_compute_clock; +} diff --git a/drivers/gpu/drm/i915/display/intel_dpll.h b/drivers/gpu/drm/i915/display/intel_dpll.h new file mode 100644 index 000000000000..caf4615092e1 --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_dpll.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2020 Intel Corporation + */ + +#ifndef _INTEL_DPLL_H_ +#define _INTEL_DPLL_H_ + +struct dpll; +struct drm_i915_private; +struct intel_crtc; +struct intel_crtc_state; + +void intel_dpll_init_clock_hook(struct drm_i915_private *dev_priv); +int vlv_calc_dpll_params(int refclk, struct dpll *clock); +int 
pnv_calc_dpll_params(int refclk, struct dpll *clock); +int i9xx_calc_dpll_params(int refclk, struct dpll *clock); +void vlv_compute_dpll(struct intel_crtc *crtc, + struct intel_crtc_state *pipe_config); +void chv_compute_dpll(struct intel_crtc *crtc, + struct intel_crtc_state *pipe_config); + +#endif diff --git a/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c b/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c index 88628764956d..584c14c4cbd0 100644 --- a/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c +++ b/drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c @@ -43,7 +43,7 @@ #define PANEL_PWM_MAX_VALUE 0xFF -static u32 dcs_get_backlight(struct intel_connector *connector) +static u32 dcs_get_backlight(struct intel_connector *connector, enum pipe unused) { struct intel_encoder *encoder = intel_attached_encoder(connector); struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); diff --git a/drivers/gpu/drm/i915/display/intel_fbc.c b/drivers/gpu/drm/i915/display/intel_fbc.c index 33200b5cfad0..5fd4fa4805ef 100644 --- a/drivers/gpu/drm/i915/display/intel_fbc.c +++ b/drivers/gpu/drm/i915/display/intel_fbc.c @@ -676,7 +676,7 @@ static bool intel_fbc_hw_tracking_covers_screen(struct intel_crtc *crtc) } static bool tiling_is_valid(struct drm_i915_private *dev_priv, - uint64_t modifier) + u64 modifier) { switch (modifier) { case DRM_FORMAT_MOD_LINEAR: diff --git a/drivers/gpu/drm/i915/display/intel_fbdev.c b/drivers/gpu/drm/i915/display/intel_fbdev.c index 842c04e63214..84f853f113b9 100644 --- a/drivers/gpu/drm/i915/display/intel_fbdev.c +++ b/drivers/gpu/drm/i915/display/intel_fbdev.c @@ -256,7 +256,7 @@ static int intelfb_create(struct drm_fb_helper *helper, * If the object is stolen however, it will be full of whatever * garbage was left in there. */ - if (vma->obj->stolen && !prealloc) + if (!i915_gem_object_is_shmem(vma->obj) && !prealloc) memset_io(info->screen_base, 0, info->screen_size); /* Use default scratch pixmap (info->pixmap.flags = FB_PIXMAP_SYSTEM) */ @@ -595,7 +595,7 @@ void intel_fbdev_set_suspend(struct drm_device *dev, int state, bool synchronous * full of whatever garbage was left in there. 
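* (the condition below now treats any non-shmem object, not just stolen * memory, as potentially holding stale data)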
*/ if (state == FBINFO_STATE_RUNNING && - intel_fb_obj(&ifbdev->fb->base)->stolen) + !i915_gem_object_is_shmem(intel_fb_obj(&ifbdev->fb->base))) memset_io(info->screen_base, 0, info->screen_size); drm_fb_helper_set_suspend(&ifbdev->helper, state); diff --git a/drivers/gpu/drm/i915/display/intel_fdi.c b/drivers/gpu/drm/i915/display/intel_fdi.c new file mode 100644 index 000000000000..b2eb96ae10a2 --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_fdi.c @@ -0,0 +1,683 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2020 Intel Corporation + */ +#include "intel_atomic.h" +#include "intel_display_types.h" +#include "intel_fdi.h" + +/* units of 100MHz */ +static int pipe_required_fdi_lanes(struct intel_crtc_state *crtc_state) +{ + if (crtc_state->hw.enable && crtc_state->has_pch_encoder) + return crtc_state->fdi_lanes; + + return 0; +} + +static int ilk_check_fdi_lanes(struct drm_device *dev, enum pipe pipe, + struct intel_crtc_state *pipe_config) +{ + struct drm_i915_private *dev_priv = to_i915(dev); + struct drm_atomic_state *state = pipe_config->uapi.state; + struct intel_crtc *other_crtc; + struct intel_crtc_state *other_crtc_state; + + drm_dbg_kms(&dev_priv->drm, + "checking fdi config on pipe %c, lanes %i\n", + pipe_name(pipe), pipe_config->fdi_lanes); + if (pipe_config->fdi_lanes > 4) { + drm_dbg_kms(&dev_priv->drm, + "invalid fdi lane config on pipe %c: %i lanes\n", + pipe_name(pipe), pipe_config->fdi_lanes); + return -EINVAL; + } + + if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) { + if (pipe_config->fdi_lanes > 2) { + drm_dbg_kms(&dev_priv->drm, + "only 2 lanes on haswell, required: %i lanes\n", + pipe_config->fdi_lanes); + return -EINVAL; + } else { + return 0; + } + } + + if (INTEL_NUM_PIPES(dev_priv) == 2) + return 0; + + /* Ivybridge 3 pipe is really complicated */ + switch (pipe) { + case PIPE_A: + return 0; + case PIPE_B: + if (pipe_config->fdi_lanes <= 2) + return 0; + + other_crtc = intel_get_crtc_for_pipe(dev_priv, PIPE_C); + other_crtc_state = + intel_atomic_get_crtc_state(state, other_crtc); + if (IS_ERR(other_crtc_state)) + return PTR_ERR(other_crtc_state); + + if (pipe_required_fdi_lanes(other_crtc_state) > 0) { + drm_dbg_kms(&dev_priv->drm, + "invalid shared fdi lane config on pipe %c: %i lanes\n", + pipe_name(pipe), pipe_config->fdi_lanes); + return -EINVAL; + } + return 0; + case PIPE_C: + if (pipe_config->fdi_lanes > 2) { + drm_dbg_kms(&dev_priv->drm, + "only 2 lanes on pipe %c: required %i lanes\n", + pipe_name(pipe), pipe_config->fdi_lanes); + return -EINVAL; + } + + other_crtc = intel_get_crtc_for_pipe(dev_priv, PIPE_B); + other_crtc_state = + intel_atomic_get_crtc_state(state, other_crtc); + if (IS_ERR(other_crtc_state)) + return PTR_ERR(other_crtc_state); + + if (pipe_required_fdi_lanes(other_crtc_state) > 2) { + drm_dbg_kms(&dev_priv->drm, + "fdi link B uses too many lanes to enable link C\n"); + return -EINVAL; + } + return 0; + default: + BUG(); + } +} + +int ilk_fdi_compute_config(struct intel_crtc *intel_crtc, + struct intel_crtc_state *pipe_config) +{ + struct drm_device *dev = intel_crtc->base.dev; + struct drm_i915_private *i915 = to_i915(dev); + const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode; + int lane, link_bw, fdi_dotclock, ret; + bool needs_recompute = false; + +retry: + /* FDI is a binary signal running at ~2.7GHz, encoding + * each output octet as 10 bits. The actual frequency + * is stored as a divider into a 100MHz clock, and the + * mode pixel clock is stored in units of 1KHz. 
+ * Hence the bw of each lane in terms of the mode signal + * is: + */ + link_bw = intel_fdi_link_freq(i915, pipe_config); + + fdi_dotclock = adjusted_mode->crtc_clock; + + lane = ilk_get_lanes_required(fdi_dotclock, link_bw, + pipe_config->pipe_bpp); + + pipe_config->fdi_lanes = lane; + + intel_link_compute_m_n(pipe_config->pipe_bpp, lane, fdi_dotclock, + link_bw, &pipe_config->fdi_m_n, false, false); + + ret = ilk_check_fdi_lanes(dev, intel_crtc->pipe, pipe_config); + if (ret == -EDEADLK) + return ret; + + if (ret == -EINVAL && pipe_config->pipe_bpp > 6*3) { + pipe_config->pipe_bpp -= 2*3; + drm_dbg_kms(&i915->drm, + "fdi link bw constraint, reducing pipe bpp to %i\n", + pipe_config->pipe_bpp); + needs_recompute = true; + pipe_config->bw_constrained = true; + + goto retry; + } + + if (needs_recompute) + return I915_DISPLAY_CONFIG_RETRY; + + return ret; +} + +void intel_fdi_normal_train(struct intel_crtc *crtc) +{ + struct drm_device *dev = crtc->base.dev; + struct drm_i915_private *dev_priv = to_i915(dev); + enum pipe pipe = crtc->pipe; + i915_reg_t reg; + u32 temp; + + /* enable normal train */ + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + if (IS_IVYBRIDGE(dev_priv)) { + temp &= ~FDI_LINK_TRAIN_NONE_IVB; + temp |= FDI_LINK_TRAIN_NONE_IVB | FDI_TX_ENHANCE_FRAME_ENABLE; + } else { + temp &= ~FDI_LINK_TRAIN_NONE; + temp |= FDI_LINK_TRAIN_NONE | FDI_TX_ENHANCE_FRAME_ENABLE; + } + intel_de_write(dev_priv, reg, temp); + + reg = FDI_RX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + if (HAS_PCH_CPT(dev_priv)) { + temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT; + temp |= FDI_LINK_TRAIN_NORMAL_CPT; + } else { + temp &= ~FDI_LINK_TRAIN_NONE; + temp |= FDI_LINK_TRAIN_NONE; + } + intel_de_write(dev_priv, reg, temp | FDI_RX_ENHANCE_FRAME_ENABLE); + + /* wait one idle pattern time */ + intel_de_posting_read(dev_priv, reg); + udelay(1000); + + /* IVB wants error correction enabled */ + if (IS_IVYBRIDGE(dev_priv)) + intel_de_write(dev_priv, reg, + intel_de_read(dev_priv, reg) | FDI_FS_ERRC_ENABLE | FDI_FE_ERRC_ENABLE); +} + +/* The FDI link training functions for ILK/Ibexpeak. 
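+ * The flow below enables CPU FDI TX and PCH FDI RX with training pattern 1 + * and polls FDI_RX_IIR for bit lock, then switches both sides to pattern 2 + * and polls for symbol lock.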
*/ +static void ilk_fdi_link_train(struct intel_crtc *crtc, + const struct intel_crtc_state *crtc_state) +{ + struct drm_device *dev = crtc->base.dev; + struct drm_i915_private *dev_priv = to_i915(dev); + enum pipe pipe = crtc->pipe; + i915_reg_t reg; + u32 temp, tries; + + /* FDI needs bits from pipe first */ + assert_pipe_enabled(dev_priv, crtc_state->cpu_transcoder); + + /* Train 1: unmask FDI RX Interrupt symbol_lock and bit_lock bit + for train result */ + reg = FDI_RX_IMR(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_RX_SYMBOL_LOCK; + temp &= ~FDI_RX_BIT_LOCK; + intel_de_write(dev_priv, reg, temp); + intel_de_read(dev_priv, reg); + udelay(150); + + /* enable CPU FDI TX and PCH FDI RX */ + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_DP_PORT_WIDTH_MASK; + temp |= FDI_DP_PORT_WIDTH(crtc_state->fdi_lanes); + temp &= ~FDI_LINK_TRAIN_NONE; + temp |= FDI_LINK_TRAIN_PATTERN_1; + intel_de_write(dev_priv, reg, temp | FDI_TX_ENABLE); + + reg = FDI_RX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_LINK_TRAIN_NONE; + temp |= FDI_LINK_TRAIN_PATTERN_1; + intel_de_write(dev_priv, reg, temp | FDI_RX_ENABLE); + + intel_de_posting_read(dev_priv, reg); + udelay(150); + + /* Ironlake workaround, enable clock pointer after FDI enable*/ + intel_de_write(dev_priv, FDI_RX_CHICKEN(pipe), + FDI_RX_PHASE_SYNC_POINTER_OVR); + intel_de_write(dev_priv, FDI_RX_CHICKEN(pipe), + FDI_RX_PHASE_SYNC_POINTER_OVR | FDI_RX_PHASE_SYNC_POINTER_EN); + + reg = FDI_RX_IIR(pipe); + for (tries = 0; tries < 5; tries++) { + temp = intel_de_read(dev_priv, reg); + drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR 0x%x\n", temp); + + if ((temp & FDI_RX_BIT_LOCK)) { + drm_dbg_kms(&dev_priv->drm, "FDI train 1 done.\n"); + intel_de_write(dev_priv, reg, temp | FDI_RX_BIT_LOCK); + break; + } + } + if (tries == 5) + drm_err(&dev_priv->drm, "FDI train 1 fail!\n"); + + /* Train 2 */ + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_LINK_TRAIN_NONE; + temp |= FDI_LINK_TRAIN_PATTERN_2; + intel_de_write(dev_priv, reg, temp); + + reg = FDI_RX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_LINK_TRAIN_NONE; + temp |= FDI_LINK_TRAIN_PATTERN_2; + intel_de_write(dev_priv, reg, temp); + + intel_de_posting_read(dev_priv, reg); + udelay(150); + + reg = FDI_RX_IIR(pipe); + for (tries = 0; tries < 5; tries++) { + temp = intel_de_read(dev_priv, reg); + drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR 0x%x\n", temp); + + if (temp & FDI_RX_SYMBOL_LOCK) { + intel_de_write(dev_priv, reg, + temp | FDI_RX_SYMBOL_LOCK); + drm_dbg_kms(&dev_priv->drm, "FDI train 2 done.\n"); + break; + } + } + if (tries == 5) + drm_err(&dev_priv->drm, "FDI train 2 fail!\n"); + + drm_dbg_kms(&dev_priv->drm, "FDI train done\n"); + +} + +static const int snb_b_fdi_train_param[] = { + FDI_LINK_TRAIN_400MV_0DB_SNB_B, + FDI_LINK_TRAIN_400MV_6DB_SNB_B, + FDI_LINK_TRAIN_600MV_3_5DB_SNB_B, + FDI_LINK_TRAIN_800MV_0DB_SNB_B, +}; + +/* The FDI link training functions for SNB/Cougarpoint. 
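+ * On top of the ILK flow, each training pattern is retried across the + * snb_b_fdi_train_param[] voltage-swing/pre-emphasis levels until the + * corresponding lock bit shows up in FDI_RX_IIR.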
*/ +static void gen6_fdi_link_train(struct intel_crtc *crtc, + const struct intel_crtc_state *crtc_state) +{ + struct drm_device *dev = crtc->base.dev; + struct drm_i915_private *dev_priv = to_i915(dev); + enum pipe pipe = crtc->pipe; + i915_reg_t reg; + u32 temp, i, retry; + + /* Train 1: unmask FDI RX Interrupt symbol_lock and bit_lock bit + for train result */ + reg = FDI_RX_IMR(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_RX_SYMBOL_LOCK; + temp &= ~FDI_RX_BIT_LOCK; + intel_de_write(dev_priv, reg, temp); + + intel_de_posting_read(dev_priv, reg); + udelay(150); + + /* enable CPU FDI TX and PCH FDI RX */ + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_DP_PORT_WIDTH_MASK; + temp |= FDI_DP_PORT_WIDTH(crtc_state->fdi_lanes); + temp &= ~FDI_LINK_TRAIN_NONE; + temp |= FDI_LINK_TRAIN_PATTERN_1; + temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK; + /* SNB-B */ + temp |= FDI_LINK_TRAIN_400MV_0DB_SNB_B; + intel_de_write(dev_priv, reg, temp | FDI_TX_ENABLE); + + intel_de_write(dev_priv, FDI_RX_MISC(pipe), + FDI_RX_TP1_TO_TP2_48 | FDI_RX_FDI_DELAY_90); + + reg = FDI_RX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + if (HAS_PCH_CPT(dev_priv)) { + temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT; + temp |= FDI_LINK_TRAIN_PATTERN_1_CPT; + } else { + temp &= ~FDI_LINK_TRAIN_NONE; + temp |= FDI_LINK_TRAIN_PATTERN_1; + } + intel_de_write(dev_priv, reg, temp | FDI_RX_ENABLE); + + intel_de_posting_read(dev_priv, reg); + udelay(150); + + for (i = 0; i < 4; i++) { + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK; + temp |= snb_b_fdi_train_param[i]; + intel_de_write(dev_priv, reg, temp); + + intel_de_posting_read(dev_priv, reg); + udelay(500); + + for (retry = 0; retry < 5; retry++) { + reg = FDI_RX_IIR(pipe); + temp = intel_de_read(dev_priv, reg); + drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR 0x%x\n", temp); + if (temp & FDI_RX_BIT_LOCK) { + intel_de_write(dev_priv, reg, + temp | FDI_RX_BIT_LOCK); + drm_dbg_kms(&dev_priv->drm, + "FDI train 1 done.\n"); + break; + } + udelay(50); + } + if (retry < 5) + break; + } + if (i == 4) + drm_err(&dev_priv->drm, "FDI train 1 fail!\n"); + + /* Train 2 */ + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_LINK_TRAIN_NONE; + temp |= FDI_LINK_TRAIN_PATTERN_2; + if (IS_GEN(dev_priv, 6)) { + temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK; + /* SNB-B */ + temp |= FDI_LINK_TRAIN_400MV_0DB_SNB_B; + } + intel_de_write(dev_priv, reg, temp); + + reg = FDI_RX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + if (HAS_PCH_CPT(dev_priv)) { + temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT; + temp |= FDI_LINK_TRAIN_PATTERN_2_CPT; + } else { + temp &= ~FDI_LINK_TRAIN_NONE; + temp |= FDI_LINK_TRAIN_PATTERN_2; + } + intel_de_write(dev_priv, reg, temp); + + intel_de_posting_read(dev_priv, reg); + udelay(150); + + for (i = 0; i < 4; i++) { + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK; + temp |= snb_b_fdi_train_param[i]; + intel_de_write(dev_priv, reg, temp); + + intel_de_posting_read(dev_priv, reg); + udelay(500); + + for (retry = 0; retry < 5; retry++) { + reg = FDI_RX_IIR(pipe); + temp = intel_de_read(dev_priv, reg); + drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR 0x%x\n", temp); + if (temp & FDI_RX_SYMBOL_LOCK) { + intel_de_write(dev_priv, reg, + temp | FDI_RX_SYMBOL_LOCK); + drm_dbg_kms(&dev_priv->drm, + "FDI train 2 done.\n"); + break; + } + udelay(50); + } + if (retry < 5) + break; + } + if (i == 4) + drm_err(&dev_priv->drm, "FDI train 2 
fail!\n"); + + drm_dbg_kms(&dev_priv->drm, "FDI train done.\n"); +} + +/* Manual link training for Ivy Bridge A0 parts */ +static void ivb_manual_fdi_link_train(struct intel_crtc *crtc, + const struct intel_crtc_state *crtc_state) +{ + struct drm_device *dev = crtc->base.dev; + struct drm_i915_private *dev_priv = to_i915(dev); + enum pipe pipe = crtc->pipe; + i915_reg_t reg; + u32 temp, i, j; + + /* Train 1: umask FDI RX Interrupt symbol_lock and bit_lock bit + for train result */ + reg = FDI_RX_IMR(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_RX_SYMBOL_LOCK; + temp &= ~FDI_RX_BIT_LOCK; + intel_de_write(dev_priv, reg, temp); + + intel_de_posting_read(dev_priv, reg); + udelay(150); + + drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR before link train 0x%x\n", + intel_de_read(dev_priv, FDI_RX_IIR(pipe))); + + /* Try each vswing and preemphasis setting twice before moving on */ + for (j = 0; j < ARRAY_SIZE(snb_b_fdi_train_param) * 2; j++) { + /* disable first in case we need to retry */ + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~(FDI_LINK_TRAIN_AUTO | FDI_LINK_TRAIN_NONE_IVB); + temp &= ~FDI_TX_ENABLE; + intel_de_write(dev_priv, reg, temp); + + reg = FDI_RX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_LINK_TRAIN_AUTO; + temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT; + temp &= ~FDI_RX_ENABLE; + intel_de_write(dev_priv, reg, temp); + + /* enable CPU FDI TX and PCH FDI RX */ + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_DP_PORT_WIDTH_MASK; + temp |= FDI_DP_PORT_WIDTH(crtc_state->fdi_lanes); + temp |= FDI_LINK_TRAIN_PATTERN_1_IVB; + temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK; + temp |= snb_b_fdi_train_param[j/2]; + temp |= FDI_COMPOSITE_SYNC; + intel_de_write(dev_priv, reg, temp | FDI_TX_ENABLE); + + intel_de_write(dev_priv, FDI_RX_MISC(pipe), + FDI_RX_TP1_TO_TP2_48 | FDI_RX_FDI_DELAY_90); + + reg = FDI_RX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp |= FDI_LINK_TRAIN_PATTERN_1_CPT; + temp |= FDI_COMPOSITE_SYNC; + intel_de_write(dev_priv, reg, temp | FDI_RX_ENABLE); + + intel_de_posting_read(dev_priv, reg); + udelay(1); /* should be 0.5us */ + + for (i = 0; i < 4; i++) { + reg = FDI_RX_IIR(pipe); + temp = intel_de_read(dev_priv, reg); + drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR 0x%x\n", temp); + + if (temp & FDI_RX_BIT_LOCK || + (intel_de_read(dev_priv, reg) & FDI_RX_BIT_LOCK)) { + intel_de_write(dev_priv, reg, + temp | FDI_RX_BIT_LOCK); + drm_dbg_kms(&dev_priv->drm, + "FDI train 1 done, level %i.\n", + i); + break; + } + udelay(1); /* should be 0.5us */ + } + if (i == 4) { + drm_dbg_kms(&dev_priv->drm, + "FDI train 1 fail on vswing %d\n", j / 2); + continue; + } + + /* Train 2 */ + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_LINK_TRAIN_NONE_IVB; + temp |= FDI_LINK_TRAIN_PATTERN_2_IVB; + intel_de_write(dev_priv, reg, temp); + + reg = FDI_RX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT; + temp |= FDI_LINK_TRAIN_PATTERN_2_CPT; + intel_de_write(dev_priv, reg, temp); + + intel_de_posting_read(dev_priv, reg); + udelay(2); /* should be 1.5us */ + + for (i = 0; i < 4; i++) { + reg = FDI_RX_IIR(pipe); + temp = intel_de_read(dev_priv, reg); + drm_dbg_kms(&dev_priv->drm, "FDI_RX_IIR 0x%x\n", temp); + + if (temp & FDI_RX_SYMBOL_LOCK || + (intel_de_read(dev_priv, reg) & FDI_RX_SYMBOL_LOCK)) { + intel_de_write(dev_priv, reg, + temp | FDI_RX_SYMBOL_LOCK); + drm_dbg_kms(&dev_priv->drm, + "FDI train 2 done, level %i.\n", + i); + goto 
train_done; + } + udelay(2); /* should be 1.5us */ + } + if (i == 4) + drm_dbg_kms(&dev_priv->drm, + "FDI train 2 fail on vswing %d\n", j / 2); + } + +train_done: + drm_dbg_kms(&dev_priv->drm, "FDI train done.\n"); +} + +void ilk_fdi_pll_enable(const struct intel_crtc_state *crtc_state) +{ + struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->uapi.crtc); + struct drm_i915_private *dev_priv = to_i915(intel_crtc->base.dev); + enum pipe pipe = intel_crtc->pipe; + i915_reg_t reg; + u32 temp; + + /* enable PCH FDI RX PLL, wait warmup plus DMI latency */ + reg = FDI_RX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~(FDI_DP_PORT_WIDTH_MASK | (0x7 << 16)); + temp |= FDI_DP_PORT_WIDTH(crtc_state->fdi_lanes); + temp |= (intel_de_read(dev_priv, PIPECONF(pipe)) & PIPECONF_BPC_MASK) << 11; + intel_de_write(dev_priv, reg, temp | FDI_RX_PLL_ENABLE); + + intel_de_posting_read(dev_priv, reg); + udelay(200); + + /* Switch from Rawclk to PCDclk */ + temp = intel_de_read(dev_priv, reg); + intel_de_write(dev_priv, reg, temp | FDI_PCDCLK); + + intel_de_posting_read(dev_priv, reg); + udelay(200); + + /* Enable CPU FDI TX PLL, always on for Ironlake */ + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + if ((temp & FDI_TX_PLL_ENABLE) == 0) { + intel_de_write(dev_priv, reg, temp | FDI_TX_PLL_ENABLE); + + intel_de_posting_read(dev_priv, reg); + udelay(100); + } +} + +void ilk_fdi_pll_disable(struct intel_crtc *intel_crtc) +{ + struct drm_device *dev = intel_crtc->base.dev; + struct drm_i915_private *dev_priv = to_i915(dev); + enum pipe pipe = intel_crtc->pipe; + i915_reg_t reg; + u32 temp; + + /* Switch from PCDclk to Rawclk */ + reg = FDI_RX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + intel_de_write(dev_priv, reg, temp & ~FDI_PCDCLK); + + /* Disable CPU FDI TX PLL */ + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + intel_de_write(dev_priv, reg, temp & ~FDI_TX_PLL_ENABLE); + + intel_de_posting_read(dev_priv, reg); + udelay(100); + + reg = FDI_RX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + intel_de_write(dev_priv, reg, temp & ~FDI_RX_PLL_ENABLE); + + /* Wait for the clocks to turn off. 
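+ * The posting read below flushes the RX PLL disable write before the + * fixed 100us settle delay.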
*/ + intel_de_posting_read(dev_priv, reg); + udelay(100); +} + +void ilk_fdi_disable(struct intel_crtc *crtc) +{ + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); + enum pipe pipe = crtc->pipe; + i915_reg_t reg; + u32 temp; + + /* disable CPU FDI tx and PCH FDI rx */ + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + intel_de_write(dev_priv, reg, temp & ~FDI_TX_ENABLE); + intel_de_posting_read(dev_priv, reg); + + reg = FDI_RX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~(0x7 << 16); + temp |= (intel_de_read(dev_priv, PIPECONF(pipe)) & PIPECONF_BPC_MASK) << 11; + intel_de_write(dev_priv, reg, temp & ~FDI_RX_ENABLE); + + intel_de_posting_read(dev_priv, reg); + udelay(100); + + /* Ironlake workaround, disable clock pointer after downing FDI */ + if (HAS_PCH_IBX(dev_priv)) + intel_de_write(dev_priv, FDI_RX_CHICKEN(pipe), + FDI_RX_PHASE_SYNC_POINTER_OVR); + + /* still set train pattern 1 */ + reg = FDI_TX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + temp &= ~FDI_LINK_TRAIN_NONE; + temp |= FDI_LINK_TRAIN_PATTERN_1; + intel_de_write(dev_priv, reg, temp); + + reg = FDI_RX_CTL(pipe); + temp = intel_de_read(dev_priv, reg); + if (HAS_PCH_CPT(dev_priv)) { + temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT; + temp |= FDI_LINK_TRAIN_PATTERN_1_CPT; + } else { + temp &= ~FDI_LINK_TRAIN_NONE; + temp |= FDI_LINK_TRAIN_PATTERN_1; + } + /* BPC in FDI rx is consistent with that in PIPECONF */ + temp &= ~(0x07 << 16); + temp |= (intel_de_read(dev_priv, PIPECONF(pipe)) & PIPECONF_BPC_MASK) << 11; + intel_de_write(dev_priv, reg, temp); + + intel_de_posting_read(dev_priv, reg); + udelay(100); +} + +void +intel_fdi_init_hook(struct drm_i915_private *dev_priv) +{ + if (IS_GEN(dev_priv, 5)) { + dev_priv->display.fdi_link_train = ilk_fdi_link_train; + } else if (IS_GEN(dev_priv, 6)) { + dev_priv->display.fdi_link_train = gen6_fdi_link_train; + } else if (IS_IVYBRIDGE(dev_priv)) { + /* FIXME: detect B0+ stepping and use auto training */ + dev_priv->display.fdi_link_train = ivb_manual_fdi_link_train; + } +} diff --git a/drivers/gpu/drm/i915/display/intel_fdi.h b/drivers/gpu/drm/i915/display/intel_fdi.h new file mode 100644 index 000000000000..a9cd21663eb8 --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_fdi.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2020 Intel Corporation + */ + +#ifndef _INTEL_FDI_H_ +#define _INTEL_FDI_H_ + +struct drm_i915_private; +struct intel_crtc; +struct intel_crtc_state; + +#define I915_DISPLAY_CONFIG_RETRY 1 +int ilk_fdi_compute_config(struct intel_crtc *intel_crtc, + struct intel_crtc_state *pipe_config); +void intel_fdi_normal_train(struct intel_crtc *crtc); +void ilk_fdi_disable(struct intel_crtc *crtc); +void ilk_fdi_pll_disable(struct intel_crtc *intel_crtc); +void ilk_fdi_pll_enable(const struct intel_crtc_state *crtc_state); +void intel_fdi_init_hook(struct drm_i915_private *dev_priv); + +#endif diff --git a/drivers/gpu/drm/i915/display/intel_frontbuffer.c b/drivers/gpu/drm/i915/display/intel_frontbuffer.c index d898b370d7a4..7b38eee9980f 100644 --- a/drivers/gpu/drm/i915/display/intel_frontbuffer.c +++ b/drivers/gpu/drm/i915/display/intel_frontbuffer.c @@ -225,8 +225,10 @@ static void frontbuffer_release(struct kref *ref) struct i915_vma *vma; spin_lock(&obj->vma.lock); - for_each_ggtt_vma(vma, obj) + for_each_ggtt_vma(vma, obj) { + i915_vma_clear_scanout(vma); vma->display_alignment = I915_GTT_MIN_ALIGNMENT; + } spin_unlock(&obj->vma.lock); RCU_INIT_POINTER(obj->frontbuffer, NULL); diff --git 
a/drivers/gpu/drm/i915/display/intel_hdcp.c b/drivers/gpu/drm/i915/display/intel_hdcp.c index b2a4bbcfdcd2..ae1371c36a32 100644 --- a/drivers/gpu/drm/i915/display/intel_hdcp.c +++ b/drivers/gpu/drm/i915/display/intel_hdcp.c @@ -15,6 +15,7 @@ #include <drm/drm_hdcp.h> #include <drm/i915_component.h> +#include "i915_drv.h" #include "i915_reg.h" #include "intel_display_power.h" #include "intel_display_types.h" @@ -23,9 +24,78 @@ #include "intel_connector.h" #define KEY_LOAD_TRIES 5 -#define ENCRYPT_STATUS_CHANGE_TIMEOUT_MS 50 #define HDCP2_LC_RETRY_CNT 3 +static int intel_conn_to_vcpi(struct intel_connector *connector) +{ + /* For HDMI this is forced to be 0x0. For DP SST also this is 0x0. */ + return connector->port ? connector->port->vcpi.vcpi : 0; +} + +/* + * intel_hdcp_required_content_stream selects the highest common HDCP + * content_type for all streams in the DP MST topology: the security f/w has + * no provision to mark content_type for each stream separately, so it marks + * all available streams with the content_type provided at the time of port + * authentication. This may prevent userspace from using type1 content on an + * HDCP 2.2 capable sink when other sinks in the DP MST topology are not + * capable of HDCP 2.2. Though it is not compulsory, the security f/w should + * change its policy to allow different content_types for different streams. + */ +static int +intel_hdcp_required_content_stream(struct intel_digital_port *dig_port) +{ + struct drm_connector_list_iter conn_iter; + struct intel_digital_port *conn_dig_port; + struct intel_connector *connector; + struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; + bool enforce_type0 = false; + int k; + + data->k = 0; + + if (dig_port->hdcp_auth_status) + return 0; + + drm_connector_list_iter_begin(&i915->drm, &conn_iter); + for_each_intel_connector_iter(connector, &conn_iter) { + if (connector->base.status == connector_status_disconnected) + continue; + + if (!intel_encoder_is_mst(intel_attached_encoder(connector))) + continue; + + conn_dig_port = intel_attached_dig_port(connector); + if (conn_dig_port != dig_port) + continue; + + if (!enforce_type0 && !intel_hdcp2_capable(connector)) + enforce_type0 = true; + + data->streams[data->k].stream_id = intel_conn_to_vcpi(connector); + data->k++; + + /* if there is only one active stream */ + if (dig_port->dp.active_mst_links <= 1) + break; + } + drm_connector_list_iter_end(&conn_iter); + + if (drm_WARN_ON(&i915->drm, data->k > INTEL_NUM_PIPES(i915) || data->k == 0)) + return -EINVAL; + + /* + * Apply common protection level across all streams in DP MST Topology. + * Use highest supported content type for all streams in DP MST Topology. + */ + for (k = 0; k < data->k; k++) + data->streams[k].stream_type = + enforce_type0 ? DRM_MODE_HDCP_CONTENT_TYPE0 : DRM_MODE_HDCP_CONTENT_TYPE1; + + return 0; +} + static bool intel_hdcp_is_ksv_valid(u8 *ksv) { @@ -762,15 +832,22 @@ static int intel_hdcp_auth(struct intel_connector *connector) if (intel_de_wait_for_set(dev_priv, HDCP_STATUS(dev_priv, cpu_transcoder, port), HDCP_STATUS_ENC, - ENCRYPT_STATUS_CHANGE_TIMEOUT_MS)) { + HDCP_ENCRYPT_STATUS_CHANGE_TIMEOUT_MS)) { drm_err(&dev_priv->drm, "Timed out waiting for encryption\n"); return -ETIMEDOUT; } - /* - * XXX: If we have MST-connected devices, we need to enable encryption - on those as well. 
- */ + /* DP MST Auth Part 1 Step 2.a and Step 2.b */ + if (shim->stream_encryption) { + ret = shim->stream_encryption(connector, true); + if (ret) { + drm_err(&dev_priv->drm, "[%s:%d] Failed to enable HDCP 1.4 stream enc\n", + connector->base.name, connector->base.base.id); + return ret; + } + drm_dbg_kms(&dev_priv->drm, "HDCP 1.4 transcoder: %s stream encrypted\n", + transcoder_name(hdcp->stream_transcoder)); + } if (repeater_present) return intel_hdcp_auth_downstream(connector); @@ -792,24 +869,29 @@ static int _intel_hdcp_disable(struct intel_connector *connector) drm_dbg_kms(&dev_priv->drm, "[%s:%d] HDCP is being disabled...\n", connector->base.name, connector->base.base.id); - /* - * If there are other connectors on this port using HDCP, don't disable - * it. Instead, toggle the HDCP signalling off on that particular - * connector/pipe and exit. - */ - if (dig_port->num_hdcp_streams > 0) { - ret = hdcp->shim->toggle_signalling(dig_port, - cpu_transcoder, false); - if (ret) - DRM_ERROR("Failed to disable HDCP signalling\n"); - return ret; + if (hdcp->shim->stream_encryption) { + ret = hdcp->shim->stream_encryption(connector, false); + if (ret) { + drm_err(&dev_priv->drm, "[%s:%d] Failed to disable HDCP 1.4 stream enc\n", + connector->base.name, connector->base.base.id); + return ret; + } + drm_dbg_kms(&dev_priv->drm, "HDCP 1.4 transcoder: %s stream encryption disabled\n", + transcoder_name(hdcp->stream_transcoder)); + /* + * If there are other connectors on this port using HDCP, + * don't disable it until HDCP encryption is disabled for + * all connectors in the MST topology. + */ + if (dig_port->num_hdcp_streams > 0) + return 0; } hdcp->hdcp_encrypted = false; intel_de_write(dev_priv, HDCP_CONF(dev_priv, cpu_transcoder, port), 0); if (intel_de_wait_for_clear(dev_priv, HDCP_STATUS(dev_priv, cpu_transcoder, port), - ~0, ENCRYPT_STATUS_CHANGE_TIMEOUT_MS)) { + ~0, HDCP_ENCRYPT_STATUS_CHANGE_TIMEOUT_MS)) { drm_err(&dev_priv->drm, "Failed to disable HDCP, timeout clearing status\n"); return -ETIMEDOUT; @@ -1014,7 +1096,8 @@ static int hdcp2_prepare_ake_init(struct intel_connector *connector, struct hdcp2_ake_init *ake_data) { - struct hdcp_port_data *data = &connector->hdcp.port_data; + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct i915_hdcp_comp_master *comp; int ret; @@ -1043,7 +1126,8 @@ hdcp2_verify_rx_cert_prepare_km(struct intel_connector *connector, struct hdcp2_ake_no_stored_km *ek_pub_km, size_t *msg_sz) { - struct hdcp_port_data *data = &connector->hdcp.port_data; + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct i915_hdcp_comp_master *comp; int ret; @@ -1070,7 +1154,8 @@ hdcp2_verify_rx_cert_prepare_km(struct intel_connector *connector, static int hdcp2_verify_hprime(struct intel_connector *connector, struct hdcp2_ake_send_hprime *rx_hprime) { - struct hdcp_port_data *data = &connector->hdcp.port_data; + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct i915_hdcp_comp_master *comp; int ret; @@ -1095,7 +1180,8 @@ static int hdcp2_store_pairing_info(struct intel_connector *connector, struct hdcp2_ake_send_pairing_info 
*pairing_info) { - struct hdcp_port_data *data = &connector->hdcp.port_data; + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct i915_hdcp_comp_master *comp; int ret; @@ -1121,7 +1207,8 @@ static int hdcp2_prepare_lc_init(struct intel_connector *connector, struct hdcp2_lc_init *lc_init) { - struct hdcp_port_data *data = &connector->hdcp.port_data; + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct i915_hdcp_comp_master *comp; int ret; @@ -1147,7 +1234,8 @@ static int hdcp2_verify_lprime(struct intel_connector *connector, struct hdcp2_lc_send_lprime *rx_lprime) { - struct hdcp_port_data *data = &connector->hdcp.port_data; + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct i915_hdcp_comp_master *comp; int ret; @@ -1172,7 +1260,8 @@ hdcp2_verify_lprime(struct intel_connector *connector, static int hdcp2_prepare_skey(struct intel_connector *connector, struct hdcp2_ske_send_eks *ske_data) { - struct hdcp_port_data *data = &connector->hdcp.port_data; + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct i915_hdcp_comp_master *comp; int ret; @@ -1200,7 +1289,8 @@ hdcp2_verify_rep_topology_prepare_ack(struct intel_connector *connector, *rep_topology, struct hdcp2_rep_send_ack *rep_send_ack) { - struct hdcp_port_data *data = &connector->hdcp.port_data; + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct i915_hdcp_comp_master *comp; int ret; @@ -1228,7 +1318,8 @@ static int hdcp2_verify_mprime(struct intel_connector *connector, struct hdcp2_rep_stream_ready *stream_ready) { - struct hdcp_port_data *data = &connector->hdcp.port_data; + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct i915_hdcp_comp_master *comp; int ret; @@ -1251,7 +1342,8 @@ hdcp2_verify_mprime(struct intel_connector *connector, static int hdcp2_authenticate_port(struct intel_connector *connector) { - struct hdcp_port_data *data = &connector->hdcp.port_data; + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct i915_hdcp_comp_master *comp; int ret; @@ -1275,6 +1367,7 @@ static int hdcp2_authenticate_port(struct intel_connector *connector) static int hdcp2_close_mei_session(struct intel_connector *connector) { + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct i915_hdcp_comp_master *comp; int ret; @@ -1288,7 +1381,7 @@ static int hdcp2_close_mei_session(struct intel_connector *connector) } ret = comp->ops->close_hdcp_session(comp->mei_dev, - 
&connector->hdcp.port_data); + &dig_port->hdcp_port_data); mutex_unlock(&dev_priv->hdcp_comp_mutex); return ret; @@ -1448,13 +1541,14 @@ static int _hdcp2_propagate_stream_management_info(struct intel_connector *connector) { struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; struct intel_hdcp *hdcp = &connector->hdcp; union { struct hdcp2_rep_stream_manage stream_manage; struct hdcp2_rep_stream_ready stream_ready; } msgs; const struct intel_hdcp_shim *shim = hdcp->shim; - int ret; + int ret, streams_size_delta, i; if (connector->hdcp.seq_num_m > HDCP_2_2_SEQ_NUM_MAX) return -ERANGE; @@ -1463,16 +1557,18 @@ int _hdcp2_propagate_stream_management_info(struct intel_connector *connector) msgs.stream_manage.msg_id = HDCP_2_2_REP_STREAM_MANAGE; drm_hdcp_cpu_to_be24(msgs.stream_manage.seq_num_m, hdcp->seq_num_m); - /* K no of streams is fixed as 1. Stored as big-endian. */ - msgs.stream_manage.k = cpu_to_be16(1); + msgs.stream_manage.k = cpu_to_be16(data->k); - /* For HDMI this is forced to be 0x0. For DP SST also this is 0x0. */ - msgs.stream_manage.streams[0].stream_id = 0; - msgs.stream_manage.streams[0].stream_type = hdcp->content_type; + for (i = 0; i < data->k; i++) { + msgs.stream_manage.streams[i].stream_id = data->streams[i].stream_id; + msgs.stream_manage.streams[i].stream_type = data->streams[i].stream_type; + } + streams_size_delta = (HDCP_2_2_MAX_CONTENT_STREAMS_CNT - data->k) * + sizeof(struct hdcp2_streamid_type); /* Send it to Repeater */ ret = shim->write_2_2_msg(dig_port, &msgs.stream_manage, - sizeof(msgs.stream_manage)); + sizeof(msgs.stream_manage) - streams_size_delta); if (ret < 0) goto out; @@ -1481,8 +1577,8 @@ int _hdcp2_propagate_stream_management_info(struct intel_connector *connector) if (ret < 0) goto out; - hdcp->port_data.seq_num_m = hdcp->seq_num_m; - hdcp->port_data.streams[0].stream_type = hdcp->content_type; + data->seq_num_m = hdcp->seq_num_m; + ret = hdcp2_verify_mprime(connector, &msgs.stream_ready); out: @@ -1606,6 +1702,36 @@ static int hdcp2_authenticate_sink(struct intel_connector *connector) return ret; } +static int hdcp2_enable_stream_encryption(struct intel_connector *connector) +{ + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); + struct drm_i915_private *dev_priv = to_i915(connector->base.dev); + struct intel_hdcp *hdcp = &connector->hdcp; + enum transcoder cpu_transcoder = hdcp->cpu_transcoder; + enum port port = dig_port->base.port; + int ret = 0; + + if (!(intel_de_read(dev_priv, HDCP2_STATUS(dev_priv, cpu_transcoder, port)) & + LINK_ENCRYPTION_STATUS)) { + drm_err(&dev_priv->drm, "[%s:%d] HDCP 2.2 Link is not encrypted\n", + connector->base.name, connector->base.base.id); + return -EPERM; + } + + if (hdcp->shim->stream_2_2_encryption) { + ret = hdcp->shim->stream_2_2_encryption(connector, true); + if (ret) { + drm_err(&dev_priv->drm, "[%s:%d] Failed to enable HDCP 2.2 stream enc\n", + connector->base.name, connector->base.base.id); + return ret; + } + drm_dbg_kms(&dev_priv->drm, "HDCP 2.2 transcoder: %s stream encrypted\n", + transcoder_name(hdcp->stream_transcoder)); + } + + return ret; +} + static int hdcp2_enable_encryption(struct intel_connector *connector) { struct intel_digital_port *dig_port = intel_attached_dig_port(connector); @@ -1641,7 +1767,8 @@ static int hdcp2_enable_encryption(struct intel_connector *connector) HDCP2_STATUS(dev_priv, cpu_transcoder, port), LINK_ENCRYPTION_STATUS, - ENCRYPT_STATUS_CHANGE_TIMEOUT_MS); + 
HDCP_ENCRYPT_STATUS_CHANGE_TIMEOUT_MS); + dig_port->hdcp_auth_status = true; return ret; } @@ -1665,7 +1792,7 @@ static int hdcp2_disable_encryption(struct intel_connector *connector) HDCP2_STATUS(dev_priv, cpu_transcoder, port), LINK_ENCRYPTION_STATUS, - ENCRYPT_STATUS_CHANGE_TIMEOUT_MS); + HDCP_ENCRYPT_STATUS_CHANGE_TIMEOUT_MS); if (ret == -ETIMEDOUT) drm_dbg_kms(&dev_priv->drm, "Disable Encryption Timedout"); @@ -1714,11 +1841,11 @@ hdcp2_propagate_stream_management_info(struct intel_connector *connector) static int hdcp2_authenticate_and_encrypt(struct intel_connector *connector) { + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); struct drm_i915_private *i915 = to_i915(connector->base.dev); - struct intel_hdcp *hdcp = &connector->hdcp; - int ret, i, tries = 3; + int ret = 0, i, tries = 3; - for (i = 0; i < tries; i++) { + for (i = 0; i < tries && !dig_port->hdcp_auth_status; i++) { ret = hdcp2_authenticate_sink(connector); if (!ret) { ret = hdcp2_propagate_stream_management_info(connector); @@ -1728,8 +1855,7 @@ static int hdcp2_authenticate_and_encrypt(struct intel_connector *connector) ret); break; } - hdcp->port_data.streams[0].stream_type = - hdcp->content_type; + ret = hdcp2_authenticate_port(connector); if (!ret) break; @@ -1744,7 +1870,7 @@ static int hdcp2_authenticate_and_encrypt(struct intel_connector *connector) drm_dbg_kms(&i915->drm, "Port deauth failed.\n"); } - if (!ret) { + if (!ret && !dig_port->hdcp_auth_status) { /* * Ensuring the required 200mSec min time interval between * Session Key Exchange and encryption. @@ -1759,12 +1885,16 @@ static int hdcp2_authenticate_and_encrypt(struct intel_connector *connector) } } + ret = hdcp2_enable_stream_encryption(connector); + return ret; } static int _intel_hdcp2_enable(struct intel_connector *connector) { + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); struct drm_i915_private *i915 = to_i915(connector->base.dev); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; struct intel_hdcp *hdcp = &connector->hdcp; int ret; @@ -1772,6 +1902,16 @@ static int _intel_hdcp2_enable(struct intel_connector *connector) connector->base.name, connector->base.base.id, hdcp->content_type); + /* Stream which requires encryption */ + if (!intel_encoder_is_mst(intel_attached_encoder(connector))) { + data->k = 1; + data->streams[0].stream_type = hdcp->content_type; + } else { + ret = intel_hdcp_required_content_stream(dig_port); + if (ret) + return ret; + } + ret = hdcp2_authenticate_and_encrypt(connector); if (ret) { drm_dbg_kms(&i915->drm, "HDCP2 Type%d Enabling Failed. 
(%d)\n", @@ -1789,18 +1929,37 @@ static int _intel_hdcp2_enable(struct intel_connector *connector) static int _intel_hdcp2_disable(struct intel_connector *connector) { + struct intel_digital_port *dig_port = intel_attached_dig_port(connector); struct drm_i915_private *i915 = to_i915(connector->base.dev); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; + struct intel_hdcp *hdcp = &connector->hdcp; int ret; drm_dbg_kms(&i915->drm, "[%s:%d] HDCP2.2 is being Disabled\n", connector->base.name, connector->base.base.id); + if (hdcp->shim->stream_2_2_encryption) { + ret = hdcp->shim->stream_2_2_encryption(connector, false); + if (ret) { + drm_err(&i915->drm, "[%s:%d] Failed to disable HDCP 2.2 stream enc\n", + connector->base.name, connector->base.base.id); + return ret; + } + drm_dbg_kms(&i915->drm, "HDCP 2.2 transcoder: %s stream encryption disabled\n", + transcoder_name(hdcp->stream_transcoder)); + + if (dig_port->num_hdcp_streams > 0) + return 0; + } + ret = hdcp2_disable_encryption(connector); if (hdcp2_deauthenticate_port(connector) < 0) drm_dbg_kms(&i915->drm, "Port deauth failed.\n"); connector->hdcp.hdcp2_encrypted = false; + dig_port->hdcp_auth_status = false; + data->k = 0; return ret; } @@ -1816,6 +1975,7 @@ static int intel_hdcp2_check_link(struct intel_connector *connector) int ret = 0; mutex_lock(&hdcp->mutex); + mutex_lock(&dig_port->hdcp_mutex); cpu_transcoder = hdcp->cpu_transcoder; /* hdcp2_check_link is expected only when HDCP2.2 is Enabled */ @@ -1837,7 +1997,7 @@ static int intel_hdcp2_check_link(struct intel_connector *connector) goto out; } - ret = hdcp->shim->check_2_2_link(dig_port); + ret = hdcp->shim->check_2_2_link(dig_port, connector); if (ret == HDCP_LINK_PROTECTED) { if (hdcp->value != DRM_MODE_CONTENT_PROTECTION_UNDESIRED) { intel_hdcp_update_value(connector, @@ -1893,6 +2053,7 @@ static int intel_hdcp2_check_link(struct intel_connector *connector) } out: + mutex_unlock(&dig_port->hdcp_mutex); mutex_unlock(&hdcp->mutex); return ret; } @@ -1968,12 +2129,13 @@ static enum mei_fw_tc intel_get_mei_fw_tc(enum transcoder cpu_transcoder) } static int initialize_hdcp_port_data(struct intel_connector *connector, - enum port port, + struct intel_digital_port *dig_port, const struct intel_hdcp_shim *shim) { struct drm_i915_private *dev_priv = to_i915(connector->base.dev); + struct hdcp_port_data *data = &dig_port->hdcp_port_data; struct intel_hdcp *hdcp = &connector->hdcp; - struct hdcp_port_data *data = &hdcp->port_data; + enum port port = dig_port->base.port; if (INTEL_GEN(dev_priv) < 12) data->fw_ddi = intel_get_mei_fw_ddi_index(port); @@ -1994,16 +2156,15 @@ static int initialize_hdcp_port_data(struct intel_connector *connector, data->port_type = (u8)HDCP_PORT_TYPE_INTEGRATED; data->protocol = (u8)shim->protocol; - data->k = 1; if (!data->streams) - data->streams = kcalloc(data->k, + data->streams = kcalloc(INTEL_NUM_PIPES(dev_priv), sizeof(struct hdcp2_streamid_type), GFP_KERNEL); if (!data->streams) { drm_err(&dev_priv->drm, "Out of Memory\n"); return -ENOMEM; } - + /* For SST */ data->streams[0].stream_id = 0; data->streams[0].stream_type = hdcp->content_type; @@ -2046,14 +2207,15 @@ void intel_hdcp_component_init(struct drm_i915_private *dev_priv) } } -static void intel_hdcp2_init(struct intel_connector *connector, enum port port, +static void intel_hdcp2_init(struct intel_connector *connector, + struct intel_digital_port *dig_port, const struct intel_hdcp_shim *shim) { struct drm_i915_private *i915 = to_i915(connector->base.dev); struct intel_hdcp *hdcp = 
&connector->hdcp; int ret; - ret = initialize_hdcp_port_data(connector, port, shim); + ret = initialize_hdcp_port_data(connector, dig_port, shim); if (ret) { drm_dbg_kms(&i915->drm, "Mei hdcp data init failed\n"); return; @@ -2063,7 +2225,7 @@ static void intel_hdcp2_init(struct intel_connector *connector, enum port port, } int intel_hdcp_init(struct intel_connector *connector, - enum port port, + struct intel_digital_port *dig_port, const struct intel_hdcp_shim *shim) { struct drm_i915_private *dev_priv = to_i915(connector->base.dev); @@ -2073,15 +2235,15 @@ int intel_hdcp_init(struct intel_connector *connector, if (!shim) return -EINVAL; - if (is_hdcp2_supported(dev_priv) && !connector->mst_port) - intel_hdcp2_init(connector, port, shim); + if (is_hdcp2_supported(dev_priv)) + intel_hdcp2_init(connector, dig_port, shim); ret = drm_connector_attach_content_protection_property(&connector->base, hdcp->hdcp2_supported); if (ret) { hdcp->hdcp2_supported = false; - kfree(hdcp->port_data.streams); + kfree(dig_port->hdcp_port_data.streams); return ret; } @@ -2095,7 +2257,7 @@ int intel_hdcp_init(struct intel_connector *connector, } int intel_hdcp_enable(struct intel_connector *connector, - enum transcoder cpu_transcoder, u8 content_type) + const struct intel_crtc_state *pipe_config, u8 content_type) { struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct intel_digital_port *dig_port = intel_attached_dig_port(connector); @@ -2106,15 +2268,28 @@ int intel_hdcp_enable(struct intel_connector *connector, if (!hdcp->shim) return -ENOENT; + if (!connector->encoder) { + drm_err(&dev_priv->drm, "[%s:%d] encoder is not initialized\n", + connector->base.name, connector->base.base.id); + return -ENODEV; + } + mutex_lock(&hdcp->mutex); mutex_lock(&dig_port->hdcp_mutex); drm_WARN_ON(&dev_priv->drm, hdcp->value == DRM_MODE_CONTENT_PROTECTION_ENABLED); hdcp->content_type = content_type; - hdcp->cpu_transcoder = cpu_transcoder; + + if (intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DP_MST)) { + hdcp->cpu_transcoder = pipe_config->mst_master_transcoder; + hdcp->stream_transcoder = pipe_config->cpu_transcoder; + } else { + hdcp->cpu_transcoder = pipe_config->cpu_transcoder; + hdcp->stream_transcoder = INVALID_TRANSCODER; + } if (INTEL_GEN(dev_priv) >= 12) - hdcp->port_data.fw_tc = intel_get_mei_fw_tc(cpu_transcoder); + dig_port->hdcp_port_data.fw_tc = intel_get_mei_fw_tc(hdcp->cpu_transcoder); /* * Considering that HDCP2.2 is more secure than HDCP1.4, If the setup @@ -2210,6 +2385,7 @@ void intel_hdcp_update_pipe(struct intel_atomic_state *state, if (content_protection_type_changed) { mutex_lock(&hdcp->mutex); hdcp->value = DRM_MODE_CONTENT_PROTECTION_DESIRED; + drm_connector_get(&connector->base); schedule_work(&hdcp->prop_work); mutex_unlock(&hdcp->mutex); } @@ -2221,11 +2397,19 @@ void intel_hdcp_update_pipe(struct intel_atomic_state *state, desired_and_not_enabled = hdcp->value != DRM_MODE_CONTENT_PROTECTION_ENABLED; mutex_unlock(&hdcp->mutex); + /* + * If HDCP already ENABLED and CP property is DESIRED, schedule + * prop_work to update correct CP property to user space. 
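+ * The drm_connector_get() below keeps the connector alive until prop_work + * has run; prop_work is expected to drop the reference after updating the + * property.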
+ */ + if (!desired_and_not_enabled && !content_protection_type_changed) { + drm_connector_get(&connector->base); + schedule_work(&hdcp->prop_work); + } } if (desired_and_not_enabled || content_protection_type_changed) intel_hdcp_enable(connector, - crtc_state->cpu_transcoder, + crtc_state, (u8)conn_state->hdcp_content_type); } @@ -2275,7 +2459,6 @@ void intel_hdcp_cleanup(struct intel_connector *connector) drm_WARN_ON(connector->base.dev, work_pending(&hdcp->prop_work)); mutex_lock(&hdcp->mutex); - kfree(hdcp->port_data.streams); hdcp->shim = NULL; mutex_unlock(&hdcp->mutex); } diff --git a/drivers/gpu/drm/i915/display/intel_hdcp.h b/drivers/gpu/drm/i915/display/intel_hdcp.h index 1bbf5b67ed0a..8f53b0c7fe5c 100644 --- a/drivers/gpu/drm/i915/display/intel_hdcp.h +++ b/drivers/gpu/drm/i915/display/intel_hdcp.h @@ -8,6 +8,8 @@ #include <linux/types.h> +#define HDCP_ENCRYPT_STATUS_CHANGE_TIMEOUT_MS 50 + struct drm_connector; struct drm_connector_state; struct drm_i915_private; @@ -16,16 +18,18 @@ struct intel_connector; struct intel_crtc_state; struct intel_encoder; struct intel_hdcp_shim; +struct intel_digital_port; enum port; enum transcoder; void intel_hdcp_atomic_check(struct drm_connector *connector, struct drm_connector_state *old_state, struct drm_connector_state *new_state); -int intel_hdcp_init(struct intel_connector *connector, enum port port, +int intel_hdcp_init(struct intel_connector *connector, + struct intel_digital_port *dig_port, const struct intel_hdcp_shim *hdcp_shim); int intel_hdcp_enable(struct intel_connector *connector, - enum transcoder cpu_transcoder, u8 content_type); + const struct intel_crtc_state *pipe_config, u8 content_type); int intel_hdcp_disable(struct intel_connector *connector); void intel_hdcp_update_pipe(struct intel_atomic_state *state, struct intel_encoder *encoder, diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c b/drivers/gpu/drm/i915/display/intel_hdmi.c index c5959590562b..d5f4b40a8460 100644 --- a/drivers/gpu/drm/i915/display/intel_hdmi.c +++ b/drivers/gpu/drm/i915/display/intel_hdmi.c @@ -1494,15 +1494,16 @@ static int kbl_repositioning_enc_en_signal(struct intel_connector *connector, usleep_range(25, 50); } - ret = intel_ddi_toggle_hdcp_signalling(&dig_port->base, cpu_transcoder, - false); + ret = intel_ddi_toggle_hdcp_bits(&dig_port->base, cpu_transcoder, + false, TRANS_DDI_HDCP_SIGNALLING); if (ret) { drm_err(&dev_priv->drm, "Disable HDCP signalling failed (%d)\n", ret); return ret; } - ret = intel_ddi_toggle_hdcp_signalling(&dig_port->base, cpu_transcoder, - true); + + ret = intel_ddi_toggle_hdcp_bits(&dig_port->base, cpu_transcoder, + true, TRANS_DDI_HDCP_SIGNALLING); if (ret) { drm_err(&dev_priv->drm, "Enable HDCP signalling failed (%d)\n", ret); @@ -1525,8 +1526,9 @@ int intel_hdmi_hdcp_toggle_signalling(struct intel_digital_port *dig_port, if (!enable) usleep_range(6, 60); /* Bspec says >= 6us */ - ret = intel_ddi_toggle_hdcp_signalling(&dig_port->base, cpu_transcoder, - enable); + ret = intel_ddi_toggle_hdcp_bits(&dig_port->base, + cpu_transcoder, enable, + TRANS_DDI_HDCP_SIGNALLING); if (ret) { drm_err(&dev_priv->drm, "%s HDCP signalling failed (%d)\n", enable ? 
"Enable" : "Disable", ret); @@ -1731,7 +1733,8 @@ int intel_hdmi_hdcp2_read_msg(struct intel_digital_port *dig_port, } static -int intel_hdmi_hdcp2_check_link(struct intel_digital_port *dig_port) +int intel_hdmi_hdcp2_check_link(struct intel_digital_port *dig_port, + struct intel_connector *connector) { u8 rx_status[HDCP_2_2_HDMI_RXSTATUS_LEN]; int ret; @@ -3290,7 +3293,7 @@ void intel_hdmi_init_connector(struct intel_digital_port *dig_port, intel_hdmi->attached_connector = intel_connector; if (is_hdcp_supported(dev_priv, port)) { - int ret = intel_hdcp_init(intel_connector, port, + int ret = intel_hdcp_init(intel_connector, dig_port, &intel_hdmi_hdcp_shim); if (ret) drm_dbg_kms(&dev_priv->drm, diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c index 6be5d8946c69..8edb9b2cdbeb 100644 --- a/drivers/gpu/drm/i915/display/intel_overlay.c +++ b/drivers/gpu/drm/i915/display/intel_overlay.c @@ -360,7 +360,7 @@ static void intel_overlay_release_old_vma(struct intel_overlay *overlay) intel_frontbuffer_flip_complete(overlay->i915, INTEL_FRONTBUFFER_OVERLAY(overlay->crtc->pipe)); - i915_gem_object_unpin_from_display_plane(vma); + i915_vma_unpin(vma); i915_vma_put(vma); } @@ -861,7 +861,7 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay, return 0; out_unpin: - i915_gem_object_unpin_from_display_plane(vma); + i915_vma_unpin(vma); out_pin_section: atomic_dec(&dev_priv->gpu_error.pending_fb_pin); diff --git a/drivers/gpu/drm/i915/display/intel_panel.c b/drivers/gpu/drm/i915/display/intel_panel.c index 7a4239d1c241..5fdf52643150 100644 --- a/drivers/gpu/drm/i915/display/intel_panel.c +++ b/drivers/gpu/drm/i915/display/intel_panel.c @@ -511,40 +511,79 @@ static u32 scale_hw_to_user(struct intel_connector *connector, 0, user_max); } -static u32 intel_panel_compute_brightness(struct intel_connector *connector, - u32 val) +u32 intel_panel_invert_pwm_level(struct intel_connector *connector, u32 val) { struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct intel_panel *panel = &connector->panel; - drm_WARN_ON(&dev_priv->drm, panel->backlight.max == 0); + drm_WARN_ON(&dev_priv->drm, panel->backlight.pwm_level_max == 0); if (dev_priv->params.invert_brightness < 0) return val; if (dev_priv->params.invert_brightness > 0 || dev_priv->quirks & QUIRK_INVERT_BRIGHTNESS) { - return panel->backlight.max - val + panel->backlight.min; + return panel->backlight.pwm_level_max - val + panel->backlight.pwm_level_min; } return val; } -static u32 lpt_get_backlight(struct intel_connector *connector) +void intel_panel_set_pwm_level(const struct drm_connector_state *conn_state, u32 val) +{ + struct intel_connector *connector = to_intel_connector(conn_state->connector); + struct drm_i915_private *i915 = to_i915(connector->base.dev); + struct intel_panel *panel = &connector->panel; + + drm_dbg_kms(&i915->drm, "set backlight PWM = %d\n", val); + panel->backlight.pwm_funcs->set(conn_state, val); +} + +u32 intel_panel_backlight_level_to_pwm(struct intel_connector *connector, u32 val) +{ + struct drm_i915_private *dev_priv = to_i915(connector->base.dev); + struct intel_panel *panel = &connector->panel; + + drm_WARN_ON_ONCE(&dev_priv->drm, + panel->backlight.max == 0 || panel->backlight.pwm_level_max == 0); + + val = scale(val, panel->backlight.min, panel->backlight.max, + panel->backlight.pwm_level_min, panel->backlight.pwm_level_max); + + return intel_panel_invert_pwm_level(connector, val); +} + +u32 intel_panel_backlight_level_from_pwm(struct 
intel_connector *connector, u32 val) +{ + struct drm_i915_private *dev_priv = to_i915(connector->base.dev); + struct intel_panel *panel = &connector->panel; + + drm_WARN_ON_ONCE(&dev_priv->drm, + panel->backlight.max == 0 || panel->backlight.pwm_level_max == 0); + + if (dev_priv->params.invert_brightness > 0 || + (dev_priv->params.invert_brightness == 0 && dev_priv->quirks & QUIRK_INVERT_BRIGHTNESS)) + val = panel->backlight.pwm_level_max - (val - panel->backlight.pwm_level_min); + + return scale(val, panel->backlight.pwm_level_min, panel->backlight.pwm_level_max, + panel->backlight.min, panel->backlight.max); +} + +static u32 lpt_get_backlight(struct intel_connector *connector, enum pipe unused) { struct drm_i915_private *dev_priv = to_i915(connector->base.dev); return intel_de_read(dev_priv, BLC_PWM_PCH_CTL2) & BACKLIGHT_DUTY_CYCLE_MASK; } -static u32 pch_get_backlight(struct intel_connector *connector) +static u32 pch_get_backlight(struct intel_connector *connector, enum pipe unused) { struct drm_i915_private *dev_priv = to_i915(connector->base.dev); return intel_de_read(dev_priv, BLC_PWM_CPU_CTL) & BACKLIGHT_DUTY_CYCLE_MASK; } -static u32 i9xx_get_backlight(struct intel_connector *connector) +static u32 i9xx_get_backlight(struct intel_connector *connector, enum pipe unused) { struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct intel_panel *panel = &connector->panel; @@ -564,23 +603,17 @@ static u32 i9xx_get_backlight(struct intel_connector *connector) return val; } -static u32 _vlv_get_backlight(struct drm_i915_private *dev_priv, enum pipe pipe) +static u32 vlv_get_backlight(struct intel_connector *connector, enum pipe pipe) { + struct drm_i915_private *dev_priv = to_i915(connector->base.dev); + if (drm_WARN_ON(&dev_priv->drm, pipe != PIPE_A && pipe != PIPE_B)) return 0; return intel_de_read(dev_priv, VLV_BLC_PWM_CTL(pipe)) & BACKLIGHT_DUTY_CYCLE_MASK; } -static u32 vlv_get_backlight(struct intel_connector *connector) -{ - struct drm_i915_private *dev_priv = to_i915(connector->base.dev); - enum pipe pipe = intel_connector_get_pipe(connector); - - return _vlv_get_backlight(dev_priv, pipe); -} - -static u32 bxt_get_backlight(struct intel_connector *connector) +static u32 bxt_get_backlight(struct intel_connector *connector, enum pipe unused) { struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct intel_panel *panel = &connector->panel; @@ -589,7 +622,7 @@ static u32 bxt_get_backlight(struct intel_connector *connector) BXT_BLC_PWM_DUTY(panel->backlight.controller)); } -static u32 ext_pwm_get_backlight(struct intel_connector *connector) +static u32 ext_pwm_get_backlight(struct intel_connector *connector, enum pipe unused) { struct intel_panel *panel = &connector->panel; struct pwm_state state; @@ -624,12 +657,12 @@ static void i9xx_set_backlight(const struct drm_connector_state *conn_state, u32 struct intel_panel *panel = &connector->panel; u32 tmp, mask; - drm_WARN_ON(&dev_priv->drm, panel->backlight.max == 0); + drm_WARN_ON(&dev_priv->drm, panel->backlight.pwm_level_max == 0); if (panel->backlight.combination_mode) { u8 lbpc; - lbpc = level * 0xfe / panel->backlight.max + 1; + lbpc = level * 0xfe / panel->backlight.pwm_level_max + 1; level /= lbpc; pci_write_config_byte(dev_priv->drm.pdev, LBPC, lbpc); } @@ -681,9 +714,8 @@ intel_panel_actually_set_backlight(const struct drm_connector_state *conn_state, struct drm_i915_private *i915 = to_i915(connector->base.dev); struct intel_panel *panel = &connector->panel; - drm_dbg_kms(&i915->drm, "set 
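
For reference, the new level handling above boils down to a linear rescale between the user-visible brightness range and the hardware PWM duty range, plus an optional polarity inversion. Below is a minimal user-space sketch of what intel_panel_backlight_level_to_pwm() computes; it is simplified (no clamping or rounding as the driver's scale() helper does), and the ranges and helper names are illustrative only.

#include <stdint.h>
#include <stdio.h>

// Linear rescale, loosely mirroring the driver's scale() helper
// (assumes smin <= v <= smax; no rounding).
static uint32_t scale_range(uint32_t v, uint32_t smin, uint32_t smax,
			    uint32_t dmin, uint32_t dmax)
{
	// 64-bit intermediate avoids overflow for large PWM ranges
	return dmin + (uint32_t)((uint64_t)(v - smin) * (dmax - dmin) /
				 (smax - smin));
}

// User level -> PWM duty with optional polarity inversion, modelled on
// intel_panel_backlight_level_to_pwm() above.
static uint32_t level_to_pwm(uint32_t level, uint32_t umin, uint32_t umax,
			     uint32_t pmin, uint32_t pmax, int invert)
{
	uint32_t duty = scale_range(level, umin, umax, pmin, pmax);

	return invert ? pmax - duty + pmin : duty;
}

int main(void)
{
	// map a 0..100 user scale onto a 16..4096 PWM duty range
	printf("%u\n", level_to_pwm(50, 0, 100, 16, 4096, 0)); // -> 2056
	return 0;
}
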
backlight PWM = %d\n", level); + drm_dbg_kms(&i915->drm, "set backlight level = %d\n", level); - level = intel_panel_compute_brightness(connector, level); panel->backlight.funcs->set(conn_state, level); } @@ -732,7 +764,7 @@ static void lpt_disable_backlight(const struct drm_connector_state *old_conn_sta struct drm_i915_private *dev_priv = to_i915(connector->base.dev); u32 tmp; - intel_panel_actually_set_backlight(old_conn_state, level); + intel_panel_set_pwm_level(old_conn_state, level); /* * Although we don't support or enable CPU PWM with LPT/SPT based @@ -760,7 +792,7 @@ static void pch_disable_backlight(const struct drm_connector_state *old_conn_sta struct drm_i915_private *dev_priv = to_i915(connector->base.dev); u32 tmp; - intel_panel_actually_set_backlight(old_conn_state, val); + intel_panel_set_pwm_level(old_conn_state, val); tmp = intel_de_read(dev_priv, BLC_PWM_CPU_CTL2); intel_de_write(dev_priv, BLC_PWM_CPU_CTL2, tmp & ~BLM_PWM_ENABLE); @@ -771,7 +803,7 @@ static void pch_disable_backlight(const struct drm_connector_state *old_conn_sta static void i9xx_disable_backlight(const struct drm_connector_state *old_conn_state, u32 val) { - intel_panel_actually_set_backlight(old_conn_state, val); + intel_panel_set_pwm_level(old_conn_state, val); } static void i965_disable_backlight(const struct drm_connector_state *old_conn_state, u32 val) @@ -779,7 +811,7 @@ static void i965_disable_backlight(const struct drm_connector_state *old_conn_st struct drm_i915_private *dev_priv = to_i915(old_conn_state->connector->dev); u32 tmp; - intel_panel_actually_set_backlight(old_conn_state, val); + intel_panel_set_pwm_level(old_conn_state, val); tmp = intel_de_read(dev_priv, BLC_PWM_CTL2); intel_de_write(dev_priv, BLC_PWM_CTL2, tmp & ~BLM_PWM_ENABLE); @@ -792,7 +824,7 @@ static void vlv_disable_backlight(const struct drm_connector_state *old_conn_sta enum pipe pipe = to_intel_crtc(old_conn_state->crtc)->pipe; u32 tmp; - intel_panel_actually_set_backlight(old_conn_state, val); + intel_panel_set_pwm_level(old_conn_state, val); tmp = intel_de_read(dev_priv, VLV_BLC_PWM_CTL2(pipe)); intel_de_write(dev_priv, VLV_BLC_PWM_CTL2(pipe), @@ -806,7 +838,7 @@ static void bxt_disable_backlight(const struct drm_connector_state *old_conn_sta struct intel_panel *panel = &connector->panel; u32 tmp; - intel_panel_actually_set_backlight(old_conn_state, val); + intel_panel_set_pwm_level(old_conn_state, val); tmp = intel_de_read(dev_priv, BXT_BLC_PWM_CTL(panel->backlight.controller)); @@ -827,7 +859,7 @@ static void cnp_disable_backlight(const struct drm_connector_state *old_conn_sta struct intel_panel *panel = &connector->panel; u32 tmp; - intel_panel_actually_set_backlight(old_conn_state, val); + intel_panel_set_pwm_level(old_conn_state, val); tmp = intel_de_read(dev_priv, BXT_BLC_PWM_CTL(panel->backlight.controller)); @@ -906,7 +938,7 @@ static void lpt_enable_backlight(const struct intel_crtc_state *crtc_state, intel_de_write(dev_priv, SOUTH_CHICKEN1, schicken); } - pch_ctl2 = panel->backlight.max << 16; + pch_ctl2 = panel->backlight.pwm_level_max << 16; intel_de_write(dev_priv, BLC_PWM_PCH_CTL2, pch_ctl2); pch_ctl1 = 0; @@ -923,7 +955,7 @@ static void lpt_enable_backlight(const struct intel_crtc_state *crtc_state, pch_ctl1 | BLM_PCH_PWM_ENABLE); /* This won't stick until the above enable. 
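
The combination-mode path in i9xx_set_backlight() earlier splits the requested level into a coarse LBPC byte (written via PCI config space) and a fine PWM duty, so that their product approximates the requested level. A small stand-alone sketch of that arithmetic, with illustrative values:

#include <stdio.h>

int main(void)
{
	unsigned int max = 0x1000, level = 0x800;
	unsigned int lbpc = level * 0xfe / max + 1;	// coarse factor, 1..255
	unsigned int pwm = level / lbpc;		// fine PWM duty

	printf("lbpc=%u pwm=%u ~level=%u\n", lbpc, pwm, lbpc * pwm);
	return 0;
}
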
*/ - intel_panel_actually_set_backlight(conn_state, level); + intel_panel_set_pwm_level(conn_state, level); } static void pch_enable_backlight(const struct intel_crtc_state *crtc_state, @@ -958,9 +990,9 @@ static void pch_enable_backlight(const struct intel_crtc_state *crtc_state, intel_de_write(dev_priv, BLC_PWM_CPU_CTL2, cpu_ctl2 | BLM_PWM_ENABLE); /* This won't stick until the above enable. */ - intel_panel_actually_set_backlight(conn_state, level); + intel_panel_set_pwm_level(conn_state, level); - pch_ctl2 = panel->backlight.max << 16; + pch_ctl2 = panel->backlight.pwm_level_max << 16; intel_de_write(dev_priv, BLC_PWM_PCH_CTL2, pch_ctl2); pch_ctl1 = 0; @@ -987,7 +1019,7 @@ static void i9xx_enable_backlight(const struct intel_crtc_state *crtc_state, intel_de_write(dev_priv, BLC_PWM_CTL, 0); } - freq = panel->backlight.max; + freq = panel->backlight.pwm_level_max; if (panel->backlight.combination_mode) freq /= 0xff; @@ -1001,7 +1033,7 @@ static void i9xx_enable_backlight(const struct intel_crtc_state *crtc_state, intel_de_posting_read(dev_priv, BLC_PWM_CTL); /* XXX: combine this into above write? */ - intel_panel_actually_set_backlight(conn_state, level); + intel_panel_set_pwm_level(conn_state, level); /* * Needed to enable backlight on some 855gm models. BLC_HIST_CTL is @@ -1028,7 +1060,7 @@ static void i965_enable_backlight(const struct intel_crtc_state *crtc_state, intel_de_write(dev_priv, BLC_PWM_CTL2, ctl2); } - freq = panel->backlight.max; + freq = panel->backlight.pwm_level_max; if (panel->backlight.combination_mode) freq /= 0xff; @@ -1044,7 +1076,7 @@ static void i965_enable_backlight(const struct intel_crtc_state *crtc_state, intel_de_posting_read(dev_priv, BLC_PWM_CTL2); intel_de_write(dev_priv, BLC_PWM_CTL2, ctl2 | BLM_PWM_ENABLE); - intel_panel_actually_set_backlight(conn_state, level); + intel_panel_set_pwm_level(conn_state, level); } static void vlv_enable_backlight(const struct intel_crtc_state *crtc_state, @@ -1063,11 +1095,11 @@ static void vlv_enable_backlight(const struct intel_crtc_state *crtc_state, intel_de_write(dev_priv, VLV_BLC_PWM_CTL2(pipe), ctl2); } - ctl = panel->backlight.max << 16; + ctl = panel->backlight.pwm_level_max << 16; intel_de_write(dev_priv, VLV_BLC_PWM_CTL(pipe), ctl); /* XXX: combine this into above write? 
*/ - intel_panel_actually_set_backlight(conn_state, level); + intel_panel_set_pwm_level(conn_state, level); ctl2 = 0; if (panel->backlight.active_low_pwm) @@ -1116,9 +1148,9 @@ static void bxt_enable_backlight(const struct intel_crtc_state *crtc_state, intel_de_write(dev_priv, BXT_BLC_PWM_FREQ(panel->backlight.controller), - panel->backlight.max); + panel->backlight.pwm_level_max); - intel_panel_actually_set_backlight(conn_state, level); + intel_panel_set_pwm_level(conn_state, level); pwm_ctl = 0; if (panel->backlight.active_low_pwm) @@ -1152,9 +1184,9 @@ static void cnp_enable_backlight(const struct intel_crtc_state *crtc_state, intel_de_write(dev_priv, BXT_BLC_PWM_FREQ(panel->backlight.controller), - panel->backlight.max); + panel->backlight.pwm_level_max); - intel_panel_actually_set_backlight(conn_state, level); + intel_panel_set_pwm_level(conn_state, level); pwm_ctl = 0; if (panel->backlight.active_low_pwm) @@ -1174,7 +1206,6 @@ static void ext_pwm_enable_backlight(const struct intel_crtc_state *crtc_state, struct intel_connector *connector = to_intel_connector(conn_state->connector); struct intel_panel *panel = &connector->panel; - level = intel_panel_compute_brightness(connector, level); pwm_set_relative_duty_cycle(&panel->backlight.pwm_state, level, 100); panel->backlight.pwm_state.enabled = true; pwm_apply_state(panel->backlight.pwm, &panel->backlight.pwm_state); @@ -1232,10 +1263,8 @@ static u32 intel_panel_get_backlight(struct intel_connector *connector) mutex_lock(&dev_priv->backlight_lock); - if (panel->backlight.enabled) { - val = panel->backlight.funcs->get(connector); - val = intel_panel_compute_brightness(connector, val); - } + if (panel->backlight.enabled) + val = panel->backlight.funcs->get(connector, intel_connector_get_pipe(connector)); mutex_unlock(&dev_priv->backlight_lock); @@ -1566,13 +1595,13 @@ static u32 get_backlight_max_vbt(struct intel_connector *connector) u16 pwm_freq_hz = get_vbt_pwm_freq(dev_priv); u32 pwm; - if (!panel->backlight.funcs->hz_to_pwm) { + if (!panel->backlight.pwm_funcs->hz_to_pwm) { drm_dbg_kms(&dev_priv->drm, "backlight frequency conversion not supported\n"); return 0; } - pwm = panel->backlight.funcs->hz_to_pwm(connector, pwm_freq_hz); + pwm = panel->backlight.pwm_funcs->hz_to_pwm(connector, pwm_freq_hz); if (!pwm) { drm_dbg_kms(&dev_priv->drm, "backlight frequency conversion failed\n"); @@ -1591,7 +1620,7 @@ static u32 get_backlight_min_vbt(struct intel_connector *connector) struct intel_panel *panel = &connector->panel; int min; - drm_WARN_ON(&dev_priv->drm, panel->backlight.max == 0); + drm_WARN_ON(&dev_priv->drm, panel->backlight.pwm_level_max == 0); /* * XXX: If the vbt value is 255, it makes min equal to max, which leads @@ -1608,7 +1637,7 @@ static u32 get_backlight_min_vbt(struct intel_connector *connector) } /* vbt value is a coefficient in range [0..255] */ - return scale(min, 0, 255, 0, panel->backlight.max); + return scale(min, 0, 255, 0, panel->backlight.pwm_level_max); } static int lpt_setup_backlight(struct intel_connector *connector, enum pipe unused) @@ -1628,29 +1657,27 @@ static int lpt_setup_backlight(struct intel_connector *connector, enum pipe unus panel->backlight.active_low_pwm = pch_ctl1 & BLM_PCH_POLARITY; pch_ctl2 = intel_de_read(dev_priv, BLC_PWM_PCH_CTL2); - panel->backlight.max = pch_ctl2 >> 16; + panel->backlight.pwm_level_max = pch_ctl2 >> 16; cpu_ctl2 = intel_de_read(dev_priv, BLC_PWM_CPU_CTL2); - if (!panel->backlight.max) - panel->backlight.max = get_backlight_max_vbt(connector); + if 
(!panel->backlight.pwm_level_max) + panel->backlight.pwm_level_max = get_backlight_max_vbt(connector); - if (!panel->backlight.max) + if (!panel->backlight.pwm_level_max) return -ENODEV; - panel->backlight.min = get_backlight_min_vbt(connector); + panel->backlight.pwm_level_min = get_backlight_min_vbt(connector); - panel->backlight.enabled = pch_ctl1 & BLM_PCH_PWM_ENABLE; + panel->backlight.pwm_enabled = pch_ctl1 & BLM_PCH_PWM_ENABLE; - cpu_mode = panel->backlight.enabled && HAS_PCH_LPT(dev_priv) && + cpu_mode = panel->backlight.pwm_enabled && HAS_PCH_LPT(dev_priv) && !(pch_ctl1 & BLM_PCH_OVERRIDE_ENABLE) && (cpu_ctl2 & BLM_PWM_ENABLE); - if (cpu_mode) - val = pch_get_backlight(connector); - else - val = lpt_get_backlight(connector); if (cpu_mode) { + val = pch_get_backlight(connector, unused); + drm_dbg_kms(&dev_priv->drm, "CPU backlight register was enabled, switching to PCH override\n"); @@ -1663,10 +1690,6 @@ static int lpt_setup_backlight(struct intel_connector *connector, enum pipe unus cpu_ctl2 & ~BLM_PWM_ENABLE); } - val = intel_panel_compute_brightness(connector, val); - panel->backlight.level = clamp(val, panel->backlight.min, - panel->backlight.max); - return 0; } @@ -1674,29 +1697,24 @@ static int pch_setup_backlight(struct intel_connector *connector, enum pipe unus { struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct intel_panel *panel = &connector->panel; - u32 cpu_ctl2, pch_ctl1, pch_ctl2, val; + u32 cpu_ctl2, pch_ctl1, pch_ctl2; pch_ctl1 = intel_de_read(dev_priv, BLC_PWM_PCH_CTL1); panel->backlight.active_low_pwm = pch_ctl1 & BLM_PCH_POLARITY; pch_ctl2 = intel_de_read(dev_priv, BLC_PWM_PCH_CTL2); - panel->backlight.max = pch_ctl2 >> 16; + panel->backlight.pwm_level_max = pch_ctl2 >> 16; - if (!panel->backlight.max) - panel->backlight.max = get_backlight_max_vbt(connector); + if (!panel->backlight.pwm_level_max) + panel->backlight.pwm_level_max = get_backlight_max_vbt(connector); - if (!panel->backlight.max) + if (!panel->backlight.pwm_level_max) return -ENODEV; - panel->backlight.min = get_backlight_min_vbt(connector); - - val = pch_get_backlight(connector); - val = intel_panel_compute_brightness(connector, val); - panel->backlight.level = clamp(val, panel->backlight.min, - panel->backlight.max); + panel->backlight.pwm_level_min = get_backlight_min_vbt(connector); cpu_ctl2 = intel_de_read(dev_priv, BLC_PWM_CPU_CTL2); - panel->backlight.enabled = (cpu_ctl2 & BLM_PWM_ENABLE) && + panel->backlight.pwm_enabled = (cpu_ctl2 & BLM_PWM_ENABLE) && (pch_ctl1 & BLM_PCH_PWM_ENABLE); return 0; @@ -1716,27 +1734,26 @@ static int i9xx_setup_backlight(struct intel_connector *connector, enum pipe unu if (IS_PINEVIEW(dev_priv)) panel->backlight.active_low_pwm = ctl & BLM_POLARITY_PNV; - panel->backlight.max = ctl >> 17; + panel->backlight.pwm_level_max = ctl >> 17; - if (!panel->backlight.max) { - panel->backlight.max = get_backlight_max_vbt(connector); - panel->backlight.max >>= 1; + if (!panel->backlight.pwm_level_max) { + panel->backlight.pwm_level_max = get_backlight_max_vbt(connector); + panel->backlight.pwm_level_max >>= 1; } - if (!panel->backlight.max) + if (!panel->backlight.pwm_level_max) return -ENODEV; if (panel->backlight.combination_mode) - panel->backlight.max *= 0xff; + panel->backlight.pwm_level_max *= 0xff; - panel->backlight.min = get_backlight_min_vbt(connector); + panel->backlight.pwm_level_min = get_backlight_min_vbt(connector); - val = i9xx_get_backlight(connector); - val = intel_panel_compute_brightness(connector, val); - panel->backlight.level = 
clamp(val, panel->backlight.min, - panel->backlight.max); + val = i9xx_get_backlight(connector, unused); + val = intel_panel_invert_pwm_level(connector, val); + val = clamp(val, panel->backlight.pwm_level_min, panel->backlight.pwm_level_max); - panel->backlight.enabled = val != 0; + panel->backlight.pwm_enabled = val != 0; return 0; } @@ -1745,32 +1762,27 @@ static int i965_setup_backlight(struct intel_connector *connector, enum pipe unu { struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct intel_panel *panel = &connector->panel; - u32 ctl, ctl2, val; + u32 ctl, ctl2; ctl2 = intel_de_read(dev_priv, BLC_PWM_CTL2); panel->backlight.combination_mode = ctl2 & BLM_COMBINATION_MODE; panel->backlight.active_low_pwm = ctl2 & BLM_POLARITY_I965; ctl = intel_de_read(dev_priv, BLC_PWM_CTL); - panel->backlight.max = ctl >> 16; + panel->backlight.pwm_level_max = ctl >> 16; - if (!panel->backlight.max) - panel->backlight.max = get_backlight_max_vbt(connector); + if (!panel->backlight.pwm_level_max) + panel->backlight.pwm_level_max = get_backlight_max_vbt(connector); - if (!panel->backlight.max) + if (!panel->backlight.pwm_level_max) return -ENODEV; if (panel->backlight.combination_mode) - panel->backlight.max *= 0xff; + panel->backlight.pwm_level_max *= 0xff; - panel->backlight.min = get_backlight_min_vbt(connector); + panel->backlight.pwm_level_min = get_backlight_min_vbt(connector); - val = i9xx_get_backlight(connector); - val = intel_panel_compute_brightness(connector, val); - panel->backlight.level = clamp(val, panel->backlight.min, - panel->backlight.max); - - panel->backlight.enabled = ctl2 & BLM_PWM_ENABLE; + panel->backlight.pwm_enabled = ctl2 & BLM_PWM_ENABLE; return 0; } @@ -1779,7 +1791,7 @@ static int vlv_setup_backlight(struct intel_connector *connector, enum pipe pipe { struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct intel_panel *panel = &connector->panel; - u32 ctl, ctl2, val; + u32 ctl, ctl2; if (drm_WARN_ON(&dev_priv->drm, pipe != PIPE_A && pipe != PIPE_B)) return -ENODEV; @@ -1788,22 +1800,17 @@ static int vlv_setup_backlight(struct intel_connector *connector, enum pipe pipe panel->backlight.active_low_pwm = ctl2 & BLM_POLARITY_I965; ctl = intel_de_read(dev_priv, VLV_BLC_PWM_CTL(pipe)); - panel->backlight.max = ctl >> 16; + panel->backlight.pwm_level_max = ctl >> 16; - if (!panel->backlight.max) - panel->backlight.max = get_backlight_max_vbt(connector); + if (!panel->backlight.pwm_level_max) + panel->backlight.pwm_level_max = get_backlight_max_vbt(connector); - if (!panel->backlight.max) + if (!panel->backlight.pwm_level_max) return -ENODEV; - panel->backlight.min = get_backlight_min_vbt(connector); - - val = _vlv_get_backlight(dev_priv, pipe); - val = intel_panel_compute_brightness(connector, val); - panel->backlight.level = clamp(val, panel->backlight.min, - panel->backlight.max); + panel->backlight.pwm_level_min = get_backlight_min_vbt(connector); - panel->backlight.enabled = ctl2 & BLM_PWM_ENABLE; + panel->backlight.pwm_enabled = ctl2 & BLM_PWM_ENABLE; return 0; } @@ -1828,24 +1835,18 @@ bxt_setup_backlight(struct intel_connector *connector, enum pipe unused) } panel->backlight.active_low_pwm = pwm_ctl & BXT_BLC_PWM_POLARITY; - panel->backlight.max = - intel_de_read(dev_priv, - BXT_BLC_PWM_FREQ(panel->backlight.controller)); + panel->backlight.pwm_level_max = + intel_de_read(dev_priv, BXT_BLC_PWM_FREQ(panel->backlight.controller)); - if (!panel->backlight.max) - panel->backlight.max = get_backlight_max_vbt(connector); + if 
(!panel->backlight.pwm_level_max) + panel->backlight.pwm_level_max = get_backlight_max_vbt(connector); - if (!panel->backlight.max) + if (!panel->backlight.pwm_level_max) return -ENODEV; - panel->backlight.min = get_backlight_min_vbt(connector); + panel->backlight.pwm_level_min = get_backlight_min_vbt(connector); - val = bxt_get_backlight(connector); - val = intel_panel_compute_brightness(connector, val); - panel->backlight.level = clamp(val, panel->backlight.min, - panel->backlight.max); - - panel->backlight.enabled = pwm_ctl & BXT_BLC_PWM_ENABLE; + panel->backlight.pwm_enabled = pwm_ctl & BXT_BLC_PWM_ENABLE; return 0; } @@ -1855,7 +1856,7 @@ cnp_setup_backlight(struct intel_connector *connector, enum pipe unused) { struct drm_i915_private *dev_priv = to_i915(connector->base.dev); struct intel_panel *panel = &connector->panel; - u32 pwm_ctl, val; + u32 pwm_ctl; /* * CNP has the BXT implementation of backlight, but with only one @@ -1868,24 +1869,18 @@ cnp_setup_backlight(struct intel_connector *connector, enum pipe unused) BXT_BLC_PWM_CTL(panel->backlight.controller)); panel->backlight.active_low_pwm = pwm_ctl & BXT_BLC_PWM_POLARITY; - panel->backlight.max = - intel_de_read(dev_priv, - BXT_BLC_PWM_FREQ(panel->backlight.controller)); + panel->backlight.pwm_level_max = + intel_de_read(dev_priv, BXT_BLC_PWM_FREQ(panel->backlight.controller)); - if (!panel->backlight.max) - panel->backlight.max = get_backlight_max_vbt(connector); + if (!panel->backlight.pwm_level_max) + panel->backlight.pwm_level_max = get_backlight_max_vbt(connector); - if (!panel->backlight.max) + if (!panel->backlight.pwm_level_max) return -ENODEV; - panel->backlight.min = get_backlight_min_vbt(connector); + panel->backlight.pwm_level_min = get_backlight_min_vbt(connector); - val = bxt_get_backlight(connector); - val = intel_panel_compute_brightness(connector, val); - panel->backlight.level = clamp(val, panel->backlight.min, - panel->backlight.max); - - panel->backlight.enabled = pwm_ctl & BXT_BLC_PWM_ENABLE; + panel->backlight.pwm_enabled = pwm_ctl & BXT_BLC_PWM_ENABLE; return 0; } @@ -1915,8 +1910,8 @@ static int ext_pwm_setup_backlight(struct intel_connector *connector, return -ENODEV; } - panel->backlight.max = 100; /* 100% */ - panel->backlight.min = get_backlight_min_vbt(connector); + panel->backlight.pwm_level_max = 100; /* 100% */ + panel->backlight.pwm_level_min = get_backlight_min_vbt(connector); if (pwm_is_enabled(panel->backlight.pwm)) { /* PWM is already enabled, use existing settings */ @@ -1924,10 +1919,8 @@ static int ext_pwm_setup_backlight(struct intel_connector *connector, level = pwm_get_relative_duty_cycle(&panel->backlight.pwm_state, 100); - level = intel_panel_compute_brightness(connector, level); - panel->backlight.level = clamp(level, panel->backlight.min, - panel->backlight.max); - panel->backlight.enabled = true; + level = intel_panel_invert_pwm_level(connector, level); + panel->backlight.pwm_enabled = true; drm_dbg_kms(&dev_priv->drm, "PWM already enabled at freq %ld, VBT freq %d, level %d\n", NSEC_PER_SEC / (unsigned long)panel->backlight.pwm_state.period, @@ -1943,6 +1936,58 @@ static int ext_pwm_setup_backlight(struct intel_connector *connector, return 0; } +static void intel_pwm_set_backlight(const struct drm_connector_state *conn_state, u32 level) +{ + struct intel_connector *connector = to_intel_connector(conn_state->connector); + struct intel_panel *panel = &connector->panel; + + panel->backlight.pwm_funcs->set(conn_state, + intel_panel_invert_pwm_level(connector, level)); +} + +static 
u32 intel_pwm_get_backlight(struct intel_connector *connector, enum pipe pipe) +{ + struct intel_panel *panel = &connector->panel; + + return intel_panel_invert_pwm_level(connector, + panel->backlight.pwm_funcs->get(connector, pipe)); +} + +static void intel_pwm_enable_backlight(const struct intel_crtc_state *crtc_state, + const struct drm_connector_state *conn_state, u32 level) +{ + struct intel_connector *connector = to_intel_connector(conn_state->connector); + struct intel_panel *panel = &connector->panel; + + panel->backlight.pwm_funcs->enable(crtc_state, conn_state, + intel_panel_invert_pwm_level(connector, level)); +} + +static void intel_pwm_disable_backlight(const struct drm_connector_state *conn_state, u32 level) +{ + struct intel_connector *connector = to_intel_connector(conn_state->connector); + struct intel_panel *panel = &connector->panel; + + panel->backlight.pwm_funcs->disable(conn_state, + intel_panel_invert_pwm_level(connector, level)); +} + +static int intel_pwm_setup_backlight(struct intel_connector *connector, enum pipe pipe) +{ + struct intel_panel *panel = &connector->panel; + int ret = panel->backlight.pwm_funcs->setup(connector, pipe); + + if (ret < 0) + return ret; + + panel->backlight.min = panel->backlight.pwm_level_min; + panel->backlight.max = panel->backlight.pwm_level_max; + panel->backlight.level = intel_pwm_get_backlight(connector, pipe); + panel->backlight.enabled = panel->backlight.pwm_enabled; + + return 0; +} + void intel_panel_update_backlight(struct intel_atomic_state *state, struct intel_encoder *encoder, const struct intel_crtc_state *crtc_state, @@ -2016,7 +2061,7 @@ static void intel_panel_destroy_backlight(struct intel_panel *panel) panel->backlight.present = false; } -static const struct intel_panel_bl_funcs bxt_funcs = { +static const struct intel_panel_bl_funcs bxt_pwm_funcs = { .setup = bxt_setup_backlight, .enable = bxt_enable_backlight, .disable = bxt_disable_backlight, @@ -2025,7 +2070,7 @@ static const struct intel_panel_bl_funcs bxt_funcs = { .hz_to_pwm = bxt_hz_to_pwm, }; -static const struct intel_panel_bl_funcs cnp_funcs = { +static const struct intel_panel_bl_funcs cnp_pwm_funcs = { .setup = cnp_setup_backlight, .enable = cnp_enable_backlight, .disable = cnp_disable_backlight, @@ -2034,7 +2079,7 @@ static const struct intel_panel_bl_funcs cnp_funcs = { .hz_to_pwm = cnp_hz_to_pwm, }; -static const struct intel_panel_bl_funcs lpt_funcs = { +static const struct intel_panel_bl_funcs lpt_pwm_funcs = { .setup = lpt_setup_backlight, .enable = lpt_enable_backlight, .disable = lpt_disable_backlight, @@ -2043,7 +2088,7 @@ static const struct intel_panel_bl_funcs lpt_funcs = { .hz_to_pwm = lpt_hz_to_pwm, }; -static const struct intel_panel_bl_funcs spt_funcs = { +static const struct intel_panel_bl_funcs spt_pwm_funcs = { .setup = lpt_setup_backlight, .enable = lpt_enable_backlight, .disable = lpt_disable_backlight, @@ -2052,7 +2097,7 @@ static const struct intel_panel_bl_funcs spt_funcs = { .hz_to_pwm = spt_hz_to_pwm, }; -static const struct intel_panel_bl_funcs pch_funcs = { +static const struct intel_panel_bl_funcs pch_pwm_funcs = { .setup = pch_setup_backlight, .enable = pch_enable_backlight, .disable = pch_disable_backlight, @@ -2069,7 +2114,7 @@ static const struct intel_panel_bl_funcs ext_pwm_funcs = { .get = ext_pwm_get_backlight, }; -static const struct intel_panel_bl_funcs vlv_funcs = { +static const struct intel_panel_bl_funcs vlv_pwm_funcs = { .setup = vlv_setup_backlight, .enable = vlv_enable_backlight, .disable = 
vlv_disable_backlight, @@ -2078,7 +2123,7 @@ static const struct intel_panel_bl_funcs vlv_funcs = { .hz_to_pwm = vlv_hz_to_pwm, }; -static const struct intel_panel_bl_funcs i965_funcs = { +static const struct intel_panel_bl_funcs i965_pwm_funcs = { .setup = i965_setup_backlight, .enable = i965_enable_backlight, .disable = i965_disable_backlight, @@ -2087,7 +2132,7 @@ static const struct intel_panel_bl_funcs i965_funcs = { .hz_to_pwm = i965_hz_to_pwm, }; -static const struct intel_panel_bl_funcs i9xx_funcs = { +static const struct intel_panel_bl_funcs i9xx_pwm_funcs = { .setup = i9xx_setup_backlight, .enable = i9xx_enable_backlight, .disable = i9xx_disable_backlight, @@ -2096,6 +2141,14 @@ static const struct intel_panel_bl_funcs i9xx_funcs = { .hz_to_pwm = i9xx_hz_to_pwm, }; +static const struct intel_panel_bl_funcs pwm_bl_funcs = { + .setup = intel_pwm_setup_backlight, + .enable = intel_pwm_enable_backlight, + .disable = intel_pwm_disable_backlight, + .set = intel_pwm_set_backlight, + .get = intel_pwm_get_backlight, +}; + /* Set up chip specific backlight functions */ static void intel_panel_init_backlight_funcs(struct intel_panel *panel) @@ -2104,36 +2157,39 @@ intel_panel_init_backlight_funcs(struct intel_panel *panel) container_of(panel, struct intel_connector, panel); struct drm_i915_private *dev_priv = to_i915(connector->base.dev); - if (connector->base.connector_type == DRM_MODE_CONNECTOR_eDP && - intel_dp_aux_init_backlight_funcs(connector) == 0) - return; - if (connector->base.connector_type == DRM_MODE_CONNECTOR_DSI && intel_dsi_dcs_init_backlight_funcs(connector) == 0) return; if (IS_GEN9_LP(dev_priv)) { - panel->backlight.funcs = &bxt_funcs; + panel->backlight.pwm_funcs = &bxt_pwm_funcs; } else if (INTEL_PCH_TYPE(dev_priv) >= PCH_CNP) { - panel->backlight.funcs = &cnp_funcs; + panel->backlight.pwm_funcs = &cnp_pwm_funcs; } else if (INTEL_PCH_TYPE(dev_priv) >= PCH_LPT) { if (HAS_PCH_LPT(dev_priv)) - panel->backlight.funcs = &lpt_funcs; + panel->backlight.pwm_funcs = &lpt_pwm_funcs; else - panel->backlight.funcs = &spt_funcs; + panel->backlight.pwm_funcs = &spt_pwm_funcs; } else if (HAS_PCH_SPLIT(dev_priv)) { - panel->backlight.funcs = &pch_funcs; + panel->backlight.pwm_funcs = &pch_pwm_funcs; } else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) { if (connector->base.connector_type == DRM_MODE_CONNECTOR_DSI) { - panel->backlight.funcs = &ext_pwm_funcs; + panel->backlight.pwm_funcs = &ext_pwm_funcs; } else { - panel->backlight.funcs = &vlv_funcs; + panel->backlight.pwm_funcs = &vlv_pwm_funcs; } } else if (IS_GEN(dev_priv, 4)) { - panel->backlight.funcs = &i965_funcs; + panel->backlight.pwm_funcs = &i965_pwm_funcs; } else { - panel->backlight.funcs = &i9xx_funcs; + panel->backlight.pwm_funcs = &i9xx_pwm_funcs; } + + if (connector->base.connector_type == DRM_MODE_CONNECTOR_eDP && + intel_dp_aux_init_backlight_funcs(connector) == 0) + return; + + /* We're using a standard PWM backlight interface */ + panel->backlight.funcs = &pwm_bl_funcs; } enum drm_connector_status diff --git a/drivers/gpu/drm/i915/display/intel_panel.h b/drivers/gpu/drm/i915/display/intel_panel.h index 5b813fe90557..1d340f77bffc 100644 --- a/drivers/gpu/drm/i915/display/intel_panel.h +++ b/drivers/gpu/drm/i915/display/intel_panel.h @@ -49,6 +49,10 @@ struct drm_display_mode * intel_panel_edid_fixed_mode(struct intel_connector *connector); struct drm_display_mode * intel_panel_vbt_fixed_mode(struct intel_connector *connector); +void intel_panel_set_pwm_level(const struct drm_connector_state *conn_state, 
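
The net effect of the init rework just above is a two-level vtable: panel->backlight.pwm_funcs is the platform PWM backend, and panel->backlight.funcs is the generic front end (pwm_bl_funcs) wrapping it with the level/PWM translation; note the eDP AUX backlight check now runs after the PWM backend has been picked, so an AUX implementation can install its own funcs while a PWM backend still exists underneath. A minimal sketch of that indirection follows; the types and names are illustrative, not the driver's.

#include <stdio.h>

struct bl_funcs {
	void (*set)(unsigned int level);
};

// Low-level PWM backend, standing in for e.g. vlv_pwm_funcs.
static void hw_pwm_set(unsigned int duty)
{
	printf("program PWM duty %u\n", duty);
}

static const struct bl_funcs pwm_backend = { .set = hw_pwm_set };
static const struct bl_funcs *pwm_funcs = &pwm_backend;

// Generic front end (the pwm_bl_funcs role): translate the user level,
// then call through to whichever PWM backend init selected.
static void generic_set(unsigned int level)
{
	pwm_funcs->set(level * 40);	// stand-in for scale() + inversion
}

static const struct bl_funcs pwm_bl = { .set = generic_set };

int main(void)
{
	// An eDP AUX backlight could install its own table here instead,
	// while pwm_funcs remains initialized underneath.
	const struct bl_funcs *funcs = &pwm_bl;

	funcs->set(50);
	return 0;
}
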
u32 level);
+u32 intel_panel_invert_pwm_level(struct intel_connector *connector, u32 level);
+u32 intel_panel_backlight_level_to_pwm(struct intel_connector *connector, u32 level);
+u32 intel_panel_backlight_level_from_pwm(struct intel_connector *connector, u32 val);
 #if IS_ENABLED(CONFIG_BACKLIGHT_CLASS_DEVICE)
 int intel_backlight_device_register(struct intel_connector *connector);
diff --git a/drivers/gpu/drm/i915/display/intel_pps.c b/drivers/gpu/drm/i915/display/intel_pps.c
new file mode 100644
index 000000000000..c4867a8020a5
--- /dev/null
+++ b/drivers/gpu/drm/i915/display/intel_pps.c
@@ -0,0 +1,1406 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+
+#include "i915_drv.h"
+#include "intel_display_types.h"
+#include "intel_dp.h"
+#include "intel_pps.h"
+
+static void vlv_steal_power_sequencer(struct drm_i915_private *dev_priv,
+				      enum pipe pipe);
+
+static void pps_init_delays(struct intel_dp *intel_dp);
+static void pps_init_registers(struct intel_dp *intel_dp, bool force_disable_vdd);
+
+intel_wakeref_t intel_pps_lock(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+	intel_wakeref_t wakeref;
+
+	/*
+	 * See intel_pps_reset_all() for why we need a power domain reference here.
+	 */
+	wakeref = intel_display_power_get(dev_priv, POWER_DOMAIN_DISPLAY_CORE);
+	mutex_lock(&dev_priv->pps_mutex);
+
+	return wakeref;
+}
+
+intel_wakeref_t intel_pps_unlock(struct intel_dp *intel_dp,
+				 intel_wakeref_t wakeref)
+{
+	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+
+	mutex_unlock(&dev_priv->pps_mutex);
+	intel_display_power_put(dev_priv, POWER_DOMAIN_DISPLAY_CORE, wakeref);
+
+	return 0;
+}
+
+static void
+vlv_power_sequencer_kick(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
+	enum pipe pipe = intel_dp->pps.pps_pipe;
+	bool pll_enabled, release_cl_override = false;
+	enum dpio_phy phy = DPIO_PHY(pipe);
+	enum dpio_channel ch = vlv_pipe_to_channel(pipe);
+	u32 DP;
+
+	if (drm_WARN(&dev_priv->drm,
+		     intel_de_read(dev_priv, intel_dp->output_reg) & DP_PORT_EN,
+		     "skipping pipe %c power sequencer kick due to [ENCODER:%d:%s] being active\n",
+		     pipe_name(pipe), dig_port->base.base.base.id,
+		     dig_port->base.base.name))
+		return;
+
+	drm_dbg_kms(&dev_priv->drm,
+		    "kicking pipe %c power sequencer for [ENCODER:%d:%s]\n",
+		    pipe_name(pipe), dig_port->base.base.base.id,
+		    dig_port->base.base.name);
+
+	/* Preserve the BIOS-computed detected bit. This is
+	 * supposed to be read-only.
+	 */
+	DP = intel_de_read(dev_priv, intel_dp->output_reg) & DP_DETECTED;
+	DP |= DP_VOLTAGE_0_4 | DP_PRE_EMPHASIS_0;
+	DP |= DP_PORT_WIDTH(1);
+	DP |= DP_LINK_TRAIN_PAT_1;
+
+	if (IS_CHERRYVIEW(dev_priv))
+		DP |= DP_PIPE_SEL_CHV(pipe);
+	else
+		DP |= DP_PIPE_SEL(pipe);
+
+	pll_enabled = intel_de_read(dev_priv, DPLL(pipe)) & DPLL_VCO_ENABLE;
+
+	/*
+	 * The DPLL for the pipe must be enabled for this to work.
+	 * So enable it temporarily if it's not already enabled.
+	 */
+	if (!pll_enabled) {
+		release_cl_override = IS_CHERRYVIEW(dev_priv) &&
+			!chv_phy_powergate_ch(dev_priv, phy, ch, true);
+
+		if (vlv_force_pll_on(dev_priv, pipe, vlv_get_dpll(dev_priv))) {
+			drm_err(&dev_priv->drm,
+				"Failed to force on pll for pipe %c!\n",
+				pipe_name(pipe));
+			return;
+		}
+	}
+
+	/*
+	 * Similar magic as in intel_dp_enable_port().
+	 * We _must_ do this port enable + disable trick
+	 * to make this power sequencer lock onto the port.
+ * Otherwise even VDD force bit won't work. + */ + intel_de_write(dev_priv, intel_dp->output_reg, DP); + intel_de_posting_read(dev_priv, intel_dp->output_reg); + + intel_de_write(dev_priv, intel_dp->output_reg, DP | DP_PORT_EN); + intel_de_posting_read(dev_priv, intel_dp->output_reg); + + intel_de_write(dev_priv, intel_dp->output_reg, DP & ~DP_PORT_EN); + intel_de_posting_read(dev_priv, intel_dp->output_reg); + + if (!pll_enabled) { + vlv_force_pll_off(dev_priv, pipe); + + if (release_cl_override) + chv_phy_powergate_ch(dev_priv, phy, ch, false); + } +} + +static enum pipe vlv_find_free_pps(struct drm_i915_private *dev_priv) +{ + struct intel_encoder *encoder; + unsigned int pipes = (1 << PIPE_A) | (1 << PIPE_B); + + /* + * We don't have power sequencer currently. + * Pick one that's not used by other ports. + */ + for_each_intel_dp(&dev_priv->drm, encoder) { + struct intel_dp *intel_dp = enc_to_intel_dp(encoder); + + if (encoder->type == INTEL_OUTPUT_EDP) { + drm_WARN_ON(&dev_priv->drm, + intel_dp->pps.active_pipe != INVALID_PIPE && + intel_dp->pps.active_pipe != + intel_dp->pps.pps_pipe); + + if (intel_dp->pps.pps_pipe != INVALID_PIPE) + pipes &= ~(1 << intel_dp->pps.pps_pipe); + } else { + drm_WARN_ON(&dev_priv->drm, + intel_dp->pps.pps_pipe != INVALID_PIPE); + + if (intel_dp->pps.active_pipe != INVALID_PIPE) + pipes &= ~(1 << intel_dp->pps.active_pipe); + } + } + + if (pipes == 0) + return INVALID_PIPE; + + return ffs(pipes) - 1; +} + +static enum pipe +vlv_power_sequencer_pipe(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + enum pipe pipe; + + lockdep_assert_held(&dev_priv->pps_mutex); + + /* We should never land here with regular DP ports */ + drm_WARN_ON(&dev_priv->drm, !intel_dp_is_edp(intel_dp)); + + drm_WARN_ON(&dev_priv->drm, intel_dp->pps.active_pipe != INVALID_PIPE && + intel_dp->pps.active_pipe != intel_dp->pps.pps_pipe); + + if (intel_dp->pps.pps_pipe != INVALID_PIPE) + return intel_dp->pps.pps_pipe; + + pipe = vlv_find_free_pps(dev_priv); + + /* + * Didn't find one. This should not happen since there + * are two power sequencers and up to two eDP ports. + */ + if (drm_WARN_ON(&dev_priv->drm, pipe == INVALID_PIPE)) + pipe = PIPE_A; + + vlv_steal_power_sequencer(dev_priv, pipe); + intel_dp->pps.pps_pipe = pipe; + + drm_dbg_kms(&dev_priv->drm, + "picked pipe %c power sequencer for [ENCODER:%d:%s]\n", + pipe_name(intel_dp->pps.pps_pipe), + dig_port->base.base.base.id, + dig_port->base.base.name); + + /* init power sequencer on this pipe and port */ + pps_init_delays(intel_dp); + pps_init_registers(intel_dp, true); + + /* + * Even vdd force doesn't work until we've made + * the power sequencer lock in on the port. + */ + vlv_power_sequencer_kick(intel_dp); + + return intel_dp->pps.pps_pipe; +} + +static int +bxt_power_sequencer_idx(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + int backlight_controller = dev_priv->vbt.backlight.controller; + + lockdep_assert_held(&dev_priv->pps_mutex); + + /* We should never land here with regular DP ports */ + drm_WARN_ON(&dev_priv->drm, !intel_dp_is_edp(intel_dp)); + + if (!intel_dp->pps.pps_reset) + return backlight_controller; + + intel_dp->pps.pps_reset = false; + + /* + * Only the HW needs to be reprogrammed, the SW state is fixed and + * has been setup during connector init. 
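
vlv_find_free_pps() above is a small bitmask scan: start with both sequencers marked available, clear the ones already claimed by a port, then take the lowest set bit. A stand-alone sketch of the same logic, simplified to two usage flags:

#include <strings.h>
#include <stdio.h>

enum { PIPE_A, PIPE_B, INVALID_PIPE = -1 };

static int find_free_pps(int used_a, int used_b)
{
	unsigned int pipes = (1u << PIPE_A) | (1u << PIPE_B);

	if (used_a)
		pipes &= ~(1u << PIPE_A);
	if (used_b)
		pipes &= ~(1u << PIPE_B);

	// ffs() returns the 1-based index of the lowest set bit, 0 if none
	return pipes ? ffs(pipes) - 1 : INVALID_PIPE;
}

int main(void)
{
	printf("%d\n", find_free_pps(1, 0));	// -> 1 (PIPE_B)
	return 0;
}
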
+ */ + pps_init_registers(intel_dp, false); + + return backlight_controller; +} + +typedef bool (*vlv_pipe_check)(struct drm_i915_private *dev_priv, + enum pipe pipe); + +static bool vlv_pipe_has_pp_on(struct drm_i915_private *dev_priv, + enum pipe pipe) +{ + return intel_de_read(dev_priv, PP_STATUS(pipe)) & PP_ON; +} + +static bool vlv_pipe_has_vdd_on(struct drm_i915_private *dev_priv, + enum pipe pipe) +{ + return intel_de_read(dev_priv, PP_CONTROL(pipe)) & EDP_FORCE_VDD; +} + +static bool vlv_pipe_any(struct drm_i915_private *dev_priv, + enum pipe pipe) +{ + return true; +} + +static enum pipe +vlv_initial_pps_pipe(struct drm_i915_private *dev_priv, + enum port port, + vlv_pipe_check pipe_check) +{ + enum pipe pipe; + + for (pipe = PIPE_A; pipe <= PIPE_B; pipe++) { + u32 port_sel = intel_de_read(dev_priv, PP_ON_DELAYS(pipe)) & + PANEL_PORT_SELECT_MASK; + + if (port_sel != PANEL_PORT_SELECT_VLV(port)) + continue; + + if (!pipe_check(dev_priv, pipe)) + continue; + + return pipe; + } + + return INVALID_PIPE; +} + +static void +vlv_initial_power_sequencer_setup(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + enum port port = dig_port->base.port; + + lockdep_assert_held(&dev_priv->pps_mutex); + + /* try to find a pipe with this port selected */ + /* first pick one where the panel is on */ + intel_dp->pps.pps_pipe = vlv_initial_pps_pipe(dev_priv, port, + vlv_pipe_has_pp_on); + /* didn't find one? pick one where vdd is on */ + if (intel_dp->pps.pps_pipe == INVALID_PIPE) + intel_dp->pps.pps_pipe = vlv_initial_pps_pipe(dev_priv, port, + vlv_pipe_has_vdd_on); + /* didn't find one? pick one with just the correct port */ + if (intel_dp->pps.pps_pipe == INVALID_PIPE) + intel_dp->pps.pps_pipe = vlv_initial_pps_pipe(dev_priv, port, + vlv_pipe_any); + + /* didn't find one? just let vlv_power_sequencer_pipe() pick one when needed */ + if (intel_dp->pps.pps_pipe == INVALID_PIPE) { + drm_dbg_kms(&dev_priv->drm, + "no initial power sequencer for [ENCODER:%d:%s]\n", + dig_port->base.base.base.id, + dig_port->base.base.name); + return; + } + + drm_dbg_kms(&dev_priv->drm, + "initial power sequencer for [ENCODER:%d:%s]: pipe %c\n", + dig_port->base.base.base.id, + dig_port->base.base.name, + pipe_name(intel_dp->pps.pps_pipe)); +} + +void intel_pps_reset_all(struct drm_i915_private *dev_priv) +{ + struct intel_encoder *encoder; + + if (drm_WARN_ON(&dev_priv->drm, + !(IS_VALLEYVIEW(dev_priv) || + IS_CHERRYVIEW(dev_priv) || + IS_GEN9_LP(dev_priv)))) + return; + + /* + * We can't grab pps_mutex here due to deadlock with power_domain + * mutex when power_domain functions are called while holding pps_mutex. + * That also means that in order to use pps_pipe the code needs to + * hold both a power domain reference and pps_mutex, and the power domain + * reference get/put must be done while _not_ holding pps_mutex. + * pps_{lock,unlock}() do these steps in the correct order, so one + * should use them always. 
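
The ordering described in this comment reduces to: take the display power reference before pps_mutex, and release in the reverse order. A rough user-space analogue, with a pthread mutex and an atomic counter standing in for the power domain wakeref:

#include <pthread.h>
#include <stdatomic.h>

static pthread_mutex_t pps_mutex = PTHREAD_MUTEX_INITIALIZER;
static atomic_int power_refcount;	// stand-in for the display power wakeref

static void pps_lock(void)
{
	atomic_fetch_add(&power_refcount, 1);	// power reference first...
	pthread_mutex_lock(&pps_mutex);		// ...then the mutex
}

static void pps_unlock(void)
{
	pthread_mutex_unlock(&pps_mutex);	// drop the mutex first...
	atomic_fetch_sub(&power_refcount, 1);	// ...then the power reference
}

int main(void)
{
	pps_lock();
	// ...touch PPS registers here...
	pps_unlock();
	return 0;
}
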
+ */ + + for_each_intel_dp(&dev_priv->drm, encoder) { + struct intel_dp *intel_dp = enc_to_intel_dp(encoder); + + drm_WARN_ON(&dev_priv->drm, + intel_dp->pps.active_pipe != INVALID_PIPE); + + if (encoder->type != INTEL_OUTPUT_EDP) + continue; + + if (IS_GEN9_LP(dev_priv)) + intel_dp->pps.pps_reset = true; + else + intel_dp->pps.pps_pipe = INVALID_PIPE; + } +} + +struct pps_registers { + i915_reg_t pp_ctrl; + i915_reg_t pp_stat; + i915_reg_t pp_on; + i915_reg_t pp_off; + i915_reg_t pp_div; +}; + +static void intel_pps_get_registers(struct intel_dp *intel_dp, + struct pps_registers *regs) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + int pps_idx = 0; + + memset(regs, 0, sizeof(*regs)); + + if (IS_GEN9_LP(dev_priv)) + pps_idx = bxt_power_sequencer_idx(intel_dp); + else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) + pps_idx = vlv_power_sequencer_pipe(intel_dp); + + regs->pp_ctrl = PP_CONTROL(pps_idx); + regs->pp_stat = PP_STATUS(pps_idx); + regs->pp_on = PP_ON_DELAYS(pps_idx); + regs->pp_off = PP_OFF_DELAYS(pps_idx); + + /* Cycle delay moved from PP_DIVISOR to PP_CONTROL */ + if (IS_GEN9_LP(dev_priv) || INTEL_PCH_TYPE(dev_priv) >= PCH_CNP) + regs->pp_div = INVALID_MMIO_REG; + else + regs->pp_div = PP_DIVISOR(pps_idx); +} + +static i915_reg_t +_pp_ctrl_reg(struct intel_dp *intel_dp) +{ + struct pps_registers regs; + + intel_pps_get_registers(intel_dp, ®s); + + return regs.pp_ctrl; +} + +static i915_reg_t +_pp_stat_reg(struct intel_dp *intel_dp) +{ + struct pps_registers regs; + + intel_pps_get_registers(intel_dp, ®s); + + return regs.pp_stat; +} + +static bool edp_have_panel_power(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + + lockdep_assert_held(&dev_priv->pps_mutex); + + if ((IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) && + intel_dp->pps.pps_pipe == INVALID_PIPE) + return false; + + return (intel_de_read(dev_priv, _pp_stat_reg(intel_dp)) & PP_ON) != 0; +} + +static bool edp_have_panel_vdd(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + + lockdep_assert_held(&dev_priv->pps_mutex); + + if ((IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) && + intel_dp->pps.pps_pipe == INVALID_PIPE) + return false; + + return intel_de_read(dev_priv, _pp_ctrl_reg(intel_dp)) & EDP_FORCE_VDD; +} + +void intel_pps_check_power_unlocked(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + + if (!intel_dp_is_edp(intel_dp)) + return; + + if (!edp_have_panel_power(intel_dp) && !edp_have_panel_vdd(intel_dp)) { + drm_WARN(&dev_priv->drm, 1, + "eDP powered off while attempting aux channel communication.\n"); + drm_dbg_kms(&dev_priv->drm, "Status 0x%08x Control 0x%08x\n", + intel_de_read(dev_priv, _pp_stat_reg(intel_dp)), + intel_de_read(dev_priv, _pp_ctrl_reg(intel_dp))); + } +} + +#define IDLE_ON_MASK (PP_ON | PP_SEQUENCE_MASK | 0 | PP_SEQUENCE_STATE_MASK) +#define IDLE_ON_VALUE (PP_ON | PP_SEQUENCE_NONE | 0 | PP_SEQUENCE_STATE_ON_IDLE) + +#define IDLE_OFF_MASK (PP_ON | PP_SEQUENCE_MASK | 0 | 0) +#define IDLE_OFF_VALUE (0 | PP_SEQUENCE_NONE | 0 | 0) + +#define IDLE_CYCLE_MASK (PP_ON | PP_SEQUENCE_MASK | PP_CYCLE_DELAY_ACTIVE | PP_SEQUENCE_STATE_MASK) +#define IDLE_CYCLE_VALUE (0 | PP_SEQUENCE_NONE | 0 | PP_SEQUENCE_STATE_OFF_IDLE) + +static void intel_pps_verify_state(struct intel_dp *intel_dp); + +static void wait_panel_status(struct intel_dp *intel_dp, + u32 mask, + u32 value) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + i915_reg_t 
pp_stat_reg, pp_ctrl_reg;
+
+	lockdep_assert_held(&dev_priv->pps_mutex);
+
+	intel_pps_verify_state(intel_dp);
+
+	pp_stat_reg = _pp_stat_reg(intel_dp);
+	pp_ctrl_reg = _pp_ctrl_reg(intel_dp);
+
+	drm_dbg_kms(&dev_priv->drm,
+		    "mask %08x value %08x status %08x control %08x\n",
+		    mask, value,
+		    intel_de_read(dev_priv, pp_stat_reg),
+		    intel_de_read(dev_priv, pp_ctrl_reg));
+
+	if (intel_de_wait_for_register(dev_priv, pp_stat_reg,
+				       mask, value, 5000))
+		drm_err(&dev_priv->drm,
+			"Panel status timeout: status %08x control %08x\n",
+			intel_de_read(dev_priv, pp_stat_reg),
+			intel_de_read(dev_priv, pp_ctrl_reg));
+
+	drm_dbg_kms(&dev_priv->drm, "Wait complete\n");
+}
+
+static void wait_panel_on(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+
+	drm_dbg_kms(&i915->drm, "Wait for panel power on\n");
+	wait_panel_status(intel_dp, IDLE_ON_MASK, IDLE_ON_VALUE);
+}
+
+static void wait_panel_off(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+
+	drm_dbg_kms(&i915->drm, "Wait for panel power off time\n");
+	wait_panel_status(intel_dp, IDLE_OFF_MASK, IDLE_OFF_VALUE);
+}
+
+static void wait_panel_power_cycle(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+	ktime_t panel_power_on_time;
+	s64 panel_power_off_duration;
+
+	drm_dbg_kms(&i915->drm, "Wait for panel power cycle\n");
+
+	/* take the difference of the current time and the panel power off time
+	 * and then make the panel wait for t11_t12 if needed. */
+	panel_power_on_time = ktime_get_boottime();
+	panel_power_off_duration = ktime_ms_delta(panel_power_on_time, intel_dp->pps.panel_power_off_time);
+
+	/* When we disable the VDD override bit last we have to do the manual
+	 * wait. */
+	if (panel_power_off_duration < (s64)intel_dp->pps.panel_power_cycle_delay)
+		wait_remaining_ms_from_jiffies(jiffies,
+					       intel_dp->pps.panel_power_cycle_delay - panel_power_off_duration);
+
+	wait_panel_status(intel_dp, IDLE_CYCLE_MASK, IDLE_CYCLE_VALUE);
+}
+
+void intel_pps_wait_power_cycle(struct intel_dp *intel_dp)
+{
+	intel_wakeref_t wakeref;
+
+	if (!intel_dp_is_edp(intel_dp))
+		return;
+
+	with_intel_pps_lock(intel_dp, wakeref)
+		wait_panel_power_cycle(intel_dp);
+}
+
+static void wait_backlight_on(struct intel_dp *intel_dp)
+{
+	wait_remaining_ms_from_jiffies(intel_dp->pps.last_power_on,
+				       intel_dp->pps.backlight_on_delay);
+}
+
+static void edp_wait_backlight_off(struct intel_dp *intel_dp)
+{
+	wait_remaining_ms_from_jiffies(intel_dp->pps.last_backlight_off,
+				       intel_dp->pps.backlight_off_delay);
+}
+
+/* Read the current pp_control value, unlocking the register if it
+ * is locked.
+ */
+
+static u32 ilk_get_pp_control(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+	u32 control;
+
+	lockdep_assert_held(&dev_priv->pps_mutex);
+
+	control = intel_de_read(dev_priv, _pp_ctrl_reg(intel_dp));
+	if (drm_WARN_ON(&dev_priv->drm, !HAS_DDI(dev_priv) &&
+			(control & PANEL_UNLOCK_MASK) != PANEL_UNLOCK_REGS)) {
+		control &= ~PANEL_UNLOCK_MASK;
+		control |= PANEL_UNLOCK_REGS;
+	}
+	return control;
+}
+
+/*
+ * Must be paired with intel_pps_vdd_off_unlocked().
+ * Must hold pps_mutex around the whole on/off sequence.
+ * Can be nested with intel_pps_vdd_{on,off}() calls.
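
wait_panel_power_cycle() above only sleeps for whatever portion of the t11_t12 power-cycle delay has not already elapsed since the panel was powered off. A stand-alone sketch of that residual-delay computation, with POSIX clock calls standing in for the kernel's ktime/jiffies helpers:

#include <stdint.h>
#include <time.h>

static int64_t now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

// Sleep only for the part of the power-cycle delay that has not yet
// elapsed since the panel was last powered off.
static void wait_power_cycle(int64_t off_time_ms, int64_t cycle_delay_ms)
{
	int64_t elapsed = now_ms() - off_time_ms;

	if (elapsed < cycle_delay_ms) {
		int64_t rem = cycle_delay_ms - elapsed;
		struct timespec d = {
			.tv_sec = rem / 1000,
			.tv_nsec = (rem % 1000) * 1000000,
		};

		nanosleep(&d, NULL);
	}
}

int main(void)
{
	wait_power_cycle(now_ms() - 100, 500);	// ~400 ms residual wait
	return 0;
}
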
+ */ +bool intel_pps_vdd_on_unlocked(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + u32 pp; + i915_reg_t pp_stat_reg, pp_ctrl_reg; + bool need_to_disable = !intel_dp->pps.want_panel_vdd; + + lockdep_assert_held(&dev_priv->pps_mutex); + + if (!intel_dp_is_edp(intel_dp)) + return false; + + cancel_delayed_work(&intel_dp->pps.panel_vdd_work); + intel_dp->pps.want_panel_vdd = true; + + if (edp_have_panel_vdd(intel_dp)) + return need_to_disable; + + drm_WARN_ON(&dev_priv->drm, intel_dp->pps.vdd_wakeref); + intel_dp->pps.vdd_wakeref = intel_display_power_get(dev_priv, + intel_aux_power_domain(dig_port)); + + drm_dbg_kms(&dev_priv->drm, "Turning [ENCODER:%d:%s] VDD on\n", + dig_port->base.base.base.id, + dig_port->base.base.name); + + if (!edp_have_panel_power(intel_dp)) + wait_panel_power_cycle(intel_dp); + + pp = ilk_get_pp_control(intel_dp); + pp |= EDP_FORCE_VDD; + + pp_stat_reg = _pp_stat_reg(intel_dp); + pp_ctrl_reg = _pp_ctrl_reg(intel_dp); + + intel_de_write(dev_priv, pp_ctrl_reg, pp); + intel_de_posting_read(dev_priv, pp_ctrl_reg); + drm_dbg_kms(&dev_priv->drm, "PP_STATUS: 0x%08x PP_CONTROL: 0x%08x\n", + intel_de_read(dev_priv, pp_stat_reg), + intel_de_read(dev_priv, pp_ctrl_reg)); + /* + * If the panel wasn't on, delay before accessing aux channel + */ + if (!edp_have_panel_power(intel_dp)) { + drm_dbg_kms(&dev_priv->drm, + "[ENCODER:%d:%s] panel power wasn't enabled\n", + dig_port->base.base.base.id, + dig_port->base.base.name); + msleep(intel_dp->pps.panel_power_up_delay); + } + + return need_to_disable; +} + +/* + * Must be paired with intel_pps_off(). + * Nested calls to these functions are not allowed since + * we drop the lock. Caller must use some higher level + * locking to prevent nested calls from other threads. 
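
The pairing contract of intel_pps_vdd_on_unlocked() above hinges on the return value: only the caller that actually turned VDD on owns the matching disable, which is what makes nesting safe. A minimal sketch of the idiom, with plain booleans standing in for the EDP_FORCE_VDD bit and the driver's state tracking:

#include <stdbool.h>

static bool want_vdd;	// stand-in for intel_dp->pps.want_panel_vdd
static bool hw_vdd;	// stand-in for the EDP_FORCE_VDD register bit

// Returns true only for the caller that actually enabled VDD; that
// caller performs (or schedules) the matching disable.
static bool vdd_on(void)
{
	bool need_to_disable = !want_vdd;

	want_vdd = true;
	if (!hw_vdd)
		hw_vdd = true;	// the real code writes EDP_FORCE_VDD here
	return need_to_disable;
}

static void vdd_off(bool sync)
{
	want_vdd = false;
	if (sync)
		hw_vdd = false;	// otherwise delayed work turns it off later
}

int main(void)
{
	bool outer = vdd_on();	// true: this call turned VDD on
	bool nested = vdd_on();	// false: VDD already requested

	if (nested)
		vdd_off(false);
	if (outer)
		vdd_off(true);	// only the outer user really disables
	return 0;
}
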
+ */ +void intel_pps_vdd_on(struct intel_dp *intel_dp) +{ + intel_wakeref_t wakeref; + bool vdd; + + if (!intel_dp_is_edp(intel_dp)) + return; + + vdd = false; + with_intel_pps_lock(intel_dp, wakeref) + vdd = intel_pps_vdd_on_unlocked(intel_dp); + I915_STATE_WARN(!vdd, "[ENCODER:%d:%s] VDD already requested on\n", + dp_to_dig_port(intel_dp)->base.base.base.id, + dp_to_dig_port(intel_dp)->base.base.name); +} + +static void intel_pps_vdd_off_sync_unlocked(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = + dp_to_dig_port(intel_dp); + u32 pp; + i915_reg_t pp_stat_reg, pp_ctrl_reg; + + lockdep_assert_held(&dev_priv->pps_mutex); + + drm_WARN_ON(&dev_priv->drm, intel_dp->pps.want_panel_vdd); + + if (!edp_have_panel_vdd(intel_dp)) + return; + + drm_dbg_kms(&dev_priv->drm, "Turning [ENCODER:%d:%s] VDD off\n", + dig_port->base.base.base.id, + dig_port->base.base.name); + + pp = ilk_get_pp_control(intel_dp); + pp &= ~EDP_FORCE_VDD; + + pp_ctrl_reg = _pp_ctrl_reg(intel_dp); + pp_stat_reg = _pp_stat_reg(intel_dp); + + intel_de_write(dev_priv, pp_ctrl_reg, pp); + intel_de_posting_read(dev_priv, pp_ctrl_reg); + + /* Make sure sequencer is idle before allowing subsequent activity */ + drm_dbg_kms(&dev_priv->drm, "PP_STATUS: 0x%08x PP_CONTROL: 0x%08x\n", + intel_de_read(dev_priv, pp_stat_reg), + intel_de_read(dev_priv, pp_ctrl_reg)); + + if ((pp & PANEL_POWER_ON) == 0) + intel_dp->pps.panel_power_off_time = ktime_get_boottime(); + + intel_display_power_put(dev_priv, + intel_aux_power_domain(dig_port), + fetch_and_zero(&intel_dp->pps.vdd_wakeref)); +} + +void intel_pps_vdd_off_sync(struct intel_dp *intel_dp) +{ + intel_wakeref_t wakeref; + + if (!intel_dp_is_edp(intel_dp)) + return; + + cancel_delayed_work_sync(&intel_dp->pps.panel_vdd_work); + /* + * vdd might still be enabled due to the delayed vdd off. + * Make sure vdd is actually turned off here. + */ + with_intel_pps_lock(intel_dp, wakeref) + intel_pps_vdd_off_sync_unlocked(intel_dp); +} + +static void edp_panel_vdd_work(struct work_struct *__work) +{ + struct intel_pps *pps = container_of(to_delayed_work(__work), + struct intel_pps, panel_vdd_work); + struct intel_dp *intel_dp = container_of(pps, struct intel_dp, pps); + intel_wakeref_t wakeref; + + with_intel_pps_lock(intel_dp, wakeref) { + if (!intel_dp->pps.want_panel_vdd) + intel_pps_vdd_off_sync_unlocked(intel_dp); + } +} + +static void edp_panel_vdd_schedule_off(struct intel_dp *intel_dp) +{ + unsigned long delay; + + /* + * Queue the timer to fire a long time from now (relative to the power + * down delay) to keep the panel power up across a sequence of + * operations. + */ + delay = msecs_to_jiffies(intel_dp->pps.panel_power_cycle_delay * 5); + schedule_delayed_work(&intel_dp->pps.panel_vdd_work, delay); +} + +/* + * Must be paired with edp_panel_vdd_on(). + * Must hold pps_mutex around the whole on/off sequence. + * Can be nested with intel_pps_vdd_{on,off}() calls. 
+ */ +void intel_pps_vdd_off_unlocked(struct intel_dp *intel_dp, bool sync) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + + lockdep_assert_held(&dev_priv->pps_mutex); + + if (!intel_dp_is_edp(intel_dp)) + return; + + I915_STATE_WARN(!intel_dp->pps.want_panel_vdd, "[ENCODER:%d:%s] VDD not forced on", + dp_to_dig_port(intel_dp)->base.base.base.id, + dp_to_dig_port(intel_dp)->base.base.name); + + intel_dp->pps.want_panel_vdd = false; + + if (sync) + intel_pps_vdd_off_sync_unlocked(intel_dp); + else + edp_panel_vdd_schedule_off(intel_dp); +} + +void intel_pps_on_unlocked(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + u32 pp; + i915_reg_t pp_ctrl_reg; + + lockdep_assert_held(&dev_priv->pps_mutex); + + if (!intel_dp_is_edp(intel_dp)) + return; + + drm_dbg_kms(&dev_priv->drm, "Turn [ENCODER:%d:%s] panel power on\n", + dp_to_dig_port(intel_dp)->base.base.base.id, + dp_to_dig_port(intel_dp)->base.base.name); + + if (drm_WARN(&dev_priv->drm, edp_have_panel_power(intel_dp), + "[ENCODER:%d:%s] panel power already on\n", + dp_to_dig_port(intel_dp)->base.base.base.id, + dp_to_dig_port(intel_dp)->base.base.name)) + return; + + wait_panel_power_cycle(intel_dp); + + pp_ctrl_reg = _pp_ctrl_reg(intel_dp); + pp = ilk_get_pp_control(intel_dp); + if (IS_GEN(dev_priv, 5)) { + /* ILK workaround: disable reset around power sequence */ + pp &= ~PANEL_POWER_RESET; + intel_de_write(dev_priv, pp_ctrl_reg, pp); + intel_de_posting_read(dev_priv, pp_ctrl_reg); + } + + pp |= PANEL_POWER_ON; + if (!IS_GEN(dev_priv, 5)) + pp |= PANEL_POWER_RESET; + + intel_de_write(dev_priv, pp_ctrl_reg, pp); + intel_de_posting_read(dev_priv, pp_ctrl_reg); + + wait_panel_on(intel_dp); + intel_dp->pps.last_power_on = jiffies; + + if (IS_GEN(dev_priv, 5)) { + pp |= PANEL_POWER_RESET; /* restore panel reset bit */ + intel_de_write(dev_priv, pp_ctrl_reg, pp); + intel_de_posting_read(dev_priv, pp_ctrl_reg); + } +} + +void intel_pps_on(struct intel_dp *intel_dp) +{ + intel_wakeref_t wakeref; + + if (!intel_dp_is_edp(intel_dp)) + return; + + with_intel_pps_lock(intel_dp, wakeref) + intel_pps_on_unlocked(intel_dp); +} + +void intel_pps_off_unlocked(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + u32 pp; + i915_reg_t pp_ctrl_reg; + + lockdep_assert_held(&dev_priv->pps_mutex); + + if (!intel_dp_is_edp(intel_dp)) + return; + + drm_dbg_kms(&dev_priv->drm, "Turn [ENCODER:%d:%s] panel power off\n", + dig_port->base.base.base.id, dig_port->base.base.name); + + drm_WARN(&dev_priv->drm, !intel_dp->pps.want_panel_vdd, + "Need [ENCODER:%d:%s] VDD to turn off panel\n", + dig_port->base.base.base.id, dig_port->base.base.name); + + pp = ilk_get_pp_control(intel_dp); + /* We need to switch off panel power _and_ force vdd, for otherwise some + * panels get very unhappy and cease to work. */ + pp &= ~(PANEL_POWER_ON | PANEL_POWER_RESET | EDP_FORCE_VDD | + EDP_BLC_ENABLE); + + pp_ctrl_reg = _pp_ctrl_reg(intel_dp); + + intel_dp->pps.want_panel_vdd = false; + + intel_de_write(dev_priv, pp_ctrl_reg, pp); + intel_de_posting_read(dev_priv, pp_ctrl_reg); + + wait_panel_off(intel_dp); + intel_dp->pps.panel_power_off_time = ktime_get_boottime(); + + /* We got a reference when we enabled the VDD. 
*/
+	intel_display_power_put(dev_priv,
+				intel_aux_power_domain(dig_port),
+				fetch_and_zero(&intel_dp->pps.vdd_wakeref));
+}
+
+void intel_pps_off(struct intel_dp *intel_dp)
+{
+	intel_wakeref_t wakeref;
+
+	if (!intel_dp_is_edp(intel_dp))
+		return;
+
+	with_intel_pps_lock(intel_dp, wakeref)
+		intel_pps_off_unlocked(intel_dp);
+}
+
+/* Enable backlight in the panel power control. */
+void intel_pps_backlight_on(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+	intel_wakeref_t wakeref;
+
+	/*
+	 * If we enable the backlight right away following a panel power
+	 * on, we may see slight flicker as the panel syncs with the eDP
+	 * link. So delay a bit to make sure the image is solid before
+	 * allowing it to appear.
+	 */
+	wait_backlight_on(intel_dp);
+
+	with_intel_pps_lock(intel_dp, wakeref) {
+		i915_reg_t pp_ctrl_reg = _pp_ctrl_reg(intel_dp);
+		u32 pp;
+
+		pp = ilk_get_pp_control(intel_dp);
+		pp |= EDP_BLC_ENABLE;
+
+		intel_de_write(dev_priv, pp_ctrl_reg, pp);
+		intel_de_posting_read(dev_priv, pp_ctrl_reg);
+	}
+}
+
+/* Disable backlight in the panel power control. */
+void intel_pps_backlight_off(struct intel_dp *intel_dp)
+{
+	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
+	intel_wakeref_t wakeref;
+
+	if (!intel_dp_is_edp(intel_dp))
+		return;
+
+	with_intel_pps_lock(intel_dp, wakeref) {
+		i915_reg_t pp_ctrl_reg = _pp_ctrl_reg(intel_dp);
+		u32 pp;
+
+		pp = ilk_get_pp_control(intel_dp);
+		pp &= ~EDP_BLC_ENABLE;
+
+		intel_de_write(dev_priv, pp_ctrl_reg, pp);
+		intel_de_posting_read(dev_priv, pp_ctrl_reg);
+	}
+
+	intel_dp->pps.last_backlight_off = jiffies;
+	edp_wait_backlight_off(intel_dp);
+}
+
+/*
+ * Hook for controlling the panel power control backlight through the bl_power
+ * sysfs attribute. Take care to handle multiple calls.
+ */
+void intel_pps_backlight_power(struct intel_connector *connector, bool enable)
+{
+	struct drm_i915_private *i915 = to_i915(connector->base.dev);
+	struct intel_dp *intel_dp = intel_attached_dp(connector);
+	intel_wakeref_t wakeref;
+	bool is_enabled;
+
+	is_enabled = false;
+	with_intel_pps_lock(intel_dp, wakeref)
+		is_enabled = ilk_get_pp_control(intel_dp) & EDP_BLC_ENABLE;
+	if (is_enabled == enable)
+		return;
+
+	drm_dbg_kms(&i915->drm, "panel power control backlight %s\n",
+		    enable ? "enable" : "disable");
+
+	if (enable)
+		intel_pps_backlight_on(intel_dp);
+	else
+		intel_pps_backlight_off(intel_dp);
+}
+
+static void vlv_detach_power_sequencer(struct intel_dp *intel_dp)
+{
+	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
+	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
+	enum pipe pipe = intel_dp->pps.pps_pipe;
+	i915_reg_t pp_on_reg = PP_ON_DELAYS(pipe);
+
+	drm_WARN_ON(&dev_priv->drm, intel_dp->pps.active_pipe != INVALID_PIPE);
+
+	if (drm_WARN_ON(&dev_priv->drm, pipe != PIPE_A && pipe != PIPE_B))
+		return;
+
+	intel_pps_vdd_off_sync_unlocked(intel_dp);
+
+	/*
+	 * VLV seems to get confused when multiple power sequencers
+	 * have the same port selected (even if only one has power/vdd
+	 * enabled). The failure manifests as vlv_wait_port_ready() failing.
+	 * CHV on the other hand doesn't seem to mind having the same port
+	 * selected in multiple power sequencers, but let's always clear the
+	 * port select when logically disconnecting a power sequencer
+	 * from a port.
+ */ + drm_dbg_kms(&dev_priv->drm, + "detaching pipe %c power sequencer from [ENCODER:%d:%s]\n", + pipe_name(pipe), dig_port->base.base.base.id, + dig_port->base.base.name); + intel_de_write(dev_priv, pp_on_reg, 0); + intel_de_posting_read(dev_priv, pp_on_reg); + + intel_dp->pps.pps_pipe = INVALID_PIPE; +} + +static void vlv_steal_power_sequencer(struct drm_i915_private *dev_priv, + enum pipe pipe) +{ + struct intel_encoder *encoder; + + lockdep_assert_held(&dev_priv->pps_mutex); + + for_each_intel_dp(&dev_priv->drm, encoder) { + struct intel_dp *intel_dp = enc_to_intel_dp(encoder); + + drm_WARN(&dev_priv->drm, intel_dp->pps.active_pipe == pipe, + "stealing pipe %c power sequencer from active [ENCODER:%d:%s]\n", + pipe_name(pipe), encoder->base.base.id, + encoder->base.name); + + if (intel_dp->pps.pps_pipe != pipe) + continue; + + drm_dbg_kms(&dev_priv->drm, + "stealing pipe %c power sequencer from [ENCODER:%d:%s]\n", + pipe_name(pipe), encoder->base.base.id, + encoder->base.name); + + /* make sure vdd is off before we steal it */ + vlv_detach_power_sequencer(intel_dp); + } +} + +void vlv_pps_init(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct intel_dp *intel_dp = enc_to_intel_dp(encoder); + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); + + lockdep_assert_held(&dev_priv->pps_mutex); + + drm_WARN_ON(&dev_priv->drm, intel_dp->pps.active_pipe != INVALID_PIPE); + + if (intel_dp->pps.pps_pipe != INVALID_PIPE && + intel_dp->pps.pps_pipe != crtc->pipe) { + /* + * If another power sequencer was being used on this + * port previously make sure to turn off vdd there while + * we still have control of it. + */ + vlv_detach_power_sequencer(intel_dp); + } + + /* + * We may be stealing the power + * sequencer from another port. + */ + vlv_steal_power_sequencer(dev_priv, crtc->pipe); + + intel_dp->pps.active_pipe = crtc->pipe; + + if (!intel_dp_is_edp(intel_dp)) + return; + + /* now it's all ours */ + intel_dp->pps.pps_pipe = crtc->pipe; + + drm_dbg_kms(&dev_priv->drm, + "initializing pipe %c power sequencer for [ENCODER:%d:%s]\n", + pipe_name(intel_dp->pps.pps_pipe), encoder->base.base.id, + encoder->base.name); + + /* init power sequencer on this pipe and port */ + pps_init_delays(intel_dp); + pps_init_registers(intel_dp, true); +} + +static void intel_pps_vdd_sanitize(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); + + lockdep_assert_held(&dev_priv->pps_mutex); + + if (!edp_have_panel_vdd(intel_dp)) + return; + + /* + * The VDD bit needs a power domain reference, so if the bit is + * already enabled when we boot or resume, grab this reference and + * schedule a vdd off, so we don't hold on to the reference + * indefinitely. 
+ */ + drm_dbg_kms(&dev_priv->drm, + "VDD left on by BIOS, adjusting state tracking\n"); + drm_WARN_ON(&dev_priv->drm, intel_dp->pps.vdd_wakeref); + intel_dp->pps.vdd_wakeref = intel_display_power_get(dev_priv, + intel_aux_power_domain(dig_port)); + + edp_panel_vdd_schedule_off(intel_dp); +} + +bool intel_pps_have_power(struct intel_dp *intel_dp) +{ + intel_wakeref_t wakeref; + bool have_power = false; + + with_intel_pps_lock(intel_dp, wakeref) { + have_power = edp_have_panel_power(intel_dp) && + edp_have_panel_vdd(intel_dp); + } + + return have_power; +} + +static void pps_init_timestamps(struct intel_dp *intel_dp) +{ + intel_dp->pps.panel_power_off_time = ktime_get_boottime(); + intel_dp->pps.last_power_on = jiffies; + intel_dp->pps.last_backlight_off = jiffies; +} + +static void +intel_pps_readout_hw_state(struct intel_dp *intel_dp, struct edp_power_seq *seq) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + u32 pp_on, pp_off, pp_ctl; + struct pps_registers regs; + + intel_pps_get_registers(intel_dp, &regs); + + pp_ctl = ilk_get_pp_control(intel_dp); + + /* Ensure PPS is unlocked */ + if (!HAS_DDI(dev_priv)) + intel_de_write(dev_priv, regs.pp_ctrl, pp_ctl); + + pp_on = intel_de_read(dev_priv, regs.pp_on); + pp_off = intel_de_read(dev_priv, regs.pp_off); + + /* Pull timing values out of registers */ + seq->t1_t3 = REG_FIELD_GET(PANEL_POWER_UP_DELAY_MASK, pp_on); + seq->t8 = REG_FIELD_GET(PANEL_LIGHT_ON_DELAY_MASK, pp_on); + seq->t9 = REG_FIELD_GET(PANEL_LIGHT_OFF_DELAY_MASK, pp_off); + seq->t10 = REG_FIELD_GET(PANEL_POWER_DOWN_DELAY_MASK, pp_off); + + if (i915_mmio_reg_valid(regs.pp_div)) { + u32 pp_div; + + pp_div = intel_de_read(dev_priv, regs.pp_div); + + seq->t11_t12 = REG_FIELD_GET(PANEL_POWER_CYCLE_DELAY_MASK, pp_div) * 1000; + } else { + seq->t11_t12 = REG_FIELD_GET(BXT_POWER_CYCLE_DELAY_MASK, pp_ctl) * 1000; + } +} + +static void +intel_pps_dump_state(const char *state_name, const struct edp_power_seq *seq) +{ + DRM_DEBUG_KMS("%s t1_t3 %d t8 %d t9 %d t10 %d t11_t12 %d\n", + state_name, + seq->t1_t3, seq->t8, seq->t9, seq->t10, seq->t11_t12); +} + +static void +intel_pps_verify_state(struct intel_dp *intel_dp) +{ + struct edp_power_seq hw; + struct edp_power_seq *sw = &intel_dp->pps.pps_delays; + + intel_pps_readout_hw_state(intel_dp, &hw); + + if (hw.t1_t3 != sw->t1_t3 || hw.t8 != sw->t8 || hw.t9 != sw->t9 || + hw.t10 != sw->t10 || hw.t11_t12 != sw->t11_t12) { + DRM_ERROR("PPS state mismatch\n"); + intel_pps_dump_state("sw", sw); + intel_pps_dump_state("hw", &hw); + } +} + +static void pps_init_delays(struct intel_dp *intel_dp) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + struct edp_power_seq cur, vbt, spec, + *final = &intel_dp->pps.pps_delays; + + lockdep_assert_held(&dev_priv->pps_mutex); + + /* already initialized? */ + if (final->t11_t12 != 0) + return; + + intel_pps_readout_hw_state(intel_dp, &cur); + + intel_pps_dump_state("cur", &cur); + + vbt = dev_priv->vbt.edp.pps; + /* On the Toshiba Satellite P50-C-18C system the VBT T12 delay + * of 500ms appears to be too short. Occasionally the panel + * just fails to power back on. Increasing the delay to 800ms + * seems sufficient to avoid this problem. + */ + if (dev_priv->quirks & QUIRK_INCREASE_T12_DELAY) { + vbt.t11_t12 = max_t(u16, vbt.t11_t12, 1300 * 10); + drm_dbg_kms(&dev_priv->drm, + "Increasing T12 panel delay as per the quirk to %d\n", + vbt.t11_t12); + } + /* T11_T12 delay is special and actually in units of 100ms, but zero + * based in the hw (so we need to add 100 ms).
But the sw vbt + * table multiplies it by 1000 to make it in units of 100usec, + * too. */ + vbt.t11_t12 += 100 * 10; + + /* Upper limits from eDP 1.3 spec. Note that we use the clunky units of + * our hw here, which are all in 100usec. */ + spec.t1_t3 = 210 * 10; + spec.t8 = 50 * 10; /* no limit for t8, use t7 instead */ + spec.t9 = 50 * 10; /* no limit for t9, make it symmetric with t8 */ + spec.t10 = 500 * 10; + /* This one is special and actually in units of 100ms, but zero + * based in the hw (so we need to add 100 ms). But the sw vbt + * table multiplies it by 1000 to make it in units of 100usec, + * too. */ + spec.t11_t12 = (510 + 100) * 10; + + intel_pps_dump_state("vbt", &vbt); + + /* Use the max of the register settings and vbt. If both are + * unset, fall back to the spec limits. */ +#define assign_final(field) final->field = (max(cur.field, vbt.field) == 0 ? \ + spec.field : \ + max(cur.field, vbt.field)) + assign_final(t1_t3); + assign_final(t8); + assign_final(t9); + assign_final(t10); + assign_final(t11_t12); +#undef assign_final + +#define get_delay(field) (DIV_ROUND_UP(final->field, 10)) + intel_dp->pps.panel_power_up_delay = get_delay(t1_t3); + intel_dp->pps.backlight_on_delay = get_delay(t8); + intel_dp->pps.backlight_off_delay = get_delay(t9); + intel_dp->pps.panel_power_down_delay = get_delay(t10); + intel_dp->pps.panel_power_cycle_delay = get_delay(t11_t12); +#undef get_delay + + drm_dbg_kms(&dev_priv->drm, + "panel power up delay %d, power down delay %d, power cycle delay %d\n", + intel_dp->pps.panel_power_up_delay, + intel_dp->pps.panel_power_down_delay, + intel_dp->pps.panel_power_cycle_delay); + + drm_dbg_kms(&dev_priv->drm, "backlight on delay %d, off delay %d\n", + intel_dp->pps.backlight_on_delay, + intel_dp->pps.backlight_off_delay); + + /* + * We override the HW backlight delays to 1 because we do manual waits + * on them. For T8, even BSpec recommends doing it. For T9, if we + * don't do this, we'll end up waiting for the backlight off delay + * twice: once when we do the manual sleep, and once when we disable + * the panel and wait for the PP_STATUS bit to become zero. + */ + final->t8 = 1; + final->t9 = 1; + + /* + * HW has only a 100msec granularity for t11_t12 so round it up + * accordingly. + */ + final->t11_t12 = roundup(final->t11_t12, 100 * 10); +} + +static void pps_init_registers(struct intel_dp *intel_dp, bool force_disable_vdd) +{ + struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); + u32 pp_on, pp_off, port_sel = 0; + int div = RUNTIME_INFO(dev_priv)->rawclk_freq / 1000; + struct pps_registers regs; + enum port port = dp_to_dig_port(intel_dp)->base.port; + const struct edp_power_seq *seq = &intel_dp->pps.pps_delays; + + lockdep_assert_held(&dev_priv->pps_mutex); + + intel_pps_get_registers(intel_dp, &regs); + + /* + * On some VLV machines the BIOS can leave the VDD + * enabled even on power sequencers which aren't + * hooked up to any port. This would mess up the + * power domain tracking the first time we pick + * one of these power sequencers for use since + * intel_pps_vdd_on_unlocked() would notice that the VDD was + * already on and therefore wouldn't grab the power + * domain reference. Disable VDD first to avoid this. + * This also avoids spuriously turning the VDD on as + * soon as the new power sequencer gets initialized.
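As a quick sanity check of the delay bookkeeping in pps_init_delays() above, here is a hypothetical stand-alone sketch; the mode values are made up, and only the conversions (100 usec hardware units, the zero-based t11_t12, millisecond driver delays, and the 100 ms hardware granularity) mirror the code:

/* cc -o pps_units pps_units.c && ./pps_units */
#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define roundup(x, y) (DIV_ROUND_UP(x, y) * (y))

int main(void)
{
	/* VBT/hw delay fields are kept in units of 100 usec. */
	unsigned int t1_t3 = 210 * 10;   /* 210 ms power up, eDP spec limit */
	unsigned int t11_t12 = 510 * 10; /* 510 ms power cycle from the VBT */

	/* t11_t12 is zero based in hw, so the implicit 100 ms is added. */
	t11_t12 += 100 * 10;

	/* Driver-side delays are kept in ms, i.e. 100 usec units / 10. */
	printf("panel_power_up_delay    %u ms\n", DIV_ROUND_UP(t1_t3, 10));
	printf("panel_power_cycle_delay %u ms\n", DIV_ROUND_UP(t11_t12, 10));

	/* Only the value programmed into hw is rounded to its 100 ms step. */
	printf("hw t11_t12 field        %u (100 usec units)\n",
	       roundup(t11_t12, 100 * 10));
	return 0;
}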
+ */ + if (force_disable_vdd) { + u32 pp = ilk_get_pp_control(intel_dp); + + drm_WARN(&dev_priv->drm, pp & PANEL_POWER_ON, + "Panel power already on\n"); + + if (pp & EDP_FORCE_VDD) + drm_dbg_kms(&dev_priv->drm, + "VDD already on, disabling first\n"); + + pp &= ~EDP_FORCE_VDD; + + intel_de_write(dev_priv, regs.pp_ctrl, pp); + } + + pp_on = REG_FIELD_PREP(PANEL_POWER_UP_DELAY_MASK, seq->t1_t3) | + REG_FIELD_PREP(PANEL_LIGHT_ON_DELAY_MASK, seq->t8); + pp_off = REG_FIELD_PREP(PANEL_LIGHT_OFF_DELAY_MASK, seq->t9) | + REG_FIELD_PREP(PANEL_POWER_DOWN_DELAY_MASK, seq->t10); + + /* Haswell doesn't have any port selection bits for the panel + * power sequencer any more. */ + if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) { + port_sel = PANEL_PORT_SELECT_VLV(port); + } else if (HAS_PCH_IBX(dev_priv) || HAS_PCH_CPT(dev_priv)) { + switch (port) { + case PORT_A: + port_sel = PANEL_PORT_SELECT_DPA; + break; + case PORT_C: + port_sel = PANEL_PORT_SELECT_DPC; + break; + case PORT_D: + port_sel = PANEL_PORT_SELECT_DPD; + break; + default: + MISSING_CASE(port); + break; + } + } + + pp_on |= port_sel; + + intel_de_write(dev_priv, regs.pp_on, pp_on); + intel_de_write(dev_priv, regs.pp_off, pp_off); + + /* + * Compute the divisor for the pp clock, simply match the Bspec formula. + */ + if (i915_mmio_reg_valid(regs.pp_div)) { + intel_de_write(dev_priv, regs.pp_div, + REG_FIELD_PREP(PP_REFERENCE_DIVIDER_MASK, (100 * div) / 2 - 1) | REG_FIELD_PREP(PANEL_POWER_CYCLE_DELAY_MASK, DIV_ROUND_UP(seq->t11_t12, 1000))); + } else { + u32 pp_ctl; + + pp_ctl = intel_de_read(dev_priv, regs.pp_ctrl); + pp_ctl &= ~BXT_POWER_CYCLE_DELAY_MASK; + pp_ctl |= REG_FIELD_PREP(BXT_POWER_CYCLE_DELAY_MASK, DIV_ROUND_UP(seq->t11_t12, 1000)); + intel_de_write(dev_priv, regs.pp_ctrl, pp_ctl); + } + + drm_dbg_kms(&dev_priv->drm, + "panel power sequencer register settings: PP_ON %#x, PP_OFF %#x, PP_DIV %#x\n", + intel_de_read(dev_priv, regs.pp_on), + intel_de_read(dev_priv, regs.pp_off), + i915_mmio_reg_valid(regs.pp_div) ? + intel_de_read(dev_priv, regs.pp_div) : + (intel_de_read(dev_priv, regs.pp_ctrl) & BXT_POWER_CYCLE_DELAY_MASK)); +} + +void intel_pps_encoder_reset(struct intel_dp *intel_dp) +{ + struct drm_i915_private *i915 = dp_to_i915(intel_dp); + intel_wakeref_t wakeref; + + if (!intel_dp_is_edp(intel_dp)) + return; + + with_intel_pps_lock(intel_dp, wakeref) { + /* + * Reinit the power sequencer also on the resume path, in case + * BIOS did something nasty with it. + */ + if (IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915)) + vlv_initial_power_sequencer_setup(intel_dp); + + pps_init_delays(intel_dp); + pps_init_registers(intel_dp, false); + + intel_pps_vdd_sanitize(intel_dp); + } +} + +void intel_pps_init(struct intel_dp *intel_dp) +{ + INIT_DELAYED_WORK(&intel_dp->pps.panel_vdd_work, edp_panel_vdd_work); + + pps_init_timestamps(intel_dp); + + intel_pps_encoder_reset(intel_dp); +} + +void intel_pps_unlock_regs_wa(struct drm_i915_private *dev_priv) +{ + int pps_num; + int pps_idx; + + if (HAS_DDI(dev_priv)) + return; + /* + * This w/a is needed at least on CPT/PPT, but to be sure apply it + * everywhere where registers can be write protected. 
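The PP_DIV programming in pps_init_registers() above boils down to two field insertions; the sketch below re-derives them in user space. The Bspec formula and the /1000 power-cycle conversion come from the code, while the masks and the 24 MHz rawclk are stand-ins chosen for the example:

/* cc -o pp_div pp_div.c && ./pp_div */
#include <stdio.h>
#include <stdint.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Minimal stand-in for REG_FIELD_PREP(): shift a value into a mask. */
static uint32_t field_prep(uint32_t mask, uint32_t val)
{
	return (val << __builtin_ctz(mask)) & mask;
}

int main(void)
{
	uint32_t rawclk_khz = 24000;        /* hypothetical 24 MHz rawclk */
	uint32_t div = rawclk_khz / 1000;   /* as in pps_init_registers() */
	uint32_t t11_t12 = 700 * 10;        /* power cycle, 100 usec units */

	uint32_t ref_div = (100 * div) / 2 - 1;       /* Bspec formula */
	uint32_t cycle = DIV_ROUND_UP(t11_t12, 1000); /* 100 ms units */

	/* Illustrative field layout only; not the real register masks. */
	uint32_t pp_div = field_prep(0xffffff00, ref_div) |
			  field_prep(0x000000ff, cycle);

	printf("ref divider %u, power cycle %u, PP_DIV %#x\n",
	       ref_div, cycle, pp_div);
	return 0;
}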
+ */ + if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) + pps_num = 2; + else + pps_num = 1; + + for (pps_idx = 0; pps_idx < pps_num; pps_idx++) { + u32 val = intel_de_read(dev_priv, PP_CONTROL(pps_idx)); + + val = (val & ~PANEL_UNLOCK_MASK) | PANEL_UNLOCK_REGS; + intel_de_write(dev_priv, PP_CONTROL(pps_idx), val); + } +} + +void intel_pps_setup(struct drm_i915_private *i915) +{ + if (HAS_PCH_SPLIT(i915) || IS_GEN9_LP(i915)) + i915->pps_mmio_base = PCH_PPS_BASE; + else if (IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915)) + i915->pps_mmio_base = VLV_PPS_BASE; + else + i915->pps_mmio_base = PPS_BASE; +} diff --git a/drivers/gpu/drm/i915/display/intel_pps.h b/drivers/gpu/drm/i915/display/intel_pps.h new file mode 100644 index 000000000000..fbbcca782e7b --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_pps.h @@ -0,0 +1,52 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2020 Intel Corporation + */ + +#ifndef __INTEL_PPS_H__ +#define __INTEL_PPS_H__ + +#include <linux/types.h> + +#include "intel_wakeref.h" + +struct drm_i915_private; +struct intel_connector; +struct intel_crtc_state; +struct intel_dp; +struct intel_encoder; + +intel_wakeref_t intel_pps_lock(struct intel_dp *intel_dp); +intel_wakeref_t intel_pps_unlock(struct intel_dp *intel_dp, intel_wakeref_t wakeref); + +#define with_intel_pps_lock(dp, wf) \ + for ((wf) = intel_pps_lock(dp); (wf); (wf) = intel_pps_unlock((dp), (wf))) + +void intel_pps_backlight_on(struct intel_dp *intel_dp); +void intel_pps_backlight_off(struct intel_dp *intel_dp); +void intel_pps_backlight_power(struct intel_connector *connector, bool enable); + +bool intel_pps_vdd_on_unlocked(struct intel_dp *intel_dp); +void intel_pps_vdd_off_unlocked(struct intel_dp *intel_dp, bool sync); +void intel_pps_on_unlocked(struct intel_dp *intel_dp); +void intel_pps_off_unlocked(struct intel_dp *intel_dp); +void intel_pps_check_power_unlocked(struct intel_dp *intel_dp); + +void intel_pps_vdd_on(struct intel_dp *intel_dp); +void intel_pps_on(struct intel_dp *intel_dp); +void intel_pps_off(struct intel_dp *intel_dp); +void intel_pps_vdd_off_sync(struct intel_dp *intel_dp); +bool intel_pps_have_power(struct intel_dp *intel_dp); +void intel_pps_wait_power_cycle(struct intel_dp *intel_dp); + +void intel_pps_init(struct intel_dp *intel_dp); +void intel_pps_encoder_reset(struct intel_dp *intel_dp); +void intel_pps_reset_all(struct drm_i915_private *i915); + +void vlv_pps_init(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state); + +void intel_pps_unlock_regs_wa(struct drm_i915_private *i915); +void intel_pps_setup(struct drm_i915_private *i915); + +#endif /* __INTEL_PPS_H__ */ diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c index c24ae69426cf..850cb7f5b332 100644 --- a/drivers/gpu/drm/i915/display/intel_psr.c +++ b/drivers/gpu/drm/i915/display/intel_psr.c @@ -28,9 +28,10 @@ #include "i915_drv.h" #include "intel_atomic.h" #include "intel_display_types.h" +#include "intel_dp_aux.h" +#include "intel_hdmi.h" #include "intel_psr.h" #include "intel_sprite.h" -#include "intel_hdmi.h" /** * DOC: Panel Self Refresh (PSR/SRD) @@ -305,7 +306,7 @@ void intel_psr_init_dpcd(struct intel_dp *intel_dp) drm_dbg_kms(&dev_priv->drm, "eDP panel supports PSR version %x\n", intel_dp->psr_dpcd[0]); - if (drm_dp_has_quirk(&intel_dp->desc, 0, DP_DPCD_QUIRK_NO_PSR)) { + if (drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_NO_PSR)) { drm_dbg_kms(&dev_priv->drm, "PSR support not currently available for this panel\n"); return; 
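One usability note on the new header: with_intel_pps_lock() expands to a for-loop that pairs the pps mutex with a display power wakeref on entry and releases both when the body finishes, which is why the *_unlocked() functions above can simply lockdep-assert the mutex. A hypothetical caller, modelled on intel_pps_on()/intel_pps_off():

static void example_pps_user(struct intel_dp *intel_dp)
{
	intel_wakeref_t wakeref;

	/*
	 * The _unlocked() variants must only run inside this block.
	 * Because the macro is a for-loop, the unlock happens in the
	 * loop's step expression; returning or breaking out of the
	 * body would skip it and leak the wakeref, so don't.
	 */
	with_intel_pps_lock(intel_dp, wakeref)
		intel_pps_on_unlocked(intel_dp);
}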
@@ -811,6 +812,13 @@ void intel_psr_compute_config(struct intel_dp *intel_dp, &crtc_state->hw.adjusted_mode; int psr_setup_time; + /* + * Current PSR panels don't work reliably with VRR enabled. + * So if VRR is enabled, do not enable PSR. + */ + if (crtc_state->vrr.enable) + return; + if (!CAN_PSR(dev_priv)) return; diff --git a/drivers/gpu/drm/i915/display/intel_sprite.c b/drivers/gpu/drm/i915/display/intel_sprite.c index cf3589fd0ddb..993543334a1e 100644 --- a/drivers/gpu/drm/i915/display/intel_sprite.c +++ b/drivers/gpu/drm/i915/display/intel_sprite.c @@ -50,6 +50,7 @@ #include "intel_dsi.h" #include "intel_sprite.h" #include "i9xx_plane.h" +#include "intel_vrr.h" int intel_usecs_to_scanlines(const struct drm_display_mode *adjusted_mode, int usecs) @@ -62,6 +63,16 @@ int intel_usecs_to_scanlines(const struct drm_display_mode *adjusted_mode, 1000 * adjusted_mode->crtc_htotal); } +static int intel_mode_vblank_start(const struct drm_display_mode *mode) +{ + int vblank_start = mode->crtc_vblank_start; + + if (mode->flags & DRM_MODE_FLAG_INTERLACE) + vblank_start = DIV_ROUND_UP(vblank_start, 2); + + return vblank_start; +} + /** * intel_pipe_update_start() - start update of a set of display registers * @new_crtc_state: the new crtc state @@ -90,9 +101,10 @@ void intel_pipe_update_start(const struct intel_crtc_state *new_crtc_state) if (new_crtc_state->uapi.async_flip) return; - vblank_start = adjusted_mode->crtc_vblank_start; - if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) - vblank_start = DIV_ROUND_UP(vblank_start, 2); + if (new_crtc_state->vrr.enable) + vblank_start = intel_vrr_vmax_vblank_start(new_crtc_state); + else + vblank_start = intel_mode_vblank_start(adjusted_mode); /* FIXME needs to be calibrated sensibly */ min = vblank_start - intel_usecs_to_scanlines(adjusted_mode, @@ -258,6 +270,9 @@ void intel_pipe_update_end(struct intel_crtc_state *new_crtc_state) local_irq_enable(); + /* Send VRR Push to terminate Vblank */ + intel_vrr_send_push(new_crtc_state); + if (intel_vgpu_active(dev_priv)) return; @@ -634,13 +649,19 @@ skl_program_scaler(struct intel_plane *plane, /* Preoffset values for YUV to RGB Conversion */ #define PREOFF_YUV_TO_RGB_HI 0x1800 -#define PREOFF_YUV_TO_RGB_ME 0x1F00 +#define PREOFF_YUV_TO_RGB_ME 0x0000 #define PREOFF_YUV_TO_RGB_LO 0x1800 #define ROFF(x) (((x) & 0xffff) << 16) #define GOFF(x) (((x) & 0xffff) << 0) #define BOFF(x) (((x) & 0xffff) << 16) +/* + * Programs the input color space conversion stage for ICL HDR planes. + * Note that it is assumed that this stage always happens after YUV + * range correction. Thus, the input to this stage is assumed to be + * in full-range YCbCr.
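For reference, the full-range BT.709 YCbCr-to-RGB conversion this stage now always performs uses the standard coefficients (Kr = 0.2126, Kb = 0.0722). The float sketch below is purely illustrative and side-steps the hardware's fixed-point matrix encoding:

/* cc -o csc csc.c && ./csc */
#include <stdio.h>

static void bt709_full_to_rgb(double y, double cb, double cr,
			      double *r, double *g, double *b)
{
	/* cb/cr are centred: full-range code 128 maps to 0.0 */
	*r = y + 1.5748 * cr;
	*g = y - 0.1873 * cb - 0.4681 * cr;
	*b = y + 1.8556 * cb;
}

int main(void)
{
	double r, g, b;

	/* 50% grey: centred chroma contributes nothing */
	bt709_full_to_rgb(0.5, 0.0, 0.0, &r, &g, &b);
	printf("grey   -> R %.3f G %.3f B %.3f\n", r, g, b);

	bt709_full_to_rgb(0.5, -0.2, 0.3, &r, &g, &b);
	printf("tinted -> R %.3f G %.3f B %.3f\n", r, g, b);
	return 0;
}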
+ */ static void icl_program_input_csc(struct intel_plane *plane, const struct intel_crtc_state *crtc_state, @@ -688,52 +709,7 @@ icl_program_input_csc(struct intel_plane *plane, 0x0, 0x7800, 0x7F10, }, }; - - /* Matrix for Limited Range to Full Range Conversion */ - static const u16 input_csc_matrix_lr[][9] = { - /* - * BT.601 Limted range YCbCr -> full range RGB - * The matrix required is : - * [1.164384, 0.000, 1.596027, - * 1.164384, -0.39175, -0.812813, - * 1.164384, 2.017232, 0.0000] - */ - [DRM_COLOR_YCBCR_BT601] = { - 0x7CC8, 0x7950, 0x0, - 0x8D00, 0x7950, 0x9C88, - 0x0, 0x7950, 0x6810, - }, - /* - * BT.709 Limited range YCbCr -> full range RGB - * The matrix required is : - * [1.164384, 0.000, 1.792741, - * 1.164384, -0.213249, -0.532909, - * 1.164384, 2.112402, 0.0000] - */ - [DRM_COLOR_YCBCR_BT709] = { - 0x7E58, 0x7950, 0x0, - 0x8888, 0x7950, 0xADA8, - 0x0, 0x7950, 0x6870, - }, - /* - * BT.2020 Limited range YCbCr -> full range RGB - * The matrix required is : - * [1.164, 0.000, 1.678, - * 1.164, -0.1873, -0.6504, - * 1.164, 2.1417, 0.0000] - */ - [DRM_COLOR_YCBCR_BT2020] = { - 0x7D70, 0x7950, 0x0, - 0x8A68, 0x7950, 0xAC00, - 0x0, 0x7950, 0x6890, - }, - }; - const u16 *csc; - - if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE) - csc = input_csc_matrix[plane_state->hw.color_encoding]; - else - csc = input_csc_matrix_lr[plane_state->hw.color_encoding]; + const u16 *csc = input_csc_matrix[plane_state->hw.color_encoding]; intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 0), ROFF(csc[0]) | GOFF(csc[1])); @@ -750,14 +726,8 @@ icl_program_input_csc(struct intel_plane *plane, intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 0), PREOFF_YUV_TO_RGB_HI); - if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE) - intel_de_write_fw(dev_priv, - PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1), - 0); - else - intel_de_write_fw(dev_priv, - PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1), - PREOFF_YUV_TO_RGB_ME); + intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1), + PREOFF_YUV_TO_RGB_ME); intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 2), PREOFF_YUV_TO_RGB_LO); intel_de_write_fw(dev_priv, @@ -771,7 +741,8 @@ icl_program_input_csc(struct intel_plane *plane, static void skl_plane_async_flip(struct intel_plane *plane, const struct intel_crtc_state *crtc_state, - const struct intel_plane_state *plane_state) + const struct intel_plane_state *plane_state, + bool async_flip) { struct drm_i915_private *dev_priv = to_i915(plane->base.dev); unsigned long irqflags; @@ -782,6 +753,9 @@ skl_plane_async_flip(struct intel_plane *plane, plane_ctl |= skl_plane_ctl_crtc(crtc_state); + if (async_flip) + plane_ctl |= PLANE_CTL_ASYNC_FLIP; + spin_lock_irqsave(&dev_priv->uncore.lock, irqflags); intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), plane_ctl); @@ -867,6 +841,10 @@ skl_program_plane(struct intel_plane *plane, if (fb->format->is_yuv && icl_is_hdr_plane(dev_priv, plane_id)) icl_program_input_csc(plane, crtc_state, plane_state); + if (fb->modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC) + intel_uncore_write64_fw(&dev_priv->uncore, + PLANE_CC_VAL(pipe, plane_id), plane_state->ccval); + skl_write_plane_wm(plane, crtc_state); intel_de_write_fw(dev_priv, PLANE_KEYVAL(pipe, plane_id), @@ -958,6 +936,28 @@ skl_plane_get_hw_state(struct intel_plane *plane, return ret; } +static void +skl_plane_enable_flip_done(struct intel_plane *plane) +{ + struct drm_i915_private *i915 = to_i915(plane->base.dev); + enum pipe pipe = 
plane->pipe; + + spin_lock_irq(&i915->irq_lock); + bdw_enable_pipe_irq(i915, pipe, GEN9_PIPE_PLANE_FLIP_DONE(plane->id)); + spin_unlock_irq(&i915->irq_lock); +} + +static void +skl_plane_disable_flip_done(struct intel_plane *plane) +{ + struct drm_i915_private *i915 = to_i915(plane->base.dev); + enum pipe pipe = plane->pipe; + + spin_lock_irq(&i915->irq_lock); + bdw_disable_pipe_irq(i915, pipe, GEN9_PIPE_PLANE_FLIP_DONE(plane->id)); + spin_unlock_irq(&i915->irq_lock); +} + static void i9xx_plane_linear_gamma(u16 gamma[8]) { /* The points are not evenly spaced. */ @@ -1851,7 +1851,26 @@ g4x_sprite_max_stride(struct intel_plane *plane, u32 pixel_format, u64 modifier, unsigned int rotation) { - return 16384; + const struct drm_format_info *info = drm_format_info(pixel_format); + int cpp = info->cpp[0]; + + /* Limit to 4k pixels to guarantee TILEOFF.x doesn't get too big. */ + if (modifier == I915_FORMAT_MOD_X_TILED) + return min(4096 * cpp, 16 * 1024); + else + return 16 * 1024; +} + +static unsigned int +hsw_sprite_max_stride(struct intel_plane *plane, + u32 pixel_format, u64 modifier, + unsigned int rotation) +{ + const struct drm_format_info *info = drm_format_info(pixel_format); + int cpp = info->cpp[0]; + + /* Limit to 8k pixels to guarantee OFFSET.x doesn't get too big. */ + return min(8192 * cpp, 16 * 1024); } static u32 g4x_sprite_ctl_crtc(const struct intel_crtc_state *crtc_state) @@ -2366,7 +2385,8 @@ static int skl_plane_check_fb(const struct intel_crtc_state *crtc_state, fb->modifier == I915_FORMAT_MOD_Y_TILED_CCS || fb->modifier == I915_FORMAT_MOD_Yf_TILED_CCS || fb->modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS || - fb->modifier == I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS)) { + fb->modifier == I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS || + fb->modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC)) { drm_dbg_kms(&dev_priv->drm, "Y/Yf tiling not supported in IF-ID mode\n"); return -EINVAL; @@ -2856,6 +2876,7 @@ static const u64 skl_plane_format_modifiers_ccs[] = { static const u64 gen12_plane_format_modifiers_mc_ccs[] = { I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS, I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS, + I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC, I915_FORMAT_MOD_Y_TILED, I915_FORMAT_MOD_X_TILED, DRM_FORMAT_MOD_LINEAR, @@ -2864,6 +2885,7 @@ static const u64 gen12_plane_format_modifiers_mc_ccs[] = { static const u64 gen12_plane_format_modifiers_rc_ccs[] = { I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS, + I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC, I915_FORMAT_MOD_Y_TILED, I915_FORMAT_MOD_X_TILED, DRM_FORMAT_MOD_LINEAR, @@ -3054,6 +3076,7 @@ static bool gen12_plane_format_mod_supported(struct drm_plane *_plane, case I915_FORMAT_MOD_X_TILED: case I915_FORMAT_MOD_Y_TILED: case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS: + case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC: break; default: return false; @@ -3290,7 +3313,13 @@ skl_universal_plane_create(struct drm_i915_private *dev_priv, plane->get_hw_state = skl_plane_get_hw_state; plane->check_plane = skl_plane_check; plane->min_cdclk = skl_plane_min_cdclk; - plane->async_flip = skl_plane_async_flip; + + if (plane_id == PLANE_PRIMARY) { + plane->need_async_flip_disable_wa = IS_GEN_RANGE(dev_priv, 9, 10); + plane->async_flip = skl_plane_async_flip; + plane->enable_flip_done = skl_plane_enable_flip_done; + plane->disable_flip_done = skl_plane_disable_flip_done; + } if (INTEL_GEN(dev_priv) >= 11) formats = icl_get_plane_formats(dev_priv, pipe, @@ -3398,11 +3427,11 @@ intel_sprite_plane_create(struct drm_i915_private *dev_priv, return plane; if (IS_VALLEYVIEW(dev_priv) || 
IS_CHERRYVIEW(dev_priv)) { - plane->max_stride = i9xx_plane_max_stride; plane->update_plane = vlv_update_plane; plane->disable_plane = vlv_disable_plane; plane->get_hw_state = vlv_plane_get_hw_state; plane->check_plane = vlv_sprite_check; + plane->max_stride = i965_plane_max_stride; plane->min_cdclk = vlv_plane_min_cdclk; if (IS_CHERRYVIEW(dev_priv) && pipe == PIPE_B) { @@ -3416,16 +3445,18 @@ intel_sprite_plane_create(struct drm_i915_private *dev_priv, plane_funcs = &vlv_sprite_funcs; } else if (INTEL_GEN(dev_priv) >= 7) { - plane->max_stride = g4x_sprite_max_stride; plane->update_plane = ivb_update_plane; plane->disable_plane = ivb_disable_plane; plane->get_hw_state = ivb_plane_get_hw_state; plane->check_plane = g4x_sprite_check; - if (IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv)) + if (IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv)) { + plane->max_stride = hsw_sprite_max_stride; plane->min_cdclk = hsw_plane_min_cdclk; - else + } else { + plane->max_stride = g4x_sprite_max_stride; plane->min_cdclk = ivb_sprite_min_cdclk; + } formats = snb_plane_formats; num_formats = ARRAY_SIZE(snb_plane_formats); @@ -3433,11 +3464,11 @@ intel_sprite_plane_create(struct drm_i915_private *dev_priv, plane_funcs = &snb_sprite_funcs; } else { - plane->max_stride = g4x_sprite_max_stride; plane->update_plane = g4x_update_plane; plane->disable_plane = g4x_disable_plane; plane->get_hw_state = g4x_plane_get_hw_state; plane->check_plane = g4x_sprite_check; + plane->max_stride = g4x_sprite_max_stride; plane->min_cdclk = g4x_sprite_min_cdclk; modifiers = i9xx_plane_format_modifiers; diff --git a/drivers/gpu/drm/i915/display/intel_vrr.c b/drivers/gpu/drm/i915/display/intel_vrr.c new file mode 100644 index 000000000000..a9c2b2fd9252 --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_vrr.c @@ -0,0 +1,209 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2020 Intel Corporation + * + */ + +#include "i915_drv.h" +#include "intel_display_types.h" +#include "intel_vrr.h" + +bool intel_vrr_is_capable(struct drm_connector *connector) +{ + struct intel_dp *intel_dp; + const struct drm_display_info *info = &connector->display_info; + struct drm_i915_private *i915 = to_i915(connector->dev); + + if (connector->connector_type != DRM_MODE_CONNECTOR_eDP && + connector->connector_type != DRM_MODE_CONNECTOR_DisplayPort) + return false; + + intel_dp = intel_attached_dp(to_intel_connector(connector)); + /* + * The DP sink is capable of VRR video timings if the + * Ignore MSA bit is set in the DPCD. + * The EDID monitor range should also span at least 10 Hz for a + * reasonable Adaptive Sync or Variable Refresh Rate end user + * experience.
+ */ + return HAS_VRR(i915) && + drm_dp_sink_can_do_video_without_timing_msa(intel_dp->dpcd) && + info->monitor_range.max_vfreq - info->monitor_range.min_vfreq > 10; +} + +void +intel_vrr_check_modeset(struct intel_atomic_state *state) +{ + int i; + struct intel_crtc_state *old_crtc_state, *new_crtc_state; + struct intel_crtc *crtc; + + for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, + new_crtc_state, i) { + if (new_crtc_state->uapi.vrr_enabled != + old_crtc_state->uapi.vrr_enabled) + new_crtc_state->uapi.mode_changed = true; + } +} + +/* + * Without VRR registers get latched at: + * vblank_start + * + * With VRR the earliest registers can get latched is: + * intel_vrr_vmin_vblank_start(), which if we want to maintain + * the correct min vtotal is >=vblank_start+1 + * + * The latest point registers can get latched is the vmax decision boundary: + * intel_vrr_vmax_vblank_start() + * + * Between those two points the vblank exit starts (and hence registers get + * latched) ASAP after a push is sent. + * + * framestart_delay is programmable 0-3. + */ +static int intel_vrr_vblank_exit_length(const struct intel_crtc_state *crtc_state) +{ + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); + struct drm_i915_private *i915 = to_i915(crtc->base.dev); + + /* The hw imposes the extra scanline before frame start */ + return crtc_state->vrr.pipeline_full + i915->framestart_delay + 1; +} + +int intel_vrr_vmin_vblank_start(const struct intel_crtc_state *crtc_state) +{ + /* Min vblank actually determined by flipline that is always >=vmin+1 */ + return crtc_state->vrr.vmin + 1 - intel_vrr_vblank_exit_length(crtc_state); +} + +int intel_vrr_vmax_vblank_start(const struct intel_crtc_state *crtc_state) +{ + return crtc_state->vrr.vmax - intel_vrr_vblank_exit_length(crtc_state); +} + +void +intel_vrr_compute_config(struct intel_crtc_state *crtc_state, + struct drm_connector_state *conn_state) +{ + struct intel_connector *connector = + to_intel_connector(conn_state->connector); + struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode; + const struct drm_display_info *info = &connector->base.display_info; + int vmin, vmax; + + if (!intel_vrr_is_capable(&connector->base)) + return; + + if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) + return; + + if (!crtc_state->uapi.vrr_enabled) + return; + + vmin = DIV_ROUND_UP(adjusted_mode->crtc_clock * 1000, + adjusted_mode->crtc_htotal * info->monitor_range.max_vfreq); + vmax = adjusted_mode->crtc_clock * 1000 / + (adjusted_mode->crtc_htotal * info->monitor_range.min_vfreq); + + vmin = max_t(int, vmin, adjusted_mode->crtc_vtotal); + vmax = max_t(int, vmax, adjusted_mode->crtc_vtotal); + + if (vmin >= vmax) + return; + + /* + * flipline determines the min vblank length the hardware will + * generate, and flipline>=vmin+1, hence we reduce vmin by one + * to make sure we can get the actual min vblank length. + */ + crtc_state->vrr.vmin = vmin - 1; + crtc_state->vrr.vmax = vmax; + crtc_state->vrr.enable = true; + + crtc_state->vrr.flipline = crtc_state->vrr.vmin + 1; + + /* + * FIXME: s/4/framestart_delay+1/ to get consistent + * earliest/latest points for register latching regardless + * of the framestart_delay used? + * + * FIXME: this really needs the extra scanline to provide consistent + * behaviour for all framestart_delay values. Otherwise with + * framestart_delay==3 we will end up extending the min vblank by + * one extra line. 
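The vmin/vmax derivation in intel_vrr_compute_config() above is easy to replay in plain C. The formulas are the driver's; the 2560x1440 mode and the 48-144 Hz monitor range are made up for the example:

/* cc -o vrr vrr.c && ./vrr */
#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	long long crtc_clock = 597312;  /* pixel clock in kHz */
	long long htotal = 2720, vtotal = 1525;
	long long min_vfreq = 48, max_vfreq = 144;

	long long vmin = DIV_ROUND_UP(crtc_clock * 1000, htotal * max_vfreq);
	long long vmax = crtc_clock * 1000 / (htotal * min_vfreq);

	/* Never shrink vblank below the fixed timings' vtotal. */
	if (vmin < vtotal) vmin = vtotal;
	if (vmax < vtotal) vmax = vtotal;

	/*
	 * The driver bails out unless vmin < vmax; and since
	 * flipline >= vmin + 1, vrr.vmin is programmed as vmin - 1.
	 */
	printf("vmin %lld vmax %lld -> vrr.vmin %lld flipline %lld\n",
	       vmin, vmax, vmin - 1, vmin);
	return 0;
}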
+ */ + crtc_state->vrr.pipeline_full = + min(255, crtc_state->vrr.vmin - adjusted_mode->crtc_vdisplay - 4 - 1); + + crtc_state->mode_flags |= I915_MODE_FLAG_VRR; +} + +void intel_vrr_enable(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; + u32 trans_vrr_ctl; + + if (!crtc_state->vrr.enable) + return; + + trans_vrr_ctl = VRR_CTL_VRR_ENABLE | + VRR_CTL_IGN_MAX_SHIFT | VRR_CTL_FLIP_LINE_EN | + VRR_CTL_PIPELINE_FULL(crtc_state->vrr.pipeline_full) | + VRR_CTL_PIPELINE_FULL_OVERRIDE; + + intel_de_write(dev_priv, TRANS_VRR_VMIN(cpu_transcoder), crtc_state->vrr.vmin - 1); + intel_de_write(dev_priv, TRANS_VRR_VMAX(cpu_transcoder), crtc_state->vrr.vmax - 1); + intel_de_write(dev_priv, TRANS_VRR_CTL(cpu_transcoder), trans_vrr_ctl); + intel_de_write(dev_priv, TRANS_VRR_FLIPLINE(cpu_transcoder), crtc_state->vrr.flipline - 1); + intel_de_write(dev_priv, TRANS_PUSH(cpu_transcoder), TRANS_PUSH_EN); +} + +void intel_vrr_send_push(const struct intel_crtc_state *crtc_state) +{ + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); + enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; + + if (!crtc_state->vrr.enable) + return; + + intel_de_write(dev_priv, TRANS_PUSH(cpu_transcoder), + TRANS_PUSH_EN | TRANS_PUSH_SEND); +} + +void intel_vrr_disable(const struct intel_crtc_state *old_crtc_state) +{ + struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc); + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); + enum transcoder cpu_transcoder = old_crtc_state->cpu_transcoder; + + if (!old_crtc_state->vrr.enable) + return; + + intel_de_write(dev_priv, TRANS_VRR_CTL(cpu_transcoder), 0); + intel_de_write(dev_priv, TRANS_PUSH(cpu_transcoder), 0); +} + +void intel_vrr_get_config(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state) +{ + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); + enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; + u32 trans_vrr_ctl; + + trans_vrr_ctl = intel_de_read(dev_priv, TRANS_VRR_CTL(cpu_transcoder)); + crtc_state->vrr.enable = trans_vrr_ctl & VRR_CTL_VRR_ENABLE; + if (!crtc_state->vrr.enable) + return; + + if (trans_vrr_ctl & VRR_CTL_PIPELINE_FULL_OVERRIDE) + crtc_state->vrr.pipeline_full = REG_FIELD_GET(VRR_CTL_PIPELINE_FULL_MASK, trans_vrr_ctl); + if (trans_vrr_ctl & VRR_CTL_FLIP_LINE_EN) + crtc_state->vrr.flipline = intel_de_read(dev_priv, TRANS_VRR_FLIPLINE(cpu_transcoder)) + 1; + crtc_state->vrr.vmax = intel_de_read(dev_priv, TRANS_VRR_VMAX(cpu_transcoder)) + 1; + crtc_state->vrr.vmin = intel_de_read(dev_priv, TRANS_VRR_VMIN(cpu_transcoder)) + 1; + + crtc_state->mode_flags |= I915_MODE_FLAG_VRR; +} diff --git a/drivers/gpu/drm/i915/display/intel_vrr.h b/drivers/gpu/drm/i915/display/intel_vrr.h new file mode 100644 index 000000000000..fac01bf4ab50 --- /dev/null +++ b/drivers/gpu/drm/i915/display/intel_vrr.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2019 Intel Corporation + */ + +#ifndef __INTEL_VRR_H__ +#define __INTEL_VRR_H__ + +#include <linux/types.h> + +struct drm_connector; +struct drm_connector_state; +struct intel_atomic_state; +struct intel_crtc; +struct intel_crtc_state; +struct intel_dp; +struct intel_encoder; +struct intel_crtc; + +bool intel_vrr_is_capable(struct drm_connector *connector); +void intel_vrr_check_modeset(struct intel_atomic_state 
*state); +void intel_vrr_compute_config(struct intel_crtc_state *crtc_state, + struct drm_connector_state *conn_state); +void intel_vrr_enable(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state); +void intel_vrr_send_push(const struct intel_crtc_state *crtc_state); +void intel_vrr_disable(const struct intel_crtc_state *old_crtc_state); +void intel_vrr_get_config(struct intel_crtc *crtc, + struct intel_crtc_state *crtc_state); +int intel_vrr_vmax_vblank_start(const struct intel_crtc_state *crtc_state); +int intel_vrr_vmin_vblank_start(const struct intel_crtc_state *crtc_state); + +#endif /* __INTEL_VRR_H__ */ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c index 68f58762d5e3..4d2f40cf237b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c @@ -408,7 +408,7 @@ __active_engine(struct i915_request *rq, struct intel_engine_cs **active) } if (i915_request_is_active(rq)) { - if (!i915_request_completed(rq)) + if (!__i915_request_is_complete(rq)) *active = locked; ret = true; } @@ -717,7 +717,8 @@ err_free: } static inline struct i915_gem_engines * -__context_engines_await(const struct i915_gem_context *ctx) +__context_engines_await(const struct i915_gem_context *ctx, + bool *user_engines) { struct i915_gem_engines *engines; @@ -726,6 +727,10 @@ __context_engines_await(const struct i915_gem_context *ctx) engines = rcu_dereference(ctx->engines); GEM_BUG_ON(!engines); + if (user_engines) + *user_engines = i915_gem_context_user_engines(ctx); + + /* successful await => strong mb */ if (unlikely(!i915_sw_fence_await(&engines->fence))) continue; @@ -749,7 +754,7 @@ context_apply_all(struct i915_gem_context *ctx, struct intel_context *ce; int err = 0; - e = __context_engines_await(ctx); + e = __context_engines_await(ctx, NULL); for_each_gem_engine(ce, e, it) { err = fn(ce, data); if (err) @@ -1075,7 +1080,7 @@ static int context_barrier_task(struct i915_gem_context *ctx, return err; } - e = __context_engines_await(ctx); + e = __context_engines_await(ctx, NULL); if (!e) { i915_active_release(&cb->base); return -ENOENT; @@ -1838,27 +1843,6 @@ replace: return 0; } -static struct i915_gem_engines * -__copy_engines(struct i915_gem_engines *e) -{ - struct i915_gem_engines *copy; - unsigned int n; - - copy = alloc_engines(e->num_engines); - if (!copy) - return ERR_PTR(-ENOMEM); - - for (n = 0; n < e->num_engines; n++) { - if (e->engines[n]) - copy->engines[n] = intel_context_get(e->engines[n]); - else - copy->engines[n] = NULL; - } - copy->num_engines = n; - - return copy; -} - static int get_engines(struct i915_gem_context *ctx, struct drm_i915_gem_context_param *args) @@ -1866,19 +1850,17 @@ get_engines(struct i915_gem_context *ctx, struct i915_context_param_engines __user *user; struct i915_gem_engines *e; size_t n, count, size; + bool user_engines; int err = 0; - err = mutex_lock_interruptible(&ctx->engines_mutex); - if (err) - return err; + e = __context_engines_await(ctx, &user_engines); + if (!e) + return -ENOENT; - e = NULL; - if (i915_gem_context_user_engines(ctx)) - e = __copy_engines(i915_gem_context_engines(ctx)); - mutex_unlock(&ctx->engines_mutex); - if (IS_ERR_OR_NULL(e)) { + if (!user_engines) { + i915_sw_fence_complete(&e->fence); args->size = 0; - return PTR_ERR_OR_ZERO(e); + return 0; } count = e->num_engines; @@ -1929,7 +1911,7 @@ get_engines(struct i915_gem_context *ctx, args->size = size; err_free: - free_engines(e); + i915_sw_fence_complete(&e->fence); 
return err; } @@ -2095,11 +2077,14 @@ static int copy_ring_size(struct intel_context *dst, static int clone_engines(struct i915_gem_context *dst, struct i915_gem_context *src) { - struct i915_gem_engines *e = i915_gem_context_lock_engines(src); - struct i915_gem_engines *clone; + struct i915_gem_engines *clone, *e; bool user_engines; unsigned long n; + e = __context_engines_await(src, &user_engines); + if (!e) + return -ENOENT; + clone = alloc_engines(e->num_engines); if (!clone) goto err_unlock; @@ -2141,9 +2126,7 @@ static int clone_engines(struct i915_gem_context *dst, } } clone->num_engines = n; - - user_engines = i915_gem_context_user_engines(src); - i915_gem_context_unlock_engines(src); + i915_sw_fence_complete(&e->fence); /* Serialised by constructor */ engines_idle_release(dst, rcu_replace_pointer(dst->engines, clone, 1)); @@ -2154,7 +2137,7 @@ static int clone_engines(struct i915_gem_context *dst, return 0; err_unlock: - i915_gem_context_unlock_engines(src); + i915_sw_fence_complete(&e->fence); return -ENOMEM; } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c new file mode 100644 index 000000000000..45d60e3d98e3 --- /dev/null +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c @@ -0,0 +1,113 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2020 Intel Corporation + */ + +#include "gem/i915_gem_ioctls.h" +#include "gem/i915_gem_region.h" + +#include "i915_drv.h" + +static int +i915_gem_create(struct drm_file *file, + struct intel_memory_region *mr, + u64 *size_p, + u32 *handle_p) +{ + struct drm_i915_gem_object *obj; + u32 handle; + u64 size; + int ret; + + GEM_BUG_ON(!is_power_of_2(mr->min_page_size)); + size = round_up(*size_p, mr->min_page_size); + if (size == 0) + return -EINVAL; + + /* For most of the ABI (e.g. mmap) we think in system pages */ + GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE)); + + /* Allocate the new object */ + obj = i915_gem_object_create_region(mr, size, 0); + if (IS_ERR(obj)) + return PTR_ERR(obj); + + GEM_BUG_ON(size != obj->base.size); + + ret = drm_gem_handle_create(file, &obj->base, &handle); + /* drop reference from allocate - handle holds it now */ + i915_gem_object_put(obj); + if (ret) + return ret; + + *handle_p = handle; + *size_p = size; + return 0; +} + +int +i915_gem_dumb_create(struct drm_file *file, + struct drm_device *dev, + struct drm_mode_create_dumb *args) +{ + enum intel_memory_type mem_type; + int cpp = DIV_ROUND_UP(args->bpp, 8); + u32 format; + + switch (cpp) { + case 1: + format = DRM_FORMAT_C8; + break; + case 2: + format = DRM_FORMAT_RGB565; + break; + case 4: + format = DRM_FORMAT_XRGB8888; + break; + default: + return -EINVAL; + } + + /* have to work out size/pitch and return them */ + args->pitch = ALIGN(args->width * cpp, 64); + + /* align stride to page size so that we can remap */ + if (args->pitch > intel_plane_fb_max_stride(to_i915(dev), format, + DRM_FORMAT_MOD_LINEAR)) + args->pitch = ALIGN(args->pitch, 4096); + + if (args->pitch < args->width) + return -EINVAL; + + args->size = mul_u32_u32(args->pitch, args->height); + + mem_type = INTEL_MEMORY_SYSTEM; + if (HAS_LMEM(to_i915(dev))) + mem_type = INTEL_MEMORY_LOCAL; + + return i915_gem_create(file, + intel_memory_region_by_type(to_i915(dev), + mem_type), + &args->size, &args->handle); +} + +/** + * Creates a new mm object and returns a handle to it. 
+ * @dev: drm device pointer + * @data: ioctl data blob + * @file: drm file pointer + */ +int +i915_gem_create_ioctl(struct drm_device *dev, void *data, + struct drm_file *file) +{ + struct drm_i915_private *i915 = to_i915(dev); + struct drm_i915_gem_create *args = data; + + i915_gem_flush_free_objects(i915); + + return i915_gem_create(file, + intel_memory_region_by_type(i915, + INTEL_MEMORY_SYSTEM), + &args->size, &args->handle); +} diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c index fcce6909f201..36f54cedaaeb 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c @@ -5,6 +5,7 @@ */ #include "display/intel_frontbuffer.h" +#include "gt/intel_gt.h" #include "i915_drv.h" #include "i915_gem_clflush.h" @@ -15,13 +16,58 @@ #include "i915_gem_lmem.h" #include "i915_gem_mman.h" +static bool gpu_write_needs_clflush(struct drm_i915_gem_object *obj) +{ + return !(obj->cache_level == I915_CACHE_NONE || + obj->cache_level == I915_CACHE_WT); +} + +static void +flush_write_domain(struct drm_i915_gem_object *obj, unsigned int flush_domains) +{ + struct i915_vma *vma; + + assert_object_held(obj); + + if (!(obj->write_domain & flush_domains)) + return; + + switch (obj->write_domain) { + case I915_GEM_DOMAIN_GTT: + spin_lock(&obj->vma.lock); + for_each_ggtt_vma(vma, obj) { + if (i915_vma_unset_ggtt_write(vma)) + intel_gt_flush_ggtt_writes(vma->vm->gt); + } + spin_unlock(&obj->vma.lock); + + i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU); + break; + + case I915_GEM_DOMAIN_WC: + wmb(); + break; + + case I915_GEM_DOMAIN_CPU: + i915_gem_clflush_object(obj, I915_CLFLUSH_SYNC); + break; + + case I915_GEM_DOMAIN_RENDER: + if (gpu_write_needs_clflush(obj)) + obj->cache_dirty = true; + break; + } + + obj->write_domain = 0; +} + static void __i915_gem_object_flush_for_display(struct drm_i915_gem_object *obj) { /* * We manually flush the CPU domain so that we can override and * force the flush for the display, and perform it asynchronously.
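Looping back to i915_gem_dumb_create() above: the pitch/size arithmetic stands on its own and is worth checking in isolation. A hypothetical sketch with a made-up 1920x1080 XRGB8888 mode (the 64-byte pitch alignment mirrors the code; the 4096-byte remap fallback is omitted):

/* cc -o dumb dumb.c && ./dumb */
#include <stdio.h>
#include <stdint.h>

#define ALIGN(x, a) (((x) + (a) - 1) & ~((uint64_t)(a) - 1))

int main(void)
{
	uint32_t width = 1920, height = 1080, bpp = 32;
	uint32_t cpp = (bpp + 7) / 8;            /* DIV_ROUND_UP(bpp, 8) */

	uint64_t pitch = ALIGN((uint64_t)width * cpp, 64);
	uint64_t size = pitch * height;

	printf("cpp %u pitch %llu size %llu bytes (%llu pages)\n",
	       cpp, (unsigned long long)pitch, (unsigned long long)size,
	       (unsigned long long)(ALIGN(size, 4096) / 4096));
	return 0;
}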
*/ - i915_gem_object_flush_write_domain(obj, ~I915_GEM_DOMAIN_CPU); + flush_write_domain(obj, ~I915_GEM_DOMAIN_CPU); if (obj->cache_dirty) i915_gem_clflush_object(obj, I915_CLFLUSH_FORCE); obj->write_domain = 0; @@ -80,7 +126,7 @@ i915_gem_object_set_to_wc_domain(struct drm_i915_gem_object *obj, bool write) if (ret) return ret; - i915_gem_object_flush_write_domain(obj, ~I915_GEM_DOMAIN_WC); + flush_write_domain(obj, ~I915_GEM_DOMAIN_WC); /* Serialise direct access to this object with the barriers for * coherent writes from the GPU, by effectively invalidating the @@ -141,7 +187,7 @@ i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, bool write) if (ret) return ret; - i915_gem_object_flush_write_domain(obj, ~I915_GEM_DOMAIN_GTT); + flush_write_domain(obj, ~I915_GEM_DOMAIN_GTT); /* Serialise direct access to this object with the barriers for * coherent writes from the GPU, by effectively invalidating the @@ -370,6 +416,7 @@ retry: } vma->display_alignment = max_t(u64, vma->display_alignment, alignment); + i915_vma_mark_scanout(vma); i915_gem_object_flush_if_display_locked(obj); @@ -387,48 +434,6 @@ err: return vma; } -static void i915_gem_object_bump_inactive_ggtt(struct drm_i915_gem_object *obj) -{ - struct drm_i915_private *i915 = to_i915(obj->base.dev); - struct i915_vma *vma; - - if (list_empty(&obj->vma.list)) - return; - - mutex_lock(&i915->ggtt.vm.mutex); - spin_lock(&obj->vma.lock); - for_each_ggtt_vma(vma, obj) { - if (!drm_mm_node_allocated(&vma->node)) - continue; - - GEM_BUG_ON(vma->vm != &i915->ggtt.vm); - list_move_tail(&vma->vm_link, &vma->vm->bound_list); - } - spin_unlock(&obj->vma.lock); - mutex_unlock(&i915->ggtt.vm.mutex); - - if (i915_gem_object_is_shrinkable(obj)) { - unsigned long flags; - - spin_lock_irqsave(&i915->mm.obj_lock, flags); - - if (obj->mm.madv == I915_MADV_WILLNEED && - !atomic_read(&obj->mm.shrink_pin)) - list_move_tail(&obj->mm.link, &i915->mm.shrink_list); - - spin_unlock_irqrestore(&i915->mm.obj_lock, flags); - } -} - -void -i915_gem_object_unpin_from_display_plane(struct i915_vma *vma) -{ - /* Bump the LRU to try and avoid premature eviction whilst flipping */ - i915_gem_object_bump_inactive_ggtt(vma->obj); - - i915_vma_unpin(vma); -} - /** * Moves a single object to the CPU read, and possibly write domain. * @obj: object to act on @@ -451,7 +456,7 @@ i915_gem_object_set_to_cpu_domain(struct drm_i915_gem_object *obj, bool write) if (ret) return ret; - i915_gem_object_flush_write_domain(obj, ~I915_GEM_DOMAIN_CPU); + flush_write_domain(obj, ~I915_GEM_DOMAIN_CPU); /* Flush the CPU cache if it's still invalid. */ if ((obj->read_domains & I915_GEM_DOMAIN_CPU) == 0) { @@ -569,9 +574,6 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data, else err = i915_gem_object_set_to_cpu_domain(obj, write_domain); - /* And bump the LRU for this access */ - i915_gem_object_bump_inactive_ggtt(obj); - i915_gem_object_unlock(obj); if (write_domain) @@ -619,7 +621,7 @@ int i915_gem_object_prepare_read(struct drm_i915_gem_object *obj, goto out; } - i915_gem_object_flush_write_domain(obj, ~I915_GEM_DOMAIN_CPU); + flush_write_domain(obj, ~I915_GEM_DOMAIN_CPU); /* If we're not in the cpu read domain, set ourself into the gtt * read domain and manually flush cachelines (if required). 
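The flush_write_domain() dispatch this file now owns is a small state machine: exactly one write domain is dirty at a time, and each has its own flush action. A hypothetical user-space model, with the actions reduced to prints:

#include <stdio.h>

enum domain { DOM_NONE, DOM_GTT, DOM_WC, DOM_CPU, DOM_RENDER };

static void flush(enum domain *wd, int cache_coherent)
{
	switch (*wd) {
	case DOM_GTT:  puts("flush GGTT writes, then frontbuffer"); break;
	case DOM_WC:   puts("wmb(): order the WC writes");          break;
	case DOM_CPU:  puts("clflush the object");                  break;
	case DOM_RENDER:
		if (!cache_coherent) /* gpu_write_needs_clflush() */
			puts("mark the object cache-dirty");
		break;
	default:
		break;
	}
	*wd = DOM_NONE; /* no longer dirty in any domain */
}

int main(void)
{
	enum domain wd = DOM_RENDER;

	flush(&wd, 0); /* uncached/WT object: flagged for a later clflush */
	return 0;
}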
This @@ -670,7 +672,7 @@ int i915_gem_object_prepare_write(struct drm_i915_gem_object *obj, goto out; } - i915_gem_object_flush_write_domain(obj, ~I915_GEM_DOMAIN_CPU); + flush_write_domain(obj, ~I915_GEM_DOMAIN_CPU); /* If we're not in the cpu write domain, set ourself into the * gtt write domain and manually flush cachelines (as required). diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index b91b32195dcf..d70ca36f74f6 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -1276,7 +1276,10 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb, int err; if (!pool) { - pool = intel_gt_get_buffer_pool(engine->gt, PAGE_SIZE); + pool = intel_gt_get_buffer_pool(engine->gt, PAGE_SIZE, + cache->has_llc ? + I915_MAP_WB : + I915_MAP_WC); if (IS_ERR(pool)) return PTR_ERR(pool); } @@ -1286,10 +1289,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb, if (err) goto err_pool; - cmd = i915_gem_object_pin_map(pool->obj, - cache->has_llc ? - I915_MAP_FORCE_WB : - I915_MAP_FORCE_WC); + cmd = i915_gem_object_pin_map(pool->obj, pool->type); if (IS_ERR(cmd)) { err = PTR_ERR(cmd); goto err_pool; @@ -2458,7 +2458,8 @@ static int eb_parse(struct i915_execbuffer *eb) return -EINVAL; if (!pool) { - pool = intel_gt_get_buffer_pool(eb->engine->gt, len); + pool = intel_gt_get_buffer_pool(eb->engine->gt, len, + I915_MAP_WB); if (IS_ERR(pool)) return PTR_ERR(pool); eb->batch_pool = pool; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c index 932ee21e6609..194f35342710 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c @@ -31,18 +31,13 @@ i915_gem_object_create_lmem(struct drm_i915_private *i915, size, flags); } -struct drm_i915_gem_object * -__i915_gem_lmem_object_create(struct intel_memory_region *mem, - resource_size_t size, - unsigned int flags) +int __i915_gem_lmem_object_init(struct intel_memory_region *mem, + struct drm_i915_gem_object *obj, + resource_size_t size, + unsigned int flags) { static struct lock_class_key lock_class; struct drm_i915_private *i915 = mem->i915; - struct drm_i915_gem_object *obj; - - obj = i915_gem_object_alloc(); - if (!obj) - return ERR_PTR(-ENOMEM); drm_gem_private_object_init(&i915->drm, &obj->base, size); i915_gem_object_init(obj, &i915_gem_lmem_obj_ops, &lock_class); @@ -53,5 +48,5 @@ __i915_gem_lmem_object_create(struct intel_memory_region *mem, i915_gem_object_init_memory_region(obj, mem, flags); - return obj; + return 0; } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h index fc3f15580fe3..036d53c01de9 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h @@ -21,9 +21,9 @@ i915_gem_object_create_lmem(struct drm_i915_private *i915, resource_size_t size, unsigned int flags); -struct drm_i915_gem_object * -__i915_gem_lmem_object_create(struct intel_memory_region *mem, - resource_size_t size, - unsigned int flags); +int __i915_gem_lmem_object_init(struct intel_memory_region *mem, + struct drm_i915_gem_object *obj, + resource_size_t size, + unsigned int flags); #endif /* !__I915_GEM_LMEM_H */ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index 00d24000b5e8..70f798405f7f 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -25,13 +25,13 @@ #include 
<linux/sched/mm.h> #include "display/intel_frontbuffer.h" -#include "gt/intel_gt.h" #include "i915_drv.h" #include "i915_gem_clflush.h" #include "i915_gem_context.h" #include "i915_gem_mman.h" #include "i915_gem_object.h" #include "i915_globals.h" +#include "i915_memcpy.h" #include "i915_trace.h" static struct i915_global_object { @@ -313,52 +313,6 @@ static void i915_gem_free_object(struct drm_gem_object *gem_obj) queue_work(i915->wq, &i915->mm.free_work); } -static bool gpu_write_needs_clflush(struct drm_i915_gem_object *obj) -{ - return !(obj->cache_level == I915_CACHE_NONE || - obj->cache_level == I915_CACHE_WT); -} - -void -i915_gem_object_flush_write_domain(struct drm_i915_gem_object *obj, - unsigned int flush_domains) -{ - struct i915_vma *vma; - - assert_object_held(obj); - - if (!(obj->write_domain & flush_domains)) - return; - - switch (obj->write_domain) { - case I915_GEM_DOMAIN_GTT: - spin_lock(&obj->vma.lock); - for_each_ggtt_vma(vma, obj) { - if (i915_vma_unset_ggtt_write(vma)) - intel_gt_flush_ggtt_writes(vma->vm->gt); - } - spin_unlock(&obj->vma.lock); - - i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU); - break; - - case I915_GEM_DOMAIN_WC: - wmb(); - break; - - case I915_GEM_DOMAIN_CPU: - i915_gem_clflush_object(obj, I915_CLFLUSH_SYNC); - break; - - case I915_GEM_DOMAIN_RENDER: - if (gpu_write_needs_clflush(obj)) - obj->cache_dirty = true; - break; - } - - obj->write_domain = 0; -} - void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj, enum fb_op_origin origin) { @@ -383,6 +337,70 @@ void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj, } } +static void +i915_gem_object_read_from_page_kmap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size) +{ + void *src_map; + void *src_ptr; + + src_map = kmap_atomic(i915_gem_object_get_page(obj, offset >> PAGE_SHIFT)); + + src_ptr = src_map + offset_in_page(offset); + if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ)) + drm_clflush_virt_range(src_ptr, size); + memcpy(dst, src_ptr, size); + + kunmap_atomic(src_map); +} + +static void +i915_gem_object_read_from_page_iomap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size) +{ + void __iomem *src_map; + void __iomem *src_ptr; + dma_addr_t dma = i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT); + + src_map = io_mapping_map_wc(&obj->mm.region->iomap, + dma - obj->mm.region->region.start, + PAGE_SIZE); + + src_ptr = src_map + offset_in_page(offset); + if (!i915_memcpy_from_wc(dst, (void __force *)src_ptr, size)) + memcpy_fromio(dst, src_ptr, size); + + io_mapping_unmap(src_map); +} + +/** + * i915_gem_object_read_from_page - read data from the page of a GEM object + * @obj: GEM object to read from + * @offset: offset within the object + * @dst: buffer to store the read data + * @size: size to read + * + * Reads data from @obj at the specified offset. The requested region to read + * from can't cross a page boundary. The caller must ensure that @obj pages + * are pinned and that @obj is synced wrt. any related writes. + * + * Returns 0 on success or -ENODEV if the type of @obj's backing store is + * unsupported. 
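A hypothetical caller honouring that contract (pages pinned, no page-boundary crossing, object already synced against writers) might look like this, reusing the existing pin helpers:

static int example_peek_u64(struct drm_i915_gem_object *obj,
			    u64 offset, u64 *out)
{
	int err;

	err = i915_gem_object_pin_pages(obj);
	if (err)
		return err;

	/* 8-byte read: offset_in_page(offset) must be <= PAGE_SIZE - 8 */
	err = i915_gem_object_read_from_page(obj, offset, out, sizeof(*out));

	i915_gem_object_unpin_pages(obj);
	return err;
}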
+ */ +int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size) +{ + GEM_BUG_ON(offset >= obj->base.size); + GEM_BUG_ON(offset_in_page(offset) > PAGE_SIZE - size); + GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj)); + + if (i915_gem_object_has_struct_page(obj)) + i915_gem_object_read_from_page_kmap(obj, offset, dst, size); + else if (i915_gem_object_has_iomem(obj)) + i915_gem_object_read_from_page_iomap(obj, offset, dst, size); + else + return -ENODEV; + + return 0; +} + void i915_gem_init__objects(struct drm_i915_private *i915) { INIT_WORK(&i915->mm.free_work, __i915_gem_free_work); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index be14486f63a7..d0ae834d787a 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -188,6 +188,24 @@ i915_gem_object_set_volatile(struct drm_i915_gem_object *obj) } static inline bool +i915_gem_object_has_tiling_quirk(struct drm_i915_gem_object *obj) +{ + return test_bit(I915_TILING_QUIRK_BIT, &obj->flags); +} + +static inline void +i915_gem_object_set_tiling_quirk(struct drm_i915_gem_object *obj) +{ + set_bit(I915_TILING_QUIRK_BIT, &obj->flags); +} + +static inline void +i915_gem_object_clear_tiling_quirk(struct drm_i915_gem_object *obj) +{ + clear_bit(I915_TILING_QUIRK_BIT, &obj->flags); +} + +static inline bool i915_gem_object_type_has(const struct drm_i915_gem_object *obj, unsigned long flags) { @@ -201,6 +219,12 @@ i915_gem_object_has_struct_page(const struct drm_i915_gem_object *obj) } static inline bool +i915_gem_object_has_iomem(const struct drm_i915_gem_object *obj) +{ + return i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_IOMEM); +} + +static inline bool i915_gem_object_is_shrinkable(const struct drm_i915_gem_object *obj) { return i915_gem_object_type_has(obj, I915_GEM_OBJECT_IS_SHRINKABLE); @@ -384,14 +408,6 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj); void i915_gem_object_truncate(struct drm_i915_gem_object *obj); void i915_gem_object_writeback(struct drm_i915_gem_object *obj); -enum i915_map_type { - I915_MAP_WB = 0, - I915_MAP_WC, -#define I915_MAP_OVERRIDE BIT(31) - I915_MAP_FORCE_WB = I915_MAP_WB | I915_MAP_OVERRIDE, - I915_MAP_FORCE_WC = I915_MAP_WC | I915_MAP_OVERRIDE, -}; - /** * i915_gem_object_pin_map - return a contiguous mapping of the entire object * @obj: the object to map into kernel address space @@ -435,10 +451,6 @@ static inline void i915_gem_object_unpin_map(struct drm_i915_gem_object *obj) void __i915_gem_object_release_map(struct drm_i915_gem_object *obj); -void -i915_gem_object_flush_write_domain(struct drm_i915_gem_object *obj, - unsigned int flush_domains); - int i915_gem_object_prepare_read(struct drm_i915_gem_object *obj, unsigned int *needs_clflush); int i915_gem_object_prepare_write(struct drm_i915_gem_object *obj, @@ -486,7 +498,6 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj, u32 alignment, const struct i915_ggtt_view *view, unsigned int flags); -void i915_gem_object_unpin_from_display_plane(struct i915_vma *vma); void i915_gem_object_make_unshrinkable(struct drm_i915_gem_object *obj); void i915_gem_object_make_shrinkable(struct drm_i915_gem_object *obj); @@ -512,6 +523,9 @@ static inline void __start_cpu_write(struct drm_i915_gem_object *obj) obj->cache_dirty = true; } +void i915_gem_fence_wait_priority(struct dma_fence *fence, + const struct i915_sched_attr *attr); + int i915_gem_object_wait(struct 
drm_i915_gem_object *obj, unsigned int flags, long timeout); @@ -540,4 +554,8 @@ i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj, __i915_gem_object_invalidate_frontbuffer(obj, origin); } +int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size); + +bool i915_gem_object_is_shmem(const struct drm_i915_gem_object *obj); + #endif diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c index 10cac9fac79b..d6dac21fce0b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c @@ -35,7 +35,7 @@ struct i915_vma *intel_emit_vma_fill_blt(struct intel_context *ce, count = div_u64(round_up(vma->size, block_size), block_size); size = (1 + 8 * count) * sizeof(u32); size = round_up(size, PAGE_SIZE); - pool = intel_gt_get_buffer_pool(ce->engine->gt, size); + pool = intel_gt_get_buffer_pool(ce->engine->gt, size, I915_MAP_WC); if (IS_ERR(pool)) { err = PTR_ERR(pool); goto out_pm; @@ -55,7 +55,7 @@ struct i915_vma *intel_emit_vma_fill_blt(struct intel_context *ce, if (unlikely(err)) goto out_put; - cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC); + cmd = i915_gem_object_pin_map(pool->obj, pool->type); if (IS_ERR(cmd)) { err = PTR_ERR(cmd); goto out_unpin; @@ -257,7 +257,7 @@ struct i915_vma *intel_emit_vma_copy_blt(struct intel_context *ce, count = div_u64(round_up(dst->size, block_size), block_size); size = (1 + 11 * count) * sizeof(u32); size = round_up(size, PAGE_SIZE); - pool = intel_gt_get_buffer_pool(ce->engine->gt, size); + pool = intel_gt_get_buffer_pool(ce->engine->gt, size, I915_MAP_WC); if (IS_ERR(pool)) { err = PTR_ERR(pool); goto out_pm; @@ -277,7 +277,7 @@ struct i915_vma *intel_emit_vma_copy_blt(struct intel_context *ce, if (unlikely(err)) goto out_put; - cmd = i915_gem_object_pin_map(pool->obj, I915_MAP_WC); + cmd = i915_gem_object_pin_map(pool->obj, pool->type); if (IS_ERR(cmd)) { err = PTR_ERR(cmd); goto out_unpin; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index e2d9b7e1e152..0438e00d4ca7 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -67,6 +67,14 @@ struct drm_i915_gem_object_ops { const char *name; /* friendly name for debug, e.g. lockdep classes */ }; +enum i915_map_type { + I915_MAP_WB = 0, + I915_MAP_WC, +#define I915_MAP_OVERRIDE BIT(31) + I915_MAP_FORCE_WB = I915_MAP_WB | I915_MAP_OVERRIDE, + I915_MAP_FORCE_WC = I915_MAP_WC | I915_MAP_OVERRIDE, +}; + enum i915_mmap_type { I915_MMAP_TYPE_GTT = 0, I915_MMAP_TYPE_WC, @@ -142,8 +150,6 @@ struct drm_i915_gem_object { */ struct list_head obj_link; - /** Stolen memory for this object, instead of being backed by shmem. */ - struct drm_mm_node *stolen; union { struct rcu_head rcu; struct llist_node freed; @@ -167,6 +173,7 @@ struct drm_i915_gem_object { #define I915_BO_ALLOC_VOLATILE BIT(1) #define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | I915_BO_ALLOC_VOLATILE) #define I915_BO_READONLY BIT(2) +#define I915_TILING_QUIRK_BIT 3 /* unknown swizzling; do not release! */ /* * Is the object to be mapped as read-only to the GPU @@ -275,12 +282,6 @@ struct drm_i915_gem_object { * pages were last acquired. */ bool dirty:1; - - /** - * This is set if the object has been pinned due to unknown - * swizzling. - */ - bool quirked:1; } mm; /** Record of address bit 17 of each page at last unbind. 
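 * (On platforms that swizzle physical address bit 17, this record lets the
 * driver re-swizzle a page's contents if the page comes back at a different
 * bit-17 address on rebind.)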
*/ @@ -295,6 +296,8 @@ struct drm_i915_gem_object { struct work_struct *work; } userptr; + struct drm_mm_node *stolen; + unsigned long scratch; u64 encode; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 3db3c667c486..43028f3539a6 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -16,6 +16,7 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj, { struct drm_i915_private *i915 = to_i915(obj->base.dev); unsigned long supported = INTEL_INFO(i915)->page_sizes; + bool shrinkable; int i; lockdep_assert_held(&obj->mm.lock); @@ -38,13 +39,6 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj, obj->mm.pages = pages; - if (i915_gem_object_is_tiled(obj) && - i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) { - GEM_BUG_ON(obj->mm.quirked); - __i915_gem_object_pin_pages(obj); - obj->mm.quirked = true; - } - GEM_BUG_ON(!sg_page_sizes); obj->mm.page_sizes.phys = sg_page_sizes; @@ -63,7 +57,16 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj, } GEM_BUG_ON(!HAS_PAGE_SIZES(i915, obj->mm.page_sizes.sg)); - if (i915_gem_object_is_shrinkable(obj)) { + shrinkable = i915_gem_object_is_shrinkable(obj); + + if (i915_gem_object_is_tiled(obj) && + i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) { + GEM_BUG_ON(i915_gem_object_has_tiling_quirk(obj)); + i915_gem_object_set_tiling_quirk(obj); + shrinkable = false; + } + + if (shrinkable) { struct list_head *list; unsigned long flags; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c index 3a4dfe2ef1da..3c0b157e2a35 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c @@ -213,7 +213,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align) if (obj->ops == &i915_gem_phys_ops) return 0; - if (obj->ops != &i915_gem_shmem_ops) + if (!i915_gem_object_is_shmem(obj)) return -EINVAL; err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE); @@ -227,7 +227,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align) goto err_unlock; } - if (obj->mm.quirked) { + if (i915_gem_object_has_tiling_quirk(obj)) { err = -EFAULT; goto err_unlock; } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pm.c b/drivers/gpu/drm/i915/gem/i915_gem_pm.c index 40d3e40500fa..215766cc22bf 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pm.c @@ -11,6 +11,13 @@ #include "i915_drv.h" +#if defined(CONFIG_X86) +#include <asm/smp.h> +#else +#define wbinvd_on_all_cpus() \ + pr_warn(DRIVER_NAME ": Missing cache flush in %s\n", __func__) +#endif + void i915_gem_suspend(struct drm_i915_private *i915) { GEM_TRACE("%s\n", dev_name(i915->drm.dev)); @@ -32,13 +39,6 @@ void i915_gem_suspend(struct drm_i915_private *i915) i915_gem_drain_freed_objects(i915); } -static struct drm_i915_gem_object *first_mm_object(struct list_head *list) -{ - return list_first_entry_or_null(list, - struct drm_i915_gem_object, - mm.link); -} - void i915_gem_suspend_late(struct drm_i915_private *i915) { struct drm_i915_gem_object *obj; @@ -48,6 +48,7 @@ void i915_gem_suspend_late(struct drm_i915_private *i915) NULL }, **phase; unsigned long flags; + bool flush = false; /* * Neither the BIOS, ourselves or any other kernel @@ -73,29 +74,15 @@ void i915_gem_suspend_late(struct drm_i915_private *i915) spin_lock_irqsave(&i915->mm.obj_lock, flags); for (phase = phases; *phase; phase++) { - LIST_HEAD(keep); - - while 
((obj = first_mm_object(*phase))) { - list_move_tail(&obj->mm.link, &keep); - - /* Beware the background _i915_gem_free_objects */ - if (!kref_get_unless_zero(&obj->base.refcount)) - continue; - - spin_unlock_irqrestore(&i915->mm.obj_lock, flags); - - i915_gem_object_lock(obj, NULL); - drm_WARN_ON(&i915->drm, - i915_gem_object_set_to_gtt_domain(obj, false)); - i915_gem_object_unlock(obj); - i915_gem_object_put(obj); - - spin_lock_irqsave(&i915->mm.obj_lock, flags); + list_for_each_entry(obj, *phase, mm.link) { + if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ)) + flush |= (obj->read_domains & I915_GEM_DOMAIN_CPU) == 0; + __start_cpu_write(obj); /* presume auto-hibernate */ } - - list_splice_tail(&keep, *phase); } spin_unlock_irqrestore(&i915->mm.obj_lock, flags); + if (flush) + wbinvd_on_all_cpus(); } void i915_gem_resume(struct drm_i915_private *i915) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c index 835bd01f2e5d..3e3dad22a683 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_region.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c @@ -143,6 +143,7 @@ i915_gem_object_create_region(struct intel_memory_region *mem, unsigned int flags) { struct drm_i915_gem_object *obj; + int err; /* * NB: Our use of resource_size_t for the size stems from using struct @@ -173,9 +174,18 @@ i915_gem_object_create_region(struct intel_memory_region *mem, if (overflows_type(size, obj->base.size)) return ERR_PTR(-E2BIG); - obj = mem->ops->create_object(mem, size, flags); - if (!IS_ERR(obj)) - trace_i915_gem_object_create(obj); + obj = i915_gem_object_alloc(); + if (!obj) + return ERR_PTR(-ENOMEM); + err = mem->ops->init_object(mem, obj, size, flags); + if (err) + goto err_object_free; + + trace_i915_gem_object_create(obj); return obj; + +err_object_free: + i915_gem_object_free(obj); + return ERR_PTR(err); } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 75e8b71c18b9..cf83c208688c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -464,26 +464,21 @@ static int __create_shmem(struct drm_i915_private *i915, return 0; } -static struct drm_i915_gem_object * -create_shmem(struct intel_memory_region *mem, - resource_size_t size, - unsigned int flags) +static int shmem_object_init(struct intel_memory_region *mem, + struct drm_i915_gem_object *obj, + resource_size_t size, + unsigned int flags) { static struct lock_class_key lock_class; struct drm_i915_private *i915 = mem->i915; - struct drm_i915_gem_object *obj; struct address_space *mapping; unsigned int cache_level; gfp_t mask; int ret; - obj = i915_gem_object_alloc(); - if (!obj) - return ERR_PTR(-ENOMEM); - ret = __create_shmem(i915, &obj->base, size); if (ret) - goto fail; + return ret; mask = GFP_HIGHUSER | __GFP_RECLAIMABLE; if (IS_I965GM(i915) || IS_I965G(i915)) { @@ -522,11 +517,7 @@ create_shmem(struct intel_memory_region *mem, i915_gem_object_init_memory_region(obj, mem, 0); - return obj; - -fail: - i915_gem_object_free(obj); - return ERR_PTR(ret); + return 0; } struct drm_i915_gem_object * @@ -611,7 +602,7 @@ static void release_shmem(struct intel_memory_region *mem) static const struct intel_memory_region_ops shmem_region_ops = { .init = init_shmem, .release = release_shmem, - .create_object = create_shmem, + .init_object = shmem_object_init, }; struct intel_memory_region *i915_gem_shmem_setup(struct drm_i915_private *i915) @@ -621,3 +612,8 @@ struct intel_memory_region 
*i915_gem_shmem_setup(struct drm_i915_private *i915) PAGE_SIZE, 0, &shmem_region_ops); } + +bool i915_gem_object_is_shmem(const struct drm_i915_gem_object *obj) +{ + return obj->ops == &i915_gem_shmem_ops; +} diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c index 41b9fbf4dbcc..551935348ad8 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c @@ -621,18 +621,13 @@ static const struct drm_i915_gem_object_ops i915_gem_object_stolen_ops = { .release = i915_gem_object_release_stolen, }; -static struct drm_i915_gem_object * -__i915_gem_object_create_stolen(struct intel_memory_region *mem, - struct drm_mm_node *stolen) +static int __i915_gem_object_create_stolen(struct intel_memory_region *mem, + struct drm_i915_gem_object *obj, + struct drm_mm_node *stolen) { static struct lock_class_key lock_class; - struct drm_i915_gem_object *obj; unsigned int cache_level; - int err = -ENOMEM; - - obj = i915_gem_object_alloc(); - if (!obj) - goto err; + int err; drm_gem_private_object_init(&mem->i915->drm, &obj->base, stolen->size); i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class); @@ -644,55 +639,47 @@ __i915_gem_object_create_stolen(struct intel_memory_region *mem, err = i915_gem_object_pin_pages(obj); if (err) - goto cleanup; + return err; i915_gem_object_init_memory_region(obj, mem, 0); - return obj; - -cleanup: - i915_gem_object_free(obj); -err: - return ERR_PTR(err); + return 0; } -static struct drm_i915_gem_object * -_i915_gem_object_create_stolen(struct intel_memory_region *mem, - resource_size_t size, - unsigned int flags) +static int _i915_gem_object_stolen_init(struct intel_memory_region *mem, + struct drm_i915_gem_object *obj, + resource_size_t size, + unsigned int flags) { struct drm_i915_private *i915 = mem->i915; - struct drm_i915_gem_object *obj; struct drm_mm_node *stolen; int ret; if (!drm_mm_initialized(&i915->mm.stolen)) - return ERR_PTR(-ENODEV); + return -ENODEV; if (size == 0) - return ERR_PTR(-EINVAL); + return -EINVAL; stolen = kzalloc(sizeof(*stolen), GFP_KERNEL); if (!stolen) - return ERR_PTR(-ENOMEM); + return -ENOMEM; ret = i915_gem_stolen_insert_node(i915, stolen, size, 4096); - if (ret) { - obj = ERR_PTR(ret); + if (ret) goto err_free; - } - obj = __i915_gem_object_create_stolen(mem, stolen); - if (IS_ERR(obj)) + ret = __i915_gem_object_create_stolen(mem, obj, stolen); + if (ret) goto err_remove; - return obj; + return 0; err_remove: i915_gem_stolen_remove_node(i915, stolen); err_free: kfree(stolen); - return obj; + return ret; } struct drm_i915_gem_object * @@ -722,7 +709,7 @@ static void release_stolen(struct intel_memory_region *mem) static const struct intel_memory_region_ops i915_region_stolen_ops = { .init = init_stolen, .release = release_stolen, - .create_object = _i915_gem_object_create_stolen, + .init_object = _i915_gem_object_stolen_init, }; struct intel_memory_region *i915_gem_stolen_setup(struct drm_i915_private *i915) @@ -771,16 +758,31 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *i915, goto err_free; } - obj = __i915_gem_object_create_stolen(mem, stolen); - if (IS_ERR(obj)) + obj = i915_gem_object_alloc(); + if (!obj) { + obj = ERR_PTR(-ENOMEM); goto err_stolen; + } + + ret = __i915_gem_object_create_stolen(mem, obj, stolen); + if (ret) { + obj = ERR_PTR(ret); + goto err_object_free; + } i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE); return obj; +err_object_free: + i915_gem_object_free(obj); 
err_stolen: i915_gem_stolen_remove_node(i915, stolen); err_free: kfree(stolen); return obj; } + +bool i915_gem_object_is_stolen(const struct drm_i915_gem_object *obj) +{ + return obj->ops == &i915_gem_object_stolen_ops; +} diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.h b/drivers/gpu/drm/i915/gem/i915_gem_stolen.h index 61e028063f9f..b03489706796 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.h @@ -30,6 +30,8 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *dev_priv resource_size_t stolen_offset, resource_size_t size); +bool i915_gem_object_is_stolen(const struct drm_i915_gem_object *obj); + #define I915_GEM_STOLEN_BIAS SZ_128K #endif /* __I915_GEM_STOLEN_H__ */ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c index ffcaee74a249..d589d3d81085 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c @@ -270,14 +270,14 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj, obj->mm.madv == I915_MADV_WILLNEED && i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) { if (tiling == I915_TILING_NONE) { - GEM_BUG_ON(!obj->mm.quirked); - __i915_gem_object_unpin_pages(obj); - obj->mm.quirked = false; + GEM_BUG_ON(!i915_gem_object_has_tiling_quirk(obj)); + i915_gem_object_clear_tiling_quirk(obj); + i915_gem_object_make_shrinkable(obj); } if (!i915_gem_object_is_tiled(obj)) { - GEM_BUG_ON(obj->mm.quirked); - __i915_gem_object_pin_pages(obj); - obj->mm.quirked = true; + GEM_BUG_ON(i915_gem_object_has_tiling_quirk(obj)); + i915_gem_object_make_unshrinkable(obj); + i915_gem_object_set_tiling_quirk(obj); } } mutex_unlock(&obj->mm.lock); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c index c1b13ac50d0f..4b9856d5ba14 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c @@ -5,6 +5,7 @@ */ #include <linux/dma-fence-array.h> +#include <linux/dma-fence-chain.h> #include <linux/jiffies.h> #include "gt/intel_engine.h" @@ -44,8 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv, unsigned int count, i; int ret; - ret = dma_resv_get_fences_rcu(resv, - &excl, &count, &shared); + ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared); if (ret) return ret; @@ -91,8 +91,8 @@ i915_gem_object_wait_reservation(struct dma_resv *resv, return timeout; } -static void __fence_set_priority(struct dma_fence *fence, - const struct i915_sched_attr *attr) +static void fence_set_priority(struct dma_fence *fence, + const struct i915_sched_attr *attr) { struct i915_request *rq; struct intel_engine_cs *engine; @@ -103,27 +103,47 @@ static void __fence_set_priority(struct dma_fence *fence, rq = to_request(fence); engine = rq->engine; - local_bh_disable(); rcu_read_lock(); /* RCU serialisation for set-wedged protection */ if (engine->schedule) engine->schedule(rq, attr); rcu_read_unlock(); - local_bh_enable(); /* kick the tasklets if queues were reprioritised */ } -static void fence_set_priority(struct dma_fence *fence, - const struct i915_sched_attr *attr) +static inline bool __dma_fence_is_chain(const struct dma_fence *fence) +{ + return fence->ops == &dma_fence_chain_ops; +} + +void i915_gem_fence_wait_priority(struct dma_fence *fence, + const struct i915_sched_attr *attr) { + if (dma_fence_is_signaled(fence)) + return; + + local_bh_disable(); + /* Recurse once into a fence-array */ if (dma_fence_is_array(fence)) { struct dma_fence_array 
*array = to_dma_fence_array(fence); int i; for (i = 0; i < array->num_fences; i++) - __fence_set_priority(array->fences[i], attr); + fence_set_priority(array->fences[i], attr); + } else if (__dma_fence_is_chain(fence)) { + struct dma_fence *iter; + + /* The chain is ordered; if we boost the last, we boost all */ + dma_fence_chain_for_each(iter, fence) { + fence_set_priority(to_dma_fence_chain(iter)->fence, + attr); + break; + } + dma_fence_put(iter); } else { - __fence_set_priority(fence, attr); + fence_set_priority(fence, attr); } + + local_bh_enable(); /* kick the tasklets if queues were reprioritised */ } int @@ -139,12 +159,12 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj, int ret; ret = dma_resv_get_fences_rcu(obj->base.resv, - &excl, &count, &shared); + &excl, &count, &shared); if (ret) return ret; for (i = 0; i < count; i++) { - fence_set_priority(shared[i], attr); + i915_gem_fence_wait_priority(shared[i], attr); dma_fence_put(shared[i]); } @@ -154,7 +174,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj, } if (excl) { - fence_set_priority(excl, attr); + i915_gem_fence_wait_priority(excl, attr); dma_fence_put(excl); } return 0; diff --git a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c index 680bd9442eb0..e08dff376339 100644 --- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c @@ -12,9 +12,9 @@ #include "intel_gt.h" /* Write pde (index) from the page directory @pd to the page table @pt */ -static inline void gen6_write_pde(const struct gen6_ppgtt *ppgtt, - const unsigned int pde, - const struct i915_page_table *pt) +static void gen6_write_pde(const struct gen6_ppgtt *ppgtt, + const unsigned int pde, + const struct i915_page_table *pt) { dma_addr_t addr = pt ? 
px_dma(pt) : px_dma(ppgtt->base.vm.scratch[1]); @@ -27,8 +27,6 @@ void gen7_ppgtt_enable(struct intel_gt *gt) { struct drm_i915_private *i915 = gt->i915; struct intel_uncore *uncore = gt->uncore; - struct intel_engine_cs *engine; - enum intel_engine_id id; u32 ecochk; intel_uncore_rmw(uncore, GAC_ECO_BITS, 0, ECOBITS_PPGTT_CACHE64B); @@ -41,13 +39,6 @@ void gen7_ppgtt_enable(struct intel_gt *gt) ecochk &= ~ECOCHK_PPGTT_GFDT_IVB; } intel_uncore_write(uncore, GAM_ECOCHK, ecochk); - - for_each_engine(engine, gt, id) { - /* GFX_MODE is per-ring on gen7+ */ - ENGINE_WRITE(engine, - RING_MODE_GEN7, - _MASKED_BIT_ENABLE(GFX_PPGTT_ENABLE)); - } } void gen6_ppgtt_enable(struct intel_gt *gt) diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.c b/drivers/gpu/drm/i915/gt/gen7_renderclear.c index 94465374ca2f..8551e6de50e8 100644 --- a/drivers/gpu/drm/i915/gt/gen7_renderclear.c +++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.c @@ -40,7 +40,7 @@ struct batch_vals { u32 size; }; -static inline int num_primitives(const struct batch_vals *bv) +static int num_primitives(const struct batch_vals *bv) { /* * We need to saturate the GPU with work in order to dispatch @@ -353,19 +353,21 @@ static void gen7_emit_pipeline_flush(struct batch_chunk *batch) static void gen7_emit_pipeline_invalidate(struct batch_chunk *batch) { - u32 *cs = batch_alloc_items(batch, 0, 8); + u32 *cs = batch_alloc_items(batch, 0, 10); /* ivb: Stall before STATE_CACHE_INVALIDATE */ - *cs++ = GFX_OP_PIPE_CONTROL(4); + *cs++ = GFX_OP_PIPE_CONTROL(5); *cs++ = PIPE_CONTROL_STALL_AT_SCOREBOARD | PIPE_CONTROL_CS_STALL; *cs++ = 0; *cs++ = 0; + *cs++ = 0; - *cs++ = GFX_OP_PIPE_CONTROL(4); + *cs++ = GFX_OP_PIPE_CONTROL(5); *cs++ = PIPE_CONTROL_STATE_CACHE_INVALIDATE; *cs++ = 0; *cs++ = 0; + *cs++ = 0; batch_advance(batch, cs); } @@ -390,6 +392,17 @@ static void emit_batch(struct i915_vma * const vma, &cb_kernel_ivb, desc_count); + /* Reset inherited context registers */ + gen7_emit_pipeline_invalidate(&cmds); + batch_add(&cmds, MI_LOAD_REGISTER_IMM(2)); + batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_0_GEN7)); + batch_add(&cmds, 0xffff0000); + batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_1)); + batch_add(&cmds, 0xffff0000 | PIXEL_SUBSPAN_COLLECT_OPT_DISABLE); + gen7_emit_pipeline_invalidate(&cmds); + gen7_emit_pipeline_flush(&cmds); + + /* Switch to the media pipeline and our base address */ gen7_emit_pipeline_invalidate(&cmds); batch_add(&cmds, PIPELINE_SELECT | PIPELINE_SELECT_MEDIA); batch_add(&cmds, MI_NOOP); @@ -399,9 +412,11 @@ static void emit_batch(struct i915_vma * const vma, gen7_emit_state_base_address(&cmds, descriptors); gen7_emit_pipeline_invalidate(&cmds); + /* Set the clear-residual kernel state */ gen7_emit_vfe_state(&cmds, bv, urb_size - 1, 0, 0); gen7_emit_interface_descriptor_load(&cmds, descriptors, desc_count); + /* Execute the kernel on all HW threads */ for (i = 0; i < num_primitives(bv); i++) gen7_emit_media_object(&cmds, i); diff --git a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c index 8066b93e6dc4..07ba524da90b 100644 --- a/drivers/gpu/drm/i915/gt/gen8_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/gen8_engine_cs.c @@ -330,7 +330,7 @@ int gen12_emit_flush_xcs(struct i915_request *rq, u32 mode) return 0; } -static inline u32 preempt_address(struct intel_engine_cs *engine) +static u32 preempt_address(struct intel_engine_cs *engine) { return (i915_ggtt_offset(engine->status_page.vma) + I915_GEM_HWS_PREEMPT_ADDR); @@ -488,6 +488,7 @@ static u32 *gen8_emit_wa_tail(struct 
i915_request *rq, u32 *cs) static u32 *emit_preempt_busywait(struct i915_request *rq, u32 *cs) { + *cs++ = MI_ARB_CHECK; /* trigger IDLE->ACTIVE first */ *cs++ = MI_SEMAPHORE_WAIT | MI_SEMAPHORE_GLOBAL_GTT | MI_SEMAPHORE_POLL | @@ -495,6 +496,7 @@ static u32 *emit_preempt_busywait(struct i915_request *rq, u32 *cs) *cs++ = 0; *cs++ = preempt_address(rq->engine); *cs++ = 0; + *cs++ = MI_NOOP; return cs; } diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c index a37c968ef8f7..755522ced60d 100644 --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c @@ -109,7 +109,7 @@ static void gen8_ppgtt_notify_vgt(struct i915_ppgtt *ppgtt, bool create) #define as_pd(x) container_of((x), typeof(struct i915_page_directory), pt) -static inline unsigned int +static unsigned int gen8_pd_range(u64 start, u64 end, int lvl, unsigned int *idx) { const int shift = gen8_pd_shift(lvl); @@ -125,7 +125,7 @@ gen8_pd_range(u64 start, u64 end, int lvl, unsigned int *idx) return i915_pde_index(end, shift) - *idx; } -static inline bool gen8_pd_contains(u64 start, u64 end, int lvl) +static bool gen8_pd_contains(u64 start, u64 end, int lvl) { const u64 mask = ~0ull << gen8_pd_shift(lvl + 1); @@ -133,7 +133,7 @@ static inline bool gen8_pd_contains(u64 start, u64 end, int lvl) return (start ^ end) & mask && (start & ~mask) == 0; } -static inline unsigned int gen8_pt_count(u64 start, u64 end) +static unsigned int gen8_pt_count(u64 start, u64 end) { GEM_BUG_ON(start >= end); if ((start ^ end) >> gen8_pd_shift(1)) @@ -142,14 +142,13 @@ static inline unsigned int gen8_pt_count(u64 start, u64 end) return end - start; } -static inline unsigned int -gen8_pd_top_count(const struct i915_address_space *vm) +static unsigned int gen8_pd_top_count(const struct i915_address_space *vm) { unsigned int shift = __gen8_pte_shift(vm->top); return (vm->total + (1ull << shift) - 1) >> shift; } -static inline struct i915_page_directory * +static struct i915_page_directory * gen8_pdp_for_page_index(struct i915_address_space * const vm, const u64 idx) { struct i915_ppgtt * const ppgtt = i915_vm_to_ppgtt(vm); @@ -160,7 +159,7 @@ gen8_pdp_for_page_index(struct i915_address_space * const vm, const u64 idx) return i915_pd_entry(ppgtt->pd, gen8_pd_index(idx, vm->top)); } -static inline struct i915_page_directory * +static struct i915_page_directory * gen8_pdp_for_page_address(struct i915_address_space * const vm, const u64 addr) { return gen8_pdp_for_page_index(vm, addr >> GEN8_PTE_SHIFT); diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c index be2c285a0ac7..34a645d6babd 100644 --- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c +++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c @@ -453,16 +453,17 @@ void i915_request_cancel_breadcrumb(struct i915_request *rq) { struct intel_breadcrumbs *b = READ_ONCE(rq->engine)->breadcrumbs; struct intel_context *ce = rq->context; - unsigned long flags; bool release; - if (!test_and_clear_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags)) + spin_lock(&ce->signal_lock); + if (!test_and_clear_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags)) { + spin_unlock(&ce->signal_lock); return; + } - spin_lock_irqsave(&ce->signal_lock, flags); list_del_rcu(&rq->signal_link); release = remove_signaling_context(b, ce); - spin_unlock_irqrestore(&ce->signal_lock, flags); + spin_unlock(&ce->signal_lock); if (release) intel_context_put(ce); @@ -517,8 +518,8 @@ static void print_signals(struct intel_breadcrumbs *b, struct 
drm_printer *p) list_for_each_entry_rcu(rq, &ce->signals, signal_link) drm_printf(p, "\t[%llx:%llx%s] @ %dms\n", rq->fence.context, rq->fence.seqno, - i915_request_completed(rq) ? "!" : - i915_request_started(rq) ? "*" : + __i915_request_is_complete(rq) ? "!" : + __i915_request_has_started(rq) ? "*" : "", jiffies_to_msecs(jiffies - rq->emitted_jiffies)); } diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c index fa76602f9852..fb1b1d096975 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c @@ -342,7 +342,7 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id) engine->schedule = NULL; ewma__engine_latency_init(&engine->latency); - seqlock_init(&engine->stats.lock); + seqcount_init(&engine->stats.lock); ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier); @@ -1676,7 +1676,7 @@ void intel_engine_dump(struct intel_engine_cs *engine, ktime_to_ms(intel_engine_get_busy_time(engine, &dummy))); drm_printf(m, "\tForcewake: %x domains, %d active\n", - engine->fw_domain, atomic_read(&engine->fw_active)); + engine->fw_domain, READ_ONCE(engine->fw_active)); rcu_read_lock(); rq = READ_ONCE(engine->heartbeat.systole); @@ -1754,7 +1754,7 @@ static ktime_t __intel_engine_get_busy_time(struct intel_engine_cs *engine, * add it to the total. */ *now = ktime_get(); - if (atomic_read(&engine->stats.active)) + if (READ_ONCE(engine->stats.active)) total = ktime_add(total, ktime_sub(*now, engine->stats.start)); return total; @@ -1773,9 +1773,9 @@ ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine, ktime_t *now) ktime_t total; do { - seq = read_seqbegin(&engine->stats.lock); + seq = read_seqcount_begin(&engine->stats.lock); total = __intel_engine_get_busy_time(engine, now); - } while (read_seqretry(&engine->stats.lock, seq)); + } while (read_seqcount_retry(&engine->stats.lock, seq)); return total; } @@ -1811,7 +1811,7 @@ intel_engine_find_active_request(struct intel_engine_cs *engine) struct intel_timeline *tl = request->context->timeline; list_for_each_entry_from_reverse(request, &tl->requests, link) { - if (i915_request_completed(request)) + if (__i915_request_is_complete(request)) break; active = request; @@ -1822,10 +1822,10 @@ intel_engine_find_active_request(struct intel_engine_cs *engine) return active; list_for_each_entry(request, &engine->active.requests, sched.link) { - if (i915_request_completed(request)) + if (__i915_request_is_complete(request)) continue; - if (!i915_request_started(request)) + if (!__i915_request_has_started(request)) continue; /* More than one preemptible request may match! 
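 * (Completed requests are skipped above, as are requests that have not
 * yet reached the hardware.)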
*/ diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c index 2843db731b7d..e67d09259dd0 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c @@ -79,7 +79,7 @@ static int __engine_unpark(struct intel_wakeref *wf) #if IS_ENABLED(CONFIG_LOCKDEP) -static inline unsigned long __timeline_mark_lock(struct intel_context *ce) +static unsigned long __timeline_mark_lock(struct intel_context *ce) { unsigned long flags; @@ -89,8 +89,8 @@ static inline unsigned long __timeline_mark_lock(struct intel_context *ce) return flags; } -static inline void __timeline_mark_unlock(struct intel_context *ce, - unsigned long flags) +static void __timeline_mark_unlock(struct intel_context *ce, + unsigned long flags) { mutex_release(&ce->timeline->mutex.dep_map, _THIS_IP_); local_irq_restore(flags); @@ -98,13 +98,13 @@ static inline void __timeline_mark_unlock(struct intel_context *ce, #else -static inline unsigned long __timeline_mark_lock(struct intel_context *ce) +static unsigned long __timeline_mark_lock(struct intel_context *ce) { return 0; } -static inline void __timeline_mark_unlock(struct intel_context *ce, - unsigned long flags) +static void __timeline_mark_unlock(struct intel_context *ce, + unsigned long flags) { } diff --git a/drivers/gpu/drm/i915/gt/intel_engine_stats.h b/drivers/gpu/drm/i915/gt/intel_engine_stats.h new file mode 100644 index 000000000000..24fbdd94351a --- /dev/null +++ b/drivers/gpu/drm/i915/gt/intel_engine_stats.h @@ -0,0 +1,60 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright © 2020 Intel Corporation + */ + +#ifndef __INTEL_ENGINE_STATS_H__ +#define __INTEL_ENGINE_STATS_H__ + +#include <linux/atomic.h> +#include <linux/ktime.h> +#include <linux/seqlock.h> + +#include "i915_gem.h" /* GEM_BUG_ON */ +#include "intel_engine.h" + +static inline void intel_engine_context_in(struct intel_engine_cs *engine) +{ + unsigned long flags; + + if (engine->stats.active) { + engine->stats.active++; + return; + } + + /* The writer is serialised; but the pmu reader may be from hardirq */ + local_irq_save(flags); + write_seqcount_begin(&engine->stats.lock); + + engine->stats.start = ktime_get(); + engine->stats.active++; + + write_seqcount_end(&engine->stats.lock); + local_irq_restore(flags); + + GEM_BUG_ON(!engine->stats.active); +} + +static inline void intel_engine_context_out(struct intel_engine_cs *engine) +{ + unsigned long flags; + + GEM_BUG_ON(!engine->stats.active); + if (engine->stats.active > 1) { + engine->stats.active--; + return; + } + + local_irq_save(flags); + write_seqcount_begin(&engine->stats.lock); + + engine->stats.active--; + engine->stats.total = + ktime_add(engine->stats.total, + ktime_sub(ktime_get(), engine->stats.start)); + + write_seqcount_end(&engine->stats.lock); + local_irq_restore(flags); +} + +#endif /* __INTEL_ENGINE_STATS_H__ */ diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h index df62e793e747..d2346b425547 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_types.h +++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h @@ -319,7 +319,7 @@ struct intel_engine_cs { * as possible. */ enum forcewake_domains fw_domain; - atomic_t fw_active; + unsigned int fw_active; unsigned long context_tag; @@ -516,12 +516,12 @@ struct intel_engine_cs { /** * @active: Number of contexts currently scheduled in. */ - atomic_t active; + unsigned int active; /** * @lock: Lock protecting the below fields. 
*/ - seqlock_t lock; + seqcount_t lock; /** * @total: Total time this engine was busy. diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c index d7d5a58990bb..ac1be7a632d3 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c @@ -115,6 +115,7 @@ #include "intel_breadcrumbs.h" #include "intel_context.h" #include "intel_engine_pm.h" +#include "intel_engine_stats.h" #include "intel_execlists_submission.h" #include "intel_gt.h" #include "intel_gt_pm.h" @@ -230,8 +231,7 @@ active_request(const struct intel_timeline * const tl, struct i915_request *rq) return __active_request(tl, rq, 0); } -static inline void -ring_set_paused(const struct intel_engine_cs *engine, int state) +static void ring_set_paused(const struct intel_engine_cs *engine, int state) { /* * We inspect HWS_PREEMPT with a semaphore inside @@ -244,12 +244,12 @@ ring_set_paused(const struct intel_engine_cs *engine, int state) wmb(); } -static inline struct i915_priolist *to_priolist(struct rb_node *rb) +static struct i915_priolist *to_priolist(struct rb_node *rb) { return rb_entry(rb, struct i915_priolist, node); } -static inline int rq_prio(const struct i915_request *rq) +static int rq_prio(const struct i915_request *rq) { return READ_ONCE(rq->sched.attr.priority); } @@ -299,8 +299,8 @@ static int virtual_prio(const struct intel_engine_execlists *el) return rb ? rb_entry(rb, struct ve_node, rb)->prio : INT_MIN; } -static inline bool need_preempt(const struct intel_engine_cs *engine, - const struct i915_request *rq) +static bool need_preempt(const struct intel_engine_cs *engine, + const struct i915_request *rq) { int last_prio; @@ -351,7 +351,7 @@ static inline bool need_preempt(const struct intel_engine_cs *engine, queue_prio(&engine->execlists)) > last_prio; } -__maybe_unused static inline bool +__maybe_unused static bool assert_priority_queue(const struct i915_request *prev, const struct i915_request *next) { @@ -418,7 +418,7 @@ execlists_unwind_incomplete_requests(struct intel_engine_execlists *execlists) return __unwind_incomplete_requests(engine); } -static inline void +static void execlists_context_status_change(struct i915_request *rq, unsigned long status) { /* @@ -432,39 +432,6 @@ execlists_context_status_change(struct i915_request *rq, unsigned long status) status, rq); } -static void intel_engine_context_in(struct intel_engine_cs *engine) -{ - unsigned long flags; - - if (atomic_add_unless(&engine->stats.active, 1, 0)) - return; - - write_seqlock_irqsave(&engine->stats.lock, flags); - if (!atomic_add_unless(&engine->stats.active, 1, 0)) { - engine->stats.start = ktime_get(); - atomic_inc(&engine->stats.active); - } - write_sequnlock_irqrestore(&engine->stats.lock, flags); -} - -static void intel_engine_context_out(struct intel_engine_cs *engine) -{ - unsigned long flags; - - GEM_BUG_ON(!atomic_read(&engine->stats.active)); - - if (atomic_add_unless(&engine->stats.active, -1, 1)) - return; - - write_seqlock_irqsave(&engine->stats.lock, flags); - if (atomic_dec_and_test(&engine->stats.active)) { - engine->stats.total = - ktime_add(engine->stats.total, - ktime_sub(ktime_get(), engine->stats.start)); - } - write_sequnlock_irqrestore(&engine->stats.lock, flags); -} - static void reset_active(struct i915_request *rq, struct intel_engine_cs *engine) { @@ -503,7 +470,7 @@ static void reset_active(struct i915_request *rq, ce->lrc.lrca = lrc_update_regs(ce, engine, head); } -static inline 
struct intel_engine_cs * +static struct intel_engine_cs * __execlists_schedule_in(struct i915_request *rq) { struct intel_engine_cs * const engine = rq->engine; @@ -539,7 +506,7 @@ __execlists_schedule_in(struct i915_request *rq) ce->lrc.ccid |= engine->execlists.ccid; __intel_gt_pm_get(engine->gt); - if (engine->fw_domain && !atomic_fetch_inc(&engine->fw_active)) + if (engine->fw_domain && !engine->fw_active++) intel_uncore_forcewake_get(engine->uncore, engine->fw_domain); execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_IN); intel_engine_context_in(engine); @@ -549,7 +516,7 @@ __execlists_schedule_in(struct i915_request *rq) return engine; } -static inline void execlists_schedule_in(struct i915_request *rq, int idx) +static void execlists_schedule_in(struct i915_request *rq, int idx) { struct intel_context * const ce = rq->context; struct intel_engine_cs *old; @@ -608,9 +575,9 @@ static void kick_siblings(struct i915_request *rq, struct intel_context *ce) tasklet_hi_schedule(&ve->base.execlists.tasklet); } -static inline void __execlists_schedule_out(struct i915_request *rq) +static void __execlists_schedule_out(struct i915_request * const rq, + struct intel_context * const ce) { - struct intel_context * const ce = rq->context; struct intel_engine_cs * const engine = rq->engine; unsigned int ccid; @@ -621,6 +588,7 @@ static inline void __execlists_schedule_out(struct i915_request *rq) */ CE_TRACE(ce, "schedule-out, ccid:%x\n", ce->lrc.ccid); + GEM_BUG_ON(ce->inflight != engine); if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)) lrc_check_regs(ce, engine, "after"); @@ -645,7 +613,7 @@ static inline void __execlists_schedule_out(struct i915_request *rq) lrc_update_runtime(ce); intel_engine_context_out(engine); execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT); - if (engine->fw_domain && !atomic_dec_return(&engine->fw_active)) + if (engine->fw_domain && !--engine->fw_active) intel_uncore_forcewake_put(engine->uncore, engine->fw_domain); intel_gt_pm_put_async(engine->gt); @@ -660,10 +628,12 @@ static inline void __execlists_schedule_out(struct i915_request *rq) */ if (ce->engine != engine) kick_siblings(rq, ce); + + WRITE_ONCE(ce->inflight, NULL); + intel_context_put(ce); } -static inline void -execlists_schedule_out(struct i915_request *rq) +static inline void execlists_schedule_out(struct i915_request *rq) { struct intel_context * const ce = rq->context; @@ -671,12 +641,8 @@ execlists_schedule_out(struct i915_request *rq) GEM_BUG_ON(!ce->inflight); ce->inflight = ptr_dec(ce->inflight); - if (!__intel_context_inflight_count(ce->inflight)) { - GEM_BUG_ON(ce->inflight != rq->engine); - __execlists_schedule_out(rq); - WRITE_ONCE(ce->inflight, NULL); - intel_context_put(ce); - } + if (!__intel_context_inflight_count(ce->inflight)) + __execlists_schedule_out(rq, ce); i915_request_put(rq); } @@ -728,7 +694,7 @@ static u64 execlists_update_context(struct i915_request *rq) return desc; } -static inline void write_desc(struct intel_engine_execlists *execlists, u64 desc, u32 port) +static void write_desc(struct intel_engine_execlists *execlists, u64 desc, u32 port) { if (execlists->ctrl_reg) { writel(lower_32_bits(desc), execlists->submit_reg + port * 2); @@ -757,7 +723,7 @@ dump_port(char *buf, int buflen, const char *prefix, struct i915_request *rq) return buf; } -static __maybe_unused void +static __maybe_unused noinline void trace_ports(const struct intel_engine_execlists *execlists, const char *msg, struct i915_request * const *ports) @@ -774,13 +740,13 @@ trace_ports(const 
struct intel_engine_execlists *execlists, dump_port(p1, sizeof(p1), ", ", ports[1])); } -static inline bool +static bool reset_in_progress(const struct intel_engine_execlists *execlists) { return unlikely(!__tasklet_is_enabled(&execlists->tasklet)); } -static __maybe_unused bool +static __maybe_unused noinline bool assert_pending_valid(const struct intel_engine_execlists *execlists, const char *msg) { @@ -1258,12 +1224,20 @@ static void set_preempt_timeout(struct intel_engine_cs *engine, active_preempt_timeout(engine, rq)); } +static bool completed(const struct i915_request *rq) +{ + if (i915_request_has_sentinel(rq)) + return false; + + return __i915_request_is_complete(rq); +} + static void execlists_dequeue(struct intel_engine_cs *engine) { struct intel_engine_execlists * const execlists = &engine->execlists; struct i915_request **port = execlists->pending; struct i915_request ** const last_port = port + execlists->port_mask; - struct i915_request *last = *execlists->active; + struct i915_request *last, * const *active; struct virtual_engine *ve; struct rb_node *rb; bool submit = false; @@ -1300,21 +1274,13 @@ static void execlists_dequeue(struct intel_engine_cs *engine) * i.e. we will retrigger preemption following the ack in case * of trouble. * - * In theory we can skip over completed contexts that have not - * yet been processed by events (as those events are in flight): - * - * while ((last = *active) && i915_request_completed(last)) - * active++; - * - * However, the GPU cannot handle this as it will ultimately - * find itself trying to jump back into a context it has just - * completed and barf. */ + active = execlists->active; + while ((last = *active) && completed(last)) + active++; if (last) { - if (__i915_request_is_complete(last)) { - goto check_secondary; - } else if (need_preempt(engine, last)) { + if (need_preempt(engine, last)) { ENGINE_TRACE(engine, "preempting last=%llx:%lld, prio=%d, hint=%d\n", last->fence.context, @@ -1393,9 +1359,7 @@ static void execlists_dequeue(struct intel_engine_cs *engine) * we hopefully coalesce several updates into a single * submission. */ -check_secondary: - if (!list_is_last(&last->sched.link, - &engine->active.requests)) { + if (active[1]) { /* * Even if ELSP[1] is occupied and not worthy * of timeslices, our queue might be. @@ -1596,7 +1560,7 @@ done: * of ordered contexts. */ if (submit && - memcmp(execlists->active, + memcmp(active, execlists->pending, (port - execlists->pending) * sizeof(*port))) { *port = NULL; @@ -1604,7 +1568,7 @@ done: execlists_schedule_in(*port, port - execlists->pending); WRITE_ONCE(execlists->yield, -1); - set_preempt_timeout(engine, *execlists->active); + set_preempt_timeout(engine, *active); execlists_submit_ports(engine); } else { ring_set_paused(engine, 0); @@ -1621,12 +1585,12 @@ static void execlists_dequeue_irq(struct intel_engine_cs *engine) local_irq_enable(); /* flush irq_work (e.g. breadcrumb enabling) */ } -static inline void clear_ports(struct i915_request **ports, int count) +static void clear_ports(struct i915_request **ports, int count) { memset_p((void **)ports, NULL, count); } -static inline void +static void copy_ports(struct i915_request **dst, struct i915_request **src, int count) { /* A memcpy_p() would be very useful here! 
*/ @@ -1660,8 +1624,7 @@ cancel_port_requests(struct intel_engine_execlists * const execlists, return inactive; } -static inline void -invalidate_csb_entries(const u64 *first, const u64 *last) +static void invalidate_csb_entries(const u64 *first, const u64 *last) { clflush((void *)first); clflush((void *)last); @@ -1693,7 +1656,7 @@ invalidate_csb_entries(const u64 *first, const u64 *last) * bits 47-57: sw context id of the lrc the GT switched away from * bits 58-63: sw counter of the lrc the GT switched away from */ -static inline bool gen12_csb_parse(const u64 csb) +static bool gen12_csb_parse(const u64 csb) { bool ctx_away_valid = GEN12_CSB_CTX_VALID(upper_32_bits(csb)); bool new_queue = @@ -1720,7 +1683,7 @@ static inline bool gen12_csb_parse(const u64 csb) return false; } -static inline bool gen8_csb_parse(const u64 csb) +static bool gen8_csb_parse(const u64 csb) { return csb & (GEN8_CTX_STATUS_IDLE_ACTIVE | GEN8_CTX_STATUS_PREEMPTED); } @@ -1759,8 +1722,7 @@ wa_csb_read(const struct intel_engine_cs *engine, u64 * const csb) return entry; } -static inline u64 -csb_read(const struct intel_engine_cs *engine, u64 * const csb) +static u64 csb_read(const struct intel_engine_cs *engine, u64 * const csb) { u64 entry = READ_ONCE(*csb); @@ -2026,6 +1988,9 @@ static void __execlists_hold(struct i915_request *rq) struct i915_request *w = container_of(p->waiter, typeof(*w), sched); + if (p->flags & I915_DEPENDENCY_WEAK) + continue; + /* Leave semaphores spinning on the other engines */ if (w->engine != rq->engine) continue; @@ -2124,6 +2089,9 @@ static void __execlists_unhold(struct i915_request *rq) struct i915_request *w = container_of(p->waiter, typeof(*w), sched); + if (p->flags & I915_DEPENDENCY_WEAK) + continue; + /* Propagate any change in error status */ if (rq->fence.error) i915_request_set_error_once(w, rq->fence.error); @@ -3180,8 +3148,7 @@ logical_ring_default_vfuncs(struct intel_engine_cs *engine) } } -static inline void -logical_ring_default_irqs(struct intel_engine_cs *engine) +static void logical_ring_default_irqs(struct intel_engine_cs *engine) { unsigned int shift = 0; @@ -3296,7 +3263,7 @@ static void rcu_virtual_context_destroy(struct work_struct *wrk) old = fetch_and_zero(&ve->request); if (old) { - GEM_BUG_ON(!i915_request_completed(old)); + GEM_BUG_ON(!__i915_request_is_complete(old)); __i915_request_submit(old); i915_request_put(old); } @@ -3573,7 +3540,7 @@ static void virtual_submit_request(struct i915_request *rq) } if (ve->request) { /* background completion from preempt-to-busy */ - GEM_BUG_ON(!i915_request_completed(ve->request)); + GEM_BUG_ON(!__i915_request_is_complete(ve->request)); __i915_request_submit(ve->request); i915_request_put(ve->request); } diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c index 104cb30e8c13..06d84cf09570 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c +++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.c @@ -145,7 +145,8 @@ static void pool_retire(struct i915_active *ref) } static struct intel_gt_buffer_pool_node * -node_create(struct intel_gt_buffer_pool *pool, size_t sz) +node_create(struct intel_gt_buffer_pool *pool, size_t sz, + enum i915_map_type type) { struct intel_gt *gt = to_gt(pool); struct intel_gt_buffer_pool_node *node; @@ -169,12 +170,14 @@ node_create(struct intel_gt_buffer_pool *pool, size_t sz) i915_gem_object_set_readonly(obj); + node->type = type; node->obj = obj; return node; } struct intel_gt_buffer_pool_node * -intel_gt_get_buffer_pool(struct 
intel_gt *gt, size_t size) +intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size, + enum i915_map_type type) { struct intel_gt_buffer_pool *pool = &gt->buffer_pool; struct intel_gt_buffer_pool_node *node; @@ -191,6 +194,9 @@ intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size) if (node->obj->base.size < size) continue; + if (node->type != type) + continue; + age = READ_ONCE(node->age); if (!age) continue; @@ -205,7 +211,7 @@ intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size) rcu_read_unlock(); if (&node->link == list) { - node = node_create(pool, size); + node = node_create(pool, size, type); if (IS_ERR(node)) return node; } } diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h index 42cbac003e8a..6068f8f1762e 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h @@ -15,7 +15,8 @@ struct intel_gt; struct i915_request; struct intel_gt_buffer_pool_node * -intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size); +intel_gt_get_buffer_pool(struct intel_gt *gt, size_t size, + enum i915_map_type type); static inline int intel_gt_buffer_pool_mark_active(struct intel_gt_buffer_pool_node *node, diff --git a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h index bcf1658c9633..d8d82c890da8 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h +++ b/drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h @@ -11,10 +11,9 @@ #include <linux/spinlock.h> #include <linux/workqueue.h> +#include "gem/i915_gem_object_types.h" #include "i915_active_types.h" -struct drm_i915_gem_object; - struct intel_gt_buffer_pool { spinlock_t lock; struct list_head cache_list[4]; @@ -31,6 +30,7 @@ struct intel_gt_buffer_pool_node { struct rcu_head rcu; }; unsigned long age; + enum i915_map_type type; }; #endif /* INTEL_GT_BUFFER_POOL_TYPES_H */ diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c index a0fc78c89b61..94f485b591af 100644 --- a/drivers/gpu/drm/i915/gt/intel_lrc.c +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c @@ -1035,7 +1035,7 @@ gen12_emit_indirect_ctx_xcs(const struct intel_context *ce, u32 *cs) return cs; } -static inline u32 context_wa_bb_offset(const struct intel_context *ce) +static u32 context_wa_bb_offset(const struct intel_context *ce) { return PAGE_SIZE * ce->wa_bb_page; } @@ -1098,7 +1098,7 @@ setup_indirect_ctx_bb(const struct intel_context *ce, * engine info, SW context ID and SW counter need to form a unique number * (Context ID) per lrc.
*/ -static inline u32 lrc_descriptor(const struct intel_context *ce) +static u32 lrc_descriptor(const struct intel_context *ce) { u32 desc; diff --git a/drivers/gpu/drm/i915/gt/intel_mocs.c b/drivers/gpu/drm/i915/gt/intel_mocs.c index c4512ee4daf2..8acb84960cd0 100644 --- a/drivers/gpu/drm/i915/gt/intel_mocs.c +++ b/drivers/gpu/drm/i915/gt/intel_mocs.c @@ -472,7 +472,7 @@ static u16 get_entry_l3cc(const struct drm_i915_mocs_table *table, return table->table[I915_MOCS_PTE].l3cc_value; } -static inline u32 l3cc_combine(u16 low, u16 high) +static u32 l3cc_combine(u16 low, u16 high) { return low | (u32)high << 16; } diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c index 46d9aceda64c..96b85a10ef33 100644 --- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c +++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c @@ -80,7 +80,7 @@ void free_px(struct i915_address_space *vm, struct i915_page_table *pt, int lvl) kfree(pt); } -static inline void +static void write_dma_entry(struct drm_i915_gem_object * const pdma, const unsigned short idx, const u64 encoded_entry) diff --git a/drivers/gpu/drm/i915/gt/intel_rc6.c b/drivers/gpu/drm/i915/gt/intel_rc6.c index d7b8e4457fc2..35504c97f11d 100644 --- a/drivers/gpu/drm/i915/gt/intel_rc6.c +++ b/drivers/gpu/drm/i915/gt/intel_rc6.c @@ -49,7 +49,7 @@ static struct drm_i915_private *rc6_to_i915(struct intel_rc6 *rc) return rc6_to_gt(rc)->i915; } -static inline void set(struct intel_uncore *uncore, i915_reg_t reg, u32 val) +static void set(struct intel_uncore *uncore, i915_reg_t reg, u32 val) { intel_uncore_write_fw(uncore, reg, val); } diff --git a/drivers/gpu/drm/i915/gt/intel_region_lmem.c b/drivers/gpu/drm/i915/gt/intel_region_lmem.c index 83992312a34b..60393ce5614d 100644 --- a/drivers/gpu/drm/i915/gt/intel_region_lmem.c +++ b/drivers/gpu/drm/i915/gt/intel_region_lmem.c @@ -98,7 +98,7 @@ region_lmem_init(struct intel_memory_region *mem) static const struct intel_memory_region_ops intel_region_lmem_ops = { .init = region_lmem_init, .release = region_lmem_release, - .create_object = __i915_gem_lmem_object_create, + .init_object = __i915_gem_lmem_object_init, }; struct intel_memory_region * diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c index 9d177297db79..61410cd62927 100644 --- a/drivers/gpu/drm/i915/gt/intel_reset.c +++ b/drivers/gpu/drm/i915/gt/intel_reset.c @@ -151,8 +151,7 @@ static void mark_innocent(struct i915_request *rq) void __i915_request_reset(struct i915_request *rq, bool guilty) { RQ_TRACE(rq, "guilty? %s\n", yesno(guilty)); - - GEM_BUG_ON(i915_request_completed(rq)); + GEM_BUG_ON(__i915_request_is_complete(rq)); rcu_read_lock(); /* protect the GEM context */ if (guilty) { @@ -1110,7 +1109,7 @@ error: goto finish; } -static inline int intel_gt_reset_engine(struct intel_engine_cs *engine) +static int intel_gt_reset_engine(struct intel_engine_cs *engine) { return __intel_gt_reset(engine->gt, engine->mask); } diff --git a/drivers/gpu/drm/i915/gt/intel_ring.c b/drivers/gpu/drm/i915/gt/intel_ring.c index 06385550450c..78d1360caa0f 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring.c +++ b/drivers/gpu/drm/i915/gt/intel_ring.c @@ -42,7 +42,7 @@ int intel_ring_pin(struct intel_ring *ring, struct i915_gem_ww_ctx *ww) /* Ring wraparound at offset 0 sometimes hangs. No idea why. 
*/ flags = PIN_OFFSET_BIAS | i915_ggtt_pin_bias(vma); - if (vma->obj->stolen) + if (i915_gem_object_is_stolen(vma->obj)) flags |= PIN_MAPPABLE; else flags |= PIN_HIGH; diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c index 20f42722be8b..4984ff565424 100644 --- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c @@ -122,31 +122,27 @@ static void set_hwsp(struct intel_engine_cs *engine, u32 offset) hwsp = RING_HWS_PGA(engine->mmio_base); } - intel_uncore_write(engine->uncore, hwsp, offset); - intel_uncore_posting_read(engine->uncore, hwsp); + intel_uncore_write_fw(engine->uncore, hwsp, offset); + intel_uncore_posting_read_fw(engine->uncore, hwsp); } static void flush_cs_tlb(struct intel_engine_cs *engine) { - struct drm_i915_private *dev_priv = engine->i915; - - if (!IS_GEN_RANGE(dev_priv, 6, 7)) + if (!IS_GEN_RANGE(engine->i915, 6, 7)) return; /* ring should be idle before issuing a sync flush*/ - drm_WARN_ON(&dev_priv->drm, - (ENGINE_READ(engine, RING_MI_MODE) & MODE_IDLE) == 0); - - ENGINE_WRITE(engine, RING_INSTPM, - _MASKED_BIT_ENABLE(INSTPM_TLB_INVALIDATE | - INSTPM_SYNC_FLUSH)); - if (intel_wait_for_register(engine->uncore, - RING_INSTPM(engine->mmio_base), - INSTPM_SYNC_FLUSH, 0, - 1000)) - drm_err(&dev_priv->drm, - "%s: wait for SyncFlush to complete for TLB invalidation timed out\n", - engine->name); + GEM_DEBUG_WARN_ON((ENGINE_READ(engine, RING_MI_MODE) & MODE_IDLE) == 0); + + ENGINE_WRITE_FW(engine, RING_INSTPM, + _MASKED_BIT_ENABLE(INSTPM_TLB_INVALIDATE | + INSTPM_SYNC_FLUSH)); + if (__intel_wait_for_register_fw(engine->uncore, + RING_INSTPM(engine->mmio_base), + INSTPM_SYNC_FLUSH, 0, + 2000, 0, NULL)) + ENGINE_TRACE(engine, + "wait for SyncFlush to complete for TLB invalidation timed out\n"); } static void ring_setup_status_page(struct intel_engine_cs *engine) @@ -157,21 +153,6 @@ static void ring_setup_status_page(struct intel_engine_cs *engine) flush_cs_tlb(engine); } -static bool stop_ring(struct intel_engine_cs *engine) -{ - intel_engine_stop_cs(engine); - - ENGINE_WRITE(engine, RING_HEAD, ENGINE_READ(engine, RING_TAIL)); - - ENGINE_WRITE(engine, RING_HEAD, 0); - ENGINE_WRITE(engine, RING_TAIL, 0); - - /* The ring must be empty before it is disabled */ - ENGINE_WRITE(engine, RING_CTL, 0); - - return (ENGINE_READ(engine, RING_HEAD) & HEAD_ADDR) == 0; -} - static struct i915_address_space *vm_alias(struct i915_address_space *vm) { if (i915_is_ggtt(vm)) @@ -189,9 +170,16 @@ static void set_pp_dir(struct intel_engine_cs *engine) { struct i915_address_space *vm = vm_alias(engine->gt->vm); - if (vm) { - ENGINE_WRITE(engine, RING_PP_DIR_DCLV, PP_DIR_DCLV_2G); - ENGINE_WRITE(engine, RING_PP_DIR_BASE, pp_dir(vm)); + if (!vm) + return; + + ENGINE_WRITE_FW(engine, RING_PP_DIR_DCLV, PP_DIR_DCLV_2G); + ENGINE_WRITE_FW(engine, RING_PP_DIR_BASE, pp_dir(vm)); + + if (INTEL_GEN(engine->i915) >= 7) { + ENGINE_WRITE_FW(engine, + RING_MODE_GEN7, + _MASKED_BIT_ENABLE(GFX_PPGTT_ENABLE)); } } @@ -199,38 +187,10 @@ static int xcs_resume(struct intel_engine_cs *engine) { struct drm_i915_private *dev_priv = engine->i915; struct intel_ring *ring = engine->legacy.ring; - int ret = 0; ENGINE_TRACE(engine, "ring:{HEAD:%04x, TAIL:%04x}\n", ring->head, ring->tail); - intel_uncore_forcewake_get(engine->uncore, FORCEWAKE_ALL); - - /* WaClearRingBufHeadRegAtInit:ctg,elk */ - if (!stop_ring(engine)) { - /* G45 ring initialization often fails to reset head to zero */ - drm_dbg(&dev_priv->drm, "%s 
head not reset to zero " - "ctl %08x head %08x tail %08x start %08x\n", - engine->name, - ENGINE_READ(engine, RING_CTL), - ENGINE_READ(engine, RING_HEAD), - ENGINE_READ(engine, RING_TAIL), - ENGINE_READ(engine, RING_START)); - - if (!stop_ring(engine)) { - drm_err(&dev_priv->drm, - "failed to set %s head to zero " - "ctl %08x head %08x tail %08x start %08x\n", - engine->name, - ENGINE_READ(engine, RING_CTL), - ENGINE_READ(engine, RING_HEAD), - ENGINE_READ(engine, RING_TAIL), - ENGINE_READ(engine, RING_START)); - ret = -EIO; - goto out; - } - } - if (HWS_NEEDS_PHYSICAL(dev_priv)) ring_setup_phys_status_page(engine); else @@ -247,7 +207,7 @@ static int xcs_resume(struct intel_engine_cs *engine) * also enforces ordering), otherwise the hw might lose the new ring * register values. */ - ENGINE_WRITE(engine, RING_START, i915_ggtt_offset(ring->vma)); + ENGINE_WRITE_FW(engine, RING_START, i915_ggtt_offset(ring->vma)); /* Check that the ring offsets point within the ring! */ GEM_BUG_ON(!intel_ring_offset_valid(ring, ring->head)); @@ -257,46 +217,44 @@ static int xcs_resume(struct intel_engine_cs *engine) set_pp_dir(engine); /* First wake the ring up to an empty/idle ring */ - ENGINE_WRITE(engine, RING_HEAD, ring->head); - ENGINE_WRITE(engine, RING_TAIL, ring->head); + ENGINE_WRITE_FW(engine, RING_HEAD, ring->head); + ENGINE_WRITE_FW(engine, RING_TAIL, ring->head); ENGINE_POSTING_READ(engine, RING_TAIL); - ENGINE_WRITE(engine, RING_CTL, RING_CTL_SIZE(ring->size) | RING_VALID); + ENGINE_WRITE_FW(engine, RING_CTL, + RING_CTL_SIZE(ring->size) | RING_VALID); /* If the head is still not zero, the ring is dead */ - if (intel_wait_for_register(engine->uncore, - RING_CTL(engine->mmio_base), - RING_VALID, RING_VALID, - 50)) { - drm_err(&dev_priv->drm, "%s initialization failed " - "ctl %08x (valid? %d) head %08x [%08x] tail %08x [%08x] start %08x [expected %08x]\n", - engine->name, - ENGINE_READ(engine, RING_CTL), - ENGINE_READ(engine, RING_CTL) & RING_VALID, - ENGINE_READ(engine, RING_HEAD), ring->head, - ENGINE_READ(engine, RING_TAIL), ring->tail, - ENGINE_READ(engine, RING_START), - i915_ggtt_offset(ring->vma)); - ret = -EIO; - goto out; + if (__intel_wait_for_register_fw(engine->uncore, + RING_CTL(engine->mmio_base), + RING_VALID, RING_VALID, + 5000, 0, NULL)) { + drm_err(&dev_priv->drm, + "%s initialization failed; " + "ctl %08x (valid? 
%d) head %08x [%08x] tail %08x [%08x] start %08x [expected %08x]\n", + engine->name, + ENGINE_READ(engine, RING_CTL), + ENGINE_READ(engine, RING_CTL) & RING_VALID, + ENGINE_READ(engine, RING_HEAD), ring->head, + ENGINE_READ(engine, RING_TAIL), ring->tail, + ENGINE_READ(engine, RING_START), + i915_ggtt_offset(ring->vma)); + return -EIO; } if (INTEL_GEN(dev_priv) > 2) - ENGINE_WRITE(engine, - RING_MI_MODE, _MASKED_BIT_DISABLE(STOP_RING)); + ENGINE_WRITE_FW(engine, + RING_MI_MODE, _MASKED_BIT_DISABLE(STOP_RING)); /* Now awake, let it get started */ if (ring->tail != ring->head) { - ENGINE_WRITE(engine, RING_TAIL, ring->tail); + ENGINE_WRITE_FW(engine, RING_TAIL, ring->tail); ENGINE_POSTING_READ(engine, RING_TAIL); } /* Papering over lost _interrupts_ immediately following the restart */ intel_engine_signal_breadcrumbs(engine); -out: - intel_uncore_forcewake_put(engine->uncore, FORCEWAKE_ALL); - - return ret; + return 0; } static void sanitize_hwsp(struct intel_engine_cs *engine) @@ -332,11 +290,25 @@ static void xcs_sanitize(struct intel_engine_cs *engine) clflush_cache_range(engine->status_page.addr, PAGE_SIZE); } -static void reset_prepare(struct intel_engine_cs *engine) +static bool stop_ring(struct intel_engine_cs *engine) { - struct intel_uncore *uncore = engine->uncore; - const u32 base = engine->mmio_base; + /* Empty the ring by skipping to the end */ + ENGINE_WRITE_FW(engine, RING_HEAD, ENGINE_READ_FW(engine, RING_TAIL)); + ENGINE_POSTING_READ(engine, RING_HEAD); + /* The ring must be empty before it is disabled */ + ENGINE_WRITE_FW(engine, RING_CTL, 0); + ENGINE_POSTING_READ(engine, RING_CTL); + + /* Then reset the disabled ring */ + ENGINE_WRITE_FW(engine, RING_HEAD, 0); + ENGINE_WRITE_FW(engine, RING_TAIL, 0); + + return (ENGINE_READ_FW(engine, RING_HEAD) & HEAD_ADDR) == 0; +} + +static void reset_prepare(struct intel_engine_cs *engine) +{ /* * We stop engines, otherwise we might get failed reset and a * dead gpu (on elk). 
Also as modern gpu as kbl can suffer @@ -348,30 +320,35 @@ static void reset_prepare(struct intel_engine_cs *engine) * WaKBLVECSSemaphoreWaitPoll:kbl (on ALL_ENGINES) * * WaMediaResetMainRingCleanup:ctg,elk (presumably) + * WaClearRingBufHeadRegAtInit:ctg,elk * * FIXME: Wa for more modern gens needs to be validated */ ENGINE_TRACE(engine, "\n"); + intel_engine_stop_cs(engine); - if (intel_engine_stop_cs(engine)) - ENGINE_TRACE(engine, "timed out on STOP_RING\n"); - - intel_uncore_write_fw(uncore, - RING_HEAD(base), - intel_uncore_read_fw(uncore, RING_TAIL(base))); - intel_uncore_posting_read_fw(uncore, RING_HEAD(base)); /* paranoia */ - - intel_uncore_write_fw(uncore, RING_HEAD(base), 0); - intel_uncore_write_fw(uncore, RING_TAIL(base), 0); - intel_uncore_posting_read_fw(uncore, RING_TAIL(base)); - - /* The ring must be empty before it is disabled */ - intel_uncore_write_fw(uncore, RING_CTL(base), 0); + if (!stop_ring(engine)) { + /* G45 ring initialization often fails to reset head to zero */ + drm_dbg(&engine->i915->drm, + "%s head not reset to zero " + "ctl %08x head %08x tail %08x start %08x\n", + engine->name, + ENGINE_READ_FW(engine, RING_CTL), + ENGINE_READ_FW(engine, RING_HEAD), + ENGINE_READ_FW(engine, RING_TAIL), + ENGINE_READ_FW(engine, RING_START)); + } - /* Check acts as a post */ - if (intel_uncore_read_fw(uncore, RING_HEAD(base))) - ENGINE_TRACE(engine, "ring head [%x] not parked\n", - intel_uncore_read_fw(uncore, RING_HEAD(base))); + if (!stop_ring(engine)) { + drm_err(&engine->i915->drm, + "failed to set %s head to zero " + "ctl %08x head %08x tail %08x start %08x\n", + engine->name, + ENGINE_READ_FW(engine, RING_CTL), + ENGINE_READ_FW(engine, RING_HEAD), + ENGINE_READ_FW(engine, RING_TAIL), + ENGINE_READ_FW(engine, RING_START)); + } } static void reset_rewind(struct intel_engine_cs *engine, bool stalled) @@ -382,12 +359,14 @@ static void reset_rewind(struct intel_engine_cs *engine, bool stalled) rq = NULL; spin_lock_irqsave(&engine->active.lock, flags); + rcu_read_lock(); list_for_each_entry(pos, &engine->active.requests, sched.link) { - if (!i915_request_completed(pos)) { + if (!__i915_request_is_complete(pos)) { rq = pos; break; } } + rcu_read_unlock(); /* * The guilty request will get skipped on a hung engine. 
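Aside, not part of the patch: the conversions above from ENGINE_WRITE/intel_uncore_write to the _fw variants drop the per-access forcewake handling, so these paths rely on the caller already holding the required forcewake, presumably via the wakeref held across engine resume and reset. A minimal sketch of the convention these accessors assume, using only uncore helpers that appear elsewhere in this patch; illustrative only:

static void write_hwsp_example(struct intel_uncore *uncore, i915_reg_t hwsp, u32 offset)
{
	/* the _fw accessors do no forcewake bookkeeping of their own */
	intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
	intel_uncore_write_fw(uncore, hwsp, offset);
	intel_uncore_posting_read_fw(uncore, hwsp); /* flush the write */
	intel_uncore_forcewake_put(uncore, FORCEWAKE_ALL);
}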
@@ -663,9 +642,9 @@ static int load_pd_dir(struct i915_request *rq, return rq->engine->emit_flush(rq, EMIT_FLUSH); } -static inline int mi_set_context(struct i915_request *rq, - struct intel_context *ce, - u32 flags) +static int mi_set_context(struct i915_request *rq, + struct intel_context *ce, + u32 flags) { struct intel_engine_cs *engine = rq->engine; struct drm_i915_private *i915 = engine->i915; diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c index 69e1bd46cc46..ee5835c29c03 100644 --- a/drivers/gpu/drm/i915/gt/intel_rps.c +++ b/drivers/gpu/drm/i915/gt/intel_rps.c @@ -43,7 +43,7 @@ static u32 rps_pm_sanitize_mask(struct intel_rps *rps, u32 mask) return mask & ~rps->pm_intrmsk_mbz; } -static inline void set(struct intel_uncore *uncore, i915_reg_t reg, u32 val) +static void set(struct intel_uncore *uncore, i915_reg_t reg, u32 val) { intel_uncore_write_fw(uncore, reg, val); } diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c index 7fe05918a76e..037b0e3ccbed 100644 --- a/drivers/gpu/drm/i915/gt/intel_timeline.c +++ b/drivers/gpu/drm/i915/gt/intel_timeline.c @@ -582,11 +582,11 @@ int intel_timeline_read_hwsp(struct i915_request *from, rcu_read_lock(); cl = rcu_dereference(from->hwsp_cacheline); - if (i915_request_completed(from)) /* confirm cacheline is valid */ + if (i915_request_signaled(from)) /* confirm cacheline is valid */ goto unlock; if (unlikely(!i915_active_acquire_if_busy(&cl->active))) goto unlock; /* seqno wrapped and completed! */ - if (unlikely(i915_request_completed(from))) + if (unlikely(__i915_request_is_complete(from))) goto release; rcu_read_unlock(); diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c index d99773e6776e..3fdcd5ff71dd 100644 --- a/drivers/gpu/drm/i915/gt/intel_workarounds.c +++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c @@ -1304,7 +1304,7 @@ bool intel_gt_verify_workarounds(struct intel_gt *gt, const char *from) } __maybe_unused -static inline bool is_nonpriv_flags_valid(u32 flags) +static bool is_nonpriv_flags_valid(u32 flags) { /* Check only valid flag bits are set */ if (flags & ~RING_FORCE_TO_NONPRIV_MASK_VALID) diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c index 460c3e9542f4..463bb6a700c8 100644 --- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c +++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c @@ -704,6 +704,7 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active) for_each_engine(engine, gt, id) { unsigned int reset_count, reset_engine_count; + unsigned long count; IGT_TIMEOUT(end_time); if (active && !intel_engine_can_store_dword(engine)) @@ -721,6 +722,7 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active) st_engine_heartbeat_disable(engine); set_bit(I915_RESET_ENGINE + id, &gt->reset.flags); + count = 0; do { if (active) { struct i915_request *rq; @@ -770,9 +772,13 @@ static int __igt_reset_engine(struct intel_gt *gt, bool active) err = -EINVAL; break; } + + count++; } while (time_before(jiffies, end_time)); clear_bit(I915_RESET_ENGINE + id, &gt->reset.flags); st_engine_heartbeat_enable(engine); + pr_info("%s: Completed %lu %s resets\n", + engine->name, count, active ? 
"active" : "idle"); if (err) break; @@ -1623,7 +1629,8 @@ static int igt_reset_queue(void *arg) prev = rq; count++; } while (time_before(jiffies, end_time)); - pr_info("%s: Completed %d resets\n", engine->name, count); + pr_info("%s: Completed %d queued resets\n", + engine->name, count); *h.batch = MI_BATCH_BUFFER_END; intel_gt_chipset_flush(engine->gt); @@ -1720,7 +1727,8 @@ static int __igt_atomic_reset_engine(struct intel_engine_cs *engine, GEM_TRACE("i915_reset_engine(%s:%s) under %s\n", engine->name, mode, p->name); - tasklet_disable(t); + if (t->func) + tasklet_disable(t); if (strcmp(p->name, "softirq")) local_bh_disable(); p->critical_section_begin(); @@ -1730,8 +1738,10 @@ static int __igt_atomic_reset_engine(struct intel_engine_cs *engine, p->critical_section_end(); if (strcmp(p->name, "softirq")) local_bh_enable(); - tasklet_enable(t); - tasklet_hi_schedule(t); + if (t->func) { + tasklet_enable(t); + tasklet_hi_schedule(t); + } if (err) pr_err("i915_reset_engine(%s:%s) failed under %s\n", diff --git a/drivers/gpu/drm/i915/gt/selftest_reset.c b/drivers/gpu/drm/i915/gt/selftest_reset.c index b7befcfbdcde..8784257ec808 100644 --- a/drivers/gpu/drm/i915/gt/selftest_reset.c +++ b/drivers/gpu/drm/i915/gt/selftest_reset.c @@ -321,7 +321,10 @@ static int igt_atomic_engine_reset(void *arg) goto out_unlock; for_each_engine(engine, gt, id) { - tasklet_disable(&engine->execlists.tasklet); + struct tasklet_struct *t = &engine->execlists.tasklet; + + if (t->func) + tasklet_disable(t); intel_engine_pm_get(engine); for (p = igt_atomic_phases; p->name; p++) { @@ -345,8 +348,10 @@ static int igt_atomic_engine_reset(void *arg) } intel_engine_pm_put(engine); - tasklet_enable(&engine->execlists.tasklet); - tasklet_hi_schedule(&engine->execlists.tasklet); + if (t->func) { + tasklet_enable(t); + tasklet_hi_schedule(t); + } if (err) break; } diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c index 5982b62f913d..a4d8fc9e2374 100644 --- a/drivers/gpu/drm/i915/gt/shmem_utils.c +++ b/drivers/gpu/drm/i915/gt/shmem_utils.c @@ -33,7 +33,7 @@ struct file *shmem_create_from_object(struct drm_i915_gem_object *obj) struct file *file; void *ptr; - if (obj->ops == &i915_gem_shmem_ops) { + if (i915_gem_object_is_shmem(obj)) { file = obj->base.filp; atomic_long_inc(&file->f_count); return file; diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c index 6a0452815c41..6abb8f2dc33d 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c @@ -15,6 +15,29 @@ static const struct intel_uc_ops uc_ops_off; static const struct intel_uc_ops uc_ops_on; +static void uc_expand_default_options(struct intel_uc *uc) +{ + struct drm_i915_private *i915 = uc_to_gt(uc)->i915; + + if (i915->params.enable_guc != -1) + return; + + /* Don't enable GuC/HuC on pre-Gen12 */ + if (INTEL_GEN(i915) < 12) { + i915->params.enable_guc = 0; + return; + } + + /* Don't enable GuC/HuC on older Gen12 platforms */ + if (IS_TIGERLAKE(i915) || IS_ROCKETLAKE(i915)) { + i915->params.enable_guc = 0; + return; + } + + /* Default: enable HuC authentication only */ + i915->params.enable_guc = ENABLE_GUC_LOAD_HUC; +} + /* Reset GuC providing us with fresh state for both GuC and HuC. 
*/ static int __intel_uc_reset_hw(struct intel_uc *uc) @@ -52,9 +75,6 @@ static void __confirm_options(struct intel_uc *uc) yesno(intel_uc_wants_guc_submission(uc)), yesno(intel_uc_wants_huc(uc))); - if (i915->params.enable_guc == -1) - return; - if (i915->params.enable_guc == 0) { GEM_BUG_ON(intel_uc_wants_guc(uc)); GEM_BUG_ON(intel_uc_wants_guc_submission(uc)); @@ -79,8 +99,7 @@ static void __confirm_options(struct intel_uc *uc) "Incompatible option enable_guc=%d - %s\n", i915->params.enable_guc, "GuC submission is N/A"); - if (i915->params.enable_guc & ~(ENABLE_GUC_SUBMISSION | - ENABLE_GUC_LOAD_HUC)) + if (i915->params.enable_guc & ~ENABLE_GUC_MASK) drm_info(&i915->drm, "Incompatible option enable_guc=%d - %s\n", i915->params.enable_guc, "undocumented flag"); @@ -88,6 +107,8 @@ static void __confirm_options(struct intel_uc *uc) void intel_uc_init_early(struct intel_uc *uc) { + uc_expand_default_options(uc); + intel_guc_init_early(&uc->guc); intel_huc_init_early(&uc->huc); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c index 602f1a0bc587..67b06fde1225 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c @@ -152,16 +152,11 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw) uc_fw->path = NULL; } } - - /* We don't want to enable GuC/HuC on pre-Gen11 by default */ - if (i915->params.enable_guc == -1 && p < INTEL_ICELAKE) - uc_fw->path = NULL; } static const char *__override_guc_firmware_path(struct drm_i915_private *i915) { - if (i915->params.enable_guc & (ENABLE_GUC_SUBMISSION | - ENABLE_GUC_LOAD_HUC)) + if (i915->params.enable_guc & ENABLE_GUC_MASK) return i915->params.guc_firmware_path; return ""; } diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.c b/drivers/gpu/drm/i915/gvt/cmd_parser.c index 3fea967ee817..7fb91de06557 100644 --- a/drivers/gpu/drm/i915/gvt/cmd_parser.c +++ b/drivers/gpu/drm/i915/gvt/cmd_parser.c @@ -38,11 +38,17 @@ #include "i915_drv.h" #include "gt/intel_gpu_commands.h" +#include "gt/intel_lrc.h" #include "gt/intel_ring.h" +#include "gt/intel_gt_requests.h" #include "gvt.h" #include "i915_pvinfo.h" #include "trace.h" +#include "gem/i915_gem_context.h" +#include "gem/i915_gem_pm.h" +#include "gt/intel_context.h" + #define INVALID_OP (~0U) #define OP_LEN_MI 9 @@ -455,6 +461,7 @@ enum { RING_BUFFER_INSTRUCTION, BATCH_BUFFER_INSTRUCTION, BATCH_BUFFER_2ND_LEVEL, + RING_BUFFER_CTX, }; enum { @@ -496,6 +503,7 @@ struct parser_exec_state { */ int saved_buf_addr_type; bool is_ctx_wa; + bool is_init_ctx; const struct cmd_info *info; @@ -709,6 +717,11 @@ static inline u32 cmd_val(struct parser_exec_state *s, int index) return *cmd_ptr(s, index); } +static inline bool is_init_ctx(struct parser_exec_state *s) +{ + return (s->buf_type == RING_BUFFER_CTX && s->is_init_ctx); +} + static void parser_exec_state_dump(struct parser_exec_state *s) { int cnt = 0; @@ -722,7 +735,8 @@ static void parser_exec_state_dump(struct parser_exec_state *s) gvt_dbg_cmd(" %s %s ip_gma(%08lx) ", s->buf_type == RING_BUFFER_INSTRUCTION ? - "RING_BUFFER" : "BATCH_BUFFER", + "RING_BUFFER" : ((s->buf_type == RING_BUFFER_CTX) ? + "CTX_BUFFER" : "BATCH_BUFFER"), s->buf_addr_type == GTT_BUFFER ? 
"GTT" : "PPGTT", s->ip_gma); @@ -757,7 +771,8 @@ static inline void update_ip_va(struct parser_exec_state *s) if (WARN_ON(s->ring_head == s->ring_tail)) return; - if (s->buf_type == RING_BUFFER_INSTRUCTION) { + if (s->buf_type == RING_BUFFER_INSTRUCTION || + s->buf_type == RING_BUFFER_CTX) { unsigned long ring_top = s->ring_start + s->ring_size; if (s->ring_head > s->ring_tail) { @@ -821,68 +836,12 @@ static inline int cmd_length(struct parser_exec_state *s) *addr = val; \ } while (0) -static bool is_shadowed_mmio(unsigned int offset) -{ - bool ret = false; - - if ((offset == 0x2168) || /*BB current head register UDW */ - (offset == 0x2140) || /*BB current header register */ - (offset == 0x211c) || /*second BB header register UDW */ - (offset == 0x2114)) { /*second BB header register UDW */ - ret = true; - } - return ret; -} - -static inline bool is_force_nonpriv_mmio(unsigned int offset) -{ - return (offset >= 0x24d0 && offset < 0x2500); -} - -static int force_nonpriv_reg_handler(struct parser_exec_state *s, - unsigned int offset, unsigned int index, char *cmd) -{ - struct intel_gvt *gvt = s->vgpu->gvt; - unsigned int data; - u32 ring_base; - u32 nopid; - - if (!strcmp(cmd, "lri")) - data = cmd_val(s, index + 1); - else { - gvt_err("Unexpected forcenonpriv 0x%x write from cmd %s\n", - offset, cmd); - return -EINVAL; - } - - ring_base = s->engine->mmio_base; - nopid = i915_mmio_reg_offset(RING_NOPID(ring_base)); - - if (!intel_gvt_in_force_nonpriv_whitelist(gvt, data) && - data != nopid) { - gvt_err("Unexpected forcenonpriv 0x%x LRI write, value=0x%x\n", - offset, data); - patch_value(s, cmd_ptr(s, index), nopid); - return 0; - } - return 0; -} - static inline bool is_mocs_mmio(unsigned int offset) { return ((offset >= 0xc800) && (offset <= 0xcff8)) || ((offset >= 0xb020) && (offset <= 0xb0a0)); } -static int mocs_cmd_reg_handler(struct parser_exec_state *s, - unsigned int offset, unsigned int index) -{ - if (!is_mocs_mmio(offset)) - return -EINVAL; - vgpu_vreg(s->vgpu, offset) = cmd_val(s, index + 1); - return 0; -} - static int is_cmd_update_pdps(unsigned int offset, struct parser_exec_state *s) { @@ -930,6 +889,7 @@ static int cmd_reg_handler(struct parser_exec_state *s, struct intel_vgpu *vgpu = s->vgpu; struct intel_gvt *gvt = vgpu->gvt; u32 ctx_sr_ctl; + u32 *vreg, vreg_old; if (offset + 4 > gvt->device_info.mmio_size) { gvt_vgpu_err("%s access to (%x) outside of MMIO range\n", @@ -937,34 +897,101 @@ static int cmd_reg_handler(struct parser_exec_state *s, return -EFAULT; } + if (is_init_ctx(s)) { + struct intel_gvt_mmio_info *mmio_info; + + intel_gvt_mmio_set_cmd_accessible(gvt, offset); + mmio_info = intel_gvt_find_mmio_info(gvt, offset); + if (mmio_info && mmio_info->write) + intel_gvt_mmio_set_cmd_write_patch(gvt, offset); + return 0; + } + if (!intel_gvt_mmio_is_cmd_accessible(gvt, offset)) { gvt_vgpu_err("%s access to non-render register (%x)\n", cmd, offset); return -EBADRQC; } - if (is_shadowed_mmio(offset)) { - gvt_vgpu_err("found access of shadowed MMIO %x\n", offset); - return 0; + if (!strncmp(cmd, "srm", 3) || + !strncmp(cmd, "lrm", 3)) { + if (offset != i915_mmio_reg_offset(GEN8_L3SQCREG4) && + offset != 0x21f0) { + gvt_vgpu_err("%s access to register (%x)\n", + cmd, offset); + return -EPERM; + } else + return 0; } - if (is_mocs_mmio(offset) && - mocs_cmd_reg_handler(s, offset, index)) - return -EINVAL; + if (!strncmp(cmd, "lrr-src", 7) || + !strncmp(cmd, "lrr-dst", 7)) { + gvt_vgpu_err("not allowed cmd %s\n", cmd); + return -EPERM; + } + + if (!strncmp(cmd, "pipe_ctrl", 
9)) { + /* TODO: add LRI POST logic here */ + return 0; + } - if (is_force_nonpriv_mmio(offset) && - force_nonpriv_reg_handler(s, offset, index, cmd)) + if (strncmp(cmd, "lri", 3)) return -EPERM; + /* below are all lri handlers */ + vreg = &vgpu_vreg(s->vgpu, offset); + if (!intel_gvt_mmio_is_cmd_accessible(gvt, offset)) { + gvt_vgpu_err("%s access to non-render register (%x)\n", + cmd, offset); + return -EBADRQC; + } + + if (is_cmd_update_pdps(offset, s) && + cmd_pdp_mmio_update_handler(s, offset, index)) + return -EINVAL; + if (offset == i915_mmio_reg_offset(DERRMR) || offset == i915_mmio_reg_offset(FORCEWAKE_MT)) { /* Writing to HW VGT_PVINFO_PAGE offset will be discarded */ patch_value(s, cmd_ptr(s, index), VGT_PVINFO_PAGE); } - if (is_cmd_update_pdps(offset, s) && - cmd_pdp_mmio_update_handler(s, offset, index)) - return -EINVAL; + if (is_mocs_mmio(offset)) + *vreg = cmd_val(s, index + 1); + + vreg_old = *vreg; + + if (intel_gvt_mmio_is_cmd_write_patch(gvt, offset)) { + u32 cmdval_new, cmdval; + struct intel_gvt_mmio_info *mmio_info; + + cmdval = cmd_val(s, index + 1); + + mmio_info = intel_gvt_find_mmio_info(gvt, offset); + if (!mmio_info) { + cmdval_new = cmdval; + } else { + u64 ro_mask = mmio_info->ro_mask; + int ret; + + if (likely(!ro_mask)) + ret = mmio_info->write(s->vgpu, offset, + &cmdval, 4); + else { + gvt_vgpu_err("try to write RO reg %x\n", + offset); + ret = -EBADRQC; + } + if (ret) + return ret; + cmdval_new = *vreg; + } + if (cmdval_new != cmdval) + patch_value(s, cmd_ptr(s, index+1), cmdval_new); + } + + /* only patch cmd. restore vreg value if changed in mmio write handler */ + *vreg = vreg_old; /* TODO * In order to let workload with inhibit context to generate @@ -1216,6 +1243,8 @@ static int cmd_handler_mi_batch_buffer_end(struct parser_exec_state *s) s->buf_type = BATCH_BUFFER_INSTRUCTION; ret = ip_gma_set(s, s->ret_ip_gma_bb); s->buf_addr_type = s->saved_buf_addr_type; + } else if (s->buf_type == RING_BUFFER_CTX) { + ret = ip_gma_set(s, s->ring_tail); } else { s->buf_type = RING_BUFFER_INSTRUCTION; s->buf_addr_type = GTT_BUFFER; @@ -2764,7 +2793,8 @@ static int command_scan(struct parser_exec_state *s, gma_bottom = rb_start + rb_len; while (s->ip_gma != gma_tail) { - if (s->buf_type == RING_BUFFER_INSTRUCTION) { + if (s->buf_type == RING_BUFFER_INSTRUCTION || + s->buf_type == RING_BUFFER_CTX) { if (!(s->ip_gma >= rb_start) || !(s->ip_gma < gma_bottom)) { gvt_vgpu_err("ip_gma %lx out of ring scope." @@ -3057,6 +3087,171 @@ int intel_gvt_scan_and_shadow_wa_ctx(struct intel_shadow_wa_ctx *wa_ctx) return 0; } +/* generate dummy contexts by sending empty requests to HW, and let + * the HW fill Engine Contexts. 
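 * (Aside, not part of the patch: while these init contexts are scanned,
 * s.is_init_ctx is true, so the cmd_reg_handler() branch added earlier
 * records every LRI target via intel_gvt_mmio_set_cmd_accessible(), and
 * marks registers that have a write handler with F_CMD_WRITE_PATCH,
 * instead of validating against the whitelist; later guest-context scans
 * run with is_init_ctx false and are checked against that recorded set.)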
These dummy contexts are used for + * initialization purposes (update reg whitelist), so referred to as + * init context here + */ +void intel_gvt_update_reg_whitelist(struct intel_vgpu *vgpu) +{ + struct intel_gvt *gvt = vgpu->gvt; + struct drm_i915_private *dev_priv = gvt->gt->i915; + struct intel_engine_cs *engine; + enum intel_engine_id id; + const unsigned long start = LRC_STATE_PN * PAGE_SIZE; + struct i915_request *rq; + struct intel_vgpu_submission *s = &vgpu->submission; + struct i915_request *requests[I915_NUM_ENGINES] = {}; + bool is_ctx_pinned[I915_NUM_ENGINES] = {}; + int ret; + + if (gvt->is_reg_whitelist_updated) + return; + + for_each_engine(engine, &dev_priv->gt, id) { + ret = intel_context_pin(s->shadow[id]); + if (ret) { + gvt_vgpu_err("fail to pin shadow ctx\n"); + goto out; + } + is_ctx_pinned[id] = true; + + rq = i915_request_create(s->shadow[id]); + if (IS_ERR(rq)) { + gvt_vgpu_err("fail to alloc default request\n"); + ret = -EIO; + goto out; + } + requests[id] = i915_request_get(rq); + i915_request_add(rq); + } + + if (intel_gt_wait_for_idle(&dev_priv->gt, + I915_GEM_IDLE_TIMEOUT) == -ETIME) { + ret = -EIO; + goto out; + } + + /* scan init ctx to update cmd accessible list */ + for_each_engine(engine, &dev_priv->gt, id) { + int size = engine->context_size - PAGE_SIZE; + void *vaddr; + struct parser_exec_state s; + struct drm_i915_gem_object *obj; + struct i915_request *rq; + + rq = requests[id]; + GEM_BUG_ON(!i915_request_completed(rq)); + GEM_BUG_ON(!intel_context_is_pinned(rq->context)); + obj = rq->context->state->obj; + + if (!obj) { + ret = -EIO; + goto out; + } + + i915_gem_object_set_cache_coherency(obj, + I915_CACHE_LLC); + + vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); + if (IS_ERR(vaddr)) { + gvt_err("failed to pin init ctx obj, ring=%d, err=%lx\n", + id, PTR_ERR(vaddr)); + goto out; + } + + s.buf_type = RING_BUFFER_CTX; + s.buf_addr_type = GTT_BUFFER; + s.vgpu = vgpu; + s.engine = engine; + s.ring_start = 0; + s.ring_size = size; + s.ring_head = 0; + s.ring_tail = size; + s.rb_va = vaddr + start; + s.workload = NULL; + s.is_ctx_wa = false; + s.is_init_ctx = true; + + /* skipping the first RING_CTX_SIZE(0x50) dwords */ + ret = ip_gma_set(&s, RING_CTX_SIZE); + if (ret) { + i915_gem_object_unpin_map(obj); + goto out; + } + + ret = command_scan(&s, 0, size, 0, size); + if (ret) + gvt_err("Scan init ctx error\n"); + + i915_gem_object_unpin_map(obj); + } + +out: + if (!ret) + gvt->is_reg_whitelist_updated = true; + + for (id = 0; id < I915_NUM_ENGINES ; id++) { + if (requests[id]) + i915_request_put(requests[id]); + + if (is_ctx_pinned[id]) + intel_context_unpin(s->shadow[id]); + } +} + +int intel_gvt_scan_engine_context(struct intel_vgpu_workload *workload) +{ + struct intel_vgpu *vgpu = workload->vgpu; + unsigned long gma_head, gma_tail, gma_start, ctx_size; + struct parser_exec_state s; + int ring_id = workload->engine->id; + struct intel_context *ce = vgpu->submission.shadow[ring_id]; + int ret; + + GEM_BUG_ON(atomic_read(&ce->pin_count) < 0); + + ctx_size = workload->engine->context_size - PAGE_SIZE; + + /* Only the ring context is loaded to HW for inhibit context, no need to + * scan engine context + */ + if (is_inhibit_context(ce)) + return 0; + + gma_start = i915_ggtt_offset(ce->state) + LRC_STATE_PN*PAGE_SIZE; + gma_head = 0; + gma_tail = ctx_size; + + s.buf_type = RING_BUFFER_CTX; + s.buf_addr_type = GTT_BUFFER; + s.vgpu = workload->vgpu; + s.engine = workload->engine; + s.ring_start = gma_start; + s.ring_size = ctx_size; + s.ring_head = gma_start + 
gma_head; + s.ring_tail = gma_start + gma_tail; + s.rb_va = ce->lrc_reg_state; + s.workload = workload; + s.is_ctx_wa = false; + s.is_init_ctx = false; + + /* don't scan the first RING_CTX_SIZE(0x50) dwords, as it's ring + * context + */ + ret = ip_gma_set(&s, gma_start + gma_head + RING_CTX_SIZE); + if (ret) + goto out; + + ret = command_scan(&s, gma_head, gma_tail, + gma_start, ctx_size); +out: + if (ret) + gvt_vgpu_err("scan shadow ctx error\n"); + + return ret; +} + static int init_cmd_table(struct intel_gvt *gvt) { unsigned int gen_type = intel_gvt_get_device_type(gvt); diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.h b/drivers/gpu/drm/i915/gvt/cmd_parser.h index ab25d151932a..416d345e2816 100644 --- a/drivers/gpu/drm/i915/gvt/cmd_parser.h +++ b/drivers/gpu/drm/i915/gvt/cmd_parser.h @@ -40,6 +40,7 @@ struct intel_gvt; struct intel_shadow_wa_ctx; +struct intel_vgpu; struct intel_vgpu_workload; void intel_gvt_clean_cmd_parser(struct intel_gvt *gvt); @@ -50,4 +51,8 @@ int intel_gvt_scan_and_shadow_ringbuffer(struct intel_vgpu_workload *workload); int intel_gvt_scan_and_shadow_wa_ctx(struct intel_shadow_wa_ctx *wa_ctx); +void intel_gvt_update_reg_whitelist(struct intel_vgpu *vgpu); + +int intel_gvt_scan_engine_context(struct intel_vgpu_workload *workload); + #endif diff --git a/drivers/gpu/drm/i915/gvt/display.c b/drivers/gpu/drm/i915/gvt/display.c index a15f87539657..62a5b0dd2003 100644 --- a/drivers/gpu/drm/i915/gvt/display.c +++ b/drivers/gpu/drm/i915/gvt/display.c @@ -217,6 +217,15 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu) DDI_BUF_CTL_ENABLE); vgpu_vreg_t(vgpu, DDI_BUF_CTL(port)) |= DDI_BUF_IS_IDLE; } + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= + ~(PORTA_HOTPLUG_ENABLE | PORTA_HOTPLUG_STATUS_MASK); + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= + ~(PORTB_HOTPLUG_ENABLE | PORTB_HOTPLUG_STATUS_MASK); + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= + ~(PORTC_HOTPLUG_ENABLE | PORTC_HOTPLUG_STATUS_MASK); + /* No hpd_invert set in vgpu vbt, need to clear invert mask */ + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= ~BXT_DDI_HPD_INVERT_MASK; + vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= ~BXT_DE_PORT_HOTPLUG_MASK; vgpu_vreg_t(vgpu, BXT_P_CR_GT_DISP_PWRON) &= ~(BIT(0) | BIT(1)); vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) &= @@ -273,6 +282,8 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu) vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_EDP)) |= (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST | TRANS_DDI_FUNC_ENABLE); + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= + PORTA_HOTPLUG_ENABLE; vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= GEN8_DE_PORT_HOTPLUG(HPD_PORT_A); } @@ -301,6 +312,8 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu) (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST | (PORT_B << TRANS_DDI_PORT_SHIFT) | TRANS_DDI_FUNC_ENABLE); + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= + PORTB_HOTPLUG_ENABLE; vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= GEN8_DE_PORT_HOTPLUG(HPD_PORT_B); } @@ -329,6 +342,8 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu) (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST | (PORT_B << TRANS_DDI_PORT_SHIFT) | TRANS_DDI_FUNC_ENABLE); + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= + PORTC_HOTPLUG_ENABLE; vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= GEN8_DE_PORT_HOTPLUG(HPD_PORT_C); } @@ -661,44 +676,62 @@ void intel_vgpu_emulate_hotplug(struct intel_vgpu *vgpu, bool connected) PORTD_HOTPLUG_STATUS_MASK; intel_vgpu_trigger_virtual_event(vgpu, DP_D_HOTPLUG); } else if (IS_BROXTON(i915)) { - if (connected) { - if 
(intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) { + if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) { + if (connected) { vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= GEN8_DE_PORT_HOTPLUG(HPD_PORT_A); + } else { + vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= + ~GEN8_DE_PORT_HOTPLUG(HPD_PORT_A); } - if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) { - vgpu_vreg_t(vgpu, SFUSE_STRAP) |= - SFUSE_STRAP_DDIB_DETECTED; + vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |= + GEN8_DE_PORT_HOTPLUG(HPD_PORT_A); + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= + ~PORTA_HOTPLUG_STATUS_MASK; + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= + PORTA_HOTPLUG_LONG_DETECT; + intel_vgpu_trigger_virtual_event(vgpu, DP_A_HOTPLUG); + } + if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) { + if (connected) { vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= GEN8_DE_PORT_HOTPLUG(HPD_PORT_B); - } - if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) { vgpu_vreg_t(vgpu, SFUSE_STRAP) |= - SFUSE_STRAP_DDIC_DETECTED; - vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= - GEN8_DE_PORT_HOTPLUG(HPD_PORT_C); - } - } else { - if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) { + SFUSE_STRAP_DDIB_DETECTED; + } else { vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= - ~GEN8_DE_PORT_HOTPLUG(HPD_PORT_A); - } - if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) { + ~GEN8_DE_PORT_HOTPLUG(HPD_PORT_B); vgpu_vreg_t(vgpu, SFUSE_STRAP) &= ~SFUSE_STRAP_DDIB_DETECTED; - vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= - ~GEN8_DE_PORT_HOTPLUG(HPD_PORT_B); } - if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) { - vgpu_vreg_t(vgpu, SFUSE_STRAP) &= - ~SFUSE_STRAP_DDIC_DETECTED; + vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |= + GEN8_DE_PORT_HOTPLUG(HPD_PORT_B); + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= + ~PORTB_HOTPLUG_STATUS_MASK; + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= + PORTB_HOTPLUG_LONG_DETECT; + intel_vgpu_trigger_virtual_event(vgpu, DP_B_HOTPLUG); + } + if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) { + if (connected) { + vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |= + GEN8_DE_PORT_HOTPLUG(HPD_PORT_C); + vgpu_vreg_t(vgpu, SFUSE_STRAP) |= + SFUSE_STRAP_DDIC_DETECTED; + } else { vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= ~GEN8_DE_PORT_HOTPLUG(HPD_PORT_C); + vgpu_vreg_t(vgpu, SFUSE_STRAP) &= + ~SFUSE_STRAP_DDIC_DETECTED; } + vgpu_vreg_t(vgpu, GEN8_DE_PORT_IIR) |= + GEN8_DE_PORT_HOTPLUG(HPD_PORT_C); + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) &= + ~PORTC_HOTPLUG_STATUS_MASK; + vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= + PORTC_HOTPLUG_LONG_DETECT; + intel_vgpu_trigger_virtual_event(vgpu, DP_C_HOTPLUG); } - vgpu_vreg_t(vgpu, PCH_PORT_HOTPLUG) |= - PORTB_HOTPLUG_STATUS_MASK; - intel_vgpu_trigger_virtual_event(vgpu, DP_B_HOTPLUG); } } diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h index 62a4807424bb..03c993d68f10 100644 --- a/drivers/gpu/drm/i915/gvt/gvt.h +++ b/drivers/gpu/drm/i915/gvt/gvt.h @@ -248,7 +248,7 @@ struct gvt_mmio_block { #define INTEL_GVT_MMIO_HASH_BITS 11 struct intel_gvt_mmio { - u8 *mmio_attribute; + u16 *mmio_attribute; /* Register contains RO bits */ #define F_RO (1 << 0) /* Register contains graphics address */ @@ -267,6 +267,8 @@ struct intel_gvt_mmio { * logical context image */ #define F_SR_IN_CTX (1 << 7) +/* Value of command write of this reg needs to be patched */ +#define F_CMD_WRITE_PATCH (1 << 8) struct gvt_mmio_block *mmio_block; unsigned int num_mmio_block; @@ -333,6 +335,7 @@ struct intel_gvt { u32 *mocs_mmio_offset_list; u32 mocs_mmio_offset_list_cnt; } engine_mmio_list; + bool is_reg_whitelist_updated; struct dentry *debugfs_root; }; @@ -416,6 +419,9 @@ int 
intel_gvt_load_firmware(struct intel_gvt *gvt); #define vgpu_fence_base(vgpu) (vgpu->fence.base) #define vgpu_fence_sz(vgpu) (vgpu->fence.size) +/* ring context size, i.e. the first 0x50 dwords */ +#define RING_CTX_SIZE 320 + struct intel_vgpu_creation_params { __u64 handle; __u64 low_gm_sz; /* in MB */ @@ -687,6 +693,35 @@ static inline void intel_gvt_mmio_set_sr_in_ctx( } void intel_gvt_debugfs_add_vgpu(struct intel_vgpu *vgpu); +/** + * intel_gvt_mmio_set_cmd_write_patch - + * mark an MMIO whose cmd write needs to be + * patched + * @gvt: a GVT device + * @offset: register offset + * + */ +static inline void intel_gvt_mmio_set_cmd_write_patch( + struct intel_gvt *gvt, unsigned int offset) +{ + gvt->mmio.mmio_attribute[offset >> 2] |= F_CMD_WRITE_PATCH; +} + +/** + * intel_gvt_mmio_is_cmd_write_patch - check if an mmio's cmd access needs to + * be patched + * @gvt: a GVT device + * @offset: register offset + * + * Returns: + * True if GPU command write to an MMIO should be patched + */ +static inline bool intel_gvt_mmio_is_cmd_write_patch( + struct intel_gvt *gvt, unsigned int offset) +{ + return gvt->mmio.mmio_attribute[offset >> 2] & F_CMD_WRITE_PATCH; +} + void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu); void intel_gvt_debugfs_init(struct intel_gvt *gvt); void intel_gvt_debugfs_clean(struct intel_gvt *gvt); diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c index 0d124ced5f94..6eeaeecb7f85 100644 --- a/drivers/gpu/drm/i915/gvt/handlers.c +++ b/drivers/gpu/drm/i915/gvt/handlers.c @@ -83,7 +83,7 @@ static void write_vreg(struct intel_vgpu *vgpu, unsigned int offset, memcpy(&vgpu_vreg(vgpu, offset), p_data, bytes); } -static struct intel_gvt_mmio_info *find_mmio_info(struct intel_gvt *gvt, +struct intel_gvt_mmio_info *intel_gvt_find_mmio_info(struct intel_gvt *gvt, unsigned int offset) { struct intel_gvt_mmio_info *e; @@ -96,7 +96,7 @@ static struct intel_gvt_mmio_info *find_mmio_info(struct intel_gvt *gvt, } static int new_mmio_info(struct intel_gvt *gvt, - u32 offset, u8 flags, u32 size, + u32 offset, u16 flags, u32 size, u32 addr_mask, u32 ro_mask, u32 device, gvt_mmio_func read, gvt_mmio_func write) { @@ -118,7 +118,7 @@ static int new_mmio_info(struct intel_gvt *gvt, return -ENOMEM; info->offset = i; - p = find_mmio_info(gvt, info->offset); + p = intel_gvt_find_mmio_info(gvt, info->offset); if (p) { WARN(1, "dup mmio definition offset %x\n", info->offset); @@ -1965,7 +1965,8 @@ static int init_generic_mmio_info(struct intel_gvt *gvt) /* RING MODE */ #define RING_REG(base) _MMIO((base) + 0x29c) - MMIO_RING_DFH(RING_REG, D_ALL, F_MODE_MASK | F_CMD_ACCESS, NULL, + MMIO_RING_DFH(RING_REG, D_ALL, + F_MODE_MASK | F_CMD_ACCESS | F_CMD_WRITE_PATCH, NULL, ring_mode_mmio_write); #undef RING_REG @@ -2885,8 +2886,8 @@ static int init_bdw_mmio_info(struct intel_gvt *gvt) MMIO_DFH(_MMIO(0xb10c), D_BDW, F_CMD_ACCESS, NULL, NULL); MMIO_D(_MMIO(0xb110), D_BDW); - MMIO_F(_MMIO(0x24d0), 48, F_CMD_ACCESS, 0, 0, D_BDW_PLUS, - NULL, force_nonpriv_write); + MMIO_F(_MMIO(0x24d0), 48, F_CMD_ACCESS | F_CMD_WRITE_PATCH, 0, 0, + D_BDW_PLUS, NULL, force_nonpriv_write); MMIO_D(_MMIO(0x44484), D_BDW_PLUS); MMIO_D(_MMIO(0x4448c), D_BDW_PLUS); @@ -3626,7 +3627,7 @@ int intel_vgpu_mmio_reg_rw(struct intel_vgpu *vgpu, unsigned int offset, /* * Normal tracked MMIOs. 
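 * (Aside, not part of the patch: the lookup below uses the renamed
 * intel_gvt_find_mmio_info(), now exported so the command parser can share
 * it; note that mmio_attribute was widened from u8 to u16 earlier in this
 * patch so that the new F_CMD_WRITE_PATCH flag, bit 8, still fits.)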
*/ - mmio_info = find_mmio_info(gvt, offset); + mmio_info = intel_gvt_find_mmio_info(gvt, offset); if (!mmio_info) { gvt_dbg_mmio("untracked MMIO %08x len %d\n", offset, bytes); goto default_rw; diff --git a/drivers/gpu/drm/i915/gvt/mmio.h b/drivers/gpu/drm/i915/gvt/mmio.h index 9e862dc73579..7c26af39fbfc 100644 --- a/drivers/gpu/drm/i915/gvt/mmio.h +++ b/drivers/gpu/drm/i915/gvt/mmio.h @@ -80,6 +80,9 @@ int intel_gvt_for_each_tracked_mmio(struct intel_gvt *gvt, int (*handler)(struct intel_gvt *gvt, u32 offset, void *data), void *data); +struct intel_gvt_mmio_info *intel_gvt_find_mmio_info(struct intel_gvt *gvt, + unsigned int offset); + int intel_vgpu_init_mmio(struct intel_vgpu *vgpu); void intel_vgpu_reset_mmio(struct intel_vgpu *vgpu, bool dmlr); void intel_vgpu_clean_mmio(struct intel_vgpu *vgpu); diff --git a/drivers/gpu/drm/i915/gvt/reg.h b/drivers/gpu/drm/i915/gvt/reg.h index b58860dee970..244cc7320b54 100644 --- a/drivers/gpu/drm/i915/gvt/reg.h +++ b/drivers/gpu/drm/i915/gvt/reg.h @@ -133,4 +133,6 @@ #define RING_GFX_MODE(base) _MMIO((base) + 0x29c) #define VF_GUARDBAND _MMIO(0x83a4) + +#define BCS_TILE_REGISTER_VAL_OFFSET (0x43*4) #endif diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c index 6af5c06caee0..43f31c2eab14 100644 --- a/drivers/gpu/drm/i915/gvt/scheduler.c +++ b/drivers/gpu/drm/i915/gvt/scheduler.c @@ -137,6 +137,7 @@ static int populate_shadow_context(struct intel_vgpu_workload *workload) int i; bool skip = false; int ring_id = workload->engine->id; + int ret; GEM_BUG_ON(!intel_context_is_pinned(ctx)); @@ -163,16 +164,24 @@ static int populate_shadow_context(struct intel_vgpu_workload *workload) COPY_REG(bb_per_ctx_ptr); COPY_REG(rcs_indirect_ctx); COPY_REG(rcs_indirect_ctx_offset); - } + } else if (workload->engine->id == BCS0) + intel_gvt_hypervisor_read_gpa(vgpu, + workload->ring_context_gpa + + BCS_TILE_REGISTER_VAL_OFFSET, + (void *)shadow_ring_context + + BCS_TILE_REGISTER_VAL_OFFSET, 4); #undef COPY_REG #undef COPY_REG_MASKED + /* don't copy Ring Context (the first 0x50 dwords), + * only copy the Engine Context part from guest + */ intel_gvt_hypervisor_read_gpa(vgpu, workload->ring_context_gpa + - sizeof(*shadow_ring_context), + RING_CTX_SIZE, (void *)shadow_ring_context + - sizeof(*shadow_ring_context), - I915_GTT_PAGE_SIZE - sizeof(*shadow_ring_context)); + RING_CTX_SIZE, + I915_GTT_PAGE_SIZE - RING_CTX_SIZE); sr_oa_regs(workload, (u32 *)shadow_ring_context, false); @@ -239,6 +248,11 @@ read: gpa_size = I915_GTT_PAGE_SIZE; dst = context_base + (i << I915_GTT_PAGE_SHIFT); } + ret = intel_gvt_scan_engine_context(workload); + if (ret) { + gvt_vgpu_err("invalid cmd found in guest context pages\n"); + return ret; + } s->last_ctx[ring_id].valid = true; return 0; } diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c index e49944fde333..6a16d0ca7cda 100644 --- a/drivers/gpu/drm/i915/gvt/vgpu.c +++ b/drivers/gpu/drm/i915/gvt/vgpu.c @@ -437,10 +437,9 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt, if (ret) goto out_clean_sched_policy; - if (IS_BROADWELL(dev_priv)) + if (IS_BROADWELL(dev_priv) || IS_BROXTON(dev_priv)) ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_B); - /* FixMe: Re-enable APL/BXT once vfio_edid enabled */ - else if (!IS_BROXTON(dev_priv)) + else ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D); if (ret) goto out_clean_sched_policy; @@ -500,9 +499,11 @@ struct intel_vgpu *intel_gvt_create_vgpu(struct intel_gvt *gvt, mutex_lock(&gvt->lock); vgpu = 
__intel_gvt_create_vgpu(gvt, &param); - if (!IS_ERR(vgpu)) + if (!IS_ERR(vgpu)) { /* calculate left instance change for types */ intel_gvt_update_vgpu_types(gvt); + intel_gvt_update_reg_whitelist(vgpu); + } mutex_unlock(&gvt->lock); return vgpu; diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c index 82d0f19e86df..ced9a96d7c34 100644 --- a/drivers/gpu/drm/i915/i915_cmd_parser.c +++ b/drivers/gpu/drm/i915/i915_cmd_parser.c @@ -1143,7 +1143,7 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj, void *dst, *src; int ret; - dst = i915_gem_object_pin_map(dst_obj, I915_MAP_FORCE_WB); + dst = i915_gem_object_pin_map(dst_obj, I915_MAP_WB); if (IS_ERR(dst)) return dst; diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c index de8e0e44cfb6..88336ff4bf09 100644 --- a/drivers/gpu/drm/i915/i915_debugfs.c +++ b/drivers/gpu/drm/i915/i915_debugfs.c @@ -210,7 +210,7 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) spin_unlock(&obj->vma.lock); seq_printf(m, " (pinned x %d)", pin_count); - if (obj->stolen) + if (i915_gem_object_is_stolen(obj)) seq_printf(m, " (stolen: %08llx)", obj->stolen->start); if (i915_gem_object_is_framebuffer(obj)) seq_printf(m, " (fb)"); @@ -220,145 +220,6 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) seq_printf(m, " (%s)", engine->name); } -struct file_stats { - struct i915_address_space *vm; - unsigned long count; - u64 total; - u64 active, inactive; - u64 closed; -}; - -static int per_file_stats(int id, void *ptr, void *data) -{ - struct drm_i915_gem_object *obj = ptr; - struct file_stats *stats = data; - struct i915_vma *vma; - - if (IS_ERR_OR_NULL(obj) || !kref_get_unless_zero(&obj->base.refcount)) - return 0; - - stats->count++; - stats->total += obj->base.size; - - spin_lock(&obj->vma.lock); - if (!stats->vm) { - for_each_ggtt_vma(vma, obj) { - if (!drm_mm_node_allocated(&vma->node)) - continue; - - if (i915_vma_is_active(vma)) - stats->active += vma->node.size; - else - stats->inactive += vma->node.size; - - if (i915_vma_is_closed(vma)) - stats->closed += vma->node.size; - } - } else { - struct rb_node *p = obj->vma.tree.rb_node; - - while (p) { - long cmp; - - vma = rb_entry(p, typeof(*vma), obj_node); - cmp = i915_vma_compare(vma, stats->vm, NULL); - if (cmp == 0) { - if (drm_mm_node_allocated(&vma->node)) { - if (i915_vma_is_active(vma)) - stats->active += vma->node.size; - else - stats->inactive += vma->node.size; - - if (i915_vma_is_closed(vma)) - stats->closed += vma->node.size; - } - break; - } - if (cmp < 0) - p = p->rb_right; - else - p = p->rb_left; - } - } - spin_unlock(&obj->vma.lock); - - i915_gem_object_put(obj); - return 0; -} - -#define print_file_stats(m, name, stats) do { \ - if (stats.count) \ - seq_printf(m, "%s: %lu objects, %llu bytes (%llu active, %llu inactive, %llu closed)\n", \ - name, \ - stats.count, \ - stats.total, \ - stats.active, \ - stats.inactive, \ - stats.closed); \ -} while (0) - -static void print_context_stats(struct seq_file *m, - struct drm_i915_private *i915) -{ - struct file_stats kstats = {}; - struct i915_gem_context *ctx, *cn; - - spin_lock(&i915->gem.contexts.lock); - list_for_each_entry_safe(ctx, cn, &i915->gem.contexts.list, link) { - struct i915_gem_engines_iter it; - struct intel_context *ce; - - if (!kref_get_unless_zero(&ctx->ref)) - continue; - - spin_unlock(&i915->gem.contexts.lock); - - for_each_gem_engine(ce, - i915_gem_context_lock_engines(ctx), it) { - if 
(intel_context_pin_if_active(ce)) { - rcu_read_lock(); - if (ce->state) - per_file_stats(0, - ce->state->obj, &kstats); - per_file_stats(0, ce->ring->vma->obj, &kstats); - rcu_read_unlock(); - intel_context_unpin(ce); - } - } - i915_gem_context_unlock_engines(ctx); - - mutex_lock(&ctx->mutex); - if (!IS_ERR_OR_NULL(ctx->file_priv)) { - struct file_stats stats = { - .vm = rcu_access_pointer(ctx->vm), - }; - struct drm_file *file = ctx->file_priv->file; - struct task_struct *task; - char name[80]; - - rcu_read_lock(); - idr_for_each(&file->object_idr, per_file_stats, &stats); - rcu_read_unlock(); - - rcu_read_lock(); - task = pid_task(ctx->pid ?: file->pid, PIDTYPE_PID); - snprintf(name, sizeof(name), "%s", - task ? task->comm : "<unknown>"); - rcu_read_unlock(); - - print_file_stats(m, name, stats); - } - mutex_unlock(&ctx->mutex); - - spin_lock(&i915->gem.contexts.lock); - list_safe_reset_next(ctx, cn, link); - i915_gem_context_put(ctx); - } - spin_unlock(&i915->gem.contexts.lock); - - print_file_stats(m, "[k]contexts", kstats); -} - static int i915_gem_object_info(struct seq_file *m, void *data) { struct drm_i915_private *i915 = node_to_i915(m->private); @@ -372,9 +233,6 @@ static int i915_gem_object_info(struct seq_file *m, void *data) for_each_memory_region(mr, i915, id) seq_printf(m, "%s: total:%pa, available:%pa bytes\n", mr->name, &mr->total, &mr->avail); - seq_putc(m, '\n'); - - print_context_stats(m, i915); return 0; } diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c index f5666b44ea9d..2febda554bca 100644 --- a/drivers/gpu/drm/i915/i915_drv.c +++ b/drivers/gpu/drm/i915/i915_drv.c @@ -58,6 +58,7 @@ #include "display/intel_hotplug.h" #include "display/intel_overlay.h" #include "display/intel_pipe_crc.h" +#include "display/intel_pps.h" #include "display/intel_sprite.h" #include "display/intel_vga.h" @@ -608,14 +609,15 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv) goto err_msi; intel_opregion_setup(dev_priv); + + intel_pcode_init(dev_priv); + /* - * Fill the dram structure to get the system raw bandwidth and - * dram info. This will be used for memory latency calculation. + * Fill the dram structure to get the system dram info. This will be + * used for memory latency calculation. */ intel_dram_detect(dev_priv); - intel_pcode_init(dev_priv); - intel_bw_init_hw(dev_priv); return 0; diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index e3d58299b323..574f3575879d 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -1122,23 +1122,11 @@ struct drm_i915_private { * crtc_state->wm.need_postvbl_update. */ struct mutex wm_mutex; - - /* - * Set during HW readout of watermarks/DDB. Some platforms - * need to know when we're still using BIOS-provided values - * (which we don't fully trust). - * - * FIXME get rid of this. 
- */ - bool distrust_bios_wm; } wm; struct dram_info { - bool valid; - bool is_16gb_dimm; + bool wm_lv_0_adjust_needed; u8 num_channels; - u8 ranks; - u32 bandwidth_kbps; bool symmetric_memory; enum intel_dram_type { INTEL_DRAM_UNKNOWN, @@ -1147,6 +1135,7 @@ struct drm_i915_private { INTEL_DRAM_LPDDR3, INTEL_DRAM_LPDDR4 } type; + u8 num_qgv_points; } dram_info; struct intel_bw_info { @@ -1182,6 +1171,8 @@ struct drm_i915_private { struct file *mmap_singleton; } gem; + u8 framestart_delay; + u8 pch_ssc_use; /* For i915gm/i945gm vblank irq workaround */ @@ -1758,10 +1749,17 @@ tgl_revids_get(struct drm_i915_private *dev_priv) #define HAS_DISPLAY(dev_priv) (INTEL_INFO(dev_priv)->pipe_mask != 0) +#define HAS_VRR(i915) (INTEL_GEN(i915) >= 12) + /* Only valid when HAS_DISPLAY() is true */ #define INTEL_DISPLAY_ENABLED(dev_priv) \ (drm_WARN_ON(&(dev_priv)->drm, !HAS_DISPLAY(dev_priv)), !(dev_priv)->params.disable_display) +static inline bool run_as_guest(void) +{ + return !hypervisor_is_type(X86_HYPER_NATIVE); +} + static inline bool intel_vtd_active(void) { #ifdef CONFIG_INTEL_IOMMU @@ -1770,7 +1768,7 @@ static inline bool intel_vtd_active(void) #endif /* Running as a guest, we assume the host is enforcing VT'd */ - return !hypervisor_is_type(X86_HYPER_NATIVE); + return run_as_guest(); } static inline bool intel_scanout_needs_vtd_wa(struct drm_i915_private *dev_priv) diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 17a4636ee542..9b04dff5eb32 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -180,108 +180,6 @@ try_again: } static int -i915_gem_create(struct drm_file *file, - struct intel_memory_region *mr, - u64 *size_p, - u32 *handle_p) -{ - struct drm_i915_gem_object *obj; - u32 handle; - u64 size; - int ret; - - GEM_BUG_ON(!is_power_of_2(mr->min_page_size)); - size = round_up(*size_p, mr->min_page_size); - if (size == 0) - return -EINVAL; - - /* For most of the ABI (e.g. mmap) we think in system pages */ - GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE)); - - /* Allocate the new object */ - obj = i915_gem_object_create_region(mr, size, 0); - if (IS_ERR(obj)) - return PTR_ERR(obj); - - ret = drm_gem_handle_create(file, &obj->base, &handle); - /* drop reference from allocate - handle holds it now */ - i915_gem_object_put(obj); - if (ret) - return ret; - - *handle_p = handle; - *size_p = size; - return 0; -} - -int -i915_gem_dumb_create(struct drm_file *file, - struct drm_device *dev, - struct drm_mode_create_dumb *args) -{ - enum intel_memory_type mem_type; - int cpp = DIV_ROUND_UP(args->bpp, 8); - u32 format; - - switch (cpp) { - case 1: - format = DRM_FORMAT_C8; - break; - case 2: - format = DRM_FORMAT_RGB565; - break; - case 4: - format = DRM_FORMAT_XRGB8888; - break; - default: - return -EINVAL; - } - - /* have to work out size/pitch and return them */ - args->pitch = ALIGN(args->width * cpp, 64); - - /* align stride to page size so that we can remap */ - if (args->pitch > intel_plane_fb_max_stride(to_i915(dev), format, - DRM_FORMAT_MOD_LINEAR)) - args->pitch = ALIGN(args->pitch, 4096); - - if (args->pitch < args->width) - return -EINVAL; - - args->size = mul_u32_u32(args->pitch, args->height); - - mem_type = INTEL_MEMORY_SYSTEM; - if (HAS_LMEM(to_i915(dev))) - mem_type = INTEL_MEMORY_LOCAL; - - return i915_gem_create(file, - intel_memory_region_by_type(to_i915(dev), - mem_type), - &args->size, &args->handle); -} - -/** - * Creates a new mm object and returns a handle to it. 
- * @dev: drm device pointer - * @data: ioctl data blob - * @file: drm file pointer - */ -int -i915_gem_create_ioctl(struct drm_device *dev, void *data, - struct drm_file *file) -{ - struct drm_i915_private *i915 = to_i915(dev); - struct drm_i915_gem_create *args = data; - - i915_gem_flush_free_objects(i915); - - return i915_gem_create(file, - intel_memory_region_by_type(i915, - INTEL_MEMORY_SYSTEM), - &args->size, &args->handle); -} - -static int shmem_pread(struct page *page, int offset, int len, char __user *user_data, bool needs_clflush) { @@ -1059,14 +957,14 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data, i915_gem_object_is_tiled(obj) && i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) { if (obj->mm.madv == I915_MADV_WILLNEED) { - GEM_BUG_ON(!obj->mm.quirked); - __i915_gem_object_unpin_pages(obj); - obj->mm.quirked = false; + GEM_BUG_ON(!i915_gem_object_has_tiling_quirk(obj)); + i915_gem_object_clear_tiling_quirk(obj); + i915_gem_object_make_shrinkable(obj); } if (args->madv == I915_MADV_WILLNEED) { - GEM_BUG_ON(obj->mm.quirked); - __i915_gem_object_pin_pages(obj); - obj->mm.quirked = true; + GEM_BUG_ON(i915_gem_object_has_tiling_quirk(obj)); + i915_gem_object_make_unshrinkable(obj); + i915_gem_object_set_tiling_quirk(obj); } } @@ -1277,19 +1175,13 @@ int i915_gem_freeze_late(struct drm_i915_private *i915) * the objects as well, see i915_gem_freeze() */ - wakeref = intel_runtime_pm_get(&i915->runtime_pm); - - i915_gem_shrink(i915, -1UL, NULL, ~0); + with_intel_runtime_pm(&i915->runtime_pm, wakeref) + i915_gem_shrink(i915, -1UL, NULL, ~0); i915_gem_drain_freed_objects(i915); - list_for_each_entry(obj, &i915->mm.shrink_list, mm.link) { - i915_gem_object_lock(obj, NULL); - drm_WARN_ON(&i915->drm, - i915_gem_object_set_to_cpu_domain(obj, true)); - i915_gem_object_unlock(obj); - } - - intel_runtime_pm_put(&i915->runtime_pm, wakeref); + wbinvd_on_all_cpus(); + list_for_each_entry(obj, &i915->mm.shrink_list, mm.link) + __start_cpu_write(obj); return 0; } diff --git a/drivers/gpu/drm/i915/i915_gem.h b/drivers/gpu/drm/i915/i915_gem.h index a4cad3f154ca..e622aee6e4be 100644 --- a/drivers/gpu/drm/i915/i915_gem.h +++ b/drivers/gpu/drm/i915/i915_gem.h @@ -38,11 +38,18 @@ struct drm_i915_private; #define GEM_SHOW_DEBUG() drm_debug_enabled(DRM_UT_DRIVER) +#ifdef CONFIG_DRM_I915_DEBUG_GEM_ONCE +#define __GEM_BUG(cond) BUG() +#else +#define __GEM_BUG(cond) \ + WARN(1, "%s:%d GEM_BUG_ON(%s)\n", __func__, __LINE__, __stringify(cond)) +#endif + #define GEM_BUG_ON(condition) do { if (unlikely((condition))) { \ GEM_TRACE_ERR("%s:%d GEM_BUG_ON(%s)\n", \ __func__, __LINE__, __stringify(condition)); \ GEM_TRACE_DUMP(); \ - BUG(); \ + __GEM_BUG(condition); \ } \ } while(0) #define GEM_WARN_ON(expr) WARN_ON(expr) diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c index e1a66c8245b8..4d2d59a9942b 100644 --- a/drivers/gpu/drm/i915/i915_gem_evict.c +++ b/drivers/gpu/drm/i915/i915_gem_evict.c @@ -61,6 +61,17 @@ mark_free(struct drm_mm_scan *scan, return drm_mm_scan_add_block(scan, &vma->node); } +static bool defer_evict(struct i915_vma *vma) +{ + if (i915_vma_is_active(vma)) + return true; + + if (i915_vma_is_scanout(vma)) + return true; + + return false; +} + /** * i915_gem_evict_something - Evict vmas to make room for binding a new one * @vm: address space to evict from @@ -150,7 +161,7 @@ search_again: * To notice when we complete one full cycle, we record the * first active element seen, before moving it to the tail. 
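 * (Aside, not part of the patch: defer_evict() above widens this "still in
 * use" test so that scanout vmas, not only active ones, are rotated to the
 * end of the eviction scan before being considered.)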
*/ - if (active != ERR_PTR(-EAGAIN) && i915_vma_is_active(vma)) { + if (active != ERR_PTR(-EAGAIN) && defer_evict(vma)) { if (!active) active = vma; diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c index 8b163ee1b86d..f962693404b7 100644 --- a/drivers/gpu/drm/i915/i915_gpu_error.c +++ b/drivers/gpu/drm/i915/i915_gpu_error.c @@ -1051,7 +1051,9 @@ i915_vma_coredump_create(const struct intel_gt *gt, for_each_sgt_daddr(dma, iter, vma->pages) { void __iomem *s; - s = io_mapping_map_wc(&mem->iomap, dma, PAGE_SIZE); + s = io_mapping_map_wc(&mem->iomap, + dma - mem->region.start, + PAGE_SIZE); ret = compress_page(compress, (void __force *)s, dst, true); diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c index dd1971040bbc..1a701367a718 100644 --- a/drivers/gpu/drm/i915/i915_irq.c +++ b/drivers/gpu/drm/i915/i915_irq.c @@ -718,25 +718,15 @@ u32 g4x_get_vblank_counter(struct drm_crtc *crtc) return intel_uncore_read(&dev_priv->uncore, PIPE_FRMCOUNT_G4X(pipe)); } -/* - * On certain encoders on certain platforms, pipe - * scanline register will not work to get the scanline, - * since the timings are driven from the PORT or issues - * with scanline register updates. - * This function will use Framestamp and current - * timestamp registers to calculate the scanline. - */ -static u32 __intel_get_crtc_scanline_from_timestamp(struct intel_crtc *crtc) +static u32 intel_crtc_scanlines_since_frame_timestamp(struct intel_crtc *crtc) { struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); struct drm_vblank_crtc *vblank = &crtc->base.dev->vblank[drm_crtc_index(&crtc->base)]; const struct drm_display_mode *mode = &vblank->hwmode; - u32 vblank_start = mode->crtc_vblank_start; - u32 vtotal = mode->crtc_vtotal; u32 htotal = mode->crtc_htotal; u32 clock = mode->crtc_clock; - u32 scanline, scan_prev_time, scan_curr_time, scan_post_time; + u32 scan_prev_time, scan_curr_time, scan_post_time; /* * To avoid the race condition where we might cross into the @@ -763,8 +753,28 @@ static u32 __intel_get_crtc_scanline_from_timestamp(struct intel_crtc *crtc) PIPE_FRMTMSTMP(crtc->pipe)); } while (scan_post_time != scan_prev_time); - scanline = div_u64(mul_u32_u32(scan_curr_time - scan_prev_time, - clock), 1000 * htotal); + return div_u64(mul_u32_u32(scan_curr_time - scan_prev_time, + clock), 1000 * htotal); +} + +/* + * On certain encoders on certain platforms, pipe + * scanline register will not work to get the scanline, + * since the timings are driven from the PORT or issues + * with scanline register updates. + * This function will use Framestamp and current + * timestamp registers to calculate the scanline. + */ +static u32 __intel_get_crtc_scanline_from_timestamp(struct intel_crtc *crtc) +{ + struct drm_vblank_crtc *vblank = + &crtc->base.dev->vblank[drm_crtc_index(&crtc->base)]; + const struct drm_display_mode *mode = &vblank->hwmode; + u32 vblank_start = mode->crtc_vblank_start; + u32 vtotal = mode->crtc_vtotal; + u32 scanline; + + scanline = intel_crtc_scanlines_since_frame_timestamp(crtc); scanline = min(scanline, vtotal - 1); scanline = (scanline + vblank_start) % vtotal; @@ -883,7 +893,20 @@ static bool i915_get_crtc_scanoutpos(struct drm_crtc *_crtc, if (stime) *stime = ktime_get(); - if (use_scanline_counter) { + if (crtc->mode_flags & I915_MODE_FLAG_VRR) { + int scanlines = intel_crtc_scanlines_since_frame_timestamp(crtc); + + position = __intel_get_crtc_scanline(crtc); + + /* + * Already exiting vblank? 
If so, shift our position + * so it looks like we're already approaching the full + * vblank end. This should make the generated timestamp + * more or less match when the active portion will start. + */ + if (position >= vbl_start && scanlines < position) + position = min(crtc->vmax_vblank_start + scanlines, vtotal - 1); + } else if (use_scanline_counter) { /* No obvious pixelcount register. Only query vertical * scanout position from Display scan line register. */ @@ -1517,6 +1540,9 @@ static void valleyview_pipestat_irq_handler(struct drm_i915_private *dev_priv, if (pipe_stats[pipe] & PIPE_START_VBLANK_INTERRUPT_STATUS) intel_handle_vblank(dev_priv, pipe); + if (pipe_stats[pipe] & PLANE_FLIP_DONE_INT_STATUS_VLV) + flip_done_handler(dev_priv, pipe); + if (pipe_stats[pipe] & PIPE_CRC_DONE_INTERRUPT_STATUS) i9xx_pipe_crc_irq_handler(dev_priv, pipe); @@ -2029,6 +2055,9 @@ static void ilk_display_irq_handler(struct drm_i915_private *dev_priv, if (de_iir & DE_PIPE_VBLANK(pipe)) intel_handle_vblank(dev_priv, pipe); + if (de_iir & DE_PLANE_FLIP_DONE(pipe)) + flip_done_handler(dev_priv, pipe); + if (de_iir & DE_PIPE_FIFO_UNDERRUN(pipe)) intel_cpu_fifo_underrun_irq_handler(dev_priv, pipe); @@ -2079,8 +2108,11 @@ static void ivb_display_irq_handler(struct drm_i915_private *dev_priv, intel_opregion_asle_intr(dev_priv); for_each_pipe(dev_priv, pipe) { - if (de_iir & (DE_PIPE_VBLANK_IVB(pipe))) + if (de_iir & DE_PIPE_VBLANK_IVB(pipe)) intel_handle_vblank(dev_priv, pipe); + + if (de_iir & DE_PLANE_FLIP_DONE_IVB(pipe)) + flip_done_handler(dev_priv, pipe); } /* check event from PCH */ @@ -2357,6 +2389,14 @@ static void gen11_dsi_te_interrupt_handler(struct drm_i915_private *dev_priv, intel_uncore_write(&dev_priv->uncore, DSI_INTR_IDENT_REG(port), tmp); } +static u32 gen8_de_pipe_flip_done_mask(struct drm_i915_private *i915) +{ + if (INTEL_GEN(i915) >= 9) + return GEN9_PIPE_PLANE1_FLIP_DONE; + else + return GEN8_PIPE_PRIMARY_FLIP_DONE; +} + static irqreturn_t gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl) { @@ -2459,7 +2499,7 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl) if (iir & GEN8_PIPE_VBLANK) intel_handle_vblank(dev_priv, pipe); - if (iir & GEN9_PIPE_PLANE1_FLIP_DONE) + if (iir & gen8_de_pipe_flip_done_mask(dev_priv)) flip_done_handler(dev_priv, pipe); if (iir & GEN8_PIPE_CDCLK_CRC_DONE) @@ -2822,19 +2862,6 @@ int bdw_enable_vblank(struct drm_crtc *crtc) return 0; } -void skl_enable_flip_done(struct intel_crtc *crtc) -{ - struct drm_i915_private *i915 = to_i915(crtc->base.dev); - enum pipe pipe = crtc->pipe; - unsigned long irqflags; - - spin_lock_irqsave(&i915->irq_lock, irqflags); - - bdw_enable_pipe_irq(i915, pipe, GEN9_PIPE_PLANE1_FLIP_DONE); - - spin_unlock_irqrestore(&i915->irq_lock, irqflags); -} - /* Called from drm generic code, passed 'crtc' which * we use as a pipe index */ @@ -2899,19 +2926,6 @@ void bdw_disable_vblank(struct drm_crtc *crtc) spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags); } -void skl_disable_flip_done(struct intel_crtc *crtc) -{ - struct drm_i915_private *i915 = to_i915(crtc->base.dev); - enum pipe pipe = crtc->pipe; - unsigned long irqflags; - - spin_lock_irqsave(&i915->irq_lock, irqflags); - - bdw_disable_pipe_irq(i915, pipe, GEN9_PIPE_PLANE1_FLIP_DONE); - - spin_unlock_irqrestore(&i915->irq_lock, irqflags); -} - static void ibx_irq_reset(struct drm_i915_private *dev_priv) { struct intel_uncore *uncore = &dev_priv->uncore; @@ -3104,13 +3118,10 @@ void gen8_irq_power_well_post_enable(struct drm_i915_private 
*dev_priv, u8 pipe_mask) { struct intel_uncore *uncore = &dev_priv->uncore; - - u32 extra_ier = GEN8_PIPE_VBLANK | GEN8_PIPE_FIFO_UNDERRUN; + u32 extra_ier = GEN8_PIPE_VBLANK | GEN8_PIPE_FIFO_UNDERRUN | + gen8_de_pipe_flip_done_mask(dev_priv); enum pipe pipe; - if (INTEL_GEN(dev_priv) >= 9) - extra_ier |= GEN9_PIPE_PLANE1_FLIP_DONE; - spin_lock_irq(&dev_priv->irq_lock); if (!intel_irqs_enabled(dev_priv)) { @@ -3585,6 +3596,9 @@ static void ilk_irq_postinstall(struct drm_i915_private *dev_priv) DE_PCH_EVENT_IVB | DE_AUX_CHANNEL_A_IVB); extra_mask = (DE_PIPEC_VBLANK_IVB | DE_PIPEB_VBLANK_IVB | DE_PIPEA_VBLANK_IVB | DE_ERR_INT_IVB | + DE_PLANE_FLIP_DONE_IVB(PLANE_C) | + DE_PLANE_FLIP_DONE_IVB(PLANE_B) | + DE_PLANE_FLIP_DONE_IVB(PLANE_A) | DE_DP_A_HOTPLUG_IVB); } else { display_mask = (DE_MASTER_IRQ_CONTROL | DE_GSE | DE_PCH_EVENT | @@ -3592,6 +3606,8 @@ static void ilk_irq_postinstall(struct drm_i915_private *dev_priv) DE_PIPEA_CRC_DONE | DE_POISON); extra_mask = (DE_PIPEA_VBLANK | DE_PIPEB_VBLANK | DE_PIPEB_FIFO_UNDERRUN | DE_PIPEA_FIFO_UNDERRUN | + DE_PLANE_FLIP_DONE(PLANE_A) | + DE_PLANE_FLIP_DONE(PLANE_B) | DE_DP_A_HOTPLUG); } @@ -3682,11 +3698,9 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv) de_port_masked |= DSI0_TE | DSI1_TE; } - de_pipe_enables = de_pipe_masked | GEN8_PIPE_VBLANK | - GEN8_PIPE_FIFO_UNDERRUN; - - if (INTEL_GEN(dev_priv) >= 9) - de_pipe_enables |= GEN9_PIPE_PLANE1_FLIP_DONE; + de_pipe_enables = de_pipe_masked | + GEN8_PIPE_VBLANK | GEN8_PIPE_FIFO_UNDERRUN | + gen8_de_pipe_flip_done_mask(dev_priv); de_port_enables = de_port_masked; if (IS_GEN9_LP(dev_priv)) diff --git a/drivers/gpu/drm/i915/i915_irq.h b/drivers/gpu/drm/i915/i915_irq.h index 2efe609519ca..25f25cd95818 100644 --- a/drivers/gpu/drm/i915/i915_irq.h +++ b/drivers/gpu/drm/i915/i915_irq.h @@ -118,9 +118,6 @@ void i965_disable_vblank(struct drm_crtc *crtc); void ilk_disable_vblank(struct drm_crtc *crtc); void bdw_disable_vblank(struct drm_crtc *crtc); -void skl_enable_flip_done(struct intel_crtc *crtc); -void skl_disable_flip_done(struct intel_crtc *crtc); - void gen2_irq_reset(struct intel_uncore *uncore); void gen3_irq_reset(struct intel_uncore *uncore, i915_reg_t imr, i915_reg_t iir, i915_reg_t ier); diff --git a/drivers/gpu/drm/i915/i915_mm.c b/drivers/gpu/drm/i915/i915_mm.c index 43039dc8c607..666808cb3a32 100644 --- a/drivers/gpu/drm/i915/i915_mm.c +++ b/drivers/gpu/drm/i915/i915_mm.c @@ -62,7 +62,7 @@ static int remap_sg(pte_t *pte, unsigned long addr, void *data) { struct remap_pfn *r = data; - if (GEM_WARN_ON(!r->sgt.pfn)) + if (GEM_WARN_ON(!r->sgt.sgp)) return -EINVAL; /* Special PTE are not associated with any struct page */ diff --git a/drivers/gpu/drm/i915/i915_params.c b/drivers/gpu/drm/i915/i915_params.c index 7f139ea4a90b..6939634e56ed 100644 --- a/drivers/gpu/drm/i915/i915_params.c +++ b/drivers/gpu/drm/i915/i915_params.c @@ -185,7 +185,7 @@ i915_param_named_unsafe(inject_probe_failure, uint, 0400, i915_param_named(enable_dpcd_backlight, int, 0400, "Enable support for DPCD backlight control" - "(-1=use per-VBT LFP backlight type setting [default], 0=disabled, 1=enabled)"); + "(-1=use per-VBT LFP backlight type setting [default], 0=disabled, 1=enable, 2=force VESA interface, 3=force Intel interface)"); #if IS_ENABLED(CONFIG_DRM_I915_GVT) i915_param_named(enable_gvt, bool, 0400, diff --git a/drivers/gpu/drm/i915/i915_params.h b/drivers/gpu/drm/i915/i915_params.h index 330c03e2b4f7..f031966af5b7 100644 --- a/drivers/gpu/drm/i915/i915_params.h +++ 
b/drivers/gpu/drm/i915/i915_params.h @@ -32,6 +32,7 @@ struct drm_printer; #define ENABLE_GUC_SUBMISSION BIT(0) #define ENABLE_GUC_LOAD_HUC BIT(1) +#define ENABLE_GUC_MASK GENMASK(1, 0) /* * Invoke param, a function-like macro, for each i915 param, with arguments: diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c index 39608381b4a4..020b5f561f07 100644 --- a/drivers/gpu/drm/i915/i915_pci.c +++ b/drivers/gpu/drm/i915/i915_pci.c @@ -455,6 +455,7 @@ static const struct intel_device_info snb_m_gt2_info = { .has_llc = 1, \ .has_rc6 = 1, \ .has_rc6p = 1, \ + .has_reset_engine = true, \ .has_rps = true, \ .dma_mask_size = 40, \ .ppgtt_type = INTEL_PPGTT_ALIASING, \ @@ -513,6 +514,7 @@ static const struct intel_device_info vlv_info = { .cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B), .has_runtime_pm = 1, .has_rc6 = 1, + .has_reset_engine = true, .has_rps = true, .display.has_gmch = 1, .display.has_hotplug = 1, @@ -571,8 +573,7 @@ static const struct intel_device_info hsw_gt3_info = { .dma_mask_size = 39, \ .ppgtt_type = INTEL_PPGTT_FULL, \ .ppgtt_size = 48, \ - .has_64bit_reloc = 1, \ - .has_reset_engine = 1 + .has_64bit_reloc = 1 #define BDW_PLATFORM \ GEN8_FEATURES, \ diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index 1d8ba10847ca..598abd2f5b5f 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -4346,13 +4346,13 @@ enum { #define _TRANS_VRR_CTL_B 0x61420 #define _TRANS_VRR_CTL_C 0x62420 #define _TRANS_VRR_CTL_D 0x63420 -#define TRANS_VRR_CTL(trans) _MMIO_TRANS2(trans, _TRANS_VRR_CTL_A) -#define VRR_CTL_VRR_ENABLE REG_BIT(31) -#define VRR_CTL_IGN_MAX_SHIFT REG_BIT(30) -#define VRR_CTL_FLIP_LINE_EN REG_BIT(29) -#define VRR_CTL_LINE_COUNT_MASK REG_GENMASK(10, 3) -#define VRR_CTL_LINE_COUNT(x) REG_FIELD_PREP(VRR_CTL_LINE_COUNT_MASK, (x)) -#define VRR_CTL_SW_FULLLINE_COUNT REG_BIT(0) +#define TRANS_VRR_CTL(trans) _MMIO_TRANS2(trans, _TRANS_VRR_CTL_A) +#define VRR_CTL_VRR_ENABLE REG_BIT(31) +#define VRR_CTL_IGN_MAX_SHIFT REG_BIT(30) +#define VRR_CTL_FLIP_LINE_EN REG_BIT(29) +#define VRR_CTL_PIPELINE_FULL_MASK REG_GENMASK(10, 3) +#define VRR_CTL_PIPELINE_FULL(x) REG_FIELD_PREP(VRR_CTL_PIPELINE_FULL_MASK, (x)) +#define VRR_CTL_PIPELINE_FULL_OVERRIDE REG_BIT(0) #define _TRANS_VRR_VMAX_A 0x60424 #define _TRANS_VRR_VMAX_B 0x61424 @@ -6578,6 +6578,7 @@ enum { #define TGL_CURSOR_D_OFFSET 0x73080 /* Display A control */ +#define _DSPAADDR_VLV 0x7017C /* vlv/chv */ #define _DSPACNTR 0x70180 #define DISPLAY_PLANE_ENABLE (1 << 31) #define DISPLAY_PLANE_DISABLE 0 @@ -6614,6 +6615,7 @@ enum { #define DISPPLANE_ROTATE_180 (1 << 15) #define DISPPLANE_TRICKLE_FEED_DISABLE (1 << 14) /* Ironlake */ #define DISPPLANE_TILED (1 << 10) +#define DISPPLANE_ASYNC_FLIP (1 << 9) /* g4x+ */ #define DISPPLANE_MIRROR (1 << 8) /* CHV pipe B */ #define _DSPAADDR 0x70184 #define _DSPASTRIDE 0x70188 @@ -6625,6 +6627,7 @@ enum { #define _DSPASURFLIVE 0x701AC #define _DSPAGAMC 0x701E0 +#define DSPADDR_VLV(plane) _MMIO_PIPE2(plane, _DSPAADDR_VLV) #define DSPCNTR(plane) _MMIO_PIPE2(plane, _DSPACNTR) #define DSPADDR(plane) _MMIO_PIPE2(plane, _DSPAADDR) #define DSPSTRIDE(plane) _MMIO_PIPE2(plane, _DSPASTRIDE) @@ -7070,6 +7073,8 @@ enum { #define _PLANE_KEYMAX_1_A 0x701a0 #define _PLANE_KEYMAX_2_A 0x702a0 #define PLANE_KEYMAX_ALPHA(a) ((a) << 24) +#define _PLANE_CC_VAL_1_A 0x701b4 +#define _PLANE_CC_VAL_2_A 0x702b4 #define _PLANE_AUX_DIST_1_A 0x701c0 #define _PLANE_AUX_DIST_2_A 0x702c0 #define _PLANE_AUX_OFFSET_1_A 0x701c4 @@ 
-7111,6 +7116,13 @@ enum { #define _PLANE_NV12_BUF_CFG_1_A 0x70278 #define _PLANE_NV12_BUF_CFG_2_A 0x70378 +#define _PLANE_CC_VAL_1_B 0x711b4 +#define _PLANE_CC_VAL_2_B 0x712b4 +#define _PLANE_CC_VAL_1(pipe) _PIPE(pipe, _PLANE_CC_VAL_1_A, _PLANE_CC_VAL_1_B) +#define _PLANE_CC_VAL_2(pipe) _PIPE(pipe, _PLANE_CC_VAL_2_A, _PLANE_CC_VAL_2_B) +#define PLANE_CC_VAL(pipe, plane) \ + _MMIO_PLANE(plane, _PLANE_CC_VAL_1(pipe), _PLANE_CC_VAL_2(pipe)) + /* Input CSC Register Definitions */ #define _PLANE_INPUT_CSC_RY_GY_1_A 0x701E0 #define _PLANE_INPUT_CSC_RY_GY_2_A 0x702E0 @@ -9872,6 +9884,7 @@ enum skl_power_gate { _PORTD_HDCP2_BASE, \ _PORTE_HDCP2_BASE, \ _PORTF_HDCP2_BASE) + (x)) + #define PORT_HDCP2_AUTH(port) _PORT_HDCP2_BASE(port, 0x98) #define _TRANSA_HDCP2_AUTH 0x66498 #define _TRANSB_HDCP2_AUTH 0x66598 @@ -9911,6 +9924,44 @@ enum skl_power_gate { TRANS_HDCP2_STATUS(trans) : \ PORT_HDCP2_STATUS(port)) +#define _PIPEA_HDCP2_STREAM_STATUS 0x668C0 +#define _PIPEB_HDCP2_STREAM_STATUS 0x665C0 +#define _PIPEC_HDCP2_STREAM_STATUS 0x666C0 +#define _PIPED_HDCP2_STREAM_STATUS 0x667C0 +#define PIPE_HDCP2_STREAM_STATUS(pipe) _MMIO(_PICK((pipe), \ + _PIPEA_HDCP2_STREAM_STATUS, \ + _PIPEB_HDCP2_STREAM_STATUS, \ + _PIPEC_HDCP2_STREAM_STATUS, \ + _PIPED_HDCP2_STREAM_STATUS)) + +#define _TRANSA_HDCP2_STREAM_STATUS 0x664C0 +#define _TRANSB_HDCP2_STREAM_STATUS 0x665C0 +#define TRANS_HDCP2_STREAM_STATUS(trans) _MMIO_TRANS(trans, \ + _TRANSA_HDCP2_STREAM_STATUS, \ + _TRANSB_HDCP2_STREAM_STATUS) +#define STREAM_ENCRYPTION_STATUS BIT(31) +#define STREAM_TYPE_STATUS BIT(30) +#define HDCP2_STREAM_STATUS(dev_priv, trans, port) \ + (INTEL_GEN(dev_priv) >= 12 ? \ + TRANS_HDCP2_STREAM_STATUS(trans) : \ + PIPE_HDCP2_STREAM_STATUS(pipe)) + +#define _PORTA_HDCP2_AUTH_STREAM 0x66F00 +#define _PORTB_HDCP2_AUTH_STREAM 0x66F04 +#define PORT_HDCP2_AUTH_STREAM(port) _MMIO_PORT(port, \ + _PORTA_HDCP2_AUTH_STREAM, \ + _PORTB_HDCP2_AUTH_STREAM) +#define _TRANSA_HDCP2_AUTH_STREAM 0x66F00 +#define _TRANSB_HDCP2_AUTH_STREAM 0x66F04 +#define TRANS_HDCP2_AUTH_STREAM(trans) _MMIO_TRANS(trans, \ + _TRANSA_HDCP2_AUTH_STREAM, \ + _TRANSB_HDCP2_AUTH_STREAM) +#define AUTH_STREAM_TYPE BIT(31) +#define HDCP2_AUTH_STREAM(dev_priv, trans, port) \ + (INTEL_GEN(dev_priv) >= 12 ? 
\ + TRANS_HDCP2_AUTH_STREAM(trans) : \ + PORT_HDCP2_AUTH_STREAM(port)) + /* Per-pipe DDI Function Control */ #define _TRANS_DDI_FUNC_CTL_A 0x60400 #define _TRANS_DDI_FUNC_CTL_B 0x61400 @@ -9960,6 +10011,7 @@ enum skl_power_gate { #define TRANS_DDI_DP_VC_PAYLOAD_ALLOC (1 << 8) #define TRANS_DDI_HDMI_SCRAMBLER_CTS_ENABLE (1 << 7) #define TRANS_DDI_HDMI_SCRAMBLER_RESET_FREQ (1 << 6) +#define TRANS_DDI_HDCP_SELECT REG_BIT(5) #define TRANS_DDI_BFI_ENABLE (1 << 4) #define TRANS_DDI_HIGH_TMDS_CHAR_RATE (1 << 4) #define TRANS_DDI_HDMI_SCRAMBLING (1 << 0) diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c index 0b1a46a0d866..22e39d938f17 100644 --- a/drivers/gpu/drm/i915/i915_request.c +++ b/drivers/gpu/drm/i915/i915_request.c @@ -276,7 +276,7 @@ static void remove_from_engine(struct i915_request *rq) bool i915_request_retire(struct i915_request *rq) { - if (!i915_request_completed(rq)) + if (!__i915_request_is_complete(rq)) return false; RQ_TRACE(rq, "\n"); @@ -342,8 +342,7 @@ void i915_request_retire_upto(struct i915_request *rq) struct i915_request *tmp; RQ_TRACE(rq, "\n"); - - GEM_BUG_ON(!i915_request_completed(rq)); + GEM_BUG_ON(!__i915_request_is_complete(rq)); do { tmp = list_first_entry(&tl->requests, typeof(*tmp), link); @@ -552,8 +551,10 @@ bool __i915_request_submit(struct i915_request *request) * dropped upon retiring. (Otherwise if resubmit a *retired* * request, this would be a horrible use-after-free.) */ - if (i915_request_completed(request)) - goto xfer; + if (__i915_request_is_complete(request)) { + list_del_init(&request->sched.link); + goto active; + } if (unlikely(intel_context_is_banned(request->context))) i915_request_set_error_once(request, -EIO); @@ -588,11 +589,11 @@ bool __i915_request_submit(struct i915_request *request) engine->serial++; result = true; -xfer: - if (!test_and_set_bit(I915_FENCE_FLAG_ACTIVE, &request->fence.flags)) { - list_move_tail(&request->sched.link, &engine->active.requests); - clear_bit(I915_FENCE_FLAG_PQUEUE, &request->fence.flags); - } + GEM_BUG_ON(test_bit(I915_FENCE_FLAG_ACTIVE, &request->fence.flags)); + list_move_tail(&request->sched.link, &engine->active.requests); +active: + clear_bit(I915_FENCE_FLAG_PQUEUE, &request->fence.flags); + set_bit(I915_FENCE_FLAG_ACTIVE, &request->fence.flags); /* * XXX Rollback bonded-execution on __i915_request_unsubmit()? @@ -652,7 +653,7 @@ void __i915_request_unsubmit(struct i915_request *request) i915_request_cancel_breadcrumb(request); /* We've already spun, don't charge on resubmitting. */ - if (request->sched.semaphores && i915_request_started(request)) + if (request->sched.semaphores && __i915_request_has_started(request)) request->sched.semaphores = 0; /* @@ -864,7 +865,7 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp) RCU_INIT_POINTER(rq->timeline, tl); RCU_INIT_POINTER(rq->hwsp_cacheline, tl->hwsp_cacheline); rq->hwsp_seqno = tl->hwsp_seqno; - GEM_BUG_ON(i915_request_completed(rq)); + GEM_BUG_ON(__i915_request_is_complete(rq)); rq->rcustate = get_state_synchronize_rcu(); /* acts as smp_mb() */ @@ -970,15 +971,22 @@ i915_request_await_start(struct i915_request *rq, struct i915_request *signal) if (i915_request_started(signal)) return 0; + /* + * The caller holds a reference on @signal, but we do not serialise + * against it being retired and removed from the lists. + * + * We do not hold a reference to the request before @signal, and + * so must be very careful to ensure that it is not _recycled_ as + * we follow the link backwards. 
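The new comment in i915_request_await_start() describes a sample-then-revalidate pattern: peek at a neighbouring object without holding a reference, and only trust the pointer if the invariant still holds afterwards. A minimal portable C11 sketch of that idea (illustrative types only; the generation counter stands in for whatever re-check the driver performs, it is not an i915 field):

#include <stdatomic.h>
#include <stdbool.h>

struct node {
        _Atomic(struct node *) prev;
        atomic_uint gen;        /* bumped whenever the node is recycled */
};

/*
 * Snapshot n->prev and report whether the snapshot can be trusted:
 * if the neighbour was recycled between the two reads, discard it.
 */
static bool sample_prev(struct node *n, struct node **out)
{
        struct node *p = atomic_load_explicit(&n->prev, memory_order_acquire);
        unsigned int gen;

        if (!p)
                return false;

        gen = atomic_load_explicit(&p->gen, memory_order_acquire);
        *out = p;

        /* Re-validate both the generation and the link itself. */
        return atomic_load_explicit(&p->gen, memory_order_acquire) == gen &&
               atomic_load_explicit(&n->prev, memory_order_acquire) == p;
}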
+ */ fence = NULL; rcu_read_lock(); - spin_lock_irq(&signal->lock); do { struct list_head *pos = READ_ONCE(signal->link.prev); struct i915_request *prev; /* Confirm signal has not been retired, the link is valid */ - if (unlikely(i915_request_started(signal))) + if (unlikely(__i915_request_has_started(signal))) break; /* Is signal the earliest request on its timeline? */ @@ -1003,7 +1011,6 @@ i915_request_await_start(struct i915_request *rq, struct i915_request *signal) fence = &prev->fence; } while (0); - spin_unlock_irq(&signal->lock); rcu_read_unlock(); if (!fence) return 0; @@ -1520,7 +1527,7 @@ __i915_request_add_to_timeline(struct i915_request *rq) */ prev = to_request(__i915_active_fence_set(&timeline->last_request, &rq->fence)); - if (prev && !i915_request_completed(prev)) { + if (prev && !__i915_request_is_complete(prev)) { /* * The requests are supposed to be kept in order. However, * we need to be wary in case the timeline->last_request @@ -1897,10 +1904,10 @@ static char queue_status(const struct i915_request *rq) static const char *run_status(const struct i915_request *rq) { - if (i915_request_completed(rq)) + if (__i915_request_is_complete(rq)) return "!"; - if (i915_request_started(rq)) + if (__i915_request_has_started(rq)) return "*"; if (!i915_sw_fence_signaled(&rq->semaphore)) diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c index 318e359bf5c3..7144239f08df 100644 --- a/drivers/gpu/drm/i915/i915_scheduler.c +++ b/drivers/gpu/drm/i915/i915_scheduler.c @@ -520,7 +520,7 @@ void i915_request_show_with_schedule(struct drm_printer *m, if (signaler->timeline == rq->timeline) continue; - if (i915_request_completed(signaler)) + if (__i915_request_is_complete(signaler)) continue; i915_request_show(m, signaler, prefix, indent + 2); diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h index 5b3a3c653454..a64adc8c883b 100644 --- a/drivers/gpu/drm/i915/i915_vma.h +++ b/drivers/gpu/drm/i915/i915_vma.h @@ -363,6 +363,21 @@ i915_vma_unpin_fence(struct i915_vma *vma) void i915_vma_parked(struct intel_gt *gt); +static inline bool i915_vma_is_scanout(const struct i915_vma *vma) +{ + return test_bit(I915_VMA_SCANOUT_BIT, __i915_vma_flags(vma)); +} + +static inline void i915_vma_mark_scanout(struct i915_vma *vma) +{ + set_bit(I915_VMA_SCANOUT_BIT, __i915_vma_flags(vma)); +} + +static inline void i915_vma_clear_scanout(struct i915_vma *vma) +{ + clear_bit(I915_VMA_SCANOUT_BIT, __i915_vma_flags(vma)); +} + #define for_each_until(cond) if (cond) break; else /** diff --git a/drivers/gpu/drm/i915/i915_vma_types.h b/drivers/gpu/drm/i915/i915_vma_types.h index 9e9082dc8f4b..f5cb848b7a7e 100644 --- a/drivers/gpu/drm/i915/i915_vma_types.h +++ b/drivers/gpu/drm/i915/i915_vma_types.h @@ -249,6 +249,9 @@ struct i915_vma { #define I915_VMA_USERFAULT ((int)BIT(I915_VMA_USERFAULT_BIT)) #define I915_VMA_GGTT_WRITE ((int)BIT(I915_VMA_GGTT_WRITE_BIT)) +#define I915_VMA_SCANOUT_BIT 18 +#define I915_VMA_SCANOUT ((int)BIT(I915_VMA_SCANOUT_BIT)) + struct i915_active active; #define I915_VMA_PAGES_BIAS 24 diff --git a/drivers/gpu/drm/i915/intel_dram.c b/drivers/gpu/drm/i915/intel_dram.c index 4754296a250e..73d256fc6830 100644 --- a/drivers/gpu/drm/i915/intel_dram.c +++ b/drivers/gpu/drm/i915/intel_dram.c @@ -5,6 +5,7 @@ #include "i915_drv.h" #include "intel_dram.h" +#include "intel_sideband.h" struct dram_dimm_info { u16 size; @@ -201,22 +202,12 @@ skl_dram_get_channels_info(struct drm_i915_private *i915) return -EINVAL; } - /* - * If any of the 
channel is single rank channel, worst case output - * will be same as if single rank memory, so consider single rank - * memory. - */ - if (ch0.ranks == 1 || ch1.ranks == 1) - dram_info->ranks = 1; - else - dram_info->ranks = max(ch0.ranks, ch1.ranks); - - if (dram_info->ranks == 0) { + if (ch0.ranks == 0 && ch1.ranks == 0) { drm_info(&i915->drm, "couldn't get memory rank information\n"); return -EINVAL; } - dram_info->is_16gb_dimm = ch0.is_16gb_dimm || ch1.is_16gb_dimm; + dram_info->wm_lv_0_adjust_needed = ch0.is_16gb_dimm || ch1.is_16gb_dimm; dram_info->symmetric_memory = intel_is_dram_symmetric(&ch0, &ch1); @@ -269,16 +260,12 @@ skl_get_dram_info(struct drm_i915_private *i915) mem_freq_khz = DIV_ROUND_UP((val & SKL_REQ_DATA_MASK) * SKL_MEMORY_FREQ_MULTIPLIER_HZ, 1000); - dram_info->bandwidth_kbps = dram_info->num_channels * - mem_freq_khz * 8; - - if (dram_info->bandwidth_kbps == 0) { + if (dram_info->num_channels * mem_freq_khz == 0) { drm_info(&i915->drm, "Couldn't get system memory bandwidth\n"); return -EINVAL; } - dram_info->valid = true; return 0; } @@ -365,7 +352,7 @@ static int bxt_get_dram_info(struct drm_i915_private *i915) struct dram_info *dram_info = &i915->dram_info; u32 dram_channels; u32 mem_freq_khz, val; - u8 num_active_channels; + u8 num_active_channels, valid_ranks = 0; int i; val = intel_uncore_read(&i915->uncore, BXT_P_CR_MC_BIOS_REQ_0_0_0); @@ -375,10 +362,7 @@ static int bxt_get_dram_info(struct drm_i915_private *i915) dram_channels = val & BXT_DRAM_CHANNEL_ACTIVE_MASK; num_active_channels = hweight32(dram_channels); - /* Each active bit represents 4-byte channel */ - dram_info->bandwidth_kbps = (mem_freq_khz * num_active_channels * 4); - - if (dram_info->bandwidth_kbps == 0) { + if (mem_freq_khz * num_active_channels == 0) { drm_info(&i915->drm, "Couldn't get system memory bandwidth\n"); return -EINVAL; @@ -410,57 +394,125 @@ static int bxt_get_dram_info(struct drm_i915_private *i915) dimm.size, dimm.width, dimm.ranks, intel_dram_type_str(type)); - /* - * If any of the channel is single rank channel, - * worst case output will be same as if single rank - * memory, so consider single rank memory. 
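bxt_get_dram_info() above counts active channels with hweight32(), the kernel's population count. A portable equivalent, exercised with a made-up channel mask, behaves like this:

#include <stdint.h>
#include <stdio.h>

/* Kernighan's trick: each iteration clears the lowest set bit. */
static unsigned int popcount32(uint32_t v)
{
        unsigned int n = 0;

        for (; v; v &= v - 1)
                n++;
        return n;
}

int main(void)
{
        uint32_t dram_channels = 0x5;   /* hypothetical: channels 0 and 2 */

        printf("active channels: %u\n", popcount32(dram_channels)); /* 2 */
        return 0;
}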
- */ - if (dram_info->ranks == 0) - dram_info->ranks = dimm.ranks; - else if (dimm.ranks == 1) - dram_info->ranks = 1; + if (valid_ranks == 0) + valid_ranks = dimm.ranks; if (type != INTEL_DRAM_UNKNOWN) dram_info->type = type; } - if (dram_info->type == INTEL_DRAM_UNKNOWN || dram_info->ranks == 0) { + if (dram_info->type == INTEL_DRAM_UNKNOWN || valid_ranks == 0) { drm_info(&i915->drm, "couldn't get memory information\n"); return -EINVAL; } - dram_info->valid = true; + return 0; +} + +static int icl_pcode_read_mem_global_info(struct drm_i915_private *dev_priv) +{ + struct dram_info *dram_info = &dev_priv->dram_info; + u32 val = 0; + int ret; + + ret = sandybridge_pcode_read(dev_priv, + ICL_PCODE_MEM_SUBSYSYSTEM_INFO | + ICL_PCODE_MEM_SS_READ_GLOBAL_INFO, + &val, NULL); + if (ret) + return ret; + + if (IS_GEN(dev_priv, 12)) { + switch (val & 0xf) { + case 0: + dram_info->type = INTEL_DRAM_DDR4; + break; + case 3: + dram_info->type = INTEL_DRAM_LPDDR4; + break; + case 4: + dram_info->type = INTEL_DRAM_DDR3; + break; + case 5: + dram_info->type = INTEL_DRAM_LPDDR3; + break; + default: + MISSING_CASE(val & 0xf); + return -1; + } + } else { + switch (val & 0xf) { + case 0: + dram_info->type = INTEL_DRAM_DDR4; + break; + case 1: + dram_info->type = INTEL_DRAM_DDR3; + break; + case 2: + dram_info->type = INTEL_DRAM_LPDDR3; + break; + case 3: + dram_info->type = INTEL_DRAM_LPDDR4; + break; + default: + MISSING_CASE(val & 0xf); + return -1; + } + } + + dram_info->num_channels = (val & 0xf0) >> 4; + dram_info->num_qgv_points = (val & 0xf00) >> 8; return 0; } +static int gen11_get_dram_info(struct drm_i915_private *i915) +{ + int ret = skl_get_dram_info(i915); + + if (ret) + return ret; + + return icl_pcode_read_mem_global_info(i915); +} + +static int gen12_get_dram_info(struct drm_i915_private *i915) +{ + /* Always needed for GEN12+ */ + i915->dram_info.wm_lv_0_adjust_needed = true; + + return icl_pcode_read_mem_global_info(i915); +} + void intel_dram_detect(struct drm_i915_private *i915) { struct dram_info *dram_info = &i915->dram_info; int ret; /* - * Assume 16Gb DIMMs are present until proven otherwise. - * This is only used for the level 0 watermark latency - * w/a which does not apply to bxt/glk. + * Assume level 0 watermark latency adjustment is needed until proven + * otherwise, this w/a is not needed by bxt/glk. 
*/ - dram_info->is_16gb_dimm = !IS_GEN9_LP(i915); + dram_info->wm_lv_0_adjust_needed = !IS_GEN9_LP(i915); if (INTEL_GEN(i915) < 9 || !HAS_DISPLAY(i915)) return; - if (IS_GEN9_LP(i915)) + if (INTEL_GEN(i915) >= 12) + ret = gen12_get_dram_info(i915); + else if (INTEL_GEN(i915) >= 11) + ret = gen11_get_dram_info(i915); + else if (IS_GEN9_LP(i915)) ret = bxt_get_dram_info(i915); else ret = skl_get_dram_info(i915); if (ret) return; - drm_dbg_kms(&i915->drm, "DRAM bandwidth: %u kBps, channels: %u\n", - dram_info->bandwidth_kbps, dram_info->num_channels); + drm_dbg_kms(&i915->drm, "DRAM channels: %u\n", dram_info->num_channels); - drm_dbg_kms(&i915->drm, "DRAM ranks: %u, 16Gb DIMMs: %s\n", - dram_info->ranks, yesno(dram_info->is_16gb_dimm)); + drm_dbg_kms(&i915->drm, "Watermark level 0 adjustment needed: %s\n", + yesno(dram_info->wm_lv_0_adjust_needed)); } static u32 gen9_edram_size_mb(struct drm_i915_private *i915, u32 cap) diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h index 6590d55df6cb..6ffc0673f005 100644 --- a/drivers/gpu/drm/i915/intel_memory_region.h +++ b/drivers/gpu/drm/i915/intel_memory_region.h @@ -57,10 +57,10 @@ struct intel_memory_region_ops { int (*init)(struct intel_memory_region *mem); void (*release)(struct intel_memory_region *mem); - struct drm_i915_gem_object * - (*create_object)(struct intel_memory_region *mem, - resource_size_t size, - unsigned int flags); + int (*init_object)(struct intel_memory_region *mem, + struct drm_i915_gem_object *obj, + resource_size_t size, + unsigned int flags); }; struct intel_memory_region { diff --git a/drivers/gpu/drm/i915/intel_pch.c b/drivers/gpu/drm/i915/intel_pch.c index f31c0dabd0cc..ecaf314d60b6 100644 --- a/drivers/gpu/drm/i915/intel_pch.c +++ b/drivers/gpu/drm/i915/intel_pch.c @@ -143,8 +143,9 @@ static bool intel_is_virt_pch(unsigned short id, sdevice == PCI_SUBDEVICE_ID_QEMU)); } -static unsigned short -intel_virt_detect_pch(const struct drm_i915_private *dev_priv) +static void +intel_virt_detect_pch(const struct drm_i915_private *dev_priv, + unsigned short *pch_id, enum intel_pch *pch_type) { unsigned short id = 0; @@ -181,12 +182,21 @@ intel_virt_detect_pch(const struct drm_i915_private *dev_priv) else drm_dbg_kms(&dev_priv->drm, "Assuming no PCH\n"); - return id; + *pch_type = intel_pch_type(dev_priv, id); + + /* Sanity check virtual PCH id */ + if (drm_WARN_ON(&dev_priv->drm, + id && *pch_type == PCH_NONE)) + id = 0; + + *pch_id = id; } void intel_detect_pch(struct drm_i915_private *dev_priv) { struct pci_dev *pch = NULL; + unsigned short id; + enum intel_pch pch_type; /* DG1 has south engine display on the same PCI device */ if (IS_DG1(dev_priv)) { @@ -206,9 +216,6 @@ void intel_detect_pch(struct drm_i915_private *dev_priv) * of only checking the first one. 
*/ while ((pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, pch))) { - unsigned short id; - enum intel_pch pch_type; - if (pch->vendor != PCI_VENDOR_ID_INTEL) continue; @@ -221,14 +228,7 @@ void intel_detect_pch(struct drm_i915_private *dev_priv) break; } else if (intel_is_virt_pch(id, pch->subsystem_vendor, pch->subsystem_device)) { - id = intel_virt_detect_pch(dev_priv); - pch_type = intel_pch_type(dev_priv, id); - - /* Sanity check virtual PCH id */ - if (drm_WARN_ON(&dev_priv->drm, - id && pch_type == PCH_NONE)) - id = 0; - + intel_virt_detect_pch(dev_priv, &id, &pch_type); dev_priv->pch_type = pch_type; dev_priv->pch_id = id; break; @@ -244,10 +244,15 @@ void intel_detect_pch(struct drm_i915_private *dev_priv) "Display disabled, reverting to NOP PCH\n"); dev_priv->pch_type = PCH_NOP; dev_priv->pch_id = 0; + } else if (!pch) { + if (run_as_guest() && HAS_DISPLAY(dev_priv)) { + intel_virt_detect_pch(dev_priv, &id, &pch_type); + dev_priv->pch_type = pch_type; + dev_priv->pch_id = id; + } else { + drm_dbg_kms(&dev_priv->drm, "No PCH found.\n"); + } } - if (!pch) - drm_dbg_kms(&dev_priv->drm, "No PCH found.\n"); - pci_dev_put(pch); } diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c index bbc73df7f753..0c3e63f27c29 100644 --- a/drivers/gpu/drm/i915/intel_pm.c +++ b/drivers/gpu/drm/i915/intel_pm.c @@ -2930,7 +2930,7 @@ static void intel_read_wm_latency(struct drm_i915_private *dev_priv, * any underrun. If not able to get Dimm info assume 16GB dimm * to avoid any underrun. */ - if (dev_priv->dram_info.is_16gb_dimm) + if (dev_priv->dram_info.wm_lv_0_adjust_needed) wm[0] += 1; } else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) { @@ -4017,43 +4017,48 @@ static int intel_compute_sagv_mask(struct intel_atomic_state *state) return 0; } -/* - * Calculate initial DBuf slice offset, based on slice size - * and mask(i.e if slice size is 1024 and second slice is enabled - * offset would be 1024) - */ -static unsigned int -icl_get_first_dbuf_slice_offset(u32 dbuf_slice_mask, - u32 slice_size, - u32 ddb_size) +static int intel_dbuf_size(struct drm_i915_private *dev_priv) { - unsigned int offset = 0; + int ddb_size = INTEL_INFO(dev_priv)->ddb_size; - if (!dbuf_slice_mask) - return 0; + drm_WARN_ON(&dev_priv->drm, ddb_size == 0); - offset = (ffs(dbuf_slice_mask) - 1) * slice_size; + if (INTEL_GEN(dev_priv) < 11) + return ddb_size - 4; /* 4 blocks for bypass path allocation */ - WARN_ON(offset >= ddb_size); - return offset; + return ddb_size; } -u16 intel_get_ddb_size(struct drm_i915_private *dev_priv) +static int intel_dbuf_slice_size(struct drm_i915_private *dev_priv) { - u16 ddb_size = INTEL_INFO(dev_priv)->ddb_size; - drm_WARN_ON(&dev_priv->drm, ddb_size == 0); + return intel_dbuf_size(dev_priv) / + INTEL_INFO(dev_priv)->num_supported_dbuf_slices; +} - if (INTEL_GEN(dev_priv) < 11) - return ddb_size - 4; /* 4 blocks for bypass path allocation */ +static void +skl_ddb_entry_for_slices(struct drm_i915_private *dev_priv, u8 slice_mask, + struct skl_ddb_entry *ddb) +{ + int slice_size = intel_dbuf_slice_size(dev_priv); - return ddb_size; + if (!slice_mask) { + ddb->start = 0; + ddb->end = 0; + return; + } + + ddb->start = (ffs(slice_mask) - 1) * slice_size; + ddb->end = fls(slice_mask) * slice_size; + + WARN_ON(ddb->start >= ddb->end); + WARN_ON(ddb->end > intel_dbuf_size(dev_priv)); } u32 skl_ddb_dbuf_slice_mask(struct drm_i915_private *dev_priv, const struct skl_ddb_entry *entry) { u32 slice_mask = 0; - u16 ddb_size = intel_get_ddb_size(dev_priv); + u16 ddb_size = 
intel_dbuf_size(dev_priv); u16 num_supported_slices = INTEL_INFO(dev_priv)->num_supported_dbuf_slices; u16 slice_size = ddb_size / num_supported_slices; u16 start_slice; @@ -4077,116 +4082,40 @@ u32 skl_ddb_dbuf_slice_mask(struct drm_i915_private *dev_priv, return slice_mask; } -static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state, - u8 active_pipes); - -static int -skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv, - const struct intel_crtc_state *crtc_state, - const u64 total_data_rate, - struct skl_ddb_entry *alloc, /* out */ - int *num_active /* out */) -{ - struct drm_atomic_state *state = crtc_state->uapi.state; - struct intel_atomic_state *intel_state = to_intel_atomic_state(state); - struct drm_crtc *for_crtc = crtc_state->uapi.crtc; - const struct intel_crtc *crtc; - u32 pipe_width = 0, total_width_in_range = 0, width_before_pipe_in_range = 0; - enum pipe for_pipe = to_intel_crtc(for_crtc)->pipe; - struct intel_dbuf_state *new_dbuf_state = - intel_atomic_get_new_dbuf_state(intel_state); - const struct intel_dbuf_state *old_dbuf_state = - intel_atomic_get_old_dbuf_state(intel_state); - u8 active_pipes = new_dbuf_state->active_pipes; - u16 ddb_size; - u32 ddb_range_size; - u32 i; - u32 dbuf_slice_mask; - u32 offset; - u32 slice_size; - u32 total_slice_mask; - u32 start, end; - int ret; - - *num_active = hweight8(active_pipes); - - if (!crtc_state->hw.active) { - alloc->start = 0; - alloc->end = 0; - return 0; - } - - ddb_size = intel_get_ddb_size(dev_priv); - - slice_size = ddb_size / INTEL_INFO(dev_priv)->num_supported_dbuf_slices; +static unsigned int intel_crtc_ddb_weight(const struct intel_crtc_state *crtc_state) +{ + const struct drm_display_mode *pipe_mode = &crtc_state->hw.pipe_mode; + int hdisplay, vdisplay; - /* - * If the state doesn't change the active CRTC's or there is no - * modeset request, then there's no need to recalculate; - * the existing pipe allocation limits should remain unchanged. - * Note that we're safe from racing commits since any racing commit - * that changes the active CRTC list or do modeset would need to - * grab _all_ crtc locks, including the one we currently hold. - */ - if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes && - !dev_priv->wm.distrust_bios_wm) { - /* - * alloc may be cleared by clear_intel_crtc_state, - * copy from old state to be sure - * - * FIXME get rid of this mess - */ - *alloc = to_intel_crtc_state(for_crtc->state)->wm.skl.ddb; + if (!crtc_state->hw.active) return 0; - } - - /* - * Get allowed DBuf slices for correspondent pipe and platform. - */ - dbuf_slice_mask = skl_compute_dbuf_slices(crtc_state, active_pipes); - - /* - * Figure out at which DBuf slice we start, i.e if we start at Dbuf S2 - * and slice size is 1024, the offset would be 1024 - */ - offset = icl_get_first_dbuf_slice_offset(dbuf_slice_mask, - slice_size, ddb_size); - - /* - * Figure out total size of allowed DBuf slices, which is basically - * a number of allowed slices for that pipe multiplied by slice size. - * Inside of this - * range ddb entries are still allocated in proportion to display width. - */ - ddb_range_size = hweight8(dbuf_slice_mask) * slice_size; /* * Watermark/ddb requirement highly depends upon width of the * framebuffer, So instead of allocating DDB equally among pipes * distribute DDB based on resolution/width of the display. 
*/ - total_slice_mask = dbuf_slice_mask; - for_each_new_intel_crtc_in_state(intel_state, crtc, crtc_state, i) { - const struct drm_display_mode *pipe_mode = - &crtc_state->hw.pipe_mode; - enum pipe pipe = crtc->pipe; - int hdisplay, vdisplay; - u32 pipe_dbuf_slice_mask; + drm_mode_get_hv_timing(pipe_mode, &hdisplay, &vdisplay); - if (!crtc_state->hw.active) - continue; + return hdisplay; +} - pipe_dbuf_slice_mask = skl_compute_dbuf_slices(crtc_state, - active_pipes); +static void intel_crtc_dbuf_weights(const struct intel_dbuf_state *dbuf_state, + enum pipe for_pipe, + unsigned int *weight_start, + unsigned int *weight_end, + unsigned int *weight_total) +{ + struct drm_i915_private *dev_priv = + to_i915(dbuf_state->base.state->base.dev); + enum pipe pipe; - /* - * According to BSpec pipe can share one dbuf slice with another - * pipes or pipe can use multiple dbufs, in both cases we - * account for other pipes only if they have exactly same mask. - * However we need to account how many slices we should enable - * in total. - */ - total_slice_mask |= pipe_dbuf_slice_mask; + *weight_start = 0; + *weight_end = 0; + *weight_total = 0; + + for_each_pipe(dev_priv, pipe) { + int weight = dbuf_state->weight[pipe]; /* * Do not account pipes using other slice sets @@ -4195,42 +4124,78 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv, * i.e no partial intersection), so it is enough to check for * equality for now. */ - if (dbuf_slice_mask != pipe_dbuf_slice_mask) + if (dbuf_state->slices[pipe] != dbuf_state->slices[for_pipe]) continue; - drm_mode_get_hv_timing(pipe_mode, &hdisplay, &vdisplay); + *weight_total += weight; + if (pipe < for_pipe) { + *weight_start += weight; + *weight_end += weight; + } else if (pipe == for_pipe) { + *weight_end += weight; + } + } +} - total_width_in_range += hdisplay; +static int +skl_crtc_allocate_ddb(struct intel_atomic_state *state, struct intel_crtc *crtc) +{ + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); + unsigned int weight_total, weight_start, weight_end; + const struct intel_dbuf_state *old_dbuf_state = + intel_atomic_get_old_dbuf_state(state); + struct intel_dbuf_state *new_dbuf_state = + intel_atomic_get_new_dbuf_state(state); + struct intel_crtc_state *crtc_state; + struct skl_ddb_entry ddb_slices; + enum pipe pipe = crtc->pipe; + u32 ddb_range_size; + u32 dbuf_slice_mask; + u32 start, end; + int ret; - if (pipe < for_pipe) - width_before_pipe_in_range += hdisplay; - else if (pipe == for_pipe) - pipe_width = hdisplay; + if (new_dbuf_state->weight[pipe] == 0) { + new_dbuf_state->ddb[pipe].start = 0; + new_dbuf_state->ddb[pipe].end = 0; + goto out; } - /* - * FIXME: For now we always enable slice S1 as per - * the Bspec display initialization sequence. 
- */ - new_dbuf_state->enabled_slices = total_slice_mask | BIT(DBUF_S1); + dbuf_slice_mask = new_dbuf_state->slices[pipe]; - if (old_dbuf_state->enabled_slices != new_dbuf_state->enabled_slices) { - ret = intel_atomic_serialize_global_state(&new_dbuf_state->base); - if (ret) - return ret; - } + skl_ddb_entry_for_slices(dev_priv, dbuf_slice_mask, &ddb_slices); + ddb_range_size = skl_ddb_entry_size(&ddb_slices); + + intel_crtc_dbuf_weights(new_dbuf_state, pipe, + &weight_start, &weight_end, &weight_total); + + start = ddb_range_size * weight_start / weight_total; + end = ddb_range_size * weight_end / weight_total; + + new_dbuf_state->ddb[pipe].start = ddb_slices.start + start; + new_dbuf_state->ddb[pipe].end = ddb_slices.start + end; + +out: + if (skl_ddb_entry_equal(&old_dbuf_state->ddb[pipe], + &new_dbuf_state->ddb[pipe])) + return 0; + + ret = intel_atomic_lock_global_state(&new_dbuf_state->base); + if (ret) + return ret; - start = ddb_range_size * width_before_pipe_in_range / total_width_in_range; - end = ddb_range_size * - (width_before_pipe_in_range + pipe_width) / total_width_in_range; + crtc_state = intel_atomic_get_crtc_state(&state->base, crtc); + if (IS_ERR(crtc_state)) + return PTR_ERR(crtc_state); - alloc->start = offset + start; - alloc->end = offset + end; + crtc_state->wm.skl.ddb = new_dbuf_state->ddb[pipe]; drm_dbg_kms(&dev_priv->drm, - "[CRTC:%d:%s] dbuf slices 0x%x, ddb (%d - %d), active pipes 0x%x\n", - for_crtc->base.id, for_crtc->name, - dbuf_slice_mask, alloc->start, alloc->end, active_pipes); + "[CRTC:%d:%s] dbuf slices 0x%x -> 0x%x, ddb (%d - %d) -> (%d - %d), active pipes 0x%x -> 0x%x\n", + crtc->base.base.id, crtc->base.name, + old_dbuf_state->slices[pipe], new_dbuf_state->slices[pipe], + old_dbuf_state->ddb[pipe].start, old_dbuf_state->ddb[pipe].end, + new_dbuf_state->ddb[pipe].start, new_dbuf_state->ddb[pipe].end, + old_dbuf_state->active_pipes, new_dbuf_state->active_pipes); return 0; } @@ -4632,10 +4597,8 @@ static u8 tgl_compute_dbuf_slices(enum pipe pipe, u8 active_pipes) return compute_dbuf_slices(pipe, active_pipes, tgl_allowed_dbufs); } -static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state, - u8 active_pipes) +static u8 skl_compute_dbuf_slices(struct intel_crtc *crtc, u8 active_pipes) { - struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); enum pipe pipe = crtc->pipe; @@ -4798,55 +4761,30 @@ skl_plane_wm_level(const struct intel_crtc_state *crtc_state, } static int -skl_allocate_pipe_ddb(struct intel_atomic_state *state, - struct intel_crtc *crtc) +skl_allocate_plane_ddb(struct intel_atomic_state *state, + struct intel_crtc *crtc) { + struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); struct intel_crtc_state *crtc_state = intel_atomic_get_new_crtc_state(state, crtc); - struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); - struct skl_ddb_entry *alloc = &crtc_state->wm.skl.ddb; + const struct intel_dbuf_state *dbuf_state = + intel_atomic_get_new_dbuf_state(state); + const struct skl_ddb_entry *alloc = &dbuf_state->ddb[crtc->pipe]; + int num_active = hweight8(dbuf_state->active_pipes); u16 alloc_size, start = 0; u16 total[I915_MAX_PLANES] = {}; u16 uv_total[I915_MAX_PLANES] = {}; u64 total_data_rate; enum plane_id plane_id; - int num_active; u32 blocks; int level; - int ret; /* Clear the partitioning for disabled planes. 
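The proportional split in skl_crtc_allocate_ddb() above reduces to integer arithmetic on cumulative weights. Ignoring the same-slice-set filter the driver applies, and with three made-up pipes of hdisplay 1920/1280/800 sharing a 2048-block range, pipe B receives:

#include <stdio.h>

int main(void)
{
        unsigned int weight[] = { 1920, 1280, 800 };    /* hypothetical hdisplay per pipe */
        unsigned int range = 2048, for_pipe = 1;
        unsigned int total = 0, start_w = 0, end_w = 0;
        unsigned int pipe;

        for (pipe = 0; pipe < 3; pipe++) {
                total += weight[pipe];
                if (pipe < for_pipe)
                        start_w += weight[pipe];
                if (pipe <= for_pipe)
                        end_w += weight[pipe];
        }

        /* 2048 * 1920 / 4000 = 983, 2048 * 3200 / 4000 = 1638 */
        printf("pipe %u ddb: [%u, %u)\n", for_pipe,
               range * start_w / total, range * end_w / total);
        return 0;
}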
*/ memset(crtc_state->wm.skl.plane_ddb_y, 0, sizeof(crtc_state->wm.skl.plane_ddb_y)); memset(crtc_state->wm.skl.plane_ddb_uv, 0, sizeof(crtc_state->wm.skl.plane_ddb_uv)); - if (!crtc_state->hw.active) { - struct intel_atomic_state *state = - to_intel_atomic_state(crtc_state->uapi.state); - struct intel_dbuf_state *new_dbuf_state = - intel_atomic_get_new_dbuf_state(state); - const struct intel_dbuf_state *old_dbuf_state = - intel_atomic_get_old_dbuf_state(state); - - /* - * FIXME hack to make sure we compute this sensibly when - * turning off all the pipes. Otherwise we leave it at - * whatever we had previously, and then runtime PM will - * mess it up by turning off all but S1. Remove this - * once the dbuf state computation flow becomes sane. - */ - if (new_dbuf_state->active_pipes == 0) { - new_dbuf_state->enabled_slices = BIT(DBUF_S1); - - if (old_dbuf_state->enabled_slices != new_dbuf_state->enabled_slices) { - ret = intel_atomic_serialize_global_state(&new_dbuf_state->base); - if (ret) - return ret; - } - } - - alloc->start = alloc->end = 0; + if (!crtc_state->hw.active) return 0; - } if (INTEL_GEN(dev_priv) >= 11) total_data_rate = @@ -4855,12 +4793,6 @@ skl_allocate_pipe_ddb(struct intel_atomic_state *state, total_data_rate = skl_get_total_relative_data_rate(state, crtc); - ret = skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state, - total_data_rate, - alloc, &num_active); - if (ret) - return ret; - alloc_size = skl_ddb_entry_size(alloc); if (alloc_size == 0) return 0; @@ -5731,6 +5663,18 @@ static bool skl_ddb_entries_overlap(const struct skl_ddb_entry *a, return a->start < b->end && b->start < a->end; } +static void skl_ddb_entry_union(struct skl_ddb_entry *a, + const struct skl_ddb_entry *b) +{ + if (a->end && b->end) { + a->start = min(a->start, b->start); + a->end = max(a->end, b->end); + } else if (b->end) { + a->start = b->start; + a->end = b->end; + } +} + bool skl_ddb_allocation_overlaps(const struct skl_ddb_entry *ddb, const struct skl_ddb_entry *entries, int num_entries, int ignore_idx) @@ -5775,39 +5719,114 @@ skl_ddb_add_affected_planes(const struct intel_crtc_state *old_crtc_state, return 0; } +static u8 intel_dbuf_enabled_slices(const struct intel_dbuf_state *dbuf_state) +{ + struct drm_i915_private *dev_priv = to_i915(dbuf_state->base.state->base.dev); + u8 enabled_slices; + enum pipe pipe; + + /* + * FIXME: For now we always enable slice S1 as per + * the Bspec display initialization sequence. 
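intel_dbuf_enabled_slices() above is a plain union with slice S1 forced on, per the Bspec init-sequence note. With made-up per-pipe masks the result looks like:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint8_t slices[] = { 0x1, 0x2, 0x0 };   /* hypothetical per-pipe dbuf slices */
        uint8_t enabled = 0x1;                  /* BIT(DBUF_S1), always kept on */
        unsigned int pipe;

        for (pipe = 0; pipe < 3; pipe++)
                enabled |= slices[pipe];

        printf("enabled slices: 0x%x\n", enabled);      /* 0x3 */
        return 0;
}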
+ */ + enabled_slices = BIT(DBUF_S1); + + for_each_pipe(dev_priv, pipe) + enabled_slices |= dbuf_state->slices[pipe]; + + return enabled_slices; +} + static int skl_compute_ddb(struct intel_atomic_state *state) { struct drm_i915_private *dev_priv = to_i915(state->base.dev); const struct intel_dbuf_state *old_dbuf_state; - const struct intel_dbuf_state *new_dbuf_state; + struct intel_dbuf_state *new_dbuf_state = NULL; const struct intel_crtc_state *old_crtc_state; struct intel_crtc_state *new_crtc_state; struct intel_crtc *crtc; int ret, i; - for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, - new_crtc_state, i) { - ret = skl_allocate_pipe_ddb(state, crtc); + for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { + new_dbuf_state = intel_atomic_get_dbuf_state(state); + if (IS_ERR(new_dbuf_state)) + return PTR_ERR(new_dbuf_state); + + old_dbuf_state = intel_atomic_get_old_dbuf_state(state); + break; + } + + if (!new_dbuf_state) + return 0; + + new_dbuf_state->active_pipes = + intel_calc_active_pipes(state, old_dbuf_state->active_pipes); + + if (old_dbuf_state->active_pipes != new_dbuf_state->active_pipes) { + ret = intel_atomic_lock_global_state(&new_dbuf_state->base); if (ret) return ret; + } - ret = skl_ddb_add_affected_planes(old_crtc_state, - new_crtc_state); + for_each_intel_crtc(&dev_priv->drm, crtc) { + enum pipe pipe = crtc->pipe; + + new_dbuf_state->slices[pipe] = + skl_compute_dbuf_slices(crtc, new_dbuf_state->active_pipes); + + if (old_dbuf_state->slices[pipe] == new_dbuf_state->slices[pipe]) + continue; + + ret = intel_atomic_lock_global_state(&new_dbuf_state->base); if (ret) return ret; } - old_dbuf_state = intel_atomic_get_old_dbuf_state(state); - new_dbuf_state = intel_atomic_get_new_dbuf_state(state); + new_dbuf_state->enabled_slices = intel_dbuf_enabled_slices(new_dbuf_state); + + if (old_dbuf_state->enabled_slices != new_dbuf_state->enabled_slices) { + ret = intel_atomic_serialize_global_state(&new_dbuf_state->base); + if (ret) + return ret; - if (new_dbuf_state && - new_dbuf_state->enabled_slices != old_dbuf_state->enabled_slices) drm_dbg_kms(&dev_priv->drm, "Enabled dbuf slices 0x%x -> 0x%x (out of %d dbuf slices)\n", old_dbuf_state->enabled_slices, new_dbuf_state->enabled_slices, INTEL_INFO(dev_priv)->num_supported_dbuf_slices); + } + + for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { + enum pipe pipe = crtc->pipe; + + new_dbuf_state->weight[pipe] = intel_crtc_ddb_weight(new_crtc_state); + + if (old_dbuf_state->weight[pipe] == new_dbuf_state->weight[pipe]) + continue; + + ret = intel_atomic_lock_global_state(&new_dbuf_state->base); + if (ret) + return ret; + } + + for_each_intel_crtc(&dev_priv->drm, crtc) { + ret = skl_crtc_allocate_ddb(state, crtc); + if (ret) + return ret; + } + + for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, + new_crtc_state, i) { + ret = skl_allocate_plane_ddb(state, crtc); + if (ret) + return ret; + + ret = skl_ddb_add_affected_planes(old_crtc_state, + new_crtc_state); + if (ret) + return ret; + } return 0; } @@ -5944,83 +5963,6 @@ skl_print_wm_changes(struct intel_atomic_state *state) } } -static int intel_add_affected_pipes(struct intel_atomic_state *state, - u8 pipe_mask) -{ - struct drm_i915_private *dev_priv = to_i915(state->base.dev); - struct intel_crtc *crtc; - - for_each_intel_crtc(&dev_priv->drm, crtc) { - struct intel_crtc_state *crtc_state; - - if ((pipe_mask & BIT(crtc->pipe)) == 0) - continue; - - crtc_state = intel_atomic_get_crtc_state(&state->base, crtc); - if 
(IS_ERR(crtc_state)) - return PTR_ERR(crtc_state); - } - - return 0; -} - -static int -skl_ddb_add_affected_pipes(struct intel_atomic_state *state) -{ - struct drm_i915_private *dev_priv = to_i915(state->base.dev); - struct intel_crtc_state *crtc_state; - struct intel_crtc *crtc; - int i, ret; - - if (dev_priv->wm.distrust_bios_wm) { - /* - * skl_ddb_get_pipe_allocation_limits() currently requires - * all active pipes to be included in the state so that - * it can redistribute the dbuf among them, and it really - * wants to recompute things when distrust_bios_wm is set - * so we add all the pipes to the state. - */ - ret = intel_add_affected_pipes(state, ~0); - if (ret) - return ret; - } - - for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) { - struct intel_dbuf_state *new_dbuf_state; - const struct intel_dbuf_state *old_dbuf_state; - - new_dbuf_state = intel_atomic_get_dbuf_state(state); - if (IS_ERR(new_dbuf_state)) - return PTR_ERR(new_dbuf_state); - - old_dbuf_state = intel_atomic_get_old_dbuf_state(state); - - new_dbuf_state->active_pipes = - intel_calc_active_pipes(state, old_dbuf_state->active_pipes); - - if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes) - break; - - ret = intel_atomic_lock_global_state(&new_dbuf_state->base); - if (ret) - return ret; - - /* - * skl_ddb_get_pipe_allocation_limits() currently requires - * all active pipes to be included in the state so that - * it can redistribute the dbuf among them. - */ - ret = intel_add_affected_pipes(state, - new_dbuf_state->active_pipes); - if (ret) - return ret; - - break; - } - - return 0; -} - /* * To make sure the cursor watermark registers are always consistent * with our computed state the following scenario needs special @@ -6088,15 +6030,6 @@ skl_compute_wm(struct intel_atomic_state *state) struct intel_crtc_state *new_crtc_state; int ret, i; - ret = skl_ddb_add_affected_pipes(state); - if (ret) - return ret; - - /* - * Calculate WM's for all pipes that are part of this transaction. - * Note that skl_ddb_add_affected_pipes may have added more CRTC's that - * weren't otherwise being modified if pipe allocations had to change. 
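skl_wm_get_hw_state() (further below) folds per-plane entries into one pipe-wide span using skl_ddb_entry_union(), added earlier in this patch. A standalone restatement with sample entries, assuming the same empty-entry convention (end == 0 means unset):

#include <stdio.h>

struct ddb_entry { unsigned int start, end; };

/* Union of two spans; an entry with end == 0 is treated as empty. */
static void ddb_entry_union(struct ddb_entry *a, const struct ddb_entry *b)
{
        if (a->end && b->end) {
                a->start = a->start < b->start ? a->start : b->start;
                a->end = a->end > b->end ? a->end : b->end;
        } else if (b->end) {
                *a = *b;
        }
}

int main(void)
{
        struct ddb_entry pipe = { 0, 0 };                       /* starts empty */
        struct ddb_entry y = { 0, 160 }, uv = { 160, 512 };     /* made up */

        ddb_entry_union(&pipe, &y);
        ddb_entry_union(&pipe, &uv);
        printf("pipe ddb: [%u, %u)\n", pipe.start, pipe.end);   /* [0, 512) */
        return 0;
}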
- */ for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) { ret = skl_build_pipe_wm(state, crtc); if (ret) @@ -6255,20 +6188,49 @@ void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc, void skl_wm_get_hw_state(struct drm_i915_private *dev_priv) { + struct intel_dbuf_state *dbuf_state = + to_intel_dbuf_state(dev_priv->dbuf.obj.state); struct intel_crtc *crtc; - struct intel_crtc_state *crtc_state; for_each_intel_crtc(&dev_priv->drm, crtc) { - crtc_state = to_intel_crtc_state(crtc->base.state); + struct intel_crtc_state *crtc_state = + to_intel_crtc_state(crtc->base.state); + enum pipe pipe = crtc->pipe; + enum plane_id plane_id; skl_pipe_wm_get_hw_state(crtc, &crtc_state->wm.skl.optimal); crtc_state->wm.skl.raw = crtc_state->wm.skl.optimal; - } - if (dev_priv->active_pipes) { - /* Fully recompute DDB on first atomic commit */ - dev_priv->wm.distrust_bios_wm = true; + memset(&dbuf_state->ddb[pipe], 0, sizeof(dbuf_state->ddb[pipe])); + + for_each_plane_id_on_crtc(crtc, plane_id) { + struct skl_ddb_entry *ddb_y = + &crtc_state->wm.skl.plane_ddb_y[plane_id]; + struct skl_ddb_entry *ddb_uv = + &crtc_state->wm.skl.plane_ddb_uv[plane_id]; + + skl_ddb_get_hw_plane_state(dev_priv, crtc->pipe, + plane_id, ddb_y, ddb_uv); + + skl_ddb_entry_union(&dbuf_state->ddb[pipe], ddb_y); + skl_ddb_entry_union(&dbuf_state->ddb[pipe], ddb_uv); + } + + dbuf_state->slices[pipe] = + skl_compute_dbuf_slices(crtc, dbuf_state->active_pipes); + + dbuf_state->weight[pipe] = intel_crtc_ddb_weight(crtc_state); + + crtc_state->wm.skl.ddb = dbuf_state->ddb[pipe]; + + drm_dbg_kms(&dev_priv->drm, + "[CRTC:%d:%s] dbuf slices 0x%x, ddb (%d - %d), active pipes 0x%x\n", + crtc->base.base.id, crtc->base.name, + dbuf_state->slices[pipe], dbuf_state->ddb[pipe].start, + dbuf_state->ddb[pipe].end, dbuf_state->active_pipes); } + + dbuf_state->enabled_slices = dev_priv->dbuf.enabled_slices; } static void ilk_pipe_wm_get_hw_state(struct intel_crtc *crtc) @@ -7103,24 +7065,26 @@ static void icl_init_clock_gating(struct drm_i915_private *dev_priv) 0, CNL_DELAY_PMRSP); } -static void tgl_init_clock_gating(struct drm_i915_private *dev_priv) +static void gen12lp_init_clock_gating(struct drm_i915_private *dev_priv) { - /* Wa_1409120013:tgl */ + /* Wa_1409120013:tgl,rkl,adl_s,dg1 */ intel_uncore_write(&dev_priv->uncore, ILK_DPFC_CHICKEN, - ILK_DPFC_CHICKEN_COMP_DUMMY_PIXEL); + ILK_DPFC_CHICKEN_COMP_DUMMY_PIXEL); /* Wa_1409825376:tgl (pre-prod)*/ if (IS_TGL_DISP_REVID(dev_priv, TGL_REVID_A0, TGL_REVID_B1)) intel_uncore_write(&dev_priv->uncore, GEN9_CLKGATE_DIS_3, intel_uncore_read(&dev_priv->uncore, GEN9_CLKGATE_DIS_3) | TGL_VRH_GATING_DIS); - /* Wa_14011059788:tgl */ + /* Wa_14011059788:tgl,rkl,adl_s,dg1 */ intel_uncore_rmw(&dev_priv->uncore, GEN10_DFR_RATIO_EN_AND_CHICKEN, 0, DFR_DISABLE); } static void dg1_init_clock_gating(struct drm_i915_private *dev_priv) { + gen12lp_init_clock_gating(dev_priv); + /* Wa_1409836686:dg1[a0] */ if (IS_DG1_REVID(dev_priv, DG1_REVID_A0, DG1_REVID_A0)) intel_uncore_write(&dev_priv->uncore, GEN9_CLKGATE_DIS_3, intel_uncore_read(&dev_priv->uncore, GEN9_CLKGATE_DIS_3) | @@ -7583,7 +7547,7 @@ void intel_init_clock_gating_hooks(struct drm_i915_private *dev_priv) if (IS_DG1(dev_priv)) dev_priv->display.init_clock_gating = dg1_init_clock_gating; else if (IS_GEN(dev_priv, 12)) - dev_priv->display.init_clock_gating = tgl_init_clock_gating; + dev_priv->display.init_clock_gating = gen12lp_init_clock_gating; else if (IS_GEN(dev_priv, 11)) dev_priv->display.init_clock_gating = icl_init_clock_gating; else if 
(IS_CANNONLAKE(dev_priv)) diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h index eab83e251dd5..97550cf0b6df 100644 --- a/drivers/gpu/drm/i915/intel_pm.h +++ b/drivers/gpu/drm/i915/intel_pm.h @@ -9,8 +9,10 @@ #include <linux/types.h> #include "display/intel_bw.h" +#include "display/intel_display.h" #include "display/intel_global_state.h" +#include "i915_drv.h" #include "i915_reg.h" struct drm_device; @@ -40,7 +42,6 @@ void skl_pipe_ddb_get_hw_state(struct intel_crtc *crtc, struct skl_ddb_entry *ddb_y, struct skl_ddb_entry *ddb_uv); void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv); -u16 intel_get_ddb_size(struct drm_i915_private *dev_priv); u32 skl_ddb_dbuf_slice_mask(struct drm_i915_private *dev_priv, const struct skl_ddb_entry *entry); void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc, @@ -69,6 +70,10 @@ bool intel_set_memory_cxsr(struct drm_i915_private *dev_priv, bool enable); struct intel_dbuf_state { struct intel_global_state base; + struct skl_ddb_entry ddb[I915_MAX_PIPES]; + unsigned int weight[I915_MAX_PIPES]; + u8 slices[I915_MAX_PIPES]; + u8 enabled_slices; u8 active_pipes; }; diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c index 3512bb8433cf..f99bb0113726 100644 --- a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c @@ -38,8 +38,8 @@ static void quirk_add(struct drm_i915_gem_object *obj, struct list_head *objects) { /* quirk is only for live tiled objects, use it to declare ownership */ - GEM_BUG_ON(obj->mm.quirked); - obj->mm.quirked = true; + GEM_BUG_ON(i915_gem_object_has_tiling_quirk(obj)); + i915_gem_object_set_tiling_quirk(obj); list_add(&obj->st_link, objects); } @@ -85,7 +85,7 @@ static void unpin_ggtt(struct i915_ggtt *ggtt) struct i915_vma *vma; list_for_each_entry(vma, &ggtt->vm.bound_list, vm_link) - if (vma->obj->mm.quirked) + if (i915_gem_object_has_tiling_quirk(vma->obj)) i915_vma_unpin(vma); } @@ -94,8 +94,8 @@ static void cleanup_objects(struct i915_ggtt *ggtt, struct list_head *list) struct drm_i915_gem_object *obj, *on; list_for_each_entry_safe(obj, on, list, st_link) { - GEM_BUG_ON(!obj->mm.quirked); - obj->mm.quirked = false; + GEM_BUG_ON(!i915_gem_object_has_tiling_quirk(obj)); + i915_gem_object_clear_tiling_quirk(obj); i915_gem_object_put(obj); } diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c index 979d96f27c43..3c6021415274 100644 --- a/drivers/gpu/drm/i915/selftests/mock_region.c +++ b/drivers/gpu/drm/i915/selftests/mock_region.c @@ -15,21 +15,16 @@ static const struct drm_i915_gem_object_ops mock_region_obj_ops = { .release = i915_gem_object_release_memory_region, }; -static struct drm_i915_gem_object * -mock_object_create(struct intel_memory_region *mem, - resource_size_t size, - unsigned int flags) +static int mock_object_init(struct intel_memory_region *mem, + struct drm_i915_gem_object *obj, + resource_size_t size, + unsigned int flags) { static struct lock_class_key lock_class; struct drm_i915_private *i915 = mem->i915; - struct drm_i915_gem_object *obj; if (size > mem->mm.size) - return ERR_PTR(-E2BIG); - - obj = i915_gem_object_alloc(); - if (!obj) - return ERR_PTR(-ENOMEM); + return -E2BIG; drm_gem_private_object_init(&i915->drm, &obj->base, size); i915_gem_object_init(obj, &mock_region_obj_ops, &lock_class); @@ -40,13 +35,13 @@ mock_object_create(struct intel_memory_region *mem, i915_gem_object_init_memory_region(obj, mem, flags);
- return obj; + return 0; } static const struct intel_memory_region_ops mock_region_ops = { .init = intel_memory_region_init_buddy, .release = intel_memory_region_release_buddy, - .create_object = mock_object_create, + .init_object = mock_object_init, }; struct intel_memory_region * diff --git a/drivers/gpu/drm/mediatek/Makefile b/drivers/gpu/drm/mediatek/Makefile index a892edec5563..dc54a7a69005 100644 --- a/drivers/gpu/drm/mediatek/Makefile +++ b/drivers/gpu/drm/mediatek/Makefile @@ -1,10 +1,11 @@ # SPDX-License-Identifier: GPL-2.0 -mediatek-drm-y := mtk_disp_color.o \ +mediatek-drm-y := mtk_disp_ccorr.o \ + mtk_disp_color.o \ + mtk_disp_gamma.o \ mtk_disp_ovl.o \ mtk_disp_rdma.o \ mtk_drm_crtc.o \ - mtk_drm_ddp.o \ mtk_drm_ddp_comp.o \ mtk_drm_drv.o \ mtk_drm_gem.o \ diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ccorr.c b/drivers/gpu/drm/mediatek/mtk_disp_ccorr.c new file mode 100644 index 000000000000..141cb36b9c07 --- /dev/null +++ b/drivers/gpu/drm/mediatek/mtk_disp_ccorr.c @@ -0,0 +1,223 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2021 MediaTek Inc. + */ + +#include <linux/clk.h> +#include <linux/component.h> +#include <linux/module.h> +#include <linux/of_device.h> +#include <linux/of_irq.h> +#include <linux/platform_device.h> +#include <linux/soc/mediatek/mtk-cmdq.h> + +#include "mtk_disp_drv.h" +#include "mtk_drm_crtc.h" +#include "mtk_drm_ddp_comp.h" + +#define DISP_CCORR_EN 0x0000 +#define CCORR_EN BIT(0) +#define DISP_CCORR_CFG 0x0020 +#define CCORR_RELAY_MODE BIT(0) +#define CCORR_ENGINE_EN BIT(1) +#define CCORR_GAMMA_OFF BIT(2) +#define CCORR_WGAMUT_SRC_CLIP BIT(3) +#define DISP_CCORR_SIZE 0x0030 +#define DISP_CCORR_COEF_0 0x0080 +#define DISP_CCORR_COEF_1 0x0084 +#define DISP_CCORR_COEF_2 0x0088 +#define DISP_CCORR_COEF_3 0x008C +#define DISP_CCORR_COEF_4 0x0090 + +struct mtk_disp_ccorr_data { + u32 matrix_bits; +}; + +/** + * struct mtk_disp_ccorr - DISP_CCORR driver structure + * @clk: clock for the CCORR hardware block + * @regs: base of the mapped CCORR registers + * @cmdq_reg: CMDQ client register description for this block + * @data: platform data describing the S1.n matrix width + */ +struct mtk_disp_ccorr { + struct clk *clk; + void __iomem *regs; + struct cmdq_client_reg cmdq_reg; + const struct mtk_disp_ccorr_data *data; +}; + +int mtk_ccorr_clk_enable(struct device *dev) +{ + struct mtk_disp_ccorr *ccorr = dev_get_drvdata(dev); + + return clk_prepare_enable(ccorr->clk); +} + +void mtk_ccorr_clk_disable(struct device *dev) +{ + struct mtk_disp_ccorr *ccorr = dev_get_drvdata(dev); + + clk_disable_unprepare(ccorr->clk); +} + +void mtk_ccorr_config(struct device *dev, unsigned int w, + unsigned int h, unsigned int vrefresh, + unsigned int bpc, struct cmdq_pkt *cmdq_pkt) +{ + struct mtk_disp_ccorr *ccorr = dev_get_drvdata(dev); + + mtk_ddp_write(cmdq_pkt, w << 16 | h, &ccorr->cmdq_reg, ccorr->regs, + DISP_CCORR_SIZE); + mtk_ddp_write(cmdq_pkt, CCORR_ENGINE_EN, &ccorr->cmdq_reg, ccorr->regs, + DISP_CCORR_CFG); +} + +void mtk_ccorr_start(struct device *dev) +{ + struct mtk_disp_ccorr *ccorr = dev_get_drvdata(dev); + + writel(CCORR_EN, ccorr->regs + DISP_CCORR_EN); +} + +void mtk_ccorr_stop(struct device *dev) +{ + struct mtk_disp_ccorr *ccorr = dev_get_drvdata(dev); + + writel_relaxed(0x0, ccorr->regs + DISP_CCORR_EN); +} + +/* Converts a DRM S31.32 value to the HW S1.n format. */ +static u16 mtk_ctm_s31_32_to_s1_n(u64 in, u32 n) +{ + u16 r; + + /* Sign bit. */ + r = in & BIT_ULL(63) ?
BIT(n + 1) : 0; + + if ((in & GENMASK_ULL(62, 33)) > 0) { + /* identity value 0x100000000 -> 0x400(mt8183), */ + /* identity value 0x100000000 -> 0x800(mt8192), */ + /* if bigger than this, set it to the max 0x7ff. */ + r |= GENMASK(n, 0); + } else { + /* take the n+1 most significant bits. */ + r |= (in >> (32 - n)) & GENMASK(n, 0); + } + + return r; +} + +void mtk_ccorr_ctm_set(struct device *dev, struct drm_crtc_state *state) +{ + struct mtk_disp_ccorr *ccorr = dev_get_drvdata(dev); + struct drm_property_blob *blob = state->ctm; + struct drm_color_ctm *ctm; + const u64 *input; + uint16_t coeffs[9] = { 0 }; + int i; + struct cmdq_pkt *cmdq_pkt = NULL; + u32 matrix_bits = ccorr->data->matrix_bits; + + if (!blob) + return; + + ctm = (struct drm_color_ctm *)blob->data; + input = ctm->matrix; + + for (i = 0; i < ARRAY_SIZE(coeffs); i++) + coeffs[i] = mtk_ctm_s31_32_to_s1_n(input[i], matrix_bits); + + mtk_ddp_write(cmdq_pkt, coeffs[0] << 16 | coeffs[1], + &ccorr->cmdq_reg, ccorr->regs, DISP_CCORR_COEF_0); + mtk_ddp_write(cmdq_pkt, coeffs[2] << 16 | coeffs[3], + &ccorr->cmdq_reg, ccorr->regs, DISP_CCORR_COEF_1); + mtk_ddp_write(cmdq_pkt, coeffs[4] << 16 | coeffs[5], + &ccorr->cmdq_reg, ccorr->regs, DISP_CCORR_COEF_2); + mtk_ddp_write(cmdq_pkt, coeffs[6] << 16 | coeffs[7], + &ccorr->cmdq_reg, ccorr->regs, DISP_CCORR_COEF_3); + mtk_ddp_write(cmdq_pkt, coeffs[8] << 16, + &ccorr->cmdq_reg, ccorr->regs, DISP_CCORR_COEF_4); +} + +static int mtk_disp_ccorr_bind(struct device *dev, struct device *master, + void *data) +{ + return 0; +} + +static void mtk_disp_ccorr_unbind(struct device *dev, struct device *master, + void *data) +{ +} + +static const struct component_ops mtk_disp_ccorr_component_ops = { + .bind = mtk_disp_ccorr_bind, + .unbind = mtk_disp_ccorr_unbind, +}; + +static int mtk_disp_ccorr_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct mtk_disp_ccorr *priv; + struct resource *res; + int ret; + + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + priv->clk = devm_clk_get(dev, NULL); + if (IS_ERR(priv->clk)) { + dev_err(dev, "failed to get ccorr clk\n"); + return PTR_ERR(priv->clk); + } + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + priv->regs = devm_ioremap_resource(dev, res); + if (IS_ERR(priv->regs)) { + dev_err(dev, "failed to ioremap ccorr\n"); + return PTR_ERR(priv->regs); + } + +#if IS_REACHABLE(CONFIG_MTK_CMDQ) + ret = cmdq_dev_get_client_reg(dev, &priv->cmdq_reg, 0); + if (ret) + dev_dbg(dev, "get mediatek,gce-client-reg fail!\n"); +#endif + + priv->data = of_device_get_match_data(dev); + platform_set_drvdata(pdev, priv); + + ret = component_add(dev, &mtk_disp_ccorr_component_ops); + if (ret) + dev_err(dev, "Failed to add component: %d\n", ret); + + return ret; +} + +static int mtk_disp_ccorr_remove(struct platform_device *pdev) +{ + component_del(&pdev->dev, &mtk_disp_ccorr_component_ops); + + return 0; +} + +static const struct mtk_disp_ccorr_data mt8183_ccorr_driver_data = { + .matrix_bits = 10, +}; + +static const struct of_device_id mtk_disp_ccorr_driver_dt_match[] = { + { .compatible = "mediatek,mt8183-disp-ccorr", + .data = &mt8183_ccorr_driver_data}, + {}, +}; +MODULE_DEVICE_TABLE(of, mtk_disp_ccorr_driver_dt_match); + +struct platform_driver mtk_disp_ccorr_driver = { + .probe = mtk_disp_ccorr_probe, + .remove = mtk_disp_ccorr_remove, + .driver = { + .name = "mediatek-disp-ccorr", + .owner = THIS_MODULE, + .of_match_table = mtk_disp_ccorr_driver_dt_match, + }, +}; diff --git
a/drivers/gpu/drm/mediatek/mtk_disp_color.c b/drivers/gpu/drm/mediatek/mtk_disp_color.c index 6048cbc9f0ec..63f411ab393b 100644 --- a/drivers/gpu/drm/mediatek/mtk_disp_color.c +++ b/drivers/gpu/drm/mediatek/mtk_disp_color.c @@ -11,6 +11,7 @@ #include <linux/platform_device.h> #include <linux/soc/mediatek/mtk-cmdq.h> +#include "mtk_disp_drv.h" #include "mtk_drm_crtc.h" #include "mtk_drm_ddp_comp.h" @@ -36,64 +37,55 @@ struct mtk_disp_color_data { * @data: platform colour driver data */ struct mtk_disp_color { - struct mtk_ddp_comp ddp_comp; struct drm_crtc *crtc; + struct clk *clk; + void __iomem *regs; + struct cmdq_client_reg cmdq_reg; const struct mtk_disp_color_data *data; }; -static inline struct mtk_disp_color *comp_to_color(struct mtk_ddp_comp *comp) +int mtk_color_clk_enable(struct device *dev) { - return container_of(comp, struct mtk_disp_color, ddp_comp); + struct mtk_disp_color *color = dev_get_drvdata(dev); + + return clk_prepare_enable(color->clk); } -static void mtk_color_config(struct mtk_ddp_comp *comp, unsigned int w, - unsigned int h, unsigned int vrefresh, - unsigned int bpc, struct cmdq_pkt *cmdq_pkt) +void mtk_color_clk_disable(struct device *dev) { - struct mtk_disp_color *color = comp_to_color(comp); + struct mtk_disp_color *color = dev_get_drvdata(dev); - mtk_ddp_write(cmdq_pkt, w, comp, DISP_COLOR_WIDTH(color)); - mtk_ddp_write(cmdq_pkt, h, comp, DISP_COLOR_HEIGHT(color)); + clk_disable_unprepare(color->clk); } -static void mtk_color_start(struct mtk_ddp_comp *comp) +void mtk_color_config(struct device *dev, unsigned int w, + unsigned int h, unsigned int vrefresh, + unsigned int bpc, struct cmdq_pkt *cmdq_pkt) { - struct mtk_disp_color *color = comp_to_color(comp); + struct mtk_disp_color *color = dev_get_drvdata(dev); - writel(COLOR_BYPASS_ALL | COLOR_SEQ_SEL, - comp->regs + DISP_COLOR_CFG_MAIN); - writel(0x1, comp->regs + DISP_COLOR_START(color)); + mtk_ddp_write(cmdq_pkt, w, &color->cmdq_reg, color->regs, DISP_COLOR_WIDTH(color)); + mtk_ddp_write(cmdq_pkt, h, &color->cmdq_reg, color->regs, DISP_COLOR_HEIGHT(color)); } -static const struct mtk_ddp_comp_funcs mtk_disp_color_funcs = { - .config = mtk_color_config, - .start = mtk_color_start, -}; +void mtk_color_start(struct device *dev) +{ + struct mtk_disp_color *color = dev_get_drvdata(dev); + + writel(COLOR_BYPASS_ALL | COLOR_SEQ_SEL, + color->regs + DISP_COLOR_CFG_MAIN); + writel(0x1, color->regs + DISP_COLOR_START(color)); +} static int mtk_disp_color_bind(struct device *dev, struct device *master, void *data) { - struct mtk_disp_color *priv = dev_get_drvdata(dev); - struct drm_device *drm_dev = data; - int ret; - - ret = mtk_ddp_comp_register(drm_dev, &priv->ddp_comp); - if (ret < 0) { - dev_err(dev, "Failed to register component %pOF: %d\n", - dev->of_node, ret); - return ret; - } - return 0; } static void mtk_disp_color_unbind(struct device *dev, struct device *master, void *data) { - struct mtk_disp_color *priv = dev_get_drvdata(dev); - struct drm_device *drm_dev = data; - - mtk_ddp_comp_unregister(drm_dev, &priv->ddp_comp); } static const struct component_ops mtk_disp_color_component_ops = { @@ -105,31 +97,32 @@ static int mtk_disp_color_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; struct mtk_disp_color *priv; - int comp_id; + struct resource *res; int ret; priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); if (!priv) return -ENOMEM; - comp_id = mtk_ddp_comp_get_id(dev->of_node, MTK_DISP_COLOR); - if (comp_id < 0) { - dev_err(dev, "Failed to identify by alias: %d\n", comp_id); - 
return comp_id; + priv->clk = devm_clk_get(dev, NULL); + if (IS_ERR(priv->clk)) { + dev_err(dev, "failed to get color clk\n"); + return PTR_ERR(priv->clk); } - ret = mtk_ddp_comp_init(dev, dev->of_node, &priv->ddp_comp, comp_id, - &mtk_disp_color_funcs); - if (ret) { - if (ret != -EPROBE_DEFER) - dev_err(dev, "Failed to initialize component: %d\n", - ret); - - return ret; + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + priv->regs = devm_ioremap_resource(dev, res); + if (IS_ERR(priv->regs)) { + dev_err(dev, "failed to ioremap color\n"); + return PTR_ERR(priv->regs); } +#if IS_REACHABLE(CONFIG_MTK_CMDQ) + ret = cmdq_dev_get_client_reg(dev, &priv->cmdq_reg, 0); + if (ret) + dev_dbg(dev, "get mediatek,gce-client-reg fail!\n"); +#endif priv->data = of_device_get_match_data(dev); - platform_set_drvdata(pdev, priv); ret = component_add(dev, &mtk_disp_color_component_ops); @@ -141,8 +134,6 @@ static int mtk_disp_color_probe(struct platform_device *pdev) static int mtk_disp_color_remove(struct platform_device *pdev) { - component_del(&pdev->dev, &mtk_disp_color_component_ops); - return 0; } diff --git a/drivers/gpu/drm/mediatek/mtk_disp_drv.h b/drivers/gpu/drm/mediatek/mtk_disp_drv.h new file mode 100644 index 000000000000..cafd9df2d63b --- /dev/null +++ b/drivers/gpu/drm/mediatek/mtk_disp_drv.h @@ -0,0 +1,92 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020 MediaTek Inc. + */ + +#ifndef _MTK_DISP_DRV_H_ +#define _MTK_DISP_DRV_H_ + +#include <linux/soc/mediatek/mtk-cmdq.h> +#include "mtk_drm_plane.h" + +void mtk_ccorr_ctm_set(struct device *dev, struct drm_crtc_state *state); +int mtk_ccorr_clk_enable(struct device *dev); +void mtk_ccorr_clk_disable(struct device *dev); +void mtk_ccorr_config(struct device *dev, unsigned int w, + unsigned int h, unsigned int vrefresh, + unsigned int bpc, struct cmdq_pkt *cmdq_pkt); +void mtk_ccorr_start(struct device *dev); +void mtk_ccorr_stop(struct device *dev); + +void mtk_color_bypass_shadow(struct device *dev); +int mtk_color_clk_enable(struct device *dev); +void mtk_color_clk_disable(struct device *dev); +void mtk_color_config(struct device *dev, unsigned int w, + unsigned int h, unsigned int vrefresh, + unsigned int bpc, struct cmdq_pkt *cmdq_pkt); +void mtk_color_start(struct device *dev); + +void mtk_dither_set_common(void __iomem *regs, struct cmdq_client_reg *cmdq_reg, + unsigned int bpc, unsigned int cfg, + unsigned int dither_en, struct cmdq_pkt *cmdq_pkt); + +void mtk_dpi_start(struct device *dev); +void mtk_dpi_stop(struct device *dev); + +void mtk_dsi_ddp_start(struct device *dev); +void mtk_dsi_ddp_stop(struct device *dev); + +int mtk_gamma_clk_enable(struct device *dev); +void mtk_gamma_clk_disable(struct device *dev); +void mtk_gamma_config(struct device *dev, unsigned int w, + unsigned int h, unsigned int vrefresh, + unsigned int bpc, struct cmdq_pkt *cmdq_pkt); +void mtk_gamma_set(struct device *dev, struct drm_crtc_state *state); +void mtk_gamma_set_common(void __iomem *regs, struct drm_crtc_state *state); +void mtk_gamma_start(struct device *dev); +void mtk_gamma_stop(struct device *dev); + +void mtk_ovl_bgclr_in_on(struct device *dev); +void mtk_ovl_bgclr_in_off(struct device *dev); +void mtk_ovl_bypass_shadow(struct device *dev); +int mtk_ovl_clk_enable(struct device *dev); +void mtk_ovl_clk_disable(struct device *dev); +void mtk_ovl_config(struct device *dev, unsigned int w, + unsigned int h, unsigned int vrefresh, + unsigned int bpc, struct cmdq_pkt *cmdq_pkt); +int mtk_ovl_layer_check(struct device 
*dev, unsigned int idx, + struct mtk_plane_state *mtk_state); +void mtk_ovl_layer_config(struct device *dev, unsigned int idx, + struct mtk_plane_state *state, + struct cmdq_pkt *cmdq_pkt); +unsigned int mtk_ovl_layer_nr(struct device *dev); +void mtk_ovl_layer_on(struct device *dev, unsigned int idx, + struct cmdq_pkt *cmdq_pkt); +void mtk_ovl_layer_off(struct device *dev, unsigned int idx, + struct cmdq_pkt *cmdq_pkt); +void mtk_ovl_start(struct device *dev); +void mtk_ovl_stop(struct device *dev); +unsigned int mtk_ovl_supported_rotations(struct device *dev); +void mtk_ovl_enable_vblank(struct device *dev, + void (*vblank_cb)(void *), + void *vblank_cb_data); +void mtk_ovl_disable_vblank(struct device *dev); + +void mtk_rdma_bypass_shadow(struct device *dev); +int mtk_rdma_clk_enable(struct device *dev); +void mtk_rdma_clk_disable(struct device *dev); +void mtk_rdma_config(struct device *dev, unsigned int width, + unsigned int height, unsigned int vrefresh, + unsigned int bpc, struct cmdq_pkt *cmdq_pkt); +unsigned int mtk_rdma_layer_nr(struct device *dev); +void mtk_rdma_layer_config(struct device *dev, unsigned int idx, + struct mtk_plane_state *state, + struct cmdq_pkt *cmdq_pkt); +void mtk_rdma_start(struct device *dev); +void mtk_rdma_stop(struct device *dev); +void mtk_rdma_enable_vblank(struct device *dev, + void (*vblank_cb)(void *), + void *vblank_cb_data); +void mtk_rdma_disable_vblank(struct device *dev); + +#endif diff --git a/drivers/gpu/drm/mediatek/mtk_disp_gamma.c b/drivers/gpu/drm/mediatek/mtk_disp_gamma.c new file mode 100644 index 000000000000..3ebf91e0ab41 --- /dev/null +++ b/drivers/gpu/drm/mediatek/mtk_disp_gamma.c @@ -0,0 +1,197 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2021 MediaTek Inc. + */ + +#include <linux/clk.h> +#include <linux/component.h> +#include <linux/module.h> +#include <linux/of_device.h> +#include <linux/of_irq.h> +#include <linux/platform_device.h> +#include <linux/soc/mediatek/mtk-cmdq.h> + +#include "mtk_disp_drv.h" +#include "mtk_drm_crtc.h" +#include "mtk_drm_ddp_comp.h" + +#define DISP_GAMMA_EN 0x0000 +#define GAMMA_EN BIT(0) +#define DISP_GAMMA_CFG 0x0020 +#define GAMMA_LUT_EN BIT(1) +#define GAMMA_DITHERING BIT(2) +#define DISP_GAMMA_SIZE 0x0030 +#define DISP_GAMMA_LUT 0x0700 + +#define LUT_10BIT_MASK 0x03ff + +struct mtk_disp_gamma_data { + bool has_dither; +}; + +/** + * struct mtk_disp_gamma - DISP_GAMMA driver structure + * @ddp_comp - structure containing type enum and hardware resources + * @crtc - associated crtc to report irq events to + */ +struct mtk_disp_gamma { + struct clk *clk; + void __iomem *regs; + struct cmdq_client_reg cmdq_reg; + const struct mtk_disp_gamma_data *data; +}; + +int mtk_gamma_clk_enable(struct device *dev) +{ + struct mtk_disp_gamma *gamma = dev_get_drvdata(dev); + + return clk_prepare_enable(gamma->clk); +} + +void mtk_gamma_clk_disable(struct device *dev) +{ + struct mtk_disp_gamma *gamma = dev_get_drvdata(dev); + + clk_disable_unprepare(gamma->clk); +} + +void mtk_gamma_set_common(void __iomem *regs, struct drm_crtc_state *state) +{ + unsigned int i, reg; + struct drm_color_lut *lut; + void __iomem *lut_base; + u32 word; + + if (state->gamma_lut) { + reg = readl(regs + DISP_GAMMA_CFG); + reg = reg | GAMMA_LUT_EN; + writel(reg, regs + DISP_GAMMA_CFG); + lut_base = regs + DISP_GAMMA_LUT; + lut = (struct drm_color_lut *)state->gamma_lut->data; + for (i = 0; i < MTK_LUT_SIZE; i++) { + word = (((lut[i].red >> 6) & LUT_10BIT_MASK) << 20) + + (((lut[i].green >> 6) & LUT_10BIT_MASK) << 
10) + + ((lut[i].blue >> 6) & LUT_10BIT_MASK); + writel(word, (lut_base + i * 4)); + } + } +} + +void mtk_gamma_set(struct device *dev, struct drm_crtc_state *state) +{ + struct mtk_disp_gamma *gamma = dev_get_drvdata(dev); + + mtk_gamma_set_common(gamma->regs, state); +} + +void mtk_gamma_config(struct device *dev, unsigned int w, + unsigned int h, unsigned int vrefresh, + unsigned int bpc, struct cmdq_pkt *cmdq_pkt) +{ + struct mtk_disp_gamma *gamma = dev_get_drvdata(dev); + + mtk_ddp_write(cmdq_pkt, h << 16 | w, &gamma->cmdq_reg, gamma->regs, + DISP_GAMMA_SIZE); + if (gamma->data && gamma->data->has_dither) + mtk_dither_set_common(gamma->regs, &gamma->cmdq_reg, bpc, + DISP_GAMMA_CFG, GAMMA_DITHERING, cmdq_pkt); +} + +void mtk_gamma_start(struct device *dev) +{ + struct mtk_disp_gamma *gamma = dev_get_drvdata(dev); + + writel(GAMMA_EN, gamma->regs + DISP_GAMMA_EN); +} + +void mtk_gamma_stop(struct device *dev) +{ + struct mtk_disp_gamma *gamma = dev_get_drvdata(dev); + + writel_relaxed(0x0, gamma->regs + DISP_GAMMA_EN); +} + +static int mtk_disp_gamma_bind(struct device *dev, struct device *master, + void *data) +{ + return 0; +} + +static void mtk_disp_gamma_unbind(struct device *dev, struct device *master, + void *data) +{ +} + +static const struct component_ops mtk_disp_gamma_component_ops = { + .bind = mtk_disp_gamma_bind, + .unbind = mtk_disp_gamma_unbind, +}; + +static int mtk_disp_gamma_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct mtk_disp_gamma *priv; + struct resource *res; + int ret; + + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + priv->clk = devm_clk_get(dev, NULL); + if (IS_ERR(priv->clk)) { + dev_err(dev, "failed to get gamma clk\n"); + return PTR_ERR(priv->clk); + } + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + priv->regs = devm_ioremap_resource(dev, res); + if (IS_ERR(priv->regs)) { + dev_err(dev, "failed to ioremap gamma\n"); + return PTR_ERR(priv->regs); + } + +#if IS_REACHABLE(CONFIG_MTK_CMDQ) + ret = cmdq_dev_get_client_reg(dev, &priv->cmdq_reg, 0); + if (ret) + dev_dbg(dev, "get mediatek,gce-client-reg fail!\n"); +#endif + + priv->data = of_device_get_match_data(dev); + platform_set_drvdata(pdev, priv); + + ret = component_add(dev, &mtk_disp_gamma_component_ops); + if (ret) + dev_err(dev, "Failed to add component: %d\n", ret); + + return ret; +} + +static int mtk_disp_gamma_remove(struct platform_device *pdev) +{ + component_del(&pdev->dev, &mtk_disp_gamma_component_ops); + + return 0; +} + +static const struct mtk_disp_gamma_data mt8173_gamma_driver_data = { + .has_dither = true, +}; + +static const struct of_device_id mtk_disp_gamma_driver_dt_match[] = { + { .compatible = "mediatek,mt8173-disp-gamma", + .data = &mt8173_gamma_driver_data}, + { .compatible = "mediatek,mt8183-disp-gamma"}, + {}, +}; +MODULE_DEVICE_TABLE(of, mtk_disp_gamma_driver_dt_match); + +struct platform_driver mtk_disp_gamma_driver = { + .probe = mtk_disp_gamma_probe, + .remove = mtk_disp_gamma_remove, + .driver = { + .name = "mediatek-disp-gamma", + .owner = THIS_MODULE, + .of_match_table = mtk_disp_gamma_driver_dt_match, + }, +}; diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c index 74ef6fc0528b..961f87f8d4d1 100644 --- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c +++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c @@ -13,6 +13,7 @@ #include <linux/platform_device.h> #include <linux/soc/mediatek/mtk-cmdq.h> +#include "mtk_disp_drv.h" #include "mtk_drm_crtc.h" 
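
The mtk_gamma_set_common() helper added in mtk_disp_gamma.c above packs each gamma LUT entry into one 32-bit register word: every 16-bit drm_color_lut channel is truncated to 10 bits (>> 6), with red in bits [29:20], green in [19:10] and blue in [9:0]. A minimal user-space sketch of that packing, assuming a hypothetical pack_gamma_word() helper and a struct that mirrors the drm_color_lut channels:

#include <stdint.h>
#include <stdio.h>

#define LUT_10BIT_MASK 0x03ff

/* Stand-in for the red/green/blue members of struct drm_color_lut. */
struct color_lut {
	uint16_t red, green, blue;
};

static uint32_t pack_gamma_word(const struct color_lut *e)
{
	/* Same arithmetic as the loop in mtk_gamma_set_common(); the three
	 * 10-bit fields are disjoint, so + and | are interchangeable here. */
	return (((e->red >> 6) & LUT_10BIT_MASK) << 20) +
	       (((e->green >> 6) & LUT_10BIT_MASK) << 10) +
	       ((e->blue >> 6) & LUT_10BIT_MASK);
}

int main(void)
{
	struct color_lut white = { 0xffff, 0xffff, 0xffff };

	/* Full-scale white truncates to 0x3ff per channel: prints 0x3fffffff. */
	printf("0x%08x\n", (unsigned int)pack_gamma_word(&white));
	return 0;
}
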
#include "mtk_drm_ddp_comp.h" @@ -23,6 +24,7 @@ #define DISP_REG_OVL_RST 0x0014 #define DISP_REG_OVL_ROI_SIZE 0x0020 #define DISP_REG_OVL_DATAPATH_CON 0x0024 +#define OVL_LAYER_SMI_ID_EN BIT(0) #define OVL_BGCLR_SEL_IN BIT(2) #define DISP_REG_OVL_ROI_BGCLR 0x0028 #define DISP_REG_OVL_SRC_CON 0x002c @@ -61,6 +63,7 @@ struct mtk_disp_ovl_data { unsigned int gmc_bits; unsigned int layer_nr; bool fmt_rgb565_is_0; + bool smi_id_en; }; /** @@ -70,88 +73,124 @@ struct mtk_disp_ovl_data { * @data: platform data */ struct mtk_disp_ovl { - struct mtk_ddp_comp ddp_comp; struct drm_crtc *crtc; + struct clk *clk; + void __iomem *regs; + struct cmdq_client_reg cmdq_reg; const struct mtk_disp_ovl_data *data; + void (*vblank_cb)(void *data); + void *vblank_cb_data; }; -static inline struct mtk_disp_ovl *comp_to_ovl(struct mtk_ddp_comp *comp) -{ - return container_of(comp, struct mtk_disp_ovl, ddp_comp); -} - static irqreturn_t mtk_disp_ovl_irq_handler(int irq, void *dev_id) { struct mtk_disp_ovl *priv = dev_id; - struct mtk_ddp_comp *ovl = &priv->ddp_comp; /* Clear frame completion interrupt */ - writel(0x0, ovl->regs + DISP_REG_OVL_INTSTA); + writel(0x0, priv->regs + DISP_REG_OVL_INTSTA); - if (!priv->crtc) + if (!priv->vblank_cb) return IRQ_NONE; - mtk_crtc_ddp_irq(priv->crtc, ovl); + priv->vblank_cb(priv->vblank_cb_data); return IRQ_HANDLED; } -static void mtk_ovl_enable_vblank(struct mtk_ddp_comp *comp, - struct drm_crtc *crtc) +void mtk_ovl_enable_vblank(struct device *dev, + void (*vblank_cb)(void *), + void *vblank_cb_data) +{ + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); + + ovl->vblank_cb = vblank_cb; + ovl->vblank_cb_data = vblank_cb_data; + writel(0x0, ovl->regs + DISP_REG_OVL_INTSTA); + writel_relaxed(OVL_FME_CPL_INT, ovl->regs + DISP_REG_OVL_INTEN); +} + +void mtk_ovl_disable_vblank(struct device *dev) +{ + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); + + ovl->vblank_cb = NULL; + ovl->vblank_cb_data = NULL; + writel_relaxed(0x0, ovl->regs + DISP_REG_OVL_INTEN); +} + +int mtk_ovl_clk_enable(struct device *dev) { - struct mtk_disp_ovl *ovl = comp_to_ovl(comp); + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); - ovl->crtc = crtc; - writel(0x0, comp->regs + DISP_REG_OVL_INTSTA); - writel_relaxed(OVL_FME_CPL_INT, comp->regs + DISP_REG_OVL_INTEN); + return clk_prepare_enable(ovl->clk); } -static void mtk_ovl_disable_vblank(struct mtk_ddp_comp *comp) +void mtk_ovl_clk_disable(struct device *dev) { - struct mtk_disp_ovl *ovl = comp_to_ovl(comp); + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); - ovl->crtc = NULL; - writel_relaxed(0x0, comp->regs + DISP_REG_OVL_INTEN); + clk_disable_unprepare(ovl->clk); } -static void mtk_ovl_start(struct mtk_ddp_comp *comp) +void mtk_ovl_start(struct device *dev) { - writel_relaxed(0x1, comp->regs + DISP_REG_OVL_EN); + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); + + if (ovl->data->smi_id_en) { + unsigned int reg; + + reg = readl(ovl->regs + DISP_REG_OVL_DATAPATH_CON); + reg = reg | OVL_LAYER_SMI_ID_EN; + writel_relaxed(reg, ovl->regs + DISP_REG_OVL_DATAPATH_CON); + } + writel_relaxed(0x1, ovl->regs + DISP_REG_OVL_EN); } -static void mtk_ovl_stop(struct mtk_ddp_comp *comp) +void mtk_ovl_stop(struct device *dev) { - writel_relaxed(0x0, comp->regs + DISP_REG_OVL_EN); + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); + + writel_relaxed(0x0, ovl->regs + DISP_REG_OVL_EN); + if (ovl->data->smi_id_en) { + unsigned int reg; + + reg = readl(ovl->regs + DISP_REG_OVL_DATAPATH_CON); + reg = reg & ~OVL_LAYER_SMI_ID_EN; + writel_relaxed(reg, ovl->regs + 
DISP_REG_OVL_DATAPATH_CON); + } + } -static void mtk_ovl_config(struct mtk_ddp_comp *comp, unsigned int w, - unsigned int h, unsigned int vrefresh, - unsigned int bpc, struct cmdq_pkt *cmdq_pkt) +void mtk_ovl_config(struct device *dev, unsigned int w, + unsigned int h, unsigned int vrefresh, + unsigned int bpc, struct cmdq_pkt *cmdq_pkt) { + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); + if (w != 0 && h != 0) - mtk_ddp_write_relaxed(cmdq_pkt, h << 16 | w, comp, + mtk_ddp_write_relaxed(cmdq_pkt, h << 16 | w, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_ROI_SIZE); - mtk_ddp_write_relaxed(cmdq_pkt, 0x0, comp, DISP_REG_OVL_ROI_BGCLR); + mtk_ddp_write_relaxed(cmdq_pkt, 0x0, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_ROI_BGCLR); - mtk_ddp_write(cmdq_pkt, 0x1, comp, DISP_REG_OVL_RST); - mtk_ddp_write(cmdq_pkt, 0x0, comp, DISP_REG_OVL_RST); + mtk_ddp_write(cmdq_pkt, 0x1, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_RST); + mtk_ddp_write(cmdq_pkt, 0x0, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_RST); } -static unsigned int mtk_ovl_layer_nr(struct mtk_ddp_comp *comp) +unsigned int mtk_ovl_layer_nr(struct device *dev) { - struct mtk_disp_ovl *ovl = comp_to_ovl(comp); + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); return ovl->data->layer_nr; } -static unsigned int mtk_ovl_supported_rotations(struct mtk_ddp_comp *comp) +unsigned int mtk_ovl_supported_rotations(struct device *dev) { return DRM_MODE_ROTATE_0 | DRM_MODE_ROTATE_180 | DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y; } -static int mtk_ovl_layer_check(struct mtk_ddp_comp *comp, unsigned int idx, - struct mtk_plane_state *mtk_state) +int mtk_ovl_layer_check(struct device *dev, unsigned int idx, + struct mtk_plane_state *mtk_state) { struct drm_plane_state *state = &mtk_state->base; unsigned int rotation = 0; @@ -178,15 +217,15 @@ static int mtk_ovl_layer_check(struct mtk_ddp_comp *comp, unsigned int idx, return 0; } -static void mtk_ovl_layer_on(struct mtk_ddp_comp *comp, unsigned int idx, - struct cmdq_pkt *cmdq_pkt) +void mtk_ovl_layer_on(struct device *dev, unsigned int idx, + struct cmdq_pkt *cmdq_pkt) { unsigned int gmc_thrshd_l; unsigned int gmc_thrshd_h; unsigned int gmc_value; - struct mtk_disp_ovl *ovl = comp_to_ovl(comp); + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); - mtk_ddp_write(cmdq_pkt, 0x1, comp, + mtk_ddp_write(cmdq_pkt, 0x1, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_RDMA_CTRL(idx)); gmc_thrshd_l = GMC_THRESHOLD_LOW >> (GMC_THRESHOLD_BITS - ovl->data->gmc_bits); @@ -198,17 +237,19 @@ static void mtk_ovl_layer_on(struct mtk_ddp_comp *comp, unsigned int idx, gmc_value = gmc_thrshd_l | gmc_thrshd_l << 8 | gmc_thrshd_h << 16 | gmc_thrshd_h << 24; mtk_ddp_write(cmdq_pkt, gmc_value, - comp, DISP_REG_OVL_RDMA_GMC(idx)); - mtk_ddp_write_mask(cmdq_pkt, BIT(idx), comp, + &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_RDMA_GMC(idx)); + mtk_ddp_write_mask(cmdq_pkt, BIT(idx), &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_SRC_CON, BIT(idx)); } -static void mtk_ovl_layer_off(struct mtk_ddp_comp *comp, unsigned int idx, - struct cmdq_pkt *cmdq_pkt) +void mtk_ovl_layer_off(struct device *dev, unsigned int idx, + struct cmdq_pkt *cmdq_pkt) { - mtk_ddp_write_mask(cmdq_pkt, 0, comp, + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); + + mtk_ddp_write_mask(cmdq_pkt, 0, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_SRC_CON, BIT(idx)); - mtk_ddp_write(cmdq_pkt, 0, comp, + mtk_ddp_write(cmdq_pkt, 0, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_RDMA_CTRL(idx)); } @@ -248,11 +289,11 @@ static unsigned int ovl_fmt_convert(struct mtk_disp_ovl *ovl, unsigned int fmt) } } -static void 
mtk_ovl_layer_config(struct mtk_ddp_comp *comp, unsigned int idx, - struct mtk_plane_state *state, - struct cmdq_pkt *cmdq_pkt) +void mtk_ovl_layer_config(struct device *dev, unsigned int idx, + struct mtk_plane_state *state, + struct cmdq_pkt *cmdq_pkt) { - struct mtk_disp_ovl *ovl = comp_to_ovl(comp); + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); struct mtk_plane_pending_state *pending = &state->pending; unsigned int addr = pending->addr; unsigned int pitch = pending->pitch & 0xffff; @@ -262,12 +303,12 @@ static void mtk_ovl_layer_config(struct mtk_ddp_comp *comp, unsigned int idx, unsigned int con; if (!pending->enable) { - mtk_ovl_layer_off(comp, idx, cmdq_pkt); + mtk_ovl_layer_off(dev, idx, cmdq_pkt); return; } con = ovl_fmt_convert(ovl, fmt); - if (state->base.fb->format->has_alpha) + if (state->base.fb && state->base.fb->format->has_alpha) con |= OVL_CON_AEN | OVL_CON_ALPHA; if (pending->rotation & DRM_MODE_REFLECT_Y) { @@ -280,76 +321,49 @@ static void mtk_ovl_layer_config(struct mtk_ddp_comp *comp, unsigned int idx, addr += pending->pitch - 1; } - mtk_ddp_write_relaxed(cmdq_pkt, con, comp, + mtk_ddp_write_relaxed(cmdq_pkt, con, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_CON(idx)); - mtk_ddp_write_relaxed(cmdq_pkt, pitch, comp, + mtk_ddp_write_relaxed(cmdq_pkt, pitch, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH(idx)); - mtk_ddp_write_relaxed(cmdq_pkt, src_size, comp, + mtk_ddp_write_relaxed(cmdq_pkt, src_size, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_SRC_SIZE(idx)); - mtk_ddp_write_relaxed(cmdq_pkt, offset, comp, + mtk_ddp_write_relaxed(cmdq_pkt, offset, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_OFFSET(idx)); - mtk_ddp_write_relaxed(cmdq_pkt, addr, comp, + mtk_ddp_write_relaxed(cmdq_pkt, addr, &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_ADDR(ovl, idx)); - mtk_ovl_layer_on(comp, idx, cmdq_pkt); + mtk_ovl_layer_on(dev, idx, cmdq_pkt); } -static void mtk_ovl_bgclr_in_on(struct mtk_ddp_comp *comp) +void mtk_ovl_bgclr_in_on(struct device *dev) { + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); unsigned int reg; - reg = readl(comp->regs + DISP_REG_OVL_DATAPATH_CON); + reg = readl(ovl->regs + DISP_REG_OVL_DATAPATH_CON); reg = reg | OVL_BGCLR_SEL_IN; - writel(reg, comp->regs + DISP_REG_OVL_DATAPATH_CON); + writel(reg, ovl->regs + DISP_REG_OVL_DATAPATH_CON); } -static void mtk_ovl_bgclr_in_off(struct mtk_ddp_comp *comp) +void mtk_ovl_bgclr_in_off(struct device *dev) { + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); unsigned int reg; - reg = readl(comp->regs + DISP_REG_OVL_DATAPATH_CON); + reg = readl(ovl->regs + DISP_REG_OVL_DATAPATH_CON); reg = reg & ~OVL_BGCLR_SEL_IN; - writel(reg, comp->regs + DISP_REG_OVL_DATAPATH_CON); + writel(reg, ovl->regs + DISP_REG_OVL_DATAPATH_CON); } -static const struct mtk_ddp_comp_funcs mtk_disp_ovl_funcs = { - .config = mtk_ovl_config, - .start = mtk_ovl_start, - .stop = mtk_ovl_stop, - .enable_vblank = mtk_ovl_enable_vblank, - .disable_vblank = mtk_ovl_disable_vblank, - .supported_rotations = mtk_ovl_supported_rotations, - .layer_nr = mtk_ovl_layer_nr, - .layer_check = mtk_ovl_layer_check, - .layer_config = mtk_ovl_layer_config, - .bgclr_in_on = mtk_ovl_bgclr_in_on, - .bgclr_in_off = mtk_ovl_bgclr_in_off, -}; - static int mtk_disp_ovl_bind(struct device *dev, struct device *master, void *data) { - struct mtk_disp_ovl *priv = dev_get_drvdata(dev); - struct drm_device *drm_dev = data; - int ret; - - ret = mtk_ddp_comp_register(drm_dev, &priv->ddp_comp); - if (ret < 0) { - dev_err(dev, "Failed to register component %pOF: %d\n", - dev->of_node, ret); - return 
ret; - } - return 0; } static void mtk_disp_ovl_unbind(struct device *dev, struct device *master, void *data) { - struct mtk_disp_ovl *priv = dev_get_drvdata(dev); - struct drm_device *drm_dev = data; - - mtk_ddp_comp_unregister(drm_dev, &priv->ddp_comp); } static const struct component_ops mtk_disp_ovl_component_ops = { @@ -361,7 +375,7 @@ static int mtk_disp_ovl_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; struct mtk_disp_ovl *priv; - int comp_id; + struct resource *res; int irq; int ret; @@ -373,27 +387,25 @@ static int mtk_disp_ovl_probe(struct platform_device *pdev) if (irq < 0) return irq; - priv->data = of_device_get_match_data(dev); - - comp_id = mtk_ddp_comp_get_id(dev->of_node, - priv->data->layer_nr == 4 ? - MTK_DISP_OVL : - MTK_DISP_OVL_2L); - if (comp_id < 0) { - dev_err(dev, "Failed to identify by alias: %d\n", comp_id); - return comp_id; + priv->clk = devm_clk_get(dev, NULL); + if (IS_ERR(priv->clk)) { + dev_err(dev, "failed to get ovl clk\n"); + return PTR_ERR(priv->clk); } - ret = mtk_ddp_comp_init(dev, dev->of_node, &priv->ddp_comp, comp_id, - &mtk_disp_ovl_funcs); - if (ret) { - if (ret != -EPROBE_DEFER) - dev_err(dev, "Failed to initialize component: %d\n", - ret); - - return ret; + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + priv->regs = devm_ioremap_resource(dev, res); + if (IS_ERR(priv->regs)) { + dev_err(dev, "failed to ioremap ovl\n"); + return PTR_ERR(priv->regs); } +#if IS_REACHABLE(CONFIG_MTK_CMDQ) + ret = cmdq_dev_get_client_reg(dev, &priv->cmdq_reg, 0); + if (ret) + dev_dbg(dev, "get mediatek,gce-client-reg fail!\n"); +#endif + priv->data = of_device_get_match_data(dev); platform_set_drvdata(pdev, priv); ret = devm_request_irq(dev, irq, mtk_disp_ovl_irq_handler, @@ -412,8 +424,6 @@ static int mtk_disp_ovl_probe(struct platform_device *pdev) static int mtk_disp_ovl_remove(struct platform_device *pdev) { - component_del(&pdev->dev, &mtk_disp_ovl_component_ops); - return 0; } @@ -431,11 +441,29 @@ static const struct mtk_disp_ovl_data mt8173_ovl_driver_data = { .fmt_rgb565_is_0 = true, }; +static const struct mtk_disp_ovl_data mt8183_ovl_driver_data = { + .addr = DISP_REG_OVL_ADDR_MT8173, + .gmc_bits = 10, + .layer_nr = 4, + .fmt_rgb565_is_0 = true, +}; + +static const struct mtk_disp_ovl_data mt8183_ovl_2l_driver_data = { + .addr = DISP_REG_OVL_ADDR_MT8173, + .gmc_bits = 10, + .layer_nr = 2, + .fmt_rgb565_is_0 = true, +}; + static const struct of_device_id mtk_disp_ovl_driver_dt_match[] = { { .compatible = "mediatek,mt2701-disp-ovl", .data = &mt2701_ovl_driver_data}, { .compatible = "mediatek,mt8173-disp-ovl", .data = &mt8173_ovl_driver_data}, + { .compatible = "mediatek,mt8183-disp-ovl", + .data = &mt8183_ovl_driver_data}, + { .compatible = "mediatek,mt8183-disp-ovl-2l", + .data = &mt8183_ovl_2l_driver_data}, {}, }; MODULE_DEVICE_TABLE(of, mtk_disp_ovl_driver_dt_match); diff --git a/drivers/gpu/drm/mediatek/mtk_disp_rdma.c b/drivers/gpu/drm/mediatek/mtk_disp_rdma.c index d46b8ae1d080..728aaadfea8c 100644 --- a/drivers/gpu/drm/mediatek/mtk_disp_rdma.c +++ b/drivers/gpu/drm/mediatek/mtk_disp_rdma.c @@ -11,6 +11,7 @@ #include <linux/platform_device.h> #include <linux/soc/mediatek/mtk-cmdq.h> +#include "mtk_disp_drv.h" #include "mtk_drm_crtc.h" #include "mtk_drm_ddp_comp.h" @@ -61,83 +62,105 @@ struct mtk_disp_rdma_data { * @data: local driver data */ struct mtk_disp_rdma { - struct mtk_ddp_comp ddp_comp; - struct drm_crtc *crtc; + struct clk *clk; + void __iomem *regs; + struct cmdq_client_reg cmdq_reg; const struct 
mtk_disp_rdma_data *data; + void (*vblank_cb)(void *data); + void *vblank_cb_data; + u32 fifo_size; }; -static inline struct mtk_disp_rdma *comp_to_rdma(struct mtk_ddp_comp *comp) -{ - return container_of(comp, struct mtk_disp_rdma, ddp_comp); -} - static irqreturn_t mtk_disp_rdma_irq_handler(int irq, void *dev_id) { struct mtk_disp_rdma *priv = dev_id; - struct mtk_ddp_comp *rdma = &priv->ddp_comp; /* Clear frame completion interrupt */ - writel(0x0, rdma->regs + DISP_REG_RDMA_INT_STATUS); + writel(0x0, priv->regs + DISP_REG_RDMA_INT_STATUS); - if (!priv->crtc) + if (!priv->vblank_cb) return IRQ_NONE; - mtk_crtc_ddp_irq(priv->crtc, rdma); + priv->vblank_cb(priv->vblank_cb_data); return IRQ_HANDLED; } -static void rdma_update_bits(struct mtk_ddp_comp *comp, unsigned int reg, +static void rdma_update_bits(struct device *dev, unsigned int reg, unsigned int mask, unsigned int val) { - unsigned int tmp = readl(comp->regs + reg); + struct mtk_disp_rdma *rdma = dev_get_drvdata(dev); + unsigned int tmp = readl(rdma->regs + reg); tmp = (tmp & ~mask) | (val & mask); - writel(tmp, comp->regs + reg); + writel(tmp, rdma->regs + reg); } -static void mtk_rdma_enable_vblank(struct mtk_ddp_comp *comp, - struct drm_crtc *crtc) +void mtk_rdma_enable_vblank(struct device *dev, + void (*vblank_cb)(void *), + void *vblank_cb_data) { - struct mtk_disp_rdma *rdma = comp_to_rdma(comp); + struct mtk_disp_rdma *rdma = dev_get_drvdata(dev); - rdma->crtc = crtc; - rdma_update_bits(comp, DISP_REG_RDMA_INT_ENABLE, RDMA_FRAME_END_INT, + rdma->vblank_cb = vblank_cb; + rdma->vblank_cb_data = vblank_cb_data; + rdma_update_bits(dev, DISP_REG_RDMA_INT_ENABLE, RDMA_FRAME_END_INT, RDMA_FRAME_END_INT); } -static void mtk_rdma_disable_vblank(struct mtk_ddp_comp *comp) +void mtk_rdma_disable_vblank(struct device *dev) +{ + struct mtk_disp_rdma *rdma = dev_get_drvdata(dev); + + rdma->vblank_cb = NULL; + rdma->vblank_cb_data = NULL; + rdma_update_bits(dev, DISP_REG_RDMA_INT_ENABLE, RDMA_FRAME_END_INT, 0); +} + +int mtk_rdma_clk_enable(struct device *dev) { - struct mtk_disp_rdma *rdma = comp_to_rdma(comp); + struct mtk_disp_rdma *rdma = dev_get_drvdata(dev); - rdma->crtc = NULL; - rdma_update_bits(comp, DISP_REG_RDMA_INT_ENABLE, RDMA_FRAME_END_INT, 0); + return clk_prepare_enable(rdma->clk); } -static void mtk_rdma_start(struct mtk_ddp_comp *comp) +void mtk_rdma_clk_disable(struct device *dev) { - rdma_update_bits(comp, DISP_REG_RDMA_GLOBAL_CON, RDMA_ENGINE_EN, + struct mtk_disp_rdma *rdma = dev_get_drvdata(dev); + + clk_disable_unprepare(rdma->clk); +} + +void mtk_rdma_start(struct device *dev) +{ + rdma_update_bits(dev, DISP_REG_RDMA_GLOBAL_CON, RDMA_ENGINE_EN, RDMA_ENGINE_EN); } -static void mtk_rdma_stop(struct mtk_ddp_comp *comp) +void mtk_rdma_stop(struct device *dev) { - rdma_update_bits(comp, DISP_REG_RDMA_GLOBAL_CON, RDMA_ENGINE_EN, 0); + rdma_update_bits(dev, DISP_REG_RDMA_GLOBAL_CON, RDMA_ENGINE_EN, 0); } -static void mtk_rdma_config(struct mtk_ddp_comp *comp, unsigned int width, - unsigned int height, unsigned int vrefresh, - unsigned int bpc, struct cmdq_pkt *cmdq_pkt) +void mtk_rdma_config(struct device *dev, unsigned int width, + unsigned int height, unsigned int vrefresh, + unsigned int bpc, struct cmdq_pkt *cmdq_pkt) { unsigned int threshold; unsigned int reg; - struct mtk_disp_rdma *rdma = comp_to_rdma(comp); + struct mtk_disp_rdma *rdma = dev_get_drvdata(dev); + u32 rdma_fifo_size; - mtk_ddp_write_mask(cmdq_pkt, width, comp, + mtk_ddp_write_mask(cmdq_pkt, width, &rdma->cmdq_reg, rdma->regs, 
DISP_REG_RDMA_SIZE_CON_0, 0xfff); - mtk_ddp_write_mask(cmdq_pkt, height, comp, + mtk_ddp_write_mask(cmdq_pkt, height, &rdma->cmdq_reg, rdma->regs, DISP_REG_RDMA_SIZE_CON_1, 0xfffff); + if (rdma->fifo_size) + rdma_fifo_size = rdma->fifo_size; + else + rdma_fifo_size = RDMA_FIFO_SIZE(rdma); + /* * Enable FIFO underflow since DSI and DPI can't be blocked. * Keep the FIFO pseudo size reset default of 8 KiB. Set the @@ -146,9 +169,9 @@ static void mtk_rdma_config(struct mtk_ddp_comp *comp, unsigned int width, */ threshold = width * height * vrefresh * 4 * 7 / 1000000; reg = RDMA_FIFO_UNDERFLOW_EN | - RDMA_FIFO_PSEUDO_SIZE(RDMA_FIFO_SIZE(rdma)) | + RDMA_FIFO_PSEUDO_SIZE(rdma_fifo_size) | RDMA_OUTPUT_VALID_FIFO_THRESHOLD(threshold); - mtk_ddp_write(cmdq_pkt, reg, comp, DISP_REG_RDMA_FIFO_CON); + mtk_ddp_write(cmdq_pkt, reg, &rdma->cmdq_reg, rdma->regs, DISP_REG_RDMA_FIFO_CON); } static unsigned int rdma_fmt_convert(struct mtk_disp_rdma *rdma, @@ -188,16 +211,16 @@ static unsigned int rdma_fmt_convert(struct mtk_disp_rdma *rdma, } } -static unsigned int mtk_rdma_layer_nr(struct mtk_ddp_comp *comp) +unsigned int mtk_rdma_layer_nr(struct device *dev) { return 1; } -static void mtk_rdma_layer_config(struct mtk_ddp_comp *comp, unsigned int idx, - struct mtk_plane_state *state, - struct cmdq_pkt *cmdq_pkt) +void mtk_rdma_layer_config(struct device *dev, unsigned int idx, + struct mtk_plane_state *state, + struct cmdq_pkt *cmdq_pkt) { - struct mtk_disp_rdma *rdma = comp_to_rdma(comp); + struct mtk_disp_rdma *rdma = dev_get_drvdata(dev); struct mtk_plane_pending_state *pending = &state->pending; unsigned int addr = pending->addr; unsigned int pitch = pending->pitch & 0xffff; @@ -205,53 +228,34 @@ static void mtk_rdma_layer_config(struct mtk_ddp_comp *comp, unsigned int idx, unsigned int con; con = rdma_fmt_convert(rdma, fmt); - mtk_ddp_write_relaxed(cmdq_pkt, con, comp, DISP_RDMA_MEM_CON); + mtk_ddp_write_relaxed(cmdq_pkt, con, &rdma->cmdq_reg, rdma->regs, DISP_RDMA_MEM_CON); if (fmt == DRM_FORMAT_UYVY || fmt == DRM_FORMAT_YUYV) { - mtk_ddp_write_mask(cmdq_pkt, RDMA_MATRIX_ENABLE, comp, + mtk_ddp_write_mask(cmdq_pkt, RDMA_MATRIX_ENABLE, &rdma->cmdq_reg, rdma->regs, DISP_REG_RDMA_SIZE_CON_0, RDMA_MATRIX_ENABLE); mtk_ddp_write_mask(cmdq_pkt, RDMA_MATRIX_INT_MTX_BT601_to_RGB, - comp, DISP_REG_RDMA_SIZE_CON_0, + &rdma->cmdq_reg, rdma->regs, DISP_REG_RDMA_SIZE_CON_0, RDMA_MATRIX_INT_MTX_SEL); } else { - mtk_ddp_write_mask(cmdq_pkt, 0, comp, + mtk_ddp_write_mask(cmdq_pkt, 0, &rdma->cmdq_reg, rdma->regs, DISP_REG_RDMA_SIZE_CON_0, RDMA_MATRIX_ENABLE); } - mtk_ddp_write_relaxed(cmdq_pkt, addr, comp, DISP_RDMA_MEM_START_ADDR); - mtk_ddp_write_relaxed(cmdq_pkt, pitch, comp, DISP_RDMA_MEM_SRC_PITCH); - mtk_ddp_write(cmdq_pkt, RDMA_MEM_GMC, comp, + mtk_ddp_write_relaxed(cmdq_pkt, addr, &rdma->cmdq_reg, rdma->regs, + DISP_RDMA_MEM_START_ADDR); + mtk_ddp_write_relaxed(cmdq_pkt, pitch, &rdma->cmdq_reg, rdma->regs, + DISP_RDMA_MEM_SRC_PITCH); + mtk_ddp_write(cmdq_pkt, RDMA_MEM_GMC, &rdma->cmdq_reg, rdma->regs, DISP_RDMA_MEM_GMC_SETTING_0); - mtk_ddp_write_mask(cmdq_pkt, RDMA_MODE_MEMORY, comp, + mtk_ddp_write_mask(cmdq_pkt, RDMA_MODE_MEMORY, &rdma->cmdq_reg, rdma->regs, DISP_REG_RDMA_GLOBAL_CON, RDMA_MODE_MEMORY); } -static const struct mtk_ddp_comp_funcs mtk_disp_rdma_funcs = { - .config = mtk_rdma_config, - .start = mtk_rdma_start, - .stop = mtk_rdma_stop, - .enable_vblank = mtk_rdma_enable_vblank, - .disable_vblank = mtk_rdma_disable_vblank, - .layer_nr = mtk_rdma_layer_nr, - .layer_config = mtk_rdma_layer_config, -}; 
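
The mtk_rdma_config() hunk above sizes the output-valid FIFO threshold as width * height * vrefresh * 4 * 7 / 1000000, i.e. roughly seven microseconds' worth of scanout data at 4 bytes per pixel, and the new mediatek,rdma-fifo-size property lets a SoC override the FIFO size that threshold must fit into. A small standalone check of that arithmetic (the mode numbers are illustrative, not from the patch):

#include <stdint.h>
#include <stdio.h>

static uint64_t rdma_threshold(uint64_t width, uint64_t height, uint64_t vrefresh)
{
	/* 64-bit on purpose: the intermediate product exceeds 32 bits at 1080p60. */
	return width * height * vrefresh * 4 * 7 / 1000000;
}

int main(void)
{
	/* 1920 * 1080 * 60 * 4 = 497,664,000 bytes/s; seven microseconds of
	 * that is 3483 bytes, well inside the 5 KiB (5 * SZ_1K) FIFO that the
	 * mt8183 platform data declares further down. */
	printf("%llu bytes\n", (unsigned long long)rdma_threshold(1920, 1080, 60));
	return 0;
}
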
- static int mtk_disp_rdma_bind(struct device *dev, struct device *master, void *data) { - struct mtk_disp_rdma *priv = dev_get_drvdata(dev); - struct drm_device *drm_dev = data; - int ret; - - ret = mtk_ddp_comp_register(drm_dev, &priv->ddp_comp); - if (ret < 0) { - dev_err(dev, "Failed to register component %pOF: %d\n", - dev->of_node, ret); - return ret; - } - return 0; } @@ -259,10 +263,6 @@ static int mtk_disp_rdma_bind(struct device *dev, struct device *master, static void mtk_disp_rdma_unbind(struct device *dev, struct device *master, void *data) { - struct mtk_disp_rdma *priv = dev_get_drvdata(dev); - struct drm_device *drm_dev = data; - - mtk_ddp_comp_unregister(drm_dev, &priv->ddp_comp); } static const struct component_ops mtk_disp_rdma_component_ops = { @@ -274,7 +274,7 @@ static int mtk_disp_rdma_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; struct mtk_disp_rdma *priv; - int comp_id; + struct resource *res; int irq; int ret; @@ -286,25 +286,37 @@ static int mtk_disp_rdma_probe(struct platform_device *pdev) if (irq < 0) return irq; - comp_id = mtk_ddp_comp_get_id(dev->of_node, MTK_DISP_RDMA); - if (comp_id < 0) { - dev_err(dev, "Failed to identify by alias: %d\n", comp_id); - return comp_id; + priv->clk = devm_clk_get(dev, NULL); + if (IS_ERR(priv->clk)) { + dev_err(dev, "failed to get rdma clk\n"); + return PTR_ERR(priv->clk); } - ret = mtk_ddp_comp_init(dev, dev->of_node, &priv->ddp_comp, comp_id, - &mtk_disp_rdma_funcs); - if (ret) { - if (ret != -EPROBE_DEFER) - dev_err(dev, "Failed to initialize component: %d\n", - ret); - - return ret; + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + priv->regs = devm_ioremap_resource(dev, res); + if (IS_ERR(priv->regs)) { + dev_err(dev, "failed to ioremap rdma\n"); + return PTR_ERR(priv->regs); + } +#if IS_REACHABLE(CONFIG_MTK_CMDQ) + ret = cmdq_dev_get_client_reg(dev, &priv->cmdq_reg, 0); + if (ret) + dev_dbg(dev, "get mediatek,gce-client-reg fail!\n"); +#endif + + if (of_find_property(dev->of_node, "mediatek,rdma-fifo-size", &ret)) { + ret = of_property_read_u32(dev->of_node, + "mediatek,rdma-fifo-size", + &priv->fifo_size); + if (ret) { + dev_err(dev, "Failed to get rdma fifo size\n"); + return ret; + } } /* Disable and clear pending interrupts */ - writel(0x0, priv->ddp_comp.regs + DISP_REG_RDMA_INT_ENABLE); - writel(0x0, priv->ddp_comp.regs + DISP_REG_RDMA_INT_STATUS); + writel(0x0, priv->regs + DISP_REG_RDMA_INT_ENABLE); + writel(0x0, priv->regs + DISP_REG_RDMA_INT_STATUS); ret = devm_request_irq(dev, irq, mtk_disp_rdma_irq_handler, IRQF_TRIGGER_NONE, dev_name(dev), priv); @@ -339,11 +351,17 @@ static const struct mtk_disp_rdma_data mt8173_rdma_driver_data = { .fifo_size = SZ_8K, }; +static const struct mtk_disp_rdma_data mt8183_rdma_driver_data = { + .fifo_size = 5 * SZ_1K, +}; + static const struct of_device_id mtk_disp_rdma_driver_dt_match[] = { { .compatible = "mediatek,mt2701-disp-rdma", .data = &mt2701_rdma_driver_data}, { .compatible = "mediatek,mt8173-disp-rdma", .data = &mt8173_rdma_driver_data}, + { .compatible = "mediatek,mt8183-disp-rdma", + .data = &mt8183_rdma_driver_data}, {}, }; MODULE_DEVICE_TABLE(of, mtk_disp_rdma_driver_dt_match); diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c index 52f11a63a330..b05f900d9322 100644 --- a/drivers/gpu/drm/mediatek/mtk_dpi.c +++ b/drivers/gpu/drm/mediatek/mtk_dpi.c @@ -20,10 +20,12 @@ #include <drm/drm_atomic_helper.h> #include <drm/drm_bridge.h> +#include <drm/drm_bridge_connector.h> #include <drm/drm_crtc.h> 
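
The mtk_dpi.c changes below drop the driver's mtk_ddp_comp registration and rework mtk_dpi_bind() around the bridge-connector helper: the bridge chain is attached with DRM_BRIDGE_ATTACH_NO_CONNECTOR, and drm_bridge_connector_init() creates the single connector for the whole chain. A condensed sketch of that wiring, with error unwinding trimmed and mydrv_bind() as a hypothetical caller rather than the driver's actual function:

#include <linux/err.h>

#include <drm/drm_bridge.h>
#include <drm/drm_bridge_connector.h>
#include <drm/drm_connector.h>
#include <drm/drm_simple_kms_helper.h>

static int mydrv_bind(struct drm_device *drm, struct drm_encoder *encoder,
		      struct drm_bridge *bridge)
{
	struct drm_connector *connector;
	int ret;

	ret = drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS);
	if (ret)
		return ret;

	/* NO_CONNECTOR: no bridge in the chain may create its own connector. */
	ret = drm_bridge_attach(encoder, bridge, NULL,
				DRM_BRIDGE_ATTACH_NO_CONNECTOR);
	if (ret)
		return ret;

	/* One connector for the whole bridge chain, owned by the helper. */
	connector = drm_bridge_connector_init(drm, encoder);
	if (IS_ERR(connector))
		return PTR_ERR(connector);

	return drm_connector_attach_encoder(connector, encoder);
}
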
#include <drm/drm_of.h> #include <drm/drm_simple_kms_helper.h> +#include "mtk_disp_drv.h" #include "mtk_dpi_regs.h" #include "mtk_drm_ddp_comp.h" @@ -62,10 +64,10 @@ enum mtk_dpi_out_color_format { }; struct mtk_dpi { - struct mtk_ddp_comp ddp_comp; struct drm_encoder encoder; struct drm_bridge bridge; struct drm_bridge *next_bridge; + struct drm_connector *connector; void __iomem *regs; struct device *dev; struct clk *engine_clk; @@ -562,53 +564,50 @@ static const struct drm_bridge_funcs mtk_dpi_bridge_funcs = { .enable = mtk_dpi_bridge_enable, }; -static void mtk_dpi_start(struct mtk_ddp_comp *comp) +void mtk_dpi_start(struct device *dev) { - struct mtk_dpi *dpi = container_of(comp, struct mtk_dpi, ddp_comp); + struct mtk_dpi *dpi = dev_get_drvdata(dev); mtk_dpi_power_on(dpi); } -static void mtk_dpi_stop(struct mtk_ddp_comp *comp) +void mtk_dpi_stop(struct device *dev) { - struct mtk_dpi *dpi = container_of(comp, struct mtk_dpi, ddp_comp); + struct mtk_dpi *dpi = dev_get_drvdata(dev); mtk_dpi_power_off(dpi); } -static const struct mtk_ddp_comp_funcs mtk_dpi_funcs = { - .start = mtk_dpi_start, - .stop = mtk_dpi_stop, -}; - static int mtk_dpi_bind(struct device *dev, struct device *master, void *data) { struct mtk_dpi *dpi = dev_get_drvdata(dev); struct drm_device *drm_dev = data; int ret; - ret = mtk_ddp_comp_register(drm_dev, &dpi->ddp_comp); - if (ret < 0) { - dev_err(dev, "Failed to register component %pOF: %d\n", - dev->of_node, ret); - return ret; - } - ret = drm_simple_encoder_init(drm_dev, &dpi->encoder, DRM_MODE_ENCODER_TMDS); if (ret) { dev_err(dev, "Failed to initialize decoder: %d\n", ret); - goto err_unregister; + return ret; } - dpi->encoder.possible_crtcs = mtk_drm_find_possible_crtc_by_comp(drm_dev, dpi->ddp_comp); + dpi->encoder.possible_crtcs = mtk_drm_find_possible_crtc_by_comp(drm_dev, dpi->dev); - ret = drm_bridge_attach(&dpi->encoder, &dpi->bridge, NULL, 0); + ret = drm_bridge_attach(&dpi->encoder, &dpi->bridge, NULL, + DRM_BRIDGE_ATTACH_NO_CONNECTOR); if (ret) { dev_err(dev, "Failed to attach bridge: %d\n", ret); goto err_cleanup; } + dpi->connector = drm_bridge_connector_init(drm_dev, &dpi->encoder); + if (IS_ERR(dpi->connector)) { + dev_err(dev, "Unable to create bridge connector\n"); + ret = PTR_ERR(dpi->connector); + goto err_cleanup; + } + drm_connector_attach_encoder(dpi->connector, &dpi->encoder); + dpi->bit_num = MTK_DPI_OUT_BIT_NUM_8BITS; dpi->channel_swap = MTK_DPI_OUT_CHANNEL_SWAP_RGB; dpi->yc_map = MTK_DPI_OUT_YC_MAP_RGB; @@ -618,8 +617,6 @@ static int mtk_dpi_bind(struct device *dev, struct device *master, void *data) err_cleanup: drm_encoder_cleanup(&dpi->encoder); -err_unregister: - mtk_ddp_comp_unregister(drm_dev, &dpi->ddp_comp); return ret; } @@ -627,10 +624,8 @@ static void mtk_dpi_unbind(struct device *dev, struct device *master, void *data) { struct mtk_dpi *dpi = dev_get_drvdata(dev); - struct drm_device *drm_dev = data; drm_encoder_cleanup(&dpi->encoder); - mtk_ddp_comp_unregister(drm_dev, &dpi->ddp_comp); } static const struct component_ops mtk_dpi_component_ops = { @@ -691,7 +686,6 @@ static int mtk_dpi_probe(struct platform_device *pdev) struct device *dev = &pdev->dev; struct mtk_dpi *dpi; struct resource *mem; - int comp_id; int ret; dpi = devm_kzalloc(dev, sizeof(*dpi), GFP_KERNEL); @@ -769,19 +763,6 @@ static int mtk_dpi_probe(struct platform_device *pdev) dev_info(dev, "Found bridge node: %pOF\n", dpi->next_bridge->of_node); - comp_id = mtk_ddp_comp_get_id(dev->of_node, MTK_DPI); - if (comp_id < 0) { - dev_err(dev, "Failed to identify 
by alias: %d\n", comp_id); - return comp_id; - } - - ret = mtk_ddp_comp_init(dev, dev->of_node, &dpi->ddp_comp, comp_id, - &mtk_dpi_funcs); - if (ret) { - dev_err(dev, "Failed to initialize component: %d\n", ret); - return ret; - } - platform_set_drvdata(pdev, dpi); dpi->bridge.funcs = &mtk_dpi_bridge_funcs; diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c index 584dc26affc1..8b0de90156c6 100644 --- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c +++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c @@ -7,6 +7,7 @@ #include <linux/pm_runtime.h> #include <linux/soc/mediatek/mtk-cmdq.h> #include <linux/soc/mediatek/mtk-mmsys.h> +#include <linux/soc/mediatek/mtk-mutex.h> #include <asm/barrier.h> #include <soc/mediatek/smi.h> @@ -19,7 +20,6 @@ #include "mtk_drm_drv.h" #include "mtk_drm_crtc.h" -#include "mtk_drm_ddp.h" #include "mtk_drm_ddp_comp.h" #include "mtk_drm_gem.h" #include "mtk_drm_plane.h" @@ -55,7 +55,7 @@ struct mtk_drm_crtc { #endif struct device *mmsys_dev; - struct mtk_disp_mutex *mutex; + struct mtk_mutex *mutex; unsigned int ddp_comp_nr; struct mtk_ddp_comp **ddp_comp; @@ -107,7 +107,7 @@ static void mtk_drm_crtc_destroy(struct drm_crtc *crtc) { struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc); - mtk_disp_mutex_put(mtk_crtc->mutex); + mtk_mutex_put(mtk_crtc->mutex); drm_crtc_cleanup(crtc); } @@ -169,31 +169,13 @@ static void mtk_drm_crtc_mode_set_nofb(struct drm_crtc *crtc) state->pending_config = true; } -static int mtk_drm_crtc_enable_vblank(struct drm_crtc *crtc) -{ - struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc); - struct mtk_ddp_comp *comp = mtk_crtc->ddp_comp[0]; - - mtk_ddp_comp_enable_vblank(comp, &mtk_crtc->base); - - return 0; -} - -static void mtk_drm_crtc_disable_vblank(struct drm_crtc *crtc) -{ - struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc); - struct mtk_ddp_comp *comp = mtk_crtc->ddp_comp[0]; - - mtk_ddp_comp_disable_vblank(comp); -} - static int mtk_crtc_ddp_clk_enable(struct mtk_drm_crtc *mtk_crtc) { int ret; int i; for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) { - ret = clk_prepare_enable(mtk_crtc->ddp_comp[i]->clk); + ret = mtk_ddp_comp_clk_enable(mtk_crtc->ddp_comp[i]); if (ret) { DRM_ERROR("Failed to enable clock %d: %d\n", i, ret); goto err; @@ -203,7 +185,7 @@ static int mtk_crtc_ddp_clk_enable(struct mtk_drm_crtc *mtk_crtc) return 0; err: while (--i >= 0) - clk_disable_unprepare(mtk_crtc->ddp_comp[i]->clk); + mtk_ddp_comp_clk_disable(mtk_crtc->ddp_comp[i]); return ret; } @@ -212,7 +194,7 @@ static void mtk_crtc_ddp_clk_disable(struct mtk_drm_crtc *mtk_crtc) int i; for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) - clk_disable_unprepare(mtk_crtc->ddp_comp[i]->clk); + mtk_ddp_comp_clk_disable(mtk_crtc->ddp_comp[i]); } static @@ -283,7 +265,7 @@ static int mtk_crtc_ddp_hw_init(struct mtk_drm_crtc *mtk_crtc) return ret; } - ret = mtk_disp_mutex_prepare(mtk_crtc->mutex); + ret = mtk_mutex_prepare(mtk_crtc->mutex); if (ret < 0) { DRM_ERROR("Failed to enable mutex clock: %d\n", ret); goto err_pm_runtime_put; @@ -299,11 +281,11 @@ static int mtk_crtc_ddp_hw_init(struct mtk_drm_crtc *mtk_crtc) mtk_mmsys_ddp_connect(mtk_crtc->mmsys_dev, mtk_crtc->ddp_comp[i]->id, mtk_crtc->ddp_comp[i + 1]->id); - mtk_disp_mutex_add_comp(mtk_crtc->mutex, + mtk_mutex_add_comp(mtk_crtc->mutex, mtk_crtc->ddp_comp[i]->id); } - mtk_disp_mutex_add_comp(mtk_crtc->mutex, mtk_crtc->ddp_comp[i]->id); - mtk_disp_mutex_enable(mtk_crtc->mutex); + mtk_mutex_add_comp(mtk_crtc->mutex, mtk_crtc->ddp_comp[i]->id); + mtk_mutex_enable(mtk_crtc->mutex); for (i = 0; i 
< mtk_crtc->ddp_comp_nr; i++) { struct mtk_ddp_comp *comp = mtk_crtc->ddp_comp[i]; @@ -332,7 +314,7 @@ static int mtk_crtc_ddp_hw_init(struct mtk_drm_crtc *mtk_crtc) return 0; err_mutex_unprepare: - mtk_disp_mutex_unprepare(mtk_crtc->mutex); + mtk_mutex_unprepare(mtk_crtc->mutex); err_pm_runtime_put: pm_runtime_put(crtc->dev->dev); return ret; @@ -351,19 +333,19 @@ static void mtk_crtc_ddp_hw_fini(struct mtk_drm_crtc *mtk_crtc) } for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) - mtk_disp_mutex_remove_comp(mtk_crtc->mutex, + mtk_mutex_remove_comp(mtk_crtc->mutex, mtk_crtc->ddp_comp[i]->id); - mtk_disp_mutex_disable(mtk_crtc->mutex); + mtk_mutex_disable(mtk_crtc->mutex); for (i = 0; i < mtk_crtc->ddp_comp_nr - 1; i++) { mtk_mmsys_ddp_disconnect(mtk_crtc->mmsys_dev, mtk_crtc->ddp_comp[i]->id, mtk_crtc->ddp_comp[i + 1]->id); - mtk_disp_mutex_remove_comp(mtk_crtc->mutex, + mtk_mutex_remove_comp(mtk_crtc->mutex, mtk_crtc->ddp_comp[i]->id); } - mtk_disp_mutex_remove_comp(mtk_crtc->mutex, mtk_crtc->ddp_comp[i]->id); + mtk_mutex_remove_comp(mtk_crtc->mutex, mtk_crtc->ddp_comp[i]->id); mtk_crtc_ddp_clk_disable(mtk_crtc); - mtk_disp_mutex_unprepare(mtk_crtc->mutex); + mtk_mutex_unprepare(mtk_crtc->mutex); pm_runtime_put(drm->dev); @@ -475,9 +457,9 @@ static void mtk_drm_crtc_hw_config(struct mtk_drm_crtc *mtk_crtc) mtk_crtc->pending_async_planes = true; if (priv->data->shadow_register) { - mtk_disp_mutex_acquire(mtk_crtc->mutex); + mtk_mutex_acquire(mtk_crtc->mutex); mtk_crtc_ddp_config(crtc, NULL); - mtk_disp_mutex_release(mtk_crtc->mutex); + mtk_mutex_release(mtk_crtc->mutex); } #if IS_REACHABLE(CONFIG_MTK_CMDQ) if (mtk_crtc->cmdq_client) { @@ -493,6 +475,40 @@ static void mtk_drm_crtc_hw_config(struct mtk_drm_crtc *mtk_crtc) mutex_unlock(&mtk_crtc->hw_lock); } +static void mtk_crtc_ddp_irq(void *data) +{ + struct drm_crtc *crtc = data; + struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc); + struct mtk_drm_private *priv = crtc->dev->dev_private; + +#if IS_REACHABLE(CONFIG_MTK_CMDQ) + if (!priv->data->shadow_register && !mtk_crtc->cmdq_client) +#else + if (!priv->data->shadow_register) +#endif + mtk_crtc_ddp_config(crtc, NULL); + + mtk_drm_finish_page_flip(mtk_crtc); +} + +static int mtk_drm_crtc_enable_vblank(struct drm_crtc *crtc) +{ + struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc); + struct mtk_ddp_comp *comp = mtk_crtc->ddp_comp[0]; + + mtk_ddp_comp_enable_vblank(comp, mtk_crtc_ddp_irq, &mtk_crtc->base); + + return 0; +} + +static void mtk_drm_crtc_disable_vblank(struct drm_crtc *crtc) +{ + struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc); + struct mtk_ddp_comp *comp = mtk_crtc->ddp_comp[0]; + + mtk_ddp_comp_disable_vblank(comp); +} + int mtk_drm_crtc_plane_check(struct drm_crtc *crtc, struct drm_plane *plane, struct mtk_plane_state *state) { @@ -661,21 +677,6 @@ err_cleanup_crtc: return ret; } -void mtk_crtc_ddp_irq(struct drm_crtc *crtc, struct mtk_ddp_comp *comp) -{ - struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc); - struct mtk_drm_private *priv = crtc->dev->dev_private; - -#if IS_REACHABLE(CONFIG_MTK_CMDQ) - if (!priv->data->shadow_register && !mtk_crtc->cmdq_client) -#else - if (!priv->data->shadow_register) -#endif - mtk_crtc_ddp_config(crtc, NULL); - - mtk_drm_finish_page_flip(mtk_crtc); -} - static int mtk_drm_crtc_num_comp_planes(struct mtk_drm_crtc *mtk_crtc, int comp_idx) { @@ -771,7 +772,7 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev, if (!mtk_crtc->ddp_comp) return -ENOMEM; - mtk_crtc->mutex = mtk_disp_mutex_get(priv->mutex_dev, pipe); + mtk_crtc->mutex = 
mtk_mutex_get(priv->mutex_dev); if (IS_ERR(mtk_crtc->mutex)) { ret = PTR_ERR(mtk_crtc->mutex); dev_err(dev, "Failed to get mutex: %d\n", ret); @@ -784,7 +785,7 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev, struct device_node *node; node = priv->comp_node[comp_id]; - comp = priv->ddp_comp[comp_id]; + comp = &priv->ddp_comp[comp_id]; if (!comp) { dev_err(dev, "Component %pOF not initialized\n", node); ret = -ENODEV; diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.h b/drivers/gpu/drm/mediatek/mtk_drm_crtc.h index a2b4677a451c..45cfd0a032de 100644 --- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.h +++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.h @@ -15,7 +15,6 @@ #define MTK_MIN_BPC 3 void mtk_drm_crtc_commit(struct drm_crtc *crtc); -void mtk_crtc_ddp_irq(struct drm_crtc *crtc, struct mtk_ddp_comp *comp); int mtk_drm_crtc_create(struct drm_device *drm_dev, const enum mtk_ddp_comp_id *path, unsigned int path_len); diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp.h b/drivers/gpu/drm/mediatek/mtk_drm_ddp.h deleted file mode 100644 index 6b691a57be4a..000000000000 --- a/drivers/gpu/drm/mediatek/mtk_drm_ddp.h +++ /dev/null @@ -1,28 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-only */ -/* - * Copyright (c) 2015 MediaTek Inc. - */ - -#ifndef MTK_DRM_DDP_H -#define MTK_DRM_DDP_H - -#include "mtk_drm_ddp_comp.h" - -struct regmap; -struct device; -struct mtk_disp_mutex; - -struct mtk_disp_mutex *mtk_disp_mutex_get(struct device *dev, unsigned int id); -int mtk_disp_mutex_prepare(struct mtk_disp_mutex *mutex); -void mtk_disp_mutex_add_comp(struct mtk_disp_mutex *mutex, - enum mtk_ddp_comp_id id); -void mtk_disp_mutex_enable(struct mtk_disp_mutex *mutex); -void mtk_disp_mutex_disable(struct mtk_disp_mutex *mutex); -void mtk_disp_mutex_remove_comp(struct mtk_disp_mutex *mutex, - enum mtk_ddp_comp_id id); -void mtk_disp_mutex_unprepare(struct mtk_disp_mutex *mutex); -void mtk_disp_mutex_put(struct mtk_disp_mutex *mutex); -void mtk_disp_mutex_acquire(struct mtk_disp_mutex *mutex); -void mtk_disp_mutex_release(struct mtk_disp_mutex *mutex); - -#endif /* MTK_DRM_DDP_H */ diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c index 3064eac1a750..75bc00e17fc4 100644 --- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c +++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c @@ -9,12 +9,12 @@ #include <linux/clk.h> #include <linux/of.h> #include <linux/of_address.h> -#include <linux/of_irq.h> #include <linux/of_platform.h> #include <linux/platform_device.h> #include <linux/soc/mediatek/mtk-cmdq.h> #include <drm/drm_print.h> +#include "mtk_disp_drv.h" #include "mtk_drm_drv.h" #include "mtk_drm_plane.h" #include "mtk_drm_ddp_comp.h" @@ -35,31 +35,13 @@ #define DISP_AAL_EN 0x0000 #define DISP_AAL_SIZE 0x0030 -#define DISP_CCORR_EN 0x0000 -#define CCORR_EN BIT(0) -#define DISP_CCORR_CFG 0x0020 -#define CCORR_RELAY_MODE BIT(0) -#define CCORR_ENGINE_EN BIT(1) -#define CCORR_GAMMA_OFF BIT(2) -#define CCORR_WGAMUT_SRC_CLIP BIT(3) -#define DISP_CCORR_SIZE 0x0030 -#define DISP_CCORR_COEF_0 0x0080 -#define DISP_CCORR_COEF_1 0x0084 -#define DISP_CCORR_COEF_2 0x0088 -#define DISP_CCORR_COEF_3 0x008C -#define DISP_CCORR_COEF_4 0x0090 - #define DISP_DITHER_EN 0x0000 #define DITHER_EN BIT(0) #define DISP_DITHER_CFG 0x0020 #define DITHER_RELAY_MODE BIT(0) +#define DITHER_ENGINE_EN BIT(1) #define DISP_DITHER_SIZE 0x0030 -#define DISP_GAMMA_EN 0x0000 -#define DISP_GAMMA_CFG 0x0020 -#define DISP_GAMMA_SIZE 0x0030 -#define DISP_GAMMA_LUT 0x0700 - #define LUT_10BIT_MASK 0x03ff #define 
OD_RELAYMODE BIT(0) @@ -68,9 +50,6 @@ #define AAL_EN BIT(0) -#define GAMMA_EN BIT(0) -#define GAMMA_LUT_EN BIT(1) - #define DISP_DITHERING BIT(2) #define DITHER_LSB_ERR_SHIFT_R(x) (((x) & 0x7) << 28) #define DITHER_OVFLW_BIT_R(x) (((x) & 0x7) << 24) @@ -86,262 +65,233 @@ #define DITHER_ADD_LSHIFT_G(x) (((x) & 0x7) << 4) #define DITHER_ADD_RSHIFT_G(x) (((x) & 0x7) << 0) +struct mtk_ddp_comp_dev { + struct clk *clk; + void __iomem *regs; + struct cmdq_client_reg cmdq_reg; +}; + void mtk_ddp_write(struct cmdq_pkt *cmdq_pkt, unsigned int value, - struct mtk_ddp_comp *comp, unsigned int offset) + struct cmdq_client_reg *cmdq_reg, void __iomem *regs, + unsigned int offset) { #if IS_REACHABLE(CONFIG_MTK_CMDQ) if (cmdq_pkt) - cmdq_pkt_write(cmdq_pkt, comp->subsys, - comp->regs_pa + offset, value); + cmdq_pkt_write(cmdq_pkt, cmdq_reg->subsys, + cmdq_reg->offset + offset, value); else #endif - writel(value, comp->regs + offset); + writel(value, regs + offset); } void mtk_ddp_write_relaxed(struct cmdq_pkt *cmdq_pkt, unsigned int value, - struct mtk_ddp_comp *comp, + struct cmdq_client_reg *cmdq_reg, void __iomem *regs, unsigned int offset) { #if IS_REACHABLE(CONFIG_MTK_CMDQ) if (cmdq_pkt) - cmdq_pkt_write(cmdq_pkt, comp->subsys, - comp->regs_pa + offset, value); + cmdq_pkt_write(cmdq_pkt, cmdq_reg->subsys, + cmdq_reg->offset + offset, value); else #endif - writel_relaxed(value, comp->regs + offset); + writel_relaxed(value, regs + offset); } -void mtk_ddp_write_mask(struct cmdq_pkt *cmdq_pkt, - unsigned int value, - struct mtk_ddp_comp *comp, - unsigned int offset, - unsigned int mask) +void mtk_ddp_write_mask(struct cmdq_pkt *cmdq_pkt, unsigned int value, + struct cmdq_client_reg *cmdq_reg, void __iomem *regs, + unsigned int offset, unsigned int mask) { #if IS_REACHABLE(CONFIG_MTK_CMDQ) if (cmdq_pkt) { - cmdq_pkt_write_mask(cmdq_pkt, comp->subsys, - comp->regs_pa + offset, value, mask); + cmdq_pkt_write_mask(cmdq_pkt, cmdq_reg->subsys, + cmdq_reg->offset + offset, value, mask); } else { #endif - u32 tmp = readl(comp->regs + offset); + u32 tmp = readl(regs + offset); tmp = (tmp & ~mask) | (value & mask); - writel(tmp, comp->regs + offset); + writel(tmp, regs + offset); #if IS_REACHABLE(CONFIG_MTK_CMDQ) } #endif } -void mtk_dither_set(struct mtk_ddp_comp *comp, unsigned int bpc, - unsigned int CFG, struct cmdq_pkt *cmdq_pkt) +static int mtk_ddp_clk_enable(struct device *dev) +{ + struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); + + return clk_prepare_enable(priv->clk); +} + +static void mtk_ddp_clk_disable(struct device *dev) +{ + struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); + + clk_disable_unprepare(priv->clk); +} + +void mtk_dither_set_common(void __iomem *regs, struct cmdq_client_reg *cmdq_reg, + unsigned int bpc, unsigned int cfg, + unsigned int dither_en, struct cmdq_pkt *cmdq_pkt) { /* If bpc equal to 0, the dithering function didn't be enabled */ if (bpc == 0) return; if (bpc >= MTK_MIN_BPC) { - mtk_ddp_write(cmdq_pkt, 0, comp, DISP_DITHER_5); - mtk_ddp_write(cmdq_pkt, 0, comp, DISP_DITHER_7); + mtk_ddp_write(cmdq_pkt, 0, cmdq_reg, regs, DISP_DITHER_5); + mtk_ddp_write(cmdq_pkt, 0, cmdq_reg, regs, DISP_DITHER_7); mtk_ddp_write(cmdq_pkt, DITHER_LSB_ERR_SHIFT_R(MTK_MAX_BPC - bpc) | DITHER_ADD_LSHIFT_R(MTK_MAX_BPC - bpc) | DITHER_NEW_BIT_MODE, - comp, DISP_DITHER_15); + cmdq_reg, regs, DISP_DITHER_15); mtk_ddp_write(cmdq_pkt, DITHER_LSB_ERR_SHIFT_B(MTK_MAX_BPC - bpc) | DITHER_ADD_LSHIFT_B(MTK_MAX_BPC - bpc) | DITHER_LSB_ERR_SHIFT_G(MTK_MAX_BPC - bpc) | DITHER_ADD_LSHIFT_G(MTK_MAX_BPC 
- bpc), - comp, DISP_DITHER_16); - mtk_ddp_write(cmdq_pkt, DISP_DITHERING, comp, CFG); + cmdq_reg, regs, DISP_DITHER_16); + mtk_ddp_write(cmdq_pkt, dither_en, cmdq_reg, regs, cfg); } } -static void mtk_od_config(struct mtk_ddp_comp *comp, unsigned int w, - unsigned int h, unsigned int vrefresh, - unsigned int bpc, struct cmdq_pkt *cmdq_pkt) +static void mtk_dither_set(struct device *dev, unsigned int bpc, + unsigned int cfg, struct cmdq_pkt *cmdq_pkt) { - mtk_ddp_write(cmdq_pkt, w << 16 | h, comp, DISP_OD_SIZE); - mtk_ddp_write(cmdq_pkt, OD_RELAYMODE, comp, DISP_OD_CFG); - mtk_dither_set(comp, bpc, DISP_OD_CFG, cmdq_pkt); -} + struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); -static void mtk_od_start(struct mtk_ddp_comp *comp) -{ - writel(1, comp->regs + DISP_OD_EN); + mtk_dither_set_common(priv->regs, &priv->cmdq_reg, bpc, cfg, + DISP_DITHERING, cmdq_pkt); } -static void mtk_ufoe_start(struct mtk_ddp_comp *comp) +static void mtk_od_config(struct device *dev, unsigned int w, + unsigned int h, unsigned int vrefresh, + unsigned int bpc, struct cmdq_pkt *cmdq_pkt) { - writel(UFO_BYPASS, comp->regs + DISP_REG_UFO_START); -} + struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); -static void mtk_aal_config(struct mtk_ddp_comp *comp, unsigned int w, - unsigned int h, unsigned int vrefresh, - unsigned int bpc, struct cmdq_pkt *cmdq_pkt) -{ - mtk_ddp_write(cmdq_pkt, h << 16 | w, comp, DISP_AAL_SIZE); + mtk_ddp_write(cmdq_pkt, w << 16 | h, &priv->cmdq_reg, priv->regs, DISP_OD_SIZE); + mtk_ddp_write(cmdq_pkt, OD_RELAYMODE, &priv->cmdq_reg, priv->regs, DISP_OD_CFG); + mtk_dither_set(dev, bpc, DISP_OD_CFG, cmdq_pkt); } -static void mtk_aal_start(struct mtk_ddp_comp *comp) +static void mtk_od_start(struct device *dev) { - writel(AAL_EN, comp->regs + DISP_AAL_EN); -} + struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); -static void mtk_aal_stop(struct mtk_ddp_comp *comp) -{ - writel_relaxed(0x0, comp->regs + DISP_AAL_EN); + writel(1, priv->regs + DISP_OD_EN); } -static void mtk_ccorr_config(struct mtk_ddp_comp *comp, unsigned int w, - unsigned int h, unsigned int vrefresh, - unsigned int bpc, struct cmdq_pkt *cmdq_pkt) +static void mtk_ufoe_start(struct device *dev) { - mtk_ddp_write(cmdq_pkt, h << 16 | w, comp, DISP_CCORR_SIZE); - mtk_ddp_write(cmdq_pkt, CCORR_ENGINE_EN, comp, DISP_CCORR_CFG); -} + struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); -static void mtk_ccorr_start(struct mtk_ddp_comp *comp) -{ - writel(CCORR_EN, comp->regs + DISP_CCORR_EN); + writel(UFO_BYPASS, priv->regs + DISP_REG_UFO_START); } -static void mtk_ccorr_stop(struct mtk_ddp_comp *comp) +static void mtk_aal_config(struct device *dev, unsigned int w, + unsigned int h, unsigned int vrefresh, + unsigned int bpc, struct cmdq_pkt *cmdq_pkt) { - writel_relaxed(0x0, comp->regs + DISP_CCORR_EN); + struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); + + mtk_ddp_write(cmdq_pkt, w << 16 | h, &priv->cmdq_reg, priv->regs, DISP_AAL_SIZE); } -/* Converts a DRM S31.32 value to the HW S1.10 format. */ -static u16 mtk_ctm_s31_32_to_s1_10(u64 in) +static void mtk_aal_gamma_set(struct device *dev, struct drm_crtc_state *state) { - u16 r; + struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); - /* Sign bit. */ - r = in & BIT_ULL(63) ? BIT(11) : 0; + mtk_gamma_set_common(priv->regs, state); +} - if ((in & GENMASK_ULL(62, 33)) > 0) { - /* identity value 0x100000000 -> 0x400, */ - /* if bigger this, set it to max 0x7ff. */ - r |= GENMASK(10, 0); - } else { - /* take the 11 most important bits. 
*/ - r |= (in >> 22) & GENMASK(10, 0); - } +static void mtk_aal_start(struct device *dev) +{ + struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); - return r; + writel(AAL_EN, priv->regs + DISP_AAL_EN); } -static void mtk_ccorr_ctm_set(struct mtk_ddp_comp *comp, - struct drm_crtc_state *state) +static void mtk_aal_stop(struct device *dev) { - struct drm_property_blob *blob = state->ctm; - struct drm_color_ctm *ctm; - const u64 *input; - uint16_t coeffs[9] = { 0 }; - int i; - struct cmdq_pkt *cmdq_pkt = NULL; - - if (!blob) - return; + struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); - ctm = (struct drm_color_ctm *)blob->data; - input = ctm->matrix; - - for (i = 0; i < ARRAY_SIZE(coeffs); i++) - coeffs[i] = mtk_ctm_s31_32_to_s1_10(input[i]); - - mtk_ddp_write(cmdq_pkt, coeffs[0] << 16 | coeffs[1], - comp, DISP_CCORR_COEF_0); - mtk_ddp_write(cmdq_pkt, coeffs[2] << 16 | coeffs[3], - comp, DISP_CCORR_COEF_1); - mtk_ddp_write(cmdq_pkt, coeffs[4] << 16 | coeffs[5], - comp, DISP_CCORR_COEF_2); - mtk_ddp_write(cmdq_pkt, coeffs[6] << 16 | coeffs[7], - comp, DISP_CCORR_COEF_3); - mtk_ddp_write(cmdq_pkt, coeffs[8] << 16, - comp, DISP_CCORR_COEF_4); + writel_relaxed(0x0, priv->regs + DISP_AAL_EN); } -static void mtk_dither_config(struct mtk_ddp_comp *comp, unsigned int w, +static void mtk_dither_config(struct device *dev, unsigned int w, unsigned int h, unsigned int vrefresh, unsigned int bpc, struct cmdq_pkt *cmdq_pkt) { - mtk_ddp_write(cmdq_pkt, h << 16 | w, comp, DISP_DITHER_SIZE); - mtk_ddp_write(cmdq_pkt, DITHER_RELAY_MODE, comp, DISP_DITHER_CFG); -} - -static void mtk_dither_start(struct mtk_ddp_comp *comp) -{ - writel(DITHER_EN, comp->regs + DISP_DITHER_EN); -} + struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); -static void mtk_dither_stop(struct mtk_ddp_comp *comp) -{ - writel_relaxed(0x0, comp->regs + DISP_DITHER_EN); + mtk_ddp_write(cmdq_pkt, h << 16 | w, &priv->cmdq_reg, priv->regs, DISP_DITHER_SIZE); + mtk_ddp_write(cmdq_pkt, DITHER_RELAY_MODE, &priv->cmdq_reg, priv->regs, DISP_DITHER_CFG); + mtk_dither_set_common(priv->regs, &priv->cmdq_reg, bpc, DISP_DITHER_CFG, + DITHER_ENGINE_EN, cmdq_pkt); } -static void mtk_gamma_config(struct mtk_ddp_comp *comp, unsigned int w, - unsigned int h, unsigned int vrefresh, - unsigned int bpc, struct cmdq_pkt *cmdq_pkt) +static void mtk_dither_start(struct device *dev) { - mtk_ddp_write(cmdq_pkt, h << 16 | w, comp, DISP_GAMMA_SIZE); - mtk_dither_set(comp, bpc, DISP_GAMMA_CFG, cmdq_pkt); -} + struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); -static void mtk_gamma_start(struct mtk_ddp_comp *comp) -{ - writel(GAMMA_EN, comp->regs + DISP_GAMMA_EN); + writel(DITHER_EN, priv->regs + DISP_DITHER_EN); } -static void mtk_gamma_stop(struct mtk_ddp_comp *comp) +static void mtk_dither_stop(struct device *dev) { - writel_relaxed(0x0, comp->regs + DISP_GAMMA_EN); -} + struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); -static void mtk_gamma_set(struct mtk_ddp_comp *comp, - struct drm_crtc_state *state) -{ - unsigned int i, reg; - struct drm_color_lut *lut; - void __iomem *lut_base; - u32 word; - - if (state->gamma_lut) { - reg = readl(comp->regs + DISP_GAMMA_CFG); - reg = reg | GAMMA_LUT_EN; - writel(reg, comp->regs + DISP_GAMMA_CFG); - lut_base = comp->regs + DISP_GAMMA_LUT; - lut = (struct drm_color_lut *)state->gamma_lut->data; - for (i = 0; i < MTK_LUT_SIZE; i++) { - word = (((lut[i].red >> 6) & LUT_10BIT_MASK) << 20) + - (((lut[i].green >> 6) & LUT_10BIT_MASK) << 10) + - ((lut[i].blue >> 6) & LUT_10BIT_MASK); - writel(word, (lut_base + i * 4)); 
- } - } + writel_relaxed(0x0, priv->regs + DISP_DITHER_EN); } static const struct mtk_ddp_comp_funcs ddp_aal = { - .gamma_set = mtk_gamma_set, + .clk_enable = mtk_ddp_clk_enable, + .clk_disable = mtk_ddp_clk_disable, + .gamma_set = mtk_aal_gamma_set, .config = mtk_aal_config, .start = mtk_aal_start, .stop = mtk_aal_stop, }; static const struct mtk_ddp_comp_funcs ddp_ccorr = { + .clk_enable = mtk_ccorr_clk_enable, + .clk_disable = mtk_ccorr_clk_disable, .config = mtk_ccorr_config, .start = mtk_ccorr_start, .stop = mtk_ccorr_stop, .ctm_set = mtk_ccorr_ctm_set, }; +static const struct mtk_ddp_comp_funcs ddp_color = { + .clk_enable = mtk_color_clk_enable, + .clk_disable = mtk_color_clk_disable, + .config = mtk_color_config, + .start = mtk_color_start, +}; + static const struct mtk_ddp_comp_funcs ddp_dither = { + .clk_enable = mtk_ddp_clk_enable, + .clk_disable = mtk_ddp_clk_disable, .config = mtk_dither_config, .start = mtk_dither_start, .stop = mtk_dither_stop, }; +static const struct mtk_ddp_comp_funcs ddp_dpi = { + .start = mtk_dpi_start, + .stop = mtk_dpi_stop, +}; + +static const struct mtk_ddp_comp_funcs ddp_dsi = { + .start = mtk_dsi_ddp_start, + .stop = mtk_dsi_ddp_stop, +}; + static const struct mtk_ddp_comp_funcs ddp_gamma = { + .clk_enable = mtk_gamma_clk_enable, + .clk_disable = mtk_gamma_clk_disable, .gamma_set = mtk_gamma_set, .config = mtk_gamma_config, .start = mtk_gamma_start, @@ -349,11 +299,43 @@ static const struct mtk_ddp_comp_funcs ddp_gamma = { }; static const struct mtk_ddp_comp_funcs ddp_od = { + .clk_enable = mtk_ddp_clk_enable, + .clk_disable = mtk_ddp_clk_disable, .config = mtk_od_config, .start = mtk_od_start, }; +static const struct mtk_ddp_comp_funcs ddp_ovl = { + .clk_enable = mtk_ovl_clk_enable, + .clk_disable = mtk_ovl_clk_disable, + .config = mtk_ovl_config, + .start = mtk_ovl_start, + .stop = mtk_ovl_stop, + .enable_vblank = mtk_ovl_enable_vblank, + .disable_vblank = mtk_ovl_disable_vblank, + .supported_rotations = mtk_ovl_supported_rotations, + .layer_nr = mtk_ovl_layer_nr, + .layer_check = mtk_ovl_layer_check, + .layer_config = mtk_ovl_layer_config, + .bgclr_in_on = mtk_ovl_bgclr_in_on, + .bgclr_in_off = mtk_ovl_bgclr_in_off, +}; + +static const struct mtk_ddp_comp_funcs ddp_rdma = { + .clk_enable = mtk_rdma_clk_enable, + .clk_disable = mtk_rdma_clk_disable, + .config = mtk_rdma_config, + .start = mtk_rdma_start, + .stop = mtk_rdma_stop, + .enable_vblank = mtk_rdma_enable_vblank, + .disable_vblank = mtk_rdma_disable_vblank, + .layer_nr = mtk_rdma_layer_nr, + .layer_config = mtk_rdma_layer_config, +}; + static const struct mtk_ddp_comp_funcs ddp_ufoe = { + .clk_enable = mtk_ddp_clk_enable, + .clk_disable = mtk_ddp_clk_disable, .start = mtk_ufoe_start, }; @@ -387,36 +369,37 @@ static const struct mtk_ddp_comp_match mtk_ddp_matches[DDP_COMPONENT_ID_MAX] = { [DDP_COMPONENT_AAL1] = { MTK_DISP_AAL, 1, &ddp_aal }, [DDP_COMPONENT_BLS] = { MTK_DISP_BLS, 0, NULL }, [DDP_COMPONENT_CCORR] = { MTK_DISP_CCORR, 0, &ddp_ccorr }, - [DDP_COMPONENT_COLOR0] = { MTK_DISP_COLOR, 0, NULL }, - [DDP_COMPONENT_COLOR1] = { MTK_DISP_COLOR, 1, NULL }, + [DDP_COMPONENT_COLOR0] = { MTK_DISP_COLOR, 0, &ddp_color }, + [DDP_COMPONENT_COLOR1] = { MTK_DISP_COLOR, 1, &ddp_color }, [DDP_COMPONENT_DITHER] = { MTK_DISP_DITHER, 0, &ddp_dither }, - [DDP_COMPONENT_DPI0] = { MTK_DPI, 0, NULL }, - [DDP_COMPONENT_DPI1] = { MTK_DPI, 1, NULL }, - [DDP_COMPONENT_DSI0] = { MTK_DSI, 0, NULL }, - [DDP_COMPONENT_DSI1] = { MTK_DSI, 1, NULL }, - [DDP_COMPONENT_DSI2] = { MTK_DSI, 2, NULL }, - 
[DDP_COMPONENT_DSI3] = { MTK_DSI, 3, NULL }, + [DDP_COMPONENT_DPI0] = { MTK_DPI, 0, &ddp_dpi }, + [DDP_COMPONENT_DPI1] = { MTK_DPI, 1, &ddp_dpi }, + [DDP_COMPONENT_DSI0] = { MTK_DSI, 0, &ddp_dsi }, + [DDP_COMPONENT_DSI1] = { MTK_DSI, 1, &ddp_dsi }, + [DDP_COMPONENT_DSI2] = { MTK_DSI, 2, &ddp_dsi }, + [DDP_COMPONENT_DSI3] = { MTK_DSI, 3, &ddp_dsi }, [DDP_COMPONENT_GAMMA] = { MTK_DISP_GAMMA, 0, &ddp_gamma }, [DDP_COMPONENT_OD0] = { MTK_DISP_OD, 0, &ddp_od }, [DDP_COMPONENT_OD1] = { MTK_DISP_OD, 1, &ddp_od }, - [DDP_COMPONENT_OVL0] = { MTK_DISP_OVL, 0, NULL }, - [DDP_COMPONENT_OVL1] = { MTK_DISP_OVL, 1, NULL }, - [DDP_COMPONENT_OVL_2L0] = { MTK_DISP_OVL_2L, 0, NULL }, - [DDP_COMPONENT_OVL_2L1] = { MTK_DISP_OVL_2L, 1, NULL }, + [DDP_COMPONENT_OVL0] = { MTK_DISP_OVL, 0, &ddp_ovl }, + [DDP_COMPONENT_OVL1] = { MTK_DISP_OVL, 1, &ddp_ovl }, + [DDP_COMPONENT_OVL_2L0] = { MTK_DISP_OVL_2L, 0, &ddp_ovl }, + [DDP_COMPONENT_OVL_2L1] = { MTK_DISP_OVL_2L, 1, &ddp_ovl }, [DDP_COMPONENT_PWM0] = { MTK_DISP_PWM, 0, NULL }, [DDP_COMPONENT_PWM1] = { MTK_DISP_PWM, 1, NULL }, [DDP_COMPONENT_PWM2] = { MTK_DISP_PWM, 2, NULL }, - [DDP_COMPONENT_RDMA0] = { MTK_DISP_RDMA, 0, NULL }, - [DDP_COMPONENT_RDMA1] = { MTK_DISP_RDMA, 1, NULL }, - [DDP_COMPONENT_RDMA2] = { MTK_DISP_RDMA, 2, NULL }, + [DDP_COMPONENT_RDMA0] = { MTK_DISP_RDMA, 0, &ddp_rdma }, + [DDP_COMPONENT_RDMA1] = { MTK_DISP_RDMA, 1, &ddp_rdma }, + [DDP_COMPONENT_RDMA2] = { MTK_DISP_RDMA, 2, &ddp_rdma }, [DDP_COMPONENT_UFOE] = { MTK_DISP_UFOE, 0, &ddp_ufoe }, [DDP_COMPONENT_WDMA0] = { MTK_DISP_WDMA, 0, NULL }, [DDP_COMPONENT_WDMA1] = { MTK_DISP_WDMA, 1, NULL }, }; -static bool mtk_drm_find_comp_in_ddp(struct mtk_ddp_comp ddp_comp, +static bool mtk_drm_find_comp_in_ddp(struct device *dev, const enum mtk_ddp_comp_id *path, - unsigned int path_len) + unsigned int path_len, + struct mtk_ddp_comp *ddp_comp) { unsigned int i; @@ -424,7 +407,7 @@ static bool mtk_drm_find_comp_in_ddp(struct mtk_ddp_comp ddp_comp, return false; for (i = 0U; i < path_len; i++) - if (ddp_comp.id == path[i]) + if (dev == ddp_comp[path[i]].dev) return true; return false; @@ -446,18 +429,19 @@ int mtk_ddp_comp_get_id(struct device_node *node, } unsigned int mtk_drm_find_possible_crtc_by_comp(struct drm_device *drm, - struct mtk_ddp_comp ddp_comp) + struct device *dev) { struct mtk_drm_private *private = drm->dev_private; unsigned int ret = 0; - if (mtk_drm_find_comp_in_ddp(ddp_comp, private->data->main_path, private->data->main_len)) + if (mtk_drm_find_comp_in_ddp(dev, private->data->main_path, private->data->main_len, + private->ddp_comp)) ret = BIT(0); - else if (mtk_drm_find_comp_in_ddp(ddp_comp, private->data->ext_path, - private->data->ext_len)) + else if (mtk_drm_find_comp_in_ddp(dev, private->data->ext_path, + private->data->ext_len, private->ddp_comp)) ret = BIT(1); - else if (mtk_drm_find_comp_in_ddp(ddp_comp, private->data->third_path, - private->data->third_len)) + else if (mtk_drm_find_comp_in_ddp(dev, private->data->third_path, + private->data->third_len, private->ddp_comp)) ret = BIT(2); else DRM_INFO("Failed to find comp in ddp table\n"); @@ -465,59 +449,15 @@ unsigned int mtk_drm_find_possible_crtc_by_comp(struct drm_device *drm, return ret; } -int mtk_ddp_comp_init(struct device *dev, struct device_node *node, - struct mtk_ddp_comp *comp, enum mtk_ddp_comp_id comp_id, - const struct mtk_ddp_comp_funcs *funcs) +static int mtk_ddp_get_larb_dev(struct device_node *node, struct mtk_ddp_comp *comp, + struct device *dev) { - enum mtk_ddp_comp_type type; struct device_node 
*larb_node; struct platform_device *larb_pdev; -#if IS_REACHABLE(CONFIG_MTK_CMDQ) - struct resource res; - struct cmdq_client_reg cmdq_reg; - int ret; -#endif - - if (comp_id < 0 || comp_id >= DDP_COMPONENT_ID_MAX) - return -EINVAL; - - type = mtk_ddp_matches[comp_id].type; - - comp->id = comp_id; - comp->funcs = funcs ?: mtk_ddp_matches[comp_id].funcs; - - if (comp_id == DDP_COMPONENT_BLS || - comp_id == DDP_COMPONENT_DPI0 || - comp_id == DDP_COMPONENT_DPI1 || - comp_id == DDP_COMPONENT_DSI0 || - comp_id == DDP_COMPONENT_DSI1 || - comp_id == DDP_COMPONENT_DSI2 || - comp_id == DDP_COMPONENT_DSI3 || - comp_id == DDP_COMPONENT_PWM0) { - comp->regs = NULL; - comp->clk = NULL; - comp->irq = 0; - return 0; - } - - comp->regs = of_iomap(node, 0); - comp->irq = of_irq_get(node, 0); - comp->clk = of_clk_get(node, 0); - if (IS_ERR(comp->clk)) - return PTR_ERR(comp->clk); - - /* Only DMA capable components need the LARB property */ - comp->larb_dev = NULL; - if (type != MTK_DISP_OVL && - type != MTK_DISP_OVL_2L && - type != MTK_DISP_RDMA && - type != MTK_DISP_WDMA) - return 0; larb_node = of_parse_phandle(node, "mediatek,larb", 0); if (!larb_node) { - dev_err(dev, - "Missing mediadek,larb phandle in %pOF node\n", node); + dev_err(dev, "Missing mediadek,larb phandle in %pOF node\n", node); return -EINVAL; } @@ -528,40 +468,71 @@ int mtk_ddp_comp_init(struct device *dev, struct device_node *node, return -EPROBE_DEFER; } of_node_put(larb_node); - comp->larb_dev = &larb_pdev->dev; -#if IS_REACHABLE(CONFIG_MTK_CMDQ) - if (of_address_to_resource(node, 0, &res) != 0) { - dev_err(dev, "Missing reg in %s node\n", node->full_name); - put_device(&larb_pdev->dev); - return -EINVAL; - } - comp->regs_pa = res.start; - - ret = cmdq_dev_get_client_reg(dev, &cmdq_reg, 0); - if (ret) - dev_dbg(dev, "get mediatek,gce-client-reg fail!\n"); - else - comp->subsys = cmdq_reg.subsys; -#endif return 0; } -int mtk_ddp_comp_register(struct drm_device *drm, struct mtk_ddp_comp *comp) +int mtk_ddp_comp_init(struct device_node *node, struct mtk_ddp_comp *comp, + enum mtk_ddp_comp_id comp_id) { - struct mtk_drm_private *private = drm->dev_private; + struct platform_device *comp_pdev; + enum mtk_ddp_comp_type type; + struct mtk_ddp_comp_dev *priv; + int ret; - if (private->ddp_comp[comp->id]) - return -EBUSY; + if (comp_id < 0 || comp_id >= DDP_COMPONENT_ID_MAX) + return -EINVAL; - private->ddp_comp[comp->id] = comp; - return 0; -} + type = mtk_ddp_matches[comp_id].type; -void mtk_ddp_comp_unregister(struct drm_device *drm, struct mtk_ddp_comp *comp) -{ - struct mtk_drm_private *private = drm->dev_private; + comp->id = comp_id; + comp->funcs = mtk_ddp_matches[comp_id].funcs; + comp_pdev = of_find_device_by_node(node); + if (!comp_pdev) { + DRM_INFO("Waiting for device %s\n", node->full_name); + return -EPROBE_DEFER; + } + comp->dev = &comp_pdev->dev; - private->ddp_comp[comp->id] = NULL; + /* Only DMA capable components need the LARB property */ + if (type == MTK_DISP_OVL || + type == MTK_DISP_OVL_2L || + type == MTK_DISP_RDMA || + type == MTK_DISP_WDMA) { + ret = mtk_ddp_get_larb_dev(node, comp, comp->dev); + if (ret) + return ret; + } + + if (type == MTK_DISP_BLS || + type == MTK_DISP_CCORR || + type == MTK_DISP_COLOR || + type == MTK_DISP_GAMMA || + type == MTK_DPI || + type == MTK_DSI || + type == MTK_DISP_OVL || + type == MTK_DISP_OVL_2L || + type == MTK_DISP_PWM || + type == MTK_DISP_RDMA) + return 0; + + priv = devm_kzalloc(comp->dev, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + priv->regs = of_iomap(node, 
0); + priv->clk = of_clk_get(node, 0); + if (IS_ERR(priv->clk)) + return PTR_ERR(priv->clk); + +#if IS_REACHABLE(CONFIG_MTK_CMDQ) + ret = cmdq_dev_get_client_reg(comp->dev, &priv->cmdq_reg, 0); + if (ret) + dev_dbg(comp->dev, "get mediatek,gce-client-reg fail!\n"); +#endif + + platform_set_drvdata(comp_pdev, priv); + + return 0; } diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h index 5aa52b7afeec..bb914d976cf5 100644 --- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h +++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h @@ -7,6 +7,7 @@ #define MTK_DRM_DDP_COMP_H #include <linux/io.h> +#include <linux/soc/mediatek/mtk-cmdq.h> #include <linux/soc/mediatek/mtk-mmsys.h> struct device; @@ -39,79 +40,95 @@ enum mtk_ddp_comp_type { struct mtk_ddp_comp; struct cmdq_pkt; struct mtk_ddp_comp_funcs { - void (*config)(struct mtk_ddp_comp *comp, unsigned int w, + int (*clk_enable)(struct device *dev); + void (*clk_disable)(struct device *dev); + void (*config)(struct device *dev, unsigned int w, unsigned int h, unsigned int vrefresh, unsigned int bpc, struct cmdq_pkt *cmdq_pkt); - void (*start)(struct mtk_ddp_comp *comp); - void (*stop)(struct mtk_ddp_comp *comp); - void (*enable_vblank)(struct mtk_ddp_comp *comp, struct drm_crtc *crtc); - void (*disable_vblank)(struct mtk_ddp_comp *comp); - unsigned int (*supported_rotations)(struct mtk_ddp_comp *comp); - unsigned int (*layer_nr)(struct mtk_ddp_comp *comp); - int (*layer_check)(struct mtk_ddp_comp *comp, + void (*start)(struct device *dev); + void (*stop)(struct device *dev); + void (*enable_vblank)(struct device *dev, + void (*vblank_cb)(void *), + void *vblank_cb_data); + void (*disable_vblank)(struct device *dev); + unsigned int (*supported_rotations)(struct device *dev); + unsigned int (*layer_nr)(struct device *dev); + int (*layer_check)(struct device *dev, unsigned int idx, struct mtk_plane_state *state); - void (*layer_config)(struct mtk_ddp_comp *comp, unsigned int idx, + void (*layer_config)(struct device *dev, unsigned int idx, struct mtk_plane_state *state, struct cmdq_pkt *cmdq_pkt); - void (*gamma_set)(struct mtk_ddp_comp *comp, + void (*gamma_set)(struct device *dev, struct drm_crtc_state *state); - void (*bgclr_in_on)(struct mtk_ddp_comp *comp); - void (*bgclr_in_off)(struct mtk_ddp_comp *comp); - void (*ctm_set)(struct mtk_ddp_comp *comp, + void (*bgclr_in_on)(struct device *dev); + void (*bgclr_in_off)(struct device *dev); + void (*ctm_set)(struct device *dev, struct drm_crtc_state *state); }; struct mtk_ddp_comp { - struct clk *clk; - void __iomem *regs; + struct device *dev; int irq; struct device *larb_dev; enum mtk_ddp_comp_id id; const struct mtk_ddp_comp_funcs *funcs; - resource_size_t regs_pa; - u8 subsys; }; +static inline int mtk_ddp_comp_clk_enable(struct mtk_ddp_comp *comp) +{ + if (comp->funcs && comp->funcs->clk_enable) + return comp->funcs->clk_enable(comp->dev); + + return 0; +} + +static inline void mtk_ddp_comp_clk_disable(struct mtk_ddp_comp *comp) +{ + if (comp->funcs && comp->funcs->clk_disable) + comp->funcs->clk_disable(comp->dev); +} + static inline void mtk_ddp_comp_config(struct mtk_ddp_comp *comp, unsigned int w, unsigned int h, unsigned int vrefresh, unsigned int bpc, struct cmdq_pkt *cmdq_pkt) { if (comp->funcs && comp->funcs->config) - comp->funcs->config(comp, w, h, vrefresh, bpc, cmdq_pkt); + comp->funcs->config(comp->dev, w, h, vrefresh, bpc, cmdq_pkt); } static inline void mtk_ddp_comp_start(struct mtk_ddp_comp *comp) { if (comp->funcs && 
comp->funcs->start) - comp->funcs->start(comp); + comp->funcs->start(comp->dev); } static inline void mtk_ddp_comp_stop(struct mtk_ddp_comp *comp) { if (comp->funcs && comp->funcs->stop) - comp->funcs->stop(comp); + comp->funcs->stop(comp->dev); } static inline void mtk_ddp_comp_enable_vblank(struct mtk_ddp_comp *comp, - struct drm_crtc *crtc) + void (*vblank_cb)(void *), + void *vblank_cb_data) { if (comp->funcs && comp->funcs->enable_vblank) - comp->funcs->enable_vblank(comp, crtc); + comp->funcs->enable_vblank(comp->dev, vblank_cb, vblank_cb_data); } static inline void mtk_ddp_comp_disable_vblank(struct mtk_ddp_comp *comp) { if (comp->funcs && comp->funcs->disable_vblank) - comp->funcs->disable_vblank(comp); + comp->funcs->disable_vblank(comp->dev); } static inline unsigned int mtk_ddp_comp_supported_rotations(struct mtk_ddp_comp *comp) { if (comp->funcs && comp->funcs->supported_rotations) - return comp->funcs->supported_rotations(comp); + return comp->funcs->supported_rotations(comp->dev); return 0; } @@ -119,7 +136,7 @@ unsigned int mtk_ddp_comp_supported_rotations(struct mtk_ddp_comp *comp) static inline unsigned int mtk_ddp_comp_layer_nr(struct mtk_ddp_comp *comp) { if (comp->funcs && comp->funcs->layer_nr) - return comp->funcs->layer_nr(comp); + return comp->funcs->layer_nr(comp->dev); return 0; } @@ -129,7 +146,7 @@ static inline int mtk_ddp_comp_layer_check(struct mtk_ddp_comp *comp, struct mtk_plane_state *state) { if (comp->funcs && comp->funcs->layer_check) - return comp->funcs->layer_check(comp, idx, state); + return comp->funcs->layer_check(comp->dev, idx, state); return 0; } @@ -139,52 +156,49 @@ static inline void mtk_ddp_comp_layer_config(struct mtk_ddp_comp *comp, struct cmdq_pkt *cmdq_pkt) { if (comp->funcs && comp->funcs->layer_config) - comp->funcs->layer_config(comp, idx, state, cmdq_pkt); + comp->funcs->layer_config(comp->dev, idx, state, cmdq_pkt); } static inline void mtk_ddp_gamma_set(struct mtk_ddp_comp *comp, struct drm_crtc_state *state) { if (comp->funcs && comp->funcs->gamma_set) - comp->funcs->gamma_set(comp, state); + comp->funcs->gamma_set(comp->dev, state); } static inline void mtk_ddp_comp_bgclr_in_on(struct mtk_ddp_comp *comp) { if (comp->funcs && comp->funcs->bgclr_in_on) - comp->funcs->bgclr_in_on(comp); + comp->funcs->bgclr_in_on(comp->dev); } static inline void mtk_ddp_comp_bgclr_in_off(struct mtk_ddp_comp *comp) { if (comp->funcs && comp->funcs->bgclr_in_off) - comp->funcs->bgclr_in_off(comp); + comp->funcs->bgclr_in_off(comp->dev); } static inline void mtk_ddp_ctm_set(struct mtk_ddp_comp *comp, struct drm_crtc_state *state) { if (comp->funcs && comp->funcs->ctm_set) - comp->funcs->ctm_set(comp, state); + comp->funcs->ctm_set(comp->dev, state); } int mtk_ddp_comp_get_id(struct device_node *node, enum mtk_ddp_comp_type comp_type); unsigned int mtk_drm_find_possible_crtc_by_comp(struct drm_device *drm, - struct mtk_ddp_comp ddp_comp); -int mtk_ddp_comp_init(struct device *dev, struct device_node *comp_node, - struct mtk_ddp_comp *comp, enum mtk_ddp_comp_id comp_id, - const struct mtk_ddp_comp_funcs *funcs); -int mtk_ddp_comp_register(struct drm_device *drm, struct mtk_ddp_comp *comp); -void mtk_ddp_comp_unregister(struct drm_device *drm, struct mtk_ddp_comp *comp); -void mtk_dither_set(struct mtk_ddp_comp *comp, unsigned int bpc, - unsigned int CFG, struct cmdq_pkt *cmdq_pkt); + struct device *dev); +int mtk_ddp_comp_init(struct device_node *comp_node, struct mtk_ddp_comp *comp, + enum mtk_ddp_comp_id comp_id); enum mtk_ddp_comp_type 
mtk_ddp_comp_get_type(enum mtk_ddp_comp_id comp_id); void mtk_ddp_write(struct cmdq_pkt *cmdq_pkt, unsigned int value, - struct mtk_ddp_comp *comp, unsigned int offset); + struct cmdq_client_reg *cmdq_reg, void __iomem *regs, + unsigned int offset); void mtk_ddp_write_relaxed(struct cmdq_pkt *cmdq_pkt, unsigned int value, - struct mtk_ddp_comp *comp, unsigned int offset); + struct cmdq_client_reg *cmdq_reg, void __iomem *regs, + unsigned int offset); void mtk_ddp_write_mask(struct cmdq_pkt *cmdq_pkt, unsigned int value, - struct mtk_ddp_comp *comp, unsigned int offset, - unsigned int mask); + struct cmdq_client_reg *cmdq_reg, void __iomem *regs, + unsigned int offset, unsigned int mask); #endif /* MTK_DRM_DDP_COMP_H */ diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c index 2f717df28a77..b013d56d2777 100644 --- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c +++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c @@ -10,7 +10,6 @@ #include <linux/of_address.h> #include <linux/of_platform.h> #include <linux/pm_runtime.h> -#include <linux/soc/mediatek/mtk-mmsys.h> #include <linux/dma-mapping.h> #include <drm/drm_atomic.h> @@ -26,7 +25,6 @@ #include <drm/drm_vblank.h> #include "mtk_drm_crtc.h" -#include "mtk_drm_ddp.h" #include "mtk_drm_ddp_comp.h" #include "mtk_drm_drv.h" #include "mtk_drm_gem.h" @@ -131,6 +129,24 @@ static const enum mtk_ddp_comp_id mt8173_mtk_ddp_ext[] = { DDP_COMPONENT_DPI0, }; +static const enum mtk_ddp_comp_id mt8183_mtk_ddp_main[] = { + DDP_COMPONENT_OVL0, + DDP_COMPONENT_OVL_2L0, + DDP_COMPONENT_RDMA0, + DDP_COMPONENT_COLOR0, + DDP_COMPONENT_CCORR, + DDP_COMPONENT_AAL0, + DDP_COMPONENT_GAMMA, + DDP_COMPONENT_DITHER, + DDP_COMPONENT_DSI0, +}; + +static const enum mtk_ddp_comp_id mt8183_mtk_ddp_ext[] = { + DDP_COMPONENT_OVL_2L1, + DDP_COMPONENT_RDMA1, + DDP_COMPONENT_DPI0, +}; + static const struct mtk_mmsys_driver_data mt2701_mmsys_driver_data = { .main_path = mt2701_mtk_ddp_main, .main_len = ARRAY_SIZE(mt2701_mtk_ddp_main), @@ -163,6 +179,13 @@ static const struct mtk_mmsys_driver_data mt8173_mmsys_driver_data = { .ext_len = ARRAY_SIZE(mt8173_mtk_ddp_ext), }; +static const struct mtk_mmsys_driver_data mt8183_mmsys_driver_data = { + .main_path = mt8183_mtk_ddp_main, + .main_len = ARRAY_SIZE(mt8183_mtk_ddp_main), + .ext_path = mt8183_mtk_ddp_ext, + .ext_len = ARRAY_SIZE(mt8183_mtk_ddp_ext), +}; + static int mtk_drm_kms_init(struct drm_device *drm) { struct mtk_drm_private *private = drm->dev_private; @@ -377,12 +400,20 @@ static const struct of_device_id mtk_ddp_comp_dt_ids[] = { .data = (void *)MTK_DISP_OVL }, { .compatible = "mediatek,mt8173-disp-ovl", .data = (void *)MTK_DISP_OVL }, + { .compatible = "mediatek,mt8183-disp-ovl", + .data = (void *)MTK_DISP_OVL }, + { .compatible = "mediatek,mt8183-disp-ovl-2l", + .data = (void *)MTK_DISP_OVL_2L }, { .compatible = "mediatek,mt2701-disp-rdma", .data = (void *)MTK_DISP_RDMA }, { .compatible = "mediatek,mt8173-disp-rdma", .data = (void *)MTK_DISP_RDMA }, + { .compatible = "mediatek,mt8183-disp-rdma", + .data = (void *)MTK_DISP_RDMA }, { .compatible = "mediatek,mt8173-disp-wdma", .data = (void *)MTK_DISP_WDMA }, + { .compatible = "mediatek,mt8183-disp-ccorr", + .data = (void *)MTK_DISP_CCORR }, { .compatible = "mediatek,mt2701-disp-color", .data = (void *)MTK_DISP_COLOR }, { .compatible = "mediatek,mt8173-disp-color", @@ -391,22 +422,32 @@ static const struct of_device_id mtk_ddp_comp_dt_ids[] = { .data = (void *)MTK_DISP_AAL}, { .compatible = "mediatek,mt8173-disp-gamma", .data = (void 
*)MTK_DISP_GAMMA, }, + { .compatible = "mediatek,mt8183-disp-gamma", + .data = (void *)MTK_DISP_GAMMA, }, + { .compatible = "mediatek,mt8183-disp-dither", + .data = (void *)MTK_DISP_DITHER }, { .compatible = "mediatek,mt8173-disp-ufoe", .data = (void *)MTK_DISP_UFOE }, { .compatible = "mediatek,mt2701-dsi", .data = (void *)MTK_DSI }, { .compatible = "mediatek,mt8173-dsi", .data = (void *)MTK_DSI }, + { .compatible = "mediatek,mt8183-dsi", + .data = (void *)MTK_DSI }, { .compatible = "mediatek,mt2701-dpi", .data = (void *)MTK_DPI }, { .compatible = "mediatek,mt8173-dpi", .data = (void *)MTK_DPI }, + { .compatible = "mediatek,mt8183-dpi", + .data = (void *)MTK_DPI }, { .compatible = "mediatek,mt2701-disp-mutex", .data = (void *)MTK_DISP_MUTEX }, { .compatible = "mediatek,mt2712-disp-mutex", .data = (void *)MTK_DISP_MUTEX }, { .compatible = "mediatek,mt8173-disp-mutex", .data = (void *)MTK_DISP_MUTEX }, + { .compatible = "mediatek,mt8183-disp-mutex", + .data = (void *)MTK_DISP_MUTEX }, { .compatible = "mediatek,mt2701-disp-pwm", .data = (void *)MTK_DISP_BLS }, { .compatible = "mediatek,mt8173-disp-pwm", @@ -425,6 +466,8 @@ static const struct of_device_id mtk_drm_of_ids[] = { .data = &mt2712_mmsys_driver_data}, { .compatible = "mediatek,mt8173-mmsys", .data = &mt8173_mmsys_driver_data}, + { .compatible = "mediatek,mt8183-mmsys", + .data = &mt8183_mmsys_driver_data}, { } }; @@ -488,11 +531,13 @@ static int mtk_drm_probe(struct platform_device *pdev) private->comp_node[comp_id] = of_node_get(node); /* - * Currently only the COLOR, OVL, RDMA, DSI, and DPI blocks have - * separate component platform drivers and initialize their own + * Currently only the CCORR, COLOR, GAMMA, OVL, RDMA, DSI, and DPI + * blocks have separate component platform drivers and initialize their own * DDP component structure. The others are initialized here. 
*/ - if (comp_type == MTK_DISP_COLOR || + if (comp_type == MTK_DISP_CCORR || + comp_type == MTK_DISP_COLOR || + comp_type == MTK_DISP_GAMMA || comp_type == MTK_DISP_OVL || comp_type == MTK_DISP_OVL_2L || comp_type == MTK_DISP_RDMA || @@ -502,24 +547,12 @@ static int mtk_drm_probe(struct platform_device *pdev) node); drm_of_component_match_add(dev, &match, compare_of, node); - } else { - struct mtk_ddp_comp *comp; - - comp = devm_kzalloc(dev, sizeof(*comp), GFP_KERNEL); - if (!comp) { - ret = -ENOMEM; - of_node_put(node); - goto err_node; - } - - ret = mtk_ddp_comp_init(dev->parent, node, comp, - comp_id, NULL); - if (ret) { - of_node_put(node); - goto err_node; - } - - private->ddp_comp[comp_id] = comp; + } + + ret = mtk_ddp_comp_init(node, &private->ddp_comp[comp_id], comp_id); + if (ret) { + of_node_put(node); + goto err_node; } } @@ -545,10 +578,8 @@ err_node: of_node_put(private->mutex_node); for (i = 0; i < DDP_COMPONENT_ID_MAX; i++) { of_node_put(private->comp_node[i]); - if (private->ddp_comp[i]) { - put_device(private->ddp_comp[i]->larb_dev); - private->ddp_comp[i] = NULL; - } + if (private->ddp_comp[i].larb_dev) + put_device(private->ddp_comp[i].larb_dev); } return ret; } @@ -604,8 +635,9 @@ static struct platform_driver mtk_drm_platform_driver = { }; static struct platform_driver * const mtk_drm_drivers[] = { - &mtk_ddp_driver, + &mtk_disp_ccorr_driver, &mtk_disp_color_driver, + &mtk_disp_gamma_driver, &mtk_disp_ovl_driver, &mtk_disp_rdma_driver, &mtk_dpi_driver, diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.h b/drivers/gpu/drm/mediatek/mtk_drm_drv.h index 5d771cf0bf25..637f5669e895 100644 --- a/drivers/gpu/drm/mediatek/mtk_drm_drv.h +++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.h @@ -41,13 +41,14 @@ struct mtk_drm_private { struct device *mutex_dev; struct device *mmsys_dev; struct device_node *comp_node[DDP_COMPONENT_ID_MAX]; - struct mtk_ddp_comp *ddp_comp[DDP_COMPONENT_ID_MAX]; + struct mtk_ddp_comp ddp_comp[DDP_COMPONENT_ID_MAX]; const struct mtk_mmsys_driver_data *data; struct drm_atomic_state *suspend_state; }; -extern struct platform_driver mtk_ddp_driver; +extern struct platform_driver mtk_disp_ccorr_driver; extern struct platform_driver mtk_disp_color_driver; +extern struct platform_driver mtk_disp_gamma_driver; extern struct platform_driver mtk_disp_ovl_driver; extern struct platform_driver mtk_disp_rdma_driver; extern struct platform_driver mtk_dpi_driver; diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c index 65fd99c528af..a1ff152ef468 100644 --- a/drivers/gpu/drm/mediatek/mtk_dsi.c +++ b/drivers/gpu/drm/mediatek/mtk_dsi.c @@ -25,6 +25,7 @@ #include <drm/drm_probe_helper.h> #include <drm/drm_simple_kms_helper.h> +#include "mtk_disp_drv.h" #include "mtk_drm_ddp_comp.h" #define DSI_START 0x00 @@ -178,7 +179,6 @@ struct mtk_dsi_driver_data { }; struct mtk_dsi { - struct mtk_ddp_comp ddp_comp; struct device *dev; struct mipi_dsi_host host; struct drm_encoder encoder; @@ -767,25 +767,20 @@ static const struct drm_bridge_funcs mtk_dsi_bridge_funcs = { .mode_set = mtk_dsi_bridge_mode_set, }; -static void mtk_dsi_ddp_start(struct mtk_ddp_comp *comp) +void mtk_dsi_ddp_start(struct device *dev) { - struct mtk_dsi *dsi = container_of(comp, struct mtk_dsi, ddp_comp); + struct mtk_dsi *dsi = dev_get_drvdata(dev); mtk_dsi_poweron(dsi); } -static void mtk_dsi_ddp_stop(struct mtk_ddp_comp *comp) +void mtk_dsi_ddp_stop(struct device *dev) { - struct mtk_dsi *dsi = container_of(comp, struct mtk_dsi, ddp_comp); + struct mtk_dsi *dsi = 
dev_get_drvdata(dev); mtk_dsi_poweroff(dsi); } -static const struct mtk_ddp_comp_funcs mtk_dsi_funcs = { - .start = mtk_dsi_ddp_start, - .stop = mtk_dsi_ddp_stop, -}; - static int mtk_dsi_host_attach(struct mipi_dsi_host *host, struct mipi_dsi_device *device) { @@ -952,7 +947,7 @@ static int mtk_dsi_encoder_init(struct drm_device *drm, struct mtk_dsi *dsi) return ret; } - dsi->encoder.possible_crtcs = mtk_drm_find_possible_crtc_by_comp(drm, dsi->ddp_comp); + dsi->encoder.possible_crtcs = mtk_drm_find_possible_crtc_by_comp(drm, dsi->host.dev); ret = drm_bridge_attach(&dsi->encoder, &dsi->bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR); @@ -980,32 +975,17 @@ static int mtk_dsi_bind(struct device *dev, struct device *master, void *data) struct drm_device *drm = data; struct mtk_dsi *dsi = dev_get_drvdata(dev); - ret = mtk_ddp_comp_register(drm, &dsi->ddp_comp); - if (ret < 0) { - dev_err(dev, "Failed to register component %pOF: %d\n", - dev->of_node, ret); - return ret; - } - ret = mtk_dsi_encoder_init(drm, dsi); - if (ret) - goto err_unregister; - return 0; - -err_unregister: - mtk_ddp_comp_unregister(drm, &dsi->ddp_comp); return ret; } static void mtk_dsi_unbind(struct device *dev, struct device *master, void *data) { - struct drm_device *drm = data; struct mtk_dsi *dsi = dev_get_drvdata(dev); drm_encoder_cleanup(&dsi->encoder); - mtk_ddp_comp_unregister(drm, &dsi->ddp_comp); } static const struct component_ops mtk_dsi_component_ops = { @@ -1020,7 +1000,6 @@ static int mtk_dsi_probe(struct platform_device *pdev) struct drm_panel *panel; struct resource *regs; int irq_num; - int comp_id; int ret; dsi = devm_kzalloc(dev, sizeof(*dsi), GFP_KERNEL); @@ -1090,20 +1069,6 @@ static int mtk_dsi_probe(struct platform_device *pdev) goto err_unregister_host; } - comp_id = mtk_ddp_comp_get_id(dev->of_node, MTK_DSI); - if (comp_id < 0) { - dev_err(dev, "Failed to identify by alias: %d\n", comp_id); - ret = comp_id; - goto err_unregister_host; - } - - ret = mtk_ddp_comp_init(dev, dev->of_node, &dsi->ddp_comp, comp_id, - &mtk_dsi_funcs); - if (ret) { - dev_err(dev, "Failed to initialize component: %d\n", ret); - goto err_unregister_host; - } - irq_num = platform_get_irq(pdev, 0); if (irq_num < 0) { dev_err(&pdev->dev, "failed to get dsi irq_num: %d\n", irq_num); @@ -1111,9 +1076,8 @@ static int mtk_dsi_probe(struct platform_device *pdev) goto err_unregister_host; } - irq_set_status_flags(irq_num, IRQ_TYPE_LEVEL_LOW); ret = devm_request_irq(&pdev->dev, irq_num, mtk_dsi_irq, - IRQF_TRIGGER_LOW, dev_name(&pdev->dev), dsi); + IRQF_TRIGGER_NONE, dev_name(&pdev->dev), dsi); if (ret) { dev_err(&pdev->dev, "failed to request mediatek dsi irq\n"); goto err_unregister_host; diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 7e82c41a85f1..bdc989183c64 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -534,8 +534,10 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev) if (!gpu->aspace) { dev_err(dev->dev, "No memory protection without MMU\n"); - ret = -ENXIO; - goto fail; + if (!allow_vram_carveout) { + ret = -ENXIO; + goto fail; + } } return gpu; diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c index 93da6683a866..4534633fe7cd 100644 --- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c @@ -564,8 +564,10 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev) * implement a cmdstream validator. 
*/ DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n"); - ret = -ENXIO; - goto fail; + if (!allow_vram_carveout) { + ret = -ENXIO; + goto fail; + } } icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c index c0be3a0f36b2..82bebb40234d 100644 --- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c @@ -692,8 +692,10 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev) * implement a cmdstream validator. */ DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n"); - ret = -ENXIO; - goto fail; + if (!allow_vram_carveout) { + ret = -ENXIO; + goto fail; + } } icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c index 87c8b033ad1a..12e75ba360f9 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_device.c +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c @@ -18,6 +18,10 @@ bool snapshot_debugbus = false; MODULE_PARM_DESC(snapshot_debugbus, "Include debugbus sections in GPU devcoredump (if not fused off)"); module_param_named(snapshot_debugbus, snapshot_debugbus, bool, 0600); +bool allow_vram_carveout = false; +MODULE_PARM_DESC(allow_vram_carveout, "Allow using VRAM Carveout, in place of IOMMU"); +module_param_named(allow_vram_carveout, allow_vram_carveout, bool, 0600); + static const struct adreno_info gpulist[] = { { .rev = ADRENO_REV(2, 0, 0, 0), diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 6cf9975e951e..f09175698827 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -191,8 +191,6 @@ adreno_iommu_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); - struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); - struct io_pgtable_domain_attr pgtbl_cfg; struct iommu_domain *iommu; struct msm_mmu *mmu; struct msm_gem_address_space *aspace; @@ -202,13 +200,18 @@ adreno_iommu_create_address_space(struct msm_gpu *gpu, if (!iommu) return NULL; - /* - * This allows GPU to set the bus attributes required to use system - * cache on behalf of the iommu page table walker. - */ - if (!IS_ERR(a6xx_gpu->htw_llc_slice)) { - pgtbl_cfg.quirks = IO_PGTABLE_QUIRK_ARM_OUTER_WBWA; - iommu_domain_set_attr(iommu, DOMAIN_ATTR_IO_PGTABLE_CFG, &pgtbl_cfg); + + if (adreno_is_a6xx(adreno_gpu)) { + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); + struct io_pgtable_domain_attr pgtbl_cfg; + /* + * This allows GPU to set the bus attributes required to use system + * cache on behalf of the iommu page table walker. 
+ */ + if (!IS_ERR(a6xx_gpu->htw_llc_slice)) { + pgtbl_cfg.quirks = IO_PGTABLE_QUIRK_ARM_OUTER_WBWA; + iommu_domain_set_attr(iommu, DOMAIN_ATTR_IO_PGTABLE_CFG, &pgtbl_cfg); + } } mmu = msm_iommu_new(&pdev->dev, iommu); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index c3775f79525a..b3d9a333591b 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -18,6 +18,7 @@ #include "adreno_pm4.xml.h" extern bool snapshot_debugbus; +extern bool allow_vram_carveout; enum { ADRENO_FW_PM4 = 0, @@ -211,6 +212,11 @@ static inline int adreno_is_a540(struct adreno_gpu *gpu) return gpu->revn == 540; } +static inline bool adreno_is_a6xx(struct adreno_gpu *gpu) +{ + return ((gpu->revn < 700 && gpu->revn > 599)); +} + static inline int adreno_is_a618(struct adreno_gpu *gpu) { return gpu->revn == 618; diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c index e3462f5d96d7..36b39c381b3f 100644 --- a/drivers/gpu/drm/msm/dp/dp_ctrl.c +++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c @@ -1420,16 +1420,14 @@ void dp_ctrl_host_deinit(struct dp_ctrl *dp_ctrl) static bool dp_ctrl_use_fixed_nvid(struct dp_ctrl_private *ctrl) { u8 *dpcd = ctrl->panel->dpcd; - u32 edid_quirks = 0; - edid_quirks = drm_dp_get_edid_quirks(ctrl->panel->edid); /* * For better interop experience, used a fixed NVID=0x8000 * whenever connected to a VGA dongle downstream. */ if (drm_dp_is_branch(dpcd)) - return (drm_dp_has_quirk(&ctrl->panel->desc, edid_quirks, - DP_DPCD_QUIRK_CONSTANT_N)); + return (drm_dp_has_quirk(&ctrl->panel->desc, + DP_DPCD_QUIRK_CONSTANT_N)); return false; } diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c index 6e971d552911..3bc7ed21de28 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.c +++ b/drivers/gpu/drm/msm/dp/dp_display.c @@ -693,6 +693,13 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data) return 0; } + if (state == ST_CONNECT_PENDING) { + /* wait until ST_CONNECTED */ + dp_add_event(dp, EV_IRQ_HPD_INT, 0, 1); /* delay = 1 */ + mutex_unlock(&dp->event_mutex); + return 0; + } + ret = dp_display_usbpd_attention_cb(&dp->pdev->dev); if (ret == -ECONNRESET) { /* cable unplugged */ dp->core_initialized = false; diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c index 97dca3e378b7..d1780bcac8cc 100644 --- a/drivers/gpu/drm/msm/dp/dp_panel.c +++ b/drivers/gpu/drm/msm/dp/dp_panel.c @@ -167,12 +167,18 @@ int dp_panel_read_sink_caps(struct dp_panel *dp_panel, panel = container_of(dp_panel, struct dp_panel_private, dp_panel); rc = dp_panel_read_dpcd(dp_panel); + if (rc) { + DRM_ERROR("read dpcd failed %d\n", rc); + return rc; + } + bw_code = drm_dp_link_rate_to_bw_code(dp_panel->link_info.rate); - if (rc || !is_link_rate_valid(bw_code) || + if (!is_link_rate_valid(bw_code) || !is_lane_count_valid(dp_panel->link_info.num_lanes) || (bw_code > dp_panel->max_bw_code)) { - DRM_ERROR("read dpcd failed %d\n", rc); - return rc; + DRM_ERROR("Illegal link rate=%d lane=%d\n", dp_panel->link_info.rate, + dp_panel->link_info.num_lanes); + return -EINVAL; } if (dp_panel->dfp_present) { diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 535a0263ceeb..108c405e03dd 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -457,14 +457,14 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) drm_mode_config_init(ddev); - /* Bind all our sub-components: */ - ret = 
component_bind_all(dev, ddev); + ret = msm_init_vram(ddev); if (ret) goto err_destroy_mdss; - ret = msm_init_vram(ddev); + /* Bind all our sub-components: */ + ret = component_bind_all(dev, ddev); if (ret) - goto err_msm_uninit; + goto err_destroy_mdss; dma_set_max_seg_size(dev, UINT_MAX); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 45e23125dfdb..a588b0388d27 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -96,6 +96,8 @@ static struct page **get_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); + WARN_ON(!msm_gem_is_locked(obj)); + if (!msm_obj->pages) { struct drm_device *dev = obj->dev; struct page **p; @@ -988,6 +990,8 @@ void msm_gem_free_object(struct drm_gem_object *obj) if (msm_obj->pages) kvfree(msm_obj->pages); + put_iova_vmas(obj); + /* dma_buf_detach() grabs resv lock, so we need to unlock * prior to drm_prime_gem_destroy */ @@ -997,11 +1001,10 @@ void msm_gem_free_object(struct drm_gem_object *obj) } else { msm_gem_vunmap(obj); put_pages(obj); + put_iova_vmas(obj); msm_gem_unlock(obj); } - put_iova_vmas(obj); - drm_gem_object_release(obj); kfree(msm_obj); @@ -1115,6 +1118,8 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev, struct msm_gem_vma *vma; struct page **pages; + drm_gem_private_object_init(dev, obj, size); + msm_gem_lock(obj); vma = add_vma(obj, NULL); @@ -1126,9 +1131,9 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev, to_msm_bo(obj)->vram_node = &vma->node; - drm_gem_private_object_init(dev, obj, size); - + msm_gem_lock(obj); pages = get_pages(obj); + msm_gem_unlock(obj); if (IS_ERR(pages)) { ret = PTR_ERR(pages); goto fail; diff --git a/drivers/gpu/drm/nouveau/dispnv50/Kbuild b/drivers/gpu/drm/nouveau/dispnv50/Kbuild index 6fdddb266fb1..4488e1c061b3 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/Kbuild +++ b/drivers/gpu/drm/nouveau/dispnv50/Kbuild @@ -37,6 +37,7 @@ nouveau-y += dispnv50/wimmc37b.o nouveau-y += dispnv50/wndw.o nouveau-y += dispnv50/wndwc37e.o nouveau-y += dispnv50/wndwc57e.o +nouveau-y += dispnv50/wndwc67e.o nouveau-y += dispnv50/base.o nouveau-y += dispnv50/base507c.o diff --git a/drivers/gpu/drm/nouveau/dispnv50/core.c b/drivers/gpu/drm/nouveau/dispnv50/core.c index 27ea3f34706d..abefc2343443 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/core.c +++ b/drivers/gpu/drm/nouveau/dispnv50/core.c @@ -42,6 +42,7 @@ nv50_core_new(struct nouveau_drm *drm, struct nv50_core **pcore) int version; int (*new)(struct nouveau_drm *, s32, struct nv50_core **); } cores[] = { + { GA102_DISP_CORE_CHANNEL_DMA, 0, corec57d_new }, { TU102_DISP_CORE_CHANNEL_DMA, 0, corec57d_new }, { GV100_DISP_CORE_CHANNEL_DMA, 0, corec37d_new }, { GP102_DISP_CORE_CHANNEL_DMA, 0, core917d_new }, diff --git a/drivers/gpu/drm/nouveau/dispnv50/core507d.c b/drivers/gpu/drm/nouveau/dispnv50/core507d.c index e6f16a7750f0..1a1d806e0b01 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/core507d.c +++ b/drivers/gpu/drm/nouveau/dispnv50/core507d.c @@ -36,7 +36,7 @@ core507d_update(struct nv50_core *core, u32 *interlock, bool ntfy) struct nvif_push *push = core->chan.push; int ret; - if ((ret = PUSH_WAIT(push, 5))) + if ((ret = PUSH_WAIT(push, (ntfy ? 
2 : 0) + 3))) return ret; if (ntfy) { diff --git a/drivers/gpu/drm/nouveau/dispnv50/corec37d.c b/drivers/gpu/drm/nouveau/dispnv50/corec37d.c index 9035d3ab062c..42f877f2ced2 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/corec37d.c +++ b/drivers/gpu/drm/nouveau/dispnv50/corec37d.c @@ -54,7 +54,7 @@ corec37d_update(struct nv50_core *core, u32 *interlock, bool ntfy) struct nvif_push *push = core->chan.push; int ret; - if ((ret = PUSH_WAIT(push, 9))) + if ((ret = PUSH_WAIT(push, (ntfy ? 2 * 2 : 0) + 5))) return ret; if (ntfy) { diff --git a/drivers/gpu/drm/nouveau/dispnv50/curs.c b/drivers/gpu/drm/nouveau/dispnv50/curs.c index 121c24a18f11..31d8b2e4791d 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/curs.c +++ b/drivers/gpu/drm/nouveau/dispnv50/curs.c @@ -31,6 +31,7 @@ nv50_curs_new(struct nouveau_drm *drm, int head, struct nv50_wndw **pwndw) int version; int (*new)(struct nouveau_drm *, int, s32, struct nv50_wndw **); } curses[] = { + { GA102_DISP_CURSOR, 0, cursc37a_new }, { TU102_DISP_CURSOR, 0, cursc37a_new }, { GV100_DISP_CURSOR, 0, cursc37a_new }, { GK104_DISP_CURSOR, 0, curs907a_new }, diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c index 33fff388dd83..c9e1575a06a2 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/disp.c +++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c @@ -220,9 +220,13 @@ nv50_dmac_wait(struct nvif_push *push, u32 size) return 0; } +MODULE_PARM_DESC(kms_vram_pushbuf, "Place EVO/NVD push buffers in VRAM (default: auto)"); +static int nv50_dmac_vram_pushbuf = -1; +module_param_named(kms_vram_pushbuf, nv50_dmac_vram_pushbuf, int, 0400); + int nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp, - const s32 *oclass, u8 head, void *data, u32 size, u64 syncbuf, + const s32 *oclass, u8 head, void *data, u32 size, s64 syncbuf, struct nv50_dmac *dmac) { struct nouveau_cli *cli = (void *)device->object.client; @@ -241,7 +245,8 @@ nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp, * * This appears to match NVIDIA's behaviour on Pascal. 
*/ - if (device->info.family == NV_DEVICE_INFO_V0_PASCAL) + if ((nv50_dmac_vram_pushbuf > 0) || + (nv50_dmac_vram_pushbuf < 0 && device->info.family == NV_DEVICE_INFO_V0_PASCAL)) type |= NVIF_MEM_VRAM; ret = nvif_mem_ctor_map(&cli->mmu, "kmsChanPush", type, 0x1000, @@ -271,7 +276,7 @@ nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp, if (ret) return ret; - if (!syncbuf) + if (syncbuf < 0) return 0; ret = nvif_object_ctor(&dmac->base.user, "kmsSyncCtxDma", NV50_DISP_HANDLE_SYNCBUF, @@ -305,6 +310,14 @@ nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp, * Output path helpers *****************************************************************************/ static void +nv50_outp_dump_caps(struct nouveau_drm *drm, + struct nouveau_encoder *outp) +{ + NV_DEBUG(drm, "%s caps: dp_interlace=%d\n", + outp->base.base.name, outp->caps.dp_interlace); +} + +static void nv50_outp_release(struct nouveau_encoder *nv_encoder) { struct nv50_disp *disp = nv50_disp(nv_encoder->base.base.dev); @@ -419,8 +432,7 @@ nv50_outp_atomic_check(struct drm_encoder *encoder, } struct nouveau_connector * -nv50_outp_get_new_connector(struct nouveau_encoder *outp, - struct drm_atomic_state *state) +nv50_outp_get_new_connector(struct drm_atomic_state *state, struct nouveau_encoder *outp) { struct drm_connector *connector; struct drm_connector_state *connector_state; @@ -436,8 +448,7 @@ nv50_outp_get_new_connector(struct nouveau_encoder *outp, } struct nouveau_connector * -nv50_outp_get_old_connector(struct nouveau_encoder *outp, - struct drm_atomic_state *state) +nv50_outp_get_old_connector(struct drm_atomic_state *state, struct nouveau_encoder *outp) { struct drm_connector *connector; struct drm_connector_state *connector_state; @@ -452,27 +463,44 @@ nv50_outp_get_old_connector(struct nouveau_encoder *outp, return NULL; } +static struct nouveau_crtc * +nv50_outp_get_new_crtc(const struct drm_atomic_state *state, const struct nouveau_encoder *outp) +{ + struct drm_crtc *crtc; + struct drm_crtc_state *crtc_state; + const u32 mask = drm_encoder_mask(&outp->base.base); + int i; + + for_each_new_crtc_in_state(state, crtc, crtc_state, i) { + if (crtc_state->encoder_mask & mask) + return nouveau_crtc(crtc); + } + + return NULL; +} + /****************************************************************************** * DAC *****************************************************************************/ static void -nv50_dac_disable(struct drm_encoder *encoder, struct drm_atomic_state *state) +nv50_dac_atomic_disable(struct drm_encoder *encoder, struct drm_atomic_state *state) { struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder); struct nv50_core *core = nv50_disp(encoder->dev)->core; const u32 ctrl = NVDEF(NV507D, DAC_SET_CONTROL, OWNER, NONE); - if (nv_encoder->crtc) - core->func->dac->ctrl(core, nv_encoder->or, ctrl, NULL); + + core->func->dac->ctrl(core, nv_encoder->or, ctrl, NULL); nv_encoder->crtc = NULL; nv50_outp_release(nv_encoder); } static void -nv50_dac_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) +nv50_dac_atomic_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) { struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder); - struct nouveau_crtc *nv_crtc = nouveau_crtc(encoder->crtc); - struct nv50_head_atom *asyh = nv50_head_atom(nv_crtc->base.state); + struct nouveau_crtc *nv_crtc = nv50_outp_get_new_crtc(state, nv_encoder); + struct nv50_head_atom *asyh = + nv50_head_atom(drm_atomic_get_new_crtc_state(state, &nv_crtc->base)); struct 
nv50_core *core = nv50_disp(encoder->dev)->core; u32 ctrl = 0; @@ -493,7 +521,7 @@ nv50_dac_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) core->func->dac->ctrl(core, nv_encoder->or, ctrl, asyh); asyh->or.depth = 0; - nv_encoder->crtc = encoder->crtc; + nv_encoder->crtc = &nv_crtc->base; } static enum drm_connector_status @@ -526,8 +554,8 @@ nv50_dac_detect(struct drm_encoder *encoder, struct drm_connector *connector) static const struct drm_encoder_helper_funcs nv50_dac_help = { .atomic_check = nv50_outp_atomic_check, - .atomic_enable = nv50_dac_enable, - .atomic_disable = nv50_dac_disable, + .atomic_enable = nv50_dac_atomic_enable, + .atomic_disable = nv50_dac_atomic_disable, .detect = nv50_dac_detect }; @@ -593,34 +621,27 @@ nv50_audio_component_get_eld(struct device *kdev, int port, int dev_id, struct nouveau_drm *drm = nouveau_drm(drm_dev); struct drm_encoder *encoder; struct nouveau_encoder *nv_encoder; - struct drm_connector *connector; struct nouveau_crtc *nv_crtc; - struct drm_connector_list_iter conn_iter; int ret = 0; *enabled = false; + mutex_lock(&drm->audio.lock); + drm_for_each_encoder(encoder, drm->dev) { struct nouveau_connector *nv_connector = NULL; + if (encoder->encoder_type == DRM_MODE_ENCODER_DPMST) + continue; /* TODO */ + nv_encoder = nouveau_encoder(encoder); + nv_connector = nouveau_connector(nv_encoder->audio.connector); + nv_crtc = nouveau_crtc(nv_encoder->crtc); - drm_connector_list_iter_begin(drm_dev, &conn_iter); - drm_for_each_connector_iter(connector, &conn_iter) { - if (connector->state->best_encoder == encoder) { - nv_connector = nouveau_connector(connector); - break; - } - } - drm_connector_list_iter_end(&conn_iter); - if (!nv_connector) + if (!nv_crtc || nv_encoder->or != port || nv_crtc->index != dev_id) continue; - nv_crtc = nouveau_crtc(encoder->crtc); - if (!nv_crtc || nv_encoder->or != port || - nv_crtc->index != dev_id) - continue; - *enabled = nv_encoder->audio; + *enabled = nv_encoder->audio.enabled; if (*enabled) { ret = drm_eld_size(nv_connector->base.eld); memcpy(buf, nv_connector->base.eld, @@ -629,6 +650,8 @@ nv50_audio_component_get_eld(struct device *kdev, int port, int dev_id, break; } + mutex_unlock(&drm->audio.lock); + return ret; } @@ -678,17 +701,22 @@ static const struct component_ops nv50_audio_component_bind_ops = { static void nv50_audio_component_init(struct nouveau_drm *drm) { - if (!component_add(drm->dev->dev, &nv50_audio_component_bind_ops)) - drm->audio.component_registered = true; + if (component_add(drm->dev->dev, &nv50_audio_component_bind_ops)) + return; + + drm->audio.component_registered = true; + mutex_init(&drm->audio.lock); } static void nv50_audio_component_fini(struct nouveau_drm *drm) { - if (drm->audio.component_registered) { - component_del(drm->dev->dev, &nv50_audio_component_bind_ops); - drm->audio.component_registered = false; - } + if (!drm->audio.component_registered) + return; + + component_del(drm->dev->dev, &nv50_audio_component_bind_ops); + drm->audio.component_registered = false; + mutex_destroy(&drm->audio.lock); } /****************************************************************************** @@ -711,24 +739,25 @@ nv50_audio_disable(struct drm_encoder *encoder, struct nouveau_crtc *nv_crtc) (0x0100 << nv_crtc->index), }; - if (!nv_encoder->audio) - return; - - nv_encoder->audio = false; - nvif_mthd(&disp->disp->object, 0, &args, sizeof(args)); + mutex_lock(&drm->audio.lock); + if (nv_encoder->audio.enabled) { + nv_encoder->audio.enabled = false; + nv_encoder->audio.connector = 
NULL; + nvif_mthd(&disp->disp->object, 0, &args, sizeof(args)); + } + mutex_unlock(&drm->audio.lock); nv50_audio_component_eld_notify(drm->audio.component, nv_encoder->or, nv_crtc->index); } static void -nv50_audio_enable(struct drm_encoder *encoder, struct drm_atomic_state *state, +nv50_audio_enable(struct drm_encoder *encoder, struct nouveau_crtc *nv_crtc, + struct nouveau_connector *nv_connector, struct drm_atomic_state *state, struct drm_display_mode *mode) { struct nouveau_drm *drm = nouveau_drm(encoder->dev); struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder); - struct nouveau_crtc *nv_crtc = nouveau_crtc(encoder->crtc); - struct nouveau_connector *nv_connector; struct nv50_disp *disp = nv50_disp(encoder->dev); struct __packed { struct { @@ -744,15 +773,19 @@ nv50_audio_enable(struct drm_encoder *encoder, struct drm_atomic_state *state, (0x0100 << nv_crtc->index), }; - nv_connector = nv50_outp_get_new_connector(nv_encoder, state); if (!drm_detect_monitor_audio(nv_connector->edid)) return; + mutex_lock(&drm->audio.lock); + memcpy(args.data, nv_connector->base.eld, sizeof(args.data)); nvif_mthd(&disp->disp->object, 0, &args, sizeof(args.base) + drm_eld_size(args.data)); - nv_encoder->audio = true; + nv_encoder->audio.enabled = true; + nv_encoder->audio.connector = &nv_connector->base; + + mutex_unlock(&drm->audio.lock); nv50_audio_component_eld_notify(drm->audio.component, nv_encoder->or, nv_crtc->index); @@ -781,12 +814,12 @@ nv50_hdmi_disable(struct drm_encoder *encoder, struct nouveau_crtc *nv_crtc) } static void -nv50_hdmi_enable(struct drm_encoder *encoder, struct drm_atomic_state *state, +nv50_hdmi_enable(struct drm_encoder *encoder, struct nouveau_crtc *nv_crtc, + struct nouveau_connector *nv_connector, struct drm_atomic_state *state, struct drm_display_mode *mode) { struct nouveau_drm *drm = nouveau_drm(encoder->dev); struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder); - struct nouveau_crtc *nv_crtc = nouveau_crtc(encoder->crtc); struct nv50_disp *disp = nv50_disp(encoder->dev); struct { struct nv50_disp_mthd_v1 base; @@ -801,7 +834,6 @@ nv50_hdmi_enable(struct drm_encoder *encoder, struct drm_atomic_state *state, .pwr.state = 1, .pwr.rekey = 56, /* binary driver, and tegra, constant */ }; - struct nouveau_connector *nv_connector; struct drm_hdmi_info *hdmi; u32 max_ac_packet; union hdmi_infoframe avi_frame; @@ -811,7 +843,6 @@ nv50_hdmi_enable(struct drm_encoder *encoder, struct drm_atomic_state *state, int ret; int size; - nv_connector = nv50_outp_get_new_connector(nv_encoder, state); if (!drm_detect_hdmi_monitor(nv_connector->edid)) return; @@ -857,7 +888,7 @@ nv50_hdmi_enable(struct drm_encoder *encoder, struct drm_atomic_state *state, + args.pwr.vendor_infoframe_length; nvif_mthd(&disp->disp->object, 0, &args, size); - nv50_audio_enable(encoder, state, mode); + nv50_audio_enable(encoder, nv_crtc, nv_connector, state, mode); /* If SCDC is supported by the downstream monitor, update * divider / scrambling settings to what we programmed above. 
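/*
 * Editor's note: the encoder hunks above all make the same change: the
 * atomic enable/disable paths stop trusting the legacy encoder->crtc
 * pointer and instead derive the connector and CRTC from the
 * drm_atomic_state being committed, which stays valid even under
 * nonblocking commits. A minimal, self-contained sketch of the lookup
 * idiom using only core DRM helpers follows; example_get_new_crtc is an
 * illustrative name, not a driver symbol.
 */
#include <linux/types.h>
#include <drm/drm_atomic.h>
#include <drm/drm_encoder.h>

static struct drm_crtc *
example_get_new_crtc(struct drm_atomic_state *state, struct drm_encoder *encoder)
{
	const u32 mask = drm_encoder_mask(encoder);
	struct drm_crtc_state *crtc_state;
	struct drm_crtc *crtc;
	int i;

	/* The new CRTC state whose encoder_mask contains this encoder is
	 * the CRTC the encoder is being enabled with in this commit. */
	for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
		if (crtc_state->encoder_mask & mask)
			return crtc;
	}
	return NULL;
}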
@@ -898,6 +929,7 @@ struct nv50_mstc { struct nv50_msto { struct drm_encoder encoder; + /* head is statically assigned on msto creation */ struct nv50_head *head; struct nv50_mstc *mstc; bool disabled; @@ -1056,11 +1088,12 @@ nv50_dp_bpc_to_depth(unsigned int bpc) } static void -nv50_msto_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) +nv50_msto_atomic_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) { - struct nv50_head *head = nv50_head(encoder->crtc); - struct nv50_head_atom *armh = nv50_head_atom(head->base.base.state); struct nv50_msto *msto = nv50_msto(encoder); + struct nv50_head *head = msto->head; + struct nv50_head_atom *asyh = + nv50_head_atom(drm_atomic_get_new_crtc_state(state, &head->base.base)); struct nv50_mstc *mstc = NULL; struct nv50_mstm *mstm = NULL; struct drm_connector *connector; @@ -1081,8 +1114,7 @@ nv50_msto_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) if (WARN_ON(!mstc)) return; - r = drm_dp_mst_allocate_vcpi(&mstm->mgr, mstc->port, armh->dp.pbn, - armh->dp.tu); + r = drm_dp_mst_allocate_vcpi(&mstm->mgr, mstc->port, asyh->dp.pbn, asyh->dp.tu); if (!r) DRM_DEBUG_KMS("Failed to allocate VCPI\n"); @@ -1094,15 +1126,15 @@ nv50_msto_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) else proto = NV917D_SOR_SET_CONTROL_PROTOCOL_DP_B; - mstm->outp->update(mstm->outp, head->base.index, armh, proto, - nv50_dp_bpc_to_depth(armh->or.bpc)); + mstm->outp->update(mstm->outp, head->base.index, asyh, proto, + nv50_dp_bpc_to_depth(asyh->or.bpc)); msto->mstc = mstc; mstm->modified = true; } static void -nv50_msto_disable(struct drm_encoder *encoder, struct drm_atomic_state *state) +nv50_msto_atomic_disable(struct drm_encoder *encoder, struct drm_atomic_state *state) { struct nv50_msto *msto = nv50_msto(encoder); struct nv50_mstc *mstc = msto->mstc; @@ -1119,8 +1151,8 @@ nv50_msto_disable(struct drm_encoder *encoder, struct drm_atomic_state *state) static const struct drm_encoder_helper_funcs nv50_msto_help = { - .atomic_disable = nv50_msto_disable, - .atomic_enable = nv50_msto_enable, + .atomic_disable = nv50_msto_atomic_disable, + .atomic_enable = nv50_msto_atomic_enable, .atomic_check = nv50_msto_atomic_check, }; @@ -1616,43 +1648,38 @@ nv50_sor_update(struct nouveau_encoder *nv_encoder, u8 head, } static void -nv50_sor_disable(struct drm_encoder *encoder, - struct drm_atomic_state *state) +nv50_sor_atomic_disable(struct drm_encoder *encoder, struct drm_atomic_state *state) { struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder); struct nouveau_crtc *nv_crtc = nouveau_crtc(nv_encoder->crtc); - struct nouveau_connector *nv_connector = - nv50_outp_get_old_connector(nv_encoder, state); - - nv_encoder->crtc = NULL; - - if (nv_crtc) { - struct drm_dp_aux *aux = &nv_connector->aux; - u8 pwr; + struct nouveau_connector *nv_connector = nv50_outp_get_old_connector(state, nv_encoder); + struct drm_dp_aux *aux = &nv_connector->aux; + u8 pwr; - if (nv_encoder->dcb->type == DCB_OUTPUT_DP) { - int ret = drm_dp_dpcd_readb(aux, DP_SET_POWER, &pwr); + if (nv_encoder->dcb->type == DCB_OUTPUT_DP) { + int ret = drm_dp_dpcd_readb(aux, DP_SET_POWER, &pwr); - if (ret == 0) { - pwr &= ~DP_SET_POWER_MASK; - pwr |= DP_SET_POWER_D3; - drm_dp_dpcd_writeb(aux, DP_SET_POWER, pwr); - } + if (ret == 0) { + pwr &= ~DP_SET_POWER_MASK; + pwr |= DP_SET_POWER_D3; + drm_dp_dpcd_writeb(aux, DP_SET_POWER, pwr); } - - nv_encoder->update(nv_encoder, nv_crtc->index, NULL, 0, 0); - nv50_audio_disable(encoder, nv_crtc); - 
nv50_hdmi_disable(&nv_encoder->base.base, nv_crtc); - nv50_outp_release(nv_encoder); } + + nv_encoder->update(nv_encoder, nv_crtc->index, NULL, 0, 0); + nv50_audio_disable(encoder, nv_crtc); + nv50_hdmi_disable(&nv_encoder->base.base, nv_crtc); + nv50_outp_release(nv_encoder); + nv_encoder->crtc = NULL; } static void -nv50_sor_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) +nv50_sor_atomic_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) { struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder); - struct nouveau_crtc *nv_crtc = nouveau_crtc(encoder->crtc); - struct nv50_head_atom *asyh = nv50_head_atom(nv_crtc->base.state); + struct nouveau_crtc *nv_crtc = nv50_outp_get_new_crtc(state, nv_encoder); + struct nv50_head_atom *asyh = + nv50_head_atom(drm_atomic_get_new_crtc_state(state, &nv_crtc->base)); struct drm_display_mode *mode = &asyh->state.adjusted_mode; struct { struct nv50_disp_mthd_v1 base; @@ -1672,8 +1699,8 @@ nv50_sor_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) u8 proto = NV507D_SOR_SET_CONTROL_PROTOCOL_CUSTOM; u8 depth = NV837D_SOR_SET_CONTROL_PIXEL_DEPTH_DEFAULT; - nv_connector = nv50_outp_get_new_connector(nv_encoder, state); - nv_encoder->crtc = encoder->crtc; + nv_connector = nv50_outp_get_new_connector(state, nv_encoder); + nv_encoder->crtc = &nv_crtc->base; if ((disp->disp->object.oclass == GT214_DISP || disp->disp->object.oclass >= GF110_DISP) && @@ -1699,7 +1726,7 @@ nv50_sor_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) proto = NV507D_SOR_SET_CONTROL_PROTOCOL_SINGLE_TMDS_B; } - nv50_hdmi_enable(&nv_encoder->base.base, state, mode); + nv50_hdmi_enable(&nv_encoder->base.base, nv_crtc, nv_connector, state, mode); break; case DCB_OUTPUT_LVDS: proto = NV507D_SOR_SET_CONTROL_PROTOCOL_LVDS_CUSTOM; @@ -1740,7 +1767,7 @@ nv50_sor_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) else proto = NV887D_SOR_SET_CONTROL_PROTOCOL_DP_B; - nv50_audio_enable(encoder, state, mode); + nv50_audio_enable(encoder, nv_crtc, nv_connector, state, mode); break; default: BUG(); @@ -1753,8 +1780,8 @@ nv50_sor_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) static const struct drm_encoder_helper_funcs nv50_sor_help = { .atomic_check = nv50_outp_atomic_check, - .atomic_enable = nv50_sor_enable, - .atomic_disable = nv50_sor_disable, + .atomic_enable = nv50_sor_atomic_enable, + .atomic_disable = nv50_sor_atomic_disable, }; static void @@ -1821,6 +1848,7 @@ nv50_sor_create(struct drm_connector *connector, struct dcb_output *dcbe) drm_connector_attach_encoder(connector, encoder); disp->core->func->sor->get_caps(disp, nv_encoder, ffs(dcbe->or) - 1); + nv50_outp_dump_caps(drm, nv_encoder); if (dcbe->type == DCB_OUTPUT_DP) { struct nvkm_i2c_aux *aux = @@ -1875,23 +1903,24 @@ nv50_pior_atomic_check(struct drm_encoder *encoder, } static void -nv50_pior_disable(struct drm_encoder *encoder, struct drm_atomic_state *state) +nv50_pior_atomic_disable(struct drm_encoder *encoder, struct drm_atomic_state *state) { struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder); struct nv50_core *core = nv50_disp(encoder->dev)->core; const u32 ctrl = NVDEF(NV507D, PIOR_SET_CONTROL, OWNER, NONE); - if (nv_encoder->crtc) - core->func->pior->ctrl(core, nv_encoder->or, ctrl, NULL); + + core->func->pior->ctrl(core, nv_encoder->or, ctrl, NULL); nv_encoder->crtc = NULL; nv50_outp_release(nv_encoder); } static void -nv50_pior_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) 
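/* Sketch: the rewritten atomic hooks above stop trusting encoder->crtc
 * (stale during teardown) and instead derive the CRTC from the atomic
 * state via nv50_outp_get_new_crtc(). That helper's body is not part of
 * these hunks; the conventional lookup is roughly the following —
 * example_get_new_crtc is a hypothetical name, assuming the standard
 * DRM atomic iterators:
 */
static struct nouveau_crtc *
example_get_new_crtc(struct drm_atomic_state *state, struct nouveau_encoder *outp)
{
	struct drm_connector *connector;
	struct drm_connector_state *conn_state;
	int i;

	for_each_new_connector_in_state(state, connector, conn_state, i) {
		if (conn_state->best_encoder == &outp->base.base &&
		    conn_state->crtc)
			return nouveau_crtc(conn_state->crtc);
	}
	return NULL;
}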
+nv50_pior_atomic_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) { struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder); - struct nouveau_crtc *nv_crtc = nouveau_crtc(encoder->crtc); - struct nv50_head_atom *asyh = nv50_head_atom(nv_crtc->base.state); + struct nouveau_crtc *nv_crtc = nv50_outp_get_new_crtc(state, nv_encoder); + struct nv50_head_atom *asyh = + nv50_head_atom(drm_atomic_get_new_crtc_state(state, &nv_crtc->base)); struct nv50_core *core = nv50_disp(encoder->dev)->core; u32 ctrl = 0; @@ -1929,8 +1958,8 @@ nv50_pior_enable(struct drm_encoder *encoder, struct drm_atomic_state *state) static const struct drm_encoder_helper_funcs nv50_pior_help = { .atomic_check = nv50_pior_atomic_check, - .atomic_enable = nv50_pior_enable, - .atomic_disable = nv50_pior_disable, + .atomic_enable = nv50_pior_atomic_enable, + .atomic_disable = nv50_pior_atomic_disable, }; static void @@ -1991,6 +2020,7 @@ nv50_pior_create(struct drm_connector *connector, struct dcb_output *dcbe) drm_connector_attach_encoder(connector, encoder); disp->core->func->pior->get_caps(disp, nv_encoder, ffs(dcbe->or) - 1); + nv50_outp_dump_caps(drm, nv_encoder); return 0; } diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.h b/drivers/gpu/drm/nouveau/dispnv50/disp.h index 92bddc083617..38dec11e7dda 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/disp.h +++ b/drivers/gpu/drm/nouveau/dispnv50/disp.h @@ -95,7 +95,7 @@ struct nv50_outp_atom { int nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp, const s32 *oclass, u8 head, void *data, u32 size, - u64 syncbuf, struct nv50_dmac *dmac); + s64 syncbuf, struct nv50_dmac *dmac); void nv50_dmac_destroy(struct nv50_dmac *); /* diff --git a/drivers/gpu/drm/nouveau/dispnv50/head907d.c b/drivers/gpu/drm/nouveau/dispnv50/head907d.c index 8f860e9c5224..85648d790743 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/head907d.c +++ b/drivers/gpu/drm/nouveau/dispnv50/head907d.c @@ -322,7 +322,7 @@ head907d_mode(struct nv50_head *head, struct nv50_head_atom *asyh) const int i = head->base.index; int ret; - if ((ret = PUSH_WAIT(push, 14))) + if ((ret = PUSH_WAIT(push, 13))) return ret; PUSH_MTHD(push, NV907D, HEAD_SET_OVERSCAN_COLOR(i), @@ -353,14 +353,7 @@ head907d_mode(struct nv50_head *head, struct nv50_head_atom *asyh) PUSH_MTHD(push, NV907D, HEAD_SET_DEFAULT_BASE_COLOR(i), NVVAL(NV907D, HEAD_SET_DEFAULT_BASE_COLOR, RED, 0) | NVVAL(NV907D, HEAD_SET_DEFAULT_BASE_COLOR, GREEN, 0) | - NVVAL(NV907D, HEAD_SET_DEFAULT_BASE_COLOR, BLUE, 0), - - HEAD_SET_CRC_CONTROL(i), - NVDEF(NV907D, HEAD_SET_CRC_CONTROL, CONTROLLING_CHANNEL, CORE) | - NVDEF(NV907D, HEAD_SET_CRC_CONTROL, EXPECT_BUFFER_COLLAPSE, FALSE) | - NVDEF(NV907D, HEAD_SET_CRC_CONTROL, TIMESTAMP_MODE, FALSE) | - NVDEF(NV907D, HEAD_SET_CRC_CONTROL, PRIMARY_OUTPUT, NONE) | - NVDEF(NV907D, HEAD_SET_CRC_CONTROL, SECONDARY_OUTPUT, NONE)); + NVVAL(NV907D, HEAD_SET_DEFAULT_BASE_COLOR, BLUE, 0)); PUSH_MTHD(push, NV907D, HEAD_SET_PIXEL_CLOCK_FREQUENCY(i), NVVAL(NV907D, HEAD_SET_PIXEL_CLOCK_FREQUENCY, HERTZ, m->clock * 1000) | diff --git a/drivers/gpu/drm/nouveau/dispnv50/wimm.c b/drivers/gpu/drm/nouveau/dispnv50/wimm.c index a1ac153d5e98..566fbddfc8d7 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/wimm.c +++ b/drivers/gpu/drm/nouveau/dispnv50/wimm.c @@ -31,6 +31,7 @@ nv50_wimm_init(struct nouveau_drm *drm, struct nv50_wndw *wndw) int version; int (*init)(struct nouveau_drm *, s32, struct nv50_wndw *); } wimms[] = { + { GA102_DISP_WINDOW_IMM_CHANNEL_DMA, 0, wimmc37b_init }, { TU102_DISP_WINDOW_IMM_CHANNEL_DMA, 
0, wimmc37b_init }, { GV100_DISP_WINDOW_IMM_CHANNEL_DMA, 0, wimmc37b_init }, {} diff --git a/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c b/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c index 685b70871324..b390029c69ec 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c +++ b/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c @@ -76,7 +76,7 @@ wimmc37b_init_(const struct nv50_wimm_func *func, struct nouveau_drm *drm, int ret; ret = nv50_dmac_create(&drm->client.device, &disp->disp->object, - &oclass, 0, &args, sizeof(args), 0, + &oclass, 0, &args, sizeof(args), -1, &wndw->wimm); if (ret) { NV_ERROR(drm, "wimm%04x allocation failed: %d\n", oclass, ret); diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c index 0356474ad6f6..ce451242f79e 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c @@ -784,6 +784,7 @@ nv50_wndw_new(struct nouveau_drm *drm, enum drm_plane_type type, int index, int (*new)(struct nouveau_drm *, enum drm_plane_type, int, s32, struct nv50_wndw **); } wndws[] = { + { GA102_DISP_WINDOW_CHANNEL_DMA, 0, wndwc67e_new }, { TU102_DISP_WINDOW_CHANNEL_DMA, 0, wndwc57e_new }, { GV100_DISP_WINDOW_CHANNEL_DMA, 0, wndwc37e_new }, {} diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.h b/drivers/gpu/drm/nouveau/dispnv50/wndw.h index 3278e2880034..f4e0c5080034 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/wndw.h +++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.h @@ -129,6 +129,14 @@ int wndwc37e_update(struct nv50_wndw *, u32 *); int wndwc57e_new(struct nouveau_drm *, enum drm_plane_type, int, s32, struct nv50_wndw **); +bool wndwc57e_ilut(struct nv50_wndw *, struct nv50_wndw_atom *, int); +int wndwc57e_ilut_set(struct nv50_wndw *, struct nv50_wndw_atom *); +int wndwc57e_ilut_clr(struct nv50_wndw *); +int wndwc57e_csc_set(struct nv50_wndw *, struct nv50_wndw_atom *); +int wndwc57e_csc_clr(struct nv50_wndw *); + +int wndwc67e_new(struct nouveau_drm *, enum drm_plane_type, int, s32, + struct nv50_wndw **); int nv50_wndw_new(struct nouveau_drm *, enum drm_plane_type, int index, struct nv50_wndw **); diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndwc57e.c b/drivers/gpu/drm/nouveau/dispnv50/wndwc57e.c index 429be0bb0222..abdd3bb658b3 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/wndwc57e.c +++ b/drivers/gpu/drm/nouveau/dispnv50/wndwc57e.c @@ -80,7 +80,7 @@ wndwc57e_image_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw) return 0; } -static int +int wndwc57e_csc_clr(struct nv50_wndw *wndw) { struct nvif_push *push = wndw->wndw.push; @@ -98,7 +98,7 @@ wndwc57e_csc_clr(struct nv50_wndw *wndw) return 0; } -static int +int wndwc57e_csc_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw) { struct nvif_push *push = wndw->wndw.push; @@ -111,7 +111,7 @@ wndwc57e_csc_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw) return 0; } -static int +int wndwc57e_ilut_clr(struct nv50_wndw *wndw) { struct nvif_push *push = wndw->wndw.push; @@ -124,7 +124,7 @@ wndwc57e_ilut_clr(struct nv50_wndw *wndw) return 0; } -static int +int wndwc57e_ilut_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw) { struct nvif_push *push = wndw->wndw.push; @@ -179,7 +179,7 @@ wndwc57e_ilut_load(struct drm_color_lut *in, int size, void __iomem *mem) writew(readw(mem - 4), mem + 4); } -static bool +bool wndwc57e_ilut(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw, int size) { if (size = size ? 
size : 1024, size != 256 && size != 1024) diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndwc67e.c b/drivers/gpu/drm/nouveau/dispnv50/wndwc67e.c new file mode 100644 index 000000000000..7a370fa1df20 --- /dev/null +++ b/drivers/gpu/drm/nouveau/dispnv50/wndwc67e.c @@ -0,0 +1,106 @@ +/* + * Copyright 2021 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#include "wndw.h" +#include "atom.h" + +#include <nvif/pushc37b.h> + +#include <nvhw/class/clc57e.h> + +static int +wndwc67e_image_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw) +{ + struct nvif_push *push = wndw->wndw.push; + int ret; + + if ((ret = PUSH_WAIT(push, 17))) + return ret; + + PUSH_MTHD(push, NVC57E, SET_PRESENT_CONTROL, + NVVAL(NVC57E, SET_PRESENT_CONTROL, MIN_PRESENT_INTERVAL, asyw->image.interval) | + NVVAL(NVC57E, SET_PRESENT_CONTROL, BEGIN_MODE, asyw->image.mode) | + NVDEF(NVC57E, SET_PRESENT_CONTROL, TIMESTAMP_MODE, DISABLE)); + + PUSH_MTHD(push, NVC57E, SET_SIZE, + NVVAL(NVC57E, SET_SIZE, WIDTH, asyw->image.w) | + NVVAL(NVC57E, SET_SIZE, HEIGHT, asyw->image.h), + + SET_STORAGE, + NVVAL(NVC57E, SET_STORAGE, BLOCK_HEIGHT, asyw->image.blockh), + + SET_PARAMS, + NVVAL(NVC57E, SET_PARAMS, FORMAT, asyw->image.format) | + NVDEF(NVC57E, SET_PARAMS, CLAMP_BEFORE_BLEND, DISABLE) | + NVDEF(NVC57E, SET_PARAMS, SWAP_UV, DISABLE) | + NVDEF(NVC57E, SET_PARAMS, FMT_ROUNDING_MODE, ROUND_TO_NEAREST), + + SET_PLANAR_STORAGE(0), + NVVAL(NVC57E, SET_PLANAR_STORAGE, PITCH, asyw->image.blocks[0]) | + NVVAL(NVC57E, SET_PLANAR_STORAGE, PITCH, asyw->image.pitch[0] >> 6)); + + PUSH_MTHD(push, NVC57E, SET_CONTEXT_DMA_ISO(0), asyw->image.handle, 1); + PUSH_MTHD(push, NVC57E, SET_OFFSET(0), asyw->image.offset[0] >> 8); + + PUSH_MTHD(push, NVC57E, SET_POINT_IN(0), + NVVAL(NVC57E, SET_POINT_IN, X, asyw->state.src_x >> 16) | + NVVAL(NVC57E, SET_POINT_IN, Y, asyw->state.src_y >> 16)); + + PUSH_MTHD(push, NVC57E, SET_SIZE_IN, + NVVAL(NVC57E, SET_SIZE_IN, WIDTH, asyw->state.src_w >> 16) | + NVVAL(NVC57E, SET_SIZE_IN, HEIGHT, asyw->state.src_h >> 16)); + + PUSH_MTHD(push, NVC57E, SET_SIZE_OUT, + NVVAL(NVC57E, SET_SIZE_OUT, WIDTH, asyw->state.crtc_w) | + NVVAL(NVC57E, SET_SIZE_OUT, HEIGHT, asyw->state.crtc_h)); + return 0; +} + +static const struct nv50_wndw_func +wndwc67e = { + .acquire = wndwc37e_acquire, + .release = wndwc37e_release, + .sema_set = wndwc37e_sema_set, + .sema_clr = wndwc37e_sema_clr, + .ntfy_set = wndwc37e_ntfy_set, + .ntfy_clr = wndwc37e_ntfy_clr, + .ntfy_reset = corec37d_ntfy_init, + 
.ntfy_wait_begun = base507c_ntfy_wait_begun, + .ilut = wndwc57e_ilut, + .ilut_identity = true, + .ilut_size = 1024, + .xlut_set = wndwc57e_ilut_set, + .xlut_clr = wndwc57e_ilut_clr, + .csc = base907c_csc, + .csc_set = wndwc57e_csc_set, + .csc_clr = wndwc57e_csc_clr, + .image_set = wndwc67e_image_set, + .image_clr = wndwc37e_image_clr, + .blend_set = wndwc37e_blend_set, + .update = wndwc37e_update, +}; + +int +wndwc67e_new(struct nouveau_drm *drm, enum drm_plane_type type, int index, + s32 oclass, struct nv50_wndw **pwndw) +{ + return wndwc37e_new_(&wndwc67e, drm, type, index, oclass, BIT(index >> 1), pwndw); +} diff --git a/drivers/gpu/drm/nouveau/include/nvif/cl0080.h b/drivers/gpu/drm/nouveau/include/nvif/cl0080.h index cd9a2e687bb6..57d4f457a7d4 100644 --- a/drivers/gpu/drm/nouveau/include/nvif/cl0080.h +++ b/drivers/gpu/drm/nouveau/include/nvif/cl0080.h @@ -33,6 +33,7 @@ struct nv_device_info_v0 { #define NV_DEVICE_INFO_V0_PASCAL 0x0a #define NV_DEVICE_INFO_V0_VOLTA 0x0b #define NV_DEVICE_INFO_V0_TURING 0x0c +#define NV_DEVICE_INFO_V0_AMPERE 0x0d __u8 family; __u8 pad06[2]; __u64 ram_size; diff --git a/drivers/gpu/drm/nouveau/include/nvif/class.h b/drivers/gpu/drm/nouveau/include/nvif/class.h index 2c79beb41126..ba2c28ea43d2 100644 --- a/drivers/gpu/drm/nouveau/include/nvif/class.h +++ b/drivers/gpu/drm/nouveau/include/nvif/class.h @@ -88,6 +88,7 @@ #define GP102_DISP /* cl5070.h */ 0x00009870 #define GV100_DISP /* cl5070.h */ 0x0000c370 #define TU102_DISP /* cl5070.h */ 0x0000c570 +#define GA102_DISP /* cl5070.h */ 0x0000c670 #define GV100_DISP_CAPS 0x0000c373 @@ -103,6 +104,7 @@ #define GK104_DISP_CURSOR /* cl507a.h */ 0x0000917a #define GV100_DISP_CURSOR /* cl507a.h */ 0x0000c37a #define TU102_DISP_CURSOR /* cl507a.h */ 0x0000c57a +#define GA102_DISP_CURSOR /* cl507a.h */ 0x0000c67a #define NV50_DISP_OVERLAY /* cl507b.h */ 0x0000507b #define G82_DISP_OVERLAY /* cl507b.h */ 0x0000827b @@ -112,6 +114,7 @@ #define GV100_DISP_WINDOW_IMM_CHANNEL_DMA /* clc37b.h */ 0x0000c37b #define TU102_DISP_WINDOW_IMM_CHANNEL_DMA /* clc37b.h */ 0x0000c57b +#define GA102_DISP_WINDOW_IMM_CHANNEL_DMA /* clc37b.h */ 0x0000c67b #define NV50_DISP_BASE_CHANNEL_DMA /* cl507c.h */ 0x0000507c #define G82_DISP_BASE_CHANNEL_DMA /* cl507c.h */ 0x0000827c @@ -135,6 +138,7 @@ #define GP102_DISP_CORE_CHANNEL_DMA /* cl507d.h */ 0x0000987d #define GV100_DISP_CORE_CHANNEL_DMA /* cl507d.h */ 0x0000c37d #define TU102_DISP_CORE_CHANNEL_DMA /* cl507d.h */ 0x0000c57d +#define GA102_DISP_CORE_CHANNEL_DMA /* cl507d.h */ 0x0000c67d #define NV50_DISP_OVERLAY_CHANNEL_DMA /* cl507e.h */ 0x0000507e #define G82_DISP_OVERLAY_CHANNEL_DMA /* cl507e.h */ 0x0000827e @@ -145,6 +149,7 @@ #define GV100_DISP_WINDOW_CHANNEL_DMA /* clc37e.h */ 0x0000c37e #define TU102_DISP_WINDOW_CHANNEL_DMA /* clc37e.h */ 0x0000c57e +#define GA102_DISP_WINDOW_CHANNEL_DMA /* clc37e.h */ 0x0000c67e #define NV50_TESLA 0x00005097 #define G82_TESLA 0x00008297 diff --git a/drivers/gpu/drm/nouveau/include/nvkm/core/device.h b/drivers/gpu/drm/nouveau/include/nvkm/core/device.h index 5c007ce62fc3..c920939a1467 100644 --- a/drivers/gpu/drm/nouveau/include/nvkm/core/device.h +++ b/drivers/gpu/drm/nouveau/include/nvkm/core/device.h @@ -120,6 +120,7 @@ struct nvkm_device { GP100 = 0x130, GV100 = 0x140, TU100 = 0x160, + GA100 = 0x170, } card_type; u32 chipset; u8 chiprev; diff --git a/drivers/gpu/drm/nouveau/include/nvkm/engine/disp.h b/drivers/gpu/drm/nouveau/include/nvkm/engine/disp.h index 5a96c942d912..0f6fa6631a19 100644 --- 
a/drivers/gpu/drm/nouveau/include/nvkm/engine/disp.h +++ b/drivers/gpu/drm/nouveau/include/nvkm/engine/disp.h @@ -37,4 +37,5 @@ int gp100_disp_new(struct nvkm_device *, int, struct nvkm_disp **); int gp102_disp_new(struct nvkm_device *, int, struct nvkm_disp **); int gv100_disp_new(struct nvkm_device *, int, struct nvkm_disp **); int tu102_disp_new(struct nvkm_device *, int, struct nvkm_disp **); +int ga102_disp_new(struct nvkm_device *, int, struct nvkm_disp **); #endif diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h index f5f59261ea81..d1beaad0c82b 100644 --- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h +++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h @@ -14,6 +14,7 @@ enum dcb_connector_type { DCB_CONNECTOR_LVDS_SPWG = 0x41, DCB_CONNECTOR_DP = 0x46, DCB_CONNECTOR_eDP = 0x47, + DCB_CONNECTOR_mDP = 0x48, DCB_CONNECTOR_HDMI_0 = 0x60, DCB_CONNECTOR_HDMI_1 = 0x61, DCB_CONNECTOR_HDMI_C = 0x63, diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/devinit.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/devinit.h index 1a39e52e09e3..50cc7c05eac4 100644 --- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/devinit.h +++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/devinit.h @@ -32,4 +32,5 @@ int gm107_devinit_new(struct nvkm_device *, int, struct nvkm_devinit **); int gm200_devinit_new(struct nvkm_device *, int, struct nvkm_devinit **); int gv100_devinit_new(struct nvkm_device *, int, struct nvkm_devinit **); int tu102_devinit_new(struct nvkm_device *, int, struct nvkm_devinit **); +int ga100_devinit_new(struct nvkm_device *, int, struct nvkm_devinit **); #endif diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h index 34b56b10218a..2ecd52aec1d1 100644 --- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h +++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h @@ -86,6 +86,8 @@ int gp100_fb_new(struct nvkm_device *, int, struct nvkm_fb **); int gp102_fb_new(struct nvkm_device *, int, struct nvkm_fb **); int gp10b_fb_new(struct nvkm_device *, int, struct nvkm_fb **); int gv100_fb_new(struct nvkm_device *, int, struct nvkm_fb **); +int ga100_fb_new(struct nvkm_device *, int, struct nvkm_fb **); +int ga102_fb_new(struct nvkm_device *, int, struct nvkm_fb **); #include <subdev/bios.h> #include <subdev/bios/ramcfg.h> diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/gpio.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/gpio.h index eaacf8d80527..cdcce5ece6ff 100644 --- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/gpio.h +++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/gpio.h @@ -37,4 +37,5 @@ int nv50_gpio_new(struct nvkm_device *, int, struct nvkm_gpio **); int g94_gpio_new(struct nvkm_device *, int, struct nvkm_gpio **); int gf119_gpio_new(struct nvkm_device *, int, struct nvkm_gpio **); int gk104_gpio_new(struct nvkm_device *, int, struct nvkm_gpio **); +int ga102_gpio_new(struct nvkm_device *, int, struct nvkm_gpio **); #endif diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h index 81b977319640..640f649ce497 100644 --- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h +++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/i2c.h @@ -92,6 +92,7 @@ int g94_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **); int gf117_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **); int gf119_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **); int gk104_i2c_new(struct 
nvkm_device *, int, struct nvkm_i2c **); +int gk110_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **); int gm200_i2c_new(struct nvkm_device *, int, struct nvkm_i2c **); static inline int diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/mc.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/mc.h index 6641fe4c252c..e45ca4583967 100644 --- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/mc.h +++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/mc.h @@ -32,4 +32,5 @@ int gk20a_mc_new(struct nvkm_device *, int, struct nvkm_mc **); int gp100_mc_new(struct nvkm_device *, int, struct nvkm_mc **); int gp10b_mc_new(struct nvkm_device *, int, struct nvkm_mc **); int tu102_mc_new(struct nvkm_device *, int, struct nvkm_mc **); +int ga100_mc_new(struct nvkm_device *, int, struct nvkm_mc **); #endif diff --git a/drivers/gpu/drm/nouveau/nouveau_backlight.c b/drivers/gpu/drm/nouveau/nouveau_backlight.c index c7a94c94dbf3..72f35a2babcb 100644 --- a/drivers/gpu/drm/nouveau/nouveau_backlight.c +++ b/drivers/gpu/drm/nouveau/nouveau_backlight.c @@ -256,6 +256,7 @@ nouveau_backlight_init(struct drm_connector *connector) case NV_DEVICE_INFO_V0_PASCAL: case NV_DEVICE_INFO_V0_VOLTA: case NV_DEVICE_INFO_V0_TURING: + case NV_DEVICE_INFO_V0_AMPERE: //XXX: not confirmed ret = nv50_backlight_init(nv_encoder, &props, &ops); break; default: diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c b/drivers/gpu/drm/nouveau/nouveau_chan.c index 5d191e58edf1..e48f1f7eb370 100644 --- a/drivers/gpu/drm/nouveau/nouveau_chan.c +++ b/drivers/gpu/drm/nouveau/nouveau_chan.c @@ -533,6 +533,7 @@ nouveau_channel_new(struct nouveau_drm *drm, struct nvif_device *device, if (ret) { NV_PRINTK(err, cli, "channel failed to initialise, %d\n", ret); nouveau_channel_del(pchan); + goto done; } ret = nouveau_svmm_join((*pchan)->vmm->svmm, (*pchan)->inst); diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c index 14c29e68db8f..61e6d7412505 100644 --- a/drivers/gpu/drm/nouveau/nouveau_connector.c +++ b/drivers/gpu/drm/nouveau/nouveau_connector.c @@ -1212,6 +1212,7 @@ drm_conntype_from_dcb(enum dcb_connector_type dcb) case DCB_CONNECTOR_DMS59_DP0: case DCB_CONNECTOR_DMS59_DP1: case DCB_CONNECTOR_DP : + case DCB_CONNECTOR_mDP : case DCB_CONNECTOR_USB_C : return DRM_MODE_CONNECTOR_DisplayPort; case DCB_CONNECTOR_eDP : return DRM_MODE_CONNECTOR_eDP; case DCB_CONNECTOR_HDMI_0 : diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h index c802d3d1ba39..d28ee6844245 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drv.h +++ b/drivers/gpu/drm/nouveau/nouveau_drv.h @@ -221,6 +221,7 @@ struct nouveau_drm { struct { struct drm_audio_component *component; + struct mutex lock; bool component_registered; } audio; }; diff --git a/drivers/gpu/drm/nouveau/nouveau_encoder.h b/drivers/gpu/drm/nouveau/nouveau_encoder.h index 21937f1c7dd9..1ffcc0a491fd 100644 --- a/drivers/gpu/drm/nouveau/nouveau_encoder.h +++ b/drivers/gpu/drm/nouveau/nouveau_encoder.h @@ -53,7 +53,12 @@ struct nouveau_encoder { * actually programmed on the hw, not the proposed crtc */ struct drm_crtc *crtc; u32 ctrl; - bool audio; + + /* Protected by nouveau_drm.audio.lock */ + struct { + bool enabled; + struct drm_connector *connector; + } audio; struct drm_display_mode mode; int last_dpms; @@ -141,11 +146,9 @@ enum drm_mode_status nv50_dp_mode_valid(struct drm_connector *, unsigned *clock); struct nouveau_connector * -nv50_outp_get_new_connector(struct nouveau_encoder *outp, - struct drm_atomic_state *state); 
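Brief context for the nouveau_chan.c hunk earlier in this diff: nouveau_channel_del() tears the channel down and clears *pchan, so without the added goto the function fell through and dereferenced the cleared pointer. The corrected error path, sketched with explanatory comments:

	if (ret) {
		NV_PRINTK(err, cli, "channel failed to initialise, %d\n", ret);
		nouveau_channel_del(pchan);	/* frees the channel and NULLs *pchan */
		goto done;			/* *pchan must not be used past this point */
	}

	ret = nouveau_svmm_join((*pchan)->vmm->svmm, (*pchan)->inst);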
+nv50_outp_get_new_connector(struct drm_atomic_state *state, struct nouveau_encoder *outp); struct nouveau_connector * -nv50_outp_get_old_connector(struct nouveau_encoder *outp, - struct drm_atomic_state *state); +nv50_outp_get_old_connector(struct drm_atomic_state *state, struct nouveau_encoder *outp); int nv50_mstm_detect(struct nouveau_encoder *encoder); void nv50_mstm_remove(struct nv50_mstm *mstm); diff --git a/drivers/gpu/drm/nouveau/nvif/disp.c b/drivers/gpu/drm/nouveau/nvif/disp.c index 8d0d30e08f57..529cb60d5efb 100644 --- a/drivers/gpu/drm/nouveau/nvif/disp.c +++ b/drivers/gpu/drm/nouveau/nvif/disp.c @@ -35,6 +35,7 @@ nvif_disp_ctor(struct nvif_device *device, const char *name, s32 oclass, struct nvif_disp *disp) { static const struct nvif_mclass disps[] = { + { GA102_DISP, -1 }, { TU102_DISP, -1 }, { GV100_DISP, -1 }, { GP102_DISP, -1 }, diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c index 7851bec5f0e5..cdcc851e06f9 100644 --- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c +++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c @@ -1815,7 +1815,7 @@ nvf0_chipset = { .fb = gk110_fb_new, .fuse = gf100_fuse_new, .gpio = gk104_gpio_new, - .i2c = gk104_i2c_new, + .i2c = gk110_i2c_new, .ibus = gk104_ibus_new, .iccsense = gf100_iccsense_new, .imem = nv50_instmem_new, @@ -1853,7 +1853,7 @@ nvf1_chipset = { .fb = gk110_fb_new, .fuse = gf100_fuse_new, .gpio = gk104_gpio_new, - .i2c = gk104_i2c_new, + .i2c = gk110_i2c_new, .ibus = gk104_ibus_new, .iccsense = gf100_iccsense_new, .imem = nv50_instmem_new, @@ -1891,7 +1891,7 @@ nv106_chipset = { .fb = gk110_fb_new, .fuse = gf100_fuse_new, .gpio = gk104_gpio_new, - .i2c = gk104_i2c_new, + .i2c = gk110_i2c_new, .ibus = gk104_ibus_new, .iccsense = gf100_iccsense_new, .imem = nv50_instmem_new, @@ -1929,7 +1929,7 @@ nv108_chipset = { .fb = gk110_fb_new, .fuse = gf100_fuse_new, .gpio = gk104_gpio_new, - .i2c = gk104_i2c_new, + .i2c = gk110_i2c_new, .ibus = gk104_ibus_new, .iccsense = gf100_iccsense_new, .imem = nv50_instmem_new, @@ -1967,7 +1967,7 @@ nv117_chipset = { .fb = gm107_fb_new, .fuse = gm107_fuse_new, .gpio = gk104_gpio_new, - .i2c = gk104_i2c_new, + .i2c = gk110_i2c_new, .ibus = gk104_ibus_new, .iccsense = gf100_iccsense_new, .imem = nv50_instmem_new, @@ -2003,7 +2003,7 @@ nv118_chipset = { .fb = gm107_fb_new, .fuse = gm107_fuse_new, .gpio = gk104_gpio_new, - .i2c = gk104_i2c_new, + .i2c = gk110_i2c_new, .ibus = gk104_ibus_new, .iccsense = gf100_iccsense_new, .imem = nv50_instmem_new, @@ -2652,6 +2652,61 @@ nv168_chipset = { .sec2 = tu102_sec2_new, }; +static const struct nvkm_device_chip +nv170_chipset = { + .name = "GA100", + .bar = tu102_bar_new, + .bios = nvkm_bios_new, + .devinit = ga100_devinit_new, + .fb = ga100_fb_new, + .gpio = gk104_gpio_new, + .i2c = gm200_i2c_new, + .ibus = gm200_ibus_new, + .imem = nv50_instmem_new, + .mc = ga100_mc_new, + .mmu = tu102_mmu_new, + .pci = gp100_pci_new, + .timer = gk20a_timer_new, +}; + +static const struct nvkm_device_chip +nv172_chipset = { + .name = "GA102", + .bar = tu102_bar_new, + .bios = nvkm_bios_new, + .devinit = ga100_devinit_new, + .fb = ga102_fb_new, + .gpio = ga102_gpio_new, + .i2c = gm200_i2c_new, + .ibus = gm200_ibus_new, + .imem = nv50_instmem_new, + .mc = ga100_mc_new, + .mmu = tu102_mmu_new, + .pci = gp100_pci_new, + .timer = gk20a_timer_new, + .disp = ga102_disp_new, + .dma = gv100_dma_new, +}; + +static const struct nvkm_device_chip +nv174_chipset = { + .name = "GA104", + .bar = tu102_bar_new, + .bios 
= nvkm_bios_new, + .devinit = ga100_devinit_new, + .fb = ga102_fb_new, + .gpio = ga102_gpio_new, + .i2c = gm200_i2c_new, + .ibus = gm200_ibus_new, + .imem = nv50_instmem_new, + .mc = ga100_mc_new, + .mmu = tu102_mmu_new, + .pci = gp100_pci_new, + .timer = gk20a_timer_new, + .disp = ga102_disp_new, + .dma = gv100_dma_new, +}; + static int nvkm_device_event_ctor(struct nvkm_object *object, void *data, u32 size, struct nvkm_notify *notify) @@ -3063,6 +3118,7 @@ nvkm_device_ctor(const struct nvkm_device_func *func, case 0x130: device->card_type = GP100; break; case 0x140: device->card_type = GV100; break; case 0x160: device->card_type = TU100; break; + case 0x170: device->card_type = GA100; break; default: break; } @@ -3160,10 +3216,23 @@ nvkm_device_ctor(const struct nvkm_device_func *func, case 0x166: device->chip = &nv166_chipset; break; case 0x167: device->chip = &nv167_chipset; break; case 0x168: device->chip = &nv168_chipset; break; + case 0x172: device->chip = &nv172_chipset; break; + case 0x174: device->chip = &nv174_chipset; break; default: - nvdev_error(device, "unknown chipset (%08x)\n", boot0); - ret = -ENODEV; - goto done; + if (nvkm_boolopt(device->cfgopt, "NvEnableUnsupportedChipsets", false)) { + switch (device->chipset) { + case 0x170: device->chip = &nv170_chipset; break; + default: + break; + } + } + + if (!device->chip) { + nvdev_error(device, "unknown chipset (%08x)\n", boot0); + ret = -ENODEV; + goto done; + } + break; } nvdev_info(device, "NVIDIA %s (%08x)\n", diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/user.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/user.c index 03c6d9aef075..147894798786 100644 --- a/drivers/gpu/drm/nouveau/nvkm/engine/device/user.c +++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/user.c @@ -176,6 +176,7 @@ nvkm_udevice_info(struct nvkm_udevice *udev, void *data, u32 size) case GP100: args->v0.family = NV_DEVICE_INFO_V0_PASCAL; break; case GV100: args->v0.family = NV_DEVICE_INFO_V0_VOLTA; break; case TU100: args->v0.family = NV_DEVICE_INFO_V0_TURING; break; + case GA100: args->v0.family = NV_DEVICE_INFO_V0_AMPERE; break; default: args->v0.family = 0; break; diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/Kbuild b/drivers/gpu/drm/nouveau/nvkm/engine/disp/Kbuild index cf075311cdd2..b03f043efe26 100644 --- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/Kbuild +++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/Kbuild @@ -17,6 +17,7 @@ nvkm-y += nvkm/engine/disp/gp100.o nvkm-y += nvkm/engine/disp/gp102.o nvkm-y += nvkm/engine/disp/gv100.o nvkm-y += nvkm/engine/disp/tu102.o +nvkm-y += nvkm/engine/disp/ga102.o nvkm-y += nvkm/engine/disp/vga.o nvkm-y += nvkm/engine/disp/head.o @@ -42,6 +43,7 @@ nvkm-y += nvkm/engine/disp/sorgm200.o nvkm-y += nvkm/engine/disp/sorgp100.o nvkm-y += nvkm/engine/disp/sorgv100.o nvkm-y += nvkm/engine/disp/sortu102.o +nvkm-y += nvkm/engine/disp/sorga102.o nvkm-y += nvkm/engine/disp/outp.o nvkm-y += nvkm/engine/disp/dp.o @@ -75,6 +77,7 @@ nvkm-y += nvkm/engine/disp/rootgp100.o nvkm-y += nvkm/engine/disp/rootgp102.o nvkm-y += nvkm/engine/disp/rootgv100.o nvkm-y += nvkm/engine/disp/roottu102.o +nvkm-y += nvkm/engine/disp/rootga102.o nvkm-y += nvkm/engine/disp/capsgv100.o diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c index 3800aeb507d0..55fbfe28c6dc 100644 --- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c +++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c @@ -33,6 +33,12 @@ #include <nvif/event.h> +/* IED scripts are no longer used by UEFI/RM from Ampere, but 
have been updated for + * the x86 option ROM. However, the relevant VBIOS table versions weren't modified, + * so we're unable to detect this in a nice way. + */ +#define AMPERE_IED_HACK(disp) ((disp)->engine.subdev.device->card_type >= GA100) + struct lt_state { struct nvkm_dp *dp; u8 stat[6]; @@ -238,6 +244,19 @@ nvkm_dp_train_links(struct nvkm_dp *dp) dp->dpcd[DPCD_RC02] &= ~DPCD_RC02_TPS3_SUPPORTED; lt.pc2 = dp->dpcd[DPCD_RC02] & DPCD_RC02_TPS3_SUPPORTED; + if (AMPERE_IED_HACK(disp) && (lnkcmp = lt.dp->info.script[0])) { + /* Execute BeforeLinkTraining script from DP Info table. */ + while (ior->dp.bw < nvbios_rd08(bios, lnkcmp)) + lnkcmp += 3; + lnkcmp = nvbios_rd16(bios, lnkcmp + 1); + + nvbios_init(&dp->outp.disp->engine.subdev, lnkcmp, + init.outp = &dp->outp.info; + init.or = ior->id; + init.link = ior->asy.link; + ); + } + /* Set desired link configuration on the source. */ if ((lnkcmp = lt.dp->info.lnkcmp)) { if (dp->version < 0x30) { @@ -316,12 +335,14 @@ nvkm_dp_train_init(struct nvkm_dp *dp) ); } - /* Execute BeforeLinkTraining script from DP Info table. */ - nvbios_init(&dp->outp.disp->engine.subdev, dp->info.script[0], - init.outp = &dp->outp.info; - init.or = dp->outp.ior->id; - init.link = dp->outp.ior->asy.link; - ); + if (!AMPERE_IED_HACK(dp->outp.disp)) { + /* Execute BeforeLinkTraining script from DP Info table. */ + nvbios_init(&dp->outp.disp->engine.subdev, dp->info.script[0], + init.outp = &dp->outp.info; + init.or = dp->outp.ior->id; + init.link = dp->outp.ior->asy.link; + ); + } } static const struct dp_rates { diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/ga102.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/ga102.c new file mode 100644 index 000000000000..aa2e5645fe36 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/ga102.c @@ -0,0 +1,46 @@ +/* + * Copyright 2021 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#include "nv50.h" +#include "head.h" +#include "ior.h" +#include "channv50.h" +#include "rootnv50.h" + +static const struct nv50_disp_func +ga102_disp = { + .init = tu102_disp_init, + .fini = gv100_disp_fini, + .intr = gv100_disp_intr, + .uevent = &gv100_disp_chan_uevent, + .super = gv100_disp_super, + .root = &ga102_disp_root_oclass, + .wndw = { .cnt = gv100_disp_wndw_cnt }, + .head = { .cnt = gv100_head_cnt, .new = gv100_head_new }, + .sor = { .cnt = gv100_sor_cnt, .new = ga102_sor_new }, + .ramht_size = 0x2000, +}; + +int +ga102_disp_new(struct nvkm_device *device, int index, struct nvkm_disp **pdisp) +{ + return nv50_disp_new_(&ga102_disp, device, index, pdisp); +} diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h index 09f3038eff26..9f0bb7c6b010 100644 --- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h +++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h @@ -150,6 +150,8 @@ void gv100_sor_dp_audio(struct nvkm_ior *, int, bool); void gv100_sor_dp_audio_sym(struct nvkm_ior *, int, u16, u32); void gv100_sor_dp_watermark(struct nvkm_ior *, int, u8); +void tu102_sor_dp_vcpi(struct nvkm_ior *, int, u8, u8, u16, u16); + void g84_hdmi_ctrl(struct nvkm_ior *, int, bool, u8, u8, u8 *, u8 , u8 *, u8); void gt215_hdmi_ctrl(struct nvkm_ior *, int, bool, u8, u8, u8 *, u8 , u8 *, u8); void gf119_hdmi_ctrl(struct nvkm_ior *, int, bool, u8, u8, u8 *, u8 , u8 *, u8); @@ -207,4 +209,6 @@ int gv100_sor_cnt(struct nvkm_disp *, unsigned long *); int gv100_sor_new(struct nvkm_disp *, int); int tu102_sor_new(struct nvkm_disp *, int); + +int ga102_sor_new(struct nvkm_disp *, int); #endif diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.h index a677161c7f3a..db31b37752a2 100644 --- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.h +++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.h @@ -86,6 +86,8 @@ void gv100_disp_intr(struct nv50_disp *); void gv100_disp_super(struct work_struct *); int gv100_disp_wndw_cnt(struct nvkm_disp *, unsigned long *); +int tu102_disp_init(struct nv50_disp *); + void nv50_disp_dptmds_war_2(struct nv50_disp *, struct dcb_output *); void nv50_disp_dptmds_war_3(struct nv50_disp *, struct dcb_output *); void nv50_disp_update_sppll1(struct nv50_disp *); diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/rootga102.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/rootga102.c new file mode 100644 index 000000000000..9af07c3cf9fc --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/rootga102.c @@ -0,0 +1,52 @@ +/* + * Copyright 2021 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#include "rootnv50.h" +#include "channv50.h" + +#include <nvif/class.h> + +static const struct nv50_disp_root_func +ga102_disp_root = { + .user = { + {{-1,-1,GV100_DISP_CAPS }, gv100_disp_caps_new }, + {{0,0,GA102_DISP_CURSOR }, gv100_disp_curs_new }, + {{0,0,GA102_DISP_WINDOW_IMM_CHANNEL_DMA}, gv100_disp_wimm_new }, + {{0,0,GA102_DISP_CORE_CHANNEL_DMA }, gv100_disp_core_new }, + {{0,0,GA102_DISP_WINDOW_CHANNEL_DMA }, gv100_disp_wndw_new }, + {} + }, +}; + +static int +ga102_disp_root_new(struct nvkm_disp *disp, const struct nvkm_oclass *oclass, + void *data, u32 size, struct nvkm_object **pobject) +{ + return nv50_disp_root_new_(&ga102_disp_root, disp, oclass, data, size, pobject); +} + +const struct nvkm_disp_oclass +ga102_disp_root_oclass = { + .base.oclass = GA102_DISP, + .base.minver = -1, + .base.maxver = -1, + .ctor = ga102_disp_root_new, +}; diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/rootnv50.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/rootnv50.h index 7070f5408d92..27bb170d0293 100644 --- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/rootnv50.h +++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/rootnv50.h @@ -41,4 +41,5 @@ extern const struct nvkm_disp_oclass gp100_disp_root_oclass; extern const struct nvkm_disp_oclass gp102_disp_root_oclass; extern const struct nvkm_disp_oclass gv100_disp_root_oclass; extern const struct nvkm_disp_oclass tu102_disp_root_oclass; +extern const struct nvkm_disp_oclass ga102_disp_root_oclass; #endif diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/sorga102.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/sorga102.c new file mode 100644 index 000000000000..033827de9116 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/sorga102.c @@ -0,0 +1,140 @@ +/* + * Copyright 2021 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#include "ior.h" + +#include <subdev/timer.h> + +static int +ga102_sor_dp_links(struct nvkm_ior *sor, struct nvkm_i2c_aux *aux) +{ + struct nvkm_device *device = sor->disp->engine.subdev.device; + const u32 soff = nv50_ior_base(sor); + const u32 loff = nv50_sor_link(sor); + u32 dpctrl = 0x00000000; + u32 clksor = 0x00000000; + + switch (sor->dp.bw) { + case 0x06: clksor |= 0x00000000; break; + case 0x0a: clksor |= 0x00040000; break; + case 0x14: clksor |= 0x00080000; break; + case 0x1e: clksor |= 0x000c0000; break; + default: + WARN_ON(1); + return -EINVAL; + } + + dpctrl |= ((1 << sor->dp.nr) - 1) << 16; + if (sor->dp.mst) + dpctrl |= 0x40000000; + if (sor->dp.ef) + dpctrl |= 0x00004000; + + nvkm_mask(device, 0x612300 + soff, 0x007c0000, clksor); + + /*XXX*/ + nvkm_msec(device, 40, NVKM_DELAY); + nvkm_mask(device, 0x612300 + soff, 0x00030000, 0x00010000); + nvkm_mask(device, 0x61c10c + loff, 0x00000003, 0x00000001); + + nvkm_mask(device, 0x61c10c + loff, 0x401f4000, dpctrl); + return 0; +} + +static void +ga102_sor_clock(struct nvkm_ior *sor) +{ + struct nvkm_device *device = sor->disp->engine.subdev.device; + u32 div2 = 0; + if (sor->asy.proto == TMDS) { + if (sor->tmds.high_speed) + div2 = 1; + } + nvkm_wr32(device, 0x00ec08 + (sor->id * 0x10), 0x00000000); + nvkm_wr32(device, 0x00ec04 + (sor->id * 0x10), div2); +} + +static const struct nvkm_ior_func +ga102_sor_hda = { + .route = { + .get = gm200_sor_route_get, + .set = gm200_sor_route_set, + }, + .state = gv100_sor_state, + .power = nv50_sor_power, + .clock = ga102_sor_clock, + .hdmi = { + .ctrl = gv100_hdmi_ctrl, + .scdc = gm200_hdmi_scdc, + }, + .dp = { + .lanes = { 0, 1, 2, 3 }, + .links = ga102_sor_dp_links, + .power = g94_sor_dp_power, + .pattern = gm107_sor_dp_pattern, + .drive = gm200_sor_dp_drive, + .vcpi = tu102_sor_dp_vcpi, + .audio = gv100_sor_dp_audio, + .audio_sym = gv100_sor_dp_audio_sym, + .watermark = gv100_sor_dp_watermark, + }, + .hda = { + .hpd = gf119_hda_hpd, + .eld = gf119_hda_eld, + .device_entry = gv100_hda_device_entry, + }, +}; + +static const struct nvkm_ior_func +ga102_sor = { + .route = { + .get = gm200_sor_route_get, + .set = gm200_sor_route_set, + }, + .state = gv100_sor_state, + .power = nv50_sor_power, + .clock = ga102_sor_clock, + .hdmi = { + .ctrl = gv100_hdmi_ctrl, + .scdc = gm200_hdmi_scdc, + }, + .dp = { + .lanes = { 0, 1, 2, 3 }, + .links = ga102_sor_dp_links, + .power = g94_sor_dp_power, + .pattern = gm107_sor_dp_pattern, + .drive = gm200_sor_dp_drive, + .vcpi = tu102_sor_dp_vcpi, + .audio = gv100_sor_dp_audio, + .audio_sym = gv100_sor_dp_audio_sym, + .watermark = gv100_sor_dp_watermark, + }, +}; + +int +ga102_sor_new(struct nvkm_disp *disp, int id) +{ + struct nvkm_device *device = disp->engine.subdev.device; + u32 hda = nvkm_rd32(device, 0x08a15c); + if (hda & BIT(id)) + return nvkm_ior_new_(&ga102_sor_hda, disp, SOR, id); + return nvkm_ior_new_(&ga102_sor, disp, SOR, id); +} diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/sortu102.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/sortu102.c index 59865a934c4b..0cf9e8752d25 100644 --- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/sortu102.c +++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/sortu102.c @@ -23,7 +23,7 @@ #include <subdev/timer.h> -static void +void tu102_sor_dp_vcpi(struct nvkm_ior *sor, int head, u8 slot, u8 slot_nr, u16 pbn, u16 aligned) { diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/tu102.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/tu102.c index 883ae4151ff8..4c85d1d4fbd4 100644 --- 
a/drivers/gpu/drm/nouveau/nvkm/engine/disp/tu102.c +++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/tu102.c @@ -28,7 +28,7 @@ #include <core/gpuobj.h> #include <subdev/timer.h> -static int +int tu102_disp_init(struct nv50_disp *disp) { struct nvkm_device *device = disp->base.engine.subdev.device; diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c index 5d4b695cab8e..c73b7eab776e 100644 --- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c +++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c @@ -36,19 +36,7 @@ #include <nvif/class.h> #include <nvif/cl0080.h> -struct gk104_fifo_engine_status { - bool busy; - bool faulted; - bool chsw; - bool save; - bool load; - struct { - bool tsg; - u32 id; - } prev, next, *chan; -}; - -static void +void gk104_fifo_engine_status(struct gk104_fifo *fifo, int engn, struct gk104_fifo_engine_status *status) { @@ -95,7 +83,7 @@ gk104_fifo_engine_status(struct gk104_fifo *fifo, int engn, status->chan == &status->next ? "*" : " "); } -static int +int gk104_fifo_class_new(struct nvkm_fifo *base, const struct nvkm_oclass *oclass, void *argv, u32 argc, struct nvkm_object **pobject) { @@ -112,7 +100,7 @@ gk104_fifo_class_new(struct nvkm_fifo *base, const struct nvkm_oclass *oclass, return -EINVAL; } -static int +int gk104_fifo_class_get(struct nvkm_fifo *base, int index, struct nvkm_oclass *oclass) { @@ -134,14 +122,14 @@ gk104_fifo_class_get(struct nvkm_fifo *base, int index, return c; } -static void +void gk104_fifo_uevent_fini(struct nvkm_fifo *fifo) { struct nvkm_device *device = fifo->engine.subdev.device; nvkm_mask(device, 0x002140, 0x80000000, 0x00000000); } -static void +void gk104_fifo_uevent_init(struct nvkm_fifo *fifo) { struct nvkm_device *device = fifo->engine.subdev.device; @@ -556,7 +544,7 @@ gk104_fifo_bind_reason[] = { {} }; -static void +void gk104_fifo_intr_bind(struct gk104_fifo *fifo) { struct nvkm_subdev *subdev = &fifo->base.engine.subdev; @@ -627,7 +615,7 @@ gk104_fifo_intr_sched(struct gk104_fifo *fifo) } } -static void +void gk104_fifo_intr_chsw(struct gk104_fifo *fifo) { struct nvkm_subdev *subdev = &fifo->base.engine.subdev; @@ -637,7 +625,7 @@ gk104_fifo_intr_chsw(struct gk104_fifo *fifo) nvkm_wr32(device, 0x00256c, stat); } -static void +void gk104_fifo_intr_dropped_fault(struct gk104_fifo *fifo) { struct nvkm_subdev *subdev = &fifo->base.engine.subdev; @@ -680,7 +668,7 @@ static const struct nvkm_bitfield gk104_fifo_pbdma_intr_0[] = { {} }; -static void +void gk104_fifo_intr_pbdma_0(struct gk104_fifo *fifo, int unit) { struct nvkm_subdev *subdev = &fifo->base.engine.subdev; @@ -729,7 +717,7 @@ static const struct nvkm_bitfield gk104_fifo_pbdma_intr_1[] = { {} }; -static void +void gk104_fifo_intr_pbdma_1(struct gk104_fifo *fifo, int unit) { struct nvkm_subdev *subdev = &fifo->base.engine.subdev; @@ -750,7 +738,7 @@ gk104_fifo_intr_pbdma_1(struct gk104_fifo *fifo, int unit) nvkm_wr32(device, 0x040148 + (unit * 0x2000), stat); } -static void +void gk104_fifo_intr_runlist(struct gk104_fifo *fifo) { struct nvkm_device *device = fifo->base.engine.subdev.device; @@ -763,7 +751,7 @@ gk104_fifo_intr_runlist(struct gk104_fifo *fifo) } } -static void +void gk104_fifo_intr_engine(struct gk104_fifo *fifo) { nvkm_fifo_uevent(&fifo->base); @@ -861,7 +849,7 @@ gk104_fifo_intr(struct nvkm_fifo *base) } } -static void +void gk104_fifo_fini(struct nvkm_fifo *base) { struct gk104_fifo *fifo = gk104_fifo(base); @@ -871,7 +859,7 @@ gk104_fifo_fini(struct nvkm_fifo *base) nvkm_mask(device, 
0x002140, 0x10000000, 0x10000000); } -static int +int gk104_fifo_info(struct nvkm_fifo *base, u64 mthd, u64 *data) { struct gk104_fifo *fifo = gk104_fifo(base); @@ -899,7 +887,7 @@ gk104_fifo_info(struct nvkm_fifo *base, u64 mthd, u64 *data) } } -static int +int gk104_fifo_oneinit(struct nvkm_fifo *base) { struct gk104_fifo *fifo = gk104_fifo(base); @@ -974,7 +962,7 @@ gk104_fifo_oneinit(struct nvkm_fifo *base) return nvkm_memory_map(fifo->user.mem, 0, bar, fifo->user.bar, NULL, 0); } -static void +void gk104_fifo_init(struct nvkm_fifo *base) { struct gk104_fifo *fifo = gk104_fifo(base); @@ -1006,7 +994,7 @@ gk104_fifo_init(struct nvkm_fifo *base) nvkm_wr32(device, 0x002140, 0x7fffffff); } -static void * +void * gk104_fifo_dtor(struct nvkm_fifo *base) { struct gk104_fifo *fifo = gk104_fifo(base); diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.h b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.h index 6407a4a174cf..4398b340e514 100644 --- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.h +++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.h @@ -87,11 +87,43 @@ struct gk104_fifo_func { bool cgrp_force; }; +struct gk104_fifo_engine_status { + bool busy; + bool faulted; + bool chsw; + bool save; + bool load; + struct { + bool tsg; + u32 id; + } prev, next, *chan; +}; + int gk104_fifo_new_(const struct gk104_fifo_func *, struct nvkm_device *, int index, int nr, struct nvkm_fifo **); void gk104_fifo_runlist_insert(struct gk104_fifo *, struct gk104_fifo_chan *); void gk104_fifo_runlist_remove(struct gk104_fifo *, struct gk104_fifo_chan *); void gk104_fifo_runlist_update(struct gk104_fifo *, int runl); +void gk104_fifo_engine_status(struct gk104_fifo *fifo, int engn, + struct gk104_fifo_engine_status *status); +void gk104_fifo_intr_bind(struct gk104_fifo *fifo); +void gk104_fifo_intr_chsw(struct gk104_fifo *fifo); +void gk104_fifo_intr_dropped_fault(struct gk104_fifo *fifo); +void gk104_fifo_intr_pbdma_0(struct gk104_fifo *fifo, int unit); +void gk104_fifo_intr_pbdma_1(struct gk104_fifo *fifo, int unit); +void gk104_fifo_intr_runlist(struct gk104_fifo *fifo); +void gk104_fifo_intr_engine(struct gk104_fifo *fifo); +void *gk104_fifo_dtor(struct nvkm_fifo *base); +int gk104_fifo_oneinit(struct nvkm_fifo *base); +int gk104_fifo_info(struct nvkm_fifo *base, u64 mthd, u64 *data); +void gk104_fifo_init(struct nvkm_fifo *base); +void gk104_fifo_fini(struct nvkm_fifo *base); +int gk104_fifo_class_new(struct nvkm_fifo *base, const struct nvkm_oclass *oclass, + void *argv, u32 argc, struct nvkm_object **pobject); +int gk104_fifo_class_get(struct nvkm_fifo *base, int index, + struct nvkm_oclass *oclass); +void gk104_fifo_uevent_fini(struct nvkm_fifo *fifo); +void gk104_fifo_uevent_init(struct nvkm_fifo *fifo); extern const struct gk104_fifo_pbdma_func gk104_fifo_pbdma; int gk104_fifo_pbdma_nr(struct gk104_fifo *); diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/tu102.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/tu102.c index 005f3e1729b9..14e5b70e0255 100644 --- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/tu102.c +++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/tu102.c @@ -24,7 +24,13 @@ #include "changk104.h" #include "user.h" +#include <core/client.h> #include <core/gpuobj.h> +#include <subdev/bar.h> +#include <subdev/fault.h> +#include <subdev/top.h> +#include <subdev/timer.h> +#include <engine/sw.h> #include <nvif/class.h> @@ -109,8 +115,364 @@ tu102_fifo = { .cgrp_force = true, }; +static void +tu102_fifo_recover_work(struct work_struct *w) +{ + struct gk104_fifo *fifo = container_of(w, 
typeof(*fifo), recover.work); + struct nvkm_device *device = fifo->base.engine.subdev.device; + struct nvkm_engine *engine; + unsigned long flags; + u32 engm, runm, todo; + int engn, runl; + + spin_lock_irqsave(&fifo->base.lock, flags); + runm = fifo->recover.runm; + engm = fifo->recover.engm; + fifo->recover.engm = 0; + fifo->recover.runm = 0; + spin_unlock_irqrestore(&fifo->base.lock, flags); + + nvkm_mask(device, 0x002630, runm, runm); + + for (todo = engm; engn = __ffs(todo), todo; todo &= ~BIT(engn)) { + if ((engine = fifo->engine[engn].engine)) { + nvkm_subdev_fini(&engine->subdev, false); + WARN_ON(nvkm_subdev_init(&engine->subdev)); + } + } + + for (todo = runm; runl = __ffs(todo), todo; todo &= ~BIT(runl)) + gk104_fifo_runlist_update(fifo, runl); + + nvkm_mask(device, 0x002630, runm, 0x00000000); +} + +static void tu102_fifo_recover_engn(struct gk104_fifo *fifo, int engn); + +static void +tu102_fifo_recover_runl(struct gk104_fifo *fifo, int runl) +{ + struct nvkm_subdev *subdev = &fifo->base.engine.subdev; + struct nvkm_device *device = subdev->device; + const u32 runm = BIT(runl); + + assert_spin_locked(&fifo->base.lock); + if (fifo->recover.runm & runm) + return; + fifo->recover.runm |= runm; + + /* Block runlist to prevent channel assignment(s) from changing. */ + nvkm_mask(device, 0x002630, runm, runm); + + /* Schedule recovery. */ + nvkm_warn(subdev, "runlist %d: scheduled for recovery\n", runl); + schedule_work(&fifo->recover.work); +} + +static struct gk104_fifo_chan * +tu102_fifo_recover_chid(struct gk104_fifo *fifo, int runl, int chid) +{ + struct gk104_fifo_chan *chan; + struct nvkm_fifo_cgrp *cgrp; + + list_for_each_entry(chan, &fifo->runlist[runl].chan, head) { + if (chan->base.chid == chid) { + list_del_init(&chan->head); + return chan; + } + } + + list_for_each_entry(cgrp, &fifo->runlist[runl].cgrp, head) { + if (cgrp->id == chid) { + chan = list_first_entry(&cgrp->chan, typeof(*chan), head); + list_del_init(&chan->head); + if (!--cgrp->chan_nr) + list_del_init(&cgrp->head); + return chan; + } + } + + return NULL; +} + +static void +tu102_fifo_recover_chan(struct nvkm_fifo *base, int chid) +{ + struct gk104_fifo *fifo = gk104_fifo(base); + struct nvkm_subdev *subdev = &fifo->base.engine.subdev; + struct nvkm_device *device = subdev->device; + const u32 stat = nvkm_rd32(device, 0x800004 + (chid * 0x08)); + const u32 runl = (stat & 0x000f0000) >> 16; + const bool used = (stat & 0x00000001); + unsigned long engn, engm = fifo->runlist[runl].engm; + struct gk104_fifo_chan *chan; + + assert_spin_locked(&fifo->base.lock); + if (!used) + return; + + /* Lookup SW state for channel, and mark it as dead. */ + chan = tu102_fifo_recover_chid(fifo, runl, chid); + if (chan) { + chan->killed = true; + nvkm_fifo_kevent(&fifo->base, chid); + } + + /* Disable channel. */ + nvkm_wr32(device, 0x800004 + (chid * 0x08), stat | 0x00000800); + nvkm_warn(subdev, "channel %d: killed\n", chid); + + /* Block channel assignments from changing during recovery. */ + tu102_fifo_recover_runl(fifo, runl); + + /* Schedule recovery for any engines the channel is on. 
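+ * Each engine found below is flagged in fifo->recover.engm; + * tu102_fifo_recover_work() then resets it and resubmits the affected runlists.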
*/ + for_each_set_bit(engn, &engm, fifo->engine_nr) { + struct gk104_fifo_engine_status status; + + gk104_fifo_engine_status(fifo, engn, &status); + if (!status.chan || status.chan->id != chid) + continue; + tu102_fifo_recover_engn(fifo, engn); + } +} + +static void +tu102_fifo_recover_engn(struct gk104_fifo *fifo, int engn) +{ + struct nvkm_subdev *subdev = &fifo->base.engine.subdev; + struct nvkm_device *device = subdev->device; + const u32 runl = fifo->engine[engn].runl; + const u32 engm = BIT(engn); + struct gk104_fifo_engine_status status; + + assert_spin_locked(&fifo->base.lock); + if (fifo->recover.engm & engm) + return; + fifo->recover.engm |= engm; + + /* Block channel assignments from changing during recovery. */ + tu102_fifo_recover_runl(fifo, runl); + + /* Determine which channel (if any) is currently on the engine. */ + gk104_fifo_engine_status(fifo, engn, &status); + if (status.chan) { + /* The channel is no longer viable, kill it. */ + tu102_fifo_recover_chan(&fifo->base, status.chan->id); + } + + /* Preempt the runlist */ + nvkm_wr32(device, 0x2638, BIT(runl)); + + /* Schedule recovery. */ + nvkm_warn(subdev, "engine %d: scheduled for recovery\n", engn); + schedule_work(&fifo->recover.work); +} + +static void +tu102_fifo_fault(struct nvkm_fifo *base, struct nvkm_fault_data *info) +{ + struct gk104_fifo *fifo = gk104_fifo(base); + struct nvkm_subdev *subdev = &fifo->base.engine.subdev; + struct nvkm_device *device = subdev->device; + const struct nvkm_enum *er, *ee, *ec, *ea; + struct nvkm_engine *engine = NULL; + struct nvkm_fifo_chan *chan; + unsigned long flags; + char ct[8] = "HUB/", en[16] = ""; + int engn; + + er = nvkm_enum_find(fifo->func->fault.reason, info->reason); + ee = nvkm_enum_find(fifo->func->fault.engine, info->engine); + if (info->hub) { + ec = nvkm_enum_find(fifo->func->fault.hubclient, info->client); + } else { + ec = nvkm_enum_find(fifo->func->fault.gpcclient, info->client); + snprintf(ct, sizeof(ct), "GPC%d/", info->gpc); + } + ea = nvkm_enum_find(fifo->func->fault.access, info->access); + + if (ee && ee->data2) { + switch (ee->data2) { + case NVKM_SUBDEV_BAR: + nvkm_bar_bar1_reset(device); + break; + case NVKM_SUBDEV_INSTMEM: + nvkm_bar_bar2_reset(device); + break; + case NVKM_ENGINE_IFB: + nvkm_mask(device, 0x001718, 0x00000000, 0x00000000); + break; + default: + engine = nvkm_device_engine(device, ee->data2); + break; + } + } + + if (ee == NULL) { + enum nvkm_devidx engidx = nvkm_top_fault(device, info->engine); + + if (engidx < NVKM_SUBDEV_NR) { + const char *src = nvkm_subdev_name[engidx]; + char *dst = en; + + do { + *dst++ = toupper(*src++); + } while (*src); + engine = nvkm_device_engine(device, engidx); + } + } else { + snprintf(en, sizeof(en), "%s", ee->name); + } + + spin_lock_irqsave(&fifo->base.lock, flags); + chan = nvkm_fifo_chan_inst_locked(&fifo->base, info->inst); + + nvkm_error(subdev, + "fault %02x [%s] at %016llx engine %02x [%s] client %02x " + "[%s%s] reason %02x [%s] on channel %d [%010llx %s]\n", + info->access, ea ? ea->name : "", info->addr, + info->engine, ee ? ee->name : en, + info->client, ct, ec ? ec->name : "", + info->reason, er ? er->name : "", chan ? chan->chid : -1, + info->inst, chan ? chan->object.client->name : "unknown"); + + /* Kill the channel that caused the fault. */ + if (chan) + tu102_fifo_recover_chan(&fifo->base, chan->chid); + + /* Channel recovery will probably have already done this for the + * correct engine(s), but just in case we can't find the channel + * information...
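+ * in which case we fall back to recovering the engine decoded from the fault data.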
+ */ + for (engn = 0; engn < fifo->engine_nr && engine; engn++) { + if (fifo->engine[engn].engine == engine) { + tu102_fifo_recover_engn(fifo, engn); + break; + } + } + + spin_unlock_irqrestore(&fifo->base.lock, flags); +} + +static void +tu102_fifo_intr_ctxsw_timeout(struct gk104_fifo *fifo) +{ + struct nvkm_device *device = fifo->base.engine.subdev.device; + unsigned long flags, engm; + u32 engn; + + spin_lock_irqsave(&fifo->base.lock, flags); + + engm = nvkm_rd32(device, 0x2a30); + nvkm_wr32(device, 0x2a30, engm); + + for_each_set_bit(engn, &engm, 32) + tu102_fifo_recover_engn(fifo, engn); + + spin_unlock_irqrestore(&fifo->base.lock, flags); +} + +static void +tu102_fifo_intr_sched(struct gk104_fifo *fifo) +{ + struct nvkm_subdev *subdev = &fifo->base.engine.subdev; + struct nvkm_device *device = subdev->device; + u32 intr = nvkm_rd32(device, 0x00254c); + u32 code = intr & 0x000000ff; + + nvkm_error(subdev, "SCHED_ERROR %02x\n", code); +} + +static void +tu102_fifo_intr(struct nvkm_fifo *base) +{ + struct gk104_fifo *fifo = gk104_fifo(base); + struct nvkm_subdev *subdev = &fifo->base.engine.subdev; + struct nvkm_device *device = subdev->device; + u32 mask = nvkm_rd32(device, 0x002140); + u32 stat = nvkm_rd32(device, 0x002100) & mask; + + if (stat & 0x00000001) { + gk104_fifo_intr_bind(fifo); + nvkm_wr32(device, 0x002100, 0x00000001); + stat &= ~0x00000001; + } + + if (stat & 0x00000002) { + tu102_fifo_intr_ctxsw_timeout(fifo); + stat &= ~0x00000002; + } + + if (stat & 0x00000100) { + tu102_fifo_intr_sched(fifo); + nvkm_wr32(device, 0x002100, 0x00000100); + stat &= ~0x00000100; + } + + if (stat & 0x00010000) { + gk104_fifo_intr_chsw(fifo); + nvkm_wr32(device, 0x002100, 0x00010000); + stat &= ~0x00010000; + } + + if (stat & 0x20000000) { + u32 mask = nvkm_rd32(device, 0x0025a0); + + while (mask) { + u32 unit = __ffs(mask); + + gk104_fifo_intr_pbdma_0(fifo, unit); + gk104_fifo_intr_pbdma_1(fifo, unit); + nvkm_wr32(device, 0x0025a0, (1 << unit)); + mask &= ~(1 << unit); + } + stat &= ~0x20000000; + } + + if (stat & 0x40000000) { + gk104_fifo_intr_runlist(fifo); + stat &= ~0x40000000; + } + + if (stat & 0x80000000) { + nvkm_wr32(device, 0x002100, 0x80000000); + gk104_fifo_intr_engine(fifo); + stat &= ~0x80000000; + } + + if (stat) { + nvkm_error(subdev, "INTR %08x\n", stat); + nvkm_mask(device, 0x002140, stat, 0x00000000); + nvkm_wr32(device, 0x002100, stat); + } +} + +static const struct nvkm_fifo_func +tu102_fifo_ = { + .dtor = gk104_fifo_dtor, + .oneinit = gk104_fifo_oneinit, + .info = gk104_fifo_info, + .init = gk104_fifo_init, + .fini = gk104_fifo_fini, + .intr = tu102_fifo_intr, + .fault = tu102_fifo_fault, + .uevent_init = gk104_fifo_uevent_init, + .uevent_fini = gk104_fifo_uevent_fini, + .recover_chan = tu102_fifo_recover_chan, + .class_get = gk104_fifo_class_get, + .class_new = gk104_fifo_class_new, +}; + int tu102_fifo_new(struct nvkm_device *device, int index, struct nvkm_fifo **pfifo) { - return gk104_fifo_new_(&tu102_fifo, device, index, 4096, pfifo); + struct gk104_fifo *fifo; + + if (!(fifo = kzalloc(sizeof(*fifo), GFP_KERNEL))) + return -ENOMEM; + fifo->func = &tu102_fifo; + INIT_WORK(&fifo->recover.work, tu102_fifo_recover_work); + *pfifo = &fifo->base; + + return nvkm_fifo_ctor(&tu102_fifo_, device, index, 4096, &fifo->base); } diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c index 7deb81b6dbac..4b571cc6bc70 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c +++ 
b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.c @@ -75,7 +75,7 @@ shadow_image(struct nvkm_bios *bios, int idx, u32 offset, struct shadow *mthd) nvkm_debug(subdev, "%08x: type %02x, %d bytes\n", image.base, image.type, image.size); - if (!shadow_fetch(bios, mthd, image.size)) { + if (!shadow_fetch(bios, mthd, image.base + image.size)) { nvkm_debug(subdev, "%08x: fetch failed\n", image.base); return 0; } diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowramin.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowramin.c index 3634cd0630b8..023ddc7c5399 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowramin.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowramin.c @@ -64,6 +64,9 @@ pramin_init(struct nvkm_bios *bios, const char *name) return NULL; /* we can't get the bios image pointer without PDISP */ + if (device->card_type >= GA100) + addr = device->chipset == 0x170; /*XXX: find the fuse reg for this */ + else if (device->card_type >= GM100) addr = nvkm_rd32(device, 0x021c04); else diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/Kbuild index b3429371ed82..d1abb64841da 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/Kbuild +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/Kbuild @@ -15,3 +15,4 @@ nvkm-y += nvkm/subdev/devinit/gm107.o nvkm-y += nvkm/subdev/devinit/gm200.o nvkm-y += nvkm/subdev/devinit/gv100.o nvkm-y += nvkm/subdev/devinit/tu102.o +nvkm-y += nvkm/subdev/devinit/ga100.o diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/ga100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/ga100.c new file mode 100644 index 000000000000..636a92128f6c --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/ga100.c @@ -0,0 +1,76 @@ +/* + * Copyright 2021 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#include "nv50.h" + +#include <subdev/bios.h> +#include <subdev/bios/pll.h> +#include <subdev/clk/pll.h> + +static int +ga100_devinit_pll_set(struct nvkm_devinit *init, u32 type, u32 freq) +{ + struct nvkm_subdev *subdev = &init->subdev; + struct nvkm_device *device = subdev->device; + struct nvbios_pll info; + int head = type - PLL_VPLL0; + int N, fN, M, P; + int ret; + + ret = nvbios_pll_parse(device->bios, type, &info); + if (ret) + return ret; + + ret = gt215_pll_calc(subdev, &info, freq, &N, &fN, &M, &P); + if (ret < 0) + return ret; + + switch (info.type) { + case PLL_VPLL0: + case PLL_VPLL1: + case PLL_VPLL2: + case PLL_VPLL3: + nvkm_wr32(device, 0x00ef00 + (head * 0x40), 0x02080004); + nvkm_wr32(device, 0x00ef18 + (head * 0x40), (N << 16) | fN); + nvkm_wr32(device, 0x00ef04 + (head * 0x40), (P << 16) | M); + nvkm_wr32(device, 0x00e9c0 + (head * 0x04), 0x00000001); + break; + default: + nvkm_warn(subdev, "%08x/%dKhz unimplemented\n", type, freq); + ret = -EINVAL; + break; + } + + return ret; +} + +static const struct nvkm_devinit_func +ga100_devinit = { + .init = nv50_devinit_init, + .post = tu102_devinit_post, + .pll_set = ga100_devinit_pll_set, +}; + +int +ga100_devinit_new(struct nvkm_device *device, int index, struct nvkm_devinit **pinit) +{ + return nv50_devinit_new_(&ga100_devinit, device, index, pinit); +} diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/priv.h index 94723352137a..05961e624264 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/priv.h +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/priv.h @@ -19,4 +19,5 @@ void nvkm_devinit_ctor(const struct nvkm_devinit_func *, struct nvkm_device *, int index, struct nvkm_devinit *); int nv04_devinit_post(struct nvkm_devinit *, bool); +int tu102_devinit_post(struct nvkm_devinit *, bool); #endif diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c index 397670e72fff..9a469bf482f2 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c @@ -65,7 +65,7 @@ tu102_devinit_pll_set(struct nvkm_devinit *init, u32 type, u32 freq) return ret; } -static int +int tu102_devinit_post(struct nvkm_devinit *base, bool post) { struct nv50_devinit *init = nv50_devinit(base); diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fault/tu102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fault/tu102.c index 45a6a68b9f48..f080051b0c65 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/fault/tu102.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fault/tu102.c @@ -22,6 +22,7 @@ #include "priv.h" #include <core/memory.h> +#include <subdev/mc.h> #include <subdev/mmu.h> #include <engine/fifo.h> @@ -34,6 +35,9 @@ tu102_fault_buffer_intr(struct nvkm_fault_buffer *buffer, bool enable) * which don't appear to actually work anymore, but newer * versions of RM don't appear to touch anything at all.. 
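+ * Instead, the enable/disable is now routed through the new top-level + * interrupt tree via nvkm_mc_intr_mask() below.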
*/ + struct nvkm_device *device = buffer->fault->subdev.device; + + nvkm_mc_intr_mask(device, NVKM_SUBDEV_FAULT, enable); } static void @@ -41,6 +45,11 @@ tu102_fault_buffer_fini(struct nvkm_fault_buffer *buffer) { struct nvkm_device *device = buffer->fault->subdev.device; const u32 foff = buffer->id * 0x20; + + /* Disable the fault interrupts */ + nvkm_wr32(device, 0xb81408, 0x1); + nvkm_wr32(device, 0xb81410, 0x10); + nvkm_mask(device, 0xb83010 + foff, 0x80000000, 0x00000000); } @@ -50,6 +59,10 @@ tu102_fault_buffer_init(struct nvkm_fault_buffer *buffer) struct nvkm_device *device = buffer->fault->subdev.device; const u32 foff = buffer->id * 0x20; + /* Enable the fault interrupts */ + nvkm_wr32(device, 0xb81208, 0x1); + nvkm_wr32(device, 0xb81210, 0x10); + nvkm_mask(device, 0xb83010 + foff, 0xc0000000, 0x40000000); nvkm_wr32(device, 0xb83004 + foff, upper_32_bits(buffer->addr)); nvkm_wr32(device, 0xb83000 + foff, lower_32_bits(buffer->addr)); @@ -109,14 +122,20 @@ tu102_fault_intr(struct nvkm_fault *fault) } if (stat & 0x00000200) { + /* Clear the associated interrupt flag */ + nvkm_wr32(device, 0xb81010, 0x10); + if (fault->buffer[0]) { nvkm_event_send(&fault->event, 1, 0, NULL, 0); stat &= ~0x00000200; } } - /*XXX: guess, can't confirm until we get fw... */ + /* Replayable MMU fault */ if (stat & 0x00000100) { + /* Clear the associated interrupt flag */ + nvkm_wr32(device, 0xb81008, 0x1); + if (fault->buffer[1]) { nvkm_event_send(&fault->event, 1, 1, NULL, 0); stat &= ~0x00000100; diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild index 43a42159a3d0..5d0bab8ecb43 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild @@ -32,6 +32,8 @@ nvkm-y += nvkm/subdev/fb/gp100.o nvkm-y += nvkm/subdev/fb/gp102.o nvkm-y += nvkm/subdev/fb/gp10b.o nvkm-y += nvkm/subdev/fb/gv100.o +nvkm-y += nvkm/subdev/fb/ga100.o +nvkm-y += nvkm/subdev/fb/ga102.o nvkm-y += nvkm/subdev/fb/ram.o nvkm-y += nvkm/subdev/fb/ramnv04.o @@ -52,6 +54,7 @@ nvkm-y += nvkm/subdev/fb/ramgk104.o nvkm-y += nvkm/subdev/fb/ramgm107.o nvkm-y += nvkm/subdev/fb/ramgm200.o nvkm-y += nvkm/subdev/fb/ramgp100.o +nvkm-y += nvkm/subdev/fb/ramga102.o nvkm-y += nvkm/subdev/fb/sddr2.o nvkm-y += nvkm/subdev/fb/sddr3.o nvkm-y += nvkm/subdev/fb/gddr3.o diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga100.c new file mode 100644 index 000000000000..bf82686851cd --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga100.c @@ -0,0 +1,40 @@ +/* + * Copyright 2021 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#include "gf100.h" +#include "ram.h" + +static const struct nvkm_fb_func +ga100_fb = { + .dtor = gf100_fb_dtor, + .oneinit = gf100_fb_oneinit, + .init = gp100_fb_init, + .init_page = gv100_fb_init_page, + .init_unkn = gp100_fb_init_unkn, + .ram_new = gp100_ram_new, + .default_bigpage = 16, +}; + +int +ga100_fb_new(struct nvkm_device *device, int index, struct nvkm_fb **pfb) +{ + return gp102_fb_new_(&ga100_fb, device, index, pfb); +} diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c new file mode 100644 index 000000000000..bcecf84a6e67 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c @@ -0,0 +1,40 @@ +/* + * Copyright 2021 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#include "gf100.h" +#include "ram.h" + +static const struct nvkm_fb_func +ga102_fb = { + .dtor = gf100_fb_dtor, + .oneinit = gf100_fb_oneinit, + .init = gp100_fb_init, + .init_page = gv100_fb_init_page, + .init_unkn = gp100_fb_init_unkn, + .ram_new = ga102_ram_new, + .default_bigpage = 16, +}; + +int +ga102_fb_new(struct nvkm_device *device, int index, struct nvkm_fb **pfb) +{ + return gp102_fb_new_(&ga102_fb, device, index, pfb); +} diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c index 10ff5d053f7e..feda86a5fba8 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c @@ -22,7 +22,7 @@ #include "gf100.h" #include "ram.h" -static int +int gv100_fb_init_page(struct nvkm_fb *fb) { return (fb->page == 16) ? 
0 : -EINVAL; diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h index 5be9c563350d..66932ac10d15 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h @@ -82,4 +82,6 @@ int gp102_fb_new_(const struct nvkm_fb_func *, struct nvkm_device *, int, struct nvkm_fb **); bool gp102_fb_vpr_scrub_required(struct nvkm_fb *); int gp102_fb_vpr_scrub(struct nvkm_fb *); + +int gv100_fb_init_page(struct nvkm_fb *); #endif diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ram.h b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ram.h index d723a9b4e3c4..ea7d66f3dd82 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ram.h +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ram.h @@ -70,4 +70,5 @@ int gk104_ram_new(struct nvkm_fb *, struct nvkm_ram **); int gm107_ram_new(struct nvkm_fb *, struct nvkm_ram **); int gm200_ram_new(struct nvkm_fb *, struct nvkm_ram **); int gp100_ram_new(struct nvkm_fb *, struct nvkm_ram **); +int ga102_ram_new(struct nvkm_fb *, struct nvkm_ram **); #endif diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramga102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramga102.c new file mode 100644 index 000000000000..298c136cefe0 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramga102.c @@ -0,0 +1,40 @@ +/* + * Copyright 2021 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#include "ram.h" + +#include <subdev/bios.h> +#include <subdev/bios/init.h> +#include <subdev/bios/rammap.h> + +static const struct nvkm_ram_func +ga102_ram = { +}; + +int +ga102_ram_new(struct nvkm_fb *fb, struct nvkm_ram **pram) +{ + struct nvkm_device *device = fb->subdev.device; + enum nvkm_ram_type type = nvkm_fb_bios_memtype(device->bios); + u32 size = nvkm_rd32(device, 0x1183a4); + + return nvkm_ram_new_(&ga102_ram, fb, type, (u64)size << 20, pram); +} diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gpio/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/gpio/Kbuild index b2ad5922a1c2..efbbaa080de5 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/gpio/Kbuild +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gpio/Kbuild @@ -5,3 +5,4 @@ nvkm-y += nvkm/subdev/gpio/nv50.o nvkm-y += nvkm/subdev/gpio/g94.o nvkm-y += nvkm/subdev/gpio/gf119.o nvkm-y += nvkm/subdev/gpio/gk104.o +nvkm-y += nvkm/subdev/gpio/ga102.o diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gpio/ga102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gpio/ga102.c new file mode 100644 index 000000000000..62c791baf400 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gpio/ga102.c @@ -0,0 +1,118 @@ +/* + * Copyright 2021 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#include "priv.h" + +static void +ga102_gpio_reset(struct nvkm_gpio *gpio, u8 match) +{ + struct nvkm_device *device = gpio->subdev.device; + struct nvkm_bios *bios = device->bios; + u8 ver, len; + u16 entry; + int ent = -1; + + while ((entry = dcb_gpio_entry(bios, 0, ++ent, &ver, &len))) { + u32 data = nvbios_rd32(bios, entry); + u8 line = (data & 0x0000003f); + u8 defs = !!(data & 0x00000080); + u8 func = (data & 0x0000ff00) >> 8; + u8 unk0 = (data & 0x00ff0000) >> 16; + u8 unk1 = (data & 0x1f000000) >> 24; + + if ( func == DCB_GPIO_UNUSED || + (match != DCB_GPIO_UNUSED && match != func)) + continue; + + nvkm_gpio_set(gpio, 0, func, line, defs); + + nvkm_mask(device, 0x021200 + (line * 4), 0xff, unk0); + if (unk1--) + nvkm_mask(device, 0x00d740 + (unk1 * 4), 0xff, line); + } +} + +static int +ga102_gpio_drive(struct nvkm_gpio *gpio, int line, int dir, int out) +{ + struct nvkm_device *device = gpio->subdev.device; + u32 data = ((dir ^ 1) << 13) | (out << 12); + nvkm_mask(device, 0x021200 + (line * 4), 0x00003000, data); + nvkm_mask(device, 0x00d604, 0x00000001, 0x00000001); /* update? 
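+ * presumably this latches the new drive state into the + * GPIO block.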
*/ + return 0; +} + +static int +ga102_gpio_sense(struct nvkm_gpio *gpio, int line) +{ + struct nvkm_device *device = gpio->subdev.device; + return !!(nvkm_rd32(device, 0x021200 + (line * 4)) & 0x00004000); +} + +static void +ga102_gpio_intr_stat(struct nvkm_gpio *gpio, u32 *hi, u32 *lo) +{ + struct nvkm_device *device = gpio->subdev.device; + u32 intr0 = nvkm_rd32(device, 0x021640); + u32 intr1 = nvkm_rd32(device, 0x02164c); + u32 stat0 = nvkm_rd32(device, 0x021648) & intr0; + u32 stat1 = nvkm_rd32(device, 0x021654) & intr1; + *lo = (stat1 & 0xffff0000) | (stat0 >> 16); + *hi = (stat1 << 16) | (stat0 & 0x0000ffff); + nvkm_wr32(device, 0x021640, intr0); + nvkm_wr32(device, 0x02164c, intr1); +} + +static void +ga102_gpio_intr_mask(struct nvkm_gpio *gpio, u32 type, u32 mask, u32 data) +{ + struct nvkm_device *device = gpio->subdev.device; + u32 inte0 = nvkm_rd32(device, 0x021648); + u32 inte1 = nvkm_rd32(device, 0x021654); + if (type & NVKM_GPIO_LO) + inte0 = (inte0 & ~(mask << 16)) | (data << 16); + if (type & NVKM_GPIO_HI) + inte0 = (inte0 & ~(mask & 0xffff)) | (data & 0xffff); + mask >>= 16; + data >>= 16; + if (type & NVKM_GPIO_LO) + inte1 = (inte1 & ~(mask << 16)) | (data << 16); + if (type & NVKM_GPIO_HI) + inte1 = (inte1 & ~mask) | data; + nvkm_wr32(device, 0x021648, inte0); + nvkm_wr32(device, 0x021654, inte1); +} + +static const struct nvkm_gpio_func +ga102_gpio = { + .lines = 32, + .intr_stat = ga102_gpio_intr_stat, + .intr_mask = ga102_gpio_intr_mask, + .drive = ga102_gpio_drive, + .sense = ga102_gpio_sense, + .reset = ga102_gpio_reset, +}; + +int +ga102_gpio_new(struct nvkm_device *device, int index, struct nvkm_gpio **pgpio) +{ + return nvkm_gpio_new_(&ga102_gpio, device, index, pgpio); +} diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/Kbuild index 723d0284caef..819703913a00 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/Kbuild +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/Kbuild @@ -7,6 +7,7 @@ nvkm-y += nvkm/subdev/i2c/g94.o nvkm-y += nvkm/subdev/i2c/gf117.o nvkm-y += nvkm/subdev/i2c/gf119.o nvkm-y += nvkm/subdev/i2c/gk104.o +nvkm-y += nvkm/subdev/i2c/gk110.o nvkm-y += nvkm/subdev/i2c/gm200.o nvkm-y += nvkm/subdev/i2c/pad.o diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.h b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.h index 30b48896965e..f920eabf8628 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.h +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.h @@ -3,6 +3,13 @@ #define __NVKM_I2C_AUX_H__ #include "pad.h" +static inline void +nvkm_i2c_aux_autodpcd(struct nvkm_i2c *i2c, int aux, bool enable) +{ + if (i2c->func->aux_autodpcd) + i2c->func->aux_autodpcd(i2c, aux, enable); +} + struct nvkm_i2c_aux_func { bool address_only; int (*xfer)(struct nvkm_i2c_aux *, bool retry, u8 type, diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxg94.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxg94.c index db7769cb33eb..47068f6f9c55 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxg94.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxg94.c @@ -77,7 +77,8 @@ g94_i2c_aux_xfer(struct nvkm_i2c_aux *obj, bool retry, u8 type, u32 addr, u8 *data, u8 *size) { struct g94_i2c_aux *aux = g94_i2c_aux(obj); - struct nvkm_device *device = aux->base.pad->i2c->subdev.device; + struct nvkm_i2c *i2c = aux->base.pad->i2c; + struct nvkm_device *device = i2c->subdev.device; const u32 base = aux->ch * 0x50; u32 ctrl, stat, timeout, retries = 0; u32 xbuf[4] = {}; @@ -96,6 +97,8 @@ g94_i2c_aux_xfer(struct nvkm_i2c_aux *obj, 
bool retry, goto out; } + nvkm_i2c_aux_autodpcd(i2c, aux->ch, false); + if (!(type & 1)) { memcpy(xbuf, data, *size); for (i = 0; i < 16; i += 4) { @@ -128,7 +131,7 @@ g94_i2c_aux_xfer(struct nvkm_i2c_aux *obj, bool retry, if (!timeout--) { AUX_ERR(&aux->base, "timeout %08x", ctrl); ret = -EIO; - goto out; + goto out_err; } } while (ctrl & 0x00010000); ret = 0; @@ -154,7 +157,8 @@ g94_i2c_aux_xfer(struct nvkm_i2c_aux *obj, bool retry, memcpy(data, xbuf, *size); *size = stat & 0x0000001f; } - +out_err: + nvkm_i2c_aux_autodpcd(i2c, aux->ch, true); out: g94_i2c_aux_fini(aux); return ret < 0 ? ret : (stat & 0x000f0000) >> 16; diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c index edb6148cbca0..8bd1d442e465 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.c @@ -33,7 +33,7 @@ static void gm200_i2c_aux_fini(struct gm200_i2c_aux *aux) { struct nvkm_device *device = aux->base.pad->i2c->subdev.device; - nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00310000, 0x00000000); + nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00710000, 0x00000000); } static int @@ -54,10 +54,10 @@ gm200_i2c_aux_init(struct gm200_i2c_aux *aux) AUX_ERR(&aux->base, "begin idle timeout %08x", ctrl); return -EBUSY; } - } while (ctrl & 0x03010000); + } while (ctrl & 0x07010000); /* set some magic, and wait up to 1ms for it to appear */ - nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00300000, ureq); + nvkm_mask(device, 0x00d954 + (aux->ch * 0x50), 0x00700000, ureq); timeout = 1000; do { ctrl = nvkm_rd32(device, 0x00d954 + (aux->ch * 0x50)); @@ -67,7 +67,7 @@ gm200_i2c_aux_init(struct gm200_i2c_aux *aux) gm200_i2c_aux_fini(aux); return -EBUSY; } - } while ((ctrl & 0x03000000) != urep); + } while ((ctrl & 0x07000000) != urep); return 0; } @@ -77,7 +77,8 @@ gm200_i2c_aux_xfer(struct nvkm_i2c_aux *obj, bool retry, u8 type, u32 addr, u8 *data, u8 *size) { struct gm200_i2c_aux *aux = gm200_i2c_aux(obj); - struct nvkm_device *device = aux->base.pad->i2c->subdev.device; + struct nvkm_i2c *i2c = aux->base.pad->i2c; + struct nvkm_device *device = i2c->subdev.device; const u32 base = aux->ch * 0x50; u32 ctrl, stat, timeout, retries = 0; u32 xbuf[4] = {}; @@ -96,6 +97,8 @@ gm200_i2c_aux_xfer(struct nvkm_i2c_aux *obj, bool retry, goto out; } + nvkm_i2c_aux_autodpcd(i2c, aux->ch, false); + if (!(type & 1)) { memcpy(xbuf, data, *size); for (i = 0; i < 16; i += 4) { @@ -128,7 +131,7 @@ gm200_i2c_aux_xfer(struct nvkm_i2c_aux *obj, bool retry, if (!timeout--) { AUX_ERR(&aux->base, "timeout %08x", ctrl); ret = -EIO; - goto out; + goto out_err; } } while (ctrl & 0x00010000); ret = 0; @@ -155,6 +158,8 @@ gm200_i2c_aux_xfer(struct nvkm_i2c_aux *obj, bool retry, *size = stat & 0x0000001f; } +out_err: + nvkm_i2c_aux_autodpcd(i2c, aux->ch, true); out: gm200_i2c_aux_fini(aux); return ret < 0 ? ret : (stat & 0x000f0000) >> 16; diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gk110.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gk110.c new file mode 100644 index 000000000000..8e3bfa1af52a --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gk110.c @@ -0,0 +1,45 @@ +/* + * Copyright 2021 Red Hat Inc. 
+ * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + */ +#include "priv.h" +#include "pad.h" + +static void +gk110_aux_autodpcd(struct nvkm_i2c *i2c, int aux, bool enable) +{ + nvkm_mask(i2c->subdev.device, 0x00e4f8 + (aux * 0x50), 0x00010000, enable << 16); +} + +static const struct nvkm_i2c_func +gk110_i2c = { + .pad_x_new = gf119_i2c_pad_x_new, + .pad_s_new = gf119_i2c_pad_s_new, + .aux = 4, + .aux_stat = gk104_aux_stat, + .aux_mask = gk104_aux_mask, + .aux_autodpcd = gk110_aux_autodpcd, +}; + +int +gk110_i2c_new(struct nvkm_device *device, int index, struct nvkm_i2c **pi2c) +{ + return nvkm_i2c_new_(&gk110_i2c, device, index, pi2c); +} diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gm200.c index a23c5f315221..7b2375bff8a9 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gm200.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gm200.c @@ -24,6 +24,12 @@ #include "priv.h" #include "pad.h" +static void +gm200_aux_autodpcd(struct nvkm_i2c *i2c, int aux, bool enable) +{ + nvkm_mask(i2c->subdev.device, 0x00d968 + (aux * 0x50), 0x00010000, enable << 16); +} + static const struct nvkm_i2c_func gm200_i2c = { .pad_x_new = gf119_i2c_pad_x_new, @@ -31,6 +37,7 @@ gm200_i2c = { .aux = 8, .aux_stat = gk104_aux_stat, .aux_mask = gk104_aux_mask, + .aux_autodpcd = gm200_aux_autodpcd, }; int diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/pad.h b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/pad.h index 461016814f4f..44b7bb7d4777 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/pad.h +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/pad.h @@ -1,7 +1,7 @@ /* SPDX-License-Identifier: MIT */ #ifndef __NVKM_I2C_PAD_H__ #define __NVKM_I2C_PAD_H__ -#include <subdev/i2c.h> +#include "priv.h" struct nvkm_i2c_pad { const struct nvkm_i2c_pad_func *func; diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/priv.h index bd86bc298ebe..e35f6036fcfc 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/priv.h +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/priv.h @@ -23,6 +23,10 @@ struct nvkm_i2c_func { /* mask on/off interrupt types for a given set of auxch */ void (*aux_mask)(struct nvkm_i2c *, u32, u32, u32); + + /* enable/disable HW-initiated DPCD reads + */ + void (*aux_autodpcd)(struct nvkm_i2c *, int aux, bool enable); }; void g94_aux_stat(struct nvkm_i2c *, u32 *, u32 *, u32 *, u32 *); diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c 
b/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c index 2340040942c9..1115376bc85f 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gf100.c @@ -22,6 +22,7 @@ * Authors: Ben Skeggs */ #include "priv.h" +#include <subdev/timer.h> static void gf100_ibus_intr_hub(struct nvkm_subdev *ibus, int i) @@ -31,7 +32,6 @@ gf100_ibus_intr_hub(struct nvkm_subdev *ibus, int i) u32 data = nvkm_rd32(device, 0x122124 + (i * 0x0400)); u32 stat = nvkm_rd32(device, 0x122128 + (i * 0x0400)); nvkm_debug(ibus, "HUB%d: %06x %08x (%08x)\n", i, addr, data, stat); - nvkm_mask(device, 0x122128 + (i * 0x0400), 0x00000200, 0x00000000); } static void @@ -42,7 +42,6 @@ gf100_ibus_intr_rop(struct nvkm_subdev *ibus, int i) u32 data = nvkm_rd32(device, 0x124124 + (i * 0x0400)); u32 stat = nvkm_rd32(device, 0x124128 + (i * 0x0400)); nvkm_debug(ibus, "ROP%d: %06x %08x (%08x)\n", i, addr, data, stat); - nvkm_mask(device, 0x124128 + (i * 0x0400), 0x00000200, 0x00000000); } static void @@ -53,7 +52,6 @@ gf100_ibus_intr_gpc(struct nvkm_subdev *ibus, int i) u32 data = nvkm_rd32(device, 0x128124 + (i * 0x0400)); u32 stat = nvkm_rd32(device, 0x128128 + (i * 0x0400)); nvkm_debug(ibus, "GPC%d: %06x %08x (%08x)\n", i, addr, data, stat); - nvkm_mask(device, 0x128128 + (i * 0x0400), 0x00000200, 0x00000000); } void @@ -90,6 +88,12 @@ gf100_ibus_intr(struct nvkm_subdev *ibus) intr1 &= ~stat; } } + + nvkm_mask(device, 0x121c4c, 0x0000003f, 0x00000002); + nvkm_msec(device, 2000, + if (!(nvkm_rd32(device, 0x121c4c) & 0x0000003f)) + break; + ); } static int diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c b/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c index f3915f85838e..22e487b493ad 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/ibus/gk104.c @@ -22,6 +22,7 @@ * Authors: Ben Skeggs */ #include "priv.h" +#include <subdev/timer.h> static void gk104_ibus_intr_hub(struct nvkm_subdev *ibus, int i) @@ -31,7 +32,6 @@ gk104_ibus_intr_hub(struct nvkm_subdev *ibus, int i) u32 data = nvkm_rd32(device, 0x122124 + (i * 0x0800)); u32 stat = nvkm_rd32(device, 0x122128 + (i * 0x0800)); nvkm_debug(ibus, "HUB%d: %06x %08x (%08x)\n", i, addr, data, stat); - nvkm_mask(device, 0x122128 + (i * 0x0800), 0x00000200, 0x00000000); } static void @@ -42,7 +42,6 @@ gk104_ibus_intr_rop(struct nvkm_subdev *ibus, int i) u32 data = nvkm_rd32(device, 0x124124 + (i * 0x0800)); u32 stat = nvkm_rd32(device, 0x124128 + (i * 0x0800)); nvkm_debug(ibus, "ROP%d: %06x %08x (%08x)\n", i, addr, data, stat); - nvkm_mask(device, 0x124128 + (i * 0x0800), 0x00000200, 0x00000000); } static void @@ -53,7 +52,6 @@ gk104_ibus_intr_gpc(struct nvkm_subdev *ibus, int i) u32 data = nvkm_rd32(device, 0x128124 + (i * 0x0800)); u32 stat = nvkm_rd32(device, 0x128128 + (i * 0x0800)); nvkm_debug(ibus, "GPC%d: %06x %08x (%08x)\n", i, addr, data, stat); - nvkm_mask(device, 0x128128 + (i * 0x0800), 0x00000200, 0x00000000); } void @@ -90,6 +88,12 @@ gk104_ibus_intr(struct nvkm_subdev *ibus) intr1 &= ~stat; } } + + nvkm_mask(device, 0x12004c, 0x0000003f, 0x00000002); + nvkm_msec(device, 2000, + if (!(nvkm_rd32(device, 0x12004c) & 0x0000003f)) + break; + ); } static int diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mc/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/mc/Kbuild index 2585ef07532a..ac2b34e9ac6a 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/mc/Kbuild +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mc/Kbuild @@ -14,3 +14,4 @@ nvkm-y += nvkm/subdev/mc/gk20a.o nvkm-y += 
nvkm/subdev/mc/gp100.o nvkm-y += nvkm/subdev/mc/gp10b.o nvkm-y += nvkm/subdev/mc/tu102.o +nvkm-y += nvkm/subdev/mc/ga100.o diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mc/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mc/base.c index 0e57ab2a709f..09f669ac6630 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/mc/base.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mc/base.c @@ -108,9 +108,6 @@ nvkm_mc_intr(struct nvkm_device *device, bool *handled) if (stat) nvkm_error(&mc->subdev, "intr %08x\n", stat); *handled = intr != 0; - - if (mc->func->intr_hack) - mc->func->intr_hack(mc, handled); } static u32 diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mc/ga100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mc/ga100.c new file mode 100644 index 000000000000..967eb3af11eb --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mc/ga100.c @@ -0,0 +1,74 @@ +/* + * Copyright 2021 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#include "priv.h" + +static void +ga100_mc_intr_unarm(struct nvkm_mc *mc) +{ + nvkm_wr32(mc->subdev.device, 0xb81610, 0x00000004); +} + +static void +ga100_mc_intr_rearm(struct nvkm_mc *mc) +{ + nvkm_wr32(mc->subdev.device, 0xb81608, 0x00000004); +} + +static void +ga100_mc_intr_mask(struct nvkm_mc *mc, u32 mask, u32 intr) +{ + nvkm_wr32(mc->subdev.device, 0xb81210, mask & intr ); + nvkm_wr32(mc->subdev.device, 0xb81410, mask & ~(mask & intr)); +} + +static u32 +ga100_mc_intr_stat(struct nvkm_mc *mc) +{ + u32 intr_top = nvkm_rd32(mc->subdev.device, 0xb81600), intr = 0x00000000; + if (intr_top & 0x00000004) + intr = nvkm_mask(mc->subdev.device, 0xb81010, 0x00000000, 0x00000000); + return intr; +} + +static void +ga100_mc_init(struct nvkm_mc *mc) +{ + nv50_mc_init(mc); + nvkm_wr32(mc->subdev.device, 0xb81210, 0xffffffff); +} + +static const struct nvkm_mc_func +ga100_mc = { + .init = ga100_mc_init, + .intr = gp100_mc_intr, + .intr_unarm = ga100_mc_intr_unarm, + .intr_rearm = ga100_mc_intr_rearm, + .intr_mask = ga100_mc_intr_mask, + .intr_stat = ga100_mc_intr_stat, + .reset = gk104_mc_reset, +}; + +int +ga100_mc_new(struct nvkm_device *device, int index, struct nvkm_mc **pmc) +{ + return nvkm_mc_new_(&ga100_mc, device, index, pmc); +} diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mc/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/mc/priv.h index 4aab753a6040..0d01b2c419ff 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/mc/priv.h +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mc/priv.h @@ -26,7 +26,6 @@ struct nvkm_mc_func { void (*intr_mask)(struct nvkm_mc *, u32 mask, u32 stat); /* retrieve pending interrupt mask (NV_PMC_INTR) */ u32 (*intr_stat)(struct nvkm_mc *); - void (*intr_hack)(struct nvkm_mc *, bool *handled); const struct nvkm_mc_map *reset; void (*unk260)(struct nvkm_mc *, u32); }; diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mc/tu102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mc/tu102.c index d098c44a4fcb..af0afd1ad6ee 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/mc/tu102.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mc/tu102.c @@ -19,37 +19,118 @@ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR * OTHER DEALINGS IN THE SOFTWARE. */ +#define tu102_mc(p) container_of((p), struct tu102_mc, base) #include "priv.h" +struct tu102_mc { + struct nvkm_mc base; + spinlock_t lock; + bool intr; + u32 mask; +}; + static void -tu102_mc_intr_hack(struct nvkm_mc *mc, bool *handled) +tu102_mc_intr_update(struct tu102_mc *mc) { - struct nvkm_device *device = mc->subdev.device; - u32 stat = nvkm_rd32(device, 0xb81010); - if (stat & 0x00000050) { - struct nvkm_subdev *subdev = - nvkm_device_subdev(device, NVKM_SUBDEV_FAULT); - nvkm_wr32(device, 0xb81010, stat & 0x00000050); - if (subdev) - nvkm_subdev_intr(subdev); - *handled = true; + struct nvkm_device *device = mc->base.subdev.device; + u32 mask = mc->intr ? 
mc->mask : 0, i; + + for (i = 0; i < 2; i++) { + nvkm_wr32(device, 0x000180 + (i * 0x04), ~mask); + nvkm_wr32(device, 0x000160 + (i * 0x04), mask); } + + if (mask & 0x00000200) + nvkm_wr32(device, 0xb81608, 0x6); + else + nvkm_wr32(device, 0xb81610, 0x6); +} + +void +tu102_mc_intr_unarm(struct nvkm_mc *base) +{ + struct tu102_mc *mc = tu102_mc(base); + unsigned long flags; + + spin_lock_irqsave(&mc->lock, flags); + mc->intr = false; + tu102_mc_intr_update(mc); + spin_unlock_irqrestore(&mc->lock, flags); +} + +void +tu102_mc_intr_rearm(struct nvkm_mc *base) +{ + struct tu102_mc *mc = tu102_mc(base); + unsigned long flags; + + spin_lock_irqsave(&mc->lock, flags); + mc->intr = true; + tu102_mc_intr_update(mc); + spin_unlock_irqrestore(&mc->lock, flags); +} + +void +tu102_mc_intr_mask(struct nvkm_mc *base, u32 mask, u32 intr) +{ + struct tu102_mc *mc = tu102_mc(base); + unsigned long flags; + + spin_lock_irqsave(&mc->lock, flags); + mc->mask = (mc->mask & ~mask) | intr; + tu102_mc_intr_update(mc); + spin_unlock_irqrestore(&mc->lock, flags); +} + +static u32 +tu102_mc_intr_stat(struct nvkm_mc *mc) +{ + struct nvkm_device *device = mc->subdev.device; + u32 intr0 = nvkm_rd32(device, 0x000100); + u32 intr1 = nvkm_rd32(device, 0x000104); + u32 intr_top = nvkm_rd32(device, 0xb81600); + + /* Turing and above route the MMU fault interrupts via a different + * interrupt tree with different control registers. For the moment remap + * them back to the old PMC vector. + */ + if (intr_top & 0x00000006) + intr0 |= 0x00000200; + + return intr0 | intr1; } + static const struct nvkm_mc_func tu102_mc = { .init = nv50_mc_init, .intr = gp100_mc_intr, - .intr_unarm = gp100_mc_intr_unarm, - .intr_rearm = gp100_mc_intr_rearm, - .intr_mask = gp100_mc_intr_mask, - .intr_stat = gf100_mc_intr_stat, - .intr_hack = tu102_mc_intr_hack, + .intr_unarm = tu102_mc_intr_unarm, + .intr_rearm = tu102_mc_intr_rearm, + .intr_mask = tu102_mc_intr_mask, + .intr_stat = tu102_mc_intr_stat, .reset = gk104_mc_reset, }; int +tu102_mc_new_(const struct nvkm_mc_func *func, struct nvkm_device *device, + int index, struct nvkm_mc **pmc) +{ + struct tu102_mc *mc; + + if (!(mc = kzalloc(sizeof(*mc), GFP_KERNEL))) + return -ENOMEM; + nvkm_mc_ctor(func, device, index, &mc->base); + *pmc = &mc->base; + + spin_lock_init(&mc->lock); + mc->intr = false; + mc->mask = 0x7fffffff; + return 0; +} + +int tu102_mc_new(struct nvkm_device *device, int index, struct nvkm_mc **pmc) { - return gp100_mc_new_(&tu102_mc, device, index, pmc); + return tu102_mc_new_(&tu102_mc, device, index, pmc); } diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c index de91e9a26172..6d5212ae2fd5 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c @@ -316,9 +316,9 @@ nvkm_mmu_vram(struct nvkm_mmu *mmu) { struct nvkm_device *device = mmu->subdev.device; struct nvkm_mm *mm = &device->fb->ram->vram; - const u32 sizeN = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NORMAL); - const u32 sizeU = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NOMAP); - const u32 sizeM = nvkm_mm_heap_size(mm, NVKM_RAM_MM_MIXED); + const u64 sizeN = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NORMAL); + const u64 sizeU = nvkm_mm_heap_size(mm, NVKM_RAM_MM_NOMAP); + const u64 sizeM = nvkm_mm_heap_size(mm, NVKM_RAM_MM_MIXED); u8 type = NVKM_MEM_KIND * !!mmu->func->kind; u8 heap = NVKM_MEM_VRAM; int heapM, heapN, heapU; diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c index 
c6d575f50c48..e8c66d10478f 100644 --- a/drivers/gpu/drm/radeon/radeon_ttm.c +++ b/drivers/gpu/drm/radeon/radeon_ttm.c @@ -729,9 +729,6 @@ int radeon_ttm_init(struct radeon_device *rdev) } rdev->mman.initialized = true; - ttm_pool_init(&rdev->mman.bdev.pool, rdev->dev, rdev->need_swiotlb, - dma_addressing_limited(&rdev->pdev->dev)); - r = radeon_ttm_init_vram(rdev); if (r) { DRM_ERROR("Failed initializing VRAM heap.\n"); diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c index 85dd7131553a..0ae3a025efe9 100644 --- a/drivers/gpu/drm/tegra/dc.c +++ b/drivers/gpu/drm/tegra/dc.c @@ -2186,7 +2186,7 @@ static int tegra_dc_runtime_resume(struct host1x_client *client) struct device *dev = client->dev; int err; - err = pm_runtime_get_sync(dev); + err = pm_runtime_resume_and_get(dev); if (err < 0) { dev_err(dev, "failed to get runtime PM: %d\n", err); return err; diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c index e45c8414e2a3..e9ce7d6992d2 100644 --- a/drivers/gpu/drm/tegra/drm.c +++ b/drivers/gpu/drm/tegra/drm.c @@ -1301,8 +1301,10 @@ static const struct of_device_id host1x_drm_subdevs[] = { { .compatible = "nvidia,tegra30-hdmi", }, { .compatible = "nvidia,tegra30-gr2d", }, { .compatible = "nvidia,tegra30-gr3d", }, + { .compatible = "nvidia,tegra114-dc", }, { .compatible = "nvidia,tegra114-dsi", }, { .compatible = "nvidia,tegra114-hdmi", }, + { .compatible = "nvidia,tegra114-gr2d", }, { .compatible = "nvidia,tegra114-gr3d", }, { .compatible = "nvidia,tegra124-dc", }, { .compatible = "nvidia,tegra124-sor", }, diff --git a/drivers/gpu/drm/tegra/dsi.c b/drivers/gpu/drm/tegra/dsi.c index 5691ef1b0e58..f46d377f0c30 100644 --- a/drivers/gpu/drm/tegra/dsi.c +++ b/drivers/gpu/drm/tegra/dsi.c @@ -1111,7 +1111,7 @@ static int tegra_dsi_runtime_resume(struct host1x_client *client) struct device *dev = client->dev; int err; - err = pm_runtime_get_sync(dev); + err = pm_runtime_resume_and_get(dev); if (err < 0) { dev_err(dev, "failed to get runtime PM: %d\n", err); return err; diff --git a/drivers/gpu/drm/tegra/falcon.c b/drivers/gpu/drm/tegra/falcon.c index 56edef06c48e..223ab2ceb7e6 100644 --- a/drivers/gpu/drm/tegra/falcon.c +++ b/drivers/gpu/drm/tegra/falcon.c @@ -72,7 +72,7 @@ static int falcon_parse_firmware_image(struct falcon *falcon) struct falcon_fw_os_header_v1 *os; /* endian problems would show up right here */ - if (bin->magic != PCI_VENDOR_ID_NVIDIA) { + if (bin->magic != PCI_VENDOR_ID_NVIDIA && bin->magic != 0x10fe) { dev_err(falcon->dev, "incorrect firmware magic\n"); return -EINVAL; } @@ -178,9 +178,10 @@ int falcon_boot(struct falcon *falcon) falcon->firmware.data.offset + offset, offset, FALCON_MEMORY_DATA); - /* copy the first code segment into Falcon internal memory */ - falcon_copy_chunk(falcon, falcon->firmware.code.offset, - 0, FALCON_MEMORY_IMEM); + /* copy the code segment into Falcon internal memory */ + for (offset = 0; offset < falcon->firmware.code.size; offset += 256) + falcon_copy_chunk(falcon, falcon->firmware.code.offset + offset, + offset, FALCON_MEMORY_IMEM); /* setup falcon interrupts */ falcon_writel(falcon, FALCON_IRQMSET_EXT(0xff) | diff --git a/drivers/gpu/drm/tegra/gr2d.c b/drivers/gpu/drm/tegra/gr2d.c index 1a0d3ba6e525..adbe2ddcda19 100644 --- a/drivers/gpu/drm/tegra/gr2d.c +++ b/drivers/gpu/drm/tegra/gr2d.c @@ -161,9 +161,14 @@ static const struct gr2d_soc tegra30_gr2d_soc = { .version = 0x30, }; +static const struct gr2d_soc tegra114_gr2d_soc = { + .version = 0x35, +}; + static const struct of_device_id gr2d_match[] = { - { 
.compatible = "nvidia,tegra30-gr2d", .data = &tegra20_gr2d_soc }, - { .compatible = "nvidia,tegra20-gr2d", .data = &tegra30_gr2d_soc }, + { .compatible = "nvidia,tegra114-gr2d", .data = &tegra114_gr2d_soc }, + { .compatible = "nvidia,tegra30-gr2d", .data = &tegra30_gr2d_soc }, + { .compatible = "nvidia,tegra20-gr2d", .data = &tegra20_gr2d_soc }, { }, }; MODULE_DEVICE_TABLE(of, gr2d_match); diff --git a/drivers/gpu/drm/tegra/hdmi.c b/drivers/gpu/drm/tegra/hdmi.c index d09a24931c87..e5d2a4026028 100644 --- a/drivers/gpu/drm/tegra/hdmi.c +++ b/drivers/gpu/drm/tegra/hdmi.c @@ -1510,7 +1510,7 @@ static int tegra_hdmi_runtime_resume(struct host1x_client *client) struct device *dev = client->dev; int err; - err = pm_runtime_get_sync(dev); + err = pm_runtime_resume_and_get(dev); if (err < 0) { dev_err(dev, "failed to get runtime PM: %d\n", err); return err; diff --git a/drivers/gpu/drm/tegra/hub.c b/drivers/gpu/drm/tegra/hub.c index 22a03f7ffdc1..5ce771cba133 100644 --- a/drivers/gpu/drm/tegra/hub.c +++ b/drivers/gpu/drm/tegra/hub.c @@ -789,7 +789,7 @@ static int tegra_display_hub_runtime_resume(struct host1x_client *client) unsigned int i; int err; - err = pm_runtime_get_sync(dev); + err = pm_runtime_resume_and_get(dev); if (err < 0) { dev_err(dev, "failed to get runtime PM: %d\n", err); return err; diff --git a/drivers/gpu/drm/tegra/sor.c b/drivers/gpu/drm/tegra/sor.c index cc2aa2308a51..f02a035dda45 100644 --- a/drivers/gpu/drm/tegra/sor.c +++ b/drivers/gpu/drm/tegra/sor.c @@ -3218,7 +3218,7 @@ static int tegra_sor_runtime_resume(struct host1x_client *client) struct device *dev = client->dev; int err; - err = pm_runtime_get_sync(dev); + err = pm_runtime_resume_and_get(dev); if (err < 0) { dev_err(dev, "failed to get runtime PM: %d\n", err); return err; diff --git a/drivers/gpu/drm/tegra/vic.c b/drivers/gpu/drm/tegra/vic.c index ade56b860cf9..77e128832920 100644 --- a/drivers/gpu/drm/tegra/vic.c +++ b/drivers/gpu/drm/tegra/vic.c @@ -117,7 +117,19 @@ static int vic_boot(struct vic *vic) if (spec->num_ids > 0) { value = spec->ids[0] & 0xffff; + /* + * STREAMID0 is used for input/output buffers. + * Initialize it to SID_VIC in case context isolation + * is not enabled, and SID_VIC is used for both firmware + * and data buffers. + * + * If context isolation is enabled, it will be + * overridden by the SETSTREAMID opcode as part of + * each job. + */ vic_writel(vic, value, VIC_THI_STREAMID0); + + /* STREAMID1 is used for firmware loading. */ vic_writel(vic, value, VIC_THI_STREAMID1); } } @@ -135,16 +147,21 @@ static int vic_boot(struct vic *vic) hdr = vic->falcon.firmware.virt; fce_bin_data_offset = *(u32 *)(hdr + VIC_UCODE_FCE_DATA_OFFSET); - hdr = vic->falcon.firmware.virt + - *(u32 *)(hdr + VIC_UCODE_FCE_HEADER_OFFSET); - fce_ucode_size = *(u32 *)(hdr + FCE_UCODE_SIZE_OFFSET); falcon_execute_method(&vic->falcon, VIC_SET_APPLICATION_ID, 1); - falcon_execute_method(&vic->falcon, VIC_SET_FCE_UCODE_SIZE, - fce_ucode_size); - falcon_execute_method(&vic->falcon, VIC_SET_FCE_UCODE_OFFSET, - (vic->falcon.firmware.iova + fce_bin_data_offset) - >> 8); + + /* Old VIC firmware needs kernel help with setting up FCE microcode. 
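+ * Newer firmware leaves the FCE data offset as 0 or as the 0xa5a5a5a5 + * poison pattern, in which case the FCE setup below is skipped.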
*/ + if (fce_bin_data_offset != 0x0 && fce_bin_data_offset != 0xa5a5a5a5) { + hdr = vic->falcon.firmware.virt + + *(u32 *)(hdr + VIC_UCODE_FCE_HEADER_OFFSET); + fce_ucode_size = *(u32 *)(hdr + FCE_UCODE_SIZE_OFFSET); + + falcon_execute_method(&vic->falcon, VIC_SET_FCE_UCODE_SIZE, + fce_ucode_size); + falcon_execute_method( + &vic->falcon, VIC_SET_FCE_UCODE_OFFSET, + (vic->falcon.firmware.iova + fce_bin_data_offset) >> 8); + } err = falcon_wait_idle(&vic->falcon); if (err < 0) { @@ -314,7 +331,7 @@ static int vic_open_channel(struct tegra_drm_client *client, struct vic *vic = to_vic(client); int err; - err = pm_runtime_get_sync(vic->dev); + err = pm_runtime_resume_and_get(vic->dev); if (err < 0) return err; diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c index 7b2f60616750..11e0313db0ea 100644 --- a/drivers/gpu/drm/ttm/ttm_pool.c +++ b/drivers/gpu/drm/ttm/ttm_pool.c @@ -66,7 +66,7 @@ static struct ttm_pool_type global_uncached[MAX_ORDER]; static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER]; static struct ttm_pool_type global_dma32_uncached[MAX_ORDER]; -static spinlock_t shrinker_lock; +static struct mutex shrinker_lock; static struct list_head shrinker_list; static struct shrinker mm_shrinker; @@ -79,12 +79,13 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags, struct page *p; void *vaddr; - if (order) { - gfp_flags |= GFP_TRANSHUGE_LIGHT | __GFP_NORETRY | + /* Don't set the __GFP_COMP flag for higher order allocations. + * Mapping pages directly into a userspace process and calling + * put_page() on a TTM allocated page is illegal. + */ + if (order) + gfp_flags |= __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_KSWAPD_RECLAIM; - gfp_flags &= ~__GFP_MOVABLE; - gfp_flags &= ~__GFP_COMP; - } if (!pool->use_dma_alloc) { p = alloc_pages(gfp_flags, order); @@ -190,7 +191,7 @@ static int ttm_pool_map(struct ttm_pool *pool, unsigned int order, size_t size = (1ULL << order) * PAGE_SIZE; addr = dma_map_page(pool->dev, p, 0, size, DMA_BIDIRECTIONAL); - if (dma_mapping_error(pool->dev, **dma_addr)) + if (dma_mapping_error(pool->dev, addr)) return -EFAULT; } @@ -249,9 +250,9 @@ static void ttm_pool_type_init(struct ttm_pool_type *pt, struct ttm_pool *pool, spin_lock_init(&pt->lock); INIT_LIST_HEAD(&pt->pages); - spin_lock(&shrinker_lock); + mutex_lock(&shrinker_lock); list_add_tail(&pt->shrinker_list, &shrinker_list); - spin_unlock(&shrinker_lock); + mutex_unlock(&shrinker_lock); } /* Remove a pool_type from the global shrinker list and free all pages */ @@ -259,9 +260,9 @@ static void ttm_pool_type_fini(struct ttm_pool_type *pt) { struct page *p, *tmp; - spin_lock(&shrinker_lock); + mutex_lock(&shrinker_lock); list_del(&pt->shrinker_list); - spin_unlock(&shrinker_lock); + mutex_unlock(&shrinker_lock); list_for_each_entry_safe(p, tmp, &pt->pages, lru) ttm_pool_free_page(pt->pool, pt->caching, pt->order, p); @@ -302,7 +303,7 @@ static unsigned int ttm_pool_shrink(void) unsigned int num_freed; struct page *p; - spin_lock(&shrinker_lock); + mutex_lock(&shrinker_lock); pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list); p = ttm_pool_type_take(pt); @@ -314,7 +315,7 @@ static unsigned int ttm_pool_shrink(void) } list_move_tail(&pt->shrinker_list, &shrinker_list); - spin_unlock(&shrinker_lock); + mutex_unlock(&shrinker_lock); return num_freed; } @@ -507,7 +508,6 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev, ttm_pool_type_init(&pool->caching[i].orders[j], pool, i, j); } -EXPORT_SYMBOL(ttm_pool_init); /** * 
ttm_pool_fini - Cleanup a pool @@ -525,7 +525,6 @@ void ttm_pool_fini(struct ttm_pool *pool) for (j = 0; j < MAX_ORDER; ++j) ttm_pool_type_fini(&pool->caching[i].orders[j]); } -EXPORT_SYMBOL(ttm_pool_fini); #ifdef CONFIG_DEBUG_FS /* Count the number of pages available in a pool_type */ @@ -566,7 +565,7 @@ int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m) { unsigned int i; - spin_lock(&shrinker_lock); + mutex_lock(&shrinker_lock); seq_puts(m, "\t "); for (i = 0; i < MAX_ORDER; ++i) @@ -602,7 +601,7 @@ int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m) seq_printf(m, "\ntotal\t: %8lu of %8lu\n", atomic_long_read(&allocated_pages), page_pool_size); - spin_unlock(&shrinker_lock); + mutex_unlock(&shrinker_lock); return 0; } @@ -646,7 +645,7 @@ int ttm_pool_mgr_init(unsigned long num_pages) if (!page_pool_size) page_pool_size = num_pages; - spin_lock_init(&shrinker_lock); + mutex_init(&shrinker_lock); INIT_LIST_HEAD(&shrinker_list); for (i = 0; i < MAX_ORDER; ++i) { diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c index 2e5449b25ce4..bd7514612e95 100644 --- a/drivers/gpu/drm/vc4/vc4_hdmi.c +++ b/drivers/gpu/drm/vc4/vc4_hdmi.c @@ -1402,6 +1402,7 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *vc4_hdmi) card->dai_link = dai_link; card->num_links = 1; card->name = vc4_hdmi->variant->card_name; + card->driver_name = "vc4-hdmi"; card->dev = dev; card->owner = THIS_MODULE; diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig index 7bdda1b5b221..09fa75a2b289 100644 --- a/drivers/hid/Kconfig +++ b/drivers/hid/Kconfig @@ -899,6 +899,7 @@ config HID_SONY depends on NEW_LEDS depends on LEDS_CLASS select POWER_SUPPLY + select CRC32 help Support for diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_client.c b/drivers/hid/amd-sfh-hid/amd_sfh_client.c index 3d1ccac5d99a..2ab38b715347 100644 --- a/drivers/hid/amd-sfh-hid/amd_sfh_client.c +++ b/drivers/hid/amd-sfh-hid/amd_sfh_client.c @@ -154,7 +154,7 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata) for (i = 0; i < cl_data->num_hid_devices; i++) { cl_data->sensor_virt_addr[i] = dma_alloc_coherent(dev, sizeof(int) * 8, - &cl_data->sensor_phys_addr[i], + &cl_data->sensor_dma_addr[i], GFP_KERNEL); cl_data->sensor_sts[i] = 0; cl_data->sensor_requested_cnt[i] = 0; @@ -187,7 +187,7 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata) } info.period = msecs_to_jiffies(AMD_SFH_IDLE_LOOP); info.sensor_idx = cl_idx; - info.phys_address = cl_data->sensor_phys_addr[i]; + info.dma_address = cl_data->sensor_dma_addr[i]; cl_data->report_descr[i] = kzalloc(cl_data->report_descr_sz[i], GFP_KERNEL); if (!cl_data->report_descr[i]) { @@ -212,7 +212,7 @@ cleanup: if (cl_data->sensor_virt_addr[i]) { dma_free_coherent(&privdata->pdev->dev, 8 * sizeof(int), cl_data->sensor_virt_addr[i], - cl_data->sensor_phys_addr[i]); + cl_data->sensor_dma_addr[i]); } kfree(cl_data->feature_report[i]); kfree(cl_data->input_report[i]); @@ -238,7 +238,7 @@ int amd_sfh_hid_client_deinit(struct amd_mp2_dev *privdata) if (cl_data->sensor_virt_addr[i]) { dma_free_coherent(&privdata->pdev->dev, 8 * sizeof(int), cl_data->sensor_virt_addr[i], - cl_data->sensor_phys_addr[i]); + cl_data->sensor_dma_addr[i]); } } kfree(cl_data); diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_hid.h b/drivers/hid/amd-sfh-hid/amd_sfh_hid.h index 6be0783d885c..d7eac1728e31 100644 --- a/drivers/hid/amd-sfh-hid/amd_sfh_hid.h +++ b/drivers/hid/amd-sfh-hid/amd_sfh_hid.h @@ -27,7 +27,7 @@ struct amdtp_cl_data { int hid_descr_size[MAX_HID_DEVICES]; phys_addr_t 
phys_addr_base; u32 *sensor_virt_addr[MAX_HID_DEVICES]; - phys_addr_t sensor_phys_addr[MAX_HID_DEVICES]; + dma_addr_t sensor_dma_addr[MAX_HID_DEVICES]; u32 sensor_sts[MAX_HID_DEVICES]; u32 sensor_requested_cnt[MAX_HID_DEVICES]; u8 report_type[MAX_HID_DEVICES]; diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c index a51c7b76283b..dbac16641662 100644 --- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c +++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c @@ -41,7 +41,7 @@ void amd_start_sensor(struct amd_mp2_dev *privdata, struct amd_mp2_sensor_info i cmd_param.s.buf_layout = 1; cmd_param.s.buf_length = 16; - writeq(info.phys_address, privdata->mmio + AMD_C2P_MSG2); + writeq(info.dma_address, privdata->mmio + AMD_C2P_MSG2); writel(cmd_param.ul, privdata->mmio + AMD_C2P_MSG1); writel(cmd_base.ul, privdata->mmio + AMD_C2P_MSG0); } diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h index e8be94f935b7..8f8d19b2cfe5 100644 --- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h +++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h @@ -67,7 +67,7 @@ struct amd_mp2_dev { struct amd_mp2_sensor_info { u8 sensor_idx; u32 period; - phys_addr_t phys_address; + dma_addr_t dma_address; }; void amd_start_sensor(struct amd_mp2_dev *privdata, struct amd_mp2_sensor_info info); diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index 4c5f23640f9c..5ba0aa1d2335 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -389,6 +389,7 @@ #define USB_DEVICE_ID_TOSHIBA_CLICK_L9W 0x0401 #define USB_DEVICE_ID_HP_X2 0x074d #define USB_DEVICE_ID_HP_X2_10_COVER 0x0755 +#define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706 #define USB_VENDOR_ID_ELECOM 0x056e #define USB_DEVICE_ID_ELECOM_BM084 0x0061 diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c index dc7f6b4a775c..f23027d2795b 100644 --- a/drivers/hid/hid-input.c +++ b/drivers/hid/hid-input.c @@ -322,6 +322,8 @@ static const struct hid_device_id hid_battery_quirks[] = { { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD), HID_BATTERY_QUIRK_IGNORE }, + { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN), + HID_BATTERY_QUIRK_IGNORE }, {} }; diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c index 1ffcfc9a1e03..45e7e0bdd382 100644 --- a/drivers/hid/hid-logitech-dj.c +++ b/drivers/hid/hid-logitech-dj.c @@ -1869,6 +1869,10 @@ static const struct hid_device_id logi_dj_receivers[] = { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xc531), .driver_data = recvr_type_gaming_hidpp}, + { /* Logitech G602 receiver (0xc537) */ + HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, + 0xc537), + .driver_data = recvr_type_gaming_hidpp}, { /* Logitech lightspeed receiver (0xc539) */ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1), diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c index f85781464807..7eb9a6ddb46a 100644 --- a/drivers/hid/hid-logitech-hidpp.c +++ b/drivers/hid/hid-logitech-hidpp.c @@ -4053,6 +4053,8 @@ static const struct hid_device_id hidpp_devices[] = { { /* MX Master mouse over Bluetooth */ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb012), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, + { /* MX Ergo trackball over Bluetooth */ + HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01d) }, { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01e), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, { /* MX Master 3 mouse over Bluetooth */ diff --git 
a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c index d670bcd57bde..0743ef51d3b2 100644 --- a/drivers/hid/hid-multitouch.c +++ b/drivers/hid/hid-multitouch.c @@ -2054,6 +2054,10 @@ static const struct hid_device_id mt_devices[] = { HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, USB_VENDOR_ID_SYNAPTICS, 0xce08) }, + { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT, + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, + USB_VENDOR_ID_SYNAPTICS, 0xce09) }, + /* TopSeed panels */ { .driver_data = MT_CLS_TOPSEED, MT_USB_DEVICE(USB_VENDOR_ID_TOPSEED2, diff --git a/drivers/hid/hid-uclogic-params.c b/drivers/hid/hid-uclogic-params.c index d26d8cd98efc..56406cee401f 100644 --- a/drivers/hid/hid-uclogic-params.c +++ b/drivers/hid/hid-uclogic-params.c @@ -90,7 +90,7 @@ static int uclogic_params_get_str_desc(__u8 **pbuf, struct hid_device *hdev, goto cleanup; } else if (rc < 0) { hid_err(hdev, - "failed retrieving string descriptor #%hhu: %d\n", + "failed retrieving string descriptor #%u: %d\n", idx, rc); goto cleanup; } diff --git a/drivers/hid/hid-wiimote-core.c b/drivers/hid/hid-wiimote-core.c index 41012681cafd..4399d6c6afef 100644 --- a/drivers/hid/hid-wiimote-core.c +++ b/drivers/hid/hid-wiimote-core.c @@ -1482,7 +1482,7 @@ static void handler_return(struct wiimote_data *wdata, const __u8 *payload) wdata->state.cmd_err = err; wiimote_cmd_complete(wdata); } else if (err) { - hid_warn(wdata->hdev, "Remote error %hhu on req %hhu\n", err, + hid_warn(wdata->hdev, "Remote error %u on req %u\n", err, cmd); } } diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c index 045c464228d9..e8acd235db2a 100644 --- a/drivers/hid/wacom_sys.c +++ b/drivers/hid/wacom_sys.c @@ -1270,6 +1270,37 @@ static int wacom_devm_sysfs_create_group(struct wacom *wacom, group); } +static void wacom_devm_kfifo_release(struct device *dev, void *res) +{ + struct kfifo_rec_ptr_2 *devres = res; + + kfifo_free(devres); +} + +static int wacom_devm_kfifo_alloc(struct wacom *wacom) +{ + struct wacom_wac *wacom_wac = &wacom->wacom_wac; + struct kfifo_rec_ptr_2 *pen_fifo = &wacom_wac->pen_fifo; + int error; + + pen_fifo = devres_alloc(wacom_devm_kfifo_release, + sizeof(struct kfifo_rec_ptr_2), + GFP_KERNEL); + + if (!pen_fifo) + return -ENOMEM; + + error = kfifo_alloc(pen_fifo, WACOM_PKGLEN_MAX, GFP_KERNEL); + if (error) { + devres_free(pen_fifo); + return error; + } + + devres_add(&wacom->hdev->dev, pen_fifo); + + return 0; +} + enum led_brightness wacom_leds_brightness_get(struct wacom_led *led) { struct wacom *wacom = led->wacom; @@ -2724,7 +2755,7 @@ static int wacom_probe(struct hid_device *hdev, if (features->check_for_hid_type && features->hid_type != hdev->type) return -ENODEV; - error = kfifo_alloc(&wacom_wac->pen_fifo, WACOM_PKGLEN_MAX, GFP_KERNEL); + error = wacom_devm_kfifo_alloc(wacom); if (error) return error; @@ -2786,8 +2817,6 @@ static void wacom_remove(struct hid_device *hdev) if (wacom->wacom_wac.features.type != REMOTE) wacom_release_resources(wacom); - - kfifo_free(&wacom_wac->pen_fifo); } #ifdef CONFIG_PM diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c index 502f8cd95f6d..d491fdcee61f 100644 --- a/drivers/hv/vmbus_drv.c +++ b/drivers/hv/vmbus_drv.c @@ -2550,7 +2550,6 @@ static void hv_kexec_handler(void) /* Make sure conn_state is set as hv_synic_cleanup checks for it */ mb(); cpuhp_remove_state(hyperv_cpuhp_online); - hyperv_cleanup(); }; static void hv_crash_handler(struct pt_regs *regs) @@ -2566,7 +2565,6 @@ static void hv_crash_handler(struct pt_regs *regs) cpu = 
smp_processor_id(); hv_stimer_cleanup(cpu); hv_synic_disable_regs(cpu); - hyperv_cleanup(); }; static int hv_synic_suspend(void) diff --git a/drivers/hwmon/amd_energy.c b/drivers/hwmon/amd_energy.c index 9b306448b7a0..822c2e74b98d 100644 --- a/drivers/hwmon/amd_energy.c +++ b/drivers/hwmon/amd_energy.c @@ -222,7 +222,7 @@ static int amd_create_sensor(struct device *dev, */ cpus = num_present_cpus() / num_siblings; - s_config = devm_kcalloc(dev, cpus + sockets, + s_config = devm_kcalloc(dev, cpus + sockets + 1, sizeof(u32), GFP_KERNEL); if (!s_config) return -ENOMEM; @@ -254,6 +254,7 @@ static int amd_create_sensor(struct device *dev, scnprintf(label_l[i], 10, "Esocket%u", (i - cpus)); } + s_config[i] = 0; return 0; } diff --git a/drivers/hwmon/pwm-fan.c b/drivers/hwmon/pwm-fan.c index 777439f48c14..111a91dc6b79 100644 --- a/drivers/hwmon/pwm-fan.c +++ b/drivers/hwmon/pwm-fan.c @@ -334,8 +334,18 @@ static int pwm_fan_probe(struct platform_device *pdev) ctx->pwm_value = MAX_PWM; - /* Set duty cycle to maximum allowed and enable PWM output */ pwm_init_state(ctx->pwm, &state); + /* + * __set_pwm assumes that MAX_PWM * (period - 1) fits into an unsigned + * long. Check this here to prevent the fan running at a too low + * frequency. + */ + if (state.period > ULONG_MAX / MAX_PWM + 1) { + dev_err(dev, "Configured period too big\n"); + return -EINVAL; + } + + /* Set duty cycle to maximum allowed and enable PWM output */ state.duty_cycle = ctx->pwm->args.period - 1; state.enabled = true; diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c index 52acd77438ed..251e75c9ba9d 100644 --- a/drivers/hwtracing/intel_th/pci.c +++ b/drivers/hwtracing/intel_th/pci.c @@ -269,6 +269,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = { .driver_data = (kernel_ulong_t)&intel_th_2x, }, { + /* Alder Lake-P */ + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x51a6), + .driver_data = (kernel_ulong_t)&intel_th_2x, + }, + { /* Alder Lake CPU */ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f), .driver_data = (kernel_ulong_t)&intel_th_2x, diff --git a/drivers/hwtracing/stm/heartbeat.c b/drivers/hwtracing/stm/heartbeat.c index 3e7df1c0477f..81d7b21d31ec 100644 --- a/drivers/hwtracing/stm/heartbeat.c +++ b/drivers/hwtracing/stm/heartbeat.c @@ -64,7 +64,7 @@ static void stm_heartbeat_unlink(struct stm_source_data *data) static int stm_heartbeat_init(void) { - int i, ret = -ENOMEM; + int i, ret; if (nr_devs < 0 || nr_devs > STM_HEARTBEAT_MAX) return -EINVAL; @@ -72,8 +72,10 @@ static int stm_heartbeat_init(void) for (i = 0; i < nr_devs; i++) { stm_heartbeat[i].data.name = kasprintf(GFP_KERNEL, "heartbeat.%d", i); - if (!stm_heartbeat[i].data.name) + if (!stm_heartbeat[i].data.name) { + ret = -ENOMEM; goto fail_unregister; + } stm_heartbeat[i].data.nr_chans = 1; stm_heartbeat[i].data.link = stm_heartbeat_link; diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig index d4d60ad0eda0..ab1f39ac39f4 100644 --- a/drivers/i2c/busses/Kconfig +++ b/drivers/i2c/busses/Kconfig @@ -1013,6 +1013,7 @@ config I2C_SIRF config I2C_SPRD tristate "Spreadtrum I2C interface" depends on I2C=y && (ARCH_SPRD || COMPILE_TEST) + depends on COMMON_CLK help If you say yes to this option, support will be included for the Spreadtrum I2C interface. 
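The pwm-fan hunk above rejects any PWM period for which MAX_PWM * (period - 1) could overflow an unsigned long. A minimal standalone sketch of that guard follows; MAX_PWM's value, the helper name and the duty-cycle formula are assumptions read off the hunk, not the driver's actual code:

#include <linux/errno.h>
#include <linux/limits.h>

#define MAX_PWM 255 /* assumed, as in pwm-fan */

static int scaled_duty(unsigned long period, unsigned long pwm_value,
		       unsigned long *duty)
{
	/*
	 * pwm_value * (period - 1) must fit in an unsigned long; a larger
	 * period would wrap silently and drive the fan far too slowly.
	 */
	if (period > ULONG_MAX / MAX_PWM + 1)
		return -EINVAL;

	*duty = pwm_value * (period - 1) / MAX_PWM;
	return 0;
}

The bound works because period <= ULONG_MAX / MAX_PWM + 1 implies MAX_PWM * (period - 1) <= ULONG_MAX, so the rejected periods are exactly the ones that could overflow.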
diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c index ae90713443fa..877fe3733a42 100644 --- a/drivers/i2c/busses/i2c-i801.c +++ b/drivers/i2c/busses/i2c-i801.c @@ -1449,7 +1449,7 @@ static int i801_add_mux(struct i801_priv *priv) /* Register GPIO descriptor lookup table */ lookup = devm_kzalloc(dev, - struct_size(lookup, table, mux_config->n_gpios), + struct_size(lookup, table, mux_config->n_gpios + 1), GFP_KERNEL); if (!lookup) return -ENOMEM; diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c index b444fbf1a262..a8e8af57e33f 100644 --- a/drivers/i2c/busses/i2c-imx.c +++ b/drivers/i2c/busses/i2c-imx.c @@ -241,6 +241,19 @@ static struct imx_i2c_hwdata vf610_i2c_hwdata = { }; +static const struct platform_device_id imx_i2c_devtype[] = { + { + .name = "imx1-i2c", + .driver_data = (kernel_ulong_t)&imx1_i2c_hwdata, + }, { + .name = "imx21-i2c", + .driver_data = (kernel_ulong_t)&imx21_i2c_hwdata, + }, { + /* sentinel */ + } +}; +MODULE_DEVICE_TABLE(platform, imx_i2c_devtype); + static const struct of_device_id i2c_imx_dt_ids[] = { { .compatible = "fsl,imx1-i2c", .data = &imx1_i2c_hwdata, }, { .compatible = "fsl,imx21-i2c", .data = &imx21_i2c_hwdata, }, @@ -1330,7 +1343,11 @@ static int i2c_imx_probe(struct platform_device *pdev) return -ENOMEM; match = device_get_match_data(&pdev->dev); - i2c_imx->hwdata = match; + if (match) + i2c_imx->hwdata = match; + else + i2c_imx->hwdata = (struct imx_i2c_hwdata *) + platform_get_device_id(pdev)->driver_data; /* Setup i2c_imx driver structure */ strlcpy(i2c_imx->adapter.name, pdev->name, sizeof(i2c_imx->adapter.name)); @@ -1498,6 +1515,7 @@ static struct platform_driver i2c_imx_driver = { .of_match_table = i2c_imx_dt_ids, .acpi_match_table = i2c_imx_acpi_ids, }, + .id_table = imx_i2c_devtype, }; static int __init i2c_adap_imx_init(void) diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c index 33de99b7bc20..0818d3e50734 100644 --- a/drivers/i2c/busses/i2c-mt65xx.c +++ b/drivers/i2c/busses/i2c-mt65xx.c @@ -38,6 +38,7 @@ #define I2C_IO_CONFIG_OPEN_DRAIN 0x0003 #define I2C_IO_CONFIG_PUSH_PULL 0x0000 #define I2C_SOFT_RST 0x0001 +#define I2C_HANDSHAKE_RST 0x0020 #define I2C_FIFO_ADDR_CLR 0x0001 #define I2C_DELAY_LEN 0x0002 #define I2C_TIME_CLR_VALUE 0x0000 @@ -45,6 +46,7 @@ #define I2C_WRRD_TRANAC_VALUE 0x0002 #define I2C_RD_TRANAC_VALUE 0x0001 #define I2C_SCL_MIS_COMP_VALUE 0x0000 +#define I2C_CHN_CLR_FLAG 0x0000 #define I2C_DMA_CON_TX 0x0000 #define I2C_DMA_CON_RX 0x0001 @@ -54,7 +56,9 @@ #define I2C_DMA_START_EN 0x0001 #define I2C_DMA_INT_FLAG_NONE 0x0000 #define I2C_DMA_CLR_FLAG 0x0000 +#define I2C_DMA_WARM_RST 0x0001 #define I2C_DMA_HARD_RST 0x0002 +#define I2C_DMA_HANDSHAKE_RST 0x0004 #define MAX_SAMPLE_CNT_DIV 8 #define MAX_STEP_CNT_DIV 64 @@ -475,11 +479,24 @@ static void mtk_i2c_init_hw(struct mtk_i2c *i2c) { u16 control_reg; - writel(I2C_DMA_HARD_RST, i2c->pdmabase + OFFSET_RST); - udelay(50); - writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST); - - mtk_i2c_writew(i2c, I2C_SOFT_RST, OFFSET_SOFTRESET); + if (i2c->dev_comp->dma_sync) { + writel(I2C_DMA_WARM_RST, i2c->pdmabase + OFFSET_RST); + udelay(10); + writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST); + udelay(10); + writel(I2C_DMA_HANDSHAKE_RST | I2C_DMA_HARD_RST, + i2c->pdmabase + OFFSET_RST); + mtk_i2c_writew(i2c, I2C_HANDSHAKE_RST | I2C_SOFT_RST, + OFFSET_SOFTRESET); + udelay(10); + writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST); + mtk_i2c_writew(i2c, I2C_CHN_CLR_FLAG, OFFSET_SOFTRESET); + } else { + 
writel(I2C_DMA_HARD_RST, i2c->pdmabase + OFFSET_RST); + udelay(50); + writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST); + mtk_i2c_writew(i2c, I2C_SOFT_RST, OFFSET_SOFTRESET); + } /* Set ioconfig */ if (i2c->use_push_pull) diff --git a/drivers/i2c/busses/i2c-octeon-core.c b/drivers/i2c/busses/i2c-octeon-core.c index d9607905dc2f..845eda70b8ca 100644 --- a/drivers/i2c/busses/i2c-octeon-core.c +++ b/drivers/i2c/busses/i2c-octeon-core.c @@ -347,7 +347,7 @@ static int octeon_i2c_read(struct octeon_i2c *i2c, int target, if (result) return result; if (recv_len && i == 0) { - if (data[i] > I2C_SMBUS_BLOCK_MAX + 1) + if (data[i] > I2C_SMBUS_BLOCK_MAX) return -EPROTO; length += data[i]; } diff --git a/drivers/i2c/busses/i2c-sprd.c b/drivers/i2c/busses/i2c-sprd.c index 19cda6742423..2917fecf6c80 100644 --- a/drivers/i2c/busses/i2c-sprd.c +++ b/drivers/i2c/busses/i2c-sprd.c @@ -72,6 +72,8 @@ /* timeout (ms) for pm runtime autosuspend */ #define SPRD_I2C_PM_TIMEOUT 1000 +/* timeout (ms) for transferring one message */ +#define I2C_XFER_TIMEOUT 1000 /* SPRD i2c data structure */ struct sprd_i2c { @@ -244,6 +246,7 @@ static int sprd_i2c_handle_msg(struct i2c_adapter *i2c_adap, struct i2c_msg *msg, bool is_last_msg) { struct sprd_i2c *i2c_dev = i2c_adap->algo_data; + unsigned long time_left; i2c_dev->msg = msg; i2c_dev->buf = msg->buf; @@ -273,7 +276,10 @@ static int sprd_i2c_handle_msg(struct i2c_adapter *i2c_adap, sprd_i2c_opt_start(i2c_dev); - wait_for_completion(&i2c_dev->complete); + time_left = wait_for_completion_timeout(&i2c_dev->complete, + msecs_to_jiffies(I2C_XFER_TIMEOUT)); + if (!time_left) + return -ETIMEDOUT; return i2c_dev->err; } diff --git a/drivers/i2c/busses/i2c-tegra-bpmp.c b/drivers/i2c/busses/i2c-tegra-bpmp.c index ec7a7e917edd..c0c7d01473f2 100644 --- a/drivers/i2c/busses/i2c-tegra-bpmp.c +++ b/drivers/i2c/busses/i2c-tegra-bpmp.c @@ -80,7 +80,7 @@ static int tegra_bpmp_xlate_flags(u16 flags, u16 *out) flags &= ~I2C_M_RECV_LEN; } - return (flags != 0) ? -EINVAL : 0; + return 0; } /** diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c index 6f08c0c3238d..8b113ae32dc7 100644 --- a/drivers/i2c/busses/i2c-tegra.c +++ b/drivers/i2c/busses/i2c-tegra.c @@ -326,6 +326,8 @@ static void i2c_writel(struct tegra_i2c_dev *i2c_dev, u32 val, unsigned int reg) /* read back register to make sure that register writes completed */ if (reg != I2C_TX_FIFO) readl_relaxed(i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, reg)); + else if (i2c_dev->is_vi) + readl_relaxed(i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, I2C_INT_STATUS)); } static u32 i2c_readl(struct tegra_i2c_dev *i2c_dev, unsigned int reg) @@ -339,6 +341,21 @@ static void i2c_writesl(struct tegra_i2c_dev *i2c_dev, void *data, unsigned int reg, unsigned int len) { writesl(i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, reg), data, len); } +static void i2c_writesl_vi(struct tegra_i2c_dev *i2c_dev, void *data, + unsigned int reg, unsigned int len) +{ + u32 *data32 = data; + + /* + * The VI I2C controller has a known hardware bug where writes get stuck + * when multiple writes happen to the TX_FIFO register in immediate + * succession. The recommended software workaround is to read an I2C + * register after each write to the TX_FIFO register to flush out the data. 
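The workaround described in the comment above amounts to turning a burst of posted MMIO writes into write/read pairs. A hedged, generic sketch of that pattern; the helper name and both register offsets are illustrative, not taken from the patch:

#include <linux/io.h>
#include <linux/types.h>

static void fifo_write_flushed(void __iomem *base, unsigned long fifo_off,
			       unsigned long status_off,
			       const u32 *data, unsigned int words)
{
	while (words--) {
		writel_relaxed(*data++, base + fifo_off);
		/* read back from the device to flush the posted write */
		readl_relaxed(base + status_off);
	}
}

This trades FIFO throughput for correctness, which is presumably why the driver only takes this path when is_vi is set.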
+ */ + while (len--) + i2c_writel(i2c_dev, *data32++, reg); +} + static void i2c_readsl(struct tegra_i2c_dev *i2c_dev, void *data, unsigned int reg, unsigned int len) { @@ -533,7 +550,7 @@ static int tegra_i2c_poll_register(struct tegra_i2c_dev *i2c_dev, void __iomem *addr = i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, reg); u32 val; - if (!i2c_dev->atomic_mode) + if (!i2c_dev->atomic_mode && !in_irq()) return readl_relaxed_poll_timeout(addr, val, !(val & mask), delay_us, timeout_us); @@ -811,7 +828,10 @@ static int tegra_i2c_fill_tx_fifo(struct tegra_i2c_dev *i2c_dev) i2c_dev->msg_buf_remaining = buf_remaining; i2c_dev->msg_buf = buf + words_to_transfer * BYTES_PER_FIFO_WORD; - i2c_writesl(i2c_dev, buf, I2C_TX_FIFO, words_to_transfer); + if (i2c_dev->is_vi) + i2c_writesl_vi(i2c_dev, buf, I2C_TX_FIFO, words_to_transfer); + else + i2c_writesl(i2c_dev, buf, I2C_TX_FIFO, words_to_transfer); buf += words_to_transfer * BYTES_PER_FIFO_WORD; } diff --git a/drivers/iio/adc/ti_am335x_adc.c b/drivers/iio/adc/ti_am335x_adc.c index b11c8c47ba2a..e946903b0993 100644 --- a/drivers/iio/adc/ti_am335x_adc.c +++ b/drivers/iio/adc/ti_am335x_adc.c @@ -397,16 +397,12 @@ static int tiadc_iio_buffered_hardware_setup(struct device *dev, ret = devm_request_threaded_irq(dev, irq, pollfunc_th, pollfunc_bh, flags, indio_dev->name, indio_dev); if (ret) - goto error_kfifo_free; + return ret; indio_dev->setup_ops = setup_ops; indio_dev->modes |= INDIO_BUFFER_SOFTWARE; return 0; - -error_kfifo_free: - iio_kfifo_free(indio_dev->buffer); - return ret; } static const char * const chan_name_ain[] = { diff --git a/drivers/iio/common/st_sensors/st_sensors_trigger.c b/drivers/iio/common/st_sensors/st_sensors_trigger.c index 0507283bd4c1..2dbd2646e44e 100644 --- a/drivers/iio/common/st_sensors/st_sensors_trigger.c +++ b/drivers/iio/common/st_sensors/st_sensors_trigger.c @@ -23,35 +23,31 @@ * @sdata: Sensor data. * * returns: - * 0 - no new samples available - * 1 - new samples available - * negative - error or unknown + * false - no new samples available or read error + * true - new samples available */ -static int st_sensors_new_samples_available(struct iio_dev *indio_dev, - struct st_sensor_data *sdata) +static bool st_sensors_new_samples_available(struct iio_dev *indio_dev, + struct st_sensor_data *sdata) { int ret, status; /* How would I know if I can't check it? */ if (!sdata->sensor_settings->drdy_irq.stat_drdy.addr) - return -EINVAL; + return true; /* No scan mask, no interrupt */ if (!indio_dev->active_scan_mask) - return 0; + return false; ret = regmap_read(sdata->regmap, sdata->sensor_settings->drdy_irq.stat_drdy.addr, &status); if (ret < 0) { dev_err(sdata->dev, "error checking samples available\n"); - return ret; + return false; } - if (status & sdata->sensor_settings->drdy_irq.stat_drdy.mask) - return 1; - - return 0; + return !!(status & sdata->sensor_settings->drdy_irq.stat_drdy.mask); } /** @@ -180,9 +176,15 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev, /* Tell the interrupt handler that we're dealing with edges */ if (irq_trig == IRQF_TRIGGER_FALLING || - irq_trig == IRQF_TRIGGER_RISING) + irq_trig == IRQF_TRIGGER_RISING) { + if (!sdata->sensor_settings->drdy_irq.stat_drdy.addr) { + dev_err(&indio_dev->dev, + "edge IRQ not supported w/o stat register.\n"); + err = -EOPNOTSUPP; + goto iio_trigger_free; + } sdata->edge_irq = true; - else + } else { /* * If we're not using edges (i.e. 
level interrupts) we * just mask off the IRQ, handle one interrupt, then @@ -190,6 +192,7 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev, * interrupt handler top half again and start over. */ irq_trig |= IRQF_ONESHOT; + } /* * If the interrupt pin is Open Drain, by definition this diff --git a/drivers/iio/dac/ad5504.c b/drivers/iio/dac/ad5504.c index 28921b62e642..e9297c25d4ef 100644 --- a/drivers/iio/dac/ad5504.c +++ b/drivers/iio/dac/ad5504.c @@ -187,9 +187,9 @@ static ssize_t ad5504_write_dac_powerdown(struct iio_dev *indio_dev, return ret; if (pwr_down) - st->pwr_down_mask |= (1 << chan->channel); - else st->pwr_down_mask &= ~(1 << chan->channel); + else + st->pwr_down_mask |= (1 << chan->channel); ret = ad5504_spi_write(st, AD5504_ADDR_CTRL, AD5504_DAC_PWRDWN_MODE(st->pwr_down_mode) | diff --git a/drivers/iio/proximity/sx9310.c b/drivers/iio/proximity/sx9310.c index a2f820997afc..37fd0b65a014 100644 --- a/drivers/iio/proximity/sx9310.c +++ b/drivers/iio/proximity/sx9310.c @@ -601,7 +601,7 @@ static int sx9310_read_thresh(struct sx9310_data *data, return ret; regval = FIELD_GET(SX9310_REG_PROX_CTRL8_9_PTHRESH_MASK, regval); - if (regval > ARRAY_SIZE(sx9310_pthresh_codes)) + if (regval >= ARRAY_SIZE(sx9310_pthresh_codes)) return -EINVAL; *val = sx9310_pthresh_codes[regval]; @@ -1305,7 +1305,8 @@ sx9310_get_default_reg(struct sx9310_data *data, int i, if (ret) break; - pos = min(max(ilog2(pos), 3), 10) - 3; + /* Powers of 2, except for a gap between 16 and 64 */ + pos = clamp(ilog2(pos), 3, 11) - (pos >= 32 ? 4 : 3); reg_def->def &= ~SX9310_REG_PROX_CTRL7_AVGPOSFILT_MASK; reg_def->def |= FIELD_PREP(SX9310_REG_PROX_CTRL7_AVGPOSFILT_MASK, pos); diff --git a/drivers/iio/temperature/mlx90632.c b/drivers/iio/temperature/mlx90632.c index 503fe54a0bb9..608ccb1d8bc8 100644 --- a/drivers/iio/temperature/mlx90632.c +++ b/drivers/iio/temperature/mlx90632.c @@ -248,6 +248,12 @@ static int mlx90632_set_meas_type(struct regmap *regmap, u8 type) if (ret < 0) return ret; + /* + * Give the mlx90632 some time to reset properly before sending a new I2C command + * if this is not done, the following I2C command(s) will not be accepted. + */ + usleep_range(150, 200); + ret = regmap_write_bits(regmap, MLX90632_REG_CONTROL, (MLX90632_CFG_MTYP_MASK | MLX90632_CFG_PWR_MASK), (MLX90632_MTYP_STATUS(type) | MLX90632_PWR_STATUS_HALT)); diff --git a/drivers/infiniband/core/cma_configfs.c b/drivers/infiniband/core/cma_configfs.c index 7f70e5a7de10..97a77ea8d3c9 100644 --- a/drivers/infiniband/core/cma_configfs.c +++ b/drivers/infiniband/core/cma_configfs.c @@ -131,8 +131,10 @@ static ssize_t default_roce_mode_store(struct config_item *item, return ret; gid_type = ib_cache_gid_parse_type_str(buf); - if (gid_type < 0) + if (gid_type < 0) { + cma_configfs_params_put(cma_dev); return -EINVAL; + } ret = cma_set_default_gid_type(cma_dev, group->port_num, gid_type); diff --git a/drivers/infiniband/core/restrack.c b/drivers/infiniband/core/restrack.c index e0a41c867002..ff1551b3cf61 100644 --- a/drivers/infiniband/core/restrack.c +++ b/drivers/infiniband/core/restrack.c @@ -254,6 +254,7 @@ void rdma_restrack_add(struct rdma_restrack_entry *res) } else { ret = xa_alloc_cyclic(&rt->xa, &res->id, res, xa_limit_32b, &rt->next_id, GFP_KERNEL); + ret = (ret < 0) ? 
ret : 0; } out: diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c index 7dab9a27a145..da2512c30ffd 100644 --- a/drivers/infiniband/core/ucma.c +++ b/drivers/infiniband/core/ucma.c @@ -95,8 +95,6 @@ struct ucma_context { u64 uid; struct list_head list; - /* sync between removal event and id destroy, protected by file mut */ - int destroying; struct work_struct close_work; }; @@ -122,7 +120,7 @@ static DEFINE_XARRAY_ALLOC(ctx_table); static DEFINE_XARRAY_ALLOC(multicast_table); static const struct file_operations ucma_fops; -static int __destroy_id(struct ucma_context *ctx); +static int ucma_destroy_private_ctx(struct ucma_context *ctx); static inline struct ucma_context *_ucma_find_context(int id, struct ucma_file *file) @@ -179,19 +177,14 @@ static void ucma_close_id(struct work_struct *work) /* once all inflight tasks are finished, we close all underlying * resources. The context is still alive till its explicit destryoing - * by its creator. + * by its creator. This puts back the xarray's reference. */ ucma_put_ctx(ctx); wait_for_completion(&ctx->comp); /* No new events will be generated after destroying the id. */ rdma_destroy_id(ctx->cm_id); - /* - * At this point ctx->ref is zero so the only place the ctx can be is in - * a uevent or in __destroy_id(). Since the former doesn't touch - * ctx->cm_id and the latter sync cancels this, there is no races with - * this store. - */ + /* Reading the cm_id without holding a positive ref is not allowed */ ctx->cm_id = NULL; } @@ -204,7 +197,6 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file) return NULL; INIT_WORK(&ctx->close_work, ucma_close_id); - refcount_set(&ctx->ref, 1); init_completion(&ctx->comp); /* So list_del() will work if we don't do ucma_finish_ctx() */ INIT_LIST_HEAD(&ctx->list); @@ -218,6 +210,13 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file) return ctx; } +static void ucma_set_ctx_cm_id(struct ucma_context *ctx, + struct rdma_cm_id *cm_id) +{ + refcount_set(&ctx->ref, 1); + ctx->cm_id = cm_id; +} + static void ucma_finish_ctx(struct ucma_context *ctx) { lockdep_assert_held(&ctx->file->mut); @@ -303,7 +302,7 @@ static int ucma_connect_event_handler(struct rdma_cm_id *cm_id, ctx = ucma_alloc_ctx(listen_ctx->file); if (!ctx) goto err_backlog; - ctx->cm_id = cm_id; + ucma_set_ctx_cm_id(ctx, cm_id); uevent = ucma_create_uevent(listen_ctx, event); if (!uevent) @@ -321,8 +320,7 @@ static int ucma_connect_event_handler(struct rdma_cm_id *cm_id, return 0; err_alloc: - xa_erase(&ctx_table, ctx->id); - kfree(ctx); + ucma_destroy_private_ctx(ctx); err_backlog: atomic_inc(&listen_ctx->backlog); /* Returning error causes the new ID to be destroyed */ @@ -356,8 +354,12 @@ static int ucma_event_handler(struct rdma_cm_id *cm_id, wake_up_interruptible(&ctx->file->poll_wait); } - if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL && !ctx->destroying) - queue_work(system_unbound_wq, &ctx->close_work); + if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL) { + xa_lock(&ctx_table); + if (xa_load(&ctx_table, ctx->id) == ctx) + queue_work(system_unbound_wq, &ctx->close_work); + xa_unlock(&ctx_table); + } return 0; } @@ -461,13 +463,12 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf, ret = PTR_ERR(cm_id); goto err1; } - ctx->cm_id = cm_id; + ucma_set_ctx_cm_id(ctx, cm_id); resp.id = ctx->id; if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof(resp))) { - xa_erase(&ctx_table, ctx->id); - __destroy_id(ctx); + ucma_destroy_private_ctx(ctx); return 
-EFAULT; } @@ -477,8 +478,7 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf, return 0; err1: - xa_erase(&ctx_table, ctx->id); - kfree(ctx); + ucma_destroy_private_ctx(ctx); return ret; } @@ -516,68 +516,73 @@ static void ucma_cleanup_mc_events(struct ucma_multicast *mc) rdma_unlock_handler(mc->ctx->cm_id); } -/* - * ucma_free_ctx is called after the underlying rdma CM-ID is destroyed. At - * this point, no new events will be reported from the hardware. However, we - * still need to cleanup the UCMA context for this ID. Specifically, there - * might be events that have not yet been consumed by the user space software. - * mutex. After that we release them as needed. - */ -static int ucma_free_ctx(struct ucma_context *ctx) +static int ucma_cleanup_ctx_events(struct ucma_context *ctx) { int events_reported; struct ucma_event *uevent, *tmp; LIST_HEAD(list); - ucma_cleanup_multicast(ctx); - - /* Cleanup events not yet reported to the user. */ + /* Cleanup events not yet reported to the user. */ mutex_lock(&ctx->file->mut); list_for_each_entry_safe(uevent, tmp, &ctx->file->event_list, list) { - if (uevent->ctx == ctx || uevent->conn_req_ctx == ctx) + if (uevent->ctx != ctx) + continue; + + if (uevent->resp.event == RDMA_CM_EVENT_CONNECT_REQUEST && + xa_cmpxchg(&ctx_table, uevent->conn_req_ctx->id, + uevent->conn_req_ctx, XA_ZERO_ENTRY, + GFP_KERNEL) == uevent->conn_req_ctx) { list_move_tail(&uevent->list, &list); + continue; + } + list_del(&uevent->list); + kfree(uevent); } list_del(&ctx->list); events_reported = ctx->events_reported; mutex_unlock(&ctx->file->mut); /* - * If this was a listening ID then any connections spawned from it - * that have not been delivered to userspace are cleaned up too. - * Must be done outside any locks. + * If this was a listening ID then any connections spawned from it that + * have not been delivered to userspace are cleaned up too. Must be done + * outside any locks. */ list_for_each_entry_safe(uevent, tmp, &list, list) { - list_del(&uevent->list); - if (uevent->resp.event == RDMA_CM_EVENT_CONNECT_REQUEST && - uevent->conn_req_ctx != ctx) - __destroy_id(uevent->conn_req_ctx); + ucma_destroy_private_ctx(uevent->conn_req_ctx); kfree(uevent); } - - mutex_destroy(&ctx->mutex); - kfree(ctx); return events_reported; } -static int __destroy_id(struct ucma_context *ctx) +/* + * When this is called the xarray must have a XA_ZERO_ENTRY in the ctx->id (i.e. + * the ctx is not public to the user). This is either because: + * - ucma_finish_ctx() hasn't been called + * - xa_cmpxchg() succeeded in removing the entry (only one thread can succeed) + */ +static int ucma_destroy_private_ctx(struct ucma_context *ctx) { + int events_reported; + /* - * If the refcount is already 0 then ucma_close_id() has already - * destroyed the cm_id, otherwise holding the refcount keeps cm_id - * valid. Prevent queue_work() from being called. + * Destroy the underlying cm_id. New work queuing is prevented now by + * the removal from the xarray. Once the work is cancelled, the ref will either + * be 0 because the work ran to completion and consumed the ref from the + * xarray, or it will be positive because we still have the ref from the + * xarray. 
This can also be 0 in cases where cm_id was never set */ - if (refcount_inc_not_zero(&ctx->ref)) { - rdma_lock_handler(ctx->cm_id); - ctx->destroying = 1; - rdma_unlock_handler(ctx->cm_id); - ucma_put_ctx(ctx); - } - cancel_work_sync(&ctx->close_work); - /* At this point it's guaranteed that there is no inflight closing task */ - if (ctx->cm_id) + if (refcount_read(&ctx->ref)) ucma_close_id(&ctx->close_work); - return ucma_free_ctx(ctx); + + events_reported = ucma_cleanup_ctx_events(ctx); + ucma_cleanup_multicast(ctx); + + WARN_ON(xa_cmpxchg(&ctx_table, ctx->id, XA_ZERO_ENTRY, NULL, + GFP_KERNEL) != NULL); + mutex_destroy(&ctx->mutex); + kfree(ctx); + return events_reported; } static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf, @@ -596,14 +601,17 @@ static ssize_t ucma_destroy_id(struct ucma_file *file, const char __user *inbuf, xa_lock(&ctx_table); ctx = _ucma_find_context(cmd.id, file); - if (!IS_ERR(ctx)) - __xa_erase(&ctx_table, ctx->id); + if (!IS_ERR(ctx)) { + if (__xa_cmpxchg(&ctx_table, ctx->id, ctx, XA_ZERO_ENTRY, + GFP_KERNEL) != ctx) + ctx = ERR_PTR(-ENOENT); + } xa_unlock(&ctx_table); if (IS_ERR(ctx)) return PTR_ERR(ctx); - resp.events_reported = __destroy_id(ctx); + resp.events_reported = ucma_destroy_private_ctx(ctx); if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof(resp))) ret = -EFAULT; @@ -1777,15 +1785,16 @@ static int ucma_close(struct inode *inode, struct file *filp) * prevented by this being a FD release function. The list_add_tail() in * ucma_connect_event_handler() can run concurrently, however it only * adds to the list *after* a listening ID. By only reading the first of - * the list, and relying on __destroy_id() to block + * the list, and relying on ucma_destroy_private_ctx() to block * ucma_connect_event_handler(), no additional locking is needed. */ while (!list_empty(&file->ctx_list)) { struct ucma_context *ctx = list_first_entry( &file->ctx_list, struct ucma_context, list); - xa_erase(&ctx_table, ctx->id); - __destroy_id(ctx); + WARN_ON(xa_cmpxchg(&ctx_table, ctx->id, ctx, XA_ZERO_ENTRY, + GFP_KERNEL) != ctx); + ucma_destroy_private_ctx(ctx); } kfree(file); return 0; diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c index 7ca4112e3e8f..917338db7ac1 100644 --- a/drivers/infiniband/core/umem.c +++ b/drivers/infiniband/core/umem.c @@ -135,7 +135,7 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem, */ if (mask) pgsz_bitmap &= GENMASK(count_trailing_zeros(mask), 0); - return rounddown_pow_of_two(pgsz_bitmap); + return pgsz_bitmap ? 
rounddown_pow_of_two(pgsz_bitmap) : 0; } EXPORT_SYMBOL(ib_umem_find_best_pgsz); diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index 3bae9ba0ead8..d26f3f3e0462 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -3956,7 +3956,7 @@ static int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev) err = set_has_smi_cap(dev); if (err) - return err; + goto err_mp; if (!mlx5_core_mp_enabled(mdev)) { for (i = 1; i <= dev->num_ports; i++) { @@ -4319,7 +4319,7 @@ static int mlx5_ib_stage_bfrag_init(struct mlx5_ib_dev *dev) err = mlx5_alloc_bfreg(dev->mdev, &dev->fp_bfreg, false, true); if (err) - mlx5_free_bfreg(dev->mdev, &dev->fp_bfreg); + mlx5_free_bfreg(dev->mdev, &dev->bfreg); return err; } diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c index bc98bd950d99..3acb5c10b155 100644 --- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c +++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c @@ -434,9 +434,9 @@ static void ocrdma_dealloc_ucontext_pd(struct ocrdma_ucontext *uctx) pr_err("%s(%d) Freeing in use pdid=0x%x.\n", __func__, dev->id, pd->id); } - kfree(uctx->cntxt_pd); uctx->cntxt_pd = NULL; _ocrdma_dealloc_pd(dev, pd); + kfree(pd); } static struct ocrdma_pd *ocrdma_get_ucontext_pd(struct ocrdma_ucontext *uctx) diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c index 38a37770c016..3705c6b8b223 100644 --- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c +++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c @@ -214,6 +214,7 @@ find_free_vf_and_create_qp_grp(struct usnic_ib_dev *us_ibdev, } usnic_uiom_free_dev_list(dev_list); + dev_list = NULL; } /* Try to find resources on an unused vf */ @@ -239,6 +240,8 @@ find_free_vf_and_create_qp_grp(struct usnic_ib_dev *us_ibdev, qp_grp_check: if (IS_ERR_OR_NULL(qp_grp)) { usnic_err("Failed to allocate qp_grp\n"); + if (usnic_ib_share_vf) + usnic_uiom_free_dev_list(dev_list); return ERR_PTR(qp_grp ? PTR_ERR(qp_grp) : -ENOMEM); } return qp_grp; diff --git a/drivers/interconnect/imx/imx.c b/drivers/interconnect/imx/imx.c index 41dba7090c2a..c770951a909c 100644 --- a/drivers/interconnect/imx/imx.c +++ b/drivers/interconnect/imx/imx.c @@ -96,9 +96,10 @@ static int imx_icc_node_init_qos(struct icc_provider *provider, return -ENODEV; } /* Allow scaling to be disabled on a per-node basis */ - if (!dn || !of_device_is_available(dn)) { + if (!of_device_is_available(dn)) { dev_warn(dev, "Missing property %s, skip scaling %s\n", adj->phandle_name, node->name); + of_node_put(dn); return 0; } diff --git a/drivers/interconnect/imx/imx8mq.c b/drivers/interconnect/imx/imx8mq.c index ba43a15aefec..d7768d3c6d8a 100644 --- a/drivers/interconnect/imx/imx8mq.c +++ b/drivers/interconnect/imx/imx8mq.c @@ -7,6 +7,7 @@ #include <linux/module.h> #include <linux/platform_device.h> +#include <linux/interconnect-provider.h> #include <dt-bindings/interconnect/imx8mq.h> #include "imx.h" @@ -94,6 +95,7 @@ static struct platform_driver imx8mq_icc_driver = { .remove = imx8mq_icc_remove, .driver = { .name = "imx8mq-interconnect", + .sync_state = icc_sync_state, }, }; diff --git a/drivers/interconnect/qcom/Kconfig b/drivers/interconnect/qcom/Kconfig index a8f93ba265f8..b3fb5b02bcf1 100644 --- a/drivers/interconnect/qcom/Kconfig +++ b/drivers/interconnect/qcom/Kconfig @@ -42,13 +42,23 @@ config INTERCONNECT_QCOM_QCS404 This is a driver for the Qualcomm Network-on-Chip on qcs404-based platforms. 
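The ib_umem_find_best_pgsz() fix just above exists because rounddown_pow_of_two() is undefined for 0; once the DMA mask has cleared every candidate bit, the function must report "no supported page size" explicitly instead. A small model of the same invariant; the helper name is made up, and fls_long() is the kernel primitive behind rounddown_pow_of_two():

#include <linux/bitops.h>

static unsigned long best_pg_size(unsigned long pgsz_bitmap)
{
	/* an empty bitmap means no page size fits; 0 signals that */
	if (!pgsz_bitmap)
		return 0;

	/* highest set bit == largest supported power-of-two page size */
	return 1UL << (fls_long(pgsz_bitmap) - 1);
}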
+config INTERCONNECT_QCOM_RPMH_POSSIBLE + tristate + default INTERCONNECT_QCOM + depends on QCOM_RPMH || (COMPILE_TEST && !QCOM_RPMH) + depends on QCOM_COMMAND_DB || (COMPILE_TEST && !QCOM_COMMAND_DB) + depends on OF || COMPILE_TEST + help + Compile-testing RPMH drivers is possible on other platforms, + but in order to avoid link failures, drivers must not be built-in + when QCOM_RPMH or QCOM_COMMAND_DB are loadable modules + config INTERCONNECT_QCOM_RPMH tristate config INTERCONNECT_QCOM_SC7180 tristate "Qualcomm SC7180 interconnect driver" - depends on INTERCONNECT_QCOM - depends on (QCOM_RPMH && QCOM_COMMAND_DB && OF) || COMPILE_TEST + depends on INTERCONNECT_QCOM_RPMH_POSSIBLE select INTERCONNECT_QCOM_RPMH select INTERCONNECT_QCOM_BCM_VOTER help @@ -57,8 +67,7 @@ config INTERCONNECT_QCOM_SC7180 config INTERCONNECT_QCOM_SDM845 tristate "Qualcomm SDM845 interconnect driver" - depends on INTERCONNECT_QCOM - depends on (QCOM_RPMH && QCOM_COMMAND_DB && OF) || COMPILE_TEST + depends on INTERCONNECT_QCOM_RPMH_POSSIBLE select INTERCONNECT_QCOM_RPMH select INTERCONNECT_QCOM_BCM_VOTER help @@ -67,8 +76,7 @@ config INTERCONNECT_QCOM_SDM845 config INTERCONNECT_QCOM_SM8150 tristate "Qualcomm SM8150 interconnect driver" - depends on INTERCONNECT_QCOM - depends on (QCOM_RPMH && QCOM_COMMAND_DB && OF) || COMPILE_TEST + depends on INTERCONNECT_QCOM_RPMH_POSSIBLE select INTERCONNECT_QCOM_RPMH select INTERCONNECT_QCOM_BCM_VOTER help @@ -77,8 +85,7 @@ config INTERCONNECT_QCOM_SM8150 config INTERCONNECT_QCOM_SM8250 tristate "Qualcomm SM8250 interconnect driver" - depends on INTERCONNECT_QCOM - depends on (QCOM_RPMH && QCOM_COMMAND_DB && OF) || COMPILE_TEST + depends on INTERCONNECT_QCOM_RPMH_POSSIBLE select INTERCONNECT_QCOM_RPMH select INTERCONNECT_QCOM_BCM_VOTER help diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c index f54cd79b43e4..6a1f7048dacc 100644 --- a/drivers/iommu/amd/init.c +++ b/drivers/iommu/amd/init.c @@ -1973,8 +1973,6 @@ static int iommu_setup_msi(struct amd_iommu *iommu) return r; } - iommu->int_enabled = true; - return 0; } @@ -2169,6 +2167,7 @@ static int iommu_init_irq(struct amd_iommu *iommu) if (ret) return ret; + iommu->int_enabled = true; enable_faults: iommu_feature_enable(iommu, CONTROL_EVT_INT_EN); diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c index 7e2c445a1fae..f0adbc48fd17 100644 --- a/drivers/iommu/amd/iommu.c +++ b/drivers/iommu/amd/iommu.c @@ -3854,6 +3854,9 @@ static int irq_remapping_select(struct irq_domain *d, struct irq_fwspec *fwspec, struct amd_iommu *iommu; int devid = -1; + if (!amd_iommu_irq_remap) + return 0; + if (x86_fwspec_is_ioapic(fwspec)) devid = get_ioapic_devid(fwspec->param[0]); else if (x86_fwspec_is_hpet(fwspec)) diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c index 5dff7ffbef11..bcda17012aee 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c @@ -196,6 +196,8 @@ static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu) set_bit(qsmmu->bypass_cbndx, smmu->context_map); + arm_smmu_cb_write(smmu, qsmmu->bypass_cbndx, ARM_SMMU_CB_SCTLR, 0); + reg = FIELD_PREP(ARM_SMMU_CBAR_TYPE, CBAR_TYPE_S1_TRANS_S2_BYPASS); arm_smmu_gr1_write(smmu, ARM_SMMU_GR1_CBAR(qsmmu->bypass_cbndx), reg); } @@ -323,7 +325,9 @@ static struct arm_smmu_device *qcom_smmu_create(struct arm_smmu_device *smmu, } static const struct of_device_id __maybe_unused qcom_smmu_impl_of_match[] = { + { .compatible = "qcom,msm8998-smmu-v2" }, { 
.compatible = "qcom,sc7180-smmu-500" }, + { .compatible = "qcom,sdm630-smmu-v2" }, { .compatible = "qcom,sdm845-smmu-500" }, { .compatible = "qcom,sm8150-smmu-500" }, { .compatible = "qcom,sm8250-smmu-500" }, diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index f0305e6aac1b..4078358ed66e 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -863,33 +863,6 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents, unsigned int cur_len = 0, max_len = dma_get_max_seg_size(dev); int i, count = 0; - /* - * The Intel graphic driver is used to assume that the returned - * sg list is not combound. This blocks the efforts of converting - * Intel IOMMU driver to dma-iommu api's. Add this quirk to make the - * device driver work and should be removed once it's fixed in i915 - * driver. - */ - if (IS_ENABLED(CONFIG_DRM_I915) && dev_is_pci(dev) && - to_pci_dev(dev)->vendor == PCI_VENDOR_ID_INTEL && - (to_pci_dev(dev)->class >> 16) == PCI_BASE_CLASS_DISPLAY) { - for_each_sg(sg, s, nents, i) { - unsigned int s_iova_off = sg_dma_address(s); - unsigned int s_length = sg_dma_len(s); - unsigned int s_iova_len = s->length; - - s->offset += s_iova_off; - s->length = s_length; - sg_dma_address(s) = dma_addr + s_iova_off; - sg_dma_len(s) = s_length; - dma_addr += s_iova_len; - - pr_info_once("sg combining disabled due to i915 driver\n"); - } - - return nents; - } - for_each_sg(sg, s, nents, i) { /* Restore this segment's original unaligned fields first */ unsigned int s_iova_off = sg_dma_address(s); diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c index b46dbfa6d0ed..004feaed3c72 100644 --- a/drivers/iommu/intel/dmar.c +++ b/drivers/iommu/intel/dmar.c @@ -1461,8 +1461,8 @@ void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr, int mask = ilog2(__roundup_pow_of_two(npages)); unsigned long align = (1ULL << (VTD_PAGE_SHIFT + mask)); - if (WARN_ON_ONCE(!ALIGN(addr, align))) - addr &= ~(align - 1); + if (WARN_ON_ONCE(!IS_ALIGNED(addr, align))) + addr = ALIGN_DOWN(addr, align); desc.qw0 = QI_EIOTLB_PASID(pasid) | QI_EIOTLB_DID(did) | diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index 788119c5b021..f665322a0991 100644 --- a/drivers/iommu/intel/iommu.c +++ b/drivers/iommu/intel/iommu.c @@ -38,7 +38,6 @@ #include <linux/dmi.h> #include <linux/pci-ats.h> #include <linux/memblock.h> -#include <linux/dma-map-ops.h> #include <linux/dma-direct.h> #include <linux/crash_dump.h> #include <linux/numa.h> @@ -719,6 +718,8 @@ static int domain_update_device_node(struct dmar_domain *domain) return nid; } +static void domain_update_iotlb(struct dmar_domain *domain); + /* Some capabilities may be different across iommus */ static void domain_update_iommu_cap(struct dmar_domain *domain) { @@ -744,6 +745,8 @@ static void domain_update_iommu_cap(struct dmar_domain *domain) domain->domain.geometry.aperture_end = __DOMAIN_MAX_ADDR(domain->gaw - 1); else domain->domain.geometry.aperture_end = __DOMAIN_MAX_ADDR(domain->gaw); + + domain_update_iotlb(domain); } struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus, @@ -1464,17 +1467,22 @@ static void domain_update_iotlb(struct dmar_domain *domain) assert_spin_locked(&device_domain_lock); - list_for_each_entry(info, &domain->devices, link) { - struct pci_dev *pdev; - - if (!info->dev || !dev_is_pci(info->dev)) - continue; - - pdev = to_pci_dev(info->dev); - if (pdev->ats_enabled) { + list_for_each_entry(info, &domain->devices, link) + if 
(info->ats_enabled) { has_iotlb_device = true; break; } + + if (!has_iotlb_device) { + struct subdev_domain_info *sinfo; + + list_for_each_entry(sinfo, &domain->subdevices, link_domain) { + info = get_domain_info(sinfo->pdev); + if (info && info->ats_enabled) { + has_iotlb_device = true; + break; + } + } } domain->has_iotlb_device = has_iotlb_device; @@ -1555,25 +1563,37 @@ static void iommu_disable_dev_iotlb(struct device_domain_info *info) #endif } +static void __iommu_flush_dev_iotlb(struct device_domain_info *info, + u64 addr, unsigned int mask) +{ + u16 sid, qdep; + + if (!info || !info->ats_enabled) + return; + + sid = info->bus << 8 | info->devfn; + qdep = info->ats_qdep; + qi_flush_dev_iotlb(info->iommu, sid, info->pfsid, + qdep, addr, mask); +} + static void iommu_flush_dev_iotlb(struct dmar_domain *domain, u64 addr, unsigned mask) { - u16 sid, qdep; unsigned long flags; struct device_domain_info *info; + struct subdev_domain_info *sinfo; if (!domain->has_iotlb_device) return; spin_lock_irqsave(&device_domain_lock, flags); - list_for_each_entry(info, &domain->devices, link) { - if (!info->ats_enabled) - continue; + list_for_each_entry(info, &domain->devices, link) + __iommu_flush_dev_iotlb(info, addr, mask); - sid = info->bus << 8 | info->devfn; - qdep = info->ats_qdep; - qi_flush_dev_iotlb(info->iommu, sid, info->pfsid, - qdep, addr, mask); + list_for_each_entry(sinfo, &domain->subdevices, link_domain) { + info = get_domain_info(sinfo->pdev); + __iommu_flush_dev_iotlb(info, addr, mask); } spin_unlock_irqrestore(&device_domain_lock, flags); } @@ -1877,6 +1897,7 @@ static struct dmar_domain *alloc_domain(int flags) domain->flags |= DOMAIN_FLAG_USE_FIRST_LEVEL; domain->has_iotlb_device = false; INIT_LIST_HEAD(&domain->devices); + INIT_LIST_HEAD(&domain->subdevices); return domain; } @@ -2547,7 +2568,7 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu, info->iommu = iommu; info->pasid_table = NULL; info->auxd_enabled = 0; - INIT_LIST_HEAD(&info->auxiliary_domains); + INIT_LIST_HEAD(&info->subdevices); if (dev && dev_is_pci(dev)) { struct pci_dev *pdev = to_pci_dev(info->dev); @@ -4475,33 +4496,61 @@ is_aux_domain(struct device *dev, struct iommu_domain *domain) domain->type == IOMMU_DOMAIN_UNMANAGED; } -static void auxiliary_link_device(struct dmar_domain *domain, - struct device *dev) +static inline struct subdev_domain_info * +lookup_subdev_info(struct dmar_domain *domain, struct device *dev) +{ + struct subdev_domain_info *sinfo; + + if (!list_empty(&domain->subdevices)) { + list_for_each_entry(sinfo, &domain->subdevices, link_domain) { + if (sinfo->pdev == dev) + return sinfo; + } + } + + return NULL; +} + +static int auxiliary_link_device(struct dmar_domain *domain, + struct device *dev) { struct device_domain_info *info = get_domain_info(dev); + struct subdev_domain_info *sinfo = lookup_subdev_info(domain, dev); assert_spin_locked(&device_domain_lock); if (WARN_ON(!info)) - return; + return -EINVAL; - domain->auxd_refcnt++; - list_add(&domain->auxd, &info->auxiliary_domains); + if (!sinfo) { + sinfo = kzalloc(sizeof(*sinfo), GFP_ATOMIC); + sinfo->domain = domain; + sinfo->pdev = dev; + list_add(&sinfo->link_phys, &info->subdevices); + list_add(&sinfo->link_domain, &domain->subdevices); + } + + return ++sinfo->users; } -static void auxiliary_unlink_device(struct dmar_domain *domain, - struct device *dev) +static int auxiliary_unlink_device(struct dmar_domain *domain, + struct device *dev) { struct device_domain_info *info = get_domain_info(dev); + 
struct subdev_domain_info *sinfo = lookup_subdev_info(domain, dev); + int ret; assert_spin_locked(&device_domain_lock); - if (WARN_ON(!info)) - return; + if (WARN_ON(!info || !sinfo || sinfo->users <= 0)) + return -EINVAL; - list_del(&domain->auxd); - domain->auxd_refcnt--; + ret = --sinfo->users; + if (!ret) { + list_del(&sinfo->link_phys); + list_del(&sinfo->link_domain); + kfree(sinfo); + } - if (!domain->auxd_refcnt && domain->default_pasid > 0) - ioasid_put(domain->default_pasid); + return ret; } static int aux_domain_add_dev(struct dmar_domain *domain, @@ -4530,6 +4579,19 @@ static int aux_domain_add_dev(struct dmar_domain *domain, } spin_lock_irqsave(&device_domain_lock, flags); + ret = auxiliary_link_device(domain, dev); + if (ret <= 0) + goto link_failed; + + /* + * Subdevices from the same physical device can be attached to the + * same domain. For such cases, only the first subdevice attachment + * needs to go through the full steps in this function. So if ret > + * 1, just goto out. + */ + if (ret > 1) + goto out; + /* * iommu->lock must be held to attach domain to iommu and setup the * pasid entry for second level translation. @@ -4548,10 +4610,9 @@ static int aux_domain_add_dev(struct dmar_domain *domain, domain->default_pasid); if (ret) goto table_failed; - spin_unlock(&iommu->lock); - - auxiliary_link_device(domain, dev); + spin_unlock(&iommu->lock); +out: spin_unlock_irqrestore(&device_domain_lock, flags); return 0; @@ -4560,8 +4621,10 @@ table_failed: domain_detach_iommu(domain, iommu); attach_failed: spin_unlock(&iommu->lock); + auxiliary_unlink_device(domain, dev); +link_failed: spin_unlock_irqrestore(&device_domain_lock, flags); - if (!domain->auxd_refcnt && domain->default_pasid > 0) + if (list_empty(&domain->subdevices) && domain->default_pasid > 0) ioasid_put(domain->default_pasid); return ret; @@ -4581,14 +4644,18 @@ static void aux_domain_remove_dev(struct dmar_domain *domain, info = get_domain_info(dev); iommu = info->iommu; - auxiliary_unlink_device(domain, dev); - - spin_lock(&iommu->lock); - intel_pasid_tear_down_entry(iommu, dev, domain->default_pasid, false); - domain_detach_iommu(domain, iommu); - spin_unlock(&iommu->lock); + if (!auxiliary_unlink_device(domain, dev)) { + spin_lock(&iommu->lock); + intel_pasid_tear_down_entry(iommu, dev, + domain->default_pasid, false); + domain_detach_iommu(domain, iommu); + spin_unlock(&iommu->lock); + } spin_unlock_irqrestore(&device_domain_lock, flags); + + if (list_empty(&domain->subdevices) && domain->default_pasid > 0) + ioasid_put(domain->default_pasid); } static int prepare_domain_attach_device(struct iommu_domain *domain, diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c index aeffda92b10b..685200a5cff0 100644 --- a/drivers/iommu/intel/irq_remapping.c +++ b/drivers/iommu/intel/irq_remapping.c @@ -1353,6 +1353,8 @@ static int intel_irq_remapping_alloc(struct irq_domain *domain, irq_data = irq_domain_get_irq_data(domain, virq + i); irq_cfg = irqd_cfg(irq_data); if (!irq_data || !irq_cfg) { + if (!i) + kfree(data); ret = -EINVAL; goto out_free_data; } diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c index 4fa248b98031..18a9f05df407 100644 --- a/drivers/iommu/intel/svm.c +++ b/drivers/iommu/intel/svm.c @@ -118,8 +118,10 @@ void intel_svm_check(struct intel_iommu *iommu) iommu->flags |= VTD_FLAG_SVM_CAPABLE; } -static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_dev *sdev, - unsigned long address, unsigned long pages, int ih) +static void 
__flush_svm_range_dev(struct intel_svm *svm, + struct intel_svm_dev *sdev, + unsigned long address, + unsigned long pages, int ih) { struct qi_desc desc; @@ -142,7 +144,7 @@ static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_d } desc.qw2 = 0; desc.qw3 = 0; - qi_submit_sync(svm->iommu, &desc, 1, 0); + qi_submit_sync(sdev->iommu, &desc, 1, 0); if (sdev->dev_iotlb) { desc.qw0 = QI_DEV_EIOTLB_PASID(svm->pasid) | @@ -166,7 +168,23 @@ static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_d } desc.qw2 = 0; desc.qw3 = 0; - qi_submit_sync(svm->iommu, &desc, 1, 0); + qi_submit_sync(sdev->iommu, &desc, 1, 0); + } +} + +static void intel_flush_svm_range_dev(struct intel_svm *svm, + struct intel_svm_dev *sdev, + unsigned long address, + unsigned long pages, int ih) +{ + unsigned long shift = ilog2(__roundup_pow_of_two(pages)); + unsigned long align = (1ULL << (VTD_PAGE_SHIFT + shift)); + unsigned long start = ALIGN_DOWN(address, align); + unsigned long end = ALIGN(address + (pages << VTD_PAGE_SHIFT), align); + + while (start < end) { + __flush_svm_range_dev(svm, sdev, start, align >> VTD_PAGE_SHIFT, ih); + start += align; } } @@ -211,7 +229,7 @@ static void intel_mm_release(struct mmu_notifier *mn, struct mm_struct *mm) */ rcu_read_lock(); list_for_each_entry_rcu(sdev, &svm->devs, list) - intel_pasid_tear_down_entry(svm->iommu, sdev->dev, + intel_pasid_tear_down_entry(sdev->iommu, sdev->dev, svm->pasid, true); rcu_read_unlock(); @@ -281,6 +299,7 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev, struct dmar_domain *dmar_domain; struct device_domain_info *info; struct intel_svm *svm = NULL; + unsigned long iflags; int ret = 0; if (WARN_ON(!iommu) || !data) @@ -363,6 +382,7 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev, } sdev->dev = dev; sdev->sid = PCI_DEVID(info->bus, info->devfn); + sdev->iommu = iommu; /* Only count users if device has aux domains */ if (iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX)) @@ -381,12 +401,12 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev, * each bind of a new device even with an existing PASID, we need to * call the nested mode setup function here. */ - spin_lock(&iommu->lock); + spin_lock_irqsave(&iommu->lock, iflags); ret = intel_pasid_setup_nested(iommu, dev, (pgd_t *)(uintptr_t)data->gpgd, data->hpasid, &data->vendor.vtd, dmar_domain, data->addr_width); - spin_unlock(&iommu->lock); + spin_unlock_irqrestore(&iommu->lock, iflags); if (ret) { dev_err_ratelimited(dev, "Failed to set up PASID %llu in nested mode, Err %d\n", data->hpasid, ret); @@ -486,6 +506,7 @@ intel_svm_bind_mm(struct device *dev, unsigned int flags, struct device_domain_info *info; struct intel_svm_dev *sdev; struct intel_svm *svm = NULL; + unsigned long iflags; int pasid_max; int ret; @@ -546,6 +567,7 @@ intel_svm_bind_mm(struct device *dev, unsigned int flags, goto out; } sdev->dev = dev; + sdev->iommu = iommu; ret = intel_iommu_enable_pasid(iommu, dev); if (ret) { @@ -575,7 +597,6 @@ intel_svm_bind_mm(struct device *dev, unsigned int flags, kfree(sdev); goto out; } - svm->iommu = iommu; if (pasid_max > intel_pasid_max_id) pasid_max = intel_pasid_max_id; @@ -605,14 +626,14 @@ intel_svm_bind_mm(struct device *dev, unsigned int flags, } } - spin_lock(&iommu->lock); + spin_lock_irqsave(&iommu->lock, iflags); ret = intel_pasid_setup_first_level(iommu, dev, mm ? mm->pgd : init_mm.pgd, svm->pasid, FLPT_DEFAULT_DID, (mm ? 
0 : PASID_FLAG_SUPERVISOR_MODE) | (cpu_feature_enabled(X86_FEATURE_LA57) ? PASID_FLAG_FL5LP : 0)); - spin_unlock(&iommu->lock); + spin_unlock_irqrestore(&iommu->lock, iflags); if (ret) { if (mm) mmu_notifier_unregister(&svm->notifier, mm); @@ -632,14 +653,14 @@ intel_svm_bind_mm(struct device *dev, unsigned int flags, * Binding a new device with existing PASID, need to setup * the PASID entry. */ - spin_lock(&iommu->lock); + spin_lock_irqsave(&iommu->lock, iflags); ret = intel_pasid_setup_first_level(iommu, dev, mm ? mm->pgd : init_mm.pgd, svm->pasid, FLPT_DEFAULT_DID, (mm ? 0 : PASID_FLAG_SUPERVISOR_MODE) | (cpu_feature_enabled(X86_FEATURE_LA57) ? PASID_FLAG_FL5LP : 0)); - spin_unlock(&iommu->lock); + spin_unlock_irqrestore(&iommu->lock, iflags); if (ret) { kfree(sdev); goto out; diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c index 4bb3293ae4d7..d20b8b333d30 100644 --- a/drivers/iommu/iova.c +++ b/drivers/iommu/iova.c @@ -358,7 +358,7 @@ static void private_free_iova(struct iova_domain *iovad, struct iova *iova) * @iovad: - iova domain in question. * @pfn: - page frame number * This function finds and returns an iova belonging to the - * given doamin which matches the given pfn. + * given domain which matches the given pfn. */ struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn) { @@ -601,7 +601,7 @@ void queue_iova(struct iova_domain *iovad, EXPORT_SYMBOL_GPL(queue_iova); /** - * put_iova_domain - destroys the iova doamin + * put_iova_domain - destroys the iova domain * @iovad: - iova domain in question. * All the iova's in that domain are destroyed. */ @@ -712,9 +712,9 @@ EXPORT_SYMBOL_GPL(reserve_iova); /** * copy_reserved_iova - copies the reserved between domains - * @from: - source doamin from where to copy + * @from: - source domain from where to copy * @to: - destination domin where to copy - * This function copies reserved iova's from one doamin to + * This function copies reserved iova's from one domain to * other. */ void diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig index 94920a51c628..b147f22a78f4 100644 --- a/drivers/irqchip/Kconfig +++ b/drivers/irqchip/Kconfig @@ -493,8 +493,9 @@ config TI_SCI_INTA_IRQCHIP TI System Controller, say Y here. Otherwise, say N. 
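Stepping back to the intel_flush_svm_range_dev() wrapper added in the SVM hunks above: it illustrates a common invalidation pattern where the queued-invalidation hardware takes one power-of-two-aligned block per descriptor, so an arbitrary (address, pages) range is widened to an aligned granule and flushed granule by granule. A sketch under stated assumptions; flush_block() is a hypothetical stand-in for the descriptor submission, and PAGE_SHIFT stands in for VTD_PAGE_SHIFT:

#include <linux/align.h>
#include <linux/log2.h>

void flush_block(unsigned long addr, unsigned long npages); /* hypothetical */

static void flush_range(unsigned long address, unsigned long pages)
{
	/* pages must be non-zero; widen to the next power of two */
	unsigned long shift = ilog2(roundup_pow_of_two(pages));
	unsigned long align = 1UL << (PAGE_SHIFT + shift);
	unsigned long start = ALIGN_DOWN(address, align);
	unsigned long end = ALIGN(address + (pages << PAGE_SHIFT), align);

	while (start < end) {
		flush_block(start, align >> PAGE_SHIFT);
		start += align;
	}
}

Widening can over-invalidate, since the aligned granule may cover more than was asked for; that is safe for a TLB but costs extra refills.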
config TI_PRUSS_INTC - tristate "TI PRU-ICSS Interrupt Controller" - depends on ARCH_DAVINCI || SOC_AM33XX || SOC_AM43XX || SOC_DRA7XX || ARCH_KEYSTONE || ARCH_K3 + tristate + depends on TI_PRUSS + default TI_PRUSS select IRQ_DOMAIN help This enables support for the PRU-ICSS Local Interrupt Controller diff --git a/drivers/irqchip/irq-bcm2836.c b/drivers/irqchip/irq-bcm2836.c index 5f5eb8877c41..25c9a9c06e41 100644 --- a/drivers/irqchip/irq-bcm2836.c +++ b/drivers/irqchip/irq-bcm2836.c @@ -167,7 +167,7 @@ static void bcm2836_arm_irqchip_handle_ipi(struct irq_desc *desc) chained_irq_exit(chip, desc); } -static void bcm2836_arm_irqchip_ipi_eoi(struct irq_data *d) +static void bcm2836_arm_irqchip_ipi_ack(struct irq_data *d) { int cpu = smp_processor_id(); @@ -195,7 +195,7 @@ static struct irq_chip bcm2836_arm_irqchip_ipi = { .name = "IPI", .irq_mask = bcm2836_arm_irqchip_dummy_op, .irq_unmask = bcm2836_arm_irqchip_dummy_op, - .irq_eoi = bcm2836_arm_irqchip_ipi_eoi, + .irq_ack = bcm2836_arm_irqchip_ipi_ack, .ipi_send_mask = bcm2836_arm_irqchip_ipi_send_mask, }; diff --git a/drivers/irqchip/irq-loongson-liointc.c b/drivers/irqchip/irq-loongson-liointc.c index 9ed1bc473663..09b91b81851c 100644 --- a/drivers/irqchip/irq-loongson-liointc.c +++ b/drivers/irqchip/irq-loongson-liointc.c @@ -142,8 +142,8 @@ static void liointc_resume(struct irq_chip_generic *gc) static const char * const parent_names[] = {"int0", "int1", "int2", "int3"}; -int __init liointc_of_init(struct device_node *node, - struct device_node *parent) +static int __init liointc_of_init(struct device_node *node, + struct device_node *parent) { struct irq_chip_generic *gc; struct irq_domain *domain; diff --git a/drivers/irqchip/irq-mips-cpu.c b/drivers/irqchip/irq-mips-cpu.c index 95d4fd8f7a96..0bbb0b2d0dd5 100644 --- a/drivers/irqchip/irq-mips-cpu.c +++ b/drivers/irqchip/irq-mips-cpu.c @@ -197,6 +197,13 @@ static int mips_cpu_ipi_alloc(struct irq_domain *domain, unsigned int virq, if (ret) return ret; + ret = irq_domain_set_hwirq_and_chip(domain->parent, virq + i, hwirq, + &mips_mt_cpu_irq_controller, + NULL); + + if (ret) + return ret; + ret = irq_set_irq_type(virq + i, IRQ_TYPE_LEVEL_HIGH); if (ret) return ret; diff --git a/drivers/irqchip/irq-sl28cpld.c b/drivers/irqchip/irq-sl28cpld.c index 0aa50d025ef6..fbb354413ffa 100644 --- a/drivers/irqchip/irq-sl28cpld.c +++ b/drivers/irqchip/irq-sl28cpld.c @@ -66,7 +66,7 @@ static int sl28cpld_intc_probe(struct platform_device *pdev) irqchip->chip.num_regs = 1; irqchip->chip.status_base = base + INTC_IP; irqchip->chip.mask_base = base + INTC_IE; - irqchip->chip.mask_invert = true, + irqchip->chip.mask_invert = true; irqchip->chip.ack_base = base + INTC_IP; return devm_regmap_add_irq_chip_fwnode(dev, dev_fwnode(dev), diff --git a/drivers/isdn/mISDN/Kconfig b/drivers/isdn/mISDN/Kconfig index 26cf0ac9c4ad..c9a53c222472 100644 --- a/drivers/isdn/mISDN/Kconfig +++ b/drivers/isdn/mISDN/Kconfig @@ -13,6 +13,7 @@ if MISDN != n config MISDN_DSP tristate "Digital Audio Processing of transparent data" depends on MISDN + select BITREVERSE help Enable support for digital audio processing capability. diff --git a/drivers/lightnvm/Kconfig b/drivers/lightnvm/Kconfig index 8f39f9ba5c80..4c2ce210c123 100644 --- a/drivers/lightnvm/Kconfig +++ b/drivers/lightnvm/Kconfig @@ -19,6 +19,7 @@ if NVM config NVM_PBLK tristate "Physical Block Device Open-Channel SSD target" + select CRC32 help Allows an open-channel SSD to be exposed as a block device to the host. 
The target assumes the device exposes raw flash and must be diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c index c1bcac71008c..28ddcaa5358b 100644 --- a/drivers/lightnvm/core.c +++ b/drivers/lightnvm/core.c @@ -844,11 +844,10 @@ static int nvm_bb_chunk_sense(struct nvm_dev *dev, struct ppa_addr ppa) rqd.ppa_addr = generic_to_dev_addr(dev, ppa); ret = nvm_submit_io_sync_raw(dev, &rqd); + __free_page(page); if (ret) return ret; - __free_page(page); - return rqd.error; } diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig index b7e2d9666614..9e44c09f6410 100644 --- a/drivers/md/Kconfig +++ b/drivers/md/Kconfig @@ -605,6 +605,7 @@ config DM_INTEGRITY select BLK_DEV_INTEGRITY select DM_BUFIO select CRYPTO + select CRYPTO_SKCIPHER select ASYNC_XOR help This device-mapper target emulates a block device that has @@ -622,6 +623,7 @@ config DM_ZONED tristate "Drive-managed zoned block device target support" depends on BLK_DEV_DM depends on BLK_DEV_ZONED + select CRC32 help This device-mapper target takes a host-managed or host-aware zoned block device and exposes most of its capacity as a regular block diff --git a/drivers/md/bcache/features.c b/drivers/md/bcache/features.c index 6469223f0b77..d636b7b2d070 100644 --- a/drivers/md/bcache/features.c +++ b/drivers/md/bcache/features.c @@ -17,7 +17,7 @@ struct feature { }; static struct feature feature_list[] = { - {BCH_FEATURE_INCOMPAT, BCH_FEATURE_INCOMPAT_LARGE_BUCKET, + {BCH_FEATURE_INCOMPAT, BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE, "large_bucket"}, {0, 0, 0 }, }; diff --git a/drivers/md/bcache/features.h b/drivers/md/bcache/features.h index a1653c478041..84fc2c0f0101 100644 --- a/drivers/md/bcache/features.h +++ b/drivers/md/bcache/features.h @@ -13,11 +13,15 @@ /* Feature set definition */ /* Incompat feature set */ -#define BCH_FEATURE_INCOMPAT_LARGE_BUCKET 0x0001 /* 32bit bucket size */ +/* 32bit bucket size, obsoleted */ +#define BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET 0x0001 +/* real bucket size is (1 << bucket_size) */ +#define BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE 0x0002 -#define BCH_FEATURE_COMPAT_SUUP 0 -#define BCH_FEATURE_RO_COMPAT_SUUP 0 -#define BCH_FEATURE_INCOMPAT_SUUP BCH_FEATURE_INCOMPAT_LARGE_BUCKET +#define BCH_FEATURE_COMPAT_SUPP 0 +#define BCH_FEATURE_RO_COMPAT_SUPP 0 +#define BCH_FEATURE_INCOMPAT_SUPP (BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET| \ + BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE) #define BCH_HAS_COMPAT_FEATURE(sb, mask) \ ((sb)->feature_compat & (mask)) @@ -77,7 +81,23 @@ static inline void bch_clear_feature_##name(struct cache_sb *sb) \ ~BCH##_FEATURE_INCOMPAT_##flagname; \ } -BCH_FEATURE_INCOMPAT_FUNCS(large_bucket, LARGE_BUCKET); +BCH_FEATURE_INCOMPAT_FUNCS(obso_large_bucket, OBSO_LARGE_BUCKET); +BCH_FEATURE_INCOMPAT_FUNCS(large_bucket, LOG_LARGE_BUCKET_SIZE); + +static inline bool bch_has_unknown_compat_features(struct cache_sb *sb) +{ + return ((sb->feature_compat & ~BCH_FEATURE_COMPAT_SUPP) != 0); +} + +static inline bool bch_has_unknown_ro_compat_features(struct cache_sb *sb) +{ + return ((sb->feature_ro_compat & ~BCH_FEATURE_RO_COMPAT_SUPP) != 0); +} + +static inline bool bch_has_unknown_incompat_features(struct cache_sb *sb) +{ + return ((sb->feature_incompat & ~BCH_FEATURE_INCOMPAT_SUPP) != 0); +} int bch_print_cache_set_feature_compat(struct cache_set *c, char *buf, int size); int bch_print_cache_set_feature_ro_compat(struct cache_set *c, char *buf, int size); diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c index a4752ac410dc..2047a9cccdb5 100644 --- 
a/drivers/md/bcache/super.c +++ b/drivers/md/bcache/super.c @@ -64,9 +64,25 @@ static unsigned int get_bucket_size(struct cache_sb *sb, struct cache_sb_disk *s { unsigned int bucket_size = le16_to_cpu(s->bucket_size); - if (sb->version >= BCACHE_SB_VERSION_CDEV_WITH_FEATURES && - bch_has_feature_large_bucket(sb)) - bucket_size |= le16_to_cpu(s->bucket_size_hi) << 16; + if (sb->version >= BCACHE_SB_VERSION_CDEV_WITH_FEATURES) { + if (bch_has_feature_large_bucket(sb)) { + unsigned int max, order; + + max = sizeof(unsigned int) * BITS_PER_BYTE - 1; + order = le16_to_cpu(s->bucket_size); + /* + * bcache tool will make sure the overflow won't + * happen, an error message here is enough. + */ + if (order > max) + pr_err("Bucket size (1 << %u) overflows\n", + order); + bucket_size = 1 << order; + } else if (bch_has_feature_obso_large_bucket(sb)) { + bucket_size += + le16_to_cpu(s->obso_bucket_size_hi) << 16; + } + } return bucket_size; } @@ -228,6 +244,20 @@ static const char *read_super(struct cache_sb *sb, struct block_device *bdev, sb->feature_compat = le64_to_cpu(s->feature_compat); sb->feature_incompat = le64_to_cpu(s->feature_incompat); sb->feature_ro_compat = le64_to_cpu(s->feature_ro_compat); + + /* Check incompatible features */ + err = "Unsupported compatible feature found"; + if (bch_has_unknown_compat_features(sb)) + goto err; + + err = "Unsupported read-only compatible feature found"; + if (bch_has_unknown_ro_compat_features(sb)) + goto err; + + err = "Unsupported incompatible feature found"; + if (bch_has_unknown_incompat_features(sb)) + goto err; + err = read_super_common(sb, bdev, s); if (err) goto err; @@ -1302,6 +1332,12 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c, bcache_device_link(&dc->disk, c, "bdev"); atomic_inc(&c->attached_dev_nr); + if (bch_has_feature_obso_large_bucket(&(c->cache->sb))) { + pr_err("The obsoleted large bucket layout is unsupported, setting the bcache device to read-only\n"); + pr_err("Please update to the latest bcache-tools to create the cache device\n"); + set_disk_ro(dc->disk.disk, 1); + } + /* Allow the writeback thread to proceed */ up_write(&dc->writeback_lock); @@ -1524,6 +1560,12 @@ static int flash_dev_run(struct cache_set *c, struct uuid_entry *u) bcache_device_link(d, c, "volume"); + if (bch_has_feature_obso_large_bucket(&c->cache->sb)) { + pr_err("The obsoleted large bucket layout is unsupported, setting the bcache device to read-only\n"); + pr_err("Please update to the latest bcache-tools to create the cache device\n"); + set_disk_ro(d->disk, 1); + } + return 0; err: kobject_put(&d->kobj); @@ -2083,6 +2125,9 @@ static int run_cache_set(struct cache_set *c) c->cache->sb.last_mount = (u32)ktime_get_real_seconds(); bcache_write_super(c); + if (bch_has_feature_obso_large_bucket(&c->cache->sb)) + pr_err("Detected obsoleted large bucket layout, all attached bcache devices will be read-only\n"); + list_for_each_entry_safe(dc, t, &uncached_devices, list) bch_cached_dev_attach(dc, c, NULL); @@ -2644,8 +2689,8 @@ static ssize_t bch_pending_bdevs_cleanup(struct kobject *k, } list_for_each_entry_safe(pdev, tpdev, &pending_devs, list) { + char *pdev_set_uuid = pdev->dc->sb.set_uuid; list_for_each_entry_safe(c, tc, &bch_cache_sets, list) { - char *pdev_set_uuid = pdev->dc->sb.set_uuid; char *set_uuid = c->set_uuid; if (!memcmp(pdev_set_uuid, set_uuid, 16)) { diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c index 9c1a86bde658..fce4cbf9529d 100644 --- a/drivers/md/dm-bufio.c +++ b/drivers/md/dm-bufio.c @@ -1534,6 +1534,12 @@
sector_t dm_bufio_get_device_size(struct dm_bufio_client *c) } EXPORT_SYMBOL_GPL(dm_bufio_get_device_size); +struct dm_io_client *dm_bufio_get_dm_io_client(struct dm_bufio_client *c) +{ + return c->dm_io; +} +EXPORT_SYMBOL_GPL(dm_bufio_get_dm_io_client); + sector_t dm_bufio_get_block_number(struct dm_buffer *b) { return b->block; diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index 53791138d78b..5a55617a08e6 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -1454,13 +1454,16 @@ static int crypt_convert_block_skcipher(struct crypt_config *cc, static void kcryptd_async_done(struct crypto_async_request *async_req, int error); -static void crypt_alloc_req_skcipher(struct crypt_config *cc, +static int crypt_alloc_req_skcipher(struct crypt_config *cc, struct convert_context *ctx) { unsigned key_index = ctx->cc_sector & (cc->tfms_count - 1); - if (!ctx->r.req) - ctx->r.req = mempool_alloc(&cc->req_pool, GFP_NOIO); + if (!ctx->r.req) { + ctx->r.req = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO); + if (!ctx->r.req) + return -ENOMEM; + } skcipher_request_set_tfm(ctx->r.req, cc->cipher_tfm.tfms[key_index]); @@ -1471,13 +1474,18 @@ static void crypt_alloc_req_skcipher(struct crypt_config *cc, skcipher_request_set_callback(ctx->r.req, CRYPTO_TFM_REQ_MAY_BACKLOG, kcryptd_async_done, dmreq_of_req(cc, ctx->r.req)); + + return 0; } -static void crypt_alloc_req_aead(struct crypt_config *cc, +static int crypt_alloc_req_aead(struct crypt_config *cc, struct convert_context *ctx) { - if (!ctx->r.req_aead) - ctx->r.req_aead = mempool_alloc(&cc->req_pool, GFP_NOIO); + if (!ctx->r.req_aead) { + ctx->r.req_aead = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO); + if (!ctx->r.req_aead) + return -ENOMEM; + } aead_request_set_tfm(ctx->r.req_aead, cc->cipher_tfm.tfms_aead[0]); @@ -1488,15 +1496,17 @@ static void crypt_alloc_req_aead(struct crypt_config *cc, aead_request_set_callback(ctx->r.req_aead, CRYPTO_TFM_REQ_MAY_BACKLOG, kcryptd_async_done, dmreq_of_req(cc, ctx->r.req_aead)); + + return 0; } -static void crypt_alloc_req(struct crypt_config *cc, +static int crypt_alloc_req(struct crypt_config *cc, struct convert_context *ctx) { if (crypt_integrity_aead(cc)) - crypt_alloc_req_aead(cc, ctx); + return crypt_alloc_req_aead(cc, ctx); else - crypt_alloc_req_skcipher(cc, ctx); + return crypt_alloc_req_skcipher(cc, ctx); } static void crypt_free_req_skcipher(struct crypt_config *cc, @@ -1529,17 +1539,28 @@ static void crypt_free_req(struct crypt_config *cc, void *req, struct bio *base_ * Encrypt / decrypt data from one bio to another one (can be the same one) */ static blk_status_t crypt_convert(struct crypt_config *cc, - struct convert_context *ctx, bool atomic) + struct convert_context *ctx, bool atomic, bool reset_pending) { unsigned int tag_offset = 0; unsigned int sector_step = cc->sector_size >> SECTOR_SHIFT; int r; - atomic_set(&ctx->cc_pending, 1); + /* + * if reset_pending is set we are dealing with the bio for the first time, + * else we're continuing to work on the previous bio, so don't mess with + * the cc_pending counter + */ + if (reset_pending) + atomic_set(&ctx->cc_pending, 1); while (ctx->iter_in.bi_size && ctx->iter_out.bi_size) { - crypt_alloc_req(cc, ctx); + r = crypt_alloc_req(cc, ctx); + if (r) { + complete(&ctx->restart); + return BLK_STS_DEV_RESOURCE; + } + atomic_inc(&ctx->cc_pending); if (crypt_integrity_aead(cc)) @@ -1553,7 +1574,25 @@ static blk_status_t crypt_convert(struct crypt_config *cc, * but the driver request 
queue is full, let's wait. */ case -EBUSY: - wait_for_completion(&ctx->restart); + if (in_interrupt()) { + if (try_wait_for_completion(&ctx->restart)) { + /* + * we don't have to block to wait for completion, + * so proceed + */ + } else { + /* + * we can't wait for completion without blocking + * exit and continue processing in a workqueue + */ + ctx->r.req = NULL; + ctx->cc_sector += sector_step; + tag_offset++; + return BLK_STS_DEV_RESOURCE; + } + } else { + wait_for_completion(&ctx->restart); + } reinit_completion(&ctx->restart); fallthrough; /* @@ -1691,6 +1730,12 @@ static void crypt_inc_pending(struct dm_crypt_io *io) atomic_inc(&io->io_pending); } +static void kcryptd_io_bio_endio(struct work_struct *work) +{ + struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work); + bio_endio(io->base_bio); +} + /* * One of the bios was finished. Check for completion of * the whole request and correctly clean up the buffer. @@ -1713,7 +1758,23 @@ static void crypt_dec_pending(struct dm_crypt_io *io) kfree(io->integrity_metadata); base_bio->bi_status = error; - bio_endio(base_bio); + + /* + * If we are running this function from our tasklet, + * we can't call bio_endio() here, because it will call + * clone_endio() from dm.c, which in turn will + * free the current struct dm_crypt_io structure with + * our tasklet. In this case we need to delay bio_endio() + * execution to after the tasklet is done and dequeued. + */ + if (tasklet_trylock(&io->tasklet)) { + tasklet_unlock(&io->tasklet); + bio_endio(base_bio); + return; + } + + INIT_WORK(&io->work, kcryptd_io_bio_endio); + queue_work(cc->io_queue, &io->work); } /* @@ -1945,6 +2006,37 @@ static bool kcryptd_crypt_write_inline(struct crypt_config *cc, } } +static void kcryptd_crypt_write_continue(struct work_struct *work) +{ + struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work); + struct crypt_config *cc = io->cc; + struct convert_context *ctx = &io->ctx; + int crypt_finished; + sector_t sector = io->sector; + blk_status_t r; + + wait_for_completion(&ctx->restart); + reinit_completion(&ctx->restart); + + r = crypt_convert(cc, &io->ctx, true, false); + if (r) + io->error = r; + crypt_finished = atomic_dec_and_test(&ctx->cc_pending); + if (!crypt_finished && kcryptd_crypt_write_inline(cc, ctx)) { + /* Wait for completion signaled by kcryptd_async_done() */ + wait_for_completion(&ctx->restart); + crypt_finished = 1; + } + + /* Encryption was already finished, submit io now */ + if (crypt_finished) { + kcryptd_crypt_write_io_submit(io, 0); + io->sector = sector; + } + + crypt_dec_pending(io); +} + static void kcryptd_crypt_write_convert(struct dm_crypt_io *io) { struct crypt_config *cc = io->cc; @@ -1973,7 +2065,17 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io) crypt_inc_pending(io); r = crypt_convert(cc, ctx, - test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags)); + test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags), true); + /* + * Crypto API backlogged the request, because its queue was full + * and we're in softirq context, so continue from a workqueue + * (TODO: is it actually possible to be in softirq in the write path?) 
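The dm-crypt rework above follows one rule throughout: when crypt_convert() runs in atomic (softirq) context it must never sleep, so request allocation drops to GFP_ATOMIC and any point that would otherwise block returns BLK_STS_DEV_RESOURCE, after which the caller resumes the conversion from a workqueue where sleeping is allowed. A condensed sketch of that hand-off, with hypothetical demo_ names rather than the dm-crypt symbols:

#include <linux/workqueue.h>
#include <linux/completion.h>
#include <linux/blk_types.h>

struct demo_io {
	struct work_struct work;
	struct completion restart;
	struct workqueue_struct *wq;
};

/* Assumed to return BLK_STS_DEV_RESOURCE instead of sleeping when
 * called from atomic context; reset_pending mirrors the new flag. */
blk_status_t demo_convert(struct demo_io *io, bool reset_pending);

static void demo_continue(struct work_struct *work)
{
	struct demo_io *io = container_of(work, struct demo_io, work);

	wait_for_completion(&io->restart);	/* may sleep here */
	reinit_completion(&io->restart);
	demo_convert(io, false);	/* keep the progress already made */
}

static void demo_submit(struct demo_io *io)
{
	if (demo_convert(io, true) == BLK_STS_DEV_RESOURCE) {
		/* Could not block in softirq; finish in process context. */
		INIT_WORK(&io->work, demo_continue);
		queue_work(io->wq, &io->work);
	}
}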
+ */ + if (r == BLK_STS_DEV_RESOURCE) { + INIT_WORK(&io->work, kcryptd_crypt_write_continue); + queue_work(cc->crypt_queue, &io->work); + return; + } if (r) io->error = r; crypt_finished = atomic_dec_and_test(&ctx->cc_pending); @@ -1998,6 +2100,25 @@ static void kcryptd_crypt_read_done(struct dm_crypt_io *io) crypt_dec_pending(io); } +static void kcryptd_crypt_read_continue(struct work_struct *work) +{ + struct dm_crypt_io *io = container_of(work, struct dm_crypt_io, work); + struct crypt_config *cc = io->cc; + blk_status_t r; + + wait_for_completion(&io->ctx.restart); + reinit_completion(&io->ctx.restart); + + r = crypt_convert(cc, &io->ctx, true, false); + if (r) + io->error = r; + + if (atomic_dec_and_test(&io->ctx.cc_pending)) + kcryptd_crypt_read_done(io); + + crypt_dec_pending(io); +} + static void kcryptd_crypt_read_convert(struct dm_crypt_io *io) { struct crypt_config *cc = io->cc; @@ -2009,7 +2130,16 @@ static void kcryptd_crypt_read_convert(struct dm_crypt_io *io) io->sector); r = crypt_convert(cc, &io->ctx, - test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags)); + test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags), true); + /* + * Crypto API backlogged the request, because its queue was full + * and we're in softirq context, so continue from a workqueue + */ + if (r == BLK_STS_DEV_RESOURCE) { + INIT_WORK(&io->work, kcryptd_crypt_read_continue); + queue_work(cc->crypt_queue, &io->work); + return; + } if (r) io->error = r; @@ -2091,8 +2221,12 @@ static void kcryptd_queue_crypt(struct dm_crypt_io *io) if ((bio_data_dir(io->base_bio) == READ && test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags)) || (bio_data_dir(io->base_bio) == WRITE && test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags))) { - if (in_irq()) { - /* Crypto API's "skcipher_walk_first() refuses to work in hard IRQ context */ + /* + * in_irq(): Crypto API's skcipher_walk_first() refuses to work in hard IRQ context. + * irqs_disabled(): the kernel may run some IO completion from the idle thread, but + * it is being executed with irqs disabled. 
+ */ + if (in_irq() || irqs_disabled()) { tasklet_init(&io->tasklet, kcryptd_crypt_tasklet, (unsigned long)&io->work); tasklet_schedule(&io->tasklet); return; diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c index 5a7a1b90e671..b64fede032dc 100644 --- a/drivers/md/dm-integrity.c +++ b/drivers/md/dm-integrity.c @@ -257,8 +257,9 @@ struct dm_integrity_c { bool journal_uptodate; bool just_formatted; bool recalculate_flag; - bool fix_padding; bool discard; + bool fix_padding; + bool legacy_recalculate; struct alg_spec internal_hash_alg; struct alg_spec journal_crypt_alg; @@ -386,6 +387,14 @@ static int dm_integrity_failed(struct dm_integrity_c *ic) return READ_ONCE(ic->failed); } +static bool dm_integrity_disable_recalculate(struct dm_integrity_c *ic) +{ + if ((ic->internal_hash_alg.key || ic->journal_mac_alg.key) && + !ic->legacy_recalculate) + return true; + return false; +} + static commit_id_t dm_integrity_commit_id(struct dm_integrity_c *ic, unsigned i, unsigned j, unsigned char seq) { @@ -1379,12 +1388,52 @@ thorough_test: #undef MAY_BE_HASH } -static void dm_integrity_flush_buffers(struct dm_integrity_c *ic) +struct flush_request { + struct dm_io_request io_req; + struct dm_io_region io_reg; + struct dm_integrity_c *ic; + struct completion comp; +}; + +static void flush_notify(unsigned long error, void *fr_) +{ + struct flush_request *fr = fr_; + if (unlikely(error != 0)) + dm_integrity_io_error(fr->ic, "flushing disk cache", -EIO); + complete(&fr->comp); +} + +static void dm_integrity_flush_buffers(struct dm_integrity_c *ic, bool flush_data) { int r; + + struct flush_request fr; + + if (!ic->meta_dev) + flush_data = false; + if (flush_data) { + fr.io_req.bi_op = REQ_OP_WRITE, + fr.io_req.bi_op_flags = REQ_PREFLUSH | REQ_SYNC, + fr.io_req.mem.type = DM_IO_KMEM, + fr.io_req.mem.ptr.addr = NULL, + fr.io_req.notify.fn = flush_notify, + fr.io_req.notify.context = &fr; + fr.io_req.client = dm_bufio_get_dm_io_client(ic->bufio), + fr.io_reg.bdev = ic->dev->bdev, + fr.io_reg.sector = 0, + fr.io_reg.count = 0, + fr.ic = ic; + init_completion(&fr.comp); + r = dm_io(&fr.io_req, 1, &fr.io_reg, NULL); + BUG_ON(r); + } + r = dm_bufio_write_dirty_buffers(ic->bufio); if (unlikely(r)) dm_integrity_io_error(ic, "writing tags", r); + + if (flush_data) + wait_for_completion(&fr.comp); } static void sleep_on_endio_wait(struct dm_integrity_c *ic) @@ -2110,7 +2159,7 @@ offload_to_thread: if (unlikely(dio->op == REQ_OP_DISCARD) && likely(ic->mode != 'D')) { integrity_metadata(&dio->work); - dm_integrity_flush_buffers(ic); + dm_integrity_flush_buffers(ic, false); dio->in_flight = (atomic_t)ATOMIC_INIT(1); dio->completion = NULL; @@ -2195,7 +2244,7 @@ static void integrity_commit(struct work_struct *w) flushes = bio_list_get(&ic->flush_bio_list); if (unlikely(ic->mode != 'J')) { spin_unlock_irq(&ic->endio_wait.lock); - dm_integrity_flush_buffers(ic); + dm_integrity_flush_buffers(ic, true); goto release_flush_bios; } @@ -2409,7 +2458,7 @@ skip_io: complete_journal_op(&comp); wait_for_completion_io(&comp.comp); - dm_integrity_flush_buffers(ic); + dm_integrity_flush_buffers(ic, true); } static void integrity_writer(struct work_struct *w) @@ -2451,7 +2500,7 @@ static void recalc_write_super(struct dm_integrity_c *ic) { int r; - dm_integrity_flush_buffers(ic); + dm_integrity_flush_buffers(ic, false); if (dm_integrity_failed(ic)) return; @@ -2654,7 +2703,7 @@ static void bitmap_flush_work(struct work_struct *work) unsigned long limit; struct bio *bio; - dm_integrity_flush_buffers(ic); +
dm_integrity_flush_buffers(ic, false); range.logical_sector = 0; range.n_sectors = ic->provided_data_sectors; @@ -2663,9 +2712,7 @@ static void bitmap_flush_work(struct work_struct *work) add_new_range_and_wait(ic, &range); spin_unlock_irq(&ic->endio_wait.lock); - dm_integrity_flush_buffers(ic); - if (ic->meta_dev) - blkdev_issue_flush(ic->dev->bdev, GFP_NOIO); + dm_integrity_flush_buffers(ic, true); limit = ic->provided_data_sectors; if (ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING)) { @@ -2934,11 +2981,11 @@ static void dm_integrity_postsuspend(struct dm_target *ti) if (ic->meta_dev) queue_work(ic->writer_wq, &ic->writer_work); drain_workqueue(ic->writer_wq); - dm_integrity_flush_buffers(ic); + dm_integrity_flush_buffers(ic, true); } if (ic->mode == 'B') { - dm_integrity_flush_buffers(ic); + dm_integrity_flush_buffers(ic, true); #if 1 /* set to 0 to test bitmap replay code */ init_journal(ic, 0, ic->journal_sections, 0); @@ -3102,6 +3149,7 @@ static void dm_integrity_status(struct dm_target *ti, status_type_t type, arg_count += !!ic->journal_crypt_alg.alg_string; arg_count += !!ic->journal_mac_alg.alg_string; arg_count += (ic->sb->flags & cpu_to_le32(SB_FLAG_FIXED_PADDING)) != 0; + arg_count += ic->legacy_recalculate; DMEMIT("%s %llu %u %c %u", ic->dev->name, ic->start, ic->tag_size, ic->mode, arg_count); if (ic->meta_dev) @@ -3125,6 +3173,8 @@ static void dm_integrity_status(struct dm_target *ti, status_type_t type, } if ((ic->sb->flags & cpu_to_le32(SB_FLAG_FIXED_PADDING)) != 0) DMEMIT(" fix_padding"); + if (ic->legacy_recalculate) + DMEMIT(" legacy_recalculate"); #define EMIT_ALG(a, n) \ do { \ @@ -3754,7 +3804,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv) unsigned extra_args; struct dm_arg_set as; static const struct dm_arg _args[] = { - {0, 9, "Invalid number of feature args"}, + {0, 16, "Invalid number of feature args"}, }; unsigned journal_sectors, interleave_sectors, buffer_sectors, journal_watermark, sync_msec; bool should_write_sb; @@ -3902,6 +3952,8 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv) ic->discard = true; } else if (!strcmp(opt_string, "fix_padding")) { ic->fix_padding = true; + } else if (!strcmp(opt_string, "legacy_recalculate")) { + ic->legacy_recalculate = true; } else { r = -EINVAL; ti->error = "Invalid argument"; @@ -4197,6 +4249,20 @@ try_smaller_buffer: r = -ENOMEM; goto bad; } + } else { + if (ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING)) { + ti->error = "Recalculate can only be specified with internal_hash"; + r = -EINVAL; + goto bad; + } + } + + if (ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING) && + le64_to_cpu(ic->sb->recalc_sector) < ic->provided_data_sectors && + dm_integrity_disable_recalculate(ic)) { + ti->error = "Recalculating with HMAC is disabled for security reasons - if you really need it, use the argument \"legacy_recalculate\""; + r = -EOPNOTSUPP; + goto bad; } ic->bufio = dm_bufio_client_create(ic->meta_dev ? ic->meta_dev->bdev : ic->dev->bdev, diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c index 23c38777e8f6..cab12b2251ba 100644 --- a/drivers/md/dm-raid.c +++ b/drivers/md/dm-raid.c @@ -3729,10 +3729,10 @@ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits) blk_limits_io_opt(limits, chunk_size_bytes * mddev_data_stripes(rs)); /* - * RAID1 and RAID10 personalities require bio splitting, - * RAID0/4/5/6 don't and process large discard bios properly. 
+ * RAID0 and RAID10 personalities require bio splitting, + * RAID1/4/5/6 don't and process large discard bios properly. */ - if (rs_is_raid1(rs) || rs_is_raid10(rs)) { + if (rs_is_raid0(rs) || rs_is_raid10(rs)) { limits->discard_granularity = chunk_size_bytes; limits->max_discard_sectors = rs->md.chunk_sectors; } diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c index 4668b2cd98f4..11890db71f3f 100644 --- a/drivers/md/dm-snap.c +++ b/drivers/md/dm-snap.c @@ -141,6 +141,11 @@ struct dm_snapshot { * for them to be committed. */ struct bio_list bios_queued_during_merge; + + /* + * Flush data after merge. + */ + struct bio flush_bio; }; /* @@ -1121,6 +1126,17 @@ shut: static void error_bios(struct bio *bio); +static int flush_data(struct dm_snapshot *s) +{ + struct bio *flush_bio = &s->flush_bio; + + bio_reset(flush_bio); + bio_set_dev(flush_bio, s->origin->bdev); + flush_bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH; + + return submit_bio_wait(flush_bio); +} + static void merge_callback(int read_err, unsigned long write_err, void *context) { struct dm_snapshot *s = context; @@ -1134,6 +1150,11 @@ static void merge_callback(int read_err, unsigned long write_err, void *context) goto shut; } + if (flush_data(s) < 0) { + DMERR("Flush after merge failed: shutting down merge"); + goto shut; + } + if (s->store->type->commit_merge(s->store, s->num_merging_chunks) < 0) { DMERR("Write error in exception store: shutting down merge"); @@ -1318,6 +1339,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv) s->first_merging_chunk = 0; s->num_merging_chunks = 0; bio_list_init(&s->bios_queued_during_merge); + bio_init(&s->flush_bio, NULL, 0); /* Allocate hash table for COW data */ if (init_hash_tables(s)) { @@ -1504,6 +1526,8 @@ static void snapshot_dtr(struct dm_target *ti) dm_exception_store_destroy(s->store); + bio_uninit(&s->flush_bio); + dm_put_device(ti, s->cow); dm_put_device(ti, s->origin); diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c index 188f41287f18..4acf2342f7ad 100644 --- a/drivers/md/dm-table.c +++ b/drivers/md/dm-table.c @@ -363,14 +363,23 @@ int dm_get_device(struct dm_target *ti, const char *path, fmode_t mode, { int r; dev_t dev; + unsigned int major, minor; + char dummy; struct dm_dev_internal *dd; struct dm_table *t = ti->table; BUG_ON(!t); - dev = dm_get_dev_t(path); - if (!dev) - return -ENODEV; + if (sscanf(path, "%u:%u%c", &major, &minor, &dummy) == 2) { + /* Extract the major/minor numbers */ + dev = MKDEV(major, minor); + if (MAJOR(dev) != major || MINOR(dev) != minor) + return -EOVERFLOW; + } else { + dev = dm_get_dev_t(path); + if (!dev) + return -ENODEV; + } dd = find_device(&t->devices, dev); if (!dd) { diff --git a/drivers/md/dm.c b/drivers/md/dm.c index b3c3c8b4cb42..7bac564f3faa 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -562,7 +562,7 @@ static int dm_blk_ioctl(struct block_device *bdev, fmode_t mode, * subset of the parent bdev; require extra privileges. 
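The dm-table hunk above teaches dm_get_device() to accept a literal "major:minor" string in addition to a device path. Two details carry the validation: sscanf() with a trailing %c must match exactly two conversions (a third match means trailing garbage after the minor number), and round-tripping the numbers through MKDEV()/MAJOR()/MINOR() rejects values that the dev_t encoding would truncate. A self-contained sketch of the same idiom (demo_parse_devt is an invented helper, not kernel API):

#include <linux/kdev_t.h>
#include <linux/kernel.h>
#include <linux/errno.h>

static int demo_parse_devt(const char *path, dev_t *out)
{
	unsigned int major, minor;
	char dummy;
	dev_t dev;

	if (sscanf(path, "%u:%u%c", &major, &minor, &dummy) != 2)
		return -EINVAL;		/* not a clean major:minor pair */

	dev = MKDEV(major, minor);
	if (MAJOR(dev) != major || MINOR(dev) != minor)
		return -EOVERFLOW;	/* does not fit the dev_t encoding */

	*out = dev;
	return 0;
}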
*/ if (!capable(CAP_SYS_RAWIO)) { - DMWARN_LIMIT( + DMDEBUG_LIMIT( "%s: sending ioctl %x to DM device without required privilege.", current->comm, cmd); r = -ENOIOCTLCMD; diff --git a/drivers/md/md.c b/drivers/md/md.c index ca409428b4fc..04384452a7ab 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -639,8 +639,10 @@ static void md_submit_flush_data(struct work_struct *ws) * could wait for this and below md_handle_request could wait for those * bios because of suspend check */ + spin_lock_irq(&mddev->lock); mddev->prev_flush_start = mddev->start_flush; mddev->flush_bio = NULL; + spin_unlock_irq(&mddev->lock); wake_up(&mddev->sb_wait); if (bio->bi_iter.bi_size == 0) { diff --git a/drivers/misc/cardreader/rtsx_pcr.c b/drivers/misc/cardreader/rtsx_pcr.c index 2aa6648fa41f..5a491d2cd1ae 100644 --- a/drivers/misc/cardreader/rtsx_pcr.c +++ b/drivers/misc/cardreader/rtsx_pcr.c @@ -1512,6 +1512,7 @@ static int rtsx_pci_probe(struct pci_dev *pcidev, struct pcr_handle *handle; u32 base, len; int ret, i, bar = 0; + u8 val; dev_dbg(&(pcidev->dev), ": Realtek PCI-E Card Reader found at %s [%04x:%04x] (rev %x)\n", @@ -1577,7 +1578,11 @@ static int rtsx_pci_probe(struct pci_dev *pcidev, pcr->host_cmds_addr = pcr->rtsx_resv_buf_addr; pcr->host_sg_tbl_ptr = pcr->rtsx_resv_buf + HOST_CMDS_BUF_LEN; pcr->host_sg_tbl_addr = pcr->rtsx_resv_buf_addr + HOST_CMDS_BUF_LEN; - + rtsx_pci_read_register(pcr, ASPM_FORCE_CTL, &val); + if (val & FORCE_ASPM_CTL0 && val & FORCE_ASPM_CTL1) + pcr->aspm_enabled = false; + else + pcr->aspm_enabled = true; pcr->card_inserted = 0; pcr->card_removed = 0; INIT_DELAYED_WORK(&pcr->carddet_work, rtsx_pci_card_detect); diff --git a/drivers/misc/habanalabs/common/command_submission.c b/drivers/misc/habanalabs/common/command_submission.c index beb482310a58..b2b3d2b0f808 100644 --- a/drivers/misc/habanalabs/common/command_submission.c +++ b/drivers/misc/habanalabs/common/command_submission.c @@ -472,8 +472,11 @@ static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx, cntr = &hdev->aggregated_cs_counters; cs = kzalloc(sizeof(*cs), GFP_ATOMIC); - if (!cs) + if (!cs) { + atomic64_inc(&ctx->cs_counters.out_of_mem_drop_cnt); + atomic64_inc(&cntr->out_of_mem_drop_cnt); return -ENOMEM; + } cs->ctx = ctx; cs->submitted = false; @@ -486,6 +489,8 @@ static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx, cs_cmpl = kmalloc(sizeof(*cs_cmpl), GFP_ATOMIC); if (!cs_cmpl) { + atomic64_inc(&ctx->cs_counters.out_of_mem_drop_cnt); + atomic64_inc(&cntr->out_of_mem_drop_cnt); rc = -ENOMEM; goto free_cs; } @@ -513,6 +518,8 @@ static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx, cs->jobs_in_queue_cnt = kcalloc(hdev->asic_prop.max_queues, sizeof(*cs->jobs_in_queue_cnt), GFP_ATOMIC); if (!cs->jobs_in_queue_cnt) { + atomic64_inc(&ctx->cs_counters.out_of_mem_drop_cnt); + atomic64_inc(&cntr->out_of_mem_drop_cnt); rc = -ENOMEM; goto free_fence; } @@ -562,7 +569,7 @@ void hl_cs_rollback_all(struct hl_device *hdev) for (i = 0 ; i < hdev->asic_prop.completion_queues_count ; i++) flush_workqueue(hdev->cq_wq[i]); - /* Make sure we don't have leftovers in the H/W queues mirror list */ + /* Make sure we don't have leftovers in the CS mirror list */ list_for_each_entry_safe(cs, tmp, &hdev->cs_mirror_list, mirror_node) { cs_get(cs); cs->aborted = true; @@ -764,11 +771,14 @@ static int hl_cs_sanity_checks(struct hl_fpriv *hpriv, union hl_cs_args *args) static int hl_cs_copy_chunk_array(struct hl_device *hdev, struct hl_cs_chunk **cs_chunk_array, - void __user *chunks, u32 num_chunks) + 
void __user *chunks, u32 num_chunks, + struct hl_ctx *ctx) { u32 size_to_copy; if (num_chunks > HL_MAX_JOBS_PER_CS) { + atomic64_inc(&ctx->cs_counters.validation_drop_cnt); + atomic64_inc(&hdev->aggregated_cs_counters.validation_drop_cnt); dev_err(hdev->dev, "Number of chunks can NOT be larger than %d\n", HL_MAX_JOBS_PER_CS); @@ -777,11 +787,16 @@ static int hl_cs_copy_chunk_array(struct hl_device *hdev, *cs_chunk_array = kmalloc_array(num_chunks, sizeof(**cs_chunk_array), GFP_ATOMIC); - if (!*cs_chunk_array) + if (!*cs_chunk_array) { + atomic64_inc(&ctx->cs_counters.out_of_mem_drop_cnt); + atomic64_inc(&hdev->aggregated_cs_counters.out_of_mem_drop_cnt); return -ENOMEM; + } size_to_copy = num_chunks * sizeof(struct hl_cs_chunk); if (copy_from_user(*cs_chunk_array, chunks, size_to_copy)) { + atomic64_inc(&ctx->cs_counters.validation_drop_cnt); + atomic64_inc(&hdev->aggregated_cs_counters.validation_drop_cnt); dev_err(hdev->dev, "Failed to copy cs chunk array from user\n"); kfree(*cs_chunk_array); return -EFAULT; @@ -797,6 +812,7 @@ static int cs_ioctl_default(struct hl_fpriv *hpriv, void __user *chunks, struct hl_device *hdev = hpriv->hdev; struct hl_cs_chunk *cs_chunk_array; struct hl_cs_counters_atomic *cntr; + struct hl_ctx *ctx = hpriv->ctx; struct hl_cs_job *job; struct hl_cs *cs; struct hl_cb *cb; @@ -805,7 +821,8 @@ static int cs_ioctl_default(struct hl_fpriv *hpriv, void __user *chunks, cntr = &hdev->aggregated_cs_counters; *cs_seq = ULLONG_MAX; - rc = hl_cs_copy_chunk_array(hdev, &cs_chunk_array, chunks, num_chunks); + rc = hl_cs_copy_chunk_array(hdev, &cs_chunk_array, chunks, num_chunks, + hpriv->ctx); if (rc) goto out; @@ -832,8 +849,8 @@ static int cs_ioctl_default(struct hl_fpriv *hpriv, void __user *chunks, rc = validate_queue_index(hdev, chunk, &queue_type, &is_kernel_allocated_cb); if (rc) { - atomic64_inc(&hpriv->ctx->cs_counters.parsing_drop_cnt); - atomic64_inc(&cntr->parsing_drop_cnt); + atomic64_inc(&ctx->cs_counters.validation_drop_cnt); + atomic64_inc(&cntr->validation_drop_cnt); goto free_cs_object; } @@ -841,8 +858,8 @@ static int cs_ioctl_default(struct hl_fpriv *hpriv, void __user *chunks, cb = get_cb_from_cs_chunk(hdev, &hpriv->cb_mgr, chunk); if (!cb) { atomic64_inc( - &hpriv->ctx->cs_counters.parsing_drop_cnt); - atomic64_inc(&cntr->parsing_drop_cnt); + &ctx->cs_counters.validation_drop_cnt); + atomic64_inc(&cntr->validation_drop_cnt); rc = -EINVAL; goto free_cs_object; } @@ -856,8 +873,7 @@ static int cs_ioctl_default(struct hl_fpriv *hpriv, void __user *chunks, job = hl_cs_allocate_job(hdev, queue_type, is_kernel_allocated_cb); if (!job) { - atomic64_inc( - &hpriv->ctx->cs_counters.out_of_mem_drop_cnt); + atomic64_inc(&ctx->cs_counters.out_of_mem_drop_cnt); atomic64_inc(&cntr->out_of_mem_drop_cnt); dev_err(hdev->dev, "Failed to allocate a new job\n"); rc = -ENOMEM; @@ -891,7 +907,7 @@ static int cs_ioctl_default(struct hl_fpriv *hpriv, void __user *chunks, rc = cs_parser(hpriv, job); if (rc) { - atomic64_inc(&hpriv->ctx->cs_counters.parsing_drop_cnt); + atomic64_inc(&ctx->cs_counters.parsing_drop_cnt); atomic64_inc(&cntr->parsing_drop_cnt); dev_err(hdev->dev, "Failed to parse JOB %d.%llu.%d, err %d, rejecting the CS\n", @@ -901,8 +917,8 @@ static int cs_ioctl_default(struct hl_fpriv *hpriv, void __user *chunks, } if (int_queues_only) { - atomic64_inc(&hpriv->ctx->cs_counters.parsing_drop_cnt); - atomic64_inc(&cntr->parsing_drop_cnt); + atomic64_inc(&ctx->cs_counters.validation_drop_cnt); + atomic64_inc(&cntr->validation_drop_cnt); dev_err(hdev->dev, "Reject 
CS %d.%llu because only internal queues jobs are present\n", cs->ctx->asid, cs->sequence); @@ -1042,7 +1058,7 @@ out: } static int cs_ioctl_extract_signal_seq(struct hl_device *hdev, - struct hl_cs_chunk *chunk, u64 *signal_seq) + struct hl_cs_chunk *chunk, u64 *signal_seq, struct hl_ctx *ctx) { u64 *signal_seq_arr = NULL; u32 size_to_copy, signal_seq_arr_len; @@ -1052,6 +1068,8 @@ static int cs_ioctl_extract_signal_seq(struct hl_device *hdev, /* currently only one signal seq is supported */ if (signal_seq_arr_len != 1) { + atomic64_inc(&ctx->cs_counters.validation_drop_cnt); + atomic64_inc(&hdev->aggregated_cs_counters.validation_drop_cnt); dev_err(hdev->dev, "Wait for signal CS supports only one signal CS seq\n"); return -EINVAL; @@ -1060,13 +1078,18 @@ static int cs_ioctl_extract_signal_seq(struct hl_device *hdev, signal_seq_arr = kmalloc_array(signal_seq_arr_len, sizeof(*signal_seq_arr), GFP_ATOMIC); - if (!signal_seq_arr) + if (!signal_seq_arr) { + atomic64_inc(&ctx->cs_counters.out_of_mem_drop_cnt); + atomic64_inc(&hdev->aggregated_cs_counters.out_of_mem_drop_cnt); return -ENOMEM; + } size_to_copy = chunk->num_signal_seq_arr * sizeof(*signal_seq_arr); if (copy_from_user(signal_seq_arr, u64_to_user_ptr(chunk->signal_seq_arr), size_to_copy)) { + atomic64_inc(&ctx->cs_counters.validation_drop_cnt); + atomic64_inc(&hdev->aggregated_cs_counters.validation_drop_cnt); dev_err(hdev->dev, "Failed to copy signal seq array from user\n"); rc = -EFAULT; @@ -1153,6 +1176,7 @@ static int cs_ioctl_signal_wait(struct hl_fpriv *hpriv, enum hl_cs_type cs_type, struct hl_device *hdev = hpriv->hdev; struct hl_cs_compl *sig_waitcs_cmpl; u32 q_idx, collective_engine_id = 0; + struct hl_cs_counters_atomic *cntr; struct hl_fence *sig_fence = NULL; struct hl_ctx *ctx = hpriv->ctx; enum hl_queue_type q_type; @@ -1160,9 +1184,11 @@ static int cs_ioctl_signal_wait(struct hl_fpriv *hpriv, enum hl_cs_type cs_type, u64 signal_seq; int rc; + cntr = &hdev->aggregated_cs_counters; *cs_seq = ULLONG_MAX; - rc = hl_cs_copy_chunk_array(hdev, &cs_chunk_array, chunks, num_chunks); + rc = hl_cs_copy_chunk_array(hdev, &cs_chunk_array, chunks, num_chunks, + ctx); if (rc) goto out; @@ -1170,6 +1196,8 @@ static int cs_ioctl_signal_wait(struct hl_fpriv *hpriv, enum hl_cs_type cs_type, chunk = &cs_chunk_array[0]; if (chunk->queue_index >= hdev->asic_prop.max_queues) { + atomic64_inc(&ctx->cs_counters.validation_drop_cnt); + atomic64_inc(&cntr->validation_drop_cnt); dev_err(hdev->dev, "Queue index %d is invalid\n", chunk->queue_index); rc = -EINVAL; @@ -1181,6 +1209,8 @@ static int cs_ioctl_signal_wait(struct hl_fpriv *hpriv, enum hl_cs_type cs_type, q_type = hw_queue_prop->type; if (!hw_queue_prop->supports_sync_stream) { + atomic64_inc(&ctx->cs_counters.validation_drop_cnt); + atomic64_inc(&cntr->validation_drop_cnt); dev_err(hdev->dev, "Queue index %d does not support sync stream operations\n", q_idx); @@ -1190,6 +1220,8 @@ static int cs_ioctl_signal_wait(struct hl_fpriv *hpriv, enum hl_cs_type cs_type, if (cs_type == CS_TYPE_COLLECTIVE_WAIT) { if (!(hw_queue_prop->collective_mode == HL_COLLECTIVE_MASTER)) { + atomic64_inc(&ctx->cs_counters.validation_drop_cnt); + atomic64_inc(&cntr->validation_drop_cnt); dev_err(hdev->dev, "Queue index %d is invalid\n", q_idx); rc = -EINVAL; @@ -1200,12 +1232,14 @@ static int cs_ioctl_signal_wait(struct hl_fpriv *hpriv, enum hl_cs_type cs_type, } if (cs_type == CS_TYPE_WAIT || cs_type == CS_TYPE_COLLECTIVE_WAIT) { - rc = cs_ioctl_extract_signal_seq(hdev, chunk, &signal_seq); + rc = 
cs_ioctl_extract_signal_seq(hdev, chunk, &signal_seq, ctx); if (rc) goto free_cs_chunk_array; sig_fence = hl_ctx_get_fence(ctx, signal_seq); if (IS_ERR(sig_fence)) { + atomic64_inc(&ctx->cs_counters.validation_drop_cnt); + atomic64_inc(&cntr->validation_drop_cnt); dev_err(hdev->dev, "Failed to get signal CS with seq 0x%llx\n", signal_seq); @@ -1223,6 +1257,8 @@ static int cs_ioctl_signal_wait(struct hl_fpriv *hpriv, enum hl_cs_type cs_type, container_of(sig_fence, struct hl_cs_compl, base_fence); if (sig_waitcs_cmpl->type != CS_TYPE_SIGNAL) { + atomic64_inc(&ctx->cs_counters.validation_drop_cnt); + atomic64_inc(&cntr->validation_drop_cnt); dev_err(hdev->dev, "CS seq 0x%llx is not of a signal CS\n", signal_seq); @@ -1270,8 +1306,11 @@ static int cs_ioctl_signal_wait(struct hl_fpriv *hpriv, enum hl_cs_type cs_type, else if (cs_type == CS_TYPE_COLLECTIVE_WAIT) rc = hdev->asic_funcs->collective_wait_create_jobs(hdev, ctx, cs, q_idx, collective_engine_id); - else + else { + atomic64_inc(&ctx->cs_counters.validation_drop_cnt); + atomic64_inc(&cntr->validation_drop_cnt); rc = -EINVAL; + } if (rc) goto free_cs_object; diff --git a/drivers/misc/habanalabs/common/device.c b/drivers/misc/habanalabs/common/device.c index 5871162a8442..69d04eca767f 100644 --- a/drivers/misc/habanalabs/common/device.c +++ b/drivers/misc/habanalabs/common/device.c @@ -17,12 +17,12 @@ enum hl_device_status hl_device_status(struct hl_device *hdev) { enum hl_device_status status; - if (hdev->disabled) - status = HL_DEVICE_STATUS_MALFUNCTION; - else if (atomic_read(&hdev->in_reset)) + if (atomic_read(&hdev->in_reset)) status = HL_DEVICE_STATUS_IN_RESET; else if (hdev->needs_reset) status = HL_DEVICE_STATUS_NEEDS_RESET; + else if (hdev->disabled) + status = HL_DEVICE_STATUS_MALFUNCTION; else status = HL_DEVICE_STATUS_OPERATIONAL; @@ -1037,7 +1037,7 @@ kill_processes: if (hard_reset) { /* Release kernel context */ - if (hl_ctx_put(hdev->kernel_ctx) == 1) + if (hdev->kernel_ctx && hl_ctx_put(hdev->kernel_ctx) == 1) hdev->kernel_ctx = NULL; hl_vm_fini(hdev); hl_mmu_fini(hdev); @@ -1092,6 +1092,7 @@ kill_processes: GFP_KERNEL); if (!hdev->kernel_ctx) { rc = -ENOMEM; + hl_mmu_fini(hdev); goto out_err; } @@ -1103,6 +1104,7 @@ kill_processes: "failed to init kernel ctx in hard reset\n"); kfree(hdev->kernel_ctx); hdev->kernel_ctx = NULL; + hl_mmu_fini(hdev); goto out_err; } } @@ -1485,6 +1487,15 @@ void hl_device_fini(struct hl_device *hdev) } } + /* Disable PCI access from device F/W so it won't send us additional + * interrupts. We disable MSI/MSI-X at the halt_engines function and we + * can't have the F/W sending us interrupts after that. We need to + * disable the access here because if the device is marked disabled, the + * message won't be sent.
Also, in case of heartbeat, the device CPU is + * marked as disabled so this message won't be sent + */ + hl_fw_send_pci_access_msg(hdev, CPUCP_PACKET_DISABLE_PCI_ACCESS); + /* Mark device as disabled */ hdev->disabled = true; diff --git a/drivers/misc/habanalabs/common/firmware_if.c b/drivers/misc/habanalabs/common/firmware_if.c index 0e1c629e9800..c9a12980218a 100644 --- a/drivers/misc/habanalabs/common/firmware_if.c +++ b/drivers/misc/habanalabs/common/firmware_if.c @@ -402,6 +402,10 @@ int hl_fw_cpucp_pci_counters_get(struct hl_device *hdev, } counters->rx_throughput = result; + memset(&pkt, 0, sizeof(pkt)); + pkt.ctl = cpu_to_le32(CPUCP_PACKET_PCIE_THROUGHPUT_GET << + CPUCP_PKT_CTL_OPCODE_SHIFT); + /* Fetch PCI tx counter */ pkt.index = cpu_to_le32(cpucp_pcie_throughput_tx); rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), @@ -414,6 +418,7 @@ int hl_fw_cpucp_pci_counters_get(struct hl_device *hdev, counters->tx_throughput = result; /* Fetch PCI replay counter */ + memset(&pkt, 0, sizeof(pkt)); pkt.ctl = cpu_to_le32(CPUCP_PACKET_PCIE_REPLAY_CNT_GET << CPUCP_PKT_CTL_OPCODE_SHIFT); @@ -627,25 +632,38 @@ int hl_fw_read_preboot_status(struct hl_device *hdev, u32 cpu_boot_status_reg, security_status = RREG32(cpu_security_boot_status_reg); /* We read security status multiple times during boot: - * 1. preboot - we check if fw security feature is supported - * 2. boot cpu - we get boot cpu security status - * 3. FW application - we get FW application security status + * 1. preboot - a. Check whether the security status bits are valid + * b. Check whether fw security is enabled + * c. Check whether hard reset is done by preboot + * 2. boot cpu - a. Fetch boot cpu security status + * b. Check whether hard reset is done by boot cpu + * 3. FW application - a. Fetch fw application security status + * b. Check whether hard reset is done by fw app * * Preboot: * Check security status bit (CPU_BOOT_DEV_STS0_ENABLED), if it is set * check security enabled bit (CPU_BOOT_DEV_STS0_SECURITY_EN) */ if (security_status & CPU_BOOT_DEV_STS0_ENABLED) { - hdev->asic_prop.fw_security_status_valid = 1; - prop->fw_security_disabled = - !(security_status & CPU_BOOT_DEV_STS0_SECURITY_EN); + prop->fw_security_status_valid = 1; + + if (security_status & CPU_BOOT_DEV_STS0_SECURITY_EN) + prop->fw_security_disabled = false; + else + prop->fw_security_disabled = true; + + if (security_status & CPU_BOOT_DEV_STS0_FW_HARD_RST_EN) + prop->hard_reset_done_by_fw = true; } else { - hdev->asic_prop.fw_security_status_valid = 0; + prop->fw_security_status_valid = 0; prop->fw_security_disabled = true; } + dev_dbg(hdev->dev, "Firmware preboot hard-reset is %s\n", + prop->hard_reset_done_by_fw ? "enabled" : "disabled"); + dev_info(hdev->dev, "firmware-level security is %s\n", - prop->fw_security_disabled ? "disabled" : "enabled"); + prop->fw_security_disabled ?
"disabled" : "enabled"); return 0; } @@ -655,6 +673,7 @@ int hl_fw_init_cpu(struct hl_device *hdev, u32 cpu_boot_status_reg, u32 cpu_security_boot_status_reg, u32 boot_err0_reg, bool skip_bmc, u32 cpu_timeout, u32 boot_fit_timeout) { + struct asic_fixed_properties *prop = &hdev->asic_prop; u32 status; int rc; @@ -723,11 +742,22 @@ int hl_fw_init_cpu(struct hl_device *hdev, u32 cpu_boot_status_reg, /* Read U-Boot version now in case we will later fail */ hdev->asic_funcs->read_device_fw_version(hdev, FW_COMP_UBOOT); + /* Clear reset status since we need to read it again from boot CPU */ + prop->hard_reset_done_by_fw = false; + /* Read boot_cpu security bits */ - if (hdev->asic_prop.fw_security_status_valid) - hdev->asic_prop.fw_boot_cpu_security_map = + if (prop->fw_security_status_valid) { + prop->fw_boot_cpu_security_map = RREG32(cpu_security_boot_status_reg); + if (prop->fw_boot_cpu_security_map & + CPU_BOOT_DEV_STS0_FW_HARD_RST_EN) + prop->hard_reset_done_by_fw = true; + } + + dev_dbg(hdev->dev, "Firmware boot CPU hard-reset is %s\n", + prop->hard_reset_done_by_fw ? "enabled" : "disabled"); + if (rc) { detect_cpu_boot_status(hdev, status); rc = -EIO; @@ -796,18 +826,21 @@ int hl_fw_init_cpu(struct hl_device *hdev, u32 cpu_boot_status_reg, goto out; } + /* Clear reset status since we need to read again from app */ + prop->hard_reset_done_by_fw = false; + /* Read FW application security bits */ - if (hdev->asic_prop.fw_security_status_valid) { - hdev->asic_prop.fw_app_security_map = + if (prop->fw_security_status_valid) { + prop->fw_app_security_map = RREG32(cpu_security_boot_status_reg); - if (hdev->asic_prop.fw_app_security_map & + if (prop->fw_app_security_map & CPU_BOOT_DEV_STS0_FW_HARD_RST_EN) - hdev->asic_prop.hard_reset_done_by_fw = true; + prop->hard_reset_done_by_fw = true; } - dev_dbg(hdev->dev, "Firmware hard-reset is %s\n", - hdev->asic_prop.hard_reset_done_by_fw ? "enabled" : "disabled"); + dev_dbg(hdev->dev, "Firmware application CPU hard-reset is %s\n", + prop->hard_reset_done_by_fw ? 
"enabled" : "disabled"); dev_info(hdev->dev, "Successfully loaded firmware to device\n"); diff --git a/drivers/misc/habanalabs/common/habanalabs.h b/drivers/misc/habanalabs/common/habanalabs.h index 571eda6ef5ab..60e16dc4bcac 100644 --- a/drivers/misc/habanalabs/common/habanalabs.h +++ b/drivers/misc/habanalabs/common/habanalabs.h @@ -944,7 +944,7 @@ struct hl_asic_funcs { u32 (*get_signal_cb_size)(struct hl_device *hdev); u32 (*get_wait_cb_size)(struct hl_device *hdev); u32 (*gen_signal_cb)(struct hl_device *hdev, void *data, u16 sob_id, - u32 size); + u32 size, bool eb); u32 (*gen_wait_cb)(struct hl_device *hdev, struct hl_gen_wait_properties *prop); void (*reset_sob)(struct hl_device *hdev, void *data); @@ -1000,6 +1000,7 @@ struct hl_va_range { * @queue_full_drop_cnt: dropped due to queue full * @device_in_reset_drop_cnt: dropped due to device in reset * @max_cs_in_flight_drop_cnt: dropped due to maximum CS in-flight + * @validation_drop_cnt: dropped due to error in validation */ struct hl_cs_counters_atomic { atomic64_t out_of_mem_drop_cnt; @@ -1007,6 +1008,7 @@ struct hl_cs_counters_atomic { atomic64_t queue_full_drop_cnt; atomic64_t device_in_reset_drop_cnt; atomic64_t max_cs_in_flight_drop_cnt; + atomic64_t validation_drop_cnt; }; /** @@ -2180,6 +2182,7 @@ void hl_mmu_v1_set_funcs(struct hl_device *hdev, struct hl_mmu_funcs *mmu); int hl_mmu_va_to_pa(struct hl_ctx *ctx, u64 virt_addr, u64 *phys_addr); int hl_mmu_get_tlb_info(struct hl_ctx *ctx, u64 virt_addr, struct hl_mmu_hop_info *hops); +bool hl_is_dram_va(struct hl_device *hdev, u64 virt_addr); int hl_fw_load_fw_to_device(struct hl_device *hdev, const char *fw_name, void __iomem *dst, u32 src_offset, u32 size); diff --git a/drivers/misc/habanalabs/common/habanalabs_drv.c b/drivers/misc/habanalabs/common/habanalabs_drv.c index 6bbb6bca6860..032d114f01ea 100644 --- a/drivers/misc/habanalabs/common/habanalabs_drv.c +++ b/drivers/misc/habanalabs/common/habanalabs_drv.c @@ -544,6 +544,7 @@ static struct pci_driver hl_pci_driver = { .id_table = ids, .probe = hl_pci_probe, .remove = hl_pci_remove, + .shutdown = hl_pci_remove, .driver.pm = &hl_pm_ops, .err_handler = &hl_pci_err_handler, }; diff --git a/drivers/misc/habanalabs/common/habanalabs_ioctl.c b/drivers/misc/habanalabs/common/habanalabs_ioctl.c index 32e6af1db4e3..d25892d61ec9 100644 --- a/drivers/misc/habanalabs/common/habanalabs_ioctl.c +++ b/drivers/misc/habanalabs/common/habanalabs_ioctl.c @@ -133,6 +133,8 @@ static int hw_idle(struct hl_device *hdev, struct hl_info_args *args) hw_idle.is_idle = hdev->asic_funcs->is_device_idle(hdev, &hw_idle.busy_engines_mask_ext, NULL); + hw_idle.busy_engines_mask = + lower_32_bits(hw_idle.busy_engines_mask_ext); return copy_to_user(out, &hw_idle, min((size_t) max_size, sizeof(hw_idle))) ? 
-EFAULT : 0; @@ -335,6 +337,8 @@ static int cs_counters_info(struct hl_fpriv *hpriv, struct hl_info_args *args) atomic64_read(&cntr->device_in_reset_drop_cnt); cs_counters.total_max_cs_in_flight_drop_cnt = atomic64_read(&cntr->max_cs_in_flight_drop_cnt); + cs_counters.total_validation_drop_cnt = + atomic64_read(&cntr->validation_drop_cnt); if (hpriv->ctx) { cs_counters.ctx_out_of_mem_drop_cnt = @@ -352,6 +356,9 @@ static int cs_counters_info(struct hl_fpriv *hpriv, struct hl_info_args *args) cs_counters.ctx_max_cs_in_flight_drop_cnt = atomic64_read( &hpriv->ctx->cs_counters.max_cs_in_flight_drop_cnt); + cs_counters.ctx_validation_drop_cnt = + atomic64_read( + &hpriv->ctx->cs_counters.validation_drop_cnt); } return copy_to_user(out, &cs_counters, @@ -406,7 +413,7 @@ static int total_energy_consumption_info(struct hl_fpriv *hpriv, static int pll_frequency_info(struct hl_fpriv *hpriv, struct hl_info_args *args) { struct hl_device *hdev = hpriv->hdev; - struct hl_pll_frequency_info freq_info = {0}; + struct hl_pll_frequency_info freq_info = { {0} }; u32 max_size = args->return_size; void __user *out = (void __user *) (uintptr_t) args->return_pointer; int rc; diff --git a/drivers/misc/habanalabs/common/hw_queue.c b/drivers/misc/habanalabs/common/hw_queue.c index 7caf868d1585..76217258780a 100644 --- a/drivers/misc/habanalabs/common/hw_queue.c +++ b/drivers/misc/habanalabs/common/hw_queue.c @@ -418,8 +418,11 @@ static void init_signal_cs(struct hl_device *hdev, "generate signal CB, sob_id: %d, sob val: 0x%x, q_idx: %d\n", cs_cmpl->hw_sob->sob_id, cs_cmpl->sob_val, q_idx); + /* we set an EB since we must make sure all operations are done + * when sending the signal + */ hdev->asic_funcs->gen_signal_cb(hdev, job->patched_cb, - cs_cmpl->hw_sob->sob_id, 0); + cs_cmpl->hw_sob->sob_id, 0, true); kref_get(&hw_sob->kref); diff --git a/drivers/misc/habanalabs/common/memory.c b/drivers/misc/habanalabs/common/memory.c index cbe9da4e0211..5d4fbdcaefe3 100644 --- a/drivers/misc/habanalabs/common/memory.c +++ b/drivers/misc/habanalabs/common/memory.c @@ -886,8 +886,10 @@ static void unmap_phys_pg_pack(struct hl_ctx *ctx, u64 vaddr, { struct hl_device *hdev = ctx->hdev; u64 next_vaddr, i; + bool is_host_addr; u32 page_size; + is_host_addr = !hl_is_dram_va(hdev, vaddr); page_size = phys_pg_pack->page_size; next_vaddr = vaddr; @@ -900,9 +902,13 @@ static void unmap_phys_pg_pack(struct hl_ctx *ctx, u64 vaddr, /* * unmapping on Palladium can be really long, so avoid a CPU * soft lockup bug by sleeping a little between unmapping pages + * + * In addition, when unmapping host memory we pass through + * the Linux kernel to unpin the pages and that takes a long
Therefore, sleep every 32K pages to avoid soft lockup */ - if (hdev->pldm) - usleep_range(500, 1000); + if (hdev->pldm || (is_host_addr && (i & 0x7FFF) == 0)) + usleep_range(50, 200); } } diff --git a/drivers/misc/habanalabs/common/mmu.c b/drivers/misc/habanalabs/common/mmu.c index 33ae953d3a36..28a4638741d8 100644 --- a/drivers/misc/habanalabs/common/mmu.c +++ b/drivers/misc/habanalabs/common/mmu.c @@ -9,7 +9,7 @@ #include "habanalabs.h" -static bool is_dram_va(struct hl_device *hdev, u64 virt_addr) +bool hl_is_dram_va(struct hl_device *hdev, u64 virt_addr) { struct asic_fixed_properties *prop = &hdev->asic_prop; @@ -156,7 +156,7 @@ int hl_mmu_unmap_page(struct hl_ctx *ctx, u64 virt_addr, u32 page_size, if (!hdev->mmu_enable) return 0; - is_dram_addr = is_dram_va(hdev, virt_addr); + is_dram_addr = hl_is_dram_va(hdev, virt_addr); if (is_dram_addr) mmu_prop = &prop->dmmu; @@ -236,7 +236,7 @@ int hl_mmu_map_page(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr, if (!hdev->mmu_enable) return 0; - is_dram_addr = is_dram_va(hdev, virt_addr); + is_dram_addr = hl_is_dram_va(hdev, virt_addr); if (is_dram_addr) mmu_prop = &prop->dmmu; diff --git a/drivers/misc/habanalabs/common/mmu_v1.c b/drivers/misc/habanalabs/common/mmu_v1.c index 2ce6ea89d4fa..06d8a44dd5d4 100644 --- a/drivers/misc/habanalabs/common/mmu_v1.c +++ b/drivers/misc/habanalabs/common/mmu_v1.c @@ -467,8 +467,16 @@ static void hl_mmu_v1_fini(struct hl_device *hdev) { /* MMU H/W fini was already done in device hw_fini() */ - kvfree(hdev->mmu_priv.dr.mmu_shadow_hop0); - gen_pool_destroy(hdev->mmu_priv.dr.mmu_pgt_pool); + if (!ZERO_OR_NULL_PTR(hdev->mmu_priv.hr.mmu_shadow_hop0)) { + kvfree(hdev->mmu_priv.dr.mmu_shadow_hop0); + gen_pool_destroy(hdev->mmu_priv.dr.mmu_pgt_pool); + } + + /* Make sure that if we arrive here again without init was called we + * won't cause kernel panic. 
This can happen for example if we fail + * during hard reset code at certain points + */ + hdev->mmu_priv.dr.mmu_shadow_hop0 = NULL; } /** diff --git a/drivers/misc/habanalabs/common/pci.c b/drivers/misc/habanalabs/common/pci.c index 923b2606e29f..b4725e6101f6 100644 --- a/drivers/misc/habanalabs/common/pci.c +++ b/drivers/misc/habanalabs/common/pci.c @@ -130,10 +130,8 @@ static int hl_pci_elbi_write(struct hl_device *hdev, u64 addr, u32 data) if ((val & PCI_CONFIG_ELBI_STS_MASK) == PCI_CONFIG_ELBI_STS_DONE) return 0; - if (val & PCI_CONFIG_ELBI_STS_ERR) { - dev_err(hdev->dev, "Error writing to ELBI\n"); + if (val & PCI_CONFIG_ELBI_STS_ERR) return -EIO; - } if (!(val & PCI_CONFIG_ELBI_STS_MASK)) { dev_err(hdev->dev, "ELBI write didn't finish in time\n"); @@ -160,8 +158,12 @@ int hl_pci_iatu_write(struct hl_device *hdev, u32 addr, u32 data) dbi_offset = addr & 0xFFF; - rc = hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0x00300000); - rc |= hl_pci_elbi_write(hdev, prop->pcie_dbi_base_address + dbi_offset, + /* Ignore result of writing to pcie_aux_dbi_reg_addr as it could fail + * in case the firmware security is enabled + */ + hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0x00300000); + + rc = hl_pci_elbi_write(hdev, prop->pcie_dbi_base_address + dbi_offset, data); if (rc) @@ -244,9 +246,11 @@ int hl_pci_set_inbound_region(struct hl_device *hdev, u8 region, rc |= hl_pci_iatu_write(hdev, offset + 0x4, ctrl_reg_val); - /* Return the DBI window to the default location */ - rc |= hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0); - rc |= hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr + 4, 0); + /* Return the DBI window to the default location + * Ignore result of writing to pcie_aux_dbi_reg_addr as it could fail + * in case the firmware security is enabled + */ + hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0); if (rc) dev_err(hdev->dev, "failed to map bar %u to 0x%08llx\n", @@ -294,9 +298,11 @@ int hl_pci_set_outbound_region(struct hl_device *hdev, /* Enable */ rc |= hl_pci_iatu_write(hdev, 0x004, 0x80000000); - /* Return the DBI window to the default location */ - rc |= hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0); - rc |= hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr + 4, 0); + /* Return the DBI window to the default location + * Ignore result of writing to pcie_aux_dbi_reg_addr as it could fail + * in case the firmware security is enabled + */ + hl_pci_elbi_write(hdev, prop->pcie_aux_dbi_reg_addr, 0); return rc; } diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c index 1f1926607c5e..b328ddaa64ee 100644 --- a/drivers/misc/habanalabs/gaudi/gaudi.c +++ b/drivers/misc/habanalabs/gaudi/gaudi.c @@ -151,19 +151,6 @@ static const u16 gaudi_packet_sizes[MAX_PACKET_ID] = { [PACKET_LOAD_AND_EXE] = sizeof(struct packet_load_and_exe) }; -static const u32 gaudi_pll_base_addresses[GAUDI_PLL_MAX] = { - [CPU_PLL] = mmPSOC_CPU_PLL_NR, - [PCI_PLL] = mmPSOC_PCI_PLL_NR, - [SRAM_PLL] = mmSRAM_W_PLL_NR, - [HBM_PLL] = mmPSOC_HBM_PLL_NR, - [NIC_PLL] = mmNIC0_PLL_NR, - [DMA_PLL] = mmDMA_W_PLL_NR, - [MESH_PLL] = mmMESH_W_PLL_NR, - [MME_PLL] = mmPSOC_MME_PLL_NR, - [TPC_PLL] = mmPSOC_TPC_PLL_NR, - [IF_PLL] = mmIF_W_PLL_NR -}; - static inline bool validate_packet_id(enum packet_id id) { switch (id) { @@ -374,7 +361,7 @@ static int gaudi_cpucp_info_get(struct hl_device *hdev); static void gaudi_disable_clock_gating(struct hl_device *hdev); static void gaudi_mmu_prepare(struct hl_device *hdev, u32 asid); static u32 
gaudi_gen_signal_cb(struct hl_device *hdev, void *data, u16 sob_id, - u32 size); + u32 size, bool eb); static u32 gaudi_gen_wait_cb(struct hl_device *hdev, struct hl_gen_wait_properties *prop); @@ -667,12 +654,6 @@ static int gaudi_early_init(struct hl_device *hdev) if (rc) goto free_queue_props; - if (gaudi_get_hw_state(hdev) == HL_DEVICE_HW_STATE_DIRTY) { - dev_info(hdev->dev, - "H/W state is dirty, must reset before initializing\n"); - hdev->asic_funcs->hw_fini(hdev, true); - } - /* Before continuing in the initialization, we need to read the preboot * version to determine whether we run with a security-enabled firmware */ @@ -685,6 +666,12 @@ static int gaudi_early_init(struct hl_device *hdev) goto pci_fini; } + if (gaudi_get_hw_state(hdev) == HL_DEVICE_HW_STATE_DIRTY) { + dev_info(hdev->dev, + "H/W state is dirty, must reset before initializing\n"); + hdev->asic_funcs->hw_fini(hdev, true); + } + return 0; pci_fini: @@ -703,93 +690,60 @@ static int gaudi_early_fini(struct hl_device *hdev) } /** - * gaudi_fetch_pll_frequency - Fetch PLL frequency values + * gaudi_fetch_psoc_frequency - Fetch PSOC frequency values * * @hdev: pointer to hl_device structure - * @pll_index: index of the pll to fetch frequency from - * @pll_freq: pointer to store the pll frequency in MHz in each of the available - * outputs. if a certain output is not available a 0 will be set * */ -static int gaudi_fetch_pll_frequency(struct hl_device *hdev, - enum gaudi_pll_index pll_index, - u16 *pll_freq_arr) +static int gaudi_fetch_psoc_frequency(struct hl_device *hdev) { - u32 nr = 0, nf = 0, od = 0, pll_clk = 0, div_fctr, div_sel, - pll_base_addr = gaudi_pll_base_addresses[pll_index]; - u16 freq = 0; - int i, rc; - - if (hdev->asic_prop.fw_security_status_valid && - (hdev->asic_prop.fw_app_security_map & - CPU_BOOT_DEV_STS0_PLL_INFO_EN)) { - rc = hl_fw_cpucp_pll_info_get(hdev, pll_index, pll_freq_arr); + struct asic_fixed_properties *prop = &hdev->asic_prop; + u32 nr = 0, nf = 0, od = 0, div_fctr = 0, pll_clk, div_sel; + u16 pll_freq_arr[HL_PLL_NUM_OUTPUTS], freq; + int rc; - if (rc) - return rc; - } else if (hdev->asic_prop.fw_security_disabled) { + if (hdev->asic_prop.fw_security_disabled) { /* Backward compatibility */ - nr = RREG32(pll_base_addr + PLL_NR_OFFSET); - nf = RREG32(pll_base_addr + PLL_NF_OFFSET); - od = RREG32(pll_base_addr + PLL_OD_OFFSET); - - for (i = 0; i < HL_PLL_NUM_OUTPUTS; i++) { - div_fctr = RREG32(pll_base_addr + - PLL_DIV_FACTOR_0_OFFSET + i * 4); - div_sel = RREG32(pll_base_addr + - PLL_DIV_SEL_0_OFFSET + i * 4); + div_fctr = RREG32(mmPSOC_CPU_PLL_DIV_FACTOR_2); + div_sel = RREG32(mmPSOC_CPU_PLL_DIV_SEL_2); + nr = RREG32(mmPSOC_CPU_PLL_NR); + nf = RREG32(mmPSOC_CPU_PLL_NF); + od = RREG32(mmPSOC_CPU_PLL_OD); - if (div_sel == DIV_SEL_REF_CLK || + if (div_sel == DIV_SEL_REF_CLK || div_sel == DIV_SEL_DIVIDED_REF) { - if (div_sel == DIV_SEL_REF_CLK) - freq = PLL_REF_CLK; - else - freq = PLL_REF_CLK / (div_fctr + 1); - } else if (div_sel == DIV_SEL_PLL_CLK || - div_sel == DIV_SEL_DIVIDED_PLL) { - pll_clk = PLL_REF_CLK * (nf + 1) / - ((nr + 1) * (od + 1)); - if (div_sel == DIV_SEL_PLL_CLK) - freq = pll_clk; - else - freq = pll_clk / (div_fctr + 1); - } else { - dev_warn(hdev->dev, - "Received invalid div select value: %d", - div_sel); - } - - pll_freq_arr[i] = freq; + if (div_sel == DIV_SEL_REF_CLK) + freq = PLL_REF_CLK; + else + freq = PLL_REF_CLK / (div_fctr + 1); + } else if (div_sel == DIV_SEL_PLL_CLK || + div_sel == DIV_SEL_DIVIDED_PLL) { + pll_clk = PLL_REF_CLK * (nf + 1) / + ((nr + 1) * (od 
+ 1)); + if (div_sel == DIV_SEL_PLL_CLK) + freq = pll_clk; + else + freq = pll_clk / (div_fctr + 1); + } else { + dev_warn(hdev->dev, + "Received invalid div select value: %d", + div_sel); + freq = 0; } } else { - dev_err(hdev->dev, "Failed to fetch PLL frequency values\n"); - return -EIO; - } + rc = hl_fw_cpucp_pll_info_get(hdev, CPU_PLL, pll_freq_arr); - return 0; -} - -/** - * gaudi_fetch_psoc_frequency - Fetch PSOC frequency values - * - * @hdev: pointer to hl_device structure - * - */ -static int gaudi_fetch_psoc_frequency(struct hl_device *hdev) -{ - struct asic_fixed_properties *prop = &hdev->asic_prop; - u16 pll_freq[HL_PLL_NUM_OUTPUTS]; - int rc; + if (rc) + return rc; - rc = gaudi_fetch_pll_frequency(hdev, CPU_PLL, pll_freq); - if (rc) - return rc; + freq = pll_freq_arr[2]; + } - prop->psoc_timestamp_frequency = pll_freq[2]; - prop->psoc_pci_pll_nr = 0; - prop->psoc_pci_pll_nf = 0; - prop->psoc_pci_pll_od = 0; - prop->psoc_pci_pll_div_factor = 0; + prop->psoc_timestamp_frequency = freq; + prop->psoc_pci_pll_nr = nr; + prop->psoc_pci_pll_nf = nf; + prop->psoc_pci_pll_od = od; + prop->psoc_pci_pll_div_factor = div_fctr; return 0; } @@ -884,11 +838,17 @@ static int gaudi_init_tpc_mem(struct hl_device *hdev) size_t fw_size; void *cpu_addr; dma_addr_t dma_handle; - int rc; + int rc, count = 5; +again: rc = request_firmware(&fw, GAUDI_TPC_FW_FILE, hdev->dev); + if (rc == -EINTR && count-- > 0) { + msleep(50); + goto again; + } + if (rc) { - dev_err(hdev->dev, "Firmware file %s is not found!\n", + dev_err(hdev->dev, "Failed to load firmware file %s\n", GAUDI_TPC_FW_FILE); goto out; } @@ -1110,7 +1070,7 @@ static void gaudi_collective_slave_init_job(struct hl_device *hdev, prop->collective_sob_id, queue_id); cb_size += gaudi_gen_signal_cb(hdev, job->user_cb, - prop->collective_sob_id, cb_size); + prop->collective_sob_id, cb_size, false); } static void gaudi_collective_wait_init_cs(struct hl_cs *cs) @@ -2449,8 +2409,6 @@ static void gaudi_init_golden_registers(struct hl_device *hdev) gaudi_init_e2e(hdev); gaudi_init_hbm_cred(hdev); - hdev->asic_funcs->disable_clock_gating(hdev); - for (tpc_id = 0, tpc_offset = 0; tpc_id < TPC_NUMBER_OF_ENGINES; tpc_id++, tpc_offset += TPC_CFG_OFFSET) { @@ -3462,6 +3420,9 @@ static void gaudi_set_clock_gating(struct hl_device *hdev) if (hdev->in_debug) return; + if (!hdev->asic_prop.fw_security_disabled) + return; + for (i = GAUDI_PCI_DMA_1, qman_offset = 0 ; i < GAUDI_HBM_DMA_1 ; i++) { enable = !!(hdev->clock_gating_mask & (BIT_ULL(gaudi_dma_assignment[i]))); @@ -3513,7 +3474,7 @@ static void gaudi_disable_clock_gating(struct hl_device *hdev) u32 qman_offset; int i; - if (!(gaudi->hw_cap_initialized & HW_CAP_CLK_GATE)) + if (!hdev->asic_prop.fw_security_disabled) return; for (i = 0, qman_offset = 0 ; i < DMA_NUMBER_OF_CHANNELS ; i++) { @@ -3806,7 +3767,7 @@ static int gaudi_init_cpu_queues(struct hl_device *hdev, u32 cpu_timeout) static void gaudi_pre_hw_init(struct hl_device *hdev) { /* Perform read from the device to make sure device is up */ - RREG32(mmPCIE_DBI_DEVICE_ID_VENDOR_ID_REG); + RREG32(mmHW_STATE); if (hdev->asic_prop.fw_security_disabled) { /* Set the access through PCI bars (Linux driver only) as @@ -3847,6 +3808,13 @@ static int gaudi_hw_init(struct hl_device *hdev) return rc; } + /* In case the clock gating was enabled in preboot we need to disable + * it here before touching the MME/TPC registers. 
+ * There is no need to take clk gating mutex because when this function + * runs, no other relevant code can run + */ + hdev->asic_funcs->disable_clock_gating(hdev); + /* SRAM scrambler must be initialized after CPU is running from HBM */ gaudi_init_scrambler_sram(hdev); @@ -3885,7 +3853,7 @@ static int gaudi_hw_init(struct hl_device *hdev) } /* Perform read from the device to flush all configuration */ - RREG32(mmPCIE_DBI_DEVICE_ID_VENDOR_ID_REG); + RREG32(mmHW_STATE); return 0; @@ -3927,7 +3895,10 @@ static void gaudi_hw_fini(struct hl_device *hdev, bool hard_reset) /* I don't know what is the state of the CPU so make sure it is * stopped in any means necessary */ - WREG32(mmPSOC_GLOBAL_CONF_KMD_MSG_TO_CPU, KMD_MSG_GOTO_WFE); + if (hdev->asic_prop.hard_reset_done_by_fw) + WREG32(mmPSOC_GLOBAL_CONF_KMD_MSG_TO_CPU, KMD_MSG_RST_DEV); + else + WREG32(mmPSOC_GLOBAL_CONF_KMD_MSG_TO_CPU, KMD_MSG_GOTO_WFE); WREG32(mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR, GAUDI_EVENT_HALT_MACHINE); @@ -3971,11 +3942,15 @@ static void gaudi_hw_fini(struct hl_device *hdev, bool hard_reset) WREG32(mmPSOC_GLOBAL_CONF_SW_ALL_RST, 1 << PSOC_GLOBAL_CONF_SW_ALL_RST_IND_SHIFT); - } - dev_info(hdev->dev, - "Issued HARD reset command, going to wait %dms\n", - reset_timeout_ms); + dev_info(hdev->dev, + "Issued HARD reset command, going to wait %dms\n", + reset_timeout_ms); + } else { + dev_info(hdev->dev, + "Firmware performs HARD reset, going to wait %dms\n", + reset_timeout_ms); + } /* * After hard reset, we can't poll the BTM_FSM register because the PSOC @@ -4027,7 +4002,8 @@ static int gaudi_cb_mmap(struct hl_device *hdev, struct vm_area_struct *vma, vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_DONTCOPY | VM_NORESERVE; - rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr, dma_addr, size); + rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr, + (dma_addr - HOST_PHYS_BASE), size); if (rc) dev_err(hdev->dev, "dma_mmap_coherent error %d", rc); @@ -7936,7 +7912,7 @@ static u32 gaudi_get_wait_cb_size(struct hl_device *hdev) } static u32 gaudi_gen_signal_cb(struct hl_device *hdev, void *data, u16 sob_id, - u32 size) + u32 size, bool eb) { struct hl_cb *cb = (struct hl_cb *) data; struct packet_msg_short *pkt; @@ -7953,7 +7929,7 @@ static u32 gaudi_gen_signal_cb(struct hl_device *hdev, void *data, u16 sob_id, ctl |= FIELD_PREP(GAUDI_PKT_SHORT_CTL_OP_MASK, 0); /* write the value */ ctl |= FIELD_PREP(GAUDI_PKT_SHORT_CTL_BASE_MASK, 3); /* W_S SOB base */ ctl |= FIELD_PREP(GAUDI_PKT_SHORT_CTL_OPCODE_MASK, PACKET_MSG_SHORT); - ctl |= FIELD_PREP(GAUDI_PKT_SHORT_CTL_EB_MASK, 1); + ctl |= FIELD_PREP(GAUDI_PKT_SHORT_CTL_EB_MASK, eb); ctl |= FIELD_PREP(GAUDI_PKT_SHORT_CTL_RB_MASK, 1); ctl |= FIELD_PREP(GAUDI_PKT_SHORT_CTL_MB_MASK, 1); diff --git a/drivers/misc/habanalabs/gaudi/gaudiP.h b/drivers/misc/habanalabs/gaudi/gaudiP.h index f2d91f4fcffe..a7ab2d7e57d4 100644 --- a/drivers/misc/habanalabs/gaudi/gaudiP.h +++ b/drivers/misc/habanalabs/gaudi/gaudiP.h @@ -105,13 +105,6 @@ #define MME_ACC_OFFSET (mmMME1_ACC_BASE - mmMME0_ACC_BASE) #define SRAM_BANK_OFFSET (mmSRAM_Y0_X1_RTR_BASE - mmSRAM_Y0_X0_RTR_BASE) -#define PLL_NR_OFFSET 0 -#define PLL_NF_OFFSET (mmPSOC_CPU_PLL_NF - mmPSOC_CPU_PLL_NR) -#define PLL_OD_OFFSET (mmPSOC_CPU_PLL_OD - mmPSOC_CPU_PLL_NR) -#define PLL_DIV_FACTOR_0_OFFSET (mmPSOC_CPU_PLL_DIV_FACTOR_0 - \ - mmPSOC_CPU_PLL_NR) -#define PLL_DIV_SEL_0_OFFSET (mmPSOC_CPU_PLL_DIV_SEL_0 - mmPSOC_CPU_PLL_NR) - #define NUM_OF_SOB_IN_BLOCK \ (((mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_SOB_OBJ_2047 - \ 
mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_SOB_OBJ_0) + 4) >> 2) diff --git a/drivers/misc/habanalabs/gaudi/gaudi_coresight.c b/drivers/misc/habanalabs/gaudi/gaudi_coresight.c index 2e3612e1ee28..88a09d42e111 100644 --- a/drivers/misc/habanalabs/gaudi/gaudi_coresight.c +++ b/drivers/misc/habanalabs/gaudi/gaudi_coresight.c @@ -9,6 +9,7 @@ #include "../include/gaudi/gaudi_coresight.h" #include "../include/gaudi/asic_reg/gaudi_regs.h" #include "../include/gaudi/gaudi_masks.h" +#include "../include/gaudi/gaudi_reg_map.h" #include <uapi/misc/habanalabs.h> #define SPMU_SECTION_SIZE MME0_ACC_SPMU_MAX_OFFSET @@ -874,7 +875,7 @@ int gaudi_debug_coresight(struct hl_device *hdev, void *data) } /* Perform read from the device to flush all configuration */ - RREG32(mmPCIE_DBI_DEVICE_ID_VENDOR_ID_REG); + RREG32(mmHW_STATE); return rc; } diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c index 3e5eb9e3d7bd..63679a747d2c 100644 --- a/drivers/misc/habanalabs/goya/goya.c +++ b/drivers/misc/habanalabs/goya/goya.c @@ -613,12 +613,6 @@ static int goya_early_init(struct hl_device *hdev) if (rc) goto free_queue_props; - if (goya_get_hw_state(hdev) == HL_DEVICE_HW_STATE_DIRTY) { - dev_info(hdev->dev, - "H/W state is dirty, must reset before initializing\n"); - hdev->asic_funcs->hw_fini(hdev, true); - } - /* Before continuing in the initialization, we need to read the preboot * version to determine whether we run with a security-enabled firmware */ @@ -631,6 +625,12 @@ static int goya_early_init(struct hl_device *hdev) goto pci_fini; } + if (goya_get_hw_state(hdev) == HL_DEVICE_HW_STATE_DIRTY) { + dev_info(hdev->dev, + "H/W state is dirty, must reset before initializing\n"); + hdev->asic_funcs->hw_fini(hdev, true); + } + if (!hdev->pldm) { val = RREG32(mmPSOC_GLOBAL_CONF_BOOT_STRAP_PINS); if (val & PSOC_GLOBAL_CONF_BOOT_STRAP_PINS_SRIOV_EN_MASK) @@ -694,32 +694,47 @@ static void goya_qman0_set_security(struct hl_device *hdev, bool secure) static void goya_fetch_psoc_frequency(struct hl_device *hdev) { struct asic_fixed_properties *prop = &hdev->asic_prop; - u32 trace_freq = 0; - u32 pll_clk = 0; - u32 div_fctr = RREG32(mmPSOC_PCI_PLL_DIV_FACTOR_1); - u32 div_sel = RREG32(mmPSOC_PCI_PLL_DIV_SEL_1); - u32 nr = RREG32(mmPSOC_PCI_PLL_NR); - u32 nf = RREG32(mmPSOC_PCI_PLL_NF); - u32 od = RREG32(mmPSOC_PCI_PLL_OD); - - if (div_sel == DIV_SEL_REF_CLK || div_sel == DIV_SEL_DIVIDED_REF) { - if (div_sel == DIV_SEL_REF_CLK) - trace_freq = PLL_REF_CLK; - else - trace_freq = PLL_REF_CLK / (div_fctr + 1); - } else if (div_sel == DIV_SEL_PLL_CLK || - div_sel == DIV_SEL_DIVIDED_PLL) { - pll_clk = PLL_REF_CLK * (nf + 1) / ((nr + 1) * (od + 1)); - if (div_sel == DIV_SEL_PLL_CLK) - trace_freq = pll_clk; - else - trace_freq = pll_clk / (div_fctr + 1); + u32 nr = 0, nf = 0, od = 0, div_fctr = 0, pll_clk, div_sel; + u16 pll_freq_arr[HL_PLL_NUM_OUTPUTS], freq; + int rc; + + if (hdev->asic_prop.fw_security_disabled) { + div_fctr = RREG32(mmPSOC_PCI_PLL_DIV_FACTOR_1); + div_sel = RREG32(mmPSOC_PCI_PLL_DIV_SEL_1); + nr = RREG32(mmPSOC_PCI_PLL_NR); + nf = RREG32(mmPSOC_PCI_PLL_NF); + od = RREG32(mmPSOC_PCI_PLL_OD); + + if (div_sel == DIV_SEL_REF_CLK || + div_sel == DIV_SEL_DIVIDED_REF) { + if (div_sel == DIV_SEL_REF_CLK) + freq = PLL_REF_CLK; + else + freq = PLL_REF_CLK / (div_fctr + 1); + } else if (div_sel == DIV_SEL_PLL_CLK || + div_sel == DIV_SEL_DIVIDED_PLL) { + pll_clk = PLL_REF_CLK * (nf + 1) / + ((nr + 1) * (od + 1)); + if (div_sel == DIV_SEL_PLL_CLK) + freq = pll_clk; + else + freq = pll_clk / (div_fctr + 
1); + } else { + dev_warn(hdev->dev, + "Received invalid div select value: %d", + div_sel); + freq = 0; + } } else { - dev_warn(hdev->dev, - "Received invalid div select value: %d", div_sel); + rc = hl_fw_cpucp_pll_info_get(hdev, PCI_PLL, pll_freq_arr); + + if (rc) + return; + + freq = pll_freq_arr[1]; } - prop->psoc_timestamp_frequency = trace_freq; + prop->psoc_timestamp_frequency = freq; prop->psoc_pci_pll_nr = nr; prop->psoc_pci_pll_nf = nf; prop->psoc_pci_pll_od = od; @@ -2704,7 +2719,8 @@ static int goya_cb_mmap(struct hl_device *hdev, struct vm_area_struct *vma, vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_DONTCOPY | VM_NORESERVE; - rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr, dma_addr, size); + rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr, + (dma_addr - HOST_PHYS_BASE), size); if (rc) dev_err(hdev->dev, "dma_mmap_coherent error %d", rc); @@ -5324,7 +5340,7 @@ static u32 goya_get_wait_cb_size(struct hl_device *hdev) } static u32 goya_gen_signal_cb(struct hl_device *hdev, void *data, u16 sob_id, - u32 size) + u32 size, bool eb) { return 0; } diff --git a/drivers/misc/habanalabs/include/common/hl_boot_if.h b/drivers/misc/habanalabs/include/common/hl_boot_if.h index e5801ecf0cb2..b637dfd69f6e 100644 --- a/drivers/misc/habanalabs/include/common/hl_boot_if.h +++ b/drivers/misc/habanalabs/include/common/hl_boot_if.h @@ -145,11 +145,15 @@ * implemented. This means that FW will * perform hard reset procedure on * receiving the halt-machine event. - * Initialized in: linux + * Initialized in: preboot, u-boot, linux * * CPU_BOOT_DEV_STS0_PLL_INFO_EN FW retrieval of PLL info is enabled. * Initialized in: linux * + * CPU_BOOT_DEV_STS0_CLK_GATE_EN Clock Gating enabled. + * FW initialized Clock Gating. + * Initialized in: preboot + * * CPU_BOOT_DEV_STS0_ENABLED Device status register enabled. 
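*
* Reviewer note: the CLK_GATE_EN bit added above records that preboot now
* owns clock gating, which is why the gaudi hunks earlier in this series
* make gaudi_set_clock_gating()/gaudi_disable_clock_gating() bail out
* unless hdev->asic_prop.fw_security_disabled is set. A driver-side test
* of such a capability bit would follow the same shape as the existing
* PLL_INFO_EN check (sketch only; the map field name is taken from the
* gaudi.c hunk above, not from this file):
*
*   if (hdev->asic_prop.fw_app_security_map & CPU_BOOT_DEV_STS0_CLK_GATE_EN)
*           return; /* firmware initialized clock gating; leave it alone */
*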
* This is a main indication that the * running FW populates the device status @@ -171,6 +175,7 @@ #define CPU_BOOT_DEV_STS0_DRAM_SCR_EN (1 << 9) #define CPU_BOOT_DEV_STS0_FW_HARD_RST_EN (1 << 10) #define CPU_BOOT_DEV_STS0_PLL_INFO_EN (1 << 11) +#define CPU_BOOT_DEV_STS0_CLK_GATE_EN (1 << 13) #define CPU_BOOT_DEV_STS0_ENABLED (1 << 31) enum cpu_boot_status { @@ -204,6 +209,8 @@ enum kmd_msg { KMD_MSG_GOTO_WFE, KMD_MSG_FIT_RDY, KMD_MSG_SKIP_BMC, + RESERVED, + KMD_MSG_RST_DEV, }; enum cpu_msg_status { diff --git a/drivers/misc/mei/hdcp/mei_hdcp.c b/drivers/misc/mei/hdcp/mei_hdcp.c index 9ae9669e46ea..3506a3534294 100644 --- a/drivers/misc/mei/hdcp/mei_hdcp.c +++ b/drivers/misc/mei/hdcp/mei_hdcp.c @@ -569,8 +569,7 @@ static int mei_hdcp_verify_mprime(struct device *dev, verify_mprime_in->header.api_version = HDCP_API_VERSION; verify_mprime_in->header.command_id = WIRED_REPEATER_AUTH_STREAM_REQ; verify_mprime_in->header.status = ME_HDCP_STATUS_SUCCESS; - verify_mprime_in->header.buffer_len = - WIRED_CMD_BUF_LEN_REPEATER_AUTH_STREAM_REQ_MIN_IN; + verify_mprime_in->header.buffer_len = cmd_size - sizeof(verify_mprime_in->header); verify_mprime_in->port.integrated_port_type = data->port_type; verify_mprime_in->port.physical_port = (u8)data->fw_ddi; diff --git a/drivers/misc/pvpanic.c b/drivers/misc/pvpanic.c index 951b37da5e3c..41cab297d66e 100644 --- a/drivers/misc/pvpanic.c +++ b/drivers/misc/pvpanic.c @@ -55,12 +55,23 @@ static int pvpanic_mmio_probe(struct platform_device *pdev) struct resource *res; res = platform_get_mem_or_io(pdev, 0); - if (res && resource_type(res) == IORESOURCE_IO) + if (!res) + return -EINVAL; + + switch (resource_type(res)) { + case IORESOURCE_IO: base = devm_ioport_map(dev, res->start, resource_size(res)); - else + if (!base) + return -ENOMEM; + break; + case IORESOURCE_MEM: base = devm_ioremap_resource(dev, res); - if (IS_ERR(base)) - return PTR_ERR(base); + if (IS_ERR(base)) + return PTR_ERR(base); + break; + default: + return -EINVAL; + } atomic_notifier_chain_register(&panic_notifier_list, &pvpanic_panic_nb); diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c index de7cb0369c30..002426e3cf76 100644 --- a/drivers/mmc/core/queue.c +++ b/drivers/mmc/core/queue.c @@ -384,8 +384,10 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card) "merging was advertised but not possible"); blk_queue_max_segments(mq->queue, mmc_get_max_segments(host)); - if (mmc_card_mmc(card)) + if (mmc_card_mmc(card) && card->ext_csd.data_sector_size) { block_size = card->ext_csd.data_sector_size; + WARN_ON(block_size != 512 && block_size != 4096); + } blk_queue_logical_block_size(mq->queue, block_size); /* diff --git a/drivers/mmc/host/sdhci-brcmstb.c b/drivers/mmc/host/sdhci-brcmstb.c index bbf3496f4495..f9780c65ebe9 100644 --- a/drivers/mmc/host/sdhci-brcmstb.c +++ b/drivers/mmc/host/sdhci-brcmstb.c @@ -314,11 +314,7 @@ err_clk: static void sdhci_brcmstb_shutdown(struct platform_device *pdev) { - int ret; - - ret = sdhci_pltfm_unregister(pdev); - if (ret) - dev_err(&pdev->dev, "failed to shutdown\n"); + sdhci_pltfm_suspend(&pdev->dev); } MODULE_DEVICE_TABLE(of, sdhci_brcm_of_match); diff --git a/drivers/mmc/host/sdhci-of-dwcmshc.c b/drivers/mmc/host/sdhci-of-dwcmshc.c index 4b673792b5a4..d90020ed3622 100644 --- a/drivers/mmc/host/sdhci-of-dwcmshc.c +++ b/drivers/mmc/host/sdhci-of-dwcmshc.c @@ -16,6 +16,8 @@ #include "sdhci-pltfm.h" +#define SDHCI_DWCMSHC_ARG2_STUFF GENMASK(31, 16) + /* DWCMSHC specific Mode Select value */ #define DWCMSHC_CTRL_HS400 0x7 @@ 
-49,6 +51,29 @@ static void dwcmshc_adma_write_desc(struct sdhci_host *host, void **desc, sdhci_adma_write_desc(host, desc, addr, len, cmd); } +static void dwcmshc_check_auto_cmd23(struct mmc_host *mmc, + struct mmc_request *mrq) +{ + struct sdhci_host *host = mmc_priv(mmc); + + /* + * No matter V4 is enabled or not, ARGUMENT2 register is 32-bit + * block count register which doesn't support stuff bits of + * CMD23 argument on dwcmsch host controller. + */ + if (mrq->sbc && (mrq->sbc->arg & SDHCI_DWCMSHC_ARG2_STUFF)) + host->flags &= ~SDHCI_AUTO_CMD23; + else + host->flags |= SDHCI_AUTO_CMD23; +} + +static void dwcmshc_request(struct mmc_host *mmc, struct mmc_request *mrq) +{ + dwcmshc_check_auto_cmd23(mmc, mrq); + + sdhci_request(mmc, mrq); +} + static void dwcmshc_set_uhs_signaling(struct sdhci_host *host, unsigned int timing) { @@ -133,6 +158,8 @@ static int dwcmshc_probe(struct platform_device *pdev) sdhci_get_of_property(pdev); + host->mmc_host_ops.request = dwcmshc_request; + err = sdhci_add_host(host); if (err) goto err_clk; diff --git a/drivers/mmc/host/sdhci-xenon.c b/drivers/mmc/host/sdhci-xenon.c index c67611fdaa8a..d19eef5f725f 100644 --- a/drivers/mmc/host/sdhci-xenon.c +++ b/drivers/mmc/host/sdhci-xenon.c @@ -168,7 +168,12 @@ static void xenon_reset_exit(struct sdhci_host *host, /* Disable tuning request and auto-retuning again */ xenon_retune_setup(host); - xenon_set_acg(host, true); + /* + * The ACG should be turned off at the early init time, in order + * to solve a possible issues with the 1.8V regulator stabilization. + * The feature is enabled in later stage. + */ + xenon_set_acg(host, false); xenon_set_sdclk_off_idle(host, sdhc_id, false); diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c index 5cdf05bcbf8f..3fa8c22d3f36 100644 --- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c +++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c @@ -1615,7 +1615,7 @@ static int gpmi_ecc_read_page_raw(struct nand_chip *chip, uint8_t *buf, /* Extract interleaved payload data and ECC bits */ for (step = 0; step < nfc_geo->ecc_chunk_count; step++) { if (buf) - nand_extract_bits(buf, step * eccsize, tmp_buf, + nand_extract_bits(buf, step * eccsize * 8, tmp_buf, src_bit_off, eccsize * 8); src_bit_off += eccsize * 8; diff --git a/drivers/mtd/nand/raw/intel-nand-controller.c b/drivers/mtd/nand/raw/intel-nand-controller.c index fdb112e8a90d..a304fda5d1fa 100644 --- a/drivers/mtd/nand/raw/intel-nand-controller.c +++ b/drivers/mtd/nand/raw/intel-nand-controller.c @@ -579,7 +579,7 @@ static int ebu_nand_probe(struct platform_device *pdev) struct device *dev = &pdev->dev; struct ebu_nand_controller *ebu_host; struct nand_chip *nand; - struct mtd_info *mtd = NULL; + struct mtd_info *mtd; struct resource *res; char *resname; int ret; @@ -647,12 +647,13 @@ static int ebu_nand_probe(struct platform_device *pdev) ebu_host->ebu + EBU_ADDR_SEL(cs)); nand_set_flash_node(&ebu_host->chip, dev->of_node); + + mtd = nand_to_mtd(&ebu_host->chip); if (!mtd->name) { dev_err(ebu_host->dev, "NAND label property is mandatory\n"); return -EINVAL; } - mtd = nand_to_mtd(&ebu_host->chip); mtd->dev.parent = dev; ebu_host->dev = dev; diff --git a/drivers/mtd/nand/raw/nandsim.c b/drivers/mtd/nand/raw/nandsim.c index f2b9250c0ea8..0750121ac371 100644 --- a/drivers/mtd/nand/raw/nandsim.c +++ b/drivers/mtd/nand/raw/nandsim.c @@ -2210,6 +2210,9 @@ static int ns_attach_chip(struct nand_chip *chip) { unsigned int eccsteps, eccbytes; + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 
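/* Reviewer note: these soft-ECC defaults were previously split between
 * ns_init_module() (unconditional Hamming) and a later BCH-only override
 * in this function; consolidating them here, above the early return,
 * configures both paths in one place. The resulting pattern:
 *
 *   chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;
 *   chip->ecc.algo = bch ? NAND_ECC_ALGO_BCH : NAND_ECC_ALGO_HAMMING;
 *   if (!bch)
 *           return 0;   // soft Hamming needs no further geometry setup
 */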
+ chip->ecc.algo = bch ? NAND_ECC_ALGO_BCH : NAND_ECC_ALGO_HAMMING; + if (!bch) return 0; @@ -2233,8 +2236,6 @@ static int ns_attach_chip(struct nand_chip *chip) return -EINVAL; } - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; - chip->ecc.algo = NAND_ECC_ALGO_BCH; chip->ecc.size = 512; chip->ecc.strength = bch; chip->ecc.bytes = eccbytes; @@ -2273,8 +2274,6 @@ static int __init ns_init_module(void) nsmtd = nand_to_mtd(chip); nand_set_controller_data(chip, (void *)ns); - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; - chip->ecc.algo = NAND_ECC_ALGO_HAMMING; /* The NAND_SKIP_BBTSCAN option is necessary for 'overridesize' */ /* and 'badblocks' parameters to work */ chip->options |= NAND_SKIP_BBTSCAN; diff --git a/drivers/mtd/nand/raw/omap2.c b/drivers/mtd/nand/raw/omap2.c index fbb9955f2467..2c3e65cb68f3 100644 --- a/drivers/mtd/nand/raw/omap2.c +++ b/drivers/mtd/nand/raw/omap2.c @@ -15,6 +15,7 @@ #include <linux/jiffies.h> #include <linux/sched.h> #include <linux/mtd/mtd.h> +#include <linux/mtd/nand-ecc-sw-bch.h> #include <linux/mtd/rawnand.h> #include <linux/mtd/partitions.h> #include <linux/omap-dma.h> @@ -1866,18 +1867,19 @@ static const struct mtd_ooblayout_ops omap_ooblayout_ops = { static int omap_sw_ooblayout_ecc(struct mtd_info *mtd, int section, struct mtd_oob_region *oobregion) { - struct nand_chip *chip = mtd_to_nand(mtd); + struct nand_device *nand = mtd_to_nanddev(mtd); + const struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv; int off = BADBLOCK_MARKER_LENGTH; - if (section >= chip->ecc.steps) + if (section >= engine_conf->nsteps) return -ERANGE; /* * When SW correction is employed, one OMAP specific marker byte is * reserved after each ECC step. */ - oobregion->offset = off + (section * (chip->ecc.bytes + 1)); - oobregion->length = chip->ecc.bytes; + oobregion->offset = off + (section * (engine_conf->code_size + 1)); + oobregion->length = engine_conf->code_size; return 0; } @@ -1885,7 +1887,8 @@ static int omap_sw_ooblayout_ecc(struct mtd_info *mtd, int section, static int omap_sw_ooblayout_free(struct mtd_info *mtd, int section, struct mtd_oob_region *oobregion) { - struct nand_chip *chip = mtd_to_nand(mtd); + struct nand_device *nand = mtd_to_nanddev(mtd); + const struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv; int off = BADBLOCK_MARKER_LENGTH; if (section) @@ -1895,7 +1898,7 @@ static int omap_sw_ooblayout_free(struct mtd_info *mtd, int section, * When SW correction is employed, one OMAP specific marker byte is * reserved after each ECC step. 
*/ - off += ((chip->ecc.bytes + 1) * chip->ecc.steps); + off += ((engine_conf->code_size + 1) * engine_conf->nsteps); if (off >= mtd->oobsize) return -ERANGE; diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c index 8ea545bb924d..61d932c1b718 100644 --- a/drivers/mtd/nand/spi/core.c +++ b/drivers/mtd/nand/spi/core.c @@ -343,6 +343,7 @@ static int spinand_read_from_cache_op(struct spinand_device *spinand, const struct nand_page_io_req *req) { struct nand_device *nand = spinand_to_nand(spinand); + struct mtd_info *mtd = spinand_to_mtd(spinand); struct spi_mem_dirmap_desc *rdesc; unsigned int nbytes = 0; void *buf = NULL; @@ -382,9 +383,16 @@ static int spinand_read_from_cache_op(struct spinand_device *spinand, memcpy(req->databuf.in, spinand->databuf + req->dataoffs, req->datalen); - if (req->ooblen) - memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs, - req->ooblen); + if (req->ooblen) { + if (req->mode == MTD_OPS_AUTO_OOB) + mtd_ooblayout_get_databytes(mtd, req->oobbuf.in, + spinand->oobbuf, + req->ooboffs, + req->ooblen); + else + memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs, + req->ooblen); + } return 0; } diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c index 85ebd2b7e446..85de5f96c02b 100644 --- a/drivers/net/bareudp.c +++ b/drivers/net/bareudp.c @@ -380,7 +380,7 @@ static int bareudp6_xmit_skb(struct sk_buff *skb, struct net_device *dev, goto free_dst; min_headroom = LL_RESERVED_SPACE(dst->dev) + dst->header_len + - BAREUDP_BASE_HLEN + info->options_len + sizeof(struct iphdr); + BAREUDP_BASE_HLEN + info->options_len + sizeof(struct ipv6hdr); err = skb_cow_head(skb, min_headroom); if (unlikely(err)) @@ -534,6 +534,7 @@ static void bareudp_setup(struct net_device *dev) SET_NETDEV_DEVTYPE(dev, &bareudp_type); dev->features |= NETIF_F_SG | NETIF_F_HW_CSUM; dev->features |= NETIF_F_RXCSUM; + dev->features |= NETIF_F_LLTX; dev->features |= NETIF_F_GSO_SOFTWARE; dev->hw_features |= NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_RXCSUM; dev->hw_features |= NETIF_F_GSO_SOFTWARE; @@ -644,11 +645,20 @@ static int bareudp_link_config(struct net_device *dev, return 0; } +static void bareudp_dellink(struct net_device *dev, struct list_head *head) +{ + struct bareudp_dev *bareudp = netdev_priv(dev); + + list_del(&bareudp->next); + unregister_netdevice_queue(dev, head); +} + static int bareudp_newlink(struct net *net, struct net_device *dev, struct nlattr *tb[], struct nlattr *data[], struct netlink_ext_ack *extack) { struct bareudp_conf conf; + LIST_HEAD(list_kill); int err; err = bareudp2info(data, &conf, extack); @@ -661,17 +671,14 @@ static int bareudp_newlink(struct net *net, struct net_device *dev, err = bareudp_link_config(dev, tb); if (err) - return err; + goto err_unconfig; return 0; -} - -static void bareudp_dellink(struct net_device *dev, struct list_head *head) -{ - struct bareudp_dev *bareudp = netdev_priv(dev); - list_del(&bareudp->next); - unregister_netdevice_queue(dev, head); +err_unconfig: + bareudp_dellink(dev, &list_kill); + unregister_netdevice_many(&list_kill); + return err; } static size_t bareudp_get_size(const struct net_device *dev) diff --git a/drivers/net/can/Kconfig b/drivers/net/can/Kconfig index 424970939fd4..1c28eade6bec 100644 --- a/drivers/net/can/Kconfig +++ b/drivers/net/can/Kconfig @@ -123,6 +123,7 @@ config CAN_JANZ_ICAN3 config CAN_KVASER_PCIEFD depends on PCI tristate "Kvaser PCIe FD cards" + select CRC32 help This is a driver for the Kvaser PCI Express CAN FD family. 
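Beyond the Kconfig fix above (a driver using the kernel CRC routines must select CRC32 itself, or it can fail to link when nothing else enables it), the can/dev.c hunk below — like the peak_usb and vxcan hunks further on — fixes a use-after-free pattern: netif_rx_ni() hands the skb to the network stack, which may free it immediately, so any statistics derived from the frame must be read beforehand. The corrected ordering, in minimal form (names as in the hunk):

    stats->rx_packets++;
    stats->rx_bytes += cf->len;    /* cf points into skb->data; read it now */
    netif_rx_ni(skb);              /* the stack owns, and may free, skb here */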
diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c index 3486704c8a95..8b1ae023cb21 100644 --- a/drivers/net/can/dev.c +++ b/drivers/net/can/dev.c @@ -592,11 +592,11 @@ static void can_restart(struct net_device *dev) cf->can_id |= CAN_ERR_RESTARTED; - netif_rx_ni(skb); - stats->rx_packets++; stats->rx_bytes += cf->len; + netif_rx_ni(skb); + restart: netdev_dbg(dev, "restarted\n"); priv->can_stats.restarts++; diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c index 2c9f12401276..da551fd0f502 100644 --- a/drivers/net/can/m_can/m_can.c +++ b/drivers/net/can/m_can/m_can.c @@ -1852,8 +1852,6 @@ EXPORT_SYMBOL_GPL(m_can_class_register); void m_can_class_unregister(struct m_can_classdev *cdev) { unregister_candev(cdev->net); - - m_can_clk_stop(cdev); } EXPORT_SYMBOL_GPL(m_can_class_unregister); diff --git a/drivers/net/can/m_can/tcan4x5x.c b/drivers/net/can/m_can/tcan4x5x.c index 24c737c4fc44..970f0e9d19bf 100644 --- a/drivers/net/can/m_can/tcan4x5x.c +++ b/drivers/net/can/m_can/tcan4x5x.c @@ -131,30 +131,6 @@ static inline struct tcan4x5x_priv *cdev_to_priv(struct m_can_classdev *cdev) } -static struct can_bittiming_const tcan4x5x_bittiming_const = { - .name = DEVICE_NAME, - .tseg1_min = 2, - .tseg1_max = 31, - .tseg2_min = 2, - .tseg2_max = 16, - .sjw_max = 16, - .brp_min = 1, - .brp_max = 32, - .brp_inc = 1, -}; - -static struct can_bittiming_const tcan4x5x_data_bittiming_const = { - .name = DEVICE_NAME, - .tseg1_min = 1, - .tseg1_max = 32, - .tseg2_min = 1, - .tseg2_max = 16, - .sjw_max = 16, - .brp_min = 1, - .brp_max = 32, - .brp_inc = 1, -}; - static void tcan4x5x_check_wake(struct tcan4x5x_priv *priv) { int wake_state = 0; @@ -469,8 +445,6 @@ static int tcan4x5x_can_probe(struct spi_device *spi) mcan_class->dev = &spi->dev; mcan_class->ops = &tcan4x5x_ops; mcan_class->is_peripheral = true; - mcan_class->bit_timing = &tcan4x5x_bittiming_const; - mcan_class->data_timing = &tcan4x5x_data_bittiming_const; mcan_class->net->irq = spi->irq; spi_set_drvdata(spi, priv); diff --git a/drivers/net/can/rcar/Kconfig b/drivers/net/can/rcar/Kconfig index 8d36101b78e3..29cabc20109e 100644 --- a/drivers/net/can/rcar/Kconfig +++ b/drivers/net/can/rcar/Kconfig @@ -1,10 +1,10 @@ # SPDX-License-Identifier: GPL-2.0 config CAN_RCAR - tristate "Renesas R-Car CAN controller" + tristate "Renesas R-Car and RZ/G CAN controller" depends on ARCH_RENESAS || ARM help Say Y here if you want to use CAN controller found on Renesas R-Car - SoCs. + or RZ/G SoCs. To compile this driver as a module, choose M here: the module will be called rcar_can. diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c index 77129d5f410b..f07e8b737d31 100644 --- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c +++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c @@ -1368,13 +1368,10 @@ static int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv) struct mcp251xfd_tx_ring *tx_ring = priv->tx; struct spi_transfer *last_xfer; - tx_ring->tail += len; - /* Increment the TEF FIFO tail pointer 'len' times in * a single SPI message. 
- */ - - /* Note: + * + * Note: * * "cs_change == 1" on the last transfer results in an * active chip select after the complete SPI @@ -1391,6 +1388,8 @@ static int mcp251xfd_handle_tefif(struct mcp251xfd_priv *priv) if (err) return err; + tx_ring->tail += len; + err = mcp251xfd_check_tef_tail(priv); if (err) return err; @@ -1492,7 +1491,7 @@ mcp251xfd_handle_rxif_one(struct mcp251xfd_priv *priv, else skb = alloc_can_skb(priv->ndev, (struct can_frame **)&cfd); - if (!cfd) { + if (!skb) { stats->rx_dropped++; return 0; } @@ -1553,10 +1552,8 @@ mcp251xfd_handle_rxif_ring(struct mcp251xfd_priv *priv, /* Increment the RX FIFO tail pointer 'len' times in a * single SPI message. - */ - ring->tail += len; - - /* Note: + * + * Note: * * "cs_change == 1" on the last transfer results in an * active chip select after the complete SPI @@ -1572,6 +1569,8 @@ mcp251xfd_handle_rxif_ring(struct mcp251xfd_priv *priv, last_xfer->cs_change = 1; if (err) return err; + + ring->tail += len; } return 0; diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c index 61631f4fd92a..f347ecc79aef 100644 --- a/drivers/net/can/usb/peak_usb/pcan_usb_fd.c +++ b/drivers/net/can/usb/peak_usb/pcan_usb_fd.c @@ -514,11 +514,11 @@ static int pcan_usb_fd_decode_canmsg(struct pcan_usb_fd_if *usb_if, else memcpy(cfd->data, rm->d, cfd->len); - peak_usb_netif_rx(skb, &usb_if->time_ref, le32_to_cpu(rm->ts_low)); - netdev->stats.rx_packets++; netdev->stats.rx_bytes += cfd->len; + peak_usb_netif_rx(skb, &usb_if->time_ref, le32_to_cpu(rm->ts_low)); + return 0; } @@ -580,11 +580,11 @@ static int pcan_usb_fd_decode_status(struct pcan_usb_fd_if *usb_if, if (!skb) return -ENOMEM; - peak_usb_netif_rx(skb, &usb_if->time_ref, le32_to_cpu(sm->ts_low)); - netdev->stats.rx_packets++; netdev->stats.rx_bytes += cf->len; + peak_usb_netif_rx(skb, &usb_if->time_ref, le32_to_cpu(sm->ts_low)); + return 0; } diff --git a/drivers/net/can/vxcan.c b/drivers/net/can/vxcan.c index fa47bab510bb..f9a524c5f6d6 100644 --- a/drivers/net/can/vxcan.c +++ b/drivers/net/can/vxcan.c @@ -39,6 +39,7 @@ static netdev_tx_t vxcan_xmit(struct sk_buff *skb, struct net_device *dev) struct net_device *peer; struct canfd_frame *cfd = (struct canfd_frame *)skb->data; struct net_device_stats *peerstats, *srcstats = &dev->stats; + u8 len; if (can_dropped_invalid_skb(dev, skb)) return NETDEV_TX_OK; @@ -61,12 +62,13 @@ static netdev_tx_t vxcan_xmit(struct sk_buff *skb, struct net_device *dev) skb->dev = peer; skb->ip_summed = CHECKSUM_UNNECESSARY; + len = cfd->len; if (netif_rx_ni(skb) == NET_RX_SUCCESS) { srcstats->tx_packets++; - srcstats->tx_bytes += cfd->len; + srcstats->tx_bytes += len; peerstats = &peer->stats; peerstats->rx_packets++; - peerstats->rx_bytes += cfd->len; + peerstats->rx_bytes += len; } out_unlock: diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c index 288b5a5c3e0d..95c7fa171e35 100644 --- a/drivers/net/dsa/b53/b53_common.c +++ b/drivers/net/dsa/b53/b53_common.c @@ -1404,7 +1404,7 @@ int b53_vlan_prepare(struct dsa_switch *ds, int port, !(vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED)) return -EINVAL; - if (vlan->vid_end > dev->num_vlans) + if (vlan->vid_end >= dev->num_vlans) return -ERANGE; b53_enable_vlan(dev, true, ds->vlan_filtering); diff --git a/drivers/net/dsa/hirschmann/Kconfig b/drivers/net/dsa/hirschmann/Kconfig index 222dd35e2c9d..e01191107a4b 100644 --- a/drivers/net/dsa/hirschmann/Kconfig +++ b/drivers/net/dsa/hirschmann/Kconfig @@ -4,6 +4,7 @@ config NET_DSA_HIRSCHMANN_HELLCREEK 
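The mcp251xfd hunks just above apply a commit-after-success discipline to the ring tail pointers: the tail is advanced only once the SPI transfer that consumed those FIFO entries has returned without error, so a failed transfer leaves the ring state untouched. Sketched on the TEF path (variable names from the hunk):

    /* issue the SPI message that drains 'len' TEF entries */
    if (err)
            return err;        /* tail still points at unprocessed entries */
    tx_ring->tail += len;      /* commit only after the transfer succeeded */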
depends on HAS_IOMEM depends on NET_DSA depends on PTP_1588_CLOCK + depends on LEDS_CLASS select NET_DSA_TAG_HELLCREEK help This driver adds support for Hirschmann Hellcreek TSN switches. diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c index 09701c17f3f6..662e68a0e7e6 100644 --- a/drivers/net/dsa/lantiq_gswip.c +++ b/drivers/net/dsa/lantiq_gswip.c @@ -92,9 +92,7 @@ GSWIP_MDIO_PHY_FDUP_MASK) /* GSWIP MII Registers */ -#define GSWIP_MII_CFG0 0x00 -#define GSWIP_MII_CFG1 0x02 -#define GSWIP_MII_CFG5 0x04 +#define GSWIP_MII_CFGp(p) (0x2 * (p)) #define GSWIP_MII_CFG_EN BIT(14) #define GSWIP_MII_CFG_LDCLKDIS BIT(12) #define GSWIP_MII_CFG_MODE_MIIP 0x0 @@ -392,17 +390,9 @@ static void gswip_mii_mask(struct gswip_priv *priv, u32 clear, u32 set, static void gswip_mii_mask_cfg(struct gswip_priv *priv, u32 clear, u32 set, int port) { - switch (port) { - case 0: - gswip_mii_mask(priv, clear, set, GSWIP_MII_CFG0); - break; - case 1: - gswip_mii_mask(priv, clear, set, GSWIP_MII_CFG1); - break; - case 5: - gswip_mii_mask(priv, clear, set, GSWIP_MII_CFG5); - break; - } + /* There's no MII_CFG register for the CPU port */ + if (!dsa_is_cpu_port(priv->ds, port)) + gswip_mii_mask(priv, clear, set, GSWIP_MII_CFGp(port)); } static void gswip_mii_mask_pcdu(struct gswip_priv *priv, u32 clear, u32 set, @@ -822,9 +812,8 @@ static int gswip_setup(struct dsa_switch *ds) gswip_mdio_mask(priv, 0xff, 0x09, GSWIP_MDIO_MDC_CFG1); /* Disable the xMII link */ - gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, 0); - gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, 1); - gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, 5); + for (i = 0; i < priv->hw_info->max_ports; i++) + gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, i); /* enable special tag insertion on cpu port */ gswip_switch_mask(priv, 0, GSWIP_FDMA_PCTRL_STEN, @@ -1447,11 +1436,12 @@ static void gswip_phylink_validate(struct dsa_switch *ds, int port, phylink_set(mask, Pause); phylink_set(mask, Asym_Pause); - /* With the exclusion of MII and Reverse MII, we support Gigabit, - * including Half duplex + /* With the exclusion of MII, Reverse MII and Reduced MII, we + * support Gigabit, including Half duplex */ if (state->interface != PHY_INTERFACE_MODE_MII && - state->interface != PHY_INTERFACE_MODE_REVMII) { + state->interface != PHY_INTERFACE_MODE_REVMII && + state->interface != PHY_INTERFACE_MODE_RMII) { phylink_set(mask, 1000baseT_Full); phylink_set(mask, 1000baseT_Half); } @@ -1541,9 +1531,7 @@ static void gswip_phylink_mac_link_up(struct dsa_switch *ds, int port, { struct gswip_priv *priv = ds->priv; - /* Enable the xMII interface only for the external PHY */ - if (interface != PHY_INTERFACE_MODE_INTERNAL) - gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port); + gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port); } static void gswip_get_strings(struct dsa_switch *ds, int port, u32 stringset, diff --git a/drivers/net/dsa/mv88e6xxx/global1_vtu.c b/drivers/net/dsa/mv88e6xxx/global1_vtu.c index 66ddf67b8737..7b96396be609 100644 --- a/drivers/net/dsa/mv88e6xxx/global1_vtu.c +++ b/drivers/net/dsa/mv88e6xxx/global1_vtu.c @@ -351,6 +351,10 @@ int mv88e6250_g1_vtu_getnext(struct mv88e6xxx_chip *chip, if (err) return err; + err = mv88e6185_g1_stu_data_read(chip, entry); + if (err) + return err; + /* VTU DBNum[3:0] are located in VTU Operation 3:0 * VTU DBNum[5:4] are located in VTU Operation 9:8 */ diff --git a/drivers/net/ethernet/aquantia/Kconfig b/drivers/net/ethernet/aquantia/Kconfig index efb33c078a3c..cec2018c84a9 100644 --- 
a/drivers/net/ethernet/aquantia/Kconfig +++ b/drivers/net/ethernet/aquantia/Kconfig @@ -19,7 +19,6 @@ if NET_VENDOR_AQUANTIA config AQTION tristate "aQuantia AQtion(tm) Support" depends on PCI - depends on X86_64 || ARM64 || COMPILE_TEST depends on MACSEC || MACSEC=n help This enables the support for the aQuantia AQtion(tm) Ethernet card. diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c index 0fdd19d99d99..0404aafd5ce5 100644 --- a/drivers/net/ethernet/broadcom/bcmsysport.c +++ b/drivers/net/ethernet/broadcom/bcmsysport.c @@ -2503,8 +2503,10 @@ static int bcm_sysport_probe(struct platform_device *pdev) priv = netdev_priv(dev); priv->clk = devm_clk_get_optional(&pdev->dev, "sw_sysport"); - if (IS_ERR(priv->clk)) - return PTR_ERR(priv->clk); + if (IS_ERR(priv->clk)) { + ret = PTR_ERR(priv->clk); + goto err_free_netdev; + } /* Allocate number of TX rings */ priv->tx_rings = devm_kcalloc(&pdev->dev, txq, @@ -2577,6 +2579,7 @@ static int bcm_sysport_probe(struct platform_device *pdev) NETIF_F_HW_VLAN_CTAG_TX; dev->hw_features |= dev->features; dev->vlan_features |= dev->features; + dev->max_mtu = UMAC_MAX_MTU_SIZE; /* Request the WOL interrupt and advertise suspend if available */ priv->wol_irq_disabled = 1; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 4edd6f8e017e..d10e4f85dd11 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -6790,8 +6790,10 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp) ctx->tqm_fp_rings_count = resp->tqm_fp_rings_count; if (!ctx->tqm_fp_rings_count) ctx->tqm_fp_rings_count = bp->max_q; + else if (ctx->tqm_fp_rings_count > BNXT_MAX_TQM_FP_RINGS) + ctx->tqm_fp_rings_count = BNXT_MAX_TQM_FP_RINGS; - tqm_rings = ctx->tqm_fp_rings_count + 1; + tqm_rings = ctx->tqm_fp_rings_count + BNXT_MAX_TQM_SP_RINGS; ctx_pg = kcalloc(tqm_rings, sizeof(*ctx_pg), GFP_KERNEL); if (!ctx_pg) { kfree(ctx); @@ -6925,7 +6927,8 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables) pg_attr = &req.tqm_sp_pg_size_tqm_sp_lvl, pg_dir = &req.tqm_sp_page_dir, ena = FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_SP; - i < 9; i++, num_entries++, pg_attr++, pg_dir++, ena <<= 1) { + i < BNXT_MAX_TQM_RINGS; + i++, num_entries++, pg_attr++, pg_dir++, ena <<= 1) { if (!(enables & ena)) continue; @@ -12887,10 +12890,10 @@ static pci_ers_result_t bnxt_io_error_detected(struct pci_dev *pdev, */ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev) { + pci_ers_result_t result = PCI_ERS_RESULT_DISCONNECT; struct net_device *netdev = pci_get_drvdata(pdev); struct bnxt *bp = netdev_priv(netdev); int err = 0, off; - pci_ers_result_t result = PCI_ERS_RESULT_DISCONNECT; netdev_info(bp->dev, "PCI Slot Reset\n"); @@ -12919,22 +12922,8 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev) pci_save_state(pdev); err = bnxt_hwrm_func_reset(bp); - if (!err) { - err = bnxt_hwrm_func_qcaps(bp); - if (!err && netif_running(netdev)) - err = bnxt_open(netdev); - } - bnxt_ulp_start(bp, err); - if (!err) { - bnxt_reenable_sriov(bp); + if (!err) result = PCI_ERS_RESULT_RECOVERED; - } - } - - if (result != PCI_ERS_RESULT_RECOVERED) { - if (netif_running(netdev)) - dev_close(netdev); - pci_disable_device(pdev); } rtnl_unlock(); @@ -12952,10 +12941,21 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev) static void bnxt_io_resume(struct pci_dev *pdev) { struct net_device *netdev = pci_get_drvdata(pdev); 
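/* Reviewer note: together with the bnxt_io_slot_reset hunk above, this
 * rework splits AER recovery between the two callbacks: slot_reset now
 * only re-enables the device and issues the firmware reset, while
 * requerying capabilities, reopening the netdev and restarting ULPs move
 * here. The assumed wiring in the driver's error-handler table:
 *
 *   static const struct pci_error_handlers bnxt_err_handler = {
 *           .error_detected = bnxt_io_error_detected,
 *           .slot_reset     = bnxt_io_slot_reset,  // re-enable + fw reset
 *           .resume         = bnxt_io_resume,      // qcaps, reopen, ULPs
 *   };
 */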
+ struct bnxt *bp = netdev_priv(netdev); + int err; + netdev_info(bp->dev, "PCI Slot Resume\n"); rtnl_lock(); - netif_device_attach(netdev); + err = bnxt_hwrm_func_qcaps(bp); + if (!err && netif_running(netdev)) + err = bnxt_open(netdev); + + bnxt_ulp_start(bp, err); + if (!err) { + bnxt_reenable_sriov(bp); + netif_device_attach(netdev); + } rtnl_unlock(); } diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index 950ea26ae0d2..51996c85547e 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -1436,6 +1436,11 @@ struct bnxt_ctx_pg_info { struct bnxt_ctx_pg_info **ctx_pg_tbl; }; +#define BNXT_MAX_TQM_SP_RINGS 1 +#define BNXT_MAX_TQM_FP_RINGS 8 +#define BNXT_MAX_TQM_RINGS \ + (BNXT_MAX_TQM_SP_RINGS + BNXT_MAX_TQM_FP_RINGS) + struct bnxt_ctx_mem_info { u32 qp_max_entries; u16 qp_min_qp1_entries; @@ -1474,7 +1479,7 @@ struct bnxt_ctx_mem_info { struct bnxt_ctx_pg_info stat_mem; struct bnxt_ctx_pg_info mrav_mem; struct bnxt_ctx_pg_info tim_mem; - struct bnxt_ctx_pg_info *tqm_mem[9]; + struct bnxt_ctx_pg_info *tqm_mem[BNXT_MAX_TQM_RINGS]; }; struct bnxt_fw_health { diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c index 9ff79d5d14c4..2f8b193a772d 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c @@ -2532,7 +2532,7 @@ int bnxt_flash_package_from_fw_obj(struct net_device *dev, const struct firmware if (rc && ((struct hwrm_err_output *)&resp)->cmd_err == NVM_INSTALL_UPDATE_CMD_ERR_CODE_FRAG_ERR) { - install.flags |= + install.flags = cpu_to_le16(NVM_INSTALL_UPDATE_REQ_FLAGS_ALLOWED_TO_DEFRAG); rc = _hwrm_send_message_silent(bp, &install, @@ -2546,6 +2546,7 @@ int bnxt_flash_package_from_fw_obj(struct net_device *dev, const struct firmware * UPDATE directory and try the flash again */ defrag_attempted = true; + install.flags = 0; rc = __bnxt_flash_nvram(bp->dev, BNX_DIR_TYPE_UPDATE, BNX_DIR_ORDINAL_FIRST, diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c index 8c8368c2f335..64dbbb04b043 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c @@ -222,8 +222,12 @@ int bnxt_get_ulp_msix_base(struct bnxt *bp) int bnxt_get_ulp_stat_ctxs(struct bnxt *bp) { - if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) - return BNXT_MIN_ROCE_STAT_CTXS; + if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) { + struct bnxt_en_dev *edev = bp->edev; + + if (edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested) + return BNXT_MIN_ROCE_STAT_CTXS; + } return 0; } diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c index d5d910916c2e..814a5b10141d 100644 --- a/drivers/net/ethernet/cadence/macb_main.c +++ b/drivers/net/ethernet/cadence/macb_main.c @@ -467,7 +467,7 @@ static void macb_set_tx_clk(struct macb *bp, int speed) { long ferr, rate, rate_rounded; - if (!bp->tx_clk || !(bp->caps & MACB_CAPS_CLK_HW_CHG)) + if (!bp->tx_clk || (bp->caps & MACB_CAPS_CLK_HW_CHG)) return; switch (speed) { diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h b/drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h index 92473dda55d9..22a0220123ad 100644 --- a/drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h +++ b/drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h @@ -40,6 +40,13 @@ #define TCB_L2T_IX_M 0xfffULL #define TCB_L2T_IX_V(x) ((x) << TCB_L2T_IX_S) +#define TCB_T_FLAGS_W 1 
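/* Reviewer note: the _W define above and the _S/_M/_V defines below
 * follow the cxgb4 TCB convention: _W names the TCB word, _S the shift
 * within it, _M the field mask (all 64 t_flags bits here) and _V()
 * positions a value. The chtls_cm.c hunk further down combines them to
 * clear every t_flag and tag the reply with a cookie:
 *
 *   chtls_set_tcb_field_rpl_skb(sk, TCB_T_FLAGS_W,
 *                               TCB_T_FLAGS_V(TCB_T_FLAGS_M), 0,
 *                               TCB_FIELD_COOKIE_TFLAG, 1);
 */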
+#define TCB_T_FLAGS_S 0 +#define TCB_T_FLAGS_M 0xffffffffffffffffULL +#define TCB_T_FLAGS_V(x) ((__u64)(x) << TCB_T_FLAGS_S) + +#define TCB_FIELD_COOKIE_TFLAG 1 + #define TCB_SMAC_SEL_W 0 #define TCB_SMAC_SEL_S 24 #define TCB_SMAC_SEL_M 0xffULL diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h index 72bb123d53db..9e2378013642 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h @@ -575,7 +575,11 @@ int send_tx_flowc_wr(struct sock *sk, int compl, void chtls_tcp_push(struct sock *sk, int flags); int chtls_push_frames(struct chtls_sock *csk, int comp); int chtls_set_tcb_tflag(struct sock *sk, unsigned int bit_pos, int val); +void chtls_set_tcb_field_rpl_skb(struct sock *sk, u16 word, + u64 mask, u64 val, u8 cookie, + int through_l2t); int chtls_setkey(struct chtls_sock *csk, u32 keylen, u32 mode, int cipher_type); +void chtls_set_quiesce_ctrl(struct sock *sk, int val); void skb_entail(struct sock *sk, struct sk_buff *skb, int flags); unsigned int keyid_to_addr(int start_addr, int keyid); void free_tls_keyid(struct sock *sk); diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c index a0e0d8a83681..e5cfbe196ba6 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c @@ -32,6 +32,7 @@ #include "chtls.h" #include "chtls_cm.h" #include "clip_tbl.h" +#include "t4_tcb.h" /* * State transitions and actions for close. Note that if we are in SYN_SENT @@ -267,7 +268,9 @@ static void chtls_send_reset(struct sock *sk, int mode, struct sk_buff *skb) if (sk->sk_state != TCP_SYN_RECV) chtls_send_abort(sk, mode, skb); else - goto out; + chtls_set_tcb_field_rpl_skb(sk, TCB_T_FLAGS_W, + TCB_T_FLAGS_V(TCB_T_FLAGS_M), 0, + TCB_FIELD_COOKIE_TFLAG, 1); return; out: @@ -621,7 +624,7 @@ static void chtls_reset_synq(struct listen_ctx *listen_ctx) while (!skb_queue_empty(&listen_ctx->synq)) { struct chtls_sock *csk = - container_of((struct synq *)__skb_dequeue + container_of((struct synq *)skb_peek (&listen_ctx->synq), struct chtls_sock, synq); struct sock *child = csk->sk; @@ -1109,6 +1112,7 @@ static struct sock *chtls_recv_sock(struct sock *lsk, const struct cpl_pass_accept_req *req, struct chtls_dev *cdev) { + struct adapter *adap = pci_get_drvdata(cdev->pdev); struct neighbour *n = NULL; struct inet_sock *newinet; const struct iphdr *iph; @@ -1118,9 +1122,10 @@ static struct sock *chtls_recv_sock(struct sock *lsk, struct dst_entry *dst; struct tcp_sock *tp; struct sock *newsk; + bool found = false; u16 port_id; int rxq_idx; - int step; + int step, i; iph = (const struct iphdr *)network_hdr; newsk = tcp_create_openreq_child(lsk, oreq, cdev->askb); @@ -1152,7 +1157,7 @@ static struct sock *chtls_recv_sock(struct sock *lsk, n = dst_neigh_lookup(dst, &ip6h->saddr); #endif } - if (!n) + if (!n || !n->dev) goto free_sk; ndev = n->dev; @@ -1161,6 +1166,13 @@ static struct sock *chtls_recv_sock(struct sock *lsk, if (is_vlan_dev(ndev)) ndev = vlan_dev_real_dev(ndev); + for_each_port(adap, i) + if (cdev->ports[i] == ndev) + found = true; + + if (!found) + goto free_dst; + port_id = cxgb4_port_idx(ndev); csk = chtls_sock_create(cdev); @@ -1238,6 +1250,7 @@ static struct sock *chtls_recv_sock(struct sock *lsk, free_csk: chtls_sock_release(&csk->kref); free_dst: + neigh_release(n); dst_release(dst); 
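/* Reviewer note: dst_neigh_lookup() returns a referenced neighbour, so
 * every failure path taken after a successful lookup must drop that
 * reference; the neigh_release() added just above closes the leak. The
 * pairing, reduced to its essentials:
 *
 *   n = dst_neigh_lookup(dst, &iph->saddr);  // takes a reference
 *   ...
 *   neigh_release(n);                        // must be paired on exit paths
 */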
free_sk: inet_csk_prepare_forced_close(newsk); @@ -1387,7 +1400,7 @@ static void chtls_pass_accept_request(struct sock *sk, newsk = chtls_recv_sock(sk, oreq, network_hdr, req, cdev); if (!newsk) - goto free_oreq; + goto reject; if (chtls_get_module(newsk)) goto reject; @@ -1403,8 +1416,6 @@ static void chtls_pass_accept_request(struct sock *sk, kfree_skb(skb); return; -free_oreq: - chtls_reqsk_free(oreq); reject: mk_tid_release(reply_skb, 0, tid); cxgb4_ofld_send(cdev->lldi->ports[0], reply_skb); @@ -1589,6 +1600,11 @@ static int chtls_pass_establish(struct chtls_dev *cdev, struct sk_buff *skb) sk_wake_async(sk, 0, POLL_OUT); data = lookup_stid(cdev->tids, stid); + if (!data) { + /* listening server close */ + kfree_skb(skb); + goto unlock; + } lsk = ((struct listen_ctx *)data)->lsk; bh_lock_sock(lsk); @@ -1936,6 +1952,8 @@ static void chtls_close_con_rpl(struct sock *sk, struct sk_buff *skb) else if (tcp_sk(sk)->linger2 < 0 && !csk_flag_nochk(csk, CSK_ABORT_SHUTDOWN)) chtls_abort_conn(sk, skb); + else if (csk_flag_nochk(csk, CSK_TX_DATA_SENT)) + chtls_set_quiesce_ctrl(sk, 0); break; default: pr_info("close_con_rpl in bad state %d\n", sk->sk_state); @@ -1997,39 +2015,6 @@ static void t4_defer_reply(struct sk_buff *skb, struct chtls_dev *cdev, spin_unlock_bh(&cdev->deferq.lock); } -static void send_abort_rpl(struct sock *sk, struct sk_buff *skb, - struct chtls_dev *cdev, int status, int queue) -{ - struct cpl_abort_req_rss *req = cplhdr(skb); - struct sk_buff *reply_skb; - struct chtls_sock *csk; - - csk = rcu_dereference_sk_user_data(sk); - - reply_skb = alloc_skb(sizeof(struct cpl_abort_rpl), - GFP_KERNEL); - - if (!reply_skb) { - req->status = (queue << 1); - t4_defer_reply(skb, cdev, send_defer_abort_rpl); - return; - } - - set_abort_rpl_wr(reply_skb, GET_TID(req), status); - kfree_skb(skb); - - set_wr_txq(reply_skb, CPL_PRIORITY_DATA, queue); - if (csk_conn_inline(csk)) { - struct l2t_entry *e = csk->l2t_entry; - - if (e && sk->sk_state != TCP_SYN_RECV) { - cxgb4_l2t_send(csk->egress_dev, reply_skb, e); - return; - } - } - cxgb4_ofld_send(cdev->lldi->ports[0], reply_skb); -} - static void chtls_send_abort_rpl(struct sock *sk, struct sk_buff *skb, struct chtls_dev *cdev, int status, int queue) @@ -2078,9 +2063,9 @@ static void bl_abort_syn_rcv(struct sock *lsk, struct sk_buff *skb) queue = csk->txq_idx; skb->sk = NULL; + chtls_send_abort_rpl(child, skb, BLOG_SKB_CB(skb)->cdev, + CPL_ABORT_NO_RST, queue); do_abort_syn_rcv(child, lsk); - send_abort_rpl(child, skb, BLOG_SKB_CB(skb)->cdev, - CPL_ABORT_NO_RST, queue); } static int abort_syn_rcv(struct sock *sk, struct sk_buff *skb) @@ -2110,8 +2095,8 @@ static int abort_syn_rcv(struct sock *sk, struct sk_buff *skb) if (!sock_owned_by_user(psk)) { int queue = csk->txq_idx; + chtls_send_abort_rpl(sk, skb, cdev, CPL_ABORT_NO_RST, queue); do_abort_syn_rcv(sk, psk); - send_abort_rpl(sk, skb, cdev, CPL_ABORT_NO_RST, queue); } else { skb->sk = sk; BLOG_SKB_CB(skb)->backlog_rcv = bl_abort_syn_rcv; @@ -2129,9 +2114,6 @@ static void chtls_abort_req_rss(struct sock *sk, struct sk_buff *skb) int queue = csk->txq_idx; if (is_neg_adv(req->status)) { - if (sk->sk_state == TCP_SYN_RECV) - chtls_set_tcb_tflag(sk, 0, 0); - kfree_skb(skb); return; } @@ -2158,12 +2140,12 @@ static void chtls_abort_req_rss(struct sock *sk, struct sk_buff *skb) if (sk->sk_state == TCP_SYN_RECV && !abort_syn_rcv(sk, skb)) return; - chtls_release_resources(sk); - chtls_conn_done(sk); } chtls_send_abort_rpl(sk, skb, BLOG_SKB_CB(skb)->cdev, rst_status, queue); + 
chtls_release_resources(sk); + chtls_conn_done(sk); } static void chtls_abort_rpl_rss(struct sock *sk, struct sk_buff *skb) @@ -2315,6 +2297,28 @@ static int chtls_wr_ack(struct chtls_dev *cdev, struct sk_buff *skb) return 0; } +static int chtls_set_tcb_rpl(struct chtls_dev *cdev, struct sk_buff *skb) +{ + struct cpl_set_tcb_rpl *rpl = cplhdr(skb) + RSS_HDR; + unsigned int hwtid = GET_TID(rpl); + struct sock *sk; + + sk = lookup_tid(cdev->tids, hwtid); + + /* return EINVAL if socket doesn't exist */ + if (!sk) + return -EINVAL; + + /* Reusing the skb as size of cpl_set_tcb_field structure + * is greater than cpl_abort_req + */ + if (TCB_COOKIE_G(rpl->cookie) == TCB_FIELD_COOKIE_TFLAG) + chtls_send_abort(sk, CPL_ABORT_SEND_RST, NULL); + + kfree_skb(skb); + return 0; +} + chtls_handler_func chtls_handlers[NUM_CPL_CMDS] = { [CPL_PASS_OPEN_RPL] = chtls_pass_open_rpl, [CPL_CLOSE_LISTSRV_RPL] = chtls_close_listsrv_rpl, @@ -2327,5 +2331,6 @@ chtls_handler_func chtls_handlers[NUM_CPL_CMDS] = { [CPL_CLOSE_CON_RPL] = chtls_conn_cpl, [CPL_ABORT_REQ_RSS] = chtls_conn_cpl, [CPL_ABORT_RPL_RSS] = chtls_conn_cpl, - [CPL_FW4_ACK] = chtls_wr_ack, + [CPL_FW4_ACK] = chtls_wr_ack, + [CPL_SET_TCB_RPL] = chtls_set_tcb_rpl, }; diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_hw.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_hw.c index a4fb463af22a..1e67140b0f80 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_hw.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_hw.c @@ -88,6 +88,24 @@ static int chtls_set_tcb_field(struct sock *sk, u16 word, u64 mask, u64 val) return ret < 0 ? ret : 0; } +void chtls_set_tcb_field_rpl_skb(struct sock *sk, u16 word, + u64 mask, u64 val, u8 cookie, + int through_l2t) +{ + struct sk_buff *skb; + unsigned int wrlen; + + wrlen = sizeof(struct cpl_set_tcb_field) + sizeof(struct ulptx_idata); + wrlen = roundup(wrlen, 16); + + skb = alloc_skb(wrlen, GFP_KERNEL | __GFP_NOFAIL); + if (!skb) + return; + + __set_tcb_field(sk, skb, word, mask, val, cookie, 0); + send_or_defer(sk, tcp_sk(sk), skb, through_l2t); +} + /* * Set one of the t_flags bits in the TCB. 
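*
* Reviewer note: of the two helpers added in this file,
* chtls_set_tcb_field_rpl_skb() above allocates with
* GFP_KERNEL | __GFP_NOFAIL — the allocator retries until it succeeds, so
* the NULL check there is purely defensive — while chtls_set_quiesce_ctrl()
* below uses GFP_ATOMIC, presumably because it may be called from atomic
* context, and therefore has to tolerate failure:
*
*   skb = alloc_skb(wrlen, GFP_ATOMIC);  // may fail; must not sleep
*   if (!skb)
*           return;                      // best-effort quiesce
*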
*/ @@ -113,6 +131,29 @@ static int chtls_set_tcb_quiesce(struct sock *sk, int val) TF_RX_QUIESCE_V(val)); } +void chtls_set_quiesce_ctrl(struct sock *sk, int val) +{ + struct chtls_sock *csk; + struct sk_buff *skb; + unsigned int wrlen; + int ret; + + wrlen = sizeof(struct cpl_set_tcb_field) + sizeof(struct ulptx_idata); + wrlen = roundup(wrlen, 16); + + skb = alloc_skb(wrlen, GFP_ATOMIC); + if (!skb) + return; + + csk = rcu_dereference_sk_user_data(sk); + + __set_tcb_field(sk, skb, 1, TF_RX_QUIESCE_V(1), 0, 0, 1); + set_wr_txq(skb, CPL_PRIORITY_CONTROL, csk->port_id); + ret = cxgb4_ofld_send(csk->egress_dev, skb); + if (ret < 0) + kfree_skb(skb); +} + /* TLS Key bitmap processing */ int chtls_init_kmap(struct chtls_dev *cdev, struct cxgb4_lld_info *lldi) { diff --git a/drivers/net/ethernet/ethoc.c b/drivers/net/ethernet/ethoc.c index 0981fe9652e5..3d9b0b161e24 100644 --- a/drivers/net/ethernet/ethoc.c +++ b/drivers/net/ethernet/ethoc.c @@ -1211,7 +1211,7 @@ static int ethoc_probe(struct platform_device *pdev) ret = mdiobus_register(priv->mdio); if (ret) { dev_err(&netdev->dev, "failed to register MDIO bus\n"); - goto free2; + goto free3; } ret = ethoc_mdio_probe(netdev); @@ -1243,6 +1243,7 @@ error2: netif_napi_del(&priv->napi); error: mdiobus_unregister(priv->mdio); +free3: mdiobus_free(priv->mdio); free2: clk_disable_unprepare(priv->clk); diff --git a/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c b/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c index c8e5d889bd81..21de56345503 100644 --- a/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c +++ b/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c @@ -223,3 +223,4 @@ static struct platform_driver fs_enet_bb_mdio_driver = { }; module_platform_driver(fs_enet_bb_mdio_driver); +MODULE_LICENSE("GPL"); diff --git a/drivers/net/ethernet/freescale/fs_enet/mii-fec.c b/drivers/net/ethernet/freescale/fs_enet/mii-fec.c index 8b51ee142fa3..152f4d83765a 100644 --- a/drivers/net/ethernet/freescale/fs_enet/mii-fec.c +++ b/drivers/net/ethernet/freescale/fs_enet/mii-fec.c @@ -224,3 +224,4 @@ static struct platform_driver fs_enet_fec_mdio_driver = { }; module_platform_driver(fs_enet_fec_mdio_driver); +MODULE_LICENSE("GPL"); diff --git a/drivers/net/ethernet/freescale/ucc_geth.c b/drivers/net/ethernet/freescale/ucc_geth.c index ba8869c3d891..6d853f018d53 100644 --- a/drivers/net/ethernet/freescale/ucc_geth.c +++ b/drivers/net/ethernet/freescale/ucc_geth.c @@ -3889,6 +3889,7 @@ static int ucc_geth_probe(struct platform_device* ofdev) INIT_WORK(&ugeth->timeout_work, ucc_geth_timeout_work); netif_napi_add(dev, &ugeth->napi, ucc_geth_poll, 64); dev->mtu = 1500; + dev->max_mtu = 1518; ugeth->msg_enable = netif_msg_init(debug.msg_enable, UGETH_MSG_DEFAULT); ugeth->phy_interface = phy_interface; @@ -3934,12 +3935,12 @@ static int ucc_geth_remove(struct platform_device* ofdev) struct device_node *np = ofdev->dev.of_node; unregister_netdev(dev); - free_netdev(dev); ucc_geth_memclean(ugeth); if (of_phy_is_fixed_link(np)) of_phy_deregister_fixed_link(np); of_node_put(ugeth->ug_info->tbi_node); of_node_put(ugeth->ug_info->phy_node); + free_netdev(dev); return 0; } diff --git a/drivers/net/ethernet/freescale/ucc_geth.h b/drivers/net/ethernet/freescale/ucc_geth.h index 1a9bdf66a7d8..11d4bf5dc21f 100644 --- a/drivers/net/ethernet/freescale/ucc_geth.h +++ b/drivers/net/ethernet/freescale/ucc_geth.h @@ -575,7 +575,14 @@ struct ucc_geth_tx_global_pram { u32 vtagtable[0x8]; /* 8 4-byte VLAN tags */ u32 tqptr; /* a base pointer to the Tx Queues Memory Region */ 
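/* Reviewer note: the reserved arrays below are sized by offset
 * arithmetic so each member sits at its documented PRAM offset: res2 now
 * spans 0x74..0x78, placing snums_en at 0x78, mtu[] at 0x84 and
 * wrrtablebase at 0xa8, with res4 padding the block back out to 0xc0.
 * A build-time guard for such a layout could look like (hypothetical,
 * not part of this patch):
 *
 *   static_assert(offsetof(struct ucc_geth_tx_global_pram, snums_en)
 *                 == 0x78, "Tx global PRAM layout drifted");
 */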
- u8 res2[0x80 - 0x74]; + u8 res2[0x78 - 0x74]; + u64 snums_en; + u32 l2l3baseptr; /* top byte consists of a few other bit fields */ + + u16 mtu[8]; + u8 res3[0xa8 - 0x94]; + u32 wrrtablebase; /* top byte is reserved */ + u8 res4[0xc0 - 0xac]; } __packed; /* structure representing Extended Filtering Global Parameters in PRAM */ diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c index 7165da0ee9aa..a6e3f07caf99 100644 --- a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c +++ b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c @@ -415,6 +415,10 @@ static void __lb_other_process(struct hns_nic_ring_data *ring_data, /* for mutl buffer*/ new_skb = skb_copy(skb, GFP_ATOMIC); dev_kfree_skb_any(skb); + if (!new_skb) { + netdev_err(ndev, "skb alloc failed\n"); + return; + } skb = new_skb; check_ok = 0; diff --git a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h index fb5e8842983c..33defa4c180a 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h +++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h @@ -169,7 +169,7 @@ struct hclgevf_mbx_arq_ring { #define hclge_mbx_ring_ptr_move_crq(crq) \ (crq->next_to_use = (crq->next_to_use + 1) % crq->desc_num) #define hclge_mbx_tail_ptr_move_arq(arq) \ - (arq.tail = (arq.tail + 1) % HCLGE_MBX_MAX_ARQ_MSG_SIZE) + (arq.tail = (arq.tail + 1) % HCLGE_MBX_MAX_ARQ_MSG_NUM) #define hclge_mbx_head_ptr_move_arq(arq) \ - (arq.head = (arq.head + 1) % HCLGE_MBX_MAX_ARQ_MSG_SIZE) + (arq.head = (arq.head + 1) % HCLGE_MBX_MAX_ARQ_MSG_NUM) #endif diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c index e6f37f91c489..c242883fea5d 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c @@ -752,7 +752,8 @@ static int hclge_get_sset_count(struct hnae3_handle *handle, int stringset) handle->flags |= HNAE3_SUPPORT_SERDES_SERIAL_LOOPBACK; handle->flags |= HNAE3_SUPPORT_SERDES_PARALLEL_LOOPBACK; - if (hdev->hw.mac.phydev) { + if (hdev->hw.mac.phydev && hdev->hw.mac.phydev->drv && + hdev->hw.mac.phydev->drv->set_loopback) { count += 1; handle->flags |= HNAE3_SUPPORT_PHY_LOOPBACK; } @@ -4537,8 +4538,8 @@ static int hclge_set_rss_tuple(struct hnae3_handle *handle, req->ipv4_sctp_en = tuple_sets; break; case SCTP_V6_FLOW: - if ((nfc->data & RXH_L4_B_0_1) || - (nfc->data & RXH_L4_B_2_3)) + if (hdev->ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2 && + (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3))) return -EINVAL; req->ipv6_sctp_en = tuple_sets; @@ -4730,6 +4731,8 @@ static void hclge_rss_init_cfg(struct hclge_dev *hdev) vport[i].rss_tuple_sets.ipv6_udp_en = HCLGE_RSS_INPUT_TUPLE_OTHER; vport[i].rss_tuple_sets.ipv6_sctp_en = + hdev->ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2 ? 
+ HCLGE_RSS_INPUT_TUPLE_SCTP_NO_PORT : HCLGE_RSS_INPUT_TUPLE_SCTP; vport[i].rss_tuple_sets.ipv6_fragment_en = HCLGE_RSS_INPUT_TUPLE_OTHER; diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h index 50a294dfaff5..ca46bc9110d7 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h @@ -107,6 +107,8 @@ #define HCLGE_D_IP_BIT BIT(2) #define HCLGE_S_IP_BIT BIT(3) #define HCLGE_V_TAG_BIT BIT(4) +#define HCLGE_RSS_INPUT_TUPLE_SCTP_NO_PORT \ + (HCLGE_D_IP_BIT | HCLGE_S_IP_BIT | HCLGE_V_TAG_BIT) #define HCLGE_RSS_TC_SIZE_0 1 #define HCLGE_RSS_TC_SIZE_1 2 diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c index 145757cb70f9..674b3a22e91f 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c @@ -917,8 +917,8 @@ static int hclgevf_set_rss_tuple(struct hnae3_handle *handle, req->ipv4_sctp_en = tuple_sets; break; case SCTP_V6_FLOW: - if ((nfc->data & RXH_L4_B_0_1) || - (nfc->data & RXH_L4_B_2_3)) + if (hdev->ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2 && + (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3))) return -EINVAL; req->ipv6_sctp_en = tuple_sets; @@ -2502,7 +2502,10 @@ static void hclgevf_rss_init_cfg(struct hclgevf_dev *hdev) tuple_sets->ipv4_fragment_en = HCLGEVF_RSS_INPUT_TUPLE_OTHER; tuple_sets->ipv6_tcp_en = HCLGEVF_RSS_INPUT_TUPLE_OTHER; tuple_sets->ipv6_udp_en = HCLGEVF_RSS_INPUT_TUPLE_OTHER; - tuple_sets->ipv6_sctp_en = HCLGEVF_RSS_INPUT_TUPLE_SCTP; + tuple_sets->ipv6_sctp_en = + hdev->ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2 ? + HCLGEVF_RSS_INPUT_TUPLE_SCTP_NO_PORT : + HCLGEVF_RSS_INPUT_TUPLE_SCTP; tuple_sets->ipv6_fragment_en = HCLGEVF_RSS_INPUT_TUPLE_OTHER; } diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h index 1b183bc35604..f6d817a3edcb 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h +++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h @@ -122,6 +122,8 @@ #define HCLGEVF_D_IP_BIT BIT(2) #define HCLGEVF_S_IP_BIT BIT(3) #define HCLGEVF_V_TAG_BIT BIT(4) +#define HCLGEVF_RSS_INPUT_TUPLE_SCTP_NO_PORT \ + (HCLGEVF_D_IP_BIT | HCLGEVF_S_IP_BIT | HCLGEVF_V_TAG_BIT) #define HCLGEVF_STATS_TIMER_INTERVAL 36U diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c index f302504faa8a..9778c83150f1 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.c +++ b/drivers/net/ethernet/ibm/ibmvnic.c @@ -955,6 +955,7 @@ static void release_resources(struct ibmvnic_adapter *adapter) release_rx_pools(adapter); release_napi(adapter); + release_login_buffer(adapter); release_login_rsp_buffer(adapter); } @@ -2341,8 +2342,7 @@ static void __ibmvnic_reset(struct work_struct *work) set_current_state(TASK_UNINTERRUPTIBLE); schedule_timeout(60 * HZ); } - } else if (!(rwi->reset_reason == VNIC_RESET_FATAL && - adapter->from_passive_init)) { + } else { rc = do_reset(adapter, rwi, reset_state); } kfree(rwi); @@ -2981,9 +2981,7 @@ static int reset_one_sub_crq_queue(struct ibmvnic_adapter *adapter, int rc; if (!scrq) { - netdev_dbg(adapter->netdev, - "Invalid scrq reset. 
irq (%d) or msgs (%p).\n", - scrq->irq, scrq->msgs); + netdev_dbg(adapter->netdev, "Invalid scrq reset.\n"); return -EINVAL; } @@ -3873,7 +3871,9 @@ static int send_login(struct ibmvnic_adapter *adapter) return -1; } + release_login_buffer(adapter); release_login_rsp_buffer(adapter); + client_data_len = vnic_client_data_len(adapter); buffer_size = diff --git a/drivers/net/ethernet/intel/e1000e/e1000.h b/drivers/net/ethernet/intel/e1000e/e1000.h index ba7a0f8f6937..5b2143f4b1f8 100644 --- a/drivers/net/ethernet/intel/e1000e/e1000.h +++ b/drivers/net/ethernet/intel/e1000e/e1000.h @@ -436,6 +436,7 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca); #define FLAG2_DFLT_CRC_STRIPPING BIT(12) #define FLAG2_CHECK_RX_HWTSTAMP BIT(13) #define FLAG2_CHECK_SYSTIM_OVERFLOW BIT(14) +#define FLAG2_ENABLE_S0IX_FLOWS BIT(15) #define E1000_RX_DESC_PS(R, i) \ (&(((union e1000_rx_desc_packet_split *)((R).desc))[i])) diff --git a/drivers/net/ethernet/intel/e1000e/ethtool.c b/drivers/net/ethernet/intel/e1000e/ethtool.c index 03215b0aee4b..06442e6bef73 100644 --- a/drivers/net/ethernet/intel/e1000e/ethtool.c +++ b/drivers/net/ethernet/intel/e1000e/ethtool.c @@ -23,6 +23,13 @@ struct e1000_stats { int stat_offset; }; +static const char e1000e_priv_flags_strings[][ETH_GSTRING_LEN] = { +#define E1000E_PRIV_FLAGS_S0IX_ENABLED BIT(0) + "s0ix-enabled", +}; + +#define E1000E_PRIV_FLAGS_STR_LEN ARRAY_SIZE(e1000e_priv_flags_strings) + #define E1000_STAT(str, m) { \ .stat_string = str, \ .type = E1000_STATS, \ @@ -1776,6 +1783,8 @@ static int e1000e_get_sset_count(struct net_device __always_unused *netdev, return E1000_TEST_LEN; case ETH_SS_STATS: return E1000_STATS_LEN; + case ETH_SS_PRIV_FLAGS: + return E1000E_PRIV_FLAGS_STR_LEN; default: return -EOPNOTSUPP; } @@ -2097,6 +2106,10 @@ static void e1000_get_strings(struct net_device __always_unused *netdev, p += ETH_GSTRING_LEN; } break; + case ETH_SS_PRIV_FLAGS: + memcpy(data, e1000e_priv_flags_strings, + E1000E_PRIV_FLAGS_STR_LEN * ETH_GSTRING_LEN); + break; } } @@ -2305,6 +2318,37 @@ static int e1000e_get_ts_info(struct net_device *netdev, return 0; } +static u32 e1000e_get_priv_flags(struct net_device *netdev) +{ + struct e1000_adapter *adapter = netdev_priv(netdev); + u32 priv_flags = 0; + + if (adapter->flags2 & FLAG2_ENABLE_S0IX_FLOWS) + priv_flags |= E1000E_PRIV_FLAGS_S0IX_ENABLED; + + return priv_flags; +} + +static int e1000e_set_priv_flags(struct net_device *netdev, u32 priv_flags) +{ + struct e1000_adapter *adapter = netdev_priv(netdev); + unsigned int flags2 = adapter->flags2; + + flags2 &= ~FLAG2_ENABLE_S0IX_FLOWS; + if (priv_flags & E1000E_PRIV_FLAGS_S0IX_ENABLED) { + struct e1000_hw *hw = &adapter->hw; + + if (hw->mac.type < e1000_pch_cnp) + return -EINVAL; + flags2 |= FLAG2_ENABLE_S0IX_FLOWS; + } + + if (flags2 != adapter->flags2) + adapter->flags2 = flags2; + + return 0; +} + static const struct ethtool_ops e1000_ethtool_ops = { .supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS, .get_drvinfo = e1000_get_drvinfo, @@ -2336,6 +2380,8 @@ static const struct ethtool_ops e1000_ethtool_ops = { .set_eee = e1000e_set_eee, .get_link_ksettings = e1000_get_link_ksettings, .set_link_ksettings = e1000_set_link_ksettings, + .get_priv_flags = e1000e_get_priv_flags, + .set_priv_flags = e1000e_set_priv_flags, }; void e1000e_set_ethtool_ops(struct net_device *netdev) diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c index 9aa6fad8ed47..6fb46682b058 100644 --- 
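The e1000e changes above wire a private flag through the standard ethtool triplet: a strings table, a get callback reporting current state, and a set callback that validates before applying (e1000e additionally rejects the flag on hardware older than CNP). A condensed sketch of the pattern, assuming a hypothetical mydrv driver:

#include <linux/ethtool.h>
#include <linux/netdevice.h>

struct mydrv_adapter {
	u32 flags;			/* illustrative private state */
};

#define MYDRV_FLAG_FOO		BIT(0)	/* driver-internal flag */

static const char mydrv_priv_flags_strings[][ETH_GSTRING_LEN] = {
#define MYDRV_PRIV_FLAGS_FOO	BIT(0)
	"foo-enabled",			/* index must match the BIT() above */
};

static u32 mydrv_get_priv_flags(struct net_device *netdev)
{
	struct mydrv_adapter *adapter = netdev_priv(netdev);

	return (adapter->flags & MYDRV_FLAG_FOO) ? MYDRV_PRIV_FLAGS_FOO : 0;
}

static int mydrv_set_priv_flags(struct net_device *netdev, u32 priv_flags)
{
	struct mydrv_adapter *adapter = netdev_priv(netdev);

	if (priv_flags & MYDRV_PRIV_FLAGS_FOO)
		adapter->flags |= MYDRV_FLAG_FOO;
	else
		adapter->flags &= ~MYDRV_FLAG_FOO;

	return 0;
}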
a/drivers/net/ethernet/intel/e1000e/ich8lan.c +++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c @@ -1240,6 +1240,9 @@ static s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force) return 0; if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID) { + struct e1000_adapter *adapter = hw->adapter; + bool firmware_bug = false; + if (force) { /* Request ME un-configure ULP mode in the PHY */ mac_reg = er32(H2ME); @@ -1248,16 +1251,24 @@ static s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force) ew32(H2ME, mac_reg); } - /* Poll up to 300msec for ME to clear ULP_CFG_DONE. */ + /* Poll up to 2.5 seconds for ME to clear ULP_CFG_DONE. + * If this takes more than 1 second, show a warning indicating a + * firmware bug + */ while (er32(FWSM) & E1000_FWSM_ULP_CFG_DONE) { - if (i++ == 30) { + if (i++ == 250) { ret_val = -E1000_ERR_PHY; goto out; } + if (i > 100 && !firmware_bug) + firmware_bug = true; usleep_range(10000, 11000); } - e_dbg("ULP_CONFIG_DONE cleared after %dmsec\n", i * 10); + if (firmware_bug) + e_warn("ULP_CONFIG_DONE took %dmsec. This is a firmware bug\n", i * 10); + else + e_dbg("ULP_CONFIG_DONE cleared after %dmsec\n", i * 10); if (force) { mac_reg = er32(H2ME); diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c index 128ab6898070..e9b82c209c2d 100644 --- a/drivers/net/ethernet/intel/e1000e/netdev.c +++ b/drivers/net/ethernet/intel/e1000e/netdev.c @@ -103,45 +103,6 @@ static const struct e1000_reg_info e1000_reg_info_tbl[] = { {0, NULL} }; -struct e1000e_me_supported { - u16 device_id; /* supported device ID */ -}; - -static const struct e1000e_me_supported me_supported[] = { - {E1000_DEV_ID_PCH_LPT_I217_LM}, - {E1000_DEV_ID_PCH_LPTLP_I218_LM}, - {E1000_DEV_ID_PCH_I218_LM2}, - {E1000_DEV_ID_PCH_I218_LM3}, - {E1000_DEV_ID_PCH_SPT_I219_LM}, - {E1000_DEV_ID_PCH_SPT_I219_LM2}, - {E1000_DEV_ID_PCH_LBG_I219_LM3}, - {E1000_DEV_ID_PCH_SPT_I219_LM4}, - {E1000_DEV_ID_PCH_SPT_I219_LM5}, - {E1000_DEV_ID_PCH_CNP_I219_LM6}, - {E1000_DEV_ID_PCH_CNP_I219_LM7}, - {E1000_DEV_ID_PCH_ICP_I219_LM8}, - {E1000_DEV_ID_PCH_ICP_I219_LM9}, - {E1000_DEV_ID_PCH_CMP_I219_LM10}, - {E1000_DEV_ID_PCH_CMP_I219_LM11}, - {E1000_DEV_ID_PCH_CMP_I219_LM12}, - {E1000_DEV_ID_PCH_TGP_I219_LM13}, - {E1000_DEV_ID_PCH_TGP_I219_LM14}, - {E1000_DEV_ID_PCH_TGP_I219_LM15}, - {0} -}; - -static bool e1000e_check_me(u16 device_id) -{ - struct e1000e_me_supported *id; - - for (id = (struct e1000e_me_supported *)me_supported; - id->device_id; id++) - if (device_id == id->device_id) - return true; - - return false; -} - /** * __ew32_prepare - prepare to write to MAC CSR register on certain parts * @hw: pointer to the HW structure @@ -6962,7 +6923,6 @@ static __maybe_unused int e1000e_pm_suspend(struct device *dev) struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev)); struct e1000_adapter *adapter = netdev_priv(netdev); struct pci_dev *pdev = to_pci_dev(dev); - struct e1000_hw *hw = &adapter->hw; int rc; e1000e_flush_lpic(pdev); @@ -6970,13 +6930,13 @@ static __maybe_unused int e1000e_pm_suspend(struct device *dev) e1000e_pm_freeze(dev); rc = __e1000_shutdown(pdev, false); - if (rc) + if (rc) { e1000e_pm_thaw(dev); - - /* Introduce S0ix implementation */ - if (hw->mac.type >= e1000_pch_cnp && - !e1000e_check_me(hw->adapter->pdev->device)) - e1000e_s0ix_entry_flow(adapter); + } else { + /* Introduce S0ix implementation */ + if (adapter->flags2 & FLAG2_ENABLE_S0IX_FLOWS) + e1000e_s0ix_entry_flow(adapter); + } return rc; } @@ -6986,12 +6946,10 @@ static __maybe_unused int 
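The ULP polling change above extends the timeout but also distinguishes "slow" from "broken": past a soft limit it records that firmware misbehaved, past a hard limit it gives up. The general shape, with illustrative limits matching the 10 ms poll step used there — cfg_done() is a hypothetical readiness test:

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/printk.h>

#define SOFT_LIMIT	100	/* 1 s: suspicious, warn once */
#define HARD_LIMIT	250	/* 2.5 s: give up */

static bool cfg_done(void);	/* hypothetical hardware poll */

static int poll_cfg_done(void)
{
	bool firmware_bug = false;
	int i = 0;

	while (!cfg_done()) {
		if (i++ == HARD_LIMIT)
			return -ETIMEDOUT;
		if (i > SOFT_LIMIT && !firmware_bug)
			firmware_bug = true;
		usleep_range(10000, 11000);
	}

	if (firmware_bug)
		pr_warn("config done took %d ms; firmware bug\n", i * 10);

	return 0;
}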
e1000e_pm_resume(struct device *dev) struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev)); struct e1000_adapter *adapter = netdev_priv(netdev); struct pci_dev *pdev = to_pci_dev(dev); - struct e1000_hw *hw = &adapter->hw; int rc; /* Introduce S0ix implementation */ - if (hw->mac.type >= e1000_pch_cnp && - !e1000e_check_me(hw->adapter->pdev->device)) + if (adapter->flags2 & FLAG2_ENABLE_S0IX_FLOWS) e1000e_s0ix_exit_flow(adapter); rc = __e1000_resume(pdev); @@ -7655,6 +7613,9 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (!(adapter->flags & FLAG_HAS_AMT)) e1000e_get_hw_control(adapter); + if (hw->mac.type >= e1000_pch_cnp) + adapter->flags2 |= FLAG2_ENABLE_S0IX_FLOWS; + strlcpy(netdev->name, "eth%d", sizeof(netdev->name)); err = register_netdev(netdev); if (err) diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h index d231a2cdd98f..118473dfdcbd 100644 --- a/drivers/net/ethernet/intel/i40e/i40e.h +++ b/drivers/net/ethernet/intel/i40e/i40e.h @@ -120,6 +120,7 @@ enum i40e_state_t { __I40E_RESET_INTR_RECEIVED, __I40E_REINIT_REQUESTED, __I40E_PF_RESET_REQUESTED, + __I40E_PF_RESET_AND_REBUILD_REQUESTED, __I40E_CORE_RESET_REQUESTED, __I40E_GLOBAL_RESET_REQUESTED, __I40E_EMP_RESET_INTR_RECEIVED, @@ -146,6 +147,8 @@ enum i40e_state_t { }; #define I40E_PF_RESET_FLAG BIT_ULL(__I40E_PF_RESET_REQUESTED) +#define I40E_PF_RESET_AND_REBUILD_FLAG \ + BIT_ULL(__I40E_PF_RESET_AND_REBUILD_REQUESTED) /* VSI state flags */ enum i40e_vsi_state_t { diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 1337686bd099..1db482d310c2 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -36,6 +36,8 @@ static int i40e_setup_misc_vector(struct i40e_pf *pf); static void i40e_determine_queue_usage(struct i40e_pf *pf); static int i40e_setup_pf_filter_control(struct i40e_pf *pf); static void i40e_prep_for_reset(struct i40e_pf *pf, bool lock_acquired); +static void i40e_reset_and_rebuild(struct i40e_pf *pf, bool reinit, + bool lock_acquired); static int i40e_reset(struct i40e_pf *pf); static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired); static int i40e_setup_misc_vector_for_recovery_mode(struct i40e_pf *pf); @@ -8536,6 +8538,14 @@ void i40e_do_reset(struct i40e_pf *pf, u32 reset_flags, bool lock_acquired) "FW LLDP is disabled\n" : "FW LLDP is enabled\n"); + } else if (reset_flags & I40E_PF_RESET_AND_REBUILD_FLAG) { + /* Request a PF Reset + * + * Resets PF and reinitializes PFs VSI. 
+ */ + i40e_prep_for_reset(pf, lock_acquired); + i40e_reset_and_rebuild(pf, true, lock_acquired); + } else if (reset_flags & BIT_ULL(__I40E_REINIT_REQUESTED)) { int v; diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c index 729c4f0d5ac5..21ee56420c3a 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c @@ -1772,7 +1772,7 @@ int i40e_pci_sriov_configure(struct pci_dev *pdev, int num_vfs) if (num_vfs) { if (!(pf->flags & I40E_FLAG_VEB_MODE_ENABLED)) { pf->flags |= I40E_FLAG_VEB_MODE_ENABLED; - i40e_do_reset_safe(pf, I40E_PF_RESET_FLAG); + i40e_do_reset_safe(pf, I40E_PF_RESET_AND_REBUILD_FLAG); } ret = i40e_pci_sriov_enable(pdev, num_vfs); goto sriov_configure_out; @@ -1781,7 +1781,7 @@ int i40e_pci_sriov_configure(struct pci_dev *pdev, int num_vfs) if (!pci_vfs_assigned(pf->pdev)) { i40e_free_vfs(pf); pf->flags &= ~I40E_FLAG_VEB_MODE_ENABLED; - i40e_do_reset_safe(pf, I40E_PF_RESET_FLAG); + i40e_do_reset_safe(pf, I40E_PF_RESET_AND_REBUILD_FLAG); } else { dev_warn(&pdev->dev, "Unable to free VFs because some are assigned to VMs.\n"); ret = -EINVAL; diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c index 47eb9c584a12..492ce213208d 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c +++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c @@ -348,12 +348,12 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget) * SBP is *not* set in PRT_SBPVSI (default not set). */ skb = i40e_construct_skb_zc(rx_ring, *bi); - *bi = NULL; if (!skb) { rx_ring->rx_stats.alloc_buff_failed++; break; } + *bi = NULL; cleaned_count++; i40e_inc_ntc(rx_ring); diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c index 95543dfd4fe7..0a867d64d467 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_main.c +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c @@ -1834,11 +1834,9 @@ static int iavf_init_get_resources(struct iavf_adapter *adapter) netif_tx_stop_all_queues(netdev); if (CLIENT_ALLOWED(adapter)) { err = iavf_lan_add_device(adapter); - if (err) { - rtnl_unlock(); + if (err) dev_info(&pdev->dev, "Failed to add VF to client API service list: %d\n", err); - } } dev_info(&pdev->dev, "MAC address: %pM\n", adapter->hw.mac.addr); if (netdev->features & NETIF_F_GRO) diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index 563ceac3060f..bc4d8d144401 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -4432,7 +4432,7 @@ static int mvneta_xdp_setup(struct net_device *dev, struct bpf_prog *prog, struct bpf_prog *old_prog; if (prog && dev->mtu > MVNETA_MAX_RX_BUF_SIZE) { - NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP"); + NL_SET_ERR_MSG_MOD(extack, "MTU too large for XDP"); return -EOPNOTSUPP; } @@ -5255,7 +5255,7 @@ static int mvneta_probe(struct platform_device *pdev) err = mvneta_port_power_up(pp, pp->phy_interface); if (err < 0) { dev_err(&pdev->dev, "can't power up port\n"); - return err; + goto err_netdev; } /* Armada3700 network controller does not support per-cpu diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c index afdd22827223..358119d98358 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c @@ -1231,7 +1231,7 @@ static void mvpp22_gop_init_rgmii(struct 
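The i40e_xsk one-liner above is an ownership-ordering fix: clearing *bi hands the buffer to the skb, so it must happen only after skb construction succeeds; otherwise a failed allocation strands the descriptor. Annotated shape of the fix — construct_skb() stands in for i40e_construct_skb_zc():

	skb = construct_skb(rx_ring, *bi);
	if (!skb) {
		rx_ring->rx_stats.alloc_buff_failed++;
		break;		/* *bi still owns the buffer; retried later */
	}
	*bi = NULL;		/* ownership transferred into the skb */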
mvpp2_port *port) regmap_read(priv->sysctrl_base, GENCONF_CTRL0, &val); if (port->gop_id == 2) - val |= GENCONF_CTRL0_PORT0_RGMII | GENCONF_CTRL0_PORT1_RGMII; + val |= GENCONF_CTRL0_PORT0_RGMII; else if (port->gop_id == 3) val |= GENCONF_CTRL0_PORT1_RGMII_MII; regmap_write(priv->sysctrl_base, GENCONF_CTRL0, val); @@ -2370,17 +2370,18 @@ static void mvpp2_rx_pkts_coal_set(struct mvpp2_port *port, static void mvpp2_tx_pkts_coal_set(struct mvpp2_port *port, struct mvpp2_tx_queue *txq) { - unsigned int thread = mvpp2_cpu_to_thread(port->priv, get_cpu()); + unsigned int thread; u32 val; if (txq->done_pkts_coal > MVPP2_TXQ_THRESH_MASK) txq->done_pkts_coal = MVPP2_TXQ_THRESH_MASK; val = (txq->done_pkts_coal << MVPP2_TXQ_THRESH_OFFSET); - mvpp2_thread_write(port->priv, thread, MVPP2_TXQ_NUM_REG, txq->id); - mvpp2_thread_write(port->priv, thread, MVPP2_TXQ_THRESH_REG, val); - - put_cpu(); + /* PKT-coalescing registers are per-queue + per-thread */ + for (thread = 0; thread < MVPP2_MAX_THREADS; thread++) { + mvpp2_thread_write(port->priv, thread, MVPP2_TXQ_NUM_REG, txq->id); + mvpp2_thread_write(port->priv, thread, MVPP2_TXQ_THRESH_REG, val); + } } static u32 mvpp2_usec_to_cycles(u32 usec, unsigned long clk_hz) @@ -5487,7 +5488,7 @@ static int mvpp2_port_init(struct mvpp2_port *port) struct mvpp2 *priv = port->priv; struct mvpp2_txq_pcpu *txq_pcpu; unsigned int thread; - int queue, err; + int queue, err, val; /* Checks for hardware constraints */ if (port->first_rxq + port->nrxqs > @@ -5501,6 +5502,18 @@ static int mvpp2_port_init(struct mvpp2_port *port) mvpp2_egress_disable(port); mvpp2_port_disable(port); + if (mvpp2_is_xlg(port->phy_interface)) { + val = readl(port->base + MVPP22_XLG_CTRL0_REG); + val &= ~MVPP22_XLG_CTRL0_FORCE_LINK_PASS; + val |= MVPP22_XLG_CTRL0_FORCE_LINK_DOWN; + writel(val, port->base + MVPP22_XLG_CTRL0_REG); + } else { + val = readl(port->base + MVPP2_GMAC_AUTONEG_CONFIG); + val &= ~MVPP2_GMAC_FORCE_LINK_PASS; + val |= MVPP2_GMAC_FORCE_LINK_DOWN; + writel(val, port->base + MVPP2_GMAC_AUTONEG_CONFIG); + } + port->tx_time_coal = MVPP2_TXDONE_COAL_USEC; port->txqs = devm_kcalloc(dev, port->ntxqs, sizeof(*port->txqs), @@ -5869,8 +5882,6 @@ static void mvpp2_phylink_validate(struct phylink_config *config, phylink_set(mask, Autoneg); phylink_set_port_modes(mask); - phylink_set(mask, Pause); - phylink_set(mask, Asym_Pause); switch (state->interface) { case PHY_INTERFACE_MODE_10GBASER: diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c index 5692c6087bbb..a30eb90ba3d2 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c @@ -405,6 +405,38 @@ static int mvpp2_prs_tcam_first_free(struct mvpp2 *priv, unsigned char start, return -EINVAL; } +/* Drop flow control pause frames */ +static void mvpp2_prs_drop_fc(struct mvpp2 *priv) +{ + unsigned char da[ETH_ALEN] = { 0x01, 0x80, 0xC2, 0x00, 0x00, 0x01 }; + struct mvpp2_prs_entry pe; + unsigned int len; + + memset(&pe, 0, sizeof(pe)); + + /* For all ports - drop flow control frames */ + pe.index = MVPP2_PE_FC_DROP; + mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_MAC); + + /* Set match on DA */ + len = ETH_ALEN; + while (len--) + mvpp2_prs_tcam_data_byte_set(&pe, len, da[len], 0xff); + + mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_DROP_MASK, + MVPP2_PRS_RI_DROP_MASK); + + mvpp2_prs_sram_bits_set(&pe, MVPP2_PRS_SRAM_LU_GEN_BIT, 1); + mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_FLOWS); + + /* Mask all ports */ + 
mvpp2_prs_tcam_port_map_set(&pe, MVPP2_PRS_PORT_MASK); + + /* Update shadow table and hw entry */ + mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_MAC); + mvpp2_prs_hw_write(priv, &pe); +} + /* Enable/disable dropping all mac da's */ static void mvpp2_prs_mac_drop_all_set(struct mvpp2 *priv, int port, bool add) { @@ -1162,6 +1194,7 @@ static void mvpp2_prs_mac_init(struct mvpp2 *priv) mvpp2_prs_hw_write(priv, &pe); /* Create dummy entries for drop all and promiscuous modes */ + mvpp2_prs_drop_fc(priv); mvpp2_prs_mac_drop_all_set(priv, 0, false); mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_UNI_CAST, false); mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_MULTI_CAST, false); @@ -1647,8 +1680,9 @@ static int mvpp2_prs_pppoe_init(struct mvpp2 *priv) mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP6); mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_L3_IP6, MVPP2_PRS_RI_L3_PROTO_MASK); - /* Skip eth_type + 4 bytes of IPv6 header */ - mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN + 4, + /* Jump to DIP of IPV6 header */ + mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN + 8 + + MVPP2_MAX_L3_ADDR_SIZE, MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD); /* Set L3 offset */ mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L3, diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h index e22f6c85d380..4b68dd374733 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.h @@ -129,7 +129,7 @@ #define MVPP2_PE_VID_EDSA_FLTR_DEFAULT (MVPP2_PRS_TCAM_SRAM_SIZE - 7) #define MVPP2_PE_VLAN_DBL (MVPP2_PRS_TCAM_SRAM_SIZE - 6) #define MVPP2_PE_VLAN_NONE (MVPP2_PRS_TCAM_SRAM_SIZE - 5) -/* reserved */ +#define MVPP2_PE_FC_DROP (MVPP2_PRS_TCAM_SRAM_SIZE - 4) #define MVPP2_PE_MAC_MC_PROMISCUOUS (MVPP2_PRS_TCAM_SRAM_SIZE - 3) #define MVPP2_PE_MAC_UC_PROMISCUOUS (MVPP2_PRS_TCAM_SRAM_SIZE - 2) #define MVPP2_PE_MAC_NON_PROMISCUOUS (MVPP2_PRS_TCAM_SRAM_SIZE - 1) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c index 7d0f96290943..1a8f5a039d50 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c @@ -871,8 +871,10 @@ static int cgx_lmac_init(struct cgx *cgx) if (!lmac) return -ENOMEM; lmac->name = kcalloc(1, sizeof("cgx_fwi_xxx_yyy"), GFP_KERNEL); - if (!lmac->name) - return -ENOMEM; + if (!lmac->name) { + err = -ENOMEM; + goto err_lmac_free; + } sprintf(lmac->name, "cgx_fwi_%d_%d", cgx->cgx_id, i); lmac->lmac_id = i; lmac->cgx = cgx; @@ -883,7 +885,7 @@ static int cgx_lmac_init(struct cgx *cgx) CGX_LMAC_FWI + i * 9), cgx_fwi_event_handler, 0, lmac->name, lmac); if (err) - return err; + goto err_irq; /* Enable interrupt */ cgx_write(cgx, lmac->lmac_id, CGXX_CMRX_INT_ENA_W1S, @@ -895,6 +897,12 @@ static int cgx_lmac_init(struct cgx *cgx) } return cgx_lmac_verify_fwi_version(cgx); + +err_irq: + kfree(lmac->name); +err_lmac_free: + kfree(lmac); + return err; } static int cgx_lmac_exit(struct cgx *cgx) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c index d298b9357177..6c6b411e78fd 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c @@ -469,6 +469,9 @@ int rvu_mbox_handler_cgx_mac_addr_set(struct rvu *rvu, int pf = rvu_get_pf(req->hdr.pcifunc); u8 cgx_id, lmac_id; + if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc)) + return -EPERM; + 
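The cgx_lmac_init() fix above converts early returns into a goto ladder so each failure releases everything allocated before it, in reverse order. The idiom in isolation — struct bar and the bar_* helpers are placeholders, not patch code:

#include <linux/slab.h>

struct bar {
	char *name;
};

static int bar_request_irq(struct bar *lmac);	/* hypothetical */
static void bar_free_irq(struct bar *lmac);	/* hypothetical */
static int bar_enable(struct bar *lmac);	/* hypothetical */

static int bar_init_one(struct bar *lmac, int id)
{
	int err;

	lmac->name = kasprintf(GFP_KERNEL, "bar_fwi_%d", id);
	if (!lmac->name)
		return -ENOMEM;

	err = bar_request_irq(lmac);
	if (err)
		goto err_free_name;

	err = bar_enable(lmac);
	if (err)
		goto err_free_irq;

	return 0;

err_free_irq:
	bar_free_irq(lmac);
err_free_name:
	kfree(lmac->name);
	return err;
}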
rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id); cgx_lmac_addr_set(cgx_id, lmac_id, req->mac_addr); @@ -485,6 +488,9 @@ int rvu_mbox_handler_cgx_mac_addr_get(struct rvu *rvu, int rc = 0, i; u64 cfg; + if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc)) + return -EPERM; + rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id); rsp->hdr.rc = rc; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c index d29af7b9c695..76177f7c5ec2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c @@ -626,6 +626,11 @@ bool mlx5e_rep_tc_update_skb(struct mlx5_cqe64 *cqe, if (!reg_c0) return true; + /* If reg_c0 is not equal to the default flow tag then skb->mark + * is not supported and must be reset back to 0. + */ + skb->mark = 0; + priv = netdev_priv(skb->dev); esw = priv->mdev->priv.eswitch; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c index e521254d886e..072363e73f1c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c @@ -118,16 +118,17 @@ struct mlx5_ct_tuple { u16 zone; }; -struct mlx5_ct_shared_counter { +struct mlx5_ct_counter { struct mlx5_fc *counter; refcount_t refcount; + bool is_shared; }; struct mlx5_ct_entry { struct rhash_head node; struct rhash_head tuple_node; struct rhash_head tuple_nat_node; - struct mlx5_ct_shared_counter *shared_counter; + struct mlx5_ct_counter *counter; unsigned long cookie; unsigned long restore_cookie; struct mlx5_ct_tuple tuple; @@ -394,13 +395,14 @@ mlx5_tc_ct_set_tuple_match(struct mlx5e_priv *priv, struct mlx5_flow_spec *spec, } static void -mlx5_tc_ct_shared_counter_put(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_entry *entry) +mlx5_tc_ct_counter_put(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_entry *entry) { - if (!refcount_dec_and_test(&entry->shared_counter->refcount)) + if (entry->counter->is_shared && + !refcount_dec_and_test(&entry->counter->refcount)) return; - mlx5_fc_destroy(ct_priv->dev, entry->shared_counter->counter); - kfree(entry->shared_counter); + mlx5_fc_destroy(ct_priv->dev, entry->counter->counter); + kfree(entry->counter); } static void @@ -699,7 +701,7 @@ mlx5_tc_ct_entry_add_rule(struct mlx5_tc_ct_priv *ct_priv, attr->dest_ft = ct_priv->post_ct; attr->ft = nat ? 
ct_priv->ct_nat : ct_priv->ct; attr->outer_match_level = MLX5_MATCH_L4; - attr->counter = entry->shared_counter->counter; + attr->counter = entry->counter->counter; attr->flags |= MLX5_ESW_ATTR_FLAG_NO_IN_PORT; mlx5_tc_ct_set_tuple_match(netdev_priv(ct_priv->netdev), spec, flow_rule); @@ -732,13 +734,34 @@ err_attr: return err; } -static struct mlx5_ct_shared_counter * +static struct mlx5_ct_counter * +mlx5_tc_ct_counter_create(struct mlx5_tc_ct_priv *ct_priv) +{ + struct mlx5_ct_counter *counter; + int ret; + + counter = kzalloc(sizeof(*counter), GFP_KERNEL); + if (!counter) + return ERR_PTR(-ENOMEM); + + counter->is_shared = false; + counter->counter = mlx5_fc_create(ct_priv->dev, true); + if (IS_ERR(counter->counter)) { + ct_dbg("Failed to create counter for ct entry"); + ret = PTR_ERR(counter->counter); + kfree(counter); + return ERR_PTR(ret); + } + + return counter; +} + +static struct mlx5_ct_counter * mlx5_tc_ct_shared_counter_get(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_entry *entry) { struct mlx5_ct_tuple rev_tuple = entry->tuple; - struct mlx5_ct_shared_counter *shared_counter; - struct mlx5_core_dev *dev = ct_priv->dev; + struct mlx5_ct_counter *shared_counter; struct mlx5_ct_entry *rev_entry; __be16 tmp_port; int ret; @@ -767,25 +790,20 @@ mlx5_tc_ct_shared_counter_get(struct mlx5_tc_ct_priv *ct_priv, rev_entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_ht, &rev_tuple, tuples_ht_params); if (rev_entry) { - if (refcount_inc_not_zero(&rev_entry->shared_counter->refcount)) { + if (refcount_inc_not_zero(&rev_entry->counter->refcount)) { mutex_unlock(&ct_priv->shared_counter_lock); - return rev_entry->shared_counter; + return rev_entry->counter; } } mutex_unlock(&ct_priv->shared_counter_lock); - shared_counter = kzalloc(sizeof(*shared_counter), GFP_KERNEL); - if (!shared_counter) - return ERR_PTR(-ENOMEM); - - shared_counter->counter = mlx5_fc_create(dev, true); - if (IS_ERR(shared_counter->counter)) { - ct_dbg("Failed to create counter for ct entry"); - ret = PTR_ERR(shared_counter->counter); - kfree(shared_counter); + shared_counter = mlx5_tc_ct_counter_create(ct_priv); + if (IS_ERR(shared_counter)) { + ret = PTR_ERR(shared_counter); return ERR_PTR(ret); } + shared_counter->is_shared = true; refcount_set(&shared_counter->refcount, 1); return shared_counter; } @@ -798,10 +816,13 @@ mlx5_tc_ct_entry_add_rules(struct mlx5_tc_ct_priv *ct_priv, { int err; - entry->shared_counter = mlx5_tc_ct_shared_counter_get(ct_priv, entry); - if (IS_ERR(entry->shared_counter)) { - err = PTR_ERR(entry->shared_counter); - ct_dbg("Failed to create counter for ct entry"); + if (nf_ct_acct_enabled(dev_net(ct_priv->netdev))) + entry->counter = mlx5_tc_ct_counter_create(ct_priv); + else + entry->counter = mlx5_tc_ct_shared_counter_get(ct_priv, entry); + + if (IS_ERR(entry->counter)) { + err = PTR_ERR(entry->counter); return err; } @@ -820,7 +841,7 @@ mlx5_tc_ct_entry_add_rules(struct mlx5_tc_ct_priv *ct_priv, err_nat: mlx5_tc_ct_entry_del_rule(ct_priv, entry, false); err_orig: - mlx5_tc_ct_shared_counter_put(ct_priv, entry); + mlx5_tc_ct_counter_put(ct_priv, entry); return err; } @@ -918,7 +939,7 @@ mlx5_tc_ct_del_ft_entry(struct mlx5_tc_ct_priv *ct_priv, rhashtable_remove_fast(&ct_priv->ct_tuples_ht, &entry->tuple_node, tuples_ht_params); mutex_unlock(&ct_priv->shared_counter_lock); - mlx5_tc_ct_shared_counter_put(ct_priv, entry); + mlx5_tc_ct_counter_put(ct_priv, entry); } @@ -956,7 +977,7 @@ mlx5_tc_ct_block_flow_offload_stats(struct mlx5_ct_ft *ft, if (!entry) return -ENOENT; - 
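The mlx5 conntrack rework above makes the counter object dual-use: per-entry counters (used when per-flow accounting is enabled) are freed directly, while shared counters are refcounted and freed only on the last put. The put path captures the whole scheme; this is a trimmed sketch, with the surrounding types reduced to the fields it touches:

#include <linux/refcount.h>
#include <linux/slab.h>

struct ct_counter {
	struct mlx5_fc *counter;
	refcount_t refcount;
	bool is_shared;
};

struct ct_entry {
	struct ct_counter *counter;
};

static void ct_counter_put(struct mlx5_core_dev *dev, struct ct_entry *entry)
{
	/* Shared counters are owned jointly; only the last put frees. */
	if (entry->counter->is_shared &&
	    !refcount_dec_and_test(&entry->counter->refcount))
		return;

	mlx5_fc_destroy(dev, entry->counter->counter);
	kfree(entry->counter);
}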
mlx5_fc_query_cached(entry->shared_counter->counter, &bytes, &packets, &lastuse); + mlx5_fc_query_cached(entry->counter->counter, &bytes, &packets, &lastuse); flow_stats_update(&f->stats, bytes, packets, 0, lastuse, FLOW_ACTION_HW_STATS_DELAYED); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h index 7943eb30b837..4880f2179273 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h @@ -371,6 +371,15 @@ struct mlx5e_swp_spec { u8 tun_l4_proto; }; +static inline void mlx5e_eseg_swp_offsets_add_vlan(struct mlx5_wqe_eth_seg *eseg) +{ + /* SWP offsets are in 2-bytes words */ + eseg->swp_outer_l3_offset += VLAN_HLEN / 2; + eseg->swp_outer_l4_offset += VLAN_HLEN / 2; + eseg->swp_inner_l3_offset += VLAN_HLEN / 2; + eseg->swp_inner_l4_offset += VLAN_HLEN / 2; +} + static inline void mlx5e_set_eseg_swp(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg, struct mlx5e_swp_spec *swp_spec) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h index 899b98aca0d3..1fae7fab8297 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h @@ -51,7 +51,7 @@ static inline bool mlx5_geneve_tx_allowed(struct mlx5_core_dev *mdev) } static inline void -mlx5e_tx_tunnel_accel(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg) +mlx5e_tx_tunnel_accel(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg, u16 ihs) { struct mlx5e_swp_spec swp_spec = {}; unsigned int offset = 0; @@ -85,6 +85,8 @@ mlx5e_tx_tunnel_accel(struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg) } mlx5e_set_eseg_swp(skb, eseg, &swp_spec); + if (skb_vlan_tag_present(skb) && ihs) + mlx5e_eseg_swp_offsets_add_vlan(eseg); } #else @@ -163,7 +165,7 @@ static inline unsigned int mlx5e_accel_tx_ids_len(struct mlx5e_txqsq *sq, static inline bool mlx5e_accel_tx_eseg(struct mlx5e_priv *priv, struct sk_buff *skb, - struct mlx5_wqe_eth_seg *eseg) + struct mlx5_wqe_eth_seg *eseg, u16 ihs) { #ifdef CONFIG_MLX5_EN_IPSEC if (xfrm_offload(skb)) @@ -172,7 +174,7 @@ static inline bool mlx5e_accel_tx_eseg(struct mlx5e_priv *priv, #if IS_ENABLED(CONFIG_GENEVE) if (skb->encapsulation) - mlx5e_tx_tunnel_accel(skb, eseg); + mlx5e_tx_tunnel_accel(skb, eseg, ihs); #endif return true; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c index d9076d543104..2d37742a888c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c @@ -1010,6 +1010,22 @@ static int mlx5e_get_link_ksettings(struct net_device *netdev, return mlx5e_ethtool_get_link_ksettings(priv, link_ksettings); } +static int mlx5e_speed_validate(struct net_device *netdev, bool ext, + const unsigned long link_modes, u8 autoneg) +{ + /* Extended link-mode has no speed limitations. */ + if (ext) + return 0; + + if ((link_modes & MLX5E_PROT_MASK(MLX5E_56GBASE_R4)) && + autoneg != AUTONEG_ENABLE) { + netdev_err(netdev, "%s: 56G link speed requires autoneg enabled\n", + __func__); + return -EINVAL; + } + return 0; +} + static u32 mlx5e_ethtool2ptys_adver_link(const unsigned long *link_modes) { u32 i, ptys_modes = 0; @@ -1103,13 +1119,9 @@ int mlx5e_ethtool_set_link_ksettings(struct mlx5e_priv *priv, link_modes = autoneg == AUTONEG_ENABLE ? 
ethtool2ptys_adver_func(adver) : mlx5e_port_speed2linkmodes(mdev, speed, !ext); - if ((link_modes & MLX5E_PROT_MASK(MLX5E_56GBASE_R4)) && - autoneg != AUTONEG_ENABLE) { - netdev_err(priv->netdev, "%s: 56G link speed requires autoneg enabled\n", - __func__); - err = -EINVAL; + err = mlx5e_speed_validate(priv->netdev, ext, link_modes, autoneg); + if (err) goto out; - } link_modes = link_modes & eproto.cap; if (!link_modes) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c index fa8149f6eb08..e02e5895703d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c @@ -942,6 +942,7 @@ static int mlx5e_create_ttc_table_groups(struct mlx5e_ttc_table *ttc, in = kvzalloc(inlen, GFP_KERNEL); if (!in) { kfree(ft->g); + ft->g = NULL; return -ENOMEM; } @@ -1087,6 +1088,7 @@ static int mlx5e_create_inner_ttc_table_groups(struct mlx5e_ttc_table *ttc) in = kvzalloc(inlen, GFP_KERNEL); if (!in) { kfree(ft->g); + ft->g = NULL; return -ENOMEM; } @@ -1390,6 +1392,7 @@ err_destroy_groups: ft->g[ft->num_groups] = NULL; mlx5e_destroy_groups(ft); kvfree(in); + kfree(ft->g); return err; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 7a79d330c075..6a852b4901aa 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -3161,7 +3161,8 @@ static void mlx5e_modify_admin_state(struct mlx5_core_dev *mdev, mlx5_set_port_admin_status(mdev, state); - if (mlx5_eswitch_mode(mdev) != MLX5_ESWITCH_LEGACY) + if (mlx5_eswitch_mode(mdev) == MLX5_ESWITCH_OFFLOADS || + !MLX5_CAP_GEN(mdev, uplink_follow)) return; if (state == MLX5_PORT_UP) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c index e47e2a0059d0..61ed671fe741 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c @@ -682,9 +682,9 @@ void mlx5e_tx_mpwqe_ensure_complete(struct mlx5e_txqsq *sq) static bool mlx5e_txwqe_build_eseg(struct mlx5e_priv *priv, struct mlx5e_txqsq *sq, struct sk_buff *skb, struct mlx5e_accel_tx_state *accel, - struct mlx5_wqe_eth_seg *eseg) + struct mlx5_wqe_eth_seg *eseg, u16 ihs) { - if (unlikely(!mlx5e_accel_tx_eseg(priv, skb, eseg))) + if (unlikely(!mlx5e_accel_tx_eseg(priv, skb, eseg, ihs))) return false; mlx5e_txwqe_build_eseg_csum(sq, skb, accel, eseg); @@ -714,7 +714,8 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev) if (mlx5e_tx_skb_supports_mpwqe(skb, &attr)) { struct mlx5_wqe_eth_seg eseg = {}; - if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &eseg))) + if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &eseg, + attr.ihs))) return NETDEV_TX_OK; mlx5e_sq_xmit_mpwqe(sq, skb, &eseg, netdev_xmit_more()); @@ -731,7 +732,7 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev) /* May update the WQE, but may not post other WQEs. 
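The three en_fs.c hunks above all add ft->g = NULL after kfree(ft->g) for the same reason: the table's destroy path frees ft->g again, and a stale pointer there is a double free. The defensive shape in general — struct ftbl is a placeholder for the real flow-table type:

#include <linux/mm.h>
#include <linux/slab.h>

struct ftbl {
	void **g;	/* group array, also freed on the destroy path */
};

static int create_groups(struct ftbl *ft, int inlen)
{
	void *in = kvzalloc(inlen, GFP_KERNEL);

	if (!in) {
		kfree(ft->g);
		ft->g = NULL;	/* so destroy can kfree(ft->g) safely */
		return -ENOMEM;
	}

	/* ... build groups using "in" ... */

	kvfree(in);
	return 0;
}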
*/ mlx5e_accel_tx_finish(sq, wqe, &accel, (struct mlx5_wqe_inline_seg *)(wqe->data + wqe_attr.ds_cnt_inl)); - if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &wqe->eth))) + if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &accel, &wqe->eth, attr.ihs))) return NETDEV_TX_OK; mlx5e_sq_xmit_wqe(sq, skb, &attr, &wqe_attr, wqe, pi, netdev_xmit_more()); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c index 2b85d4777303..3e19b1721303 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c @@ -95,22 +95,21 @@ int esw_acl_egress_lgcy_setup(struct mlx5_eswitch *esw, return 0; } - if (!IS_ERR_OR_NULL(vport->egress.acl)) - return 0; - - vport->egress.acl = esw_acl_table_create(esw, vport->vport, - MLX5_FLOW_NAMESPACE_ESW_EGRESS, - table_size); - if (IS_ERR(vport->egress.acl)) { - err = PTR_ERR(vport->egress.acl); - vport->egress.acl = NULL; - goto out; + if (!vport->egress.acl) { + vport->egress.acl = esw_acl_table_create(esw, vport->vport, + MLX5_FLOW_NAMESPACE_ESW_EGRESS, + table_size); + if (IS_ERR(vport->egress.acl)) { + err = PTR_ERR(vport->egress.acl); + vport->egress.acl = NULL; + goto out; + } + + err = esw_acl_egress_lgcy_groups_create(esw, vport); + if (err) + goto out; } - err = esw_acl_egress_lgcy_groups_create(esw, vport); - if (err) - goto out; - esw_debug(esw->dev, "vport[%d] configure egress rules, vlan(%d) qos(%d)\n", vport->vport, vport->info.vlan, vport->info.qos); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c index f3d45ef082cd..83a05371e2aa 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c @@ -564,7 +564,9 @@ void mlx5_lag_add(struct mlx5_core_dev *dev, struct net_device *netdev) struct mlx5_core_dev *tmp_dev; int i, err; - if (!MLX5_CAP_GEN(dev, vport_group_manager)) + if (!MLX5_CAP_GEN(dev, vport_group_manager) || + !MLX5_CAP_GEN(dev, lag_master) || + MLX5_CAP_GEN(dev, num_lag_ports) != MLX5_MAX_PORTS) return; tmp_dev = mlx5_get_next_phys_dev(dev); @@ -582,12 +584,9 @@ void mlx5_lag_add(struct mlx5_core_dev *dev, struct net_device *netdev) if (mlx5_lag_dev_add_pf(ldev, dev, netdev) < 0) return; - for (i = 0; i < MLX5_MAX_PORTS; i++) { - tmp_dev = ldev->pf[i].dev; - if (!tmp_dev || !MLX5_CAP_GEN(tmp_dev, lag_master) || - MLX5_CAP_GEN(tmp_dev, num_lag_ports) != MLX5_MAX_PORTS) + for (i = 0; i < MLX5_MAX_PORTS; i++) + if (!ldev->pf[i].dev) break; - } if (i >= MLX5_MAX_PORTS) ldev->flags |= MLX5_LAG_FLAG_READY; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c index c08315b51fd3..ca6f2fc39ea0 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c @@ -1368,8 +1368,10 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *id) MLX5_COREDEV_VF : MLX5_COREDEV_PF; dev->priv.adev_idx = mlx5_adev_idx_alloc(); - if (dev->priv.adev_idx < 0) - return dev->priv.adev_idx; + if (dev->priv.adev_idx < 0) { + err = dev->priv.adev_idx; + goto adev_init_err; + } err = mlx5_mdev_init(dev, prof_sel); if (err) @@ -1403,6 +1405,7 @@ pci_init_err: mlx5_mdev_uninit(dev); mdev_init_err: mlx5_adev_idx_free(dev->priv.adev_idx); +adev_init_err: mlx5_devlink_free(devlink); return err; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/rdma.c 
b/drivers/net/ethernet/mellanox/mlx5/core/rdma.c index 0fc7de4aa572..8e0dddc6383f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/rdma.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/rdma.c @@ -116,7 +116,7 @@ free: static void mlx5_rdma_del_roce_addr(struct mlx5_core_dev *dev) { mlx5_core_roce_gid_set(dev, 0, 0, 0, - NULL, NULL, false, 0, 0); + NULL, NULL, false, 0, 1); } static void mlx5_rdma_make_default_gid(struct mlx5_core_dev *dev, union ib_gid *gid) diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c index 8fa286ccdd6b..bf85ce9835d7 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c +++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c @@ -19,7 +19,7 @@ #define MLXSW_THERMAL_ASIC_TEMP_NORM 75000 /* 75C */ #define MLXSW_THERMAL_ASIC_TEMP_HIGH 85000 /* 85C */ #define MLXSW_THERMAL_ASIC_TEMP_HOT 105000 /* 105C */ -#define MLXSW_THERMAL_ASIC_TEMP_CRIT 110000 /* 110C */ +#define MLXSW_THERMAL_ASIC_TEMP_CRIT 140000 /* 140C */ #define MLXSW_THERMAL_HYSTERESIS_TEMP 5000 /* 5C */ #define MLXSW_THERMAL_MODULE_TEMP_SHIFT (MLXSW_THERMAL_HYSTERESIS_TEMP * 2) #define MLXSW_THERMAL_ZONE_MAX_NAME 16 @@ -176,6 +176,12 @@ mlxsw_thermal_module_trips_update(struct device *dev, struct mlxsw_core *core, if (err) return err; + if (crit_temp > emerg_temp) { + dev_warn(dev, "%s : Critical threshold %d is above emergency threshold %d\n", + tz->tzdev->type, crit_temp, emerg_temp); + return 0; + } + /* According to the system thermal requirements, the thermal zones are * defined with four trip points. The critical and emergency * temperature thresholds, provided by QSFP module are set as "active" @@ -190,11 +196,8 @@ mlxsw_thermal_module_trips_update(struct device *dev, struct mlxsw_core *core, tz->trips[MLXSW_THERMAL_TEMP_TRIP_NORM].temp = crit_temp; tz->trips[MLXSW_THERMAL_TEMP_TRIP_HIGH].temp = crit_temp; tz->trips[MLXSW_THERMAL_TEMP_TRIP_HOT].temp = emerg_temp; - if (emerg_temp > crit_temp) - tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp + + tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp + MLXSW_THERMAL_MODULE_TEMP_SHIFT; - else - tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp; return 0; } diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c index 0b9992bd6626..ff87a0bc089c 100644 --- a/drivers/net/ethernet/mscc/ocelot.c +++ b/drivers/net/ethernet/mscc/ocelot.c @@ -60,14 +60,27 @@ int ocelot_mact_learn(struct ocelot *ocelot, int port, const unsigned char mac[ETH_ALEN], unsigned int vid, enum macaccess_entry_type type) { + u32 cmd = ANA_TABLES_MACACCESS_VALID | + ANA_TABLES_MACACCESS_DEST_IDX(port) | + ANA_TABLES_MACACCESS_ENTRYTYPE(type) | + ANA_TABLES_MACACCESS_MAC_TABLE_CMD(MACACCESS_CMD_LEARN); + unsigned int mc_ports; + + /* Set MAC_CPU_COPY if the CPU port is used by a multicast entry */ + if (type == ENTRYTYPE_MACv4) + mc_ports = (mac[1] << 8) | mac[2]; + else if (type == ENTRYTYPE_MACv6) + mc_ports = (mac[0] << 8) | mac[1]; + else + mc_ports = 0; + + if (mc_ports & BIT(ocelot->num_phys_ports)) + cmd |= ANA_TABLES_MACACCESS_MAC_CPU_COPY; + ocelot_mact_select(ocelot, mac, vid); /* Issue a write command */ - ocelot_write(ocelot, ANA_TABLES_MACACCESS_VALID | - ANA_TABLES_MACACCESS_DEST_IDX(port) | - ANA_TABLES_MACACCESS_ENTRYTYPE(type) | - ANA_TABLES_MACACCESS_MAC_TABLE_CMD(MACACCESS_CMD_LEARN), - ANA_TABLES_MACACCESS); + ocelot_write(ocelot, cmd, ANA_TABLES_MACACCESS); return ocelot_mact_wait_for_completion(ocelot); } diff --git 
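The mlxsw thermal change above stops silently "repairing" inconsistent module thresholds and instead rejects them early: if a QSFP module reports a critical temperature above its emergency temperature, the trip points are left untouched and a warning names both values. A sketch of the guard, under the assumption that the thresholds come from module EEPROM as in the patch:

static bool trips_sane(struct device *dev, const char *tz_name,
		       int crit_temp, int emerg_temp)
{
	if (crit_temp > emerg_temp) {
		dev_warn(dev, "%s: critical %d above emergency %d\n",
			 tz_name, crit_temp, emerg_temp);
		return false;	/* keep previously programmed trips */
	}

	return true;
}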
a/drivers/net/ethernet/mscc/ocelot_net.c b/drivers/net/ethernet/mscc/ocelot_net.c index 2bd2840d88bd..42230f92ca9c 100644 --- a/drivers/net/ethernet/mscc/ocelot_net.c +++ b/drivers/net/ethernet/mscc/ocelot_net.c @@ -1042,10 +1042,8 @@ static int ocelot_netdevice_event(struct notifier_block *unused, struct net_device *dev = netdev_notifier_info_to_dev(ptr); int ret = 0; - if (!ocelot_netdevice_dev_check(dev)) - return 0; - if (event == NETDEV_PRECHANGEUPPER && + ocelot_netdevice_dev_check(dev) && netif_is_lag_master(info->upper_dev)) { struct netdev_lag_upper_info *lag_upper_info = info->upper_info; struct netlink_ext_ack *extack; diff --git a/drivers/net/ethernet/natsemi/macsonic.c b/drivers/net/ethernet/natsemi/macsonic.c index 776b7d264dc3..2289e1fe3741 100644 --- a/drivers/net/ethernet/natsemi/macsonic.c +++ b/drivers/net/ethernet/natsemi/macsonic.c @@ -506,10 +506,14 @@ static int mac_sonic_platform_probe(struct platform_device *pdev) err = register_netdev(dev); if (err) - goto out; + goto undo_probe; return 0; +undo_probe: + dma_free_coherent(lp->device, + SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode), + lp->descriptors, lp->descriptors_laddr); out: free_netdev(dev); @@ -584,12 +588,16 @@ static int mac_sonic_nubus_probe(struct nubus_board *board) err = register_netdev(ndev); if (err) - goto out; + goto undo_probe; nubus_set_drvdata(board, ndev); return 0; +undo_probe: + dma_free_coherent(lp->device, + SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode), + lp->descriptors, lp->descriptors_laddr); out: free_netdev(ndev); return err; diff --git a/drivers/net/ethernet/natsemi/xtsonic.c b/drivers/net/ethernet/natsemi/xtsonic.c index afa166ff7aef..28d9e98db81a 100644 --- a/drivers/net/ethernet/natsemi/xtsonic.c +++ b/drivers/net/ethernet/natsemi/xtsonic.c @@ -229,11 +229,14 @@ int xtsonic_probe(struct platform_device *pdev) sonic_msg_init(dev); if ((err = register_netdev(dev))) - goto out1; + goto undo_probe1; return 0; -out1: +undo_probe1: + dma_free_coherent(lp->device, + SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode), + lp->descriptors, lp->descriptors_laddr); release_region(dev->base_addr, SONIC_MEM_SIZE); out: free_netdev(dev); diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c index 9156c9825a16..ac4cd5d82e69 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c @@ -337,7 +337,7 @@ void ionic_rx_fill(struct ionic_queue *q) unsigned int i, j; unsigned int len; - len = netdev->mtu + ETH_HLEN; + len = netdev->mtu + ETH_HLEN + VLAN_HLEN; nfrags = round_up(len, PAGE_SIZE) / PAGE_SIZE; for (i = ionic_q_space_avail(q); i; i--) { diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig index 4366c7a8de95..6b5ddb07ee83 100644 --- a/drivers/net/ethernet/qlogic/Kconfig +++ b/drivers/net/ethernet/qlogic/Kconfig @@ -78,6 +78,7 @@ config QED depends on PCI select ZLIB_INFLATE select CRC8 + select CRC32 select NET_DEVLINK help This enables the support for Marvell FastLinQ adapters family. 
diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c index f21847739ef1..d258e0ccf946 100644 --- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c +++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c @@ -564,11 +564,6 @@ static const struct net_device_ops netxen_netdev_ops = { .ndo_set_features = netxen_set_features, }; -static inline bool netxen_function_zero(struct pci_dev *pdev) -{ - return (PCI_FUNC(pdev->devfn) == 0) ? true : false; -} - static inline void netxen_set_interrupt_mode(struct netxen_adapter *adapter, u32 mode) { @@ -664,7 +659,7 @@ static int netxen_setup_intr(struct netxen_adapter *adapter) netxen_initialize_interrupt_registers(adapter); netxen_set_msix_bit(pdev, 0); - if (netxen_function_zero(pdev)) { + if (adapter->portnum == 0) { if (!netxen_setup_msi_interrupts(adapter, num_msix)) netxen_set_interrupt_mode(adapter, NETXEN_MSI_MODE); else diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c index a2494bf85007..ca0ee29a57b5 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_fp.c +++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c @@ -1799,6 +1799,11 @@ netdev_features_t qede_features_check(struct sk_buff *skb, ntohs(udp_hdr(skb)->dest) != gnv_port)) return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK); + } else if (l4_proto == IPPROTO_IPIP) { + /* IPIP tunnels are unknown to the device or at least unsupported natively, + * offloads for them can't be done trivially, so disable them for such skb. + */ + return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK); } } diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c index 46d8510b2fe2..a569abe7f5ef 100644 --- a/drivers/net/ethernet/realtek/r8169_main.c +++ b/drivers/net/ethernet/realtek/r8169_main.c @@ -2207,7 +2207,8 @@ static void rtl_pll_power_down(struct rtl8169_private *tp) } switch (tp->mac_version) { - case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_33: + case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_26: + case RTL_GIGA_MAC_VER_32 ... RTL_GIGA_MAC_VER_33: case RTL_GIGA_MAC_VER_37: case RTL_GIGA_MAC_VER_39: case RTL_GIGA_MAC_VER_43: @@ -2233,7 +2234,8 @@ static void rtl_pll_power_down(struct rtl8169_private *tp) static void rtl_pll_power_up(struct rtl8169_private *tp) { switch (tp->mac_version) { - case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_33: + case RTL_GIGA_MAC_VER_25 ... RTL_GIGA_MAC_VER_26: + case RTL_GIGA_MAC_VER_32 ... RTL_GIGA_MAC_VER_33: case RTL_GIGA_MAC_VER_37: case RTL_GIGA_MAC_VER_39: case RTL_GIGA_MAC_VER_43: diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c index c63304632935..590b088bc4c7 100644 --- a/drivers/net/ethernet/renesas/sh_eth.c +++ b/drivers/net/ethernet/renesas/sh_eth.c @@ -2606,10 +2606,10 @@ static int sh_eth_close(struct net_device *ndev) /* Free all the skbuffs in the Rx queue and the DMA buffer. 
*/ sh_eth_ring_free(ndev); - pm_runtime_put_sync(&mdp->pdev->dev); - mdp->is_opened = 0; + pm_runtime_put(&mdp->pdev->dev); + return 0; } @@ -3034,6 +3034,28 @@ static int sh_mdio_release(struct sh_eth_private *mdp) return 0; } +static int sh_mdiobb_read(struct mii_bus *bus, int phy, int reg) +{ + int res; + + pm_runtime_get_sync(bus->parent); + res = mdiobb_read(bus, phy, reg); + pm_runtime_put(bus->parent); + + return res; +} + +static int sh_mdiobb_write(struct mii_bus *bus, int phy, int reg, u16 val) +{ + int res; + + pm_runtime_get_sync(bus->parent); + res = mdiobb_write(bus, phy, reg, val); + pm_runtime_put(bus->parent); + + return res; +} + /* MDIO bus init function */ static int sh_mdio_init(struct sh_eth_private *mdp, struct sh_eth_plat_data *pd) @@ -3058,6 +3080,10 @@ static int sh_mdio_init(struct sh_eth_private *mdp, if (!mdp->mii_bus) return -ENOMEM; + /* Wrap accessors with Runtime PM-aware ops */ + mdp->mii_bus->read = sh_mdiobb_read; + mdp->mii_bus->write = sh_mdiobb_write; + /* Hook up MII support for ethtool */ mdp->mii_bus->name = "sh_mii"; mdp->mii_bus->parent = dev; diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c index a2e80c89de2d..9a6a519426a0 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c @@ -721,6 +721,8 @@ static SIMPLE_DEV_PM_OPS(intel_eth_pm_ops, intel_eth_pci_suspend, #define PCI_DEVICE_ID_INTEL_EHL_PSE1_RGMII1G_ID 0x4bb0 #define PCI_DEVICE_ID_INTEL_EHL_PSE1_SGMII1G_ID 0x4bb1 #define PCI_DEVICE_ID_INTEL_EHL_PSE1_SGMII2G5_ID 0x4bb2 +#define PCI_DEVICE_ID_INTEL_TGLH_SGMII1G_0_ID 0x43ac +#define PCI_DEVICE_ID_INTEL_TGLH_SGMII1G_1_ID 0x43a2 #define PCI_DEVICE_ID_INTEL_TGL_SGMII1G_ID 0xa0ac static const struct pci_device_id intel_eth_pci_id_table[] = { @@ -735,6 +737,8 @@ static const struct pci_device_id intel_eth_pci_id_table[] = { { PCI_DEVICE_DATA(INTEL, EHL_PSE1_SGMII1G_ID, &ehl_pse1_sgmii1g_info) }, { PCI_DEVICE_DATA(INTEL, EHL_PSE1_SGMII2G5_ID, &ehl_pse1_sgmii1g_info) }, { PCI_DEVICE_DATA(INTEL, TGL_SGMII1G_ID, &tgl_sgmii1g_info) }, + { PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_0_ID, &tgl_sgmii1g_info) }, + { PCI_DEVICE_DATA(INTEL, TGLH_SGMII1G_1_ID, &tgl_sgmii1g_info) }, {} }; MODULE_DEVICE_TABLE(pci, intel_eth_pci_id_table); diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c index 459ae715b33d..f184b00f5116 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c @@ -135,7 +135,7 @@ static int meson8b_init_rgmii_tx_clk(struct meson8b_dwmac *dwmac) struct device *dev = dwmac->dev; static const struct clk_parent_data mux_parents[] = { { .fw_name = "clkin0", }, - { .fw_name = "clkin1", }, + { .index = -1, }, }; static const struct clk_div_table div_table[] = { { .div = 2, .val = 2, }, diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c index 58e0511badba..a5e0eff4a387 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c @@ -64,6 +64,7 @@ struct emac_variant { * @variant: reference to the current board variant * @regmap: regmap for using the syscon * @internal_phy_powered: Does the internal PHY is enabled + * @use_internal_phy: Is the internal PHY selected for use * @mux_handle: Internal pointer used by mdio-mux lib */ struct sunxi_priv_data { @@ -74,6 +75,7 @@ struct 
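The sh_eth MDIO change above hooks runtime PM at the mii_bus level rather than in every caller: the bitbang accessors are wrapped so any phylib access holds a runtime-PM reference for the duration of the transfer. The wrapper shape, mirroring the patch (mdiobb_read/mdiobb_write are the real bitbang accessors it delegates to):

#include <linux/mdio-bitbang.h>
#include <linux/phy.h>
#include <linux/pm_runtime.h>

static int pm_aware_mdio_read(struct mii_bus *bus, int phy, int reg)
{
	int res;

	pm_runtime_get_sync(bus->parent);	/* clock the MDIO block */
	res = mdiobb_read(bus, phy, reg);
	pm_runtime_put(bus->parent);		/* may power it back down */

	return res;
}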
sunxi_priv_data { const struct emac_variant *variant; struct regmap_field *regmap_field; bool internal_phy_powered; + bool use_internal_phy; void *mux_handle; }; @@ -539,8 +541,11 @@ static const struct stmmac_dma_ops sun8i_dwmac_dma_ops = { .dma_interrupt = sun8i_dwmac_dma_interrupt, }; +static int sun8i_dwmac_power_internal_phy(struct stmmac_priv *priv); + static int sun8i_dwmac_init(struct platform_device *pdev, void *priv) { + struct net_device *ndev = platform_get_drvdata(pdev); struct sunxi_priv_data *gmac = priv; int ret; @@ -554,13 +559,25 @@ static int sun8i_dwmac_init(struct platform_device *pdev, void *priv) ret = clk_prepare_enable(gmac->tx_clk); if (ret) { - if (gmac->regulator) - regulator_disable(gmac->regulator); dev_err(&pdev->dev, "Could not enable AHB clock\n"); - return ret; + goto err_disable_regulator; + } + + if (gmac->use_internal_phy) { + ret = sun8i_dwmac_power_internal_phy(netdev_priv(ndev)); + if (ret) + goto err_disable_clk; } return 0; + +err_disable_clk: + clk_disable_unprepare(gmac->tx_clk); +err_disable_regulator: + if (gmac->regulator) + regulator_disable(gmac->regulator); + + return ret; } static void sun8i_dwmac_core_init(struct mac_device_info *hw, @@ -831,7 +848,6 @@ static int mdio_mux_syscon_switch_fn(int current_child, int desired_child, struct sunxi_priv_data *gmac = priv->plat->bsp_priv; u32 reg, val; int ret = 0; - bool need_power_ephy = false; if (current_child ^ desired_child) { regmap_field_read(gmac->regmap_field, ®); @@ -839,13 +855,12 @@ static int mdio_mux_syscon_switch_fn(int current_child, int desired_child, case DWMAC_SUN8I_MDIO_MUX_INTERNAL_ID: dev_info(priv->device, "Switch mux to internal PHY"); val = (reg & ~H3_EPHY_MUX_MASK) | H3_EPHY_SELECT; - - need_power_ephy = true; + gmac->use_internal_phy = true; break; case DWMAC_SUN8I_MDIO_MUX_EXTERNAL_ID: dev_info(priv->device, "Switch mux to external PHY"); val = (reg & ~H3_EPHY_MUX_MASK) | H3_EPHY_SHUTDOWN; - need_power_ephy = false; + gmac->use_internal_phy = false; break; default: dev_err(priv->device, "Invalid child ID %x\n", @@ -853,7 +868,7 @@ static int mdio_mux_syscon_switch_fn(int current_child, int desired_child, return -EINVAL; } regmap_field_write(gmac->regmap_field, val); - if (need_power_ephy) { + if (gmac->use_internal_phy) { ret = sun8i_dwmac_power_internal_phy(priv); if (ret) return ret; @@ -883,22 +898,23 @@ static int sun8i_dwmac_register_mdio_mux(struct stmmac_priv *priv) return ret; } -static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv) +static int sun8i_dwmac_set_syscon(struct device *dev, + struct plat_stmmacenet_data *plat) { - struct sunxi_priv_data *gmac = priv->plat->bsp_priv; - struct device_node *node = priv->device->of_node; + struct sunxi_priv_data *gmac = plat->bsp_priv; + struct device_node *node = dev->of_node; int ret; u32 reg, val; ret = regmap_field_read(gmac->regmap_field, &val); if (ret) { - dev_err(priv->device, "Fail to read from regmap field.\n"); + dev_err(dev, "Fail to read from regmap field.\n"); return ret; } reg = gmac->variant->default_syscon_value; if (reg != val) - dev_warn(priv->device, + dev_warn(dev, "Current syscon value is not the default %x (expect %x)\n", val, reg); @@ -911,9 +927,9 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv) /* Force EPHY xtal frequency to 24MHz. 
*/ reg |= H3_EPHY_CLK_SEL; - ret = of_mdio_parse_addr(priv->device, priv->plat->phy_node); + ret = of_mdio_parse_addr(dev, plat->phy_node); if (ret < 0) { - dev_err(priv->device, "Could not parse MDIO addr\n"); + dev_err(dev, "Could not parse MDIO addr\n"); return ret; } /* of_mdio_parse_addr returns a valid (0 ~ 31) PHY @@ -929,17 +945,17 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv) if (!of_property_read_u32(node, "allwinner,tx-delay-ps", &val)) { if (val % 100) { - dev_err(priv->device, "tx-delay must be a multiple of 100\n"); + dev_err(dev, "tx-delay must be a multiple of 100\n"); return -EINVAL; } val /= 100; - dev_dbg(priv->device, "set tx-delay to %x\n", val); + dev_dbg(dev, "set tx-delay to %x\n", val); if (val <= gmac->variant->tx_delay_max) { reg &= ~(gmac->variant->tx_delay_max << SYSCON_ETXDC_SHIFT); reg |= (val << SYSCON_ETXDC_SHIFT); } else { - dev_err(priv->device, "Invalid TX clock delay: %d\n", + dev_err(dev, "Invalid TX clock delay: %d\n", val); return -EINVAL; } @@ -947,17 +963,17 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv) if (!of_property_read_u32(node, "allwinner,rx-delay-ps", &val)) { if (val % 100) { - dev_err(priv->device, "rx-delay must be a multiple of 100\n"); + dev_err(dev, "rx-delay must be a multiple of 100\n"); return -EINVAL; } val /= 100; - dev_dbg(priv->device, "set rx-delay to %x\n", val); + dev_dbg(dev, "set rx-delay to %x\n", val); if (val <= gmac->variant->rx_delay_max) { reg &= ~(gmac->variant->rx_delay_max << SYSCON_ERXDC_SHIFT); reg |= (val << SYSCON_ERXDC_SHIFT); } else { - dev_err(priv->device, "Invalid RX clock delay: %d\n", + dev_err(dev, "Invalid RX clock delay: %d\n", val); return -EINVAL; } @@ -968,7 +984,7 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv) if (gmac->variant->support_rmii) reg &= ~SYSCON_RMII_EN; - switch (priv->plat->interface) { + switch (plat->interface) { case PHY_INTERFACE_MODE_MII: /* default */ break; @@ -982,8 +998,8 @@ static int sun8i_dwmac_set_syscon(struct stmmac_priv *priv) reg |= SYSCON_RMII_EN | SYSCON_ETCS_EXT_GMII; break; default: - dev_err(priv->device, "Unsupported interface mode: %s", - phy_modes(priv->plat->interface)); + dev_err(dev, "Unsupported interface mode: %s", + phy_modes(plat->interface)); return -EINVAL; } @@ -1004,17 +1020,10 @@ static void sun8i_dwmac_exit(struct platform_device *pdev, void *priv) struct sunxi_priv_data *gmac = priv; if (gmac->variant->soc_has_internal_phy) { - /* sun8i_dwmac_exit could be called with mdiomux uninit */ - if (gmac->mux_handle) - mdio_mux_uninit(gmac->mux_handle); if (gmac->internal_phy_powered) sun8i_dwmac_unpower_internal_phy(gmac); } - sun8i_dwmac_unset_syscon(gmac); - - reset_control_put(gmac->rst_ephy); - clk_disable_unprepare(gmac->tx_clk); if (gmac->regulator) @@ -1049,16 +1058,11 @@ static struct mac_device_info *sun8i_dwmac_setup(void *ppriv) { struct mac_device_info *mac; struct stmmac_priv *priv = ppriv; - int ret; mac = devm_kzalloc(priv->device, sizeof(*mac), GFP_KERNEL); if (!mac) return NULL; - ret = sun8i_dwmac_set_syscon(priv); - if (ret) - return NULL; - mac->pcsr = priv->ioaddr; mac->mac = &sun8i_dwmac_ops; mac->dma = &sun8i_dwmac_dma_ops; @@ -1134,10 +1138,6 @@ static int sun8i_dwmac_probe(struct platform_device *pdev) if (ret) return ret; - plat_dat = stmmac_probe_config_dt(pdev, &stmmac_res.mac); - if (IS_ERR(plat_dat)) - return PTR_ERR(plat_dat); - gmac = devm_kzalloc(dev, sizeof(*gmac), GFP_KERNEL); if (!gmac) return -ENOMEM; @@ -1201,11 +1201,15 @@ static int sun8i_dwmac_probe(struct 
platform_device *pdev) ret = of_get_phy_mode(dev->of_node, &interface); if (ret) return -EINVAL; - plat_dat->interface = interface; + + plat_dat = stmmac_probe_config_dt(pdev, &stmmac_res.mac); + if (IS_ERR(plat_dat)) + return PTR_ERR(plat_dat); /* platform data specifying hardware features and callbacks. * hardware features were copied from Allwinner drivers. */ + plat_dat->interface = interface; plat_dat->rx_coe = STMMAC_RX_COE_TYPE2; plat_dat->tx_coe = 1; plat_dat->has_sun8i = true; @@ -1214,9 +1218,13 @@ static int sun8i_dwmac_probe(struct platform_device *pdev) plat_dat->exit = sun8i_dwmac_exit; plat_dat->setup = sun8i_dwmac_setup; + ret = sun8i_dwmac_set_syscon(&pdev->dev, plat_dat); + if (ret) + goto dwmac_deconfig; + ret = sun8i_dwmac_init(pdev, plat_dat->bsp_priv); if (ret) - return ret; + goto dwmac_syscon; ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); if (ret) @@ -1230,7 +1238,7 @@ static int sun8i_dwmac_probe(struct platform_device *pdev) if (gmac->variant->soc_has_internal_phy) { ret = get_ephy_nodes(priv); if (ret) - goto dwmac_exit; + goto dwmac_remove; ret = sun8i_dwmac_register_mdio_mux(priv); if (ret) { dev_err(&pdev->dev, "Failed to register mux\n"); @@ -1239,15 +1247,42 @@ static int sun8i_dwmac_probe(struct platform_device *pdev) } else { ret = sun8i_dwmac_reset(priv); if (ret) - goto dwmac_exit; + goto dwmac_remove; } return ret; dwmac_mux: - sun8i_dwmac_unset_syscon(gmac); + reset_control_put(gmac->rst_ephy); + clk_put(gmac->ephy_clk); +dwmac_remove: + stmmac_dvr_remove(&pdev->dev); dwmac_exit: + sun8i_dwmac_exit(pdev, gmac); +dwmac_syscon: + sun8i_dwmac_unset_syscon(gmac); +dwmac_deconfig: + stmmac_remove_config_dt(pdev, plat_dat); + + return ret; +} + +static int sun8i_dwmac_remove(struct platform_device *pdev) +{ + struct net_device *ndev = platform_get_drvdata(pdev); + struct stmmac_priv *priv = netdev_priv(ndev); + struct sunxi_priv_data *gmac = priv->plat->bsp_priv; + + if (gmac->variant->soc_has_internal_phy) { + mdio_mux_uninit(gmac->mux_handle); + sun8i_dwmac_unpower_internal_phy(gmac); + reset_control_put(gmac->rst_ephy); + clk_put(gmac->ephy_clk); + } + stmmac_pltfr_remove(pdev); -return ret; + sun8i_dwmac_unset_syscon(gmac); + + return 0; } static const struct of_device_id sun8i_dwmac_match[] = { @@ -1269,7 +1304,7 @@ MODULE_DEVICE_TABLE(of, sun8i_dwmac_match); static struct platform_driver sun8i_dwmac_driver = { .probe = sun8i_dwmac_probe, - .remove = stmmac_pltfr_remove, + .remove = sun8i_dwmac_remove, .driver = { .name = "dwmac-sun8i", .pm = &stmmac_pltfr_pm_ops, diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac5.c b/drivers/net/ethernet/stmicro/stmmac/dwmac5.c index 03e79a677c8b..8f7ac24545ef 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac5.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac5.c @@ -568,68 +568,24 @@ static int dwmac5_est_write(void __iomem *ioaddr, u32 reg, u32 val, bool gcl) int dwmac5_est_configure(void __iomem *ioaddr, struct stmmac_est *cfg, unsigned int ptp_rate) { - u32 speed, total_offset, offset, ctrl, ctr_low; - u32 extcfg = readl(ioaddr + GMAC_EXT_CONFIG); - u32 mac_cfg = readl(ioaddr + GMAC_CONFIG); int i, ret = 0x0; - u64 total_ctr; - - if (extcfg & GMAC_CONFIG_EIPG_EN) { - offset = (extcfg & GMAC_CONFIG_EIPG) >> GMAC_CONFIG_EIPG_SHIFT; - offset = 104 + (offset * 8); - } else { - offset = (mac_cfg & GMAC_CONFIG_IPG) >> GMAC_CONFIG_IPG_SHIFT; - offset = 96 - (offset * 8); - } - - speed = mac_cfg & (GMAC_CONFIG_PS | GMAC_CONFIG_FES); - speed = speed >> GMAC_CONFIG_FES_SHIFT; - - switch (speed) { - case 
0x0: - offset = offset * 1000; /* 1G */ - break; - case 0x1: - offset = offset * 400; /* 2.5G */ - break; - case 0x2: - offset = offset * 100000; /* 10M */ - break; - case 0x3: - offset = offset * 10000; /* 100M */ - break; - default: - return -EINVAL; - } - - offset = offset / 1000; + u32 ctrl; ret |= dwmac5_est_write(ioaddr, BTR_LOW, cfg->btr[0], false); ret |= dwmac5_est_write(ioaddr, BTR_HIGH, cfg->btr[1], false); ret |= dwmac5_est_write(ioaddr, TER, cfg->ter, false); ret |= dwmac5_est_write(ioaddr, LLR, cfg->gcl_size, false); + ret |= dwmac5_est_write(ioaddr, CTR_LOW, cfg->ctr[0], false); + ret |= dwmac5_est_write(ioaddr, CTR_HIGH, cfg->ctr[1], false); if (ret) return ret; - total_offset = 0; for (i = 0; i < cfg->gcl_size; i++) { - ret = dwmac5_est_write(ioaddr, i, cfg->gcl[i] + offset, true); + ret = dwmac5_est_write(ioaddr, i, cfg->gcl[i], true); if (ret) return ret; - - total_offset += offset; } - total_ctr = cfg->ctr[0] + cfg->ctr[1] * 1000000000ULL; - total_ctr += total_offset; - - ctr_low = do_div(total_ctr, 1000000000); - - ret |= dwmac5_est_write(ioaddr, CTR_LOW, ctr_low, false); - ret |= dwmac5_est_write(ioaddr, CTR_HIGH, total_ctr, false); - if (ret) - return ret; - ctrl = readl(ioaddr + MTL_EST_CONTROL); ctrl &= ~PTOV; ctrl |= ((1000000000 / ptp_rate) * 6) << PTOV_SHIFT; diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index 5b1c12ff98c0..26b971cd4da5 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -2184,7 +2184,7 @@ static int stmmac_napi_check(struct stmmac_priv *priv, u32 chan) spin_lock_irqsave(&ch->lock, flags); stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 0); spin_unlock_irqrestore(&ch->lock, flags); - __napi_schedule_irqoff(&ch->rx_napi); + __napi_schedule(&ch->rx_napi); } } @@ -2193,7 +2193,7 @@ static int stmmac_napi_check(struct stmmac_priv *priv, u32 chan) spin_lock_irqsave(&ch->lock, flags); stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 0, 1); spin_unlock_irqrestore(&ch->lock, flags); - __napi_schedule_irqoff(&ch->tx_napi); + __napi_schedule(&ch->tx_napi); } } @@ -4026,6 +4026,7 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu) { struct stmmac_priv *priv = netdev_priv(dev); int txfifosz = priv->plat->tx_fifo_size; + const int mtu = new_mtu; if (txfifosz == 0) txfifosz = priv->dma_cap.tx_fifo_size; @@ -4043,7 +4044,7 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu) if ((txfifosz < new_mtu) || (new_mtu > BUF_SIZE_16KiB)) return -EINVAL; - dev->mtu = new_mtu; + dev->mtu = mtu; netdev_update_features(dev); diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c index f5bed4d26e80..8ed3b2c834a0 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c @@ -599,7 +599,8 @@ static int tc_setup_taprio(struct stmmac_priv *priv, { u32 size, wid = priv->dma_cap.estwid, dep = priv->dma_cap.estdep; struct plat_stmmacenet_data *plat = priv->plat; - struct timespec64 time; + struct timespec64 time, current_time; + ktime_t current_time_ns; bool fpe = false; int i, ret = 0; u64 ctr; @@ -694,7 +695,22 @@ static int tc_setup_taprio(struct stmmac_priv *priv, } /* Adjust for real system time */ - time = ktime_to_timespec64(qopt->base_time); + priv->ptp_clock_ops.gettime64(&priv->ptp_clock_ops, ¤t_time); + current_time_ns = timespec64_to_ktime(current_time); + if 
(ktime_after(qopt->base_time, current_time_ns)) { + time = ktime_to_timespec64(qopt->base_time); + } else { + ktime_t base_time; + s64 n; + + n = div64_s64(ktime_sub_ns(current_time_ns, qopt->base_time), + qopt->cycle_time); + base_time = ktime_add_ns(qopt->base_time, + (n + 1) * qopt->cycle_time); + + time = ktime_to_timespec64(base_time); + } + priv->plat->est->btr[0] = (u32)time.tv_nsec; priv->plat->est->btr[1] = (u32)time.tv_sec; diff --git a/drivers/net/ethernet/ti/cpts.c b/drivers/net/ethernet/ti/cpts.c index d1fc7955d422..43222a34cba0 100644 --- a/drivers/net/ethernet/ti/cpts.c +++ b/drivers/net/ethernet/ti/cpts.c @@ -599,6 +599,7 @@ void cpts_unregister(struct cpts *cpts) ptp_clock_unregister(cpts->clock); cpts->clock = NULL; + cpts->phc_index = -1; cpts_write32(cpts, 0, int_enable); cpts_write32(cpts, 0, control); @@ -784,6 +785,7 @@ struct cpts *cpts_create(struct device *dev, void __iomem *regs, cpts->cc.read = cpts_systim_read; cpts->cc.mask = CLOCKSOURCE_MASK(32); cpts->info = cpts_info; + cpts->phc_index = -1; if (n_ext_ts) cpts->info.n_ext_ts = n_ext_ts; diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c index c4795249719d..14d9a791924b 100644 --- a/drivers/net/ipa/gsi.c +++ b/drivers/net/ipa/gsi.c @@ -326,8 +326,8 @@ gsi_evt_ring_state(struct gsi *gsi, u32 evt_ring_id) } /* Issue an event ring command and wait for it to complete */ -static int evt_ring_command(struct gsi *gsi, u32 evt_ring_id, - enum gsi_evt_cmd_opcode opcode) +static void evt_ring_command(struct gsi *gsi, u32 evt_ring_id, + enum gsi_evt_cmd_opcode opcode) { struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; struct completion *completion = &evt_ring->completion; @@ -340,7 +340,13 @@ static int evt_ring_command(struct gsi *gsi, u32 evt_ring_id, * is issued here. Only permit *this* event ring to trigger * an interrupt, and only enable the event control IRQ type * when we expect it to occur. + * + * There's a small chance that a previous command completed + * after the interrupt was disabled, so make sure we have no + * pending interrupts before we enable them. 
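The comment just above motivates the iowrite32(~0, ...IRQ_CLR_OFFSET) write that follows it: a completion interrupt from an earlier command may still be latched while the source is masked, and it must be acknowledged before unmasking or it will be delivered as a spurious event. A small userspace model of the clear-then-unmask ordering (the register pair is invented; only the sequencing mirrors the driver):

    #include <stdint.h>
    #include <stdio.h>

    /* Toy IRQ block: 'pending' latches events even while masked. */
    struct irq_regs {
        uint32_t pending;
        uint32_t mask; /* 1 = enabled */
    };

    static void irq_clear(struct irq_regs *r, uint32_t bits)  { r->pending &= ~bits; }
    static void irq_unmask(struct irq_regs *r, uint32_t bits) { r->mask |= bits; }

    int main(void)
    {
        struct irq_regs r = { .pending = 1u << 3, .mask = 0 }; /* stale event */

        /* Unmasking first would deliver the stale event; instead ack
         * everything, then enable only the bit we expect next. */
        irq_clear(&r, ~0u);
        irq_unmask(&r, 1u << 3);

        printf("deliverable now: %#x\n", (unsigned)(r.pending & r.mask)); /* 0 */
        return 0;
    }

Clearing with ~0 rather than a single bit is the conservative choice: nothing is in flight at this point, so anything latched is by definition stale.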
*/ + iowrite32(~0, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET); + val = BIT(evt_ring_id); iowrite32(val, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET); gsi_irq_type_enable(gsi, GSI_EV_CTRL); @@ -355,19 +361,16 @@ static int evt_ring_command(struct gsi *gsi, u32 evt_ring_id, iowrite32(0, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET); if (success) - return 0; + return; dev_err(dev, "GSI command %u for event ring %u timed out, state %u\n", opcode, evt_ring_id, evt_ring->state); - - return -ETIMEDOUT; } /* Allocate an event ring in NOT_ALLOCATED state */ static int gsi_evt_ring_alloc_command(struct gsi *gsi, u32 evt_ring_id) { struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; - int ret; /* Get initial event ring state */ evt_ring->state = gsi_evt_ring_state(gsi, evt_ring_id); @@ -377,14 +380,16 @@ static int gsi_evt_ring_alloc_command(struct gsi *gsi, u32 evt_ring_id) return -EINVAL; } - ret = evt_ring_command(gsi, evt_ring_id, GSI_EVT_ALLOCATE); - if (!ret && evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED) { - dev_err(gsi->dev, "event ring %u bad state %u after alloc\n", - evt_ring_id, evt_ring->state); - ret = -EIO; - } + evt_ring_command(gsi, evt_ring_id, GSI_EVT_ALLOCATE); - return ret; + /* If successful the event ring state will have changed */ + if (evt_ring->state == GSI_EVT_RING_STATE_ALLOCATED) + return 0; + + dev_err(gsi->dev, "event ring %u bad state %u after alloc\n", + evt_ring_id, evt_ring->state); + + return -EIO; } /* Reset a GSI event ring in ALLOCATED or ERROR state. */ @@ -392,7 +397,6 @@ static void gsi_evt_ring_reset_command(struct gsi *gsi, u32 evt_ring_id) { struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; enum gsi_evt_ring_state state = evt_ring->state; - int ret; if (state != GSI_EVT_RING_STATE_ALLOCATED && state != GSI_EVT_RING_STATE_ERROR) { @@ -401,17 +405,20 @@ static void gsi_evt_ring_reset_command(struct gsi *gsi, u32 evt_ring_id) return; } - ret = evt_ring_command(gsi, evt_ring_id, GSI_EVT_RESET); - if (!ret && evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED) - dev_err(gsi->dev, "event ring %u bad state %u after reset\n", - evt_ring_id, evt_ring->state); + evt_ring_command(gsi, evt_ring_id, GSI_EVT_RESET); + + /* If successful the event ring state will have changed */ + if (evt_ring->state == GSI_EVT_RING_STATE_ALLOCATED) + return; + + dev_err(gsi->dev, "event ring %u bad state %u after reset\n", + evt_ring_id, evt_ring->state); } /* Issue a hardware de-allocation request for an allocated event ring */ static void gsi_evt_ring_de_alloc_command(struct gsi *gsi, u32 evt_ring_id) { struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id]; - int ret; if (evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED) { dev_err(gsi->dev, "event ring %u state %u before dealloc\n", @@ -419,10 +426,14 @@ static void gsi_evt_ring_de_alloc_command(struct gsi *gsi, u32 evt_ring_id) return; } - ret = evt_ring_command(gsi, evt_ring_id, GSI_EVT_DE_ALLOC); - if (!ret && evt_ring->state != GSI_EVT_RING_STATE_NOT_ALLOCATED) - dev_err(gsi->dev, "event ring %u bad state %u after dealloc\n", - evt_ring_id, evt_ring->state); + evt_ring_command(gsi, evt_ring_id, GSI_EVT_DE_ALLOC); + + /* If successful the event ring state will have changed */ + if (evt_ring->state == GSI_EVT_RING_STATE_NOT_ALLOCATED) + return; + + dev_err(gsi->dev, "event ring %u bad state %u after dealloc\n", + evt_ring_id, evt_ring->state); } /* Fetch the current state of a channel from hardware */ @@ -438,7 +449,7 @@ static enum gsi_channel_state gsi_channel_state(struct gsi_channel *channel) 
} /* Issue a channel command and wait for it to complete */ -static int +static void gsi_channel_command(struct gsi_channel *channel, enum gsi_ch_cmd_opcode opcode) { struct completion *completion = &channel->completion; @@ -453,7 +464,13 @@ gsi_channel_command(struct gsi_channel *channel, enum gsi_ch_cmd_opcode opcode) * issued here. So we only permit *this* channel to trigger * an interrupt and only enable the channel control IRQ type * when we expect it to occur. + * + * There's a small chance that a previous command completed + * after the interrupt was disabled, so make sure we have no + * pending interrupts before we enable them. */ + iowrite32(~0, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET); + val = BIT(channel_id); iowrite32(val, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET); gsi_irq_type_enable(gsi, GSI_CH_CTRL); @@ -467,12 +484,10 @@ gsi_channel_command(struct gsi_channel *channel, enum gsi_ch_cmd_opcode opcode) iowrite32(0, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET); if (success) - return 0; + return; dev_err(dev, "GSI command %u for channel %u timed out, state %u\n", opcode, channel_id, gsi_channel_state(channel)); - - return -ETIMEDOUT; } /* Allocate GSI channel in NOT_ALLOCATED state */ @@ -481,7 +496,6 @@ static int gsi_channel_alloc_command(struct gsi *gsi, u32 channel_id) struct gsi_channel *channel = &gsi->channel[channel_id]; struct device *dev = gsi->dev; enum gsi_channel_state state; - int ret; /* Get initial channel state */ state = gsi_channel_state(channel); @@ -491,17 +505,17 @@ static int gsi_channel_alloc_command(struct gsi *gsi, u32 channel_id) return -EINVAL; } - ret = gsi_channel_command(channel, GSI_CH_ALLOCATE); + gsi_channel_command(channel, GSI_CH_ALLOCATE); - /* Channel state will normally have been updated */ + /* If successful the channel state will have changed */ state = gsi_channel_state(channel); - if (!ret && state != GSI_CHANNEL_STATE_ALLOCATED) { - dev_err(dev, "channel %u bad state %u after alloc\n", - channel_id, state); - ret = -EIO; - } + if (state == GSI_CHANNEL_STATE_ALLOCATED) + return 0; - return ret; + dev_err(dev, "channel %u bad state %u after alloc\n", + channel_id, state); + + return -EIO; } /* Start an ALLOCATED channel */ @@ -509,7 +523,6 @@ static int gsi_channel_start_command(struct gsi_channel *channel) { struct device *dev = channel->gsi->dev; enum gsi_channel_state state; - int ret; state = gsi_channel_state(channel); if (state != GSI_CHANNEL_STATE_ALLOCATED && @@ -519,17 +532,17 @@ static int gsi_channel_start_command(struct gsi_channel *channel) return -EINVAL; } - ret = gsi_channel_command(channel, GSI_CH_START); + gsi_channel_command(channel, GSI_CH_START); - /* Channel state will normally have been updated */ + /* If successful the channel state will have changed */ state = gsi_channel_state(channel); - if (!ret && state != GSI_CHANNEL_STATE_STARTED) { - dev_err(dev, "channel %u bad state %u after start\n", - gsi_channel_id(channel), state); - ret = -EIO; - } + if (state == GSI_CHANNEL_STATE_STARTED) + return 0; - return ret; + dev_err(dev, "channel %u bad state %u after start\n", + gsi_channel_id(channel), state); + + return -EIO; } /* Stop a GSI channel in STARTED state */ @@ -537,7 +550,6 @@ static int gsi_channel_stop_command(struct gsi_channel *channel) { struct device *dev = channel->gsi->dev; enum gsi_channel_state state; - int ret; state = gsi_channel_state(channel); @@ -554,12 +566,12 @@ static int gsi_channel_stop_command(struct gsi_channel *channel) return -EINVAL; } - ret = gsi_channel_command(channel, 
GSI_CH_STOP); + gsi_channel_command(channel, GSI_CH_STOP); - /* Channel state will normally have been updated */ + /* If successful the channel state will have changed */ state = gsi_channel_state(channel); - if (ret || state == GSI_CHANNEL_STATE_STOPPED) - return ret; + if (state == GSI_CHANNEL_STATE_STOPPED) + return 0; /* We may have to try again if stop is in progress */ if (state == GSI_CHANNEL_STATE_STOP_IN_PROC) @@ -576,7 +588,6 @@ static void gsi_channel_reset_command(struct gsi_channel *channel) { struct device *dev = channel->gsi->dev; enum gsi_channel_state state; - int ret; msleep(1); /* A short delay is required before a RESET command */ @@ -590,11 +601,11 @@ static void gsi_channel_reset_command(struct gsi_channel *channel) return; } - ret = gsi_channel_command(channel, GSI_CH_RESET); + gsi_channel_command(channel, GSI_CH_RESET); - /* Channel state will normally have been updated */ + /* If successful the channel state will have changed */ state = gsi_channel_state(channel); - if (!ret && state != GSI_CHANNEL_STATE_ALLOCATED) + if (state != GSI_CHANNEL_STATE_ALLOCATED) dev_err(dev, "channel %u bad state %u after reset\n", gsi_channel_id(channel), state); } @@ -605,7 +616,6 @@ static void gsi_channel_de_alloc_command(struct gsi *gsi, u32 channel_id) struct gsi_channel *channel = &gsi->channel[channel_id]; struct device *dev = gsi->dev; enum gsi_channel_state state; - int ret; state = gsi_channel_state(channel); if (state != GSI_CHANNEL_STATE_ALLOCATED) { @@ -614,11 +624,12 @@ static void gsi_channel_de_alloc_command(struct gsi *gsi, u32 channel_id) return; } - ret = gsi_channel_command(channel, GSI_CH_DE_ALLOC); + gsi_channel_command(channel, GSI_CH_DE_ALLOC); - /* Channel state will normally have been updated */ + /* If successful the channel state will have changed */ state = gsi_channel_state(channel); - if (!ret && state != GSI_CHANNEL_STATE_NOT_ALLOCATED) + + if (state != GSI_CHANNEL_STATE_NOT_ALLOCATED) dev_err(dev, "channel %u bad state %u after dealloc\n", channel_id, state); } diff --git a/drivers/net/ipa/ipa_clock.c b/drivers/net/ipa/ipa_clock.c index 9dcf16f399b7..135c393437f1 100644 --- a/drivers/net/ipa/ipa_clock.c +++ b/drivers/net/ipa/ipa_clock.c @@ -115,13 +115,13 @@ static int ipa_interconnect_enable(struct ipa *ipa) return ret; data = &clock->interconnect_data[IPA_INTERCONNECT_IMEM]; - ret = icc_set_bw(clock->memory_path, data->average_rate, + ret = icc_set_bw(clock->imem_path, data->average_rate, data->peak_rate); if (ret) goto err_memory_path_disable; data = &clock->interconnect_data[IPA_INTERCONNECT_CONFIG]; - ret = icc_set_bw(clock->memory_path, data->average_rate, + ret = icc_set_bw(clock->config_path, data->average_rate, data->peak_rate); if (ret) goto err_imem_path_disable; diff --git a/drivers/net/ipa/ipa_modem.c b/drivers/net/ipa/ipa_modem.c index e34fe2d77324..9b08eb823984 100644 --- a/drivers/net/ipa/ipa_modem.c +++ b/drivers/net/ipa/ipa_modem.c @@ -216,6 +216,7 @@ int ipa_modem_start(struct ipa *ipa) ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX]->netdev = netdev; ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]->netdev = netdev; + SET_NETDEV_DEV(netdev, &ipa->pdev->dev); priv = netdev_priv(netdev); priv->ipa = ipa; diff --git a/drivers/net/mdio/mdio-bitbang.c b/drivers/net/mdio/mdio-bitbang.c index 5136275c8e73..d3915f831854 100644 --- a/drivers/net/mdio/mdio-bitbang.c +++ b/drivers/net/mdio/mdio-bitbang.c @@ -149,7 +149,7 @@ static int mdiobb_cmd_addr(struct mdiobb_ctrl *ctrl, int phy, u32 addr) return dev_addr; } -static int mdiobb_read(struct mii_bus *bus, 
int phy, int reg) +int mdiobb_read(struct mii_bus *bus, int phy, int reg) { struct mdiobb_ctrl *ctrl = bus->priv; int ret, i; @@ -180,8 +180,9 @@ static int mdiobb_read(struct mii_bus *bus, int phy, int reg) mdiobb_get_bit(ctrl); return ret; } +EXPORT_SYMBOL(mdiobb_read); -static int mdiobb_write(struct mii_bus *bus, int phy, int reg, u16 val) +int mdiobb_write(struct mii_bus *bus, int phy, int reg, u16 val) { struct mdiobb_ctrl *ctrl = bus->priv; @@ -201,6 +202,7 @@ static int mdiobb_write(struct mii_bus *bus, int phy, int reg, u16 val) mdiobb_get_bit(ctrl); return 0; } +EXPORT_SYMBOL(mdiobb_write); struct mii_bus *alloc_mdio_bitbang(struct mdiobb_ctrl *ctrl) { diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c index 33372756a451..ddb78fb4d6dc 100644 --- a/drivers/net/phy/smsc.c +++ b/drivers/net/phy/smsc.c @@ -317,7 +317,8 @@ static int smsc_phy_probe(struct phy_device *phydev) /* Make clk optional to keep DTB backward compatibility. */ priv->refclk = clk_get_optional(dev, NULL); if (IS_ERR(priv->refclk)) - dev_err_probe(dev, PTR_ERR(priv->refclk), "Failed to request clock\n"); + return dev_err_probe(dev, PTR_ERR(priv->refclk), + "Failed to request clock\n"); ret = clk_prepare_enable(priv->refclk); if (ret) diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c index 09c27f7773f9..d445ecb1d0c7 100644 --- a/drivers/net/ppp/ppp_generic.c +++ b/drivers/net/ppp/ppp_generic.c @@ -623,6 +623,7 @@ static int ppp_bridge_channels(struct channel *pch, struct channel *pchb) write_unlock_bh(&pch->upl); return -EALREADY; } + refcount_inc(&pchb->file.refcnt); rcu_assign_pointer(pch->bridge, pchb); write_unlock_bh(&pch->upl); @@ -632,19 +633,24 @@ static int ppp_bridge_channels(struct channel *pch, struct channel *pchb) write_unlock_bh(&pchb->upl); goto err_unset; } + refcount_inc(&pch->file.refcnt); rcu_assign_pointer(pchb->bridge, pch); write_unlock_bh(&pchb->upl); - refcount_inc(&pch->file.refcnt); - refcount_inc(&pchb->file.refcnt); - return 0; err_unset: write_lock_bh(&pch->upl); + /* Re-read pch->bridge with upl held in case it was modified concurrently */ + pchb = rcu_dereference_protected(pch->bridge, lockdep_is_held(&pch->upl)); RCU_INIT_POINTER(pch->bridge, NULL); write_unlock_bh(&pch->upl); synchronize_rcu(); + + if (pchb) + if (refcount_dec_and_test(&pchb->file.refcnt)) + ppp_destroy_channel(pchb); + return -EALREADY; } diff --git a/drivers/net/tun.c b/drivers/net/tun.c index fbed05ae7b0f..978ac0981d16 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -1365,7 +1365,7 @@ static struct sk_buff *tun_napi_alloc_frags(struct tun_file *tfile, int i; if (it->nr_segs > MAX_SKB_FRAGS + 1) - return ERR_PTR(-ENOMEM); + return ERR_PTR(-EMSGSIZE); local_bh_disable(); skb = napi_get_frags(&tfile->napi); diff --git a/drivers/net/usb/Kconfig b/drivers/net/usb/Kconfig index 1e3719028780..fbbe78643631 100644 --- a/drivers/net/usb/Kconfig +++ b/drivers/net/usb/Kconfig @@ -631,7 +631,6 @@ config USB_NET_AQC111 config USB_RTL8153_ECM tristate "RTL8153 ECM support" depends on USB_NET_CDCETHER && (USB_RTL8152 || USB_RTL8152=n) - default y help This option supports ECM mode for RTL8153 ethernet adapter, when CONFIG_USB_RTL8152 is not set, or the RTL8153 device is not diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c index 8c1d61c2cbac..6aaa0675c28a 100644 --- a/drivers/net/usb/cdc_ether.c +++ b/drivers/net/usb/cdc_ether.c @@ -793,6 +793,13 @@ static const struct usb_device_id products[] = { .driver_info = 0, }, +/* Lenovo Powered USB-C Travel Hub 
(4X90S92381, based on Realtek RTL8153) */ +{ + USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x721e, USB_CLASS_COMM, + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), + .driver_info = 0, +}, + /* ThinkPad USB-C Dock Gen 2 (based on Realtek RTL8153) */ { USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0xa387, USB_CLASS_COMM, diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c index 2bac57d5e8d5..291e76d32abe 100644 --- a/drivers/net/usb/cdc_ncm.c +++ b/drivers/net/usb/cdc_ncm.c @@ -1199,7 +1199,10 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign) * accordingly. Otherwise, we should check here. */ if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) - delayed_ndp_size = ALIGN(ctx->max_ndp_size, ctx->tx_ndp_modulus); + delayed_ndp_size = ctx->max_ndp_size + + max_t(u32, + ctx->tx_ndp_modulus, + ctx->tx_modulus + ctx->tx_remainder) - 1; else delayed_ndp_size = 0; @@ -1410,7 +1413,8 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign) if (!(dev->driver_info->flags & FLAG_SEND_ZLP) && skb_out->len > ctx->min_tx_pkt) { padding_count = ctx->tx_curr_size - skb_out->len; - skb_put_zero(skb_out, padding_count); + if (!WARN_ON(padding_count > ctx->tx_curr_size)) + skb_put_zero(skb_out, padding_count); } else if (skb_out->len < ctx->tx_curr_size && (skb_out->len % dev->maxpacket) == 0) { skb_put_u8(skb_out, 0); /* force short packet */ @@ -1823,6 +1827,15 @@ cdc_ncm_speed_change(struct usbnet *dev, uint32_t rx_speed = le32_to_cpu(data->DLBitRRate); uint32_t tx_speed = le32_to_cpu(data->ULBitRate); + /* if the speed hasn't changed, don't report it. + * RTL8156 shipped before 2021 sends notification about every 32ms. + */ + if (dev->rx_speed == rx_speed && dev->tx_speed == tx_speed) + return; + + dev->rx_speed = rx_speed; + dev->tx_speed = tx_speed; + /* * Currently the USB-NET API does not support reporting the actual * device speed. Do print it instead. @@ -1863,10 +1876,8 @@ static void cdc_ncm_status(struct usbnet *dev, struct urb *urb) * USB_CDC_NOTIFY_NETWORK_CONNECTION notification shall be * sent by device after USB_CDC_NOTIFY_SPEED_CHANGE. */ - netif_info(dev, link, dev->net, - "network connection: %sconnected\n", - !!event->wValue ? "" : "dis"); - usbnet_link_change(dev, !!event->wValue, 0); + if (netif_carrier_ok(dev->net) != !!event->wValue) + usbnet_link_change(dev, !!event->wValue, 0); break; case USB_CDC_NOTIFY_SPEED_CHANGE: diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c index d166c321ee9b..af19513a9f75 100644 --- a/drivers/net/usb/qmi_wwan.c +++ b/drivers/net/usb/qmi_wwan.c @@ -1013,6 +1013,7 @@ static const struct usb_device_id products[] = { {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0125)}, /* Quectel EC25, EC20 R2.0 Mini PCIe */ {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0306)}, /* Quectel EP06/EG06/EM06 */ {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0512)}, /* Quectel EG12/EM12 */ + {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0620)}, /* Quectel EM160R-GL */ {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0800)}, /* Quectel RM500Q-GL */ /* 3. 
Combined interface devices matching on interface number */ diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c index c448d6089821..67cd6986634f 100644 --- a/drivers/net/usb/r8152.c +++ b/drivers/net/usb/r8152.c @@ -6877,6 +6877,7 @@ static const struct usb_device_id rtl8152_table[] = { {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)}, {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x720c)}, {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7214)}, + {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x721e)}, {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0xa387)}, {REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041)}, {REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff)}, diff --git a/drivers/net/usb/r8153_ecm.c b/drivers/net/usb/r8153_ecm.c index 2c3fabd38b16..20b2df8d74ae 100644 --- a/drivers/net/usb/r8153_ecm.c +++ b/drivers/net/usb/r8153_ecm.c @@ -122,12 +122,20 @@ static const struct driver_info r8153_info = { }; static const struct usb_device_id products[] = { +/* Realtek RTL8153 Based USB 3.0 Ethernet Adapters */ { USB_DEVICE_AND_INTERFACE_INFO(VENDOR_ID_REALTEK, 0x8153, USB_CLASS_COMM, USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), .driver_info = (unsigned long)&r8153_info, }, +/* Lenovo Powered USB-C Travel Hub (4X90S92381, based on Realtek RTL8153) */ +{ + USB_DEVICE_AND_INTERFACE_INFO(VENDOR_ID_LENOVO, 0x721e, USB_CLASS_COMM, + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), + .driver_info = (unsigned long)&r8153_info, +}, + { }, /* END */ }; MODULE_DEVICE_TABLE(usb, products); diff --git a/drivers/net/usb/rndis_host.c b/drivers/net/usb/rndis_host.c index 6609d21ef894..f813ca9dec53 100644 --- a/drivers/net/usb/rndis_host.c +++ b/drivers/net/usb/rndis_host.c @@ -387,7 +387,7 @@ generic_rndis_bind(struct usbnet *dev, struct usb_interface *intf, int flags) reply_len = sizeof *phym; retval = rndis_query(dev, intf, u.buf, RNDIS_OID_GEN_PHYSICAL_MEDIUM, - 0, (void **) &phym, &reply_len); + reply_len, (void **)&phym, &reply_len); if (retval != 0 || !phym) { /* OID is optional so don't fail here. */ phym_unspec = cpu_to_le32(RNDIS_PHYSICAL_MEDIUM_UNSPECIFIED); diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 4c41df624dbb..508408fbe78f 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -2093,14 +2093,16 @@ static int virtnet_set_channels(struct net_device *dev, get_online_cpus(); err = _virtnet_set_queues(vi, queue_pairs); - if (!err) { - netif_set_real_num_tx_queues(dev, queue_pairs); - netif_set_real_num_rx_queues(dev, queue_pairs); - - virtnet_set_affinity(vi); + if (err) { + put_online_cpus(); + goto err; } + virtnet_set_affinity(vi); put_online_cpus(); + netif_set_real_num_tx_queues(dev, queue_pairs); + netif_set_real_num_rx_queues(dev, queue_pairs); + err: return err; } diff --git a/drivers/net/wan/Kconfig b/drivers/net/wan/Kconfig index 4029fde71a9e..83c9481995dd 100644 --- a/drivers/net/wan/Kconfig +++ b/drivers/net/wan/Kconfig @@ -282,6 +282,7 @@ config SLIC_DS26522 tristate "Slic Maxim ds26522 card support" depends on SPI depends on FSL_SOC || ARCH_MXC || ARCH_LAYERSCAPE || COMPILE_TEST + select BITREVERSE help This module initializes and configures the slic maxim card in T1 or E1 mode. diff --git a/drivers/net/wan/hdlc_ppp.c b/drivers/net/wan/hdlc_ppp.c index 64f855651336..261b53fc8e04 100644 --- a/drivers/net/wan/hdlc_ppp.c +++ b/drivers/net/wan/hdlc_ppp.c @@ -569,6 +569,13 @@ static void ppp_timer(struct timer_list *t) unsigned long flags; spin_lock_irqsave(&ppp->lock, flags); + /* mod_timer could be called after we entered this function but + * before we got the lock. 
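The guard being added to ppp_timer() above closes a race between expiry and rearming: the timer can fire and its callback start running just as another path calls mod_timer(), in which case the expiry the callback is handling is already stale. Rechecking timer_pending() once ppp->lock is held detects exactly that interleaving. A pthread sketch of the recheck-under-lock pattern, with an 'armed' flag standing in for timer_pending():

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static bool armed; /* models timer_pending(): true while the timer is queued */

    static void mod_timer(void)
    {
        pthread_mutex_lock(&lock);
        armed = true;
        pthread_mutex_unlock(&lock);
    }

    static void timer_fired(void)
    {
        pthread_mutex_lock(&lock);
        armed = false; /* the timer is dequeued when it expires */
        pthread_mutex_unlock(&lock);
    }

    /* Runs after timer_fired(); a mod_timer() may sneak in between. */
    static void callback(void)
    {
        pthread_mutex_lock(&lock);
        if (armed) {
            /* Rearmed after expiry but before we took the lock: this
             * invocation is stale, the rearmed timer does the real work. */
            pthread_mutex_unlock(&lock);
            puts("stale expiry ignored");
            return;
        }
        puts("processing expiry");
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        mod_timer();
        timer_fired(); /* expiry: callback is about to run */
        mod_timer();   /* concurrent rearm wins the race */
        callback();    /* sees armed == true and bails out */
        return 0;
    }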
+ */ + if (timer_pending(&proto->timer)) { + spin_unlock_irqrestore(&ppp->lock, flags); + return; + } switch (proto->state) { case STOPPING: case REQ_SENT: diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c index b97c38b9a270..350b7913622c 100644 --- a/drivers/net/wireless/ath/ath11k/core.c +++ b/drivers/net/wireless/ath/ath11k/core.c @@ -185,7 +185,7 @@ int ath11k_core_suspend(struct ath11k_base *ab) ath11k_hif_ce_irq_disable(ab); ret = ath11k_hif_suspend(ab); - if (!ret) { + if (ret) { ath11k_warn(ab, "failed to suspend hif: %d\n", ret); return ret; } diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c index 205c0f1a40e9..920e5026a635 100644 --- a/drivers/net/wireless/ath/ath11k/dp_rx.c +++ b/drivers/net/wireless/ath/ath11k/dp_rx.c @@ -2294,6 +2294,7 @@ static void ath11k_dp_rx_h_ppdu(struct ath11k *ar, struct hal_rx_desc *rx_desc, { u8 channel_num; u32 center_freq; + struct ieee80211_channel *channel; rx_status->freq = 0; rx_status->rate_idx = 0; @@ -2314,9 +2315,12 @@ static void ath11k_dp_rx_h_ppdu(struct ath11k *ar, struct hal_rx_desc *rx_desc, rx_status->band = NL80211_BAND_5GHZ; } else { spin_lock_bh(&ar->data_lock); - rx_status->band = ar->rx_channel->band; - channel_num = - ieee80211_frequency_to_channel(ar->rx_channel->center_freq); + channel = ar->rx_channel; + if (channel) { + rx_status->band = channel->band; + channel_num = + ieee80211_frequency_to_channel(channel->center_freq); + } spin_unlock_bh(&ar->data_lock); ath11k_dbg_dump(ar->ab, ATH11K_DBG_DATA, NULL, "rx_desc: ", rx_desc, sizeof(struct hal_rx_desc)); diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c index 5c175e3e09b2..c1608f64ea95 100644 --- a/drivers/net/wireless/ath/ath11k/mac.c +++ b/drivers/net/wireless/ath/ath11k/mac.c @@ -3021,6 +3021,7 @@ static int ath11k_mac_station_add(struct ath11k *ar, } if (ab->hw_params.vdev_start_delay && + !arvif->is_started && arvif->vdev_type != WMI_VDEV_TYPE_AP) { ret = ath11k_start_vdev_delay(ar->hw, vif); if (ret) { @@ -5284,7 +5285,8 @@ ath11k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw, /* for QCA6390 bss peer must be created before vdev_start */ if (ab->hw_params.vdev_start_delay && arvif->vdev_type != WMI_VDEV_TYPE_AP && - arvif->vdev_type != WMI_VDEV_TYPE_MONITOR) { + arvif->vdev_type != WMI_VDEV_TYPE_MONITOR && + !ath11k_peer_find_by_vdev_id(ab, arvif->vdev_id)) { memcpy(&arvif->chanctx, ctx, sizeof(*ctx)); ret = 0; goto out; @@ -5295,7 +5297,9 @@ ath11k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw, goto out; } - if (ab->hw_params.vdev_start_delay) { + if (ab->hw_params.vdev_start_delay && + (arvif->vdev_type == WMI_VDEV_TYPE_AP || + arvif->vdev_type == WMI_VDEV_TYPE_MONITOR)) { param.vdev_id = arvif->vdev_id; param.peer_type = WMI_PEER_TYPE_DEFAULT; param.peer_addr = ar->mac_addr; diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c index 857647aa57c8..20b415cd96c4 100644 --- a/drivers/net/wireless/ath/ath11k/pci.c +++ b/drivers/net/wireless/ath/ath11k/pci.c @@ -274,7 +274,7 @@ static int ath11k_pci_fix_l1ss(struct ath11k_base *ab) PCIE_QSERDES_COM_SYSCLK_EN_SEL_REG, PCIE_QSERDES_COM_SYSCLK_EN_SEL_VAL, PCIE_QSERDES_COM_SYSCLK_EN_SEL_MSK); - if (!ret) { + if (ret) { ath11k_warn(ab, "failed to set sysclk: %d\n", ret); return ret; } @@ -283,7 +283,7 @@ static int ath11k_pci_fix_l1ss(struct ath11k_base *ab) PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG1_REG, PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG1_VAL, 
PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG_MSK); - if (!ret) { + if (ret) { ath11k_warn(ab, "failed to set dtct config1 error: %d\n", ret); return ret; } @@ -292,7 +292,7 @@ static int ath11k_pci_fix_l1ss(struct ath11k_base *ab) PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG2_REG, PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG2_VAL, PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG_MSK); - if (!ret) { + if (ret) { ath11k_warn(ab, "failed to set dtct config2: %d\n", ret); return ret; } @@ -301,7 +301,7 @@ static int ath11k_pci_fix_l1ss(struct ath11k_base *ab) PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG4_REG, PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG4_VAL, PCIE_USB3_PCS_MISC_OSC_DTCT_CONFIG_MSK); - if (!ret) { + if (ret) { ath11k_warn(ab, "failed to set dtct config4: %d\n", ret); return ret; } @@ -886,6 +886,32 @@ static void ath11k_pci_free_region(struct ath11k_pci *ab_pci) pci_disable_device(pci_dev); } +static void ath11k_pci_aspm_disable(struct ath11k_pci *ab_pci) +{ + struct ath11k_base *ab = ab_pci->ab; + + pcie_capability_read_word(ab_pci->pdev, PCI_EXP_LNKCTL, + &ab_pci->link_ctl); + + ath11k_dbg(ab, ATH11K_DBG_PCI, "pci link_ctl 0x%04x L0s %d L1 %d\n", + ab_pci->link_ctl, + u16_get_bits(ab_pci->link_ctl, PCI_EXP_LNKCTL_ASPM_L0S), + u16_get_bits(ab_pci->link_ctl, PCI_EXP_LNKCTL_ASPM_L1)); + + /* disable L0s and L1 */ + pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL, + ab_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC); + + set_bit(ATH11K_PCI_ASPM_RESTORE, &ab_pci->flags); +} + +static void ath11k_pci_aspm_restore(struct ath11k_pci *ab_pci) +{ + if (test_and_clear_bit(ATH11K_PCI_ASPM_RESTORE, &ab_pci->flags)) + pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL, + ab_pci->link_ctl); +} + static int ath11k_pci_power_up(struct ath11k_base *ab) { struct ath11k_pci *ab_pci = ath11k_pci_priv(ab); @@ -895,6 +921,11 @@ static int ath11k_pci_power_up(struct ath11k_base *ab) clear_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags); ath11k_pci_sw_reset(ab_pci->ab, true); + /* Disable ASPM during firmware download due to problems switching + * to AMSS state. 
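The ath11k_pci_aspm_disable()/_restore() pair added above follows the usual save/modify/restore discipline for PCIe config fields: snapshot Link Control, clear only the two ASPM enable bits for the duration of firmware download, and restore the snapshot exactly once, guarded by test_and_clear_bit() since both the failure path (power_down) and the success path (pci_start) attempt the restore. A plain-C sketch (the register value is invented; the 0x3 mask for Link Control bits 1:0 matches the PCIe ASPM-control field):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LNKCTL_ASPMC 0x0003u /* PCIe Link Control [1:0]: L0s/L1 enable */

    static uint16_t lnkctl = 0x0043; /* pretend device: L0s+L1 on, other bits set */
    static uint16_t saved;
    static bool need_restore;

    static void aspm_disable(void)
    {
        saved = lnkctl;                 /* remember the full register */
        lnkctl = saved & ~LNKCTL_ASPMC; /* clear only the ASPM enables */
        need_restore = true;
    }

    static void aspm_restore(void)
    {
        if (need_restore) {             /* restore at most once */
            need_restore = false;
            lnkctl = saved;
        }
    }

    int main(void)
    {
        aspm_disable();
        printf("during download: %#06x\n", (unsigned)lnkctl); /* 0x0040 */
        aspm_restore();
        aspm_restore();                                       /* harmless no-op */
        printf("after start:     %#06x\n", (unsigned)lnkctl); /* 0x0043 */
        return 0;
    }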
+ */ + ath11k_pci_aspm_disable(ab_pci); + ret = ath11k_mhi_start(ab_pci); if (ret) { ath11k_err(ab, "failed to start mhi: %d\n", ret); @@ -908,6 +939,9 @@ static void ath11k_pci_power_down(struct ath11k_base *ab) { struct ath11k_pci *ab_pci = ath11k_pci_priv(ab); + /* restore aspm in case firmware bootup fails */ + ath11k_pci_aspm_restore(ab_pci); + ath11k_pci_force_wake(ab_pci->ab); ath11k_mhi_stop(ab_pci); clear_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags); @@ -965,6 +999,8 @@ static int ath11k_pci_start(struct ath11k_base *ab) set_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags); + ath11k_pci_aspm_restore(ab_pci); + ath11k_pci_ce_irqs_enable(ab); ath11k_ce_rx_post_buf(ab); diff --git a/drivers/net/wireless/ath/ath11k/pci.h b/drivers/net/wireless/ath/ath11k/pci.h index 0432a702416b..fe44d0dfce19 100644 --- a/drivers/net/wireless/ath/ath11k/pci.h +++ b/drivers/net/wireless/ath/ath11k/pci.h @@ -63,6 +63,7 @@ struct ath11k_msi_config { enum ath11k_pci_flags { ATH11K_PCI_FLAG_INIT_DONE, ATH11K_PCI_FLAG_IS_MSI_64, + ATH11K_PCI_ASPM_RESTORE, }; struct ath11k_pci { @@ -80,6 +81,7 @@ struct ath11k_pci { /* enum ath11k_pci_flags */ unsigned long flags; + u16 link_ctl; }; static inline struct ath11k_pci *ath11k_pci_priv(struct ath11k_base *ab) diff --git a/drivers/net/wireless/ath/ath11k/peer.c b/drivers/net/wireless/ath/ath11k/peer.c index 1866d82678fa..b69e7ebfa930 100644 --- a/drivers/net/wireless/ath/ath11k/peer.c +++ b/drivers/net/wireless/ath/ath11k/peer.c @@ -76,6 +76,23 @@ struct ath11k_peer *ath11k_peer_find_by_id(struct ath11k_base *ab, return NULL; } +struct ath11k_peer *ath11k_peer_find_by_vdev_id(struct ath11k_base *ab, + int vdev_id) +{ + struct ath11k_peer *peer; + + spin_lock_bh(&ab->base_lock); + + list_for_each_entry(peer, &ab->peers, list) { + if (vdev_id == peer->vdev_id) { + spin_unlock_bh(&ab->base_lock); + return peer; + } + } + spin_unlock_bh(&ab->base_lock); + return NULL; +} + void ath11k_peer_unmap_event(struct ath11k_base *ab, u16 peer_id) { struct ath11k_peer *peer; diff --git a/drivers/net/wireless/ath/ath11k/peer.h b/drivers/net/wireless/ath/ath11k/peer.h index bba2e00b6944..8553ed061aea 100644 --- a/drivers/net/wireless/ath/ath11k/peer.h +++ b/drivers/net/wireless/ath/ath11k/peer.h @@ -43,5 +43,7 @@ int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif, struct ieee80211_sta *sta, struct peer_create_params *param); int ath11k_wait_for_peer_delete_done(struct ath11k *ar, u32 vdev_id, const u8 *addr); +struct ath11k_peer *ath11k_peer_find_by_vdev_id(struct ath11k_base *ab, + int vdev_id); #endif /* _PEER_H_ */ diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c index f0b5c50974f3..0db623ff4bb9 100644 --- a/drivers/net/wireless/ath/ath11k/qmi.c +++ b/drivers/net/wireless/ath/ath11k/qmi.c @@ -1660,6 +1660,7 @@ static int ath11k_qmi_respond_fw_mem_request(struct ath11k_base *ab) struct qmi_wlanfw_respond_mem_resp_msg_v01 resp; struct qmi_txn txn = {}; int ret = 0, i; + bool delayed; req = kzalloc(sizeof(*req), GFP_KERNEL); if (!req) @@ -1672,11 +1673,13 @@ static int ath11k_qmi_respond_fw_mem_request(struct ath11k_base *ab) * failure to FW and FW will then request mulitple blocks of small * chunk size memory. 
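The target_mem_delayed logic being introduced around this comment is a renegotiation fallback: when a large contiguous DMA chunk cannot be allocated and the segment count is small enough for the firmware to renegotiate, the driver frees whatever it already allocated, flags the request as delayed, and answers with an empty segment list; the firmware then asks again for many smaller chunks, and the otherwise-fatal QMI error response is tolerated. A sketch of the allocate-or-defer step, with malloc standing in for dma_alloc_coherent():

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct chunk { size_t size; void *vaddr; };

    /* Try to satisfy every requested chunk; on failure free everything
     * already granted, signal 'delayed' and let the peer retry smaller. */
    static bool alloc_chunks(struct chunk *c, int n, bool *delayed)
    {
        *delayed = false;
        for (int i = 0; i < n; i++) {
            c[i].vaddr = malloc(c[i].size);
            if (!c[i].vaddr) {
                while (i--)
                    free(c[i].vaddr);
                *delayed = true; /* answer with an empty segment list */
                return false;
            }
        }
        return true;
    }

    int main(void)
    {
        struct chunk req[2] = { { 1 << 20, NULL }, { 1 << 20, NULL } };
        bool delayed;

        if (alloc_chunks(req, 2, &delayed))
            puts("memory granted in one pass");
        else if (delayed)
            puts("empty response sent; waiting for a smaller request");
        return 0;
    }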
*/ - if (!ab->bus_params.fixed_mem_region && ab->qmi.mem_seg_count <= 2) { + if (!ab->bus_params.fixed_mem_region && ab->qmi.target_mem_delayed) { + delayed = true; ath11k_dbg(ab, ATH11K_DBG_QMI, "qmi delays mem_request %d\n", ab->qmi.mem_seg_count); memset(req, 0, sizeof(*req)); } else { + delayed = false; req->mem_seg_len = ab->qmi.mem_seg_count; for (i = 0; i < req->mem_seg_len ; i++) { @@ -1708,6 +1711,12 @@ static int ath11k_qmi_respond_fw_mem_request(struct ath11k_base *ab) } if (resp.resp.result != QMI_RESULT_SUCCESS_V01) { + /* the error response is expected when + * target_mem_delayed is true. + */ + if (delayed && resp.resp.error == 0) + goto out; + ath11k_warn(ab, "Respond mem req failed, result: %d, err: %d\n", resp.resp.result, resp.resp.error); ret = -EINVAL; @@ -1742,6 +1751,8 @@ static int ath11k_qmi_alloc_target_mem_chunk(struct ath11k_base *ab) int i; struct target_mem_chunk *chunk; + ab->qmi.target_mem_delayed = false; + for (i = 0; i < ab->qmi.mem_seg_count; i++) { chunk = &ab->qmi.target_mem[i]; chunk->vaddr = dma_alloc_coherent(ab->dev, @@ -1749,6 +1760,15 @@ static int ath11k_qmi_alloc_target_mem_chunk(struct ath11k_base *ab) &chunk->paddr, GFP_KERNEL); if (!chunk->vaddr) { + if (ab->qmi.mem_seg_count <= 2) { + ath11k_dbg(ab, ATH11K_DBG_QMI, + "qmi dma allocation failed (%d B type %u), will try later with small size\n", + chunk->size, + chunk->type); + ath11k_qmi_free_target_mem_chunk(ab); + ab->qmi.target_mem_delayed = true; + return 0; + } ath11k_err(ab, "failed to alloc memory, size: 0x%x, type: %u\n", chunk->size, chunk->type); @@ -2517,7 +2537,7 @@ static void ath11k_qmi_msg_mem_request_cb(struct qmi_handle *qmi_hdl, ret); return; } - } else if (msg->mem_seg_len > 2) { + } else { ret = ath11k_qmi_alloc_target_mem_chunk(ab); if (ret) { ath11k_warn(ab, "qmi failed to alloc target memory: %d\n", diff --git a/drivers/net/wireless/ath/ath11k/qmi.h b/drivers/net/wireless/ath/ath11k/qmi.h index 92925c9eac67..7bad374cc23a 100644 --- a/drivers/net/wireless/ath/ath11k/qmi.h +++ b/drivers/net/wireless/ath/ath11k/qmi.h @@ -125,6 +125,7 @@ struct ath11k_qmi { struct target_mem_chunk target_mem[ATH11K_QMI_WLANFW_MAX_NUM_MEM_SEG_V01]; u32 mem_seg_count; u32 target_mem_mode; + bool target_mem_delayed; u8 cal_done; struct target_info target; struct m3_mem_region m3_mem; diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c index da4b546b62cb..73869d445c5b 100644 --- a/drivers/net/wireless/ath/ath11k/wmi.c +++ b/drivers/net/wireless/ath/ath11k/wmi.c @@ -3460,6 +3460,9 @@ int ath11k_wmi_set_hw_mode(struct ath11k_base *ab, len = sizeof(*cmd); skb = ath11k_wmi_alloc_skb(wmi_ab, len); + if (!skb) + return -ENOMEM; + cmd = (struct wmi_pdev_set_hw_mode_cmd_param *)skb->data; cmd->tlv_header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_PDEV_SET_HW_MODE_CMD) | diff --git a/drivers/net/wireless/ath/wil6210/Kconfig b/drivers/net/wireless/ath/wil6210/Kconfig index 6a95b199bf62..f074e9c31aa2 100644 --- a/drivers/net/wireless/ath/wil6210/Kconfig +++ b/drivers/net/wireless/ath/wil6210/Kconfig @@ -2,6 +2,7 @@ config WIL6210 tristate "Wilocity 60g WiFi card wil6210 support" select WANT_DEV_COREDUMP + select CRC32 depends on CFG80211 depends on PCI default n diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c index ed4635bd151a..102a8f14c22d 100644 --- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c +++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c @@ -40,9 +40,9 @@ static const struct 
ieee80211_iface_limit if_limits[] = { .types = BIT(NL80211_IFTYPE_ADHOC) }, { .max = 16, - .types = BIT(NL80211_IFTYPE_AP) | + .types = BIT(NL80211_IFTYPE_AP) #ifdef CONFIG_MAC80211_MESH - BIT(NL80211_IFTYPE_MESH_POINT) + | BIT(NL80211_IFTYPE_MESH_POINT) #endif }, { .max = MT7915_MAX_INTERFACES, diff --git a/drivers/net/wireless/mediatek/mt76/sdio.c b/drivers/net/wireless/mediatek/mt76/sdio.c index 62b5b912818f..0b6facb17ff7 100644 --- a/drivers/net/wireless/mediatek/mt76/sdio.c +++ b/drivers/net/wireless/mediatek/mt76/sdio.c @@ -157,10 +157,14 @@ static void mt76s_net_worker(struct mt76_worker *w) static int mt76s_process_tx_queue(struct mt76_dev *dev, struct mt76_queue *q) { - bool wake, mcu = q == dev->q_mcu[MT_MCUQ_WM]; struct mt76_queue_entry entry; int nframes = 0; + bool mcu; + if (!q) + return 0; + + mcu = q == dev->q_mcu[MT_MCUQ_WM]; while (q->queued > 0) { if (!q->entry[q->tail].done) break; @@ -177,21 +181,12 @@ static int mt76s_process_tx_queue(struct mt76_dev *dev, struct mt76_queue *q) nframes++; } - wake = q->stopped && q->queued < q->ndesc - 8; - if (wake) - q->stopped = false; - if (!q->queued) wake_up(&dev->tx_wait); - if (mcu) - goto out; - - mt76_txq_schedule(&dev->phy, q->qid); + if (!mcu) + mt76_txq_schedule(&dev->phy, q->qid); - if (wake) - ieee80211_wake_queue(dev->hw, q->qid); -out: return nframes; } diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c index dc850109de22..b95d093728b9 100644 --- a/drivers/net/wireless/mediatek/mt76/usb.c +++ b/drivers/net/wireless/mediatek/mt76/usb.c @@ -811,11 +811,12 @@ static void mt76u_status_worker(struct mt76_worker *w) struct mt76_dev *dev = container_of(usb, struct mt76_dev, usb); struct mt76_queue_entry entry; struct mt76_queue *q; - bool wake; int i; for (i = 0; i < IEEE80211_NUM_ACS; i++) { q = dev->phy.q_tx[i]; + if (!q) + continue; while (q->queued > 0) { if (!q->entry[q->tail].done) @@ -827,10 +828,6 @@ static void mt76u_status_worker(struct mt76_worker *w) mt76_queue_tx_complete(dev, q, &entry); } - wake = q->stopped && q->queued < q->ndesc - 8; - if (wake) - q->stopped = false; - if (!q->queued) wake_up(&dev->tx_wait); @@ -839,8 +836,6 @@ static void mt76u_status_worker(struct mt76_worker *w) if (dev->drv->tx_status_data && !test_and_set_bit(MT76_READING_STATS, &dev->phy.state)) queue_work(dev->wq, &dev->usb.stat_work); - if (wake) - ieee80211_wake_queue(dev->hw, i); } } diff --git a/drivers/net/wireless/realtek/rtlwifi/core.c b/drivers/net/wireless/realtek/rtlwifi/core.c index a7259dbc953d..965bd9589045 100644 --- a/drivers/net/wireless/realtek/rtlwifi/core.c +++ b/drivers/net/wireless/realtek/rtlwifi/core.c @@ -78,7 +78,6 @@ static void rtl_fw_do_work(const struct firmware *firmware, void *context, rtl_dbg(rtlpriv, COMP_ERR, DBG_LOUD, "Firmware callback routine entered!\n"); - complete(&rtlpriv->firmware_loading_complete); if (!firmware) { if (rtlpriv->cfg->alt_fw_name) { err = request_firmware(&firmware, @@ -91,13 +90,13 @@ static void rtl_fw_do_work(const struct firmware *firmware, void *context, } pr_err("Selected firmware is not available\n"); rtlpriv->max_fw_size = 0; - return; + goto exit; } found_alt: if (firmware->size > rtlpriv->max_fw_size) { pr_err("Firmware is too big!\n"); release_firmware(firmware); - return; + goto exit; } if (!is_wow) { memcpy(rtlpriv->rtlhal.pfirmware, firmware->data, @@ -109,6 +108,9 @@ found_alt: rtlpriv->rtlhal.wowlan_fwsize = firmware->size; } release_firmware(firmware); + +exit: + complete(&rtlpriv->firmware_loading_complete); } 
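The rtl_fw_do_work() rework just above fixes a too-early signal: complete() used to run on entry to the firmware callback, so a waiter could be released while the image was still being copied, or when loading had already failed. Routing every return through a single exit label guarantees the completion fires exactly once, after the work is actually done. A condensed model of the pattern:

    #include <stdio.h>

    static void complete_loading(void) { puts("loading completion signalled"); }

    static int process_firmware(const void *fw, long size, long max)
    {
        int err = 0;

        if (!fw) {        /* nothing to load */
            err = -1;
            goto exit;
        }
        if (size > max) { /* reject oversized images */
            err = -2;
            goto exit;
        }
        puts("firmware copied");
    exit:
        complete_loading(); /* every path, success or failure, ends here */
        return err;
    }

    int main(void)
    {
        char img[16];
        process_firmware(img, sizeof(img), 64);
        process_firmware(NULL, 0, 64);
        return 0;
    }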
void rtl_fw_cb(const struct firmware *firmware, void *context) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index ce1b61519441..8caf9b34734d 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -179,7 +179,7 @@ int nvme_reset_ctrl(struct nvme_ctrl *ctrl) } EXPORT_SYMBOL_GPL(nvme_reset_ctrl); -int nvme_reset_ctrl_sync(struct nvme_ctrl *ctrl) +static int nvme_reset_ctrl_sync(struct nvme_ctrl *ctrl) { int ret; @@ -192,7 +192,6 @@ int nvme_reset_ctrl_sync(struct nvme_ctrl *ctrl) return ret; } -EXPORT_SYMBOL_GPL(nvme_reset_ctrl_sync); static void nvme_do_delete_ctrl(struct nvme_ctrl *ctrl) { @@ -331,7 +330,7 @@ static inline void nvme_end_req(struct request *req) req->__sector = nvme_lba_to_sect(req->q->queuedata, le64_to_cpu(nvme_req(req)->result.u64)); - nvme_trace_bio_complete(req, status); + nvme_trace_bio_complete(req); blk_mq_end_request(req, status); } @@ -578,7 +577,7 @@ struct request *nvme_alloc_request(struct request_queue *q, } EXPORT_SYMBOL_GPL(nvme_alloc_request); -struct request *nvme_alloc_request_qid(struct request_queue *q, +static struct request *nvme_alloc_request_qid(struct request_queue *q, struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid) { struct request *req; @@ -589,7 +588,6 @@ struct request *nvme_alloc_request_qid(struct request_queue *q, nvme_init_request(req, cmd); return req; } -EXPORT_SYMBOL_GPL(nvme_alloc_request_qid); static int nvme_toggle_streams(struct nvme_ctrl *ctrl, bool enable) { @@ -1545,8 +1543,21 @@ static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio) } length = (io.nblocks + 1) << ns->lba_shift; - meta_len = (io.nblocks + 1) * ns->ms; - metadata = nvme_to_user_ptr(io.metadata); + + if ((io.control & NVME_RW_PRINFO_PRACT) && + ns->ms == sizeof(struct t10_pi_tuple)) { + /* + * Protection information is stripped/inserted by the + * controller. 
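This nvme_submit_io() hunk special-cases PRACT with the 8-byte T10 DIF tuple: in that configuration the controller inserts and strips the protection information itself, so the host must not supply a metadata buffer, and the ioctl now rejects one; otherwise the metadata length is computed per block exactly as before. The decision table as a small function (the macro values mirror NVME_RW_PRINFO_PRACT and sizeof(struct t10_pi_tuple)):

    #include <stdint.h>
    #include <stdio.h>

    #define PRINFO_PRACT 0x2000u /* RW command: PI action bit */
    #define T10_PI_TUPLE 8u      /* sizeof(struct t10_pi_tuple) */

    /* Returns metadata bytes the host must provide, or -1 to reject. */
    static long meta_len(uint16_t control, unsigned ms, unsigned nblocks,
                         const void *user_meta)
    {
        if ((control & PRINFO_PRACT) && ms == T10_PI_TUPLE) {
            /* Controller inserts/strips PI: a user buffer is an error. */
            return user_meta ? -1 : 0;
        }
        return (long)(nblocks + 1) * ms; /* host supplies it per block */
    }

    int main(void)
    {
        char buf[64];
        printf("%ld\n", meta_len(PRINFO_PRACT, 8, 7, NULL)); /* 0 */
        printf("%ld\n", meta_len(PRINFO_PRACT, 8, 7, buf));  /* -1 */
        printf("%ld\n", meta_len(0, 8, 7, buf));             /* 64 */
        return 0;
    }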
+ */ + if (nvme_to_user_ptr(io.metadata)) + return -EINVAL; + meta_len = 0; + metadata = NULL; + } else { + meta_len = (io.nblocks + 1) * ns->ms; + metadata = nvme_to_user_ptr(io.metadata); + } if (ns->features & NVME_NS_EXT_LBAS) { length += meta_len; @@ -2858,6 +2869,11 @@ static const struct attribute_group *nvme_subsys_attrs_groups[] = { NULL, }; +static inline bool nvme_discovery_ctrl(struct nvme_ctrl *ctrl) +{ + return ctrl->opts && ctrl->opts->discovery_nqn; +} + static bool nvme_validate_cntlid(struct nvme_subsystem *subsys, struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id) { @@ -2877,7 +2893,7 @@ static bool nvme_validate_cntlid(struct nvme_subsystem *subsys, } if ((id->cmic & NVME_CTRL_CMIC_MULTI_CTRL) || - (ctrl->opts && ctrl->opts->discovery_nqn)) + nvme_discovery_ctrl(ctrl)) continue; dev_err(ctrl->device, @@ -3146,7 +3162,7 @@ int nvme_init_identify(struct nvme_ctrl *ctrl) goto out_free; } - if (!ctrl->opts->discovery_nqn && !ctrl->kas) { + if (!nvme_discovery_ctrl(ctrl) && !ctrl->kas) { dev_err(ctrl->device, "keep-alive support is mandatory for fabrics\n"); ret = -EINVAL; @@ -3186,7 +3202,7 @@ int nvme_init_identify(struct nvme_ctrl *ctrl) if (ret < 0) return ret; - if (!ctrl->identified) { + if (!ctrl->identified && !nvme_discovery_ctrl(ctrl)) { ret = nvme_hwmon_init(ctrl); if (ret < 0) return ret; diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c index 38373a0e86ef..5f36cfa8136c 100644 --- a/drivers/nvme/host/fc.c +++ b/drivers/nvme/host/fc.c @@ -166,6 +166,7 @@ struct nvme_fc_ctrl { struct blk_mq_tag_set admin_tag_set; struct blk_mq_tag_set tag_set; + struct work_struct ioerr_work; struct delayed_work connect_work; struct kref ref; @@ -1889,6 +1890,15 @@ __nvme_fc_fcpop_chk_teardowns(struct nvme_fc_ctrl *ctrl, } static void +nvme_fc_ctrl_ioerr_work(struct work_struct *work) +{ + struct nvme_fc_ctrl *ctrl = + container_of(work, struct nvme_fc_ctrl, ioerr_work); + + nvme_fc_error_recovery(ctrl, "transport detected io error"); +} + +static void nvme_fc_fcpio_done(struct nvmefc_fcp_req *req) { struct nvme_fc_fcp_op *op = fcp_req_to_fcp_op(req); @@ -2046,7 +2056,7 @@ done: check_error: if (terminate_assoc) - nvme_fc_error_recovery(ctrl, "transport detected io error"); + queue_work(nvme_reset_wq, &ctrl->ioerr_work); } static int @@ -3233,6 +3243,7 @@ nvme_fc_delete_ctrl(struct nvme_ctrl *nctrl) { struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl); + cancel_work_sync(&ctrl->ioerr_work); cancel_delayed_work_sync(&ctrl->connect_work); /* * kill the association on the link side. 
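The fc.c changes above move error recovery out of the I/O completion path: nvme_fc_fcpio_done() now only queues ioerr_work on nvme_reset_wq, and both teardown paths cancel that work synchronously before the controller can go away. A loose pthread analogue of defer-then-drain (a thread stands in for the workqueue; the pairing of queue and cancel_work_sync() is the point):

    #include <pthread.h>
    #include <stdio.h>

    struct ctrl {
        pthread_t worker;
        int work_queued;
    };

    static void *ioerr_work(void *arg)
    {
        (void)arg;
        puts("deferred error recovery runs outside the completion path");
        return NULL;
    }

    /* Completion handler: too deep a context to recover in, so defer. */
    static void fcpio_done(struct ctrl *c)
    {
        c->work_queued = 1;
        pthread_create(&c->worker, NULL, ioerr_work, c);
    }

    /* Teardown: drain the deferred work before the ctrl can be freed,
     * the moral equivalent of cancel_work_sync(). */
    static void delete_ctrl(struct ctrl *c)
    {
        if (c->work_queued)
            pthread_join(c->worker, NULL);
        puts("ctrl freed only after the handler finished");
    }

    int main(void)
    {
        struct ctrl c = { .work_queued = 0 };
        fcpio_done(&c);
        delete_ctrl(&c);
        return 0;
    }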
this will block @@ -3449,6 +3460,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts, INIT_WORK(&ctrl->ctrl.reset_work, nvme_fc_reset_ctrl_work); INIT_DELAYED_WORK(&ctrl->connect_work, nvme_fc_connect_ctrl_work); + INIT_WORK(&ctrl->ioerr_work, nvme_fc_ctrl_ioerr_work); spin_lock_init(&ctrl->lock); /* io queue count */ @@ -3540,6 +3552,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts, fail_ctrl: nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_DELETING); + cancel_work_sync(&ctrl->ioerr_work); cancel_work_sync(&ctrl->ctrl.reset_work); cancel_delayed_work_sync(&ctrl->connect_work); diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index 7e49f61f81df..88a6b97247f5 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -610,8 +610,6 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl); #define NVME_QID_ANY -1 struct request *nvme_alloc_request(struct request_queue *q, struct nvme_command *cmd, blk_mq_req_flags_t flags); -struct request *nvme_alloc_request_qid(struct request_queue *q, - struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid); void nvme_cleanup_cmd(struct request *req); blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req, struct nvme_command *cmd); @@ -630,7 +628,6 @@ int nvme_get_features(struct nvme_ctrl *dev, unsigned int fid, int nvme_set_queue_count(struct nvme_ctrl *ctrl, int *count); void nvme_stop_keep_alive(struct nvme_ctrl *ctrl); int nvme_reset_ctrl(struct nvme_ctrl *ctrl); -int nvme_reset_ctrl_sync(struct nvme_ctrl *ctrl); int nvme_try_sched_reset(struct nvme_ctrl *ctrl); int nvme_delete_ctrl(struct nvme_ctrl *ctrl); @@ -675,8 +672,7 @@ static inline void nvme_mpath_check_last_path(struct nvme_ns *ns) kblockd_schedule_work(&head->requeue_work); } -static inline void nvme_trace_bio_complete(struct request *req, - blk_status_t status) +static inline void nvme_trace_bio_complete(struct request *req) { struct nvme_ns *ns = req->q->queuedata; @@ -731,8 +727,7 @@ static inline void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl) static inline void nvme_mpath_check_last_path(struct nvme_ns *ns) { } -static inline void nvme_trace_bio_complete(struct request *req, - blk_status_t status) +static inline void nvme_trace_bio_complete(struct request *req) { } static inline int nvme_mpath_init(struct nvme_ctrl *ctrl, diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index b4385cb0ff60..856aa31931c1 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -23,6 +23,7 @@ #include <linux/t10-pi.h> #include <linux/types.h> #include <linux/io-64-nonatomic-lo-hi.h> +#include <linux/io-64-nonatomic-hi-lo.h> #include <linux/sed-opal.h> #include <linux/pci-p2pdma.h> @@ -542,50 +543,71 @@ static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req) return true; } -static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) +static void nvme_free_prps(struct nvme_dev *dev, struct request *req) { - struct nvme_iod *iod = blk_mq_rq_to_pdu(req); const int last_prp = NVME_CTRL_PAGE_SIZE / sizeof(__le64) - 1; - dma_addr_t dma_addr = iod->first_dma, next_dma_addr; + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); + dma_addr_t dma_addr = iod->first_dma; int i; - if (iod->dma_len) { - dma_unmap_page(dev->dev, dma_addr, iod->dma_len, - rq_dma_dir(req)); - return; + for (i = 0; i < iod->npages; i++) { + __le64 *prp_list = nvme_pci_iod_list(req)[i]; + dma_addr_t next_dma_addr = le64_to_cpu(prp_list[last_prp]); + + dma_pool_free(dev->prp_page_pool, 
prp_list, dma_addr); + dma_addr = next_dma_addr; } - WARN_ON_ONCE(!iod->nents); +} - if (is_pci_p2pdma_page(sg_page(iod->sg))) - pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents, - rq_dma_dir(req)); - else - dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req)); +static void nvme_free_sgls(struct nvme_dev *dev, struct request *req) +{ + const int last_sg = SGES_PER_PAGE - 1; + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); + dma_addr_t dma_addr = iod->first_dma; + int i; + for (i = 0; i < iod->npages; i++) { + struct nvme_sgl_desc *sg_list = nvme_pci_iod_list(req)[i]; + dma_addr_t next_dma_addr = le64_to_cpu((sg_list[last_sg]).addr); - if (iod->npages == 0) - dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0], - dma_addr); + dma_pool_free(dev->prp_page_pool, sg_list, dma_addr); + dma_addr = next_dma_addr; + } - for (i = 0; i < iod->npages; i++) { - void *addr = nvme_pci_iod_list(req)[i]; +} - if (iod->use_sgl) { - struct nvme_sgl_desc *sg_list = addr; +static void nvme_unmap_sg(struct nvme_dev *dev, struct request *req) +{ + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); - next_dma_addr = - le64_to_cpu((sg_list[SGES_PER_PAGE - 1]).addr); - } else { - __le64 *prp_list = addr; + if (is_pci_p2pdma_page(sg_page(iod->sg))) + pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents, + rq_dma_dir(req)); + else + dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req)); +} - next_dma_addr = le64_to_cpu(prp_list[last_prp]); - } +static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) +{ + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); - dma_pool_free(dev->prp_page_pool, addr, dma_addr); - dma_addr = next_dma_addr; + if (iod->dma_len) { + dma_unmap_page(dev->dev, iod->first_dma, iod->dma_len, + rq_dma_dir(req)); + return; } + WARN_ON_ONCE(!iod->nents); + + nvme_unmap_sg(dev, req); + if (iod->npages == 0) + dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0], + iod->first_dma); + else if (iod->use_sgl) + nvme_free_sgls(dev, req); + else + nvme_free_prps(dev, req); mempool_free(iod->sg, dev->iod_mempool); } @@ -661,7 +683,7 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev, __le64 *old_prp_list = prp_list; prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma); if (!prp_list) - return BLK_STS_RESOURCE; + goto free_prps; list[iod->npages++] = prp_list; prp_list[0] = old_prp_list[i - 1]; old_prp_list[i - 1] = cpu_to_le64(prp_dma); @@ -681,14 +703,14 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev, dma_addr = sg_dma_address(sg); dma_len = sg_dma_len(sg); } - done: cmnd->dptr.prp1 = cpu_to_le64(sg_dma_address(iod->sg)); cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma); - return BLK_STS_OK; - - bad_sgl: +free_prps: + nvme_free_prps(dev, req); + return BLK_STS_RESOURCE; +bad_sgl: WARN(DO_ONCE(nvme_print_sgl, iod->sg, iod->nents), "Invalid SGL for payload:%d nents:%d\n", blk_rq_payload_bytes(req), iod->nents); @@ -760,7 +782,7 @@ static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev, sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &sgl_dma); if (!sg_list) - return BLK_STS_RESOURCE; + goto free_sgls; i = 0; nvme_pci_iod_list(req)[iod->npages++] = sg_list; @@ -773,6 +795,9 @@ static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev, } while (--entries > 0); return BLK_STS_OK; +free_sgls: + nvme_free_sgls(dev, req); + return BLK_STS_RESOURCE; } static blk_status_t nvme_setup_prp_simple(struct nvme_dev *dev, @@ -841,7 +866,7 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req, sg_init_table(iod->sg, 
blk_rq_nr_phys_segments(req)); iod->nents = blk_rq_map_sg(req->q, req, iod->sg); if (!iod->nents) - goto out; + goto out_free_sg; if (is_pci_p2pdma_page(sg_page(iod->sg))) nr_mapped = pci_p2pdma_map_sg_attrs(dev->dev, iod->sg, @@ -850,16 +875,21 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req, nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents, rq_dma_dir(req), DMA_ATTR_NO_WARN); if (!nr_mapped) - goto out; + goto out_free_sg; iod->use_sgl = nvme_pci_use_sgls(dev, req); if (iod->use_sgl) ret = nvme_pci_setup_sgls(dev, req, &cmnd->rw, nr_mapped); else ret = nvme_pci_setup_prps(dev, req, &cmnd->rw); -out: if (ret != BLK_STS_OK) - nvme_unmap_data(dev, req); + goto out_unmap_sg; + return BLK_STS_OK; + +out_unmap_sg: + nvme_unmap_sg(dev, req); +out_free_sg: + mempool_free(iod->sg, dev->iod_mempool); return ret; } @@ -967,6 +997,7 @@ static inline struct blk_mq_tags *nvme_queue_tagset(struct nvme_queue *nvmeq) static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx) { struct nvme_completion *cqe = &nvmeq->cqes[idx]; + __u16 command_id = READ_ONCE(cqe->command_id); struct request *req; /* @@ -975,17 +1006,17 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx) * aborts. We don't even bother to allocate a struct request * for them but rather special case them here. */ - if (unlikely(nvme_is_aen_req(nvmeq->qid, cqe->command_id))) { + if (unlikely(nvme_is_aen_req(nvmeq->qid, command_id))) { nvme_complete_async_event(&nvmeq->dev->ctrl, cqe->status, &cqe->result); return; } - req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), cqe->command_id); + req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), command_id); if (unlikely(!req)) { dev_warn(nvmeq->dev->ctrl.device, "invalid id %d completed on queue %d\n", - cqe->command_id, le16_to_cpu(cqe->sq_id)); + command_id, le16_to_cpu(cqe->sq_id)); return; } @@ -1794,6 +1825,9 @@ static void nvme_map_cmb(struct nvme_dev *dev) if (dev->cmb_size) return; + if (NVME_CAP_CMBS(dev->ctrl.cap)) + writel(NVME_CMBMSC_CRE, dev->bar + NVME_REG_CMBMSC); + dev->cmbsz = readl(dev->bar + NVME_REG_CMBSZ); if (!dev->cmbsz) return; @@ -1808,6 +1842,16 @@ static void nvme_map_cmb(struct nvme_dev *dev) return; /* + * Tell the controller about the host side address mapping the CMB, + * and enable CMB decoding for the NVMe 1.4+ scheme: + */ + if (NVME_CAP_CMBS(dev->ctrl.cap)) { + hi_lo_writeq(NVME_CMBMSC_CRE | NVME_CMBMSC_CMSE | + (pci_bus_address(pdev, bar) + offset), + dev->bar + NVME_REG_CMBMSC); + } + + /* * Controllers may support a CMB size larger than their BAR, * for example, due to being behind a bridge. 
Reduce the CMB to * the reported size of the BAR @@ -3196,7 +3240,8 @@ static const struct pci_device_id nvme_id_table[] = { { PCI_DEVICE(0x144d, 0xa821), /* Samsung PM1725 */ .driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY, }, { PCI_DEVICE(0x144d, 0xa822), /* Samsung PM1725a */ - .driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY, }, + .driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY | + NVME_QUIRK_IGNORE_DEV_SUBNQN, }, { PCI_DEVICE(0x1d1d, 0x1f1f), /* LighNVM qemu device */ .driver_data = NVME_QUIRK_LIGHTNVM, }, { PCI_DEVICE(0x1d1d, 0x2807), /* CNEX WL */ diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index cf6c49d09c82..b7ce4f221d99 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -97,6 +97,7 @@ struct nvme_rdma_queue { struct completion cm_done; bool pi_support; int cq_size; + struct mutex queue_lock; }; struct nvme_rdma_ctrl { @@ -579,6 +580,7 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl, int ret; queue = &ctrl->queues[idx]; + mutex_init(&queue->queue_lock); queue->ctrl = ctrl; if (idx && ctrl->ctrl.max_integrity_segments) queue->pi_support = true; @@ -598,7 +600,8 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl, if (IS_ERR(queue->cm_id)) { dev_info(ctrl->ctrl.device, "failed to create CM ID: %ld\n", PTR_ERR(queue->cm_id)); - return PTR_ERR(queue->cm_id); + ret = PTR_ERR(queue->cm_id); + goto out_destroy_mutex; } if (ctrl->ctrl.opts->mask & NVMF_OPT_HOST_TRADDR) @@ -628,6 +631,8 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl, out_destroy_cm_id: rdma_destroy_id(queue->cm_id); nvme_rdma_destroy_queue_ib(queue); +out_destroy_mutex: + mutex_destroy(&queue->queue_lock); return ret; } @@ -639,9 +644,10 @@ static void __nvme_rdma_stop_queue(struct nvme_rdma_queue *queue) static void nvme_rdma_stop_queue(struct nvme_rdma_queue *queue) { - if (!test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags)) - return; - __nvme_rdma_stop_queue(queue); + mutex_lock(&queue->queue_lock); + if (test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags)) + __nvme_rdma_stop_queue(queue); + mutex_unlock(&queue->queue_lock); } static void nvme_rdma_free_queue(struct nvme_rdma_queue *queue) @@ -651,6 +657,7 @@ static void nvme_rdma_free_queue(struct nvme_rdma_queue *queue) nvme_rdma_destroy_queue_ib(queue); rdma_destroy_id(queue->cm_id); + mutex_destroy(&queue->queue_lock); } static void nvme_rdma_free_io_queues(struct nvme_rdma_ctrl *ctrl) diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index 1ba659927442..881d28eb15e9 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -76,6 +76,7 @@ struct nvme_tcp_queue { struct work_struct io_work; int io_cpu; + struct mutex queue_lock; struct mutex send_mutex; struct llist_head req_list; struct list_head send_list; @@ -201,7 +202,7 @@ static inline size_t nvme_tcp_req_cur_offset(struct nvme_tcp_request *req) static inline size_t nvme_tcp_req_cur_length(struct nvme_tcp_request *req) { - return min_t(size_t, req->iter.bvec->bv_len - req->iter.iov_offset, + return min_t(size_t, iov_iter_single_seg_count(&req->iter), req->pdu_len - req->pdu_sent); } @@ -262,6 +263,16 @@ static inline void nvme_tcp_advance_req(struct nvme_tcp_request *req, } } +static inline void nvme_tcp_send_all(struct nvme_tcp_queue *queue) +{ + int ret; + + /* drain the send queue as much as we can... 
*/ + do { + ret = nvme_tcp_try_send(queue); + } while (ret > 0); +} + static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req, bool sync, bool last) { @@ -276,10 +287,10 @@ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req, * directly, otherwise queue io_work. Also, only do that if we * are on the same cpu, so we don't introduce contention. */ - if (queue->io_cpu == smp_processor_id() && + if (queue->io_cpu == __smp_processor_id() && sync && empty && mutex_trylock(&queue->send_mutex)) { queue->more_requests = !last; - nvme_tcp_try_send(queue); + nvme_tcp_send_all(queue); queue->more_requests = false; mutex_unlock(&queue->send_mutex); } else if (last) { @@ -1209,6 +1220,7 @@ static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid) sock_release(queue->sock); kfree(queue->pdu); + mutex_destroy(&queue->queue_lock); } static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue) @@ -1370,6 +1382,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, struct nvme_tcp_queue *queue = &ctrl->queues[qid]; int ret, rcv_pdu_size; + mutex_init(&queue->queue_lock); queue->ctrl = ctrl; init_llist_head(&queue->req_list); INIT_LIST_HEAD(&queue->send_list); @@ -1388,7 +1401,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, if (ret) { dev_err(nctrl->device, "failed to create socket: %d\n", ret); - return ret; + goto err_destroy_mutex; } /* Single syn retry */ @@ -1497,6 +1510,8 @@ err_crypto: err_sock: sock_release(queue->sock); queue->sock = NULL; +err_destroy_mutex: + mutex_destroy(&queue->queue_lock); return ret; } @@ -1524,9 +1539,10 @@ static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid) struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl); struct nvme_tcp_queue *queue = &ctrl->queues[qid]; - if (!test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags)) - return; - __nvme_tcp_stop_queue(queue); + mutex_lock(&queue->queue_lock); + if (test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags)) + __nvme_tcp_stop_queue(queue); + mutex_unlock(&queue->queue_lock); } static int nvme_tcp_start_queue(struct nvme_ctrl *nctrl, int idx) diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c index 8d90235e4fcc..dc1ea468b182 100644 --- a/drivers/nvme/target/admin-cmd.c +++ b/drivers/nvme/target/admin-cmd.c @@ -487,8 +487,10 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req) /* return an all zeroed buffer if we can't find an active namespace */ ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid); - if (!ns) + if (!ns) { + status = NVME_SC_INVALID_NS; goto done; + } nvmet_ns_revalidate(ns); @@ -541,7 +543,9 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req) id->nsattr |= (1 << 0); nvmet_put_namespace(ns); done: - status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id)); + if (!status) + status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id)); + kfree(id); out: nvmet_req_complete(req, status); diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c index 733d9363900e..68213f0a052b 100644 --- a/drivers/nvme/target/fcloop.c +++ b/drivers/nvme/target/fcloop.c @@ -1501,7 +1501,8 @@ static ssize_t fcloop_set_cmd_drop(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { - int opcode, starting, amount; + unsigned int opcode; + int starting, amount; if (sscanf(buf, "%x:%d:%d", &opcode, &starting, &amount) != 3) return -EBADRQC; @@ -1588,8 +1589,8 @@ out_destroy_class: static void __exit fcloop_exit(void) { - struct fcloop_lport *lport; - struct fcloop_nport *nport; + struct 
fcloop_lport *lport = NULL; + struct fcloop_nport *nport = NULL; struct fcloop_tport *tport; struct fcloop_rport *rport; unsigned long flags; diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c index 5c1e7cb7fe0d..06b6b742bb21 100644 --- a/drivers/nvme/target/rdma.c +++ b/drivers/nvme/target/rdma.c @@ -1220,6 +1220,14 @@ nvmet_rdma_find_get_device(struct rdma_cm_id *cm_id) } ndev->inline_data_size = nport->inline_data_size; ndev->inline_page_count = inline_page_count; + + if (nport->pi_enable && !(cm_id->device->attrs.device_cap_flags & + IB_DEVICE_INTEGRITY_HANDOVER)) { + pr_warn("T10-PI is not supported by device %s. Disabling it\n", + cm_id->device->name); + nport->pi_enable = false; + } + ndev->device = cm_id->device; kref_init(&ndev->ref); @@ -1641,6 +1649,16 @@ static void __nvmet_rdma_queue_disconnect(struct nvmet_rdma_queue *queue) spin_lock_irqsave(&queue->state_lock, flags); switch (queue->state) { case NVMET_RDMA_Q_CONNECTING: + while (!list_empty(&queue->rsp_wait_list)) { + struct nvmet_rdma_rsp *rsp; + + rsp = list_first_entry(&queue->rsp_wait_list, + struct nvmet_rdma_rsp, + wait_list); + list_del(&rsp->wait_list); + nvmet_rdma_put_rsp(rsp); + } + fallthrough; case NVMET_RDMA_Q_LIVE: queue->state = NVMET_RDMA_Q_DISCONNECTING; disconnect = true; @@ -1845,14 +1863,6 @@ static int nvmet_rdma_enable_port(struct nvmet_rdma_port *port) goto out_destroy_id; } - if (port->nport->pi_enable && - !(cm_id->device->attrs.device_cap_flags & - IB_DEVICE_INTEGRITY_HANDOVER)) { - pr_err("T10-PI is not supported for %pISpcs\n", addr); - ret = -EINVAL; - goto out_destroy_id; - } - port->cm_id = cm_id; return 0; diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c index 794a37d50853..cb2f55f450e4 100644 --- a/drivers/perf/arm_pmu.c +++ b/drivers/perf/arm_pmu.c @@ -726,11 +726,6 @@ static int armpmu_get_cpu_irq(struct arm_pmu *pmu, int cpu) return per_cpu(hw_events->irq, cpu); } -bool arm_pmu_irq_is_nmi(void) -{ - return has_nmi; -} - /* * PMU hardware loses all context when a CPU goes offline. * When a CPU is hotplugged back in, since some hardware registers are diff --git a/drivers/phy/ingenic/Makefile b/drivers/phy/ingenic/Makefile index 65d5ea00fc9d..1cb158d7233f 100644 --- a/drivers/phy/ingenic/Makefile +++ b/drivers/phy/ingenic/Makefile @@ -1,2 +1,2 @@ # SPDX-License-Identifier: GPL-2.0 -obj-y += phy-ingenic-usb.o +obj-$(CONFIG_PHY_INGENIC_USB) += phy-ingenic-usb.o diff --git a/drivers/phy/mediatek/Kconfig b/drivers/phy/mediatek/Kconfig index d38def43b1bf..55f8e6c048ab 100644 --- a/drivers/phy/mediatek/Kconfig +++ b/drivers/phy/mediatek/Kconfig @@ -49,7 +49,9 @@ config PHY_MTK_HDMI config PHY_MTK_MIPI_DSI tristate "MediaTek MIPI-DSI Driver" - depends on ARCH_MEDIATEK && OF + depends on ARCH_MEDIATEK || COMPILE_TEST + depends on COMMON_CLK + depends on OF select GENERIC_PHY help Support MIPI DSI for Mediatek SoCs. 
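[Editor's note on the nvme-rdma/nvme-tcp hunks above: both transports add a per-queue queue_lock mutex around the test_and_clear_bit(..._Q_LIVE, ...) check in their stop-queue paths. The following is a minimal, self-contained sketch of that pattern — the demo_queue/demo_stop_queue names are invented for illustration and are not the driver code. The LIVE bit alone only prevents a double teardown; the mutex additionally makes a concurrent stopper wait until the first caller has left __demo_stop_queue(), so the queue cannot be torn down or freed while a stop is still in flight.

/*
 * Sketch of the stop-queue serialization introduced above; all names
 * here are hypothetical, for illustration only.
 */
#include <linux/bitops.h>
#include <linux/mutex.h>

#define DEMO_Q_LIVE	0	/* bit number in ->flags */

struct demo_queue {
	unsigned long flags;
	struct mutex queue_lock;	/* serializes stop vs. stop/free */
};

static void __demo_stop_queue(struct demo_queue *q)
{
	/* tear down transport state; may sleep */
}

static void demo_stop_queue(struct demo_queue *q)
{
	mutex_lock(&q->queue_lock);
	/* the bit guards against double teardown ... */
	if (test_and_clear_bit(DEMO_Q_LIVE, &q->flags))
		__demo_stop_queue(q);
	/* ... the mutex guarantees no stop is in flight once we return */
	mutex_unlock(&q->queue_lock);
}

This is also why both patches pair mutex_init() in the queue-alloc path with mutex_destroy() in the free and error-unwind paths.]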
diff --git a/drivers/phy/motorola/phy-cpcap-usb.c b/drivers/phy/motorola/phy-cpcap-usb.c index 442522ba487f..4728e2bff662 100644 --- a/drivers/phy/motorola/phy-cpcap-usb.c +++ b/drivers/phy/motorola/phy-cpcap-usb.c @@ -662,35 +662,42 @@ static int cpcap_usb_phy_probe(struct platform_device *pdev) generic_phy = devm_phy_create(ddata->dev, NULL, &ops); if (IS_ERR(generic_phy)) { error = PTR_ERR(generic_phy); - return PTR_ERR(generic_phy); + goto out_reg_disable; } phy_set_drvdata(generic_phy, ddata); phy_provider = devm_of_phy_provider_register(ddata->dev, of_phy_simple_xlate); - if (IS_ERR(phy_provider)) - return PTR_ERR(phy_provider); + if (IS_ERR(phy_provider)) { + error = PTR_ERR(phy_provider); + goto out_reg_disable; + } error = cpcap_usb_init_optional_pins(ddata); if (error) - return error; + goto out_reg_disable; cpcap_usb_init_optional_gpios(ddata); error = cpcap_usb_init_iio(ddata); if (error) - return error; + goto out_reg_disable; error = cpcap_usb_init_interrupts(pdev, ddata); if (error) - return error; + goto out_reg_disable; usb_add_phy_dev(&ddata->phy); atomic_set(&ddata->active, 1); schedule_delayed_work(&ddata->detect_work, msecs_to_jiffies(1)); return 0; + +out_reg_disable: + regulator_disable(ddata->vusb); + + return error; } static int cpcap_usb_phy_remove(struct platform_device *pdev) diff --git a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c index 34803a6c7664..5c1a109842a7 100644 --- a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c +++ b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c @@ -347,7 +347,7 @@ FUNC_GROUP_DECL(RMII4, F24, E23, E24, E25, C25, C24, B26, B25, B24); #define D22 40 SIG_EXPR_LIST_DECL_SESG(D22, SD1CLK, SD1, SIG_DESC_SET(SCU414, 8)); -SIG_EXPR_LIST_DECL_SEMG(D22, PWM8, PWM8G0, PWM8, SIG_DESC_SET(SCU414, 8)); +SIG_EXPR_LIST_DECL_SEMG(D22, PWM8, PWM8G0, PWM8, SIG_DESC_SET(SCU4B4, 8)); PIN_DECL_2(D22, GPIOF0, SD1CLK, PWM8); GROUP_DECL(PWM8G0, D22); diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c index 7aeb552d16ce..72f17f26acd8 100644 --- a/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c +++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c @@ -920,6 +920,10 @@ int mtk_pinconf_adv_pull_set(struct mtk_pinctrl *hw, err = hw->soc->bias_set(hw, desc, pullup); if (err) return err; + } else if (hw->soc->bias_set_combo) { + err = hw->soc->bias_set_combo(hw, desc, pullup, arg); + if (err) + return err; } else { return -ENOTSUPP; } diff --git a/drivers/pinctrl/nomadik/pinctrl-nomadik.c b/drivers/pinctrl/nomadik/pinctrl-nomadik.c index d4ea10803fd9..abfe11c7b49f 100644 --- a/drivers/pinctrl/nomadik/pinctrl-nomadik.c +++ b/drivers/pinctrl/nomadik/pinctrl-nomadik.c @@ -949,7 +949,6 @@ static void nmk_gpio_dbg_show_one(struct seq_file *s, } else { int irq = chip->to_irq(chip, offset); const int pullidx = pull ? 
1 : 0; - bool wake; int val; static const char * const pulls[] = { "none ", diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c index 53a6a24bd052..3ea163498647 100644 --- a/drivers/pinctrl/pinctrl-ingenic.c +++ b/drivers/pinctrl/pinctrl-ingenic.c @@ -37,11 +37,11 @@ #define JZ4740_GPIO_TRIG 0x70 #define JZ4740_GPIO_FLAG 0x80 -#define JZ4760_GPIO_INT 0x10 -#define JZ4760_GPIO_PAT1 0x30 -#define JZ4760_GPIO_PAT0 0x40 -#define JZ4760_GPIO_FLAG 0x50 -#define JZ4760_GPIO_PEN 0x70 +#define JZ4770_GPIO_INT 0x10 +#define JZ4770_GPIO_PAT1 0x30 +#define JZ4770_GPIO_PAT0 0x40 +#define JZ4770_GPIO_FLAG 0x50 +#define JZ4770_GPIO_PEN 0x70 #define X1830_GPIO_PEL 0x110 #define X1830_GPIO_PEH 0x120 @@ -1688,8 +1688,8 @@ static inline bool ingenic_gpio_get_value(struct ingenic_gpio_chip *jzgc, static void ingenic_gpio_set_value(struct ingenic_gpio_chip *jzgc, u8 offset, int value) { - if (jzgc->jzpc->info->version >= ID_JZ4760) - ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_PAT0, offset, !!value); + if (jzgc->jzpc->info->version >= ID_JZ4770) + ingenic_gpio_set_bit(jzgc, JZ4770_GPIO_PAT0, offset, !!value); else ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_DATA, offset, !!value); } @@ -1718,9 +1718,9 @@ static void irq_set_type(struct ingenic_gpio_chip *jzgc, break; } - if (jzgc->jzpc->info->version >= ID_JZ4760) { - reg1 = JZ4760_GPIO_PAT1; - reg2 = JZ4760_GPIO_PAT0; + if (jzgc->jzpc->info->version >= ID_JZ4770) { + reg1 = JZ4770_GPIO_PAT1; + reg2 = JZ4770_GPIO_PAT0; } else { reg1 = JZ4740_GPIO_TRIG; reg2 = JZ4740_GPIO_DIR; @@ -1758,8 +1758,8 @@ static void ingenic_gpio_irq_enable(struct irq_data *irqd) struct ingenic_gpio_chip *jzgc = gpiochip_get_data(gc); int irq = irqd->hwirq; - if (jzgc->jzpc->info->version >= ID_JZ4760) - ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_INT, irq, true); + if (jzgc->jzpc->info->version >= ID_JZ4770) + ingenic_gpio_set_bit(jzgc, JZ4770_GPIO_INT, irq, true); else ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_SELECT, irq, true); @@ -1774,8 +1774,8 @@ static void ingenic_gpio_irq_disable(struct irq_data *irqd) ingenic_gpio_irq_mask(irqd); - if (jzgc->jzpc->info->version >= ID_JZ4760) - ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_INT, irq, false); + if (jzgc->jzpc->info->version >= ID_JZ4770) + ingenic_gpio_set_bit(jzgc, JZ4770_GPIO_INT, irq, false); else ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_SELECT, irq, false); } @@ -1799,8 +1799,8 @@ static void ingenic_gpio_irq_ack(struct irq_data *irqd) irq_set_type(jzgc, irq, IRQ_TYPE_LEVEL_HIGH); } - if (jzgc->jzpc->info->version >= ID_JZ4760) - ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_FLAG, irq, false); + if (jzgc->jzpc->info->version >= ID_JZ4770) + ingenic_gpio_set_bit(jzgc, JZ4770_GPIO_FLAG, irq, false); else ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_DATA, irq, true); } @@ -1856,8 +1856,8 @@ static void ingenic_gpio_irq_handler(struct irq_desc *desc) chained_irq_enter(irq_chip, desc); - if (jzgc->jzpc->info->version >= ID_JZ4760) - flag = ingenic_gpio_read_reg(jzgc, JZ4760_GPIO_FLAG); + if (jzgc->jzpc->info->version >= ID_JZ4770) + flag = ingenic_gpio_read_reg(jzgc, JZ4770_GPIO_FLAG); else flag = ingenic_gpio_read_reg(jzgc, JZ4740_GPIO_FLAG); @@ -1938,9 +1938,9 @@ static int ingenic_gpio_get_direction(struct gpio_chip *gc, unsigned int offset) struct ingenic_pinctrl *jzpc = jzgc->jzpc; unsigned int pin = gc->base + offset; - if (jzpc->info->version >= ID_JZ4760) { - if (ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_INT) || - ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PAT1)) + if (jzpc->info->version >= ID_JZ4770) { + if 
(ingenic_get_pin_config(jzpc, pin, JZ4770_GPIO_INT) || + ingenic_get_pin_config(jzpc, pin, JZ4770_GPIO_PAT1)) return GPIO_LINE_DIRECTION_IN; return GPIO_LINE_DIRECTION_OUT; } @@ -1991,20 +1991,20 @@ static int ingenic_pinmux_set_pin_fn(struct ingenic_pinctrl *jzpc, 'A' + offt, idx, func); if (jzpc->info->version >= ID_X1000) { - ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_INT, false); + ingenic_shadow_config_pin(jzpc, pin, JZ4770_GPIO_INT, false); ingenic_shadow_config_pin(jzpc, pin, GPIO_MSK, false); - ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, func & 0x2); - ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_PAT0, func & 0x1); + ingenic_shadow_config_pin(jzpc, pin, JZ4770_GPIO_PAT1, func & 0x2); + ingenic_shadow_config_pin(jzpc, pin, JZ4770_GPIO_PAT0, func & 0x1); ingenic_shadow_config_pin_load(jzpc, pin); - } else if (jzpc->info->version >= ID_JZ4760) { - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_INT, false); + } else if (jzpc->info->version >= ID_JZ4770) { + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_INT, false); ingenic_config_pin(jzpc, pin, GPIO_MSK, false); - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, func & 0x2); - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT0, func & 0x1); + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_PAT1, func & 0x2); + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_PAT0, func & 0x1); } else { ingenic_config_pin(jzpc, pin, JZ4740_GPIO_FUNC, true); ingenic_config_pin(jzpc, pin, JZ4740_GPIO_TRIG, func & 0x2); - ingenic_config_pin(jzpc, pin, JZ4740_GPIO_SELECT, func > 0); + ingenic_config_pin(jzpc, pin, JZ4740_GPIO_SELECT, func & 0x1); } return 0; @@ -2057,14 +2057,14 @@ static int ingenic_pinmux_gpio_set_direction(struct pinctrl_dev *pctldev, 'A' + offt, idx, input ? "in" : "out"); if (jzpc->info->version >= ID_X1000) { - ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_INT, false); + ingenic_shadow_config_pin(jzpc, pin, JZ4770_GPIO_INT, false); ingenic_shadow_config_pin(jzpc, pin, GPIO_MSK, true); - ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, input); + ingenic_shadow_config_pin(jzpc, pin, JZ4770_GPIO_PAT1, input); ingenic_shadow_config_pin_load(jzpc, pin); - } else if (jzpc->info->version >= ID_JZ4760) { - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_INT, false); + } else if (jzpc->info->version >= ID_JZ4770) { + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_INT, false); ingenic_config_pin(jzpc, pin, GPIO_MSK, true); - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, input); + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_PAT1, input); } else { ingenic_config_pin(jzpc, pin, JZ4740_GPIO_SELECT, false); ingenic_config_pin(jzpc, pin, JZ4740_GPIO_DIR, !input); @@ -2091,8 +2091,8 @@ static int ingenic_pinconf_get(struct pinctrl_dev *pctldev, unsigned int offt = pin / PINS_PER_GPIO_CHIP; bool pull; - if (jzpc->info->version >= ID_JZ4760) - pull = !ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PEN); + if (jzpc->info->version >= ID_JZ4770) + pull = !ingenic_get_pin_config(jzpc, pin, JZ4770_GPIO_PEN); else pull = !ingenic_get_pin_config(jzpc, pin, JZ4740_GPIO_PULL_DIS); @@ -2141,8 +2141,8 @@ static void ingenic_set_bias(struct ingenic_pinctrl *jzpc, REG_SET(X1830_GPIO_PEH), bias << idxh); } - } else if (jzpc->info->version >= ID_JZ4760) { - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PEN, !bias); + } else if (jzpc->info->version >= ID_JZ4770) { + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_PEN, !bias); } else { ingenic_config_pin(jzpc, pin, JZ4740_GPIO_PULL_DIS, !bias); } @@ -2151,8 +2151,8 @@ static void ingenic_set_bias(struct ingenic_pinctrl *jzpc, static void 
ingenic_set_output_level(struct ingenic_pinctrl *jzpc, unsigned int pin, bool high) { - if (jzpc->info->version >= ID_JZ4760) - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT0, high); + if (jzpc->info->version >= ID_JZ4770) + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_PAT0, high); else ingenic_config_pin(jzpc, pin, JZ4740_GPIO_DATA, high); } diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c index e051aecf95c4..d70caecd21d2 100644 --- a/drivers/pinctrl/qcom/pinctrl-msm.c +++ b/drivers/pinctrl/qcom/pinctrl-msm.c @@ -51,6 +51,7 @@ * @dual_edge_irqs: Bitmap of irqs that need sw emulated dual edge * detection. * @skip_wake_irqs: Skip IRQs that are handled by wakeup interrupt controller + * @disabled_for_mux: These IRQs were disabled because we muxed away. * @soc: Reference to soc_data of platform specific data. * @regs: Base addresses for the TLMM tiles. * @phys_base: Physical base address @@ -72,6 +73,7 @@ struct msm_pinctrl { DECLARE_BITMAP(dual_edge_irqs, MAX_NR_GPIO); DECLARE_BITMAP(enabled_irqs, MAX_NR_GPIO); DECLARE_BITMAP(skip_wake_irqs, MAX_NR_GPIO); + DECLARE_BITMAP(disabled_for_mux, MAX_NR_GPIO); const struct msm_pinctrl_soc_data *soc; void __iomem *regs[MAX_NR_TILES]; @@ -96,6 +98,14 @@ MSM_ACCESSOR(intr_cfg) MSM_ACCESSOR(intr_status) MSM_ACCESSOR(intr_target) +static void msm_ack_intr_status(struct msm_pinctrl *pctrl, + const struct msm_pingroup *g) +{ + u32 val = g->intr_ack_high ? BIT(g->intr_status_bit) : 0; + + msm_writel_intr_status(val, pctrl, g); +} + static int msm_get_groups_count(struct pinctrl_dev *pctldev) { struct msm_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev); @@ -171,6 +181,10 @@ static int msm_pinmux_set_mux(struct pinctrl_dev *pctldev, unsigned group) { struct msm_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev); + struct gpio_chip *gc = &pctrl->chip; + unsigned int irq = irq_find_mapping(gc->irq.domain, group); + struct irq_data *d = irq_get_irq_data(irq); + unsigned int gpio_func = pctrl->soc->gpio_func; const struct msm_pingroup *g; unsigned long flags; u32 val, mask; @@ -187,6 +201,20 @@ static int msm_pinmux_set_mux(struct pinctrl_dev *pctldev, if (WARN_ON(i == g->nfuncs)) return -EINVAL; + /* + * If an GPIO interrupt is setup on this pin then we need special + * handling. Specifically interrupt detection logic will still see + * the pin twiddle even when we're muxed away. + * + * When we see a pin with an interrupt setup on it then we'll disable + * (mask) interrupts on it when we mux away until we mux back. Note + * that disable_irq() refcounts and interrupts are disabled as long as + * at least one disable_irq() has been called. + */ + if (d && i != gpio_func && + !test_and_set_bit(d->hwirq, pctrl->disabled_for_mux)) + disable_irq(irq); + raw_spin_lock_irqsave(&pctrl->lock, flags); val = msm_readl_ctl(pctrl, g); @@ -196,6 +224,20 @@ static int msm_pinmux_set_mux(struct pinctrl_dev *pctldev, raw_spin_unlock_irqrestore(&pctrl->lock, flags); + if (d && i == gpio_func && + test_and_clear_bit(d->hwirq, pctrl->disabled_for_mux)) { + /* + * Clear interrupts detected while not GPIO since we only + * masked things. 
+ */ + if (d->parent_data && test_bit(d->hwirq, pctrl->skip_wake_irqs)) + irq_chip_set_parent_state(d, IRQCHIP_STATE_PENDING, false); + else + msm_ack_intr_status(pctrl, g); + + enable_irq(irq); + } + return 0; } @@ -210,8 +252,7 @@ static int msm_pinmux_request_gpio(struct pinctrl_dev *pctldev, if (!g->nfuncs) return 0; - /* For now assume function 0 is GPIO because it always is */ - return msm_pinmux_set_mux(pctldev, g->funcs[0], offset); + return msm_pinmux_set_mux(pctldev, g->funcs[pctrl->soc->gpio_func], offset); } static const struct pinmux_ops msm_pinmux_ops = { @@ -774,7 +815,7 @@ static void msm_gpio_irq_mask(struct irq_data *d) raw_spin_unlock_irqrestore(&pctrl->lock, flags); } -static void msm_gpio_irq_clear_unmask(struct irq_data *d, bool status_clear) +static void msm_gpio_irq_unmask(struct irq_data *d) { struct gpio_chip *gc = irq_data_get_irq_chip_data(d); struct msm_pinctrl *pctrl = gpiochip_get_data(gc); @@ -792,17 +833,6 @@ static void msm_gpio_irq_clear_unmask(struct irq_data *d, bool status_clear) raw_spin_lock_irqsave(&pctrl->lock, flags); - if (status_clear) { - /* - * clear the interrupt status bit before unmask to avoid - * any erroneous interrupts that would have got latched - * when the interrupt is not in use. - */ - val = msm_readl_intr_status(pctrl, g); - val &= ~BIT(g->intr_status_bit); - msm_writel_intr_status(val, pctrl, g); - } - val = msm_readl_intr_cfg(pctrl, g); val |= BIT(g->intr_raw_status_bit); val |= BIT(g->intr_enable_bit); @@ -822,7 +852,7 @@ static void msm_gpio_irq_enable(struct irq_data *d) irq_chip_enable_parent(d); if (!test_bit(d->hwirq, pctrl->skip_wake_irqs)) - msm_gpio_irq_clear_unmask(d, true); + msm_gpio_irq_unmask(d); } static void msm_gpio_irq_disable(struct irq_data *d) @@ -837,11 +867,6 @@ static void msm_gpio_irq_disable(struct irq_data *d) msm_gpio_irq_mask(d); } -static void msm_gpio_irq_unmask(struct irq_data *d) -{ - msm_gpio_irq_clear_unmask(d, false); -} - /** * msm_gpio_update_dual_edge_parent() - Prime next edge for IRQs handled by parent. * @d: The irq dta. @@ -894,7 +919,6 @@ static void msm_gpio_irq_ack(struct irq_data *d) struct msm_pinctrl *pctrl = gpiochip_get_data(gc); const struct msm_pingroup *g; unsigned long flags; - u32 val; if (test_bit(d->hwirq, pctrl->skip_wake_irqs)) { if (test_bit(d->hwirq, pctrl->dual_edge_irqs)) @@ -906,12 +930,7 @@ static void msm_gpio_irq_ack(struct irq_data *d) raw_spin_lock_irqsave(&pctrl->lock, flags); - val = msm_readl_intr_status(pctrl, g); - if (g->intr_ack_high) - val |= BIT(g->intr_status_bit); - else - val &= ~BIT(g->intr_status_bit); - msm_writel_intr_status(val, pctrl, g); + msm_ack_intr_status(pctrl, g); if (test_bit(d->hwirq, pctrl->dual_edge_irqs)) msm_gpio_update_dual_edge_pos(pctrl, g, d); @@ -936,6 +955,7 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type) struct msm_pinctrl *pctrl = gpiochip_get_data(gc); const struct msm_pingroup *g; unsigned long flags; + bool was_enabled; u32 val; if (msm_gpio_needs_dual_edge_parent_workaround(d, type)) { @@ -997,6 +1017,7 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type) * could cause the INTR_STATUS to be set for EDGE interrupts. 
*/ val = msm_readl_intr_cfg(pctrl, g); + was_enabled = val & BIT(g->intr_raw_status_bit); val |= BIT(g->intr_raw_status_bit); if (g->intr_detection_width == 2) { val &= ~(3 << g->intr_detection_bit); @@ -1046,6 +1067,14 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type) } msm_writel_intr_cfg(val, pctrl, g); + /* + * The first time we set RAW_STATUS_EN it could trigger an interrupt. + * Clear the interrupt. This is safe because we have + * IRQCHIP_SET_TYPE_MASKED. + */ + if (!was_enabled) + msm_ack_intr_status(pctrl, g); + if (test_bit(d->hwirq, pctrl->dual_edge_irqs)) msm_gpio_update_dual_edge_pos(pctrl, g, d); @@ -1099,16 +1128,11 @@ static int msm_gpio_irq_reqres(struct irq_data *d) } /* - * Clear the interrupt that may be pending before we enable - * the line. - * This is especially a problem with the GPIOs routed to the - * PDC. These GPIOs are direct-connect interrupts to the GIC. - * Disabling the interrupt line at the PDC does not prevent - * the interrupt from being latched at the GIC. The state at - * GIC needs to be cleared before enabling. + * The disable / clear-enable workaround we do in msm_pinmux_set_mux() + * only works if disable is not lazy since we only clear any bogus + * interrupt in hardware. Explicitly mark the interrupt as UNLAZY. */ - if (d->parent_data && test_bit(d->hwirq, pctrl->skip_wake_irqs)) - irq_chip_set_parent_state(d, IRQCHIP_STATE_PENDING, 0); + irq_set_status_flags(d->irq, IRQ_DISABLE_UNLAZY); return 0; out: diff --git a/drivers/pinctrl/qcom/pinctrl-msm.h b/drivers/pinctrl/qcom/pinctrl-msm.h index 333f99243c43..e31a5167c91e 100644 --- a/drivers/pinctrl/qcom/pinctrl-msm.h +++ b/drivers/pinctrl/qcom/pinctrl-msm.h @@ -118,6 +118,7 @@ struct msm_gpio_wakeirq_map { * @wakeirq_dual_edge_errata: If true then GPIOs using the wakeirq_map need * to be aware that their parent can't handle dual * edge interrupts. + * @gpio_func: Which function number is GPIO (usually 0). */ struct msm_pinctrl_soc_data { const struct pinctrl_pin_desc *pins; @@ -134,6 +135,7 @@ struct msm_pinctrl_soc_data { const struct msm_gpio_wakeirq_map *wakeirq_map; unsigned int nwakeirq_map; bool wakeirq_dual_edge_errata; + unsigned int gpio_func; }; extern const struct dev_pm_ops msm_pinctrl_dev_pm_ops; diff --git a/drivers/platform/surface/Kconfig b/drivers/platform/surface/Kconfig index 33040b0b3b79..2c941cdac9ee 100644 --- a/drivers/platform/surface/Kconfig +++ b/drivers/platform/surface/Kconfig @@ -5,6 +5,7 @@ menuconfig SURFACE_PLATFORMS bool "Microsoft Surface Platform-Specific Device Drivers" + depends on ACPI default y help Say Y here to get to see options for platform-specific device drivers @@ -29,20 +30,19 @@ config SURFACE3_WMI config SURFACE_3_BUTTON tristate "Power/home/volume buttons driver for Microsoft Surface 3 tablet" - depends on ACPI && KEYBOARD_GPIO && I2C + depends on KEYBOARD_GPIO && I2C help This driver handles the power/home/volume buttons on the Microsoft Surface 3 tablet. config SURFACE_3_POWER_OPREGION tristate "Surface 3 battery platform operation region support" - depends on ACPI && I2C + depends on I2C help This driver provides support for ACPI operation region of the Surface 3 battery platform driver. 
config SURFACE_GPE tristate "Surface GPE/Lid Support Driver" - depends on ACPI depends on DMI help This driver marks the GPEs related to the ACPI lid device found on @@ -52,7 +52,7 @@ config SURFACE_GPE config SURFACE_PRO3_BUTTON tristate "Power/home/volume buttons driver for Microsoft Surface Pro 3/4 tablet" - depends on ACPI && INPUT + depends on INPUT help This driver handles the power/home/volume buttons on the Microsoft Surface Pro 3/4 tablet. diff --git a/drivers/platform/surface/surface_gpe.c b/drivers/platform/surface/surface_gpe.c index e49e5d6d5d4e..86f6991b1215 100644 --- a/drivers/platform/surface/surface_gpe.c +++ b/drivers/platform/surface/surface_gpe.c @@ -181,12 +181,12 @@ static int surface_lid_enable_wakeup(struct device *dev, bool enable) return 0; } -static int surface_gpe_suspend(struct device *dev) +static int __maybe_unused surface_gpe_suspend(struct device *dev) { return surface_lid_enable_wakeup(dev, true); } -static int surface_gpe_resume(struct device *dev) +static int __maybe_unused surface_gpe_resume(struct device *dev) { return surface_lid_enable_wakeup(dev, false); } diff --git a/drivers/platform/x86/amd-pmc.c b/drivers/platform/x86/amd-pmc.c index 0102bf1c7916..ef8342572463 100644 --- a/drivers/platform/x86/amd-pmc.c +++ b/drivers/platform/x86/amd-pmc.c @@ -85,7 +85,7 @@ static inline void amd_pmc_reg_write(struct amd_pmc_dev *dev, int reg_offset, u3 iowrite32(val, dev->regbase + reg_offset); } -#if CONFIG_DEBUG_FS +#ifdef CONFIG_DEBUG_FS static int smu_fw_info_show(struct seq_file *s, void *unused) { struct amd_pmc_dev *dev = s->private; diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c index ecd477964d11..18bf8aeb5f87 100644 --- a/drivers/platform/x86/hp-wmi.c +++ b/drivers/platform/x86/hp-wmi.c @@ -247,7 +247,8 @@ static int hp_wmi_perform_query(int query, enum hp_wmi_command command, ret = bios_return->return_code; if (ret) { - if (ret != HPWMI_RET_UNKNOWN_CMDTYPE) + if (ret != HPWMI_RET_UNKNOWN_COMMAND && + ret != HPWMI_RET_UNKNOWN_CMDTYPE) pr_warn("query 0x%x returned error 0x%x\n", query, ret); goto out_free; } diff --git a/drivers/platform/x86/i2c-multi-instantiate.c b/drivers/platform/x86/i2c-multi-instantiate.c index b457b0babde3..2cce82579d09 100644 --- a/drivers/platform/x86/i2c-multi-instantiate.c +++ b/drivers/platform/x86/i2c-multi-instantiate.c @@ -164,13 +164,29 @@ static const struct i2c_inst_data bsg2150_data[] = { {} }; -static const struct i2c_inst_data int3515_data[] = { - { "tps6598x", IRQ_RESOURCE_APIC, 0 }, - { "tps6598x", IRQ_RESOURCE_APIC, 1 }, - { "tps6598x", IRQ_RESOURCE_APIC, 2 }, - { "tps6598x", IRQ_RESOURCE_APIC, 3 }, - {} -}; +/* + * Device with _HID INT3515 (TI PD controllers) has some unresolved interrupt + * issues. The most common problem seen is interrupt flood. + * + * There are at least two known causes. Firstly, on some boards, the + * I2CSerialBus resource index does not match the Interrupt resource, i.e. they + * are not one-to-one mapped like in the array below. Secondly, on some boards + * the IRQ line from the PD controller is not actually connected at all. But the + * interrupt flood is also seen on some boards where those are not a problem, so + * there are some other problems as well. + * + * Because of the issues with the interrupt, the device is disabled for now. If + * you wish to debug the issues, uncomment the below, and add an entry for the + * INT3515 device to the i2c_multi_instance_ids table. 
+ * + * static const struct i2c_inst_data int3515_data[] = { + * { "tps6598x", IRQ_RESOURCE_APIC, 0 }, + * { "tps6598x", IRQ_RESOURCE_APIC, 1 }, + * { "tps6598x", IRQ_RESOURCE_APIC, 2 }, + * { "tps6598x", IRQ_RESOURCE_APIC, 3 }, + * { } + * }; + */ /* * Note new device-ids must also be added to i2c_multi_instantiate_ids in @@ -179,7 +195,6 @@ static const struct i2c_inst_data int3515_data[] = { static const struct acpi_device_id i2c_multi_inst_acpi_ids[] = { { "BSG1160", (unsigned long)bsg1160_data }, { "BSG2150", (unsigned long)bsg2150_data }, - { "INT3515", (unsigned long)int3515_data }, { } }; MODULE_DEVICE_TABLE(acpi, i2c_multi_inst_acpi_ids); diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c index 7598cd46cf60..5b81bafa5c16 100644 --- a/drivers/platform/x86/ideapad-laptop.c +++ b/drivers/platform/x86/ideapad-laptop.c @@ -92,6 +92,7 @@ struct ideapad_private { struct dentry *debug; unsigned long cfg; bool has_hw_rfkill_switch; + bool has_touchpad_switch; const char *fnesc_guid; }; @@ -535,7 +536,9 @@ static umode_t ideapad_is_visible(struct kobject *kobj, } else if (attr == &dev_attr_fn_lock.attr) { supported = acpi_has_method(priv->adev->handle, "HALS") && acpi_has_method(priv->adev->handle, "SALS"); - } else + } else if (attr == &dev_attr_touchpad.attr) + supported = priv->has_touchpad_switch; + else supported = true; return supported ? attr->mode : 0; @@ -867,6 +870,9 @@ static void ideapad_sync_touchpad_state(struct ideapad_private *priv) { unsigned long value; + if (!priv->has_touchpad_switch) + return; + /* Without reading from EC touchpad LED doesn't switch state */ if (!read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &value)) { /* Some IdeaPads don't really turn off touchpad - they only @@ -989,6 +995,9 @@ static int ideapad_acpi_add(struct platform_device *pdev) priv->platform_device = pdev; priv->has_hw_rfkill_switch = dmi_check_system(hw_rfkill_list); + /* Most ideapads with ELAN0634 touchpad don't use EC touchpad switch */ + priv->has_touchpad_switch = !acpi_dev_present("ELAN0634", NULL, -1); + ret = ideapad_sysfs_init(priv); if (ret) return ret; @@ -1006,6 +1015,10 @@ static int ideapad_acpi_add(struct platform_device *pdev) if (!priv->has_hw_rfkill_switch) write_ec_cmd(priv->adev->handle, VPCCMD_W_RF, 1); + /* The same for Touchpad */ + if (!priv->has_touchpad_switch) + write_ec_cmd(priv->adev->handle, VPCCMD_W_TOUCHPAD, 1); + for (i = 0; i < IDEAPAD_RFKILL_DEV_NUM; i++) if (test_bit(ideapad_rfk_data[i].cfgbit, &priv->cfg)) ideapad_register_rfkill(priv, i); diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c index 3b49a1f4061b..30a9062d2b4b 100644 --- a/drivers/platform/x86/intel-vbtn.c +++ b/drivers/platform/x86/intel-vbtn.c @@ -207,19 +207,19 @@ static const struct dmi_system_id dmi_switches_allow_list[] = { { .matches = { DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), - DMI_MATCH(DMI_PRODUCT_NAME, "HP Stream x360 Convertible PC 11"), + DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion 13 x360 PC"), }, }, { .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), - DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion 13 x360 PC"), + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), + DMI_MATCH(DMI_PRODUCT_NAME, "Switch SA5-271"), }, }, { .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "Acer"), - DMI_MATCH(DMI_PRODUCT_NAME, "Switch SA5-271"), + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), + DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7352"), }, }, {} /* Array terminator */ diff --git a/drivers/platform/x86/thinkpad_acpi.c 
b/drivers/platform/x86/thinkpad_acpi.c index e03df2881dc6..f3e8eca8d86d 100644 --- a/drivers/platform/x86/thinkpad_acpi.c +++ b/drivers/platform/x86/thinkpad_acpi.c @@ -8783,6 +8783,7 @@ static const struct tpacpi_quirk fan_quirk_table[] __initconst = { TPACPI_Q_LNV3('N', '1', 'T', TPACPI_FAN_2CTL), /* P71 */ TPACPI_Q_LNV3('N', '1', 'U', TPACPI_FAN_2CTL), /* P51 */ TPACPI_Q_LNV3('N', '2', 'C', TPACPI_FAN_2CTL), /* P52 / P72 */ + TPACPI_Q_LNV3('N', '2', 'N', TPACPI_FAN_2CTL), /* P53 / P73 */ TPACPI_Q_LNV3('N', '2', 'E', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (1st gen) */ TPACPI_Q_LNV3('N', '2', 'O', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (2nd gen) */ TPACPI_Q_LNV3('N', '2', 'V', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (3nd gen) */ @@ -9951,9 +9952,9 @@ static int tpacpi_proxsensor_init(struct ibm_init_struct *iibm) if ((palm_err == -ENODEV) && (lap_err == -ENODEV)) return 0; /* Otherwise, if there was an error return it */ - if (palm_err && (palm_err != ENODEV)) + if (palm_err && (palm_err != -ENODEV)) return palm_err; - if (lap_err && (lap_err != ENODEV)) + if (lap_err && (lap_err != -ENODEV)) return lap_err; if (has_palmsensor) { diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c index 5783139d0a11..c4de932302d6 100644 --- a/drivers/platform/x86/touchscreen_dmi.c +++ b/drivers/platform/x86/touchscreen_dmi.c @@ -263,6 +263,16 @@ static const struct ts_dmi_data digma_citi_e200_data = { .properties = digma_citi_e200_props, }; +static const struct property_entry estar_beauty_hd_props[] = { + PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), + { } +}; + +static const struct ts_dmi_data estar_beauty_hd_data = { + .acpi_name = "GDIX1001:00", + .properties = estar_beauty_hd_props, +}; + static const struct property_entry gp_electronic_t701_props[] = { PROPERTY_ENTRY_U32("touchscreen-size-x", 960), PROPERTY_ENTRY_U32("touchscreen-size-y", 640), @@ -943,6 +953,14 @@ const struct dmi_system_id touchscreen_dmi_table[] = { }, }, { + /* Estar Beauty HD (MID 7316R) */ + .driver_data = (void *)&estar_beauty_hd_data, + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Estar"), + DMI_MATCH(DMI_PRODUCT_NAME, "eSTAR BEAUTY HD Intel Quad core"), + }, + }, + { /* GP-electronic T701 */ .driver_data = (void *)&gp_electronic_t701_data, .matches = { diff --git a/drivers/ptp/Kconfig b/drivers/ptp/Kconfig index 476d7c7fe70a..f2edef0df40f 100644 --- a/drivers/ptp/Kconfig +++ b/drivers/ptp/Kconfig @@ -64,6 +64,7 @@ config DP83640_PHY depends on NETWORK_PHY_TIMESTAMPING depends on PHYLIB depends on PTP_1588_CLOCK + select CRC32 help Supports the DP83640 PHYTER with IEEE 1588 features. @@ -78,6 +79,7 @@ config DP83640_PHY config PTP_1588_CLOCK_INES tristate "ZHAW InES PTP time stamping IP core" depends on NETWORK_PHY_TIMESTAMPING + depends on HAS_IOMEM depends on PHYLIB depends on PTP_1588_CLOCK help diff --git a/drivers/regulator/Kconfig b/drivers/regulator/Kconfig index 53fa84f4d1e1..5abdd29fb9f3 100644 --- a/drivers/regulator/Kconfig +++ b/drivers/regulator/Kconfig @@ -881,6 +881,7 @@ config REGULATOR_QCOM_RPM config REGULATOR_QCOM_RPMH tristate "Qualcomm Technologies, Inc. RPMh regulator driver" depends on QCOM_RPMH || (QCOM_RPMH=n && COMPILE_TEST) + depends on QCOM_COMMAND_DB || (QCOM_COMMAND_DB=n && COMPILE_TEST) help This driver supports control of PMIC regulators via the RPMh hardware block found on Qualcomm Technologies Inc. SoCs. 
RPMh regulator diff --git a/drivers/regulator/bd718x7-regulator.c b/drivers/regulator/bd718x7-regulator.c index e6d5d98c3cea..9309765d0450 100644 --- a/drivers/regulator/bd718x7-regulator.c +++ b/drivers/regulator/bd718x7-regulator.c @@ -15,6 +15,36 @@ #include <linux/regulator/of_regulator.h> #include <linux/slab.h> +/* Typical regulator startup times as per data sheet in uS */ +#define BD71847_BUCK1_STARTUP_TIME 144 +#define BD71847_BUCK2_STARTUP_TIME 162 +#define BD71847_BUCK3_STARTUP_TIME 162 +#define BD71847_BUCK4_STARTUP_TIME 240 +#define BD71847_BUCK5_STARTUP_TIME 270 +#define BD71847_BUCK6_STARTUP_TIME 200 +#define BD71847_LDO1_STARTUP_TIME 440 +#define BD71847_LDO2_STARTUP_TIME 370 +#define BD71847_LDO3_STARTUP_TIME 310 +#define BD71847_LDO4_STARTUP_TIME 400 +#define BD71847_LDO5_STARTUP_TIME 530 +#define BD71847_LDO6_STARTUP_TIME 400 + +#define BD71837_BUCK1_STARTUP_TIME 160 +#define BD71837_BUCK2_STARTUP_TIME 180 +#define BD71837_BUCK3_STARTUP_TIME 180 +#define BD71837_BUCK4_STARTUP_TIME 180 +#define BD71837_BUCK5_STARTUP_TIME 160 +#define BD71837_BUCK6_STARTUP_TIME 240 +#define BD71837_BUCK7_STARTUP_TIME 220 +#define BD71837_BUCK8_STARTUP_TIME 200 +#define BD71837_LDO1_STARTUP_TIME 440 +#define BD71837_LDO2_STARTUP_TIME 370 +#define BD71837_LDO3_STARTUP_TIME 310 +#define BD71837_LDO4_STARTUP_TIME 400 +#define BD71837_LDO5_STARTUP_TIME 310 +#define BD71837_LDO6_STARTUP_TIME 400 +#define BD71837_LDO7_STARTUP_TIME 530 + /* * BD718(37/47/50) have two "enable control modes". ON/OFF can either be * controlled by software - or by PMIC internal HW state machine. Whether @@ -613,6 +643,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = { .vsel_mask = DVS_BUCK_RUN_MASK, .enable_reg = BD718XX_REG_BUCK1_CTRL, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71847_BUCK1_STARTUP_TIME, .owner = THIS_MODULE, .of_parse_cb = buck_set_hw_dvs_levels, }, @@ -646,6 +677,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = { .vsel_mask = DVS_BUCK_RUN_MASK, .enable_reg = BD718XX_REG_BUCK2_CTRL, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71847_BUCK2_STARTUP_TIME, .owner = THIS_MODULE, .of_parse_cb = buck_set_hw_dvs_levels, }, @@ -680,6 +712,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = { .linear_range_selectors = bd71847_buck3_volt_range_sel, .enable_reg = BD718XX_REG_1ST_NODVS_BUCK_CTRL, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71847_BUCK3_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -706,6 +739,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = { .vsel_range_mask = BD71847_BUCK4_RANGE_MASK, .linear_range_selectors = bd71847_buck4_volt_range_sel, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71847_BUCK4_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -727,6 +761,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = { .vsel_mask = BD718XX_3RD_NODVS_BUCK_MASK, .enable_reg = BD718XX_REG_3RD_NODVS_BUCK_CTRL, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71847_BUCK5_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -750,6 +785,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = { .vsel_mask = BD718XX_4TH_NODVS_BUCK_MASK, .enable_reg = BD718XX_REG_4TH_NODVS_BUCK_CTRL, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71847_BUCK6_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -775,6 +811,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = { .linear_range_selectors = bd718xx_ldo1_volt_range_sel, .enable_reg = BD718XX_REG_LDO1_VOLT, .enable_mask = BD718XX_LDO_EN, + 
.enable_time = BD71847_LDO1_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -796,6 +833,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = { .n_voltages = ARRAY_SIZE(ldo_2_volts), .enable_reg = BD718XX_REG_LDO2_VOLT, .enable_mask = BD718XX_LDO_EN, + .enable_time = BD71847_LDO2_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -818,6 +856,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = { .vsel_mask = BD718XX_LDO3_MASK, .enable_reg = BD718XX_REG_LDO3_VOLT, .enable_mask = BD718XX_LDO_EN, + .enable_time = BD71847_LDO3_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -840,6 +879,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = { .vsel_mask = BD718XX_LDO4_MASK, .enable_reg = BD718XX_REG_LDO4_VOLT, .enable_mask = BD718XX_LDO_EN, + .enable_time = BD71847_LDO4_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -865,6 +905,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = { .linear_range_selectors = bd71847_ldo5_volt_range_sel, .enable_reg = BD718XX_REG_LDO5_VOLT, .enable_mask = BD718XX_LDO_EN, + .enable_time = BD71847_LDO5_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -889,6 +930,7 @@ static struct bd718xx_regulator_data bd71847_regulators[] = { .vsel_mask = BD718XX_LDO6_MASK, .enable_reg = BD718XX_REG_LDO6_VOLT, .enable_mask = BD718XX_LDO_EN, + .enable_time = BD71847_LDO6_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -942,6 +984,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .vsel_mask = DVS_BUCK_RUN_MASK, .enable_reg = BD718XX_REG_BUCK1_CTRL, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71837_BUCK1_STARTUP_TIME, .owner = THIS_MODULE, .of_parse_cb = buck_set_hw_dvs_levels, }, @@ -975,6 +1018,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .vsel_mask = DVS_BUCK_RUN_MASK, .enable_reg = BD718XX_REG_BUCK2_CTRL, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71837_BUCK2_STARTUP_TIME, .owner = THIS_MODULE, .of_parse_cb = buck_set_hw_dvs_levels, }, @@ -1005,6 +1049,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .vsel_mask = DVS_BUCK_RUN_MASK, .enable_reg = BD71837_REG_BUCK3_CTRL, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71837_BUCK3_STARTUP_TIME, .owner = THIS_MODULE, .of_parse_cb = buck_set_hw_dvs_levels, }, @@ -1033,6 +1078,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .vsel_mask = DVS_BUCK_RUN_MASK, .enable_reg = BD71837_REG_BUCK4_CTRL, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71837_BUCK4_STARTUP_TIME, .owner = THIS_MODULE, .of_parse_cb = buck_set_hw_dvs_levels, }, @@ -1065,6 +1111,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .linear_range_selectors = bd71837_buck5_volt_range_sel, .enable_reg = BD718XX_REG_1ST_NODVS_BUCK_CTRL, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71837_BUCK5_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -1088,6 +1135,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .vsel_mask = BD71837_BUCK6_MASK, .enable_reg = BD718XX_REG_2ND_NODVS_BUCK_CTRL, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71837_BUCK6_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -1109,6 +1157,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .vsel_mask = BD718XX_3RD_NODVS_BUCK_MASK, .enable_reg = BD718XX_REG_3RD_NODVS_BUCK_CTRL, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71837_BUCK7_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -1132,6 +1181,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .vsel_mask = 
BD718XX_4TH_NODVS_BUCK_MASK, .enable_reg = BD718XX_REG_4TH_NODVS_BUCK_CTRL, .enable_mask = BD718XX_BUCK_EN, + .enable_time = BD71837_BUCK8_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -1157,6 +1207,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .linear_range_selectors = bd718xx_ldo1_volt_range_sel, .enable_reg = BD718XX_REG_LDO1_VOLT, .enable_mask = BD718XX_LDO_EN, + .enable_time = BD71837_LDO1_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -1178,6 +1229,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .n_voltages = ARRAY_SIZE(ldo_2_volts), .enable_reg = BD718XX_REG_LDO2_VOLT, .enable_mask = BD718XX_LDO_EN, + .enable_time = BD71837_LDO2_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -1200,6 +1252,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .vsel_mask = BD718XX_LDO3_MASK, .enable_reg = BD718XX_REG_LDO3_VOLT, .enable_mask = BD718XX_LDO_EN, + .enable_time = BD71837_LDO3_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -1222,6 +1275,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .vsel_mask = BD718XX_LDO4_MASK, .enable_reg = BD718XX_REG_LDO4_VOLT, .enable_mask = BD718XX_LDO_EN, + .enable_time = BD71837_LDO4_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -1246,6 +1300,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .vsel_mask = BD71837_LDO5_MASK, .enable_reg = BD718XX_REG_LDO5_VOLT, .enable_mask = BD718XX_LDO_EN, + .enable_time = BD71837_LDO5_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -1272,6 +1327,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .vsel_mask = BD718XX_LDO6_MASK, .enable_reg = BD718XX_REG_LDO6_VOLT, .enable_mask = BD718XX_LDO_EN, + .enable_time = BD71837_LDO6_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { @@ -1296,6 +1352,7 @@ static struct bd718xx_regulator_data bd71837_regulators[] = { .vsel_mask = BD71837_LDO7_MASK, .enable_reg = BD71837_REG_LDO7_VOLT, .enable_mask = BD718XX_LDO_EN, + .enable_time = BD71837_LDO7_STARTUP_TIME, .owner = THIS_MODULE, }, .init = { diff --git a/drivers/regulator/pf8x00-regulator.c b/drivers/regulator/pf8x00-regulator.c index 308c27fa6ea8..af9918cd27aa 100644 --- a/drivers/regulator/pf8x00-regulator.c +++ b/drivers/regulator/pf8x00-regulator.c @@ -469,13 +469,17 @@ static int pf8x00_i2c_probe(struct i2c_client *client) } static const struct of_device_id pf8x00_dt_ids[] = { - { .compatible = "nxp,pf8x00",}, + { .compatible = "nxp,pf8100",}, + { .compatible = "nxp,pf8121a",}, + { .compatible = "nxp,pf8200",}, { } }; MODULE_DEVICE_TABLE(of, pf8x00_dt_ids); static const struct i2c_device_id pf8x00_i2c_id[] = { - { "pf8x00", 0 }, + { "pf8100", 0 }, + { "pf8121a", 0 }, + { "pf8200", 0 }, {}, }; MODULE_DEVICE_TABLE(i2c, pf8x00_i2c_id); diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c index fe030ec4b7db..c395a8dda6f7 100644 --- a/drivers/regulator/qcom-rpmh-regulator.c +++ b/drivers/regulator/qcom-rpmh-regulator.c @@ -726,7 +726,7 @@ static const struct rpmh_vreg_hw_data pmic5_ftsmps510 = { static const struct rpmh_vreg_hw_data pmic5_hfsmps515 = { .regulator_type = VRM, .ops = &rpmh_regulator_vrm_ops, - .voltage_range = REGULATOR_LINEAR_RANGE(2800000, 0, 4, 1600), + .voltage_range = REGULATOR_LINEAR_RANGE(2800000, 0, 4, 16000), .n_voltages = 5, .pmic_mode_map = pmic_mode_map_pmic5_smps, .of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode, diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h index 6f5ddc3eab8c..28f637042d44 100644 --- 
a/drivers/s390/net/qeth_core.h +++ b/drivers/s390/net/qeth_core.h @@ -1079,7 +1079,8 @@ struct qeth_card *qeth_get_card_by_busid(char *bus_id); void qeth_set_allowed_threads(struct qeth_card *card, unsigned long threads, int clear_start_mask); int qeth_threads_running(struct qeth_card *, unsigned long); -int qeth_set_offline(struct qeth_card *card, bool resetting); +int qeth_set_offline(struct qeth_card *card, const struct qeth_discipline *disc, + bool resetting); int qeth_send_ipa_cmd(struct qeth_card *, struct qeth_cmd_buffer *, int (*reply_cb) diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c index f4b60294a969..cf18d87da41e 100644 --- a/drivers/s390/net/qeth_core_main.c +++ b/drivers/s390/net/qeth_core_main.c @@ -5507,12 +5507,12 @@ out: return rc; } -static int qeth_set_online(struct qeth_card *card) +static int qeth_set_online(struct qeth_card *card, + const struct qeth_discipline *disc) { bool carrier_ok; int rc; - mutex_lock(&card->discipline_mutex); mutex_lock(&card->conf_mutex); QETH_CARD_TEXT(card, 2, "setonlin"); @@ -5529,7 +5529,7 @@ static int qeth_set_online(struct qeth_card *card) /* no need for locking / error handling at this early stage: */ qeth_set_real_num_tx_queues(card, qeth_tx_actual_queues(card)); - rc = card->discipline->set_online(card, carrier_ok); + rc = disc->set_online(card, carrier_ok); if (rc) goto err_online; @@ -5537,7 +5537,6 @@ static int qeth_set_online(struct qeth_card *card) kobject_uevent(&card->gdev->dev.kobj, KOBJ_CHANGE); mutex_unlock(&card->conf_mutex); - mutex_unlock(&card->discipline_mutex); return 0; err_online: @@ -5552,15 +5551,14 @@ err_hardsetup: qdio_free(CARD_DDEV(card)); mutex_unlock(&card->conf_mutex); - mutex_unlock(&card->discipline_mutex); return rc; } -int qeth_set_offline(struct qeth_card *card, bool resetting) +int qeth_set_offline(struct qeth_card *card, const struct qeth_discipline *disc, + bool resetting) { int rc, rc2, rc3; - mutex_lock(&card->discipline_mutex); mutex_lock(&card->conf_mutex); QETH_CARD_TEXT(card, 3, "setoffl"); @@ -5581,7 +5579,7 @@ int qeth_set_offline(struct qeth_card *card, bool resetting) cancel_work_sync(&card->rx_mode_work); - card->discipline->set_offline(card); + disc->set_offline(card); qeth_qdio_clear_card(card, 0); qeth_drain_output_queues(card); @@ -5602,16 +5600,19 @@ int qeth_set_offline(struct qeth_card *card, bool resetting) kobject_uevent(&card->gdev->dev.kobj, KOBJ_CHANGE); mutex_unlock(&card->conf_mutex); - mutex_unlock(&card->discipline_mutex); return 0; } EXPORT_SYMBOL_GPL(qeth_set_offline); static int qeth_do_reset(void *data) { + const struct qeth_discipline *disc; struct qeth_card *card = data; int rc; + /* Lock-free, other users will block until we are done. */ + disc = card->discipline; + QETH_CARD_TEXT(card, 2, "recover1"); if (!qeth_do_run_thread(card, QETH_RECOVER_THREAD)) return 0; @@ -5619,8 +5620,8 @@ static int qeth_do_reset(void *data) dev_warn(&card->gdev->dev, "A recovery process has been started for the device\n"); - qeth_set_offline(card, true); - rc = qeth_set_online(card); + qeth_set_offline(card, disc, true); + rc = qeth_set_online(card, disc); if (!rc) { dev_info(&card->gdev->dev, "Device successfully recovered!\n"); @@ -6584,6 +6585,7 @@ static int qeth_core_probe_device(struct ccwgroup_device *gdev) break; default: card->info.layer_enforced = true; + /* It's so early that we don't need the discipline_mutex yet. 
*/ rc = qeth_core_load_discipline(card, enforced_disc); if (rc) goto err_load; @@ -6616,10 +6618,12 @@ static void qeth_core_remove_device(struct ccwgroup_device *gdev) QETH_CARD_TEXT(card, 2, "removedv"); + mutex_lock(&card->discipline_mutex); if (card->discipline) { card->discipline->remove(gdev); qeth_core_free_discipline(card); } + mutex_unlock(&card->discipline_mutex); qeth_free_qdio_queues(card); @@ -6634,6 +6638,7 @@ static int qeth_core_set_online(struct ccwgroup_device *gdev) int rc = 0; enum qeth_discipline_id def_discipline; + mutex_lock(&card->discipline_mutex); if (!card->discipline) { def_discipline = IS_IQD(card) ? QETH_DISCIPLINE_LAYER3 : QETH_DISCIPLINE_LAYER2; @@ -6647,16 +6652,23 @@ static int qeth_core_set_online(struct ccwgroup_device *gdev) } } - rc = qeth_set_online(card); + rc = qeth_set_online(card, card->discipline); + err: + mutex_unlock(&card->discipline_mutex); return rc; } static int qeth_core_set_offline(struct ccwgroup_device *gdev) { struct qeth_card *card = dev_get_drvdata(&gdev->dev); + int rc; - return qeth_set_offline(card, false); + mutex_lock(&card->discipline_mutex); + rc = qeth_set_offline(card, card->discipline, false); + mutex_unlock(&card->discipline_mutex); + + return rc; } static void qeth_core_shutdown(struct ccwgroup_device *gdev) diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c index 4ed0fb0705a5..4254caf1d9b6 100644 --- a/drivers/s390/net/qeth_l2_main.c +++ b/drivers/s390/net/qeth_l2_main.c @@ -2208,7 +2208,7 @@ static void qeth_l2_remove_device(struct ccwgroup_device *gdev) wait_event(card->wait_q, qeth_threads_running(card, 0xffffffff) == 0); if (gdev->state == CCWGROUP_ONLINE) - qeth_set_offline(card, false); + qeth_set_offline(card, card->discipline, false); cancel_work_sync(&card->close_dev_work); if (card->dev->reg_state == NETREG_REGISTERED) diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c index d138ac432d01..4c2cae7ae9a7 100644 --- a/drivers/s390/net/qeth_l3_main.c +++ b/drivers/s390/net/qeth_l3_main.c @@ -1813,7 +1813,7 @@ static netdev_features_t qeth_l3_osa_features_check(struct sk_buff *skb, struct net_device *dev, netdev_features_t features) { - if (qeth_get_ip_version(skb) != 4) + if (vlan_get_protocol(skb) != htons(ETH_P_IP)) features &= ~NETIF_F_HW_VLAN_CTAG_TX; return qeth_features_check(skb, dev, features); } @@ -1971,7 +1971,7 @@ static void qeth_l3_remove_device(struct ccwgroup_device *cgdev) wait_event(card->wait_q, qeth_threads_running(card, 0xffffffff) == 0); if (cgdev->state == CCWGROUP_ONLINE) - qeth_set_offline(card, false); + qeth_set_offline(card, card->discipline, false); cancel_work_sync(&card->close_dev_work); if (card->dev->reg_state == NETREG_REGISTERED) diff --git a/drivers/scsi/fnic/vnic_dev.c b/drivers/scsi/fnic/vnic_dev.c index a2beee6e09f0..5988c300cc82 100644 --- a/drivers/scsi/fnic/vnic_dev.c +++ b/drivers/scsi/fnic/vnic_dev.c @@ -444,7 +444,8 @@ static int vnic_dev_init_devcmd2(struct vnic_dev *vdev) fetch_index = ioread32(&vdev->devcmd2->wq.ctrl->fetch_index); if (fetch_index == 0xFFFFFFFF) { /* check for hardware gone */ pr_err("error in devcmd2 init"); - return -ENODEV; + err = -ENODEV; + goto err_free_wq; } /* @@ -460,7 +461,7 @@ static int vnic_dev_init_devcmd2(struct vnic_dev *vdev) err = vnic_dev_alloc_desc_ring(vdev, &vdev->devcmd2->results_ring, DEVCMD2_RING_SIZE, DEVCMD2_DESC_SIZE); if (err) - goto err_free_wq; + goto err_disable_wq; vdev->devcmd2->result = (struct devcmd2_result *) vdev->devcmd2->results_ring.descs; @@ -481,8 
+482,9 @@ static int vnic_dev_init_devcmd2(struct vnic_dev *vdev) err_free_desc_ring: vnic_dev_free_desc_ring(vdev, &vdev->devcmd2->results_ring); -err_free_wq: +err_disable_wq: vnic_wq_disable(&vdev->devcmd2->wq); +err_free_wq: vnic_wq_free(&vdev->devcmd2->wq); err_free_devcmd2: kfree(vdev->devcmd2); diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h index 2b28dd405600..e821dd32dd28 100644 --- a/drivers/scsi/hisi_sas/hisi_sas.h +++ b/drivers/scsi/hisi_sas/hisi_sas.h @@ -14,6 +14,7 @@ #include <linux/debugfs.h> #include <linux/dmapool.h> #include <linux/iopoll.h> +#include <linux/irq.h> #include <linux/lcm.h> #include <linux/libata.h> #include <linux/mfd/syscon.h> @@ -294,6 +295,7 @@ enum { struct hisi_sas_hw { int (*hw_init)(struct hisi_hba *hisi_hba); + int (*interrupt_preinit)(struct hisi_hba *hisi_hba); void (*setup_itct)(struct hisi_hba *hisi_hba, struct hisi_sas_device *device); int (*slot_index_alloc)(struct hisi_hba *hisi_hba, @@ -393,6 +395,8 @@ struct hisi_hba { u32 refclk_frequency_mhz; u8 sas_addr[SAS_ADDR_SIZE]; + int *irq_map; /* v2 hw */ + int n_phy; spinlock_t lock; struct semaphore sem; diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c index b6d4419c32f2..cf0bfac920a8 100644 --- a/drivers/scsi/hisi_sas/hisi_sas_main.c +++ b/drivers/scsi/hisi_sas/hisi_sas_main.c @@ -2614,6 +2614,13 @@ err_out: return NULL; } +static int hisi_sas_interrupt_preinit(struct hisi_hba *hisi_hba) +{ + if (hisi_hba->hw->interrupt_preinit) + return hisi_hba->hw->interrupt_preinit(hisi_hba); + return 0; +} + int hisi_sas_probe(struct platform_device *pdev, const struct hisi_sas_hw *hw) { @@ -2671,6 +2678,10 @@ int hisi_sas_probe(struct platform_device *pdev, sha->sas_port[i] = &hisi_hba->port[i].sas_port; } + rc = hisi_sas_interrupt_preinit(hisi_hba); + if (rc) + goto err_out_ha; + rc = scsi_add_host(shost, &pdev->dev); if (rc) goto err_out_ha; diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c index b57177b52fac..9adfdefef9ca 100644 --- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c +++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c @@ -3302,6 +3302,28 @@ static irq_handler_t fatal_interrupts[HISI_SAS_FATAL_INT_NR] = { fatal_axi_int_v2_hw }; +#define CQ0_IRQ_INDEX (96) + +static int hisi_sas_v2_interrupt_preinit(struct hisi_hba *hisi_hba) +{ + struct platform_device *pdev = hisi_hba->platform_dev; + struct Scsi_Host *shost = hisi_hba->shost; + struct irq_affinity desc = { + .pre_vectors = CQ0_IRQ_INDEX, + .post_vectors = 16, + }; + int resv = desc.pre_vectors + desc.post_vectors, minvec = resv + 1, nvec; + + nvec = devm_platform_get_irqs_affinity(pdev, &desc, minvec, 128, + &hisi_hba->irq_map); + if (nvec < 0) + return nvec; + + shost->nr_hw_queues = hisi_hba->cq_nvecs = nvec - resv; + + return 0; +} + /* * There is a limitation in the hip06 chipset that we need * to map in all mbigen interrupts, even if they are not used. 
@@ -3310,14 +3332,11 @@ static int interrupt_init_v2_hw(struct hisi_hba *hisi_hba) { struct platform_device *pdev = hisi_hba->platform_dev; struct device *dev = &pdev->dev; - int irq, rc = 0, irq_map[128]; + int irq, rc = 0; int i, phy_no, fatal_no, queue_no; - for (i = 0; i < 128; i++) - irq_map[i] = platform_get_irq(pdev, i); - for (i = 0; i < HISI_SAS_PHY_INT_NR; i++) { - irq = irq_map[i + 1]; /* Phy up/down is irq1 */ + irq = hisi_hba->irq_map[i + 1]; /* Phy up/down is irq1 */ rc = devm_request_irq(dev, irq, phy_interrupts[i], 0, DRV_NAME " phy", hisi_hba); if (rc) { @@ -3331,7 +3350,7 @@ static int interrupt_init_v2_hw(struct hisi_hba *hisi_hba) for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) { struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no]; - irq = irq_map[phy_no + 72]; + irq = hisi_hba->irq_map[phy_no + 72]; rc = devm_request_irq(dev, irq, sata_int_v2_hw, 0, DRV_NAME " sata", phy); if (rc) { @@ -3343,7 +3362,7 @@ static int interrupt_init_v2_hw(struct hisi_hba *hisi_hba) } for (fatal_no = 0; fatal_no < HISI_SAS_FATAL_INT_NR; fatal_no++) { - irq = irq_map[fatal_no + 81]; + irq = hisi_hba->irq_map[fatal_no + 81]; rc = devm_request_irq(dev, irq, fatal_interrupts[fatal_no], 0, DRV_NAME " fatal", hisi_hba); if (rc) { @@ -3354,24 +3373,22 @@ static int interrupt_init_v2_hw(struct hisi_hba *hisi_hba) } } - for (queue_no = 0; queue_no < hisi_hba->queue_count; queue_no++) { + for (queue_no = 0; queue_no < hisi_hba->cq_nvecs; queue_no++) { struct hisi_sas_cq *cq = &hisi_hba->cq[queue_no]; - cq->irq_no = irq_map[queue_no + 96]; + cq->irq_no = hisi_hba->irq_map[queue_no + 96]; rc = devm_request_threaded_irq(dev, cq->irq_no, cq_interrupt_v2_hw, cq_thread_v2_hw, IRQF_ONESHOT, DRV_NAME " cq", cq); if (rc) { dev_err(dev, "irq init: could not request cq interrupt %d, rc=%d\n", - irq, rc); + cq->irq_no, rc); rc = -ENOENT; goto err_out; } + cq->irq_mask = irq_get_affinity_mask(cq->irq_no); } - - hisi_hba->cq_nvecs = hisi_hba->queue_count; - err_out: return rc; } @@ -3529,6 +3546,26 @@ static struct device_attribute *host_attrs_v2_hw[] = { NULL }; +static int map_queues_v2_hw(struct Scsi_Host *shost) +{ + struct hisi_hba *hisi_hba = shost_priv(shost); + struct blk_mq_queue_map *qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT]; + const struct cpumask *mask; + unsigned int queue, cpu; + + for (queue = 0; queue < qmap->nr_queues; queue++) { + mask = irq_get_affinity_mask(hisi_hba->irq_map[96 + queue]); + if (!mask) + continue; + + for_each_cpu(cpu, mask) + qmap->mq_map[cpu] = qmap->queue_offset + queue; + } + + return 0; + +} + static struct scsi_host_template sht_v2_hw = { .name = DRV_NAME, .proc_name = DRV_NAME, @@ -3553,10 +3590,13 @@ static struct scsi_host_template sht_v2_hw = { #endif .shost_attrs = host_attrs_v2_hw, .host_reset = hisi_sas_host_reset, + .map_queues = map_queues_v2_hw, + .host_tagset = 1, }; static const struct hisi_sas_hw hisi_sas_v2_hw = { .hw_init = hisi_sas_v2_init, + .interrupt_preinit = hisi_sas_v2_interrupt_preinit, .setup_itct = setup_itct_v2_hw, .slot_index_alloc = slot_index_alloc_quirk_v2_hw, .alloc_dev = alloc_dev_quirk_v2_hw, diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c index 42e4d35e0d35..65f168c41d23 100644 --- a/drivers/scsi/ibmvscsi/ibmvfc.c +++ b/drivers/scsi/ibmvscsi/ibmvfc.c @@ -1744,7 +1744,7 @@ static int ibmvfc_queuecommand_lck(struct scsi_cmnd *cmnd, iu->pri_task_attr = IBMVFC_SIMPLE_TASK; } - vfc_cmd->correlation = cpu_to_be64(evt); + vfc_cmd->correlation = cpu_to_be64((u64)evt); if (likely(!(rc = ibmvfc_map_sg_data(cmnd, 
evt, vfc_cmd, vhost->dev)))) return ibmvfc_send_event(evt, vhost, 0); @@ -2418,7 +2418,7 @@ static int ibmvfc_abort_task_set(struct scsi_device *sdev) tmf->flags = cpu_to_be16((IBMVFC_NO_MEM_DESC | IBMVFC_TMF)); evt->sync_iu = &rsp_iu; - tmf->correlation = cpu_to_be64(evt); + tmf->correlation = cpu_to_be64((u64)evt); init_completion(&evt->comp); rsp_rc = ibmvfc_send_event(evt, vhost, default_timeout); @@ -3007,8 +3007,10 @@ static int ibmvfc_slave_configure(struct scsi_device *sdev) unsigned long flags = 0; spin_lock_irqsave(shost->host_lock, flags); - if (sdev->type == TYPE_DISK) + if (sdev->type == TYPE_DISK) { sdev->allow_restart = 1; + blk_queue_rq_timeout(sdev->request_queue, 120 * HZ); + } spin_unlock_irqrestore(shost->host_lock, flags); return 0; } diff --git a/drivers/scsi/libfc/fc_exch.c b/drivers/scsi/libfc/fc_exch.c index d71afae6191c..841000445b9a 100644 --- a/drivers/scsi/libfc/fc_exch.c +++ b/drivers/scsi/libfc/fc_exch.c @@ -1623,8 +1623,13 @@ static void fc_exch_recv_seq_resp(struct fc_exch_mgr *mp, struct fc_frame *fp) rc = fc_exch_done_locked(ep); WARN_ON(fc_seq_exch(sp) != ep); spin_unlock_bh(&ep->ex_lock); - if (!rc) + if (!rc) { fc_exch_delete(ep); + } else { + FC_EXCH_DBG(ep, "ep is completed already," + "hence skip calling the resp\n"); + goto skip_resp; + } } /* @@ -1643,6 +1648,7 @@ static void fc_exch_recv_seq_resp(struct fc_exch_mgr *mp, struct fc_frame *fp) if (!fc_invoke_resp(ep, sp, fp)) fc_frame_free(fp); +skip_resp: fc_exch_release(ep); return; rel: @@ -1899,10 +1905,16 @@ static void fc_exch_reset(struct fc_exch *ep) fc_exch_hold(ep); - if (!rc) + if (!rc) { fc_exch_delete(ep); + } else { + FC_EXCH_DBG(ep, "ep is completed already," + "hence skip calling the resp\n"); + goto skip_resp; + } fc_invoke_resp(ep, sp, ERR_PTR(-FC_EX_CLOSED)); +skip_resp: fc_seq_set_resp(sp, NULL, ep->arg); fc_exch_release(ep); } diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c index 6e4bf05c6d77..63a4f48bdc75 100644 --- a/drivers/scsi/megaraid/megaraid_sas_base.c +++ b/drivers/scsi/megaraid/megaraid_sas_base.c @@ -37,6 +37,7 @@ #include <linux/poll.h> #include <linux/vmalloc.h> #include <linux/irq_poll.h> +#include <linux/blk-mq-pci.h> #include <scsi/scsi.h> #include <scsi/scsi_cmnd.h> @@ -113,6 +114,10 @@ unsigned int enable_sdev_max_qd; module_param(enable_sdev_max_qd, int, 0444); MODULE_PARM_DESC(enable_sdev_max_qd, "Enable sdev max qd as can_queue. 
Default: 0"); +int host_tagset_enable = 1; +module_param(host_tagset_enable, int, 0444); +MODULE_PARM_DESC(host_tagset_enable, "Shared host tagset enable/disable Default: enable(1)"); + MODULE_LICENSE("GPL"); MODULE_VERSION(MEGASAS_VERSION); MODULE_AUTHOR("[email protected]"); @@ -3119,6 +3124,19 @@ megasas_bios_param(struct scsi_device *sdev, struct block_device *bdev, return 0; } +static int megasas_map_queues(struct Scsi_Host *shost) +{ + struct megasas_instance *instance; + + instance = (struct megasas_instance *)shost->hostdata; + + if (shost->nr_hw_queues == 1) + return 0; + + return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT], + instance->pdev, instance->low_latency_index_start); +} + static void megasas_aen_polling(struct work_struct *work); /** @@ -3427,6 +3445,7 @@ static struct scsi_host_template megasas_template = { .eh_timed_out = megasas_reset_timer, .shost_attrs = megaraid_host_attrs, .bios_param = megasas_bios_param, + .map_queues = megasas_map_queues, .change_queue_depth = scsi_change_queue_depth, .max_segment_size = 0xffffffff, }; @@ -6808,6 +6827,26 @@ static int megasas_io_attach(struct megasas_instance *instance) host->max_lun = MEGASAS_MAX_LUN; host->max_cmd_len = 16; + /* Use shared host tagset only for fusion adaptors + * if there are managed interrupts (smp affinity enabled case). + * Single msix_vectors in kdump, so shared host tag is also disabled. + */ + + host->host_tagset = 0; + host->nr_hw_queues = 1; + + if ((instance->adapter_type != MFI_SERIES) && + (instance->msix_vectors > instance->low_latency_index_start) && + host_tagset_enable && + instance->smp_affinity_enable) { + host->host_tagset = 1; + host->nr_hw_queues = instance->msix_vectors - + instance->low_latency_index_start; + } + + dev_info(&instance->pdev->dev, + "Max firmware commands: %d shared with nr_hw_queues = %d\n", + instance->max_fw_cmds, host->nr_hw_queues); /* * Notify the mid-layer about the new controller */ @@ -8205,11 +8244,9 @@ megasas_mgmt_fw_ioctl(struct megasas_instance *instance, goto out; } + /* always store 64 bits regardless of addressing */ sense_ptr = (void *)cmd->frame + ioc->sense_off; - if (instance->consistent_mask_64bit) - put_unaligned_le64(sense_handle, sense_ptr); - else - put_unaligned_le32(sense_handle, sense_ptr); + put_unaligned_le64(sense_handle, sense_ptr); } /* diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c index b0c01cf0428f..fd607287608e 100644 --- a/drivers/scsi/megaraid/megaraid_sas_fusion.c +++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c @@ -359,24 +359,29 @@ megasas_get_msix_index(struct megasas_instance *instance, { int sdev_busy; - /* nr_hw_queue = 1 for MegaRAID */ - struct blk_mq_hw_ctx *hctx = - scmd->device->request_queue->queue_hw_ctx[0]; - - sdev_busy = atomic_read(&hctx->nr_active); + /* TBD - if sml remove device_busy in future, driver + * should track counter in internal structure. 
+ */ + sdev_busy = atomic_read(&scmd->device->device_busy); if (instance->perf_mode == MR_BALANCED_PERF_MODE && - sdev_busy > (data_arms * MR_DEVICE_HIGH_IOPS_DEPTH)) + sdev_busy > (data_arms * MR_DEVICE_HIGH_IOPS_DEPTH)) { cmd->request_desc->SCSIIO.MSIxIndex = mega_mod64((atomic64_add_return(1, &instance->high_iops_outstanding) / MR_HIGH_IOPS_BATCH_COUNT), instance->low_latency_index_start); - else if (instance->msix_load_balance) + } else if (instance->msix_load_balance) { cmd->request_desc->SCSIIO.MSIxIndex = (mega_mod64(atomic64_add_return(1, &instance->total_io_count), instance->msix_vectors)); - else + } else if (instance->host->nr_hw_queues > 1) { + u32 tag = blk_mq_unique_tag(scmd->request); + + cmd->request_desc->SCSIIO.MSIxIndex = blk_mq_unique_tag_to_hwq(tag) + + instance->low_latency_index_start; + } else { cmd->request_desc->SCSIIO.MSIxIndex = instance->reply_map[raw_smp_processor_id()]; + } } /** @@ -956,9 +961,6 @@ megasas_alloc_cmds_fusion(struct megasas_instance *instance) if (megasas_alloc_cmdlist_fusion(instance)) goto fail_exit; - dev_info(&instance->pdev->dev, "Configured max firmware commands: %d\n", - instance->max_fw_cmds); - /* The first 256 bytes (SMID 0) is not used. Don't add to the cmd list */ io_req_base = fusion->io_request_frames + MEGA_MPI2_RAID_DEFAULT_IO_FRAME_SIZE; io_req_base_phys = fusion->io_request_frames_phys + MEGA_MPI2_RAID_DEFAULT_IO_FRAME_SIZE; @@ -1102,8 +1104,9 @@ megasas_ioc_init_fusion(struct megasas_instance *instance) MR_HIGH_IOPS_QUEUE_COUNT) && cur_intr_coalescing) instance->perf_mode = MR_BALANCED_PERF_MODE; - dev_info(&instance->pdev->dev, "Performance mode :%s\n", - MEGASAS_PERF_MODE_2STR(instance->perf_mode)); + dev_info(&instance->pdev->dev, "Performance mode :%s (latency index = %d)\n", + MEGASAS_PERF_MODE_2STR(instance->perf_mode), + instance->low_latency_index_start); instance->fw_sync_cache_support = (scratch_pad_1 & MR_CAN_HANDLE_SYNC_CACHE_OFFSET) ? 1 : 0; diff --git a/drivers/scsi/mpt3sas/Kconfig b/drivers/scsi/mpt3sas/Kconfig index 86209455172d..c299f7e078fb 100644 --- a/drivers/scsi/mpt3sas/Kconfig +++ b/drivers/scsi/mpt3sas/Kconfig @@ -79,5 +79,5 @@ config SCSI_MPT2SAS select SCSI_MPT3SAS depends on PCI && SCSI help - Dummy config option for backwards compatiblity: configure the MPT3SAS + Dummy config option for backwards compatibility: configure the MPT3SAS driver instead. 
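Both the hisi_sas v2 and megaraid_sas hunks above implement the same shared host-tagset recipe: set host_tagset on the Scsi_Host so every hardware queue allocates from a single tag space, derive nr_hw_queues from the managed-interrupt vectors, and provide a map_queues callback that mirrors the IRQ affinity masks. A minimal sketch of that recipe for a PCI HBA follows; my_hba, my_map_queues and reserved_vectors are hypothetical names for illustration, not identifiers from the patches (the platform-bus hisi_sas variant achieves the same mapping with irq_get_affinity_mask() instead of the PCI helper).

#include <linux/blk-mq.h>
#include <linux/blk-mq-pci.h>
#include <scsi/scsi_host.h>

/* Hypothetical per-HBA state; a real driver keeps this in its instance. */
struct my_hba {
	struct pci_dev *pdev;
	int reserved_vectors;	/* pre-vectors excluded from the queue map */
};

static int my_map_queues(struct Scsi_Host *shost)
{
	struct my_hba *hba = shost_priv(shost);

	/* With a single hardware queue the default CPU mapping suffices. */
	if (shost->nr_hw_queues == 1)
		return 0;

	/* Align the hctx-to-CPU mapping with the MSI-X affinity masks,
	 * skipping the vectors the driver reserved for non-queue work. */
	return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
				     hba->pdev, hba->reserved_vectors);
}

At probe time such a driver would set shost->host_tagset = 1 and size shost->nr_hw_queues from the allocated vectors minus the reserved ones before calling scsi_add_host(), much as the two patches above do for their respective controllers.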
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c index f5fc7f518f8a..47ad64b06623 100644 --- a/drivers/scsi/qedi/qedi_main.c +++ b/drivers/scsi/qedi/qedi_main.c @@ -2245,7 +2245,7 @@ qedi_show_boot_tgt_info(struct qedi_ctx *qedi, int type, chap_name); break; case ISCSI_BOOT_TGT_CHAP_SECRET: - rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN, + rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_PWD_MAX_LEN, chap_secret); break; case ISCSI_BOOT_TGT_REV_CHAP_NAME: @@ -2253,7 +2253,7 @@ qedi_show_boot_tgt_info(struct qedi_ctx *qedi, int type, mchap_name); break; case ISCSI_BOOT_TGT_REV_CHAP_SECRET: - rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_NAME_MAX_LEN, + rc = sprintf(buf, "%.*s\n", NVM_ISCSI_CFG_CHAP_PWD_MAX_LEN, mchap_secret); break; case ISCSI_BOOT_TGT_FLAGS: diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c index 24c0f7ec0351..4a08c450b756 100644 --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c @@ -6740,7 +6740,7 @@ static int __init scsi_debug_init(void) k = sdeb_zbc_model_str(sdeb_zbc_model_s); if (k < 0) { ret = k; - goto free_vm; + goto free_q_arr; } sdeb_zbc_model = k; switch (sdeb_zbc_model) { @@ -6753,7 +6753,8 @@ static int __init scsi_debug_init(void) break; default: pr_err("Invalid ZBC model\n"); - return -EINVAL; + ret = -EINVAL; + goto free_q_arr; } } if (sdeb_zbc_model != BLK_ZONED_NONE) { diff --git a/drivers/scsi/scsi_transport_srp.c b/drivers/scsi/scsi_transport_srp.c index cba1cf6a1c12..1e939a2a387f 100644 --- a/drivers/scsi/scsi_transport_srp.c +++ b/drivers/scsi/scsi_transport_srp.c @@ -541,7 +541,14 @@ int srp_reconnect_rport(struct srp_rport *rport) res = mutex_lock_interruptible(&rport->mutex); if (res) goto out; - scsi_target_block(&shost->shost_gendev); + if (rport->state != SRP_RPORT_FAIL_FAST) + /* + * sdev state must be SDEV_TRANSPORT_OFFLINE, transition + * to SDEV_BLOCK is illegal. Calling scsi_target_unblock() + * later is ok though, scsi_internal_device_unblock_nowait() + * treats SDEV_TRANSPORT_OFFLINE like SDEV_BLOCK. + */ + scsi_target_block(&shost->shost_gendev); res = rport->state != SRP_RPORT_LOST ? i->f->reconnect(rport) : -ENODEV; pr_debug("%s (state %d): transport.reconnect() returned %d\n", dev_name(&shost->shost_gendev), rport->state, res); diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index 679c2c025047..a3d2d4bc4a3d 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -984,8 +984,10 @@ static blk_status_t sd_setup_write_zeroes_cmnd(struct scsi_cmnd *cmd) } } - if (sdp->no_write_same) + if (sdp->no_write_same) { + rq->rq_flags |= RQF_QUIET; return BLK_STS_TARGET; + } if (sdkp->ws16 || lba > 0xffffffff || nr_blocks > 0xffff) return sd_setup_write_same16_cmnd(cmd, false); @@ -3510,10 +3512,8 @@ static int sd_probe(struct device *dev) static int sd_remove(struct device *dev) { struct scsi_disk *sdkp; - dev_t devt; sdkp = dev_get_drvdata(dev); - devt = disk_devt(sdkp->disk); scsi_autopm_get_device(sdkp->device); async_synchronize_full_domain(&scsi_sd_pm_domain); diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig index 3f6dfed4fe84..b915b38c2b27 100644 --- a/drivers/scsi/ufs/Kconfig +++ b/drivers/scsi/ufs/Kconfig @@ -72,6 +72,7 @@ config SCSI_UFS_DWC_TC_PCI config SCSI_UFSHCD_PLATFORM tristate "Platform bus based UFS Controller support" depends on SCSI_UFSHCD + depends on HAS_IOMEM help This selects the UFS host controller support. Select this if you have an UFS controller on Platform bus. 
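Several fixes in this series (vnic_dev_init_devcmd2 above, scsi_debug_init, and the hisi-spmi and mt7621-hsdma probe paths further down) correct the same class of bug: an error path that releases the wrong set of resources, or none at all. The underlying idiom is to unwind in reverse order of acquisition and to jump to the label matching exactly what is held at the point of failure. A compact sketch of that idiom, with hypothetical acquire_*/release_* helpers standing in for the real setup calls:

/* Hypothetical helpers; each acquire_*() returns 0 on success. */
int acquire_a(void);
int acquire_b(void);
int acquire_c(void);
void release_a(void);
void release_b(void);

static int example_init(void)
{
	int err;

	err = acquire_a();
	if (err)
		return err;		/* nothing held yet */

	err = acquire_b();
	if (err)
		goto err_release_a;	/* only A is held */

	err = acquire_c();
	if (err)
		goto err_release_b;	/* A and B are held */

	return 0;

err_release_b:
	release_b();
err_release_a:
	release_a();
	return err;
}

The fnic fix above is precisely a relabeling of this ladder: once the work queue has been enabled, failure paths must disable it before freeing it, so the enable step gets its own err_disable_wq rung while the pre-enable hardware-gone check jumps straight to err_free_wq.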
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 82ad31781bc9..fb32d122f2e3 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -289,7 +289,8 @@ static inline void ufshcd_wb_config(struct ufs_hba *hba) if (ret) dev_err(hba->dev, "%s: En WB flush during H8: failed: %d\n", __func__, ret); - ufshcd_wb_toggle_flush(hba, true); + if (!(hba->quirks & UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL)) + ufshcd_wb_toggle_flush(hba, true); } static void ufshcd_scsi_unblock_requests(struct ufs_hba *hba) @@ -3995,6 +3996,8 @@ int ufshcd_link_recovery(struct ufs_hba *hba) if (ret) dev_err(hba->dev, "%s: link recovery failed, err %d", __func__, ret); + else + ufshcd_clear_ua_wluns(hba); return ret; } @@ -4991,7 +4994,8 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) break; } /* end of switch */ - if ((host_byte(result) != DID_OK) && !hba->silence_err_logs) + if ((host_byte(result) != DID_OK) && + (host_byte(result) != DID_REQUEUE) && !hba->silence_err_logs) ufshcd_print_trs(hba, 1 << lrbp->task_tag, true); return result; } @@ -5436,9 +5440,6 @@ static int ufshcd_wb_toggle_flush_during_h8(struct ufs_hba *hba, bool set) static inline void ufshcd_wb_toggle_flush(struct ufs_hba *hba, bool enable) { - if (hba->quirks & UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL) - return; - if (enable) ufshcd_wb_buf_flush_enable(hba); else @@ -6003,6 +6004,9 @@ skip_err_handling: ufshcd_scsi_unblock_requests(hba); ufshcd_err_handling_unprepare(hba); up(&hba->eh_sem); + + if (!err && needs_reset) + ufshcd_clear_ua_wluns(hba); } /** @@ -6297,9 +6301,13 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba) intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS); } - if (enabled_intr_status && retval == IRQ_NONE) { - dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x\n", - __func__, intr_status); + if (enabled_intr_status && retval == IRQ_NONE && + !ufshcd_eh_in_progress(hba)) { + dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x (0x%08x, 0x%08x)\n", + __func__, + intr_status, + hba->ufs_stats.last_intr_status, + enabled_intr_status); ufshcd_dump_regs(hba, 0, UFSHCI_REG_SPACE_SIZE, "host_regs: "); } @@ -6343,7 +6351,10 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba, * Even though we use wait_event() which sleeps indefinitely, * the maximum wait time is bounded by %TM_CMD_TIMEOUT. 
*/ - req = blk_get_request(q, REQ_OP_DRV_OUT, BLK_MQ_REQ_RESERVED); + req = blk_get_request(q, REQ_OP_DRV_OUT, 0); + if (IS_ERR(req)) + return PTR_ERR(req); + req->end_io_data = &wait; free_slot = req->tag; WARN_ON_ONCE(free_slot < 0 || free_slot >= hba->nutmrs); @@ -6661,19 +6672,16 @@ static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd) { struct Scsi_Host *host; struct ufs_hba *hba; - unsigned int tag; u32 pos; int err; - u8 resp = 0xF; - struct ufshcd_lrb *lrbp; + u8 resp = 0xF, lun; unsigned long flags; host = cmd->device->host; hba = shost_priv(host); - tag = cmd->request->tag; - lrbp = &hba->lrb[tag]; - err = ufshcd_issue_tm_cmd(hba, lrbp->lun, 0, UFS_LOGICAL_RESET, &resp); + lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun); + err = ufshcd_issue_tm_cmd(hba, lun, 0, UFS_LOGICAL_RESET, &resp); if (err || resp != UPIU_TASK_MANAGEMENT_FUNC_COMPL) { if (!err) err = resp; @@ -6682,7 +6690,7 @@ static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd) /* clear the commands that were pending for corresponding LUN */ for_each_set_bit(pos, &hba->outstanding_reqs, hba->nutrs) { - if (hba->lrb[pos].lun == lrbp->lun) { + if (hba->lrb[pos].lun == lun) { err = ufshcd_clear_cmd(hba, pos); if (err) break; @@ -6943,14 +6951,11 @@ static int ufshcd_host_reset_and_restore(struct ufs_hba *hba) ufshcd_set_clk_freq(hba, true); err = ufshcd_hba_enable(hba); - if (err) - goto out; /* Establish the link again and restore the device */ - err = ufshcd_probe_hba(hba, false); if (!err) - ufshcd_clear_ua_wluns(hba); -out: + err = ufshcd_probe_hba(hba, false); + if (err) dev_err(hba->dev, "%s: Host init failed %d\n", __func__, err); ufshcd_update_evt_hist(hba, UFS_EVT_HOST_RESET, (u32)err); @@ -7721,6 +7726,8 @@ static int ufshcd_add_lus(struct ufs_hba *hba) if (ret) goto out; + ufshcd_clear_ua_wluns(hba); + /* Initialize devfreq after UFS device is detected */ if (ufshcd_is_clkscaling_supported(hba)) { memcpy(&hba->clk_scaling.saved_pwr_info.info, @@ -7922,8 +7929,6 @@ out: pm_runtime_put_sync(hba->dev); ufshcd_exit_clk_scaling(hba); ufshcd_hba_exit(hba); - } else { - ufshcd_clear_ua_wluns(hba); } } @@ -8698,6 +8703,8 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) ufshcd_wb_need_flush(hba)); } + flush_work(&hba->eeh_work); + if (req_dev_pwr_mode != hba->curr_dev_pwr_mode) { if (!ufshcd_is_runtime_pm(pm_op)) /* ensure that bkops is disabled */ @@ -8710,8 +8717,6 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) } } - flush_work(&hba->eeh_work); - /* * In the case of DeepSleep, the device is expected to remain powered * with the link off, so do not check for bkops. 
@@ -8780,6 +8785,7 @@ enable_gating: ufshcd_resume_clkscaling(hba); hba->clk_gating.is_suspended = false; hba->dev_info.b_rpm_dev_flush_capable = false; + ufshcd_clear_ua_wluns(hba); ufshcd_release(hba); out: if (hba->dev_info.b_rpm_dev_flush_capable) { @@ -8890,6 +8896,8 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op) cancel_delayed_work(&hba->rpm_dev_flush_recheck_work); } + ufshcd_clear_ua_wluns(hba); + /* Schedule clock gating in case of no access to UFS device yet */ ufshcd_release(hba); @@ -8938,7 +8946,8 @@ int ufshcd_system_suspend(struct ufs_hba *hba) if ((ufs_get_pm_lvl_to_dev_pwr_mode(hba->spm_lvl) == hba->curr_dev_pwr_mode) && (ufs_get_pm_lvl_to_link_pwr_state(hba->spm_lvl) == - hba->uic_link_state)) + hba->uic_link_state) && + !hba->dev_info.b_rpm_dev_flush_capable) goto out; if (pm_runtime_suspended(hba->dev)) { diff --git a/drivers/sh/intc/core.c b/drivers/sh/intc/core.c index f8e070d67fa3..a14684ffe4c1 100644 --- a/drivers/sh/intc/core.c +++ b/drivers/sh/intc/core.c @@ -214,7 +214,7 @@ int __init register_intc_controller(struct intc_desc *desc) d->window[k].phys = res->start; d->window[k].size = resource_size(res); d->window[k].virt = ioremap(res->start, - resource_size(res)); + resource_size(res)); if (!d->window[k].virt) goto err2; } diff --git a/drivers/sh/intc/virq-debugfs.c b/drivers/sh/intc/virq-debugfs.c index 9e62ba9311f0..939915a07d99 100644 --- a/drivers/sh/intc/virq-debugfs.c +++ b/drivers/sh/intc/virq-debugfs.c @@ -16,7 +16,7 @@ #include <linux/debugfs.h> #include "internals.h" -static int intc_irq_xlate_debug(struct seq_file *m, void *priv) +static int intc_irq_xlate_show(struct seq_file *m, void *priv) { int i; @@ -37,17 +37,7 @@ static int intc_irq_xlate_debug(struct seq_file *m, void *priv) return 0; } -static int intc_irq_xlate_open(struct inode *inode, struct file *file) -{ - return single_open(file, intc_irq_xlate_debug, inode->i_private); -} - -static const struct file_operations intc_irq_xlate_fops = { - .open = intc_irq_xlate_open, - .read = seq_read, - .llseek = seq_lseek, - .release = single_release, -}; +DEFINE_SHOW_ATTRIBUTE(intc_irq_xlate); static int __init intc_irq_xlate_init(void) { diff --git a/drivers/soc/litex/litex_soc_ctrl.c b/drivers/soc/litex/litex_soc_ctrl.c index 1217cafdfd4d..9b0766384570 100644 --- a/drivers/soc/litex/litex_soc_ctrl.c +++ b/drivers/soc/litex/litex_soc_ctrl.c @@ -140,12 +140,13 @@ struct litex_soc_ctrl_device { void __iomem *base; }; +#ifdef CONFIG_OF static const struct of_device_id litex_soc_ctrl_of_match[] = { {.compatible = "litex,soc-controller"}, {}, }; - MODULE_DEVICE_TABLE(of, litex_soc_ctrl_of_match); +#endif /* CONFIG_OF */ static int litex_soc_ctrl_probe(struct platform_device *pdev) { diff --git a/drivers/soc/mediatek/Makefile b/drivers/soc/mediatek/Makefile index b6908db534c2..90270f8114ed 100644 --- a/drivers/soc/mediatek/Makefile +++ b/drivers/soc/mediatek/Makefile @@ -6,3 +6,4 @@ obj-$(CONFIG_MTK_PMIC_WRAP) += mtk-pmic-wrap.o obj-$(CONFIG_MTK_SCPSYS) += mtk-scpsys.o obj-$(CONFIG_MTK_SCPSYS_PM_DOMAINS) += mtk-pm-domains.o obj-$(CONFIG_MTK_MMSYS) += mtk-mmsys.o +obj-$(CONFIG_MTK_MMSYS) += mtk-mutex.o diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp.c b/drivers/soc/mediatek/mtk-mutex.c index 1f99db6b1a42..f531b119da7a 100644 --- a/drivers/gpu/drm/mediatek/mtk_drm_ddp.c +++ b/drivers/soc/mediatek/mtk-mutex.c @@ -9,12 +9,11 @@ #include <linux/of_device.h> #include <linux/platform_device.h> #include <linux/regmap.h> +#include <linux/soc/mediatek/mtk-mmsys.h> +#include 
<linux/soc/mediatek/mtk-mutex.h> -#include "mtk_drm_ddp.h" -#include "mtk_drm_ddp_comp.h" - -#define MT2701_DISP_MUTEX0_MOD0 0x2c -#define MT2701_DISP_MUTEX0_SOF0 0x30 +#define MT2701_MUTEX0_MOD0 0x2c +#define MT2701_MUTEX0_SOF0 0x30 #define DISP_REG_MUTEX_EN(n) (0x20 + 0x20 * (n)) #define DISP_REG_MUTEX(n) (0x24 + 0x20 * (n)) @@ -79,33 +78,32 @@ #define MT2701_MUTEX_MOD_DISP_RDMA0 10 #define MT2701_MUTEX_MOD_DISP_RDMA1 12 -#define MUTEX_SOF_SINGLE_MODE 0 -#define MUTEX_SOF_DSI0 1 -#define MUTEX_SOF_DSI1 2 -#define MUTEX_SOF_DPI0 3 -#define MUTEX_SOF_DPI1 4 -#define MUTEX_SOF_DSI2 5 -#define MUTEX_SOF_DSI3 6 -#define MT8167_MUTEX_SOF_DPI0 2 -#define MT8167_MUTEX_SOF_DPI1 3 - - -struct mtk_disp_mutex { +#define MT2712_MUTEX_SOF_SINGLE_MODE 0 +#define MT2712_MUTEX_SOF_DSI0 1 +#define MT2712_MUTEX_SOF_DSI1 2 +#define MT2712_MUTEX_SOF_DPI0 3 +#define MT2712_MUTEX_SOF_DPI1 4 +#define MT2712_MUTEX_SOF_DSI2 5 +#define MT2712_MUTEX_SOF_DSI3 6 +#define MT8167_MUTEX_SOF_DPI0 2 +#define MT8167_MUTEX_SOF_DPI1 3 + +struct mtk_mutex { int id; bool claimed; }; -enum mtk_ddp_mutex_sof_id { - DDP_MUTEX_SOF_SINGLE_MODE, - DDP_MUTEX_SOF_DSI0, - DDP_MUTEX_SOF_DSI1, - DDP_MUTEX_SOF_DPI0, - DDP_MUTEX_SOF_DPI1, - DDP_MUTEX_SOF_DSI2, - DDP_MUTEX_SOF_DSI3, +enum mtk_mutex_sof_id { + MUTEX_SOF_SINGLE_MODE, + MUTEX_SOF_DSI0, + MUTEX_SOF_DSI1, + MUTEX_SOF_DPI0, + MUTEX_SOF_DPI1, + MUTEX_SOF_DSI2, + MUTEX_SOF_DSI3, }; -struct mtk_ddp_data { +struct mtk_mutex_data { const unsigned int *mutex_mod; const unsigned int *mutex_sof; const unsigned int mutex_mod_reg; @@ -113,12 +111,12 @@ struct mtk_ddp_data { const bool no_clk; }; -struct mtk_ddp { +struct mtk_mutex_ctx { struct device *dev; struct clk *clk; void __iomem *regs; - struct mtk_disp_mutex mutex[10]; - const struct mtk_ddp_data *data; + struct mtk_mutex mutex[10]; + const struct mtk_mutex_data *data; }; static const unsigned int mt2701_mutex_mod[DDP_COMPONENT_ID_MAX] = { @@ -183,150 +181,155 @@ static const unsigned int mt8173_mutex_mod[DDP_COMPONENT_ID_MAX] = { [DDP_COMPONENT_WDMA1] = MT8173_MUTEX_MOD_DISP_WDMA1, }; -static const unsigned int mt2712_mutex_sof[DDP_MUTEX_SOF_DSI3 + 1] = { - [DDP_MUTEX_SOF_SINGLE_MODE] = MUTEX_SOF_SINGLE_MODE, - [DDP_MUTEX_SOF_DSI0] = MUTEX_SOF_DSI0, - [DDP_MUTEX_SOF_DSI1] = MUTEX_SOF_DSI1, - [DDP_MUTEX_SOF_DPI0] = MUTEX_SOF_DPI0, - [DDP_MUTEX_SOF_DPI1] = MUTEX_SOF_DPI1, - [DDP_MUTEX_SOF_DSI2] = MUTEX_SOF_DSI2, - [DDP_MUTEX_SOF_DSI3] = MUTEX_SOF_DSI3, +static const unsigned int mt2712_mutex_sof[MUTEX_SOF_DSI3 + 1] = { + [MUTEX_SOF_SINGLE_MODE] = MUTEX_SOF_SINGLE_MODE, + [MUTEX_SOF_DSI0] = MUTEX_SOF_DSI0, + [MUTEX_SOF_DSI1] = MUTEX_SOF_DSI1, + [MUTEX_SOF_DPI0] = MUTEX_SOF_DPI0, + [MUTEX_SOF_DPI1] = MUTEX_SOF_DPI1, + [MUTEX_SOF_DSI2] = MUTEX_SOF_DSI2, + [MUTEX_SOF_DSI3] = MUTEX_SOF_DSI3, }; -static const unsigned int mt8167_mutex_sof[DDP_MUTEX_SOF_DSI3 + 1] = { - [DDP_MUTEX_SOF_SINGLE_MODE] = MUTEX_SOF_SINGLE_MODE, - [DDP_MUTEX_SOF_DSI0] = MUTEX_SOF_DSI0, - [DDP_MUTEX_SOF_DPI0] = MT8167_MUTEX_SOF_DPI0, - [DDP_MUTEX_SOF_DPI1] = MT8167_MUTEX_SOF_DPI1, +static const unsigned int mt8167_mutex_sof[MUTEX_SOF_DSI3 + 1] = { + [MUTEX_SOF_SINGLE_MODE] = MUTEX_SOF_SINGLE_MODE, + [MUTEX_SOF_DSI0] = MUTEX_SOF_DSI0, + [MUTEX_SOF_DPI0] = MT8167_MUTEX_SOF_DPI0, + [MUTEX_SOF_DPI1] = MT8167_MUTEX_SOF_DPI1, }; -static const struct mtk_ddp_data mt2701_ddp_driver_data = { +static const struct mtk_mutex_data mt2701_mutex_driver_data = { .mutex_mod = mt2701_mutex_mod, .mutex_sof = mt2712_mutex_sof, - .mutex_mod_reg = MT2701_DISP_MUTEX0_MOD0, - 
.mutex_sof_reg = MT2701_DISP_MUTEX0_SOF0, + .mutex_mod_reg = MT2701_MUTEX0_MOD0, + .mutex_sof_reg = MT2701_MUTEX0_SOF0, }; -static const struct mtk_ddp_data mt2712_ddp_driver_data = { +static const struct mtk_mutex_data mt2712_mutex_driver_data = { .mutex_mod = mt2712_mutex_mod, .mutex_sof = mt2712_mutex_sof, - .mutex_mod_reg = MT2701_DISP_MUTEX0_MOD0, - .mutex_sof_reg = MT2701_DISP_MUTEX0_SOF0, + .mutex_mod_reg = MT2701_MUTEX0_MOD0, + .mutex_sof_reg = MT2701_MUTEX0_SOF0, }; -static const struct mtk_ddp_data mt8167_ddp_driver_data = { +static const struct mtk_mutex_data mt8167_mutex_driver_data = { .mutex_mod = mt8167_mutex_mod, .mutex_sof = mt8167_mutex_sof, - .mutex_mod_reg = MT2701_DISP_MUTEX0_MOD0, - .mutex_sof_reg = MT2701_DISP_MUTEX0_SOF0, + .mutex_mod_reg = MT2701_MUTEX0_MOD0, + .mutex_sof_reg = MT2701_MUTEX0_SOF0, .no_clk = true, }; -static const struct mtk_ddp_data mt8173_ddp_driver_data = { +static const struct mtk_mutex_data mt8173_mutex_driver_data = { .mutex_mod = mt8173_mutex_mod, .mutex_sof = mt2712_mutex_sof, - .mutex_mod_reg = MT2701_DISP_MUTEX0_MOD0, - .mutex_sof_reg = MT2701_DISP_MUTEX0_SOF0, + .mutex_mod_reg = MT2701_MUTEX0_MOD0, + .mutex_sof_reg = MT2701_MUTEX0_SOF0, }; -struct mtk_disp_mutex *mtk_disp_mutex_get(struct device *dev, unsigned int id) +struct mtk_mutex *mtk_mutex_get(struct device *dev) { - struct mtk_ddp *ddp = dev_get_drvdata(dev); - - if (id >= 10) - return ERR_PTR(-EINVAL); - if (ddp->mutex[id].claimed) - return ERR_PTR(-EBUSY); + struct mtk_mutex_ctx *mtx = dev_get_drvdata(dev); + int i; - ddp->mutex[id].claimed = true; + for (i = 0; i < 10; i++) + if (!mtx->mutex[i].claimed) { + mtx->mutex[i].claimed = true; + return &mtx->mutex[i]; + } - return &ddp->mutex[id]; + return ERR_PTR(-EBUSY); } +EXPORT_SYMBOL_GPL(mtk_mutex_get); -void mtk_disp_mutex_put(struct mtk_disp_mutex *mutex) +void mtk_mutex_put(struct mtk_mutex *mutex) { - struct mtk_ddp *ddp = container_of(mutex, struct mtk_ddp, - mutex[mutex->id]); + struct mtk_mutex_ctx *mtx = container_of(mutex, struct mtk_mutex_ctx, + mutex[mutex->id]); - WARN_ON(&ddp->mutex[mutex->id] != mutex); + WARN_ON(&mtx->mutex[mutex->id] != mutex); mutex->claimed = false; } +EXPORT_SYMBOL_GPL(mtk_mutex_put); -int mtk_disp_mutex_prepare(struct mtk_disp_mutex *mutex) +int mtk_mutex_prepare(struct mtk_mutex *mutex) { - struct mtk_ddp *ddp = container_of(mutex, struct mtk_ddp, - mutex[mutex->id]); - return clk_prepare_enable(ddp->clk); + struct mtk_mutex_ctx *mtx = container_of(mutex, struct mtk_mutex_ctx, + mutex[mutex->id]); + return clk_prepare_enable(mtx->clk); } +EXPORT_SYMBOL_GPL(mtk_mutex_prepare); -void mtk_disp_mutex_unprepare(struct mtk_disp_mutex *mutex) +void mtk_mutex_unprepare(struct mtk_mutex *mutex) { - struct mtk_ddp *ddp = container_of(mutex, struct mtk_ddp, - mutex[mutex->id]); - clk_disable_unprepare(ddp->clk); + struct mtk_mutex_ctx *mtx = container_of(mutex, struct mtk_mutex_ctx, + mutex[mutex->id]); + clk_disable_unprepare(mtx->clk); } +EXPORT_SYMBOL_GPL(mtk_mutex_unprepare); -void mtk_disp_mutex_add_comp(struct mtk_disp_mutex *mutex, - enum mtk_ddp_comp_id id) +void mtk_mutex_add_comp(struct mtk_mutex *mutex, + enum mtk_ddp_comp_id id) { - struct mtk_ddp *ddp = container_of(mutex, struct mtk_ddp, - mutex[mutex->id]); + struct mtk_mutex_ctx *mtx = container_of(mutex, struct mtk_mutex_ctx, + mutex[mutex->id]); unsigned int reg; unsigned int sof_id; unsigned int offset; - WARN_ON(&ddp->mutex[mutex->id] != mutex); + WARN_ON(&mtx->mutex[mutex->id] != mutex); switch (id) { case DDP_COMPONENT_DSI0: - 
sof_id = DDP_MUTEX_SOF_DSI0; + sof_id = MUTEX_SOF_DSI0; break; case DDP_COMPONENT_DSI1: - sof_id = DDP_MUTEX_SOF_DSI0; + sof_id = MUTEX_SOF_DSI0; break; case DDP_COMPONENT_DSI2: - sof_id = DDP_MUTEX_SOF_DSI2; + sof_id = MUTEX_SOF_DSI2; break; case DDP_COMPONENT_DSI3: - sof_id = DDP_MUTEX_SOF_DSI3; + sof_id = MUTEX_SOF_DSI3; break; case DDP_COMPONENT_DPI0: - sof_id = DDP_MUTEX_SOF_DPI0; + sof_id = MUTEX_SOF_DPI0; break; case DDP_COMPONENT_DPI1: - sof_id = DDP_MUTEX_SOF_DPI1; + sof_id = MUTEX_SOF_DPI1; break; default: - if (ddp->data->mutex_mod[id] < 32) { - offset = DISP_REG_MUTEX_MOD(ddp->data->mutex_mod_reg, + if (mtx->data->mutex_mod[id] < 32) { + offset = DISP_REG_MUTEX_MOD(mtx->data->mutex_mod_reg, mutex->id); - reg = readl_relaxed(ddp->regs + offset); - reg |= 1 << ddp->data->mutex_mod[id]; - writel_relaxed(reg, ddp->regs + offset); + reg = readl_relaxed(mtx->regs + offset); + reg |= 1 << mtx->data->mutex_mod[id]; + writel_relaxed(reg, mtx->regs + offset); } else { offset = DISP_REG_MUTEX_MOD2(mutex->id); - reg = readl_relaxed(ddp->regs + offset); - reg |= 1 << (ddp->data->mutex_mod[id] - 32); - writel_relaxed(reg, ddp->regs + offset); + reg = readl_relaxed(mtx->regs + offset); + reg |= 1 << (mtx->data->mutex_mod[id] - 32); + writel_relaxed(reg, mtx->regs + offset); } return; } - writel_relaxed(ddp->data->mutex_sof[sof_id], - ddp->regs + - DISP_REG_MUTEX_SOF(ddp->data->mutex_sof_reg, mutex->id)); + writel_relaxed(mtx->data->mutex_sof[sof_id], + mtx->regs + + DISP_REG_MUTEX_SOF(mtx->data->mutex_sof_reg, mutex->id)); } +EXPORT_SYMBOL_GPL(mtk_mutex_add_comp); -void mtk_disp_mutex_remove_comp(struct mtk_disp_mutex *mutex, - enum mtk_ddp_comp_id id) +void mtk_mutex_remove_comp(struct mtk_mutex *mutex, + enum mtk_ddp_comp_id id) { - struct mtk_ddp *ddp = container_of(mutex, struct mtk_ddp, - mutex[mutex->id]); + struct mtk_mutex_ctx *mtx = container_of(mutex, struct mtk_mutex_ctx, + mutex[mutex->id]); unsigned int reg; unsigned int offset; - WARN_ON(&ddp->mutex[mutex->id] != mutex); + WARN_ON(&mtx->mutex[mutex->id] != mutex); switch (id) { case DDP_COMPONENT_DSI0: @@ -336,129 +339,136 @@ void mtk_disp_mutex_remove_comp(struct mtk_disp_mutex *mutex, case DDP_COMPONENT_DPI0: case DDP_COMPONENT_DPI1: writel_relaxed(MUTEX_SOF_SINGLE_MODE, - ddp->regs + - DISP_REG_MUTEX_SOF(ddp->data->mutex_sof_reg, + mtx->regs + + DISP_REG_MUTEX_SOF(mtx->data->mutex_sof_reg, mutex->id)); break; default: - if (ddp->data->mutex_mod[id] < 32) { - offset = DISP_REG_MUTEX_MOD(ddp->data->mutex_mod_reg, + if (mtx->data->mutex_mod[id] < 32) { + offset = DISP_REG_MUTEX_MOD(mtx->data->mutex_mod_reg, mutex->id); - reg = readl_relaxed(ddp->regs + offset); - reg &= ~(1 << ddp->data->mutex_mod[id]); - writel_relaxed(reg, ddp->regs + offset); + reg = readl_relaxed(mtx->regs + offset); + reg &= ~(1 << mtx->data->mutex_mod[id]); + writel_relaxed(reg, mtx->regs + offset); } else { offset = DISP_REG_MUTEX_MOD2(mutex->id); - reg = readl_relaxed(ddp->regs + offset); - reg &= ~(1 << (ddp->data->mutex_mod[id] - 32)); - writel_relaxed(reg, ddp->regs + offset); + reg = readl_relaxed(mtx->regs + offset); + reg &= ~(1 << (mtx->data->mutex_mod[id] - 32)); + writel_relaxed(reg, mtx->regs + offset); } break; } } +EXPORT_SYMBOL_GPL(mtk_mutex_remove_comp); -void mtk_disp_mutex_enable(struct mtk_disp_mutex *mutex) +void mtk_mutex_enable(struct mtk_mutex *mutex) { - struct mtk_ddp *ddp = container_of(mutex, struct mtk_ddp, - mutex[mutex->id]); + struct mtk_mutex_ctx *mtx = container_of(mutex, struct mtk_mutex_ctx, + mutex[mutex->id]); - 
WARN_ON(&ddp->mutex[mutex->id] != mutex); + WARN_ON(&mtx->mutex[mutex->id] != mutex); - writel(1, ddp->regs + DISP_REG_MUTEX_EN(mutex->id)); + writel(1, mtx->regs + DISP_REG_MUTEX_EN(mutex->id)); } +EXPORT_SYMBOL_GPL(mtk_mutex_enable); -void mtk_disp_mutex_disable(struct mtk_disp_mutex *mutex) +void mtk_mutex_disable(struct mtk_mutex *mutex) { - struct mtk_ddp *ddp = container_of(mutex, struct mtk_ddp, - mutex[mutex->id]); + struct mtk_mutex_ctx *mtx = container_of(mutex, struct mtk_mutex_ctx, + mutex[mutex->id]); - WARN_ON(&ddp->mutex[mutex->id] != mutex); + WARN_ON(&mtx->mutex[mutex->id] != mutex); - writel(0, ddp->regs + DISP_REG_MUTEX_EN(mutex->id)); + writel(0, mtx->regs + DISP_REG_MUTEX_EN(mutex->id)); } +EXPORT_SYMBOL_GPL(mtk_mutex_disable); -void mtk_disp_mutex_acquire(struct mtk_disp_mutex *mutex) +void mtk_mutex_acquire(struct mtk_mutex *mutex) { - struct mtk_ddp *ddp = container_of(mutex, struct mtk_ddp, - mutex[mutex->id]); + struct mtk_mutex_ctx *mtx = container_of(mutex, struct mtk_mutex_ctx, + mutex[mutex->id]); u32 tmp; - writel(1, ddp->regs + DISP_REG_MUTEX_EN(mutex->id)); - writel(1, ddp->regs + DISP_REG_MUTEX(mutex->id)); - if (readl_poll_timeout_atomic(ddp->regs + DISP_REG_MUTEX(mutex->id), + writel(1, mtx->regs + DISP_REG_MUTEX_EN(mutex->id)); + writel(1, mtx->regs + DISP_REG_MUTEX(mutex->id)); + if (readl_poll_timeout_atomic(mtx->regs + DISP_REG_MUTEX(mutex->id), tmp, tmp & INT_MUTEX, 1, 10000)) pr_err("could not acquire mutex %d\n", mutex->id); } +EXPORT_SYMBOL_GPL(mtk_mutex_acquire); -void mtk_disp_mutex_release(struct mtk_disp_mutex *mutex) +void mtk_mutex_release(struct mtk_mutex *mutex) { - struct mtk_ddp *ddp = container_of(mutex, struct mtk_ddp, - mutex[mutex->id]); + struct mtk_mutex_ctx *mtx = container_of(mutex, struct mtk_mutex_ctx, + mutex[mutex->id]); - writel(0, ddp->regs + DISP_REG_MUTEX(mutex->id)); + writel(0, mtx->regs + DISP_REG_MUTEX(mutex->id)); } +EXPORT_SYMBOL_GPL(mtk_mutex_release); -static int mtk_ddp_probe(struct platform_device *pdev) +static int mtk_mutex_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; - struct mtk_ddp *ddp; + struct mtk_mutex_ctx *mtx; struct resource *regs; int i; - ddp = devm_kzalloc(dev, sizeof(*ddp), GFP_KERNEL); - if (!ddp) + mtx = devm_kzalloc(dev, sizeof(*mtx), GFP_KERNEL); + if (!mtx) return -ENOMEM; for (i = 0; i < 10; i++) - ddp->mutex[i].id = i; + mtx->mutex[i].id = i; - ddp->data = of_device_get_match_data(dev); + mtx->data = of_device_get_match_data(dev); - if (!ddp->data->no_clk) { - ddp->clk = devm_clk_get(dev, NULL); - if (IS_ERR(ddp->clk)) { - if (PTR_ERR(ddp->clk) != -EPROBE_DEFER) + if (!mtx->data->no_clk) { + mtx->clk = devm_clk_get(dev, NULL); + if (IS_ERR(mtx->clk)) { + if (PTR_ERR(mtx->clk) != -EPROBE_DEFER) dev_err(dev, "Failed to get clock\n"); - return PTR_ERR(ddp->clk); + return PTR_ERR(mtx->clk); } } regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); - ddp->regs = devm_ioremap_resource(dev, regs); - if (IS_ERR(ddp->regs)) { + mtx->regs = devm_ioremap_resource(dev, regs); + if (IS_ERR(mtx->regs)) { dev_err(dev, "Failed to map mutex registers\n"); - return PTR_ERR(ddp->regs); + return PTR_ERR(mtx->regs); } - platform_set_drvdata(pdev, ddp); + platform_set_drvdata(pdev, mtx); return 0; } -static int mtk_ddp_remove(struct platform_device *pdev) +static int mtk_mutex_remove(struct platform_device *pdev) { return 0; } -static const struct of_device_id ddp_driver_dt_match[] = { +static const struct of_device_id mutex_driver_dt_match[] = { { .compatible = 
"mediatek,mt2701-disp-mutex", - .data = &mt2701_ddp_driver_data}, + .data = &mt2701_mutex_driver_data}, { .compatible = "mediatek,mt2712-disp-mutex", - .data = &mt2712_ddp_driver_data}, + .data = &mt2712_mutex_driver_data}, { .compatible = "mediatek,mt8167-disp-mutex", - .data = &mt8167_ddp_driver_data}, + .data = &mt8167_mutex_driver_data}, { .compatible = "mediatek,mt8173-disp-mutex", - .data = &mt8173_ddp_driver_data}, + .data = &mt8173_mutex_driver_data}, {}, }; -MODULE_DEVICE_TABLE(of, ddp_driver_dt_match); +MODULE_DEVICE_TABLE(of, mutex_driver_dt_match); -struct platform_driver mtk_ddp_driver = { - .probe = mtk_ddp_probe, - .remove = mtk_ddp_remove, +struct platform_driver mtk_mutex_driver = { + .probe = mtk_mutex_probe, + .remove = mtk_mutex_remove, .driver = { - .name = "mediatek-ddp", + .name = "mediatek-mutex", .owner = THIS_MODULE, - .of_match_table = ddp_driver_dt_match, + .of_match_table = mutex_driver_dt_match, }, }; + +builtin_platform_driver(mtk_mutex_driver); diff --git a/drivers/spi/spi-altera.c b/drivers/spi/spi-altera.c index 809bfff3690a..cbc4c28c1541 100644 --- a/drivers/spi/spi-altera.c +++ b/drivers/spi/spi-altera.c @@ -189,24 +189,26 @@ static int altera_spi_txrx(struct spi_master *master, /* send the first byte */ altera_spi_tx_word(hw); - } else { - while (hw->count < hw->len) { - altera_spi_tx_word(hw); - for (;;) { - altr_spi_readl(hw, ALTERA_SPI_STATUS, &val); - if (val & ALTERA_SPI_STATUS_RRDY_MSK) - break; + return 1; + } + + while (hw->count < hw->len) { + altera_spi_tx_word(hw); - cpu_relax(); - } + for (;;) { + altr_spi_readl(hw, ALTERA_SPI_STATUS, &val); + if (val & ALTERA_SPI_STATUS_RRDY_MSK) + break; - altera_spi_rx_word(hw); + cpu_relax(); } - spi_finalize_current_transfer(master); + + altera_spi_rx_word(hw); } + spi_finalize_current_transfer(master); - return t->len; + return 0; } static irqreturn_t altera_spi_irq(int irq, void *dev) diff --git a/drivers/spi/spi-cadence.c b/drivers/spi/spi-cadence.c index 70467b9d61ba..a3afd1b9ac56 100644 --- a/drivers/spi/spi-cadence.c +++ b/drivers/spi/spi-cadence.c @@ -115,6 +115,7 @@ struct cdns_spi { void __iomem *regs; struct clk *ref_clk; struct clk *pclk; + unsigned int clk_rate; u32 speed_hz; const u8 *txbuf; u8 *rxbuf; @@ -250,7 +251,7 @@ static void cdns_spi_config_clock_freq(struct spi_device *spi, u32 ctrl_reg, baud_rate_val; unsigned long frequency; - frequency = clk_get_rate(xspi->ref_clk); + frequency = xspi->clk_rate; ctrl_reg = cdns_spi_read(xspi, CDNS_SPI_CR); @@ -558,8 +559,9 @@ static int cdns_spi_probe(struct platform_device *pdev) master->auto_runtime_pm = true; master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH; + xspi->clk_rate = clk_get_rate(xspi->ref_clk); /* Set to default valid value */ - master->max_speed_hz = clk_get_rate(xspi->ref_clk) / 4; + master->max_speed_hz = xspi->clk_rate / 4; xspi->speed_hz = master->max_speed_hz; master->bits_per_word_mask = SPI_BPW_MASK(8); diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c index 9494257e1c33..6d8e0a05a535 100644 --- a/drivers/spi/spi-fsl-spi.c +++ b/drivers/spi/spi-fsl-spi.c @@ -115,14 +115,13 @@ static void fsl_spi_chipselect(struct spi_device *spi, int value) { struct mpc8xxx_spi *mpc8xxx_spi = spi_master_get_devdata(spi->master); struct fsl_spi_platform_data *pdata; - bool pol = spi->mode & SPI_CS_HIGH; struct spi_mpc8xxx_cs *cs = spi->controller_state; pdata = spi->dev.parent->parent->platform_data; if (value == BITBANG_CS_INACTIVE) { if (pdata->cs_control) - pdata->cs_control(spi, !pol); + pdata->cs_control(spi, 
false); } if (value == BITBANG_CS_ACTIVE) { @@ -134,7 +133,7 @@ static void fsl_spi_chipselect(struct spi_device *spi, int value) fsl_spi_change_mode(spi); if (pdata->cs_control) - pdata->cs_control(spi, pol); + pdata->cs_control(spi, true); } } diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c index 512e925d5ea4..881f645661cc 100644 --- a/drivers/spi/spi-geni-qcom.c +++ b/drivers/spi/spi-geni-qcom.c @@ -83,6 +83,7 @@ struct spi_geni_master { spinlock_t lock; int irq; bool cs_flag; + bool abort_failed; }; static int get_spi_clk_cfg(unsigned int speed_hz, @@ -141,8 +142,49 @@ static void handle_fifo_timeout(struct spi_master *spi, spin_unlock_irq(&mas->lock); time_left = wait_for_completion_timeout(&mas->abort_done, HZ); - if (!time_left) + if (!time_left) { dev_err(mas->dev, "Failed to cancel/abort m_cmd\n"); + + /* + * No need for a lock since SPI core has a lock and we never + * access this from an interrupt. + */ + mas->abort_failed = true; + } +} + +static bool spi_geni_is_abort_still_pending(struct spi_geni_master *mas) +{ + struct geni_se *se = &mas->se; + u32 m_irq, m_irq_en; + + if (!mas->abort_failed) + return false; + + /* + * The only known case where a transfer times out and then a cancel + * times out then an abort times out is if something is blocking our + * interrupt handler from running. Avoid starting any new transfers + * until that sorts itself out. + */ + spin_lock_irq(&mas->lock); + m_irq = readl(se->base + SE_GENI_M_IRQ_STATUS); + m_irq_en = readl(se->base + SE_GENI_M_IRQ_EN); + spin_unlock_irq(&mas->lock); + + if (m_irq & m_irq_en) { + dev_err(mas->dev, "Interrupts pending after abort: %#010x\n", + m_irq & m_irq_en); + return true; + } + + /* + * If we're here the problem resolved itself so no need to check more + * on future transfers. 
+ */ + mas->abort_failed = false; + + return false; } static void spi_geni_set_cs(struct spi_device *slv, bool set_flag) @@ -158,10 +200,21 @@ static void spi_geni_set_cs(struct spi_device *slv, bool set_flag) if (set_flag == mas->cs_flag) return; - mas->cs_flag = set_flag; - pm_runtime_get_sync(mas->dev); + + if (spi_geni_is_abort_still_pending(mas)) { + dev_err(mas->dev, "Can't set chip select\n"); + goto exit; + } + spin_lock_irq(&mas->lock); + if (mas->cur_xfer) { + dev_err(mas->dev, "Can't set CS when prev xfer running\n"); + spin_unlock_irq(&mas->lock); + goto exit; + } + + mas->cs_flag = set_flag; reinit_completion(&mas->cs_done); if (set_flag) geni_se_setup_m_cmd(se, SPI_CS_ASSERT, 0); @@ -170,9 +223,12 @@ static void spi_geni_set_cs(struct spi_device *slv, bool set_flag) spin_unlock_irq(&mas->lock); time_left = wait_for_completion_timeout(&mas->cs_done, HZ); - if (!time_left) + if (!time_left) { + dev_warn(mas->dev, "Timeout setting chip select\n"); handle_fifo_timeout(spi, NULL); + } +exit: pm_runtime_put(mas->dev); } @@ -280,6 +336,9 @@ static int spi_geni_prepare_message(struct spi_master *spi, int ret; struct spi_geni_master *mas = spi_master_get_devdata(spi); + if (spi_geni_is_abort_still_pending(mas)) + return -EBUSY; + ret = setup_fifo_params(spi_msg->spi, spi); if (ret) dev_err(mas->dev, "Couldn't select mode %d\n", ret); @@ -354,6 +413,12 @@ static bool geni_spi_handle_tx(struct spi_geni_master *mas) unsigned int bytes_per_fifo_word = geni_byte_per_fifo_word(mas); unsigned int i = 0; + /* Stop the watermark IRQ if nothing to send */ + if (!mas->cur_xfer) { + writel(0, se->base + SE_GENI_TX_WATERMARK_REG); + return false; + } + max_bytes = (mas->tx_fifo_depth - mas->tx_wm) * bytes_per_fifo_word; if (mas->tx_rem_bytes < max_bytes) max_bytes = mas->tx_rem_bytes; @@ -396,6 +461,14 @@ static void geni_spi_handle_rx(struct spi_geni_master *mas) if (rx_last_byte_valid && rx_last_byte_valid < 4) rx_bytes -= bytes_per_fifo_word - rx_last_byte_valid; } + + /* Clear out the FIFO and bail if nowhere to put it */ + if (!mas->cur_xfer) { + for (i = 0; i < DIV_ROUND_UP(rx_bytes, bytes_per_fifo_word); i++) + readl(se->base + SE_GENI_RX_FIFOn); + return; + } + if (mas->rx_rem_bytes < rx_bytes) rx_bytes = mas->rx_rem_bytes; @@ -495,6 +568,9 @@ static int spi_geni_transfer_one(struct spi_master *spi, { struct spi_geni_master *mas = spi_master_get_devdata(spi); + if (spi_geni_is_abort_still_pending(mas)) + return -EBUSY; + /* Terminate and return success for 0 byte length transfer */ if (!xfer->len) return 0; diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c index 471dedf3d339..6017209c6d2f 100644 --- a/drivers/spi/spi-stm32.c +++ b/drivers/spi/spi-stm32.c @@ -493,9 +493,9 @@ static u32 stm32h7_spi_prepare_fthlv(struct stm32_spi *spi, u32 xfer_len) /* align packet size with data registers access */ if (spi->cur_bpw > 8) - fthlv -= (fthlv % 2); /* multiple of 2 */ + fthlv += (fthlv % 2) ? 1 : 0; else - fthlv -= (fthlv % 4); /* multiple of 4 */ + fthlv += (fthlv % 4) ? 
(4 - (fthlv % 4)) : 0; if (!fthlv) fthlv = 1; diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c index 51d7c004fbab..720ab34784c1 100644 --- a/drivers/spi/spi.c +++ b/drivers/spi/spi.c @@ -1108,6 +1108,7 @@ static int spi_transfer_wait(struct spi_controller *ctlr, { struct spi_statistics *statm = &ctlr->statistics; struct spi_statistics *stats = &msg->spi->statistics; + u32 speed_hz = xfer->speed_hz; unsigned long long ms; if (spi_controller_is_slave(ctlr)) { @@ -1116,8 +1117,11 @@ static int spi_transfer_wait(struct spi_controller *ctlr, return -EINTR; } } else { + if (!speed_hz) + speed_hz = 100000; + ms = 8LL * 1000LL * xfer->len; - do_div(ms, xfer->speed_hz); + do_div(ms, speed_hz); ms += ms + 200; /* some tolerance */ if (ms > UINT_MAX) @@ -3378,8 +3382,9 @@ int spi_setup(struct spi_device *spi) if (status) return status; - if (!spi->max_speed_hz || - spi->max_speed_hz > spi->controller->max_speed_hz) + if (spi->controller->max_speed_hz && + (!spi->max_speed_hz || + spi->max_speed_hz > spi->controller->max_speed_hz)) spi->max_speed_hz = spi->controller->max_speed_hz; mutex_lock(&spi->controller->io_mutex); diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/staging/comedi/comedi_fops.c index d99231c737fb..80d74cce2a01 100644 --- a/drivers/staging/comedi/comedi_fops.c +++ b/drivers/staging/comedi/comedi_fops.c @@ -2987,7 +2987,9 @@ static int put_compat_cmd(struct comedi32_cmd_struct __user *cmd32, v32.chanlist_len = cmd->chanlist_len; v32.data = ptr_to_compat(cmd->data); v32.data_len = cmd->data_len; - return copy_to_user(cmd32, &v32, sizeof(v32)); + if (copy_to_user(cmd32, &v32, sizeof(v32))) + return -EFAULT; + return 0; } /* Handle 32-bit COMEDI_CMD ioctl. */ diff --git a/drivers/staging/hikey9xx/hisi-spmi-controller.c b/drivers/staging/hikey9xx/hisi-spmi-controller.c index 861aedd5de48..0d42bc65f39b 100644 --- a/drivers/staging/hikey9xx/hisi-spmi-controller.c +++ b/drivers/staging/hikey9xx/hisi-spmi-controller.c @@ -278,21 +278,24 @@ static int spmi_controller_probe(struct platform_device *pdev) iores = platform_get_resource(pdev, IORESOURCE_MEM, 0); if (!iores) { dev_err(&pdev->dev, "can not get resource!\n"); - return -EINVAL; + ret = -EINVAL; + goto err_put_controller; } spmi_controller->base = devm_ioremap(&pdev->dev, iores->start, resource_size(iores)); if (!spmi_controller->base) { dev_err(&pdev->dev, "can not remap base addr!\n"); - return -EADDRNOTAVAIL; + ret = -EADDRNOTAVAIL; + goto err_put_controller; } ret = of_property_read_u32(pdev->dev.of_node, "spmi-channel", &spmi_controller->channel); if (ret) { dev_err(&pdev->dev, "can not get channel\n"); - return -ENODEV; + ret = -ENODEV; + goto err_put_controller; } platform_set_drvdata(pdev, spmi_controller); @@ -309,9 +312,15 @@ static int spmi_controller_probe(struct platform_device *pdev) ctrl->write_cmd = spmi_write_cmd; ret = spmi_controller_add(ctrl); - if (ret) - dev_err(&pdev->dev, "spmi_add_controller failed with error %d!\n", ret); + if (ret) { + dev_err(&pdev->dev, "spmi_controller_add failed with error %d!\n", ret); + goto err_put_controller; + } + + return 0; +err_put_controller: + spmi_controller_put(ctrl); return ret; } @@ -320,7 +329,7 @@ static int spmi_del_controller(struct platform_device *pdev) struct spmi_controller *ctrl = platform_get_drvdata(pdev); spmi_controller_remove(ctrl); - kfree(ctrl); + spmi_controller_put(ctrl); return 0; } diff --git a/drivers/staging/media/atomisp/pci/atomisp_subdev.c b/drivers/staging/media/atomisp/pci/atomisp_subdev.c index 52b9fb18c87f..b666cb23e5ca 100644 --- 
a/drivers/staging/media/atomisp/pci/atomisp_subdev.c +++ b/drivers/staging/media/atomisp/pci/atomisp_subdev.c @@ -1062,26 +1062,6 @@ static const struct v4l2_ctrl_config ctrl_select_isp_version = { .def = 0, }; -#if 0 /* #ifdef CONFIG_ION */ -/* - * Control for ISP ion device fd - * - * userspace will open ion device and pass the fd to kernel. - * this fd will be used to map shared fd to buffer. - */ -/* V4L2_CID_ATOMISP_ION_DEVICE_FD is not defined */ -static const struct v4l2_ctrl_config ctrl_ion_dev_fd = { - .ops = &ctrl_ops, - .id = V4L2_CID_ATOMISP_ION_DEVICE_FD, - .type = V4L2_CTRL_TYPE_INTEGER, - .name = "Ion Device Fd", - .min = -1, - .max = 1024, - .step = 1, - .def = ION_FD_UNSET -}; -#endif - static void atomisp_init_subdev_pipe(struct atomisp_sub_device *asd, struct atomisp_video_pipe *pipe, enum v4l2_buf_type buf_type) { diff --git a/drivers/staging/mt7621-dma/mtk-hsdma.c b/drivers/staging/mt7621-dma/mtk-hsdma.c index d241349214e7..bc4bb4374313 100644 --- a/drivers/staging/mt7621-dma/mtk-hsdma.c +++ b/drivers/staging/mt7621-dma/mtk-hsdma.c @@ -712,7 +712,7 @@ static int mtk_hsdma_probe(struct platform_device *pdev) ret = dma_async_device_register(dd); if (ret) { dev_err(&pdev->dev, "failed to register dma device\n"); - return ret; + goto err_uninit_hsdma; } ret = of_dma_controller_register(pdev->dev.of_node, @@ -728,6 +728,8 @@ static int mtk_hsdma_probe(struct platform_device *pdev) err_unregister: dma_async_device_unregister(dd); +err_uninit_hsdma: + mtk_hsdma_uninit(hsdma); return ret; } diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c index 6b171fff007b..a5991df23581 100644 --- a/drivers/target/target_core_user.c +++ b/drivers/target/target_core_user.c @@ -562,8 +562,6 @@ tcmu_get_block_page(struct tcmu_dev *udev, uint32_t dbi) static inline void tcmu_free_cmd(struct tcmu_cmd *tcmu_cmd) { - if (tcmu_cmd->se_cmd) - tcmu_cmd->se_cmd->priv = NULL; kfree(tcmu_cmd->dbi); kmem_cache_free(tcmu_cmd_cache, tcmu_cmd); } @@ -1174,11 +1172,12 @@ tcmu_queue_cmd(struct se_cmd *se_cmd) return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; mutex_lock(&udev->cmdr_lock); - se_cmd->priv = tcmu_cmd; if (!(se_cmd->transport_state & CMD_T_ABORTED)) ret = queue_cmd_ring(tcmu_cmd, &scsi_ret); if (ret < 0) tcmu_free_cmd(tcmu_cmd); + else + se_cmd->priv = tcmu_cmd; mutex_unlock(&udev->cmdr_lock); return scsi_ret; } @@ -1241,6 +1240,7 @@ tcmu_tmr_notify(struct se_device *se_dev, enum tcm_tmreq_table tmf, list_del_init(&cmd->queue_entry); tcmu_free_cmd(cmd); + se_cmd->priv = NULL; target_complete_cmd(se_cmd, SAM_STAT_TASK_ABORTED); unqueued = true; } @@ -1332,6 +1332,7 @@ static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry * } done: + se_cmd->priv = NULL; if (read_len_valid) { pr_debug("read_len = %d\n", read_len); target_complete_cmd_with_length(cmd->se_cmd, @@ -1478,6 +1479,7 @@ static void tcmu_check_expired_queue_cmd(struct tcmu_cmd *cmd) se_cmd = cmd->se_cmd; tcmu_free_cmd(cmd); + se_cmd->priv = NULL; target_complete_cmd(se_cmd, SAM_STAT_TASK_SET_FULL); } @@ -1592,6 +1594,7 @@ static void run_qfull_queue(struct tcmu_dev *udev, bool fail) * removed then LIO core will do the right thing and * fail the retry. */ + tcmu_cmd->se_cmd->priv = NULL; target_complete_cmd(tcmu_cmd->se_cmd, SAM_STAT_BUSY); tcmu_free_cmd(tcmu_cmd); continue; @@ -1605,6 +1608,7 @@ static void run_qfull_queue(struct tcmu_dev *udev, bool fail) * Ignore scsi_ret for now. target_complete_cmd * drops it. 
*/ + tcmu_cmd->se_cmd->priv = NULL; target_complete_cmd(tcmu_cmd->se_cmd, SAM_STAT_CHECK_CONDITION); tcmu_free_cmd(tcmu_cmd); @@ -2212,6 +2216,7 @@ static void tcmu_reset_ring(struct tcmu_dev *udev, u8 err_level) if (!test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags)) { WARN_ON(!cmd->se_cmd); list_del_init(&cmd->queue_entry); + cmd->se_cmd->priv = NULL; if (err_level == 1) { /* * Userspace was not able to start the diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c index 44e15d7fb2f0..66d6f1d06f21 100644 --- a/drivers/target/target_core_xcopy.c +++ b/drivers/target/target_core_xcopy.c @@ -46,60 +46,83 @@ static int target_xcopy_gen_naa_ieee(struct se_device *dev, unsigned char *buf) return 0; } -struct xcopy_dev_search_info { - const unsigned char *dev_wwn; - struct se_device *found_dev; -}; - +/** + * target_xcopy_locate_se_dev_e4_iter - compare XCOPY NAA device identifiers + * + * @se_dev: device being considered for match + * @dev_wwn: XCOPY requested NAA dev_wwn + * @return: 1 on match, 0 on no-match + */ static int target_xcopy_locate_se_dev_e4_iter(struct se_device *se_dev, - void *data) + const unsigned char *dev_wwn) { - struct xcopy_dev_search_info *info = data; unsigned char tmp_dev_wwn[XCOPY_NAA_IEEE_REGEX_LEN]; int rc; - if (!se_dev->dev_attrib.emulate_3pc) + if (!se_dev->dev_attrib.emulate_3pc) { + pr_debug("XCOPY: emulate_3pc disabled on se_dev %p\n", se_dev); return 0; + } memset(&tmp_dev_wwn[0], 0, XCOPY_NAA_IEEE_REGEX_LEN); target_xcopy_gen_naa_ieee(se_dev, &tmp_dev_wwn[0]); - rc = memcmp(&tmp_dev_wwn[0], info->dev_wwn, XCOPY_NAA_IEEE_REGEX_LEN); - if (rc != 0) - return 0; - - info->found_dev = se_dev; - pr_debug("XCOPY 0xe4: located se_dev: %p\n", se_dev); - - rc = target_depend_item(&se_dev->dev_group.cg_item); + rc = memcmp(&tmp_dev_wwn[0], dev_wwn, XCOPY_NAA_IEEE_REGEX_LEN); if (rc != 0) { - pr_err("configfs_depend_item attempt failed: %d for se_dev: %p\n", - rc, se_dev); - return rc; + pr_debug("XCOPY: skip non-matching: %*ph\n", + XCOPY_NAA_IEEE_REGEX_LEN, tmp_dev_wwn); + return 0; } + pr_debug("XCOPY 0xe4: located se_dev: %p\n", se_dev); - pr_debug("Called configfs_depend_item for se_dev: %p se_dev->se_dev_group: %p\n", - se_dev, &se_dev->dev_group); return 1; } -static int target_xcopy_locate_se_dev_e4(const unsigned char *dev_wwn, - struct se_device **found_dev) +static int target_xcopy_locate_se_dev_e4(struct se_session *sess, + const unsigned char *dev_wwn, + struct se_device **_found_dev, + struct percpu_ref **_found_lun_ref) { - struct xcopy_dev_search_info info; - int ret; - - memset(&info, 0, sizeof(info)); - info.dev_wwn = dev_wwn; - - ret = target_for_each_device(target_xcopy_locate_se_dev_e4_iter, &info); - if (ret == 1) { - *found_dev = info.found_dev; - return 0; - } else { - pr_debug_ratelimited("Unable to locate 0xe4 descriptor for EXTENDED_COPY\n"); - return -EINVAL; + struct se_dev_entry *deve; + struct se_node_acl *nacl; + struct se_lun *this_lun = NULL; + struct se_device *found_dev = NULL; + + /* cmd with NULL sess indicates no associated $FABRIC_MOD */ + if (!sess) + goto err_out; + + pr_debug("XCOPY 0xe4: searching for: %*ph\n", + XCOPY_NAA_IEEE_REGEX_LEN, dev_wwn); + + nacl = sess->se_node_acl; + rcu_read_lock(); + hlist_for_each_entry_rcu(deve, &nacl->lun_entry_hlist, link) { + struct se_device *this_dev; + int rc; + + this_lun = rcu_dereference(deve->se_lun); + this_dev = rcu_dereference_raw(this_lun->lun_se_dev); + + rc = target_xcopy_locate_se_dev_e4_iter(this_dev, dev_wwn); + if (rc) { + if 
(percpu_ref_tryget_live(&this_lun->lun_ref)) + found_dev = this_dev; + break; + } } + rcu_read_unlock(); + if (found_dev == NULL) + goto err_out; + + pr_debug("lun_ref held for se_dev: %p se_dev->se_dev_group: %p\n", + found_dev, &found_dev->dev_group); + *_found_dev = found_dev; + *_found_lun_ref = &this_lun->lun_ref; + return 0; +err_out: + pr_debug_ratelimited("Unable to locate 0xe4 descriptor for EXTENDED_COPY\n"); + return -EINVAL; } static int target_xcopy_parse_tiddesc_e4(struct se_cmd *se_cmd, struct xcopy_op *xop, @@ -246,12 +269,16 @@ static int target_xcopy_parse_target_descriptors(struct se_cmd *se_cmd, switch (xop->op_origin) { case XCOL_SOURCE_RECV_OP: - rc = target_xcopy_locate_se_dev_e4(xop->dst_tid_wwn, - &xop->dst_dev); + rc = target_xcopy_locate_se_dev_e4(se_cmd->se_sess, + xop->dst_tid_wwn, + &xop->dst_dev, + &xop->remote_lun_ref); break; case XCOL_DEST_RECV_OP: - rc = target_xcopy_locate_se_dev_e4(xop->src_tid_wwn, - &xop->src_dev); + rc = target_xcopy_locate_se_dev_e4(se_cmd->se_sess, + xop->src_tid_wwn, + &xop->src_dev, + &xop->remote_lun_ref); break; default: pr_err("XCOPY CSCD descriptor IDs not found in CSCD list - " @@ -391,18 +418,12 @@ static int xcopy_pt_get_cmd_state(struct se_cmd *se_cmd) static void xcopy_pt_undepend_remotedev(struct xcopy_op *xop) { - struct se_device *remote_dev; - if (xop->op_origin == XCOL_SOURCE_RECV_OP) - remote_dev = xop->dst_dev; + pr_debug("putting dst lun_ref for %p\n", xop->dst_dev); else - remote_dev = xop->src_dev; - - pr_debug("Calling configfs_undepend_item for" - " remote_dev: %p remote_dev->dev_group: %p\n", - remote_dev, &remote_dev->dev_group.cg_item); + pr_debug("putting src lun_ref for %p\n", xop->src_dev); - target_undepend_item(&remote_dev->dev_group.cg_item); + percpu_ref_put(xop->remote_lun_ref); } static void xcopy_pt_release_cmd(struct se_cmd *se_cmd) diff --git a/drivers/target/target_core_xcopy.h b/drivers/target/target_core_xcopy.h index c56a1bde9417..e5f20005179a 100644 --- a/drivers/target/target_core_xcopy.h +++ b/drivers/target/target_core_xcopy.h @@ -27,6 +27,7 @@ struct xcopy_op { struct se_device *dst_dev; unsigned char dst_tid_wwn[XCOPY_NAA_IEEE_REGEX_LEN]; unsigned char local_dev_wwn[XCOPY_NAA_IEEE_REGEX_LEN]; + struct percpu_ref *remote_lun_ref; sector_t src_lba; sector_t dst_lba; diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c index 8b7f941a9bb7..b8c4159bc32d 100644 --- a/drivers/thunderbolt/icm.c +++ b/drivers/thunderbolt/icm.c @@ -2316,7 +2316,7 @@ static int icm_usb4_switch_nvm_authenticate_status(struct tb_switch *sw, if (auth && auth->reply.route_hi == sw->config.route_hi && auth->reply.route_lo == sw->config.route_lo) { - tb_dbg(tb, "NVM_AUTH found for %llx flags 0x%#x status %#x\n", + tb_dbg(tb, "NVM_AUTH found for %llx flags %#x status %#x\n", tb_route(sw), auth->reply.hdr.flags, auth->reply.status); if (auth->reply.hdr.flags & ICM_FLAGS_ERROR) ret = -EIO; diff --git a/drivers/tty/Kconfig b/drivers/tty/Kconfig index 47a6e42f0d04..e15cd6b5bb99 100644 --- a/drivers/tty/Kconfig +++ b/drivers/tty/Kconfig @@ -401,6 +401,20 @@ config MIPS_EJTAG_FDC_KGDB_CHAN help FDC channel number to use for KGDB. +config NULL_TTY + tristate "NULL TTY driver" + help + Say Y here if you want a NULL TTY which simply discards messages. + + This is useful to allow userspace applications which expect a console + device to work without modifications even when no console is + available or desired. 
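The target_core_xcopy rework above replaces configfs_depend_item() pinning with a reference taken directly on the LUN's percpu_ref during the RCU-protected walk of the ACL's LUN list. As a minimal sketch of that lookup pattern (struct foo and foo_lookup() are hypothetical, not driver code): an object is handed out only if percpu_ref_tryget_live() succeeds, i.e. before percpu_ref_kill() has begun teardown, and the caller later drops it with percpu_ref_put().

#include <linux/percpu-refcount.h>
#include <linux/rculist.h>

struct foo {
	u32 key;
	struct percpu_ref ref;	/* guards object lifetime */
	struct list_head node;	/* on an RCU-managed list */
};

static struct foo *foo_lookup(struct list_head *entries, u32 key)
{
	struct foo *f, *found = NULL;

	rcu_read_lock();
	list_for_each_entry_rcu(f, entries, node) {
		if (f->key != key)
			continue;
		/* fails once percpu_ref_kill() has run on f->ref */
		if (percpu_ref_tryget_live(&f->ref))
			found = f;
		break;
	}
	rcu_read_unlock();

	return found;	/* caller: percpu_ref_put(&found->ref) when done */
}

This mirrors the tryget-live/put pairing that xcopy_pt_undepend_remotedev() now performs via xop->remote_lun_ref.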
+ + In order to use this driver, you should redirect the console to this + TTY, or boot the kernel with console=ttynull. + + If unsure, say N. + config TRACE_ROUTER tristate "Trace data router for MIPI P1149.7 cJTAG standard" depends on TRACE_SINK diff --git a/drivers/tty/Makefile b/drivers/tty/Makefile index 3c1c5a9240a7..b3ccae932660 100644 --- a/drivers/tty/Makefile +++ b/drivers/tty/Makefile @@ -2,7 +2,7 @@ obj-$(CONFIG_TTY) += tty_io.o n_tty.o tty_ioctl.o tty_ldisc.o \ tty_buffer.o tty_port.o tty_mutex.o \ tty_ldsem.o tty_baudrate.o tty_jobctrl.o \ - n_null.o ttynull.o + n_null.o obj-$(CONFIG_LEGACY_PTYS) += pty.o obj-$(CONFIG_UNIX98_PTYS) += pty.o obj-$(CONFIG_AUDIT) += tty_audit.o @@ -25,6 +25,7 @@ obj-$(CONFIG_ISI) += isicom.o obj-$(CONFIG_MOXA_INTELLIO) += moxa.o obj-$(CONFIG_MOXA_SMARTIO) += mxser.o obj-$(CONFIG_NOZOMI) += nozomi.o +obj-$(CONFIG_NULL_TTY) += ttynull.o obj-$(CONFIG_ROCKETPORT) += rocket.o obj-$(CONFIG_SYNCLINK_GT) += synclink_gt.o obj-$(CONFIG_PPC_EPAPR_HV_BYTECHAN) += ehv_bytechan.o diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c index 118b29912289..e0c00a1b0763 100644 --- a/drivers/tty/serial/mvebu-uart.c +++ b/drivers/tty/serial/mvebu-uart.c @@ -648,6 +648,14 @@ static void wait_for_xmitr(struct uart_port *port) (val & STAT_TX_RDY(port)), 1, 10000); } +static void wait_for_xmite(struct uart_port *port) +{ + u32 val; + + readl_poll_timeout_atomic(port->membase + UART_STAT, val, + (val & STAT_TX_EMP), 1, 10000); +} + static void mvebu_uart_console_putchar(struct uart_port *port, int ch) { wait_for_xmitr(port); @@ -675,7 +683,7 @@ static void mvebu_uart_console_write(struct console *co, const char *s, uart_console_write(port, s, count, mvebu_uart_console_putchar); - wait_for_xmitr(port); + wait_for_xmite(port); if (ier) writel(ier, port->membase + UART_CTRL(port)); diff --git a/drivers/tty/serial/sifive.c b/drivers/tty/serial/sifive.c index 1066eebe3b28..328d5a78792f 100644 --- a/drivers/tty/serial/sifive.c +++ b/drivers/tty/serial/sifive.c @@ -1000,6 +1000,7 @@ static int sifive_serial_probe(struct platform_device *pdev) /* Set up clock divider */ ssp->clkin_rate = clk_get_rate(ssp->clk); ssp->baud_rate = SIFIVE_DEFAULT_BAUD_RATE; + ssp->port.uartclk = ssp->baud_rate * 16; __ssp_update_div(ssp); platform_set_drvdata(pdev, ssp); diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c index 8034489337d7..4a208a95e921 100644 --- a/drivers/tty/tty_io.c +++ b/drivers/tty/tty_io.c @@ -143,9 +143,8 @@ LIST_HEAD(tty_drivers); /* linked list of tty drivers */ DEFINE_MUTEX(tty_mutex); static ssize_t tty_read(struct file *, char __user *, size_t, loff_t *); -static ssize_t tty_write(struct file *, const char __user *, size_t, loff_t *); -ssize_t redirected_tty_write(struct file *, const char __user *, - size_t, loff_t *); +static ssize_t tty_write(struct kiocb *, struct iov_iter *); +ssize_t redirected_tty_write(struct kiocb *, struct iov_iter *); static __poll_t tty_poll(struct file *, poll_table *); static int tty_open(struct inode *, struct file *); long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg); @@ -438,8 +437,7 @@ static ssize_t hung_up_tty_read(struct file *file, char __user *buf, return 0; } -static ssize_t hung_up_tty_write(struct file *file, const char __user *buf, - size_t count, loff_t *ppos) +static ssize_t hung_up_tty_write(struct kiocb *iocb, struct iov_iter *from) { return -EIO; } @@ -478,7 +476,8 @@ static void tty_show_fdinfo(struct seq_file *m, struct file *file) static const struct file_operations 
tty_fops = { .llseek = no_llseek, .read = tty_read, - .write = tty_write, + .write_iter = tty_write, + .splice_write = iter_file_splice_write, .poll = tty_poll, .unlocked_ioctl = tty_ioctl, .compat_ioctl = tty_compat_ioctl, @@ -491,7 +490,8 @@ static const struct file_operations tty_fops = { static const struct file_operations console_fops = { .llseek = no_llseek, .read = tty_read, - .write = redirected_tty_write, + .write_iter = redirected_tty_write, + .splice_write = iter_file_splice_write, .poll = tty_poll, .unlocked_ioctl = tty_ioctl, .compat_ioctl = tty_compat_ioctl, @@ -503,7 +503,7 @@ static const struct file_operations console_fops = { static const struct file_operations hung_up_tty_fops = { .llseek = no_llseek, .read = hung_up_tty_read, - .write = hung_up_tty_write, + .write_iter = hung_up_tty_write, .poll = hung_up_tty_poll, .unlocked_ioctl = hung_up_tty_ioctl, .compat_ioctl = hung_up_tty_compat_ioctl, @@ -606,9 +606,9 @@ static void __tty_hangup(struct tty_struct *tty, int exit_session) /* This breaks for file handles being sent over AF_UNIX sockets ? */ list_for_each_entry(priv, &tty->tty_files, list) { filp = priv->file; - if (filp->f_op->write == redirected_tty_write) + if (filp->f_op->write_iter == redirected_tty_write) cons_filp = filp; - if (filp->f_op->write != tty_write) + if (filp->f_op->write_iter != tty_write) continue; closecount++; __tty_fasync(-1, filp, 0); /* can't block */ @@ -901,9 +901,9 @@ static inline ssize_t do_tty_write( ssize_t (*write)(struct tty_struct *, struct file *, const unsigned char *, size_t), struct tty_struct *tty, struct file *file, - const char __user *buf, - size_t count) + struct iov_iter *from) { + size_t count = iov_iter_count(from); ssize_t ret, written = 0; unsigned int chunk; @@ -955,14 +955,20 @@ static inline ssize_t do_tty_write( size_t size = count; if (size > chunk) size = chunk; + ret = -EFAULT; - if (copy_from_user(tty->write_buf, buf, size)) + if (copy_from_iter(tty->write_buf, size, from) != size) break; + ret = write(tty, file, tty->write_buf, size); if (ret <= 0) break; + + /* FIXME! Have Al check this! */ + if (ret != size) + iov_iter_revert(from, size-ret); + written += ret; - buf += ret; count -= ret; if (!count) break; @@ -1022,9 +1028,9 @@ void tty_write_message(struct tty_struct *tty, char *msg) * write method will not be invoked in parallel for each device. 
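 * (With the conversion to ->write_iter above, user data reaches
 * do_tty_write() as an iov_iter: it is pulled with copy_from_iter(),
 * and a short line-discipline write rewinds the iterator with
 * iov_iter_revert() so the caller's position stays consistent.)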
*/ -static ssize_t tty_write(struct file *file, const char __user *buf, - size_t count, loff_t *ppos) +static ssize_t tty_write(struct kiocb *iocb, struct iov_iter *from) { + struct file *file = iocb->ki_filp; struct tty_struct *tty = file_tty(file); struct tty_ldisc *ld; ssize_t ret; @@ -1038,17 +1044,16 @@ static ssize_t tty_write(struct file *file, const char __user *buf, tty_err(tty, "missing write_room method\n"); ld = tty_ldisc_ref_wait(tty); if (!ld) - return hung_up_tty_write(file, buf, count, ppos); + return hung_up_tty_write(iocb, from); if (!ld->ops->write) ret = -EIO; else - ret = do_tty_write(ld->ops->write, tty, file, buf, count); + ret = do_tty_write(ld->ops->write, tty, file, from); tty_ldisc_deref(ld); return ret; } -ssize_t redirected_tty_write(struct file *file, const char __user *buf, - size_t count, loff_t *ppos) +ssize_t redirected_tty_write(struct kiocb *iocb, struct iov_iter *iter) { struct file *p = NULL; @@ -1059,11 +1064,11 @@ ssize_t redirected_tty_write(struct file *file, const char __user *buf, if (p) { ssize_t res; - res = vfs_write(p, buf, count, &p->f_pos); + res = vfs_iocb_iter_write(p, iocb, iter); fput(p); return res; } - return tty_write(file, buf, count, ppos); + return tty_write(iocb, iter); } /* @@ -2295,7 +2300,7 @@ static int tioccons(struct file *file) { if (!capable(CAP_SYS_ADMIN)) return -EPERM; - if (file->f_op->write == redirected_tty_write) { + if (file->f_op->write_iter == redirected_tty_write) { struct file *f; spin_lock(&redirect_lock); f = redirect; diff --git a/drivers/tty/ttynull.c b/drivers/tty/ttynull.c index eced70ec54e1..17f05b7eb6d3 100644 --- a/drivers/tty/ttynull.c +++ b/drivers/tty/ttynull.c @@ -2,13 +2,6 @@ /* * Copyright (C) 2019 Axis Communications AB * - * The console is useful for userspace applications which expect a console - * device to work without modifications even when no console is available - * or desired. - * - * In order to use this driver, you should redirect the console to this - * TTY, or boot the kernel with console=ttynull. 
- * * Based on ttyprintk.c: * Copyright (C) 2010 Samo Pogacnik */ @@ -66,17 +59,6 @@ static struct console ttynull_console = { .device = ttynull_device, }; -void __init register_ttynull_console(void) -{ - if (!ttynull_driver) - return; - - if (add_preferred_console(ttynull_console.name, 0, NULL)) - return; - - register_console(&ttynull_console); -} - static int __init ttynull_init(void) { struct tty_driver *driver; diff --git a/drivers/usb/cdns3/cdns3-imx.c b/drivers/usb/cdns3/cdns3-imx.c index 22a56c4dce67..7990fee03fe4 100644 --- a/drivers/usb/cdns3/cdns3-imx.c +++ b/drivers/usb/cdns3/cdns3-imx.c @@ -185,7 +185,11 @@ static int cdns_imx_probe(struct platform_device *pdev) } data->num_clks = ARRAY_SIZE(imx_cdns3_core_clks); - data->clks = (struct clk_bulk_data *)imx_cdns3_core_clks; + data->clks = devm_kmemdup(dev, imx_cdns3_core_clks, + sizeof(imx_cdns3_core_clks), GFP_KERNEL); + if (!data->clks) + return -ENOMEM; + ret = devm_clk_bulk_get(dev, data->num_clks, data->clks); if (ret) return ret; @@ -214,20 +218,16 @@ err: return ret; } -static int cdns_imx_remove_core(struct device *dev, void *data) -{ - struct platform_device *pdev = to_platform_device(dev); - - platform_device_unregister(pdev); - - return 0; -} - static int cdns_imx_remove(struct platform_device *pdev) { struct device *dev = &pdev->dev; + struct cdns_imx *data = dev_get_drvdata(dev); - device_for_each_child(dev, NULL, cdns_imx_remove_core); + pm_runtime_get_sync(dev); + of_platform_depopulate(dev); + clk_bulk_disable_unprepare(data->num_clks, data->clks); + pm_runtime_disable(dev); + pm_runtime_put_noidle(dev); platform_set_drvdata(pdev, NULL); return 0; diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c index 9e12152ea46b..8b7bc10b6e8b 100644 --- a/drivers/usb/chipidea/ci_hdrc_imx.c +++ b/drivers/usb/chipidea/ci_hdrc_imx.c @@ -139,9 +139,13 @@ static struct imx_usbmisc_data *usbmisc_get_init_data(struct device *dev) misc_pdev = of_find_device_by_node(args.np); of_node_put(args.np); - if (!misc_pdev || !platform_get_drvdata(misc_pdev)) + if (!misc_pdev) return ERR_PTR(-EPROBE_DEFER); + if (!platform_get_drvdata(misc_pdev)) { + put_device(&misc_pdev->dev); + return ERR_PTR(-EPROBE_DEFER); + } data->dev = &misc_pdev->dev; /* diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c index f52f1bc0559f..781905745812 100644 --- a/drivers/usb/class/cdc-acm.c +++ b/drivers/usb/class/cdc-acm.c @@ -1895,6 +1895,10 @@ static const struct usb_device_id acm_ids[] = { { USB_DEVICE(0x04d8, 0xfd08), .driver_info = IGNORE_DEVICE, }, + + { USB_DEVICE(0x04d8, 0xf58b), + .driver_info = IGNORE_DEVICE, + }, #endif /*Samsung phone in firmware update mode */ diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c index 02d0cfd23bb2..508b1c3f8b73 100644 --- a/drivers/usb/class/cdc-wdm.c +++ b/drivers/usb/class/cdc-wdm.c @@ -465,13 +465,23 @@ static int service_outstanding_interrupt(struct wdm_device *desc) if (!desc->resp_count || !--desc->resp_count) goto out; + if (test_bit(WDM_DISCONNECTING, &desc->flags)) { + rv = -ENODEV; + goto out; + } + if (test_bit(WDM_RESETTING, &desc->flags)) { + rv = -EIO; + goto out; + } + set_bit(WDM_RESPONDING, &desc->flags); spin_unlock_irq(&desc->iuspin); rv = usb_submit_urb(desc->response, GFP_KERNEL); spin_lock_irq(&desc->iuspin); if (rv) { - dev_err(&desc->intf->dev, - "usb_submit_urb failed with result %d\n", rv); + if (!test_bit(WDM_DISCONNECTING, &desc->flags)) + dev_err(&desc->intf->dev, + "usb_submit_urb failed with result %d\n", rv); /* make 
sure the next notification triggers a submit */ clear_bit(WDM_RESPONDING, &desc->flags); @@ -1027,9 +1037,9 @@ static void wdm_disconnect(struct usb_interface *intf) wake_up_all(&desc->wait); mutex_lock(&desc->rlock); mutex_lock(&desc->wlock); - kill_urbs(desc); cancel_work_sync(&desc->rxwork); cancel_work_sync(&desc->service_outs_intr); + kill_urbs(desc); mutex_unlock(&desc->wlock); mutex_unlock(&desc->rlock); diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c index 67cbd42421be..134dc2005ce9 100644 --- a/drivers/usb/class/usblp.c +++ b/drivers/usb/class/usblp.c @@ -274,8 +274,25 @@ static int usblp_ctrl_msg(struct usblp *usblp, int request, int type, int dir, i #define usblp_reset(usblp)\ usblp_ctrl_msg(usblp, USBLP_REQ_RESET, USB_TYPE_CLASS, USB_DIR_OUT, USB_RECIP_OTHER, 0, NULL, 0) -#define usblp_hp_channel_change_request(usblp, channel, buffer) \ - usblp_ctrl_msg(usblp, USBLP_REQ_HP_CHANNEL_CHANGE_REQUEST, USB_TYPE_VENDOR, USB_DIR_IN, USB_RECIP_INTERFACE, channel, buffer, 1) +static int usblp_hp_channel_change_request(struct usblp *usblp, int channel, u8 *new_channel) +{ + u8 *buf; + int ret; + + buf = kzalloc(1, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + ret = usblp_ctrl_msg(usblp, USBLP_REQ_HP_CHANNEL_CHANGE_REQUEST, + USB_TYPE_VENDOR, USB_DIR_IN, USB_RECIP_INTERFACE, + channel, buf, 1); + if (ret == 0) + *new_channel = buf[0]; + + kfree(buf); + + return ret; +} /* * See the description for usblp_select_alts() below for the usage diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c index 60886a7464c3..ad5a0f405a75 100644 --- a/drivers/usb/core/hcd.c +++ b/drivers/usb/core/hcd.c @@ -1649,14 +1649,12 @@ static void __usb_hcd_giveback_urb(struct urb *urb) urb->status = status; /* * This function can be called in task context inside another remote - * coverage collection section, but KCOV doesn't support that kind of + * coverage collection section, but kcov doesn't support that kind of * recursion yet. Only collect coverage in softirq context for now.
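 * (The kcov_remote_start_usb_softirq()/kcov_remote_stop_softirq()
 * helpers used below fold that in_serving_softirq() check into the
 * helpers themselves, keeping this call site unconditional.)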
*/ - if (in_serving_softirq()) - kcov_remote_start_usb((u64)urb->dev->bus->busnum); + kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum); urb->complete(urb); - if (in_serving_softirq()) - kcov_remote_stop(); + kcov_remote_stop_softirq(); usb_anchor_resume_wakeups(anchor); atomic_dec(&urb->use_count); diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h index 2f95f08ca511..1b241f937d8f 100644 --- a/drivers/usb/dwc3/core.h +++ b/drivers/usb/dwc3/core.h @@ -285,6 +285,7 @@ /* Global USB2 PHY Vendor Control Register */ #define DWC3_GUSB2PHYACC_NEWREGREQ BIT(25) +#define DWC3_GUSB2PHYACC_DONE BIT(24) #define DWC3_GUSB2PHYACC_BUSY BIT(23) #define DWC3_GUSB2PHYACC_WRITE BIT(22) #define DWC3_GUSB2PHYACC_ADDR(n) (n << 16) diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c index 417e05381b5d..bdf1f98dfad8 100644 --- a/drivers/usb/dwc3/dwc3-meson-g12a.c +++ b/drivers/usb/dwc3/dwc3-meson-g12a.c @@ -754,7 +754,7 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev) ret = priv->drvdata->setup_regmaps(priv, base); if (ret) - return ret; + goto err_disable_clks; if (priv->vbus) { ret = regulator_enable(priv->vbus); diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c index 78cb4db8a6e4..ee44321fee38 100644 --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -1763,6 +1763,8 @@ static int dwc3_gadget_ep_dequeue(struct usb_ep *ep, list_for_each_entry_safe(r, t, &dep->started_list, list) dwc3_gadget_move_cancelled_request(r); + dep->flags &= ~DWC3_EP_WAIT_TRANSFER_COMPLETE; + goto out; } } @@ -2083,6 +2085,7 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on, int suspend) static void dwc3_gadget_disable_irq(struct dwc3 *dwc); static void __dwc3_gadget_stop(struct dwc3 *dwc); +static int __dwc3_gadget_start(struct dwc3 *dwc); static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on) { @@ -2145,6 +2148,8 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on) dwc->ev_buf->lpos = (dwc->ev_buf->lpos + count) % dwc->ev_buf->length; } + } else { + __dwc3_gadget_start(dwc); } ret = dwc3_gadget_run_stop(dwc, is_on, false); @@ -2319,10 +2324,6 @@ static int dwc3_gadget_start(struct usb_gadget *g, } dwc->gadget_driver = driver; - - if (pm_runtime_active(dwc->dev)) - __dwc3_gadget_start(dwc); - spin_unlock_irqrestore(&dwc->lock, flags); return 0; @@ -2348,13 +2349,6 @@ static int dwc3_gadget_stop(struct usb_gadget *g) unsigned long flags; spin_lock_irqsave(&dwc->lock, flags); - - if (pm_runtime_suspended(dwc->dev)) - goto out; - - __dwc3_gadget_stop(dwc); - -out: dwc->gadget_driver = NULL; spin_unlock_irqrestore(&dwc->lock, flags); diff --git a/drivers/usb/dwc3/ulpi.c b/drivers/usb/dwc3/ulpi.c index aa213c9815f6..f23f4c9a557e 100644 --- a/drivers/usb/dwc3/ulpi.c +++ b/drivers/usb/dwc3/ulpi.c @@ -7,6 +7,8 @@ * Author: Heikki Krogerus <[email protected]> */ +#include <linux/delay.h> +#include <linux/time64.h> #include <linux/ulpi/regs.h> #include "core.h" @@ -17,14 +19,28 @@ DWC3_GUSB2PHYACC_ADDR(ULPI_ACCESS_EXTENDED) | \ DWC3_GUSB2PHYACC_EXTEND_ADDR(a) : DWC3_GUSB2PHYACC_ADDR(a)) -static int dwc3_ulpi_busyloop(struct dwc3 *dwc) +#define DWC3_ULPI_BASE_DELAY DIV_ROUND_UP(NSEC_PER_SEC, 60000000L) + +static int dwc3_ulpi_busyloop(struct dwc3 *dwc, u8 addr, bool read) { - unsigned int count = 1000; + unsigned long ns = 5L * DWC3_ULPI_BASE_DELAY; + unsigned int count = 10000; u32 reg; + if (addr >= ULPI_EXT_VENDOR_SPECIFIC) + ns += DWC3_ULPI_BASE_DELAY; + + if (read) + ns += DWC3_ULPI_BASE_DELAY; + + reg = 
dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)); + if (reg & DWC3_GUSB2PHYCFG_SUSPHY) + usleep_range(1000, 1200); + while (count--) { + ndelay(ns); reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYACC(0)); - if (!(reg & DWC3_GUSB2PHYACC_BUSY)) + if (reg & DWC3_GUSB2PHYACC_DONE) return 0; cpu_relax(); } @@ -38,16 +54,10 @@ static int dwc3_ulpi_read(struct device *dev, u8 addr) u32 reg; int ret; - reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)); - if (reg & DWC3_GUSB2PHYCFG_SUSPHY) { - reg &= ~DWC3_GUSB2PHYCFG_SUSPHY; - dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg); - } - reg = DWC3_GUSB2PHYACC_NEWREGREQ | DWC3_ULPI_ADDR(addr); dwc3_writel(dwc->regs, DWC3_GUSB2PHYACC(0), reg); - ret = dwc3_ulpi_busyloop(dwc); + ret = dwc3_ulpi_busyloop(dwc, addr, true); if (ret) return ret; @@ -61,17 +71,11 @@ static int dwc3_ulpi_write(struct device *dev, u8 addr, u8 val) struct dwc3 *dwc = dev_get_drvdata(dev); u32 reg; - reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)); - if (reg & DWC3_GUSB2PHYCFG_SUSPHY) { - reg &= ~DWC3_GUSB2PHYCFG_SUSPHY; - dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg); - } - reg = DWC3_GUSB2PHYACC_NEWREGREQ | DWC3_ULPI_ADDR(addr); reg |= DWC3_GUSB2PHYACC_WRITE | val; dwc3_writel(dwc->regs, DWC3_GUSB2PHYACC(0), reg); - return dwc3_ulpi_busyloop(dwc); + return dwc3_ulpi_busyloop(dwc, addr, false); } static const struct ulpi_ops dwc3_ulpi_ops = { diff --git a/drivers/usb/gadget/Kconfig b/drivers/usb/gadget/Kconfig index 7e47e6223089..2d152571a7de 100644 --- a/drivers/usb/gadget/Kconfig +++ b/drivers/usb/gadget/Kconfig @@ -265,6 +265,7 @@ config USB_CONFIGFS_NCM depends on NET select USB_U_ETHER select USB_F_NCM + select CRC32 help NCM is an advanced protocol for Ethernet encapsulation, allows grouping of several ethernet frames into one USB transfer and @@ -314,6 +315,7 @@ config USB_CONFIGFS_EEM depends on NET select USB_U_ETHER select USB_F_EEM + select CRC32 help CDC EEM is a newer USB standard that is somewhat simpler than CDC ECM and therefore can be supported by more hardware. 
Technically ECM and diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c index c6d455f2bb92..1a556a628971 100644 --- a/drivers/usb/gadget/composite.c +++ b/drivers/usb/gadget/composite.c @@ -392,8 +392,11 @@ int usb_function_deactivate(struct usb_function *function) spin_lock_irqsave(&cdev->lock, flags); - if (cdev->deactivations == 0) + if (cdev->deactivations == 0) { + spin_unlock_irqrestore(&cdev->lock, flags); status = usb_gadget_deactivate(cdev->gadget); + spin_lock_irqsave(&cdev->lock, flags); + } if (status == 0) cdev->deactivations++; @@ -424,8 +427,11 @@ int usb_function_activate(struct usb_function *function) status = -EINVAL; else { cdev->deactivations--; - if (cdev->deactivations == 0) + if (cdev->deactivations == 0) { + spin_unlock_irqrestore(&cdev->lock, flags); status = usb_gadget_activate(cdev->gadget); + spin_lock_irqsave(&cdev->lock, flags); + } } spin_unlock_irqrestore(&cdev->lock, flags); diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c index 56051bb97349..36ffb43f9c1a 100644 --- a/drivers/usb/gadget/configfs.c +++ b/drivers/usb/gadget/configfs.c @@ -221,9 +221,16 @@ static ssize_t gadget_dev_desc_bcdUSB_store(struct config_item *item, static ssize_t gadget_dev_desc_UDC_show(struct config_item *item, char *page) { - char *udc_name = to_gadget_info(item)->composite.gadget_driver.udc_name; + struct gadget_info *gi = to_gadget_info(item); + char *udc_name; + int ret; + + mutex_lock(&gi->lock); + udc_name = gi->composite.gadget_driver.udc_name; + ret = sprintf(page, "%s\n", udc_name ?: ""); + mutex_unlock(&gi->lock); - return sprintf(page, "%s\n", udc_name ?: ""); + return ret; } static int unregister_gadget(struct gadget_info *gi) @@ -1248,9 +1255,9 @@ static void purge_configs_funcs(struct gadget_info *gi) cfg = container_of(c, struct config_usb_cfg, c); - list_for_each_entry_safe(f, tmp, &c->functions, list) { + list_for_each_entry_safe_reverse(f, tmp, &c->functions, list) { - list_move_tail(&f->list, &cfg->func_list); + list_move(&f->list, &cfg->func_list); if (f->unbind) { dev_dbg(&gi->cdev.gadget->dev, "unbind function '%s'/%p\n", @@ -1536,7 +1543,7 @@ static const struct usb_gadget_driver configfs_driver_template = { .suspend = configfs_composite_suspend, .resume = configfs_composite_resume, - .max_speed = USB_SPEED_SUPER, + .max_speed = USB_SPEED_SUPER_PLUS, .driver = { .owner = THIS_MODULE, .name = "configfs-gadget", @@ -1576,7 +1583,7 @@ static struct config_group *gadgets_make( gi->composite.unbind = configfs_do_nothing; gi->composite.suspend = NULL; gi->composite.resume = NULL; - gi->composite.max_speed = USB_SPEED_SUPER; + gi->composite.max_speed = USB_SPEED_SUPER_PLUS; spin_lock_init(&gi->spinlock); mutex_init(&gi->lock); diff --git a/drivers/usb/gadget/function/f_printer.c b/drivers/usb/gadget/function/f_printer.c index 64a4112068fc..2f1eb2e81d30 100644 --- a/drivers/usb/gadget/function/f_printer.c +++ b/drivers/usb/gadget/function/f_printer.c @@ -1162,6 +1162,7 @@ fail_tx_reqs: printer_req_free(dev->in_ep, req); } + usb_free_all_descriptors(f); return ret; } diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c index 3633df6d7610..5d960b6603b6 100644 --- a/drivers/usb/gadget/function/f_uac2.c +++ b/drivers/usb/gadget/function/f_uac2.c @@ -271,7 +271,7 @@ static struct usb_endpoint_descriptor fs_epout_desc = { .bEndpointAddress = USB_DIR_OUT, .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC, - .wMaxPacketSize = cpu_to_le16(1023), + /* .wMaxPacketSize = DYNAMIC 
*/ .bInterval = 1, }; @@ -280,7 +280,7 @@ static struct usb_endpoint_descriptor hs_epout_desc = { .bDescriptorType = USB_DT_ENDPOINT, .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC, - .wMaxPacketSize = cpu_to_le16(1024), + /* .wMaxPacketSize = DYNAMIC */ .bInterval = 4, }; @@ -348,7 +348,7 @@ static struct usb_endpoint_descriptor fs_epin_desc = { .bEndpointAddress = USB_DIR_IN, .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC, - .wMaxPacketSize = cpu_to_le16(1023), + /* .wMaxPacketSize = DYNAMIC */ .bInterval = 1, }; @@ -357,7 +357,7 @@ static struct usb_endpoint_descriptor hs_epin_desc = { .bDescriptorType = USB_DT_ENDPOINT, .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC, - .wMaxPacketSize = cpu_to_le16(1024), + /* .wMaxPacketSize = DYNAMIC */ .bInterval = 4, }; @@ -444,12 +444,28 @@ struct cntrl_range_lay3 { __le32 dRES; } __packed; -static void set_ep_max_packet_size(const struct f_uac2_opts *uac2_opts, +static int set_ep_max_packet_size(const struct f_uac2_opts *uac2_opts, struct usb_endpoint_descriptor *ep_desc, - unsigned int factor, bool is_playback) + enum usb_device_speed speed, bool is_playback) { int chmask, srate, ssize; - u16 max_packet_size; + u16 max_size_bw, max_size_ep; + unsigned int factor; + + switch (speed) { + case USB_SPEED_FULL: + max_size_ep = 1023; + factor = 1000; + break; + + case USB_SPEED_HIGH: + max_size_ep = 1024; + factor = 8000; + break; + + default: + return -EINVAL; + } if (is_playback) { chmask = uac2_opts->p_chmask; @@ -461,10 +477,12 @@ static void set_ep_max_packet_size(const struct f_uac2_opts *uac2_opts, ssize = uac2_opts->c_ssize; } - max_packet_size = num_channels(chmask) * ssize * + max_size_bw = num_channels(chmask) * ssize * DIV_ROUND_UP(srate, factor / (1 << (ep_desc->bInterval - 1))); - ep_desc->wMaxPacketSize = cpu_to_le16(min_t(u16, max_packet_size, - le16_to_cpu(ep_desc->wMaxPacketSize))); + ep_desc->wMaxPacketSize = cpu_to_le16(min_t(u16, max_size_bw, + max_size_ep)); + + return 0; } /* Use macro to overcome line length limitation */ @@ -670,10 +688,33 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn) } /* Calculate wMaxPacketSize according to audio bandwidth */ - set_ep_max_packet_size(uac2_opts, &fs_epin_desc, 1000, true); - set_ep_max_packet_size(uac2_opts, &fs_epout_desc, 1000, false); - set_ep_max_packet_size(uac2_opts, &hs_epin_desc, 8000, true); - set_ep_max_packet_size(uac2_opts, &hs_epout_desc, 8000, false); + ret = set_ep_max_packet_size(uac2_opts, &fs_epin_desc, USB_SPEED_FULL, + true); + if (ret < 0) { + dev_err(dev, "%s:%d Error!\n", __func__, __LINE__); + return ret; + } + + ret = set_ep_max_packet_size(uac2_opts, &fs_epout_desc, USB_SPEED_FULL, + false); + if (ret < 0) { + dev_err(dev, "%s:%d Error!\n", __func__, __LINE__); + return ret; + } + + ret = set_ep_max_packet_size(uac2_opts, &hs_epin_desc, USB_SPEED_HIGH, + true); + if (ret < 0) { + dev_err(dev, "%s:%d Error!\n", __func__, __LINE__); + return ret; + } + + ret = set_ep_max_packet_size(uac2_opts, &hs_epout_desc, USB_SPEED_HIGH, + false); + if (ret < 0) { + dev_err(dev, "%s:%d Error!\n", __func__, __LINE__); + return ret; + } if (EPOUT_EN(uac2_opts)) { agdev->out_ep = usb_ep_autoconfig(gadget, &fs_epout_desc); diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c index 31ea76adcc0d..c019f2b0c0af 100644 --- a/drivers/usb/gadget/function/u_ether.c +++ b/drivers/usb/gadget/function/u_ether.c @@ -45,9 +45,10 @@ #define UETH__VERSION "29-May-2008" /* Experiments 
show that both Linux and Windows hosts allow up to 16k - * frame sizes. Set the max size to 15k+52 to prevent allocating 32k + * frame sizes. Set the max MTU size to 15k+52 to prevent allocating 32k * blocks and still have efficient handling. */ -#define GETHER_MAX_ETH_FRAME_LEN 15412 +#define GETHER_MAX_MTU_SIZE 15412 +#define GETHER_MAX_ETH_FRAME_LEN (GETHER_MAX_MTU_SIZE + ETH_HLEN) struct eth_dev { /* lock is held while accessing port_usb @@ -786,7 +787,7 @@ struct eth_dev *gether_setup_name(struct usb_gadget *g, /* MTU range: 14 - 15412 */ net->min_mtu = ETH_HLEN; - net->max_mtu = GETHER_MAX_ETH_FRAME_LEN; + net->max_mtu = GETHER_MAX_MTU_SIZE; dev->gadget = g; SET_NETDEV_DEV(net, &g->dev); @@ -848,7 +849,7 @@ struct net_device *gether_setup_name_default(const char *netname) /* MTU range: 14 - 15412 */ net->min_mtu = ETH_HLEN; - net->max_mtu = GETHER_MAX_ETH_FRAME_LEN; + net->max_mtu = GETHER_MAX_MTU_SIZE; return net; } diff --git a/drivers/usb/gadget/legacy/acm_ms.c b/drivers/usb/gadget/legacy/acm_ms.c index 59be2d8417c9..e8033e5f0c18 100644 --- a/drivers/usb/gadget/legacy/acm_ms.c +++ b/drivers/usb/gadget/legacy/acm_ms.c @@ -200,8 +200,10 @@ static int acm_ms_bind(struct usb_composite_dev *cdev) struct usb_descriptor_header *usb_desc; usb_desc = usb_otg_descriptor_alloc(gadget); - if (!usb_desc) + if (!usb_desc) { + status = -ENOMEM; goto fail_string_ids; + } usb_otg_descriptor_init(gadget, usb_desc); otg_desc[0] = usb_desc; otg_desc[1] = NULL; diff --git a/drivers/usb/gadget/udc/Kconfig b/drivers/usb/gadget/udc/Kconfig index 1a12aab208b4..8c614bb86c66 100644 --- a/drivers/usb/gadget/udc/Kconfig +++ b/drivers/usb/gadget/udc/Kconfig @@ -90,7 +90,7 @@ config USB_BCM63XX_UDC config USB_FSL_USB2 tristate "Freescale Highspeed USB DR Peripheral Controller" - depends on FSL_SOC || ARCH_MXC + depends on FSL_SOC help Some of Freescale PowerPC and i.MX processors have a High Speed Dual-Role(DR) USB controller, which supports device mode. 
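The u_ether change above is a units fix worth spelling out: net_device->max_mtu bounds the L3 payload only, while the frame buffer must additionally hold the 14-byte Ethernet header. A short sketch of the relationship, assuming a hypothetical helper gether_set_mtu_range() (the constants mirror the driver):

#include <linux/etherdevice.h>
#include <linux/netdevice.h>

#define GETHER_MAX_MTU_SIZE		15412
#define GETHER_MAX_ETH_FRAME_LEN	(GETHER_MAX_MTU_SIZE + ETH_HLEN)

/*
 * MTU counts payload only; each frame adds ETH_HLEN (14) bytes of
 * header.  Advertising the frame length as max_mtu, as the old code
 * did, permitted an MTU whose frames overflow the 15k+52 buffer bound.
 */
static void gether_set_mtu_range(struct net_device *net)
{
	net->min_mtu = ETH_HLEN;		/* as in gether_setup_name() */
	net->max_mtu = GETHER_MAX_MTU_SIZE;	/* payload bound only */
}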
diff --git a/drivers/usb/gadget/udc/Makefile b/drivers/usb/gadget/udc/Makefile index f5a7ce28aecd..a21f2224e7eb 100644 --- a/drivers/usb/gadget/udc/Makefile +++ b/drivers/usb/gadget/udc/Makefile @@ -23,7 +23,6 @@ obj-$(CONFIG_USB_ATMEL_USBA) += atmel_usba_udc.o obj-$(CONFIG_USB_BCM63XX_UDC) += bcm63xx_udc.o obj-$(CONFIG_USB_FSL_USB2) += fsl_usb2_udc.o fsl_usb2_udc-y := fsl_udc_core.o -fsl_usb2_udc-$(CONFIG_ARCH_MXC) += fsl_mxc_udc.o obj-$(CONFIG_USB_TEGRA_XUDC) += tegra-xudc.o obj-$(CONFIG_USB_M66592) += m66592-udc.o obj-$(CONFIG_USB_R8A66597) += r8a66597-udc.o diff --git a/drivers/usb/gadget/udc/aspeed-vhub/epn.c b/drivers/usb/gadget/udc/aspeed-vhub/epn.c index 0bd6b20435b8..02d8bfae58fb 100644 --- a/drivers/usb/gadget/udc/aspeed-vhub/epn.c +++ b/drivers/usb/gadget/udc/aspeed-vhub/epn.c @@ -420,7 +420,10 @@ static void ast_vhub_stop_active_req(struct ast_vhub_ep *ep, u32 state, reg, loops; /* Stop DMA activity */ - writel(0, ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT); + if (ep->epn.desc_mode) + writel(VHUB_EP_DMA_CTRL_RESET, ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT); + else + writel(0, ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT); /* Wait for it to complete */ for (loops = 0; loops < 1000; loops++) { diff --git a/drivers/usb/gadget/udc/bdc/Kconfig b/drivers/usb/gadget/udc/bdc/Kconfig index 3e88c7670b2e..fb01ff47b64c 100644 --- a/drivers/usb/gadget/udc/bdc/Kconfig +++ b/drivers/usb/gadget/udc/bdc/Kconfig @@ -17,7 +17,7 @@ if USB_BDC_UDC comment "Platform Support" config USB_BDC_PCI tristate "BDC support for PCIe based platforms" - depends on USB_PCI + depends on USB_PCI && BROKEN default USB_BDC_UDC help Enable support for platforms which have BDC connected through PCIe, such as Lego3 FPGA platform. diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c index 5b5cfeb6c14a..ea114f922ccf 100644 --- a/drivers/usb/gadget/udc/core.c +++ b/drivers/usb/gadget/udc/core.c @@ -659,8 +659,7 @@ EXPORT_SYMBOL_GPL(usb_gadget_vbus_disconnect); * * Enables the D+ (or potentially D-) pullup. The host will start * enumerating this gadget when the pullup is active and a VBUS session - * is active (the link is powered). This pullup is always enabled unless - * usb_gadget_disconnect() has been used to disable it. + * is active (the link is powered). * * Returns zero on success, else negative errno. 
*/ @@ -1530,10 +1529,13 @@ static ssize_t soft_connect_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t n) { struct usb_udc *udc = container_of(dev, struct usb_udc, dev); + ssize_t ret; + mutex_lock(&udc_lock); if (!udc->driver) { dev_err(dev, "soft-connect without a gadget driver\n"); - return -EOPNOTSUPP; + ret = -EOPNOTSUPP; + goto out; } if (sysfs_streq(buf, "connect")) { @@ -1544,10 +1546,14 @@ static ssize_t soft_connect_store(struct device *dev, usb_gadget_udc_stop(udc); } else { dev_err(dev, "unsupported command '%s'\n", buf); - return -EINVAL; + ret = -EINVAL; + goto out; } - return n; + ret = n; +out: + mutex_unlock(&udc_lock); + return ret; } static DEVICE_ATTR_WO(soft_connect); diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c index ab5e978b5052..57067763b100 100644 --- a/drivers/usb/gadget/udc/dummy_hcd.c +++ b/drivers/usb/gadget/udc/dummy_hcd.c @@ -2118,9 +2118,21 @@ static int dummy_hub_control( dum_hcd->port_status &= ~USB_PORT_STAT_POWER; set_link_state(dum_hcd); break; - default: + case USB_PORT_FEAT_ENABLE: + case USB_PORT_FEAT_C_ENABLE: + case USB_PORT_FEAT_C_SUSPEND: + /* Not allowed for USB-3 */ + if (hcd->speed == HCD_USB3) + goto error; + fallthrough; + case USB_PORT_FEAT_C_CONNECTION: + case USB_PORT_FEAT_C_RESET: dum_hcd->port_status &= ~(1 << wValue); set_link_state(dum_hcd); + break; + default: + /* Disallow INDICATOR and C_OVER_CURRENT */ + goto error; } break; case GetHubDescriptor: @@ -2258,17 +2270,20 @@ static int dummy_hub_control( } fallthrough; case USB_PORT_FEAT_RESET: + if (!(dum_hcd->port_status & USB_PORT_STAT_CONNECTION)) + break; /* if it's already enabled, disable */ if (hcd->speed == HCD_USB3) { - dum_hcd->port_status = 0; dum_hcd->port_status = (USB_SS_PORT_STAT_POWER | USB_PORT_STAT_CONNECTION | USB_PORT_STAT_RESET); - } else + } else { dum_hcd->port_status &= ~(USB_PORT_STAT_ENABLE | USB_PORT_STAT_LOW_SPEED | USB_PORT_STAT_HIGH_SPEED); + dum_hcd->port_status |= USB_PORT_STAT_RESET; + } /* * We want to reset device status. All but the * Self powered feature @@ -2280,19 +2295,19 @@ static int dummy_hub_control( * interval? Is it still 50msec as for HS? */ dum_hcd->re_timeout = jiffies + msecs_to_jiffies(50); - fallthrough; - default: - if (hcd->speed == HCD_USB3) { - if ((dum_hcd->port_status & - USB_SS_PORT_STAT_POWER) != 0) { - dum_hcd->port_status |= (1 << wValue); - } - } else - if ((dum_hcd->port_status & - USB_PORT_STAT_POWER) != 0) { - dum_hcd->port_status |= (1 << wValue); - } set_link_state(dum_hcd); + break; + case USB_PORT_FEAT_C_CONNECTION: + case USB_PORT_FEAT_C_RESET: + case USB_PORT_FEAT_C_ENABLE: + case USB_PORT_FEAT_C_SUSPEND: + /* Not allowed for USB-3, and ignored for USB-2 */ + if (hcd->speed == HCD_USB3) + goto error; + break; + default: + /* Disallow TEST, INDICATOR, and C_OVER_CURRENT */ + goto error; } break; case GetPortErrorCount: diff --git a/drivers/usb/gadget/udc/fsl_mxc_udc.c b/drivers/usb/gadget/udc/fsl_mxc_udc.c deleted file mode 100644 index 5a321992decc..000000000000 --- a/drivers/usb/gadget/udc/fsl_mxc_udc.c +++ /dev/null @@ -1,122 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0+ -/* - * Copyright (C) 2009 - * Guennadi Liakhovetski, DENX Software Engineering, <[email protected]> - * - * Description: - * Helper routines for i.MX3x SoCs from Freescale, needed by the fsl_usb2_udc.c - * driver to function correctly on these systems. 
- */ -#include <linux/clk.h> -#include <linux/delay.h> -#include <linux/err.h> -#include <linux/fsl_devices.h> -#include <linux/mod_devicetable.h> -#include <linux/platform_device.h> -#include <linux/io.h> - -#include "fsl_usb2_udc.h" - -static struct clk *mxc_ahb_clk; -static struct clk *mxc_per_clk; -static struct clk *mxc_ipg_clk; - -/* workaround ENGcm09152 for i.MX35 */ -#define MX35_USBPHYCTRL_OFFSET 0x600 -#define USBPHYCTRL_OTGBASE_OFFSET 0x8 -#define USBPHYCTRL_EVDO (1 << 23) - -int fsl_udc_clk_init(struct platform_device *pdev) -{ - struct fsl_usb2_platform_data *pdata; - unsigned long freq; - int ret; - - pdata = dev_get_platdata(&pdev->dev); - - mxc_ipg_clk = devm_clk_get(&pdev->dev, "ipg"); - if (IS_ERR(mxc_ipg_clk)) { - dev_err(&pdev->dev, "clk_get(\"ipg\") failed\n"); - return PTR_ERR(mxc_ipg_clk); - } - - mxc_ahb_clk = devm_clk_get(&pdev->dev, "ahb"); - if (IS_ERR(mxc_ahb_clk)) { - dev_err(&pdev->dev, "clk_get(\"ahb\") failed\n"); - return PTR_ERR(mxc_ahb_clk); - } - - mxc_per_clk = devm_clk_get(&pdev->dev, "per"); - if (IS_ERR(mxc_per_clk)) { - dev_err(&pdev->dev, "clk_get(\"per\") failed\n"); - return PTR_ERR(mxc_per_clk); - } - - clk_prepare_enable(mxc_ipg_clk); - clk_prepare_enable(mxc_ahb_clk); - clk_prepare_enable(mxc_per_clk); - - /* make sure USB_CLK is running at 60 MHz +/- 1000 Hz */ - if (!strcmp(pdev->id_entry->name, "imx-udc-mx27")) { - freq = clk_get_rate(mxc_per_clk); - if (pdata->phy_mode != FSL_USB2_PHY_ULPI && - (freq < 59999000 || freq > 60001000)) { - dev_err(&pdev->dev, "USB_CLK=%lu, should be 60MHz\n", freq); - ret = -EINVAL; - goto eclkrate; - } - } - - return 0; - -eclkrate: - clk_disable_unprepare(mxc_ipg_clk); - clk_disable_unprepare(mxc_ahb_clk); - clk_disable_unprepare(mxc_per_clk); - mxc_per_clk = NULL; - return ret; -} - -int fsl_udc_clk_finalize(struct platform_device *pdev) -{ - struct fsl_usb2_platform_data *pdata = dev_get_platdata(&pdev->dev); - int ret = 0; - - /* workaround ENGcm09152 for i.MX35 */ - if (pdata->workaround & FLS_USB2_WORKAROUND_ENGCM09152) { - unsigned int v; - struct resource *res = platform_get_resource - (pdev, IORESOURCE_MEM, 0); - void __iomem *phy_regs = ioremap(res->start + - MX35_USBPHYCTRL_OFFSET, 512); - if (!phy_regs) { - dev_err(&pdev->dev, "ioremap for phy address fails\n"); - ret = -EINVAL; - goto ioremap_err; - } - - v = readl(phy_regs + USBPHYCTRL_OTGBASE_OFFSET); - writel(v | USBPHYCTRL_EVDO, - phy_regs + USBPHYCTRL_OTGBASE_OFFSET); - - iounmap(phy_regs); - } - - -ioremap_err: - /* ULPI transceivers don't need usbpll */ - if (pdata->phy_mode == FSL_USB2_PHY_ULPI) { - clk_disable_unprepare(mxc_per_clk); - mxc_per_clk = NULL; - } - - return ret; -} - -void fsl_udc_clk_release(void) -{ - if (mxc_per_clk) - clk_disable_unprepare(mxc_per_clk); - clk_disable_unprepare(mxc_ahb_clk); - clk_disable_unprepare(mxc_ipg_clk); -} diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c index e358ae17d51e..1926b328b6aa 100644 --- a/drivers/usb/host/ehci-hcd.c +++ b/drivers/usb/host/ehci-hcd.c @@ -574,6 +574,7 @@ static int ehci_run (struct usb_hcd *hcd) struct ehci_hcd *ehci = hcd_to_ehci (hcd); u32 temp; u32 hcc_params; + int rc; hcd->uses_new_polling = 1; @@ -629,9 +630,20 @@ static int ehci_run (struct usb_hcd *hcd) down_write(&ehci_cf_port_reset_rwsem); ehci->rh_state = EHCI_RH_RUNNING; ehci_writel(ehci, FLAG_CF, &ehci->regs->configured_flag); + + /* Wait until HC become operational */ ehci_readl(ehci, &ehci->regs->command); /* unblock posted writes */ msleep(5); + rc = ehci_handshake(ehci, 
&ehci->regs->status, STS_HALT, 0, 100 * 1000); + up_write(&ehci_cf_port_reset_rwsem); + + if (rc) { + ehci_err(ehci, "USB %x.%x, controller refused to start: %d\n", + ((ehci->sbrn & 0xf0)>>4), (ehci->sbrn & 0x0f), rc); + return rc; + } + ehci->last_periodic_enable = ktime_get_real(); temp = HC_VERSION(ehci, ehci_readl(ehci, &ehci->caps->hc_capbase)); diff --git a/drivers/usb/host/ehci-hub.c b/drivers/usb/host/ehci-hub.c index 087402aec5cb..9f9ab5ccea88 100644 --- a/drivers/usb/host/ehci-hub.c +++ b/drivers/usb/host/ehci-hub.c @@ -345,6 +345,9 @@ static int ehci_bus_suspend (struct usb_hcd *hcd) unlink_empty_async_suspended(ehci); + /* Some Synopsys controllers mistakenly leave IAA turned on */ + ehci_writel(ehci, STS_IAA, &ehci->regs->status); + /* Any IAA cycle that started before the suspend is now invalid */ end_iaa_cycle(ehci); ehci_handle_start_intr_unlinks(ehci); diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index 5677b81c0915..cf0c93a90200 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -2931,6 +2931,8 @@ static void queue_trb(struct xhci_hcd *xhci, struct xhci_ring *ring, trb->field[0] = cpu_to_le32(field1); trb->field[1] = cpu_to_le32(field2); trb->field[2] = cpu_to_le32(field3); + /* make sure TRB is fully written before giving it to the controller */ + wmb(); trb->field[3] = cpu_to_le32(field4); trace_xhci_queue_trb(ring, trb); diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c index 934be1686352..50bb91b6a4b8 100644 --- a/drivers/usb/host/xhci-tegra.c +++ b/drivers/usb/host/xhci-tegra.c @@ -623,6 +623,13 @@ static void tegra_xusb_mbox_handle(struct tegra_xusb *tegra, enable); if (err < 0) break; + + /* + * wait 500us for LFPS detector to be disabled before + * sending ACK + */ + if (!enable) + usleep_range(500, 1000); } if (err < 0) { diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c index 91ab81c3fc79..e86940571b4c 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -4770,19 +4770,19 @@ static u16 xhci_calculate_u1_timeout(struct xhci_hcd *xhci, { unsigned long long timeout_ns; + if (xhci->quirks & XHCI_INTEL_HOST) + timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc); + else + timeout_ns = udev->u1_params.sel; + /* Prevent U1 if service interval is shorter than U1 exit latency */ if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) { - if (xhci_service_interval_to_ns(desc) <= udev->u1_params.mel) { + if (xhci_service_interval_to_ns(desc) <= timeout_ns) { dev_dbg(&udev->dev, "Disable U1, ESIT shorter than exit latency\n"); return USB3_LPM_DISABLED; } } - if (xhci->quirks & XHCI_INTEL_HOST) - timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc); - else - timeout_ns = udev->u1_params.sel; - /* The U1 timeout is encoded in 1us intervals. * Don't return a timeout of zero, because that's USB3_LPM_DISABLED. 
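 * (The ESIT check above now runs after timeout_ns has been computed,
 * so periodic endpoints are compared against the actual U1 timeout
 * rather than against udev->u1_params.mel; the U2 hunk below makes
 * the same change for u2_params.)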
*/ @@ -4834,19 +4834,19 @@ static u16 xhci_calculate_u2_timeout(struct xhci_hcd *xhci, { unsigned long long timeout_ns; + if (xhci->quirks & XHCI_INTEL_HOST) + timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc); + else + timeout_ns = udev->u2_params.sel; + /* Prevent U2 if service interval is shorter than U2 exit latency */ if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) { - if (xhci_service_interval_to_ns(desc) <= udev->u2_params.mel) { + if (xhci_service_interval_to_ns(desc) <= timeout_ns) { dev_dbg(&udev->dev, "Disable U2, ESIT shorter than exit latency\n"); return USB3_LPM_DISABLED; } } - if (xhci->quirks & XHCI_INTEL_HOST) - timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc); - else - timeout_ns = udev->u2_params.sel; - /* The U2 timeout is encoded in 256us intervals */ timeout_ns = DIV_ROUND_UP_ULL(timeout_ns, 256 * 1000); /* If the necessary timeout value is bigger than what we can set in the diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c index 73ebfa6e9715..c640f98d20c5 100644 --- a/drivers/usb/misc/yurex.c +++ b/drivers/usb/misc/yurex.c @@ -496,6 +496,9 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer, timeout = schedule_timeout(YUREX_WRITE_TIMEOUT); finish_wait(&dev->waitq, &wait); + /* make sure URB is idle after timeout or (spurious) CMD_ACK */ + usb_kill_urb(dev->cntl_urb); + mutex_unlock(&dev->io_mutex); if (retval < 0) { diff --git a/drivers/usb/serial/iuu_phoenix.c b/drivers/usb/serial/iuu_phoenix.c index f1201d4de297..e8f06b41a503 100644 --- a/drivers/usb/serial/iuu_phoenix.c +++ b/drivers/usb/serial/iuu_phoenix.c @@ -532,23 +532,29 @@ static int iuu_uart_flush(struct usb_serial_port *port) struct device *dev = &port->dev; int i; int status; - u8 rxcmd = IUU_UART_RX; + u8 *rxcmd; struct iuu_private *priv = usb_get_serial_port_data(port); if (iuu_led(port, 0xF000, 0, 0, 0xFF) < 0) return -EIO; + rxcmd = kmalloc(1, GFP_KERNEL); + if (!rxcmd) + return -ENOMEM; + + rxcmd[0] = IUU_UART_RX; + for (i = 0; i < 2; i++) { - status = bulk_immediate(port, &rxcmd, 1); + status = bulk_immediate(port, rxcmd, 1); if (status != IUU_OPERATION_OK) { dev_dbg(dev, "%s - uart_flush_write error\n", __func__); - return status; + goto out_free; } status = read_immediate(port, &priv->len, 1); if (status != IUU_OPERATION_OK) { dev_dbg(dev, "%s - uart_flush_read error\n", __func__); - return status; + goto out_free; } if (priv->len > 0) { @@ -556,12 +562,16 @@ static int iuu_uart_flush(struct usb_serial_port *port) status = read_immediate(port, priv->buf, priv->len); if (status != IUU_OPERATION_OK) { dev_dbg(dev, "%s - uart_flush_read error\n", __func__); - return status; + goto out_free; } } } dev_dbg(dev, "%s - uart_flush_read OK!\n", __func__); iuu_led(port, 0, 0xF000, 0, 0xFF); + +out_free: + kfree(rxcmd); + return status; } diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index 2c21e34235bb..3fe959104311 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -1117,6 +1117,8 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff), .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) }, + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0xff, 0x30) }, /* EM160R-GL */ + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0, 0) }, { 
USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x30) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10), @@ -2057,6 +2059,7 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0105, 0xff), /* Fibocom NL678 series */ .driver_info = RSVD(6) }, { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) }, /* Fibocom NL668-AM/NL652-EU (laptop MBIM) */ + { USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) }, /* LongSung M5710 */ { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) }, /* GosunCn GM500 RNDIS */ { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) }, /* GosunCn GM500 MBIM */ { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1406, 0xff) }, /* GosunCn GM500 ECM/NCM */ diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h index 870e9cf3d5dc..f9677a5ec31b 100644 --- a/drivers/usb/storage/unusual_uas.h +++ b/drivers/usb/storage/unusual_uas.h @@ -91,6 +91,13 @@ UNUSUAL_DEV(0x152d, 0x0578, 0x0000, 0x9999, US_FL_BROKEN_FUA), /* Reported-by: Thinh Nguyen <[email protected]> */ +UNUSUAL_DEV(0x154b, 0xf00b, 0x0000, 0x9999, + "PNY", + "Pro Elite SSD", + USB_SC_DEVICE, USB_PR_DEVICE, NULL, + US_FL_NO_ATA_1X), + +/* Reported-by: Thinh Nguyen <[email protected]> */ UNUSUAL_DEV(0x154b, 0xf00d, 0x0000, 0x9999, "PNY", "Pro Elite SSD", diff --git a/drivers/usb/typec/altmodes/Kconfig b/drivers/usb/typec/altmodes/Kconfig index 187690fd1a5b..60d375e9c3c7 100644 --- a/drivers/usb/typec/altmodes/Kconfig +++ b/drivers/usb/typec/altmodes/Kconfig @@ -20,6 +20,6 @@ config TYPEC_NVIDIA_ALTMODE to enable support for VirtualLink devices with NVIDIA GPUs. To compile this driver as a module, choose M here: the - module will be called typec_displayport. + module will be called typec_nvidia. 
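The iuu_phoenix fix above encodes a general USB rule: buffers handed to transfer functions must be heap memory, because the USB core may DMA directly from them; an on-stack byte such as the old u8 rxcmd is not DMA-safe. A minimal sketch of the pattern (send_one_byte_cmd() and the endpoint number are hypothetical):

#include <linux/slab.h>
#include <linux/usb.h>
#include <linux/usb/serial.h>

/* Stage a single command byte in kmalloc()'d, DMA-capable memory
 * before handing it to usb_bulk_msg(). */
static int send_one_byte_cmd(struct usb_serial_port *port, u8 cmd)
{
	struct usb_device *udev = port->serial->dev;
	u8 *buf;
	int ret;

	buf = kmalloc(1, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	buf[0] = cmd;
	ret = usb_bulk_msg(udev, usb_sndbulkpipe(udev, 1),
			   buf, 1, NULL, 1000);
	kfree(buf);
	return ret;
}

The usblp_hp_channel_change_request() conversion from a macro to a function earlier in this section follows the same rule.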
endmenu diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c index ebfd3113a9a8..8f77669f9cf4 100644 --- a/drivers/usb/typec/class.c +++ b/drivers/usb/typec/class.c @@ -766,6 +766,7 @@ int typec_partner_set_num_altmodes(struct typec_partner *partner, int num_altmod return ret; sysfs_notify(&partner->dev.kobj, NULL, "number_of_alternate_modes"); + kobject_uevent(&partner->dev.kobj, KOBJ_CHANGE); return 0; } @@ -923,6 +924,7 @@ int typec_plug_set_num_altmodes(struct typec_plug *plug, int num_altmodes) return ret; sysfs_notify(&plug->dev.kobj, NULL, "number_of_alternate_modes"); + kobject_uevent(&plug->dev.kobj, KOBJ_CHANGE); return 0; } diff --git a/drivers/usb/typec/mux/intel_pmc_mux.c b/drivers/usb/typec/mux/intel_pmc_mux.c index cf37a59ce130..46a25b8db72e 100644 --- a/drivers/usb/typec/mux/intel_pmc_mux.c +++ b/drivers/usb/typec/mux/intel_pmc_mux.c @@ -207,10 +207,21 @@ static int pmc_usb_mux_dp_hpd(struct pmc_usb_port *port, struct typec_displayport_data *dp) { u8 msg[2] = { }; + int ret; msg[0] = PMC_USB_DP_HPD; msg[0] |= port->usb3_port << PMC_USB_MSG_USB3_PORT_SHIFT; + /* Configure HPD first if HPD,IRQ comes together */ + if (!IOM_PORT_HPD_ASSERTED(port->iom_status) && + dp->status & DP_STATUS_IRQ_HPD && + dp->status & DP_STATUS_HPD_STATE) { + msg[1] = PMC_USB_DP_HPD_LVL; + ret = pmc_usb_command(port, msg, sizeof(msg)); + if (ret) + return ret; + } + if (dp->status & DP_STATUS_IRQ_HPD) msg[1] = PMC_USB_DP_HPD_IRQ; diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c index 66cde5e5f796..3209b5ddd30c 100644 --- a/drivers/usb/usbip/vhci_hcd.c +++ b/drivers/usb/usbip/vhci_hcd.c @@ -396,6 +396,8 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, default: usbip_dbg_vhci_rh(" ClearPortFeature: default %x\n", wValue); + if (wValue >= 32) + goto error; vhci_hcd->port_status[rhport] &= ~(1 << wValue); break; } diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 531a00d703cd..c8784dfafdd7 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -863,6 +863,7 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock) size_t len, total_len = 0; int err; struct vhost_net_ubuf_ref *ubufs; + struct ubuf_info *ubuf; bool zcopy_used; int sent_pkts = 0; @@ -895,9 +896,7 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock) /* use msg_control to pass vhost zerocopy ubuf info to skb */ if (zcopy_used) { - struct ubuf_info *ubuf; ubuf = nvq->ubuf_info + nvq->upend_idx; - vq->heads[nvq->upend_idx].id = cpu_to_vhost32(vq, head); vq->heads[nvq->upend_idx].len = VHOST_DMA_IN_PROGRESS; ubuf->callback = vhost_zerocopy_callback; @@ -927,7 +926,8 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock) err = sock->ops->sendmsg(sock, &msg, len); if (unlikely(err < 0)) { if (zcopy_used) { - vhost_net_ubuf_put(ubufs); + if (vq->heads[ubuf->desc].len == VHOST_DMA_IN_PROGRESS) + vhost_net_ubuf_put(ubufs); nvq->upend_idx = ((unsigned)nvq->upend_idx - 1) % UIO_MAXIOV; } diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c index a483cec31d5c..5e78fb719602 100644 --- a/drivers/vhost/vsock.c +++ b/drivers/vhost/vsock.c @@ -30,7 +30,12 @@ #define VHOST_VSOCK_PKT_WEIGHT 256 enum { - VHOST_VSOCK_FEATURES = VHOST_FEATURES, + VHOST_VSOCK_FEATURES = VHOST_FEATURES | + (1ULL << VIRTIO_F_ACCESS_PLATFORM) +}; + +enum { + VHOST_VSOCK_BACKEND_FEATURES = (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2) }; /* Used to track all the vhost_vsock instances on the system. 
*/ @@ -94,6 +99,9 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock, if (!vhost_vq_get_backend(vq)) goto out; + if (!vq_meta_prefetch(vq)) + goto out; + /* Avoid further vmexits, we're already processing the virtqueue */ vhost_disable_notify(&vsock->dev, vq); @@ -449,6 +457,9 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work) if (!vhost_vq_get_backend(vq)) goto out; + if (!vq_meta_prefetch(vq)) + goto out; + vhost_disable_notify(&vsock->dev, vq); do { u32 len; @@ -766,8 +777,12 @@ static int vhost_vsock_set_features(struct vhost_vsock *vsock, u64 features) mutex_lock(&vsock->dev.mutex); if ((features & (1 << VHOST_F_LOG_ALL)) && !vhost_log_access_ok(&vsock->dev)) { - mutex_unlock(&vsock->dev.mutex); - return -EFAULT; + goto err; + } + + if ((features & (1ULL << VIRTIO_F_ACCESS_PLATFORM))) { + if (vhost_init_device_iotlb(&vsock->dev, true)) + goto err; } for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) { @@ -778,6 +793,10 @@ static int vhost_vsock_set_features(struct vhost_vsock *vsock, u64 features) } mutex_unlock(&vsock->dev.mutex); return 0; + +err: + mutex_unlock(&vsock->dev.mutex); + return -EFAULT; } static long vhost_vsock_dev_ioctl(struct file *f, unsigned int ioctl, @@ -811,6 +830,18 @@ static long vhost_vsock_dev_ioctl(struct file *f, unsigned int ioctl, if (copy_from_user(&features, argp, sizeof(features))) return -EFAULT; return vhost_vsock_set_features(vsock, features); + case VHOST_GET_BACKEND_FEATURES: + features = VHOST_VSOCK_BACKEND_FEATURES; + if (copy_to_user(argp, &features, sizeof(features))) + return -EFAULT; + return 0; + case VHOST_SET_BACKEND_FEATURES: + if (copy_from_user(&features, argp, sizeof(features))) + return -EFAULT; + if (features & ~VHOST_VSOCK_BACKEND_FEATURES) + return -EOPNOTSUPP; + vhost_set_backend_features(&vsock->dev, features); + return 0; default: mutex_lock(&vsock->dev.mutex); r = vhost_dev_ioctl(&vsock->dev, ioctl, argp); @@ -823,6 +854,34 @@ static long vhost_vsock_dev_ioctl(struct file *f, unsigned int ioctl, } } +static ssize_t vhost_vsock_chr_read_iter(struct kiocb *iocb, struct iov_iter *to) +{ + struct file *file = iocb->ki_filp; + struct vhost_vsock *vsock = file->private_data; + struct vhost_dev *dev = &vsock->dev; + int noblock = file->f_flags & O_NONBLOCK; + + return vhost_chr_read_iter(dev, to, noblock); +} + +static ssize_t vhost_vsock_chr_write_iter(struct kiocb *iocb, + struct iov_iter *from) +{ + struct file *file = iocb->ki_filp; + struct vhost_vsock *vsock = file->private_data; + struct vhost_dev *dev = &vsock->dev; + + return vhost_chr_write_iter(dev, from); +} + +static __poll_t vhost_vsock_chr_poll(struct file *file, poll_table *wait) +{ + struct vhost_vsock *vsock = file->private_data; + struct vhost_dev *dev = &vsock->dev; + + return vhost_chr_poll(file, dev, wait); +} + static const struct file_operations vhost_vsock_fops = { .owner = THIS_MODULE, .open = vhost_vsock_dev_open, @@ -830,6 +889,9 @@ static const struct file_operations vhost_vsock_fops = { .llseek = noop_llseek, .unlocked_ioctl = vhost_vsock_dev_ioctl, .compat_ioctl = compat_ptr_ioctl, + .read_iter = vhost_vsock_chr_read_iter, + .write_iter = vhost_vsock_chr_write_iter, + .poll = vhost_vsock_chr_poll, }; static struct miscdevice vhost_vsock_misc = { diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c index a8030332a191..e850f79351cb 100644 --- a/drivers/xen/events/events_base.c +++ b/drivers/xen/events/events_base.c @@ -2060,16 +2060,6 @@ static struct irq_chip xen_percpu_chip __read_mostly = { .irq_ack = 
ack_dynirq, }; -int xen_set_callback_via(uint64_t via) -{ - struct xen_hvm_param a; - a.domid = DOMID_SELF; - a.index = HVM_PARAM_CALLBACK_IRQ; - a.value = via; - return HYPERVISOR_hvm_op(HVMOP_set_param, &a); -} -EXPORT_SYMBOL_GPL(xen_set_callback_via); - #ifdef CONFIG_XEN_PVHVM /* Vector callbacks are better than PCI interrupts to receive event * channel notifications because we can receive vector callbacks on any diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c index dd911e1ff782..18f0ed8b1f93 100644 --- a/drivers/xen/platform-pci.c +++ b/drivers/xen/platform-pci.c @@ -132,6 +132,13 @@ static int platform_pci_probe(struct pci_dev *pdev, dev_warn(&pdev->dev, "request_irq failed err=%d\n", ret); goto out; } + /* + * It doesn't strictly *have* to run on CPU0 but it sure + * as hell better process the event channel ports delivered + * to CPU0. + */ + irq_set_affinity(pdev->irq, cpumask_of(0)); + callback_via = get_callback_via(pdev); ret = xen_set_callback_via(callback_via); if (ret) { @@ -149,7 +156,6 @@ static int platform_pci_probe(struct pci_dev *pdev, ret = gnttab_init(); if (ret) goto grant_out; - xenbus_probe(NULL); return 0; grant_out: gnttab_free_auto_xlat_frames(); diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c index b0c73c58f987..720a7b7abd46 100644 --- a/drivers/xen/privcmd.c +++ b/drivers/xen/privcmd.c @@ -717,14 +717,15 @@ static long privcmd_ioctl_restrict(struct file *file, void __user *udata) return 0; } -static long privcmd_ioctl_mmap_resource(struct file *file, void __user *udata) +static long privcmd_ioctl_mmap_resource(struct file *file, + struct privcmd_mmap_resource __user *udata) { struct privcmd_data *data = file->private_data; struct mm_struct *mm = current->mm; struct vm_area_struct *vma; struct privcmd_mmap_resource kdata; xen_pfn_t *pfns = NULL; - struct xen_mem_acquire_resource xdata; + struct xen_mem_acquire_resource xdata = { }; int rc; if (copy_from_user(&kdata, udata, sizeof(kdata))) @@ -734,6 +735,22 @@ static long privcmd_ioctl_mmap_resource(struct file *file, void __user *udata) if (data->domid != DOMID_INVALID && data->domid != kdata.dom) return -EPERM; + /* Both fields must be set or unset */ + if (!!kdata.addr != !!kdata.num) + return -EINVAL; + + xdata.domid = kdata.dom; + xdata.type = kdata.type; + xdata.id = kdata.id; + + if (!kdata.addr && !kdata.num) { + /* Query the size of the resource. 
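+ * (kdata.addr == 0 with kdata.num == 0 selects this query path: the
+ * hypervisor fills in xdata.nr_frames, which is copied back to the
+ * caller's num field via __put_user() below.)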
*/ + rc = HYPERVISOR_memory_op(XENMEM_acquire_resource, &xdata); + if (rc) + return rc; + return __put_user(xdata.nr_frames, &udata->num); + } + mmap_write_lock(mm); vma = find_vma(mm, kdata.addr); @@ -768,10 +785,6 @@ static long privcmd_ioctl_mmap_resource(struct file *file, void __user *udata) } else vma->vm_private_data = PRIV_VMA_LOCKED; - memset(&xdata, 0, sizeof(xdata)); - xdata.domid = kdata.dom; - xdata.type = kdata.type; - xdata.id = kdata.id; xdata.frame = kdata.idx; xdata.nr_frames = kdata.num; set_xen_guest_handle(xdata.frame_list, pfns); diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h index 2a93b7c9c159..dc1537335414 100644 --- a/drivers/xen/xenbus/xenbus.h +++ b/drivers/xen/xenbus/xenbus.h @@ -115,6 +115,7 @@ int xenbus_probe_node(struct xen_bus_type *bus, const char *type, const char *nodename); int xenbus_probe_devices(struct xen_bus_type *bus); +void xenbus_probe(void); void xenbus_dev_changed(const char *node, struct xen_bus_type *bus); diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c index eb5151fc8efa..e5fda0256feb 100644 --- a/drivers/xen/xenbus/xenbus_comms.c +++ b/drivers/xen/xenbus/xenbus_comms.c @@ -57,16 +57,8 @@ DEFINE_MUTEX(xs_response_mutex); static int xenbus_irq; static struct task_struct *xenbus_task; -static DECLARE_WORK(probe_work, xenbus_probe); - - static irqreturn_t wake_waiting(int irq, void *unused) { - if (unlikely(xenstored_ready == 0)) { - xenstored_ready = 1; - schedule_work(&probe_work); - } - wake_up(&xb_waitq); return IRQ_HANDLED; } diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c index 44634d970a5c..c8f0282bb649 100644 --- a/drivers/xen/xenbus/xenbus_probe.c +++ b/drivers/xen/xenbus/xenbus_probe.c @@ -683,29 +683,76 @@ void unregister_xenstore_notifier(struct notifier_block *nb) } EXPORT_SYMBOL_GPL(unregister_xenstore_notifier); -void xenbus_probe(struct work_struct *unused) +void xenbus_probe(void) { xenstored_ready = 1; + /* + * In the HVM case, xenbus_init() deferred its call to + * xs_init() in case callbacks were not operational yet. + * So do it now. + */ + if (xen_store_domain_type == XS_HVM) + xs_init(); + /* Notify others that xenstore is up */ blocking_notifier_call_chain(&xenstore_chain, 0, NULL); } -EXPORT_SYMBOL_GPL(xenbus_probe); -static int __init xenbus_probe_initcall(void) +/* + * Returns true when XenStore init must be deferred in order to + * allow the PCI platform device to be initialised, before we + * can actually have event channel interrupts working. + */ +static bool xs_hvm_defer_init_for_callback(void) { - if (!xen_domain()) - return -ENODEV; +#ifdef CONFIG_XEN_PVHVM + return xen_store_domain_type == XS_HVM && + !xen_have_vector_callback; +#else + return false; +#endif +} - if (xen_initial_domain() || xen_hvm_domain()) - return 0; +static int __init xenbus_probe_initcall(void) +{ + /* + * Probe XenBus here in the XS_PV case, and also XS_HVM unless we + * need to wait for the platform PCI device to come up. 
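Read together with the xen_set_callback_via() change further down, the probe timing works out as follows (a summary of this hunk, not new behaviour; the XS_LOCAL row happens outside it):

	XS_PV                      -> xenbus_probe() runs from this initcall
	XS_HVM, vector callback    -> xenbus_probe() runs from this initcall
	XS_HVM, no vector callback -> deferred to xen_set_callback_via(), i.e.
	                              once the platform PCI device is up
	XS_LOCAL                   -> probed once the local xenstored process
	                              signals readiness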
+ */ + if (xen_store_domain_type == XS_PV || + (xen_store_domain_type == XS_HVM && + !xs_hvm_defer_init_for_callback())) + xenbus_probe(); - xenbus_probe(NULL); return 0; } - device_initcall(xenbus_probe_initcall); +int xen_set_callback_via(uint64_t via) +{ + struct xen_hvm_param a; + int ret; + + a.domid = DOMID_SELF; + a.index = HVM_PARAM_CALLBACK_IRQ; + a.value = via; + + ret = HYPERVISOR_hvm_op(HVMOP_set_param, &a); + if (ret) + return ret; + + /* + * If xenbus_probe_initcall() deferred the xenbus_probe() + * due to the callback not functioning yet, we can do it now. + */ + if (!xenstored_ready && xs_hvm_defer_init_for_callback()) + xenbus_probe(); + + return ret; +} +EXPORT_SYMBOL_GPL(xen_set_callback_via); + /* Set up event channel for xenstored which is run as a local process * (this is normally used only in dom0) */ @@ -818,11 +865,17 @@ static int __init xenbus_init(void) break; } - /* Initialize the interface to xenstore. */ - err = xs_init(); - if (err) { - pr_warn("Error initializing xenstore comms: %i\n", err); - goto out_error; + /* + * HVM domains may not have a functional callback yet. In that + * case let xs_init() be called from xenbus_probe(), which will + * get invoked at an appropriate time. + */ + if (xen_store_domain_type != XS_HVM) { + err = xs_init(); + if (err) { + pr_warn("Error initializing xenstore comms: %i\n", err); + goto out_error; + } } if ((xen_store_domain_type != XS_LOCAL) && diff --git a/fs/afs/dir.c b/fs/afs/dir.c index 9068d5578a26..7bd659ad959e 100644 --- a/fs/afs/dir.c +++ b/fs/afs/dir.c @@ -350,7 +350,7 @@ static int afs_dir_iterate_block(struct afs_vnode *dvnode, unsigned blkoff) { union afs_xdr_dirent *dire; - unsigned offset, next, curr; + unsigned offset, next, curr, nr_slots; size_t nlen; int tmp; @@ -363,13 +363,12 @@ static int afs_dir_iterate_block(struct afs_vnode *dvnode, offset < AFS_DIR_SLOTS_PER_BLOCK; offset = next ) { - next = offset + 1; - /* skip entries marked unused in the bitmap */ if (!(block->hdr.bitmap[offset / 8] & (1 << (offset % 8)))) { _debug("ENT[%zu.%u]: unused", blkoff / sizeof(union afs_xdr_dir_block), offset); + next = offset + 1; if (offset >= curr) ctx->pos = blkoff + next * sizeof(union afs_xdr_dirent); @@ -381,35 +380,39 @@ static int afs_dir_iterate_block(struct afs_vnode *dvnode, nlen = strnlen(dire->u.name, sizeof(*block) - offset * sizeof(union afs_xdr_dirent)); + if (nlen > AFSNAMEMAX - 1) { + _debug("ENT[%zu]: name too long (len %u/%zu)", + blkoff / sizeof(union afs_xdr_dir_block), + offset, nlen); + return afs_bad(dvnode, afs_file_error_dir_name_too_long); + } _debug("ENT[%zu.%u]: %s %zu \"%s\"", blkoff / sizeof(union afs_xdr_dir_block), offset, (offset < curr ? 
"skip" : "fill"), nlen, dire->u.name); - /* work out where the next possible entry is */ - for (tmp = nlen; tmp > 15; tmp -= sizeof(union afs_xdr_dirent)) { - if (next >= AFS_DIR_SLOTS_PER_BLOCK) { - _debug("ENT[%zu.%u]:" - " %u travelled beyond end dir block" - " (len %u/%zu)", - blkoff / sizeof(union afs_xdr_dir_block), - offset, next, tmp, nlen); - return afs_bad(dvnode, afs_file_error_dir_over_end); - } - if (!(block->hdr.bitmap[next / 8] & - (1 << (next % 8)))) { - _debug("ENT[%zu.%u]:" - " %u unmarked extension (len %u/%zu)", + nr_slots = afs_dir_calc_slots(nlen); + next = offset + nr_slots; + if (next > AFS_DIR_SLOTS_PER_BLOCK) { + _debug("ENT[%zu.%u]:" + " %u extends beyond end dir block" + " (len %zu)", + blkoff / sizeof(union afs_xdr_dir_block), + offset, next, nlen); + return afs_bad(dvnode, afs_file_error_dir_over_end); + } + + /* Check that the name-extension dirents are all allocated */ + for (tmp = 1; tmp < nr_slots; tmp++) { + unsigned int ix = offset + tmp; + if (!(block->hdr.bitmap[ix / 8] & (1 << (ix % 8)))) { + _debug("ENT[%zu.u]:" + " %u unmarked extension (%u/%u)", blkoff / sizeof(union afs_xdr_dir_block), - offset, next, tmp, nlen); + offset, tmp, nr_slots); return afs_bad(dvnode, afs_file_error_dir_unmarked_ext); } - - _debug("ENT[%zu.%u]: ext %u/%zu", - blkoff / sizeof(union afs_xdr_dir_block), - next, tmp, nlen); - next++; } /* skip if starts before the current position */ diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c index 2ffe09abae7f..f4600c1353ad 100644 --- a/fs/afs/dir_edit.c +++ b/fs/afs/dir_edit.c @@ -215,8 +215,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode, } /* Work out how many slots we're going to need. */ - need_slots = round_up(12 + name->len + 1 + 4, AFS_DIR_DIRENT_SIZE); - need_slots /= AFS_DIR_DIRENT_SIZE; + need_slots = afs_dir_calc_slots(name->len); meta_page = kmap(page0); meta = &meta_page->blocks[0]; @@ -393,8 +392,7 @@ void afs_edit_dir_remove(struct afs_vnode *vnode, } /* Work out how many slots we're going to discard. */ - need_slots = round_up(12 + name->len + 1 + 4, AFS_DIR_DIRENT_SIZE); - need_slots /= AFS_DIR_DIRENT_SIZE; + need_slots = afs_dir_calc_slots(name->len); meta_page = kmap(page0); meta = &meta_page->blocks[0]; diff --git a/fs/afs/xdr_fs.h b/fs/afs/xdr_fs.h index 94f1f398eefa..8ca868164507 100644 --- a/fs/afs/xdr_fs.h +++ b/fs/afs/xdr_fs.h @@ -54,10 +54,16 @@ union afs_xdr_dirent { __be16 hash_next; __be32 vnode; __be32 unique; - u8 name[16]; - u8 overflow[4]; /* if any char of the name (inc - * NUL) reaches here, consume - * the next dirent too */ + u8 name[]; + /* When determining the number of dirent slots needed to + * represent a directory entry, name should be assumed to be 16 + * bytes, due to a now-standardised (mis)calculation, but it is + * in fact 20 bytes in size. afs_dir_calc_slots() should be + * used for this. + * + * For names longer than (16 or) 20 bytes, extra slots should + * be annexed to this one using the extended_name format. + */ } u; u8 extended_name[32]; } __packed; @@ -96,4 +102,15 @@ struct afs_xdr_dir_page { union afs_xdr_dir_block blocks[AFS_DIR_BLOCKS_PER_PAGE]; }; +/* + * Calculate the number of dirent slots required for any given name length. + * The calculation is made assuming the part of the name in the first slot is + * 16 bytes, rather than 20, but this miscalculation is now standardised. 
+ */ +static inline unsigned int afs_dir_calc_slots(size_t name_len) +{ + name_len++; /* NUL-terminated */ + return 1 + ((name_len + 15) / AFS_DIR_DIRENT_SIZE); +} + #endif /* XDR_FS_H */ diff --git a/fs/block_dev.c b/fs/block_dev.c index 3e5b02f6606c..3b8963e228a1 100644 --- a/fs/block_dev.c +++ b/fs/block_dev.c @@ -605,6 +605,8 @@ int thaw_bdev(struct block_device *bdev) error = thaw_super(sb); if (error) bdev->bd_fsfreeze_count++; + else + bdev->bd_fsfreeze_sb = NULL; out: mutex_unlock(&bdev->bd_fsfreeze_mutex); return error; @@ -774,8 +776,11 @@ static struct kmem_cache * bdev_cachep __read_mostly; static struct inode *bdev_alloc_inode(struct super_block *sb) { struct bdev_inode *ei = kmem_cache_alloc(bdev_cachep, GFP_KERNEL); + if (!ei) return NULL; + memset(&ei->bdev, 0, sizeof(ei->bdev)); + ei->bdev.bd_bdi = &noop_backing_dev_info; return &ei->vfs_inode; } @@ -869,14 +874,12 @@ struct block_device *bdev_alloc(struct gendisk *disk, u8 partno) mapping_set_gfp_mask(&inode->i_data, GFP_USER); bdev = I_BDEV(inode); - memset(bdev, 0, sizeof(*bdev)); mutex_init(&bdev->bd_mutex); mutex_init(&bdev->bd_fsfreeze_mutex); spin_lock_init(&bdev->bd_size_lock); bdev->bd_disk = disk; bdev->bd_partno = partno; bdev->bd_inode = inode; - bdev->bd_bdi = &noop_backing_dev_info; #ifdef CONFIG_SYSFS INIT_LIST_HEAD(&bdev->bd_holder_disks); #endif diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c index 02d7d7b2563b..9cadacf3ec27 100644 --- a/fs/btrfs/backref.c +++ b/fs/btrfs/backref.c @@ -3117,7 +3117,7 @@ void btrfs_backref_error_cleanup(struct btrfs_backref_cache *cache, list_del_init(&lower->list); if (lower == node) node = NULL; - btrfs_backref_free_node(cache, lower); + btrfs_backref_drop_node(cache, lower); } btrfs_backref_cleanup_node(cache, node); diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c index 52f2198d44c9..0886e81e5540 100644 --- a/fs/btrfs/block-group.c +++ b/fs/btrfs/block-group.c @@ -2669,7 +2669,8 @@ again: * Go through delayed refs for all the stuff we've just kicked off * and then loop back (just once) */ - ret = btrfs_run_delayed_refs(trans, 0); + if (!ret) + ret = btrfs_run_delayed_refs(trans, 0); if (!ret && loops == 0) { loops++; spin_lock(&cur_trans->dirty_bgs_lock); diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h index 555cbcef6585..d9bf53d9ff90 100644 --- a/fs/btrfs/btrfs_inode.h +++ b/fs/btrfs/btrfs_inode.h @@ -42,6 +42,15 @@ enum { * to an inode. */ BTRFS_INODE_NO_XATTRS, + /* + * Set when we are in a context where we need to start a transaction and + * have dirty pages with the respective file range locked. This is to + * ensure that when reserving space for the transaction, if we are low + * on available space and need to flush delalloc, we will not flush + * delalloc for this inode, because that could result in a deadlock (on + * the file range, inode's io_tree). + */ + BTRFS_INODE_NO_DELALLOC_FLUSH, }; /* in memory btrfs inode */ diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c index 07810891e204..cc89b63d65a4 100644 --- a/fs/btrfs/ctree.c +++ b/fs/btrfs/ctree.c @@ -2555,8 +2555,14 @@ out: * @p: Holds all btree nodes along the search path * @root: The root node of the tree * @key: The key we are looking for - * @ins_len: Indicates purpose of search, for inserts it is 1, for - * deletions it's -1. 
0 for plain searches + * @ins_len: Indicates purpose of search: + * >0 for inserts, it's the size of the item being inserted (*) + * <0 for deletions + * 0 for plain searches, not modifying the tree + * + * (*) If the size of the item inserted doesn't include + * sizeof(struct btrfs_item), then p->search_for_extension must + * be set. + * @cow: boolean, should CoW operations be performed. Must always be 1 + * when modifying the tree. + * @@ -2717,6 +2723,20 @@ cow_done: if (level == 0) { p->slots[level] = slot; + /* + * Item key already exists. In this case, if we are + * allowed to insert the item (for example, in dir_item + * case, item key collision is allowed), it will be + * merged with the original item. Only the item size + * grows; no new btrfs item will be added. If + * search_for_extension is not set, ins_len already + * accounts for the size of struct btrfs_item; deduct it + * here so the leaf space check will be correct. + */ + if (ret == 0 && ins_len > 0 && !p->search_for_extension) { + ASSERT(ins_len >= sizeof(struct btrfs_item)); + ins_len -= sizeof(struct btrfs_item); + } if (ins_len > 0 && btrfs_leaf_free_space(b) < ins_len) { if (write_lock_level < 1) { diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h index 1d3c1e479f3d..e6e37591f1de 100644 --- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -131,6 +131,8 @@ enum { * defrag */ BTRFS_FS_STATE_REMOUNTING, + /* Filesystem in RO mode */ + BTRFS_FS_STATE_RO, /* Track if a transaction abort has been reported on this filesystem */ BTRFS_FS_STATE_TRANS_ABORTED, /* @@ -367,6 +369,12 @@ struct btrfs_path { unsigned int search_commit_root:1; unsigned int need_commit_sem:1; unsigned int skip_release_on_error:1; + /* + * Indicate that the new item (btrfs_search_slot) is extending an + * already existing item and ins_len contains only the data size and + * not the item header (i.e. sizeof(struct btrfs_item) is not included). + */ + unsigned int search_for_extension:1; }; #define BTRFS_MAX_EXTENT_ITEM_SIZE(r) ((BTRFS_LEAF_DATA_SIZE(r->fs_info) >> 4) - \ sizeof(struct btrfs_item)) @@ -2885,10 +2893,26 @@ static inline int btrfs_fs_closing(struct btrfs_fs_info *fs_info) * If we remount the fs to be R/O or umount the fs, the cleaner needn't do * anything except sleeping. This function is used to check the status of * the fs. + * We check for BTRFS_FS_STATE_RO to avoid races with a concurrent remount, + * since setting and checking for SB_RDONLY in the superblock's flags is not + * atomic.
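Spelled out, the two checks differ as follows (a sketch of the distinction, not code from the patch):

	if (sb->s_flags & SB_RDONLY)		// plain load of a word that other
						// CPUs update with non-atomic RMW
	if (test_bit(BTRFS_FS_STATE_RO,		// atomic bitop, safely paired with
		     &fs_info->fs_state))	// set_bit()/clear_bit() on remount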
*/ static inline int btrfs_need_cleaner_sleep(struct btrfs_fs_info *fs_info) { - return fs_info->sb->s_flags & SB_RDONLY || btrfs_fs_closing(fs_info); + return test_bit(BTRFS_FS_STATE_RO, &fs_info->fs_state) || + btrfs_fs_closing(fs_info); +} + +static inline void btrfs_set_sb_rdonly(struct super_block *sb) +{ + sb->s_flags |= SB_RDONLY; + set_bit(BTRFS_FS_STATE_RO, &btrfs_sb(sb)->fs_state); +} + +static inline void btrfs_clear_sb_rdonly(struct super_block *sb) +{ + sb->s_flags &= ~SB_RDONLY; + clear_bit(BTRFS_FS_STATE_RO, &btrfs_sb(sb)->fs_state); } /* tree mod log functions from ctree.c */ @@ -3073,7 +3097,8 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans, u32 min_type); int btrfs_start_delalloc_snapshot(struct btrfs_root *root); -int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr); +int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr, + bool in_reclaim_context); int btrfs_set_extent_delalloc(struct btrfs_inode *inode, u64 start, u64 end, unsigned int extra_bits, struct extent_state **cached_state); diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c index a98e33f232d5..324f646d6e5e 100644 --- a/fs/btrfs/dev-replace.c +++ b/fs/btrfs/dev-replace.c @@ -715,7 +715,7 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info, * flush all outstanding I/O and inode extent mappings before the * copy operation is declared as being finished */ - ret = btrfs_start_delalloc_roots(fs_info, U64_MAX); + ret = btrfs_start_delalloc_roots(fs_info, U64_MAX, false); if (ret) { mutex_unlock(&dev_replace->lock_finishing_cancel_unmount); return ret; diff --git a/fs/btrfs/discard.c b/fs/btrfs/discard.c index 1db966bf85b2..2b8383d41144 100644 --- a/fs/btrfs/discard.c +++ b/fs/btrfs/discard.c @@ -199,16 +199,15 @@ static struct btrfs_block_group *find_next_block_group( static struct btrfs_block_group *peek_discard_list( struct btrfs_discard_ctl *discard_ctl, enum btrfs_discard_state *discard_state, - int *discard_index) + int *discard_index, u64 now) { struct btrfs_block_group *block_group; - const u64 now = ktime_get_ns(); spin_lock(&discard_ctl->lock); again: block_group = find_next_block_group(discard_ctl, now); - if (block_group && now > block_group->discard_eligible_time) { + if (block_group && now >= block_group->discard_eligible_time) { if (block_group->discard_index == BTRFS_DISCARD_INDEX_UNUSED && block_group->used != 0) { if (btrfs_is_block_group_data_only(block_group)) @@ -222,12 +221,11 @@ again: block_group->discard_state = BTRFS_DISCARD_EXTENTS; } discard_ctl->block_group = block_group; + } + if (block_group) { *discard_state = block_group->discard_state; *discard_index = block_group->discard_index; - } else { - block_group = NULL; } - spin_unlock(&discard_ctl->lock); return block_group; @@ -330,28 +328,15 @@ void btrfs_discard_queue_work(struct btrfs_discard_ctl *discard_ctl, btrfs_discard_schedule_work(discard_ctl, false); } -/** - * btrfs_discard_schedule_work - responsible for scheduling the discard work - * @discard_ctl: discard control - * @override: override the current timer - * - * Discards are issued by a delayed workqueue item. @override is used to - * update the current delay as the baseline delay interval is reevaluated on - * transaction commit. This is also maxed with any other rate limit. 
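The hunks below apply the usual locked/__unlocked split: the public entry point takes discard_ctl->lock and calls __btrfs_discard_schedule_work(), which assumes the lock is already held. The payoff is visible in btrfs_discard_workfn(), whose tail can now publish its statistics and reschedule in one critical section (condensed from the hunk further down):

	spin_lock(&discard_ctl->lock);
	discard_ctl->prev_discard = trimmed;
	discard_ctl->prev_discard_time = now;
	discard_ctl->block_group = NULL;
	__btrfs_discard_schedule_work(discard_ctl, now, false);
	spin_unlock(&discard_ctl->lock);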
- */ -void btrfs_discard_schedule_work(struct btrfs_discard_ctl *discard_ctl, - bool override) +static void __btrfs_discard_schedule_work(struct btrfs_discard_ctl *discard_ctl, + u64 now, bool override) { struct btrfs_block_group *block_group; - const u64 now = ktime_get_ns(); - - spin_lock(&discard_ctl->lock); if (!btrfs_run_discard_work(discard_ctl)) - goto out; - + return; if (!override && delayed_work_pending(&discard_ctl->work)) - goto out; + return; block_group = find_next_block_group(discard_ctl, now); if (block_group) { @@ -393,7 +378,24 @@ void btrfs_discard_schedule_work(struct btrfs_discard_ctl *discard_ctl, mod_delayed_work(discard_ctl->discard_workers, &discard_ctl->work, nsecs_to_jiffies(delay)); } -out: +} + +/* + * btrfs_discard_schedule_work - responsible for scheduling the discard work + * @discard_ctl: discard control + * @override: override the current timer + * + * Discards are issued by a delayed workqueue item. @override is used to + * update the current delay as the baseline delay interval is reevaluated on + * transaction commit. This is also maxed with any other rate limit. + */ +void btrfs_discard_schedule_work(struct btrfs_discard_ctl *discard_ctl, + bool override) +{ + const u64 now = ktime_get_ns(); + + spin_lock(&discard_ctl->lock); + __btrfs_discard_schedule_work(discard_ctl, now, override); spin_unlock(&discard_ctl->lock); } @@ -438,13 +440,18 @@ static void btrfs_discard_workfn(struct work_struct *work) int discard_index = 0; u64 trimmed = 0; u64 minlen = 0; + u64 now = ktime_get_ns(); discard_ctl = container_of(work, struct btrfs_discard_ctl, work.work); block_group = peek_discard_list(discard_ctl, &discard_state, - &discard_index); + &discard_index, now); if (!block_group || !btrfs_run_discard_work(discard_ctl)) return; + if (now < block_group->discard_eligible_time) { + btrfs_discard_schedule_work(discard_ctl, false); + return; + } /* Perform discarding */ minlen = discard_minlen[discard_index]; @@ -474,13 +481,6 @@ static void btrfs_discard_workfn(struct work_struct *work) discard_ctl->discard_extent_bytes += trimmed; } - /* - * Updated without locks as this is inside the workfn and nothing else - * is reading the values - */ - discard_ctl->prev_discard = trimmed; - discard_ctl->prev_discard_time = ktime_get_ns(); - /* Determine next steps for a block_group */ if (block_group->discard_cursor >= btrfs_block_group_end(block_group)) { if (discard_state == BTRFS_DISCARD_BITMAPS) { @@ -496,11 +496,13 @@ static void btrfs_discard_workfn(struct work_struct *work) } } + now = ktime_get_ns(); spin_lock(&discard_ctl->lock); + discard_ctl->prev_discard = trimmed; + discard_ctl->prev_discard_time = now; discard_ctl->block_group = NULL; + __btrfs_discard_schedule_work(discard_ctl, now, false); spin_unlock(&discard_ctl->lock); - - btrfs_discard_schedule_work(discard_ctl, false); } /** diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 765deefda92b..6b35b7e88136 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -1457,7 +1457,7 @@ void btrfs_check_leaked_roots(struct btrfs_fs_info *fs_info) root = list_first_entry(&fs_info->allocated_roots, struct btrfs_root, leak_list); btrfs_err(fs_info, "leaked root %s refcount %d", - btrfs_root_name(root->root_key.objectid, buf), + btrfs_root_name(&root->root_key, buf), refcount_read(&root->refs)); while (refcount_read(&root->refs) > 1) btrfs_put_root(root); @@ -1729,7 +1729,7 @@ static int cleaner_kthread(void *arg) */ btrfs_delete_unused_bgs(fs_info); sleep: - clear_bit(BTRFS_FS_CLEANER_RUNNING, 
&fs_info->flags); + clear_and_wake_up_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags); if (kthread_should_park()) kthread_parkme(); if (kthread_should_stop()) @@ -2830,6 +2830,9 @@ static int init_mount_fs_info(struct btrfs_fs_info *fs_info, struct super_block return -ENOMEM; btrfs_init_delayed_root(fs_info->delayed_root); + if (sb_rdonly(sb)) + set_bit(BTRFS_FS_STATE_RO, &fs_info->fs_state); + return btrfs_alloc_stripe_hash_table(fs_info); } @@ -2969,6 +2972,7 @@ int btrfs_start_pre_rw_mount(struct btrfs_fs_info *fs_info) } } + ret = btrfs_find_orphan_roots(fs_info); out: return ret; } @@ -3383,10 +3387,6 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device } } - ret = btrfs_find_orphan_roots(fs_info); - if (ret) - goto fail_qgroup; - fs_info->fs_root = btrfs_get_fs_root(fs_info, BTRFS_FS_TREE_OBJECTID, true); if (IS_ERR(fs_info->fs_root)) { err = PTR_ERR(fs_info->fs_root); @@ -4181,6 +4181,9 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info) invalidate_inode_pages2(fs_info->btree_inode->i_mapping); btrfs_stop_all_workers(fs_info); + /* We shouldn't have any transaction open at this point */ + ASSERT(list_empty(&fs_info->trans_list)); + clear_bit(BTRFS_FS_OPEN, &fs_info->flags); free_root_pointers(fs_info, true); btrfs_free_fs_roots(fs_info); diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c index 56ea380f5a17..30b1a630dc2f 100644 --- a/fs/btrfs/extent-tree.c +++ b/fs/btrfs/extent-tree.c @@ -844,6 +844,7 @@ int lookup_inline_extent_backref(struct btrfs_trans_handle *trans, want = extent_ref_type(parent, owner); if (insert) { extra_size = btrfs_extent_inline_ref_size(want); + path->search_for_extension = 1; path->keep_locks = 1; } else extra_size = -1; @@ -996,6 +997,7 @@ again: out: if (insert) { path->keep_locks = 0; + path->search_for_extension = 0; btrfs_unlock_up_safe(path, 1); } return err; @@ -5547,7 +5549,15 @@ int btrfs_drop_snapshot(struct btrfs_root *root, int update_ref, int for_reloc) goto out_free; } - trans = btrfs_start_transaction(tree_root, 0); + /* + * Use join to avoid potential EINTR from transaction + * start. See wait_reserve_ticket and the whole + * reservation callchain. + */ + if (for_reloc) + trans = btrfs_join_transaction(tree_root); + else + trans = btrfs_start_transaction(tree_root, 0); if (IS_ERR(trans)) { err = PTR_ERR(trans); goto out_free; diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 6e3b72e63e42..c9cee458e001 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -676,9 +676,7 @@ alloc_extent_state_atomic(struct extent_state *prealloc) static void extent_io_tree_panic(struct extent_io_tree *tree, int err) { - struct inode *inode = tree->private_data; - - btrfs_panic(btrfs_sb(inode->i_sb), err, + btrfs_panic(tree->fs_info, err, "locking error: extent tree was modified by another thread while locked"); } diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c index 1545c22ef280..6ccfc019ad90 100644 --- a/fs/btrfs/file-item.c +++ b/fs/btrfs/file-item.c @@ -1016,8 +1016,10 @@ again: } btrfs_release_path(path); + path->search_for_extension = 1; ret = btrfs_search_slot(trans, root, &file_key, path, csum_size, 1); + path->search_for_extension = 0; if (ret < 0) goto out; diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 8e23780acfae..a8e0a6b038d3 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -9390,7 +9390,9 @@ static struct btrfs_delalloc_work *btrfs_alloc_delalloc_work(struct inode *inode * some fairly slow code that needs optimization. 
This walks the list * of all the inodes with pending delalloc and forces them to disk. */ -static int start_delalloc_inodes(struct btrfs_root *root, u64 *nr, bool snapshot) +static int start_delalloc_inodes(struct btrfs_root *root, + struct writeback_control *wbc, bool snapshot, + bool in_reclaim_context) { struct btrfs_inode *binode; struct inode *inode; @@ -9398,6 +9400,7 @@ static int start_delalloc_inodes(struct btrfs_root *root, u64 *nr, bool snapshot struct list_head works; struct list_head splice; int ret = 0; + bool full_flush = wbc->nr_to_write == LONG_MAX; INIT_LIST_HEAD(&works); INIT_LIST_HEAD(&splice); @@ -9411,6 +9414,11 @@ static int start_delalloc_inodes(struct btrfs_root *root, u64 *nr, bool snapshot list_move_tail(&binode->delalloc_inodes, &root->delalloc_inodes); + + if (in_reclaim_context && + test_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &binode->runtime_flags)) + continue; + inode = igrab(&binode->vfs_inode); if (!inode) { cond_resched_lock(&root->delalloc_lock); @@ -9421,18 +9429,24 @@ static int start_delalloc_inodes(struct btrfs_root *root, u64 *nr, bool snapshot if (snapshot) set_bit(BTRFS_INODE_SNAPSHOT_FLUSH, &binode->runtime_flags); - work = btrfs_alloc_delalloc_work(inode); - if (!work) { - iput(inode); - ret = -ENOMEM; - goto out; - } - list_add_tail(&work->list, &works); - btrfs_queue_work(root->fs_info->flush_workers, - &work->work); - if (*nr != U64_MAX) { - (*nr)--; - if (*nr == 0) + if (full_flush) { + work = btrfs_alloc_delalloc_work(inode); + if (!work) { + iput(inode); + ret = -ENOMEM; + goto out; + } + list_add_tail(&work->list, &works); + btrfs_queue_work(root->fs_info->flush_workers, + &work->work); + } else { + ret = sync_inode(inode, wbc); + if (!ret && + test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT, + &BTRFS_I(inode)->runtime_flags)) + ret = sync_inode(inode, wbc); + btrfs_add_delayed_iput(inode); + if (ret || wbc->nr_to_write <= 0) goto out; } cond_resched(); @@ -9458,17 +9472,29 @@ out: int btrfs_start_delalloc_snapshot(struct btrfs_root *root) { + struct writeback_control wbc = { + .nr_to_write = LONG_MAX, + .sync_mode = WB_SYNC_NONE, + .range_start = 0, + .range_end = LLONG_MAX, + }; struct btrfs_fs_info *fs_info = root->fs_info; - u64 nr = U64_MAX; if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) return -EROFS; - return start_delalloc_inodes(root, &nr, true); + return start_delalloc_inodes(root, &wbc, true, false); } -int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr) +int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr, + bool in_reclaim_context) { + struct writeback_control wbc = { + .nr_to_write = (nr == U64_MAX) ? LONG_MAX : (unsigned long)nr, + .sync_mode = WB_SYNC_NONE, + .range_start = 0, + .range_end = LLONG_MAX, + }; struct btrfs_root *root; struct list_head splice; int ret; @@ -9482,6 +9508,13 @@ int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr) spin_lock(&fs_info->delalloc_root_lock); list_splice_init(&fs_info->delalloc_roots, &splice); while (!list_empty(&splice) && nr) { + /* + * Reset nr_to_write here so we know that we're doing a full + * flush. 
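In other words (a summary of the wbc setup above, not new logic): nr == U64_MAX maps to wbc.nr_to_write == LONG_MAX, the full-flush case where each inode is handed to an async work item, while any finite nr maps to nr_to_write == nr, which sync_inode() decrements as pages are written, so both loops can stop as soon as wbc.nr_to_write drops to zero or below.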
+ */ + if (nr == U64_MAX) + wbc.nr_to_write = LONG_MAX; + root = list_first_entry(&splice, struct btrfs_root, delalloc_root); root = btrfs_grab_root(root); @@ -9490,9 +9523,9 @@ int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr) &fs_info->delalloc_roots); spin_unlock(&fs_info->delalloc_root_lock); - ret = start_delalloc_inodes(root, &nr, false); + ret = start_delalloc_inodes(root, &wbc, false, in_reclaim_context); btrfs_put_root(root); - if (ret < 0) + if (ret < 0 || wbc.nr_to_write <= 0) goto out; spin_lock(&fs_info->delalloc_root_lock); } diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c index 703212ff50a5..dde49a791f3e 100644 --- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -4951,7 +4951,7 @@ long btrfs_ioctl(struct file *file, unsigned int case BTRFS_IOC_SYNC: { int ret; - ret = btrfs_start_delalloc_roots(fs_info, U64_MAX); + ret = btrfs_start_delalloc_roots(fs_info, U64_MAX, false); if (ret) return ret; ret = btrfs_sync_fs(inode->i_sb, 1); diff --git a/fs/btrfs/print-tree.c b/fs/btrfs/print-tree.c index fe5e0026129d..aae1027bd76a 100644 --- a/fs/btrfs/print-tree.c +++ b/fs/btrfs/print-tree.c @@ -26,22 +26,22 @@ static const struct root_name_map root_map[] = { { BTRFS_DATA_RELOC_TREE_OBJECTID, "DATA_RELOC_TREE" }, }; -const char *btrfs_root_name(u64 objectid, char *buf) +const char *btrfs_root_name(const struct btrfs_key *key, char *buf) { int i; - if (objectid == BTRFS_TREE_RELOC_OBJECTID) { + if (key->objectid == BTRFS_TREE_RELOC_OBJECTID) { snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN, - "TREE_RELOC offset=%llu", objectid); + "TREE_RELOC offset=%llu", key->offset); return buf; } for (i = 0; i < ARRAY_SIZE(root_map); i++) { - if (root_map[i].id == objectid) + if (root_map[i].id == key->objectid) return root_map[i].name; } - snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN, "%llu", objectid); + snprintf(buf, BTRFS_ROOT_NAME_BUF_LEN, "%llu", key->objectid); return buf; } diff --git a/fs/btrfs/print-tree.h b/fs/btrfs/print-tree.h index 78b99385a503..8c3e9319ec4e 100644 --- a/fs/btrfs/print-tree.h +++ b/fs/btrfs/print-tree.h @@ -11,6 +11,6 @@ void btrfs_print_leaf(struct extent_buffer *l); void btrfs_print_tree(struct extent_buffer *c, bool follow); -const char *btrfs_root_name(u64 objectid, char *buf); +const char *btrfs_root_name(const struct btrfs_key *key, char *buf); #endif diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c index fe3046007f52..808370ada888 100644 --- a/fs/btrfs/qgroup.c +++ b/fs/btrfs/qgroup.c @@ -3190,6 +3190,12 @@ out: return ret; } +static bool rescan_should_stop(struct btrfs_fs_info *fs_info) +{ + return btrfs_fs_closing(fs_info) || + test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state); +} + static void btrfs_qgroup_rescan_worker(struct btrfs_work *work) { struct btrfs_fs_info *fs_info = container_of(work, struct btrfs_fs_info, @@ -3198,6 +3204,7 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work) struct btrfs_trans_handle *trans = NULL; int err = -ENOMEM; int ret = 0; + bool stopped = false; path = btrfs_alloc_path(); if (!path) @@ -3210,7 +3217,7 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work) path->skip_locking = 1; err = 0; - while (!err && !btrfs_fs_closing(fs_info)) { + while (!err && !(stopped = rescan_should_stop(fs_info))) { trans = btrfs_start_transaction(fs_info->fs_root, 0); if (IS_ERR(trans)) { err = PTR_ERR(trans); @@ -3253,7 +3260,7 @@ out: } mutex_lock(&fs_info->qgroup_rescan_lock); - if (!btrfs_fs_closing(fs_info)) + if (!stopped) fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_RESCAN; if (trans) { ret = 
update_qgroup_status_item(trans); @@ -3272,7 +3279,7 @@ out: btrfs_end_transaction(trans); - if (btrfs_fs_closing(fs_info)) { + if (stopped) { btrfs_info(fs_info, "qgroup scan paused"); } else if (err >= 0) { btrfs_info(fs_info, "qgroup scan completed%s", @@ -3531,16 +3538,6 @@ static int try_flush_qgroup(struct btrfs_root *root) bool can_commit = true; /* - * We don't want to run flush again and again, so if there is a running - * one, we won't try to start a new flush, but exit directly. - */ - if (test_and_set_bit(BTRFS_ROOT_QGROUP_FLUSHING, &root->state)) { - wait_event(root->qgroup_flush_wait, - !test_bit(BTRFS_ROOT_QGROUP_FLUSHING, &root->state)); - return 0; - } - - /* * If current process holds a transaction, we shouldn't flush, as we * assume all space reservation happens before a transaction handle is * held. @@ -3554,6 +3551,26 @@ static int try_flush_qgroup(struct btrfs_root *root) current->journal_info != BTRFS_SEND_TRANS_STUB) can_commit = false; + /* + * We don't want to run flush again and again, so if there is a running + * one, we won't try to start a new flush, but exit directly. + */ + if (test_and_set_bit(BTRFS_ROOT_QGROUP_FLUSHING, &root->state)) { + /* + * We are already holding a transaction, thus we can block other + * threads from flushing. So exit right now. This increases + * the chance of EDQUOT for heavy load and near limit cases. + * But we can argue that if we're already near limit, EDQUOT is + * unavoidable anyway. + */ + if (!can_commit) + return 0; + + wait_event(root->qgroup_flush_wait, + !test_bit(BTRFS_ROOT_QGROUP_FLUSHING, &root->state)); + return 0; + } + ret = btrfs_start_delalloc_snapshot(root); if (ret < 0) goto out; diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c index ab80896315be..b03e7891394e 100644 --- a/fs/btrfs/reflink.c +++ b/fs/btrfs/reflink.c @@ -89,6 +89,19 @@ static int copy_inline_to_page(struct btrfs_inode *inode, if (ret) goto out_unlock; + /* + * After dirtying the page our caller will need to start a transaction, + * and if we are low on metadata free space, that can cause flushing of + * delalloc for all inodes in order to get metadata space released. + * However we are holding the range locked for the whole duration of + * the clone/dedupe operation, so we may deadlock if that happens and no + * other task releases enough space. So mark this inode as not being + * possible to flush to avoid such deadlock. We will clear that flag + * when we finish cloning all extents, since a transaction is started + * after finding each extent to clone. 
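The resulting bracket, condensed from this hunk:

	set_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &inode->runtime_flags);
	// ... clone/dedupe the extents; each extent found may start a transaction
	clear_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &BTRFS_I(inode)->runtime_flags);

with start_delalloc_inodes() (see the fs/btrfs/inode.c hunk above) skipping flagged inodes whenever it is entered with in_reclaim_context set.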
+ */ + set_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &inode->runtime_flags); + if (comp_type == BTRFS_COMPRESS_NONE) { char *map; @@ -549,6 +562,8 @@ process_slot: out: btrfs_free_path(path); kvfree(buf); + clear_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &BTRFS_I(inode)->runtime_flags); + return ret; } diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c index 19b7db8b2117..df63ef64c5c0 100644 --- a/fs/btrfs/relocation.c +++ b/fs/btrfs/relocation.c @@ -2975,11 +2975,16 @@ static int delete_v1_space_cache(struct extent_buffer *leaf, return 0; for (i = 0; i < btrfs_header_nritems(leaf); i++) { + u8 type; + btrfs_item_key_to_cpu(leaf, &key, i); if (key.type != BTRFS_EXTENT_DATA_KEY) continue; ei = btrfs_item_ptr(leaf, i, struct btrfs_file_extent_item); - if (btrfs_file_extent_type(leaf, ei) == BTRFS_FILE_EXTENT_REG && + type = btrfs_file_extent_type(leaf, ei); + + if ((type == BTRFS_FILE_EXTENT_REG || + type == BTRFS_FILE_EXTENT_PREALLOC) && btrfs_file_extent_disk_bytenr(leaf, ei) == data_bytenr) { found = true; space_cache_ino = key.objectid; diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c index d719a2755a40..78a35374d492 100644 --- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -236,6 +236,7 @@ struct waiting_dir_move { * after this directory is moved, we can try to rmdir the ino rmdir_ino. */ u64 rmdir_ino; + u64 rmdir_gen; bool orphanized; }; @@ -316,7 +317,7 @@ static int is_waiting_for_move(struct send_ctx *sctx, u64 ino); static struct waiting_dir_move * get_waiting_dir_move(struct send_ctx *sctx, u64 ino); -static int is_waiting_for_rm(struct send_ctx *sctx, u64 dir_ino); +static int is_waiting_for_rm(struct send_ctx *sctx, u64 dir_ino, u64 gen); static int need_send_hole(struct send_ctx *sctx) { @@ -2299,7 +2300,7 @@ static int get_cur_path(struct send_ctx *sctx, u64 ino, u64 gen, fs_path_reset(name); - if (is_waiting_for_rm(sctx, ino)) { + if (is_waiting_for_rm(sctx, ino, gen)) { ret = gen_unique_name(sctx, ino, gen, name); if (ret < 0) goto out; @@ -2858,8 +2859,8 @@ out: return ret; } -static struct orphan_dir_info * -add_orphan_dir_info(struct send_ctx *sctx, u64 dir_ino) +static struct orphan_dir_info *add_orphan_dir_info(struct send_ctx *sctx, + u64 dir_ino, u64 dir_gen) { struct rb_node **p = &sctx->orphan_dirs.rb_node; struct rb_node *parent = NULL; @@ -2868,20 +2869,23 @@ add_orphan_dir_info(struct send_ctx *sctx, u64 dir_ino) while (*p) { parent = *p; entry = rb_entry(parent, struct orphan_dir_info, node); - if (dir_ino < entry->ino) { + if (dir_ino < entry->ino) p = &(*p)->rb_left; - } else if (dir_ino > entry->ino) { + else if (dir_ino > entry->ino) p = &(*p)->rb_right; - } else { + else if (dir_gen < entry->gen) + p = &(*p)->rb_left; + else if (dir_gen > entry->gen) + p = &(*p)->rb_right; + else return entry; - } } odi = kmalloc(sizeof(*odi), GFP_KERNEL); if (!odi) return ERR_PTR(-ENOMEM); odi->ino = dir_ino; - odi->gen = 0; + odi->gen = dir_gen; odi->last_dir_index_offset = 0; rb_link_node(&odi->node, parent, p); @@ -2889,8 +2893,8 @@ add_orphan_dir_info(struct send_ctx *sctx, u64 dir_ino) return odi; } -static struct orphan_dir_info * -get_orphan_dir_info(struct send_ctx *sctx, u64 dir_ino) +static struct orphan_dir_info *get_orphan_dir_info(struct send_ctx *sctx, + u64 dir_ino, u64 gen) { struct rb_node *n = sctx->orphan_dirs.rb_node; struct orphan_dir_info *entry; @@ -2901,15 +2905,19 @@ get_orphan_dir_info(struct send_ctx *sctx, u64 dir_ino) n = n->rb_left; else if (dir_ino > entry->ino) n = n->rb_right; + else if (gen < entry->gen) + n = n->rb_left; + else if (gen > entry->gen) 
+ n = n->rb_right; else return entry; } return NULL; } -static int is_waiting_for_rm(struct send_ctx *sctx, u64 dir_ino) +static int is_waiting_for_rm(struct send_ctx *sctx, u64 dir_ino, u64 gen) { - struct orphan_dir_info *odi = get_orphan_dir_info(sctx, dir_ino); + struct orphan_dir_info *odi = get_orphan_dir_info(sctx, dir_ino, gen); return odi != NULL; } @@ -2954,7 +2962,7 @@ static int can_rmdir(struct send_ctx *sctx, u64 dir, u64 dir_gen, key.type = BTRFS_DIR_INDEX_KEY; key.offset = 0; - odi = get_orphan_dir_info(sctx, dir); + odi = get_orphan_dir_info(sctx, dir, dir_gen); if (odi) key.offset = odi->last_dir_index_offset; @@ -2985,7 +2993,7 @@ static int can_rmdir(struct send_ctx *sctx, u64 dir, u64 dir_gen, dm = get_waiting_dir_move(sctx, loc.objectid); if (dm) { - odi = add_orphan_dir_info(sctx, dir); + odi = add_orphan_dir_info(sctx, dir, dir_gen); if (IS_ERR(odi)) { ret = PTR_ERR(odi); goto out; @@ -2993,12 +3001,13 @@ static int can_rmdir(struct send_ctx *sctx, u64 dir, u64 dir_gen, odi->gen = dir_gen; odi->last_dir_index_offset = found_key.offset; dm->rmdir_ino = dir; + dm->rmdir_gen = dir_gen; ret = 0; goto out; } if (loc.objectid > send_progress) { - odi = add_orphan_dir_info(sctx, dir); + odi = add_orphan_dir_info(sctx, dir, dir_gen); if (IS_ERR(odi)) { ret = PTR_ERR(odi); goto out; @@ -3038,6 +3047,7 @@ static int add_waiting_dir_move(struct send_ctx *sctx, u64 ino, bool orphanized) return -ENOMEM; dm->ino = ino; dm->rmdir_ino = 0; + dm->rmdir_gen = 0; dm->orphanized = orphanized; while (*p) { @@ -3183,7 +3193,7 @@ static int path_loop(struct send_ctx *sctx, struct fs_path *name, while (ino != BTRFS_FIRST_FREE_OBJECTID) { fs_path_reset(name); - if (is_waiting_for_rm(sctx, ino)) + if (is_waiting_for_rm(sctx, ino, gen)) break; if (is_waiting_for_move(sctx, ino)) { if (*ancestor_ino == 0) @@ -3223,6 +3233,7 @@ static int apply_dir_move(struct send_ctx *sctx, struct pending_dir_move *pm) u64 parent_ino, parent_gen; struct waiting_dir_move *dm = NULL; u64 rmdir_ino = 0; + u64 rmdir_gen; u64 ancestor; bool is_orphan; int ret; @@ -3237,6 +3248,7 @@ static int apply_dir_move(struct send_ctx *sctx, struct pending_dir_move *pm) dm = get_waiting_dir_move(sctx, pm->ino); ASSERT(dm); rmdir_ino = dm->rmdir_ino; + rmdir_gen = dm->rmdir_gen; is_orphan = dm->orphanized; free_waiting_dir_move(sctx, dm); @@ -3273,6 +3285,7 @@ static int apply_dir_move(struct send_ctx *sctx, struct pending_dir_move *pm) dm = get_waiting_dir_move(sctx, pm->ino); ASSERT(dm); dm->rmdir_ino = rmdir_ino; + dm->rmdir_gen = rmdir_gen; } goto out; } @@ -3291,7 +3304,7 @@ static int apply_dir_move(struct send_ctx *sctx, struct pending_dir_move *pm) struct orphan_dir_info *odi; u64 gen; - odi = get_orphan_dir_info(sctx, rmdir_ino); + odi = get_orphan_dir_info(sctx, rmdir_ino, rmdir_gen); if (!odi) { /* already deleted */ goto finish; @@ -5499,6 +5512,21 @@ static int clone_range(struct send_ctx *sctx, break; offset += clone_len; clone_root->offset += clone_len; + + /* + * If we are cloning from the file we are currently processing, + * and using the send root as the clone root, we must stop once + * the current clone offset reaches the current eof of the file + * at the receiver, otherwise we would issue an invalid clone + * operation (source range going beyond eof) and cause the + * receiver to fail. So if we reach the current eof, bail out + * and fallback to a regular write. 
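Concretely (illustrative numbers, not from the patch): if the receiver-side eof for the file is currently 64KiB and we are cloning a 128KiB range from the file into itself, the loop clones up to offset 64KiB and then breaks, leaving the remaining 64KiB to be emitted as regular write commands.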
+ */ + if (clone_root->root == sctx->send_root && + clone_root->ino == sctx->cur_ino && + clone_root->offset >= sctx->cur_inode_next_write_offset) + break; + data_offset += clone_len; next: path->slots[0]++; diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c index 64099565ab8f..e8347461c8dd 100644 --- a/fs/btrfs/space-info.c +++ b/fs/btrfs/space-info.c @@ -532,7 +532,9 @@ static void shrink_delalloc(struct btrfs_fs_info *fs_info, loops = 0; while ((delalloc_bytes || dio_bytes) && loops < 3) { - btrfs_start_delalloc_roots(fs_info, items); + u64 nr_pages = min(delalloc_bytes, to_reclaim) >> PAGE_SHIFT; + + btrfs_start_delalloc_roots(fs_info, nr_pages, true); loops++; if (wait_ordered && !trans) { diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c index 022f20810089..12d7d3be7cd4 100644 --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -175,7 +175,7 @@ void __btrfs_handle_fs_error(struct btrfs_fs_info *fs_info, const char *function btrfs_discard_stop(fs_info); /* btrfs handle error by forcing the filesystem readonly */ - sb->s_flags |= SB_RDONLY; + btrfs_set_sb_rdonly(sb); btrfs_info(fs_info, "forced readonly"); /* * Note that a running device replace operation is not canceled here @@ -1953,7 +1953,7 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data) /* avoid complains from lockdep et al. */ up(&fs_info->uuid_tree_rescan_sem); - sb->s_flags |= SB_RDONLY; + btrfs_set_sb_rdonly(sb); /* * Setting SB_RDONLY will put the cleaner thread to @@ -1964,10 +1964,42 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data) */ btrfs_delete_unused_bgs(fs_info); + /* + * The cleaner task could be already running before we set the + * flag BTRFS_FS_STATE_RO (and SB_RDONLY in the superblock). + * We must make sure that after we finish the remount, i.e. after + * we call btrfs_commit_super(), the cleaner can no longer start + * a transaction - either because it was dropping a dead root, + * running delayed iputs or deleting an unused block group (the + * cleaner picked a block group from the list of unused block + * groups before we were able to in the previous call to + * btrfs_delete_unused_bgs()). + */ + wait_on_bit(&fs_info->flags, BTRFS_FS_CLEANER_RUNNING, + TASK_UNINTERRUPTIBLE); + + /* + * We've set the superblock to RO mode, so we might have made + * the cleaner task sleep without running all pending delayed + * iputs. Go through all the delayed iputs here, so that if an + * unmount happens without remounting RW we don't end up at + * finishing close_ctree() with a non-empty list of delayed + * iputs. + */ + btrfs_run_delayed_iputs(fs_info); + btrfs_dev_replace_suspend_for_unmount(fs_info); btrfs_scrub_cancel(fs_info); btrfs_pause_balance(fs_info); + /* + * Pause the qgroup rescan worker if it is running. We don't want + * it to be still running after we are in RO mode, as after that, + * by the time we unmount, it might have left a transaction open, + * so we would leak the transaction and/or crash. 
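Taken together, the remount-to-RO path in this function now runs, in order (condensed from the surrounding hunks, error handling omitted):

	btrfs_set_sb_rdonly(sb);		// SB_RDONLY plus BTRFS_FS_STATE_RO
	btrfs_delete_unused_bgs(fs_info);
	wait_on_bit(&fs_info->flags, BTRFS_FS_CLEANER_RUNNING,
		    TASK_UNINTERRUPTIBLE);
	btrfs_run_delayed_iputs(fs_info);
	btrfs_dev_replace_suspend_for_unmount(fs_info);
	btrfs_scrub_cancel(fs_info);
	btrfs_pause_balance(fs_info);
	btrfs_qgroup_wait_for_completion(fs_info, false);
	ret = btrfs_commit_super(fs_info);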
+ */ + btrfs_qgroup_wait_for_completion(fs_info, false); + ret = btrfs_commit_super(fs_info); if (ret) goto restore; @@ -2006,7 +2038,7 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data) if (ret) goto restore; - sb->s_flags &= ~SB_RDONLY; + btrfs_clear_sb_rdonly(sb); set_bit(BTRFS_FS_OPEN, &fs_info->flags); } @@ -2028,6 +2060,8 @@ restore: /* We've hit an error - don't reset SB_RDONLY */ if (sb_rdonly(sb)) old_flags |= SB_RDONLY; + if (!(old_flags & SB_RDONLY)) + clear_bit(BTRFS_FS_STATE_RO, &fs_info->fs_state); sb->s_flags = old_flags; fs_info->mount_opt = old_opts; fs_info->compress_type = old_compress_type; diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c index 8ca334d554af..6bd97bd4cb37 100644 --- a/fs/btrfs/tests/btrfs-tests.c +++ b/fs/btrfs/tests/btrfs-tests.c @@ -55,8 +55,14 @@ struct inode *btrfs_new_test_inode(void) struct inode *inode; inode = new_inode(test_mnt->mnt_sb); - if (inode) - inode_init_owner(inode, NULL, S_IFREG); + if (!inode) + return NULL; + + inode->i_mode = S_IFREG; + BTRFS_I(inode)->location.type = BTRFS_INODE_ITEM_KEY; + BTRFS_I(inode)->location.objectid = BTRFS_FIRST_FREE_OBJECTID; + BTRFS_I(inode)->location.offset = 0; + inode_init_owner(inode, NULL, S_IFREG); return inode; } diff --git a/fs/btrfs/tests/inode-tests.c b/fs/btrfs/tests/inode-tests.c index 04022069761d..c9874b12d337 100644 --- a/fs/btrfs/tests/inode-tests.c +++ b/fs/btrfs/tests/inode-tests.c @@ -232,11 +232,6 @@ static noinline int test_btrfs_get_extent(u32 sectorsize, u32 nodesize) return ret; } - inode->i_mode = S_IFREG; - BTRFS_I(inode)->location.type = BTRFS_INODE_ITEM_KEY; - BTRFS_I(inode)->location.objectid = BTRFS_FIRST_FREE_OBJECTID; - BTRFS_I(inode)->location.offset = 0; - fs_info = btrfs_alloc_dummy_fs_info(nodesize, sectorsize); if (!fs_info) { test_std_err(TEST_ALLOC_FS_INFO); @@ -835,10 +830,6 @@ static int test_hole_first(u32 sectorsize, u32 nodesize) return ret; } - BTRFS_I(inode)->location.type = BTRFS_INODE_ITEM_KEY; - BTRFS_I(inode)->location.objectid = BTRFS_FIRST_FREE_OBJECTID; - BTRFS_I(inode)->location.offset = 0; - fs_info = btrfs_alloc_dummy_fs_info(nodesize, sectorsize); if (!fs_info) { test_std_err(TEST_ALLOC_FS_INFO); diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c index 8e0f7a1029c6..6af7f2bf92de 100644 --- a/fs/btrfs/transaction.c +++ b/fs/btrfs/transaction.c @@ -2265,14 +2265,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans) btrfs_free_log_root_tree(trans, fs_info); /* - * commit_fs_roots() can call btrfs_save_ino_cache(), which generates - * new delayed refs. Must handle them or qgroup can be wrong. - */ - ret = btrfs_run_delayed_refs(trans, (unsigned long)-1); - if (ret) - goto unlock_tree_log; - - /* * Since fs roots are all committed, we can get a quite accurate * new_roots. So let's do quota accounting. 
*/ diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c index 028e733e42f3..582061c7b547 100644 --- a/fs/btrfs/tree-checker.c +++ b/fs/btrfs/tree-checker.c @@ -760,6 +760,7 @@ int btrfs_check_chunk_valid(struct extent_buffer *leaf, { struct btrfs_fs_info *fs_info = leaf->fs_info; u64 length; + u64 chunk_end; u64 stripe_len; u16 num_stripes; u16 sub_stripes; @@ -814,6 +815,12 @@ int btrfs_check_chunk_valid(struct extent_buffer *leaf, "invalid chunk length, have %llu", length); return -EUCLEAN; } + if (unlikely(check_add_overflow(logical, length, &chunk_end))) { + chunk_err(leaf, chunk, logical, +"invalid chunk logical start and length, have logical start %llu length %llu", + logical, length); + return -EUCLEAN; + } if (unlikely(!is_power_of_2(stripe_len) || stripe_len != BTRFS_STRIPE_LEN)) { chunk_err(leaf, chunk, logical, "invalid chunk stripe length: %llu", diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index ee086fc56c30..0a6de859eb22 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -2592,7 +2592,7 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path set_blocksize(device->bdev, BTRFS_BDEV_BLOCKSIZE); if (seeding_dev) { - sb->s_flags &= ~SB_RDONLY; + btrfs_clear_sb_rdonly(sb); ret = btrfs_prepare_sprout(fs_info); if (ret) { btrfs_abort_transaction(trans, ret); @@ -2728,7 +2728,7 @@ error_sysfs: mutex_unlock(&fs_info->fs_devices->device_list_mutex); error_trans: if (seeding_dev) - sb->s_flags |= SB_RDONLY; + btrfs_set_sb_rdonly(sb); if (trans) btrfs_end_transaction(trans); error_free_zone: @@ -4317,6 +4317,8 @@ int btrfs_recover_balance(struct btrfs_fs_info *fs_info) btrfs_warn(fs_info, "balance: cannot set exclusive op status, resume manually"); + btrfs_release_path(path); + mutex_lock(&fs_info->balance_mutex); BUG_ON(fs_info->balance_ctl); spin_lock(&fs_info->balance_lock); diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c index 8bda092e60c5..e027c718ca01 100644 --- a/fs/cachefiles/rdwr.c +++ b/fs/cachefiles/rdwr.c @@ -413,7 +413,6 @@ int cachefiles_read_or_alloc_page(struct fscache_retrieval *op, inode = d_backing_inode(object->backer); ASSERT(S_ISREG(inode->i_mode)); - ASSERT(inode->i_mapping->a_ops->readpages); /* calculate the shift required to use bmap */ shift = PAGE_SHIFT - inode->i_sb->s_blocksize_bits; @@ -713,7 +712,6 @@ int cachefiles_read_or_alloc_pages(struct fscache_retrieval *op, inode = d_backing_inode(object->backer); ASSERT(S_ISREG(inode->i_mode)); - ASSERT(inode->i_mapping->a_ops->readpages); /* calculate the shift required to use bmap */ shift = PAGE_SHIFT - inode->i_sb->s_blocksize_bits; diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c index 840587037b59..d87bd852ed96 100644 --- a/fs/ceph/mds_client.c +++ b/fs/ceph/mds_client.c @@ -5038,7 +5038,7 @@ bad: return; } -static struct ceph_connection *con_get(struct ceph_connection *con) +static struct ceph_connection *mds_get_con(struct ceph_connection *con) { struct ceph_mds_session *s = con->private; @@ -5047,7 +5047,7 @@ static struct ceph_connection *con_get(struct ceph_connection *con) return NULL; } -static void con_put(struct ceph_connection *con) +static void mds_put_con(struct ceph_connection *con) { struct ceph_mds_session *s = con->private; @@ -5058,7 +5058,7 @@ static void con_put(struct ceph_connection *con) * if the client is unresponsive for long enough, the mds will kill * the session entirely. 
*/ -static void peer_reset(struct ceph_connection *con) +static void mds_peer_reset(struct ceph_connection *con) { struct ceph_mds_session *s = con->private; struct ceph_mds_client *mdsc = s->s_mdsc; @@ -5067,7 +5067,7 @@ static void peer_reset(struct ceph_connection *con) send_mds_reconnect(mdsc, s); } -static void dispatch(struct ceph_connection *con, struct ceph_msg *msg) +static void mds_dispatch(struct ceph_connection *con, struct ceph_msg *msg) { struct ceph_mds_session *s = con->private; struct ceph_mds_client *mdsc = s->s_mdsc; @@ -5125,8 +5125,8 @@ out: * Note: returned pointer is the address of a structure that's * managed separately. Caller must *not* attempt to free it. */ -static struct ceph_auth_handshake *get_authorizer(struct ceph_connection *con, - int *proto, int force_new) +static struct ceph_auth_handshake * +mds_get_authorizer(struct ceph_connection *con, int *proto, int force_new) { struct ceph_mds_session *s = con->private; struct ceph_mds_client *mdsc = s->s_mdsc; @@ -5142,7 +5142,7 @@ static struct ceph_auth_handshake *get_authorizer(struct ceph_connection *con, return auth; } -static int add_authorizer_challenge(struct ceph_connection *con, +static int mds_add_authorizer_challenge(struct ceph_connection *con, void *challenge_buf, int challenge_buf_len) { struct ceph_mds_session *s = con->private; @@ -5153,7 +5153,7 @@ static int add_authorizer_challenge(struct ceph_connection *con, challenge_buf, challenge_buf_len); } -static int verify_authorizer_reply(struct ceph_connection *con) +static int mds_verify_authorizer_reply(struct ceph_connection *con) { struct ceph_mds_session *s = con->private; struct ceph_mds_client *mdsc = s->s_mdsc; @@ -5165,7 +5165,7 @@ static int verify_authorizer_reply(struct ceph_connection *con) NULL, NULL, NULL, NULL); } -static int invalidate_authorizer(struct ceph_connection *con) +static int mds_invalidate_authorizer(struct ceph_connection *con) { struct ceph_mds_session *s = con->private; struct ceph_mds_client *mdsc = s->s_mdsc; @@ -5288,15 +5288,15 @@ static int mds_check_message_signature(struct ceph_msg *msg) } static const struct ceph_connection_operations mds_con_ops = { - .get = con_get, - .put = con_put, - .dispatch = dispatch, - .get_authorizer = get_authorizer, - .add_authorizer_challenge = add_authorizer_challenge, - .verify_authorizer_reply = verify_authorizer_reply, - .invalidate_authorizer = invalidate_authorizer, - .peer_reset = peer_reset, + .get = mds_get_con, + .put = mds_put_con, .alloc_msg = mds_alloc_msg, + .dispatch = mds_dispatch, + .peer_reset = mds_peer_reset, + .get_authorizer = mds_get_authorizer, + .add_authorizer_challenge = mds_add_authorizer_challenge, + .verify_authorizer_reply = mds_verify_authorizer_reply, + .invalidate_authorizer = mds_invalidate_authorizer, .sign_message = mds_sign_message, .check_message_signature = mds_check_message_signature, .get_auth_request = mds_get_auth_request, diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c index b9df85506938..c8ef24bac94f 100644 --- a/fs/cifs/connect.c +++ b/fs/cifs/connect.c @@ -2195,7 +2195,7 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb3_fs_context *ctx) if (ses->server->capabilities & SMB2_GLOBAL_CAP_DIRECTORY_LEASING) tcon->nohandlecache = ctx->nohandlecache; else - tcon->nohandlecache = 1; + tcon->nohandlecache = true; tcon->nodelete = ctx->nodelete; tcon->local_lease = ctx->local_lease; INIT_LIST_HEAD(&tcon->pending_opens); @@ -2628,7 +2628,7 @@ void reset_cifs_unix_caps(unsigned int xid, struct cifs_tcon *tcon, } else if (ctx) tcon->unix_ext 
= 1; /* Unix Extensions supported */ - if (tcon->unix_ext == 0) { + if (!tcon->unix_ext) { cifs_dbg(FYI, "Unix extensions disabled so not set on reconnect\n"); return; } @@ -3740,7 +3740,7 @@ cifs_setup_session(const unsigned int xid, struct cifs_ses *ses, if (!ses->binding) { ses->capabilities = server->capabilities; - if (linuxExtEnabled == 0) + if (!linuxExtEnabled) ses->capabilities &= (~server->vals->cap_unix); if (ses->auth_key.response) { diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c index 6ad6ba5f6ebe..0fdb0de7ff86 100644 --- a/fs/cifs/dfs_cache.c +++ b/fs/cifs/dfs_cache.c @@ -1260,7 +1260,8 @@ void dfs_cache_del_vol(const char *fullpath) vi = find_vol(fullpath); spin_unlock(&vol_list_lock); - kref_put(&vi->refcnt, vol_release); + if (!IS_ERR(vi)) + kref_put(&vi->refcnt, vol_release); } /** diff --git a/fs/cifs/fs_context.c b/fs/cifs/fs_context.c index 0afccbbed2e6..076bcadc756a 100644 --- a/fs/cifs/fs_context.c +++ b/fs/cifs/fs_context.c @@ -303,8 +303,6 @@ do { \ int smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx) { - int rc = 0; - memcpy(new_ctx, ctx, sizeof(*ctx)); new_ctx->prepath = NULL; new_ctx->mount_options = NULL; @@ -327,7 +325,7 @@ smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx DUP_CTX_STR(nodename); DUP_CTX_STR(iocharset); - return rc; + return 0; } static int diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c index 067eb44c7baa..794fc3b68b4f 100644 --- a/fs/cifs/smb2pdu.c +++ b/fs/cifs/smb2pdu.c @@ -3248,7 +3248,7 @@ close_exit: free_rsp_buf(resp_buftype, rsp); /* retry close in a worker thread if this one is interrupted */ - if (rc == -EINTR) { + if (is_interrupt_error(rc)) { int tmp_rc; tmp_rc = smb2_handle_cancelled_close(tcon, persistent_fid, diff --git a/fs/cifs/smb2pdu.h b/fs/cifs/smb2pdu.h index 204a622b89ed..d85edf5d1429 100644 --- a/fs/cifs/smb2pdu.h +++ b/fs/cifs/smb2pdu.h @@ -424,7 +424,7 @@ struct smb2_rdma_transform_capabilities_context { __le16 TransformCount; __u16 Reserved1; __u32 Reserved2; - __le16 RDMATransformIds[1]; + __le16 RDMATransformIds[]; } __packed; /* Signing algorithms */ diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c index e9abb41aa89b..95ef26b555b9 100644 --- a/fs/cifs/transport.c +++ b/fs/cifs/transport.c @@ -338,7 +338,7 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst, if (ssocket == NULL) return -EAGAIN; - if (signal_pending(current)) { + if (fatal_signal_pending(current)) { cifs_dbg(FYI, "signal pending before send request\n"); return -ERESTARTSYS; } @@ -429,7 +429,7 @@ unmask: if (signal_pending(current) && (total_len != send_length)) { cifs_dbg(FYI, "signal is pending after attempt to send\n"); - rc = -EINTR; + rc = -ERESTARTSYS; } /* uncork it */ diff --git a/fs/ext4/ext4_jbd2.c b/fs/ext4/ext4_jbd2.c index 1a0a827a7f34..be799040a415 100644 --- a/fs/ext4/ext4_jbd2.c +++ b/fs/ext4/ext4_jbd2.c @@ -372,20 +372,3 @@ int __ext4_handle_dirty_metadata(const char *where, unsigned int line, } return err; } - -int __ext4_handle_dirty_super(const char *where, unsigned int line, - handle_t *handle, struct super_block *sb) -{ - struct buffer_head *bh = EXT4_SB(sb)->s_sbh; - int err = 0; - - ext4_superblock_csum_set(sb); - if (ext4_handle_valid(handle)) { - err = jbd2_journal_dirty_metadata(handle, bh); - if (err) - ext4_journal_abort_handle(where, line, __func__, - bh, handle, err); - } else - mark_buffer_dirty(bh); - return err; -} diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h index a124c68b0c75..0d2fa423b7ad 100644 --- 
a/fs/ext4/ext4_jbd2.h +++ b/fs/ext4/ext4_jbd2.h @@ -244,9 +244,6 @@ int __ext4_handle_dirty_metadata(const char *where, unsigned int line, handle_t *handle, struct inode *inode, struct buffer_head *bh); -int __ext4_handle_dirty_super(const char *where, unsigned int line, - handle_t *handle, struct super_block *sb); - #define ext4_journal_get_write_access(handle, bh) \ __ext4_journal_get_write_access(__func__, __LINE__, (handle), (bh)) #define ext4_forget(handle, is_metadata, inode, bh, block_nr) \ @@ -257,8 +254,6 @@ int __ext4_handle_dirty_super(const char *where, unsigned int line, #define ext4_handle_dirty_metadata(handle, inode, bh) \ __ext4_handle_dirty_metadata(__func__, __LINE__, (handle), (inode), \ (bh)) -#define ext4_handle_dirty_super(handle, sb) \ - __ext4_handle_dirty_super(__func__, __LINE__, (handle), (sb)) handle_t *__ext4_journal_start_sb(struct super_block *sb, unsigned int line, int type, int blocks, int rsv_blocks, diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c index 4fcc21c25e79..0a14a7c87bf8 100644 --- a/fs/ext4/fast_commit.c +++ b/fs/ext4/fast_commit.c @@ -604,13 +604,13 @@ void ext4_fc_track_range(handle_t *handle, struct inode *inode, ext4_lblk_t star trace_ext4_fc_track_range(inode, start, end, ret); } -static void ext4_fc_submit_bh(struct super_block *sb) +static void ext4_fc_submit_bh(struct super_block *sb, bool is_tail) { int write_flags = REQ_SYNC; struct buffer_head *bh = EXT4_SB(sb)->s_fc_bh; - /* TODO: REQ_FUA | REQ_PREFLUSH is unnecessarily expensive. */ - if (test_opt(sb, BARRIER)) + /* Add REQ_FUA | REQ_PREFLUSH only if it is the tail */ + if (test_opt(sb, BARRIER) && is_tail) write_flags |= REQ_FUA | REQ_PREFLUSH; lock_buffer(bh); set_buffer_dirty(bh); @@ -684,7 +684,7 @@ static u8 *ext4_fc_reserve_space(struct super_block *sb, int len, u32 *crc) *crc = ext4_chksum(sbi, *crc, tl, sizeof(*tl)); if (pad_len > 0) ext4_fc_memzero(sb, tl + 1, pad_len, crc); - ext4_fc_submit_bh(sb); + ext4_fc_submit_bh(sb, false); ret = jbd2_fc_get_buf(EXT4_SB(sb)->s_journal, &bh); if (ret) @@ -741,7 +741,7 @@ static int ext4_fc_write_tail(struct super_block *sb, u32 crc) tail.fc_crc = cpu_to_le32(crc); ext4_fc_memcpy(sb, dst, &tail.fc_crc, sizeof(tail.fc_crc), NULL); - ext4_fc_submit_bh(sb); + ext4_fc_submit_bh(sb, true); return 0; } @@ -1268,7 +1268,7 @@ static void ext4_fc_cleanup(journal_t *journal, int full) list_splice_init(&sbi->s_fc_dentry_q[FC_Q_STAGING], &sbi->s_fc_dentry_q[FC_Q_MAIN]); list_splice_init(&sbi->s_fc_q[FC_Q_STAGING], - &sbi->s_fc_q[FC_Q_STAGING]); + &sbi->s_fc_q[FC_Q_MAIN]); ext4_clear_mount_flag(sb, EXT4_MF_FC_COMMITTING); ext4_clear_mount_flag(sb, EXT4_MF_FC_INELIGIBLE); @@ -1318,14 +1318,14 @@ static int ext4_fc_replay_unlink(struct super_block *sb, struct ext4_fc_tl *tl) entry.len = darg.dname_len; inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL); - if (IS_ERR_OR_NULL(inode)) { + if (IS_ERR(inode)) { jbd_debug(1, "Inode %d not found", darg.ino); return 0; } old_parent = ext4_iget(sb, darg.parent_ino, EXT4_IGET_NORMAL); - if (IS_ERR_OR_NULL(old_parent)) { + if (IS_ERR(old_parent)) { jbd_debug(1, "Dir with inode %d not found", darg.parent_ino); iput(inode); return 0; @@ -1410,7 +1410,7 @@ static int ext4_fc_replay_link(struct super_block *sb, struct ext4_fc_tl *tl) darg.parent_ino, darg.dname_len); inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL); - if (IS_ERR_OR_NULL(inode)) { + if (IS_ERR(inode)) { jbd_debug(1, "Inode not found."); return 0; } @@ -1466,10 +1466,11 @@ static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl
*tl) trace_ext4_fc_replay(sb, tag, ino, 0, 0); inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL); - if (!IS_ERR_OR_NULL(inode)) { + if (!IS_ERR(inode)) { ext4_ext_clear_bb(inode); iput(inode); } + inode = NULL; ext4_fc_record_modified_inode(sb, ino); @@ -1512,7 +1513,7 @@ static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl) /* Given that we just wrote the inode on disk, this SHOULD succeed. */ inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL); - if (IS_ERR_OR_NULL(inode)) { + if (IS_ERR(inode)) { jbd_debug(1, "Inode not found."); return -EFSCORRUPTED; } @@ -1564,7 +1565,7 @@ static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl) goto out; inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL); - if (IS_ERR_OR_NULL(inode)) { + if (IS_ERR(inode)) { jbd_debug(1, "inode %d not found.", darg.ino); inode = NULL; ret = -EINVAL; @@ -1577,7 +1578,7 @@ static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl) * dot and dot dot dirents are setup properly. */ dir = ext4_iget(sb, darg.parent_ino, EXT4_IGET_NORMAL); - if (IS_ERR_OR_NULL(dir)) { + if (IS_ERR(dir)) { jbd_debug(1, "Dir %d not found.", darg.ino); goto out; } @@ -1653,7 +1654,7 @@ static int ext4_fc_replay_add_range(struct super_block *sb, inode = ext4_iget(sb, le32_to_cpu(fc_add_ex->fc_ino), EXT4_IGET_NORMAL); - if (IS_ERR_OR_NULL(inode)) { + if (IS_ERR(inode)) { jbd_debug(1, "Inode not found."); return 0; } @@ -1777,7 +1778,7 @@ ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl) le32_to_cpu(lrange->fc_ino), cur, remaining); inode = ext4_iget(sb, le32_to_cpu(lrange->fc_ino), EXT4_IGET_NORMAL); - if (IS_ERR_OR_NULL(inode)) { + if (IS_ERR(inode)) { jbd_debug(1, "Inode %d not found", le32_to_cpu(lrange->fc_ino)); return 0; } @@ -1832,7 +1833,7 @@ static void ext4_fc_set_bitmaps_and_counters(struct super_block *sb) for (i = 0; i < state->fc_modified_inodes_used; i++) { inode = ext4_iget(sb, state->fc_modified_inodes[i], EXT4_IGET_NORMAL); - if (IS_ERR_OR_NULL(inode)) { + if (IS_ERR(inode)) { jbd_debug(1, "Inode %d not found.", state->fc_modified_inodes[i]); continue; @@ -1849,7 +1850,7 @@ static void ext4_fc_set_bitmaps_and_counters(struct super_block *sb) if (ret > 0) { path = ext4_find_extent(inode, map.m_lblk, NULL, 0); - if (!IS_ERR_OR_NULL(path)) { + if (!IS_ERR(path)) { for (j = 0; j < path->p_depth; j++) ext4_mb_mark_bb(inode->i_sb, path[j].p_block, 1, 1); diff --git a/fs/ext4/file.c b/fs/ext4/file.c index 3ed8c048fb12..349b27f0dda0 100644 --- a/fs/ext4/file.c +++ b/fs/ext4/file.c @@ -809,9 +809,12 @@ static int ext4_sample_last_mounted(struct super_block *sb, err = ext4_journal_get_write_access(handle, sbi->s_sbh); if (err) goto out_journal; - strlcpy(sbi->s_es->s_last_mounted, cp, + lock_buffer(sbi->s_sbh); + strncpy(sbi->s_es->s_last_mounted, cp, sizeof(sbi->s_es->s_last_mounted)); - ext4_handle_dirty_super(handle, sb); + ext4_superblock_csum_set(sb); + unlock_buffer(sbi->s_sbh); + ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh); out_journal: ext4_journal_stop(handle); out: diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 27946882d4ce..c173c8405856 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -5150,9 +5150,13 @@ static int ext4_do_update_inode(handle_t *handle, err = ext4_journal_get_write_access(handle, EXT4_SB(sb)->s_sbh); if (err) goto out_brelse; + lock_buffer(EXT4_SB(sb)->s_sbh); ext4_set_feature_large_file(sb); + ext4_superblock_csum_set(sb); + unlock_buffer(EXT4_SB(sb)->s_sbh); ext4_handle_sync(handle); - err = 
ext4_handle_dirty_super(handle, sb); + err = ext4_handle_dirty_metadata(handle, NULL, + EXT4_SB(sb)->s_sbh); } ext4_update_inode_fsync_trans(handle, inode, need_datasync); out_brelse: diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c index 524e13432447..d9665d2f82db 100644 --- a/fs/ext4/ioctl.c +++ b/fs/ext4/ioctl.c @@ -1157,7 +1157,10 @@ resizefs_out: err = ext4_journal_get_write_access(handle, sbi->s_sbh); if (err) goto pwsalt_err_journal; + lock_buffer(sbi->s_sbh); generate_random_uuid(sbi->s_es->s_encrypt_pw_salt); + ext4_superblock_csum_set(sb); + unlock_buffer(sbi->s_sbh); err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh); pwsalt_err_journal: diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c index b17a082b7db1..cf652ba3e74d 100644 --- a/fs/ext4/namei.c +++ b/fs/ext4/namei.c @@ -2976,14 +2976,17 @@ int ext4_orphan_add(handle_t *handle, struct inode *inode) (le32_to_cpu(sbi->s_es->s_inodes_count))) { /* Insert this inode at the head of the on-disk orphan list */ NEXT_ORPHAN(inode) = le32_to_cpu(sbi->s_es->s_last_orphan); + lock_buffer(sbi->s_sbh); sbi->s_es->s_last_orphan = cpu_to_le32(inode->i_ino); + ext4_superblock_csum_set(sb); + unlock_buffer(sbi->s_sbh); dirty = true; } list_add(&EXT4_I(inode)->i_orphan, &sbi->s_orphan); mutex_unlock(&sbi->s_orphan_lock); if (dirty) { - err = ext4_handle_dirty_super(handle, sb); + err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh); rc = ext4_mark_iloc_dirty(handle, inode, &iloc); if (!err) err = rc; @@ -3059,9 +3062,12 @@ int ext4_orphan_del(handle_t *handle, struct inode *inode) mutex_unlock(&sbi->s_orphan_lock); goto out_brelse; } + lock_buffer(sbi->s_sbh); sbi->s_es->s_last_orphan = cpu_to_le32(ino_next); + ext4_superblock_csum_set(inode->i_sb); + unlock_buffer(sbi->s_sbh); mutex_unlock(&sbi->s_orphan_lock); - err = ext4_handle_dirty_super(handle, inode->i_sb); + err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh); } else { struct ext4_iloc iloc2; struct inode *i_prev = @@ -3593,9 +3599,6 @@ static int ext4_setent(handle_t *handle, struct ext4_renament *ent, return retval2; } } - brelse(ent->bh); - ent->bh = NULL; - return retval; } @@ -3794,6 +3797,7 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry, } } + old_file_type = old.de->file_type; if (IS_DIRSYNC(old.dir) || IS_DIRSYNC(new.dir)) ext4_handle_sync(handle); @@ -3821,7 +3825,6 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry, force_reread = (new.dir->i_ino == old.dir->i_ino && ext4_test_inode_flag(new.dir, EXT4_INODE_INLINE_DATA)); - old_file_type = old.de->file_type; if (whiteout) { /* * Do this before adding a new entry, so the old entry is sure @@ -3919,15 +3922,19 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry, retval = 0; end_rename: - brelse(old.dir_bh); - brelse(old.bh); - brelse(new.bh); if (whiteout) { - if (retval) + if (retval) { + ext4_setent(handle, &old, + old.inode->i_ino, old_file_type); drop_nlink(whiteout); + } unlock_new_inode(whiteout); iput(whiteout); + } + brelse(old.dir_bh); + brelse(old.bh); + brelse(new.bh); if (handle) ext4_journal_stop(handle); return retval; diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c index 928700d57eb6..bd0d185654f3 100644 --- a/fs/ext4/resize.c +++ b/fs/ext4/resize.c @@ -899,8 +899,11 @@ static int add_new_gdb(handle_t *handle, struct inode *inode, EXT4_SB(sb)->s_gdb_count++; ext4_kvfree_array_rcu(o_group_desc); + lock_buffer(EXT4_SB(sb)->s_sbh); le16_add_cpu(&es->s_reserved_gdt_blocks, -1); - err = ext4_handle_dirty_super(handle, sb); + 
ext4_superblock_csum_set(sb); + unlock_buffer(EXT4_SB(sb)->s_sbh); + err = ext4_handle_dirty_metadata(handle, NULL, EXT4_SB(sb)->s_sbh); if (err) ext4_std_error(sb, err); return err; @@ -1384,6 +1387,7 @@ static void ext4_update_super(struct super_block *sb, reserved_blocks *= blocks_count; do_div(reserved_blocks, 100); + lock_buffer(sbi->s_sbh); ext4_blocks_count_set(es, ext4_blocks_count(es) + blocks_count); ext4_free_blocks_count_set(es, ext4_free_blocks_count(es) + free_blocks); le32_add_cpu(&es->s_inodes_count, EXT4_INODES_PER_GROUP(sb) * @@ -1421,6 +1425,8 @@ static void ext4_update_super(struct super_block *sb, * active. */ ext4_r_blocks_count_set(es, ext4_r_blocks_count(es) + reserved_blocks); + ext4_superblock_csum_set(sb); + unlock_buffer(sbi->s_sbh); /* Update the free space counts */ percpu_counter_add(&sbi->s_freeclusters_counter, @@ -1515,7 +1521,7 @@ static int ext4_flex_group_add(struct super_block *sb, ext4_update_super(sb, flex_gd); - err = ext4_handle_dirty_super(handle, sb); + err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh); exit_journal: err2 = ext4_journal_stop(handle); @@ -1717,15 +1723,18 @@ static int ext4_group_extend_no_check(struct super_block *sb, goto errout; } + lock_buffer(EXT4_SB(sb)->s_sbh); ext4_blocks_count_set(es, o_blocks_count + add); ext4_free_blocks_count_set(es, ext4_free_blocks_count(es) + add); + ext4_superblock_csum_set(sb); + unlock_buffer(EXT4_SB(sb)->s_sbh); ext4_debug("freeing blocks %llu through %llu\n", o_blocks_count, o_blocks_count + add); /* We add the blocks to the bitmap and set the group need init bit */ err = ext4_group_add_blocks(handle, sb, o_blocks_count, add); if (err) goto errout; - ext4_handle_dirty_super(handle, sb); + ext4_handle_dirty_metadata(handle, NULL, EXT4_SB(sb)->s_sbh); ext4_debug("freed blocks %llu through %llu\n", o_blocks_count, o_blocks_count + add); errout: @@ -1874,12 +1883,15 @@ static int ext4_convert_meta_bg(struct super_block *sb, struct inode *inode) if (err) goto errout; + lock_buffer(sbi->s_sbh); ext4_clear_feature_resize_inode(sb); ext4_set_feature_meta_bg(sb); sbi->s_es->s_first_meta_bg = cpu_to_le32(num_desc_blocks(sb, sbi->s_groups_count)); + ext4_superblock_csum_set(sb); + unlock_buffer(sbi->s_sbh); - err = ext4_handle_dirty_super(handle, sb); + err = ext4_handle_dirty_metadata(handle, NULL, sbi->s_sbh); if (err) { ext4_std_error(sb, err); goto errout; diff --git a/fs/ext4/super.c b/fs/ext4/super.c index 21121787c874..9a6f9875aa34 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -65,7 +65,8 @@ static struct ratelimit_state ext4_mount_msg_ratelimit; static int ext4_load_journal(struct super_block *, struct ext4_super_block *, unsigned long journal_devnum); static int ext4_show_options(struct seq_file *seq, struct dentry *root); -static int ext4_commit_super(struct super_block *sb, int sync); +static void ext4_update_super(struct super_block *sb); +static int ext4_commit_super(struct super_block *sb); static int ext4_mark_recovery_complete(struct super_block *sb, struct ext4_super_block *es); static int ext4_clear_journal_err(struct super_block *sb, @@ -586,15 +587,12 @@ static int ext4_errno_to_code(int errno) return EXT4_ERR_UNKNOWN; } -static void __save_error_info(struct super_block *sb, int error, - __u32 ino, __u64 block, - const char *func, unsigned int line) +static void save_error_info(struct super_block *sb, int error, + __u32 ino, __u64 block, + const char *func, unsigned int line) { struct ext4_sb_info *sbi = EXT4_SB(sb); - EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS; - if 
(bdev_read_only(sb->s_bdev)) - return; /* We default to EFSCORRUPTED error... */ if (error == 0) error = EFSCORRUPTED; @@ -618,15 +616,6 @@ static void __save_error_info(struct super_block *sb, int error, spin_unlock(&sbi->s_error_lock); } -static void save_error_info(struct super_block *sb, int error, - __u32 ino, __u64 block, - const char *func, unsigned int line) -{ - __save_error_info(sb, error, ino, block, func, line); - if (!bdev_read_only(sb->s_bdev)) - ext4_commit_super(sb, 1); -} - /* Deal with the reporting of failure conditions on a filesystem such as * inconsistencies detected or read IO failures. * @@ -647,19 +636,40 @@ static void save_error_info(struct super_block *sb, int error, * used to deal with unrecoverable failures such as journal IO errors or ENOMEM * at a critical moment in log management. */ -static void ext4_handle_error(struct super_block *sb, bool force_ro) +static void ext4_handle_error(struct super_block *sb, bool force_ro, int error, + __u32 ino, __u64 block, + const char *func, unsigned int line) { journal_t *journal = EXT4_SB(sb)->s_journal; + bool continue_fs = !force_ro && test_opt(sb, ERRORS_CONT); + EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS; if (test_opt(sb, WARN_ON_ERROR)) WARN_ON_ONCE(1); - if (sb_rdonly(sb) || (!force_ro && test_opt(sb, ERRORS_CONT))) + if (!continue_fs && !sb_rdonly(sb)) { + ext4_set_mount_flag(sb, EXT4_MF_FS_ABORTED); + if (journal) + jbd2_journal_abort(journal, -EIO); + } + + if (!bdev_read_only(sb->s_bdev)) { + save_error_info(sb, error, ino, block, func, line); + /* + * In case the fs should keep running, we need to write out + * the superblock through the journal. Due to lock ordering + * constraints, it may not be safe to do it right here so we + * defer superblock flushing to a workqueue. + */ + if (continue_fs) + schedule_work(&EXT4_SB(sb)->s_error_work); + else + ext4_commit_super(sb); + } + + if (sb_rdonly(sb) || continue_fs) return; - ext4_set_mount_flag(sb, EXT4_MF_FS_ABORTED); - if (journal) - jbd2_journal_abort(journal, -EIO); /* * We force ERRORS_RO behavior when system is rebooting. Otherwise we * could panic during 'reboot -f' as the underlying device got already @@ -682,8 +692,39 @@ static void flush_stashed_error_work(struct work_struct *work) { struct ext4_sb_info *sbi = container_of(work, struct ext4_sb_info, s_error_work); + journal_t *journal = sbi->s_journal; + handle_t *handle; - ext4_commit_super(sbi->s_sb, 1); + /* + * If the journal is still running, we have to write out the superblock + * through the journal to avoid colliding with other journalled sb + * updates. + * + * We use jbd2 functions directly here to avoid recursing back into + * ext4 error handling code during handling of previous errors. + */ + if (!sb_rdonly(sbi->s_sb) && journal) { + handle = jbd2_journal_start(journal, 1); + if (IS_ERR(handle)) + goto write_directly; + if (jbd2_journal_get_write_access(handle, sbi->s_sbh)) { + jbd2_journal_stop(handle); + goto write_directly; + } + ext4_update_super(sbi->s_sb); + if (jbd2_journal_dirty_metadata(handle, sbi->s_sbh)) { + jbd2_journal_stop(handle); + goto write_directly; + } + jbd2_journal_stop(handle); + return; + } +write_directly: + /* + * Write through journal failed. Write sb directly to get error info + * out and hope for the best.
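+ * If even the direct write fails, the error info stays staged in the + * in-memory superblock and should reach disk with the next successful + * superblock write.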
+ */ + ext4_commit_super(sbi->s_sb); } #define ext4_error_ratelimit(sb) \ @@ -710,8 +751,7 @@ void __ext4_error(struct super_block *sb, const char *function, sb->s_id, function, line, current->comm, &vaf); va_end(args); } - save_error_info(sb, error, 0, block, function, line); - ext4_handle_error(sb, force_ro); + ext4_handle_error(sb, force_ro, error, 0, block, function, line); } void __ext4_error_inode(struct inode *inode, const char *function, @@ -741,9 +781,8 @@ void __ext4_error_inode(struct inode *inode, const char *function, current->comm, &vaf); va_end(args); } - save_error_info(inode->i_sb, error, inode->i_ino, block, - function, line); - ext4_handle_error(inode->i_sb, false); + ext4_handle_error(inode->i_sb, false, error, inode->i_ino, block, + function, line); } void __ext4_error_file(struct file *file, const char *function, @@ -780,9 +819,8 @@ void __ext4_error_file(struct file *file, const char *function, current->comm, path, &vaf); va_end(args); } - save_error_info(inode->i_sb, EFSCORRUPTED, inode->i_ino, block, - function, line); - ext4_handle_error(inode->i_sb, false); + ext4_handle_error(inode->i_sb, false, EFSCORRUPTED, inode->i_ino, block, + function, line); } const char *ext4_decode_error(struct super_block *sb, int errno, @@ -849,8 +887,7 @@ void __ext4_std_error(struct super_block *sb, const char *function, sb->s_id, function, line, errstr); } - save_error_info(sb, -errno, 0, 0, function, line); - ext4_handle_error(sb, false); + ext4_handle_error(sb, false, -errno, 0, 0, function, line); } void __ext4_msg(struct super_block *sb, @@ -944,13 +981,16 @@ __acquires(bitlock) if (test_opt(sb, ERRORS_CONT)) { if (test_opt(sb, WARN_ON_ERROR)) WARN_ON_ONCE(1); - __save_error_info(sb, EFSCORRUPTED, ino, block, function, line); - schedule_work(&EXT4_SB(sb)->s_error_work); + EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS; + if (!bdev_read_only(sb->s_bdev)) { + save_error_info(sb, EFSCORRUPTED, ino, block, function, + line); + schedule_work(&EXT4_SB(sb)->s_error_work); + } return; } ext4_unlock_group(sb, grp); - save_error_info(sb, EFSCORRUPTED, ino, block, function, line); - ext4_handle_error(sb, false); + ext4_handle_error(sb, false, EFSCORRUPTED, ino, block, function, line); /* * We only get here in the ERRORS_RO case; relocking the group * may be dangerous, but nothing bad will happen since the @@ -1152,7 +1192,7 @@ static void ext4_put_super(struct super_block *sb) es->s_state = cpu_to_le16(sbi->s_mount_state); } if (!sb_rdonly(sb)) - ext4_commit_super(sb, 1); + ext4_commit_super(sb); rcu_read_lock(); group_desc = rcu_dereference(sbi->s_group_desc); @@ -2642,7 +2682,7 @@ static int ext4_setup_super(struct super_block *sb, struct ext4_super_block *es, if (sbi->s_journal) ext4_set_feature_journal_needs_recovery(sb); - err = ext4_commit_super(sb, 1); + err = ext4_commit_super(sb); done: if (test_opt(sb, DEBUG)) printk(KERN_INFO "[EXT4 FS bs=%lu, gc=%u, " @@ -4868,7 +4908,7 @@ no_journal: if (DUMMY_ENCRYPTION_ENABLED(sbi) && !sb_rdonly(sb) && !ext4_has_feature_encrypt(sb)) { ext4_set_feature_encrypt(sb); - ext4_commit_super(sb, 1); + ext4_commit_super(sb); } /* @@ -5418,7 +5458,7 @@ static int ext4_load_journal(struct super_block *sb, es->s_journal_dev = cpu_to_le32(journal_devnum); /* Make sure we flush the recovery flag to disk. 
*/ - ext4_commit_super(sb, 1); + ext4_commit_super(sb); } return 0; @@ -5428,16 +5468,14 @@ err_out: return err; } -static int ext4_commit_super(struct super_block *sb, int sync) +/* Copy state of EXT4_SB(sb) into buffer for on-disk superblock */ +static void ext4_update_super(struct super_block *sb) { struct ext4_sb_info *sbi = EXT4_SB(sb); - struct ext4_super_block *es = EXT4_SB(sb)->s_es; - struct buffer_head *sbh = EXT4_SB(sb)->s_sbh; - int error = 0; - - if (!sbh || block_device_ejected(sb)) - return error; + struct ext4_super_block *es = sbi->s_es; + struct buffer_head *sbh = sbi->s_sbh; + lock_buffer(sbh); /* * If the file system is mounted read-only, don't update the * superblock write time. This avoids updating the superblock @@ -5451,17 +5489,17 @@ static int ext4_commit_super(struct super_block *sb, int sync) if (!(sb->s_flags & SB_RDONLY)) ext4_update_tstamp(es, s_wtime); es->s_kbytes_written = - cpu_to_le64(EXT4_SB(sb)->s_kbytes_written + + cpu_to_le64(sbi->s_kbytes_written + ((part_stat_read(sb->s_bdev, sectors[STAT_WRITE]) - - EXT4_SB(sb)->s_sectors_written_start) >> 1)); - if (percpu_counter_initialized(&EXT4_SB(sb)->s_freeclusters_counter)) + sbi->s_sectors_written_start) >> 1)); + if (percpu_counter_initialized(&sbi->s_freeclusters_counter)) ext4_free_blocks_count_set(es, - EXT4_C2B(EXT4_SB(sb), percpu_counter_sum_positive( - &EXT4_SB(sb)->s_freeclusters_counter))); - if (percpu_counter_initialized(&EXT4_SB(sb)->s_freeinodes_counter)) + EXT4_C2B(sbi, percpu_counter_sum_positive( + &sbi->s_freeclusters_counter))); + if (percpu_counter_initialized(&sbi->s_freeinodes_counter)) es->s_free_inodes_count = cpu_to_le32(percpu_counter_sum_positive( - &EXT4_SB(sb)->s_freeinodes_counter)); + &sbi->s_freeinodes_counter)); /* Copy error information to the on-disk superblock */ spin_lock(&sbi->s_error_lock); if (sbi->s_add_error_count > 0) { @@ -5502,10 +5540,20 @@ static int ext4_commit_super(struct super_block *sb, int sync) } spin_unlock(&sbi->s_error_lock); - BUFFER_TRACE(sbh, "marking dirty"); ext4_superblock_csum_set(sb); - if (sync) - lock_buffer(sbh); + unlock_buffer(sbh); +} + +static int ext4_commit_super(struct super_block *sb) +{ + struct buffer_head *sbh = EXT4_SB(sb)->s_sbh; + int error = 0; + + if (!sbh || block_device_ejected(sb)) + return error; + + ext4_update_super(sb); + if (buffer_write_io_error(sbh) || !buffer_uptodate(sbh)) { /* * Oh, dear. A previous attempt to write the @@ -5520,17 +5568,15 @@ static int ext4_commit_super(struct super_block *sb, int sync) clear_buffer_write_io_error(sbh); set_buffer_uptodate(sbh); } + BUFFER_TRACE(sbh, "marking dirty"); mark_buffer_dirty(sbh); - if (sync) { - unlock_buffer(sbh); - error = __sync_dirty_buffer(sbh, - REQ_SYNC | (test_opt(sb, BARRIER) ? REQ_FUA : 0)); - if (buffer_write_io_error(sbh)) { - ext4_msg(sb, KERN_ERR, "I/O error while writing " - "superblock"); - clear_buffer_write_io_error(sbh); - set_buffer_uptodate(sbh); - } + error = __sync_dirty_buffer(sbh, + REQ_SYNC | (test_opt(sb, BARRIER) ? 
REQ_FUA : 0)); + if (buffer_write_io_error(sbh)) { + ext4_msg(sb, KERN_ERR, "I/O error while writing " + "superblock"); + clear_buffer_write_io_error(sbh); + set_buffer_uptodate(sbh); } return error; } @@ -5561,7 +5607,7 @@ static int ext4_mark_recovery_complete(struct super_block *sb, if (ext4_has_feature_journal_needs_recovery(sb) && sb_rdonly(sb)) { ext4_clear_feature_journal_needs_recovery(sb); - ext4_commit_super(sb, 1); + ext4_commit_super(sb); } out: jbd2_journal_unlock_updates(journal); @@ -5603,7 +5649,7 @@ static int ext4_clear_journal_err(struct super_block *sb, EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS; es->s_state |= cpu_to_le16(EXT4_ERROR_FS); - ext4_commit_super(sb, 1); + ext4_commit_super(sb); jbd2_journal_clear_err(journal); jbd2_journal_update_sb_errno(journal); @@ -5705,7 +5751,7 @@ static int ext4_freeze(struct super_block *sb) ext4_clear_feature_journal_needs_recovery(sb); } - error = ext4_commit_super(sb, 1); + error = ext4_commit_super(sb); out: if (journal) /* we rely on upper layer to stop further updates */ @@ -5727,7 +5773,7 @@ static int ext4_unfreeze(struct super_block *sb) ext4_set_feature_journal_needs_recovery(sb); } - ext4_commit_super(sb, 1); + ext4_commit_super(sb); return 0; } @@ -5987,7 +6033,7 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data) } if (sbi->s_journal == NULL && !(old_sb_flags & SB_RDONLY)) { - err = ext4_commit_super(sb, 1); + err = ext4_commit_super(sb); if (err) goto restore_opts; } diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c index 4e3b1f8c2e81..372208500f4e 100644 --- a/fs/ext4/xattr.c +++ b/fs/ext4/xattr.c @@ -792,8 +792,11 @@ static void ext4_xattr_update_super_block(handle_t *handle, BUFFER_TRACE(EXT4_SB(sb)->s_sbh, "get_write_access"); if (ext4_journal_get_write_access(handle, EXT4_SB(sb)->s_sbh) == 0) { + lock_buffer(EXT4_SB(sb)->s_sbh); ext4_set_feature_xattr(sb); - ext4_handle_dirty_super(handle, sb); + ext4_superblock_csum_set(sb); + unlock_buffer(EXT4_SB(sb)->s_sbh); + ext4_handle_dirty_metadata(handle, NULL, EXT4_SB(sb)->s_sbh); } } diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index acfb55834af2..c41cb887eb7d 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -1474,21 +1474,25 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc) } /* - * Some filesystems may redirty the inode during the writeback - * due to delalloc, clear dirty metadata flags right before - * write_inode() + * If the inode has dirty timestamps and we need to write them, call + * mark_inode_dirty_sync() to notify the filesystem about it and to + * change I_DIRTY_TIME into I_DIRTY_SYNC. 
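+ * Doing this before sampling i_state below means the resulting + * I_DIRTY_SYNC bit is picked up by this same writeback pass.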
*/ - spin_lock(&inode->i_lock); - - dirty = inode->i_state & I_DIRTY; if ((inode->i_state & I_DIRTY_TIME) && - ((dirty & I_DIRTY_INODE) || - wbc->sync_mode == WB_SYNC_ALL || wbc->for_sync || + (wbc->sync_mode == WB_SYNC_ALL || wbc->for_sync || time_after(jiffies, inode->dirtied_time_when + dirtytime_expire_interval * HZ))) { - dirty |= I_DIRTY_TIME; trace_writeback_lazytime(inode); + mark_inode_dirty_sync(inode); } + + /* + * Some filesystems may redirty the inode during the writeback + * due to delalloc, clear dirty metadata flags right before + * write_inode() + */ + spin_lock(&inode->i_lock); + dirty = inode->i_state & I_DIRTY; inode->i_state &= ~dirty; /* @@ -1509,8 +1513,6 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc) spin_unlock(&inode->i_lock); - if (dirty & I_DIRTY_TIME) - mark_inode_dirty_sync(inode); /* Don't write the inode if only I_DIRTY_PAGES was set */ if (dirty & ~I_DIRTY_PAGES) { int err = write_inode(inode, wbc); diff --git a/fs/io_uring.c b/fs/io_uring.c index ca46f314640b..c07913ec0cca 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -262,6 +262,7 @@ struct io_ring_ctx { unsigned int drain_next: 1; unsigned int eventfd_async: 1; unsigned int restricted: 1; + unsigned int sqo_dead: 1; /* * Ring buffer of indices into array of io_uring_sqe, which is @@ -353,6 +354,7 @@ struct io_ring_ctx { unsigned cq_entries; unsigned cq_mask; atomic_t cq_timeouts; + unsigned cq_last_tm_flush; unsigned long cq_check_overflow; struct wait_queue_head cq_wait; struct fasync_struct *cq_fasync; @@ -992,6 +994,9 @@ enum io_mem_account { ACCT_PINNED, }; +static void __io_uring_cancel_task_requests(struct io_ring_ctx *ctx, + struct task_struct *task); + static void destroy_fixed_file_ref_node(struct fixed_file_ref_node *ref_node); static struct fixed_file_ref_node *alloc_fixed_file_ref_node( struct io_ring_ctx *ctx); @@ -1020,6 +1025,7 @@ static ssize_t io_import_iovec(int rw, struct io_kiocb *req, static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec, const struct iovec *fast_iov, struct iov_iter *iter, bool force); +static void io_req_drop_files(struct io_kiocb *req); static struct kmem_cache *req_cachep; @@ -1043,8 +1049,7 @@ EXPORT_SYMBOL(io_uring_get_socket); static inline void io_clean_op(struct io_kiocb *req) { - if (req->flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED | - REQ_F_INFLIGHT)) + if (req->flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED)) __io_clean_op(req); } @@ -1070,8 +1075,11 @@ static bool io_match_task(struct io_kiocb *head, return true; io_for_each_link(req, head) { - if ((req->flags & REQ_F_WORK_INITIALIZED) && - (req->work.flags & IO_WQ_WORK_FILES) && + if (!(req->flags & REQ_F_WORK_INITIALIZED)) + continue; + if (req->file && req->file->f_op == &io_uring_fops) + return true; + if ((req->work.flags & IO_WQ_WORK_FILES) && req->work.identity->files == files) return true; } @@ -1102,6 +1110,9 @@ static void io_sq_thread_drop_mm_files(void) static int __io_sq_thread_acquire_files(struct io_ring_ctx *ctx) { + if (current->flags & PF_EXITING) + return -EFAULT; + if (!current->files) { struct files_struct *files; struct nsproxy *nsproxy; @@ -1129,6 +1140,8 @@ static int __io_sq_thread_acquire_mm(struct io_ring_ctx *ctx) { struct mm_struct *mm; + if (current->flags & PF_EXITING) + return -EFAULT; if (current->mm) return 0; @@ -1342,11 +1355,6 @@ static void __io_commit_cqring(struct io_ring_ctx *ctx) /* order cqe stores with ring update */ smp_store_release(&rings->cq.tail, ctx->cached_cq_tail); - - if 
(wq_has_sleeper(&ctx->cq_wait)) { - wake_up_interruptible(&ctx->cq_wait); - kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN); - } } static void io_put_identity(struct io_uring_task *tctx, struct io_kiocb *req) @@ -1389,6 +1397,8 @@ static void io_req_clean_work(struct io_kiocb *req) free_fs_struct(fs); req->work.flags &= ~IO_WQ_WORK_FS; } + if (req->flags & REQ_F_INFLIGHT) + io_req_drop_files(req); io_put_identity(req->task->io_uring, req); } @@ -1498,11 +1508,14 @@ static bool io_grab_identity(struct io_kiocb *req) return false; atomic_inc(&id->files->count); get_nsproxy(id->nsproxy); - req->flags |= REQ_F_INFLIGHT; - spin_lock_irq(&ctx->inflight_lock); - list_add(&req->inflight_entry, &ctx->inflight_list); - spin_unlock_irq(&ctx->inflight_lock); + if (!(req->flags & REQ_F_INFLIGHT)) { + req->flags |= REQ_F_INFLIGHT; + + spin_lock_irq(&ctx->inflight_lock); + list_add(&req->inflight_entry, &ctx->inflight_list); + spin_unlock_irq(&ctx->inflight_lock); + } req->work.flags |= IO_WQ_WORK_FILES; } if (!(req->work.flags & IO_WQ_WORK_MM) && @@ -1520,10 +1533,8 @@ static void io_prep_async_work(struct io_kiocb *req) { const struct io_op_def *def = &io_op_defs[req->opcode]; struct io_ring_ctx *ctx = req->ctx; - struct io_identity *id; io_req_init_async(req); - id = req->work.identity; if (req->flags & REQ_F_FORCE_ASYNC) req->work.flags |= IO_WQ_WORK_CONCURRENT; @@ -1637,19 +1648,38 @@ static void __io_queue_deferred(struct io_ring_ctx *ctx) static void io_flush_timeouts(struct io_ring_ctx *ctx) { - while (!list_empty(&ctx->timeout_list)) { + u32 seq; + + if (list_empty(&ctx->timeout_list)) + return; + + seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts); + + do { + u32 events_needed, events_got; struct io_kiocb *req = list_first_entry(&ctx->timeout_list, struct io_kiocb, timeout.list); if (io_is_timeout_noseq(req)) break; - if (req->timeout.target_seq != ctx->cached_cq_tail - - atomic_read(&ctx->cq_timeouts)) + + /* + * Since seq can easily wrap around over time, subtract + * the last seq at which timeouts were flushed before comparing. + * Assuming not more than 2^31-1 events have happened since, + * these subtractions won't have wrapped, so we can check if + * target is in [last_seq, current_seq] by comparing the two. 
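+ * + * A worked example with hypothetical values: if cq_last_tm_flush is + * 0xfffffff0 and target_seq is 0x00000005, then after seq wraps to + * 0x00000008 we get events_needed = 0x15 and events_got = 0x18, so + * events_got >= events_needed and the timeout still fires despite + * the wraparound.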
+ */ + events_needed = req->timeout.target_seq - ctx->cq_last_tm_flush; + events_got = seq - ctx->cq_last_tm_flush; + if (events_got < events_needed) break; list_del_init(&req->timeout.list); io_kill_timeout(req); - } + } while (!list_empty(&ctx->timeout_list)); + + ctx->cq_last_tm_flush = seq; } static void io_commit_cqring(struct io_ring_ctx *ctx) @@ -1704,18 +1734,42 @@ static inline unsigned __io_cqring_events(struct io_ring_ctx *ctx) static void io_cqring_ev_posted(struct io_ring_ctx *ctx) { + /* see waitqueue_active() comment */ + smp_mb(); + if (waitqueue_active(&ctx->wait)) wake_up(&ctx->wait); if (ctx->sq_data && waitqueue_active(&ctx->sq_data->wait)) wake_up(&ctx->sq_data->wait); if (io_should_trigger_evfd(ctx)) eventfd_signal(ctx->cq_ev_fd, 1); + if (waitqueue_active(&ctx->cq_wait)) { + wake_up_interruptible(&ctx->cq_wait); + kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN); + } +} + +static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx) +{ + /* see waitqueue_active() comment */ + smp_mb(); + + if (ctx->flags & IORING_SETUP_SQPOLL) { + if (waitqueue_active(&ctx->wait)) + wake_up(&ctx->wait); + } + if (io_should_trigger_evfd(ctx)) + eventfd_signal(ctx->cq_ev_fd, 1); + if (waitqueue_active(&ctx->cq_wait)) { + wake_up_interruptible(&ctx->cq_wait); + kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN); + } } /* Returns true if there are no backlogged entries after the flush */ -static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force, - struct task_struct *tsk, - struct files_struct *files) +static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force, + struct task_struct *tsk, + struct files_struct *files) { struct io_rings *rings = ctx->rings; struct io_kiocb *req, *tmp; @@ -1768,6 +1822,20 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force, return all_flushed; } +static void io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force, + struct task_struct *tsk, + struct files_struct *files) +{ + if (test_bit(0, &ctx->cq_check_overflow)) { + /* iopoll syncs against uring_lock, not completion_lock */ + if (ctx->flags & IORING_SETUP_IOPOLL) + mutex_lock(&ctx->uring_lock); + __io_cqring_overflow_flush(ctx, force, tsk, files); + if (ctx->flags & IORING_SETUP_IOPOLL) + mutex_unlock(&ctx->uring_lock); + } +} + static void __io_cqring_fill_event(struct io_kiocb *req, long res, long cflags) { struct io_ring_ctx *ctx = req->ctx; @@ -2127,14 +2195,14 @@ static void __io_req_task_submit(struct io_kiocb *req) { struct io_ring_ctx *ctx = req->ctx; - if (!__io_sq_thread_acquire_mm(ctx) && - !__io_sq_thread_acquire_files(ctx)) { - mutex_lock(&ctx->uring_lock); + mutex_lock(&ctx->uring_lock); + if (!ctx->sqo_dead && + !__io_sq_thread_acquire_mm(ctx) && + !__io_sq_thread_acquire_files(ctx)) __io_queue_sqe(req, NULL); - mutex_unlock(&ctx->uring_lock); - } else { + else __io_req_task_cancel(req, -EFAULT); - } + mutex_unlock(&ctx->uring_lock); } static void io_req_task_submit(struct callback_head *cb) @@ -2210,6 +2278,8 @@ static void io_req_free_batch_finish(struct io_ring_ctx *ctx, struct io_uring_task *tctx = rb->task->io_uring; percpu_counter_sub(&tctx->inflight, rb->task_refs); + if (atomic_read(&tctx->in_idle)) + wake_up(&tctx->wait); put_task_struct_many(rb->task, rb->task_refs); rb->task = NULL; } @@ -2228,6 +2298,8 @@ static void io_req_free_batch(struct req_batch *rb, struct io_kiocb *req) struct io_uring_task *tctx = rb->task->io_uring; percpu_counter_sub(&tctx->inflight, rb->task_refs); + if (atomic_read(&tctx->in_idle)) + 
wake_up(&tctx->wait); put_task_struct_many(rb->task, rb->task_refs); } rb->task = req->task; @@ -2313,20 +2385,8 @@ static void io_double_put_req(struct io_kiocb *req) io_free_req(req); } -static unsigned io_cqring_events(struct io_ring_ctx *ctx, bool noflush) +static unsigned io_cqring_events(struct io_ring_ctx *ctx) { - if (test_bit(0, &ctx->cq_check_overflow)) { - /* - * noflush == true is from the waitqueue handler, just ensure - * we wake up the task, and the next invocation will flush the - * entries. We cannot safely to it from here. - */ - if (noflush) - return -1U; - - io_cqring_overflow_flush(ctx, false, NULL, NULL); - } - /* See comment at the top of this file */ smp_rmb(); return __io_cqring_events(ctx); @@ -2424,8 +2484,7 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events, } io_commit_cqring(ctx); - if (ctx->flags & IORING_SETUP_SQPOLL) - io_cqring_ev_posted(ctx); + io_cqring_ev_posted_iopoll(ctx); io_req_free_batch_finish(ctx, &rb); if (!list_empty(&again)) @@ -2551,7 +2610,9 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min) * If we do, we can potentially be spinning for commands that * already triggered a CQE (eg in error). */ - if (io_cqring_events(ctx, false)) + if (test_bit(0, &ctx->cq_check_overflow)) + __io_cqring_overflow_flush(ctx, false, NULL, NULL); + if (io_cqring_events(ctx)) break; /* @@ -2668,6 +2729,8 @@ static bool io_rw_reissue(struct io_kiocb *req, long res) if ((res != -EAGAIN && res != -EOPNOTSUPP) || io_wq_current_is_worker()) return false; + lockdep_assert_held(&req->ctx->uring_lock); + ret = io_sq_thread_acquire_mm_files(req->ctx, req); if (io_resubmit_prep(req, ret)) { @@ -3497,7 +3560,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock, /* read it all, or we did blocking attempt. no retry. */ if (!iov_iter_count(iter) || !force_nonblock || - (req->file->f_flags & O_NONBLOCK)) + (req->file->f_flags & O_NONBLOCK) || !(req->flags & REQ_F_ISREG)) goto done; io_size -= ret; @@ -4417,7 +4480,6 @@ static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) * io_wq_work.flags, so initialize io_wq_work firstly. */ io_req_init_async(req); - req->work.flags |= IO_WQ_WORK_NO_CANCEL; if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL)) return -EINVAL; @@ -4450,6 +4512,8 @@ static int io_close(struct io_kiocb *req, bool force_nonblock, /* if the file has a flush method, be safe and punt to async */ if (close->put_file->f_op->flush && force_nonblock) { + /* not safe to cancel at this point */ + req->work.flags |= IO_WQ_WORK_NO_CANCEL; /* was never set, but play safe */ req->flags &= ~REQ_F_NOWAIT; /* avoid grabbing files - we don't need the files */ @@ -5806,6 +5870,12 @@ static int io_timeout(struct io_kiocb *req) tail = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts); req->timeout.target_seq = tail + off; + /* Update the last seq here in case io_flush_timeouts() hasn't. + * This is safe because ->completion_lock is held, and submissions + * and completions are never mixed in the same ->completion_lock section. + */ + ctx->cq_last_tm_flush = tail; + /* * Insertion sort, ensuring the first entry in the list is always * the one we need first. 
@@ -6100,8 +6170,10 @@ static void io_req_drop_files(struct io_kiocb *req) struct io_uring_task *tctx = req->task->io_uring; unsigned long flags; - put_files_struct(req->work.identity->files); - put_nsproxy(req->work.identity->nsproxy); + if (req->work.flags & IO_WQ_WORK_FILES) { + put_files_struct(req->work.identity->files); + put_nsproxy(req->work.identity->nsproxy); + } spin_lock_irqsave(&ctx->inflight_lock, flags); list_del(&req->inflight_entry); spin_unlock_irqrestore(&ctx->inflight_lock, flags); @@ -6168,9 +6240,6 @@ static void __io_clean_op(struct io_kiocb *req) } req->flags &= ~REQ_F_NEED_CLEANUP; } - - if (req->flags & REQ_F_INFLIGHT) - io_req_drop_files(req); } static int io_issue_sqe(struct io_kiocb *req, bool force_nonblock, @@ -6389,6 +6458,15 @@ static struct file *io_file_get(struct io_submit_state *state, file = __io_file_get(state, fd); } + if (file && file->f_op == &io_uring_fops) { + io_req_init_async(req); + req->flags |= REQ_F_INFLIGHT; + + spin_lock_irq(&ctx->inflight_lock); + list_add(&req->inflight_entry, &ctx->inflight_list); + spin_unlock_irq(&ctx->inflight_lock); + } + return file; } @@ -6826,7 +6904,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr) /* if we have a backlog and couldn't flush it all, return BUSY */ if (test_bit(0, &ctx->sq_check_overflow)) { - if (!io_cqring_overflow_flush(ctx, false, NULL, NULL)) + if (!__io_cqring_overflow_flush(ctx, false, NULL, NULL)) return -EBUSY; } @@ -6928,7 +7006,8 @@ static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries) if (!list_empty(&ctx->iopoll_list)) io_do_iopoll(ctx, &nr_events, 0); - if (to_submit && likely(!percpu_ref_is_dying(&ctx->refs))) + if (to_submit && !ctx->sqo_dead && + likely(!percpu_ref_is_dying(&ctx->refs))) ret = io_submit_sqes(ctx, to_submit); mutex_unlock(&ctx->uring_lock); } @@ -7029,6 +7108,7 @@ static int io_sq_thread(void *data) if (sqt_spin || !time_after(jiffies, timeout)) { io_run_task_work(); + io_sq_thread_drop_mm_files(); cond_resched(); if (sqt_spin) timeout = jiffies + sqd->sq_thread_idle; @@ -7066,6 +7146,7 @@ static int io_sq_thread(void *data) } io_run_task_work(); + io_sq_thread_drop_mm_files(); if (cur_css) io_sq_thread_unassociate_blkcg(); @@ -7089,7 +7170,7 @@ struct io_wait_queue { unsigned nr_timeouts; }; -static inline bool io_should_wake(struct io_wait_queue *iowq, bool noflush) +static inline bool io_should_wake(struct io_wait_queue *iowq) { struct io_ring_ctx *ctx = iowq->ctx; @@ -7098,7 +7179,7 @@ static inline bool io_should_wake(struct io_wait_queue *iowq, bool noflush) * started waiting. For timeouts, we always want to return to userspace, * regardless of event count. */ - return io_cqring_events(ctx, noflush) >= iowq->to_wait || + return io_cqring_events(ctx) >= iowq->to_wait || atomic_read(&ctx->cq_timeouts) != iowq->nr_timeouts; } @@ -7108,11 +7189,13 @@ static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode, struct io_wait_queue *iowq = container_of(curr, struct io_wait_queue, wq); - /* use noflush == true, as we can't safely rely on locking context */ - if (!io_should_wake(iowq, true)) - return -1; - - return autoremove_wake_function(curr, mode, wake_flags, key); + /* + * Cannot safely flush overflowed CQEs from here, ensure we wake up + * the task, and the next invocation will do it. 
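+ * (This is a wait-queue callback, so no locking context can be + * relied upon here for taking ->completion_lock or ->uring_lock.)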
+ */ + if (io_should_wake(iowq) || test_bit(0, &iowq->ctx->cq_check_overflow)) + return autoremove_wake_function(curr, mode, wake_flags, key); + return -1; } static int io_run_task_work_sig(void) @@ -7149,7 +7232,8 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events, int ret = 0; do { - if (io_cqring_events(ctx, false) >= min_events) + io_cqring_overflow_flush(ctx, false, NULL, NULL); + if (io_cqring_events(ctx) >= min_events) return 0; if (!io_run_task_work()) break; @@ -7177,6 +7261,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events, iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts); trace_io_uring_cqring_wait(ctx, min_events); do { + io_cqring_overflow_flush(ctx, false, NULL, NULL); prepare_to_wait_exclusive(&ctx->wait, &iowq.wq, TASK_INTERRUPTIBLE); /* make sure we run task_work before checking for signals */ @@ -7185,8 +7270,10 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events, continue; else if (ret < 0) break; - if (io_should_wake(&iowq, false)) + if (io_should_wake(&iowq)) break; + if (test_bit(0, &ctx->cq_check_overflow)) + continue; if (uts) { timeout = schedule_timeout(timeout); if (timeout == 0) { @@ -7684,12 +7771,12 @@ static struct fixed_file_ref_node *alloc_fixed_file_ref_node( ref_node = kzalloc(sizeof(*ref_node), GFP_KERNEL); if (!ref_node) - return ERR_PTR(-ENOMEM); + return NULL; if (percpu_ref_init(&ref_node->refs, io_file_data_ref_zero, 0, GFP_KERNEL)) { kfree(ref_node); - return ERR_PTR(-ENOMEM); + return NULL; } INIT_LIST_HEAD(&ref_node->node); INIT_LIST_HEAD(&ref_node->file_list); @@ -7783,9 +7870,9 @@ static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg, } ref_node = alloc_fixed_file_ref_node(ctx); - if (IS_ERR(ref_node)) { + if (!ref_node) { io_sqe_files_unregister(ctx); - return PTR_ERR(ref_node); + return -ENOMEM; } io_sqe_files_set_node(file_data, ref_node); @@ -7885,8 +7972,8 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx, return -EINVAL; ref_node = alloc_fixed_file_ref_node(ctx); - if (IS_ERR(ref_node)) - return PTR_ERR(ref_node); + if (!ref_node) + return -ENOMEM; done = 0; fds = u64_to_user_ptr(up->fds); @@ -8624,7 +8711,8 @@ static __poll_t io_uring_poll(struct file *file, poll_table *wait) smp_rmb(); if (!io_sqring_full(ctx)) mask |= EPOLLOUT | EPOLLWRNORM; - if (io_cqring_events(ctx, false)) + io_cqring_overflow_flush(ctx, false, NULL, NULL); + if (io_cqring_events(ctx)) mask |= EPOLLIN | EPOLLRDNORM; return mask; @@ -8663,7 +8751,7 @@ static void io_ring_exit_work(struct work_struct *work) * as nobody else will be looking for them. */ do { - io_iopoll_try_reap_events(ctx); + __io_uring_cancel_task_requests(ctx, NULL); } while (!wait_for_completion_timeout(&ctx->ref_comp, HZ/20)); io_ring_ctx_free(ctx); } @@ -8679,10 +8767,14 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx) { mutex_lock(&ctx->uring_lock); percpu_ref_kill(&ctx->refs); + + if (WARN_ON_ONCE((ctx->flags & IORING_SETUP_SQPOLL) && !ctx->sqo_dead)) + ctx->sqo_dead = 1; + /* if force is set, the ring is going away. 
always drop after that */ ctx->cq_overflow_flushed = 1; if (ctx->rings) - io_cqring_overflow_flush(ctx, true, NULL, NULL); + __io_cqring_overflow_flush(ctx, true, NULL, NULL); mutex_unlock(&ctx->uring_lock); io_kill_timeouts(ctx, NULL, NULL); @@ -8785,8 +8877,7 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx, spin_lock_irq(&ctx->inflight_lock); list_for_each_entry(req, &ctx->inflight_list, inflight_entry) { - if (req->task != task || - req->work.identity->files != files) + if (!io_match_task(req, task, files)) continue; found = true; break; @@ -8803,6 +8894,7 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx, io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, &cancel, true); io_poll_remove_all(ctx, task, files); io_kill_timeouts(ctx, task, files); + io_cqring_overflow_flush(ctx, true, task, files); /* cancellations _may_ trigger task work */ io_run_task_work(); schedule(); @@ -8818,9 +8910,11 @@ static void __io_uring_cancel_task_requests(struct io_ring_ctx *ctx, enum io_wq_cancel cret; bool ret = false; - cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, &cancel, true); - if (cret != IO_WQ_CANCEL_NOTFOUND) - ret = true; + if (ctx->io_wq) { + cret = io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, + &cancel, true); + ret |= (cret != IO_WQ_CANCEL_NOTFOUND); + } /* SQPOLL thread does its own polling */ if (!(ctx->flags & IORING_SETUP_SQPOLL)) { @@ -8839,6 +8933,17 @@ static void __io_uring_cancel_task_requests(struct io_ring_ctx *ctx, } } +static void io_disable_sqo_submit(struct io_ring_ctx *ctx) +{ + mutex_lock(&ctx->uring_lock); + ctx->sqo_dead = 1; + mutex_unlock(&ctx->uring_lock); + + /* make sure callers enter the ring to get error */ + if (ctx->rings) + io_ring_set_wakeup_flag(ctx); +} + /* * We need to iteratively cancel requests, in case a request has dependent * hard links. These persist even for failure of cancelations, hence keep @@ -8850,15 +8955,16 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx, struct task_struct *task = current; if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) { + /* for SQPOLL only sqo_task has task notes */ + WARN_ON_ONCE(ctx->sqo_task != current); + io_disable_sqo_submit(ctx); task = ctx->sq_data->thread; atomic_inc(&task->io_uring->in_idle); io_sq_thread_park(ctx->sq_data); } io_cancel_defer_files(ctx, task, files); - io_ring_submit_lock(ctx, (ctx->flags & IORING_SETUP_IOPOLL)); io_cqring_overflow_flush(ctx, true, task, files); - io_ring_submit_unlock(ctx, (ctx->flags & IORING_SETUP_IOPOLL)); if (!files) __io_uring_cancel_task_requests(ctx, task); @@ -8931,20 +9037,12 @@ static void io_uring_del_task_file(struct file *file) fput(file); } -/* - * Drop task note for this file if we're the only ones that hold it after - * pending fput() - */ -static void io_uring_attempt_task_drop(struct file *file) +static void io_uring_remove_task_files(struct io_uring_task *tctx) { - if (!current->io_uring) - return; - /* - * fput() is pending, will be 2 if the only other ref is our potential - * task file note. If the task is exiting, drop regardless of count. 
- */ - if (fatal_signal_pending(current) || (current->flags & PF_EXITING) || - atomic_long_read(&file->f_count) == 2) + struct file *file; + unsigned long index; + + xa_for_each(&tctx->xa, index, file) io_uring_del_task_file(file); } @@ -8956,16 +9054,12 @@ void __io_uring_files_cancel(struct files_struct *files) /* make sure overflow events are dropped */ atomic_inc(&tctx->in_idle); - - xa_for_each(&tctx->xa, index, file) { - struct io_ring_ctx *ctx = file->private_data; - - io_uring_cancel_task_requests(ctx, files); - if (files) - io_uring_del_task_file(file); - } - + xa_for_each(&tctx->xa, index, file) + io_uring_cancel_task_requests(file->private_data, files); atomic_dec(&tctx->in_idle); + + if (files) + io_uring_remove_task_files(tctx); } static s64 tctx_inflight(struct io_uring_task *tctx) @@ -9008,6 +9102,10 @@ void __io_uring_task_cancel(void) /* make sure overflow events are dropped */ atomic_inc(&tctx->in_idle); + /* trigger io_disable_sqo_submit() */ + if (tctx->sqpoll) + __io_uring_files_cancel(NULL); + do { /* read completions before cancelations */ inflight = tctx_inflight(tctx); @@ -9027,12 +9125,44 @@ void __io_uring_task_cancel(void) finish_wait(&tctx->wait, &wait); } while (1); + finish_wait(&tctx->wait, &wait); atomic_dec(&tctx->in_idle); + + io_uring_remove_task_files(tctx); } static int io_uring_flush(struct file *file, void *data) { - io_uring_attempt_task_drop(file); + struct io_uring_task *tctx = current->io_uring; + struct io_ring_ctx *ctx = file->private_data; + + if (!tctx) + return 0; + + /* we should have cancelled and erased it before PF_EXITING */ + WARN_ON_ONCE((current->flags & PF_EXITING) && + xa_load(&tctx->xa, (unsigned long)file)); + + /* + * fput() is pending, will be 2 if the only other ref is our potential + * task file note. If the task is exiting, drop regardless of count. 
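+ * Any other count means either that more users still hold the file or + * that no task file note exists, so there is nothing to drop here.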
+ */ + if (atomic_long_read(&file->f_count) != 2) + return 0; + + if (ctx->flags & IORING_SETUP_SQPOLL) { + /* there is only one file note, which is owned by sqo_task */ + WARN_ON_ONCE(ctx->sqo_task != current && + xa_load(&tctx->xa, (unsigned long)file)); + /* sqo_dead check is for when this happens after cancellation */ + WARN_ON_ONCE(ctx->sqo_task == current && !ctx->sqo_dead && + !xa_load(&tctx->xa, (unsigned long)file)); + + io_disable_sqo_submit(ctx); + } + + if (!(ctx->flags & IORING_SETUP_SQPOLL) || ctx->sqo_task == current) + io_uring_del_task_file(file); return 0; } @@ -9106,8 +9236,9 @@ static unsigned long io_uring_nommu_get_unmapped_area(struct file *file, #endif /* !CONFIG_MMU */ -static void io_sqpoll_wait_sq(struct io_ring_ctx *ctx) +static int io_sqpoll_wait_sq(struct io_ring_ctx *ctx) { + int ret = 0; DEFINE_WAIT(wait); do { @@ -9116,6 +9247,11 @@ static void io_sqpoll_wait_sq(struct io_ring_ctx *ctx) prepare_to_wait(&ctx->sqo_sq_wait, &wait, TASK_INTERRUPTIBLE); + if (unlikely(ctx->sqo_dead)) { + ret = -EOWNERDEAD; + goto out; + } + if (!io_sqring_full(ctx)) break; @@ -9123,6 +9259,8 @@ static void io_sqpoll_wait_sq(struct io_ring_ctx *ctx) } while (!signal_pending(current)); finish_wait(&ctx->sqo_sq_wait, &wait); +out: + return ret; } static int io_get_ext_arg(unsigned flags, const void __user *argp, size_t *argsz, @@ -9194,17 +9332,18 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit, */ ret = 0; if (ctx->flags & IORING_SETUP_SQPOLL) { - if (!list_empty_careful(&ctx->cq_overflow_list)) { - bool needs_lock = ctx->flags & IORING_SETUP_IOPOLL; + io_cqring_overflow_flush(ctx, false, NULL, NULL); - io_ring_submit_lock(ctx, needs_lock); - io_cqring_overflow_flush(ctx, false, NULL, NULL); - io_ring_submit_unlock(ctx, needs_lock); - } + ret = -EOWNERDEAD; + if (unlikely(ctx->sqo_dead)) + goto out; if (flags & IORING_ENTER_SQ_WAKEUP) wake_up(&ctx->sq_data->wait); - if (flags & IORING_ENTER_SQ_WAIT) - io_sqpoll_wait_sq(ctx); + if (flags & IORING_ENTER_SQ_WAIT) { + ret = io_sqpoll_wait_sq(ctx); + if (ret) + goto out; + } submitted = to_submit; } else if (to_submit) { ret = io_uring_add_task_file(ctx, f.file); @@ -9623,6 +9762,7 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p, */ ret = io_uring_install_fd(ctx, file); if (ret < 0) { + io_disable_sqo_submit(ctx); /* fput will clean it up */ fput(file); return ret; @@ -9631,6 +9771,7 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p, trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags); return ret; err: + io_disable_sqo_submit(ctx); io_ring_ctx_wait_and_kill(ctx); return ret; } diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c index f277d023ebcd..c75719312147 100644 --- a/fs/kernfs/file.c +++ b/fs/kernfs/file.c @@ -14,6 +14,7 @@ #include <linux/pagemap.h> #include <linux/sched/mm.h> #include <linux/fsnotify.h> +#include <linux/uio.h> #include "kernfs-internal.h" @@ -180,11 +181,10 @@ static const struct seq_operations kernfs_seq_ops = { * it difficult to use seq_file. Implement simplistic custom buffering for * bin files. 
*/ -static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of, - char __user *user_buf, size_t count, - loff_t *ppos) +static ssize_t kernfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter) { - ssize_t len = min_t(size_t, count, PAGE_SIZE); + struct kernfs_open_file *of = kernfs_of(iocb->ki_filp); + ssize_t len = min_t(size_t, iov_iter_count(iter), PAGE_SIZE); const struct kernfs_ops *ops; char *buf; @@ -210,7 +210,7 @@ static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of, of->event = atomic_read(&of->kn->attr.open->event); ops = kernfs_ops(of->kn); if (ops->read) - len = ops->read(of, buf, len, *ppos); + len = ops->read(of, buf, len, iocb->ki_pos); else len = -EINVAL; @@ -220,12 +220,12 @@ static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of, if (len < 0) goto out_free; - if (copy_to_user(user_buf, buf, len)) { + if (copy_to_iter(buf, len, iter) != len) { len = -EFAULT; goto out_free; } - *ppos += len; + iocb->ki_pos += len; out_free: if (buf == of->prealloc_buf) @@ -235,31 +235,14 @@ static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of, return len; } -/** - * kernfs_fop_read - kernfs vfs read callback - * @file: file pointer - * @user_buf: data to write - * @count: number of bytes - * @ppos: starting offset - */ -static ssize_t kernfs_fop_read(struct file *file, char __user *user_buf, - size_t count, loff_t *ppos) +static ssize_t kernfs_fop_read_iter(struct kiocb *iocb, struct iov_iter *iter) { - struct kernfs_open_file *of = kernfs_of(file); - - if (of->kn->flags & KERNFS_HAS_SEQ_SHOW) - return seq_read(file, user_buf, count, ppos); - else - return kernfs_file_direct_read(of, user_buf, count, ppos); + if (kernfs_of(iocb->ki_filp)->kn->flags & KERNFS_HAS_SEQ_SHOW) + return seq_read_iter(iocb, iter); + return kernfs_file_read_iter(iocb, iter); } -/** - * kernfs_fop_write - kernfs vfs write callback - * @file: file pointer - * @user_buf: data to write - * @count: number of bytes - * @ppos: starting offset - * +/* * Copy data in from userland and pass it to the matching kernfs write * operation. * @@ -269,20 +252,18 @@ static ssize_t kernfs_fop_read(struct file *file, char __user *user_buf, * modify only the value you're changing, then write the entire buffer * back.
*/ -static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf, - size_t count, loff_t *ppos) +static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter) { - struct kernfs_open_file *of = kernfs_of(file); + struct kernfs_open_file *of = kernfs_of(iocb->ki_filp); + ssize_t len = iov_iter_count(iter); const struct kernfs_ops *ops; - ssize_t len; char *buf; if (of->atomic_write_len) { - len = count; if (len > of->atomic_write_len) return -E2BIG; } else { - len = min_t(size_t, count, PAGE_SIZE); + len = min_t(size_t, len, PAGE_SIZE); } buf = of->prealloc_buf; @@ -293,7 +274,7 @@ static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf, if (!buf) return -ENOMEM; - if (copy_from_user(buf, user_buf, len)) { + if (copy_from_iter(buf, len, iter) != len) { len = -EFAULT; goto out_free; } @@ -312,7 +293,7 @@ static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf, ops = kernfs_ops(of->kn); if (ops->write) - len = ops->write(of, buf, len, *ppos); + len = ops->write(of, buf, len, iocb->ki_pos); else len = -EINVAL; @@ -320,7 +301,7 @@ static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf, mutex_unlock(&of->mutex); if (len > 0) - *ppos += len; + iocb->ki_pos += len; out_free: if (buf == of->prealloc_buf) @@ -673,7 +654,7 @@ static int kernfs_fop_open(struct inode *inode, struct file *file) /* * Write path needs access to atomic_write_len outside active reference. - * Cache it in open_file. See kernfs_fop_write() for details. + * Cache it in open_file. See kernfs_fop_write_iter() for details. */ of->atomic_write_len = ops->atomic_write_len; @@ -960,14 +941,16 @@ void kernfs_notify(struct kernfs_node *kn) EXPORT_SYMBOL_GPL(kernfs_notify); const struct file_operations kernfs_file_fops = { - .read = kernfs_fop_read, - .write = kernfs_fop_write, + .read_iter = kernfs_fop_read_iter, + .write_iter = kernfs_fop_write_iter, .llseek = generic_file_llseek, .mmap = kernfs_fop_mmap, .open = kernfs_fop_open, .release = kernfs_fop_release, .poll = kernfs_fop_poll, .fsync = noop_fsync, + .splice_read = generic_file_splice_read, + .splice_write = iter_file_splice_write, }; /** diff --git a/fs/namespace.c b/fs/namespace.c index d2db7dfe232b..9d33909d0f9e 100644 --- a/fs/namespace.c +++ b/fs/namespace.c @@ -1713,8 +1713,6 @@ static int can_umount(const struct path *path, int flags) { struct mount *mnt = real_mount(path->mnt); - if (flags & ~(MNT_FORCE | MNT_DETACH | MNT_EXPIRE | UMOUNT_NOFOLLOW)) - return -EINVAL; if (!may_mount()) return -EPERM; if (path->dentry != path->mnt->mnt_root) @@ -1728,6 +1726,7 @@ static int can_umount(const struct path *path, int flags) return 0; } +// caller is responsible for flags being sane int path_umount(struct path *path, int flags) { struct mount *mnt = real_mount(path->mnt); @@ -1749,6 +1748,10 @@ static int ksys_umount(char __user *name, int flags) struct path path; int ret; + // basic validity checks done first + if (flags & ~(MNT_FORCE | MNT_DETACH | MNT_EXPIRE | UMOUNT_NOFOLLOW)) + return -EINVAL; + if (!(flags & UMOUNT_NOFOLLOW)) lookup_flags |= LOOKUP_FOLLOW; ret = user_path_at(AT_FDCWD, name, lookup_flags, &path); diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c index 816e1427f17e..04bf8066980c 100644 --- a/fs/nfs/delegation.c +++ b/fs/nfs/delegation.c @@ -1011,22 +1011,24 @@ nfs_delegation_find_inode_server(struct nfs_server *server, const struct nfs_fh *fhandle) { struct nfs_delegation *delegation; - struct inode *freeme, *res = NULL; + struct super_block *freeme =
NULL; + struct inode *res = NULL; list_for_each_entry_rcu(delegation, &server->delegations, super_list) { spin_lock(&delegation->lock); if (delegation->inode != NULL && !test_bit(NFS_DELEGATION_REVOKED, &delegation->flags) && nfs_compare_fh(fhandle, &NFS_I(delegation->inode)->fh) == 0) { - freeme = igrab(delegation->inode); - if (freeme && nfs_sb_active(freeme->i_sb)) - res = freeme; + if (nfs_sb_active(server->super)) { + freeme = server->super; + res = igrab(delegation->inode); + } spin_unlock(&delegation->lock); if (res != NULL) return res; if (freeme) { rcu_read_unlock(); - iput(freeme); + nfs_sb_deactive(freeme); rcu_read_lock(); } return ERR_PTR(-EAGAIN); diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h index b840d0a91c9d..62d3189745cd 100644 --- a/fs/nfs/internal.h +++ b/fs/nfs/internal.h @@ -136,9 +136,29 @@ struct nfs_fs_context { } clone_data; }; -#define nfs_errorf(fc, fmt, ...) errorf(fc, fmt, ## __VA_ARGS__) -#define nfs_invalf(fc, fmt, ...) invalf(fc, fmt, ## __VA_ARGS__) -#define nfs_warnf(fc, fmt, ...) warnf(fc, fmt, ## __VA_ARGS__) +#define nfs_errorf(fc, fmt, ...) ((fc)->log.log ? \ + errorf(fc, fmt, ## __VA_ARGS__) : \ + ({ dprintk(fmt "\n", ## __VA_ARGS__); })) + +#define nfs_ferrorf(fc, fac, fmt, ...) ((fc)->log.log ? \ + errorf(fc, fmt, ## __VA_ARGS__) : \ + ({ dfprintk(fac, fmt "\n", ## __VA_ARGS__); })) + +#define nfs_invalf(fc, fmt, ...) ((fc)->log.log ? \ + invalf(fc, fmt, ## __VA_ARGS__) : \ + ({ dprintk(fmt "\n", ## __VA_ARGS__); -EINVAL; })) + +#define nfs_finvalf(fc, fac, fmt, ...) ((fc)->log.log ? \ + invalf(fc, fmt, ## __VA_ARGS__) : \ + ({ dfprintk(fac, fmt "\n", ## __VA_ARGS__); -EINVAL; })) + +#define nfs_warnf(fc, fmt, ...) ((fc)->log.log ? \ + warnf(fc, fmt, ## __VA_ARGS__) : \ + ({ dprintk(fmt "\n", ## __VA_ARGS__); })) + +#define nfs_fwarnf(fc, fac, fmt, ...) ((fc)->log.log ? \ + warnf(fc, fmt, ## __VA_ARGS__) : \ + ({ dfprintk(fac, fmt "\n", ## __VA_ARGS__); })) static inline struct nfs_fs_context *nfs_fc2context(const struct fs_context *fc) { @@ -579,12 +599,14 @@ extern void nfs4_test_session_trunk(struct rpc_clnt *clnt, static inline struct inode *nfs_igrab_and_active(struct inode *inode) { - inode = igrab(inode); - if (inode != NULL && !nfs_sb_active(inode->i_sb)) { - iput(inode); - inode = NULL; + struct super_block *sb = inode->i_sb; + + if (sb && nfs_sb_active(sb)) { + if (igrab(inode)) + return inode; + nfs_sb_deactive(sb); } - return inode; + return NULL; } static inline void nfs_iput_and_deactive(struct inode *inode) diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index 0ce04e0e5d82..2f4679a62712 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -3536,10 +3536,8 @@ static void nfs4_close_done(struct rpc_task *task, void *data) trace_nfs4_close(state, &calldata->arg, &calldata->res, task->tk_status); /* Handle Layoutreturn errors */ - if (pnfs_roc_done(task, calldata->inode, - &calldata->arg.lr_args, - &calldata->res.lr_res, - &calldata->res.lr_ret) == -EAGAIN) + if (pnfs_roc_done(task, &calldata->arg.lr_args, &calldata->res.lr_res, + &calldata->res.lr_ret) == -EAGAIN) goto out_restart; /* hmm. 
we are done with the inode, and in the process of freeing @@ -6384,10 +6382,8 @@ static void nfs4_delegreturn_done(struct rpc_task *task, void *calldata) trace_nfs4_delegreturn_exit(&data->args, &data->res, task->tk_status); /* Handle Layoutreturn errors */ - if (pnfs_roc_done(task, data->inode, - &data->args.lr_args, - &data->res.lr_res, - &data->res.lr_ret) == -EAGAIN) + if (pnfs_roc_done(task, &data->args.lr_args, &data->res.lr_res, + &data->res.lr_ret) == -EAGAIN) goto out_restart; switch (task->tk_status) { @@ -6441,10 +6437,10 @@ static void nfs4_delegreturn_release(void *calldata) struct nfs4_delegreturndata *data = calldata; struct inode *inode = data->inode; + if (data->lr.roc) + pnfs_roc_release(&data->lr.arg, &data->lr.res, + data->res.lr_ret); if (inode) { - if (data->lr.roc) - pnfs_roc_release(&data->lr.arg, &data->lr.res, - data->res.lr_ret); nfs_post_op_update_inode_force_wcc(inode, &data->fattr); nfs_iput_and_deactive(inode); } @@ -6520,16 +6516,14 @@ static int _nfs4_proc_delegreturn(struct inode *inode, const struct cred *cred, nfs_fattr_init(data->res.fattr); data->timestamp = jiffies; data->rpc_status = 0; - data->lr.roc = pnfs_roc(inode, &data->lr.arg, &data->lr.res, cred); data->inode = nfs_igrab_and_active(inode); - if (data->inode) { + if (data->inode || issync) { + data->lr.roc = pnfs_roc(inode, &data->lr.arg, &data->lr.res, + cred); if (data->lr.roc) { data->args.lr_args = &data->lr.arg; data->res.lr_res = &data->lr.res; } - } else if (data->lr.roc) { - pnfs_roc_release(&data->lr.arg, &data->lr.res, 0); - data->lr.roc = false; } task_setup_data.callback_data = data; @@ -7111,9 +7105,9 @@ static int _nfs4_do_setlk(struct nfs4_state *state, int cmd, struct file_lock *f data->arg.new_lock_owner, ret); } else data->cancelled = true; + trace_nfs4_set_lock(fl, state, &data->res.stateid, cmd, ret); rpc_put_task(task); dprintk("%s: done, ret = %d!\n", __func__, ret); - trace_nfs4_set_lock(fl, state, &data->res.stateid, cmd, ret); return ret; } diff --git a/fs/nfs/nfs4super.c b/fs/nfs/nfs4super.c index 984cc42ee54d..d09bcfd7db89 100644 --- a/fs/nfs/nfs4super.c +++ b/fs/nfs/nfs4super.c @@ -227,7 +227,7 @@ int nfs4_try_get_tree(struct fs_context *fc) fc, ctx->nfs_server.hostname, ctx->nfs_server.export_path); if (err) { - nfs_errorf(fc, "NFS4: Couldn't follow remote path"); + nfs_ferrorf(fc, MOUNT, "NFS4: Couldn't follow remote path"); dfprintk(MOUNT, "<-- nfs4_try_get_tree() = %d [error]\n", err); } else { dfprintk(MOUNT, "<-- nfs4_try_get_tree() = 0\n"); @@ -250,7 +250,7 @@ int nfs4_get_referral_tree(struct fs_context *fc) fc, ctx->nfs_server.hostname, ctx->nfs_server.export_path); if (err) { - nfs_errorf(fc, "NFS4: Couldn't follow remote path"); + nfs_ferrorf(fc, MOUNT, "NFS4: Couldn't follow remote path"); dfprintk(MOUNT, "<-- nfs4_get_referral_tree() = %d [error]\n", err); } else { dfprintk(MOUNT, "<-- nfs4_get_referral_tree() = 0\n"); diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c index 07f59dc8cb2e..4f274f21c4ab 100644 --- a/fs/nfs/pnfs.c +++ b/fs/nfs/pnfs.c @@ -1152,7 +1152,7 @@ void pnfs_layoutreturn_free_lsegs(struct pnfs_layout_hdr *lo, LIST_HEAD(freeme); spin_lock(&inode->i_lock); - if (!pnfs_layout_is_valid(lo) || !arg_stateid || + if (!pnfs_layout_is_valid(lo) || !nfs4_stateid_match_other(&lo->plh_stateid, arg_stateid)) goto out_unlock; if (stateid) { @@ -1509,10 +1509,8 @@ out_noroc: return false; } -int pnfs_roc_done(struct rpc_task *task, struct inode *inode, - struct nfs4_layoutreturn_args **argpp, - struct nfs4_layoutreturn_res **respp, - int *ret) +int 
pnfs_roc_done(struct rpc_task *task, struct nfs4_layoutreturn_args **argpp, + struct nfs4_layoutreturn_res **respp, int *ret) { struct nfs4_layoutreturn_args *arg = *argpp; int retval = -EAGAIN; @@ -1545,7 +1543,7 @@ int pnfs_roc_done(struct rpc_task *task, struct inode *inode, return 0; case -NFS4ERR_OLD_STATEID: if (!nfs4_layout_refresh_old_stateid(&arg->stateid, - &arg->range, inode)) + &arg->range, arg->inode)) break; *ret = -NFS4ERR_NOMATCHING_LAYOUT; return -EAGAIN; @@ -1560,23 +1558,28 @@ void pnfs_roc_release(struct nfs4_layoutreturn_args *args, int ret) { struct pnfs_layout_hdr *lo = args->layout; - const nfs4_stateid *arg_stateid = NULL; + struct inode *inode = args->inode; const nfs4_stateid *res_stateid = NULL; struct nfs4_xdr_opaque_data *ld_private = args->ld_private; switch (ret) { case -NFS4ERR_NOMATCHING_LAYOUT: + spin_lock(&inode->i_lock); + if (pnfs_layout_is_valid(lo) && + nfs4_stateid_match_other(&args->stateid, &lo->plh_stateid)) + pnfs_set_plh_return_info(lo, args->range.iomode, 0); + pnfs_clear_layoutreturn_waitbit(lo); + spin_unlock(&inode->i_lock); break; case 0: if (res->lrs_present) res_stateid = &res->stateid; fallthrough; default: - arg_stateid = &args->stateid; + pnfs_layoutreturn_free_lsegs(lo, &args->stateid, &args->range, + res_stateid); } trace_nfs4_layoutreturn_on_close(args->inode, &args->stateid, ret); - pnfs_layoutreturn_free_lsegs(lo, arg_stateid, &args->range, - res_stateid); if (ld_private && ld_private->ops && ld_private->ops->free) ld_private->ops->free(ld_private); pnfs_put_layout_hdr(lo); @@ -2015,6 +2018,27 @@ lookup_again: goto lookup_again; } + /* + * Because we free lsegs when sending LAYOUTRETURN, we need to wait + * for LAYOUTRETURN. + */ + if (test_bit(NFS_LAYOUT_RETURN, &lo->plh_flags)) { + spin_unlock(&ino->i_lock); + dprintk("%s wait for layoutreturn\n", __func__); + lseg = ERR_PTR(pnfs_prepare_to_retry_layoutget(lo)); + if (!IS_ERR(lseg)) { + pnfs_put_layout_hdr(lo); + dprintk("%s retrying\n", __func__); + trace_pnfs_update_layout(ino, pos, count, iomode, lo, + lseg, + PNFS_UPDATE_LAYOUT_RETRY); + goto lookup_again; + } + trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg, + PNFS_UPDATE_LAYOUT_RETURN); + goto out_put_layout_hdr; + } + lseg = pnfs_find_lseg(lo, &arg, strict_iomode); if (lseg) { trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg, @@ -2067,28 +2091,6 @@ lookup_again: nfs4_stateid_copy(&stateid, &lo->plh_stateid); } - /* - * Because we free lsegs before sending LAYOUTRETURN, we need to wait - * for LAYOUTRETURN even if first is true. 
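The hunk above moves the LAYOUTRETURN wait ahead of the lseg lookup, because lsegs are now freed when the return is sent rather than afterwards. The surrounding code relies on the common drop-lock/wait/retry idiom; a generic sketch of it under hypothetical names (MY_RETURNING, struct my_layout), not the actual pnfs symbols:

static int wait_if_returning(struct inode *inode, struct my_layout *lo)
{
retry:
	spin_lock(&inode->i_lock);
	if (test_bit(MY_RETURNING, &lo->flags)) {
		spin_unlock(&inode->i_lock);	/* cannot sleep under the lock */
		if (wait_on_bit(&lo->flags, MY_RETURNING, TASK_KILLABLE))
			return -ERESTARTSYS;	/* killed while waiting */
		goto retry;	/* state may have changed while we slept */
	}
	/* ... do the lookup while still holding the lock ... */
	spin_unlock(&inode->i_lock);
	return 0;
}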
- */ - if (test_bit(NFS_LAYOUT_RETURN, &lo->plh_flags)) { - spin_unlock(&ino->i_lock); - dprintk("%s wait for layoutreturn\n", __func__); - lseg = ERR_PTR(pnfs_prepare_to_retry_layoutget(lo)); - if (!IS_ERR(lseg)) { - if (first) - pnfs_clear_first_layoutget(lo); - pnfs_put_layout_hdr(lo); - dprintk("%s retrying\n", __func__); - trace_pnfs_update_layout(ino, pos, count, iomode, lo, - lseg, PNFS_UPDATE_LAYOUT_RETRY); - goto lookup_again; - } - trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg, - PNFS_UPDATE_LAYOUT_RETURN); - goto out_put_layout_hdr; - } - if (pnfs_layoutgets_blocked(lo)) { trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg, PNFS_UPDATE_LAYOUT_BLOCKED); @@ -2242,6 +2244,7 @@ static void _lgopen_prepare_attached(struct nfs4_opendata *data, &rng, GFP_KERNEL); if (!lgp) { pnfs_clear_first_layoutget(lo); + nfs_layoutget_end(lo); pnfs_put_layout_hdr(lo); return; } diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h index bbd3de1025f2..d810ae674f4e 100644 --- a/fs/nfs/pnfs.h +++ b/fs/nfs/pnfs.h @@ -297,10 +297,8 @@ bool pnfs_roc(struct inode *ino, struct nfs4_layoutreturn_args *args, struct nfs4_layoutreturn_res *res, const struct cred *cred); -int pnfs_roc_done(struct rpc_task *task, struct inode *inode, - struct nfs4_layoutreturn_args **argpp, - struct nfs4_layoutreturn_res **respp, - int *ret); +int pnfs_roc_done(struct rpc_task *task, struct nfs4_layoutreturn_args **argpp, + struct nfs4_layoutreturn_res **respp, int *ret); void pnfs_roc_release(struct nfs4_layoutreturn_args *args, struct nfs4_layoutreturn_res *res, int ret); @@ -772,7 +770,7 @@ pnfs_roc(struct inode *ino, } static inline int -pnfs_roc_done(struct rpc_task *task, struct inode *inode, +pnfs_roc_done(struct rpc_task *task, struct nfs4_layoutreturn_args **argpp, struct nfs4_layoutreturn_res **respp, int *ret) diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c index 2efcfdd348a1..49d3389bd813 100644 --- a/fs/nfs/pnfs_nfs.c +++ b/fs/nfs/pnfs_nfs.c @@ -78,22 +78,18 @@ void pnfs_generic_clear_request_commit(struct nfs_page *req, struct nfs_commit_info *cinfo) { - struct pnfs_layout_segment *freeme = NULL; + struct pnfs_commit_bucket *bucket = NULL; if (!test_and_clear_bit(PG_COMMIT_TO_DS, &req->wb_flags)) goto out; cinfo->ds->nwritten--; - if (list_is_singular(&req->wb_list)) { - struct pnfs_commit_bucket *bucket; - + if (list_is_singular(&req->wb_list)) bucket = list_first_entry(&req->wb_list, - struct pnfs_commit_bucket, - written); - freeme = pnfs_free_bucket_lseg(bucket); - } + struct pnfs_commit_bucket, written); out: nfs_request_remove_commit_list(req, cinfo); - pnfs_put_lseg(freeme); + if (bucket) + pnfs_put_lseg(pnfs_free_bucket_lseg(bucket)); } EXPORT_SYMBOL_GPL(pnfs_generic_clear_request_commit); @@ -407,12 +403,16 @@ pnfs_bucket_get_committing(struct list_head *head, struct pnfs_commit_bucket *bucket, struct nfs_commit_info *cinfo) { + struct pnfs_layout_segment *lseg; struct list_head *pos; list_for_each(pos, &bucket->committing) cinfo->ds->ncommitting--; list_splice_init(&bucket->committing, head); - return pnfs_free_bucket_lseg(bucket); + lseg = pnfs_free_bucket_lseg(bucket); + if (!lseg) + lseg = pnfs_get_lseg(bucket->lseg); + return lseg; } static struct nfs_commit_data * @@ -424,8 +424,6 @@ pnfs_bucket_fetch_commitdata(struct pnfs_commit_bucket *bucket, if (!data) return NULL; data->lseg = pnfs_bucket_get_committing(&data->pages, bucket, cinfo); - if (!data->lseg) - data->lseg = pnfs_get_lseg(bucket->lseg); return data; } diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c index 
821db21ba072..34b880211e5e 100644 --- a/fs/nfsd/nfs3xdr.c +++ b/fs/nfsd/nfs3xdr.c @@ -865,9 +865,14 @@ compose_entry_fh(struct nfsd3_readdirres *cd, struct svc_fh *fhp, if (isdotent(name, namlen)) { if (namlen == 2) { dchild = dget_parent(dparent); - /* filesystem root - cannot return filehandle for ".." */ + /* + * Don't return filehandle for ".." if we're at + * the filesystem or export root: + */ if (dchild == dparent) goto out; + if (dparent == exp->ex_path.dentry) + goto out; } else dchild = dget(dparent); } else diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c index 4727b7f03c5b..8d6d2678abad 100644 --- a/fs/nfsd/nfs4proc.c +++ b/fs/nfsd/nfs4proc.c @@ -50,6 +50,11 @@ #include "pnfs.h" #include "trace.h" +static bool inter_copy_offload_enable; +module_param(inter_copy_offload_enable, bool, 0644); +MODULE_PARM_DESC(inter_copy_offload_enable, + "Enable inter server to server copy offload. Default: false"); + #ifdef CONFIG_NFSD_V4_SECURITY_LABEL #include <linux/security.h> diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c index 45ee6b12ce5b..eaaa1605b5b5 100644 --- a/fs/nfsd/nfs4xdr.c +++ b/fs/nfsd/nfs4xdr.c @@ -147,6 +147,25 @@ svcxdr_dupstr(struct nfsd4_compoundargs *argp, void *buf, u32 len) return p; } +static void * +svcxdr_savemem(struct nfsd4_compoundargs *argp, __be32 *p, u32 len) +{ + __be32 *tmp; + + /* + * The location of the decoded data item is stable, + * so @p is OK to use. This is the common case. + */ + if (p != argp->xdr->scratch.iov_base) + return p; + + tmp = svcxdr_tmpalloc(argp, len); + if (!tmp) + return NULL; + memcpy(tmp, p, len); + return tmp; +} + /* * NFSv4 basic data type decoders */ @@ -183,11 +202,10 @@ nfsd4_decode_opaque(struct nfsd4_compoundargs *argp, struct xdr_netobj *o) p = xdr_inline_decode(argp->xdr, len); if (!p) return nfserr_bad_xdr; - o->data = svcxdr_tmpalloc(argp, len); + o->data = svcxdr_savemem(argp, p, len); if (!o->data) return nfserr_jukebox; o->len = len; - memcpy(o->data, p, len); return nfs_ok; } @@ -205,10 +223,9 @@ nfsd4_decode_component4(struct nfsd4_compoundargs *argp, char **namp, u32 *lenp) status = check_filename((char *)p, *lenp); if (status) return status; - *namp = svcxdr_tmpalloc(argp, *lenp); + *namp = svcxdr_savemem(argp, p, *lenp); if (!*namp) return nfserr_jukebox; - memcpy(*namp, p, *lenp); return nfs_ok; } @@ -1200,10 +1217,9 @@ nfsd4_decode_putfh(struct nfsd4_compoundargs *argp, struct nfsd4_putfh *putfh) p = xdr_inline_decode(argp->xdr, putfh->pf_fhlen); if (!p) return nfserr_bad_xdr; - putfh->pf_fhval = svcxdr_tmpalloc(argp, putfh->pf_fhlen); + putfh->pf_fhval = svcxdr_savemem(argp, p, putfh->pf_fhlen); if (!putfh->pf_fhval) return nfserr_jukebox; - memcpy(putfh->pf_fhval, p, putfh->pf_fhlen); return nfs_ok; } @@ -1318,24 +1334,20 @@ nfsd4_decode_setclientid(struct nfsd4_compoundargs *argp, struct nfsd4_setclient p = xdr_inline_decode(argp->xdr, setclientid->se_callback_netid_len); if (!p) return nfserr_bad_xdr; - setclientid->se_callback_netid_val = svcxdr_tmpalloc(argp, + setclientid->se_callback_netid_val = svcxdr_savemem(argp, p, setclientid->se_callback_netid_len); if (!setclientid->se_callback_netid_val) return nfserr_jukebox; - memcpy(setclientid->se_callback_netid_val, p, - setclientid->se_callback_netid_len); if (xdr_stream_decode_u32(argp->xdr, &setclientid->se_callback_addr_len) < 0) return nfserr_bad_xdr; p = xdr_inline_decode(argp->xdr, setclientid->se_callback_addr_len); if (!p) return nfserr_bad_xdr; - setclientid->se_callback_addr_val = svcxdr_tmpalloc(argp, + setclientid->se_callback_addr_val 
= svcxdr_savemem(argp, p, setclientid->se_callback_addr_len); if (!setclientid->se_callback_addr_val) return nfserr_jukebox; - memcpy(setclientid->se_callback_addr_val, p, - setclientid->se_callback_addr_len); if (xdr_stream_decode_u32(argp->xdr, &setclientid->se_callback_ident) < 0) return nfserr_bad_xdr; @@ -1375,10 +1387,9 @@ nfsd4_decode_verify(struct nfsd4_compoundargs *argp, struct nfsd4_verify *verify p = xdr_inline_decode(argp->xdr, verify->ve_attrlen); if (!p) return nfserr_bad_xdr; - verify->ve_attrval = svcxdr_tmpalloc(argp, verify->ve_attrlen); + verify->ve_attrval = svcxdr_savemem(argp, p, verify->ve_attrlen); if (!verify->ve_attrval) return nfserr_jukebox; - memcpy(verify->ve_attrval, p, verify->ve_attrlen); return nfs_ok; } @@ -2333,10 +2344,9 @@ nfsd4_decode_compound(struct nfsd4_compoundargs *argp) p = xdr_inline_decode(argp->xdr, argp->taglen); if (!p) return 0; - argp->tag = svcxdr_tmpalloc(argp, argp->taglen); + argp->tag = svcxdr_savemem(argp, p, argp->taglen); if (!argp->tag) return 0; - memcpy(argp->tag, p, argp->taglen); max_reply += xdr_align_size(argp->taglen); } @@ -4756,6 +4766,7 @@ nfsd4_encode_read_plus_data(struct nfsd4_compoundres *resp, resp->rqstp->rq_vec, read->rd_vlen, maxcount, eof); if (nfserr) return nfserr; + xdr_truncate_encode(xdr, starting_len + 16 + xdr_align_size(*maxcount)); tmp = htonl(NFS4_CONTENT_DATA); write_bytes_to_xdr_buf(xdr->buf, starting_len, &tmp, 4); @@ -4763,6 +4774,10 @@ nfsd4_encode_read_plus_data(struct nfsd4_compoundres *resp, write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp64, 8); tmp = htonl(*maxcount); write_bytes_to_xdr_buf(xdr->buf, starting_len + 12, &tmp, 4); + + tmp = xdr_zero; + write_bytes_to_xdr_buf(xdr->buf, starting_len + 16 + *maxcount, &tmp, + xdr_pad_size(*maxcount)); return nfs_ok; } @@ -4855,14 +4870,15 @@ out: if (nfserr && segments == 0) xdr_truncate_encode(xdr, starting_len); else { - tmp = htonl(eof); - write_bytes_to_xdr_buf(xdr->buf, starting_len, &tmp, 4); - tmp = htonl(segments); - write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4); if (nfserr) { xdr_truncate_encode(xdr, last_segment); nfserr = nfs_ok; + eof = 0; } + tmp = htonl(eof); + write_bytes_to_xdr_buf(xdr->buf, starting_len, &tmp, 4); + tmp = htonl(segments); + write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4); } return nfserr; diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index 00384c332f9b..f9c9f4c63cc7 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -33,12 +33,6 @@ #define NFSDDBG_FACILITY NFSDDBG_SVC -bool inter_copy_offload_enable; -EXPORT_SYMBOL_GPL(inter_copy_offload_enable); -module_param(inter_copy_offload_enable, bool, 0644); -MODULE_PARM_DESC(inter_copy_offload_enable, - "Enable inter server to server copy offload. 
Default: false"); - extern struct svc_program nfsd_program; static int nfsd(void *vrqstp); #if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL) diff --git a/fs/nfsd/xdr4.h b/fs/nfsd/xdr4.h index a60ff5ce1a37..c300885ae75d 100644 --- a/fs/nfsd/xdr4.h +++ b/fs/nfsd/xdr4.h @@ -568,7 +568,6 @@ struct nfsd4_copy { struct nfs_fh c_fh; nfs4_stateid stateid; }; -extern bool inter_copy_offload_enable; struct nfsd4_seek { /* request */ diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c index 3e01d8f2ab90..dcab112e1f00 100644 --- a/fs/notify/fanotify/fanotify_user.c +++ b/fs/notify/fanotify/fanotify_user.c @@ -1285,26 +1285,23 @@ fput_and_out: return ret; } +#ifndef CONFIG_ARCH_SPLIT_ARG64 SYSCALL_DEFINE5(fanotify_mark, int, fanotify_fd, unsigned int, flags, __u64, mask, int, dfd, const char __user *, pathname) { return do_fanotify_mark(fanotify_fd, flags, mask, dfd, pathname); } +#endif -#ifdef CONFIG_COMPAT -COMPAT_SYSCALL_DEFINE6(fanotify_mark, +#if defined(CONFIG_ARCH_SPLIT_ARG64) || defined(CONFIG_COMPAT) +SYSCALL32_DEFINE6(fanotify_mark, int, fanotify_fd, unsigned int, flags, - __u32, mask0, __u32, mask1, int, dfd, + SC_ARG64(mask), int, dfd, const char __user *, pathname) { - return do_fanotify_mark(fanotify_fd, flags, -#ifdef __BIG_ENDIAN - ((__u64)mask0 << 32) | mask1, -#else - ((__u64)mask1 << 32) | mask0, -#endif - dfd, pathname); + return do_fanotify_mark(fanotify_fd, flags, SC_VAL64(__u64, mask), + dfd, pathname); } #endif diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c index 317899222d7f..d2018f70d1fa 100644 --- a/fs/proc/proc_sysctl.c +++ b/fs/proc/proc_sysctl.c @@ -1770,6 +1770,12 @@ static int process_sysctl_arg(char *param, char *val, return 0; } + if (!val) + return -EINVAL; + len = strlen(val); + if (len == 0) + return -EINVAL; + /* * To set sysctl options, we use a temporary mount of proc, look up the * respective sys/ file and write to it. 
To avoid mounting it when no @@ -1811,7 +1817,6 @@ static int process_sysctl_arg(char *param, char *val, file, param, val); goto out; } - len = strlen(val); wret = kernel_write(file, val, len, &pos); if (wret < 0) { err = wret; diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index ee5a235b3056..602e3a52884d 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -1035,6 +1035,25 @@ struct clear_refs_private { }; #ifdef CONFIG_MEM_SOFT_DIRTY + +#define is_cow_mapping(flags) (((flags) & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE) + +static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr, pte_t pte) +{ + struct page *page; + + if (!pte_write(pte)) + return false; + if (!is_cow_mapping(vma->vm_flags)) + return false; + if (likely(!atomic_read(&vma->vm_mm->has_pinned))) + return false; + page = vm_normal_page(vma, addr, pte); + if (!page) + return false; + return page_maybe_dma_pinned(page); +} + static inline void clear_soft_dirty(struct vm_area_struct *vma, unsigned long addr, pte_t *pte) { @@ -1049,6 +1068,8 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma, if (pte_present(ptent)) { pte_t old_pte; + if (pte_is_pinned(vma, addr, ptent)) + return; old_pte = ptep_modify_prot_start(vma, addr, pte); ptent = pte_wrprotect(old_pte); ptent = pte_clear_soft_dirty(ptent); @@ -1215,41 +1236,26 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf, .type = type, }; + if (mmap_write_lock_killable(mm)) { + count = -EINTR; + goto out_mm; + } if (type == CLEAR_REFS_MM_HIWATER_RSS) { - if (mmap_write_lock_killable(mm)) { - count = -EINTR; - goto out_mm; - } - /* * Writing 5 to /proc/pid/clear_refs resets the peak * resident set size to this mm's current rss value. */ reset_mm_hiwater_rss(mm); - mmap_write_unlock(mm); - goto out_mm; + goto out_unlock; } - if (mmap_read_lock_killable(mm)) { - count = -EINTR; - goto out_mm; - } tlb_gather_mmu(&tlb, mm, 0, -1); if (type == CLEAR_REFS_SOFT_DIRTY) { for (vma = mm->mmap; vma; vma = vma->vm_next) { if (!(vma->vm_flags & VM_SOFTDIRTY)) continue; - mmap_read_unlock(mm); - if (mmap_write_lock_killable(mm)) { - count = -EINTR; - goto out_mm; - } - for (vma = mm->mmap; vma; vma = vma->vm_next) { - vma->vm_flags &= ~VM_SOFTDIRTY; - vma_set_page_prot(vma); - } - mmap_write_downgrade(mm); - break; + vma->vm_flags &= ~VM_SOFTDIRTY; + vma_set_page_prot(vma); } mmu_notifier_range_init(&range, MMU_NOTIFY_SOFT_DIRTY, @@ -1261,7 +1267,8 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf, if (type == CLEAR_REFS_SOFT_DIRTY) mmu_notifier_invalidate_range_end(&range); tlb_finish_mmu(&tlb, 0, -1); - mmap_read_unlock(mm); +out_unlock: + mmap_write_unlock(mm); out_mm: mmput(mm); } diff --git a/fs/select.c b/fs/select.c index ebfebdfe5c69..37aaa8317f3a 100644 --- a/fs/select.c +++ b/fs/select.c @@ -1011,14 +1011,17 @@ static int do_sys_poll(struct pollfd __user *ufds, unsigned int nfds, fdcount = do_poll(head, &table, end_time); poll_freewait(&table); + if (!user_write_access_begin(ufds, nfds * sizeof(*ufds))) + goto out_fds; + for (walk = head; walk; walk = walk->next) { struct pollfd *fds = walk->entries; int j; - for (j = 0; j < walk->len; j++, ufds++) - if (__put_user(fds[j].revents, &ufds->revents)) - goto out_fds; + for (j = walk->len; j; fds++, ufds++, j--) + unsafe_put_user(fds->revents, &ufds->revents, Efault); } + user_write_access_end(); err = fdcount; out_fds: @@ -1030,6 +1033,11 @@ out_fds: } return err; + +Efault: + user_write_access_end(); + err = -EFAULT; + goto out_fds; } 
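The do_sys_poll() change just above batches the copy-out: instead of one __put_user() per revents field (each with its own access check, and STAC/CLAC toggling on x86), it opens a single user_write_access_begin() window over the whole array and stores with unsafe_put_user(), sharing one fault label. The general shape of the pattern, with hypothetical names (copy_revents_out, uptr, vals):

static int copy_revents_out(short __user *uptr, const short *vals, unsigned int n)
{
	unsigned int i;

	if (!user_write_access_begin(uptr, n * sizeof(*uptr)))
		return -EFAULT;			/* range not writable at all */
	for (i = 0; i < n; i++)
		unsafe_put_user(vals[i], &uptr[i], efault);
	user_write_access_end();
	return 0;

efault:						/* any faulting store lands here */
	user_write_access_end();
	return -EFAULT;
}

The invariant to preserve is that every exit path, including the fault label, closes the window with user_write_access_end() so user access is never left enabled.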
static long do_restart_poll(struct restart_block *restart_block) diff --git a/fs/udf/super.c b/fs/udf/super.c index 5bef3a68395d..d0df217f4712 100644 --- a/fs/udf/super.c +++ b/fs/udf/super.c @@ -705,6 +705,7 @@ static int udf_check_vsd(struct super_block *sb) struct buffer_head *bh = NULL; int nsr = 0; struct udf_sb_info *sbi; + loff_t session_offset; sbi = UDF_SB(sb); if (sb->s_blocksize < sizeof(struct volStructDesc)) @@ -712,7 +713,8 @@ static int udf_check_vsd(struct super_block *sb) else sectorsize = sb->s_blocksize; - sector += (((loff_t)sbi->s_session) << sb->s_blocksize_bits); + session_offset = (loff_t)sbi->s_session << sb->s_blocksize_bits; + sector += session_offset; udf_debug("Starting at sector %u (%lu byte sectors)\n", (unsigned int)(sector >> sb->s_blocksize_bits), @@ -757,8 +759,7 @@ static int udf_check_vsd(struct super_block *sb) if (nsr > 0) return 1; - else if (!bh && sector - (sbi->s_session << sb->s_blocksize_bits) == - VSD_FIRST_SECTOR_OFFSET) + else if (!bh && sector - session_offset == VSD_FIRST_SECTOR_OFFSET) return -1; else return 0; diff --git a/fs/zonefs/Kconfig b/fs/zonefs/Kconfig index ef2697b78820..827278f937fe 100644 --- a/fs/zonefs/Kconfig +++ b/fs/zonefs/Kconfig @@ -3,6 +3,7 @@ config ZONEFS_FS depends on BLOCK depends on BLK_DEV_ZONED select FS_IOMAP + select CRC32 help zonefs is a simple file system which exposes zones of a zoned block device (e.g. host-managed or host-aware SMR disk drives) as files. diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h index dd90c9792909..0e7316a86240 100644 --- a/include/asm-generic/bitops/atomic.h +++ b/include/asm-generic/bitops/atomic.h @@ -11,19 +11,19 @@ * See Documentation/atomic_bitops.txt for details. */ -static inline void set_bit(unsigned int nr, volatile unsigned long *p) +static __always_inline void set_bit(unsigned int nr, volatile unsigned long *p) { p += BIT_WORD(nr); atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p); } -static inline void clear_bit(unsigned int nr, volatile unsigned long *p) +static __always_inline void clear_bit(unsigned int nr, volatile unsigned long *p) { p += BIT_WORD(nr); atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p); } -static inline void change_bit(unsigned int nr, volatile unsigned long *p) +static __always_inline void change_bit(unsigned int nr, volatile unsigned long *p) { p += BIT_WORD(nr); atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p); diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h index 6236f212da61..edffd1dcca3e 100644 --- a/include/drm/drm_dp_helper.h +++ b/include/drm/drm_dp_helper.h @@ -2029,16 +2029,13 @@ struct drm_dp_desc { int drm_dp_read_desc(struct drm_dp_aux *aux, struct drm_dp_desc *desc, bool is_branch); -u32 drm_dp_get_edid_quirks(const struct edid *edid); /** * enum drm_dp_quirk - Display Port sink/branch device specific quirks * * Display Port sink and branch devices in the wild have a variety of bugs, try * to collect them here. The quirks are shared, but it's up to the drivers to - * implement workarounds for them. Note that because some devices have - * unreliable OUIDs, the EDID of sinks should also be checked for quirks using - * drm_dp_get_edid_quirks(). + * implement workarounds for them. */ enum drm_dp_quirk { /** @@ -2071,16 +2068,6 @@ enum drm_dp_quirk { */ DP_DPCD_QUIRK_DSC_WITHOUT_VIRTUAL_DPCD, /** - * @DP_QUIRK_FORCE_DPCD_BACKLIGHT: - * - * The device is telling the truth when it says that it uses DPCD - * backlight controls, even if the system's firmware disagrees. 
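With drm_dp_get_edid_quirks() removed, quirk lookups are driven purely by the DPCD descriptor. A hedged caller-side sketch (the quirk chosen here is just an arbitrary member of the same enum, not one tied to this change):

	struct drm_dp_desc desc;

	if (drm_dp_read_desc(aux, &desc, drm_dp_is_branch(dpcd)) == 0 &&
	    drm_dp_has_quirk(&desc, DP_DPCD_QUIRK_CONSTANT_N)) {
		/* apply the device-specific workaround here */
	}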
This - * quirk should be checked against both the ident and panel EDID. - * When present, the driver should honor the DPCD backlight - * capabilities advertised. - */ - DP_QUIRK_FORCE_DPCD_BACKLIGHT, - /** * @DP_DPCD_QUIRK_CAN_DO_MAX_LINK_RATE_3_24_GBPS: * * The device supports a link rate of 3.24 Gbps (multiplier 0xc) despite @@ -2092,16 +2079,14 @@ enum drm_dp_quirk { /** * drm_dp_has_quirk() - does the DP device have a specific quirk * @desc: Device descriptor filled by drm_dp_read_desc() - * @edid_quirks: Optional quirk bitmask filled by drm_dp_get_edid_quirks() * @quirk: Quirk to query for * * Return true if DP device identified by @desc has @quirk. */ static inline bool -drm_dp_has_quirk(const struct drm_dp_desc *desc, u32 edid_quirks, - enum drm_dp_quirk quirk) +drm_dp_has_quirk(const struct drm_dp_desc *desc, enum drm_dp_quirk quirk) { - return (desc->quirks | edid_quirks) & BIT(quirk); + return desc->quirks & BIT(quirk); } #ifdef CONFIG_DRM_DP_CEC diff --git a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h index f5e92fe9151c..bd1c39907b92 100644 --- a/include/drm/drm_dp_mst_helper.h +++ b/include/drm/drm_dp_mst_helper.h @@ -783,6 +783,7 @@ drm_dp_mst_detect_port(struct drm_connector *connector, struct edid *drm_dp_mst_get_edid(struct drm_connector *connector, struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port); +int drm_dp_get_vc_payload_bw(int link_rate, int link_lane_count); int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc); diff --git a/include/drm/drm_hdcp.h b/include/drm/drm_hdcp.h index fe58dbb46962..ac22c246542a 100644 --- a/include/drm/drm_hdcp.h +++ b/include/drm/drm_hdcp.h @@ -101,11 +101,11 @@ /* Following Macros take a byte at a time for bit(s) masking */ /* - * TODO: This has to be changed for DP MST, as multiple stream on - * same port is possible. - * For HDCP2.2 on HDMI and DP SST this value is always 1. + * TODO: HDCP_2_2_MAX_CONTENT_STREAMS_CNT is based upon actual + * H/W MST streams capacity. + * This needs to be moved out to a platform-specific header. */ -#define HDCP_2_2_MAX_CONTENT_STREAMS_CNT 1 +#define HDCP_2_2_MAX_CONTENT_STREAMS_CNT 4 #define HDCP_2_2_TXCAP_MASK_LEN 2 #define HDCP_2_2_RXCAPS_LEN 3 #define HDCP_2_2_RX_REPEATER(x) ((x) & BIT(0)) diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h index fc85f50fa0e9..8dcb3e1477bc 100644 --- a/include/kvm/arm_pmu.h +++ b/include/kvm/arm_pmu.h @@ -13,7 +13,7 @@ #define ARMV8_PMU_CYCLE_IDX (ARMV8_PMU_MAX_COUNTERS - 1) #define ARMV8_PMU_MAX_COUNTER_PAIRS ((ARMV8_PMU_MAX_COUNTERS + 1) >> 1) -#ifdef CONFIG_KVM_ARM_PMU +#ifdef CONFIG_HW_PERF_EVENTS struct kvm_pmc { u8 idx; /* index into the pmu->pmc array */ diff --git a/include/linux/acpi.h b/include/linux/acpi.h index 2630c2e953f7..053bf05fb1f7 100644 --- a/include/linux/acpi.h +++ b/include/linux/acpi.h @@ -885,6 +885,13 @@ static inline int acpi_device_modalias(struct device *dev, return -ENODEV; } +static inline struct platform_device * +acpi_create_platform_device(struct acpi_device *adev, + struct property_entry *properties) +{ + return NULL; +} + static inline bool acpi_dma_supported(struct acpi_device *adev) { return false; diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h index 74c6c0486eed..555ab0fddbef 100644 --- a/include/linux/compiler-gcc.h +++ b/include/linux/compiler-gcc.h @@ -13,6 +13,12 @@ /* https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 */ #if GCC_VERSION < 40900 # error Sorry, your version of GCC is too old - please use 4.9 or newer.
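For context on these version checks: GCC_VERSION flattens major/minor/patchlevel into a single integer, so 40900 above is GCC 4.9.0 and the arm64 cutoff added just below, 50100, is GCC 5.1.0. Assuming the usual kernel definition of the macro:

#define GCC_VERSION (__GNUC__ * 10000		\
		     + __GNUC_MINOR__ * 100	\
		     + __GNUC_PATCHLEVEL__)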
+#elif defined(CONFIG_ARM64) && GCC_VERSION < 50100 +/* + * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63293 + * https://lore.kernel.org/r/[email protected] + */ +# error Sorry, your version of GCC is too old - please use 5.1 or newer. #endif /* diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h index b2a3f4f641a7..ea5e04e75845 100644 --- a/include/linux/compiler_attributes.h +++ b/include/linux/compiler_attributes.h @@ -273,6 +273,12 @@ #define __used __attribute__((__used__)) /* + * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-warn_005funused_005fresult-function-attribute + * clang: https://clang.llvm.org/docs/AttributeReference.html#nodiscard-warn-unused-result + */ +#define __must_check __attribute__((__warn_unused_result__)) + +/* * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-weak-function-attribute * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-weak-variable-attribute */ diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h index bbaa39e98f9f..e5dd5a4ae946 100644 --- a/include/linux/compiler_types.h +++ b/include/linux/compiler_types.h @@ -121,12 +121,6 @@ struct ftrace_likely_data { unsigned long constant; }; -#ifdef CONFIG_ENABLE_MUST_CHECK -#define __must_check __attribute__((__warn_unused_result__)) -#else -#define __must_check -#endif - #if defined(CC_USING_HOTPATCH) #define notrace __attribute__((hotpatch(0, 0))) #elif defined(CC_USING_PATCHABLE_FUNCTION_ENTRY) diff --git a/include/linux/console.h b/include/linux/console.h index dbe78e8e2602..20874db50bc8 100644 --- a/include/linux/console.h +++ b/include/linux/console.h @@ -186,12 +186,9 @@ extern int braille_register_console(struct console *, int index, extern int braille_unregister_console(struct console *); #ifdef CONFIG_TTY extern void console_sysfs_notify(void); -extern void register_ttynull_console(void); #else static inline void console_sysfs_notify(void) { } -static inline void register_ttynull_console(void) -{ } #endif extern bool console_suspend_enabled; diff --git a/include/linux/device.h b/include/linux/device.h index 89bb8b84173e..1779f90eeb4c 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -609,6 +609,18 @@ static inline const char *dev_name(const struct device *dev) return kobject_name(&dev->kobj); } +/** + * dev_bus_name - Return a device's bus/class name, if at all possible + * @dev: struct device to get the bus/class name of + * + * Will return the name of the bus/class the device is attached to. If it is + * not attached to a bus/class, an empty string will be returned. + */ +static inline const char *dev_bus_name(const struct device *dev) +{ + return dev->bus ? dev->bus->name : (dev->class ? 
dev->class->name : ""); +} + __printf(2, 3) int dev_set_name(struct device *dev, const char *name, ...); #ifdef CONFIG_NUMA diff --git a/include/linux/dm-bufio.h b/include/linux/dm-bufio.h index 29d255fdd5d6..90bd558a17f5 100644 --- a/include/linux/dm-bufio.h +++ b/include/linux/dm-bufio.h @@ -150,6 +150,7 @@ void dm_bufio_set_minimum_buffers(struct dm_bufio_client *c, unsigned n); unsigned dm_bufio_get_block_size(struct dm_bufio_client *c); sector_t dm_bufio_get_device_size(struct dm_bufio_client *c); +struct dm_io_client *dm_bufio_get_dm_io_client(struct dm_bufio_client *c); sector_t dm_bufio_get_block_number(struct dm_buffer *b); void *dm_bufio_get_block_data(struct dm_buffer *b); void *dm_bufio_get_aux_data(struct dm_buffer *b); diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h index d956987ed032..09c6a0bf3892 100644 --- a/include/linux/intel-iommu.h +++ b/include/linux/intel-iommu.h @@ -533,11 +533,10 @@ struct dmar_domain { /* Domain ids per IOMMU. Use u16 since * domain ids are 16 bit wide according * to VT-d spec, section 9.3 */ - unsigned int auxd_refcnt; /* Refcount of auxiliary attaching */ bool has_iotlb_device; struct list_head devices; /* all devices' list */ - struct list_head auxd; /* link to device's auxiliary list */ + struct list_head subdevices; /* all subdevices' list */ struct iova_domain iovad; /* iova's that belong to this domain */ struct dma_pte *pgd; /* virtual address */ @@ -610,14 +609,21 @@ struct intel_iommu { struct dmar_drhd_unit *drhd; }; +/* Per subdevice private data */ +struct subdev_domain_info { + struct list_head link_phys; /* link to phys device siblings */ + struct list_head link_domain; /* link to domain siblings */ + struct device *pdev; /* physical device derived from */ + struct dmar_domain *domain; /* aux-domain */ + int users; /* user count */ +}; + /* PCI domain-device relationship */ struct device_domain_info { struct list_head link; /* link to domain siblings */ struct list_head global; /* link to global list */ struct list_head table; /* link to pasid table */ - struct list_head auxiliary_domains; /* auxiliary domains - * attached to this device - */ + struct list_head subdevices; /* subdevices sibling */ u32 segment; /* PCI segment number */ u8 bus; /* PCI bus number */ u8 devfn; /* PCI devfn number */ @@ -758,6 +764,7 @@ struct intel_svm_dev { struct list_head list; struct rcu_head rcu; struct device *dev; + struct intel_iommu *iommu; struct svm_dev_ops *ops; struct iommu_sva sva; u32 pasid; @@ -771,7 +778,6 @@ struct intel_svm { struct mmu_notifier notifier; struct mm_struct *mm; - struct intel_iommu *iommu; unsigned int flags; u32 pasid; int gpasid; /* In case that guest PASID is different from host PASID */ diff --git a/include/linux/kasan.h b/include/linux/kasan.h index 5e0655fb2a6f..fe1ae73ff8b5 100644 --- a/include/linux/kasan.h +++ b/include/linux/kasan.h @@ -35,8 +35,12 @@ struct kunit_kasan_expectation { #define KASAN_SHADOW_INIT 0 #endif +#ifndef PTE_HWTABLE_PTRS +#define PTE_HWTABLE_PTRS 0 +#endif + extern unsigned char kasan_early_shadow_page[PAGE_SIZE]; -extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE]; +extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS]; extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD]; extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD]; extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D]; diff --git a/include/linux/kcov.h b/include/linux/kcov.h index a10e84707d82..4e3037dc1204 100644 --- a/include/linux/kcov.h +++ b/include/linux/kcov.h @@ -52,6 +52,25 @@ static 
inline void kcov_remote_start_usb(u64 id) { kcov_remote_start(kcov_remote_handle(KCOV_SUBSYSTEM_USB, id)); } +/* + * The softirq flavor of kcov_remote_*() functions is introduced as a temporary + * workaround for kcov's lack of nested remote coverage sections support in + * task context. Adding support for nested sections is tracked in: + * https://bugzilla.kernel.org/show_bug.cgi?id=210337 + */ + +static inline void kcov_remote_start_usb_softirq(u64 id) +{ + if (in_serving_softirq()) + kcov_remote_start_usb(id); +} + +static inline void kcov_remote_stop_softirq(void) +{ + if (in_serving_softirq()) + kcov_remote_stop(); +} + #else static inline void kcov_task_init(struct task_struct *t) {} @@ -66,6 +85,8 @@ static inline u64 kcov_common_handle(void) } static inline void kcov_remote_start_common(u64 id) {} static inline void kcov_remote_start_usb(u64 id) {} +static inline void kcov_remote_start_usb_softirq(u64 id) {} +static inline void kcov_remote_stop_softirq(void) {} #endif /* CONFIG_KCOV */ #endif /* _LINUX_KCOV_H */ diff --git a/include/linux/kthread.h b/include/linux/kthread.h index 65b81e0c494d..2484ed97e72f 100644 --- a/include/linux/kthread.h +++ b/include/linux/kthread.h @@ -33,6 +33,9 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data), unsigned int cpu, const char *namefmt); +void kthread_set_per_cpu(struct task_struct *k, int cpu); +bool kthread_is_per_cpu(struct task_struct *k); + /** * kthread_run - create and wake a thread. * @threadfn: the function to run until signal_pending(current). diff --git a/include/linux/ktime.h b/include/linux/ktime.h index a12b5523cc18..73f20deb497d 100644 --- a/include/linux/ktime.h +++ b/include/linux/ktime.h @@ -230,6 +230,5 @@ static inline ktime_t ms_to_ktime(u64 ms) } # include <linux/timekeeping.h> -# include <linux/timekeeping32.h> #endif diff --git a/include/linux/mdio-bitbang.h b/include/linux/mdio-bitbang.h index 5d71e8a8500f..aca4dc037b70 100644 --- a/include/linux/mdio-bitbang.h +++ b/include/linux/mdio-bitbang.h @@ -35,6 +35,9 @@ struct mdiobb_ctrl { const struct mdiobb_ops *ops; }; +int mdiobb_read(struct mii_bus *bus, int phy, int reg); +int mdiobb_write(struct mii_bus *bus, int phy, int reg, u16 val); + /* The returned bus is not yet registered with the phy layer.
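A hedged usage sketch for the new softirq helpers above: wrap a completion handler that may be invoked either in task context or from softirq, so the remote coverage section is entered only in the latter case. struct my_dev, dev->kcov_handle and process_completion() are placeholders, not real USB-core symbols:

static void my_complete(struct my_dev *dev)
{
	kcov_remote_start_usb_softirq(dev->kcov_handle);
	process_completion(dev);	/* coverage attributed to the handle */
	kcov_remote_stop_softirq();
}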
*/ struct mii_bus *alloc_mdio_bitbang(struct mdiobb_ctrl *ctrl); diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index d827bd7f3bfe..eeb0b52203e9 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -665,7 +665,7 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page, { struct mem_cgroup *memcg = page_memcg(page); - VM_WARN_ON_ONCE_PAGE(!memcg, page); + VM_WARN_ON_ONCE_PAGE(!memcg && !mem_cgroup_disabled(), page); return mem_cgroup_lruvec(memcg, pgdat); } diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 8fbddec26eb8..442c0160caab 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -1280,7 +1280,8 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 ece_support[0x1]; u8 reserved_at_a4[0x7]; u8 log_max_srq[0x5]; - u8 reserved_at_b0[0x2]; + u8 reserved_at_b0[0x1]; + u8 uplink_follow[0x1]; u8 ts_cqe_to_dest_cqn[0x1]; u8 reserved_at_b3[0xd]; diff --git a/include/linux/nvme.h b/include/linux/nvme.h index d92535997687..bfed36e342cc 100644 --- a/include/linux/nvme.h +++ b/include/linux/nvme.h @@ -116,6 +116,9 @@ enum { NVME_REG_BPMBL = 0x0048, /* Boot Partition Memory Buffer * Location */ + NVME_REG_CMBMSC = 0x0050, /* Controller Memory Buffer Memory + * Space Control + */ NVME_REG_PMRCAP = 0x0e00, /* Persistent Memory Capabilities */ NVME_REG_PMRCTL = 0x0e04, /* Persistent Memory Region Control */ NVME_REG_PMRSTS = 0x0e08, /* Persistent Memory Region Status */ @@ -135,6 +138,7 @@ enum { #define NVME_CAP_CSS(cap) (((cap) >> 37) & 0xff) #define NVME_CAP_MPSMIN(cap) (((cap) >> 48) & 0xf) #define NVME_CAP_MPSMAX(cap) (((cap) >> 52) & 0xf) +#define NVME_CAP_CMBS(cap) (((cap) >> 57) & 0x1) #define NVME_CMB_BIR(cmbloc) ((cmbloc) & 0x7) #define NVME_CMB_OFST(cmbloc) (((cmbloc) >> 12) & 0xfffff) @@ -192,6 +196,8 @@ enum { NVME_CSTS_SHST_OCCUR = 1 << 2, NVME_CSTS_SHST_CMPLT = 2 << 2, NVME_CSTS_SHST_MASK = 3 << 2, + NVME_CMBMSC_CRE = 1 << 0, + NVME_CMBMSC_CMSE = 1 << 1, }; struct nvme_id_power_state { diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h index bf7966776c55..505480217cf1 100644 --- a/include/linux/perf/arm_pmu.h +++ b/include/linux/perf/arm_pmu.h @@ -163,8 +163,6 @@ int arm_pmu_acpi_probe(armpmu_init_fn init_fn); static inline int arm_pmu_acpi_probe(armpmu_init_fn init_fn) { return 0; } #endif -bool arm_pmu_irq_is_nmi(void); - /* Internal functions only for core arm_pmu code */ struct arm_pmu *armpmu_alloc(void); struct arm_pmu *armpmu_alloc_atomic(void); diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index de0826411311..fd02c5fa60cb 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -86,6 +86,12 @@ void rcu_sched_clock_irq(int user); void rcu_report_dead(unsigned int cpu); void rcutree_migrate_callbacks(int cpu); +#ifdef CONFIG_TASKS_RCU_GENERIC +void rcu_init_tasks_generic(void); +#else +static inline void rcu_init_tasks_generic(void) { } +#endif + #ifdef CONFIG_RCU_STALL_COMMON void rcu_sysrq_start(void); void rcu_sysrq_end(void); diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 333bcdc39635..5f60c9e907c9 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -366,7 +366,7 @@ static inline void skb_frag_size_sub(skb_frag_t *frag, int delta) static inline bool skb_frag_must_loop(struct page *p) { #if defined(CONFIG_HIGHMEM) - if (PageHighMem(p)) + if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) || PageHighMem(p)) return true; #endif return false; @@ -1203,6 +1203,7 @@ struct skb_seq_state { 
struct sk_buff *root_skb; struct sk_buff *cur_skb; __u8 *frag_data; + __u32 frag_off; }; void skb_prepare_seq_read(struct sk_buff *skb, unsigned int from, diff --git a/include/linux/soc/mediatek/mtk-mutex.h b/include/linux/soc/mediatek/mtk-mutex.h new file mode 100644 index 000000000000..6fe4ffbde290 --- /dev/null +++ b/include/linux/soc/mediatek/mtk-mutex.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2015 MediaTek Inc. + */ + +#ifndef MTK_MUTEX_H +#define MTK_MUTEX_H + +struct regmap; +struct device; +struct mtk_mutex; + +struct mtk_mutex *mtk_mutex_get(struct device *dev); +int mtk_mutex_prepare(struct mtk_mutex *mutex); +void mtk_mutex_add_comp(struct mtk_mutex *mutex, + enum mtk_ddp_comp_id id); +void mtk_mutex_enable(struct mtk_mutex *mutex); +void mtk_mutex_disable(struct mtk_mutex *mutex); +void mtk_mutex_remove_comp(struct mtk_mutex *mutex, + enum mtk_ddp_comp_id id); +void mtk_mutex_unprepare(struct mtk_mutex *mutex); +void mtk_mutex_put(struct mtk_mutex *mutex); +void mtk_mutex_acquire(struct mtk_mutex *mutex); +void mtk_mutex_release(struct mtk_mutex *mutex); + +#endif /* MTK_MUTEX_H */ diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index f3929aff39cf..7688bc983de5 100644 --- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -251,6 +251,30 @@ static inline int is_syscall_trace_event(struct trace_event_call *tp_event) static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) #endif /* __SYSCALL_DEFINEx */ +/* For split 64-bit arguments on 32-bit architectures */ +#ifdef __LITTLE_ENDIAN +#define SC_ARG64(name) u32, name##_lo, u32, name##_hi +#else +#define SC_ARG64(name) u32, name##_hi, u32, name##_lo +#endif +#define SC_VAL64(type, name) ((type) name##_hi << 32 | name##_lo) + +#ifdef CONFIG_COMPAT +#define SYSCALL32_DEFINE1 COMPAT_SYSCALL_DEFINE1 +#define SYSCALL32_DEFINE2 COMPAT_SYSCALL_DEFINE2 +#define SYSCALL32_DEFINE3 COMPAT_SYSCALL_DEFINE3 +#define SYSCALL32_DEFINE4 COMPAT_SYSCALL_DEFINE4 +#define SYSCALL32_DEFINE5 COMPAT_SYSCALL_DEFINE5 +#define SYSCALL32_DEFINE6 COMPAT_SYSCALL_DEFINE6 +#else +#define SYSCALL32_DEFINE1 SYSCALL_DEFINE1 +#define SYSCALL32_DEFINE2 SYSCALL_DEFINE2 +#define SYSCALL32_DEFINE3 SYSCALL_DEFINE3 +#define SYSCALL32_DEFINE4 SYSCALL_DEFINE4 +#define SYSCALL32_DEFINE5 SYSCALL_DEFINE5 +#define SYSCALL32_DEFINE6 SYSCALL_DEFINE6 +#endif + /* * Called before coming back to user-mode. Returning to user-mode with an * address limit different than USER_DS can allow to overwrite kernel memory. diff --git a/include/linux/timekeeping32.h b/include/linux/timekeeping32.h deleted file mode 100644 index 266017fc9ee9..000000000000 --- a/include/linux/timekeeping32.h +++ /dev/null @@ -1,14 +0,0 @@ -#ifndef _LINUX_TIMEKEEPING32_H -#define _LINUX_TIMEKEEPING32_H -/* - * These interfaces are all based on the old timespec type - * and should get replaced with the timespec64 based versions - * over time so we can remove the file here. 
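To illustrate the SYSCALL32_DEFINE*/SC_ARG64 machinery above: on 32-bit ABIs a 64-bit syscall argument is passed as two 32-bit registers whose order depends on endianness. SC_ARG64() declares the lo/hi halves in the right order (so it counts as two arguments in the DEFINE arity) and SC_VAL64() stitches them back together. A hypothetical wrapper (my_call and do_my_call() are not real syscalls):

SYSCALL32_DEFINE4(my_call, int, fd, SC_ARG64(offset), int, flags)
{
	/* offset_lo/offset_hi are reassembled into one 64-bit value */
	return do_my_call(fd, SC_VAL64(loff_t, offset), flags);
}

This is the same shape the fanotify_mark() wrapper takes earlier in this diff.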
- */ - -static inline unsigned long get_seconds(void) -{ - return ktime_get_real_seconds(); -} - -#endif diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h index 88a7673894d5..cfbfd6fe01df 100644 --- a/include/linux/usb/usbnet.h +++ b/include/linux/usb/usbnet.h @@ -81,6 +81,8 @@ struct usbnet { # define EVENT_LINK_CHANGE 11 # define EVENT_SET_RX_MODE 12 # define EVENT_NO_IP_ALIGN 13 + u32 rx_speed; /* in bps - NOT Mbps */ + u32 tx_speed; /* in bps - NOT Mbps */ }; static inline struct usb_driver *driver_of(struct usb_interface *intf) diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h index 9a4bbccddc7f..0d6f7ec86061 100644 --- a/include/net/cfg80211.h +++ b/include/net/cfg80211.h @@ -1756,7 +1756,7 @@ struct cfg80211_sar_specs { /** - * @struct cfg80211_sar_chan_ranges - sar frequency ranges + * struct cfg80211_sar_freq_ranges - sar frequency ranges * @start_freq: start range edge frequency * @end_freq: end range edge frequency */ @@ -3972,6 +3972,8 @@ struct mgmt_frame_regs { * This callback may sleep. * @reset_tid_config: Reset TID specific configuration for the peer, for the * given TIDs. This callback may sleep. + * + * @set_sar_specs: Update the SAR (TX power) settings. */ struct cfg80211_ops { int (*suspend)(struct wiphy *wiphy, struct cfg80211_wowlan *wow); @@ -4929,6 +4931,7 @@ struct wiphy_iftype_akm_suites { * @max_data_retry_count: maximum supported per TID retry count for * configuration through the %NL80211_TID_CONFIG_ATTR_RETRY_SHORT and * %NL80211_TID_CONFIG_ATTR_RETRY_LONG attributes + * @sar_capa: SAR control capabilities */ struct wiphy { /* assign these fields before you register the wiphy */ diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h index 7338b3865a2a..111d7771b208 100644 --- a/include/net/inet_connection_sock.h +++ b/include/net/inet_connection_sock.h @@ -76,6 +76,8 @@ struct inet_connection_sock_af_ops { * @icsk_ext_hdr_len: Network protocol overhead (IP/IPv6 options) * @icsk_ack: Delayed ACK control data * @icsk_mtup; MTU probing control data + * @icsk_probes_tstamp: Probe timestamp (cleared by non-zero window ack) + * @icsk_user_timeout: TCP_USER_TIMEOUT value */ struct inet_connection_sock { /* inet_sock has to be the first member! */ @@ -129,6 +131,7 @@ struct inet_connection_sock { u32 probe_timestamp; } icsk_mtup; + u32 icsk_probes_tstamp; u32 icsk_user_timeout; u64 icsk_ca_priv[104 / sizeof(u64)]; diff --git a/include/net/mac80211.h b/include/net/mac80211.h index d315740581f1..2bdbf62f4ecd 100644 --- a/include/net/mac80211.h +++ b/include/net/mac80211.h @@ -3880,6 +3880,7 @@ enum ieee80211_reconfig_type { * This callback may sleep. * @sta_set_4addr: Called to notify the driver when a station starts/stops using * 4-address mode + * @set_sar_specs: Update the SAR (TX power) settings. 
*/ struct ieee80211_ops { void (*tx)(struct ieee80211_hw *hw, diff --git a/include/net/red.h b/include/net/red.h index fc455445f4b2..932f0d79d60c 100644 --- a/include/net/red.h +++ b/include/net/red.h @@ -168,12 +168,14 @@ static inline void red_set_vars(struct red_vars *v) v->qcount = -1; } -static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog) +static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog, u8 Scell_log) { if (fls(qth_min) + Wlog > 32) return false; if (fls(qth_max) + Wlog > 32) return false; + if (Scell_log >= 32) + return false; if (qth_max < qth_min) return false; return true; diff --git a/include/net/sock.h b/include/net/sock.h index bdc4323ce53c..129d200bccb4 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -1921,10 +1921,13 @@ static inline void sk_set_txhash(struct sock *sk) sk->sk_txhash = net_tx_rndhash(); } -static inline void sk_rethink_txhash(struct sock *sk) +static inline bool sk_rethink_txhash(struct sock *sk) { - if (sk->sk_txhash) + if (sk->sk_txhash) { sk_set_txhash(sk); + return true; + } + return false; } static inline struct dst_entry * @@ -1947,12 +1950,10 @@ sk_dst_get(struct sock *sk) return dst; } -static inline void dst_negative_advice(struct sock *sk) +static inline void __dst_negative_advice(struct sock *sk) { struct dst_entry *ndst, *dst = __sk_dst_get(sk); - sk_rethink_txhash(sk); - if (dst && dst->ops->negative_advice) { ndst = dst->ops->negative_advice(dst); @@ -1964,6 +1965,12 @@ static inline void dst_negative_advice(struct sock *sk) } } +static inline void dst_negative_advice(struct sock *sk) +{ + sk_rethink_txhash(sk); + __dst_negative_advice(sk); +} + static inline void __sk_dst_set(struct sock *sk, struct dst_entry *dst) { diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h index 4f4e93bf814c..cc17bc957548 100644 --- a/include/net/xdp_sock.h +++ b/include/net/xdp_sock.h @@ -58,10 +58,6 @@ struct xdp_sock { struct xsk_queue *tx ____cacheline_aligned_in_smp; struct list_head tx_list; - /* Mutual exclusion of NAPI TX thread and sendmsg error paths - * in the SKB destructor callback. - */ - spinlock_t tx_completion_lock; /* Protects generic receive. */ spinlock_t rx_lock; diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h index 01755b838c74..eaa8386dbc63 100644 --- a/include/net/xsk_buff_pool.h +++ b/include/net/xsk_buff_pool.h @@ -73,6 +73,11 @@ struct xsk_buff_pool { bool dma_need_sync; bool unaligned; void *addrs; + /* Mutual exclusion of the completion ring in the SKB mode. Two cases to protect: + * NAPI TX thread and sendmsg error paths in the SKB destructor callback and when + * sockets share a single cq when the same netdev and queue id is shared. + */ + spinlock_t cq_lock; struct xdp_buff_xsk *free_heads[]; }; diff --git a/include/soc/nps/common.h b/include/soc/nps/common.h deleted file mode 100644 index 8c18dc6d3fde..000000000000 --- a/include/soc/nps/common.h +++ /dev/null @@ -1,172 +0,0 @@ -/* - * Copyright (c) 2016, Mellanox Technologies. All rights reserved. - * - * This software is available to you under a choice of one of two - * licenses. 
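On the red_check_params() change above: the RED/GRED code later performs shifts such as qth << Wlog and shifts by Scell_log on 32-bit quantities, so fls(qth) + Wlog > 32 or Scell_log >= 32 would overflow. Qdisc configuration paths are expected to reject such input up front, roughly like this caller-side sketch:

	if (!red_check_params(qth_min, qth_max, Wlog, Scell_log))
		return -EINVAL;		/* would overflow 32-bit shift math */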
You may choose to be licensed under the terms of the GNU - * General Public License (GPL) Version 2, available from the file - * COPYING in the main directory of this source tree, or the - * OpenIB.org BSD license below: - * - * Redistribution and use in source and binary forms, with or - * without modification, are permitted provided that the following - * conditions are met: - * - * - Redistributions of source code must retain the above - * copyright notice, this list of conditions and the following - * disclaimer. - * - * - Redistributions in binary form must reproduce the above - * copyright notice, this list of conditions and the following - * disclaimer in the documentation and/or other materials - * provided with the distribution. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND - * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS - * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN - * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN - * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. - */ - -#ifndef SOC_NPS_COMMON_H -#define SOC_NPS_COMMON_H - -#ifdef CONFIG_SMP -#define NPS_IPI_IRQ 5 -#endif - -#define NPS_HOST_REG_BASE 0xF6000000 - -#define NPS_MSU_BLKID 0x018 - -#define CTOP_INST_RSPI_GIC_0_R12 0x3C56117E -#define CTOP_INST_MOV2B_FLIP_R3_B1_B2_INST 0x5B60 -#define CTOP_INST_MOV2B_FLIP_R3_B1_B2_LIMM 0x00010422 - -#ifndef AUX_IENABLE -#define AUX_IENABLE 0x40c -#endif - -#define CTOP_AUX_IACK (0xFFFFF800 + 0x088) - -#ifndef __ASSEMBLY__ - -/* In order to increase compilation test coverage */ -#ifdef CONFIG_ARC -static inline void nps_ack_gic(void) -{ - __asm__ __volatile__ ( - " .word %0\n" - : - : "i"(CTOP_INST_RSPI_GIC_0_R12) - : "memory"); -} -#else -static inline void nps_ack_gic(void) { } -#define write_aux_reg(r, v) -#define read_aux_reg(r) 0 -#endif - -/* CPU global ID */ -struct global_id { - union { - struct { -#ifdef CONFIG_EZNPS_MTM_EXT - u32 __reserved:20, cluster:4, core:4, thread:4; -#else - u32 __reserved:24, cluster:4, core:4; -#endif - }; - u32 value; - }; -}; - -/* - * Convert logical to physical CPU IDs - * - * The conversion swap bits 1 and 2 of cluster id (out of 4 bits) - * Now quad of logical clusters id's are adjacent physically, - * and not like the id's physically came with each cluster. - * Below table is 4x4 mesh of core clusters as it layout on chip. 
- * Cluster ids are in format: logical (physical) - * - * ----------------- ------------------ - * 3 | 5 (3) 7 (7) | | 13 (11) 15 (15)| - * - * 2 | 4 (2) 6 (6) | | 12 (10) 14 (14)| - * ----------------- ------------------ - * 1 | 1 (1) 3 (5) | | 9 (9) 11 (13)| - * - * 0 | 0 (0) 2 (4) | | 8 (8) 10 (12)| - * ----------------- ------------------ - * 0 1 2 3 - */ -static inline int nps_cluster_logic_to_phys(int cluster) -{ -#ifdef __arc__ - __asm__ __volatile__( - " mov r3,%0\n" - " .short %1\n" - " .word %2\n" - " mov %0,r3\n" - : "+r"(cluster) - : "i"(CTOP_INST_MOV2B_FLIP_R3_B1_B2_INST), - "i"(CTOP_INST_MOV2B_FLIP_R3_B1_B2_LIMM) - : "r3"); -#endif - - return cluster; -} - -#define NPS_CPU_TO_CLUSTER_NUM(cpu) \ - ({ struct global_id gid; gid.value = cpu; \ - nps_cluster_logic_to_phys(gid.cluster); }) - -struct nps_host_reg_address { - union { - struct { - u32 base:8, cl_x:4, cl_y:4, - blkid:6, reg:8, __reserved:2; - }; - u32 value; - }; -}; - -struct nps_host_reg_address_non_cl { - union { - struct { - u32 base:7, blkid:11, reg:12, __reserved:2; - }; - u32 value; - }; -}; - -static inline void *nps_host_reg_non_cl(u32 blkid, u32 reg) -{ - struct nps_host_reg_address_non_cl reg_address; - - reg_address.value = NPS_HOST_REG_BASE; - reg_address.blkid = blkid; - reg_address.reg = reg; - - return (void *)reg_address.value; -} - -static inline void *nps_host_reg(u32 cpu, u32 blkid, u32 reg) -{ - struct nps_host_reg_address reg_address; - u32 cl = NPS_CPU_TO_CLUSTER_NUM(cpu); - - reg_address.value = NPS_HOST_REG_BASE; - reg_address.cl_x = (cl >> 2) & 0x3; - reg_address.cl_y = cl & 0x3; - reg_address.blkid = blkid; - reg_address.reg = reg; - - return (void *)reg_address.value; -} -#endif /* __ASSEMBLY__ */ - -#endif /* SOC_NPS_COMMON_H */ diff --git a/include/soc/nps/mtm.h b/include/soc/nps/mtm.h deleted file mode 100644 index d2f5e7e3703e..000000000000 --- a/include/soc/nps/mtm.h +++ /dev/null @@ -1,59 +0,0 @@ -/* - * Copyright (c) 2016, Mellanox Technologies. All rights reserved. - * - * This software is available to you under a choice of one of two - * licenses. You may choose to be licensed under the terms of the GNU - * General Public License (GPL) Version 2, available from the file - * COPYING in the main directory of this source tree, or the - * OpenIB.org BSD license below: - * - * Redistribution and use in source and binary forms, with or - * without modification, are permitted provided that the following - * conditions are met: - * - * - Redistributions of source code must retain the above - * copyright notice, this list of conditions and the following - * disclaimer. - * - * - Redistributions in binary form must reproduce the above - * copyright notice, this list of conditions and the following - * disclaimer in the documentation and/or other materials - * provided with the distribution. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND - * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS - * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN - * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN - * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. 
- */ - -#ifndef SOC_NPS_MTM_H -#define SOC_NPS_MTM_H - -#define CTOP_INST_HWSCHD_OFF_R3 0x3B6F00BF -#define CTOP_INST_HWSCHD_RESTORE_R3 0x3E6F70C3 - -static inline void hw_schd_save(unsigned int *flags) -{ - __asm__ __volatile__( - " .word %1\n" - " st r3,[%0]\n" - : - : "r"(flags), "i"(CTOP_INST_HWSCHD_OFF_R3) - : "r3", "memory"); -} - -static inline void hw_schd_restore(unsigned int flags) -{ - __asm__ __volatile__( - " mov r3, %0\n" - " .word %1\n" - : - : "r"(flags), "i"(CTOP_INST_HWSCHD_RESTORE_R3) - : "r3"); -} - -#endif /* SOC_NPS_MTM_H */ diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h index 4eef374d4413..4a5cc8c64be3 100644 --- a/include/trace/events/afs.h +++ b/include/trace/events/afs.h @@ -231,6 +231,7 @@ enum afs_file_error { afs_file_error_dir_bad_magic, afs_file_error_dir_big, afs_file_error_dir_missing_page, + afs_file_error_dir_name_too_long, afs_file_error_dir_over_end, afs_file_error_dir_small, afs_file_error_dir_unmarked_ext, @@ -488,6 +489,7 @@ enum afs_cb_break_reason { EM(afs_file_error_dir_bad_magic, "DIR_BAD_MAGIC") \ EM(afs_file_error_dir_big, "DIR_BIG") \ EM(afs_file_error_dir_missing_page, "DIR_MISSING_PAGE") \ + EM(afs_file_error_dir_name_too_long, "DIR_NAME_TOO_LONG") \ EM(afs_file_error_dir_over_end, "DIR_ENT_OVER_END") \ EM(afs_file_error_dir_small, "DIR_SMALL") \ EM(afs_file_error_dir_unmarked_ext, "DIR_UNMARKED_EXT") \ diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h index 5039af667645..cbe3e152d24c 100644 --- a/include/trace/events/sched.h +++ b/include/trace/events/sched.h @@ -366,7 +366,7 @@ TRACE_EVENT(sched_process_wait, ); /* - * Tracepoint for do_fork: + * Tracepoint for kernel_clone: */ TRACE_EVENT(sched_process_fork, diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h index 58994e013022..6f89c27265f5 100644 --- a/include/trace/events/sunrpc.h +++ b/include/trace/events/sunrpc.h @@ -1424,13 +1424,61 @@ TRACE_EVENT(rpcb_unregister, ) ); +/* Record an xdr_buf containing a fully-formed RPC message */ +DECLARE_EVENT_CLASS(svc_xdr_msg_class, + TP_PROTO( + const struct xdr_buf *xdr + ), + + TP_ARGS(xdr), + + TP_STRUCT__entry( + __field(u32, xid) + __field(const void *, head_base) + __field(size_t, head_len) + __field(const void *, tail_base) + __field(size_t, tail_len) + __field(unsigned int, page_len) + __field(unsigned int, msg_len) + ), + + TP_fast_assign( + __be32 *p = (__be32 *)xdr->head[0].iov_base; + + __entry->xid = be32_to_cpu(*p); + __entry->head_base = p; + __entry->head_len = xdr->head[0].iov_len; + __entry->tail_base = xdr->tail[0].iov_base; + __entry->tail_len = xdr->tail[0].iov_len; + __entry->page_len = xdr->page_len; + __entry->msg_len = xdr->len; + ), + + TP_printk("xid=0x%08x head=[%p,%zu] page=%u tail=[%p,%zu] len=%u", + __entry->xid, + __entry->head_base, __entry->head_len, __entry->page_len, + __entry->tail_base, __entry->tail_len, __entry->msg_len + ) +); + +#define DEFINE_SVCXDRMSG_EVENT(name) \ + DEFINE_EVENT(svc_xdr_msg_class, \ + svc_xdr_##name, \ + TP_PROTO( \ + const struct xdr_buf *xdr \ + ), \ + TP_ARGS(xdr)) + +DEFINE_SVCXDRMSG_EVENT(recvfrom); + +/* Record an xdr_buf containing arbitrary data, tagged with an XID */ DECLARE_EVENT_CLASS(svc_xdr_buf_class, TP_PROTO( - const struct svc_rqst *rqst, + __be32 xid, const struct xdr_buf *xdr ), - TP_ARGS(rqst, xdr), + TP_ARGS(xid, xdr), TP_STRUCT__entry( __field(u32, xid) @@ -1443,7 +1491,7 @@ DECLARE_EVENT_CLASS(svc_xdr_buf_class, ), TP_fast_assign( - __entry->xid = be32_to_cpu(rqst->rq_xid); + __entry->xid = 
be32_to_cpu(xid); __entry->head_base = xdr->head[0].iov_base; __entry->head_len = xdr->head[0].iov_len; __entry->tail_base = xdr->tail[0].iov_base; @@ -1463,12 +1511,11 @@ DECLARE_EVENT_CLASS(svc_xdr_buf_class, DEFINE_EVENT(svc_xdr_buf_class, \ svc_xdr_##name, \ TP_PROTO( \ - const struct svc_rqst *rqst, \ + __be32 xid, \ const struct xdr_buf *xdr \ ), \ - TP_ARGS(rqst, xdr)) + TP_ARGS(xid, xdr)) -DEFINE_SVCXDRBUF_EVENT(recvfrom); DEFINE_SVCXDRBUF_EVENT(sendto); /* diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h index 5f42a14481bd..f76de49c768f 100644 --- a/include/uapi/drm/drm_fourcc.h +++ b/include/uapi/drm/drm_fourcc.h @@ -528,6 +528,25 @@ extern "C" { #define I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS fourcc_mod_code(INTEL, 7) /* + * Intel Color Control Surface with Clear Color (CCS) for Gen-12 render + * compression. + * + * The main surface is Y-tiled and is at plane index 0 whereas CCS is linear + * and at index 1. The clear color is stored at index 2, and the pitch should + * be ignored. The clear color structure is 256 bits. The first 128 bits + * represents Raw Clear Color Red, Green, Blue and Alpha color each represented + * by 32 bits. The raw clear color is consumed by the 3d engine and generates + * the converted clear color of size 64 bits. The first 32 bits store the Lower + * Converted Clear Color value and the next 32 bits store the Higher Converted + * Clear Color value when applicable. The Converted Clear Color values are + * consumed by the DE. The last 64 bits are used to store Color Discard Enable + * and Depth Clear Value Valid which are ignored by the DE. A CCS cache line + * corresponds to an area of 4x1 tiles in the main surface. The main surface + * pitch is required to be a multiple of 4 tile widths. + */ +#define I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS_CC fourcc_mod_code(INTEL, 8) + +/* * Tiled, NV12MT, grouped in 64 (pixels) x 32 (lines) -sized macroblocks * * Macroblocks are laid in a Z-shape, and each pixel data is following the diff --git a/include/uapi/linux/bcache.h b/include/uapi/linux/bcache.h index 52e8bcb33981..cf7399f03b71 100644 --- a/include/uapi/linux/bcache.h +++ b/include/uapi/linux/bcache.h @@ -213,7 +213,7 @@ struct cache_sb_disk { __le16 keys; }; __le64 d[SB_JOURNAL_BUCKETS]; /* journal buckets */ - __le16 bucket_size_hi; + __le16 obso_bucket_size_hi; /* obsoleted */ }; /* diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h index 874cc12a34d9..82708c6db432 100644 --- a/include/uapi/linux/if_link.h +++ b/include/uapi/linux/if_link.h @@ -75,8 +75,9 @@ struct rtnl_link_stats { * * @rx_dropped: Number of packets received but not processed, * e.g. due to lack of resources or unsupported protocol. - * For hardware interfaces this counter should not include packets - * dropped by the device which are counted separately in + * For hardware interfaces this counter may include packets discarded + * due to L2 address filtering but should not include packets dropped + * by the device due to buffer exhaustion which are counted separately in * @rx_missed_errors (since procfs folds those two counters together). 
* * @tx_dropped: Number of packets dropped on their way to transmission, diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 886802b8ffba..374c67875cdb 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -251,6 +251,7 @@ struct kvm_hyperv_exit { #define KVM_EXIT_X86_RDMSR 29 #define KVM_EXIT_X86_WRMSR 30 #define KVM_EXIT_DIRTY_RING_FULL 31 +#define KVM_EXIT_AP_RESET_HOLD 32 /* For KVM_EXIT_INTERNAL_ERROR */ /* Emulate instruction failed. */ @@ -573,6 +574,7 @@ struct kvm_vapic_addr { #define KVM_MP_STATE_CHECK_STOP 6 #define KVM_MP_STATE_OPERATING 7 #define KVM_MP_STATE_LOAD 8 +#define KVM_MP_STATE_AP_RESET_HOLD 9 struct kvm_mp_state { __u32 mp_state; diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h index 28b6ee53305f..b1633e7ba529 100644 --- a/include/uapi/linux/netfilter/nf_tables.h +++ b/include/uapi/linux/netfilter/nf_tables.h @@ -293,6 +293,7 @@ enum nft_rule_compat_attributes { * @NFT_SET_EVAL: set can be updated from the evaluation path * @NFT_SET_OBJECT: set contains stateful objects * @NFT_SET_CONCAT: set contains a concatenation + * @NFT_SET_EXPR: set contains expressions */ enum nft_set_flags { NFT_SET_ANONYMOUS = 0x1, @@ -303,6 +304,7 @@ enum nft_set_flags { NFT_SET_EVAL = 0x20, NFT_SET_OBJECT = 0x40, NFT_SET_CONCAT = 0x80, + NFT_SET_EXPR = 0x100, }; /** @@ -706,6 +708,7 @@ enum nft_dynset_ops { enum nft_dynset_flags { NFT_DYNSET_F_INV = (1 << 0), + NFT_DYNSET_F_EXPR = (1 << 1), }; /** diff --git a/include/uapi/linux/ppp-ioctl.h b/include/uapi/linux/ppp-ioctl.h index 8dbecb3ad036..1cc5ce0ae062 100644 --- a/include/uapi/linux/ppp-ioctl.h +++ b/include/uapi/linux/ppp-ioctl.h @@ -116,7 +116,7 @@ struct pppol2tp_ioc_stats { #define PPPIOCGCHAN _IOR('t', 55, int) /* get ppp channel number */ #define PPPIOCGL2TPSTATS _IOR('t', 54, struct pppol2tp_ioc_stats) #define PPPIOCBRIDGECHAN _IOW('t', 53, int) /* bridge one channel to another */ -#define PPPIOCUNBRIDGECHAN _IO('t', 54) /* unbridge channel */ +#define PPPIOCUNBRIDGECHAN _IO('t', 52) /* unbridge channel */ #define SIOCGPPPSTATS (SIOCDEVPRIVATE + 0) #define SIOCGPPPVER (SIOCDEVPRIVATE + 1) /* NEVER change this!! 
*/ diff --git a/include/uapi/misc/habanalabs.h b/include/uapi/misc/habanalabs.h index 8c15a7d336a0..dba3827c43ca 100644 --- a/include/uapi/misc/habanalabs.h +++ b/include/uapi/misc/habanalabs.h @@ -279,6 +279,7 @@ enum hl_device_status { * HL_INFO_CLK_THROTTLE_REASON - Retrieve clock throttling reason * HL_INFO_SYNC_MANAGER - Retrieve sync manager info per dcore * HL_INFO_TOTAL_ENERGY - Retrieve total energy consumption + * HL_INFO_PLL_FREQUENCY - Retrieve PLL frequency */ #define HL_INFO_HW_IP_INFO 0 #define HL_INFO_HW_EVENTS 1 @@ -425,6 +426,8 @@ struct hl_info_sync_manager { * @ctx_device_in_reset_drop_cnt: context dropped due to device in reset * @total_max_cs_in_flight_drop_cnt: total dropped due to maximum CS in-flight * @ctx_max_cs_in_flight_drop_cnt: context dropped due to maximum CS in-flight + * @total_validation_drop_cnt: total dropped due to validation error + * @ctx_validation_drop_cnt: context dropped due to validation error */ struct hl_info_cs_counters { __u64 total_out_of_mem_drop_cnt; @@ -437,6 +440,8 @@ struct hl_info_cs_counters { __u64 ctx_device_in_reset_drop_cnt; __u64 total_max_cs_in_flight_drop_cnt; __u64 ctx_max_cs_in_flight_drop_cnt; + __u64 total_validation_drop_cnt; + __u64 ctx_validation_drop_cnt; }; enum gaudi_dcores { diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h index 00c7235ae93e..2c43b0ef1e4d 100644 --- a/include/xen/xenbus.h +++ b/include/xen/xenbus.h @@ -192,7 +192,7 @@ void xs_suspend_cancel(void); struct work_struct; -void xenbus_probe(struct work_struct *); +void xenbus_probe(void); #define XENBUS_IS_ERR_READ(str) ({ \ if (!IS_ERR(str) && strlen(str) == 0) { \ diff --git a/init/main.c b/init/main.c index 6feee7f11eaf..c68d784376ca 100644 --- a/init/main.c +++ b/init/main.c @@ -1480,14 +1480,8 @@ void __init console_on_rootfs(void) struct file *file = filp_open("/dev/console", O_RDWR, 0); if (IS_ERR(file)) { - pr_err("Warning: unable to open an initial console. Fallback to ttynull.\n"); - register_ttynull_console(); - - file = filp_open("/dev/console", O_RDWR, 0); - if (IS_ERR(file)) { - pr_err("Warning: Failed to add ttynull console. No stdin, stdout, and stderr for the init process!\n"); - return; - } + pr_err("Warning: unable to open an initial console.\n"); + return; } init_dup(file); init_dup(file); @@ -1518,6 +1512,7 @@ static noinline void __init kernel_init_freeable(void) init_mm_internals(); + rcu_init_tasks_generic(); do_pre_smp_initcalls(); lockup_detector_init(); diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c index 6edff97ad594..2f0597320b6d 100644 --- a/kernel/bpf/bpf_inode_storage.c +++ b/kernel/bpf/bpf_inode_storage.c @@ -176,14 +176,14 @@ BPF_CALL_4(bpf_inode_storage_get, struct bpf_map *, map, struct inode *, inode, * bpf_local_storage_update expects the owner to have a * valid storage pointer. */ - if (!inode_storage_ptr(inode)) + if (!inode || !inode_storage_ptr(inode)) return (unsigned long)NULL; sdata = inode_storage_lookup(inode, map, true); if (sdata) return (unsigned long)sdata->data; - /* This helper must only called from where the inode is gurranteed + /* This helper must only called from where the inode is guaranteed * to have a refcount and cannot be freed. 
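The kernel/bpf/bpf_inode_storage.c hunk above (and the matching bpf_task_storage.c hunk just after it) adds a NULL check on the owner object before the storage helpers dereference it, since LSM/tracing programs can legitimately be invoked with a NULL inode or task. A self-contained sketch of that guard pattern, using hypothetical stand-in types and a stubbed lookup() rather than the kernel's helpers:

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

struct inode { int dummy; };            /* stand-in owner type */
struct storage { void *data; };

static struct storage *lookup(struct inode *inode)
{
        static struct storage s = { .data = "payload" };

        (void)inode;
        return &s;                      /* stub for inode_storage_lookup() */
}

/* Guard pattern from the hunk: reject a NULL owner before any deref. */
static void *storage_get(struct inode *inode)
{
        if (!inode)
                return NULL;

        struct storage *s = lookup(inode);

        return s ? s->data : NULL;
}

static int storage_delete(struct inode *inode)
{
        if (!inode)
                return -EINVAL;
        return 0;
}

int main(void)
{
        struct inode i;

        printf("%s\n", (char *)storage_get(&i));   /* "payload" */
        printf("%d\n", storage_delete(NULL));      /* -EINVAL */
        return 0;
}

Rejecting the NULL owner up front keeps the failure visible (-EINVAL for delete, a NULL result for get) instead of crashing inside the lookup.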
*/ if (flags & BPF_LOCAL_STORAGE_GET_F_CREATE) { @@ -200,7 +200,10 @@ BPF_CALL_4(bpf_inode_storage_get, struct bpf_map *, map, struct inode *, inode, BPF_CALL_2(bpf_inode_storage_delete, struct bpf_map *, map, struct inode *, inode) { - /* This helper must only called from where the inode is gurranteed + if (!inode) + return -EINVAL; + + /* This helper must only called from where the inode is guaranteed * to have a refcount and cannot be freed. */ return inode_storage_delete(inode, map); diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c index 4ef1959a78f2..e0da0258b732 100644 --- a/kernel/bpf/bpf_task_storage.c +++ b/kernel/bpf/bpf_task_storage.c @@ -218,7 +218,7 @@ BPF_CALL_4(bpf_task_storage_get, struct bpf_map *, map, struct task_struct *, * bpf_local_storage_update expects the owner to have a * valid storage pointer. */ - if (!task_storage_ptr(task)) + if (!task || !task_storage_ptr(task)) return (unsigned long)NULL; sdata = task_storage_lookup(task, map, true); @@ -243,6 +243,9 @@ BPF_CALL_4(bpf_task_storage_get, struct bpf_map *, map, struct task_struct *, BPF_CALL_2(bpf_task_storage_delete, struct bpf_map *, map, struct task_struct *, task) { + if (!task) + return -EINVAL; + /* This helper must only be called from places where the lifetime of the task * is guaranteed. Either by being refcounted or by being protected * by an RCU read-side critical section. diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 8d6bdb4f4d61..84a36ee4a4c2 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -4172,7 +4172,7 @@ static int btf_parse_hdr(struct btf_verifier_env *env) return -ENOTSUPP; } - if (btf_data_size == hdr->hdr_len) { + if (!btf->base_btf && btf_data_size == hdr->hdr_len) { btf_verifier_log(env, "No data"); return -EINVAL; } diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c index 6ec088a96302..96555a8a2c54 100644 --- a/kernel/bpf/cgroup.c +++ b/kernel/bpf/cgroup.c @@ -1391,12 +1391,13 @@ int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level, if (ctx.optlen != 0) { *optlen = ctx.optlen; *kernel_optval = ctx.optval; + /* export and don't free sockopt buf */ + return 0; } } out: - if (ret) - sockopt_free_buf(&ctx); + sockopt_free_buf(&ctx); return ret; } diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c index 7e848200cd26..c1ac7f964bc9 100644 --- a/kernel/bpf/hashtab.c +++ b/kernel/bpf/hashtab.c @@ -152,6 +152,7 @@ static void htab_init_buckets(struct bpf_htab *htab) lockdep_set_class(&htab->buckets[i].lock, &htab->lockdep_key); } + cond_resched(); } } diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index bd8a3183d030..41ca280b1dc1 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -108,7 +108,7 @@ BPF_CALL_2(bpf_map_peek_elem, struct bpf_map *, map, void *, value) } const struct bpf_func_proto bpf_map_peek_elem_proto = { - .func = bpf_map_pop_elem, + .func = bpf_map_peek_elem, .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_CONST_MAP_PTR, diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 4caf06fe4152..e5999d86c76e 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -17,7 +17,6 @@ #include <linux/fs.h> #include <linux/license.h> #include <linux/filter.h> -#include <linux/version.h> #include <linux/kernel.h> #include <linux/idr.h> #include <linux/cred.h> @@ -2713,7 +2712,6 @@ out_unlock: out_put_prog: if (tgt_prog_fd && tgt_prog) bpf_prog_put(tgt_prog); - bpf_prog_put(prog); return err; } @@ -2826,7 +2824,10 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr) 
tp_name = prog->aux->attach_func_name; break; } - return bpf_tracing_prog_attach(prog, 0, 0); + err = bpf_tracing_prog_attach(prog, 0, 0); + if (err >= 0) + return err; + goto out_put_prog; case BPF_PROG_TYPE_RAW_TRACEPOINT: case BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE: if (strncpy_from_user(buf, diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c index e73c07593024..175b7b42bfc4 100644 --- a/kernel/bpf/task_iter.c +++ b/kernel/bpf/task_iter.c @@ -37,7 +37,7 @@ retry: if (!task) { ++*tid; goto retry; - } else if (skip_if_dup_files && task->tgid != task->pid && + } else if (skip_if_dup_files && !thread_group_leader(task) && task->files == task->group_leader->files) { put_task_struct(task); task = NULL; @@ -151,13 +151,14 @@ again: curr_task = info->task; curr_fd = info->fd; } else { - curr_task = task_seq_get_next(ns, &curr_tid, true); - if (!curr_task) { - info->task = NULL; - return NULL; - } - - /* set info->task and info->tid */ + curr_task = task_seq_get_next(ns, &curr_tid, true); + if (!curr_task) { + info->task = NULL; + info->tid = curr_tid; + return NULL; + } + + /* set info->task and info->tid */ info->task = curr_task; if (curr_tid == info->tid) { curr_fd = info->fd; diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 17270b8404f1..e7368c5eacb7 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -2217,6 +2217,8 @@ static bool is_spillable_regtype(enum bpf_reg_type type) case PTR_TO_RDWR_BUF: case PTR_TO_RDWR_BUF_OR_NULL: case PTR_TO_PERCPU_BTF_ID: + case PTR_TO_MEM: + case PTR_TO_MEM_OR_NULL: return true; default: return false; @@ -5311,7 +5313,7 @@ static bool signed_add_overflows(s64 a, s64 b) return res < a; } -static bool signed_add32_overflows(s64 a, s64 b) +static bool signed_add32_overflows(s32 a, s32 b) { /* Do the add in u32, where overflow is well-defined */ s32 res = (s32)((u32)a + (u32)b); @@ -5321,7 +5323,7 @@ static bool signed_add32_overflows(s64 a, s64 b) return res < a; } -static bool signed_sub_overflows(s32 a, s32 b) +static bool signed_sub_overflows(s64 a, s64 b) { /* Do the sub in u64, where overflow is well-defined */ s64 res = (s64)((u64)a - (u64)b); @@ -5333,7 +5335,7 @@ static bool signed_sub_overflows(s32 a, s32 b) static bool signed_sub32_overflows(s32 a, s32 b) { - /* Do the sub in u64, where overflow is well-defined */ + /* Do the sub in u32, where overflow is well-defined */ s32 res = (s32)((u32)a - (u32)b); if (b < 0) diff --git a/kernel/configs/android-recommended.config b/kernel/configs/android-recommended.config index 53d688bdd894..eb0029c9a6a6 100644 --- a/kernel/configs/android-recommended.config +++ b/kernel/configs/android-recommended.config @@ -81,7 +81,6 @@ CONFIG_INPUT_JOYSTICK=y CONFIG_INPUT_MISC=y CONFIG_INPUT_TABLET=y CONFIG_INPUT_UINPUT=y -CONFIG_ION=y CONFIG_JOYSTICK_XPAD=y CONFIG_JOYSTICK_XPAD_FF=y CONFIG_JOYSTICK_XPAD_LEDS=y diff --git a/kernel/fork.c b/kernel/fork.c index 37720a6d04ea..d66cd1014211 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -819,9 +819,8 @@ void __init fork_init(void) init_task.signal->rlim[RLIMIT_SIGPENDING] = init_task.signal->rlim[RLIMIT_NPROC]; - for (i = 0; i < UCOUNT_COUNTS; i++) { + for (i = 0; i < UCOUNT_COUNTS; i++) init_user_ns.ucount_max[i] = max_threads/2; - } #ifdef CONFIG_VMAP_STACK cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "fork:vm_stack_cache", @@ -1654,9 +1653,8 @@ static inline void init_task_pid_links(struct task_struct *task) { enum pid_type type; - for (type = PIDTYPE_PID; type < PIDTYPE_MAX; ++type) { + for (type = PIDTYPE_PID; type < PIDTYPE_MAX; ++type) 
INIT_HLIST_NODE(&task->pid_links[type]); - } } static inline void diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c index ab8567f32501..dec3f73e8db9 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -2859,3 +2859,4 @@ bool irq_check_status_bit(unsigned int irq, unsigned int bitmask) rcu_read_unlock(); return res; } +EXPORT_SYMBOL_GPL(irq_check_status_bit); diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c index 2c0c4d6d0f83..dc0e2d7fbdfd 100644 --- a/kernel/irq/msi.c +++ b/kernel/irq/msi.c @@ -402,7 +402,7 @@ int __msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev, struct msi_domain_ops *ops = info->ops; struct irq_data *irq_data; struct msi_desc *desc; - msi_alloc_info_t arg; + msi_alloc_info_t arg = { }; int i, ret, virq; bool can_reserve; diff --git a/kernel/kthread.c b/kernel/kthread.c index a5eceecd4513..1578973c5740 100644 --- a/kernel/kthread.c +++ b/kernel/kthread.c @@ -294,7 +294,7 @@ static int kthread(void *_create) do_exit(ret); } -/* called from do_fork() to get node information for about to be created task */ +/* called from kernel_clone() to get node information for about to be created task */ int tsk_fork_get_node(struct task_struct *tsk) { #ifdef CONFIG_NUMA @@ -493,11 +493,36 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data), return p; kthread_bind(p, cpu); /* CPU hotplug need to bind once again when unparking the thread. */ - set_bit(KTHREAD_IS_PER_CPU, &to_kthread(p)->flags); to_kthread(p)->cpu = cpu; return p; } +void kthread_set_per_cpu(struct task_struct *k, int cpu) +{ + struct kthread *kthread = to_kthread(k); + if (!kthread) + return; + + WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY)); + + if (cpu < 0) { + clear_bit(KTHREAD_IS_PER_CPU, &kthread->flags); + return; + } + + kthread->cpu = cpu; + set_bit(KTHREAD_IS_PER_CPU, &kthread->flags); +} + +bool kthread_is_per_cpu(struct task_struct *k) +{ + struct kthread *kthread = to_kthread(k); + if (!kthread) + return false; + + return test_bit(KTHREAD_IS_PER_CPU, &kthread->flags); +} + /** * kthread_unpark - unpark a thread created by kthread_create(). * @k: thread created by kthread_create(). diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c index c1418b47f625..bdaf4829098c 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -79,7 +79,7 @@ module_param(lock_stat, int, 0644); DEFINE_PER_CPU(unsigned int, lockdep_recursion); EXPORT_PER_CPU_SYMBOL_GPL(lockdep_recursion); -static inline bool lockdep_enabled(void) +static __always_inline bool lockdep_enabled(void) { if (!debug_locks) return false; @@ -5271,12 +5271,15 @@ static void __lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie cookie /* * Check whether we follow the irq-flags state precisely: */ -static void check_flags(unsigned long flags) +static noinstr void check_flags(unsigned long flags) { #if defined(CONFIG_PROVE_LOCKING) && defined(CONFIG_DEBUG_LOCKDEP) if (!debug_locks) return; + /* Get the warning out.. 
*/ + instrumentation_begin(); + if (irqs_disabled_flags(flags)) { if (DEBUG_LOCKS_WARN_ON(lockdep_hardirqs_enabled())) { printk("possible reason: unannotated irqs-off.\n"); @@ -5304,6 +5307,8 @@ static void check_flags(unsigned long flags) if (!debug_locks) print_irqtrace_events(current); + + instrumentation_end(); #endif } diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index ffdd0dc7ec6d..6639a0cfe0ac 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -1291,11 +1291,16 @@ static size_t info_print_prefix(const struct printk_info *info, bool syslog, * done: * * - Add prefix for each line. + * - Drop truncated lines that no longer fit into the buffer. * - Add the trailing newline that has been removed in vprintk_store(). - * - Drop truncated lines that do not longer fit into the buffer. + * - Add a string terminator. + * + * Since the produced string is always terminated, the maximum possible + * return value is @r->text_buf_size - 1; * * Return: The length of the updated/prepared text, including the added - * prefixes and the newline. The dropped line(s) are not counted. + * prefixes and the newline. The terminator is not counted. The dropped + * line(s) are not counted. */ static size_t record_print_text(struct printk_record *r, bool syslog, bool time) @@ -1338,26 +1343,31 @@ static size_t record_print_text(struct printk_record *r, bool syslog, /* * Truncate the text if there is not enough space to add the - * prefix and a trailing newline. + * prefix and a trailing newline and a terminator. */ - if (len + prefix_len + text_len + 1 > buf_size) { + if (len + prefix_len + text_len + 1 + 1 > buf_size) { /* Drop even the current line if no space. */ - if (len + prefix_len + line_len + 1 > buf_size) + if (len + prefix_len + line_len + 1 + 1 > buf_size) break; - text_len = buf_size - len - prefix_len - 1; + text_len = buf_size - len - prefix_len - 1 - 1; truncated = true; } memmove(text + prefix_len, text, text_len); memcpy(text, prefix, prefix_len); + /* + * Increment the prepared length to include the text and + * prefix that were just moved+copied. Also increment for the + * newline at the end of this line. If this is the last line, + * there is no newline, but it will be added immediately below. + */ len += prefix_len + line_len + 1; - if (text_len == line_len) { /* - * Add the trailing newline removed in - * vprintk_store(). + * This is the last line. Add the trailing newline + * removed in vprintk_store(). */ text[prefix_len + line_len] = '\n'; break; @@ -1382,6 +1392,14 @@ static size_t record_print_text(struct printk_record *r, bool syslog, text_len -= line_len + 1; } + /* + * If a buffer was provided, it will be terminated. Space for the + * string terminator is guaranteed to be available. The terminator is + * not counted in the return value. 
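The record_print_text() change above reserves room for both the trailing newline and a string terminator, hence the repeated "+ 1 + 1" in the size checks. A small standalone model of that accounting (line_fits() is a hypothetical name, not the kernel function):

#include <stdbool.h>
#include <stdio.h>

/*
 * Does one more line fit, leaving room for the trailing '\n'
 * and the string terminator?  Mirrors the "+ 1 + 1" checks.
 */
static bool line_fits(size_t used, size_t prefix_len,
                      size_t line_len, size_t buf_size)
{
        return used + prefix_len + line_len + 1 + 1 <= buf_size;
}

int main(void)
{
        size_t buf_size = 32, prefix_len = 8;

        /* 8 + 22 + '\n' + '\0' = 32: fits exactly */
        printf("%d\n", line_fits(0, prefix_len, 22, buf_size));
        /* 8 + 23 + '\n' + '\0' = 33: must be truncated */
        printf("%d\n", line_fits(0, prefix_len, 23, buf_size));
        return 0;
}

With a 32-byte buffer and an 8-byte prefix, a 22-character line fits exactly; one character more and the line must be truncated, which is exactly the boundary the patch moves by one byte.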
+ */ + if (buf_size > 0) + text[len] = 0; + return len; } @@ -3427,7 +3445,7 @@ bool kmsg_dump_get_buffer(struct kmsg_dumper *dumper, bool syslog, while (prb_read_valid_info(prb, seq, &info, &line_count)) { if (r.info->seq >= dumper->next_seq) break; - l += get_record_print_text_size(&info, line_count, true, time); + l += get_record_print_text_size(&info, line_count, syslog, time); seq = r.info->seq + 1; } @@ -3437,7 +3455,7 @@ bool kmsg_dump_get_buffer(struct kmsg_dumper *dumper, bool syslog, &info, &line_count)) { if (r.info->seq >= dumper->next_seq) break; - l -= get_record_print_text_size(&info, line_count, true, time); + l -= get_record_print_text_size(&info, line_count, syslog, time); seq = r.info->seq + 1; } diff --git a/kernel/printk/printk_ringbuffer.c b/kernel/printk/printk_ringbuffer.c index 6704f06e0417..8a7b7362c0dd 100644 --- a/kernel/printk/printk_ringbuffer.c +++ b/kernel/printk/printk_ringbuffer.c @@ -1718,7 +1718,7 @@ static bool copy_data(struct prb_data_ring *data_ring, /* Caller interested in the line count? */ if (line_count) - *line_count = count_lines(data, data_size); + *line_count = count_lines(data, len); /* Caller interested in the data content? */ if (!buf || !buf_size) diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h index 35bdcfd84d42..36607551f966 100644 --- a/kernel/rcu/tasks.h +++ b/kernel/rcu/tasks.h @@ -241,7 +241,7 @@ static int __noreturn rcu_tasks_kthread(void *arg) } } -/* Spawn RCU-tasks grace-period kthread, e.g., at core_initcall() time. */ +/* Spawn RCU-tasks grace-period kthread. */ static void __init rcu_spawn_tasks_kthread_generic(struct rcu_tasks *rtp) { struct task_struct *t; @@ -564,7 +564,6 @@ static int __init rcu_spawn_tasks_kthread(void) rcu_spawn_tasks_kthread_generic(&rcu_tasks); return 0; } -core_initcall(rcu_spawn_tasks_kthread); #if !defined(CONFIG_TINY_RCU) void show_rcu_tasks_classic_gp_kthread(void) @@ -692,7 +691,6 @@ static int __init rcu_spawn_tasks_rude_kthread(void) rcu_spawn_tasks_kthread_generic(&rcu_tasks_rude); return 0; } -core_initcall(rcu_spawn_tasks_rude_kthread); #if !defined(CONFIG_TINY_RCU) void show_rcu_tasks_rude_gp_kthread(void) @@ -968,6 +966,11 @@ static void rcu_tasks_trace_pregp_step(void) static void rcu_tasks_trace_pertask(struct task_struct *t, struct list_head *hop) { + // During early boot when there is only the one boot CPU, there + // is no idle task for the other CPUs. Just return. 
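Earlier in this diff, kernel/bpf/verifier.c corrects the operand types of signed_add32_overflows() (s64 to s32) and signed_sub_overflows() (s32 to s64); with the wrong width the check silently truncates its inputs or misses 64-bit overflows. The underlying idiom, sketched below as standalone C, does the arithmetic in the unsigned type, where wraparound is well defined, and then compares the result against the operands:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Do the add in u32, where wraparound is well defined, then detect
 * signed overflow from the operand/result ordering.  Declaring the
 * parameters as 64-bit (the old bug) would accept operands that are
 * silently truncated before the test even runs.
 */
static int signed_add32_overflows(int32_t a, int32_t b)
{
        int32_t res = (int32_t)((uint32_t)a + (uint32_t)b);

        if (b < 0)
                return res > a;
        return res < a;
}

int main(void)
{
        assert(signed_add32_overflows(INT32_MAX, 1));
        assert(!signed_add32_overflows(INT32_MAX, -1));
        assert(signed_add32_overflows(INT32_MIN, -1));
        printf("ok\n");
        return 0;
}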
+ if (unlikely(t == NULL)) + return; + WRITE_ONCE(t->trc_reader_special.b.need_qs, false); WRITE_ONCE(t->trc_reader_checked, false); t->trc_ipi_to_cpu = -1; @@ -1193,7 +1196,6 @@ static int __init rcu_spawn_tasks_trace_kthread(void) rcu_spawn_tasks_kthread_generic(&rcu_tasks_trace); return 0; } -core_initcall(rcu_spawn_tasks_trace_kthread); #if !defined(CONFIG_TINY_RCU) void show_rcu_tasks_trace_gp_kthread(void) @@ -1222,6 +1224,21 @@ void show_rcu_tasks_gp_kthreads(void) } #endif /* #ifndef CONFIG_TINY_RCU */ +void __init rcu_init_tasks_generic(void) +{ +#ifdef CONFIG_TASKS_RCU + rcu_spawn_tasks_kthread(); +#endif + +#ifdef CONFIG_TASKS_RUDE_RCU + rcu_spawn_tasks_rude_kthread(); +#endif + +#ifdef CONFIG_TASKS_TRACE_RCU + rcu_spawn_tasks_trace_kthread(); +#endif +} + #else /* #ifdef CONFIG_TASKS_RCU_GENERIC */ static inline void rcu_tasks_bootup_oddness(void) {} void show_rcu_tasks_gp_kthreads(void) {} diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 15d2562118d1..ff74fca39ed2 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1796,13 +1796,28 @@ static inline bool rq_has_pinned_tasks(struct rq *rq) */ static inline bool is_cpu_allowed(struct task_struct *p, int cpu) { + /* When not in the task's cpumask, no point in looking further. */ if (!cpumask_test_cpu(cpu, p->cpus_ptr)) return false; - if (is_per_cpu_kthread(p) || is_migration_disabled(p)) + /* migrate_disabled() must be allowed to finish. */ + if (is_migration_disabled(p)) return cpu_online(cpu); - return cpu_active(cpu); + /* Non kernel threads are not allowed during either online or offline. */ + if (!(p->flags & PF_KTHREAD)) + return cpu_active(cpu); + + /* KTHREAD_IS_PER_CPU is always allowed. */ + if (kthread_is_per_cpu(p)) + return cpu_online(cpu); + + /* Regular kernel threads don't get to stay during offline. */ + if (cpu_rq(cpu)->balance_push) + return false; + + /* But are allowed during online. */ + return cpu_online(cpu); } /* @@ -2327,7 +2342,9 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, if (p->flags & PF_KTHREAD || is_migration_disabled(p)) { /* - * Kernel threads are allowed on online && !active CPUs. + * Kernel threads are allowed on online && !active CPUs, + * however, during cpu-hot-unplug, even these might get pushed + * away if not KTHREAD_IS_PER_CPU. * * Specifically, migration_disabled() tasks must not fail the * cpumask_any_and_distribute() pick below, esp. so on @@ -2371,16 +2388,6 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, __do_set_cpus_allowed(p, new_mask, flags); - if (p->flags & PF_KTHREAD) { - /* - * For kernel threads that do indeed end up on online && - * !active we want to ensure they are strict per-CPU threads. - */ - WARN_ON(cpumask_intersects(new_mask, cpu_online_mask) && - !cpumask_intersects(new_mask, cpu_active_mask) && - p->nr_cpus_allowed != 1); - } - return affine_move_task(rq, p, &rf, dest_cpu, flags); out: @@ -3122,6 +3129,13 @@ bool cpus_share_cache(int this_cpu, int that_cpu) static inline bool ttwu_queue_cond(int cpu, int wake_flags) { /* + * Do not complicate things with the async wake_list while the CPU is + * in hotplug state. + */ + if (!cpu_active(cpu)) + return false; + + /* * If the CPU does not share cache, then queue the task on the * remote rqs wakelist to avoid accessing remote data. */ @@ -7276,8 +7290,14 @@ static void balance_push(struct rq *rq) /* * Both the cpu-hotplug and stop task are in this case and are * required to complete the hotplug process. 
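The rewritten is_cpu_allowed() earlier in this kernel/sched/core.c hunk is a decision ladder over task and CPU state. The following condensed model (plain booleans standing in for the kernel predicates such as kthread_is_per_cpu() and rq->balance_push; not kernel code) makes the ordering explicit:

#include <stdbool.h>
#include <stdio.h>

struct task {
        bool in_cpumask;         /* cpumask_test_cpu(cpu, p->cpus_ptr) */
        bool migration_disabled;
        bool is_kthread;         /* p->flags & PF_KTHREAD */
        bool is_per_cpu_kthread; /* kthread_is_per_cpu(p) */
};

struct cpu {
        bool online, active, balance_push;
};

static bool is_cpu_allowed(const struct task *p, const struct cpu *c)
{
        if (!p->in_cpumask)
                return false;
        if (p->migration_disabled)       /* must be allowed to finish */
                return c->online;
        if (!p->is_kthread)              /* user tasks need an active CPU */
                return c->active;
        if (p->is_per_cpu_kthread)       /* always allowed while online */
                return c->online;
        if (c->balance_push)             /* regular kthreads get pushed away */
                return false;
        return c->online;
}

int main(void)
{
        struct task kworker = { true, false, true, false };
        struct cpu dying = { .online = true, .active = false,
                             .balance_push = true };

        printf("%d\n", is_cpu_allowed(&kworker, &dying)); /* 0 */
        return 0;
}

The example shows an unbound kworker being refused on a deactivating CPU: still online, no longer active, with balance_push set.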
+ * + * XXX: the idle task does not match kthread_is_per_cpu() due to + * histerical raisins. */ - if (is_per_cpu_kthread(push_task) || is_migration_disabled(push_task)) { + if (rq->idle == push_task || + ((push_task->flags & PF_KTHREAD) && kthread_is_per_cpu(push_task)) || + is_migration_disabled(push_task)) { + /* * If this is the idle task on the outgoing CPU try to wake * up the hotplug control thread which might wait for the @@ -7309,7 +7329,7 @@ static void balance_push(struct rq *rq) /* * At this point need_resched() is true and we'll take the loop in * schedule(). The next pick is obviously going to be the stop task - * which is_per_cpu_kthread() and will push this task away. + * which kthread_is_per_cpu() and will push this task away. */ raw_spin_lock(&rq->lock); } @@ -7320,10 +7340,13 @@ static void balance_push_set(int cpu, bool on) struct rq_flags rf; rq_lock_irqsave(rq, &rf); - if (on) + rq->balance_push = on; + if (on) { + WARN_ON_ONCE(rq->balance_callback); rq->balance_callback = &balance_push_callback; - else + } else if (rq->balance_callback == &balance_push_callback) { rq->balance_callback = NULL; + } rq_unlock_irqrestore(rq, &rf); } @@ -7441,6 +7464,10 @@ int sched_cpu_activate(unsigned int cpu) struct rq *rq = cpu_rq(cpu); struct rq_flags rf; + /* + * Make sure that when the hotplug state machine does a roll-back + * we clear balance_push. Ideally that would happen earlier... + */ balance_push_set(cpu, false); #ifdef CONFIG_SCHED_SMT @@ -7483,17 +7510,27 @@ int sched_cpu_deactivate(unsigned int cpu) int ret; set_cpu_active(cpu, false); + + /* + * From this point forward, this CPU will refuse to run any task that + * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively + * push those tasks away until this gets cleared, see + * sched_cpu_dying(). + */ + balance_push_set(cpu, true); + /* - * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU - * users of this state to go away such that all new such users will - * observe it. + * We've cleared cpu_active_mask / set balance_push, wait for all + * preempt-disabled and RCU users of this state to go away such that + * all new such users will observe it. + * + * Specifically, we rely on ttwu to no longer target this CPU, see + * ttwu_queue_cond() and is_cpu_allowed(). * * Do sync before park smpboot threads to take care the rcu boost case. */ synchronize_rcu(); - balance_push_set(cpu, true); - rq_lock_irqsave(rq, &rf); if (rq->rd) { update_rq_clock(rq); @@ -7574,6 +7611,25 @@ static void calc_load_migrate(struct rq *rq) atomic_long_add(delta, &calc_load_tasks); } +static void dump_rq_tasks(struct rq *rq, const char *loglvl) +{ + struct task_struct *g, *p; + int cpu = cpu_of(rq); + + lockdep_assert_held(&rq->lock); + + printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running); + for_each_process_thread(g, p) { + if (task_cpu(p) != cpu) + continue; + + if (!task_on_rq_queued(p)) + continue; + + printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm); + } +} + int sched_cpu_dying(unsigned int cpu) { struct rq *rq = cpu_rq(cpu); @@ -7583,9 +7639,18 @@ int sched_cpu_dying(unsigned int cpu) sched_tick_stop(cpu); rq_lock_irqsave(rq, &rf); - BUG_ON(rq->nr_running != 1 || rq_has_pinned_tasks(rq)); + if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) { + WARN(true, "Dying CPU not properly vacated!"); + dump_rq_tasks(rq, KERN_WARNING); + } rq_unlock_irqrestore(rq, &rf); + /* + * Now that the CPU is offline, make sure we're welcome + * to new tasks once we come back up. 
+ */ + balance_push_set(cpu, false); + calc_load_migrate(rq); update_max_interval(); nohz_balance_exit_idle(rq); diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 12ada79d40f3..bb09988451a0 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -975,6 +975,7 @@ struct rq { unsigned long cpu_capacity_orig; struct callback_head *balance_callback; + unsigned char balance_push; unsigned char nohz_idle_balance; unsigned char idle_balance; diff --git a/kernel/signal.c b/kernel/signal.c index 5736c55aaa1a..5ad8566534e7 100644 --- a/kernel/signal.c +++ b/kernel/signal.c @@ -2550,6 +2550,9 @@ bool get_signal(struct ksignal *ksig) struct signal_struct *signal = current->signal; int signr; + if (unlikely(current->task_works)) + task_work_run(); + /* * For non-generic architectures, check for TIF_NOTIFY_SIGNAL so * that the arch handlers don't all have to do it. If we get here @@ -3701,7 +3704,8 @@ static bool access_pidfd_pidns(struct pid *pid) return true; } -static int copy_siginfo_from_user_any(kernel_siginfo_t *kinfo, siginfo_t *info) +static int copy_siginfo_from_user_any(kernel_siginfo_t *kinfo, + siginfo_t __user *info) { #ifdef CONFIG_COMPAT /* diff --git a/kernel/smpboot.c b/kernel/smpboot.c index 2efe1e206167..f25208e8df83 100644 --- a/kernel/smpboot.c +++ b/kernel/smpboot.c @@ -188,6 +188,7 @@ __smpboot_create_thread(struct smp_hotplug_thread *ht, unsigned int cpu) kfree(td); return PTR_ERR(tsk); } + kthread_set_per_cpu(tsk, cpu); /* * Park the thread so that it could start right on the CPU * when it is available. diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c index 7404d3831527..87389b9e21ab 100644 --- a/kernel/time/ntp.c +++ b/kernel/time/ntp.c @@ -498,7 +498,7 @@ out: static void sync_hw_clock(struct work_struct *work); static DECLARE_WORK(sync_work, sync_hw_clock); static struct hrtimer sync_hrtimer; -#define SYNC_PERIOD_NS (11UL * 60 * NSEC_PER_SEC) +#define SYNC_PERIOD_NS (11ULL * 60 * NSEC_PER_SEC) static enum hrtimer_restart sync_timer_callback(struct hrtimer *timer) { @@ -512,7 +512,7 @@ static void sched_sync_hw_clock(unsigned long offset_nsec, bool retry) ktime_t exp = ktime_set(ktime_get_real_seconds(), 0); if (retry) - exp = ktime_add_ns(exp, 2 * NSEC_PER_SEC - offset_nsec); + exp = ktime_add_ns(exp, 2ULL * NSEC_PER_SEC - offset_nsec); else exp = ktime_add_ns(exp, SYNC_PERIOD_NS - offset_nsec); diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c index a45cedda93a7..6aee5768c86f 100644 --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -991,8 +991,7 @@ EXPORT_SYMBOL_GPL(ktime_get_seconds); /** * ktime_get_real_seconds - Get the seconds portion of CLOCK_REALTIME * - * Returns the wall clock seconds since 1970. This replaces the - * get_seconds() interface which is not y2038 safe on 32bit systems. + * Returns the wall clock seconds since 1970. * * For 64bit systems the fast access to tk->xtime_sec is preserved. 
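The kernel/time/ntp.c hunk above widens two constants from UL to ULL because, on a 32-bit kernel, unsigned long is 32 bits and 11UL * 60 * NSEC_PER_SEC wraps: 660 seconds of nanoseconds needs about 40 bits. A standalone demonstration of the wrap, with uint32_t standing in for a 32-bit unsigned long:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000

int main(void)
{
        /* uint32_t models a 32-bit kernel's unsigned long. */
        uint32_t wrapped = (uint32_t)(11ULL * 60 * NSEC_PER_SEC);
        uint64_t correct = 11ULL * 60 * NSEC_PER_SEC;

        printf("32-bit UL : %" PRIu32 " ns (~%" PRIu32 " s)\n",
               wrapped, wrapped / NSEC_PER_SEC);
        printf("64-bit ULL: %" PRIu64 " ns (%" PRIu64 " s)\n",
               correct, correct / NSEC_PER_SEC);
        return 0;
}

Instead of an 11-minute resync period, the truncated constant yields roughly 2.87 seconds, which is why the hrtimer fired far too often before this fix.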
On * 32bit systems the access must be protected with the sequence diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index d5a19413d4f8..c1a62ae7e812 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -538,7 +538,7 @@ config KPROBE_EVENTS config KPROBE_EVENTS_ON_NOTRACE bool "Do NOT protect notrace function from kprobe events" depends on KPROBE_EVENTS - depends on KPROBES_ON_FTRACE + depends on DYNAMIC_FTRACE default n help This is only for the developers who want to debug ftrace itself diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c index 9c31f42245e9..e6fba1798771 100644 --- a/kernel/trace/trace_kprobe.c +++ b/kernel/trace/trace_kprobe.c @@ -434,7 +434,7 @@ static int disable_trace_kprobe(struct trace_event_call *call, return 0; } -#if defined(CONFIG_KPROBES_ON_FTRACE) && \ +#if defined(CONFIG_DYNAMIC_FTRACE) && \ !defined(CONFIG_KPROBE_EVENTS_ON_NOTRACE) static bool __within_notrace_func(unsigned long addr) { diff --git a/kernel/workqueue.c b/kernel/workqueue.c index 9880b6c0e272..894bb885b40b 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -1849,18 +1849,17 @@ static void worker_attach_to_pool(struct worker *worker, mutex_lock(&wq_pool_attach_mutex); /* - * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any - * online CPUs. It'll be re-applied when any of the CPUs come up. - */ - set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask); - - /* * The wq_pool_attach_mutex ensures %POOL_DISASSOCIATED remains * stable across this function. See the comments above the flag * definition for details. */ if (pool->flags & POOL_DISASSOCIATED) worker->flags |= WORKER_UNBOUND; + else + kthread_set_per_cpu(worker->task, pool->cpu); + + if (worker->rescue_wq) + set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask); list_add_tail(&worker->node, &pool->workers); worker->pool = pool; @@ -1883,6 +1882,7 @@ static void worker_detach_from_pool(struct worker *worker) mutex_lock(&wq_pool_attach_mutex); + kthread_set_per_cpu(worker->task, -1); list_del(&worker->node); worker->pool = NULL; @@ -4919,8 +4919,10 @@ static void unbind_workers(int cpu) raw_spin_unlock_irq(&pool->lock); - for_each_pool_worker(worker, pool) - WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0); + for_each_pool_worker(worker, pool) { + kthread_set_per_cpu(worker->task, -1); + WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0); + } mutex_unlock(&wq_pool_attach_mutex); @@ -4972,9 +4974,11 @@ static void rebind_workers(struct worker_pool *pool) * of all workers first and then clear UNBOUND. As we're called * from CPU_ONLINE, the following shouldn't fail. */ - for_each_pool_worker(worker, pool) + for_each_pool_worker(worker, pool) { + kthread_set_per_cpu(worker->task, pool->cpu); WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask) < 0); + } raw_spin_lock_irq(&pool->lock); diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index e6e58b26e888..7937265ef879 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -295,14 +295,6 @@ config GDB_SCRIPTS endif # DEBUG_INFO -config ENABLE_MUST_CHECK - bool "Enable __must_check logic" - default y - help - Enable the __must_check logic in the kernel build. Disable this to - suppress the "warning: ignoring return value of 'foo', declared with - attribute warn_unused_result" messages. 
- config FRAME_WARN int "Warn for stack frames larger than" range 0 8192 diff --git a/lib/Kconfig.ubsan b/lib/Kconfig.ubsan index 8b635fd75fe4..3a0b1c930733 100644 --- a/lib/Kconfig.ubsan +++ b/lib/Kconfig.ubsan @@ -123,6 +123,7 @@ config UBSAN_SIGNED_OVERFLOW config UBSAN_UNSIGNED_OVERFLOW bool "Perform checking for unsigned arithmetic overflow" depends on $(cc-option,-fsanitize=unsigned-integer-overflow) + depends on !X86_32 # avoid excessive stack usage on x86-32/clang help This option enables -fsanitize=unsigned-integer-overflow which checks for overflow of any arithmetic operations with unsigned integers. This diff --git a/lib/fonts/font_ter16x32.c b/lib/fonts/font_ter16x32.c index 1955d624177c..5baedc573dd6 100644 --- a/lib/fonts/font_ter16x32.c +++ b/lib/fonts/font_ter16x32.c @@ -774,8 +774,8 @@ static const struct font_data fontdata_ter16x32 = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x7f, 0xfc, 0x7f, 0xfc, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 95 */ - 0x00, 0x00, 0x1c, 0x00, 0x0e, 0x00, 0x07, 0x00, - 0x03, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x1c, 0x00, 0x0e, 0x00, + 0x07, 0x00, 0x03, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, @@ -1169,7 +1169,7 @@ static const struct font_data fontdata_ter16x32 = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x7f, 0xf8, 0x7f, 0xfc, 0x03, 0x9e, 0x03, 0x8e, + 0x7e, 0xf8, 0x7f, 0xfc, 0x03, 0x9e, 0x03, 0x8e, 0x03, 0x8e, 0x3f, 0x8e, 0x7f, 0xfe, 0xf3, 0xfe, 0xe3, 0x80, 0xe3, 0x80, 0xe3, 0x80, 0xf3, 0xce, 0x7f, 0xfe, 0x3e, 0xfc, 0x00, 0x00, 0x00, 0x00, diff --git a/lib/iov_iter.c b/lib/iov_iter.c index 1635111c5bd2..a21e6a5792c5 100644 --- a/lib/iov_iter.c +++ b/lib/iov_iter.c @@ -1658,7 +1658,7 @@ static int copy_compat_iovec_from_user(struct iovec *iov, (const struct compat_iovec __user *)uvec; int ret = -EFAULT, i; - if (!user_access_begin(uvec, nr_segs * sizeof(*uvec))) + if (!user_access_begin(uiov, nr_segs * sizeof(*uiov))) return -EFAULT; for (i = 0; i < nr_segs; i++) { diff --git a/lib/raid6/Makefile b/lib/raid6/Makefile index b4c0df6d706d..c770570bfe4f 100644 --- a/lib/raid6/Makefile +++ b/lib/raid6/Makefile @@ -48,7 +48,7 @@ endif endif quiet_cmd_unroll = UNROLL $@ - cmd_unroll = $(AWK) -f$(srctree)/$(src)/unroll.awk -vN=$* < $< > $@ + cmd_unroll = $(AWK) -v N=$* -f $(srctree)/$(src)/unroll.awk < $< > $@ targets += int1.c int2.c int4.c int8.c int16.c int32.c $(obj)/int%.c: $(src)/int.uc $(src)/unroll.awk FORCE diff --git a/mm/highmem.c b/mm/highmem.c index c3a9ea7875ef..874b732b120c 100644 --- a/mm/highmem.c +++ b/mm/highmem.c @@ -473,6 +473,11 @@ static inline void *arch_kmap_local_high_get(struct page *page) } #endif +#ifndef arch_kmap_local_set_pte +#define arch_kmap_local_set_pte(mm, vaddr, ptep, ptev) \ + set_pte_at(mm, vaddr, ptep, ptev) +#endif + /* Unmap a local mapping which was obtained by kmap_high_get() */ static inline bool kmap_high_unmap_local(unsigned long vaddr) { @@ -515,7 +520,7 @@ void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot) vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); BUG_ON(!pte_none(*(kmap_pte - idx))); pteval = pfn_pte(pfn, prot); - set_pte_at(&init_mm, vaddr, kmap_pte - idx, pteval); + arch_kmap_local_set_pte(&init_mm, vaddr, kmap_pte - idx, pteval); arch_kmap_local_post_map(vaddr, pteval); 
current->kmap_ctrl.pteval[kmap_local_idx()] = pteval; preempt_enable(); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index a2602969873d..18f6ee317900 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -4371,7 +4371,7 @@ retry: * So we need to block hugepage fault by PG_hwpoison bit check. */ if (unlikely(PageHWPoison(page))) { - ret = VM_FAULT_HWPOISON | + ret = VM_FAULT_HWPOISON_LARGE | VM_FAULT_SET_HINDEX(hstate_index(h)); goto backout_unlocked; } diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c index 55bd6f09c70f..e529428e7a11 100644 --- a/mm/kasan/hw_tags.c +++ b/mm/kasan/hw_tags.c @@ -19,11 +19,10 @@ #include "kasan.h" -enum kasan_arg_mode { - KASAN_ARG_MODE_DEFAULT, - KASAN_ARG_MODE_OFF, - KASAN_ARG_MODE_PROD, - KASAN_ARG_MODE_FULL, +enum kasan_arg { + KASAN_ARG_DEFAULT, + KASAN_ARG_OFF, + KASAN_ARG_ON, }; enum kasan_arg_stacktrace { @@ -38,7 +37,7 @@ enum kasan_arg_fault { KASAN_ARG_FAULT_PANIC, }; -static enum kasan_arg_mode kasan_arg_mode __ro_after_init; +static enum kasan_arg kasan_arg __ro_after_init; static enum kasan_arg_stacktrace kasan_arg_stacktrace __ro_after_init; static enum kasan_arg_fault kasan_arg_fault __ro_after_init; @@ -52,26 +51,24 @@ DEFINE_STATIC_KEY_FALSE(kasan_flag_stacktrace); /* Whether panic or disable tag checking on fault. */ bool kasan_flag_panic __ro_after_init; -/* kasan.mode=off/prod/full */ -static int __init early_kasan_mode(char *arg) +/* kasan=off/on */ +static int __init early_kasan_flag(char *arg) { if (!arg) return -EINVAL; if (!strcmp(arg, "off")) - kasan_arg_mode = KASAN_ARG_MODE_OFF; - else if (!strcmp(arg, "prod")) - kasan_arg_mode = KASAN_ARG_MODE_PROD; - else if (!strcmp(arg, "full")) - kasan_arg_mode = KASAN_ARG_MODE_FULL; + kasan_arg = KASAN_ARG_OFF; + else if (!strcmp(arg, "on")) + kasan_arg = KASAN_ARG_ON; else return -EINVAL; return 0; } -early_param("kasan.mode", early_kasan_mode); +early_param("kasan", early_kasan_flag); -/* kasan.stack=off/on */ +/* kasan.stacktrace=off/on */ static int __init early_kasan_flag_stacktrace(char *arg) { if (!arg) @@ -113,8 +110,8 @@ void kasan_init_hw_tags_cpu(void) * as this function is only called for MTE-capable hardware. */ - /* If KASAN is disabled, do nothing. */ - if (kasan_arg_mode == KASAN_ARG_MODE_OFF) + /* If KASAN is disabled via command line, don't initialize it. */ + if (kasan_arg == KASAN_ARG_OFF) return; hw_init_tags(KASAN_TAG_MAX); @@ -124,43 +121,28 @@ void kasan_init_hw_tags_cpu(void) /* kasan_init_hw_tags() is called once on boot CPU. */ void __init kasan_init_hw_tags(void) { - /* If hardware doesn't support MTE, do nothing. */ + /* If hardware doesn't support MTE, don't initialize KASAN. */ if (!system_supports_mte()) return; - /* Choose KASAN mode if kasan boot parameter is not provided. */ - if (kasan_arg_mode == KASAN_ARG_MODE_DEFAULT) { - if (IS_ENABLED(CONFIG_DEBUG_KERNEL)) - kasan_arg_mode = KASAN_ARG_MODE_FULL; - else - kasan_arg_mode = KASAN_ARG_MODE_PROD; - } - - /* Preset parameter values based on the mode. */ - switch (kasan_arg_mode) { - case KASAN_ARG_MODE_DEFAULT: - /* Shouldn't happen as per the check above. */ - WARN_ON(1); - return; - case KASAN_ARG_MODE_OFF: - /* If KASAN is disabled, do nothing. */ + /* If KASAN is disabled via command line, don't initialize it. */ + if (kasan_arg == KASAN_ARG_OFF) return; - case KASAN_ARG_MODE_PROD: - static_branch_enable(&kasan_flag_enabled); - break; - case KASAN_ARG_MODE_FULL: - static_branch_enable(&kasan_flag_enabled); - static_branch_enable(&kasan_flag_stacktrace); - break; - } - /* Now, optionally override the presets. 
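The mm/kasan/hw_tags.c rework above replaces the three-way kasan.mode=off/prod/full parameter with a simple kasan=off/on switch parsed at early boot. A userspace sketch of the parsing logic (early_kasan_flag() here is a standalone model, not the kernel's early_param handler):

#include <errno.h>
#include <stdio.h>
#include <string.h>

enum kasan_arg { KASAN_ARG_DEFAULT, KASAN_ARG_OFF, KASAN_ARG_ON };

/*
 * Model of the new early_param("kasan", ...) handler: anything
 * other than "off"/"on" is rejected, so a typo cannot silently
 * change the KASAN configuration.
 */
static int early_kasan_flag(const char *arg, enum kasan_arg *out)
{
        if (!arg)
                return -EINVAL;
        if (!strcmp(arg, "off"))
                *out = KASAN_ARG_OFF;
        else if (!strcmp(arg, "on"))
                *out = KASAN_ARG_ON;
        else
                return -EINVAL;
        return 0;
}

int main(void)
{
        enum kasan_arg a = KASAN_ARG_DEFAULT;
        int ret = early_kasan_flag("on", &a);

        printf("%d %d\n", ret, a);                    /* 0 2 */
        printf("%d\n", early_kasan_flag("prod", &a)); /* rejected: "prod" is gone */
        return 0;
}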
*/ + /* Enable KASAN. */ + static_branch_enable(&kasan_flag_enabled); switch (kasan_arg_stacktrace) { case KASAN_ARG_STACKTRACE_DEFAULT: + /* + * Default to enabling stack trace collection for + * debug kernels. + */ + if (IS_ENABLED(CONFIG_DEBUG_KERNEL)) + static_branch_enable(&kasan_flag_stacktrace); break; case KASAN_ARG_STACKTRACE_OFF: - static_branch_disable(&kasan_flag_stacktrace); + /* Do nothing, kasan_flag_stacktrace keeps its default value. */ break; case KASAN_ARG_STACKTRACE_ON: static_branch_enable(&kasan_flag_stacktrace); @@ -169,11 +151,16 @@ void __init kasan_init_hw_tags(void) switch (kasan_arg_fault) { case KASAN_ARG_FAULT_DEFAULT: + /* + * Default to no panic on report. + * Do nothing, kasan_flag_panic keeps its default value. + */ break; case KASAN_ARG_FAULT_REPORT: - kasan_flag_panic = false; + /* Do nothing, kasan_flag_panic keeps its default value. */ break; case KASAN_ARG_FAULT_PANIC: + /* Enable panic on report. */ kasan_flag_panic = true; break; } diff --git a/mm/kasan/init.c b/mm/kasan/init.c index bc0ad208b3a7..c4605ac9837b 100644 --- a/mm/kasan/init.c +++ b/mm/kasan/init.c @@ -64,7 +64,8 @@ static inline bool kasan_pmd_table(pud_t pud) return false; } #endif -pte_t kasan_early_shadow_pte[PTRS_PER_PTE] __page_aligned_bss; +pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS] + __page_aligned_bss; static inline bool kasan_pte_table(pmd_t pmd) { @@ -372,9 +373,10 @@ static void kasan_remove_pmd_table(pmd_t *pmd, unsigned long addr, if (kasan_pte_table(*pmd)) { if (IS_ALIGNED(addr, PMD_SIZE) && - IS_ALIGNED(next, PMD_SIZE)) + IS_ALIGNED(next, PMD_SIZE)) { pmd_clear(pmd); - continue; + continue; + } } pte = pte_offset_kernel(pmd, addr); kasan_remove_pte_table(pte, addr, next); @@ -397,9 +399,10 @@ static void kasan_remove_pud_table(pud_t *pud, unsigned long addr, if (kasan_pmd_table(*pud)) { if (IS_ALIGNED(addr, PUD_SIZE) && - IS_ALIGNED(next, PUD_SIZE)) + IS_ALIGNED(next, PUD_SIZE)) { pud_clear(pud); - continue; + continue; + } } pmd = pmd_offset(pud, addr); pmd_base = pmd_offset(pud, 0); @@ -423,9 +426,10 @@ static void kasan_remove_p4d_table(p4d_t *p4d, unsigned long addr, if (kasan_pud_table(*p4d)) { if (IS_ALIGNED(addr, P4D_SIZE) && - IS_ALIGNED(next, P4D_SIZE)) + IS_ALIGNED(next, P4D_SIZE)) { p4d_clear(p4d); - continue; + continue; + } } pud = pud_offset(p4d, addr); kasan_remove_pud_table(pud, addr, next); @@ -456,9 +460,10 @@ void kasan_remove_zero_shadow(void *start, unsigned long size) if (kasan_p4d_table(*pgd)) { if (IS_ALIGNED(addr, PGDIR_SIZE) && - IS_ALIGNED(next, PGDIR_SIZE)) + IS_ALIGNED(next, PGDIR_SIZE)) { pgd_clear(pgd); - continue; + continue; + } } p4d = p4d_offset(pgd, addr); @@ -481,7 +486,6 @@ int kasan_add_zero_shadow(void *start, unsigned long size) ret = kasan_populate_early_shadow(shadow_start, shadow_end); if (ret) - kasan_remove_zero_shadow(shadow_start, - size >> KASAN_SHADOW_SCALE_SHIFT); + kasan_remove_zero_shadow(start, size); return ret; } diff --git a/mm/memblock.c b/mm/memblock.c index d24bcfa88d2f..1eaaec1e7687 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -1427,7 +1427,7 @@ phys_addr_t __init memblock_phys_alloc_range(phys_addr_t size, } /** - * memblock_phys_alloc_try_nid - allocate a memory block from specified MUMA node + * memblock_phys_alloc_try_nid - allocate a memory block from specified NUMA node * @size: size of memory block to be allocated in bytes * @align: alignment of the region and block's size * @nid: nid of the free area to find, %NUMA_NO_NODE for any node diff --git a/mm/memcontrol.c 
b/mm/memcontrol.c index 605f671203ef..e2de77b5bcc2 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -3115,9 +3115,7 @@ void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages) if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) page_counter_uncharge(&memcg->kmem, nr_pages); - page_counter_uncharge(&memcg->memory, nr_pages); - if (do_memsw_account()) - page_counter_uncharge(&memcg->memsw, nr_pages); + refill_stock(memcg, nr_pages); } /** diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 5a38e9eade94..e9481632fcd1 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -1885,6 +1885,12 @@ static int soft_offline_free_page(struct page *page) return rc; } +static void put_ref_page(struct page *page) +{ + if (page) + put_page(page); +} + /** * soft_offline_page - Soft offline a page. * @pfn: pfn to soft-offline @@ -1910,20 +1916,26 @@ static int soft_offline_free_page(struct page *page) int soft_offline_page(unsigned long pfn, int flags) { int ret; - struct page *page; bool try_again = true; + struct page *page, *ref_page = NULL; + + WARN_ON_ONCE(!pfn_valid(pfn) && (flags & MF_COUNT_INCREASED)); if (!pfn_valid(pfn)) return -ENXIO; + if (flags & MF_COUNT_INCREASED) + ref_page = pfn_to_page(pfn); + /* Only online pages can be soft-offlined (esp., not ZONE_DEVICE). */ page = pfn_to_online_page(pfn); - if (!page) + if (!page) { + put_ref_page(ref_page); return -EIO; + } if (PageHWPoison(page)) { pr_info("%s: %#lx page already poisoned\n", __func__, pfn); - if (flags & MF_COUNT_INCREASED) - put_page(page); + put_ref_page(ref_page); return 0; } @@ -1940,7 +1952,7 @@ retry: goto retry; } } else if (ret == -EIO) { - pr_info("%s: %#lx: unknown page type: %lx (%pGP)\n", + pr_info("%s: %#lx: unknown page type: %lx (%pGp)\n", __func__, pfn, page->flags, &page->flags); } diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 8cf96bd21341..2c3a86502053 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -1111,7 +1111,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, const nodemask_t *to, int flags) { int busy = 0; - int err; + int err = 0; nodemask_t tmp; migrate_prep(); diff --git a/mm/migrate.c b/mm/migrate.c index ee5e612b4cd8..c0efe921bca5 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -402,6 +402,7 @@ int migrate_page_move_mapping(struct address_space *mapping, struct zone *oldzone, *newzone; int dirty; int expected_count = expected_page_refs(mapping, page) + extra_count; + int nr = thp_nr_pages(page); if (!mapping) { /* Anonymous page without mapping */ @@ -437,7 +438,7 @@ int migrate_page_move_mapping(struct address_space *mapping, */ newpage->index = page->index; newpage->mapping = page->mapping; - page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */ + page_ref_add(newpage, nr); /* add cache reference */ if (PageSwapBacked(page)) { __SetPageSwapBacked(newpage); if (PageSwapCache(page)) { @@ -459,7 +460,7 @@ int migrate_page_move_mapping(struct address_space *mapping, if (PageTransHuge(page)) { int i; - for (i = 1; i < HPAGE_PMD_NR; i++) { + for (i = 1; i < nr; i++) { xas_next(&xas); xas_store(&xas, newpage); } @@ -470,7 +471,7 @@ int migrate_page_move_mapping(struct address_space *mapping, * to one less reference. * We know this isn't the last reference. 
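In the mm/page-writeback.c hunk a little further on, wait_on_page_writeback() changes from if to while: a waiter can be woken when one writeback cycle completes and still find a new cycle already started, so the condition must be rechecked before returning. A generic pthread analogue of that recheck-after-wakeup pattern (the kernel waits on PG_writeback via wait_on_page_bit(), not on a condition variable):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool writeback_in_progress;

/*
 * A wakeup only means the condition was false at some point: a new
 * writeback cycle may already have started, so re-test in a loop,
 * which is exactly what the switch from if to while buys.
 */
static void wait_for_writeback(void)
{
        pthread_mutex_lock(&lock);
        while (writeback_in_progress)
                pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
}

int main(void)
{
        wait_for_writeback(); /* condition already false: returns at once */
        return 0;
}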
*/ - page_ref_unfreeze(page, expected_count - thp_nr_pages(page)); + page_ref_unfreeze(page, expected_count - nr); xas_unlock(&xas); /* Leave irq disabled to prevent preemption while updating stats */ @@ -493,17 +494,17 @@ int migrate_page_move_mapping(struct address_space *mapping, old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat); new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat); - __dec_lruvec_state(old_lruvec, NR_FILE_PAGES); - __inc_lruvec_state(new_lruvec, NR_FILE_PAGES); + __mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr); + __mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr); if (PageSwapBacked(page) && !PageSwapCache(page)) { - __dec_lruvec_state(old_lruvec, NR_SHMEM); - __inc_lruvec_state(new_lruvec, NR_SHMEM); + __mod_lruvec_state(old_lruvec, NR_SHMEM, -nr); + __mod_lruvec_state(new_lruvec, NR_SHMEM, nr); } if (dirty && mapping_can_writeback(mapping)) { - __dec_node_state(oldzone->zone_pgdat, NR_FILE_DIRTY); - __dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING); - __inc_node_state(newzone->zone_pgdat, NR_FILE_DIRTY); - __inc_zone_state(newzone, NR_ZONE_WRITE_PENDING); + __mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr); + __mod_zone_page_state(oldzone, NR_ZONE_WRITE_PENDING, -nr); + __mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr); + __mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr); } } local_irq_enable(); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 586042472ac9..eb34d204d4ee 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2826,7 +2826,7 @@ EXPORT_SYMBOL(__test_set_page_writeback); */ void wait_on_page_writeback(struct page *page) { - if (PageWriteback(page)) { + while (PageWriteback(page)) { trace_wait_on_page_writeback(page, page_mapping(page)); wait_on_page_bit(page, PG_writeback); } diff --git a/mm/page_alloc.c b/mm/page_alloc.c index bdbec4c98173..783913e41f65 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1207,8 +1207,10 @@ static void kernel_init_free_pages(struct page *page, int numpages) /* s390's use of memset() could override KASAN redzones. */ kasan_disable_current(); for (i = 0; i < numpages; i++) { + u8 tag = page_kasan_tag(page + i); page_kasan_tag_reset(page + i); clear_highpage(page + i); + page_kasan_tag_set(page + i, tag); } kasan_enable_current(); } @@ -2862,20 +2864,20 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype, { struct page *page; -#ifdef CONFIG_CMA - /* - * Balance movable allocations between regular and CMA areas by - * allocating from CMA when over half of the zone's free memory - * is in the CMA area. - */ - if (alloc_flags & ALLOC_CMA && - zone_page_state(zone, NR_FREE_CMA_PAGES) > - zone_page_state(zone, NR_FREE_PAGES) / 2) { - page = __rmqueue_cma_fallback(zone, order); - if (page) - return page; + if (IS_ENABLED(CONFIG_CMA)) { + /* + * Balance movable allocations between regular and CMA areas by + * allocating from CMA when over half of the zone's free memory + * is in the CMA area. 
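The __rmqueue() change spanning this point converts the CMA fallback from #ifdef CONFIG_CMA to IS_ENABLED(CONFIG_CMA), so the block is always compile-tested, while keeping the balancing rule stated in the comment: allocate movable pages from CMA once more than half of the zone's free memory sits in the CMA area. That rule as a standalone predicate (hypothetical names):

#include <stdbool.h>
#include <stdio.h>

/*
 * Prefer CMA when it holds more than half of the zone's free pages,
 * mirroring the NR_FREE_CMA_PAGES > NR_FREE_PAGES / 2 comparison.
 */
static bool should_use_cma(unsigned long free_cma, unsigned long free_total)
{
        return free_cma > free_total / 2;
}

int main(void)
{
        printf("%d\n", should_use_cma(600, 1000)); /* 1: drain CMA first */
        printf("%d\n", should_use_cma(400, 1000)); /* 0: keep CMA in reserve */
        return 0;
}

Draining CMA only once it dominates the free memory keeps the regular area from exhausting first while movable allocations could still be served from CMA.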
+ */ + if (alloc_flags & ALLOC_CMA && + zone_page_state(zone, NR_FREE_CMA_PAGES) > + zone_page_state(zone, NR_FREE_PAGES) / 2) { + page = __rmqueue_cma_fallback(zone, order); + if (page) + goto out; + } } -#endif retry: page = __rmqueue_smallest(zone, order, migratetype); if (unlikely(!page)) { @@ -2886,8 +2888,9 @@ retry: alloc_flags)) goto retry; } - - trace_mm_page_alloc_zone_locked(page, order, migratetype); +out: + if (page) + trace_mm_page_alloc_zone_locked(page, order, migratetype); return page; } @@ -7077,23 +7080,26 @@ void __init free_area_init_memoryless_node(int nid) * Initialize all valid struct pages in the range [spfn, epfn) and mark them * PageReserved(). Return the number of struct pages that were initialized. */ -static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn) +static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn, + int zone, int nid) { - unsigned long pfn; + unsigned long pfn, zone_spfn, zone_epfn; u64 pgcnt = 0; + zone_spfn = arch_zone_lowest_possible_pfn[zone]; + zone_epfn = arch_zone_highest_possible_pfn[zone]; + + spfn = clamp(spfn, zone_spfn, zone_epfn); + epfn = clamp(epfn, zone_spfn, zone_epfn); + for (pfn = spfn; pfn < epfn; pfn++) { if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) { pfn = ALIGN_DOWN(pfn, pageblock_nr_pages) + pageblock_nr_pages - 1; continue; } - /* - * Use a fake node/zone (0) for now. Some of these pages - * (in memblock.reserved but not in memblock.memory) will - * get re-initialized via reserve_bootmem_region() later. - */ - __init_single_page(pfn_to_page(pfn), pfn, 0, 0); + + __init_single_page(pfn_to_page(pfn), pfn, zone, nid); __SetPageReserved(pfn_to_page(pfn)); pgcnt++; } @@ -7102,51 +7108,64 @@ static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn) } /* - * Only struct pages that are backed by physical memory are zeroed and - * initialized by going through __init_single_page(). But, there are some - * struct pages which are reserved in memblock allocator and their fields - * may be accessed (for example page_to_pfn() on some configuration accesses - * flags). We must explicitly initialize those struct pages. + * Only struct pages that correspond to ranges defined by memblock.memory + * are zeroed and initialized by going through __init_single_page() during + * memmap_init(). + * + * But, there could be struct pages that correspond to holes in + * memblock.memory. This can happen because of the following reasons: + * - physical memory bank size is not necessarily the exact multiple of the + * arbitrary section size + * - early reserved memory may not be listed in memblock.memory + * - memory layouts defined with memmap= kernel parameter may not align + * nicely with memmap sections * - * This function also addresses a similar issue where struct pages are left - * uninitialized because the physical address range is not covered by - * memblock.memory or memblock.reserved. That could happen when memblock - * layout is manually configured via memmap=, or when the highest physical - * address (max_pfn) does not end on a section boundary.
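The __rmqueue() hunk above trades `#ifdef CONFIG_CMA` for `if (IS_ENABLED(CONFIG_CMA))`, so the CMA branch is parsed and type-checked in every configuration and then dead-code-eliminated when the option is off. A simplified stand-alone sketch of the preprocessor trick behind IS_ENABLED(), modeled loosely on include/linux/kconfig.h:

    #include <stdio.h>

    /* Config options are defined to 1 (or not defined at all); a two-step
     * expansion turns "defined to 1" into a constant 1/0 usable in plain C. */
    #define __ARG_PLACEHOLDER_1 0,
    #define __take_second_arg(__ignored, val, ...) val
    #define __is_defined(x) ___is_defined(x)
    #define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
    #define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
    #define IS_ENABLED(option) __is_defined(option)

    #define CONFIG_CMA 1    /* comment this out to exercise the 0 path */

    int main(void)
    {
        if (IS_ENABLED(CONFIG_CMA))   /* an ordinary "if", not #ifdef */
            printf("CMA path compiled in and taken\n");
        else
            printf("CMA path still compiled, then eliminated\n");
        return 0;
    }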
+ * Explicitly initialize those struct pages so that: + * - PG_reserved is set + * - zone link is set according to the architecture constraints + * - node is set to node id of the next populated region except for the + * trailing hole where last node id is used */ -static void __init init_unavailable_mem(void) +static void __init init_zone_unavailable_mem(int zone) { - phys_addr_t start, end; - u64 i, pgcnt; - phys_addr_t next = 0; + unsigned long start, end; + int i, nid; + u64 pgcnt; + unsigned long next = 0; /* - * Loop through unavailable ranges not covered by memblock.memory. + * Loop through holes in memblock.memory and initialize struct + * pages corresponding to these holes */ pgcnt = 0; - for_each_mem_range(i, &start, &end) { + for_each_mem_pfn_range(i, MAX_NUMNODES, &start, &end, &nid) { if (next < start) - pgcnt += init_unavailable_range(PFN_DOWN(next), - PFN_UP(start)); + pgcnt += init_unavailable_range(next, start, zone, nid); next = end; } /* - * Early sections always have a fully populated memmap for the whole - * section - see pfn_valid(). If the last section has holes at the - * end and that section is marked "online", the memmap will be - * considered initialized. Make sure that memmap has a well defined - * state. + * Last section may surpass the actual end of memory (e.g. we can + * have 1Gb section and 512Mb of RAM populated). + * Make sure that memmap has a well defined state in this case. */ - pgcnt += init_unavailable_range(PFN_DOWN(next), - round_up(max_pfn, PAGES_PER_SECTION)); + end = round_up(max_pfn, PAGES_PER_SECTION); + pgcnt += init_unavailable_range(next, end, zone, nid); /* * Struct pages that do not have backing memory. This could be because * firmware is using some of this memory, or for some other reasons. */ if (pgcnt) - pr_info("Zeroed struct page in unavailable ranges: %lld pages", pgcnt); + pr_info("Zone %s: zeroed struct page in unavailable ranges: %lld pages", zone_names[zone], pgcnt); +} + +static void __init init_unavailable_mem(void) +{ + int zone; + + for (zone = 0; zone < ZONE_MOVABLE; zone++) + init_zone_unavailable_mem(zone); } #else static inline void __init init_unavailable_mem(void) diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c index 4bcc11958089..f5fee9cf90f8 100644 --- a/mm/process_vm_access.c +++ b/mm/process_vm_access.c @@ -9,6 +9,7 @@ #include <linux/mm.h> #include <linux/uio.h> #include <linux/sched.h> +#include <linux/compat.h> #include <linux/sched/mm.h> #include <linux/highmem.h> #include <linux/ptrace.h> diff --git a/mm/slub.c b/mm/slub.c index dc5b42e700b8..69742ab9a21d 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1973,7 +1973,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, t = acquire_slab(s, n, page, object == NULL, &objects); if (!t) - break; + continue; /* cmpxchg raced */ available += objects; if (!object) { @@ -2791,7 +2791,8 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s, void *obj) { if (unlikely(slab_want_init_on_free(s)) && obj) - memset((void *)((char *)obj + s->offset), 0, sizeof(void *)); + memset((void *)((char *)kasan_reset_tag(obj) + s->offset), + 0, sizeof(void *)); } /* @@ -2883,7 +2884,7 @@ redo: stat(s, ALLOC_FASTPATH); } - maybe_wipe_obj_freeptr(s, kasan_reset_tag(object)); + maybe_wipe_obj_freeptr(s, object); if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object) memset(kasan_reset_tag(object), 0, s->object_size); @@ -3329,7 +3330,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, int j; for (j =
0; j < i; j++) - memset(p[j], 0, s->object_size); + memset(kasan_reset_tag(p[j]), 0, s->object_size); } /* memcg and kmem_cache debug support */ diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 4d88fe5a277a..e6f352bf0498 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -2420,8 +2420,10 @@ void *vmap(struct page **pages, unsigned int count, return NULL; } - if (flags & VM_MAP_PUT_PAGES) + if (flags & VM_MAP_PUT_PAGES) { area->pages = pages; + area->nr_pages = count; + } return area->addr; } EXPORT_SYMBOL(vmap); diff --git a/mm/vmscan.c b/mm/vmscan.c index 257cba79a96d..b1b574ad199d 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1238,6 +1238,8 @@ static unsigned int shrink_page_list(struct list_head *page_list, if (!PageSwapCache(page)) { if (!(sc->gfp_mask & __GFP_IO)) goto keep_locked; + if (page_maybe_dma_pinned(page)) + goto keep_locked; if (PageTransHuge(page)) { /* cannot split THP, skip it */ if (!can_split_huge_page(page, NULL)) diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c index f292e0267bb9..8b644113715e 100644 --- a/net/8021q/vlan.c +++ b/net/8021q/vlan.c @@ -284,8 +284,7 @@ static int register_vlan_device(struct net_device *real_dev, u16 vlan_id) return 0; out_free_newdev: - if (new_dev->reg_state == NETREG_UNINITIALIZED) - free_netdev(new_dev); + free_netdev(new_dev); return err; } diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index c1c30a9f76f3..8b796c499cbb 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -272,7 +272,8 @@ int bpf_prog_test_run_raw_tp(struct bpf_prog *prog, kattr->test.repeat) return -EINVAL; - if (ctx_size_in < prog->aux->max_ctx_offset) + if (ctx_size_in < prog->aux->max_ctx_offset || + ctx_size_in > MAX_BPF_FUNC_ARGS * sizeof(u64)) return -EINVAL; if ((kattr->test.flags & BPF_F_TEST_RUN_ON_CPU) == 0 && cpu != 0) diff --git a/net/can/isotp.c b/net/can/isotp.c index 7839c3b9e5be..3ef7f78e553b 100644 --- a/net/can/isotp.c +++ b/net/can/isotp.c @@ -1155,6 +1155,7 @@ static int isotp_getname(struct socket *sock, struct sockaddr *uaddr, int peer) if (peer) return -EOPNOTSUPP; + memset(addr, 0, sizeof(*addr)); addr->can_family = AF_CAN; addr->can_ifindex = so->ifindex; addr->can_addr.tp.rx_id = so->rxid; diff --git a/net/ceph/auth_x.c b/net/ceph/auth_x.c index 9815cfe42af0..ca44c327bace 100644 --- a/net/ceph/auth_x.c +++ b/net/ceph/auth_x.c @@ -569,6 +569,34 @@ e_range: return -ERANGE; } +static int decode_con_secret(void **p, void *end, u8 *con_secret, + int *con_secret_len) +{ + int len; + + ceph_decode_32_safe(p, end, len, bad); + ceph_decode_need(p, end, len, bad); + + dout("%s len %d\n", __func__, len); + if (con_secret) { + if (len > CEPH_MAX_CON_SECRET_LEN) { + pr_err("connection secret too big %d\n", len); + goto bad_memzero; + } + memcpy(con_secret, *p, len); + *con_secret_len = len; + } + memzero_explicit(*p, len); + *p += len; + return 0; + +bad_memzero: + memzero_explicit(*p, len); +bad: + pr_err("failed to decode connection secret\n"); + return -EINVAL; +} + static int handle_auth_session_key(struct ceph_auth_client *ac, void **p, void *end, u8 *session_key, int *session_key_len, @@ -612,17 +640,9 @@ static int handle_auth_session_key(struct ceph_auth_client *ac, dout("%s decrypted %d bytes\n", __func__, ret); dend = dp + ret; - ceph_decode_32_safe(&dp, dend, len, e_inval); - if (len > CEPH_MAX_CON_SECRET_LEN) { - pr_err("connection secret too big %d\n", len); - return -EINVAL; - } - - dout("%s connection secret len %d\n", __func__, len); - if (con_secret) { - memcpy(con_secret, dp, len); - *con_secret_len = len; - } + ret = 
decode_con_secret(&dp, dend, con_secret, con_secret_len); + if (ret) + return ret; } /* service tickets */ @@ -828,7 +848,6 @@ static int decrypt_authorizer_reply(struct ceph_crypto_key *secret, { void *dp, *dend; u8 struct_v; - int len; int ret; dp = *p + ceph_x_encrypt_offset(); @@ -843,17 +862,9 @@ static int decrypt_authorizer_reply(struct ceph_crypto_key *secret, ceph_decode_64_safe(&dp, dend, *nonce_plus_one, e_inval); dout("%s nonce_plus_one %llu\n", __func__, *nonce_plus_one); if (struct_v >= 2) { - ceph_decode_32_safe(&dp, dend, len, e_inval); - if (len > CEPH_MAX_CON_SECRET_LEN) { - pr_err("connection secret too big %d\n", len); - return -EINVAL; - } - - dout("%s connection secret len %d\n", __func__, len); - if (con_secret) { - memcpy(con_secret, dp, len); - *con_secret_len = len; - } + ret = decode_con_secret(&dp, dend, con_secret, con_secret_len); + if (ret) + return ret; } return 0; diff --git a/net/ceph/crypto.c b/net/ceph/crypto.c index 4f75df40fb12..92d89b331645 100644 --- a/net/ceph/crypto.c +++ b/net/ceph/crypto.c @@ -96,6 +96,7 @@ int ceph_crypto_key_decode(struct ceph_crypto_key *key, void **p, void *end) key->len = ceph_decode_16(p); ceph_decode_need(p, end, key->len, bad); ret = set_secret(key, *p); + memzero_explicit(*p, key->len); *p += key->len; return ret; @@ -134,7 +135,7 @@ int ceph_crypto_key_unarmor(struct ceph_crypto_key *key, const char *inkey) void ceph_crypto_key_destroy(struct ceph_crypto_key *key) { if (key) { - kfree(key->key); + kfree_sensitive(key->key); key->key = NULL; if (key->tfm) { crypto_free_sync_skcipher(key->tfm); diff --git a/net/ceph/messenger_v1.c b/net/ceph/messenger_v1.c index 04f653b3c897..2cb5ffdf071a 100644 --- a/net/ceph/messenger_v1.c +++ b/net/ceph/messenger_v1.c @@ -1100,7 +1100,7 @@ static int read_partial_message(struct ceph_connection *con) if (ret < 0) return ret; - BUG_ON(!con->in_msg ^ skip); + BUG_ON((!con->in_msg) ^ skip); if (skip) { /* skip this message */ dout("alloc_msg said skip message\n"); diff --git a/net/ceph/messenger_v2.c b/net/ceph/messenger_v2.c index c38d8de93836..cc40ce4e02fb 100644 --- a/net/ceph/messenger_v2.c +++ b/net/ceph/messenger_v2.c @@ -689,11 +689,10 @@ static int verify_epilogue_crcs(struct ceph_connection *con, u32 front_crc, } static int setup_crypto(struct ceph_connection *con, - u8 *session_key, int session_key_len, - u8 *con_secret, int con_secret_len) + const u8 *session_key, int session_key_len, + const u8 *con_secret, int con_secret_len) { unsigned int noio_flag; - void *p; int ret; dout("%s con %p con_mode %d session_key_len %d con_secret_len %d\n", @@ -751,15 +750,14 @@ static int setup_crypto(struct ceph_connection *con, return ret; } - p = con_secret; - WARN_ON((unsigned long)p & crypto_aead_alignmask(con->v2.gcm_tfm)); - ret = crypto_aead_setkey(con->v2.gcm_tfm, p, CEPH_GCM_KEY_LEN); + WARN_ON((unsigned long)con_secret & + crypto_aead_alignmask(con->v2.gcm_tfm)); + ret = crypto_aead_setkey(con->v2.gcm_tfm, con_secret, CEPH_GCM_KEY_LEN); if (ret) { pr_err("failed to set gcm key: %d\n", ret); return ret; } - p += CEPH_GCM_KEY_LEN; WARN_ON(crypto_aead_ivsize(con->v2.gcm_tfm) != CEPH_GCM_IV_LEN); ret = crypto_aead_setauthsize(con->v2.gcm_tfm, CEPH_GCM_TAG_LEN); if (ret) { @@ -777,8 +775,11 @@ static int setup_crypto(struct ceph_connection *con, aead_request_set_callback(con->v2.gcm_req, CRYPTO_TFM_REQ_MAY_BACKLOG, crypto_req_done, &con->v2.gcm_wait); - memcpy(&con->v2.in_gcm_nonce, p, CEPH_GCM_IV_LEN); - memcpy(&con->v2.out_gcm_nonce, p + CEPH_GCM_IV_LEN, CEPH_GCM_IV_LEN); + 
memcpy(&con->v2.in_gcm_nonce, con_secret + CEPH_GCM_KEY_LEN, + CEPH_GCM_IV_LEN); + memcpy(&con->v2.out_gcm_nonce, + con_secret + CEPH_GCM_KEY_LEN + CEPH_GCM_IV_LEN, + CEPH_GCM_IV_LEN); return 0; /* auth_x, secure mode */ } @@ -800,7 +801,7 @@ static int hmac_sha256(struct ceph_connection *con, const struct kvec *kvecs, desc->tfm = con->v2.hmac_tfm; ret = crypto_shash_init(desc); if (ret) - return ret; + goto out; for (i = 0; i < kvec_cnt; i++) { WARN_ON((unsigned long)kvecs[i].iov_base & @@ -808,15 +809,14 @@ static int hmac_sha256(struct ceph_connection *con, const struct kvec *kvecs, ret = crypto_shash_update(desc, kvecs[i].iov_base, kvecs[i].iov_len); if (ret) - return ret; + goto out; } ret = crypto_shash_final(desc, hmac); - if (ret) - return ret; +out: shash_desc_zero(desc); - return 0; /* auth_x, both plain and secure modes */ + return ret; /* auth_x, both plain and secure modes */ } static void gcm_inc_nonce(struct ceph_gcm_nonce *nonce) @@ -2072,27 +2072,32 @@ static int process_auth_done(struct ceph_connection *con, void *p, void *end) if (con->state != CEPH_CON_S_V2_AUTH) { dout("%s con %p state changed to %d\n", __func__, con, con->state); - return -EAGAIN; + ret = -EAGAIN; + goto out; } dout("%s con %p handle_auth_done ret %d\n", __func__, con, ret); if (ret) - return ret; + goto out; ret = setup_crypto(con, session_key, session_key_len, con_secret, con_secret_len); if (ret) - return ret; + goto out; reset_out_kvecs(con); ret = prepare_auth_signature(con); if (ret) { pr_err("prepare_auth_signature failed: %d\n", ret); - return ret; + goto out; } con->state = CEPH_CON_S_V2_AUTH_SIGNATURE; - return 0; + +out: + memzero_explicit(session_key_buf, sizeof(session_key_buf)); + memzero_explicit(con_secret_buf, sizeof(con_secret_buf)); + return ret; bad: pr_err("failed to decode auth_done\n"); @@ -3436,6 +3441,8 @@ void ceph_con_v2_reset_protocol(struct ceph_connection *con) } con->v2.con_mode = CEPH_CON_MODE_UNKNOWN; + memzero_explicit(&con->v2.in_gcm_nonce, CEPH_GCM_IV_LEN); + memzero_explicit(&con->v2.out_gcm_nonce, CEPH_GCM_IV_LEN); if (con->v2.hmac_tfm) { crypto_free_shash(con->v2.hmac_tfm); diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c index b9d54ed9f338..195ceb8afb06 100644 --- a/net/ceph/mon_client.c +++ b/net/ceph/mon_client.c @@ -1433,7 +1433,7 @@ static int mon_handle_auth_bad_method(struct ceph_connection *con, /* * handle incoming message */ -static void dispatch(struct ceph_connection *con, struct ceph_msg *msg) +static void mon_dispatch(struct ceph_connection *con, struct ceph_msg *msg) { struct ceph_mon_client *monc = con->private; int type = le16_to_cpu(msg->hdr.type); @@ -1565,21 +1565,21 @@ static void mon_fault(struct ceph_connection *con) * will come from the messenger workqueue, which is drained prior to * mon_client destruction. 
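The process_auth_done() rework above funnels every return through a single `out:` label so that the on-stack session key and connection secret are wiped on success and failure alike, and it pairs with memzero_explicit() because an ordinary memset() of a buffer about to go out of scope may be deleted by dead-store elimination. A userspace sketch of both halves of the pattern; the volatile function pointer is one portable way to defeat the optimizer, and all names are illustrative:

    #include <string.h>

    /* Stand-in for the kernel's memzero_explicit(): calling memset through
     * a volatile pointer keeps the compiler from proving the store dead. */
    static void *(*volatile force_memset)(void *, int, size_t) = memset;

    static int handle_auth(const unsigned char *wire, size_t len)
    {
        unsigned char secret[64];
        int ret = 0;

        if (len > sizeof(secret)) {
            ret = -1;
            goto out;       /* every path, success or error ... */
        }
        memcpy(secret, wire, len);
        /* ... use the secret ... */
    out:
        force_memset(secret, 0, sizeof(secret));    /* ... wipes it */
        return ret;
    }

    int main(void)
    {
        static const unsigned char wire[16] = { 1, 2, 3 };

        return handle_auth(wire, sizeof(wire));
    }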
*/ -static struct ceph_connection *con_get(struct ceph_connection *con) +static struct ceph_connection *mon_get_con(struct ceph_connection *con) { return con; } -static void con_put(struct ceph_connection *con) +static void mon_put_con(struct ceph_connection *con) { } static const struct ceph_connection_operations mon_con_ops = { - .get = con_get, - .put = con_put, - .dispatch = dispatch, - .fault = mon_fault, + .get = mon_get_con, + .put = mon_put_con, .alloc_msg = mon_alloc_msg, + .dispatch = mon_dispatch, + .fault = mon_fault, .get_auth_request = mon_get_auth_request, .handle_auth_reply_more = mon_handle_auth_reply_more, .handle_auth_done = mon_handle_auth_done, diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c index 61229c5e22cb..ff8624a7c964 100644 --- a/net/ceph/osd_client.c +++ b/net/ceph/osd_client.c @@ -5412,7 +5412,7 @@ void ceph_osdc_cleanup(void) /* * handle incoming message */ -static void dispatch(struct ceph_connection *con, struct ceph_msg *msg) +static void osd_dispatch(struct ceph_connection *con, struct ceph_msg *msg) { struct ceph_osd *osd = con->private; struct ceph_osd_client *osdc = osd->o_osdc; @@ -5534,9 +5534,9 @@ static struct ceph_msg *alloc_msg_with_page_vector(struct ceph_msg_header *hdr) return m; } -static struct ceph_msg *alloc_msg(struct ceph_connection *con, - struct ceph_msg_header *hdr, - int *skip) +static struct ceph_msg *osd_alloc_msg(struct ceph_connection *con, + struct ceph_msg_header *hdr, + int *skip) { struct ceph_osd *osd = con->private; int type = le16_to_cpu(hdr->type); @@ -5560,7 +5560,7 @@ static struct ceph_msg *alloc_msg(struct ceph_connection *con, /* * Wrappers to refcount containing ceph_osd struct */ -static struct ceph_connection *get_osd_con(struct ceph_connection *con) +static struct ceph_connection *osd_get_con(struct ceph_connection *con) { struct ceph_osd *osd = con->private; if (get_osd(osd)) @@ -5568,7 +5568,7 @@ static struct ceph_connection *get_osd_con(struct ceph_connection *con) return NULL; } -static void put_osd_con(struct ceph_connection *con) +static void osd_put_con(struct ceph_connection *con) { struct ceph_osd *osd = con->private; put_osd(osd); @@ -5582,8 +5582,8 @@ static void put_osd_con(struct ceph_connection *con) * Note: returned pointer is the address of a structure that's * managed separately. Caller must *not* attempt to free it. 
*/ -static struct ceph_auth_handshake *get_authorizer(struct ceph_connection *con, - int *proto, int force_new) +static struct ceph_auth_handshake * +osd_get_authorizer(struct ceph_connection *con, int *proto, int force_new) { struct ceph_osd *o = con->private; struct ceph_osd_client *osdc = o->o_osdc; @@ -5599,7 +5599,7 @@ static struct ceph_auth_handshake *get_authorizer(struct ceph_connection *con, return auth; } -static int add_authorizer_challenge(struct ceph_connection *con, +static int osd_add_authorizer_challenge(struct ceph_connection *con, void *challenge_buf, int challenge_buf_len) { struct ceph_osd *o = con->private; @@ -5610,7 +5610,7 @@ static int add_authorizer_challenge(struct ceph_connection *con, challenge_buf, challenge_buf_len); } -static int verify_authorizer_reply(struct ceph_connection *con) +static int osd_verify_authorizer_reply(struct ceph_connection *con) { struct ceph_osd *o = con->private; struct ceph_osd_client *osdc = o->o_osdc; @@ -5622,7 +5622,7 @@ static int verify_authorizer_reply(struct ceph_connection *con) NULL, NULL, NULL, NULL); } -static int invalidate_authorizer(struct ceph_connection *con) +static int osd_invalidate_authorizer(struct ceph_connection *con) { struct ceph_osd *o = con->private; struct ceph_osd_client *osdc = o->o_osdc; @@ -5731,18 +5731,18 @@ static int osd_check_message_signature(struct ceph_msg *msg) } static const struct ceph_connection_operations osd_con_ops = { - .get = get_osd_con, - .put = put_osd_con, - .dispatch = dispatch, - .get_authorizer = get_authorizer, - .add_authorizer_challenge = add_authorizer_challenge, - .verify_authorizer_reply = verify_authorizer_reply, - .invalidate_authorizer = invalidate_authorizer, - .alloc_msg = alloc_msg, + .get = osd_get_con, + .put = osd_put_con, + .alloc_msg = osd_alloc_msg, + .dispatch = osd_dispatch, + .fault = osd_fault, .reencode_message = osd_reencode_message, + .get_authorizer = osd_get_authorizer, + .add_authorizer_challenge = osd_add_authorizer_challenge, + .verify_authorizer_reply = osd_verify_authorizer_reply, + .invalidate_authorizer = osd_invalidate_authorizer, .sign_message = osd_sign_message, .check_message_signature = osd_check_message_signature, - .fault = osd_fault, .get_auth_request = osd_get_auth_request, .handle_auth_reply_more = osd_handle_auth_reply_more, .handle_auth_done = osd_handle_auth_done, diff --git a/net/core/dev.c b/net/core/dev.c index 8fa739259041..a979b86dbacd 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -9661,9 +9661,20 @@ static netdev_features_t netdev_fix_features(struct net_device *dev, } } - if ((features & NETIF_F_HW_TLS_TX) && !(features & NETIF_F_HW_CSUM)) { - netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n"); - features &= ~NETIF_F_HW_TLS_TX; + if (features & NETIF_F_HW_TLS_TX) { + bool ip_csum = (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) == + (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM); + bool hw_csum = features & NETIF_F_HW_CSUM; + + if (!ip_csum && !hw_csum) { + netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n"); + features &= ~NETIF_F_HW_TLS_TX; + } + } + + if ((features & NETIF_F_HW_TLS_RX) && !(features & NETIF_F_RXCSUM)) { + netdev_dbg(dev, "Dropping TLS RX HW offload feature since no RXCSUM feature.\n"); + features &= ~NETIF_F_HW_TLS_RX; } return features; @@ -10077,17 +10088,11 @@ int register_netdevice(struct net_device *dev) ret = call_netdevice_notifiers(NETDEV_REGISTER, dev); ret = notifier_to_errno(ret); if (ret) { + /* Expect explicit free_netdev() on failure */ 
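The netdev_fix_features() hunk above encodes a feature dependency: TLS TX offload is kept only if the device can also checksum the frames it will encrypt, via either NETIF_F_HW_CSUM or both NETIF_F_IP_CSUM and NETIF_F_IPV6_CSUM, and TLS RX now requires NETIF_F_RXCSUM. The convention is to silently drop the dependent feature rather than fail; a toy version with made-up flag values:

    #include <stdio.h>

    #define F_IP_CSUM   (1u << 0)
    #define F_IPV6_CSUM (1u << 1)
    #define F_HW_CSUM   (1u << 2)
    #define F_HW_TLS_TX (1u << 3)

    static unsigned int fix_features(unsigned int features)
    {
        if (features & F_HW_TLS_TX) {
            unsigned int both = F_IP_CSUM | F_IPV6_CSUM;
            int ip_csum = (features & both) == both;
            int hw_csum = !!(features & F_HW_CSUM);

            if (!ip_csum && !hw_csum)
                features &= ~F_HW_TLS_TX;   /* drop it, don't fail */
        }
        return features;
    }

    int main(void)
    {
        /* TLS TX requested with only IPv4 csum: the feature is dropped */
        printf("%#x\n", fix_features(F_HW_TLS_TX | F_IP_CSUM));
        return 0;
    }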
+ dev->needs_free_netdev = false; rollback_registered(dev); - rcu_barrier(); - - dev->reg_state = NETREG_UNREGISTERED; - /* We should put the kobject that hold in - * netdev_unregister_kobject(), otherwise - * the net device cannot be freed when - * driver calls free_netdev(), because the - * kobject is being hold. - */ - kobject_put(&dev->dev.kobj); + net_set_todo(dev); + goto out; } /* * Prevent userspace races by waiting until the network @@ -10631,6 +10636,17 @@ void free_netdev(struct net_device *dev) struct napi_struct *p, *n; might_sleep(); + + /* When called immediately after register_netdevice() failed the unwind + * handling may still be dismantling the device. Handle that case by + * deferring the free. + */ + if (dev->reg_state == NETREG_UNREGISTERING) { + ASSERT_RTNL(); + dev->needs_free_netdev = true; + return; + } + netif_free_tx_queues(dev); netif_free_rx_queues(dev); diff --git a/net/core/devlink.c b/net/core/devlink.c index ee828e4b1007..738d4344d679 100644 --- a/net/core/devlink.c +++ b/net/core/devlink.c @@ -4146,7 +4146,7 @@ out: static int devlink_nl_cmd_port_param_get_doit(struct sk_buff *skb, struct genl_info *info) { - struct devlink_port *devlink_port = info->user_ptr[0]; + struct devlink_port *devlink_port = info->user_ptr[1]; struct devlink_param_item *param_item; struct sk_buff *msg; int err; @@ -4175,7 +4175,7 @@ static int devlink_nl_cmd_port_param_get_doit(struct sk_buff *skb, static int devlink_nl_cmd_port_param_set_doit(struct sk_buff *skb, struct genl_info *info) { - struct devlink_port *devlink_port = info->user_ptr[0]; + struct devlink_port *devlink_port = info->user_ptr[1]; return __devlink_nl_cmd_param_set_doit(devlink_port->devlink, devlink_port->index, diff --git a/net/core/gen_estimator.c b/net/core/gen_estimator.c index 80dbf2f4016e..8e582e29a41e 100644 --- a/net/core/gen_estimator.c +++ b/net/core/gen_estimator.c @@ -80,11 +80,11 @@ static void est_timer(struct timer_list *t) u64 rate, brate; est_fetch_counters(est, &b); - brate = (b.bytes - est->last_bytes) << (10 - est->ewma_log - est->intvl_log); - brate -= (est->avbps >> est->ewma_log); + brate = (b.bytes - est->last_bytes) << (10 - est->intvl_log); + brate = (brate >> est->ewma_log) - (est->avbps >> est->ewma_log); - rate = (b.packets - est->last_packets) << (10 - est->ewma_log - est->intvl_log); - rate -= (est->avpps >> est->ewma_log); + rate = (b.packets - est->last_packets) << (10 - est->intvl_log); + rate = (rate >> est->ewma_log) - (est->avpps >> est->ewma_log); write_seqcount_begin(&est->seq); est->avbps += brate; @@ -143,6 +143,9 @@ int gen_new_estimator(struct gnet_stats_basic_packed *bstats, if (parm->interval < -2 || parm->interval > 3) return -EINVAL; + if (parm->ewma_log == 0 || parm->ewma_log >= 31) + return -EINVAL; + est = kzalloc(sizeof(*est), GFP_KERNEL); if (!est) return -ENOBUFS; diff --git a/net/core/neighbour.c b/net/core/neighbour.c index 9500d28a43b0..277ed854aef1 100644 --- a/net/core/neighbour.c +++ b/net/core/neighbour.c @@ -1569,10 +1569,8 @@ static void neigh_proxy_process(struct timer_list *t) void pneigh_enqueue(struct neigh_table *tbl, struct neigh_parms *p, struct sk_buff *skb) { - unsigned long now = jiffies; - - unsigned long sched_next = now + (prandom_u32() % - NEIGH_VAR(p, PROXY_DELAY)); + unsigned long sched_next = jiffies + + prandom_u32_max(NEIGH_VAR(p, PROXY_DELAY)); if (tbl->proxy_queue.qlen > NEIGH_VAR(p, PROXY_QLEN)) { kfree_skb(skb); diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c index 999b70c59761..daf502c13d6d 100644 --- 
a/net/core/net-sysfs.c +++ b/net/core/net-sysfs.c @@ -1317,8 +1317,8 @@ static const struct attribute_group dql_group = { static ssize_t xps_cpus_show(struct netdev_queue *queue, char *buf) { + int cpu, len, ret, num_tc = 1, tc = 0; struct net_device *dev = queue->dev; - int cpu, len, num_tc = 1, tc = 0; struct xps_dev_maps *dev_maps; cpumask_var_t mask; unsigned long index; @@ -1328,22 +1328,31 @@ static ssize_t xps_cpus_show(struct netdev_queue *queue, index = get_netdev_queue_index(queue); + if (!rtnl_trylock()) + return restart_syscall(); + if (dev->num_tc) { /* Do not allow XPS on subordinate device directly */ num_tc = dev->num_tc; - if (num_tc < 0) - return -EINVAL; + if (num_tc < 0) { + ret = -EINVAL; + goto err_rtnl_unlock; + } /* If queue belongs to subordinate dev use its map */ dev = netdev_get_tx_queue(dev, index)->sb_dev ? : dev; tc = netdev_txq_to_tc(dev, index); - if (tc < 0) - return -EINVAL; + if (tc < 0) { + ret = -EINVAL; + goto err_rtnl_unlock; + } } - if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) - return -ENOMEM; + if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) { + ret = -ENOMEM; + goto err_rtnl_unlock; + } rcu_read_lock(); dev_maps = rcu_dereference(dev->xps_cpus_map); @@ -1366,9 +1375,15 @@ static ssize_t xps_cpus_show(struct netdev_queue *queue, } rcu_read_unlock(); + rtnl_unlock(); + len = snprintf(buf, PAGE_SIZE, "%*pb\n", cpumask_pr_args(mask)); free_cpumask_var(mask); return len < PAGE_SIZE ? len : -EINVAL; + +err_rtnl_unlock: + rtnl_unlock(); + return ret; } static ssize_t xps_cpus_store(struct netdev_queue *queue, @@ -1396,7 +1411,13 @@ static ssize_t xps_cpus_store(struct netdev_queue *queue, return err; } + if (!rtnl_trylock()) { + free_cpumask_var(mask); + return restart_syscall(); + } + err = netif_set_xps_queue(dev, mask, index); + rtnl_unlock(); free_cpumask_var(mask); @@ -1408,22 +1429,29 @@ static struct netdev_queue_attribute xps_cpus_attribute __ro_after_init static ssize_t xps_rxqs_show(struct netdev_queue *queue, char *buf) { + int j, len, ret, num_tc = 1, tc = 0; struct net_device *dev = queue->dev; struct xps_dev_maps *dev_maps; unsigned long *mask, index; - int j, len, num_tc = 1, tc = 0; index = get_netdev_queue_index(queue); + if (!rtnl_trylock()) + return restart_syscall(); + if (dev->num_tc) { num_tc = dev->num_tc; tc = netdev_txq_to_tc(dev, index); - if (tc < 0) - return -EINVAL; + if (tc < 0) { + ret = -EINVAL; + goto err_rtnl_unlock; + } } mask = bitmap_zalloc(dev->num_rx_queues, GFP_KERNEL); - if (!mask) - return -ENOMEM; + if (!mask) { + ret = -ENOMEM; + goto err_rtnl_unlock; + } rcu_read_lock(); dev_maps = rcu_dereference(dev->xps_rxqs_map); @@ -1449,10 +1477,16 @@ static ssize_t xps_rxqs_show(struct netdev_queue *queue, char *buf) out_no_maps: rcu_read_unlock(); + rtnl_unlock(); + len = bitmap_print_to_pagebuf(false, buf, mask, dev->num_rx_queues); bitmap_free(mask); return len < PAGE_SIZE ? len : -EINVAL; + +err_rtnl_unlock: + rtnl_unlock(); + return ret; } static ssize_t xps_rxqs_store(struct netdev_queue *queue, const char *buf, @@ -1478,10 +1512,17 @@ static ssize_t xps_rxqs_store(struct netdev_queue *queue, const char *buf, return err; } + if (!rtnl_trylock()) { + bitmap_free(mask); + return restart_syscall(); + } + cpus_read_lock(); err = __netif_set_xps_queue(dev, mask, index, true); cpus_read_unlock(); + rtnl_unlock(); + bitmap_free(mask); return err ? 
: len; } diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c index bb0596c41b3e..3d6ab194d0f5 100644 --- a/net/core/rtnetlink.c +++ b/net/core/rtnetlink.c @@ -3439,26 +3439,15 @@ replay: dev->ifindex = ifm->ifi_index; - if (ops->newlink) { + if (ops->newlink) err = ops->newlink(link_net ? : net, dev, tb, data, extack); - /* Drivers should call free_netdev() in ->destructor - * and unregister it on failure after registration - * so that device could be finally freed in rtnl_unlock. - */ - if (err < 0) { - /* If device is not registered at all, free it now */ - if (dev->reg_state == NETREG_UNINITIALIZED || - dev->reg_state == NETREG_UNREGISTERED) - free_netdev(dev); - goto out; - } - } else { + else err = register_netdevice(dev); - if (err < 0) { - free_netdev(dev); - goto out; - } + if (err < 0) { + free_netdev(dev); + goto out; } + err = rtnl_configure_link(dev, ifm); if (err < 0) goto out_unregister; diff --git a/net/core/skbuff.c b/net/core/skbuff.c index f62cae3f75d8..785daff48030 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -437,7 +437,11 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len, len += NET_SKB_PAD; - if ((len > SKB_WITH_OVERHEAD(PAGE_SIZE)) || + /* If requested length is either too small or too big, + * we use kmalloc() for skb->head allocation. + */ + if (len <= SKB_WITH_OVERHEAD(1024) || + len > SKB_WITH_OVERHEAD(PAGE_SIZE) || (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) { skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE); if (!skb) @@ -501,13 +505,17 @@ EXPORT_SYMBOL(__netdev_alloc_skb); struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len, gfp_t gfp_mask) { - struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache); + struct napi_alloc_cache *nc; struct sk_buff *skb; void *data; len += NET_SKB_PAD + NET_IP_ALIGN; - if ((len > SKB_WITH_OVERHEAD(PAGE_SIZE)) || + /* If requested length is either too small or too big, + * we use kmalloc() for skb->head allocation. 
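Per the new comment in __netdev_alloc_skb()/__napi_alloc_skb() above, requests that are either too small or too large for the per-CPU page-fragment cache now fall back to kmalloc(); presumably small heads pack more densely in slab caches than as carve-outs from shared page fragments. The shape of the dispatch, with illustrative thresholds and a stand-in for the frag cache:

    #include <stdlib.h>

    #define SMALL_LIMIT 1024
    #define FRAG_LIMIT  4096

    static void *frag_cache_alloc(size_t len)
    {
        return malloc(len);     /* stand-in for the page-frag fast path */
    }

    void *alloc_head(size_t len)
    {
        if (len <= SMALL_LIMIT || len > FRAG_LIMIT)
            return malloc(len); /* kmalloc()-style fallback */
        return frag_cache_alloc(len);
    }

    int main(void)
    {
        free(alloc_head(256));   /* small: slab-style path */
        free(alloc_head(2048));  /* mid-sized: frag-cache path */
        return 0;
    }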
+ */ + if (len <= SKB_WITH_OVERHEAD(1024) || + len > SKB_WITH_OVERHEAD(PAGE_SIZE) || (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) { skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE); if (!skb) @@ -515,6 +523,7 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len, goto skb_success; } + nc = this_cpu_ptr(&napi_alloc_cache); len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); len = SKB_DATA_ALIGN(len); @@ -3442,6 +3451,7 @@ void skb_prepare_seq_read(struct sk_buff *skb, unsigned int from, st->root_skb = st->cur_skb = skb; st->frag_idx = st->stepped_offset = 0; st->frag_data = NULL; + st->frag_off = 0; } EXPORT_SYMBOL(skb_prepare_seq_read); @@ -3496,14 +3506,27 @@ next_skb: st->stepped_offset += skb_headlen(st->cur_skb); while (st->frag_idx < skb_shinfo(st->cur_skb)->nr_frags) { + unsigned int pg_idx, pg_off, pg_sz; + frag = &skb_shinfo(st->cur_skb)->frags[st->frag_idx]; - block_limit = skb_frag_size(frag) + st->stepped_offset; + pg_idx = 0; + pg_off = skb_frag_off(frag); + pg_sz = skb_frag_size(frag); + + if (skb_frag_must_loop(skb_frag_page(frag))) { + pg_idx = (pg_off + st->frag_off) >> PAGE_SHIFT; + pg_off = offset_in_page(pg_off + st->frag_off); + pg_sz = min_t(unsigned int, pg_sz - st->frag_off, + PAGE_SIZE - pg_off); + } + + block_limit = pg_sz + st->stepped_offset; if (abs_offset < block_limit) { if (!st->frag_data) - st->frag_data = kmap_atomic(skb_frag_page(frag)); + st->frag_data = kmap_atomic(skb_frag_page(frag) + pg_idx); - *data = (u8 *) st->frag_data + skb_frag_off(frag) + + *data = (u8 *)st->frag_data + pg_off + (abs_offset - st->stepped_offset); return block_limit - abs_offset; @@ -3514,8 +3537,12 @@ next_skb: st->frag_data = NULL; } - st->frag_idx++; - st->stepped_offset += skb_frag_size(frag); + st->stepped_offset += pg_sz; + st->frag_off += pg_sz; + if (st->frag_off == skb_frag_size(frag)) { + st->frag_off = 0; + st->frag_idx++; + } } if (st->frag_data) { @@ -3655,7 +3682,8 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb, unsigned int delta_truesize = 0; unsigned int delta_len = 0; struct sk_buff *tail = NULL; - struct sk_buff *nskb; + struct sk_buff *nskb, *tmp; + int err; skb_push(skb, -skb_network_offset(skb) + offset); @@ -3665,11 +3693,28 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb, nskb = list_skb; list_skb = list_skb->next; + err = 0; + if (skb_shared(nskb)) { + tmp = skb_clone(nskb, GFP_ATOMIC); + if (tmp) { + consume_skb(nskb); + nskb = tmp; + err = skb_unclone(nskb, GFP_ATOMIC); + } else { + err = -ENOMEM; + } + } + if (!tail) skb->next = nskb; else tail->next = nskb; + if (unlikely(err)) { + nskb->next = list_skb; + goto err_linearize; + } + tail = nskb; delta_len += nskb->len; diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c index bbdd3c7b6cb5..b065f0a103ed 100644 --- a/net/core/sock_reuseport.c +++ b/net/core/sock_reuseport.c @@ -293,7 +293,7 @@ select_by_hash: i = j = reciprocal_scale(hash, socks); while (reuse->socks[i]->sk_state == TCP_ESTABLISHED) { i++; - if (i >= reuse->num_socks) + if (i >= socks) i = 0; if (i == j) goto out; diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c index 084e159a12ba..653e3bc9c87b 100644 --- a/net/dcb/dcbnl.c +++ b/net/dcb/dcbnl.c @@ -1765,6 +1765,8 @@ static int dcb_doit(struct sk_buff *skb, struct nlmsghdr *nlh, fn = &reply_funcs[dcb->cmd]; if (!fn->cb) return -EOPNOTSUPP; + if (fn->type == RTM_SETDCB && !netlink_capable(skb, CAP_NET_ADMIN)) + return -EPERM; if (!tb[DCB_ATTR_IFNAME]) return -EINVAL; diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c index 
183003e45762..a47e0f9b20d0 100644 --- a/net/dsa/dsa2.c +++ b/net/dsa/dsa2.c @@ -353,9 +353,13 @@ static int dsa_port_devlink_setup(struct dsa_port *dp) static void dsa_port_teardown(struct dsa_port *dp) { + struct devlink_port *dlp = &dp->devlink_port; + if (!dp->setup) return; + devlink_port_type_clear(dlp); + switch (dp->type) { case DSA_PORT_TYPE_UNUSED: break; diff --git a/net/dsa/master.c b/net/dsa/master.c index 5a0f6fec4271..cb3a5cf99b25 100644 --- a/net/dsa/master.c +++ b/net/dsa/master.c @@ -309,8 +309,18 @@ static struct lock_class_key dsa_master_addr_list_lock_key; int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp) { int mtu = ETH_DATA_LEN + cpu_dp->tag_ops->overhead; + struct dsa_switch *ds = cpu_dp->ds; + struct device_link *consumer_link; int ret; + /* The DSA master must use SET_NETDEV_DEV for this to work. */ + consumer_link = device_link_add(ds->dev, dev->dev.parent, + DL_FLAG_AUTOREMOVE_CONSUMER); + if (!consumer_link) + netdev_err(dev, + "Failed to create a device link to DSA switch %s\n", + dev_name(ds->dev)); + rtnl_lock(); ret = dev_set_mtu(dev, mtu); rtnl_unlock(); diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c index 8b07f3a4f2db..a3271ec3e162 100644 --- a/net/ipv4/esp4.c +++ b/net/ipv4/esp4.c @@ -443,7 +443,6 @@ static int esp_output_encap(struct xfrm_state *x, struct sk_buff *skb, int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp) { u8 *tail; - u8 *vaddr; int nfrags; int esph_offset; struct page *page; @@ -485,14 +484,10 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info * page = pfrag->page; get_page(page); - vaddr = kmap_atomic(page); - - tail = vaddr + pfrag->offset; + tail = page_address(page) + pfrag->offset; esp_output_fill_trailer(tail, esp->tfclen, esp->plen, esp->proto); - kunmap_atomic(vaddr); - nfrags = skb_shinfo(skb)->nr_frags; __skb_fill_page_desc(skb, nfrags, page, pfrag->offset, diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c index cdf6ec5aa45d..84bb707bd88d 100644 --- a/net/ipv4/fib_frontend.c +++ b/net/ipv4/fib_frontend.c @@ -292,7 +292,7 @@ __be32 fib_compute_spec_dst(struct sk_buff *skb) .flowi4_iif = LOOPBACK_IFINDEX, .flowi4_oif = l3mdev_master_ifindex_rcu(dev), .daddr = ip_hdr(skb)->saddr, - .flowi4_tos = RT_TOS(ip_hdr(skb)->tos), + .flowi4_tos = ip_hdr(skb)->tos & IPTOS_RT_MASK, .flowi4_scope = scope, .flowi4_mark = vmark ? skb->mark : 0, }; diff --git a/net/ipv4/gre_demux.c b/net/ipv4/gre_demux.c index 66fdbfe5447c..5d1e6fe9d838 100644 --- a/net/ipv4/gre_demux.c +++ b/net/ipv4/gre_demux.c @@ -128,7 +128,7 @@ int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi, * to 0 and sets the configured key in the * inner erspan header field */ - if (greh->protocol == htons(ETH_P_ERSPAN) || + if ((greh->protocol == htons(ETH_P_ERSPAN) && hdr_len != 4) || greh->protocol == htons(ETH_P_ERSPAN2)) { struct erspan_base_hdr *ershdr; diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c index fd8b8800a2c3..6bd7ca09af03 100644 --- a/net/ipv4/inet_connection_sock.c +++ b/net/ipv4/inet_connection_sock.c @@ -851,6 +851,7 @@ struct sock *inet_csk_clone_lock(const struct sock *sk, newicsk->icsk_retransmits = 0; newicsk->icsk_backoff = 0; newicsk->icsk_probes_out = 0; + newicsk->icsk_probes_tstamp = 0; /* Deinitialize accept_queue to trap illegal accesses. 
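Several hunks in this series (fib_compute_spec_dst() above, ipt_rpfilter and udp_v4_early_demux() below) replace RT_TOS(tos) with `tos & IPTOS_RT_MASK`. If I read the uapi headers correctly, RT_TOS() masks with IPTOS_TOS_MASK (0x1E) and therefore keeps bit 1, which belongs to the two-bit ECN field, so a congestion-marked packet could perturb the route lookup; IPTOS_RT_MASK (0x1C) clears both ECN bits:

    #include <stdio.h>

    #define IPTOS_TOS_MASK 0x1E                  /* per uapi/linux/ip.h */
    #define IPTOS_RT_MASK  (IPTOS_TOS_MASK & ~3) /* 0x1C */
    #define RT_TOS(tos)    ((tos) & IPTOS_TOS_MASK)

    int main(void)
    {
        unsigned char tos = 0x03;   /* ECN CE mark, TOS bits clear */

        printf("RT_TOS:        %#04x\n", RT_TOS(tos));          /* 0x02 */
        printf("IPTOS_RT_MASK: %#04x\n", tos & IPTOS_RT_MASK);  /* 0 */
        return 0;
    }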
*/ memset(&newicsk->icsk_accept_queue, 0, sizeof(newicsk->icsk_accept_queue)); diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index 89fff5f59eea..2ed0b01f72f0 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -302,7 +302,7 @@ static int __ip_finish_output(struct net *net, struct sock *sk, struct sk_buff * if (skb_is_gso(skb)) return ip_finish_output_gso(net, sk, skb, mtu); - if (skb->len > mtu || (IPCB(skb)->flags & IPSKB_FRAG_PMTU)) + if (skb->len > mtu || IPCB(skb)->frag_max_size) return ip_fragment(net, sk, skb, mtu, ip_finish_output2); return ip_finish_output2(net, sk, skb); diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c index ee65c9225178..64594aa755f0 100644 --- a/net/ipv4/ip_tunnel.c +++ b/net/ipv4/ip_tunnel.c @@ -759,8 +759,11 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev, goto tx_error; } - if (tnl_update_pmtu(dev, skb, rt, tnl_params->frag_off, inner_iph, - 0, 0, false)) { + df = tnl_params->frag_off; + if (skb->protocol == htons(ETH_P_IP) && !tunnel->ignore_df) + df |= (inner_iph->frag_off & htons(IP_DF)); + + if (tnl_update_pmtu(dev, skb, rt, df, inner_iph, 0, 0, false)) { ip_rt_put(rt); goto tx_error; } @@ -788,10 +791,6 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev, ttl = ip4_dst_hoplimit(&rt->dst); } - df = tnl_params->frag_off; - if (skb->protocol == htons(ETH_P_IP) && !tunnel->ignore_df) - df |= (inner_iph->frag_off&htons(IP_DF)); - max_headroom = LL_RESERVED_SPACE(rt->dst.dev) + sizeof(struct iphdr) + rt->dst.header_len + ip_encap_hlen(&tunnel->encap); if (max_headroom > dev->needed_headroom) diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c index 563b62b76a5f..c576a63d09db 100644 --- a/net/ipv4/netfilter/arp_tables.c +++ b/net/ipv4/netfilter/arp_tables.c @@ -1379,7 +1379,7 @@ static int compat_get_entries(struct net *net, xt_compat_lock(NFPROTO_ARP); t = xt_find_table_lock(net, NFPROTO_ARP, get.name); if (!IS_ERR(t)) { - const struct xt_table_info *private = t->private; + const struct xt_table_info *private = xt_table_get_private_protected(t); struct xt_table_info info; ret = compat_table_info(private, &info); diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c index 6e2851f8d3a3..e8f6f9d86237 100644 --- a/net/ipv4/netfilter/ip_tables.c +++ b/net/ipv4/netfilter/ip_tables.c @@ -1589,7 +1589,7 @@ compat_get_entries(struct net *net, struct compat_ipt_get_entries __user *uptr, xt_compat_lock(AF_INET); t = xt_find_table_lock(net, AF_INET, get.name); if (!IS_ERR(t)) { - const struct xt_table_info *private = t->private; + const struct xt_table_info *private = xt_table_get_private_protected(t); struct xt_table_info info; ret = compat_table_info(private, &info); if (!ret && get.size == info.size) diff --git a/net/ipv4/netfilter/ipt_rpfilter.c b/net/ipv4/netfilter/ipt_rpfilter.c index cc23f1ce239c..8cd3224d913e 100644 --- a/net/ipv4/netfilter/ipt_rpfilter.c +++ b/net/ipv4/netfilter/ipt_rpfilter.c @@ -76,7 +76,7 @@ static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par) flow.daddr = iph->saddr; flow.saddr = rpfilter_get_saddr(iph->daddr); flow.flowi4_mark = info->flags & XT_RPFILTER_VALID_MARK ? 
skb->mark : 0; - flow.flowi4_tos = RT_TOS(iph->tos); + flow.flowi4_tos = iph->tos & IPTOS_RT_MASK; flow.flowi4_scope = RT_SCOPE_UNIVERSE; flow.flowi4_oif = l3mdev_master_ifindex_rcu(xt_in(par)); diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c index 5e1b22d4f939..e53e43aef785 100644 --- a/net/ipv4/nexthop.c +++ b/net/ipv4/nexthop.c @@ -627,7 +627,7 @@ static int nh_check_attr_group(struct net *net, struct nlattr *tb[], for (i = NHA_GROUP_TYPE + 1; i < __NHA_MAX; ++i) { if (!tb[i]) continue; - if (tb[NHA_FDB]) + if (i == NHA_FDB) continue; NL_SET_ERR_MSG(extack, "No other attributes can be set in nexthop groups"); @@ -1459,8 +1459,10 @@ static struct nexthop *nexthop_create_group(struct net *net, return nh; out_no_nh: - for (; i >= 0; --i) + for (i--; i >= 0; --i) { + list_del(&nhg->nh_entries[i].nh_list); nexthop_put(nhg->nh_entries[i].nh); + } kfree(nhg->spare); kfree(nhg); diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index ed42d2193c5c..32545ecf2ab1 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -2937,6 +2937,7 @@ int tcp_disconnect(struct sock *sk, int flags) icsk->icsk_backoff = 0; icsk->icsk_probes_out = 0; + icsk->icsk_probes_tstamp = 0; icsk->icsk_rto = TCP_TIMEOUT_INIT; icsk->icsk_rto_min = TCP_RTO_MIN; icsk->icsk_delack_max = TCP_DELACK_MAX; diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index c7e16b0ed791..a7dfca0a38cd 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -3384,6 +3384,7 @@ static void tcp_ack_probe(struct sock *sk) return; if (!after(TCP_SKB_CB(head)->end_seq, tcp_wnd_end(tp))) { icsk->icsk_backoff = 0; + icsk->icsk_probes_tstamp = 0; inet_csk_clear_xmit_timer(sk, ICSK_TIME_PROBE0); /* Socket must be waked up by subsequent tcp_data_snd_check(). * This function is not for random using! @@ -4396,10 +4397,9 @@ static void tcp_rcv_spurious_retrans(struct sock *sk, const struct sk_buff *skb) * The receiver remembers and reflects via DSACKs. Leverage the * DSACK state and change the txhash to re-route speculatively. 
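The out_no_nh unwind in nexthop_create_group() above fixes a classic off-by-one: the loop fails at index i before entry i took its reference, so cleanup has to begin at i - 1 (and now also unlinks each entry from its list). A toy refcount sketch of the rule:

    struct item { int refs; };

    static int setup_all(struct item **slots, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            if (!slots[i])      /* hypothetical failure ...           */
                goto unwind;    /* ... before slots[i] was referenced */
            slots[i]->refs++;
        }
        return 0;

    unwind:
        for (i--; i >= 0; --i)  /* "for (; i >= 0; --i)" over-releases */
            slots[i]->refs--;
        return -1;
    }

    int main(void)
    {
        struct item a = {0}, b = {0};
        struct item *slots[3] = { &a, &b, 0 };  /* third slot fails */

        setup_all(slots, 3);
        return a.refs | b.refs; /* 0: each release matches an acquire */
    }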
*/ - if (TCP_SKB_CB(skb)->seq == tcp_sk(sk)->duplicate_sack[0].start_seq) { - sk_rethink_txhash(sk); + if (TCP_SKB_CB(skb)->seq == tcp_sk(sk)->duplicate_sack[0].start_seq && + sk_rethink_txhash(sk)) NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPDUPLICATEDATAREHASH); - } } static void tcp_send_dupack(struct sock *sk, const struct sk_buff *skb) diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 58207c7769d0..777306b5bc22 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -1595,6 +1595,8 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb, tcp_move_syn(newtp, req); ireq->ireq_opt = NULL; } else { + newinet->inet_opt = NULL; + if (!req_unhash && found_dup_sk) { /* This code path should only be executed in the * syncookie case only @@ -1602,8 +1604,6 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb, bh_unlock_sock(newsk); sock_put(newsk); newsk = NULL; - } else { - newinet->inet_opt = NULL; } } return newsk; @@ -1760,6 +1760,7 @@ int tcp_v4_early_demux(struct sk_buff *skb) bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb) { u32 limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf); + u32 tail_gso_size, tail_gso_segs; struct skb_shared_info *shinfo; const struct tcphdr *th; struct tcphdr *thtail; @@ -1767,6 +1768,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb) unsigned int hdrlen; bool fragstolen; u32 gso_segs; + u32 gso_size; int delta; /* In case all data was pulled from skb frags (in __pskb_pull_tail()), @@ -1792,13 +1794,6 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb) */ th = (const struct tcphdr *)skb->data; hdrlen = th->doff * 4; - shinfo = skb_shinfo(skb); - - if (!shinfo->gso_size) - shinfo->gso_size = skb->len - hdrlen; - - if (!shinfo->gso_segs) - shinfo->gso_segs = 1; tail = sk->sk_backlog.tail; if (!tail) @@ -1821,6 +1816,15 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb) goto no_coalesce; __skb_pull(skb, hdrlen); + + shinfo = skb_shinfo(skb); + gso_size = shinfo->gso_size ?: skb->len; + gso_segs = shinfo->gso_segs ?: 1; + + shinfo = skb_shinfo(tail); + tail_gso_size = shinfo->gso_size ?: (tail->len - hdrlen); + tail_gso_segs = shinfo->gso_segs ?: 1; + if (skb_try_coalesce(tail, skb, &fragstolen, &delta)) { TCP_SKB_CB(tail)->end_seq = TCP_SKB_CB(skb)->end_seq; @@ -1847,11 +1851,8 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb) } /* Not as strict as GRO. We only need to carry mss max value */ - skb_shinfo(tail)->gso_size = max(shinfo->gso_size, - skb_shinfo(tail)->gso_size); - - gso_segs = skb_shinfo(tail)->gso_segs + shinfo->gso_segs; - skb_shinfo(tail)->gso_segs = min_t(u32, gso_segs, 0xFFFF); + shinfo->gso_size = max(gso_size, tail_gso_size); + shinfo->gso_segs = min_t(u32, gso_segs + tail_gso_segs, 0xFFFF); sk->sk_backlog.len += delta; __NET_INC_STATS(sock_net(sk), diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index f322e798a351..ab458697881e 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -4084,6 +4084,7 @@ void tcp_send_probe0(struct sock *sk) /* Cancel probe timer, if it is not required. 
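The tcp_add_backlog() rework above stops writing default gso_size/gso_segs into the incoming skb's shared info, apparently because that skb may be shared, and instead snapshots both buffers' values into locals before attempting the coalesce; the merged segment count is then saturated to fit the 16-bit gso_segs field. A toy version of the bookkeeping:

    #include <stdio.h>

    struct buf { unsigned int segs, size; };

    static unsigned short sat_add_u16(unsigned int a, unsigned int b)
    {
        unsigned int sum = a + b;

        return sum > 0xFFFF ? 0xFFFF : sum; /* min_t(u32, sum, 0xFFFF) */
    }

    int main(void)
    {
        struct buf tail = { .segs = 0xFFF0, .size = 1448 };
        struct buf skb  = { .segs = 0x0020, .size = 1400 };

        /* snapshot before any call that might consume "skb" */
        unsigned int segs = skb.segs, size = skb.size;

        tail.size = size > tail.size ? size : tail.size;
        tail.segs = sat_add_u16(segs, tail.segs);
        printf("segs=%u size=%u\n", tail.segs, tail.size); /* 65535 1448 */
        return 0;
    }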
*/ icsk->icsk_probes_out = 0; icsk->icsk_backoff = 0; + icsk->icsk_probes_tstamp = 0; return; } diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index 6c62b9ea1320..faa92948441b 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -219,14 +219,8 @@ static int tcp_write_timeout(struct sock *sk) int retry_until; if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) { - if (icsk->icsk_retransmits) { - dst_negative_advice(sk); - } else { - sk_rethink_txhash(sk); - tp->timeout_rehash++; - __NET_INC_STATS(sock_net(sk), - LINUX_MIB_TCPTIMEOUTREHASH); - } + if (icsk->icsk_retransmits) + __dst_negative_advice(sk); retry_until = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_syn_retries; expired = icsk->icsk_retransmits >= retry_until; } else { @@ -234,12 +228,7 @@ static int tcp_write_timeout(struct sock *sk) /* Black hole detection */ tcp_mtu_probing(icsk, sk); - dst_negative_advice(sk); - } else { - sk_rethink_txhash(sk); - tp->timeout_rehash++; - __NET_INC_STATS(sock_net(sk), - LINUX_MIB_TCPTIMEOUTREHASH); + __dst_negative_advice(sk); } retry_until = net->ipv4.sysctl_tcp_retries2; @@ -270,6 +259,11 @@ static int tcp_write_timeout(struct sock *sk) return 1; } + if (sk_rethink_txhash(sk)) { + tp->timeout_rehash++; + __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPTIMEOUTREHASH); + } + return 0; } @@ -349,6 +343,7 @@ static void tcp_probe_timer(struct sock *sk) if (tp->packets_out || !skb) { icsk->icsk_probes_out = 0; + icsk->icsk_probes_tstamp = 0; return; } @@ -360,13 +355,12 @@ static void tcp_probe_timer(struct sock *sk) * corresponding system limit. We also implement similar policy when * we use RTO to probe window in tcp_retransmit_timer(). */ - if (icsk->icsk_user_timeout) { - u32 elapsed = tcp_model_timeout(sk, icsk->icsk_probes_out, - tcp_probe0_base(sk)); - - if (elapsed >= icsk->icsk_user_timeout) - goto abort; - } + if (!icsk->icsk_probes_tstamp) + icsk->icsk_probes_tstamp = tcp_jiffies32; + else if (icsk->icsk_user_timeout && + (s32)(tcp_jiffies32 - icsk->icsk_probes_tstamp) >= + msecs_to_jiffies(icsk->icsk_user_timeout)) + goto abort; max_probes = sock_net(sk)->ipv4.sysctl_tcp_retries2; if (sock_flag(sk, SOCK_DEAD)) { diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index 7103b0a89756..69ea76578abb 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -2555,7 +2555,8 @@ int udp_v4_early_demux(struct sk_buff *skb) */ if (!inet_sk(sk)->inet_daddr && in_dev) return ip_mc_validate_source(skb, iph->daddr, - iph->saddr, iph->tos, + iph->saddr, + iph->tos & IPTOS_RT_MASK, skb->dev, in_dev, &itag); } return 0; diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index eff2cacd5209..9edc5bb2d531 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -2467,8 +2467,9 @@ static void addrconf_add_mroute(struct net_device *dev) .fc_ifindex = dev->ifindex, .fc_dst_len = 8, .fc_flags = RTF_UP, - .fc_type = RTN_UNICAST, + .fc_type = RTN_MULTICAST, .fc_nlinfo.nl_net = dev_net(dev), + .fc_protocol = RTPROT_KERNEL, }; ipv6_addr_set(&cfg.fc_dst, htonl(0xFF000000), 0, 0, 0); diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c index 52c2f063529f..2b804fcebcc6 100644 --- a/net/ipv6/esp6.c +++ b/net/ipv6/esp6.c @@ -478,7 +478,6 @@ static int esp6_output_encap(struct xfrm_state *x, struct sk_buff *skb, int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp) { u8 *tail; - u8 *vaddr; int nfrags; int esph_offset; struct page *page; @@ -519,14 +518,10 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info page = pfrag->page; 
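The tcp_probe_timer() change above replaces the probe-count-based user timeout with a timestamp taken at the first zero-window probe, compared as `(s32)(tcp_jiffies32 - tstamp)`. That signed-difference idiom stays correct across wraparound of a free-running 32-bit clock, as long as real gaps stay under 2^31 ticks; a compact demonstration:

    #include <stdio.h>
    #include <stdint.h>

    static int timed_out(uint32_t now, uint32_t start, uint32_t timeout)
    {
        /* unsigned subtraction wraps; the signed cast recovers the gap */
        return (int32_t)(now - start) >= (int32_t)timeout;
    }

    int main(void)
    {
        uint32_t start = 0xFFFFFFF0u;   /* just before the counter wraps */
        uint32_t now = 0x00000020u;     /* just after: 0x30 ticks elapsed */

        printf("%d\n", timed_out(now, start, 0x10));  /* 1: expired */
        printf("%d\n", timed_out(now, start, 0x100)); /* 0: not yet */
        return 0;
    }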
get_page(page); - vaddr = kmap_atomic(page); - - tail = vaddr + pfrag->offset; + tail = page_address(page) + pfrag->offset; esp_output_fill_trailer(tail, esp->tfclen, esp->plen, esp->proto); - kunmap_atomic(vaddr); - nfrags = skb_shinfo(skb)->nr_frags; __skb_fill_page_desc(skb, nfrags, page, pfrag->offset, diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c index 605cdd38a919..f43e27555725 100644 --- a/net/ipv6/ip6_fib.c +++ b/net/ipv6/ip6_fib.c @@ -1025,6 +1025,8 @@ static void fib6_purge_rt(struct fib6_info *rt, struct fib6_node *fn, { struct fib6_table *table = rt->fib6_table; + /* Flush all cached dst in exception table */ + rt6_flush_exceptions(rt); fib6_drop_pcpu_from(rt, table); if (rt->nh && !list_empty(&rt->nh_list)) @@ -1927,9 +1929,6 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn, net->ipv6.rt6_stats->fib_rt_entries--; net->ipv6.rt6_stats->fib_discarded_routes++; - /* Flush all cached dst in exception table */ - rt6_flush_exceptions(rt); - /* Reset round-robin state, if necessary */ if (rcu_access_pointer(fn->rr_ptr) == rt) fn->rr_ptr = NULL; diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c index 749ad72386b2..077d43af8226 100644 --- a/net/ipv6/ip6_output.c +++ b/net/ipv6/ip6_output.c @@ -125,8 +125,43 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff * return -EINVAL; } +static int +ip6_finish_output_gso_slowpath_drop(struct net *net, struct sock *sk, + struct sk_buff *skb, unsigned int mtu) +{ + struct sk_buff *segs, *nskb; + netdev_features_t features; + int ret = 0; + + /* Please see corresponding comment in ip_finish_output_gso + * describing the cases where GSO segment length exceeds the + * egress MTU. + */ + features = netif_skb_features(skb); + segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK); + if (IS_ERR_OR_NULL(segs)) { + kfree_skb(skb); + return -ENOMEM; + } + + consume_skb(skb); + + skb_list_walk_safe(segs, segs, nskb) { + int err; + + skb_mark_not_on_list(segs); + err = ip6_fragment(net, sk, segs, ip6_finish_output2); + if (err && ret == 0) + ret = err; + } + + return ret; +} + static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff *skb) { + unsigned int mtu; + #if defined(CONFIG_NETFILTER) && defined(CONFIG_XFRM) /* Policy lookup after SNAT yielded a new policy */ if (skb_dst(skb)->xfrm) { @@ -135,7 +170,11 @@ static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff } #endif - if ((skb->len > ip6_skb_dst_mtu(skb) && !skb_is_gso(skb)) || + mtu = ip6_skb_dst_mtu(skb); + if (skb_is_gso(skb) && !skb_gso_validate_network_len(skb, mtu)) + return ip6_finish_output_gso_slowpath_drop(net, sk, skb, mtu); + + if ((skb->len > mtu && !skb_is_gso(skb)) || dst_allfrag(skb_dst(skb)) || (IP6CB(skb)->frag_max_size && skb->len > IP6CB(skb)->frag_max_size)) return ip6_fragment(net, sk, skb, ip6_finish_output2); diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c index c4f532f4d311..0d453fa9e327 100644 --- a/net/ipv6/netfilter/ip6_tables.c +++ b/net/ipv6/netfilter/ip6_tables.c @@ -1598,7 +1598,7 @@ compat_get_entries(struct net *net, struct compat_ip6t_get_entries __user *uptr, xt_compat_lock(AF_INET6); t = xt_find_table_lock(net, AF_INET6, get.name); if (!IS_ERR(t)) { - const struct xt_table_info *private = t->private; + const struct xt_table_info *private = xt_table_get_private_protected(t); struct xt_table_info info; ret = compat_table_info(private, &info); if (!ret && get.size == info.size) diff --git a/net/ipv6/sit.c 
b/net/ipv6/sit.c index 2da0ee703779..93636867aee2 100644 --- a/net/ipv6/sit.c +++ b/net/ipv6/sit.c @@ -1645,8 +1645,11 @@ static int ipip6_newlink(struct net *src_net, struct net_device *dev, } #ifdef CONFIG_IPV6_SIT_6RD - if (ipip6_netlink_6rd_parms(data, &ip6rd)) + if (ipip6_netlink_6rd_parms(data, &ip6rd)) { err = ipip6_tunnel_update_6rd(nt, &ip6rd); + if (err < 0) + unregister_netdevice_queue(dev, NULL); + } #endif return err; diff --git a/net/lapb/lapb_iface.c b/net/lapb/lapb_iface.c index 213ea7abc9ab..40961889e9c0 100644 --- a/net/lapb/lapb_iface.c +++ b/net/lapb/lapb_iface.c @@ -489,6 +489,7 @@ static int lapb_device_event(struct notifier_block *this, unsigned long event, break; } + lapb_put(lapb); return NOTIFY_DONE; } diff --git a/net/mac80211/debugfs.c b/net/mac80211/debugfs.c index 48f144f107d5..9e723d943421 100644 --- a/net/mac80211/debugfs.c +++ b/net/mac80211/debugfs.c @@ -120,18 +120,17 @@ static ssize_t aqm_write(struct file *file, { struct ieee80211_local *local = file->private_data; char buf[100]; - size_t len; - if (count > sizeof(buf)) + if (count >= sizeof(buf)) return -EINVAL; if (copy_from_user(buf, user_buf, count)) return -EFAULT; - buf[sizeof(buf) - 1] = '\0'; - len = strlen(buf); - if (len > 0 && buf[len-1] == '\n') - buf[len-1] = 0; + if (count && buf[count - 1] == '\n') + buf[count - 1] = '\0'; + else + buf[count] = '\0'; if (sscanf(buf, "fq_limit %u", &local->fq.limit) == 1) return count; @@ -177,18 +176,17 @@ static ssize_t airtime_flags_write(struct file *file, { struct ieee80211_local *local = file->private_data; char buf[16]; - size_t len; - if (count > sizeof(buf)) + if (count >= sizeof(buf)) return -EINVAL; if (copy_from_user(buf, user_buf, count)) return -EFAULT; - buf[sizeof(buf) - 1] = 0; - len = strlen(buf); - if (len > 0 && buf[len - 1] == '\n') - buf[len - 1] = 0; + if (count && buf[count - 1] == '\n') + buf[count - 1] = '\0'; + else + buf[count] = '\0'; if (kstrtou16(buf, 0, &local->airtime_flags)) return -EINVAL; @@ -237,20 +235,19 @@ static ssize_t aql_txq_limit_write(struct file *file, { struct ieee80211_local *local = file->private_data; char buf[100]; - size_t len; u32 ac, q_limit_low, q_limit_high, q_limit_low_old, q_limit_high_old; struct sta_info *sta; - if (count > sizeof(buf)) + if (count >= sizeof(buf)) return -EINVAL; if (copy_from_user(buf, user_buf, count)) return -EFAULT; - buf[sizeof(buf) - 1] = 0; - len = strlen(buf); - if (len > 0 && buf[len - 1] == '\n') - buf[len - 1] = 0; + if (count && buf[count - 1] == '\n') + buf[count - 1] = '\0'; + else + buf[count] = '\0'; if (sscanf(buf, "%u %u %u", &ac, &q_limit_low, &q_limit_high) != 3) return -EINVAL; @@ -306,18 +303,17 @@ static ssize_t force_tx_status_write(struct file *file, { struct ieee80211_local *local = file->private_data; char buf[3]; - size_t len; - if (count > sizeof(buf)) + if (count >= sizeof(buf)) return -EINVAL; if (copy_from_user(buf, user_buf, count)) return -EFAULT; - buf[sizeof(buf) - 1] = '\0'; - len = strlen(buf); - if (len > 0 && buf[len - 1] == '\n') - buf[len - 1] = 0; + if (count && buf[count - 1] == '\n') + buf[count - 1] = '\0'; + else + buf[count] = '\0'; if (buf[0] == '0' && buf[1] == '\0') local->force_tx_status = 0; diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c index 13b9bcc4865d..972895e9f22d 100644 --- a/net/mac80211/rx.c +++ b/net/mac80211/rx.c @@ -4176,6 +4176,8 @@ void ieee80211_check_fast_rx(struct sta_info *sta) rcu_read_lock(); key = rcu_dereference(sta->ptk[sta->ptk_idx]); + if (!key) + key = rcu_dereference(sdata->default_unicast_key); 
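The mac80211 debugfs writers above shared a bug: they copied up to sizeof(buf) user bytes and then ran strlen() over a buffer that was NUL-terminated only at its last byte, reading uninitialized stack whenever count was short. The fixed pattern rejects anything that cannot fit together with a terminator and terminates at exactly count bytes; a userspace sketch with memcpy standing in for copy_from_user():

    #include <stdio.h>
    #include <string.h>

    static int parse_write(const char *user_data, size_t count)
    {
        char buf[16];
        unsigned int val;

        if (count >= sizeof(buf))   /* was "count > sizeof(buf)" */
            return -1;
        memcpy(buf, user_data, count);  /* copy_from_user() stand-in */

        if (count && buf[count - 1] == '\n')
            buf[count - 1] = '\0';
        else
            buf[count] = '\0';  /* never read past what was written */

        return sscanf(buf, "limit %u", &val) == 1 ? (int)val : -1;
    }

    int main(void)
    {
        printf("%d\n", parse_write("limit 42\n", 9));   /* 42 */
        return 0;
    }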
if (key) { switch (key->conf.cipher) { case WLAN_CIPHER_SUITE_TKIP: diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c index 6422da6690f7..ebb3228ce971 100644 --- a/net/mac80211/tx.c +++ b/net/mac80211/tx.c @@ -649,7 +649,7 @@ ieee80211_tx_h_select_key(struct ieee80211_tx_data *tx) if (!skip_hw && tx->key && tx->key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE) info->control.hw_key = &tx->key->conf; - } else if (!ieee80211_is_mgmt(hdr->frame_control) && tx->sta && + } else if (ieee80211_is_data_present(hdr->frame_control) && tx->sta && test_sta_flag(tx->sta, WLAN_STA_USES_ENCRYPTION)) { return TX_DROP; } @@ -3809,7 +3809,7 @@ void __ieee80211_schedule_txq(struct ieee80211_hw *hw, * get immediately moved to the back of the list on the next * call to ieee80211_next_txq(). */ - if (txqi->txq.sta && + if (txqi->txq.sta && local->airtime_flags && wiphy_ext_feature_isset(local->hw.wiphy, NL80211_EXT_FEATURE_AIRTIME_FAIRNESS)) list_add(&txqi->schedule_order, @@ -4251,7 +4251,6 @@ netdev_tx_t ieee80211_subif_start_xmit_8023(struct sk_buff *skb, struct ethhdr *ehdr = (struct ethhdr *)skb->data; struct ieee80211_key *key; struct sta_info *sta; - bool offload = true; if (unlikely(skb->len < ETH_HLEN)) { kfree_skb(skb); @@ -4267,18 +4266,22 @@ netdev_tx_t ieee80211_subif_start_xmit_8023(struct sk_buff *skb, if (unlikely(IS_ERR_OR_NULL(sta) || !sta->uploaded || !test_sta_flag(sta, WLAN_STA_AUTHORIZED) || - sdata->control_port_protocol == ehdr->h_proto)) - offload = false; - else if ((key = rcu_dereference(sta->ptk[sta->ptk_idx])) && - (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE) || - key->conf.cipher == WLAN_CIPHER_SUITE_TKIP)) - offload = false; - - if (offload) - ieee80211_8023_xmit(sdata, dev, sta, key, skb); - else - ieee80211_subif_start_xmit(skb, dev); + sdata->control_port_protocol == ehdr->h_proto)) + goto skip_offload; + + key = rcu_dereference(sta->ptk[sta->ptk_idx]); + if (!key) + key = rcu_dereference(sdata->default_unicast_key); + + if (key && (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE) || + key->conf.cipher == WLAN_CIPHER_SUITE_TKIP)) + goto skip_offload; + + ieee80211_8023_xmit(sdata, dev, sta, key, skb); + goto out; +skip_offload: + ieee80211_subif_start_xmit(skb, dev); out: rcu_read_unlock(); diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index 09b19aa2f205..f998a077c7dd 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -427,7 +427,7 @@ static bool mptcp_subflow_active(struct mptcp_subflow_context *subflow) static bool tcp_can_send_ack(const struct sock *ssk) { return !((1 << inet_sk_state_load(ssk)) & - (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE)); + (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE | TCPF_LISTEN)); } static void mptcp_send_ack(struct mptcp_sock *msk) @@ -877,6 +877,9 @@ static void __mptcp_wmem_reserve(struct sock *sk, int size) struct mptcp_sock *msk = mptcp_sk(sk); WARN_ON_ONCE(msk->wmem_reserved); + if (WARN_ON_ONCE(amount < 0)) + amount = 0; + if (amount <= sk->sk_forward_alloc) goto reserve; @@ -1587,7 +1590,7 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL)) return -EOPNOTSUPP; - mptcp_lock_sock(sk, __mptcp_wmem_reserve(sk, len)); + mptcp_lock_sock(sk, __mptcp_wmem_reserve(sk, min_t(size_t, 1 << 20, len))); timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT); @@ -2639,11 +2642,17 @@ static void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk) static int mptcp_disconnect(struct sock *sk, int flags) { - /* 
Should never be called. - * inet_stream_connect() calls ->disconnect, but that - * refers to the subflow socket, not the mptcp one. - */ - WARN_ON_ONCE(1); + struct mptcp_subflow_context *subflow; + struct mptcp_sock *msk = mptcp_sk(sk); + + __mptcp_flush_join_list(msk); + mptcp_for_each_subflow(msk, subflow) { + struct sock *ssk = mptcp_subflow_tcp_sock(subflow); + + lock_sock(ssk); + tcp_disconnect(ssk, flags); + release_sock(ssk); + } return 0; } @@ -3086,6 +3095,14 @@ bool mptcp_finish_join(struct sock *ssk) return true; } +static void mptcp_shutdown(struct sock *sk, int how) +{ + pr_debug("sk=%p, how=%d", sk, how); + + if ((how & SEND_SHUTDOWN) && mptcp_close_state(sk)) + __mptcp_wr_shutdown(sk); +} + static struct proto mptcp_prot = { .name = "MPTCP", .owner = THIS_MODULE, @@ -3095,7 +3112,7 @@ static struct proto mptcp_prot = { .accept = mptcp_accept, .setsockopt = mptcp_setsockopt, .getsockopt = mptcp_getsockopt, - .shutdown = tcp_shutdown, + .shutdown = mptcp_shutdown, .destroy = mptcp_destroy, .sendmsg = mptcp_sendmsg, .recvmsg = mptcp_recvmsg, @@ -3341,43 +3358,6 @@ static __poll_t mptcp_poll(struct file *file, struct socket *sock, return mask; } -static int mptcp_shutdown(struct socket *sock, int how) -{ - struct mptcp_sock *msk = mptcp_sk(sock->sk); - struct sock *sk = sock->sk; - int ret = 0; - - pr_debug("sk=%p, how=%d", msk, how); - - lock_sock(sk); - - how++; - if ((how & ~SHUTDOWN_MASK) || !how) { - ret = -EINVAL; - goto out_unlock; - } - - if (sock->state == SS_CONNECTING) { - if ((1 << sk->sk_state) & - (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_CLOSE)) - sock->state = SS_DISCONNECTING; - else - sock->state = SS_CONNECTED; - } - - sk->sk_shutdown |= how; - if ((how & SEND_SHUTDOWN) && mptcp_close_state(sk)) - __mptcp_wr_shutdown(sk); - - /* Wake up anyone sleeping in poll. */ - sk->sk_state_change(sk); - -out_unlock: - release_sock(sk); - - return ret; -} - static const struct proto_ops mptcp_stream_ops = { .family = PF_INET, .owner = THIS_MODULE, @@ -3391,7 +3371,7 @@ static const struct proto_ops mptcp_stream_ops = { .ioctl = inet_ioctl, .gettstamp = sock_gettstamp, .listen = mptcp_listen, - .shutdown = mptcp_shutdown, + .shutdown = inet_shutdown, .setsockopt = sock_common_setsockopt, .getsockopt = sock_common_getsockopt, .sendmsg = inet_sendmsg, @@ -3441,7 +3421,7 @@ static const struct proto_ops mptcp_v6_stream_ops = { .ioctl = inet6_ioctl, .gettstamp = sock_gettstamp, .listen = mptcp_listen, - .shutdown = mptcp_shutdown, + .shutdown = inet_shutdown, .setsockopt = sock_common_setsockopt, .getsockopt = sock_common_getsockopt, .sendmsg = inet6_sendmsg, diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c index 5b1f4ec66dd9..888ccc2d4e34 100644 --- a/net/ncsi/ncsi-rsp.c +++ b/net/ncsi/ncsi-rsp.c @@ -1120,7 +1120,7 @@ int ncsi_rcv_rsp(struct sk_buff *skb, struct net_device *dev, int payload, i, ret; /* Find the NCSI device */ - nd = ncsi_find_dev(dev); + nd = ncsi_find_dev(orig_dev); ndp = nd ? 
TO_NCSI_DEV_PRIV(nd) : NULL; if (!ndp) return -ENODEV; diff --git a/net/netfilter/ipset/ip_set_hash_gen.h b/net/netfilter/ipset/ip_set_hash_gen.h index 5f1208ad049e..6186358eac7c 100644 --- a/net/netfilter/ipset/ip_set_hash_gen.h +++ b/net/netfilter/ipset/ip_set_hash_gen.h @@ -141,20 +141,6 @@ htable_size(u8 hbits) return hsize * sizeof(struct hbucket *) + sizeof(struct htable); } -/* Compute htable_bits from the user input parameter hashsize */ -static u8 -htable_bits(u32 hashsize) -{ - /* Assume that hashsize == 2^htable_bits */ - u8 bits = fls(hashsize - 1); - - if (jhash_size(bits) != hashsize) - /* Round up to the first 2^n value */ - bits = fls(hashsize); - - return bits; -} - #ifdef IP_SET_HASH_WITH_NETS #if IPSET_NET_COUNT > 1 #define __CIDR(cidr, i) (cidr[i]) @@ -640,7 +626,7 @@ mtype_resize(struct ip_set *set, bool retried) struct htype *h = set->data; struct htable *t, *orig; u8 htable_bits; - size_t dsize = set->dsize; + size_t hsize, dsize = set->dsize; #ifdef IP_SET_HASH_WITH_NETS u8 flags; struct mtype_elem *tmp; @@ -664,14 +650,12 @@ mtype_resize(struct ip_set *set, bool retried) retry: ret = 0; htable_bits++; - if (!htable_bits) { - /* In case we have plenty of memory :-) */ - pr_warn("Cannot increase the hashsize of set %s further\n", - set->name); - ret = -IPSET_ERR_HASH_FULL; - goto out; - } - t = ip_set_alloc(htable_size(htable_bits)); + if (!htable_bits) + goto hbwarn; + hsize = htable_size(htable_bits); + if (!hsize) + goto hbwarn; + t = ip_set_alloc(hsize); if (!t) { ret = -ENOMEM; goto out; @@ -813,6 +797,12 @@ cleanup: if (ret == -EAGAIN) goto retry; goto out; + +hbwarn: + /* In case we have plenty of memory :-) */ + pr_warn("Cannot increase the hashsize of set %s further\n", set->name); + ret = -IPSET_ERR_HASH_FULL; + goto out; } /* Get the current number of elements and ext_size in the set */ @@ -1521,7 +1511,11 @@ IPSET_TOKEN(HTYPE, _create)(struct net *net, struct ip_set *set, if (!h) return -ENOMEM; - hbits = htable_bits(hashsize); + /* Compute htable_bits from the user input parameter hashsize. + * Assume that hashsize == 2^htable_bits, + * otherwise round up to the first 2^n value. 
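+ * (fls(hashsize - 1) equals ceil(log2(hashsize)) for any hashsize >= 2,
+ * so a power of two maps to itself and anything else rounds up.)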
+ */ + hbits = fls(hashsize - 1); hsize = htable_size(hbits); if (hsize == 0) { kfree(h); diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c index 46c5557c1fec..0ee702d374b0 100644 --- a/net/netfilter/nf_conntrack_standalone.c +++ b/net/netfilter/nf_conntrack_standalone.c @@ -523,6 +523,9 @@ nf_conntrack_hash_sysctl(struct ctl_table *table, int write, { int ret; + /* module_param hashsize could have changed value */ + nf_conntrack_htable_size_user = nf_conntrack_htable_size; + ret = proc_dointvec(table, write, buffer, lenp, ppos); if (ret < 0 || !write) return ret; diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c index ea923f8cf9c4..b7c3c902290f 100644 --- a/net/netfilter/nf_nat_core.c +++ b/net/netfilter/nf_nat_core.c @@ -1174,6 +1174,7 @@ static int __init nf_nat_init(void) ret = register_pernet_subsys(&nat_net_ops); if (ret < 0) { nf_ct_extend_unregister(&nat_extend); + kvfree(nf_nat_bysource); return ret; } diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 8d5aa0ac45f4..15c467f1a9dd 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -4162,7 +4162,7 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk, if (flags & ~(NFT_SET_ANONYMOUS | NFT_SET_CONSTANT | NFT_SET_INTERVAL | NFT_SET_TIMEOUT | NFT_SET_MAP | NFT_SET_EVAL | - NFT_SET_OBJECT | NFT_SET_CONCAT)) + NFT_SET_OBJECT | NFT_SET_CONCAT | NFT_SET_EXPR)) return -EOPNOTSUPP; /* Only one of these operations is supported */ if ((flags & (NFT_SET_MAP | NFT_SET_OBJECT)) == @@ -4304,6 +4304,10 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk, struct nlattr *tmp; int left; + if (!(flags & NFT_SET_EXPR)) { + err = -EINVAL; + goto err_set_alloc_name; + } i = 0; nla_for_each_nested(tmp, nla[NFTA_SET_EXPRESSIONS], left) { if (i == NFT_SET_EXPR_MAX) { @@ -5254,8 +5258,8 @@ static int nft_set_elem_expr_clone(const struct nft_ctx *ctx, return 0; err_expr: - for (k = i - 1; k >= 0; k++) - nft_expr_destroy(ctx, expr_array[i]); + for (k = i - 1; k >= 0; k--) + nft_expr_destroy(ctx, expr_array[k]); return -ENOMEM; } diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c index 983a1d5ca3ab..0b053f75cd60 100644 --- a/net/netfilter/nft_dynset.c +++ b/net/netfilter/nft_dynset.c @@ -19,6 +19,7 @@ struct nft_dynset { enum nft_registers sreg_key:8; enum nft_registers sreg_data:8; bool invert; + bool expr; u8 num_exprs; u64 timeout; struct nft_expr *expr_array[NFT_SET_EXPR_MAX]; @@ -175,11 +176,12 @@ static int nft_dynset_init(const struct nft_ctx *ctx, if (tb[NFTA_DYNSET_FLAGS]) { u32 flags = ntohl(nla_get_be32(tb[NFTA_DYNSET_FLAGS])); - - if (flags & ~NFT_DYNSET_F_INV) - return -EINVAL; + if (flags & ~(NFT_DYNSET_F_INV | NFT_DYNSET_F_EXPR)) + return -EOPNOTSUPP; if (flags & NFT_DYNSET_F_INV) priv->invert = true; + if (flags & NFT_DYNSET_F_EXPR) + priv->expr = true; } set = nft_set_lookup_global(ctx->net, ctx->table, @@ -210,7 +212,7 @@ static int nft_dynset_init(const struct nft_ctx *ctx, timeout = 0; if (tb[NFTA_DYNSET_TIMEOUT] != NULL) { if (!(set->flags & NFT_SET_TIMEOUT)) - return -EINVAL; + return -EOPNOTSUPP; err = nf_msecs_to_jiffies64(tb[NFTA_DYNSET_TIMEOUT], &timeout); if (err) @@ -224,7 +226,7 @@ static int nft_dynset_init(const struct nft_ctx *ctx, if (tb[NFTA_DYNSET_SREG_DATA] != NULL) { if (!(set->flags & NFT_SET_MAP)) - return -EINVAL; + return -EOPNOTSUPP; if (set->dtype == NFT_DATA_VERDICT) return -EOPNOTSUPP; @@ -261,6 +263,9 @@ static int nft_dynset_init(const struct 
nft_ctx *ctx, struct nlattr *tmp; int left; + if (!priv->expr) + return -EINVAL; + i = 0; nla_for_each_nested(tmp, tb[NFTA_DYNSET_EXPRESSIONS], left) { if (i == NFT_SET_EXPR_MAX) { diff --git a/net/netfilter/xt_RATEEST.c b/net/netfilter/xt_RATEEST.c index 37253d399c6b..0d5c422f8745 100644 --- a/net/netfilter/xt_RATEEST.c +++ b/net/netfilter/xt_RATEEST.c @@ -115,6 +115,9 @@ static int xt_rateest_tg_checkentry(const struct xt_tgchk_param *par) } cfg; int ret; + if (strnlen(info->name, sizeof(est->name)) >= sizeof(est->name)) + return -ENAMETOOLONG; + net_get_random_once(&jhash_rnd, sizeof(jhash_rnd)); mutex_lock(&xn->hash_lock); diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c index e64727e1a72f..02a1f13f0798 100644 --- a/net/nfc/nci/core.c +++ b/net/nfc/nci/core.c @@ -508,7 +508,7 @@ static int nci_open_device(struct nci_dev *ndev) }; unsigned long opt = 0; - if (!(ndev->nci_ver & NCI_VER_2_MASK)) + if (ndev->nci_ver & NCI_VER_2_MASK) opt = (unsigned long)&nci_init_v2_cmd; rc = __nci_request(ndev, nci_init_req, opt, diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c index de8e8dbbdeb8..6bbc7a448593 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -4595,7 +4595,9 @@ static void packet_seq_stop(struct seq_file *seq, void *v) static int packet_seq_show(struct seq_file *seq, void *v) { if (v == SEQ_START_TOKEN) - seq_puts(seq, "sk RefCnt Type Proto Iface R Rmem User Inode\n"); + seq_printf(seq, + "%*sRefCnt Type Proto Iface R Rmem User Inode\n", + IS_ENABLED(CONFIG_64BIT) ? -17 : -9, "sk"); else { struct sock *s = sk_entry(v); const struct packet_sock *po = pkt_sk(s); diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c index 56aaf8cb6527..8d00dfe8139e 100644 --- a/net/qrtr/ns.c +++ b/net/qrtr/ns.c @@ -755,7 +755,7 @@ static void qrtr_ns_data_ready(struct sock *sk) queue_work(qrtr_ns.workqueue, &qrtr_ns.work); } -void qrtr_ns_init(void) +int qrtr_ns_init(void) { struct sockaddr_qrtr sq; int ret; @@ -766,7 +766,7 @@ void qrtr_ns_init(void) ret = sock_create_kern(&init_net, AF_QIPCRTR, SOCK_DGRAM, PF_QIPCRTR, &qrtr_ns.sock); if (ret < 0) - return; + return ret; ret = kernel_getsockname(qrtr_ns.sock, (struct sockaddr *)&sq); if (ret < 0) { @@ -797,12 +797,13 @@ void qrtr_ns_init(void) if (ret < 0) goto err_wq; - return; + return 0; err_wq: destroy_workqueue(qrtr_ns.workqueue); err_sock: sock_release(qrtr_ns.sock); + return ret; } EXPORT_SYMBOL_GPL(qrtr_ns_init); diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c index f4ab3ca6d73b..b34358282f37 100644 --- a/net/qrtr/qrtr.c +++ b/net/qrtr/qrtr.c @@ -1287,13 +1287,19 @@ static int __init qrtr_proto_init(void) return rc; rc = sock_register(&qrtr_family); - if (rc) { - proto_unregister(&qrtr_proto); - return rc; - } + if (rc) + goto err_proto; - qrtr_ns_init(); + rc = qrtr_ns_init(); + if (rc) + goto err_sock; + return 0; + +err_sock: + sock_unregister(qrtr_family.family); +err_proto: + proto_unregister(&qrtr_proto); return rc; } postcore_initcall(qrtr_proto_init); diff --git a/net/qrtr/qrtr.h b/net/qrtr/qrtr.h index dc2b67f17927..3f2d28696062 100644 --- a/net/qrtr/qrtr.h +++ b/net/qrtr/qrtr.h @@ -29,7 +29,7 @@ void qrtr_endpoint_unregister(struct qrtr_endpoint *ep); int qrtr_endpoint_post(struct qrtr_endpoint *ep, const void *data, size_t len); -void qrtr_ns_init(void); +int qrtr_ns_init(void); void qrtr_ns_remove(void); diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 667c44aa5a63..dc201363f2c4 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -430,7 +430,7 @@ static void rxrpc_input_data(struct rxrpc_call *call, 
struct sk_buff *skb) return; } - if (call->state == RXRPC_CALL_SERVER_RECV_REQUEST) { + if (state == RXRPC_CALL_SERVER_RECV_REQUEST) { unsigned long timo = READ_ONCE(call->next_req_timo); unsigned long now, expect_req_by; diff --git a/net/rxrpc/key.c b/net/rxrpc/key.c index 9631aa8543b5..8d2073e0e3da 100644 --- a/net/rxrpc/key.c +++ b/net/rxrpc/key.c @@ -598,7 +598,7 @@ static long rxrpc_read(const struct key *key, default: /* we have a ticket we can't encode */ pr_err("Unsupported key token type (%u)\n", token->security_index); - continue; + return -ENOPKG; } _debug("token[%u]: toksize=%u", ntoks, toksize); @@ -674,7 +674,9 @@ static long rxrpc_read(const struct key *key, break; default: - break; + pr_err("Unsupported key token type (%u)\n", + token->security_index); + return -ENOPKG; } ASSERTCMP((unsigned long)xdr - (unsigned long)oldxdr, ==, diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c index 1319986693fc..84f932532db7 100644 --- a/net/sched/cls_flower.c +++ b/net/sched/cls_flower.c @@ -1272,6 +1272,10 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key, nla_opt_msk = nla_data(tb[TCA_FLOWER_KEY_ENC_OPTS_MASK]); msk_depth = nla_len(tb[TCA_FLOWER_KEY_ENC_OPTS_MASK]); + if (!nla_ok(nla_opt_msk, msk_depth)) { + NL_SET_ERR_MSG(extack, "Invalid nested attribute for masks"); + return -EINVAL; + } } nla_for_each_attr(nla_opt_key, nla_enc_key, @@ -1307,9 +1311,6 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key, NL_SET_ERR_MSG(extack, "Key and mask miss aligned"); return -EINVAL; } - - if (msk_depth) - nla_opt_msk = nla_next(nla_opt_msk, &msk_depth); break; case TCA_FLOWER_KEY_ENC_OPTS_VXLAN: if (key->enc_opts.dst_opt_type) { @@ -1340,9 +1341,6 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key, NL_SET_ERR_MSG(extack, "Key and mask miss aligned"); return -EINVAL; } - - if (msk_depth) - nla_opt_msk = nla_next(nla_opt_msk, &msk_depth); break; case TCA_FLOWER_KEY_ENC_OPTS_ERSPAN: if (key->enc_opts.dst_opt_type) { @@ -1373,14 +1371,20 @@ static int fl_set_enc_opt(struct nlattr **tb, struct fl_flow_key *key, NL_SET_ERR_MSG(extack, "Key and mask miss aligned"); return -EINVAL; } - - if (msk_depth) - nla_opt_msk = nla_next(nla_opt_msk, &msk_depth); break; default: NL_SET_ERR_MSG(extack, "Unknown tunnel option type"); return -EINVAL; } + + if (!msk_depth) + continue; + + if (!nla_ok(nla_opt_msk, msk_depth)) { + NL_SET_ERR_MSG(extack, "A mask attribute is invalid"); + return -EINVAL; + } + nla_opt_msk = nla_next(nla_opt_msk, &msk_depth); } return 0; diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c index 78bec347b8b6..c4007b9cd16d 100644 --- a/net/sched/cls_tcindex.c +++ b/net/sched/cls_tcindex.c @@ -366,9 +366,13 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base, if (tb[TCA_TCINDEX_MASK]) cp->mask = nla_get_u16(tb[TCA_TCINDEX_MASK]); - if (tb[TCA_TCINDEX_SHIFT]) + if (tb[TCA_TCINDEX_SHIFT]) { cp->shift = nla_get_u32(tb[TCA_TCINDEX_SHIFT]); - + if (cp->shift > 16) { + err = -EINVAL; + goto errout; + } + } if (!cp->hash) { /* Hash not specified, use perfect hash if the upper limit * of the hashing index is below the threshold. 
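The cls_flower hunk above is worth calling out: instead of advancing the mask cursor inside each option-type case, the advance now happens once at the bottom of the loop, and the cursor is re-validated with nla_ok() before every use. Below is a minimal sketch of that walking pattern, assuming a hypothetical per-option handler handle_one_opt(); it is not the flower code itself, only an illustration of the nla_ok()/nla_next() discipline.

#include <net/netlink.h>

/* Hypothetical per-option handler, stubbed out for the sketch. */
static int handle_one_opt(const struct nlattr *key, const struct nlattr *msk)
{
	return 0;
}

/* Walk a stream of key attributes while keeping a mask cursor in
 * lock step, re-validating the cursor before every dereference.
 */
static int walk_opts_with_mask(const struct nlattr *enc_key, int key_rem,
			       const struct nlattr *enc_msk, int msk_depth)
{
	const struct nlattr *nla_opt_key, *nla_opt_msk = enc_msk;
	int rem;

	/* Validate the mask head once before the first dereference. */
	if (msk_depth && !nla_ok(nla_opt_msk, msk_depth))
		return -EINVAL;

	nla_for_each_attr(nla_opt_key, enc_key, key_rem, rem) {
		int err = handle_one_opt(nla_opt_key, nla_opt_msk);

		if (err)
			return err;

		if (!msk_depth)
			continue;	/* no masks supplied at all */

		/* nla_next() trusts the attribute's own length field,
		 * which is user-controlled, so re-check before advancing.
		 */
		if (!nla_ok(nla_opt_msk, msk_depth))
			return -EINVAL;
		nla_opt_msk = nla_next(nla_opt_msk, &msk_depth);
	}
	return 0;
}

The effect of the fix is that a malformed nested mask attribute can no longer walk the cursor past the end of the buffer between iterations.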
diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c index 51cb553e4317..6fe4e5cc807c 100644 --- a/net/sched/sch_api.c +++ b/net/sched/sch_api.c @@ -412,7 +412,8 @@ struct qdisc_rate_table *qdisc_get_rtab(struct tc_ratespec *r, { struct qdisc_rate_table *rtab; - if (tab == NULL || r->rate == 0 || r->cell_log == 0 || + if (tab == NULL || r->rate == 0 || + r->cell_log == 0 || r->cell_log >= 32 || nla_len(tab) != TC_RTAB_SIZE) { NL_SET_ERR_MSG(extack, "Invalid rate table parameters for searching"); return NULL; diff --git a/net/sched/sch_choke.c b/net/sched/sch_choke.c index bd618b00d319..50f680f03a54 100644 --- a/net/sched/sch_choke.c +++ b/net/sched/sch_choke.c @@ -362,7 +362,7 @@ static int choke_change(struct Qdisc *sch, struct nlattr *opt, ctl = nla_data(tb[TCA_CHOKE_PARMS]); - if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog)) + if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log)) return -EINVAL; if (ctl->limit > CHOKE_MAX_QUEUE) diff --git a/net/sched/sch_gred.c b/net/sched/sch_gred.c index 8599c6f31b05..e0bc77533acc 100644 --- a/net/sched/sch_gred.c +++ b/net/sched/sch_gred.c @@ -480,7 +480,7 @@ static inline int gred_change_vq(struct Qdisc *sch, int dp, struct gred_sched *table = qdisc_priv(sch); struct gred_sched_data *q = table->tab[dp]; - if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog)) { + if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log)) { NL_SET_ERR_MSG_MOD(extack, "invalid RED parameters"); return -EINVAL; } diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c index e89fab6ccb34..b4ae34d7aa96 100644 --- a/net/sched/sch_red.c +++ b/net/sched/sch_red.c @@ -250,7 +250,7 @@ static int __red_change(struct Qdisc *sch, struct nlattr **tb, max_P = tb[TCA_RED_MAX_P] ? 
nla_get_u32(tb[TCA_RED_MAX_P]) : 0; ctl = nla_data(tb[TCA_RED_PARMS]); - if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog)) + if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log)) return -EINVAL; err = red_get_flags(ctl->flags, TC_RED_HISTORIC_FLAGS, diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c index bca2be57d9fc..b25e51440623 100644 --- a/net/sched/sch_sfq.c +++ b/net/sched/sch_sfq.c @@ -647,7 +647,7 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt) } if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max, - ctl_v1->Wlog)) + ctl_v1->Wlog, ctl_v1->Scell_log)) return -EINVAL; if (ctl_v1 && ctl_v1->qth_min) { p = kmalloc(sizeof(*p), GFP_KERNEL); diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c index c74817ec9964..6f775275826a 100644 --- a/net/sched/sch_taprio.c +++ b/net/sched/sch_taprio.c @@ -1605,8 +1605,9 @@ static void taprio_reset(struct Qdisc *sch) hrtimer_cancel(&q->advance_timer); if (q->qdiscs) { - for (i = 0; i < dev->num_tx_queues && q->qdiscs[i]; i++) - qdisc_reset(q->qdiscs[i]); + for (i = 0; i < dev->num_tx_queues; i++) + if (q->qdiscs[i]) + qdisc_reset(q->qdiscs[i]); } sch->qstats.backlog = 0; sch->q.qlen = 0; @@ -1626,7 +1627,7 @@ static void taprio_destroy(struct Qdisc *sch) taprio_disable_offload(dev, q, NULL); if (q->qdiscs) { - for (i = 0; i < dev->num_tx_queues && q->qdiscs[i]; i++) + for (i = 0; i < dev->num_tx_queues; i++) qdisc_put(q->qdiscs[i]); kfree(q->qdiscs); diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c index 59342b519e34..0df85a12651e 100644 --- a/net/smc/smc_core.c +++ b/net/smc/smc_core.c @@ -246,7 +246,8 @@ int smc_nl_get_sys_info(struct sk_buff *skb, struct netlink_callback *cb) goto errattr; smc_clc_get_hostname(&host); if (host) { - snprintf(hostname, sizeof(hostname), "%s", host); + memcpy(hostname, host, SMC_MAX_HOSTNAME_LEN); + hostname[SMC_MAX_HOSTNAME_LEN] = 0; if (nla_put_string(skb, SMC_NLA_SYS_LOCAL_HOST, hostname)) goto errattr; } @@ -257,7 +258,8 @@ int smc_nl_get_sys_info(struct sk_buff *skb, struct netlink_callback *cb) smc_ism_get_system_eid(smcd_dev, &seid); mutex_unlock(&smcd_dev_list.mutex); if (seid && smc_ism_is_v2_capable()) { - snprintf(smc_seid, sizeof(smc_seid), "%s", seid); + memcpy(smc_seid, seid, SMC_MAX_EID_LEN); + smc_seid[SMC_MAX_EID_LEN] = 0; if (nla_put_string(skb, SMC_NLA_SYS_SEID, smc_seid)) goto errattr; } @@ -295,7 +297,8 @@ static int smc_nl_fill_lgr(struct smc_link_group *lgr, goto errattr; if (nla_put_u8(skb, SMC_NLA_LGR_R_VLAN_ID, lgr->vlan_id)) goto errattr; - snprintf(smc_target, sizeof(smc_target), "%s", lgr->pnet_id); + memcpy(smc_target, lgr->pnet_id, SMC_MAX_PNETID_LEN); + smc_target[SMC_MAX_PNETID_LEN] = 0; if (nla_put_string(skb, SMC_NLA_LGR_R_PNETID, smc_target)) goto errattr; @@ -312,7 +315,7 @@ static int smc_nl_fill_lgr_link(struct smc_link_group *lgr, struct sk_buff *skb, struct netlink_callback *cb) { - char smc_ibname[IB_DEVICE_NAME_MAX + 1]; + char smc_ibname[IB_DEVICE_NAME_MAX]; u8 smc_gid_target[41]; struct nlattr *attrs; u32 link_uid = 0; @@ -461,7 +464,8 @@ static int smc_nl_fill_smcd_lgr(struct smc_link_group *lgr, goto errattr; if (nla_put_u32(skb, SMC_NLA_LGR_D_CHID, smc_ism_get_chid(lgr->smcd))) goto errattr; - snprintf(smc_pnet, sizeof(smc_pnet), "%s", lgr->smcd->pnetid); + memcpy(smc_pnet, lgr->smcd->pnetid, SMC_MAX_PNETID_LEN); + smc_pnet[SMC_MAX_PNETID_LEN] = 0; if (nla_put_string(skb, SMC_NLA_LGR_D_PNETID, smc_pnet)) goto errattr; @@ -474,10 +478,12 @@ static int smc_nl_fill_smcd_lgr(struct 
smc_link_group *lgr, goto errv2attr; if (nla_put_u8(skb, SMC_NLA_LGR_V2_OS, lgr->peer_os)) goto errv2attr; - snprintf(smc_host, sizeof(smc_host), "%s", lgr->peer_hostname); + memcpy(smc_host, lgr->peer_hostname, SMC_MAX_HOSTNAME_LEN); + smc_host[SMC_MAX_HOSTNAME_LEN] = 0; if (nla_put_string(skb, SMC_NLA_LGR_V2_PEER_HOST, smc_host)) goto errv2attr; - snprintf(smc_eid, sizeof(smc_eid), "%s", lgr->negotiated_eid); + memcpy(smc_eid, lgr->negotiated_eid, SMC_MAX_EID_LEN); + smc_eid[SMC_MAX_EID_LEN] = 0; if (nla_put_string(skb, SMC_NLA_LGR_V2_NEG_EID, smc_eid)) goto errv2attr; diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c index ddd7fac98b1d..7d7ba0320d5a 100644 --- a/net/smc/smc_ib.c +++ b/net/smc/smc_ib.c @@ -371,8 +371,8 @@ static int smc_nl_handle_dev_port(struct sk_buff *skb, if (nla_put_u8(skb, SMC_NLA_DEV_PORT_PNET_USR, smcibdev->pnetid_by_user[port])) goto errattr; - snprintf(smc_pnet, sizeof(smc_pnet), "%s", - (char *)&smcibdev->pnetid[port]); + memcpy(smc_pnet, &smcibdev->pnetid[port], SMC_MAX_PNETID_LEN); + smc_pnet[SMC_MAX_PNETID_LEN] = 0; if (nla_put_string(skb, SMC_NLA_DEV_PORT_PNETID, smc_pnet)) goto errattr; if (nla_put_u32(skb, SMC_NLA_DEV_PORT_NETDEV, @@ -414,7 +414,7 @@ static int smc_nl_handle_smcr_dev(struct smc_ib_device *smcibdev, struct sk_buff *skb, struct netlink_callback *cb) { - char smc_ibname[IB_DEVICE_NAME_MAX + 1]; + char smc_ibname[IB_DEVICE_NAME_MAX]; struct smc_pci_dev smc_pci_dev; struct pci_dev *pci_dev; unsigned char is_crit; diff --git a/net/smc/smc_ism.c b/net/smc/smc_ism.c index 524ef64a191a..9c6e95882553 100644 --- a/net/smc/smc_ism.c +++ b/net/smc/smc_ism.c @@ -250,7 +250,8 @@ static int smc_nl_handle_smcd_dev(struct smcd_dev *smcd, goto errattr; if (nla_put_u8(skb, SMC_NLA_DEV_PORT_PNET_USR, smcd->pnetid_by_user)) goto errportattr; - snprintf(smc_pnet, sizeof(smc_pnet), "%s", smcd->pnetid); + memcpy(smc_pnet, smcd->pnetid, SMC_MAX_PNETID_LEN); + smc_pnet[SMC_MAX_PNETID_LEN] = 0; if (nla_put_string(skb, SMC_NLA_DEV_PORT_PNETID, smc_pnet)) goto errportattr; diff --git a/net/sunrpc/addr.c b/net/sunrpc/addr.c index 010dcb876f9d..6e4dbd577a39 100644 --- a/net/sunrpc/addr.c +++ b/net/sunrpc/addr.c @@ -185,7 +185,7 @@ static int rpc_parse_scope_id(struct net *net, const char *buf, scope_id = dev->ifindex; dev_put(dev); } else { - if (kstrtou32(p, 10, &scope_id) == 0) { + if (kstrtou32(p, 10, &scope_id) != 0) { kfree(p); return 0; } diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c index 5fb9164aa690..dcc50ae54550 100644 --- a/net/sunrpc/svc_xprt.c +++ b/net/sunrpc/svc_xprt.c @@ -857,6 +857,7 @@ int svc_recv(struct svc_rqst *rqstp, long timeout) err = -EAGAIN; if (len <= 0) goto out_release; + trace_svc_xdr_recvfrom(&rqstp->rq_arg); clear_bit(XPT_OLD, &xprt->xpt_flags); @@ -866,7 +867,6 @@ int svc_recv(struct svc_rqst *rqstp, long timeout) if (serv->sv_stats) serv->sv_stats->netcnt++; - trace_svc_xdr_recvfrom(rqstp, &rqstp->rq_arg); return len; out_release: rqstp->rq_res.len = 0; @@ -904,7 +904,7 @@ int svc_send(struct svc_rqst *rqstp) xb->len = xb->head[0].iov_len + xb->page_len + xb->tail[0].iov_len; - trace_svc_xdr_sendto(rqstp, xb); + trace_svc_xdr_sendto(rqstp->rq_xid, xb); trace_svc_stats_latency(rqstp); len = xprt->xpt_ops->xpo_sendto(rqstp); diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index b248f2349437..c9766d07eb81 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -1062,6 +1062,90 @@ err_noclose: return 0; /* record not complete */ } +static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec, + int 
flags) +{ + return kernel_sendpage(sock, virt_to_page(vec->iov_base), + offset_in_page(vec->iov_base), + vec->iov_len, flags); +} + +/* + * kernel_sendpage() is used exclusively to reduce the number of + * copy operations in this path. Therefore the caller must ensure + * that the pages backing @xdr are unchanging. + * + * In addition, the logic assumes that * .bv_len is never larger + * than PAGE_SIZE. + */ +static int svc_tcp_sendmsg(struct socket *sock, struct msghdr *msg, + struct xdr_buf *xdr, rpc_fraghdr marker, + unsigned int *sentp) +{ + const struct kvec *head = xdr->head; + const struct kvec *tail = xdr->tail; + struct kvec rm = { + .iov_base = &marker, + .iov_len = sizeof(marker), + }; + int flags, ret; + + *sentp = 0; + xdr_alloc_bvec(xdr, GFP_KERNEL); + + msg->msg_flags = MSG_MORE; + ret = kernel_sendmsg(sock, msg, &rm, 1, rm.iov_len); + if (ret < 0) + return ret; + *sentp += ret; + if (ret != rm.iov_len) + return -EAGAIN; + + flags = head->iov_len < xdr->len ? MSG_MORE | MSG_SENDPAGE_NOTLAST : 0; + ret = svc_tcp_send_kvec(sock, head, flags); + if (ret < 0) + return ret; + *sentp += ret; + if (ret != head->iov_len) + goto out; + + if (xdr->page_len) { + unsigned int offset, len, remaining; + struct bio_vec *bvec; + + bvec = xdr->bvec; + offset = xdr->page_base; + remaining = xdr->page_len; + flags = MSG_MORE | MSG_SENDPAGE_NOTLAST; + while (remaining > 0) { + if (remaining <= PAGE_SIZE && tail->iov_len == 0) + flags = 0; + len = min(remaining, bvec->bv_len); + ret = kernel_sendpage(sock, bvec->bv_page, + bvec->bv_offset + offset, + len, flags); + if (ret < 0) + return ret; + *sentp += ret; + if (ret != len) + goto out; + remaining -= len; + offset = 0; + bvec++; + } + } + + if (tail->iov_len) { + ret = svc_tcp_send_kvec(sock, tail, 0); + if (ret < 0) + return ret; + *sentp += ret; + } + +out: + return 0; +} + /** * svc_tcp_sendto - Send out a reply on a TCP socket * @rqstp: completed svc_rqst @@ -1089,7 +1173,7 @@ static int svc_tcp_sendto(struct svc_rqst *rqstp) mutex_lock(&xprt->xpt_mutex); if (svc_xprt_is_dead(xprt)) goto out_notconn; - err = xprt_sock_sendmsg(svsk->sk_sock, &msg, xdr, 0, marker, &sent); + err = svc_tcp_sendmsg(svsk->sk_sock, &msg, xdr, marker, &sent); xdr_free_bvec(xdr); trace_svcsock_tcp_send(xprt, err < 0 ? 
err : sent); if (err < 0 || sent != (xdr->len + sizeof(marker))) diff --git a/net/tipc/link.c b/net/tipc/link.c index 6ae2140eb4f7..115109259430 100644 --- a/net/tipc/link.c +++ b/net/tipc/link.c @@ -1030,7 +1030,6 @@ void tipc_link_reset(struct tipc_link *l) int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list, struct sk_buff_head *xmitq) { - struct tipc_msg *hdr = buf_msg(skb_peek(list)); struct sk_buff_head *backlogq = &l->backlogq; struct sk_buff_head *transmq = &l->transmq; struct sk_buff *skb, *_skb; @@ -1038,13 +1037,18 @@ int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list, u16 ack = l->rcv_nxt - 1; u16 seqno = l->snd_nxt; int pkt_cnt = skb_queue_len(list); - int imp = msg_importance(hdr); unsigned int mss = tipc_link_mss(l); unsigned int cwin = l->window; unsigned int mtu = l->mtu; + struct tipc_msg *hdr; bool new_bundle; int rc = 0; + int imp; + + if (pkt_cnt <= 0) + return 0; + hdr = buf_msg(skb_peek(list)); if (unlikely(msg_size(hdr) > mtu)) { pr_warn("Too large msg, purging xmit list %d %d %d %d %d!\n", skb_queue_len(list), msg_user(hdr), @@ -1053,6 +1057,7 @@ int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list, return -EMSGSIZE; } + imp = msg_importance(hdr); /* Allow oversubscription of one data msg per source at congestion */ if (unlikely(l->backlog[imp].len >= l->backlog[imp].limit)) { if (imp == TIPC_SYSTEM_IMPORTANCE) { @@ -2539,7 +2544,7 @@ void tipc_link_set_queue_limits(struct tipc_link *l, u32 min_win, u32 max_win) } /** - * link_reset_stats - reset link statistics + * tipc_link_reset_stats - reset link statistics * @l: pointer to link */ void tipc_link_reset_stats(struct tipc_link *l) diff --git a/net/tipc/node.c b/net/tipc/node.c index 83d9eb830592..008670d1f43e 100644 --- a/net/tipc/node.c +++ b/net/tipc/node.c @@ -1665,7 +1665,7 @@ static void tipc_lxc_xmit(struct net *peer_net, struct sk_buff_head *list) } /** - * tipc_node_xmit() is the general link level function for message sending + * tipc_node_xmit() - general link level function for message sending * @net: the applicable net namespace * @list: chain of buffers containing message * @dnode: address of destination node diff --git a/net/wireless/Kconfig b/net/wireless/Kconfig index 27026f587fa6..f620acd2a0f5 100644 --- a/net/wireless/Kconfig +++ b/net/wireless/Kconfig @@ -21,6 +21,7 @@ config CFG80211 tristate "cfg80211 - wireless configuration API" depends on RFKILL || !RFKILL select FW_LOADER + select CRC32 # may need to update this when certificates are changed and are # using a different algorithm, though right now they shouldn't # (this is here rather than below to allow it to be a module) diff --git a/net/wireless/reg.c b/net/wireless/reg.c index bb72447ad960..8114bba8556c 100644 --- a/net/wireless/reg.c +++ b/net/wireless/reg.c @@ -5,7 +5,7 @@ * Copyright 2008-2011 Luis R. Rodriguez <[email protected]> * Copyright 2013-2014 Intel Mobile Communications GmbH * Copyright 2017 Intel Deutschland GmbH - * Copyright (C) 2018 - 2019 Intel Corporation + * Copyright (C) 2018 - 2021 Intel Corporation * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above @@ -139,6 +139,11 @@ static const struct ieee80211_regdomain *get_cfg80211_regdom(void) return rcu_dereference_rtnl(cfg80211_regdomain); } +/* + * Returns the regulatory domain associated with the wiphy. 
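+ * The returned value reflects whatever regdomain was last applied to
+ * this wiphy, e.g. a driver-provided custom regdomain.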
+ * + * Requires either RTNL or RCU protection + */ const struct ieee80211_regdomain *get_wiphy_regdom(struct wiphy *wiphy) { return rcu_dereference_rtnl(wiphy->regd); @@ -2571,9 +2576,13 @@ void wiphy_apply_custom_regulatory(struct wiphy *wiphy, if (IS_ERR(new_regd)) return; + rtnl_lock(); + tmp = get_wiphy_regdom(wiphy); rcu_assign_pointer(wiphy->regd, new_regd); rcu_free_regdom(tmp); + + rtnl_unlock(); } EXPORT_SYMBOL(wiphy_apply_custom_regulatory); diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c index ac4a317038f1..4a83117507f5 100644 --- a/net/xdp/xsk.c +++ b/net/xdp/xsk.c @@ -108,9 +108,9 @@ EXPORT_SYMBOL(xsk_get_pool_from_qid); void xsk_clear_pool_at_qid(struct net_device *dev, u16 queue_id) { - if (queue_id < dev->real_num_rx_queues) + if (queue_id < dev->num_rx_queues) dev->_rx[queue_id].pool = NULL; - if (queue_id < dev->real_num_tx_queues) + if (queue_id < dev->num_tx_queues) dev->_tx[queue_id].pool = NULL; } @@ -423,9 +423,9 @@ static void xsk_destruct_skb(struct sk_buff *skb) struct xdp_sock *xs = xdp_sk(skb->sk); unsigned long flags; - spin_lock_irqsave(&xs->tx_completion_lock, flags); + spin_lock_irqsave(&xs->pool->cq_lock, flags); xskq_prod_submit_addr(xs->pool->cq, addr); - spin_unlock_irqrestore(&xs->tx_completion_lock, flags); + spin_unlock_irqrestore(&xs->pool->cq_lock, flags); sock_wfree(skb); } @@ -437,6 +437,7 @@ static int xsk_generic_xmit(struct sock *sk) bool sent_frame = false; struct xdp_desc desc; struct sk_buff *skb; + unsigned long flags; int err = 0; mutex_lock(&xs->mutex); @@ -468,10 +469,13 @@ static int xsk_generic_xmit(struct sock *sk) * if there is space in it. This avoids having to implement * any buffering in the Tx path. */ + spin_lock_irqsave(&xs->pool->cq_lock, flags); if (unlikely(err) || xskq_prod_reserve(xs->pool->cq)) { + spin_unlock_irqrestore(&xs->pool->cq_lock, flags); kfree_skb(skb); goto out; } + spin_unlock_irqrestore(&xs->pool->cq_lock, flags); skb->dev = xs->dev; skb->priority = sk->sk_priority; @@ -483,6 +487,9 @@ static int xsk_generic_xmit(struct sock *sk) if (err == NETDEV_TX_BUSY) { /* Tell user-space to retry the send */ skb->destructor = sock_wfree; + spin_lock_irqsave(&xs->pool->cq_lock, flags); + xskq_prod_cancel(xs->pool->cq); + spin_unlock_irqrestore(&xs->pool->cq_lock, flags); /* Free skb without triggering the perf drop trace */ consume_skb(skb); err = -EAGAIN; @@ -878,6 +885,10 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len) } } + /* FQ and CQ are now owned by the buffer pool and cleaned up with it. 
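+ * Dropping the socket's temporary references here keeps the socket
+ * release path from freeing queues that the pool will free itself.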
*/ + xs->fq_tmp = NULL; + xs->cq_tmp = NULL; + xs->dev = dev; xs->zc = xs->umem->zc; xs->queue_id = qid; @@ -1299,7 +1310,6 @@ static int xsk_create(struct net *net, struct socket *sock, int protocol, xs->state = XSK_READY; mutex_init(&xs->mutex); spin_lock_init(&xs->rx_lock); - spin_lock_init(&xs->tx_completion_lock); INIT_LIST_HEAD(&xs->map_list); spin_lock_init(&xs->map_list_lock); diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c index 67a4494d63b6..20598eea658c 100644 --- a/net/xdp/xsk_buff_pool.c +++ b/net/xdp/xsk_buff_pool.c @@ -71,12 +71,11 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs, INIT_LIST_HEAD(&pool->free_list); INIT_LIST_HEAD(&pool->xsk_tx_list); spin_lock_init(&pool->xsk_tx_list_lock); + spin_lock_init(&pool->cq_lock); refcount_set(&pool->users, 1); pool->fq = xs->fq_tmp; pool->cq = xs->cq_tmp; - xs->fq_tmp = NULL; - xs->cq_tmp = NULL; for (i = 0; i < pool->free_heads_cnt; i++) { xskb = &pool->heads[i]; diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h index 4a9663aa7afe..2823b7c3302d 100644 --- a/net/xdp/xsk_queue.h +++ b/net/xdp/xsk_queue.h @@ -334,6 +334,11 @@ static inline bool xskq_prod_is_full(struct xsk_queue *q) return xskq_prod_nb_free(q, 1) ? false : true; } +static inline void xskq_prod_cancel(struct xsk_queue *q) +{ + q->cached_prod--; +} + static inline int xskq_prod_reserve(struct xsk_queue *q) { if (xskq_prod_is_full(q)) diff --git a/scripts/config b/scripts/config index 8c8d7c3d7acc..ff88e2faefd3 100755 --- a/scripts/config +++ b/scripts/config @@ -223,6 +223,7 @@ while [ "$1" != "" ] ; do ;; *) + echo "bad command: $CMD" >&2 usage ;; esac diff --git a/scripts/gcc-plugins/Makefile b/scripts/gcc-plugins/Makefile index d66949bfeba4..b5487cce69e8 100644 --- a/scripts/gcc-plugins/Makefile +++ b/scripts/gcc-plugins/Makefile @@ -22,9 +22,9 @@ always-y += $(GCC_PLUGIN) GCC_PLUGINS_DIR = $(shell $(CC) -print-file-name=plugin) plugin_cxxflags = -Wp,-MMD,$(depfile) $(KBUILD_HOSTCXXFLAGS) -fPIC \ - -I $(GCC_PLUGINS_DIR)/include -I $(obj) -std=gnu++98 \ + -I $(GCC_PLUGINS_DIR)/include -I $(obj) -std=gnu++11 \ -fno-rtti -fno-exceptions -fasynchronous-unwind-tables \ - -ggdb -Wno-narrowing -Wno-unused-variable -Wno-c++11-compat \ + -ggdb -Wno-narrowing -Wno-unused-variable \ -Wno-format-diag plugin_ldflags = -shared diff --git a/scripts/kconfig/Makefile b/scripts/kconfig/Makefile index e46df0a2d4f9..2c40e68853dd 100644 --- a/scripts/kconfig/Makefile +++ b/scripts/kconfig/Makefile @@ -94,16 +94,6 @@ configfiles=$(wildcard $(srctree)/kernel/configs/$@ $(srctree)/arch/$(SRCARCH)/c $(Q)$(CONFIG_SHELL) $(srctree)/scripts/kconfig/merge_config.sh -m .config $(configfiles) $(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig -PHONY += kvmconfig -kvmconfig: kvm_guest.config - @echo >&2 "WARNING: 'make $@' will be removed after Linux 5.10" - @echo >&2 " Please use 'make $<' instead." - -PHONY += xenconfig -xenconfig: xen.config - @echo >&2 "WARNING: 'make $@' will be removed after Linux 5.10" - @echo >&2 " Please use 'make $<' instead." - PHONY += tinyconfig tinyconfig: $(Q)$(MAKE) -f $(srctree)/Makefile allnoconfig tiny.config diff --git a/scripts/kconfig/mconf-cfg.sh b/scripts/kconfig/mconf-cfg.sh index aa68ec95620d..fcd4acd4e9cb 100755 --- a/scripts/kconfig/mconf-cfg.sh +++ b/scripts/kconfig/mconf-cfg.sh @@ -33,7 +33,9 @@ if [ -f /usr/include/ncurses/ncurses.h ]; then exit 0 fi -if [ -f /usr/include/ncurses.h ]; then +# As a final fallback before giving up, check if $HOSTCC knows of a default +# ncurses installation (e.g. 
from a vendor-specific sysroot). +if echo '#include <ncurses.h>' | "${HOSTCC}" -E - >/dev/null 2>&1; then echo cflags=\"-D_GNU_SOURCE\" echo libs=\"-lncurses\" exit 0 diff --git a/security/lsm_audit.c b/security/lsm_audit.c index 7d8026f3f377..a0cd28cd31a8 100644 --- a/security/lsm_audit.c +++ b/security/lsm_audit.c @@ -275,7 +275,9 @@ static void dump_common_audit_data(struct audit_buffer *ab, struct inode *inode; audit_log_format(ab, " name="); + spin_lock(&a->u.dentry->d_lock); audit_log_untrustedstring(ab, a->u.dentry->d_name.name); + spin_unlock(&a->u.dentry->d_lock); inode = d_backing_inode(a->u.dentry); if (inode) { @@ -293,8 +295,9 @@ static void dump_common_audit_data(struct audit_buffer *ab, dentry = d_find_alias(inode); if (dentry) { audit_log_format(ab, " name="); - audit_log_untrustedstring(ab, - dentry->d_name.name); + spin_lock(&dentry->d_lock); + audit_log_untrustedstring(ab, dentry->d_name.name); + spin_unlock(&dentry->d_lock); dput(dentry); } audit_log_format(ab, " dev="); diff --git a/sound/core/seq/oss/seq_oss_synth.c b/sound/core/seq/oss/seq_oss_synth.c index 11554d0412f0..1b8409ec2c97 100644 --- a/sound/core/seq/oss/seq_oss_synth.c +++ b/sound/core/seq/oss/seq_oss_synth.c @@ -611,7 +611,8 @@ snd_seq_oss_synth_make_info(struct seq_oss_devinfo *dp, int dev, struct synth_in if (info->is_midi) { struct midi_info minf; - snd_seq_oss_midi_make_info(dp, info->midi_mapped, &minf); + if (snd_seq_oss_midi_make_info(dp, info->midi_mapped, &minf)) + return -ENXIO; inf->synth_type = SYNTH_TYPE_MIDI; inf->synth_subtype = 0; inf->nr_voices = 16; diff --git a/sound/firewire/fireface/ff-transaction.c b/sound/firewire/fireface/ff-transaction.c index 7f82762ccc8c..ee7122c461d4 100644 --- a/sound/firewire/fireface/ff-transaction.c +++ b/sound/firewire/fireface/ff-transaction.c @@ -88,7 +88,7 @@ static void transmit_midi_msg(struct snd_ff *ff, unsigned int port) /* Set interval to next transaction. */ ff->next_ktime[port] = ktime_add_ns(ktime_get(), - ff->rx_bytes[port] * 8 * NSEC_PER_SEC / 31250); + ff->rx_bytes[port] * 8 * (NSEC_PER_SEC / 31250)); if (quad_count == 1) tcode = TCODE_WRITE_QUADLET_REQUEST; diff --git a/sound/firewire/tascam/tascam-transaction.c b/sound/firewire/tascam/tascam-transaction.c index 90288b4b4637..a073cece4a7d 100644 --- a/sound/firewire/tascam/tascam-transaction.c +++ b/sound/firewire/tascam/tascam-transaction.c @@ -209,7 +209,7 @@ static void midi_port_work(struct work_struct *work) /* Set interval to next transaction. */ port->next_ktime = ktime_add_ns(ktime_get(), - port->consume_bytes * 8 * NSEC_PER_SEC / 31250); + port->consume_bytes * 8 * (NSEC_PER_SEC / 31250)); /* Start this transaction. 
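 * The interval computed just above paces output to MIDI's 31.25 kbaud
 * wire rate (consume_bytes * 8 bit-times).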
*/ port->idling = false; diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c index 687216e74526..eec1775dfffe 100644 --- a/sound/pci/hda/hda_codec.c +++ b/sound/pci/hda/hda_codec.c @@ -2934,7 +2934,7 @@ static void hda_call_codec_resume(struct hda_codec *codec) snd_hdac_leave_pm(&codec->core); } -static int hda_codec_suspend(struct device *dev) +static int hda_codec_runtime_suspend(struct device *dev) { struct hda_codec *codec = dev_to_hda_codec(dev); unsigned int state; @@ -2953,7 +2953,7 @@ static int hda_codec_suspend(struct device *dev) return 0; } -static int hda_codec_resume(struct device *dev) +static int hda_codec_runtime_resume(struct device *dev) { struct hda_codec *codec = dev_to_hda_codec(dev); @@ -2968,16 +2968,6 @@ static int hda_codec_resume(struct device *dev) return 0; } -static int hda_codec_runtime_suspend(struct device *dev) -{ - return hda_codec_suspend(dev); -} - -static int hda_codec_runtime_resume(struct device *dev) -{ - return hda_codec_resume(dev); -} - #endif /* CONFIG_PM */ #ifdef CONFIG_PM_SLEEP @@ -2998,31 +2988,31 @@ static void hda_codec_pm_complete(struct device *dev) static int hda_codec_pm_suspend(struct device *dev) { dev->power.power_state = PMSG_SUSPEND; - return hda_codec_suspend(dev); + return pm_runtime_force_suspend(dev); } static int hda_codec_pm_resume(struct device *dev) { dev->power.power_state = PMSG_RESUME; - return hda_codec_resume(dev); + return pm_runtime_force_resume(dev); } static int hda_codec_pm_freeze(struct device *dev) { dev->power.power_state = PMSG_FREEZE; - return hda_codec_suspend(dev); + return pm_runtime_force_suspend(dev); } static int hda_codec_pm_thaw(struct device *dev) { dev->power.power_state = PMSG_THAW; - return hda_codec_resume(dev); + return pm_runtime_force_resume(dev); } static int hda_codec_pm_restore(struct device *dev) { dev->power.power_state = PMSG_RESTORE; - return hda_codec_resume(dev); + return pm_runtime_force_resume(dev); } #endif /* CONFIG_PM_SLEEP */ diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c index 6852668f1bcb..5a50d3a46445 100644 --- a/sound/pci/hda/hda_intel.c +++ b/sound/pci/hda/hda_intel.c @@ -2220,8 +2220,6 @@ static const struct snd_pci_quirk power_save_denylist[] = { SND_PCI_QUIRK(0x1849, 0x7662, "Asrock H81M-HDS", 0), /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */ SND_PCI_QUIRK(0x1043, 0x8733, "Asus Prime X370-Pro", 0), - /* https://bugzilla.redhat.com/show_bug.cgi?id=1581607 */ - SND_PCI_QUIRK(0x1558, 0x3501, "Clevo W35xSS_370SS", 0), /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */ SND_PCI_QUIRK(0x1558, 0x6504, "Clevo W65_67SB", 0), /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */ @@ -2486,6 +2484,9 @@ static const struct pci_device_id azx_ids[] = { /* CometLake-S */ { PCI_DEVICE(0x8086, 0xa3f0), .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, + /* CometLake-R */ + { PCI_DEVICE(0x8086, 0xf0c8), + .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, /* Icelake */ { PCI_DEVICE(0x8086, 0x34c8), .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, @@ -2509,6 +2510,9 @@ static const struct pci_device_id azx_ids[] = { /* Alderlake-S */ { PCI_DEVICE(0x8086, 0x7ad0), .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, + /* Alderlake-P */ + { PCI_DEVICE(0x8086, 0x51c8), + .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, /* Elkhart Lake */ { PCI_DEVICE(0x8086, 0x4b55), .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, @@ -2600,7 +2604,8 @@ static const struct pci_device_id azx_ids[] = { 
.driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_AMD_SB }, /* ATI HDMI */ { PCI_DEVICE(0x1002, 0x0002), - .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, + .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS | + AZX_DCAPS_PM_RUNTIME }, { PCI_DEVICE(0x1002, 0x1308), .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, { PCI_DEVICE(0x1002, 0x157a), @@ -2662,9 +2667,11 @@ static const struct pci_device_id azx_ids[] = { { PCI_DEVICE(0x1002, 0xaab0), .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, { PCI_DEVICE(0x1002, 0xaac0), - .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, + .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS | + AZX_DCAPS_PM_RUNTIME }, { PCI_DEVICE(0x1002, 0xaac8), - .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS }, + .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS | + AZX_DCAPS_PM_RUNTIME }, { PCI_DEVICE(0x1002, 0xaad8), .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS | AZX_DCAPS_PM_RUNTIME }, diff --git a/sound/pci/hda/hda_tegra.c b/sound/pci/hda/hda_tegra.c index 70164d1428d4..361cf2041911 100644 --- a/sound/pci/hda/hda_tegra.c +++ b/sound/pci/hda/hda_tegra.c @@ -388,7 +388,7 @@ static int hda_tegra_first_init(struct azx *chip, struct platform_device *pdev) * in powers of 2, next available ratio is 16 which can be * used as a limiting factor here. */ - if (of_device_is_compatible(np, "nvidia,tegra194-hda")) + if (of_device_is_compatible(np, "nvidia,tegra30-hda")) chip->bus.core.sdo_limit = 16; /* codec detection */ diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c index be5000dd1585..d49cc4409d59 100644 --- a/sound/pci/hda/patch_conexant.c +++ b/sound/pci/hda/patch_conexant.c @@ -1070,6 +1070,7 @@ static int patch_conexant_auto(struct hda_codec *codec) static const struct hda_device_id snd_hda_id_conexant[] = { HDA_CODEC_ENTRY(0x14f11f86, "CX8070", patch_conexant_auto), HDA_CODEC_ENTRY(0x14f12008, "CX8200", patch_conexant_auto), + HDA_CODEC_ENTRY(0x14f120d0, "CX11970", patch_conexant_auto), HDA_CODEC_ENTRY(0x14f15045, "CX20549 (Venice)", patch_conexant_auto), HDA_CODEC_ENTRY(0x14f15047, "CX20551 (Waikiki)", patch_conexant_auto), HDA_CODEC_ENTRY(0x14f15051, "CX20561 (Hermosa)", patch_conexant_auto), diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c index 1e4a4b83fbf6..97adff0cbcab 100644 --- a/sound/pci/hda/patch_hdmi.c +++ b/sound/pci/hda/patch_hdmi.c @@ -1733,7 +1733,7 @@ static void silent_stream_disable(struct hda_codec *codec, per_pin->silent_stream = false; unlock_out: - mutex_unlock(&spec->pcm_lock); + mutex_unlock(&per_pin->lock); } /* update ELD and jack state via audio component */ @@ -4346,6 +4346,7 @@ HDA_CODEC_ENTRY(0x8086280f, "Icelake HDMI", patch_i915_icl_hdmi), HDA_CODEC_ENTRY(0x80862812, "Tigerlake HDMI", patch_i915_tgl_hdmi), HDA_CODEC_ENTRY(0x80862814, "DG1 HDMI", patch_i915_tgl_hdmi), HDA_CODEC_ENTRY(0x80862815, "Alderlake HDMI", patch_i915_tgl_hdmi), +HDA_CODEC_ENTRY(0x8086281c, "Alderlake-P HDMI", patch_i915_tgl_hdmi), HDA_CODEC_ENTRY(0x80862816, "Rocketlake HDMI", patch_i915_tgl_hdmi), HDA_CODEC_ENTRY(0x8086281a, "Jasperlake HDMI", patch_i915_icl_hdmi), HDA_CODEC_ENTRY(0x8086281b, "Elkhartlake HDMI", patch_i915_icl_hdmi), diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index dde5ba209541..ed5b6b894dc1 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -6289,6 
+6289,7 @@ enum { ALC221_FIXUP_HP_FRONT_MIC, ALC292_FIXUP_TPT460, ALC298_FIXUP_SPK_VOLUME, + ALC298_FIXUP_LENOVO_SPK_VOLUME, ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER, ALC269_FIXUP_ATIV_BOOK_8, ALC221_FIXUP_HP_MIC_NO_PRESENCE, @@ -6370,6 +6371,7 @@ enum { ALC256_FIXUP_HP_HEADSET_MIC, ALC236_FIXUP_DELL_AIO_HEADSET_MIC, ALC282_FIXUP_ACER_DISABLE_LINEOUT, + ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST, }; static const struct hda_fixup alc269_fixups[] = { @@ -7119,6 +7121,10 @@ static const struct hda_fixup alc269_fixups[] = { .chained = true, .chain_id = ALC298_FIXUP_DELL_AIO_MIC_NO_PRESENCE, }, + [ALC298_FIXUP_LENOVO_SPK_VOLUME] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc298_fixup_speaker_volume, + }, [ALC295_FIXUP_DISABLE_DAC3] = { .type = HDA_FIXUP_FUNC, .v.func = alc295_fixup_disable_dac3, @@ -7803,6 +7809,12 @@ static const struct hda_fixup alc269_fixups[] = { .chained = true, .chain_id = ALC269_FIXUP_HEADSET_MODE }, + [ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc269_fixup_limit_int_mic_boost, + .chained = true, + .chain_id = ALC255_FIXUP_ACER_MIC_NO_PRESENCE, + }, }; static const struct snd_pci_quirk alc269_fixup_tbl[] = { @@ -7821,6 +7833,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1025, 0x1065, "Acer Aspire C20-820", ALC269VC_FIXUP_ACER_HEADSET_MIC), SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK), + SND_PCI_QUIRK(0x1025, 0x1094, "Acer Aspire E5-575T", ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST), SND_PCI_QUIRK(0x1025, 0x1099, "Acer Aspire E5-523G", ALC255_FIXUP_ACER_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1025, 0x110e, "Acer Aspire ES1-432", ALC255_FIXUP_ACER_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1025, 0x1166, "Acer Veriton N4640G", ALC269_FIXUP_LIFEBOOK), @@ -7885,7 +7898,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1028, 0x09bf, "Dell Precision", ALC233_FIXUP_ASUS_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1028, 0x0a2e, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC), SND_PCI_QUIRK(0x1028, 0x0a30, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC), - SND_PCI_QUIRK(0x1028, 0x0a58, "Dell Precision 3650 Tower", ALC255_FIXUP_DELL_HEADSET_MIC), + SND_PCI_QUIRK(0x1028, 0x0a58, "Dell", ALC255_FIXUP_DELL_HEADSET_MIC), SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2), @@ -7959,11 +7972,17 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3), SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3), SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED), + SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_AMP_INIT), SND_PCI_QUIRK(0x103c, 0x8760, "HP", ALC285_FIXUP_HP_MUTE_LED), SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED), SND_PCI_QUIRK(0x103c, 0x877d, "HP", ALC236_FIXUP_HP_MUTE_LED), + SND_PCI_QUIRK(0x103c, 0x8780, "HP ZBook Fury 17 G7 Mobile Workstation", + ALC285_FIXUP_HP_GPIO_AMP_INIT), + SND_PCI_QUIRK(0x103c, 0x8783, "HP ZBook Fury 15 G7 Mobile Workstation", + ALC285_FIXUP_HP_GPIO_AMP_INIT), + SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED), 
SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), @@ -8021,6 +8040,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC), SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE), SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK), + SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK), + SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK), SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE), SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC), SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), @@ -8126,6 +8147,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x17aa, 0x3151, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), + SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME), SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI), diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c index 7ef8f3105cdb..834367dd54e1 100644 --- a/sound/pci/hda/patch_via.c +++ b/sound/pci/hda/patch_via.c @@ -113,6 +113,7 @@ static struct via_spec *via_new_spec(struct hda_codec *codec) spec->codec_type = VT1708S; spec->gen.indep_hp = 1; spec->gen.keep_eapd_on = 1; + spec->gen.dac_min_mute = 1; spec->gen.pcm_playback_hook = via_playback_pcm_hook; spec->gen.add_stereo_mix_input = HDA_HINT_STEREO_MIX_AUTO; codec->power_save_node = 1; @@ -1002,6 +1003,7 @@ static const struct hda_verb vt1802_init_verbs[] = { enum { VIA_FIXUP_INTMIC_BOOST, VIA_FIXUP_ASUS_G75, + VIA_FIXUP_POWER_SAVE, }; static void via_fixup_intmic_boost(struct hda_codec *codec, @@ -1011,6 +1013,13 @@ static void via_fixup_intmic_boost(struct hda_codec *codec, override_mic_boost(codec, 0x30, 0, 2, 40); } +static void via_fixup_power_save(struct hda_codec *codec, + const struct hda_fixup *fix, int action) +{ + if (action == HDA_FIXUP_ACT_PRE_PROBE) + codec->power_save_node = 0; +} + static const struct hda_fixup via_fixups[] = { [VIA_FIXUP_INTMIC_BOOST] = { .type = HDA_FIXUP_FUNC, @@ -1025,11 +1034,16 @@ static const struct hda_fixup via_fixups[] = { { } } }, + [VIA_FIXUP_POWER_SAVE] = { + .type = HDA_FIXUP_FUNC, + .v.func = via_fixup_power_save, + }, }; static const struct snd_pci_quirk vt2002p_fixups[] = { SND_PCI_QUIRK(0x1043, 0x1487, "Asus G75", VIA_FIXUP_ASUS_G75), SND_PCI_QUIRK(0x1043, 0x8532, "Asus X202E", VIA_FIXUP_INTMIC_BOOST), + SND_PCI_QUIRK(0x1558, 0x3501, "Clevo W35xSS_370SS", VIA_FIXUP_POWER_SAVE), {} }; diff --git a/sound/soc/amd/raven/pci-acp3x.c b/sound/soc/amd/raven/pci-acp3x.c index 8c138e490f0c..d3536fd6a124 100644 --- a/sound/soc/amd/raven/pci-acp3x.c +++ b/sound/soc/amd/raven/pci-acp3x.c @@ -140,21 +140,14 @@ static int snd_acp3x_probe(struct pci_dev *pci, goto release_regions; } - /* check for msi interrupt support */ - ret = pci_enable_msi(pci); - if (ret) - /* msi is 
not enabled */ - irqflags = IRQF_SHARED; - else - /* msi is enabled */ - irqflags = 0; + irqflags = IRQF_SHARED; addr = pci_resource_start(pci, 0); adata->acp3x_base = devm_ioremap(&pci->dev, addr, pci_resource_len(pci, 0)); if (!adata->acp3x_base) { ret = -ENOMEM; - goto disable_msi; + goto release_regions; } pci_set_master(pci); pci_set_drvdata(pci, adata); @@ -162,7 +155,7 @@ static int snd_acp3x_probe(struct pci_dev *pci, adata->pme_en = rv_readl(adata->acp3x_base + mmACP_PME_EN); ret = acp3x_init(adata); if (ret) - goto disable_msi; + goto release_regions; val = rv_readl(adata->acp3x_base + mmACP_I2S_PIN_CONFIG); switch (val) { @@ -251,8 +244,6 @@ unregister_devs: de_init: if (acp3x_deinit(adata->acp3x_base)) dev_err(&pci->dev, "ACP de-init failed\n"); -disable_msi: - pci_disable_msi(pci); release_regions: pci_release_regions(pci); disable_pci: @@ -311,7 +302,6 @@ static void snd_acp3x_remove(struct pci_dev *pci) dev_err(&pci->dev, "ACP de-init failed\n"); pm_runtime_forbid(&pci->dev); pm_runtime_get_noresume(&pci->dev); - pci_disable_msi(pci); pci_release_regions(pci); pci_disable_device(pci); } diff --git a/sound/soc/amd/renoir/rn-pci-acp3x.c b/sound/soc/amd/renoir/rn-pci-acp3x.c index fa169bf09886..deca8c7a0e87 100644 --- a/sound/soc/amd/renoir/rn-pci-acp3x.c +++ b/sound/soc/amd/renoir/rn-pci-acp3x.c @@ -171,6 +171,20 @@ static const struct dmi_system_id rn_acp_quirk_table[] = { DMI_EXACT_MATCH(DMI_BOARD_NAME, "LNVNB161216"), } }, + { + /* Lenovo ThinkPad E14 Gen 2 */ + .matches = { + DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "LENOVO"), + DMI_EXACT_MATCH(DMI_BOARD_NAME, "20T6CTO1WW"), + } + }, + { + /* Lenovo ThinkPad X395 */ + .matches = { + DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "LENOVO"), + DMI_EXACT_MATCH(DMI_BOARD_NAME, "20NLCTO1WW"), + } + }, {} }; diff --git a/sound/soc/atmel/Kconfig b/sound/soc/atmel/Kconfig index 142373ec411a..9fe9471f4514 100644 --- a/sound/soc/atmel/Kconfig +++ b/sound/soc/atmel/Kconfig @@ -143,7 +143,7 @@ config SND_MCHP_SOC_SPDIFTX - sama7g5 This S/PDIF TX driver is compliant with IEC-60958 standard and - includes programable User Data and Channel Status fields. + includes programmable User Data and Channel Status fields. config SND_MCHP_SOC_SPDIFRX tristate "Microchip ASoC driver for boards using S/PDIF RX" @@ -157,5 +157,5 @@ config SND_MCHP_SOC_SPDIFRX - sama7g5 This S/PDIF RX driver is compliant with IEC-60958 standard and - includes programable User Data and Channel Status fields. + includes programmable User Data and Channel Status fields. endif diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig index ba4eb54aafcb..9bf6bfdaf11e 100644 --- a/sound/soc/codecs/Kconfig +++ b/sound/soc/codecs/Kconfig @@ -457,7 +457,7 @@ config SND_SOC_ADAU7118_HW help Enable support for the Analog Devices ADAU7118 8 Channel PDM-to-I2S/TDM Converter. In this mode, the device works in standalone mode which - means that there is no bus to comunicate with it. Stereo mode is not + means that there is no bus to communicate with it. Stereo mode is not supported in this mode. 
To compile this driver as a module, choose M here: the module diff --git a/sound/soc/codecs/hdmi-codec.c b/sound/soc/codecs/hdmi-codec.c index d5fcc4db8284..0f3ac22f2cf8 100644 --- a/sound/soc/codecs/hdmi-codec.c +++ b/sound/soc/codecs/hdmi-codec.c @@ -717,7 +717,7 @@ static int hdmi_codec_set_jack(struct snd_soc_component *component, void *data) { struct hdmi_codec_priv *hcp = snd_soc_component_get_drvdata(component); - int ret = -EOPNOTSUPP; + int ret = -ENOTSUPP; if (hcp->hcd.ops->hook_plugged_cb) { hcp->jack = jack; diff --git a/sound/soc/codecs/max98373-i2c.c b/sound/soc/codecs/max98373-i2c.c index 92921e34f948..85f6865019d4 100644 --- a/sound/soc/codecs/max98373-i2c.c +++ b/sound/soc/codecs/max98373-i2c.c @@ -19,6 +19,12 @@ #include <sound/tlv.h> #include "max98373.h" +static const u32 max98373_i2c_cache_reg[] = { + MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK, + MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK, + MAX98373_R20B6_BDE_CUR_STATE_READBACK, +}; + static struct reg_default max98373_reg[] = { {MAX98373_R2000_SW_RESET, 0x00}, {MAX98373_R2001_INT_RAW1, 0x00}, @@ -472,6 +478,11 @@ static struct snd_soc_dai_driver max98373_dai[] = { static int max98373_suspend(struct device *dev) { struct max98373_priv *max98373 = dev_get_drvdata(dev); + int i; + + /* cache feedback register values before suspend */ + for (i = 0; i < max98373->cache_num; i++) + regmap_read(max98373->regmap, max98373->cache[i].reg, &max98373->cache[i].val); regcache_cache_only(max98373->regmap, true); regcache_mark_dirty(max98373->regmap); @@ -509,6 +520,7 @@ static int max98373_i2c_probe(struct i2c_client *i2c, { int ret = 0; int reg = 0; + int i; struct max98373_priv *max98373 = NULL; max98373 = devm_kzalloc(&i2c->dev, sizeof(*max98373), GFP_KERNEL); @@ -534,6 +546,14 @@ static int max98373_i2c_probe(struct i2c_client *i2c, return ret; } + max98373->cache_num = ARRAY_SIZE(max98373_i2c_cache_reg); + max98373->cache = devm_kcalloc(&i2c->dev, max98373->cache_num, + sizeof(*max98373->cache), + GFP_KERNEL); + + for (i = 0; i < max98373->cache_num; i++) + max98373->cache[i].reg = max98373_i2c_cache_reg[i]; + /* voltage/current slot & gpio configuration */ max98373_slot_config(&i2c->dev, max98373); diff --git a/sound/soc/codecs/max98373-sdw.c b/sound/soc/codecs/max98373-sdw.c index ec2e79c57357..b8d471d79e93 100644 --- a/sound/soc/codecs/max98373-sdw.c +++ b/sound/soc/codecs/max98373-sdw.c @@ -23,6 +23,12 @@ struct sdw_stream_data { struct sdw_stream_runtime *sdw_stream; }; +static const u32 max98373_sdw_cache_reg[] = { + MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK, + MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK, + MAX98373_R20B6_BDE_CUR_STATE_READBACK, +}; + static struct reg_default max98373_reg[] = { {MAX98373_R0040_SCP_INIT_STAT_1, 0x00}, {MAX98373_R0041_SCP_INIT_MASK_1, 0x00}, @@ -245,6 +251,11 @@ static const struct regmap_config max98373_sdw_regmap = { static __maybe_unused int max98373_suspend(struct device *dev) { struct max98373_priv *max98373 = dev_get_drvdata(dev); + int i; + + /* cache feedback register values before suspend */ + for (i = 0; i < max98373->cache_num; i++) + regmap_read(max98373->regmap, max98373->cache[i].reg, &max98373->cache[i].val); regcache_cache_only(max98373->regmap, true); @@ -757,6 +768,7 @@ static int max98373_init(struct sdw_slave *slave, struct regmap *regmap) { struct max98373_priv *max98373; int ret; + int i; struct device *dev = &slave->dev; /* Allocate and assign private driver data structure */ @@ -768,6 +780,14 @@ static int max98373_init(struct sdw_slave *slave, struct regmap *regmap) 
max98373->regmap = regmap; max98373->slave = slave; + max98373->cache_num = ARRAY_SIZE(max98373_sdw_cache_reg); + max98373->cache = devm_kcalloc(dev, max98373->cache_num, + sizeof(*max98373->cache), + GFP_KERNEL); + + for (i = 0; i < max98373->cache_num; i++) + max98373->cache[i].reg = max98373_sdw_cache_reg[i]; + /* Read voltage and slot configuration */ max98373_slot_config(dev, max98373); diff --git a/sound/soc/codecs/max98373.c b/sound/soc/codecs/max98373.c index 929bb1798c43..31d571d4fac1 100644 --- a/sound/soc/codecs/max98373.c +++ b/sound/soc/codecs/max98373.c @@ -168,6 +168,31 @@ static SOC_ENUM_SINGLE_DECL(max98373_adc_samplerate_enum, MAX98373_R2051_MEAS_ADC_SAMPLING_RATE, 0, max98373_ADC_samplerate_text); +static int max98373_feedback_get(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *ucontrol) +{ + struct snd_soc_component *component = snd_kcontrol_chip(kcontrol); + struct soc_mixer_control *mc = + (struct soc_mixer_control *)kcontrol->private_value; + struct max98373_priv *max98373 = snd_soc_component_get_drvdata(component); + int i; + + if (snd_soc_component_get_bias_level(component) == SND_SOC_BIAS_OFF) { + /* + * Register values will be cached before suspend. The cached value + * will be a valid value and userspace will be happy with that. + */ + for (i = 0; i < max98373->cache_num; i++) { + if (mc->reg == max98373->cache[i].reg) { + ucontrol->value.integer.value[0] = max98373->cache[i].val; + return 0; + } + } + } + + return snd_soc_put_volsw(kcontrol, ucontrol); +} + static const struct snd_kcontrol_new max98373_snd_controls[] = { SOC_SINGLE("Digital Vol Sel Switch", MAX98373_R203F_AMP_DSP_CFG, MAX98373_AMP_VOL_SEL_SHIFT, 1, 0), @@ -209,8 +234,10 @@ SOC_SINGLE("ADC PVDD FLT Switch", MAX98373_R2052_MEAS_ADC_PVDD_FLT_CFG, MAX98373_FLT_EN_SHIFT, 1, 0), SOC_SINGLE("ADC TEMP FLT Switch", MAX98373_R2053_MEAS_ADC_THERM_FLT_CFG, MAX98373_FLT_EN_SHIFT, 1, 0), -SOC_SINGLE("ADC PVDD", MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK, 0, 0xFF, 0), -SOC_SINGLE("ADC TEMP", MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK, 0, 0xFF, 0), +SOC_SINGLE_EXT("ADC PVDD", MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK, 0, 0xFF, 0, + max98373_feedback_get, NULL), +SOC_SINGLE_EXT("ADC TEMP", MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK, 0, 0xFF, 0, + max98373_feedback_get, NULL), SOC_SINGLE("ADC PVDD FLT Coeff", MAX98373_R2052_MEAS_ADC_PVDD_FLT_CFG, 0, 0x3, 0), SOC_SINGLE("ADC TEMP FLT Coeff", MAX98373_R2053_MEAS_ADC_THERM_FLT_CFG, @@ -226,7 +253,8 @@ SOC_SINGLE("BDE LVL1 Thresh", MAX98373_R2097_BDE_L1_THRESH, 0, 0xFF, 0), SOC_SINGLE("BDE LVL2 Thresh", MAX98373_R2098_BDE_L2_THRESH, 0, 0xFF, 0), SOC_SINGLE("BDE LVL3 Thresh", MAX98373_R2099_BDE_L3_THRESH, 0, 0xFF, 0), SOC_SINGLE("BDE LVL4 Thresh", MAX98373_R209A_BDE_L4_THRESH, 0, 0xFF, 0), -SOC_SINGLE("BDE Active Level", MAX98373_R20B6_BDE_CUR_STATE_READBACK, 0, 8, 0), +SOC_SINGLE_EXT("BDE Active Level", MAX98373_R20B6_BDE_CUR_STATE_READBACK, 0, 8, 0, + max98373_feedback_get, NULL), SOC_SINGLE("BDE Clip Mode Switch", MAX98373_R2092_BDE_CLIPPER_MODE, 0, 1, 0), SOC_SINGLE("BDE Thresh Hysteresis", MAX98373_R209B_BDE_THRESH_HYST, 0, 0xFF, 0), SOC_SINGLE("BDE Hold Time", MAX98373_R2090_BDE_LVL_HOLD, 0, 0xFF, 0), diff --git a/sound/soc/codecs/max98373.h b/sound/soc/codecs/max98373.h index 4ab29b9d51c7..71f5a5228f34 100644 --- a/sound/soc/codecs/max98373.h +++ b/sound/soc/codecs/max98373.h @@ -203,6 +203,11 @@ /* MAX98373_R2000_SW_RESET */ #define MAX98373_SOFT_RESET (0x1 << 0) +struct max98373_cache { + u32 reg; + u32 val; +}; + struct max98373_priv { struct regmap 
*regmap; int reset_gpio; @@ -212,6 +217,9 @@ struct max98373_priv { bool interleave_mode; unsigned int ch_size; bool tdm_mode; + /* cache for reading a valid fake feedback value */ + struct max98373_cache *cache; + int cache_num; /* variables to support soundwire */ struct sdw_slave *slave; bool hw_init; diff --git a/sound/soc/codecs/rt711.c b/sound/soc/codecs/rt711.c index 5771c02c3459..85f744184a60 100644 --- a/sound/soc/codecs/rt711.c +++ b/sound/soc/codecs/rt711.c @@ -462,6 +462,8 @@ static int rt711_set_amp_gain_put(struct snd_kcontrol *kcontrol, unsigned int read_ll, read_rl; int i; + mutex_lock(&rt711->calibrate_mutex); + /* Can't use update bit function, so read the original value first */ addr_h = mc->reg; addr_l = mc->rreg; @@ -547,6 +549,8 @@ static int rt711_set_amp_gain_put(struct snd_kcontrol *kcontrol, if (dapm->bias_level <= SND_SOC_BIAS_STANDBY) regmap_write(rt711->regmap, RT711_SET_AUDIO_POWER_STATE, AC_PWRST_D3); + + mutex_unlock(&rt711->calibrate_mutex); return 0; } @@ -859,9 +863,11 @@ static int rt711_set_bias_level(struct snd_soc_component *component, break; case SND_SOC_BIAS_STANDBY: + mutex_lock(&rt711->calibrate_mutex); regmap_write(rt711->regmap, RT711_SET_AUDIO_POWER_STATE, AC_PWRST_D3); + mutex_unlock(&rt711->calibrate_mutex); break; default: diff --git a/sound/soc/fsl/imx-hdmi.c b/sound/soc/fsl/imx-hdmi.c index 2c2a76a71940..dbbb7618351c 100644 --- a/sound/soc/fsl/imx-hdmi.c +++ b/sound/soc/fsl/imx-hdmi.c @@ -90,7 +90,7 @@ static int imx_hdmi_init(struct snd_soc_pcm_runtime *rtd) } ret = snd_soc_component_set_jack(component, &data->hdmi_jack, NULL); - if (ret && ret != -EOPNOTSUPP) { + if (ret && ret != -ENOTSUPP) { dev_err(card->dev, "Can't set HDMI Jack %d\n", ret); return ret; } @@ -164,6 +164,7 @@ static int imx_hdmi_probe(struct platform_device *pdev) if ((hdmi_out && hdmi_in) || (!hdmi_out && !hdmi_in)) { dev_err(&pdev->dev, "Invalid HDMI DAI link\n"); + ret = -EINVAL; goto fail; } diff --git a/sound/soc/intel/boards/haswell.c b/sound/soc/intel/boards/haswell.c index c55d1239e705..c763bfeb1f38 100644 --- a/sound/soc/intel/boards/haswell.c +++ b/sound/soc/intel/boards/haswell.c @@ -189,6 +189,7 @@ static struct platform_driver haswell_audio = { .probe = haswell_audio_probe, .driver = { .name = "haswell-audio", + .pm = &snd_soc_pm_ops, }, }; diff --git a/sound/soc/intel/skylake/cnl-sst.c b/sound/soc/intel/skylake/cnl-sst.c index fcd8dff27ae8..1275c149acc0 100644 --- a/sound/soc/intel/skylake/cnl-sst.c +++ b/sound/soc/intel/skylake/cnl-sst.c @@ -224,6 +224,7 @@ static int cnl_set_dsp_D0(struct sst_dsp *ctx, unsigned int core_id) "dsp boot timeout, status=%#x error=%#x\n", sst_dsp_shim_read(ctx, CNL_ADSP_FW_STATUS), sst_dsp_shim_read(ctx, CNL_ADSP_ERROR_CODE)); + ret = -ETIMEDOUT; goto err; } } else { diff --git a/sound/soc/meson/axg-tdm-interface.c b/sound/soc/meson/axg-tdm-interface.c index c8664ab80d45..87cac440b369 100644 --- a/sound/soc/meson/axg-tdm-interface.c +++ b/sound/soc/meson/axg-tdm-interface.c @@ -467,8 +467,20 @@ static int axg_tdm_iface_set_bias_level(struct snd_soc_component *component, return ret; } +static const struct snd_soc_dapm_widget axg_tdm_iface_dapm_widgets[] = { + SND_SOC_DAPM_SIGGEN("Playback Signal"), +}; + +static const struct snd_soc_dapm_route axg_tdm_iface_dapm_routes[] = { + { "Loopback", NULL, "Playback Signal" }, +}; + static const struct snd_soc_component_driver axg_tdm_iface_component_drv = { - .set_bias_level = axg_tdm_iface_set_bias_level, + .dapm_widgets = axg_tdm_iface_dapm_widgets, + .num_dapm_widgets = 
ARRAY_SIZE(axg_tdm_iface_dapm_widgets), + .dapm_routes = axg_tdm_iface_dapm_routes, + .num_dapm_routes = ARRAY_SIZE(axg_tdm_iface_dapm_routes), + .set_bias_level = axg_tdm_iface_set_bias_level, }; static const struct of_device_id axg_tdm_iface_of_match[] = { diff --git a/sound/soc/meson/axg-tdmin.c b/sound/soc/meson/axg-tdmin.c index 88ed95ae886b..b4faf9d5c1aa 100644 --- a/sound/soc/meson/axg-tdmin.c +++ b/sound/soc/meson/axg-tdmin.c @@ -228,15 +228,6 @@ static const struct axg_tdm_formatter_driver axg_tdmin_drv = { .regmap_cfg = &axg_tdmin_regmap_cfg, .ops = &axg_tdmin_ops, .quirks = &(const struct axg_tdm_formatter_hw) { - .skew_offset = 2, - }, -}; - -static const struct axg_tdm_formatter_driver g12a_tdmin_drv = { - .component_drv = &axg_tdmin_component_drv, - .regmap_cfg = &axg_tdmin_regmap_cfg, - .ops = &axg_tdmin_ops, - .quirks = &(const struct axg_tdm_formatter_hw) { .skew_offset = 3, }, }; @@ -247,10 +238,10 @@ static const struct of_device_id axg_tdmin_of_match[] = { .data = &axg_tdmin_drv, }, { .compatible = "amlogic,g12a-tdmin", - .data = &g12a_tdmin_drv, + .data = &axg_tdmin_drv, }, { .compatible = "amlogic,sm1-tdmin", - .data = &g12a_tdmin_drv, + .data = &axg_tdmin_drv, }, {} }; MODULE_DEVICE_TABLE(of, axg_tdmin_of_match); diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c index af684fd19ab9..c5e99c2d89c7 100644 --- a/sound/soc/qcom/lpass-cpu.c +++ b/sound/soc/qcom/lpass-cpu.c @@ -270,18 +270,6 @@ static int lpass_cpu_daiops_trigger(struct snd_pcm_substream *substream, struct lpaif_i2sctl *i2sctl = drvdata->i2sctl; unsigned int id = dai->driver->id; int ret = -EINVAL; - unsigned int val = 0; - - ret = regmap_read(drvdata->lpaif_map, - LPAIF_I2SCTL_REG(drvdata->variant, dai->driver->id), &val); - if (ret) { - dev_err(dai->dev, "error reading from i2sctl reg: %d\n", ret); - return ret; - } - if (val == LPAIF_I2SCTL_RESET_STATE) { - dev_err(dai->dev, "error in i2sctl register state\n"); - return -ENOTRECOVERABLE; - } switch (cmd) { case SNDRV_PCM_TRIGGER_START: @@ -454,20 +442,16 @@ static bool lpass_cpu_regmap_volatile(struct device *dev, unsigned int reg) struct lpass_variant *v = drvdata->variant; int i; - for (i = 0; i < v->i2s_ports; ++i) - if (reg == LPAIF_I2SCTL_REG(v, i)) - return true; for (i = 0; i < v->irq_ports; ++i) if (reg == LPAIF_IRQSTAT_REG(v, i)) return true; for (i = 0; i < v->rdma_channels; ++i) - if (reg == LPAIF_RDMACURR_REG(v, i) || reg == LPAIF_RDMACTL_REG(v, i)) + if (reg == LPAIF_RDMACURR_REG(v, i)) return true; for (i = 0; i < v->wrdma_channels; ++i) - if (reg == LPAIF_WRDMACURR_REG(v, i + v->wrdma_channel_start) || - reg == LPAIF_WRDMACTL_REG(v, i + v->wrdma_channel_start)) + if (reg == LPAIF_WRDMACURR_REG(v, i + v->wrdma_channel_start)) return true; return false; diff --git a/sound/soc/qcom/lpass-platform.c b/sound/soc/qcom/lpass-platform.c index 80b09dede5f9..d1c248590f3a 100644 --- a/sound/soc/qcom/lpass-platform.c +++ b/sound/soc/qcom/lpass-platform.c @@ -452,7 +452,6 @@ static int lpass_platform_pcmops_trigger(struct snd_soc_component *component, unsigned int reg_irqclr = 0, val_irqclr = 0; unsigned int reg_irqen = 0, val_irqen = 0, val_mask = 0; unsigned int dai_id = cpu_dai->driver->id; - unsigned int dma_ctrl_reg = 0; ch = pcm_data->dma_ch; if (dir == SNDRV_PCM_STREAM_PLAYBACK) { @@ -469,17 +468,7 @@ static int lpass_platform_pcmops_trigger(struct snd_soc_component *component, id = pcm_data->dma_ch - v->wrdma_channel_start; map = drvdata->lpaif_map; } - ret = regmap_read(map, LPAIF_DMACTL_REG(v, ch, dir, dai_id), 
&dma_ctrl_reg); - if (ret) { - dev_err(soc_runtime->dev, "error reading from rdmactl reg: %d\n", ret); - return ret; - } - if (dma_ctrl_reg == LPAIF_DMACTL_RESET_STATE || - dma_ctrl_reg == LPAIF_DMACTL_RESET_STATE + 1) { - dev_err(soc_runtime->dev, "error in rdmactl register state\n"); - return -ENOTRECOVERABLE; - } switch (cmd) { case SNDRV_PCM_TRIGGER_START: case SNDRV_PCM_TRIGGER_RESUME: @@ -500,7 +489,6 @@ static int lpass_platform_pcmops_trigger(struct snd_soc_component *component, "error writing to rdmactl reg: %d\n", ret); return ret; } - map = drvdata->hdmiif_map; reg_irqclr = LPASS_HDMITX_APP_IRQCLEAR_REG(v); val_irqclr = (LPAIF_IRQ_ALL(ch) | LPAIF_IRQ_HDMI_REQ_ON_PRELOAD(ch) | @@ -519,7 +507,6 @@ static int lpass_platform_pcmops_trigger(struct snd_soc_component *component, break; case MI2S_PRIMARY: case MI2S_SECONDARY: - map = drvdata->lpaif_map; reg_irqclr = LPAIF_IRQCLEAR_REG(v, LPAIF_IRQ_PORT_HOST); val_irqclr = LPAIF_IRQ_ALL(ch); @@ -563,7 +550,6 @@ static int lpass_platform_pcmops_trigger(struct snd_soc_component *component, "error writing to rdmactl reg: %d\n", ret); return ret; } - map = drvdata->hdmiif_map; reg_irqen = LPASS_HDMITX_APP_IRQEN_REG(v); val_mask = (LPAIF_IRQ_ALL(ch) | LPAIF_IRQ_HDMI_REQ_ON_PRELOAD(ch) | @@ -573,7 +559,6 @@ static int lpass_platform_pcmops_trigger(struct snd_soc_component *component, break; case MI2S_PRIMARY: case MI2S_SECONDARY: - map = drvdata->lpaif_map; reg_irqen = LPAIF_IRQEN_REG(v, LPAIF_IRQ_PORT_HOST); val_mask = LPAIF_IRQ_ALL(ch); val_irqen = 0; @@ -838,6 +823,39 @@ static void lpass_platform_pcm_free(struct snd_soc_component *component, } } +static int lpass_platform_pcmops_suspend(struct snd_soc_component *component) +{ + struct lpass_data *drvdata = snd_soc_component_get_drvdata(component); + struct regmap *map; + unsigned int dai_id = component->id; + + if (dai_id == LPASS_DP_RX) + map = drvdata->hdmiif_map; + else + map = drvdata->lpaif_map; + + regcache_cache_only(map, true); + regcache_mark_dirty(map); + + return 0; +} + +static int lpass_platform_pcmops_resume(struct snd_soc_component *component) +{ + struct lpass_data *drvdata = snd_soc_component_get_drvdata(component); + struct regmap *map; + unsigned int dai_id = component->id; + + if (dai_id == LPASS_DP_RX) + map = drvdata->hdmiif_map; + else + map = drvdata->lpaif_map; + + regcache_cache_only(map, false); + return regcache_sync(map); +} + + static const struct snd_soc_component_driver lpass_component_driver = { .name = DRV_NAME, .open = lpass_platform_pcmops_open, @@ -850,6 +868,8 @@ static const struct snd_soc_component_driver lpass_component_driver = { .mmap = lpass_platform_pcmops_mmap, .pcm_construct = lpass_platform_pcm_new, .pcm_destruct = lpass_platform_pcm_free, + .suspend = lpass_platform_pcmops_suspend, + .resume = lpass_platform_pcmops_resume, }; diff --git a/sound/soc/sh/rcar/adg.c b/sound/soc/sh/rcar/adg.c index b9aacf3d3b29..abdfd9cf91e2 100644 --- a/sound/soc/sh/rcar/adg.c +++ b/sound/soc/sh/rcar/adg.c @@ -366,25 +366,27 @@ void rsnd_adg_clk_control(struct rsnd_priv *priv, int enable) struct rsnd_adg *adg = rsnd_priv_to_adg(priv); struct device *dev = rsnd_priv_to_dev(priv); struct clk *clk; - int i, ret; + int i; for_each_rsnd_clk(clk, adg, i) { - ret = 0; if (enable) { - ret = clk_prepare_enable(clk); + int ret = clk_prepare_enable(clk); /* * We shouldn't use clk_get_rate() under * atomic context. 
Let's keep it when * rsnd_adg_clk_enable() was called */ - adg->clk_rate[i] = clk_get_rate(adg->clk[i]); + adg->clk_rate[i] = 0; + if (ret < 0) + dev_warn(dev, "can't use clk %d\n", i); + else + adg->clk_rate[i] = clk_get_rate(clk); } else { - clk_disable_unprepare(clk); + if (adg->clk_rate[i]) + clk_disable_unprepare(clk); + adg->clk_rate[i] = 0; } - - if (ret < 0) - dev_warn(dev, "can't use clk %d\n", i); } } diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c index 9f0c86cbdcca..2b75d0139e47 100644 --- a/sound/soc/soc-dapm.c +++ b/sound/soc/soc-dapm.c @@ -2486,6 +2486,7 @@ void snd_soc_dapm_free_widget(struct snd_soc_dapm_widget *w) enum snd_soc_dapm_direction dir; list_del(&w->list); + list_del(&w->dirty); /* * remove source and sink paths associated to this widget. * While removing the path, remove reference to it from both diff --git a/sound/soc/sof/Kconfig b/sound/soc/sof/Kconfig index 031dad5fc4c7..3e8b6c035ce3 100644 --- a/sound/soc/sof/Kconfig +++ b/sound/soc/sof/Kconfig @@ -122,7 +122,7 @@ config SND_SOC_SOF_DEBUG_XRUN_STOP bool "SOF stop on XRUN" help This option forces PCMs to stop on any XRUN event. This is useful to - preserve any trace data ond pipeline status prior to the XRUN. + preserve any trace data and pipeline status prior to the XRUN. Say Y if you are debugging SOF FW pipeline XRUNs. If unsure select "N". diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c index 6875fa570c2c..6744318de612 100644 --- a/sound/soc/sof/intel/hda-codec.c +++ b/sound/soc/sof/intel/hda-codec.c @@ -63,16 +63,18 @@ static int hda_codec_load_module(struct hda_codec *codec) } /* enable controller wake up event for all codecs with jack connectors */ -void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev) +void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev, bool enable) { struct hda_bus *hbus = sof_to_hbus(sdev); struct hdac_bus *bus = sof_to_bus(sdev); struct hda_codec *codec; unsigned int mask = 0; - list_for_each_codec(codec, hbus) - if (codec->jacktbl.used) - mask |= BIT(codec->core.addr); + if (enable) { + list_for_each_codec(codec, hbus) + if (codec->jacktbl.used) + mask |= BIT(codec->core.addr); + } snd_hdac_chip_updatew(bus, WAKEEN, STATESTS_INT_MASK, mask); } @@ -81,23 +83,18 @@ void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev) void hda_codec_jack_check(struct snd_sof_dev *sdev) { struct hda_bus *hbus = sof_to_hbus(sdev); - struct hdac_bus *bus = sof_to_bus(sdev); struct hda_codec *codec; - /* disable controller Wake Up event*/ - snd_hdac_chip_updatew(bus, WAKEEN, STATESTS_INT_MASK, 0); - list_for_each_codec(codec, hbus) /* * Wake up all jack-detecting codecs regardless whether an event * has been recorded in STATESTS */ if (codec->jacktbl.used) - schedule_delayed_work(&codec->jackpoll_work, - codec->jackpoll_interval); + pm_request_resume(&codec->core.dev); } #else -void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev) {} +void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev, bool enable) {} void hda_codec_jack_check(struct snd_sof_dev *sdev) {} #endif /* CONFIG_SND_SOC_SOF_HDA_AUDIO_CODEC */ EXPORT_SYMBOL_NS(hda_codec_jack_wake_enable, SND_SOC_SOF_HDA_AUDIO_CODEC); @@ -156,7 +153,8 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address, if (!hdev->bus->audio_component) { dev_dbg(sdev->dev, "iDisp hw present but no driver\n"); - goto error; + ret = -ENOENT; + goto out; } hda_priv->need_display_power = true; } @@ -173,24 +171,23 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address, * other return codes 
without modification */ if (ret == 0) - goto error; + ret = -ENOENT; } - return ret; - -error: - snd_hdac_ext_bus_device_exit(hdev); - return -ENOENT; - +out: + if (ret < 0) { + snd_hdac_device_unregister(hdev); + put_device(&hdev->dev); + } #else hdev = devm_kzalloc(sdev->dev, sizeof(*hdev), GFP_KERNEL); if (!hdev) return -ENOMEM; ret = snd_hdac_ext_bus_device_init(&hbus->core, address, hdev, HDA_DEV_ASOC); +#endif return ret; -#endif } /* Codec initialization */ diff --git a/sound/soc/sof/intel/hda-dsp.c b/sound/soc/sof/intel/hda-dsp.c index 2b001151fe37..1c5e05b88a90 100644 --- a/sound/soc/sof/intel/hda-dsp.c +++ b/sound/soc/sof/intel/hda-dsp.c @@ -617,7 +617,7 @@ static int hda_suspend(struct snd_sof_dev *sdev, bool runtime_suspend) #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) if (runtime_suspend) - hda_codec_jack_wake_enable(sdev); + hda_codec_jack_wake_enable(sdev, true); /* power down all hda link */ snd_hdac_ext_bus_link_power_down_all(bus); @@ -683,8 +683,11 @@ static int hda_resume(struct snd_sof_dev *sdev, bool runtime_resume) #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) /* check jack status */ - if (runtime_resume) - hda_codec_jack_check(sdev); + if (runtime_resume) { + hda_codec_jack_wake_enable(sdev, false); + if (sdev->system_suspend_target == SOF_SUSPEND_NONE) + hda_codec_jack_check(sdev); + } /* turn off the links that were off before suspend */ list_for_each_entry(hlink, &bus->hlink_list, list) { diff --git a/sound/soc/sof/intel/hda.h b/sound/soc/sof/intel/hda.h index 9ec8ae0fd649..a3b6f3e9121c 100644 --- a/sound/soc/sof/intel/hda.h +++ b/sound/soc/sof/intel/hda.h @@ -650,7 +650,7 @@ void sof_hda_bus_init(struct hdac_bus *bus, struct device *dev); */ void hda_codec_probe_bus(struct snd_sof_dev *sdev, bool hda_codec_use_common_hdmi); -void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev); +void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev, bool enable); void hda_codec_jack_check(struct snd_sof_dev *sdev); #endif /* CONFIG_SND_SOC_SOF_HDA */ diff --git a/sound/usb/card.c b/sound/usb/card.c index d731ca62d599..e08fbf8e3ee0 100644 --- a/sound/usb/card.c +++ b/sound/usb/card.c @@ -450,10 +450,8 @@ lookup_device_name(u32 id) static void snd_usb_audio_free(struct snd_card *card) { struct snd_usb_audio *chip = card->private_data; - struct snd_usb_endpoint *ep, *n; - list_for_each_entry_safe(ep, n, &chip->ep_list, list) - snd_usb_endpoint_free(ep); + snd_usb_endpoint_free_all(chip); mutex_destroy(&chip->mutex); if (!atomic_read(&chip->shutdown)) @@ -611,6 +609,7 @@ static int snd_usb_audio_create(struct usb_interface *intf, chip->usb_id = usb_id; INIT_LIST_HEAD(&chip->pcm_list); INIT_LIST_HEAD(&chip->ep_list); + INIT_LIST_HEAD(&chip->iface_ref_list); INIT_LIST_HEAD(&chip->midi_list); INIT_LIST_HEAD(&chip->mixer_list); diff --git a/sound/usb/card.h b/sound/usb/card.h index 6a027c349194..37091b117614 100644 --- a/sound/usb/card.h +++ b/sound/usb/card.h @@ -18,6 +18,7 @@ struct audioformat { unsigned int frame_size; /* samples per frame for non-audio */ unsigned char iface; /* interface number */ unsigned char altsetting; /* corresponding alternate setting */ + unsigned char ep_idx; /* endpoint array index */ unsigned char altset_idx; /* array index of alternate setting */ unsigned char attributes; /* corresponding attributes of cs endpoint */ unsigned char endpoint; /* endpoint */ @@ -42,6 +43,7 @@ struct audioformat { }; struct snd_usb_substream; +struct snd_usb_iface_ref; struct snd_usb_endpoint; struct snd_usb_power_domain; @@ -58,6 +60,7 @@ struct snd_urb_ctx { struct 
snd_usb_endpoint { struct snd_usb_audio *chip; + struct snd_usb_iface_ref *iface_ref; int opened; /* open refcount; protect with chip->mutex */ atomic_t running; /* running status */ diff --git a/sound/usb/clock.c b/sound/usb/clock.c index 31051f2be46d..dc68ed65e478 100644 --- a/sound/usb/clock.c +++ b/sound/usb/clock.c @@ -485,18 +485,9 @@ static int set_sample_rate_v1(struct snd_usb_audio *chip, const struct audioformat *fmt, int rate) { struct usb_device *dev = chip->dev; - struct usb_host_interface *alts; - unsigned int ep; unsigned char data[3]; int err, crate; - alts = snd_usb_get_host_interface(chip, fmt->iface, fmt->altsetting); - if (!alts) - return -EINVAL; - if (get_iface_desc(alts)->bNumEndpoints < 1) - return -EINVAL; - ep = get_endpoint(alts, 0)->bEndpointAddress; - /* if endpoint doesn't have sampling rate control, bail out */ if (!(fmt->attributes & UAC_EP_CS_ATTR_SAMPLE_RATE)) return 0; @@ -506,11 +497,11 @@ static int set_sample_rate_v1(struct snd_usb_audio *chip, data[2] = rate >> 16; err = snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0), UAC_SET_CUR, USB_TYPE_CLASS | USB_RECIP_ENDPOINT | USB_DIR_OUT, - UAC_EP_CS_ATTR_SAMPLE_RATE << 8, ep, - data, sizeof(data)); + UAC_EP_CS_ATTR_SAMPLE_RATE << 8, + fmt->endpoint, data, sizeof(data)); if (err < 0) { dev_err(&dev->dev, "%d:%d: cannot set freq %d to ep %#x\n", - fmt->iface, fmt->altsetting, rate, ep); + fmt->iface, fmt->altsetting, rate, fmt->endpoint); return err; } @@ -524,11 +515,11 @@ static int set_sample_rate_v1(struct snd_usb_audio *chip, err = snd_usb_ctl_msg(dev, usb_rcvctrlpipe(dev, 0), UAC_GET_CUR, USB_TYPE_CLASS | USB_RECIP_ENDPOINT | USB_DIR_IN, - UAC_EP_CS_ATTR_SAMPLE_RATE << 8, ep, - data, sizeof(data)); + UAC_EP_CS_ATTR_SAMPLE_RATE << 8, + fmt->endpoint, data, sizeof(data)); if (err < 0) { dev_err(&dev->dev, "%d:%d: cannot get freq at ep %#x\n", - fmt->iface, fmt->altsetting, ep); + fmt->iface, fmt->altsetting, fmt->endpoint); chip->sample_rate_read_error++; return 0; /* some devices don't support reading */ } diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c index 162da7a50046..8e568823c992 100644 --- a/sound/usb/endpoint.c +++ b/sound/usb/endpoint.c @@ -24,6 +24,14 @@ #define EP_FLAG_RUNNING 1 #define EP_FLAG_STOPPING 2 +/* interface refcounting */ +struct snd_usb_iface_ref { + unsigned char iface; + bool need_setup; + int opened; + struct list_head list; +}; + /* * snd_usb_endpoint is a model that abstracts everything related to an * USB endpoint and its streaming. @@ -489,6 +497,28 @@ exit_clear: } /* + * Find or create a refcount object for the given interface + * + * The objects are released altogether in snd_usb_endpoint_free_all() + */ +static struct snd_usb_iface_ref * +iface_ref_find(struct snd_usb_audio *chip, int iface) +{ + struct snd_usb_iface_ref *ip; + + list_for_each_entry(ip, &chip->iface_ref_list, list) + if (ip->iface == iface) + return ip; + + ip = kzalloc(sizeof(*ip), GFP_KERNEL); + if (!ip) + return NULL; + ip->iface = iface; + list_add_tail(&ip->list, &chip->iface_ref_list); + return ip; +} + +/* * Get the existing endpoint object corresponding EP * Returns NULL if not present. */ @@ -520,8 +550,8 @@ snd_usb_get_endpoint(struct snd_usb_audio *chip, int ep_num) * * Returns zero on success or a negative error code. * - * New endpoints will be added to chip->ep_list and must be freed by - * calling snd_usb_endpoint_free(). + * New endpoints will be added to chip->ep_list and freed by + * calling snd_usb_endpoint_free_all(). 
* * For SND_USB_ENDPOINT_TYPE_SYNC, the caller needs to guarantee that * bNumEndpoints > 1 beforehand. @@ -653,11 +683,17 @@ snd_usb_endpoint_open(struct snd_usb_audio *chip, } else { ep->iface = fp->iface; ep->altsetting = fp->altsetting; - ep->ep_idx = 0; + ep->ep_idx = fp->ep_idx; } usb_audio_dbg(chip, "Open EP 0x%x, iface=%d:%d, idx=%d\n", ep_num, ep->iface, ep->altsetting, ep->ep_idx); + ep->iface_ref = iface_ref_find(chip, ep->iface); + if (!ep->iface_ref) { + ep = NULL; + goto unlock; + } + ep->cur_audiofmt = fp; ep->cur_channels = fp->channels; ep->cur_rate = params_rate(params); @@ -681,6 +717,11 @@ snd_usb_endpoint_open(struct snd_usb_audio *chip, ep->implicit_fb_sync); } else { + if (WARN_ON(!ep->iface_ref)) { + ep = NULL; + goto unlock; + } + if (!endpoint_compatible(ep, fp, params)) { usb_audio_err(chip, "Incompatible EP setup for 0x%x\n", ep_num); @@ -692,6 +733,9 @@ snd_usb_endpoint_open(struct snd_usb_audio *chip, ep_num, ep->opened); } + if (!ep->iface_ref->opened++) + ep->iface_ref->need_setup = true; + ep->opened++; unlock: @@ -760,12 +804,16 @@ void snd_usb_endpoint_close(struct snd_usb_audio *chip, mutex_lock(&chip->mutex); usb_audio_dbg(chip, "Closing EP 0x%x (count %d)\n", ep->ep_num, ep->opened); - if (!--ep->opened) { + + if (!--ep->iface_ref->opened) + endpoint_set_interface(chip, ep, false); + + if (!--ep->opened) { ep->iface = 0; ep->altsetting = 0; ep->cur_audiofmt = NULL; ep->cur_rate = 0; + ep->iface_ref = NULL; usb_audio_dbg(chip, "EP 0x%x closed\n", ep->ep_num); } mutex_unlock(&chip->mutex); @@ -775,6 +823,8 @@ void snd_usb_endpoint_close(struct snd_usb_audio *chip, void snd_usb_endpoint_suspend(struct snd_usb_endpoint *ep) { ep->need_setup = true; + if (ep->iface_ref) + ep->iface_ref->need_setup = true; } /* @@ -1195,11 +1245,22 @@ int snd_usb_endpoint_configure(struct snd_usb_audio *chip, int err = 0; mutex_lock(&chip->mutex); + if (WARN_ON(!ep->iface_ref)) + goto unlock; if (!ep->need_setup) goto unlock; - /* No need to (re-)configure the sync EP belonging to the same altset */ - if (ep->ep_idx) { + /* If the interface has already been set up, just set EP parameters */ + if (!ep->iface_ref->need_setup) { + /* sample rate setup of UAC1 is per endpoint, and we need + * to update at each EP configuration + */ + if (ep->cur_audiofmt->protocol == UAC_VERSION_1) { + err = snd_usb_init_sample_rate(chip, ep->cur_audiofmt, + ep->cur_rate); + if (err < 0) + goto unlock; + } err = snd_usb_endpoint_set_params(chip, ep); if (err < 0) goto unlock; @@ -1242,6 +1303,8 @@ int snd_usb_endpoint_configure(struct snd_usb_audio *chip, goto unlock; } + ep->iface_ref->need_setup = false; + done: ep->need_setup = false; err = 1; @@ -1387,15 +1450,21 @@ void snd_usb_endpoint_release(struct snd_usb_endpoint *ep) } /** - * snd_usb_endpoint_free: Free the resources of an snd_usb_endpoint + * snd_usb_endpoint_free_all: Free the resources of all endpoints + * @chip: The chip * - * @ep: the endpoint to free - * - * This free all resources of the given ep. 
+ * This frees all endpoints and their resources. */ -void snd_usb_endpoint_free(struct snd_usb_endpoint *ep) +void snd_usb_endpoint_free_all(struct snd_usb_audio *chip) { - kfree(ep); + struct snd_usb_endpoint *ep, *en; + struct snd_usb_iface_ref *ip, *in; + + list_for_each_entry_safe(ep, en, &chip->ep_list, list) + kfree(ep); + + list_for_each_entry_safe(ip, in, &chip->iface_ref_list, list) + kfree(ip); } /* diff --git a/sound/usb/endpoint.h b/sound/usb/endpoint.h index 11e3bb839fd7..eea4ca49876d 100644 --- a/sound/usb/endpoint.h +++ b/sound/usb/endpoint.h @@ -42,7 +42,7 @@ void snd_usb_endpoint_sync_pending_stop(struct snd_usb_endpoint *ep); void snd_usb_endpoint_suspend(struct snd_usb_endpoint *ep); int snd_usb_endpoint_activate(struct snd_usb_endpoint *ep); void snd_usb_endpoint_release(struct snd_usb_endpoint *ep); -void snd_usb_endpoint_free(struct snd_usb_endpoint *ep); +void snd_usb_endpoint_free_all(struct snd_usb_audio *chip); int snd_usb_endpoint_implicit_feedback_sink(struct snd_usb_endpoint *ep); int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep, diff --git a/sound/usb/implicit.c b/sound/usb/implicit.c index eb3a4c433c3e..521cc846d9d9 100644 --- a/sound/usb/implicit.c +++ b/sound/usb/implicit.c @@ -58,8 +58,6 @@ static const struct snd_usb_implicit_fb_match playback_implicit_fb_quirks[] = { IMPLICIT_FB_FIXED_DEV(0x0499, 0x172f, 0x81, 2), /* Steinberg UR22C */ IMPLICIT_FB_FIXED_DEV(0x0d9a, 0x00df, 0x81, 2), /* RTX6001 */ IMPLICIT_FB_FIXED_DEV(0x22f0, 0x0006, 0x81, 3), /* Allen&Heath Qu-16 */ - IMPLICIT_FB_FIXED_DEV(0x2b73, 0x000a, 0x82, 0), /* Pioneer DJ DJM-900NXS2 */ - IMPLICIT_FB_FIXED_DEV(0x2b73, 0x0017, 0x82, 0), /* Pioneer DJ DJM-250MK2 */ IMPLICIT_FB_FIXED_DEV(0x1686, 0xf029, 0x82, 2), /* Zoom UAC-2 */ IMPLICIT_FB_FIXED_DEV(0x2466, 0x8003, 0x86, 2), /* Fractal Audio Axe-Fx II */ IMPLICIT_FB_FIXED_DEV(0x0499, 0x172a, 0x86, 2), /* Yamaha MODX */ @@ -74,10 +72,12 @@ static const struct snd_usb_implicit_fb_match playback_implicit_fb_quirks[] = { /* No quirk for playback but with capture quirk (see below) */ IMPLICIT_FB_SKIP_DEV(0x0582, 0x0130), /* BOSS BR-80 */ + IMPLICIT_FB_SKIP_DEV(0x0582, 0x0171), /* BOSS RC-505 */ IMPLICIT_FB_SKIP_DEV(0x0582, 0x0189), /* BOSS GT-100v2 */ IMPLICIT_FB_SKIP_DEV(0x0582, 0x01d6), /* BOSS GT-1 */ IMPLICIT_FB_SKIP_DEV(0x0582, 0x01d8), /* BOSS Katana */ IMPLICIT_FB_SKIP_DEV(0x0582, 0x01e5), /* BOSS GT-001 */ + IMPLICIT_FB_SKIP_DEV(0x0582, 0x0203), /* BOSS AD-10 */ {} /* terminator */ }; @@ -85,10 +85,12 @@ static const struct snd_usb_implicit_fb_match playback_implicit_fb_quirks[] = { /* Implicit feedback quirk table for capture: only FIXED type */ static const struct snd_usb_implicit_fb_match capture_implicit_fb_quirks[] = { IMPLICIT_FB_FIXED_DEV(0x0582, 0x0130, 0x0d, 0x01), /* BOSS BR-80 */ + IMPLICIT_FB_FIXED_DEV(0x0582, 0x0171, 0x0d, 0x01), /* BOSS RC-505 */ IMPLICIT_FB_FIXED_DEV(0x0582, 0x0189, 0x0d, 0x01), /* BOSS GT-100v2 */ IMPLICIT_FB_FIXED_DEV(0x0582, 0x01d6, 0x0d, 0x01), /* BOSS GT-1 */ IMPLICIT_FB_FIXED_DEV(0x0582, 0x01d8, 0x0d, 0x01), /* BOSS Katana */ IMPLICIT_FB_FIXED_DEV(0x0582, 0x01e5, 0x0d, 0x01), /* BOSS GT-001 */ + IMPLICIT_FB_FIXED_DEV(0x0582, 0x0203, 0x0d, 0x01), /* BOSS AD-10 */ {} /* terminator */ }; @@ -96,7 +98,7 @@ static const struct snd_usb_implicit_fb_match capture_implicit_fb_quirks[] = { /* set up sync EP information on the audioformat */ static int add_implicit_fb_sync_ep(struct snd_usb_audio *chip, struct audioformat *fmt, - int ep, int ifnum, + int ep, int ep_idx, int ifnum, const struct 
usb_host_interface *alts) { struct usb_interface *iface; @@ -111,7 +113,7 @@ static int add_implicit_fb_sync_ep(struct snd_usb_audio *chip, fmt->sync_ep = ep; fmt->sync_iface = ifnum; fmt->sync_altsetting = alts->desc.bAlternateSetting; - fmt->sync_ep_idx = 0; + fmt->sync_ep_idx = ep_idx; fmt->implicit_fb = 1; usb_audio_dbg(chip, "%d:%d: added %s implicit_fb sync_ep %x, iface %d:%d\n", @@ -143,7 +145,7 @@ static int add_generic_uac2_implicit_fb(struct snd_usb_audio *chip, (epd->bmAttributes & USB_ENDPOINT_USAGE_MASK) != USB_ENDPOINT_USAGE_IMPLICIT_FB) return 0; - return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, + return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, 0, ifnum, alts); } @@ -169,10 +171,33 @@ static int add_roland_implicit_fb(struct snd_usb_audio *chip, (epd->bmAttributes & USB_ENDPOINT_USAGE_MASK) != USB_ENDPOINT_USAGE_IMPLICIT_FB) return 0; - return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, + return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, 0, ifnum, alts); } +/* Playback and capture EPs on Pioneer devices share the same iface/altset, + * but they don't seem to work well with the implicit fb mode, hence we + * just return as if the sync were already set up. + */ +static int skip_pioneer_sync_ep(struct snd_usb_audio *chip, + struct audioformat *fmt, + struct usb_host_interface *alts) +{ + struct usb_endpoint_descriptor *epd; + + if (alts->desc.bNumEndpoints != 2) + return 0; + + epd = get_endpoint(alts, 1); + if (!usb_endpoint_is_isoc_in(epd) || + (epd->bmAttributes & USB_ENDPOINT_SYNCTYPE) != USB_ENDPOINT_SYNC_ASYNC || + ((epd->bmAttributes & USB_ENDPOINT_USAGE_MASK) != + USB_ENDPOINT_USAGE_DATA && + (epd->bmAttributes & USB_ENDPOINT_USAGE_MASK) != + USB_ENDPOINT_USAGE_IMPLICIT_FB)) + return 0; + return 1; /* don't handle with the implicit fb, just skip sync EP */ +} static int __add_generic_implicit_fb(struct snd_usb_audio *chip, struct audioformat *fmt, @@ -193,7 +218,7 @@ static int __add_generic_implicit_fb(struct snd_usb_audio *chip, if (!usb_endpoint_is_isoc_in(epd) || (epd->bmAttributes & USB_ENDPOINT_SYNCTYPE) != USB_ENDPOINT_SYNC_ASYNC) return 0; - return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, + return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, 0, iface, alts); } @@ -246,7 +271,7 @@ static int audioformat_implicit_fb_quirk(struct snd_usb_audio *chip, case IMPLICIT_FB_NONE: return 0; /* No quirk */ case IMPLICIT_FB_FIXED: - return add_implicit_fb_sync_ep(chip, fmt, p->ep_num, + return add_implicit_fb_sync_ep(chip, fmt, p->ep_num, 0, p->iface, NULL); } } @@ -274,6 +299,14 @@ static int audioformat_implicit_fb_quirk(struct snd_usb_audio *chip, return 1; } + /* Pioneer devices with vendor spec class */ + if (attr == USB_ENDPOINT_SYNC_ASYNC && + alts->desc.bInterfaceClass == USB_CLASS_VENDOR_SPEC && + USB_ID_VENDOR(chip->usb_id) == 0x2b73 /* Pioneer */) { + if (skip_pioneer_sync_ep(chip, fmt, alts)) + return 1; + } + /* Try the generic implicit fb if available */ if (chip->generic_implicit_fb) return add_generic_implicit_fb(chip, fmt, alts); @@ -291,8 +324,8 @@ static int audioformat_capture_quirk(struct snd_usb_audio *chip, p = find_implicit_fb_entry(chip, capture_implicit_fb_quirks, alts); if (p && p->type == IMPLICIT_FB_FIXED) - return add_implicit_fb_sync_ep(chip, fmt, p->ep_num, p->iface, - NULL); + return add_implicit_fb_sync_ep(chip, fmt, p->ep_num, 0, + p->iface, NULL); return 0; } @@ -374,20 +407,19 @@ snd_usb_find_implicit_fb_sync_format(struct snd_usb_audio *chip, int 
stream) { struct snd_usb_substream *subs; - const struct audioformat *fp, *sync_fmt; + const struct audioformat *fp, *sync_fmt = NULL; int score, high_score; - /* When sharing the same altset, use the original audioformat */ + /* Use the original audioformat as fallback for the shared altset */ if (target->iface == target->sync_iface && target->altsetting == target->sync_altsetting) - return target; + sync_fmt = target; subs = find_matching_substream(chip, stream, target->sync_ep, target->fmt_type); if (!subs) - return NULL; + return sync_fmt; - sync_fmt = NULL; high_score = 0; list_for_each_entry(fp, &subs->fmt_list, list) { score = match_endpoint_audioformats(subs, fp, diff --git a/sound/usb/midi.c b/sound/usb/midi.c index c8213652470c..0c23fa6d8525 100644 --- a/sound/usb/midi.c +++ b/sound/usb/midi.c @@ -1889,6 +1889,8 @@ static int snd_usbmidi_get_ms_info(struct snd_usb_midi *umidi, ms_ep = find_usb_ms_endpoint_descriptor(hostep); if (!ms_ep) continue; + if (ms_ep->bNumEmbMIDIJack > 0x10) + continue; if (usb_endpoint_dir_out(ep)) { if (endpoints[epidx].out_ep) { if (++epidx >= MIDI_MAX_ENDPOINTS) { @@ -2141,6 +2143,8 @@ static int snd_usbmidi_detect_roland(struct snd_usb_midi *umidi, cs_desc[1] == USB_DT_CS_INTERFACE && cs_desc[2] == 0xf1 && cs_desc[3] == 0x02) { + if (cs_desc[4] > 0x10 || cs_desc[5] > 0x10) + continue; endpoint->in_cables = (1 << cs_desc[4]) - 1; endpoint->out_cables = (1 << cs_desc[5]) - 1; return snd_usbmidi_detect_endpoints(umidi, endpoint, 1); diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c index 56079901769f..078bb4c94033 100644 --- a/sound/usb/pcm.c +++ b/sound/usb/pcm.c @@ -663,7 +663,7 @@ static int hw_check_valid_format(struct snd_usb_substream *subs, check_fmts.bits[1] = (u32)(fp->formats >> 32); snd_mask_intersect(&check_fmts, fmts); if (snd_mask_empty(&check_fmts)) { - hwc_debug(" > check: no supported format %d\n", fp->format); + hwc_debug(" > check: no supported format 0x%llx\n", fp->formats); return 0; } /* check the channels */ @@ -775,24 +775,11 @@ static int hw_rule_channels(struct snd_pcm_hw_params *params, return apply_hw_params_minmax(it, rmin, rmax); } -static int hw_rule_format(struct snd_pcm_hw_params *params, - struct snd_pcm_hw_rule *rule) +static int apply_hw_params_format_bits(struct snd_mask *fmt, u64 fbits) { - struct snd_usb_substream *subs = rule->private; - const struct audioformat *fp; - struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT); - u64 fbits; u32 oldbits[2]; int changed; - hwc_debug("hw_rule_format: %x:%x\n", fmt->bits[0], fmt->bits[1]); - fbits = 0; - list_for_each_entry(fp, &subs->fmt_list, list) { - if (!hw_check_valid_format(subs, params, fp)) - continue; - fbits |= fp->formats; - } - oldbits[0] = fmt->bits[0]; oldbits[1] = fmt->bits[1]; fmt->bits[0] &= (u32)fbits; @@ -806,6 +793,24 @@ static int hw_rule_format(struct snd_pcm_hw_params *params, return changed; } +static int hw_rule_format(struct snd_pcm_hw_params *params, + struct snd_pcm_hw_rule *rule) +{ + struct snd_usb_substream *subs = rule->private; + const struct audioformat *fp; + struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT); + u64 fbits; + + hwc_debug("hw_rule_format: %x:%x\n", fmt->bits[0], fmt->bits[1]); + fbits = 0; + list_for_each_entry(fp, &subs->fmt_list, list) { + if (!hw_check_valid_format(subs, params, fp)) + continue; + fbits |= fp->formats; + } + return apply_hw_params_format_bits(fmt, fbits); +} + static int hw_rule_period_time(struct snd_pcm_hw_params *params, struct snd_pcm_hw_rule *rule) { @@ -833,64 
+838,92 @@ static int hw_rule_period_time(struct snd_pcm_hw_params *params, return apply_hw_params_minmax(it, pmin, UINT_MAX); } -/* apply PCM hw constraints from the concurrent sync EP */ -static int apply_hw_constraint_from_sync(struct snd_pcm_runtime *runtime, - struct snd_usb_substream *subs) +/* get the EP or the sync EP for implicit fb when it's already set up */ +static const struct snd_usb_endpoint * +get_sync_ep_from_substream(struct snd_usb_substream *subs) { struct snd_usb_audio *chip = subs->stream->chip; - struct snd_usb_endpoint *ep; const struct audioformat *fp; - int err; + const struct snd_usb_endpoint *ep; list_for_each_entry(fp, &subs->fmt_list, list) { ep = snd_usb_get_endpoint(chip, fp->endpoint); if (ep && ep->cur_rate) - goto found; + return ep; if (!fp->implicit_fb) continue; /* for the implicit fb, check the sync ep as well */ ep = snd_usb_get_endpoint(chip, fp->sync_ep); if (ep && ep->cur_rate) - goto found; + return ep; } - return 0; + return NULL; +} - found: - if (!find_format(&subs->fmt_list, ep->cur_format, ep->cur_rate, - ep->cur_channels, false, NULL)) { - usb_audio_dbg(chip, "EP 0x%x being used, but not applicable\n", - ep->ep_num); +/* additional hw constraints for implicit feedback mode */ +static int hw_rule_format_implicit_fb(struct snd_pcm_hw_params *params, + struct snd_pcm_hw_rule *rule) +{ + struct snd_usb_substream *subs = rule->private; + const struct snd_usb_endpoint *ep; + struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT); + + ep = get_sync_ep_from_substream(subs); + if (!ep) return 0; - } - usb_audio_dbg(chip, "EP 0x%x being used, using fixed params:\n", - ep->ep_num); - usb_audio_dbg(chip, "rate=%d, period_size=%d, periods=%d\n", - ep->cur_rate, ep->cur_period_frames, - ep->cur_buffer_periods); + hwc_debug("applying %s\n", __func__); + return apply_hw_params_format_bits(fmt, pcm_format_to_bits(ep->cur_format)); +} - runtime->hw.formats = subs->formats; - runtime->hw.rate_min = runtime->hw.rate_max = ep->cur_rate; - runtime->hw.rates = SNDRV_PCM_RATE_KNOT; - runtime->hw.periods_min = runtime->hw.periods_max = - ep->cur_buffer_periods; +static int hw_rule_rate_implicit_fb(struct snd_pcm_hw_params *params, + struct snd_pcm_hw_rule *rule) +{ + struct snd_usb_substream *subs = rule->private; + const struct snd_usb_endpoint *ep; + struct snd_interval *it; - err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_CHANNELS, - hw_rule_channels, subs, - SNDRV_PCM_HW_PARAM_FORMAT, - SNDRV_PCM_HW_PARAM_RATE, - -1); - if (err < 0) - return err; + ep = get_sync_ep_from_substream(subs); + if (!ep) + return 0; - err = snd_pcm_hw_constraint_minmax(runtime, - SNDRV_PCM_HW_PARAM_PERIOD_SIZE, - ep->cur_period_frames, - ep->cur_period_frames); - if (err < 0) - return err; + hwc_debug("applying %s\n", __func__); + it = hw_param_interval(params, SNDRV_PCM_HW_PARAM_RATE); + return apply_hw_params_minmax(it, ep->cur_rate, ep->cur_rate); +} - return 1; /* notify the finding */ +static int hw_rule_period_size_implicit_fb(struct snd_pcm_hw_params *params, + struct snd_pcm_hw_rule *rule) +{ + struct snd_usb_substream *subs = rule->private; + const struct snd_usb_endpoint *ep; + struct snd_interval *it; + + ep = get_sync_ep_from_substream(subs); + if (!ep) + return 0; + + hwc_debug("applying %s\n", __func__); + it = hw_param_interval(params, SNDRV_PCM_HW_PARAM_PERIOD_SIZE); + return apply_hw_params_minmax(it, ep->cur_period_frames, + ep->cur_period_frames); +} + +static int hw_rule_periods_implicit_fb(struct snd_pcm_hw_params *params, + struct 
snd_pcm_hw_rule *rule) +{ + struct snd_usb_substream *subs = rule->private; + const struct snd_usb_endpoint *ep; + struct snd_interval *it; + + ep = get_sync_ep_from_substream(subs); + if (!ep) + return 0; + + hwc_debug("applying %s\n", __func__); + it = hw_param_interval(params, SNDRV_PCM_HW_PARAM_PERIODS); + return apply_hw_params_minmax(it, ep->cur_buffer_periods, + ep->cur_buffer_periods); } /* @@ -899,20 +932,11 @@ static int apply_hw_constraint_from_sync(struct snd_pcm_runtime *runtime, static int setup_hw_info(struct snd_pcm_runtime *runtime, struct snd_usb_substream *subs) { - struct snd_usb_audio *chip = subs->stream->chip; const struct audioformat *fp; unsigned int pt, ptmin; int param_period_time_if_needed = -1; int err; - mutex_lock(&chip->mutex); - err = apply_hw_constraint_from_sync(runtime, subs); - mutex_unlock(&chip->mutex); - if (err < 0) - return err; - if (err > 0) /* found the matching? */ - goto add_extra_rules; - runtime->hw.formats = subs->formats; runtime->hw.rate_min = 0x7fffffff; @@ -957,6 +981,7 @@ static int setup_hw_info(struct snd_pcm_runtime *runtime, struct snd_usb_substre err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_RATE, hw_rule_rate, subs, + SNDRV_PCM_HW_PARAM_RATE, SNDRV_PCM_HW_PARAM_FORMAT, SNDRV_PCM_HW_PARAM_CHANNELS, param_period_time_if_needed, @@ -964,9 +989,9 @@ static int setup_hw_info(struct snd_pcm_runtime *runtime, struct snd_usb_substre if (err < 0) return err; -add_extra_rules: err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_CHANNELS, hw_rule_channels, subs, + SNDRV_PCM_HW_PARAM_CHANNELS, SNDRV_PCM_HW_PARAM_FORMAT, SNDRV_PCM_HW_PARAM_RATE, param_period_time_if_needed, @@ -975,6 +1000,7 @@ add_extra_rules: return err; err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_FORMAT, hw_rule_format, subs, + SNDRV_PCM_HW_PARAM_FORMAT, SNDRV_PCM_HW_PARAM_RATE, SNDRV_PCM_HW_PARAM_CHANNELS, param_period_time_if_needed, @@ -993,6 +1019,28 @@ add_extra_rules: return err; } + /* additional hw constraints for implicit fb */ + err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_FORMAT, + hw_rule_format_implicit_fb, subs, + SNDRV_PCM_HW_PARAM_FORMAT, -1); + if (err < 0) + return err; + err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_RATE, + hw_rule_rate_implicit_fb, subs, + SNDRV_PCM_HW_PARAM_RATE, -1); + if (err < 0) + return err; + err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, + hw_rule_period_size_implicit_fb, subs, + SNDRV_PCM_HW_PARAM_PERIOD_SIZE, -1); + if (err < 0) + return err; + err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_PERIODS, + hw_rule_periods_implicit_fb, subs, + SNDRV_PCM_HW_PARAM_PERIODS, -1); + if (err < 0) + return err; + return 0; } diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h index 0e11cb96fa8c..c8a4bdf18207 100644 --- a/sound/usb/quirks-table.h +++ b/sound/usb/quirks-table.h @@ -3362,6 +3362,7 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"), .altsetting = 1, .altset_idx = 1, .endpoint = 0x86, + .ep_idx = 1, .ep_attr = USB_ENDPOINT_XFER_ISOC| USB_ENDPOINT_SYNC_ASYNC| USB_ENDPOINT_USAGE_IMPLICIT_FB, @@ -3450,6 +3451,7 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"), .altsetting = 1, .altset_idx = 1, .endpoint = 0x82, + .ep_idx = 1, .ep_attr = USB_ENDPOINT_XFER_ISOC| USB_ENDPOINT_SYNC_ASYNC| USB_ENDPOINT_USAGE_IMPLICIT_FB, @@ -3506,6 +3508,7 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"), .altsetting = 1, .altset_idx = 1, .endpoint = 0x82, + .ep_idx = 1, .ep_attr = USB_ENDPOINT_XFER_ISOC| USB_ENDPOINT_SYNC_ASYNC| 
USB_ENDPOINT_USAGE_IMPLICIT_FB, @@ -3562,6 +3565,7 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"), .altsetting = 1, .altset_idx = 1, .endpoint = 0x82, + .ep_idx = 1, .ep_attr = USB_ENDPOINT_XFER_ISOC| USB_ENDPOINT_SYNC_ASYNC| USB_ENDPOINT_USAGE_IMPLICIT_FB, @@ -3619,6 +3623,7 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"), .altsetting = 1, .altset_idx = 1, .endpoint = 0x82, + .ep_idx = 1, .ep_attr = USB_ENDPOINT_XFER_ISOC| USB_ENDPOINT_SYNC_ASYNC| USB_ENDPOINT_USAGE_IMPLICIT_FB, @@ -3679,6 +3684,7 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"), .altsetting = 1, .altset_idx = 1, .endpoint = 0x82, + .ep_idx = 1, .ep_attr = USB_ENDPOINT_XFER_ISOC| USB_ENDPOINT_SYNC_ASYNC| USB_ENDPOINT_USAGE_IMPLICIT_FB, diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c index e4a690bb4c99..e196e364cef1 100644 --- a/sound/usb/quirks.c +++ b/sound/usb/quirks.c @@ -120,6 +120,40 @@ static int create_standard_audio_quirk(struct snd_usb_audio *chip, return 0; } +/* create the audio stream and the corresponding endpoints from the fixed + * audioformat object; this is used for quirks with the fixed EPs + */ +static int add_audio_stream_from_fixed_fmt(struct snd_usb_audio *chip, + struct audioformat *fp) +{ + int stream, err; + + stream = (fp->endpoint & USB_DIR_IN) ? + SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK; + + snd_usb_audioformat_set_sync_ep(chip, fp); + + err = snd_usb_add_audio_stream(chip, stream, fp); + if (err < 0) + return err; + + err = snd_usb_add_endpoint(chip, fp->endpoint, + SND_USB_ENDPOINT_TYPE_DATA); + if (err < 0) + return err; + + if (fp->sync_ep) { + err = snd_usb_add_endpoint(chip, fp->sync_ep, + fp->implicit_fb ? + SND_USB_ENDPOINT_TYPE_DATA : + SND_USB_ENDPOINT_TYPE_SYNC); + if (err < 0) + return err; + } + + return 0; +} + /* * create a stream for an endpoint/altsetting without proper descriptors */ @@ -131,8 +165,8 @@ static int create_fixed_stream_quirk(struct snd_usb_audio *chip, struct audioformat *fp; struct usb_host_interface *alts; struct usb_interface_descriptor *altsd; - int stream, err; unsigned *rate_table = NULL; + int err; fp = kmemdup(quirk->data, sizeof(*fp), GFP_KERNEL); if (!fp) @@ -153,11 +187,6 @@ static int create_fixed_stream_quirk(struct snd_usb_audio *chip, fp->rate_table = rate_table; } - stream = (fp->endpoint & USB_DIR_IN) - ? 
SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK; - err = snd_usb_add_audio_stream(chip, stream, fp); - if (err < 0) - goto error; if (fp->iface != get_iface_desc(&iface->altsetting[0])->bInterfaceNumber || fp->altset_idx >= iface->num_altsetting) { err = -EINVAL; @@ -165,7 +194,7 @@ static int create_fixed_stream_quirk(struct snd_usb_audio *chip, } alts = &iface->altsetting[fp->altset_idx]; altsd = get_iface_desc(alts); - if (altsd->bNumEndpoints < 1) { + if (altsd->bNumEndpoints <= fp->ep_idx) { err = -EINVAL; goto error; } @@ -175,7 +204,14 @@ static int create_fixed_stream_quirk(struct snd_usb_audio *chip, if (fp->datainterval == 0) fp->datainterval = snd_usb_parse_datainterval(chip, alts); if (fp->maxpacksize == 0) - fp->maxpacksize = le16_to_cpu(get_endpoint(alts, 0)->wMaxPacketSize); + fp->maxpacksize = le16_to_cpu(get_endpoint(alts, fp->ep_idx)->wMaxPacketSize); + if (!fp->fmt_type) + fp->fmt_type = UAC_FORMAT_TYPE_I; + + err = add_audio_stream_from_fixed_fmt(chip, fp); + if (err < 0) + goto error; + usb_set_interface(chip->dev, fp->iface, 0); snd_usb_init_pitch(chip, fp); snd_usb_init_sample_rate(chip, fp, fp->rate_max); @@ -417,7 +453,7 @@ static int create_uaxx_quirk(struct snd_usb_audio *chip, struct usb_host_interface *alts; struct usb_interface_descriptor *altsd; struct audioformat *fp; - int stream, err; + int err; /* both PCM and MIDI interfaces have 2 or more altsettings */ if (iface->num_altsetting < 2) @@ -482,9 +518,7 @@ static int create_uaxx_quirk(struct snd_usb_audio *chip, return -ENXIO; } - stream = (fp->endpoint & USB_DIR_IN) - ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK; - err = snd_usb_add_audio_stream(chip, stream, fp); + err = add_audio_stream_from_fixed_fmt(chip, fp); if (err < 0) { list_del(&fp->list); /* unlink for avoiding double-free */ kfree(fp); @@ -1436,30 +1470,6 @@ static void set_format_emu_quirk(struct snd_usb_substream *subs, subs->pkt_offset_adj = (emu_samplerate_id >= EMU_QUIRK_SR_176400HZ) ? 
4 : 0; } - -/* - * Pioneer DJ DJM-900NXS2 - * Device needs to know the sample rate each time substream is started - */ -static int pioneer_djm_set_format_quirk(struct snd_usb_substream *subs) -{ - unsigned int cur_rate = subs->data_endpoint->cur_rate; - /* Convert sample rate value to little endian */ - u8 sr[3]; - - sr[0] = cur_rate & 0xff; - sr[1] = (cur_rate >> 8) & 0xff; - sr[2] = (cur_rate >> 16) & 0xff; - - /* Configure device */ - usb_set_interface(subs->dev, 0, 1); - snd_usb_ctl_msg(subs->stream->chip->dev, - usb_rcvctrlpipe(subs->stream->chip->dev, 0), - 0x01, 0x22, 0x0100, 0x0082, &sr, 0x0003); - - return 0; -} - void snd_usb_set_format_quirk(struct snd_usb_substream *subs, const struct audioformat *fmt) { @@ -1470,10 +1480,6 @@ void snd_usb_set_format_quirk(struct snd_usb_substream *subs, case USB_ID(0x041e, 0x3f19): /* E-Mu 0204 USB */ set_format_emu_quirk(subs, fmt); break; - case USB_ID(0x2b73, 0x000a): /* Pioneer DJ DJM-900NXS2 */ - case USB_ID(0x2b73, 0x0017): /* Pioneer DJ DJM-250MK2 */ - pioneer_djm_set_format_quirk(subs); - break; case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */ subs->stream_offset_adj = 2; break; diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h index 980287aadd36..215c1771dd57 100644 --- a/sound/usb/usbaudio.h +++ b/sound/usb/usbaudio.h @@ -44,6 +44,7 @@ struct snd_usb_audio { struct list_head pcm_list; /* list of pcm streams */ struct list_head ep_list; /* list of audio-related endpoints */ + struct list_head iface_ref_list; /* list of interface refcounts */ int pcm_devs; struct list_head midi_list; /* list of midi interfaces */ diff --git a/tools/bootconfig/scripts/bconf2ftrace.sh b/tools/bootconfig/scripts/bconf2ftrace.sh index 595e164dc352..feb30c2c7881 100755 --- a/tools/bootconfig/scripts/bconf2ftrace.sh +++ b/tools/bootconfig/scripts/bconf2ftrace.sh @@ -152,6 +152,7 @@ setup_instance() { # [instance] set_array_of ${instance}.options ${instancedir}/trace_options set_value_of ${instance}.trace_clock ${instancedir}/trace_clock set_value_of ${instance}.cpumask ${instancedir}/tracing_cpumask + set_value_of ${instance}.tracing_on ${instancedir}/tracing_on set_value_of ${instance}.tracer ${instancedir}/current_tracer set_array_of ${instance}.ftrace.filters \ ${instancedir}/set_ftrace_filter diff --git a/tools/bootconfig/scripts/ftrace2bconf.sh b/tools/bootconfig/scripts/ftrace2bconf.sh index 6c0d4b61e0c2..a0c3bcc6da4f 100755 --- a/tools/bootconfig/scripts/ftrace2bconf.sh +++ b/tools/bootconfig/scripts/ftrace2bconf.sh @@ -221,6 +221,10 @@ instance_options() { # [instance-name] if [ `echo $val | sed -e s/f//g`x != x ]; then emit_kv $PREFIX.cpumask = $val fi + val=`cat $INSTANCE/tracing_on` + if [ `echo $val | sed -e s/f//g`x != x ]; then + emit_kv $PREFIX.tracing_on = $val + fi val= for i in `cat $INSTANCE/set_event`; do diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c index 3fae61ef6339..ff3aa0cf3997 100644 --- a/tools/bpf/bpftool/net.c +++ b/tools/bpf/bpftool/net.c @@ -11,7 +11,6 @@ #include <bpf/bpf.h> #include <bpf/libbpf.h> #include <net/if.h> -#include <linux/if.h> #include <linux/rtnetlink.h> #include <linux/socket.h> #include <linux/tc_act/tc_bpf.h> diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c index e3ea569ee125..7409d7860aa6 100644 --- a/tools/bpf/resolve_btfids/main.c +++ b/tools/bpf/resolve_btfids/main.c @@ -139,6 +139,8 @@ int eprintf(int level, int var, const char *fmt, ...) #define pr_debug2(fmt, ...) pr_debugN(2, pr_fmt(fmt), ##__VA_ARGS__) #define pr_err(fmt, ...) 
\ eprintf(0, verbose, pr_fmt(fmt), ##__VA_ARGS__) +#define pr_info(fmt, ...) \ + eprintf(0, verbose, pr_fmt(fmt), ##__VA_ARGS__) static bool is_btf_id(const char *name) { @@ -472,7 +474,7 @@ static int symbols_resolve(struct object *obj) int nr_funcs = obj->nr_funcs; int err, type_id; struct btf *btf; - __u32 nr; + __u32 nr_types; btf = btf__parse(obj->btf ?: obj->path, NULL); err = libbpf_get_error(btf); @@ -483,12 +485,12 @@ static int symbols_resolve(struct object *obj) } err = -1; - nr = btf__get_nr_types(btf); + nr_types = btf__get_nr_types(btf); /* * Iterate all the BTF types and search for collected symbol IDs. */ - for (type_id = 1; type_id <= nr; type_id++) { + for (type_id = 1; type_id <= nr_types; type_id++) { const struct btf_type *type; struct rb_root *root; struct btf_id *id; @@ -526,8 +528,13 @@ static int symbols_resolve(struct object *obj) id = btf_id__find(root, str); if (id) { - id->id = type_id; - (*nr)--; + if (id->id) { + pr_info("WARN: multiple IDs found for '%s': %d, %d - using %d\n", + str, id->id, type_id, id->id); + } else { + id->id = type_id; + (*nr)--; + } } } diff --git a/tools/gpio/gpio-event-mon.c b/tools/gpio/gpio-event-mon.c index cacd66ad7926..a2b233fdb572 100644 --- a/tools/gpio/gpio-event-mon.c +++ b/tools/gpio/gpio-event-mon.c @@ -107,8 +107,8 @@ int monitor_device(const char *device_name, ret = -EIO; break; } - fprintf(stdout, "GPIO EVENT at %llu on line %d (%d|%d) ", - event.timestamp_ns, event.offset, event.line_seqno, + fprintf(stdout, "GPIO EVENT at %" PRIu64 " on line %d (%d|%d) ", + (uint64_t)event.timestamp_ns, event.offset, event.line_seqno, event.seqno); switch (event.id) { case GPIO_V2_LINE_EVENT_RISING_EDGE: diff --git a/tools/gpio/gpio-watch.c b/tools/gpio/gpio-watch.c index f229ec62301b..41e76d244192 100644 --- a/tools/gpio/gpio-watch.c +++ b/tools/gpio/gpio-watch.c @@ -10,6 +10,7 @@ #include <ctype.h> #include <errno.h> #include <fcntl.h> +#include <inttypes.h> #include <linux/gpio.h> #include <poll.h> #include <stdbool.h> @@ -86,8 +87,8 @@ int main(int argc, char **argv) return EXIT_FAILURE; } - printf("line %u: %s at %llu\n", - chg.info.offset, event, chg.timestamp_ns); + printf("line %u: %s at %" PRIu64 "\n", + chg.info.offset, event, (uint64_t)chg.timestamp_ns); } } diff --git a/tools/include/linux/build_bug.h b/tools/include/linux/build_bug.h index ce365d212768..cc7070c7439b 100644 --- a/tools/include/linux/build_bug.h +++ b/tools/include/linux/build_bug.h @@ -79,9 +79,4 @@ #define __static_assert(expr, msg, ...) _Static_assert(expr, msg) #endif // static_assert -#ifdef __GENKSYMS__ -/* genksyms gets confused by _Static_assert */ -#define _Static_assert(expr, ...) -#endif - #endif /* _LINUX_BUILD_BUG_H */ diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h index 886802b8ffba..374c67875cdb 100644 --- a/tools/include/uapi/linux/kvm.h +++ b/tools/include/uapi/linux/kvm.h @@ -251,6 +251,7 @@ struct kvm_hyperv_exit { #define KVM_EXIT_X86_RDMSR 29 #define KVM_EXIT_X86_WRMSR 30 #define KVM_EXIT_DIRTY_RING_FULL 31 +#define KVM_EXIT_AP_RESET_HOLD 32 /* For KVM_EXIT_INTERNAL_ERROR */ /* Emulate instruction failed. 
*/ @@ -573,6 +574,7 @@ struct kvm_vapic_addr { #define KVM_MP_STATE_CHECK_STOP 6 #define KVM_MP_STATE_OPERATING 7 #define KVM_MP_STATE_LOAD 8 +#define KVM_MP_STATE_AP_RESET_HOLD 9 struct kvm_mp_state { __u32 mp_state; diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c index 3c3f2bc6c652..9970a288dda5 100644 --- a/tools/lib/bpf/btf.c +++ b/tools/lib/bpf/btf.c @@ -240,11 +240,6 @@ static int btf_parse_hdr(struct btf *btf) } meta_left = btf->raw_size - sizeof(*hdr); - if (!meta_left) { - pr_debug("BTF has no data\n"); - return -EINVAL; - } - if (meta_left < hdr->str_off + hdr->str_len) { pr_debug("Invalid BTF total size:%u\n", btf->raw_size); return -EINVAL; diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c index cfcdbd7be066..17465d454a0e 100644 --- a/tools/lib/perf/evlist.c +++ b/tools/lib/perf/evlist.c @@ -367,21 +367,13 @@ static struct perf_mmap* perf_evlist__alloc_mmap(struct perf_evlist *evlist, boo return map; } -static void perf_evlist__set_sid_idx(struct perf_evlist *evlist, - struct perf_evsel *evsel, int idx, int cpu, - int thread) +static void perf_evsel__set_sid_idx(struct perf_evsel *evsel, int idx, int cpu, int thread) { struct perf_sample_id *sid = SID(evsel, cpu, thread); sid->idx = idx; - if (evlist->cpus && cpu >= 0) - sid->cpu = evlist->cpus->map[cpu]; - else - sid->cpu = -1; - if (!evsel->system_wide && evlist->threads && thread >= 0) - sid->tid = perf_thread_map__pid(evlist->threads, thread); - else - sid->tid = -1; + sid->cpu = perf_cpu_map__cpu(evsel->cpus, cpu); + sid->tid = perf_thread_map__pid(evsel->threads, thread); } static struct perf_mmap* @@ -500,8 +492,7 @@ mmap_per_evsel(struct perf_evlist *evlist, struct perf_evlist_mmap_ops *ops, if (perf_evlist__id_add_fd(evlist, evsel, cpu, thread, fd) < 0) return -1; - perf_evlist__set_sid_idx(evlist, evsel, idx, cpu, - thread); + perf_evsel__set_sid_idx(evsel, idx, cpu, thread); } } diff --git a/tools/lib/perf/tests/test-cpumap.c b/tools/lib/perf/tests/test-cpumap.c index c8d45091e7c2..c70e9e03af3e 100644 --- a/tools/lib/perf/tests/test-cpumap.c +++ b/tools/lib/perf/tests/test-cpumap.c @@ -27,5 +27,5 @@ int main(int argc, char **argv) perf_cpu_map__put(cpus); __T_END; - return 0; + return tests_failed == 0 ? 0 : -1; } diff --git a/tools/lib/perf/tests/test-evlist.c b/tools/lib/perf/tests/test-evlist.c index 6d8ebe0c2504..e2ac0b7f432e 100644 --- a/tools/lib/perf/tests/test-evlist.c +++ b/tools/lib/perf/tests/test-evlist.c @@ -208,13 +208,13 @@ static int test_mmap_thread(void) char path[PATH_MAX]; int id, err, pid, go_pipe[2]; union perf_event *event; - char bf; int count = 0; snprintf(path, PATH_MAX, "%s/kernel/debug/tracing/events/syscalls/sys_enter_prctl/id", sysfs__mountpoint()); if (filename__read_int(path, &id)) { + tests_failed++; fprintf(stderr, "error: failed to get tracepoint id: %s\n", path); return -1; } @@ -229,6 +229,7 @@ static int test_mmap_thread(void) pid = fork(); if (!pid) { int i; + char bf; read(go_pipe[0], &bf, 1); @@ -266,7 +267,7 @@ static int test_mmap_thread(void) perf_evlist__enable(evlist); /* kick the child and wait for it to finish */ - write(go_pipe[1], &bf, 1); + write(go_pipe[1], "A", 1); waitpid(pid, NULL, 0); /* @@ -409,5 +410,5 @@ int main(int argc, char **argv) test_mmap_cpus(); __T_END; - return 0; + return tests_failed == 0 ? 
0 : -1; } diff --git a/tools/lib/perf/tests/test-evsel.c b/tools/lib/perf/tests/test-evsel.c index 135722ac965b..0ad82d7a2a51 100644 --- a/tools/lib/perf/tests/test-evsel.c +++ b/tools/lib/perf/tests/test-evsel.c @@ -131,5 +131,5 @@ int main(int argc, char **argv) test_stat_thread_enable(); __T_END; - return 0; + return tests_failed == 0 ? 0 : -1; } diff --git a/tools/lib/perf/tests/test-threadmap.c b/tools/lib/perf/tests/test-threadmap.c index 7dc4d6fbedde..384471441b48 100644 --- a/tools/lib/perf/tests/test-threadmap.c +++ b/tools/lib/perf/tests/test-threadmap.c @@ -27,5 +27,5 @@ int main(int argc, char **argv) perf_thread_map__put(threads); __T_END; - return 0; + return tests_failed == 0 ? 0 : -1; } diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 5f8d3eed78a1..4bd30315eb62 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -2928,14 +2928,10 @@ int check(struct objtool_file *file) warnings += ret; out: - if (ret < 0) { - /* - * Fatal error. The binary is corrupt or otherwise broken in - * some way, or objtool itself is broken. Fail the kernel - * build. - */ - return ret; - } - + /* + * For now, don't fail the kernel build on fatal warnings. These + * errors are still fairly common due to the growing matrix of + * supported toolchains and their recent pace of change. + */ return 0; } diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index be89c741ba9a..d8421e1d06be 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -380,8 +380,11 @@ static int read_symbols(struct elf *elf) symtab = find_section_by_name(elf, ".symtab"); if (!symtab) { - WARN("missing symbol table"); - return -1; + /* + * A missing symbol table is actually possible if it's an empty + * .o file. This can happen for thunk_64.o. + */ + return 0; } symtab_shndx = find_section_by_name(elf, ".symtab_shndx"); @@ -448,6 +451,13 @@ static int read_symbols(struct elf *elf) list_add(&sym->list, entry); elf_hash_add(elf->symbol_hash, &sym->hash, sym->idx); elf_hash_add(elf->symbol_name_hash, &sym->name_hash, str_hash(sym->name)); + + /* + * Don't store empty STT_NOTYPE symbols in the rbtree. They + * can exist within a function, confusing the sorting. 
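The libperf test changes above make each test binary report failures through its exit status instead of unconditionally returning 0, so shell callers and CI can actually see a red run. The pattern, reduced to a self-contained sketch (`tests_failed` here is a local stand-in for the counter the test files increment on failure, per the diff):

    #include <stdio.h>

    static int tests_failed;

    static void check(int cond, const char *name)
    {
        if (!cond) {
            tests_failed++;
            fprintf(stderr, "FAILED: %s\n", name);
        }
    }

    int main(void)
    {
        check(1 + 1 == 2, "arithmetic");
        check(sizeof(int) >= 2, "int width");

        /* Non-zero exit status lets callers detect failures. */
        return tests_failed == 0 ? 0 : -1;
    }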
+ */ + if (!sym->len) + rb_erase(&sym->node, &sym->sec->symbol_tree); } if (stats) diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c index edacfa98d073..42dad4a0f8cf 100644 --- a/tools/perf/builtin-script.c +++ b/tools/perf/builtin-script.c @@ -186,6 +186,7 @@ struct output_option { enum { OUTPUT_TYPE_SYNTH = PERF_TYPE_MAX, + OUTPUT_TYPE_OTHER, OUTPUT_TYPE_MAX }; @@ -283,6 +284,18 @@ static struct { .invalid_fields = PERF_OUTPUT_TRACE | PERF_OUTPUT_BPF_OUTPUT, }, + + [OUTPUT_TYPE_OTHER] = { + .user_set = false, + + .fields = PERF_OUTPUT_COMM | PERF_OUTPUT_TID | + PERF_OUTPUT_CPU | PERF_OUTPUT_TIME | + PERF_OUTPUT_EVNAME | PERF_OUTPUT_IP | + PERF_OUTPUT_SYM | PERF_OUTPUT_SYMOFFSET | + PERF_OUTPUT_DSO | PERF_OUTPUT_PERIOD, + + .invalid_fields = PERF_OUTPUT_TRACE | PERF_OUTPUT_BPF_OUTPUT, + }, }; struct evsel_script { @@ -343,8 +356,11 @@ static inline int output_type(unsigned int type) case PERF_TYPE_SYNTH: return OUTPUT_TYPE_SYNTH; default: - return type; + if (type < PERF_TYPE_MAX) + return type; } + + return OUTPUT_TYPE_OTHER; } static inline unsigned int attr_type(unsigned int type) diff --git a/tools/perf/examples/bpf/5sec.c b/tools/perf/examples/bpf/5sec.c index 65c4ff6892d9..e6b6181c6dc6 100644 --- a/tools/perf/examples/bpf/5sec.c +++ b/tools/perf/examples/bpf/5sec.c @@ -39,7 +39,7 @@ Copyright (C) 2018 Red Hat, Inc., Arnaldo Carvalho de Melo <[email protected]> */ -#include <bpf/bpf.h> +#include <bpf.h> #define NSEC_PER_SEC 1000000000L diff --git a/tools/perf/tests/shell/stat+shadow_stat.sh b/tools/perf/tests/shell/stat+shadow_stat.sh index 249dfe48cf6a..ebebd3596cf9 100755 --- a/tools/perf/tests/shell/stat+shadow_stat.sh +++ b/tools/perf/tests/shell/stat+shadow_stat.sh @@ -9,31 +9,29 @@ perf stat -a true > /dev/null 2>&1 || exit 2 test_global_aggr() { - local cyc - perf stat -a --no-big-num -e cycles,instructions sleep 1 2>&1 | \ grep -e cycles -e instructions | \ while read num evt hash ipc rest do # skip not counted events - if [[ $num == "<not" ]]; then + if [ "$num" = "<not" ]; then continue fi # save cycles count - if [[ $evt == "cycles" ]]; then + if [ "$evt" = "cycles" ]; then cyc=$num continue fi # skip if no cycles - if [[ -z $cyc ]]; then + if [ -z "$cyc" ]; then continue fi # use printf for rounding and a leading zero - local res=`printf "%.2f" $(echo "scale=6; $num / $cyc" | bc -q)` - if [[ $ipc != $res ]]; then + res=`printf "%.2f" $(echo "scale=6; $num / $cyc" | bc -q)` + if [ "$ipc" != "$res" ]; then echo "IPC is different: $res != $ipc ($num / $cyc)" exit 1 fi @@ -42,32 +40,32 @@ test_global_aggr() test_no_aggr() { - declare -A results - perf stat -a -A --no-big-num -e cycles,instructions sleep 1 2>&1 | \ grep ^CPU | \ while read cpu num evt hash ipc rest do # skip not counted events - if [[ $num == "<not" ]]; then + if [ "$num" = "<not" ]; then continue fi # save cycles count - if [[ $evt == "cycles" ]]; then - results[$cpu]=$num + if [ "$evt" = "cycles" ]; then + results="$results $cpu:$num" continue fi + cyc=${results##* $cpu:} + cyc=${cyc%% *} + # skip if no cycles - local cyc=${results[$cpu]} - if [[ -z $cyc ]]; then + if [ -z "$cyc" ]; then continue fi # use printf for rounding and a leading zero - local res=`printf "%.2f" $(echo "scale=6; $num / $cyc" | bc -q)` - if [[ $ipc != $res ]]; then + res=`printf "%.2f" $(echo "scale=6; $num / $cyc" | bc -q)` + if [ "$ipc" != "$res" ]; then echo "IPC is different for $cpu: $res != $ipc ($num / $cyc)" exit 1 fi diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c index 
062383e225a3..c4ed3dc2c8f4 100644 --- a/tools/perf/util/header.c +++ b/tools/perf/util/header.c @@ -3323,6 +3323,14 @@ int perf_session__write_header(struct perf_session *session, attr_offset = lseek(ff.fd, 0, SEEK_CUR); evlist__for_each_entry(evlist, evsel) { + if (evsel->core.attr.size < sizeof(evsel->core.attr)) { + /* + * We are likely in "perf inject" and have read + * from an older file. Update attr size so that + * reader gets the right offset to the ids. + */ + evsel->core.attr.size = sizeof(evsel->core.attr); + } f_attr = (struct perf_file_attr){ .attr = evsel->core.attr, .ids = { diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c index f841f3503cae..1e9d3f982b47 100644 --- a/tools/perf/util/machine.c +++ b/tools/perf/util/machine.c @@ -2980,7 +2980,7 @@ int machines__for_each_thread(struct machines *machines, pid_t machine__get_current_tid(struct machine *machine, int cpu) { - int nr_cpus = min(machine->env->nr_cpus_online, MAX_NR_CPUS); + int nr_cpus = min(machine->env->nr_cpus_avail, MAX_NR_CPUS); if (cpu < 0 || cpu >= nr_cpus || !machine->current_tid) return -1; @@ -2992,7 +2992,7 @@ int machine__set_current_tid(struct machine *machine, int cpu, pid_t pid, pid_t tid) { struct thread *thread; - int nr_cpus = min(machine->env->nr_cpus_online, MAX_NR_CPUS); + int nr_cpus = min(machine->env->nr_cpus_avail, MAX_NR_CPUS); if (cpu < 0) return -EINVAL; diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c index ee94d3e8dd65..e6d3452031e5 100644 --- a/tools/perf/util/metricgroup.c +++ b/tools/perf/util/metricgroup.c @@ -162,6 +162,14 @@ static bool contains_event(struct evsel **metric_events, int num_events, return false; } +static bool evsel_same_pmu(struct evsel *ev1, struct evsel *ev2) +{ + if (!ev1->pmu_name || !ev2->pmu_name) + return false; + + return !strcmp(ev1->pmu_name, ev2->pmu_name); +} + /** * Find a group of events in perf_evlist that correspond to those from a parsed * metric expression. 
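On the `evsel_same_pmu()` helper above: `pmu_name` can be NULL (presumably when an event was not parsed from a named PMU), so the previous open-coded `strcmp()` on leader PMU names could dereference a null pointer. The guard in isolation, as a standalone sketch:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Treat two PMU names as equal only when both are known and identical;
     * a NULL name never matches anything, mirroring evsel_same_pmu(). */
    static bool same_pmu(const char *a, const char *b)
    {
        if (!a || !b)
            return false;
        return strcmp(a, b) == 0;
    }

    int main(void)
    {
        printf("%d %d %d\n",
               same_pmu("cpu", "cpu"),    /* 1 */
               same_pmu("cpu", "uncore"), /* 0 */
               same_pmu(NULL, "cpu"));    /* 0: no crash */
        return 0;
    }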
Note, as find_evsel_group is called in the same order as @@ -280,8 +288,7 @@ static struct evsel *find_evsel_group(struct evlist *perf_evlist, */ if (!has_constraint && ev->leader != metric_events[i]->leader && - !strcmp(ev->leader->pmu_name, - metric_events[i]->leader->pmu_name)) + evsel_same_pmu(ev->leader, metric_events[i]->leader)) break; if (!strcmp(metric_events[i]->name, ev->name)) { set_bit(ev->idx, evlist_used); @@ -766,7 +773,6 @@ int __weak arch_get_runtimeparam(struct pmu_event *pe __maybe_unused) struct metricgroup_add_iter_data { struct list_head *metric_list; const char *metric; - struct metric **m; struct expr_ids *ids; int *ret; bool *has_match; @@ -1058,12 +1064,13 @@ static int metricgroup__add_metric_sys_event_iter(struct pmu_event *pe, void *data) { struct metricgroup_add_iter_data *d = data; + struct metric *m = NULL; int ret; if (!match_pe_metric(pe, d->metric)) return 0; - ret = add_metric(d->metric_list, pe, d->metric_no_group, d->m, NULL, d->ids); + ret = add_metric(d->metric_list, pe, d->metric_no_group, &m, NULL, d->ids); if (ret) return ret; @@ -1114,7 +1121,6 @@ static int metricgroup__add_metric(const char *metric, bool metric_no_group, .metric_list = &list, .metric = metric, .metric_no_group = metric_no_group, - .m = &m, .ids = &ids, .has_match = &has_match, .ret = &ret, diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c index 50ff9795a4f1..25adbcce0281 100644 --- a/tools/perf/util/session.c +++ b/tools/perf/util/session.c @@ -2404,7 +2404,7 @@ int perf_session__cpu_bitmap(struct perf_session *session, { int i, err = -1; struct perf_cpu_map *map; - int nr_cpus = min(session->header.env.nr_cpus_online, MAX_NR_CPUS); + int nr_cpus = min(session->header.env.nr_cpus_avail, MAX_NR_CPUS); for (i = 0; i < PERF_TYPE_MAX; ++i) { struct evsel *evsel; diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c index 901265127e36..12eafd12a693 100644 --- a/tools/perf/util/stat-shadow.c +++ b/tools/perf/util/stat-shadow.c @@ -8,6 +8,7 @@ #include "evlist.h" #include "expr.h" #include "metricgroup.h" +#include "cgroup.h" #include <linux/zalloc.h> /* @@ -28,6 +29,7 @@ struct saved_value { enum stat_type type; int ctx; int cpu; + struct cgroup *cgrp; struct runtime_stat *stat; struct stats stats; u64 metric_total; @@ -57,6 +59,9 @@ static int saved_value_cmp(struct rb_node *rb_node, const void *entry) if (a->ctx != b->ctx) return a->ctx - b->ctx; + if (a->cgrp != b->cgrp) + return (char *)a->cgrp < (char *)b->cgrp ? 
-1 : +1; + if (a->evsel == NULL && b->evsel == NULL) { if (a->stat == b->stat) return 0; @@ -100,7 +105,8 @@ static struct saved_value *saved_value_lookup(struct evsel *evsel, bool create, enum stat_type type, int ctx, - struct runtime_stat *st) + struct runtime_stat *st, + struct cgroup *cgrp) { struct rblist *rblist; struct rb_node *nd; @@ -110,10 +116,15 @@ static struct saved_value *saved_value_lookup(struct evsel *evsel, .type = type, .ctx = ctx, .stat = st, + .cgrp = cgrp, }; rblist = &st->value_list; + /* don't use context info for clock events */ + if (type == STAT_NSECS) + dm.ctx = 0; + nd = rblist__find(rblist, &dm); if (nd) return container_of(nd, struct saved_value, rb_node); @@ -191,12 +202,18 @@ void perf_stat__reset_shadow_per_stat(struct runtime_stat *st) reset_stat(st); } +struct runtime_stat_data { + int ctx; + struct cgroup *cgrp; +}; + static void update_runtime_stat(struct runtime_stat *st, enum stat_type type, - int ctx, int cpu, u64 count) + int cpu, u64 count, + struct runtime_stat_data *rsd) { - struct saved_value *v = saved_value_lookup(NULL, cpu, true, - type, ctx, st); + struct saved_value *v = saved_value_lookup(NULL, cpu, true, type, + rsd->ctx, st, rsd->cgrp); if (v) update_stats(&v->stats, count); @@ -210,82 +227,86 @@ static void update_runtime_stat(struct runtime_stat *st, void perf_stat__update_shadow_stats(struct evsel *counter, u64 count, int cpu, struct runtime_stat *st) { - int ctx = evsel_context(counter); u64 count_ns = count; struct saved_value *v; + struct runtime_stat_data rsd = { + .ctx = evsel_context(counter), + .cgrp = counter->cgrp, + }; count *= counter->scale; if (evsel__is_clock(counter)) - update_runtime_stat(st, STAT_NSECS, 0, cpu, count_ns); + update_runtime_stat(st, STAT_NSECS, cpu, count_ns, &rsd); else if (evsel__match(counter, HARDWARE, HW_CPU_CYCLES)) - update_runtime_stat(st, STAT_CYCLES, ctx, cpu, count); + update_runtime_stat(st, STAT_CYCLES, cpu, count, &rsd); else if (perf_stat_evsel__is(counter, CYCLES_IN_TX)) - update_runtime_stat(st, STAT_CYCLES_IN_TX, ctx, cpu, count); + update_runtime_stat(st, STAT_CYCLES_IN_TX, cpu, count, &rsd); else if (perf_stat_evsel__is(counter, TRANSACTION_START)) - update_runtime_stat(st, STAT_TRANSACTION, ctx, cpu, count); + update_runtime_stat(st, STAT_TRANSACTION, cpu, count, &rsd); else if (perf_stat_evsel__is(counter, ELISION_START)) - update_runtime_stat(st, STAT_ELISION, ctx, cpu, count); + update_runtime_stat(st, STAT_ELISION, cpu, count, &rsd); else if (perf_stat_evsel__is(counter, TOPDOWN_TOTAL_SLOTS)) update_runtime_stat(st, STAT_TOPDOWN_TOTAL_SLOTS, - ctx, cpu, count); + cpu, count, &rsd); else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_ISSUED)) update_runtime_stat(st, STAT_TOPDOWN_SLOTS_ISSUED, - ctx, cpu, count); + cpu, count, &rsd); else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_RETIRED)) update_runtime_stat(st, STAT_TOPDOWN_SLOTS_RETIRED, - ctx, cpu, count); + cpu, count, &rsd); else if (perf_stat_evsel__is(counter, TOPDOWN_FETCH_BUBBLES)) update_runtime_stat(st, STAT_TOPDOWN_FETCH_BUBBLES, - ctx, cpu, count); + cpu, count, &rsd); else if (perf_stat_evsel__is(counter, TOPDOWN_RECOVERY_BUBBLES)) update_runtime_stat(st, STAT_TOPDOWN_RECOVERY_BUBBLES, - ctx, cpu, count); + cpu, count, &rsd); else if (perf_stat_evsel__is(counter, TOPDOWN_RETIRING)) update_runtime_stat(st, STAT_TOPDOWN_RETIRING, - ctx, cpu, count); + cpu, count, &rsd); else if (perf_stat_evsel__is(counter, TOPDOWN_BAD_SPEC)) update_runtime_stat(st, STAT_TOPDOWN_BAD_SPEC, - ctx, cpu, count); + cpu, count, 
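The stat-shadow refactor above bundles the event context and cgroup into `struct runtime_stat_data`, so every helper takes one pointer instead of a growing list of scalar key fields; adding the cgroup key meant touching one struct rather than every call site. The shape of that refactor in isolation (names mirror the diff, but the lookup body is a stub):

    #include <stdio.h>

    struct cgroup; /* opaque, as in the perf code */

    /* Key fields that identify a saved value. */
    struct runtime_stat_data {
        int ctx;
        struct cgroup *cgrp;
    };

    static void update_runtime_stat(int type, int cpu, unsigned long long count,
                                    const struct runtime_stat_data *rsd)
    {
        /* stub: the real code looks up a saved_value keyed on
         * (type, cpu, rsd->ctx, rsd->cgrp) and updates its stats */
        printf("type=%d cpu=%d ctx=%d count=%llu\n", type, cpu, rsd->ctx, count);
    }

    int main(void)
    {
        struct runtime_stat_data rsd = { .ctx = 3, .cgrp = NULL };

        update_runtime_stat(0 /* e.g. STAT_CYCLES */, 0, 123456ULL, &rsd);
        return 0;
    }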
&rsd); else if (perf_stat_evsel__is(counter, TOPDOWN_FE_BOUND)) update_runtime_stat(st, STAT_TOPDOWN_FE_BOUND, - ctx, cpu, count); + cpu, count, &rsd); else if (perf_stat_evsel__is(counter, TOPDOWN_BE_BOUND)) update_runtime_stat(st, STAT_TOPDOWN_BE_BOUND, - ctx, cpu, count); + cpu, count, &rsd); else if (evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_FRONTEND)) update_runtime_stat(st, STAT_STALLED_CYCLES_FRONT, - ctx, cpu, count); + cpu, count, &rsd); else if (evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_BACKEND)) update_runtime_stat(st, STAT_STALLED_CYCLES_BACK, - ctx, cpu, count); + cpu, count, &rsd); else if (evsel__match(counter, HARDWARE, HW_BRANCH_INSTRUCTIONS)) - update_runtime_stat(st, STAT_BRANCHES, ctx, cpu, count); + update_runtime_stat(st, STAT_BRANCHES, cpu, count, &rsd); else if (evsel__match(counter, HARDWARE, HW_CACHE_REFERENCES)) - update_runtime_stat(st, STAT_CACHEREFS, ctx, cpu, count); + update_runtime_stat(st, STAT_CACHEREFS, cpu, count, &rsd); else if (evsel__match(counter, HW_CACHE, HW_CACHE_L1D)) - update_runtime_stat(st, STAT_L1_DCACHE, ctx, cpu, count); + update_runtime_stat(st, STAT_L1_DCACHE, cpu, count, &rsd); else if (evsel__match(counter, HW_CACHE, HW_CACHE_L1I)) - update_runtime_stat(st, STAT_L1_ICACHE, ctx, cpu, count); + update_runtime_stat(st, STAT_L1_ICACHE, cpu, count, &rsd); else if (evsel__match(counter, HW_CACHE, HW_CACHE_LL)) - update_runtime_stat(st, STAT_LL_CACHE, ctx, cpu, count); + update_runtime_stat(st, STAT_LL_CACHE, cpu, count, &rsd); else if (evsel__match(counter, HW_CACHE, HW_CACHE_DTLB)) - update_runtime_stat(st, STAT_DTLB_CACHE, ctx, cpu, count); + update_runtime_stat(st, STAT_DTLB_CACHE, cpu, count, &rsd); else if (evsel__match(counter, HW_CACHE, HW_CACHE_ITLB)) - update_runtime_stat(st, STAT_ITLB_CACHE, ctx, cpu, count); + update_runtime_stat(st, STAT_ITLB_CACHE, cpu, count, &rsd); else if (perf_stat_evsel__is(counter, SMI_NUM)) - update_runtime_stat(st, STAT_SMI_NUM, ctx, cpu, count); + update_runtime_stat(st, STAT_SMI_NUM, cpu, count, &rsd); else if (perf_stat_evsel__is(counter, APERF)) - update_runtime_stat(st, STAT_APERF, ctx, cpu, count); + update_runtime_stat(st, STAT_APERF, cpu, count, &rsd); if (counter->collect_stat) { - v = saved_value_lookup(counter, cpu, true, STAT_NONE, 0, st); + v = saved_value_lookup(counter, cpu, true, STAT_NONE, 0, st, + rsd.cgrp); update_stats(&v->stats, count); if (counter->metric_leader) v->metric_total += count; } else if (counter->metric_leader) { v = saved_value_lookup(counter->metric_leader, - cpu, true, STAT_NONE, 0, st); + cpu, true, STAT_NONE, 0, st, rsd.cgrp); v->metric_total += count; v->metric_other++; } @@ -422,11 +443,12 @@ void perf_stat__collect_metric_expr(struct evlist *evsel_list) } static double runtime_stat_avg(struct runtime_stat *st, - enum stat_type type, int ctx, int cpu) + enum stat_type type, int cpu, + struct runtime_stat_data *rsd) { struct saved_value *v; - v = saved_value_lookup(NULL, cpu, false, type, ctx, st); + v = saved_value_lookup(NULL, cpu, false, type, rsd->ctx, st, rsd->cgrp); if (!v) return 0.0; @@ -434,11 +456,12 @@ static double runtime_stat_avg(struct runtime_stat *st, } static double runtime_stat_n(struct runtime_stat *st, - enum stat_type type, int ctx, int cpu) + enum stat_type type, int cpu, + struct runtime_stat_data *rsd) { struct saved_value *v; - v = saved_value_lookup(NULL, cpu, false, type, ctx, st); + v = saved_value_lookup(NULL, cpu, false, type, rsd->ctx, st, rsd->cgrp); if (!v) return 0.0; @@ -446,16 +469,15 @@ static double 
runtime_stat_n(struct runtime_stat *st, } static void print_stalled_cycles_frontend(struct perf_stat_config *config, - int cpu, - struct evsel *evsel, double avg, + int cpu, double avg, struct perf_stat_output_ctx *out, - struct runtime_stat *st) + struct runtime_stat *st, + struct runtime_stat_data *rsd) { double total, ratio = 0.0; const char *color; - int ctx = evsel_context(evsel); - total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu); + total = runtime_stat_avg(st, STAT_CYCLES, cpu, rsd); if (total) ratio = avg / total * 100.0; @@ -470,16 +492,15 @@ static void print_stalled_cycles_frontend(struct perf_stat_config *config, } static void print_stalled_cycles_backend(struct perf_stat_config *config, - int cpu, - struct evsel *evsel, double avg, + int cpu, double avg, struct perf_stat_output_ctx *out, - struct runtime_stat *st) + struct runtime_stat *st, + struct runtime_stat_data *rsd) { double total, ratio = 0.0; const char *color; - int ctx = evsel_context(evsel); - total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu); + total = runtime_stat_avg(st, STAT_CYCLES, cpu, rsd); if (total) ratio = avg / total * 100.0; @@ -490,17 +511,15 @@ static void print_stalled_cycles_backend(struct perf_stat_config *config, } static void print_branch_misses(struct perf_stat_config *config, - int cpu, - struct evsel *evsel, - double avg, + int cpu, double avg, struct perf_stat_output_ctx *out, - struct runtime_stat *st) + struct runtime_stat *st, + struct runtime_stat_data *rsd) { double total, ratio = 0.0; const char *color; - int ctx = evsel_context(evsel); - total = runtime_stat_avg(st, STAT_BRANCHES, ctx, cpu); + total = runtime_stat_avg(st, STAT_BRANCHES, cpu, rsd); if (total) ratio = avg / total * 100.0; @@ -511,18 +530,15 @@ static void print_branch_misses(struct perf_stat_config *config, } static void print_l1_dcache_misses(struct perf_stat_config *config, - int cpu, - struct evsel *evsel, - double avg, + int cpu, double avg, struct perf_stat_output_ctx *out, - struct runtime_stat *st) - + struct runtime_stat *st, + struct runtime_stat_data *rsd) { double total, ratio = 0.0; const char *color; - int ctx = evsel_context(evsel); - total = runtime_stat_avg(st, STAT_L1_DCACHE, ctx, cpu); + total = runtime_stat_avg(st, STAT_L1_DCACHE, cpu, rsd); if (total) ratio = avg / total * 100.0; @@ -533,18 +549,15 @@ static void print_l1_dcache_misses(struct perf_stat_config *config, } static void print_l1_icache_misses(struct perf_stat_config *config, - int cpu, - struct evsel *evsel, - double avg, + int cpu, double avg, struct perf_stat_output_ctx *out, - struct runtime_stat *st) - + struct runtime_stat *st, + struct runtime_stat_data *rsd) { double total, ratio = 0.0; const char *color; - int ctx = evsel_context(evsel); - total = runtime_stat_avg(st, STAT_L1_ICACHE, ctx, cpu); + total = runtime_stat_avg(st, STAT_L1_ICACHE, cpu, rsd); if (total) ratio = avg / total * 100.0; @@ -554,17 +567,15 @@ static void print_l1_icache_misses(struct perf_stat_config *config, } static void print_dtlb_cache_misses(struct perf_stat_config *config, - int cpu, - struct evsel *evsel, - double avg, + int cpu, double avg, struct perf_stat_output_ctx *out, - struct runtime_stat *st) + struct runtime_stat *st, + struct runtime_stat_data *rsd) { double total, ratio = 0.0; const char *color; - int ctx = evsel_context(evsel); - total = runtime_stat_avg(st, STAT_DTLB_CACHE, ctx, cpu); + total = runtime_stat_avg(st, STAT_DTLB_CACHE, cpu, rsd); if (total) ratio = avg / total * 100.0; @@ -574,17 +585,15 @@ static void 
print_dtlb_cache_misses(struct perf_stat_config *config, } static void print_itlb_cache_misses(struct perf_stat_config *config, - int cpu, - struct evsel *evsel, - double avg, + int cpu, double avg, struct perf_stat_output_ctx *out, - struct runtime_stat *st) + struct runtime_stat *st, + struct runtime_stat_data *rsd) { double total, ratio = 0.0; const char *color; - int ctx = evsel_context(evsel); - total = runtime_stat_avg(st, STAT_ITLB_CACHE, ctx, cpu); + total = runtime_stat_avg(st, STAT_ITLB_CACHE, cpu, rsd); if (total) ratio = avg / total * 100.0; @@ -594,17 +603,15 @@ static void print_itlb_cache_misses(struct perf_stat_config *config, } static void print_ll_cache_misses(struct perf_stat_config *config, - int cpu, - struct evsel *evsel, - double avg, + int cpu, double avg, struct perf_stat_output_ctx *out, - struct runtime_stat *st) + struct runtime_stat *st, + struct runtime_stat_data *rsd) { double total, ratio = 0.0; const char *color; - int ctx = evsel_context(evsel); - total = runtime_stat_avg(st, STAT_LL_CACHE, ctx, cpu); + total = runtime_stat_avg(st, STAT_LL_CACHE, cpu, rsd); if (total) ratio = avg / total * 100.0; @@ -662,56 +669,61 @@ static double sanitize_val(double x) return x; } -static double td_total_slots(int ctx, int cpu, struct runtime_stat *st) +static double td_total_slots(int cpu, struct runtime_stat *st, + struct runtime_stat_data *rsd) { - return runtime_stat_avg(st, STAT_TOPDOWN_TOTAL_SLOTS, ctx, cpu); + return runtime_stat_avg(st, STAT_TOPDOWN_TOTAL_SLOTS, cpu, rsd); } -static double td_bad_spec(int ctx, int cpu, struct runtime_stat *st) +static double td_bad_spec(int cpu, struct runtime_stat *st, + struct runtime_stat_data *rsd) { double bad_spec = 0; double total_slots; double total; - total = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_ISSUED, ctx, cpu) - - runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED, ctx, cpu) + - runtime_stat_avg(st, STAT_TOPDOWN_RECOVERY_BUBBLES, ctx, cpu); + total = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_ISSUED, cpu, rsd) - + runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED, cpu, rsd) + + runtime_stat_avg(st, STAT_TOPDOWN_RECOVERY_BUBBLES, cpu, rsd); - total_slots = td_total_slots(ctx, cpu, st); + total_slots = td_total_slots(cpu, st, rsd); if (total_slots) bad_spec = total / total_slots; return sanitize_val(bad_spec); } -static double td_retiring(int ctx, int cpu, struct runtime_stat *st) +static double td_retiring(int cpu, struct runtime_stat *st, + struct runtime_stat_data *rsd) { double retiring = 0; - double total_slots = td_total_slots(ctx, cpu, st); + double total_slots = td_total_slots(cpu, st, rsd); double ret_slots = runtime_stat_avg(st, STAT_TOPDOWN_SLOTS_RETIRED, - ctx, cpu); + cpu, rsd); if (total_slots) retiring = ret_slots / total_slots; return retiring; } -static double td_fe_bound(int ctx, int cpu, struct runtime_stat *st) +static double td_fe_bound(int cpu, struct runtime_stat *st, + struct runtime_stat_data *rsd) { double fe_bound = 0; - double total_slots = td_total_slots(ctx, cpu, st); + double total_slots = td_total_slots(cpu, st, rsd); double fetch_bub = runtime_stat_avg(st, STAT_TOPDOWN_FETCH_BUBBLES, - ctx, cpu); + cpu, rsd); if (total_slots) fe_bound = fetch_bub / total_slots; return fe_bound; } -static double td_be_bound(int ctx, int cpu, struct runtime_stat *st) +static double td_be_bound(int cpu, struct runtime_stat *st, + struct runtime_stat_data *rsd) { - double sum = (td_fe_bound(ctx, cpu, st) + - td_bad_spec(ctx, cpu, st) + - td_retiring(ctx, cpu, st)); + double sum = (td_fe_bound(cpu, st, rsd) + + 
td_bad_spec(cpu, st, rsd) + + td_retiring(cpu, st, rsd)); if (sum == 0) return 0; return sanitize_val(1.0 - sum); @@ -722,15 +734,15 @@ static double td_be_bound(int ctx, int cpu, struct runtime_stat *st) * the ratios we need to recreate the sum. */ -static double td_metric_ratio(int ctx, int cpu, - enum stat_type type, - struct runtime_stat *stat) +static double td_metric_ratio(int cpu, enum stat_type type, + struct runtime_stat *stat, + struct runtime_stat_data *rsd) { - double sum = runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, ctx, cpu) + - runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, ctx, cpu) + - runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, ctx, cpu) + - runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, ctx, cpu); - double d = runtime_stat_avg(stat, type, ctx, cpu); + double sum = runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, cpu, rsd) + + runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, cpu, rsd) + + runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, cpu, rsd) + + runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, cpu, rsd); + double d = runtime_stat_avg(stat, type, cpu, rsd); if (sum) return d / sum; @@ -742,34 +754,33 @@ static double td_metric_ratio(int ctx, int cpu, * We allow two missing. */ -static bool full_td(int ctx, int cpu, - struct runtime_stat *stat) +static bool full_td(int cpu, struct runtime_stat *stat, + struct runtime_stat_data *rsd) { int c = 0; - if (runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, ctx, cpu) > 0) + if (runtime_stat_avg(stat, STAT_TOPDOWN_RETIRING, cpu, rsd) > 0) c++; - if (runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, ctx, cpu) > 0) + if (runtime_stat_avg(stat, STAT_TOPDOWN_BE_BOUND, cpu, rsd) > 0) c++; - if (runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, ctx, cpu) > 0) + if (runtime_stat_avg(stat, STAT_TOPDOWN_FE_BOUND, cpu, rsd) > 0) c++; - if (runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, ctx, cpu) > 0) + if (runtime_stat_avg(stat, STAT_TOPDOWN_BAD_SPEC, cpu, rsd) > 0) c++; return c >= 2; } -static void print_smi_cost(struct perf_stat_config *config, - int cpu, struct evsel *evsel, +static void print_smi_cost(struct perf_stat_config *config, int cpu, struct perf_stat_output_ctx *out, - struct runtime_stat *st) + struct runtime_stat *st, + struct runtime_stat_data *rsd) { double smi_num, aperf, cycles, cost = 0.0; - int ctx = evsel_context(evsel); const char *color = NULL; - smi_num = runtime_stat_avg(st, STAT_SMI_NUM, ctx, cpu); - aperf = runtime_stat_avg(st, STAT_APERF, ctx, cpu); - cycles = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu); + smi_num = runtime_stat_avg(st, STAT_SMI_NUM, cpu, rsd); + aperf = runtime_stat_avg(st, STAT_APERF, cpu, rsd); + cycles = runtime_stat_avg(st, STAT_CYCLES, cpu, rsd); if ((cycles == 0) || (aperf == 0)) return; @@ -804,7 +815,8 @@ static int prepare_metric(struct evsel **metric_events, scale = 1e-9; } else { v = saved_value_lookup(metric_events[i], cpu, false, - STAT_NONE, 0, st); + STAT_NONE, 0, st, + metric_events[i]->cgrp); if (!v) break; stats = &v->stats; @@ -930,12 +942,15 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, print_metric_t print_metric = out->print_metric; double total, ratio = 0.0, total2; const char *color = NULL; - int ctx = evsel_context(evsel); + struct runtime_stat_data rsd = { + .ctx = evsel_context(evsel), + .cgrp = evsel->cgrp, + }; struct metric_event *me; int num = 1; if (evsel__match(evsel, HARDWARE, HW_INSTRUCTIONS)) { - total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu); + total = runtime_stat_avg(st, STAT_CYCLES, cpu, &rsd); if (total) { ratio = avg / total; @@ -945,12 
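On the topdown helpers above: `td_metric_ratio()` renormalizes one of the four level-1 components (retiring, bad speculation, frontend bound, backend bound) by their sum, so the reported percentages always total 100% even when the raw per-slot fractions drift. The arithmetic as a standalone sketch:

    #include <stdio.h>

    /* Renormalize one topdown component against the sum of all four,
     * mirroring td_metric_ratio() in stat-shadow.c. */
    static double td_ratio(double component, double retiring, double fe_bound,
                           double be_bound, double bad_spec)
    {
        double sum = retiring + fe_bound + be_bound + bad_spec;

        return sum ? component / sum : 0.0;
    }

    int main(void)
    {
        /* Raw fractions that drift slightly above 1.0 in total. */
        double retiring = 0.45, fe = 0.25, be = 0.30, bad = 0.05;

        printf("retiring: %.1f%%\n", td_ratio(retiring, retiring, fe, be, bad) * 100.0);
        printf("backend bound: %.1f%%\n", td_ratio(be, retiring, fe, be, bad) * 100.0);
        return 0;
    }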
+960,11 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, print_metric(config, ctxp, NULL, NULL, "insn per cycle", 0); } - total = runtime_stat_avg(st, STAT_STALLED_CYCLES_FRONT, - ctx, cpu); + total = runtime_stat_avg(st, STAT_STALLED_CYCLES_FRONT, cpu, &rsd); total = max(total, runtime_stat_avg(st, STAT_STALLED_CYCLES_BACK, - ctx, cpu)); + cpu, &rsd)); if (total && avg) { out->new_line(config, ctxp); @@ -960,8 +974,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, ratio); } } else if (evsel__match(evsel, HARDWARE, HW_BRANCH_MISSES)) { - if (runtime_stat_n(st, STAT_BRANCHES, ctx, cpu) != 0) - print_branch_misses(config, cpu, evsel, avg, out, st); + if (runtime_stat_n(st, STAT_BRANCHES, cpu, &rsd) != 0) + print_branch_misses(config, cpu, avg, out, st, &rsd); else print_metric(config, ctxp, NULL, NULL, "of all branches", 0); } else if ( @@ -970,8 +984,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, ((PERF_COUNT_HW_CACHE_OP_READ) << 8) | ((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) { - if (runtime_stat_n(st, STAT_L1_DCACHE, ctx, cpu) != 0) - print_l1_dcache_misses(config, cpu, evsel, avg, out, st); + if (runtime_stat_n(st, STAT_L1_DCACHE, cpu, &rsd) != 0) + print_l1_dcache_misses(config, cpu, avg, out, st, &rsd); else print_metric(config, ctxp, NULL, NULL, "of all L1-dcache accesses", 0); } else if ( @@ -980,8 +994,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, ((PERF_COUNT_HW_CACHE_OP_READ) << 8) | ((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) { - if (runtime_stat_n(st, STAT_L1_ICACHE, ctx, cpu) != 0) - print_l1_icache_misses(config, cpu, evsel, avg, out, st); + if (runtime_stat_n(st, STAT_L1_ICACHE, cpu, &rsd) != 0) + print_l1_icache_misses(config, cpu, avg, out, st, &rsd); else print_metric(config, ctxp, NULL, NULL, "of all L1-icache accesses", 0); } else if ( @@ -990,8 +1004,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, ((PERF_COUNT_HW_CACHE_OP_READ) << 8) | ((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) { - if (runtime_stat_n(st, STAT_DTLB_CACHE, ctx, cpu) != 0) - print_dtlb_cache_misses(config, cpu, evsel, avg, out, st); + if (runtime_stat_n(st, STAT_DTLB_CACHE, cpu, &rsd) != 0) + print_dtlb_cache_misses(config, cpu, avg, out, st, &rsd); else print_metric(config, ctxp, NULL, NULL, "of all dTLB cache accesses", 0); } else if ( @@ -1000,8 +1014,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, ((PERF_COUNT_HW_CACHE_OP_READ) << 8) | ((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) { - if (runtime_stat_n(st, STAT_ITLB_CACHE, ctx, cpu) != 0) - print_itlb_cache_misses(config, cpu, evsel, avg, out, st); + if (runtime_stat_n(st, STAT_ITLB_CACHE, cpu, &rsd) != 0) + print_itlb_cache_misses(config, cpu, avg, out, st, &rsd); else print_metric(config, ctxp, NULL, NULL, "of all iTLB cache accesses", 0); } else if ( @@ -1010,27 +1024,27 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, ((PERF_COUNT_HW_CACHE_OP_READ) << 8) | ((PERF_COUNT_HW_CACHE_RESULT_MISS) << 16))) { - if (runtime_stat_n(st, STAT_LL_CACHE, ctx, cpu) != 0) - print_ll_cache_misses(config, cpu, evsel, avg, out, st); + if (runtime_stat_n(st, STAT_LL_CACHE, cpu, &rsd) != 0) + print_ll_cache_misses(config, cpu, avg, out, st, &rsd); else print_metric(config, ctxp, NULL, NULL, "of all LL-cache accesses", 0); } else if (evsel__match(evsel, HARDWARE, HW_CACHE_MISSES)) { - total = runtime_stat_avg(st, STAT_CACHEREFS, ctx, cpu); + total = runtime_stat_avg(st, STAT_CACHEREFS, cpu, &rsd); if 
(total) ratio = avg * 100 / total; - if (runtime_stat_n(st, STAT_CACHEREFS, ctx, cpu) != 0) + if (runtime_stat_n(st, STAT_CACHEREFS, cpu, &rsd) != 0) print_metric(config, ctxp, NULL, "%8.3f %%", "of all cache refs", ratio); else print_metric(config, ctxp, NULL, NULL, "of all cache refs", 0); } else if (evsel__match(evsel, HARDWARE, HW_STALLED_CYCLES_FRONTEND)) { - print_stalled_cycles_frontend(config, cpu, evsel, avg, out, st); + print_stalled_cycles_frontend(config, cpu, avg, out, st, &rsd); } else if (evsel__match(evsel, HARDWARE, HW_STALLED_CYCLES_BACKEND)) { - print_stalled_cycles_backend(config, cpu, evsel, avg, out, st); + print_stalled_cycles_backend(config, cpu, avg, out, st, &rsd); } else if (evsel__match(evsel, HARDWARE, HW_CPU_CYCLES)) { - total = runtime_stat_avg(st, STAT_NSECS, 0, cpu); + total = runtime_stat_avg(st, STAT_NSECS, cpu, &rsd); if (total) { ratio = avg / total; @@ -1039,7 +1053,7 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, print_metric(config, ctxp, NULL, NULL, "Ghz", 0); } } else if (perf_stat_evsel__is(evsel, CYCLES_IN_TX)) { - total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu); + total = runtime_stat_avg(st, STAT_CYCLES, cpu, &rsd); if (total) print_metric(config, ctxp, NULL, @@ -1049,8 +1063,8 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, print_metric(config, ctxp, NULL, NULL, "transactional cycles", 0); } else if (perf_stat_evsel__is(evsel, CYCLES_IN_TX_CP)) { - total = runtime_stat_avg(st, STAT_CYCLES, ctx, cpu); - total2 = runtime_stat_avg(st, STAT_CYCLES_IN_TX, ctx, cpu); + total = runtime_stat_avg(st, STAT_CYCLES, cpu, &rsd); + total2 = runtime_stat_avg(st, STAT_CYCLES_IN_TX, cpu, &rsd); if (total2 < avg) total2 = avg; @@ -1060,21 +1074,19 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, else print_metric(config, ctxp, NULL, NULL, "aborted cycles", 0); } else if (perf_stat_evsel__is(evsel, TRANSACTION_START)) { - total = runtime_stat_avg(st, STAT_CYCLES_IN_TX, - ctx, cpu); + total = runtime_stat_avg(st, STAT_CYCLES_IN_TX, cpu, &rsd); if (avg) ratio = total / avg; - if (runtime_stat_n(st, STAT_CYCLES_IN_TX, ctx, cpu) != 0) + if (runtime_stat_n(st, STAT_CYCLES_IN_TX, cpu, &rsd) != 0) print_metric(config, ctxp, NULL, "%8.0f", "cycles / transaction", ratio); else print_metric(config, ctxp, NULL, NULL, "cycles / transaction", 0); } else if (perf_stat_evsel__is(evsel, ELISION_START)) { - total = runtime_stat_avg(st, STAT_CYCLES_IN_TX, - ctx, cpu); + total = runtime_stat_avg(st, STAT_CYCLES_IN_TX, cpu, &rsd); if (avg) ratio = total / avg; @@ -1087,28 +1099,28 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, else print_metric(config, ctxp, NULL, NULL, "CPUs utilized", 0); } else if (perf_stat_evsel__is(evsel, TOPDOWN_FETCH_BUBBLES)) { - double fe_bound = td_fe_bound(ctx, cpu, st); + double fe_bound = td_fe_bound(cpu, st, &rsd); if (fe_bound > 0.2) color = PERF_COLOR_RED; print_metric(config, ctxp, color, "%8.1f%%", "frontend bound", fe_bound * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_SLOTS_RETIRED)) { - double retiring = td_retiring(ctx, cpu, st); + double retiring = td_retiring(cpu, st, &rsd); if (retiring > 0.7) color = PERF_COLOR_GREEN; print_metric(config, ctxp, color, "%8.1f%%", "retiring", retiring * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_RECOVERY_BUBBLES)) { - double bad_spec = td_bad_spec(ctx, cpu, st); + double bad_spec = td_bad_spec(cpu, st, &rsd); if (bad_spec > 0.1) color = PERF_COLOR_RED; print_metric(config, ctxp, color, 
"%8.1f%%", "bad speculation", bad_spec * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_SLOTS_ISSUED)) { - double be_bound = td_be_bound(ctx, cpu, st); + double be_bound = td_be_bound(cpu, st, &rsd); const char *name = "backend bound"; static int have_recovery_bubbles = -1; @@ -1121,43 +1133,43 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, if (be_bound > 0.2) color = PERF_COLOR_RED; - if (td_total_slots(ctx, cpu, st) > 0) + if (td_total_slots(cpu, st, &rsd) > 0) print_metric(config, ctxp, color, "%8.1f%%", name, be_bound * 100.); else print_metric(config, ctxp, NULL, NULL, name, 0); } else if (perf_stat_evsel__is(evsel, TOPDOWN_RETIRING) && - full_td(ctx, cpu, st)) { - double retiring = td_metric_ratio(ctx, cpu, - STAT_TOPDOWN_RETIRING, st); - + full_td(cpu, st, &rsd)) { + double retiring = td_metric_ratio(cpu, + STAT_TOPDOWN_RETIRING, st, + &rsd); if (retiring > 0.7) color = PERF_COLOR_GREEN; print_metric(config, ctxp, color, "%8.1f%%", "retiring", retiring * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_FE_BOUND) && - full_td(ctx, cpu, st)) { - double fe_bound = td_metric_ratio(ctx, cpu, - STAT_TOPDOWN_FE_BOUND, st); - + full_td(cpu, st, &rsd)) { + double fe_bound = td_metric_ratio(cpu, + STAT_TOPDOWN_FE_BOUND, st, + &rsd); if (fe_bound > 0.2) color = PERF_COLOR_RED; print_metric(config, ctxp, color, "%8.1f%%", "frontend bound", fe_bound * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_BE_BOUND) && - full_td(ctx, cpu, st)) { - double be_bound = td_metric_ratio(ctx, cpu, - STAT_TOPDOWN_BE_BOUND, st); - + full_td(cpu, st, &rsd)) { + double be_bound = td_metric_ratio(cpu, + STAT_TOPDOWN_BE_BOUND, st, + &rsd); if (be_bound > 0.2) color = PERF_COLOR_RED; print_metric(config, ctxp, color, "%8.1f%%", "backend bound", be_bound * 100.); } else if (perf_stat_evsel__is(evsel, TOPDOWN_BAD_SPEC) && - full_td(ctx, cpu, st)) { - double bad_spec = td_metric_ratio(ctx, cpu, - STAT_TOPDOWN_BAD_SPEC, st); - + full_td(cpu, st, &rsd)) { + double bad_spec = td_metric_ratio(cpu, + STAT_TOPDOWN_BAD_SPEC, st, + &rsd); if (bad_spec > 0.1) color = PERF_COLOR_RED; print_metric(config, ctxp, color, "%8.1f%%", "bad speculation", @@ -1165,11 +1177,11 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, } else if (evsel->metric_expr) { generic_metric(config, evsel->metric_expr, evsel->metric_events, NULL, evsel->name, evsel->metric_name, NULL, 1, cpu, out, st); - } else if (runtime_stat_n(st, STAT_NSECS, 0, cpu) != 0) { + } else if (runtime_stat_n(st, STAT_NSECS, cpu, &rsd) != 0) { char unit = 'M'; char unit_buf[10]; - total = runtime_stat_avg(st, STAT_NSECS, 0, cpu); + total = runtime_stat_avg(st, STAT_NSECS, cpu, &rsd); if (total) ratio = 1000.0 * avg / total; @@ -1180,7 +1192,7 @@ void perf_stat__print_shadow_stats(struct perf_stat_config *config, snprintf(unit_buf, sizeof(unit_buf), "%c/sec", unit); print_metric(config, ctxp, NULL, "%8.3f", unit_buf, ratio); } else if (perf_stat_evsel__is(evsel, SMI_NUM)) { - print_smi_cost(config, cpu, evsel, out, st); + print_smi_cost(config, cpu, out, st, &rsd); } else { num = 0; } diff --git a/tools/power/x86/intel-speed-select/isst-config.c b/tools/power/x86/intel-speed-select/isst-config.c index 5390158cdb40..09cb3a6672f3 100644 --- a/tools/power/x86/intel-speed-select/isst-config.c +++ b/tools/power/x86/intel-speed-select/isst-config.c @@ -1249,6 +1249,8 @@ static void dump_isst_config(int arg) isst_ctdp_display_information_end(outf); } +static void adjust_scaling_max_from_base_freq(int cpu); + static void 
set_tdp_level_for_cpu(int cpu, void *arg1, void *arg2, void *arg3, void *arg4) { @@ -1267,6 +1269,9 @@ static void set_tdp_level_for_cpu(int cpu, void *arg1, void *arg2, void *arg3, int pkg_id = get_physical_package_id(cpu); int die_id = get_physical_die_id(cpu); + /* Wait for updated base frequencies */ + usleep(2000); + fprintf(stderr, "Option is set to online/offline\n"); ctdp_level.core_cpumask_size = alloc_cpu_set(&ctdp_level.core_cpumask); @@ -1283,6 +1288,7 @@ static void set_tdp_level_for_cpu(int cpu, void *arg1, void *arg2, void *arg3, if (CPU_ISSET_S(i, ctdp_level.core_cpumask_size, ctdp_level.core_cpumask)) { fprintf(stderr, "online cpu %d\n", i); set_cpu_online_offline(i, 1); + adjust_scaling_max_from_base_freq(i); } else { fprintf(stderr, "offline cpu %d\n", i); set_cpu_online_offline(i, 0); @@ -1440,6 +1446,31 @@ static int set_cpufreq_scaling_min_max(int cpu, int max, int freq) return 0; } +static int no_turbo(void) +{ + return parse_int_file(0, "/sys/devices/system/cpu/intel_pstate/no_turbo"); +} + +static void adjust_scaling_max_from_base_freq(int cpu) +{ + int base_freq, scaling_max_freq; + + scaling_max_freq = parse_int_file(0, "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_max_freq", cpu); + base_freq = get_cpufreq_base_freq(cpu); + if (scaling_max_freq < base_freq || no_turbo()) + set_cpufreq_scaling_min_max(cpu, 1, base_freq); +} + +static void adjust_scaling_min_from_base_freq(int cpu) +{ + int base_freq, scaling_min_freq; + + scaling_min_freq = parse_int_file(0, "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_min_freq", cpu); + base_freq = get_cpufreq_base_freq(cpu); + if (scaling_min_freq < base_freq) + set_cpufreq_scaling_min_max(cpu, 0, base_freq); +} + static int set_clx_pbf_cpufreq_scaling_min_max(int cpu) { struct isst_pkg_ctdp_level_info *ctdp_level; @@ -1537,6 +1568,7 @@ static void set_scaling_min_to_cpuinfo_max(int cpu) continue; set_cpufreq_scaling_min_max_from_cpuinfo(i, 1, 0); + adjust_scaling_min_from_base_freq(i); } } diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py index 21516e293d17..e808a47c839b 100755 --- a/tools/testing/kunit/kunit.py +++ b/tools/testing/kunit/kunit.py @@ -43,9 +43,9 @@ class KunitStatus(Enum): BUILD_FAILURE = auto() TEST_FAILURE = auto() -def get_kernel_root_path(): - parts = sys.argv[0] if not __file__ else __file__ - parts = os.path.realpath(parts).split('tools/testing/kunit') +def get_kernel_root_path() -> str: + path = sys.argv[0] if not __file__ else __file__ + parts = os.path.realpath(path).split('tools/testing/kunit') if len(parts) != 2: sys.exit(1) return parts[0] @@ -171,7 +171,7 @@ def run_tests(linux: kunit_kernel.LinuxSourceTree, exec_result.elapsed_time)) return parse_result -def add_common_opts(parser): +def add_common_opts(parser) -> None: parser.add_argument('--build_dir', help='As in the make command, it specifies the build ' 'directory.', @@ -183,13 +183,13 @@ def add_common_opts(parser): help='Run all KUnit tests through allyesconfig', action='store_true') -def add_build_opts(parser): +def add_build_opts(parser) -> None: parser.add_argument('--jobs', help='As in the make command, "Specifies the number of ' 'jobs (commands) to run simultaneously."', type=int, default=8, metavar='jobs') -def add_exec_opts(parser): +def add_exec_opts(parser) -> None: parser.add_argument('--timeout', help='maximum number of seconds to allow for all tests ' 'to run. 
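On the intel-speed-select hunks above: after onlining a CPU, a stale `scaling_max_freq` below the (possibly just-raised) base frequency would silently cap the core, and with turbo disabled the max should likewise sit at base. The decision logic of `adjust_scaling_max_from_base_freq()`, with the sysfs reads replaced by plain parameters in a standalone sketch:

    #include <stdbool.h>
    #include <stdio.h>

    /* Returns the new scaling_max_freq: raise (or pin) it to the base
     * frequency when the current limit is below base or turbo is off. */
    static int adjusted_scaling_max(int scaling_max_khz, int base_khz, bool no_turbo)
    {
        if (scaling_max_khz < base_khz || no_turbo)
            return base_khz;
        return scaling_max_khz;
    }

    int main(void)
    {
        printf("%d\n", adjusted_scaling_max(1200000, 2400000, false)); /* 2400000 */
        printf("%d\n", adjusted_scaling_max(3900000, 2400000, true));  /* 2400000 */
        printf("%d\n", adjusted_scaling_max(3900000, 2400000, false)); /* 3900000 */
        return 0;
    }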
This does not include time taken to build the ' @@ -198,7 +198,7 @@ def add_exec_opts(parser): default=300, metavar='timeout') -def add_parse_opts(parser): +def add_parse_opts(parser) -> None: parser.add_argument('--raw_output', help='don\'t format output from kernel', action='store_true') parser.add_argument('--json', @@ -256,10 +256,7 @@ def main(argv, linux=None): os.mkdir(cli_args.build_dir) if not linux: - linux = kunit_kernel.LinuxSourceTree() - - linux.create_kunitconfig(cli_args.build_dir) - linux.read_kunitconfig(cli_args.build_dir) + linux = kunit_kernel.LinuxSourceTree(cli_args.build_dir) request = KunitRequest(cli_args.raw_output, cli_args.timeout, @@ -277,10 +274,7 @@ def main(argv, linux=None): os.mkdir(cli_args.build_dir) if not linux: - linux = kunit_kernel.LinuxSourceTree() - - linux.create_kunitconfig(cli_args.build_dir) - linux.read_kunitconfig(cli_args.build_dir) + linux = kunit_kernel.LinuxSourceTree(cli_args.build_dir) request = KunitConfigRequest(cli_args.build_dir, cli_args.make_options) @@ -292,10 +286,7 @@ def main(argv, linux=None): sys.exit(1) elif cli_args.subcommand == 'build': if not linux: - linux = kunit_kernel.LinuxSourceTree() - - linux.create_kunitconfig(cli_args.build_dir) - linux.read_kunitconfig(cli_args.build_dir) + linux = kunit_kernel.LinuxSourceTree(cli_args.build_dir) request = KunitBuildRequest(cli_args.jobs, cli_args.build_dir, @@ -309,10 +300,7 @@ def main(argv, linux=None): sys.exit(1) elif cli_args.subcommand == 'exec': if not linux: - linux = kunit_kernel.LinuxSourceTree() - - linux.create_kunitconfig(cli_args.build_dir) - linux.read_kunitconfig(cli_args.build_dir) + linux = kunit_kernel.LinuxSourceTree(cli_args.build_dir) exec_request = KunitExecRequest(cli_args.timeout, cli_args.build_dir, diff --git a/tools/testing/kunit/kunit_config.py b/tools/testing/kunit/kunit_config.py index 02ffc3a3e5dc..bdd60230764b 100644 --- a/tools/testing/kunit/kunit_config.py +++ b/tools/testing/kunit/kunit_config.py @@ -8,6 +8,7 @@ import collections import re +from typing import List, Set CONFIG_IS_NOT_SET_PATTERN = r'^# CONFIG_(\w+) is not set$' CONFIG_PATTERN = r'^CONFIG_(\w+)=(\S+|".*")$' @@ -30,10 +31,10 @@ class KconfigParseError(Exception): class Kconfig(object): """Represents defconfig or .config specified using the Kconfig language.""" - def __init__(self): - self._entries = [] + def __init__(self) -> None: + self._entries = [] # type: List[KconfigEntry] - def entries(self): + def entries(self) -> Set[KconfigEntry]: return set(self._entries) def add_entry(self, entry: KconfigEntry) -> None: diff --git a/tools/testing/kunit/kunit_json.py b/tools/testing/kunit/kunit_json.py index 624b31b2dbd6..f5cca5c38cac 100644 --- a/tools/testing/kunit/kunit_json.py +++ b/tools/testing/kunit/kunit_json.py @@ -13,7 +13,7 @@ import kunit_parser from kunit_parser import TestStatus -def get_json_result(test_result, def_config, build_dir, json_path): +def get_json_result(test_result, def_config, build_dir, json_path) -> str: sub_groups = [] # Each test suite is mapped to a KernelCI sub_group diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py index 57c1724b7e5d..2076a5a2d060 100644 --- a/tools/testing/kunit/kunit_kernel.py +++ b/tools/testing/kunit/kunit_kernel.py @@ -11,6 +11,7 @@ import subprocess import os import shutil import signal +from typing import Iterator from contextlib import ExitStack @@ -39,7 +40,7 @@ class BuildError(Exception): class LinuxSourceTreeOperations(object): """An abstraction over command line operations 
performed on a source tree.""" - def make_mrproper(self): + def make_mrproper(self) -> None: try: subprocess.check_output(['make', 'mrproper'], stderr=subprocess.STDOUT) except OSError as e: @@ -47,7 +48,7 @@ class LinuxSourceTreeOperations(object): except subprocess.CalledProcessError as e: raise ConfigError(e.output.decode()) - def make_olddefconfig(self, build_dir, make_options): + def make_olddefconfig(self, build_dir, make_options) -> None: command = ['make', 'ARCH=um', 'olddefconfig'] if make_options: command.extend(make_options) @@ -60,7 +61,7 @@ class LinuxSourceTreeOperations(object): except subprocess.CalledProcessError as e: raise ConfigError(e.output.decode()) - def make_allyesconfig(self, build_dir, make_options): + def make_allyesconfig(self, build_dir, make_options) -> None: kunit_parser.print_with_timestamp( 'Enabling all CONFIGs for UML...') command = ['make', 'ARCH=um', 'allyesconfig'] @@ -82,7 +83,7 @@ class LinuxSourceTreeOperations(object): kunit_parser.print_with_timestamp( 'Starting Kernel with all configs takes a few minutes...') - def make(self, jobs, build_dir, make_options): + def make(self, jobs, build_dir, make_options) -> None: command = ['make', 'ARCH=um', '--jobs=' + str(jobs)] if make_options: command.extend(make_options) @@ -100,7 +101,7 @@ class LinuxSourceTreeOperations(object): if stderr: # likely only due to build warnings print(stderr.decode()) - def linux_bin(self, params, timeout, build_dir): + def linux_bin(self, params, timeout, build_dir) -> None: """Runs the Linux UML binary. Must be named 'linux'.""" linux_bin = get_file_path(build_dir, 'linux') outfile = get_outfile_path(build_dir) @@ -110,41 +111,42 @@ class LinuxSourceTreeOperations(object): stderr=subprocess.STDOUT) process.wait(timeout) -def get_kconfig_path(build_dir): +def get_kconfig_path(build_dir) -> str: return get_file_path(build_dir, KCONFIG_PATH) -def get_kunitconfig_path(build_dir): +def get_kunitconfig_path(build_dir) -> str: return get_file_path(build_dir, KUNITCONFIG_PATH) -def get_outfile_path(build_dir): +def get_outfile_path(build_dir) -> str: return get_file_path(build_dir, OUTFILE_PATH) class LinuxSourceTree(object): """Represents a Linux kernel source tree with KUnit tests.""" - def __init__(self): - self._ops = LinuxSourceTreeOperations() + def __init__(self, build_dir: str, load_config=True, defconfig=DEFAULT_KUNITCONFIG_PATH) -> None: signal.signal(signal.SIGINT, self.signal_handler) - def clean(self): - try: - self._ops.make_mrproper() - except ConfigError as e: - logging.error(e) - return False - return True + self._ops = LinuxSourceTreeOperations() + + if not load_config: + return - def create_kunitconfig(self, build_dir, defconfig=DEFAULT_KUNITCONFIG_PATH): kunitconfig_path = get_kunitconfig_path(build_dir) if not os.path.exists(kunitconfig_path): shutil.copyfile(defconfig, kunitconfig_path) - def read_kunitconfig(self, build_dir): - kunitconfig_path = get_kunitconfig_path(build_dir) self._kconfig = kunit_config.Kconfig() self._kconfig.read_from_file(kunitconfig_path) - def validate_config(self, build_dir): + def clean(self) -> bool: + try: + self._ops.make_mrproper() + except ConfigError as e: + logging.error(e) + return False + return True + + def validate_config(self, build_dir) -> bool: kconfig_path = get_kconfig_path(build_dir) validated_kconfig = kunit_config.Kconfig() validated_kconfig.read_from_file(kconfig_path) @@ -158,7 +160,7 @@ class LinuxSourceTree(object): return False return True - def build_config(self, build_dir, make_options): + def 
build_config(self, build_dir, make_options) -> bool: kconfig_path = get_kconfig_path(build_dir) if build_dir and not os.path.exists(build_dir): os.mkdir(build_dir) @@ -170,7 +172,7 @@ class LinuxSourceTree(object): return False return self.validate_config(build_dir) - def build_reconfig(self, build_dir, make_options): + def build_reconfig(self, build_dir, make_options) -> bool: """Creates a new .config if it is not a subset of the .kunitconfig.""" kconfig_path = get_kconfig_path(build_dir) if os.path.exists(kconfig_path): @@ -186,7 +188,7 @@ class LinuxSourceTree(object): print('Generating .config ...') return self.build_config(build_dir, make_options) - def build_um_kernel(self, alltests, jobs, build_dir, make_options): + def build_um_kernel(self, alltests, jobs, build_dir, make_options) -> bool: try: if alltests: self._ops.make_allyesconfig(build_dir, make_options) @@ -197,8 +199,8 @@ class LinuxSourceTree(object): return False return self.validate_config(build_dir) - def run_kernel(self, args=[], build_dir='', timeout=None): - args.extend(['mem=1G']) + def run_kernel(self, args=[], build_dir='', timeout=None) -> Iterator[str]: + args.extend(['mem=1G', 'console=tty']) self._ops.linux_bin(args, timeout, build_dir) outfile = get_outfile_path(build_dir) subprocess.call(['stty', 'sane']) @@ -206,6 +208,6 @@ class LinuxSourceTree(object): for line in file: yield line - def signal_handler(self, sig, frame): + def signal_handler(self, sig, frame) -> None: logging.error('Build interruption occurred. Cleaning console.') subprocess.call(['stty', 'sane']) diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py index 6614ec4d0898..e8bcc139702e 100644 --- a/tools/testing/kunit/kunit_parser.py +++ b/tools/testing/kunit/kunit_parser.py @@ -12,32 +12,32 @@ from collections import namedtuple from datetime import datetime from enum import Enum, auto from functools import reduce -from typing import List, Optional, Tuple +from typing import Iterable, Iterator, List, Optional, Tuple TestResult = namedtuple('TestResult', ['status','suites','log']) class TestSuite(object): - def __init__(self): - self.status = None - self.name = None - self.cases = [] + def __init__(self) -> None: + self.status = TestStatus.SUCCESS + self.name = '' + self.cases = [] # type: List[TestCase] - def __str__(self): - return 'TestSuite(' + self.status + ',' + self.name + ',' + str(self.cases) + ')' + def __str__(self) -> str: + return 'TestSuite(' + str(self.status) + ',' + self.name + ',' + str(self.cases) + ')' - def __repr__(self): + def __repr__(self) -> str: return str(self) class TestCase(object): - def __init__(self): - self.status = None + def __init__(self) -> None: + self.status = TestStatus.SUCCESS self.name = '' - self.log = [] + self.log = [] # type: List[str] - def __str__(self): - return 'TestCase(' + self.status + ',' + self.name + ',' + str(self.log) + ')' + def __str__(self) -> str: + return 'TestCase(' + str(self.status) + ',' + self.name + ',' + str(self.log) + ')' - def __repr__(self): + def __repr__(self) -> str: return str(self) class TestStatus(Enum): @@ -51,7 +51,7 @@ kunit_start_re = re.compile(r'TAP version [0-9]+$') kunit_end_re = re.compile('(List of all partitions:|' 'Kernel panic - not syncing: VFS:)') -def isolate_kunit_output(kernel_output): +def isolate_kunit_output(kernel_output) -> Iterator[str]: started = False for line in kernel_output: line = line.rstrip() # line always has a trailing \n @@ -64,7 +64,7 @@ def isolate_kunit_output(kernel_output): elif started: yield 
line[prefix_len:] if prefix_len > 0 else line -def raw_output(kernel_output): +def raw_output(kernel_output) -> None: for line in kernel_output: print(line.rstrip()) @@ -72,36 +72,36 @@ DIVIDER = '=' * 60 RESET = '\033[0;0m' -def red(text): +def red(text) -> str: return '\033[1;31m' + text + RESET -def yellow(text): +def yellow(text) -> str: return '\033[1;33m' + text + RESET -def green(text): +def green(text) -> str: return '\033[1;32m' + text + RESET -def print_with_timestamp(message): +def print_with_timestamp(message) -> None: print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message)) -def format_suite_divider(message): +def format_suite_divider(message) -> str: return '======== ' + message + ' ========' -def print_suite_divider(message): +def print_suite_divider(message) -> None: print_with_timestamp(DIVIDER) print_with_timestamp(format_suite_divider(message)) -def print_log(log): +def print_log(log) -> None: for m in log: print_with_timestamp(m) TAP_ENTRIES = re.compile(r'^(TAP|[\s]*ok|[\s]*not ok|[\s]*[0-9]+\.\.[0-9]+|[\s]*#).*$') -def consume_non_diagnositic(lines: List[str]) -> None: +def consume_non_diagnostic(lines: List[str]) -> None: while lines and not TAP_ENTRIES.match(lines[0]): lines.pop(0) -def save_non_diagnositic(lines: List[str], test_case: TestCase) -> None: +def save_non_diagnostic(lines: List[str], test_case: TestCase) -> None: while lines and not TAP_ENTRIES.match(lines[0]): test_case.log.append(lines[0]) lines.pop(0) @@ -113,7 +113,7 @@ OK_NOT_OK_SUBTEST = re.compile(r'^[\s]+(ok|not ok) [0-9]+ - (.*)$') OK_NOT_OK_MODULE = re.compile(r'^(ok|not ok) ([0-9]+) - (.*)$') def parse_ok_not_ok_test_case(lines: List[str], test_case: TestCase) -> bool: - save_non_diagnositic(lines, test_case) + save_non_diagnostic(lines, test_case) if not lines: test_case.status = TestStatus.TEST_CRASHED return True @@ -139,7 +139,7 @@ SUBTEST_DIAGNOSTIC = re.compile(r'^[\s]+# (.*)$') DIAGNOSTIC_CRASH_MESSAGE = re.compile(r'^[\s]+# .*?: kunit test case crashed!$') def parse_diagnostic(lines: List[str], test_case: TestCase) -> bool: - save_non_diagnositic(lines, test_case) + save_non_diagnostic(lines, test_case) if not lines: return False line = lines[0] @@ -155,7 +155,7 @@ def parse_diagnostic(lines: List[str], test_case: TestCase) -> bool: def parse_test_case(lines: List[str]) -> Optional[TestCase]: test_case = TestCase() - save_non_diagnositic(lines, test_case) + save_non_diagnostic(lines, test_case) while parse_diagnostic(lines, test_case): pass if parse_ok_not_ok_test_case(lines, test_case): @@ -166,7 +166,7 @@ def parse_test_case(lines: List[str]) -> Optional[TestCase]: SUBTEST_HEADER = re.compile(r'^[\s]+# Subtest: (.*)$') def parse_subtest_header(lines: List[str]) -> Optional[str]: - consume_non_diagnositic(lines) + consume_non_diagnostic(lines) if not lines: return None match = SUBTEST_HEADER.match(lines[0]) @@ -179,7 +179,7 @@ def parse_subtest_header(lines: List[str]) -> Optional[str]: SUBTEST_PLAN = re.compile(r'[\s]+[0-9]+\.\.([0-9]+)') def parse_subtest_plan(lines: List[str]) -> Optional[int]: - consume_non_diagnositic(lines) + consume_non_diagnostic(lines) match = SUBTEST_PLAN.match(lines[0]) if match: lines.pop(0) @@ -202,7 +202,7 @@ def max_status(left: TestStatus, right: TestStatus) -> TestStatus: def parse_ok_not_ok_test_suite(lines: List[str], test_suite: TestSuite, expected_suite_index: int) -> bool: - consume_non_diagnositic(lines) + consume_non_diagnostic(lines) if not lines: test_suite.status = TestStatus.TEST_CRASHED return False @@ -224,18 +224,17 @@ def 
parse_ok_not_ok_test_suite(lines: List[str], else: return False -def bubble_up_errors(to_status, status_container_list) -> TestStatus: - status_list = map(to_status, status_container_list) - return reduce(max_status, status_list, TestStatus.SUCCESS) +def bubble_up_errors(statuses: Iterable[TestStatus]) -> TestStatus: + return reduce(max_status, statuses, TestStatus.SUCCESS) def bubble_up_test_case_errors(test_suite: TestSuite) -> TestStatus: - max_test_case_status = bubble_up_errors(lambda x: x.status, test_suite.cases) + max_test_case_status = bubble_up_errors(x.status for x in test_suite.cases) return max_status(max_test_case_status, test_suite.status) def parse_test_suite(lines: List[str], expected_suite_index: int) -> Optional[TestSuite]: if not lines: return None - consume_non_diagnositic(lines) + consume_non_diagnostic(lines) test_suite = TestSuite() test_suite.status = TestStatus.SUCCESS name = parse_subtest_header(lines) @@ -264,7 +263,7 @@ def parse_test_suite(lines: List[str], expected_suite_index: int) -> Optional[Te TAP_HEADER = re.compile(r'^TAP version 14$') def parse_tap_header(lines: List[str]) -> bool: - consume_non_diagnositic(lines) + consume_non_diagnostic(lines) if TAP_HEADER.match(lines[0]): lines.pop(0) return True @@ -274,7 +273,7 @@ def parse_tap_header(lines: List[str]) -> bool: TEST_PLAN = re.compile(r'[0-9]+\.\.([0-9]+)') def parse_test_plan(lines: List[str]) -> Optional[int]: - consume_non_diagnositic(lines) + consume_non_diagnostic(lines) match = TEST_PLAN.match(lines[0]) if match: lines.pop(0) @@ -282,11 +281,11 @@ def parse_test_plan(lines: List[str]) -> Optional[int]: else: return None -def bubble_up_suite_errors(test_suite_list: List[TestSuite]) -> TestStatus: - return bubble_up_errors(lambda x: x.status, test_suite_list) +def bubble_up_suite_errors(test_suites: Iterable[TestSuite]) -> TestStatus: + return bubble_up_errors(x.status for x in test_suites) def parse_test_result(lines: List[str]) -> TestResult: - consume_non_diagnositic(lines) + consume_non_diagnostic(lines) if not lines or not parse_tap_header(lines): return TestResult(TestStatus.NO_TESTS, [], lines) expected_test_suite_num = parse_test_plan(lines) diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile index afbab4aeef3c..8a917cb4426a 100644 --- a/tools/testing/selftests/Makefile +++ b/tools/testing/selftests/Makefile @@ -77,8 +77,10 @@ TARGETS += zram TARGETS_HOTPLUG = cpu-hotplug TARGETS_HOTPLUG += memory-hotplug -# User can optionally provide a TARGETS skiplist. -SKIP_TARGETS ?= +# User can optionally provide a TARGETS skiplist. By default we skip +# BPF since it has cutting edge build time dependencies which require +# more effort to install. 
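# A hypothetical usage note, not part of the patch: because the default
# below is assigned with ?=, a command-line assignment overrides it, so
# something like
#   make -C tools/testing/selftests TARGETS=bpf SKIP_TARGETS= run_tests
# would still build and run the BPF selftests despite the default skip.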
+SKIP_TARGETS ?= bpf ifneq ($(SKIP_TARGETS),) TMP := $(filter-out $(SKIP_TARGETS), $(TARGETS)) override TARGETS := $(TMP) diff --git a/tools/testing/selftests/arm64/fp/fpsimd-test.S b/tools/testing/selftests/arm64/fp/fpsimd-test.S index 1c5556bdd11d..0dbd594c2747 100644 --- a/tools/testing/selftests/arm64/fp/fpsimd-test.S +++ b/tools/testing/selftests/arm64/fp/fpsimd-test.S @@ -457,7 +457,7 @@ function barf mov x11, x1 // actual data mov x12, x2 // data size - puts "Mistatch: PID=" + puts "Mismatch: PID=" mov x0, x20 bl putdec puts ", iteration=" diff --git a/tools/testing/selftests/arm64/fp/sve-test.S b/tools/testing/selftests/arm64/fp/sve-test.S index f95074c9b48b..9210691aa998 100644 --- a/tools/testing/selftests/arm64/fp/sve-test.S +++ b/tools/testing/selftests/arm64/fp/sve-test.S @@ -625,7 +625,7 @@ function barf mov x11, x1 // actual data mov x12, x2 // data size - puts "Mistatch: PID=" + puts "Mismatch: PID=" mov x0, x20 bl putdec puts ", iteration=" diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index 8c33e999319a..c51df6b91bef 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -121,6 +121,9 @@ VMLINUX_BTF_PATHS ?= $(if $(O),$(O)/vmlinux) \ /sys/kernel/btf/vmlinux \ /boot/vmlinux-$(shell uname -r) VMLINUX_BTF ?= $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS)))) +ifeq ($(VMLINUX_BTF),) +$(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)") +endif # Define simple and short `make test_progs`, `make test_sysctl`, etc targets # to build individual tests. diff --git a/tools/testing/selftests/bpf/prog_tests/test_local_storage.c b/tools/testing/selftests/bpf/prog_tests/test_local_storage.c index c0fe73a17ed1..3bfcf00c0a67 100644 --- a/tools/testing/selftests/bpf/prog_tests/test_local_storage.c +++ b/tools/testing/selftests/bpf/prog_tests/test_local_storage.c @@ -34,61 +34,6 @@ struct storage { struct bpf_spin_lock lock; }; -/* Copies an rm binary to a temp file. dest is a mkstemp template */ -static int copy_rm(char *dest) -{ - int fd_in, fd_out = -1, ret = 0; - struct stat stat; - char *buf = NULL; - - fd_in = open("/bin/rm", O_RDONLY); - if (fd_in < 0) - return -errno; - - fd_out = mkstemp(dest); - if (fd_out < 0) { - ret = -errno; - goto out; - } - - ret = fstat(fd_in, &stat); - if (ret == -1) { - ret = -errno; - goto out; - } - - buf = malloc(stat.st_blksize); - if (!buf) { - ret = -errno; - goto out; - } - - while (ret = read(fd_in, buf, stat.st_blksize), ret > 0) { - ret = write(fd_out, buf, ret); - if (ret < 0) { - ret = -errno; - goto out; - - } - } - if (ret < 0) { - ret = -errno; - goto out; - - } - - /* Set executable permission on the copied file */ - ret = chmod(dest, 0100); - if (ret == -1) - ret = -errno; - -out: - free(buf); - close(fd_in); - close(fd_out); - return ret; -} - /* Fork and exec the provided rm binary and return the exit code of the * forked process and its pid. 
*/ @@ -168,9 +113,11 @@ static bool check_syscall_operations(int map_fd, int obj_fd) void test_test_local_storage(void) { - char tmp_exec_path[PATH_MAX] = "/tmp/copy_of_rmXXXXXX"; + char tmp_dir_path[64] = "/tmp/local_storageXXXXXX"; int err, serv_sk = -1, task_fd = -1, rm_fd = -1; struct local_storage *skel = NULL; + char tmp_exec_path[64]; + char cmd[256]; skel = local_storage__open_and_load(); if (CHECK(!skel, "skel_load", "lsm skeleton failed\n")) @@ -189,18 +136,24 @@ void test_test_local_storage(void) task_fd)) goto close_prog; - err = copy_rm(tmp_exec_path); - if (CHECK(err < 0, "copy_rm", "err %d errno %d\n", err, errno)) + if (CHECK(!mkdtemp(tmp_dir_path), "mkdtemp", + "unable to create tmpdir: %d\n", errno)) goto close_prog; + snprintf(tmp_exec_path, sizeof(tmp_exec_path), "%s/copy_of_rm", + tmp_dir_path); + snprintf(cmd, sizeof(cmd), "cp /bin/rm %s", tmp_exec_path); + if (CHECK_FAIL(system(cmd))) + goto close_prog_rmdir; + rm_fd = open(tmp_exec_path, O_RDONLY); if (CHECK(rm_fd < 0, "open", "failed to open %s err:%d, errno:%d", tmp_exec_path, rm_fd, errno)) - goto close_prog; + goto close_prog_rmdir; if (!check_syscall_operations(bpf_map__fd(skel->maps.inode_storage_map), rm_fd)) - goto close_prog; + goto close_prog_rmdir; /* Sets skel->bss->monitored_pid to the pid of the forked child * forks a child process that executes tmp_exec_path and tries to @@ -209,33 +162,36 @@ void test_test_local_storage(void) */ err = run_self_unlink(&skel->bss->monitored_pid, tmp_exec_path); if (CHECK(err != EPERM, "run_self_unlink", "err %d want EPERM\n", err)) - goto close_prog_unlink; + goto close_prog_rmdir; /* Set the process being monitored to be the current process */ skel->bss->monitored_pid = getpid(); - /* Remove the temporary created executable */ - err = unlink(tmp_exec_path); - if (CHECK(err != 0, "unlink", "unable to unlink %s: %d", tmp_exec_path, - errno)) - goto close_prog_unlink; + /* Move copy_of_rm to a new location so that it triggers the + * inode_rename LSM hook with a new_dentry that has a NULL inode ptr. + */ + snprintf(cmd, sizeof(cmd), "mv %s/copy_of_rm %s/check_null_ptr", + tmp_dir_path, tmp_dir_path); + if (CHECK_FAIL(system(cmd))) + goto close_prog_rmdir; CHECK(skel->data->inode_storage_result != 0, "inode_storage_result", "inode_local_storage not set\n"); serv_sk = start_server(AF_INET6, SOCK_STREAM, NULL, 0, 0); if (CHECK(serv_sk < 0, "start_server", "failed to start server\n")) - goto close_prog; + goto close_prog_rmdir; CHECK(skel->data->sk_storage_result != 0, "sk_storage_result", "sk_local_storage not set\n"); if (!check_syscall_operations(bpf_map__fd(skel->maps.sk_storage_map), serv_sk)) - goto close_prog; + goto close_prog_rmdir; -close_prog_unlink: - unlink(tmp_exec_path); +close_prog_rmdir: + snprintf(cmd, sizeof(cmd), "rm -rf %s", tmp_dir_path); + system(cmd); close_prog: close(serv_sk); close(rm_fd); diff --git a/tools/testing/selftests/bpf/progs/bprm_opts.c b/tools/testing/selftests/bpf/progs/bprm_opts.c index 5bfef2887e70..418d9c6d4952 100644 --- a/tools/testing/selftests/bpf/progs/bprm_opts.c +++ b/tools/testing/selftests/bpf/progs/bprm_opts.c @@ -4,7 +4,7 @@ * Copyright 2020 Google LLC. 
*/ -#include "vmlinux.h" +#include <linux/bpf.h> #include <errno.h> #include <bpf/bpf_helpers.h> #include <bpf/bpf_tracing.h> diff --git a/tools/testing/selftests/bpf/progs/local_storage.c b/tools/testing/selftests/bpf/progs/local_storage.c index 3e3de130f28f..95868bc7ada9 100644 --- a/tools/testing/selftests/bpf/progs/local_storage.c +++ b/tools/testing/selftests/bpf/progs/local_storage.c @@ -50,7 +50,6 @@ int BPF_PROG(unlink_hook, struct inode *dir, struct dentry *victim) __u32 pid = bpf_get_current_pid_tgid() >> 32; struct local_storage *storage; bool is_self_unlink; - int err; if (pid != monitored_pid) return 0; @@ -66,8 +65,27 @@ int BPF_PROG(unlink_hook, struct inode *dir, struct dentry *victim) return -EPERM; } - storage = bpf_inode_storage_get(&inode_storage_map, victim->d_inode, 0, - BPF_LOCAL_STORAGE_GET_F_CREATE); + return 0; +} + +SEC("lsm/inode_rename") +int BPF_PROG(inode_rename, struct inode *old_dir, struct dentry *old_dentry, + struct inode *new_dir, struct dentry *new_dentry, + unsigned int flags) +{ + __u32 pid = bpf_get_current_pid_tgid() >> 32; + struct local_storage *storage; + int err; + + /* new_dentry->d_inode can be NULL when the inode is renamed to a file + * that did not exist before. The helper should be able to handle this + * NULL pointer. + */ + bpf_inode_storage_get(&inode_storage_map, new_dentry->d_inode, 0, + BPF_LOCAL_STORAGE_GET_F_CREATE); + + storage = bpf_inode_storage_get(&inode_storage_map, old_dentry->d_inode, + 0, 0); if (!storage) return 0; @@ -76,7 +94,7 @@ int BPF_PROG(unlink_hook, struct inode *dir, struct dentry *victim) inode_storage_result = -1; bpf_spin_unlock(&storage->lock); - err = bpf_inode_storage_delete(&inode_storage_map, victim->d_inode); + err = bpf_inode_storage_delete(&inode_storage_map, old_dentry->d_inode); if (!err) inode_storage_result = err; @@ -133,37 +151,18 @@ int BPF_PROG(socket_post_create, struct socket *sock, int family, int type, return 0; } -SEC("lsm/file_open") -int BPF_PROG(file_open, struct file *file) -{ - __u32 pid = bpf_get_current_pid_tgid() >> 32; - struct local_storage *storage; - - if (pid != monitored_pid) - return 0; - - if (!file->f_inode) - return 0; - - storage = bpf_inode_storage_get(&inode_storage_map, file->f_inode, 0, - BPF_LOCAL_STORAGE_GET_F_CREATE); - if (!storage) - return 0; - - bpf_spin_lock(&storage->lock); - storage->value = DUMMY_STORAGE_VALUE; - bpf_spin_unlock(&storage->lock); - return 0; -} - /* This uses the local storage to remember the inode of the binary that a * process was originally executing. 
*/ SEC("lsm/bprm_committed_creds") void BPF_PROG(exec, struct linux_binprm *bprm) { + __u32 pid = bpf_get_current_pid_tgid() >> 32; struct local_storage *storage; + if (pid != monitored_pid) + return; + storage = bpf_task_storage_get(&task_storage_map, bpf_get_current_task_btf(), 0, BPF_LOCAL_STORAGE_GET_F_CREATE); @@ -172,4 +171,13 @@ void BPF_PROG(exec, struct linux_binprm *bprm) storage->exec_inode = bprm->file->f_inode; bpf_spin_unlock(&storage->lock); } + + storage = bpf_inode_storage_get(&inode_storage_map, bprm->file->f_inode, + 0, BPF_LOCAL_STORAGE_GET_F_CREATE); + if (!storage) + return; + + bpf_spin_lock(&storage->lock); + storage->value = DUMMY_STORAGE_VALUE; + bpf_spin_unlock(&storage->lock); } diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c index 0ad3e6305ff0..51adc42b2b40 100644 --- a/tools/testing/selftests/bpf/test_maps.c +++ b/tools/testing/selftests/bpf/test_maps.c @@ -1312,22 +1312,58 @@ static void test_map_stress(void) #define DO_UPDATE 1 #define DO_DELETE 0 +#define MAP_RETRIES 20 + +static int map_update_retriable(int map_fd, const void *key, const void *value, + int flags, int attempts) +{ + while (bpf_map_update_elem(map_fd, key, value, flags)) { + if (!attempts || (errno != EAGAIN && errno != EBUSY)) + return -errno; + + usleep(1); + attempts--; + } + + return 0; +} + +static int map_delete_retriable(int map_fd, const void *key, int attempts) +{ + while (bpf_map_delete_elem(map_fd, key)) { + if (!attempts || (errno != EAGAIN && errno != EBUSY)) + return -errno; + + usleep(1); + attempts--; + } + + return 0; +} + static void test_update_delete(unsigned int fn, void *data) { int do_update = ((int *)data)[1]; int fd = ((int *)data)[0]; - int i, key, value; + int i, key, value, err; for (i = fn; i < MAP_SIZE; i += TASKS) { key = value = i; if (do_update) { - assert(bpf_map_update_elem(fd, &key, &value, - BPF_NOEXIST) == 0); - assert(bpf_map_update_elem(fd, &key, &value, - BPF_EXIST) == 0); + err = map_update_retriable(fd, &key, &value, BPF_NOEXIST, MAP_RETRIES); + if (err) + printf("error %d %d\n", err, errno); + assert(err == 0); + err = map_update_retriable(fd, &key, &value, BPF_EXIST, MAP_RETRIES); + if (err) + printf("error %d %d\n", err, errno); + assert(err == 0); } else { - assert(bpf_map_delete_elem(fd, &key) == 0); + err = map_delete_retriable(fd, &key, MAP_RETRIES); + if (err) + printf("error %d %d\n", err, errno); + assert(err == 0); } } } diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c index 777a81404fdb..f8569f04064b 100644 --- a/tools/testing/selftests/bpf/test_verifier.c +++ b/tools/testing/selftests/bpf/test_verifier.c @@ -50,7 +50,7 @@ #define MAX_INSNS BPF_MAXINSNS #define MAX_TEST_INSNS 1000000 #define MAX_FIXUPS 8 -#define MAX_NR_MAPS 20 +#define MAX_NR_MAPS 21 #define MAX_TEST_RUNS 8 #define POINTER_VALUE 0xcafe4all #define TEST_DATA_LEN 64 @@ -87,6 +87,7 @@ struct bpf_test { int fixup_sk_storage_map[MAX_FIXUPS]; int fixup_map_event_output[MAX_FIXUPS]; int fixup_map_reuseport_array[MAX_FIXUPS]; + int fixup_map_ringbuf[MAX_FIXUPS]; const char *errstr; const char *errstr_unpriv; uint32_t insn_processed; @@ -640,6 +641,7 @@ static void do_test_fixup(struct bpf_test *test, enum bpf_prog_type prog_type, int *fixup_sk_storage_map = test->fixup_sk_storage_map; int *fixup_map_event_output = test->fixup_map_event_output; int *fixup_map_reuseport_array = test->fixup_map_reuseport_array; + int *fixup_map_ringbuf = test->fixup_map_ringbuf; if 
(test->fill_helper) { test->fill_insns = calloc(MAX_TEST_INSNS, sizeof(struct bpf_insn)); @@ -817,6 +819,14 @@ static void do_test_fixup(struct bpf_test *test, enum bpf_prog_type prog_type, fixup_map_reuseport_array++; } while (*fixup_map_reuseport_array); } + if (*fixup_map_ringbuf) { + map_fds[20] = create_map(BPF_MAP_TYPE_RINGBUF, 0, + 0, 4096); + do { + prog[*fixup_map_ringbuf].imm = map_fds[20]; + fixup_map_ringbuf++; + } while (*fixup_map_ringbuf); + } } struct libcap { diff --git a/tools/testing/selftests/bpf/verifier/spill_fill.c b/tools/testing/selftests/bpf/verifier/spill_fill.c index 45d43bf82f26..0b943897aaf6 100644 --- a/tools/testing/selftests/bpf/verifier/spill_fill.c +++ b/tools/testing/selftests/bpf/verifier/spill_fill.c @@ -29,6 +29,36 @@ .result_unpriv = ACCEPT, }, { + "check valid spill/fill, ptr to mem", + .insns = { + /* reserve 8 byte ringbuf memory */ + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), + BPF_LD_MAP_FD(BPF_REG_1, 0), + BPF_MOV64_IMM(BPF_REG_2, 8), + BPF_MOV64_IMM(BPF_REG_3, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve), + /* store a pointer to the reserved memory in R6 */ + BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), + /* check whether the reservation was successful */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), + /* spill R6(mem) into the stack */ + BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8), + /* fill it back in R7 */ + BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_10, -8), + /* should be able to access *(R7) = 0 */ + BPF_ST_MEM(BPF_DW, BPF_REG_7, 0, 0), + /* submit the reserved ringbuf memory */ + BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .fixup_map_ringbuf = { 1 }, + .result = ACCEPT, + .result_unpriv = ACCEPT, +}, +{ "check corrupted spill/fill", .insns = { /* spill R1(ctx) into stack */ diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c index 014dedaa4dd2..1e722ee76b1f 100644 --- a/tools/testing/selftests/bpf/xdpxceiver.c +++ b/tools/testing/selftests/bpf/xdpxceiver.c @@ -715,7 +715,7 @@ static void worker_pkt_dump(void) int payload = *((uint32_t *)(pkt_buf[iter]->payload + PKT_HDR_SIZE)); if (payload == EOT) { - ksft_print_msg("End-of-tranmission frame received\n"); + ksft_print_msg("End-of-transmission frame received\n"); fprintf(stdout, "---------------------------------------\n"); break; } @@ -747,7 +747,7 @@ static void worker_pkt_validate(void) } if (payloadseqnum == EOT) { - ksft_print_msg("End-of-tranmission frame received: PASS\n"); + ksft_print_msg("End-of-transmission frame received: PASS\n"); sigvar = 1; break; } diff --git a/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh b/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh index 4d900bc1f76c..5c7700212f75 100755 --- a/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh +++ b/tools/testing/selftests/drivers/net/mlxsw/qos_pfc.sh @@ -230,7 +230,7 @@ switch_create() __mlnx_qos -i $swp4 --pfc=0,1,0,0,0,0,0,0 >/dev/null # PG0 will get autoconfigured to Xoff, give PG1 arbitrarily 100K, which # is (-2*MTU) about 80K of delay provision. 
- __mlnx_qos -i $swp3 --buffer_size=0,$_100KB,0,0,0,0,0,0 >/dev/null + __mlnx_qos -i $swp4 --buffer_size=0,$_100KB,0,0,0,0,0,0 >/dev/null # bridges # ------- diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index c7ca4faba272..fe41c6a0fa67 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -33,7 +33,7 @@ ifeq ($(ARCH),s390) UNAME_M := s390x endif -LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/sparsebit.c lib/test_util.c +LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c LIBKVM_x86_64 = lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index 3d96a7bfaff3..cdad1eca72f7 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -7,23 +7,20 @@ * Copyright (C) 2019, Google, Inc. */ -#define _GNU_SOURCE /* for program_invocation_name */ +#define _GNU_SOURCE /* for pipe2 */ #include <stdio.h> #include <stdlib.h> -#include <sys/syscall.h> -#include <unistd.h> -#include <asm/unistd.h> #include <time.h> #include <poll.h> #include <pthread.h> -#include <linux/bitmap.h> -#include <linux/bitops.h> #include <linux/userfaultfd.h> +#include <sys/syscall.h> -#include "perf_test_util.h" -#include "processor.h" +#include "kvm_util.h" #include "test_util.h" +#include "perf_test_util.h" +#include "guest_modes.h" #ifdef __NR_userfaultfd @@ -39,12 +36,14 @@ #define PER_VCPU_DEBUG(...) 
_no_printf(__VA_ARGS__) #endif +static int nr_vcpus = 1; +static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE; static char *guest_data_prototype; static void *vcpu_worker(void *data) { int ret; - struct vcpu_args *vcpu_args = (struct vcpu_args *)data; + struct perf_test_vcpu_args *vcpu_args = (struct perf_test_vcpu_args *)data; int vcpu_id = vcpu_args->vcpu_id; struct kvm_vm *vm = perf_test_args.vm; struct kvm_run *run; @@ -248,9 +247,14 @@ static int setup_demand_paging(struct kvm_vm *vm, return 0; } -static void run_test(enum vm_guest_mode mode, bool use_uffd, - useconds_t uffd_delay) +struct test_params { + bool use_uffd; + useconds_t uffd_delay; +}; + +static void run_test(enum vm_guest_mode mode, void *arg) { + struct test_params *p = arg; pthread_t *vcpu_threads; pthread_t *uffd_handler_threads = NULL; struct uffd_handler_args *uffd_args = NULL; @@ -261,7 +265,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd, int vcpu_id; int r; - vm = create_vm(mode, nr_vcpus, guest_percpu_mem_size); + vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size); perf_test_args.wr_fract = 1; @@ -273,9 +277,9 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd, vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads)); TEST_ASSERT(vcpu_threads, "Memory allocation failed"); - add_vcpus(vm, nr_vcpus, guest_percpu_mem_size); + perf_test_setup_vcpus(vm, nr_vcpus, guest_percpu_mem_size); - if (use_uffd) { + if (p->use_uffd) { uffd_handler_threads = malloc(nr_vcpus * sizeof(*uffd_handler_threads)); TEST_ASSERT(uffd_handler_threads, "Memory allocation failed"); @@ -308,7 +312,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd, r = setup_demand_paging(vm, &uffd_handler_threads[vcpu_id], pipefds[vcpu_id * 2], - uffd_delay, &uffd_args[vcpu_id], + p->uffd_delay, &uffd_args[vcpu_id], vcpu_hva, guest_percpu_mem_size); if (r < 0) exit(-r); @@ -339,7 +343,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd, pr_info("All vCPU threads joined\n"); - if (use_uffd) { + if (p->use_uffd) { char c; /* Tell the user fault fd handler threads to quit */ @@ -357,43 +361,23 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd, perf_test_args.vcpu_args[0].pages * nr_vcpus / ((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / 100000000.0)); - ucall_uninit(vm); - kvm_vm_free(vm); + perf_test_destroy_vm(vm); free(guest_data_prototype); free(vcpu_threads); - if (use_uffd) { + if (p->use_uffd) { free(uffd_handler_threads); free(uffd_args); free(pipefds); } } -struct guest_mode { - bool supported; - bool enabled; -}; -static struct guest_mode guest_modes[NUM_VM_MODES]; - -#define guest_mode_init(mode, supported, enabled) ({ \ - guest_modes[mode] = (struct guest_mode){ supported, enabled }; \ -}) - static void help(char *name) { - int i; - puts(""); printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n" " [-b memory] [-v vcpus]\n", name); - printf(" -m: specify the guest mode ID to test\n" - " (default: test all supported modes)\n" - " This option may be used multiple times.\n" - " Guest mode IDs:\n"); - for (i = 0; i < NUM_VM_MODES; ++i) { - printf(" %d: %s%s\n", i, vm_guest_mode_string(i), - guest_modes[i].supported ? 
" (supported)" : ""); - } + guest_modes_help(); printf(" -u: use User Fault FD to handle vCPU page\n" " faults.\n"); printf(" -d: add a delay in usec to the User Fault\n" @@ -410,53 +394,22 @@ static void help(char *name) int main(int argc, char *argv[]) { int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS); - bool mode_selected = false; - unsigned int mode; - int opt, i; - bool use_uffd = false; - useconds_t uffd_delay = 0; - -#ifdef __x86_64__ - guest_mode_init(VM_MODE_PXXV48_4K, true, true); -#endif -#ifdef __aarch64__ - guest_mode_init(VM_MODE_P40V48_4K, true, true); - guest_mode_init(VM_MODE_P40V48_64K, true, true); - { - unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE); - - if (limit >= 52) - guest_mode_init(VM_MODE_P52V48_64K, true, true); - if (limit >= 48) { - guest_mode_init(VM_MODE_P48V48_4K, true, true); - guest_mode_init(VM_MODE_P48V48_64K, true, true); - } - } -#endif -#ifdef __s390x__ - guest_mode_init(VM_MODE_P40V48_4K, true, true); -#endif + struct test_params p = {}; + int opt; + + guest_modes_append_default(); while ((opt = getopt(argc, argv, "hm:ud:b:v:")) != -1) { switch (opt) { case 'm': - if (!mode_selected) { - for (i = 0; i < NUM_VM_MODES; ++i) - guest_modes[i].enabled = false; - mode_selected = true; - } - mode = strtoul(optarg, NULL, 10); - TEST_ASSERT(mode < NUM_VM_MODES, - "Guest mode ID %d too big", mode); - guest_modes[mode].enabled = true; + guest_modes_cmdline(optarg); break; case 'u': - use_uffd = true; + p.use_uffd = true; break; case 'd': - uffd_delay = strtoul(optarg, NULL, 0); - TEST_ASSERT(uffd_delay >= 0, - "A negative UFFD delay is not supported."); + p.uffd_delay = strtoul(optarg, NULL, 0); + TEST_ASSERT(p.uffd_delay >= 0, "A negative UFFD delay is not supported."); break; case 'b': guest_percpu_mem_size = parse_size(optarg); @@ -473,14 +426,7 @@ int main(int argc, char *argv[]) } } - for (i = 0; i < NUM_VM_MODES; ++i) { - if (!guest_modes[i].enabled) - continue; - TEST_ASSERT(guest_modes[i].supported, - "Guest mode ID %d (%s) not supported.", - i, vm_guest_mode_string(i)); - run_test(i, use_uffd, uffd_delay); - } + for_each_guest_mode(run_test, &p); return 0; } diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c index 9c6a7be31e03..2283a0ec74a9 100644 --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c @@ -8,29 +8,28 @@ * Copyright (C) 2020, Google, Inc. 
*/ -#define _GNU_SOURCE /* for program_invocation_name */ - #include <stdio.h> #include <stdlib.h> -#include <unistd.h> #include <time.h> #include <pthread.h> #include <linux/bitmap.h> -#include <linux/bitops.h> #include "kvm_util.h" -#include "perf_test_util.h" -#include "processor.h" #include "test_util.h" +#include "perf_test_util.h" +#include "guest_modes.h" /* How many host loops to run by default (one KVM_GET_DIRTY_LOG for each loop)*/ #define TEST_HOST_LOOP_N 2UL +static int nr_vcpus = 1; +static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE; + /* Host variables */ static u64 dirty_log_manual_caps; static bool host_quit; static uint64_t iteration; -static uint64_t vcpu_last_completed_iteration[MAX_VCPUS]; +static uint64_t vcpu_last_completed_iteration[KVM_MAX_VCPUS]; static void *vcpu_worker(void *data) { @@ -42,7 +41,7 @@ static void *vcpu_worker(void *data) struct timespec ts_diff; struct timespec total = (struct timespec){0}; struct timespec avg; - struct vcpu_args *vcpu_args = (struct vcpu_args *)data; + struct perf_test_vcpu_args *vcpu_args = (struct perf_test_vcpu_args *)data; int vcpu_id = vcpu_args->vcpu_id; vcpu_args_set(vm, vcpu_id, 1, vcpu_id); @@ -89,9 +88,15 @@ static void *vcpu_worker(void *data) return NULL; } -static void run_test(enum vm_guest_mode mode, unsigned long iterations, - uint64_t phys_offset, int wr_fract) +struct test_params { + unsigned long iterations; + uint64_t phys_offset; + int wr_fract; +}; + +static void run_test(enum vm_guest_mode mode, void *arg) { + struct test_params *p = arg; pthread_t *vcpu_threads; struct kvm_vm *vm; unsigned long *bmap; @@ -106,9 +111,9 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations, struct kvm_enable_cap cap = {}; struct timespec clear_dirty_log_total = (struct timespec){0}; - vm = create_vm(mode, nr_vcpus, guest_percpu_mem_size); + vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size); - perf_test_args.wr_fract = wr_fract; + perf_test_args.wr_fract = p->wr_fract; guest_num_pages = (nr_vcpus * guest_percpu_mem_size) >> vm_get_page_shift(vm); guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages); @@ -124,7 +129,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations, vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads)); TEST_ASSERT(vcpu_threads, "Memory allocation failed"); - add_vcpus(vm, nr_vcpus, guest_percpu_mem_size); + perf_test_setup_vcpus(vm, nr_vcpus, guest_percpu_mem_size); sync_global_to_guest(vm, perf_test_args); @@ -150,13 +155,13 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations, /* Enable dirty logging */ clock_gettime(CLOCK_MONOTONIC, &start); - vm_mem_region_set_flags(vm, TEST_MEM_SLOT_INDEX, + vm_mem_region_set_flags(vm, PERF_TEST_MEM_SLOT_INDEX, KVM_MEM_LOG_DIRTY_PAGES); ts_diff = timespec_diff_now(start); pr_info("Enabling dirty logging time: %ld.%.9lds\n\n", ts_diff.tv_sec, ts_diff.tv_nsec); - while (iteration < iterations) { + while (iteration < p->iterations) { /* * Incrementing the iteration number will start the vCPUs * dirtying memory again. 
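The hunks around this loop all follow one timing pattern: sample CLOCK_MONOTONIC before the operation, convert the delta with timespec_diff_now(), accumulate it with timespec_add(), and divide by the iteration count with timespec_div() once the loop ends. A minimal sketch of that pattern, assuming only the harness helpers already named in this file's hunks (it is illustrative, not part of the patch):

static struct timespec time_get_dirty_log(struct kvm_vm *vm,
					  unsigned long iterations,
					  unsigned long *bmap)
{
	struct timespec start, total = (struct timespec){0};
	unsigned long i;

	for (i = 0; i < iterations; i++) {
		clock_gettime(CLOCK_MONOTONIC, &start);
		/* The operation under test; the real loop also syncs vCPUs. */
		kvm_vm_get_dirty_log(vm, PERF_TEST_MEM_SLOT_INDEX, bmap);
		total = timespec_add(total, timespec_diff_now(start));
	}

	/* Mean per-iteration cost, reported like the totals below. */
	return timespec_div(total, iterations);
}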
@@ -177,7 +182,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations, iteration, ts_diff.tv_sec, ts_diff.tv_nsec); clock_gettime(CLOCK_MONOTONIC, &start); - kvm_vm_get_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap); + kvm_vm_get_dirty_log(vm, PERF_TEST_MEM_SLOT_INDEX, bmap); ts_diff = timespec_diff_now(start); get_dirty_log_total = timespec_add(get_dirty_log_total, @@ -187,7 +192,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations, if (dirty_log_manual_caps) { clock_gettime(CLOCK_MONOTONIC, &start); - kvm_vm_clear_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap, 0, + kvm_vm_clear_dirty_log(vm, PERF_TEST_MEM_SLOT_INDEX, bmap, 0, host_num_pages); ts_diff = timespec_diff_now(start); @@ -205,43 +210,30 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations, /* Disable dirty logging */ clock_gettime(CLOCK_MONOTONIC, &start); - vm_mem_region_set_flags(vm, TEST_MEM_SLOT_INDEX, 0); + vm_mem_region_set_flags(vm, PERF_TEST_MEM_SLOT_INDEX, 0); ts_diff = timespec_diff_now(start); pr_info("Disabling dirty logging time: %ld.%.9lds\n", ts_diff.tv_sec, ts_diff.tv_nsec); - avg = timespec_div(get_dirty_log_total, iterations); + avg = timespec_div(get_dirty_log_total, p->iterations); pr_info("Get dirty log over %lu iterations took %ld.%.9lds. (Avg %ld.%.9lds/iteration)\n", - iterations, get_dirty_log_total.tv_sec, + p->iterations, get_dirty_log_total.tv_sec, get_dirty_log_total.tv_nsec, avg.tv_sec, avg.tv_nsec); if (dirty_log_manual_caps) { - avg = timespec_div(clear_dirty_log_total, iterations); + avg = timespec_div(clear_dirty_log_total, p->iterations); pr_info("Clear dirty log over %lu iterations took %ld.%.9lds. (Avg %ld.%.9lds/iteration)\n", - iterations, clear_dirty_log_total.tv_sec, + p->iterations, clear_dirty_log_total.tv_sec, clear_dirty_log_total.tv_nsec, avg.tv_sec, avg.tv_nsec); } free(bmap); free(vcpu_threads); - ucall_uninit(vm); - kvm_vm_free(vm); + perf_test_destroy_vm(vm); } -struct guest_mode { - bool supported; - bool enabled; -}; -static struct guest_mode guest_modes[NUM_VM_MODES]; - -#define guest_mode_init(mode, supported, enabled) ({ \ - guest_modes[mode] = (struct guest_mode){ supported, enabled }; \ -}) - static void help(char *name) { - int i; - puts(""); printf("usage: %s [-h] [-i iterations] [-p offset] " "[-m mode] [-b vcpu bytes] [-v vcpus]\n", name); @@ -250,14 +242,7 @@ static void help(char *name) TEST_HOST_LOOP_N); printf(" -p: specify guest physical test memory offset\n" " Warning: a low offset can conflict with the loaded test code.\n"); - printf(" -m: specify the guest mode ID to test " - "(default: test all supported modes)\n" - " This option may be used multiple times.\n" - " Guest mode IDs:\n"); - for (i = 0; i < NUM_VM_MODES; ++i) { - printf(" %d: %s%s\n", i, vm_guest_mode_string(i), - guest_modes[i].supported ? " (supported)" : ""); - } + guest_modes_help(); printf(" -b: specify the size of the memory region which should be\n" " dirtied by each vCPU. e.g. 
10M or 3G.\n" " (default: 1G)\n"); @@ -272,74 +257,43 @@ static void help(char *name) int main(int argc, char *argv[]) { - unsigned long iterations = TEST_HOST_LOOP_N; - bool mode_selected = false; - uint64_t phys_offset = 0; - unsigned int mode; - int opt, i; - int wr_fract = 1; + int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS); + struct test_params p = { + .iterations = TEST_HOST_LOOP_N, + .wr_fract = 1, + }; + int opt; dirty_log_manual_caps = kvm_check_cap(KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2); dirty_log_manual_caps &= (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | KVM_DIRTY_LOG_INITIALLY_SET); -#ifdef __x86_64__ - guest_mode_init(VM_MODE_PXXV48_4K, true, true); -#endif -#ifdef __aarch64__ - guest_mode_init(VM_MODE_P40V48_4K, true, true); - guest_mode_init(VM_MODE_P40V48_64K, true, true); - - { - unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE); - - if (limit >= 52) - guest_mode_init(VM_MODE_P52V48_64K, true, true); - if (limit >= 48) { - guest_mode_init(VM_MODE_P48V48_4K, true, true); - guest_mode_init(VM_MODE_P48V48_64K, true, true); - } - } -#endif -#ifdef __s390x__ - guest_mode_init(VM_MODE_P40V48_4K, true, true); -#endif + guest_modes_append_default(); while ((opt = getopt(argc, argv, "hi:p:m:b:f:v:")) != -1) { switch (opt) { case 'i': - iterations = strtol(optarg, NULL, 10); + p.iterations = strtol(optarg, NULL, 10); break; case 'p': - phys_offset = strtoull(optarg, NULL, 0); + p.phys_offset = strtoull(optarg, NULL, 0); break; case 'm': - if (!mode_selected) { - for (i = 0; i < NUM_VM_MODES; ++i) - guest_modes[i].enabled = false; - mode_selected = true; - } - mode = strtoul(optarg, NULL, 10); - TEST_ASSERT(mode < NUM_VM_MODES, - "Guest mode ID %d too big", mode); - guest_modes[mode].enabled = true; + guest_modes_cmdline(optarg); break; case 'b': guest_percpu_mem_size = parse_size(optarg); break; case 'f': - wr_fract = atoi(optarg); - TEST_ASSERT(wr_fract >= 1, + p.wr_fract = atoi(optarg); + TEST_ASSERT(p.wr_fract >= 1, "Write fraction cannot be less than one"); break; case 'v': nr_vcpus = atoi(optarg); - TEST_ASSERT(nr_vcpus > 0, - "Must have a positive number of vCPUs"); - TEST_ASSERT(nr_vcpus <= MAX_VCPUS, - "This test does not currently support\n" - "more than %d vCPUs.", MAX_VCPUS); + TEST_ASSERT(nr_vcpus > 0 && nr_vcpus <= max_vcpus, + "Invalid number of vcpus, must be between 1 and %d", max_vcpus); break; case 'h': default: @@ -348,18 +302,11 @@ int main(int argc, char *argv[]) } } - TEST_ASSERT(iterations >= 2, "The test should have at least two iterations"); + TEST_ASSERT(p.iterations >= 2, "The test should have at least two iterations"); - pr_info("Test iterations: %"PRIu64"\n", iterations); + pr_info("Test iterations: %"PRIu64"\n", p.iterations); - for (i = 0; i < NUM_VM_MODES; ++i) { - if (!guest_modes[i].enabled) - continue; - TEST_ASSERT(guest_modes[i].supported, - "Guest mode ID %d (%s) not supported.", - i, vm_guest_mode_string(i)); - run_test(i, iterations, phys_offset, wr_fract); - } + for_each_guest_mode(run_test, &p); return 0; } diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c index 471baecb7772..bb2752d78fe3 100644 --- a/tools/testing/selftests/kvm/dirty_log_test.c +++ b/tools/testing/selftests/kvm/dirty_log_test.c @@ -9,8 +9,6 @@ #include <stdio.h> #include <stdlib.h> -#include <unistd.h> -#include <time.h> #include <pthread.h> #include <semaphore.h> #include <sys/types.h> @@ -20,8 +18,9 @@ #include <linux/bitops.h> #include <asm/barrier.h> -#include "test_util.h" #include "kvm_util.h" +#include 
"test_util.h" +#include "guest_modes.h" #include "processor.h" #define VCPU_ID 1 @@ -673,9 +672,15 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid, #define DIRTY_MEM_BITS 30 /* 1G */ #define PAGE_SHIFT_4K 12 -static void run_test(enum vm_guest_mode mode, unsigned long iterations, - unsigned long interval, uint64_t phys_offset) +struct test_params { + unsigned long iterations; + unsigned long interval; + uint64_t phys_offset; +}; + +static void run_test(enum vm_guest_mode mode, void *arg) { + struct test_params *p = arg; struct kvm_vm *vm; unsigned long *bmap; @@ -709,12 +714,12 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations, host_page_size = getpagesize(); host_num_pages = vm_num_host_pages(mode, guest_num_pages); - if (!phys_offset) { + if (!p->phys_offset) { guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) * guest_page_size; guest_test_phys_mem &= ~(host_page_size - 1); } else { - guest_test_phys_mem = phys_offset; + guest_test_phys_mem = p->phys_offset; } #ifdef __s390x__ @@ -758,9 +763,9 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations, pthread_create(&vcpu_thread, NULL, vcpu_worker, vm); - while (iteration < iterations) { + while (iteration < p->iterations) { /* Give the vcpu thread some time to dirty some pages */ - usleep(interval * 1000); + usleep(p->interval * 1000); log_mode_collect_dirty_pages(vm, TEST_MEM_SLOT_INDEX, bmap, host_num_pages); vm_dirty_log_verify(mode, bmap); @@ -783,20 +788,8 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations, kvm_vm_free(vm); } -struct guest_mode { - bool supported; - bool enabled; -}; -static struct guest_mode guest_modes[NUM_VM_MODES]; - -#define guest_mode_init(mode, supported, enabled) ({ \ - guest_modes[mode] = (struct guest_mode){ supported, enabled }; \ -}) - static void help(char *name) { - int i; - puts(""); printf("usage: %s [-h] [-i iterations] [-I interval] " "[-p offset] [-m mode]\n", name); @@ -813,51 +806,23 @@ static void help(char *name) printf(" -M: specify the host logging mode " "(default: run all log modes). Supported modes: \n\t"); log_modes_dump(); - printf(" -m: specify the guest mode ID to test " - "(default: test all supported modes)\n" - " This option may be used multiple times.\n" - " Guest mode IDs:\n"); - for (i = 0; i < NUM_VM_MODES; ++i) { - printf(" %d: %s%s\n", i, vm_guest_mode_string(i), - guest_modes[i].supported ? 
" (supported)" : ""); - } + guest_modes_help(); puts(""); exit(0); } int main(int argc, char *argv[]) { - unsigned long iterations = TEST_HOST_LOOP_N; - unsigned long interval = TEST_HOST_LOOP_INTERVAL; - bool mode_selected = false; - uint64_t phys_offset = 0; - unsigned int mode; - int opt, i, j; + struct test_params p = { + .iterations = TEST_HOST_LOOP_N, + .interval = TEST_HOST_LOOP_INTERVAL, + }; + int opt, i; sem_init(&dirty_ring_vcpu_stop, 0, 0); sem_init(&dirty_ring_vcpu_cont, 0, 0); -#ifdef __x86_64__ - guest_mode_init(VM_MODE_PXXV48_4K, true, true); -#endif -#ifdef __aarch64__ - guest_mode_init(VM_MODE_P40V48_4K, true, true); - guest_mode_init(VM_MODE_P40V48_64K, true, true); - - { - unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE); - - if (limit >= 52) - guest_mode_init(VM_MODE_P52V48_64K, true, true); - if (limit >= 48) { - guest_mode_init(VM_MODE_P48V48_4K, true, true); - guest_mode_init(VM_MODE_P48V48_64K, true, true); - } - } -#endif -#ifdef __s390x__ - guest_mode_init(VM_MODE_P40V48_4K, true, true); -#endif + guest_modes_append_default(); while ((opt = getopt(argc, argv, "c:hi:I:p:m:M:")) != -1) { switch (opt) { @@ -865,24 +830,16 @@ int main(int argc, char *argv[]) test_dirty_ring_count = strtol(optarg, NULL, 10); break; case 'i': - iterations = strtol(optarg, NULL, 10); + p.iterations = strtol(optarg, NULL, 10); break; case 'I': - interval = strtol(optarg, NULL, 10); + p.interval = strtol(optarg, NULL, 10); break; case 'p': - phys_offset = strtoull(optarg, NULL, 0); + p.phys_offset = strtoull(optarg, NULL, 0); break; case 'm': - if (!mode_selected) { - for (i = 0; i < NUM_VM_MODES; ++i) - guest_modes[i].enabled = false; - mode_selected = true; - } - mode = strtoul(optarg, NULL, 10); - TEST_ASSERT(mode < NUM_VM_MODES, - "Guest mode ID %d too big", mode); - guest_modes[mode].enabled = true; + guest_modes_cmdline(optarg); break; case 'M': if (!strcmp(optarg, "all")) { @@ -911,32 +868,24 @@ int main(int argc, char *argv[]) } } - TEST_ASSERT(iterations > 2, "Iterations must be greater than two"); - TEST_ASSERT(interval > 0, "Interval must be greater than zero"); + TEST_ASSERT(p.iterations > 2, "Iterations must be greater than two"); + TEST_ASSERT(p.interval > 0, "Interval must be greater than zero"); pr_info("Test iterations: %"PRIu64", interval: %"PRIu64" (ms)\n", - iterations, interval); + p.iterations, p.interval); srandom(time(0)); - for (i = 0; i < NUM_VM_MODES; ++i) { - if (!guest_modes[i].enabled) - continue; - TEST_ASSERT(guest_modes[i].supported, - "Guest mode ID %d (%s) not supported.", - i, vm_guest_mode_string(i)); - if (host_log_mode_option == LOG_MODE_ALL) { - /* Run each log mode */ - for (j = 0; j < LOG_MODE_NUM; j++) { - pr_info("Testing Log Mode '%s'\n", - log_modes[j].name); - host_log_mode = j; - run_test(i, iterations, interval, phys_offset); - } - } else { - host_log_mode = host_log_mode_option; - run_test(i, iterations, interval, phys_offset); + if (host_log_mode_option == LOG_MODE_ALL) { + /* Run each log mode */ + for (i = 0; i < LOG_MODE_NUM; i++) { + pr_info("Testing Log Mode '%s'\n", log_modes[i].name); + host_log_mode = i; + for_each_guest_mode(run_test, &p); } + } else { + host_log_mode = host_log_mode_option; + for_each_guest_mode(run_test, &p); } return 0; diff --git a/tools/testing/selftests/kvm/include/guest_modes.h b/tools/testing/selftests/kvm/include/guest_modes.h new file mode 100644 index 000000000000..b691df33e64e --- /dev/null +++ b/tools/testing/selftests/kvm/include/guest_modes.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: 
GPL-2.0 */ +/* + * Copyright (C) 2020, Red Hat, Inc. + */ +#include "kvm_util.h" + +struct guest_mode { + bool supported; + bool enabled; +}; + +extern struct guest_mode guest_modes[NUM_VM_MODES]; + +#define guest_mode_append(mode, supported, enabled) ({ \ + guest_modes[mode] = (struct guest_mode){ supported, enabled }; \ +}) + +void guest_modes_append_default(void); +void for_each_guest_mode(void (*func)(enum vm_guest_mode, void *), void *arg); +void guest_modes_help(void); +void guest_modes_cmdline(const char *arg); diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index dfa9d369e8fc..5cbb861525ed 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -16,6 +16,7 @@ #include "sparsebit.h" +#define KVM_MAX_VCPUS 512 /* * Callers of kvm_util only have an incomplete/opaque description of the @@ -70,6 +71,14 @@ enum vm_guest_mode { #define vm_guest_mode_string(m) vm_guest_mode_string[m] extern const char * const vm_guest_mode_string[]; +struct vm_guest_mode_params { + unsigned int pa_bits; + unsigned int va_bits; + unsigned int page_size; + unsigned int page_shift; +}; +extern const struct vm_guest_mode_params vm_guest_mode_params[]; + enum vm_mem_backing_src_type { VM_MEM_SRC_ANONYMOUS, VM_MEM_SRC_ANONYMOUS_THP, diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h index 239421e4f6b8..b1188823c31b 100644 --- a/tools/testing/selftests/kvm/include/perf_test_util.h +++ b/tools/testing/selftests/kvm/include/perf_test_util.h @@ -9,38 +9,15 @@ #define SELFTEST_KVM_PERF_TEST_UTIL_H #include "kvm_util.h" -#include "processor.h" - -#define MAX_VCPUS 512 - -#define PAGE_SHIFT_4K 12 -#define PTES_PER_4K_PT 512 - -#define TEST_MEM_SLOT_INDEX 1 /* Default guest test virtual memory offset */ #define DEFAULT_GUEST_TEST_MEM 0xc0000000 #define DEFAULT_PER_VCPU_MEM_SIZE (1 << 30) /* 1G */ -/* - * Guest physical memory offset of the testing memory slot. - * This will be set to the topmost valid physical address minus - * the test memory size. - */ -static uint64_t guest_test_phys_mem; - -/* - * Guest virtual memory offset of the testing memory slot. - * Must not conflict with identity mapped test code. - */ -static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM; -static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE; - -/* Number of VCPUs for the test */ -static int nr_vcpus = 1; +#define PERF_TEST_MEM_SLOT_INDEX 1 -struct vcpu_args { +struct perf_test_vcpu_args { uint64_t gva; uint64_t pages; @@ -54,141 +31,21 @@ struct perf_test_args { uint64_t guest_page_size; int wr_fract; - struct vcpu_args vcpu_args[MAX_VCPUS]; + struct perf_test_vcpu_args vcpu_args[KVM_MAX_VCPUS]; }; -static struct perf_test_args perf_test_args; +extern struct perf_test_args perf_test_args; /* - * Continuously write to the first 8 bytes of each page in the - * specified region. + * Guest physical memory offset of the testing memory slot. + * This will be set to the topmost valid physical address minus + * the test memory size. */ -static void guest_code(uint32_t vcpu_id) -{ - struct vcpu_args *vcpu_args = &perf_test_args.vcpu_args[vcpu_id]; - uint64_t gva; - uint64_t pages; - int i; - - /* Make sure vCPU args data structure is not corrupt. 
*/ - GUEST_ASSERT(vcpu_args->vcpu_id == vcpu_id); - - gva = vcpu_args->gva; - pages = vcpu_args->pages; - - while (true) { - for (i = 0; i < pages; i++) { - uint64_t addr = gva + (i * perf_test_args.guest_page_size); - - if (i % perf_test_args.wr_fract == 0) - *(uint64_t *)addr = 0x0123456789ABCDEF; - else - READ_ONCE(*(uint64_t *)addr); - } - - GUEST_SYNC(1); - } -} - -static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus, - uint64_t vcpu_memory_bytes) -{ - struct kvm_vm *vm; - uint64_t pages = DEFAULT_GUEST_PHY_PAGES; - uint64_t guest_num_pages; - - /* Account for a few pages per-vCPU for stacks */ - pages += DEFAULT_STACK_PGS * vcpus; - - /* - * Reserve twice the ammount of memory needed to map the test region and - * the page table / stacks region, at 4k, for page tables. Do the - * calculation with 4K page size: the smallest of all archs. (e.g., 64K - * page size guest will need even less memory for page tables). - */ - pages += (2 * pages) / PTES_PER_4K_PT; - pages += ((2 * vcpus * vcpu_memory_bytes) >> PAGE_SHIFT_4K) / - PTES_PER_4K_PT; - pages = vm_adjust_num_guest_pages(mode, pages); - - pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode)); - - vm = vm_create(mode, pages, O_RDWR); - kvm_vm_elf_load(vm, program_invocation_name, 0, 0); -#ifdef __x86_64__ - vm_create_irqchip(vm); -#endif - - perf_test_args.vm = vm; - perf_test_args.guest_page_size = vm_get_page_size(vm); - perf_test_args.host_page_size = getpagesize(); - - TEST_ASSERT(vcpu_memory_bytes % perf_test_args.guest_page_size == 0, - "Guest memory size is not guest page size aligned."); - - guest_num_pages = (vcpus * vcpu_memory_bytes) / - perf_test_args.guest_page_size; - guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages); - - /* - * If there should be more memory in the guest test region than there - * can be pages in the guest, it will definitely cause problems. 
- */ - TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm), - "Requested more guest memory than address space allows.\n" - " guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n", - guest_num_pages, vm_get_max_gfn(vm), vcpus, - vcpu_memory_bytes); - - TEST_ASSERT(vcpu_memory_bytes % perf_test_args.host_page_size == 0, - "Guest memory size is not host page size aligned."); - - guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) * - perf_test_args.guest_page_size; - guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1); - -#ifdef __s390x__ - /* Align to 1M (segment size) */ - guest_test_phys_mem &= ~((1 << 20) - 1); -#endif - - pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem); - - /* Add an extra memory slot for testing */ - vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, - guest_test_phys_mem, - TEST_MEM_SLOT_INDEX, - guest_num_pages, 0); - - /* Do mapping for the demand paging memory slot */ - virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages, 0); - - ucall_init(vm, NULL); - - return vm; -} - -static void add_vcpus(struct kvm_vm *vm, int vcpus, uint64_t vcpu_memory_bytes) -{ - vm_paddr_t vcpu_gpa; - struct vcpu_args *vcpu_args; - int vcpu_id; - - for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) { - vcpu_args = &perf_test_args.vcpu_args[vcpu_id]; - - vm_vcpu_add_default(vm, vcpu_id, guest_code); - - vcpu_args->vcpu_id = vcpu_id; - vcpu_args->gva = guest_test_virt_mem + - (vcpu_id * vcpu_memory_bytes); - vcpu_args->pages = vcpu_memory_bytes / - perf_test_args.guest_page_size; +extern uint64_t guest_test_phys_mem; - vcpu_gpa = guest_test_phys_mem + (vcpu_id * vcpu_memory_bytes); - pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n", - vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_memory_bytes); - } -} +struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus, + uint64_t vcpu_memory_bytes); +void perf_test_destroy_vm(struct kvm_vm *vm); +void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus, uint64_t vcpu_memory_bytes); #endif /* SELFTEST_KVM_PERF_TEST_UTIL_H */ diff --git a/tools/testing/selftests/kvm/lib/guest_modes.c b/tools/testing/selftests/kvm/lib/guest_modes.c new file mode 100644 index 000000000000..25bff307c71f --- /dev/null +++ b/tools/testing/selftests/kvm/lib/guest_modes.c @@ -0,0 +1,70 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2020, Red Hat, Inc. 
+ */ +#include "guest_modes.h" + +struct guest_mode guest_modes[NUM_VM_MODES]; + +void guest_modes_append_default(void) +{ + guest_mode_append(VM_MODE_DEFAULT, true, true); + +#ifdef __aarch64__ + guest_mode_append(VM_MODE_P40V48_64K, true, true); + { + unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE); + if (limit >= 52) + guest_mode_append(VM_MODE_P52V48_64K, true, true); + if (limit >= 48) { + guest_mode_append(VM_MODE_P48V48_4K, true, true); + guest_mode_append(VM_MODE_P48V48_64K, true, true); + } + } +#endif +} + +void for_each_guest_mode(void (*func)(enum vm_guest_mode, void *), void *arg) +{ + int i; + + for (i = 0; i < NUM_VM_MODES; ++i) { + if (!guest_modes[i].enabled) + continue; + TEST_ASSERT(guest_modes[i].supported, + "Guest mode ID %d (%s) not supported.", + i, vm_guest_mode_string(i)); + func(i, arg); + } +} + +void guest_modes_help(void) +{ + int i; + + printf(" -m: specify the guest mode ID to test\n" + " (default: test all supported modes)\n" + " This option may be used multiple times.\n" + " Guest mode IDs:\n"); + for (i = 0; i < NUM_VM_MODES; ++i) { + printf(" %d: %s%s\n", i, vm_guest_mode_string(i), + guest_modes[i].supported ? " (supported)" : ""); + } +} + +void guest_modes_cmdline(const char *arg) +{ + static bool mode_selected; + unsigned int mode; + int i; + + if (!mode_selected) { + for (i = 0; i < NUM_VM_MODES; ++i) + guest_modes[i].enabled = false; + mode_selected = true; + } + + mode = strtoul(optarg, NULL, 10); + TEST_ASSERT(mode < NUM_VM_MODES, "Guest mode ID %d too big", mode); + guest_modes[mode].enabled = true; +} diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 88ef7067f1e6..fa5a90e6c6f0 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -153,14 +153,7 @@ const char * const vm_guest_mode_string[] = { _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES, "Missing new mode strings?"); -struct vm_guest_mode_params { - unsigned int pa_bits; - unsigned int va_bits; - unsigned int page_size; - unsigned int page_shift; -}; - -static const struct vm_guest_mode_params vm_guest_mode_params[] = { +const struct vm_guest_mode_params vm_guest_mode_params[] = { { 52, 48, 0x1000, 12 }, { 52, 48, 0x10000, 16 }, { 48, 48, 0x1000, 12 }, diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c new file mode 100644 index 000000000000..9be1944c2d1c --- /dev/null +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c @@ -0,0 +1,134 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2020, Google LLC. + */ + +#include "kvm_util.h" +#include "perf_test_util.h" +#include "processor.h" + +struct perf_test_args perf_test_args; + +uint64_t guest_test_phys_mem; + +/* + * Guest virtual memory offset of the testing memory slot. + * Must not conflict with identity mapped test code. + */ +static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM; + +/* + * Continuously write to the first 8 bytes of each page in the + * specified region. + */ +static void guest_code(uint32_t vcpu_id) +{ + struct perf_test_vcpu_args *vcpu_args = &perf_test_args.vcpu_args[vcpu_id]; + uint64_t gva; + uint64_t pages; + int i; + + /* Make sure vCPU args data structure is not corrupt. 
*/ + GUEST_ASSERT(vcpu_args->vcpu_id == vcpu_id); + + gva = vcpu_args->gva; + pages = vcpu_args->pages; + + while (true) { + for (i = 0; i < pages; i++) { + uint64_t addr = gva + (i * perf_test_args.guest_page_size); + + if (i % perf_test_args.wr_fract == 0) + *(uint64_t *)addr = 0x0123456789ABCDEF; + else + READ_ONCE(*(uint64_t *)addr); + } + + GUEST_SYNC(1); + } +} + +struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus, + uint64_t vcpu_memory_bytes) +{ + struct kvm_vm *vm; + uint64_t guest_num_pages; + + pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode)); + + perf_test_args.host_page_size = getpagesize(); + perf_test_args.guest_page_size = vm_guest_mode_params[mode].page_size; + + guest_num_pages = vm_adjust_num_guest_pages(mode, + (vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size); + + TEST_ASSERT(vcpu_memory_bytes % perf_test_args.host_page_size == 0, + "Guest memory size is not host page size aligned."); + TEST_ASSERT(vcpu_memory_bytes % perf_test_args.guest_page_size == 0, + "Guest memory size is not guest page size aligned."); + + vm = vm_create_with_vcpus(mode, vcpus, + (vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size, + 0, guest_code, NULL); + + perf_test_args.vm = vm; + + /* + * If there should be more memory in the guest test region than there + * can be pages in the guest, it will definitely cause problems. + */ + TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm), + "Requested more guest memory than address space allows.\n" + " guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n", + guest_num_pages, vm_get_max_gfn(vm), vcpus, + vcpu_memory_bytes); + + guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) * + perf_test_args.guest_page_size; + guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1); +#ifdef __s390x__ + /* Align to 1M (segment size) */ + guest_test_phys_mem &= ~((1 << 20) - 1); +#endif + pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem); + + /* Add an extra memory slot for testing */ + vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, + guest_test_phys_mem, + PERF_TEST_MEM_SLOT_INDEX, + guest_num_pages, 0); + + /* Do mapping for the demand paging memory slot */ + virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages, 0); + + ucall_init(vm, NULL); + + return vm; +} + +void perf_test_destroy_vm(struct kvm_vm *vm) +{ + ucall_uninit(vm); + kvm_vm_free(vm); +} + +void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus, uint64_t vcpu_memory_bytes) +{ + vm_paddr_t vcpu_gpa; + struct perf_test_vcpu_args *vcpu_args; + int vcpu_id; + + for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) { + vcpu_args = &perf_test_args.vcpu_args[vcpu_id]; + + vcpu_args->vcpu_id = vcpu_id; + vcpu_args->gva = guest_test_virt_mem + + (vcpu_id * vcpu_memory_bytes); + vcpu_args->pages = vcpu_memory_bytes / + perf_test_args.guest_page_size; + + vcpu_gpa = guest_test_phys_mem + (vcpu_id * vcpu_memory_bytes); + pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n", + vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_memory_bytes); + } +} diff --git a/tools/testing/selftests/net/fib_nexthops.sh b/tools/testing/selftests/net/fib_nexthops.sh index eb693a3b7b4a..4c7d33618437 100755 --- a/tools/testing/selftests/net/fib_nexthops.sh +++ b/tools/testing/selftests/net/fib_nexthops.sh @@ -869,7 +869,7 @@ ipv6_torture() pid3=$! ip netns exec me ping -f 2001:db8:101::2 >/dev/null 2>&1 & pid4=$! 
- ip netns exec me mausezahn veth1 -B 2001:db8:101::2 -A 2001:db8:91::1 -c 0 -t tcp "dp=1-1023, flags=syn" >/dev/null 2>&1 & + ip netns exec me mausezahn -6 veth1 -B 2001:db8:101::2 -A 2001:db8:91::1 -c 0 -t tcp "dp=1-1023, flags=syn" >/dev/null 2>&1 & pid5=$! sleep 300 diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh index 84205c3a55eb..2b5707738609 100755 --- a/tools/testing/selftests/net/fib_tests.sh +++ b/tools/testing/selftests/net/fib_tests.sh @@ -1055,7 +1055,6 @@ ipv6_addr_metric_test() check_route6 "2001:db8:104::1 dev dummy2 proto kernel metric 260" log_test $? 0 "Set metric with peer route on local side" - log_test $? 0 "User specified metric on local address" check_route6 "2001:db8:104::2 dev dummy2 proto kernel metric 260" log_test $? 0 "Set metric with peer route on peer side" diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh index 464e31eabc73..64cd2e23c568 100755 --- a/tools/testing/selftests/net/pmtu.sh +++ b/tools/testing/selftests/net/pmtu.sh @@ -162,7 +162,15 @@ # - list_flush_ipv6_exception # Using the same topology as in pmtu_ipv6, create exceptions, and check # they are shown when listing exception caches, gone after flushing them - +# +# - pmtu_ipv4_route_change +# Use the same topology as in pmtu_ipv4, but issue a route replacement +# command and delete the corresponding device afterward. This tests for +# proper cleanup of the PMTU exceptions by the route replacement path. +# Device unregistration should complete successfully +# +# - pmtu_ipv6_route_change +# Same as above but with IPv6 # Kselftest framework requirement - SKIP code is 4. ksft_skip=4 @@ -224,7 +232,9 @@ tests=" cleanup_ipv4_exception ipv4: cleanup of cached exceptions 1 cleanup_ipv6_exception ipv6: cleanup of cached exceptions 1 list_flush_ipv4_exception ipv4: list and flush cached exceptions 1 - list_flush_ipv6_exception ipv6: list and flush cached exceptions 1" + list_flush_ipv6_exception ipv6: list and flush cached exceptions 1 + pmtu_ipv4_route_change ipv4: PMTU exception w/route replace 1 + pmtu_ipv6_route_change ipv6: PMTU exception w/route replace 1" NS_A="ns-A" NS_B="ns-B" @@ -1782,6 +1792,63 @@ test_list_flush_ipv6_exception() { return ${fail} } +test_pmtu_ipvX_route_change() { + family=${1} + + setup namespaces routing || return 2 + trace "${ns_a}" veth_A-R1 "${ns_r1}" veth_R1-A \ + "${ns_r1}" veth_R1-B "${ns_b}" veth_B-R1 \ + "${ns_a}" veth_A-R2 "${ns_r2}" veth_R2-A \ + "${ns_r2}" veth_R2-B "${ns_b}" veth_B-R2 + + if [ ${family} -eq 4 ]; then + ping=ping + dst1="${prefix4}.${b_r1}.1" + dst2="${prefix4}.${b_r2}.1" + gw="${prefix4}.${a_r1}.2" + else + ping=${ping6} + dst1="${prefix6}:${b_r1}::1" + dst2="${prefix6}:${b_r2}::1" + gw="${prefix6}:${a_r1}::2" + fi + + # Set up initial MTU values + mtu "${ns_a}" veth_A-R1 2000 + mtu "${ns_r1}" veth_R1-A 2000 + mtu "${ns_r1}" veth_R1-B 1400 + mtu "${ns_b}" veth_B-R1 1400 + + mtu "${ns_a}" veth_A-R2 2000 + mtu "${ns_r2}" veth_R2-A 2000 + mtu "${ns_r2}" veth_R2-B 1500 + mtu "${ns_b}" veth_B-R2 1500 + + # Create route exceptions + run_cmd ${ns_a} ${ping} -q -M want -i 0.1 -w 1 -s 1800 ${dst1} + run_cmd ${ns_a} ${ping} -q -M want -i 0.1 -w 1 -s 1800 ${dst2} + + # Check that exceptions have been created with the correct PMTU + pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" ${dst1})" + check_pmtu_value "1400" "${pmtu_1}" "exceeding MTU" || return 1 + pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" ${dst2})" + check_pmtu_value "1500" "${pmtu_2}" "exceeding 
MTU" || return 1 + + # Replace the route from A to R1 + run_cmd ${ns_a} ip route change default via ${gw} + + # Delete the device in A + run_cmd ${ns_a} ip link del "veth_A-R1" +} + +test_pmtu_ipv4_route_change() { + test_pmtu_ipvX_route_change 4 +} + +test_pmtu_ipv6_route_change() { + test_pmtu_ipvX_route_change 6 +} + usage() { echo echo "$0 [OPTIONS] [TEST]..." diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c index cb0d1890a860..e0088c2d38a5 100644 --- a/tools/testing/selftests/net/tls.c +++ b/tools/testing/selftests/net/tls.c @@ -103,8 +103,8 @@ FIXTURE(tls) FIXTURE_VARIANT(tls) { - u16 tls_version; - u16 cipher_type; + uint16_t tls_version; + uint16_t cipher_type; }; FIXTURE_VARIANT_ADD(tls, 12_gcm) diff --git a/tools/testing/selftests/net/udpgro.sh b/tools/testing/selftests/net/udpgro.sh index ac2a30be9b32..f8a19f548ae9 100755 --- a/tools/testing/selftests/net/udpgro.sh +++ b/tools/testing/selftests/net/udpgro.sh @@ -5,6 +5,14 @@ readonly PEER_NS="ns-peer-$(mktemp -u XXXXXX)" +# set global exit status, but never reset nonzero one. +check_err() +{ + if [ $ret -eq 0 ]; then + ret=$1 + fi +} + cleanup() { local -r jobs="$(jobs -p)" local -r ns="$(ip netns list|grep $PEER_NS)" @@ -44,7 +52,9 @@ run_one() { # Hack: let bg programs complete the startup sleep 0.1 ./udpgso_bench_tx ${tx_args} + ret=$? wait $(jobs -p) + return $ret } run_test() { @@ -87,8 +97,10 @@ run_one_nat() { sleep 0.1 ./udpgso_bench_tx ${tx_args} + ret=$? kill -INT $pid wait $(jobs -p) + return $ret } run_one_2sock() { @@ -110,7 +122,9 @@ run_one_2sock() { sleep 0.1 # first UDP GSO socket should be closed at this point ./udpgso_bench_tx ${tx_args} + ret=$? wait $(jobs -p) + return $ret } run_nat_test() { @@ -131,36 +145,54 @@ run_all() { local -r core_args="-l 4" local -r ipv4_args="${core_args} -4 -D 192.168.1.1" local -r ipv6_args="${core_args} -6 -D 2001:db8::1" + ret=0 echo "ipv4" run_test "no GRO" "${ipv4_args} -M 10 -s 1400" "-4 -n 10 -l 1400" + check_err $? # explicitly check we are not receiving UDP_SEGMENT cmsg (-S -1) # when GRO does not take place run_test "no GRO chk cmsg" "${ipv4_args} -M 10 -s 1400" "-4 -n 10 -l 1400 -S -1" + check_err $? # the GSO packets are aggregated because: # * veth schedule napi after each xmit # * segmentation happens in BH context, veth napi poll is delayed after # the transmission of the last segment run_test "GRO" "${ipv4_args} -M 1 -s 14720 -S 0 " "-4 -n 1 -l 14720" + check_err $? run_test "GRO chk cmsg" "${ipv4_args} -M 1 -s 14720 -S 0 " "-4 -n 1 -l 14720 -S 1472" + check_err $? run_test "GRO with custom segment size" "${ipv4_args} -M 1 -s 14720 -S 500 " "-4 -n 1 -l 14720" + check_err $? run_test "GRO with custom segment size cmsg" "${ipv4_args} -M 1 -s 14720 -S 500 " "-4 -n 1 -l 14720 -S 500" + check_err $? run_nat_test "bad GRO lookup" "${ipv4_args} -M 1 -s 14720 -S 0" "-n 10 -l 1472" + check_err $? run_2sock_test "multiple GRO socks" "${ipv4_args} -M 1 -s 14720 -S 0 " "-4 -n 1 -l 14720 -S 1472" + check_err $? echo "ipv6" run_test "no GRO" "${ipv6_args} -M 10 -s 1400" "-n 10 -l 1400" + check_err $? run_test "no GRO chk cmsg" "${ipv6_args} -M 10 -s 1400" "-n 10 -l 1400 -S -1" + check_err $? run_test "GRO" "${ipv6_args} -M 1 -s 14520 -S 0" "-n 1 -l 14520" + check_err $? run_test "GRO chk cmsg" "${ipv6_args} -M 1 -s 14520 -S 0" "-n 1 -l 14520 -S 1452" + check_err $? run_test "GRO with custom segment size" "${ipv6_args} -M 1 -s 14520 -S 500" "-n 1 -l 14520" + check_err $? 
run_test "GRO with custom segment size cmsg" "${ipv6_args} -M 1 -s 14520 -S 500" "-n 1 -l 14520 -S 500" + check_err $? run_nat_test "bad GRO lookup" "${ipv6_args} -M 1 -s 14520 -S 0" "-n 10 -l 1452" + check_err $? run_2sock_test "multiple GRO socks" "${ipv6_args} -M 1 -s 14520 -S 0 " "-n 1 -l 14520 -S 1452" + check_err $? + return $ret } if [ ! -f ../bpf/xdp_dummy.o ]; then @@ -180,3 +212,5 @@ elif [[ $1 == "__subprocess_2sock" ]]; then shift run_one_2sock $@ fi + +exit $? diff --git a/tools/testing/selftests/netfilter/Makefile b/tools/testing/selftests/netfilter/Makefile index a374e10ef506..3006a8e5b41a 100644 --- a/tools/testing/selftests/netfilter/Makefile +++ b/tools/testing/selftests/netfilter/Makefile @@ -4,7 +4,8 @@ TEST_PROGS := nft_trans_stress.sh nft_nat.sh bridge_brouter.sh \ conntrack_icmp_related.sh nft_flowtable.sh ipvs.sh \ nft_concat_range.sh nft_conntrack_helper.sh \ - nft_queue.sh nft_meta.sh + nft_queue.sh nft_meta.sh \ + ipip-conntrack-mtu.sh LDLIBS = -lmnl TEST_GEN_FILES = nf-queue diff --git a/tools/testing/selftests/netfilter/ipip-conntrack-mtu.sh b/tools/testing/selftests/netfilter/ipip-conntrack-mtu.sh new file mode 100755 index 000000000000..4a6f5c3b3215 --- /dev/null +++ b/tools/testing/selftests/netfilter/ipip-conntrack-mtu.sh @@ -0,0 +1,206 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +# Kselftest framework requirement - SKIP code is 4. +ksft_skip=4 + +# Conntrack needs to reassemble fragments in order to have complete +# packets for rule matching. Reassembly can lead to packet loss. + +# Consider the following setup: +# +--------+ +---------+ +--------+ +# |Router A|-------|Wanrouter|-------|Router B| +# | |.IPIP..| |..IPIP.| | +# +--------+ +---------+ +--------+ +# / mtu 1400 \ +# / \ +#+--------+ +--------+ +#|Client A| |Client B| +#| | | | +#+--------+ +--------+ + +# Router A and Router B use IPIP tunnel interfaces to tunnel traffic +# between Client A and Client B over WAN. Wanrouter has MTU 1400 set +# on its interfaces. + +rnd=$(mktemp -u XXXXXXXX) +rx=$(mktemp) + +r_a="ns-ra-$rnd" +r_b="ns-rb-$rnd" +r_w="ns-rw-$rnd" +c_a="ns-ca-$rnd" +c_b="ns-cb-$rnd" + +checktool (){ + if ! $1 > /dev/null 2>&1; then + echo "SKIP: Could not $2" + exit $ksft_skip + fi +} + +checktool "iptables --version" "run test without iptables" +checktool "ip -Version" "run test without ip tool" +checktool "which nc" "run test without nc (netcat)" +checktool "ip netns add ${r_a}" "create net namespace" + +for n in ${r_b} ${r_w} ${c_a} ${c_b};do + ip netns add ${n} +done + +cleanup() { + for n in ${r_a} ${r_b} ${r_w} ${c_a} ${c_b};do + ip netns del ${n} + done + rm -f ${rx} +} + +trap cleanup EXIT + +test_path() { + msg="$1" + + ip netns exec ${c_b} nc -n -w 3 -q 3 -u -l -p 5000 > ${rx} < /dev/null & + + sleep 1 + for i in 1 2 3; do + head -c1400 /dev/zero | tr "\000" "a" | ip netns exec ${c_a} nc -n -w 1 -u 192.168.20.2 5000 + done + + wait + + bytes=$(wc -c < ${rx}) + + if [ $bytes -eq 1400 ];then + echo "OK: PMTU $msg connection tracking" + else + echo "FAIL: PMTU $msg connection tracking: got $bytes, expected 1400" + exit 1 + fi +} + +# Detailed setup for Router A +# --------------------------- +# Interfaces: +# eth0: 10.2.2.1/24 +# eth1: 192.168.10.1/24 +# ipip0: No IP address, local 10.2.2.1 remote 10.4.4.1 +# Routes: +# 192.168.20.0/24 dev ipip0 (192.168.20.0/24 is subnet of Client B) +# 10.4.4.1 via 10.2.2.254 (Router B via Wanrouter) +# No iptables rules at all. 
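+#
+# Note: eth0/eth1 above name the intended roles of the interfaces; in the
+# selftest namespaces below the same interfaces are created as veth0
+# (towards the Wanrouter) and veth1 (towards the client).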
+
+ip link add veth0 netns ${r_a} type veth peer name veth0 netns ${r_w}
+ip link add veth1 netns ${r_a} type veth peer name veth0 netns ${c_a}
+
+l_addr="10.2.2.1"
+r_addr="10.4.4.1"
+ip netns exec ${r_a} ip link add ipip0 type ipip local ${l_addr} remote ${r_addr} mode ipip || exit $ksft_skip
+
+for dev in lo veth0 veth1 ipip0; do
+	ip -net ${r_a} link set $dev up
+done
+
+ip -net ${r_a} addr add 10.2.2.1/24 dev veth0
+ip -net ${r_a} addr add 192.168.10.1/24 dev veth1
+
+ip -net ${r_a} route add 192.168.20.0/24 dev ipip0
+ip -net ${r_a} route add 10.4.4.0/24 via 10.2.2.254
+
+ip netns exec ${r_a} sysctl -q net.ipv4.conf.all.forwarding=1 > /dev/null
+
+# Detailed setup for Router B
+# ---------------------------
+# Interfaces:
+# eth0: 10.4.4.1/24
+# eth1: 192.168.20.1/24
+# ipip0: No IP address, local 10.4.4.1 remote 10.2.2.1
+# Routes:
+# 192.168.10.0/24 dev ipip0 (192.168.10.0/24 is subnet of Client A)
+# 10.2.2.1 via 10.4.4.254 (Router A via Wanrouter)
+# No iptables rules at all.
+
+ip link add veth0 netns ${r_b} type veth peer name veth1 netns ${r_w}
+ip link add veth1 netns ${r_b} type veth peer name veth0 netns ${c_b}
+
+l_addr="10.4.4.1"
+r_addr="10.2.2.1"
+
+ip netns exec ${r_b} ip link add ipip0 type ipip local ${l_addr} remote ${r_addr} mode ipip || exit $ksft_skip
+
+for dev in lo veth0 veth1 ipip0; do
+	ip -net ${r_b} link set $dev up
+done
+
+ip -net ${r_b} addr add 10.4.4.1/24 dev veth0
+ip -net ${r_b} addr add 192.168.20.1/24 dev veth1
+
+ip -net ${r_b} route add 192.168.10.0/24 dev ipip0
+ip -net ${r_b} route add 10.2.2.0/24 via 10.4.4.254
+ip netns exec ${r_b} sysctl -q net.ipv4.conf.all.forwarding=1 > /dev/null
+
+# Client A
+ip -net ${c_a} addr add 192.168.10.2/24 dev veth0
+ip -net ${c_a} link set dev lo up
+ip -net ${c_a} link set dev veth0 up
+ip -net ${c_a} route add default via 192.168.10.1
+
+# Client B
+ip -net ${c_b} addr add 192.168.20.2/24 dev veth0
+ip -net ${c_b} link set dev veth0 up
+ip -net ${c_b} link set dev lo up
+ip -net ${c_b} route add default via 192.168.20.1
+
+# Wan
+ip -net ${r_w} addr add 10.2.2.254/24 dev veth0
+ip -net ${r_w} addr add 10.4.4.254/24 dev veth1
+
+ip -net ${r_w} link set dev lo up
+ip -net ${r_w} link set dev veth0 up mtu 1400
+ip -net ${r_w} link set dev veth1 up mtu 1400
+
+ip -net ${r_a} link set dev veth0 mtu 1400
+ip -net ${r_b} link set dev veth0 mtu 1400
+
+ip netns exec ${r_w} sysctl -q net.ipv4.conf.all.forwarding=1 > /dev/null
+
+# Path MTU discovery
+# ------------------
+# Running tracepath from Client A to Client B shows PMTU discovery is working
+# as expected:
+#
+# clienta:~# tracepath 192.168.20.2
+# 1?: [LOCALHOST] pmtu 1500
+# 1: 192.168.10.1 0.867ms
+# 1: 192.168.10.1 0.302ms
+# 2: 192.168.10.1 0.312ms pmtu 1480
+# 2: no reply
+# 3: 192.168.10.1 0.510ms pmtu 1380
+# 3: 192.168.20.2 2.320ms reached
+# Resume: pmtu 1380 hops 3 back 3
+
+# ip netns exec ${c_a} traceroute --mtu 192.168.20.2
+
+# Router A has learned PMTU (1400) to Router B from Wanrouter.
+# Client A has learned PMTU (1400 - IPIP overhead = 1380) to Client B
+# from Router A.
+
+# Send large UDP packet
+# ---------------------
+# Now we send a 1400-byte UDP packet from Client A to Client B:
+
+# clienta:~# head -c1400 /dev/zero | tr "\000" "a" | nc -u 192.168.20.2 5000
+test_path "without"
+
+# The IPv4 stack on Client A already knows the PMTU to Client B, so the
+# UDP packet is sent as two fragments (1380 + 68). Router A forwards the
+# fragments between eth1 and ipip0. The fragments fit into the tunnel and
+# reach their destination.
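+#
+# For reference, the arithmetic behind these numbers (assuming the usual
+# 20 byte IPv4 header without options): IPIP encapsulation adds one outer
+# IPv4 header, so a path MTU of 1400 leaves 1380 bytes for the inner
+# packet. The 1400 byte payload plus 8 byte UDP header and 20 byte IP
+# header form a 1428 byte datagram, which Client A fragments into a
+# 1380 byte and a 68 byte piece to fit the learned PMTU of 1380.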
+
+# When sending the large UDP packet again, Router A now reassembles the
+# fragments before routing the packet over ipip0. The resulting IPIP
+# packet is too big (1400+ bytes) for the tunnel PMTU (1380) to Router B,
+# so it is dropped on Router A before sending.
+
+ip netns exec ${r_a} iptables -A FORWARD -m conntrack --ctstate NEW
+test_path "with"
diff --git a/tools/testing/selftests/netfilter/nft_conntrack_helper.sh b/tools/testing/selftests/netfilter/nft_conntrack_helper.sh
index edf0a48da6bf..bf6b9626c7dd 100755
--- a/tools/testing/selftests/netfilter/nft_conntrack_helper.sh
+++ b/tools/testing/selftests/netfilter/nft_conntrack_helper.sh
@@ -94,7 +94,13 @@ check_for_helper()
 	local message=$2
 	local port=$3
 
-	ip netns exec ${netns} conntrack -L -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp'
+	if echo $message |grep -q 'ipv6';then
+		local family="ipv6"
+	else
+		local family="ipv4"
+	fi
+
+	ip netns exec ${netns} conntrack -L -f $family -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp'
 	if [ $? -ne 0 ] ; then
 		echo "FAIL: ${netns} did not show attached helper $message" 1>&2
 		ret=1
@@ -111,8 +117,8 @@ test_helper()
 
 	sleep 3 | ip netns exec ${ns2} nc -w 2 -l -p $port > /dev/null &
 
-	sleep 1
 	sleep 1 | ip netns exec ${ns1} nc -w 2 10.0.1.2 $port > /dev/null &
+	sleep 1
 
 	check_for_helper "$ns1" "ip $msg" $port
 	check_for_helper "$ns2" "ip $msg" $port
@@ -128,8 +134,8 @@ test_helper()
 
 	sleep 3 | ip netns exec ${ns2} nc -w 2 -6 -l -p $port > /dev/null &
 
-	sleep 1
 	sleep 1 | ip netns exec ${ns1} nc -w 2 -6 dead:1::2 $port > /dev/null &
+	sleep 1
 
 	check_for_helper "$ns1" "ipv6 $msg" $port
 	check_for_helper "$ns2" "ipv6 $msg" $port
diff --git a/tools/testing/selftests/powerpc/alignment/alignment_handler.c b/tools/testing/selftests/powerpc/alignment/alignment_handler.c
index cb53a8b777e6..c25cf7cd45e9 100644
--- a/tools/testing/selftests/powerpc/alignment/alignment_handler.c
+++ b/tools/testing/selftests/powerpc/alignment/alignment_handler.c
@@ -443,7 +443,6 @@ int test_alignment_handler_integer(void)
 	LOAD_DFORM_TEST(ldu);
 	LOAD_XFORM_TEST(ldx);
 	LOAD_XFORM_TEST(ldux);
-	LOAD_DFORM_TEST(lmw);
 	STORE_DFORM_TEST(stb);
 	STORE_XFORM_TEST(stbx);
 	STORE_DFORM_TEST(stbu);
@@ -462,7 +461,11 @@ int test_alignment_handler_integer(void)
 	STORE_XFORM_TEST(stdx);
 	STORE_DFORM_TEST(stdu);
 	STORE_XFORM_TEST(stdux);
+
+#ifdef __BIG_ENDIAN__
+	LOAD_DFORM_TEST(lmw);
 	STORE_DFORM_TEST(stmw);
+#endif
 	return rc;
 }
diff --git a/tools/testing/selftests/powerpc/mm/pkey_exec_prot.c b/tools/testing/selftests/powerpc/mm/pkey_exec_prot.c
index 9e5c7f3f498a..0af4f02669a1 100644
--- a/tools/testing/selftests/powerpc/mm/pkey_exec_prot.c
+++ b/tools/testing/selftests/powerpc/mm/pkey_exec_prot.c
@@ -290,5 +290,5 @@ static int test(void)
 
 int main(void)
 {
-	test_harness(test, "pkey_exec_prot");
+	return test_harness(test, "pkey_exec_prot");
 }
diff --git a/tools/testing/selftests/powerpc/mm/pkey_siginfo.c b/tools/testing/selftests/powerpc/mm/pkey_siginfo.c
index 4f815d7c1214..2db76e56d4cb 100644
--- a/tools/testing/selftests/powerpc/mm/pkey_siginfo.c
+++ b/tools/testing/selftests/powerpc/mm/pkey_siginfo.c
@@ -329,5 +329,5 @@ static int test(void)
 
 int main(void)
 {
-	test_harness(test, "pkey_siginfo");
+	return test_harness(test, "pkey_siginfo");
 }
diff --git a/tools/testing/selftests/vDSO/.gitignore b/tools/testing/selftests/vDSO/.gitignore
index 5eb64d41e541..a8dc51af5a9c 100644
--- a/tools/testing/selftests/vDSO/.gitignore
+++ b/tools/testing/selftests/vDSO/.gitignore
@@ -1,5 +1,8 @@
 # SPDX-License-Identifier: GPL-2.0-only
 vdso_test
+vdso_test_abi
+vdso_test_clock_getres
+vdso_test_correctness
 vdso_test_gettimeofday
 vdso_test_getcpu
 vdso_standalone_test_x86
diff --git a/tools/testing/selftests/vDSO/vdso_test_correctness.c b/tools/testing/selftests/vDSO/vdso_test_correctness.c
index 5029ef9b228c..c4aea794725a 100644
--- a/tools/testing/selftests/vDSO/vdso_test_correctness.c
+++ b/tools/testing/selftests/vDSO/vdso_test_correctness.c
@@ -349,7 +349,7 @@ static void test_one_clock_gettime64(int clock, const char *name)
 		return;
 	}
 
-	printf("\t%llu.%09ld %llu.%09ld %llu.%09ld\n",
+	printf("\t%llu.%09lld %llu.%09lld %llu.%09lld\n",
 	       (unsigned long long)start.tv_sec, start.tv_nsec,
 	       (unsigned long long)vdso.tv_sec, vdso.tv_nsec,
 	       (unsigned long long)end.tv_sec, end.tv_nsec);
diff --git a/tools/testing/selftests/wireguard/qemu/debug.config b/tools/testing/selftests/wireguard/qemu/debug.config
index b50c2085c1ac..fe07d97df9fa 100644
--- a/tools/testing/selftests/wireguard/qemu/debug.config
+++ b/tools/testing/selftests/wireguard/qemu/debug.config
@@ -1,5 +1,4 @@
 CONFIG_LOCALVERSION="-debug"
-CONFIG_ENABLE_MUST_CHECK=y
 CONFIG_FRAME_POINTER=y
 CONFIG_STACK_VALIDATION=y
 CONFIG_DEBUG_KERNEL=y
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5f260488e999..fa9e3614d30e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -485,9 +485,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	kvm->mmu_notifier_count++;
 	need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end,
 					     range->flags);
-	need_tlb_flush |= kvm->tlbs_dirty;
 	/* we've to flush the tlb before the pages can be freed */
-	if (need_tlb_flush)
+	if (need_tlb_flush || kvm->tlbs_dirty)
 		kvm_flush_remote_tlbs(kvm);
 
 	spin_unlock(&kvm->mmu_lock);