Diffstat (limited to 'Documentation')
 -rw-r--r--  Documentation/devicetree/bindings/display/bridge/fsl,ldb.yaml               5
 -rw-r--r--  Documentation/devicetree/bindings/display/panel/novatek,nt36523.yaml       16
 -rw-r--r--  Documentation/devicetree/bindings/display/panel/panel-simple.yaml           2
 -rw-r--r--  Documentation/devicetree/bindings/display/panel/rocktech,jh057n00900.yaml   2
 -rw-r--r--  Documentation/gpu/rfc/index.rst                                             4
 -rw-r--r--  Documentation/gpu/rfc/xe.rst                                               235
 -rw-r--r--  Documentation/gpu/todo.rst                                                   7
 -rw-r--r--  Documentation/gpu/vkms.rst                                                   7
 8 files changed, 261 insertions, 17 deletions
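The devicetree changes below only add new compatible strings to existing bindings. As a purely illustrative sketch, a board DTS could describe the newly documented Innolux panel roughly as follows; the node name and the &backlight, &lcdif_out and &reg_3v3 phandles are invented for the example, and only the compatible string comes from the binding:

```
/* Illustrative only: a panel-simple node for the new innolux,g070ace-l01
 * compatible; the regulator, backlight and display-controller phandles are
 * placeholders for whatever the board actually provides. */
panel {
    compatible = "innolux,g070ace-l01";
    power-supply = <&reg_3v3>;
    backlight = <&backlight>;

    port {
        panel_in: endpoint {
            remote-endpoint = <&lcdif_out>;
        };
    };
};
```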
diff --git a/Documentation/devicetree/bindings/display/bridge/fsl,ldb.yaml b/Documentation/devicetree/bindings/display/bridge/fsl,ldb.yaml
index 6e0e3ba9b49e..07388bf2b90d 100644
--- a/Documentation/devicetree/bindings/display/bridge/fsl,ldb.yaml
+++ b/Documentation/devicetree/bindings/display/bridge/fsl,ldb.yaml
@@ -17,6 +17,7 @@ description: |
 properties:
   compatible:
     enum:
+      - fsl,imx6sx-ldb
       - fsl,imx8mp-ldb
       - fsl,imx93-ldb
@@ -64,7 +65,9 @@ allOf:
       properties:
         compatible:
           contains:
-            const: fsl,imx93-ldb
+            enum:
+              - fsl,imx6sx-ldb
+              - fsl,imx93-ldb
     then:
       properties:
         ports:
diff --git a/Documentation/devicetree/bindings/display/panel/novatek,nt36523.yaml b/Documentation/devicetree/bindings/display/panel/novatek,nt36523.yaml
index 0039561ef04c..5f7e4c486094 100644
--- a/Documentation/devicetree/bindings/display/panel/novatek,nt36523.yaml
+++ b/Documentation/devicetree/bindings/display/panel/novatek,nt36523.yaml
@@ -19,11 +19,16 @@ allOf:
 
 properties:
   compatible:
-    items:
-      - enum:
-          - xiaomi,elish-boe-nt36523
-          - xiaomi,elish-csot-nt36523
-      - const: novatek,nt36523
+    oneOf:
+      - items:
+          - enum:
+              - xiaomi,elish-boe-nt36523
+              - xiaomi,elish-csot-nt36523
+          - const: novatek,nt36523
+      - items:
+          - enum:
+              - lenovo,j606f-boe-nt36523w
+          - const: novatek,nt36523w
 
   reset-gpios:
     maxItems: 1
@@ -34,6 +39,7 @@ properties:
   reg: true
   ports: true
+  rotation: true
   backlight: true
 
 required:
diff --git a/Documentation/devicetree/bindings/display/panel/panel-simple.yaml b/Documentation/devicetree/bindings/display/panel/panel-simple.yaml
index 01560fe226dd..ec50dd268314 100644
--- a/Documentation/devicetree/bindings/display/panel/panel-simple.yaml
+++ b/Documentation/devicetree/bindings/display/panel/panel-simple.yaml
@@ -174,6 +174,8 @@ properties:
       - innolux,at043tn24
         # Innolux AT070TN92 7.0" WQVGA TFT LCD panel
       - innolux,at070tn92
+        # Innolux G070ACE-L01 7" WVGA (800x480) TFT LCD panel
+      - innolux,g070ace-l01
         # Innolux G070Y2-L01 7" WVGA (800x480) TFT LCD panel
       - innolux,g070y2-l01
         # Innolux G070Y2-T02 7" WVGA (800x480) TFT LCD TTL panel
diff --git a/Documentation/devicetree/bindings/display/panel/rocktech,jh057n00900.yaml b/Documentation/devicetree/bindings/display/panel/rocktech,jh057n00900.yaml
index 09b5eb7542f8..150e81090af2 100644
--- a/Documentation/devicetree/bindings/display/panel/rocktech,jh057n00900.yaml
+++ b/Documentation/devicetree/bindings/display/panel/rocktech,jh057n00900.yaml
@@ -20,6 +20,8 @@ allOf:
 properties:
   compatible:
     enum:
+      # Anbernic RG353V-V2 5.0" 640x480 TFT LCD panel
+      - anbernic,rg353v-panel-v2
       # Rocktech JH057N00900 5.5" 720x1440 TFT LCD panel
       - rocktech,jh057n00900
       # Xingbangda XBD599 5.99" 720x1440 TFT LCD panel
diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
index 476719771eef..e4f7b005138d 100644
--- a/Documentation/gpu/rfc/index.rst
+++ b/Documentation/gpu/rfc/index.rst
@@ -31,3 +31,7 @@ host such documentation:
 .. toctree::
 
    i915_vm_bind.rst
+
+.. toctree::
+
+   xe.rst
diff --git a/Documentation/gpu/rfc/xe.rst b/Documentation/gpu/rfc/xe.rst
new file mode 100644
index 000000000000..2516fe141db6
--- /dev/null
+++ b/Documentation/gpu/rfc/xe.rst
@@ -0,0 +1,235 @@
+==========================
+Xe – Merge Acceptance Plan
+==========================
+Xe is a new driver for Intel GPUs that supports both integrated and
+discrete platforms starting with Tiger Lake (first Intel Xe Architecture).
+
+This document aims to establish a merge plan for Xe by writing down clear
+pre-merge goals, in order to avoid unnecessary delays.
+
+Xe – Overview
+=============
+The main motivation of Xe is to have a fresh base to work from that is
+unencumbered by older platforms, whilst also taking the opportunity to
+rearchitect our driver to increase sharing across the drm subsystem, both
+leveraging and allowing us to contribute more towards other shared components
+like TTM and drm/scheduler.
+
+This is also an opportunity to start from the beginning with a clean uAPI that
+is extensible by design and already aligned with modern userspace needs. For
+this reason, the memory model is solely based on GPU Virtual Address space
+bind/unbind (‘VM_BIND’) of GEM buffer objects (BOs), and execution only
+supports explicit synchronization. Because mappings persist across execution,
+userspace does not need to provide a list of all required mappings during
+each submission.
+
+The new driver leverages a lot from i915. As for display, the intent is to
+share the display code with the i915 driver so that there is maximum reuse
+there.
+
+As for power management, the goal is to have much-simplified support for the
+system suspend states (S-states), PCI device suspend states (D-states),
+GPU/Render suspend states (R-states) and frequency management. It should
+leverage as much as possible all the existing PCI-subsystem infrastructure
+(pm and runtime_pm) and underlying firmware components such as PCODE and GuC
+for the power-state and frequency decisions.
+
+Repository:
+
+https://gitlab.freedesktop.org/drm/xe/kernel (branch drm-xe-next)
+
+Xe – Platforms
+==============
+Currently, Xe is already functional and has experimental support for multiple
+platforms starting from Tiger Lake, with initial support in userspace
+implemented in Mesa (for Iris and Anv, our OpenGL and Vulkan drivers), as well
+as in NEO (for OpenCL and Level0).
+
+During a transition period, platforms will be supported by both Xe and i915.
+However, the force_probe mechanism present in both drivers will allow only one
+official and by-default probe at a given time.
+
+For instance, in order to probe a DG2 whose PCI ID is 0x5690 with Xe instead
+of i915, the following set of parameters needs to be used:
+
+```
+i915.force_probe=!5690 xe.force_probe=5690
+```
+
+In both drivers, the ‘.require_force_probe’ protection forces the user to use
+the force_probe parameter while the driver is under development. This
+protection is only removed when the support for the platform and the uAPI are
+stable, and that stability needs to be demonstrated by CI results.
+
+In order to avoid userspace regressions, i915 will continue to support all the
+current platforms that are already out of this protection. Xe support will be
+forever experimental and dependent on the usage of force_probe for these
+platforms.
+
+When the time comes for Xe, the protection will be lifted on Xe and kept in
+i915.
+
+The Xe driver will be protected with both a STAGING Kconfig option and
+force_probe. Changes in the uAPI are expected while the driver is behind these
+protections. STAGING will be removed when the driver uAPI gets to a mature
+state where we can guarantee the ‘no regression’ rule. Then force_probe will
+be lifted only for future platforms that will be productized with the Xe
+driver, but not with i915.
+
+Xe – Pre-Merge Goals
+====================
+
+Drm_scheduler
+-------------
+Xe primarily uses firmware-based scheduling (GuC FW). However, it will use
+drm_scheduler as the scheduler ‘frontend’ for userspace submission in order to
+resolve syncobj and dma-buf implicit sync dependencies.
+However, drm_scheduler is not yet prepared to handle the 1-to-1 relationship
+between drm_gpu_scheduler and drm_sched_entity.
+
+Deeper changes to drm_scheduler should *not* be required to get Xe accepted,
+but some consensus needs to be reached between Xe and other community drivers
+that could also benefit from this work of coupling FW-based/assisted
+submission, such as Arm’s new Mali GPU driver, and others.
+
+As a key measurable result, the patch series introducing Xe itself shall not
+depend on any other patch touching drm_scheduler itself that was not yet
+merged through drm-misc. This, by itself, already implies reaching an
+agreement on a uniform 1-to-1 relationship implementation/usage across
+drivers.
+
+GPU VA
+------
+Two main goals of Xe are meeting together here:
+
+1) Have an uAPI that aligns with modern UMD needs.
+
+2) Early upstream engagement.
+
+Red Hat engineers working on Nouveau proposed a new DRM feature to handle
+keeping track of GPU virtual address mappings. This is still not merged
+upstream, but it aligns very well with our goals and with our VM_BIND. The
+engagement with upstream and the port of Xe towards GPUVA is already ongoing.
+
+As a key measurable result, Xe needs to be aligned with the GPU VA work and
+working in our tree. Missing Nouveau patches should *not* block Xe, and any
+needed GPUVA-related patch should be independent and present on dri-devel or
+acked by maintainers to go along with the first Xe pull request towards
+drm-next.
+
+DRM_VM_BIND
+-----------
+Nouveau and Xe are both implementing ‘VM_BIND’ and new ‘Exec’ uAPIs in order
+to fulfill the needs of the modern uAPI. Xe merge should *not* be blocked on
+the development of a common new DRM infrastructure. However, the Xe team needs
+to engage with the community to explore the options of a common API.
+
+As a key measurable result, the DRM_VM_BIND needs to be documented in this
+file below, or this entire block deleted if the consensus is for independent,
+per-driver vm_bind ioctls.
+
+Although having a common DRM-level IOCTL for VM_BIND is not a requirement to
+get Xe merged, it is mandatory to enforce the overall locking scheme for all
+major structs and lists (so vm and vma). So, a consensus is needed, and
+possibly some common helpers. If helpers are needed, they should also be
+documented in this document.
+
+ASYNC VM_BIND
+-------------
+Although having a common DRM-level IOCTL for VM_BIND is not a requirement to
+get Xe merged, it is mandatory to have a consensus with other drivers and
+Mesa. It needs to be clear how to handle async VM_BIND and interactions with
+userspace memory fences. Ideally with helper support so people don't get it
+wrong in all possible ways.
+
+As a key measurable result, the benefits of ASYNC VM_BIND and a discussion of
+various flavors, error handling and a sample API should be documented here or
+in a separate document pointed to by this document.
+
+Userptr integration and vm_bind
+-------------------------------
+Different drivers implement different ways of dealing with execution of
+userptr. With multiple drivers currently introducing support for VM_BIND, the
+goal is to aim for a DRM consensus on what’s the best way to have that
+support. To some extent this is already being addressed by GPUVA itself, where
+the userptr will likely be a GPUVA with a NULL GEM, i.e. VM_BIND called
+directly on the userptr. However, there are more aspects around the rules for
+that and the usage of mmu_notifiers, locking and other aspects.
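For readers new to the term, a VM_BIND-style interface boils down to (un)mapping either a GEM BO or a userptr range at a caller-chosen GPU virtual address within a VM. The struct below is a purely hypothetical sketch of such a request; it is not the Xe uAPI (which this plan deliberately defers to the high-level overview section at the end), and every name in it is invented:

```
#include <linux/types.h>

/* Hypothetical and illustrative only; this is neither the Xe uAPI nor any
 * upstream interface. A VM_BIND-style request (un)maps a GEM BO, or, with a
 * zero handle, a userspace pointer range, at a chosen GPU virtual address
 * inside one GPU VA space (VM). */
struct example_vm_bind_op {
        __u32 vm_id;            /* target GPU VA space */
        __u32 handle;           /* GEM BO handle, or 0 for a userptr binding */
        __u64 obj_offset;       /* offset into the BO, or the userptr address */
        __u64 addr;             /* GPU virtual address of the mapping */
        __u64 range;            /* size of the mapping, in bytes */
        __u32 op;               /* e.g. EXAMPLE_OP_MAP or EXAMPLE_OP_UNMAP */
        __u32 flags;            /* e.g. async completion via an out-syncobj */
};
```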
+
+This task here has the goal of introducing documentation of the basic rules.
+
+The documentation *needs* to first live in this document (uAPI section below)
+and then be moved to another, more specific document, either at the Xe level
+or at the DRM level.
+
+Documentation should include:
+
+ * The userptr part of the VM_BIND api.
+
+ * Locking, including the page-faulting case.
+
+ * O(1) complexity under VM_BIND.
+
+Some parts of userptr, like mmu_notifiers, should become GPUVA or DRM helpers
+when the second driver supporting VM_BIND+userptr appears. Details to be
+defined when the time comes.
+
+Long running compute: minimal data structure/scaffolding
+--------------------------------------------------------
+The generic scheduler code needs to include the handling of endless compute
+contexts, with the minimal scaffolding for preempt-ctx fences (probably on the
+drm_sched_entity) and making sure drm_scheduler can cope with the lack of a
+job completion fence.
+
+The goal is to achieve a consensus ahead of the Xe initial pull request,
+ideally with this minimal drm/scheduler work, if needed, merged to drm-misc in
+a way that any drm driver, including Xe, could re-use it and add their own
+individual needs on top in a next stage. However, this should not block the
+initial merge.
+
+This is a non-blocking item, since shipping the driver without long-running
+compute support enabled is not a showstopper.
+
+Display integration with i915
+-----------------------------
+In order to share the display code with the i915 driver so that there is
+maximum reuse, the i915/display/ code is built twice, once for i915.ko and
+then for xe.ko. Currently, the i915/display code in the Xe tree is polluted
+with many 'ifdefs' depending on the build target. The goal is to refactor both
+the Xe and i915/display code simultaneously in order to get a clean result
+before they land upstream, so that display can already be part of the initial
+pull request towards drm-next.
+
+However, display code should not gate the acceptance of Xe upstream. Xe
+patches will be refactored in a way that display code can be removed, if
+needed, from the first pull request of Xe towards drm-next. The expectation is
+that when both drivers are part of drm-tip, the introduction of cleaner
+patches will be easier and faster.
+
+Drm_exec
+--------
+The helper that makes dma_resv locking for a big number of buffers possible is
+being removed in the drm_exec series proposed at
+https://patchwork.freedesktop.org/patch/524376/. If that happens, Xe needs to
+change and incorporate the changes in the driver. The goal is to engage with
+the community to understand whether the best approach is to move that to the
+drivers that are using it, or to keep the helpers in place waiting for Xe to
+get merged.
+
+This item ties into the GPUVA, VM_BIND, and even long-running compute support.
+
+As a key measurable result, we need to have a community consensus documented
+in this document and the Xe driver prepared for the changes, if necessary.
+
+Dev_coredump
+------------
+
+Xe needs to align with other drivers on the way that the error states are
+dumped, avoiding an Xe-only error_state solution. The goal is to use the
+devcoredump infrastructure to report error states, since it provides a
+standardized way of doing so by exposing a virtual and temporary
+/sys/class/devcoredump device.
+
+As the key measurable result, the Xe driver needs to provide GPU snapshots
+captured at hang time through devcoredump, but without depending on any core
+modification of the devcoredump infrastructure itself.
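The reporting side of this already exists in the kernel. A minimal sketch of how a driver hands a hang snapshot to devcoredump could look like the following, where dev_coredumpv() is the real interface and example_capture_snapshot() is a made-up stand-in for whatever snapshot code Xe provides:

```
#include <linux/device.h>
#include <linux/devcoredump.h>
#include <linux/gfp.h>
#include <linux/vmalloc.h>

/* Hypothetical stand-in for the driver-side code that serializes GPU state
 * (registers, firmware logs, ring contents, ...) into a vmalloc() buffer. */
static void *example_capture_snapshot(size_t *len)
{
        *len = 4096;                    /* placeholder payload size */
        return vzalloc(*len);
}

/* Minimal sketch: on a GPU hang, capture a snapshot and hand it to
 * devcoredump, which exposes it as a temporary device under
 * /sys/class/devcoredump/. dev_coredumpv() takes ownership of the buffer
 * and vfree()s it once the dump has been read or times out. */
static void example_report_hang(struct device *dev)
{
        size_t len;
        void *snapshot = example_capture_snapshot(&len);

        if (snapshot)
                dev_coredumpv(dev, snapshot, len, GFP_KERNEL);
}
```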
+
+Later, when we are in-tree, the goal is to collaborate on possible
+improvements to the devcoredump infrastructure, such as multiple-file support
+for better organization of the dumps, snapshot support, extra dmesg printing,
+and whatever else may make sense and help the overall infrastructure.
+
+Xe – uAPI high level overview
+=============================
+
+...Warning: To be done in follow-up patches after/when/where the main
+consensus on the various items is individually reached.
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 1f8a5ebe188e..68bdafa0284f 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -276,11 +276,8 @@ Various hold-ups:
 - Need to switch to drm_fbdev_generic_setup(), otherwise a lot of the custom
   fb setup code can't be deleted.
 
-- Many drivers wrap drm_gem_fb_create() only to check for valid formats. For
-  atomic drivers we could check for valid formats by calling
-  drm_plane_check_pixel_format() against all planes, and pass if any plane
-  supports the format. For non-atomic that's not possible since like the format
-  list for the primary plane is fake and we'd therefor reject valid formats.
+- Need to switch to drm_gem_fb_create(), as now drm_gem_fb_create() checks for
+  valid formats for atomic drivers.
 
 - Many drivers subclass drm_framebuffer, we'd need a embedding compatible
   version of the varios drm_gem_fb_create functions. Maybe called
diff --git a/Documentation/gpu/vkms.rst b/Documentation/gpu/vkms.rst
index 49db221c0f52..ba04ac7c2167 100644
--- a/Documentation/gpu/vkms.rst
+++ b/Documentation/gpu/vkms.rst
@@ -118,14 +118,9 @@ Add Plane Features
 
 There's lots of plane features we could add support for:
 
-- ARGB format on primary plane: blend the primary plane into background with
-  translucent alpha.
-
 - Add background color KMS property[Good to get started].
 
-- Full alpha blending on all planes.
-
-- Rotation, scaling.
+- Scaling.
 
 - Additional buffer formats, especially YUV formats for video like NV12.
   Low/high bpp RGB formats would also be interesting.
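As a small illustration of the todo.rst change above: since drm_gem_fb_create() now checks for valid formats on atomic drivers, a driver that wrapped it only for that check can point its mode-config funcs straight at the generic helpers. The struct below is a sketch with an invented name:

```
#include <drm/drm_atomic_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_mode_config.h>

/* Sketch only: an atomic driver using the generic GEM framebuffer helper
 * directly as its ->fb_create() hook instead of a private wrapper that
 * merely re-checked pixel formats. */
static const struct drm_mode_config_funcs example_mode_config_funcs = {
        .fb_create     = drm_gem_fb_create,
        .atomic_check  = drm_atomic_helper_check,
        .atomic_commit = drm_atomic_helper_commit,
};
```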