author	Dave Airlie <[email protected]>	2024-02-13 11:32:23 +1000
committer	Dave Airlie <[email protected]>	2024-02-13 11:32:23 +1000
commit	b344e64fbda303b767a3844ee739a596a9c3679e (patch)
tree	caeec9924ea5522f008e42d89a7bf4286e759bc3 /drivers/gpu/drm/amd/display
parent	0de62399883d5077fd13d0926f5128a7e038b40c (diff)
parent	d5597444032b2f5c8624918fb5b29be5bba78a3c (diff)
Merge tag 'amd-drm-next-6.9-2024-02-09' of https://gitlab.freedesktop.org/agd5f/linux into drm-next
amd-drm-next-6.9-2024-02-09:
amdgpu:
- Validate DMABuf imports in compute VMs
- Add RAS ACA framework
- PSP 13 fixes
- Misc code cleanups
- Replay fixes
- Atom interpreter PS, WS bounds checking
- DML2 fixes
- Audio fixes
- DCN 3.5 Z state fixes
- Remove deprecated ida_simple usage
- UBSAN fixes
- RAS fixes
- Enable seq64 infrastructure
- DC color block enablement
- Documentation updates
- DC documentation updates
- DMCUB updates
- S3 fixes
- VCN 4.0.5 fixes
- DP MST fixes
- SR-IOV fixes
amdkfd:
- Validate DMABuf imports in compute VMs
- SVM fixes
- Trap handler updates
radeon:
- Atom interpreter PS, WS bounds checking
- Misc code cleanups
UAPI:
- Bump KFD version so UMDs know that the fixes enabling management of VA mappings
  in compute VMs via the GEM_VA ioctl for DMABufs exported from KFD are present
- Add INFO query for input power. This matches the existing INFO query for average
power. Used in gaming HUDs, etc.
Example userspace: https://github.com/Umio-Yasuno/libdrm-amdgpu-sys-rs/tree/input_power
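As a rough illustration of consuming the new query (this is not part of the tag itself; the reference userspace is the libdrm-amdgpu-sys-rs branch linked above), the sketch below reads the sensor through libdrm's amdgpu_query_sensor_info(), the same way the existing average-power sensor is read by HUDs. The constant name AMDGPU_INFO_SENSOR_GPU_INPUT_POWER and the watts unit are assumptions made by analogy with the existing AVG_POWER query.

```c
/*
 * Hedged sketch: read the new input-power sensor via libdrm, assuming the
 * UAPI constant is AMDGPU_INFO_SENSOR_GPU_INPUT_POWER and the value is
 * reported in watts like the existing AVG_POWER sensor.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <amdgpu.h>
#include <amdgpu_drm.h>

int main(void)
{
	amdgpu_device_handle dev;
	uint32_t major, minor, watts = 0;
	int fd = open("/dev/dri/renderD128", O_RDWR);

	if (fd < 0 || amdgpu_device_initialize(fd, &major, &minor, &dev))
		return 1;

	/* Returns an error on kernels that predate this series. */
	if (!amdgpu_query_sensor_info(dev, AMDGPU_INFO_SENSOR_GPU_INPUT_POWER,
				      sizeof(watts), &watts))
		printf("input power: %u W\n", watts);

	amdgpu_device_deinitialize(dev);
	return 0;
}
```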
From: Alex Deucher <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
Diffstat (limited to 'drivers/gpu/drm/amd/display')
110 files changed, 2431 insertions, 1010 deletions
diff --git a/drivers/gpu/drm/amd/display/TODO b/drivers/gpu/drm/amd/display/TODO
deleted file mode 100644
index a8a6c106e8c7..000000000000
--- a/drivers/gpu/drm/amd/display/TODO
+++ /dev/null
@@ -1,110 +0,0 @@
-===============================================================================
-TODOs
-===============================================================================
-
-1. Base this on drm-next - WIP
-
-
-2. Cleanup commit history
-
-
-3. WIP - Drop page flip helper and use DRM's version
-
-
-4. DONE - Flatten all DC objects
-	* dc_stream/core_stream/stream should just be dc_stream
-	* Same for other DC objects
-
-	"Is there any major reason to keep all those abstractions?
-
-	Could you collapse everything into struct dc_stream?
-
-	I haven't looked recently but I didn't get the impression there was a
-	lot of design around what was public/protected, more whatever needed
-	to be used by someone else was in public."
-	~ Dave Airlie
-
-
-5. DONE - Rename DC objects to align more with DRM
-	* dc_surface -> dc_plane_state
-	* dc_stream -> dc_stream_state
-
-
-6. DONE - Per-plane and per-stream validation
-
-
-7. WIP - Per-plane and per-stream commit
-
-
-8. WIP - Split pipe_ctx into plane and stream resource structs
-
-
-9. Attach plane and stream reources to state object instead of validate_context
-
-
-10. Remove dc_edid_caps and drm_helpers_parse_edid_caps
-	* Use drm_display_info instead
-	* Remove DC's edid quirks and rely on DRM's quirks (add quirks if needed)
-
-	"Making sure you use the sink-specific helper libraries and kernel
-	subsystems, since there's really no good reason to have 2nd
-	implementation of those in the kernel. Looks likes that's done for mst
-	and edid parsing. There's still a bit a midlayer feeling to the edid
-	parsing side (e.g. dc_edid_caps and dm_helpers_parse_edid_caps, I
-	think it'd be much better if you convert that over to reading stuff
-	from drm_display_info and if needed, push stuff into the core). Also,
-	I can't come up with a good reason why DC needs all this (except to
-	reimplement half of our edid quirk table, which really isn't a good
-	idea). Might be good if you put this onto the list of things to fix
-	long-term, but imo not a blocker. Definitely make sure new stuff
-	doesn't slip in (i.e. if you start adding edid quirks to DC instead of
-	the drm core, refactoring to use the core edid stuff was pointless)."
-	~ Daniel Vetter
-
-
-11. Remove dc/i2caux. This folder can be somewhat misleading. It's basically an
-overy complicated HW programming function for sendind and receiving i2c/aux
-commands. We can greatly simplify that and move it into dc/dceXYZ like other
-HW blocks.
-
-12. drm_modeset_lock in MST should no longer be needed in recent kernels
-	* Adopt appropriate locking scheme
-
-13. get_modes and best_encoder callbacks look a bit funny. Can probably rip out
-a few indirections, and consider removing entirely and using the
-drm_atomic_helper_best_encoder default behaviour.
-
-14. core/dc_debug.c, consider switching to the atomic state debug helpers and
-moving all your driver state printing into the various atomic_print_state
-callbacks. There's also plans to expose this stuff in a standard way across all
-drivers, to make debugging userspace compositors easier across different hw.
-
-15. Move DP/HDMI dual mode adaptors to drm_dp_dual_mode_helper.c. See
-dal_ddc_service_i2c_query_dp_dual_mode_adaptor.
-
-16. Move to core SCDC helpers (I think those are new since initial DC review).
-
-17. There's still a pretty massive layer cake around dp aux and DPCD handling,
-with like 3 levels of abstraction and using your own structures instead of the
-stuff in drm_dp_helper.h. drm_dp_helper.h isn't really great and already has 2
-incompatible styles, just means more reasons not to add a third (or well third
-one gets to do the cleanup refactor).
-
-18. There's a pile of sink handling code, both for DP and HDMI where I didn't
-immediately recognize the standard. I think long term it'd be best for the drm
-subsystem if we try to move as much of that into helpers/core as possible, and
-share it with drivers. But that's a very long term goal, and by far not just an
-issue with DC - other drivers, especially around DP sink handling, are equally
-guilty.
-
-19. DONE - The DC logger is still a rather sore thing, but I know that the
-DRM_DEBUG stuff just isn't up to the challenges either. We need to figure out
-something that integrates better with DRM and linux debug printing, while not
-being useless with filtering output. dynamic debug printing might be an option.
-
-20. Use kernel i2c device to program HDMI retimer. Some boards have an HDMI
-retimer that we need to program to pass PHY compliance. Currently that's
-bypassing the i2c device and goes directly to HW. This should be changed.
-
-21. Remove vector.c from dc/basics. It's used in DDC code which can probably
-be simplified enough to no longer need a vector implementation.
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index d292f290cd6e..467796d97313 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -67,6 +67,7 @@
 #include "amdgpu_dm_debugfs.h"
 #endif
 #include "amdgpu_dm_psr.h"
+#include "amdgpu_dm_replay.h"
 #include "ivsrcid/ivsrcid_vislands30.h"
@@ -2121,6 +2122,16 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
 	const struct dmcub_firmware_header_v1_0 *hdr;
 	enum dmub_asic dmub_asic;
 	enum dmub_status status;
+	static enum dmub_window_memory_type window_memory_type[DMUB_WINDOW_TOTAL] = {
+		DMUB_WINDOW_MEMORY_TYPE_FB,	//DMUB_WINDOW_0_INST_CONST
+		DMUB_WINDOW_MEMORY_TYPE_FB,	//DMUB_WINDOW_1_STACK
+		DMUB_WINDOW_MEMORY_TYPE_FB,	//DMUB_WINDOW_2_BSS_DATA
+		DMUB_WINDOW_MEMORY_TYPE_FB,	//DMUB_WINDOW_3_VBIOS
+		DMUB_WINDOW_MEMORY_TYPE_FB,	//DMUB_WINDOW_4_MAILBOX
+		DMUB_WINDOW_MEMORY_TYPE_FB,	//DMUB_WINDOW_5_TRACEBUFF
+		DMUB_WINDOW_MEMORY_TYPE_FB,	//DMUB_WINDOW_6_FW_STATE
+		DMUB_WINDOW_MEMORY_TYPE_FB	//DMUB_WINDOW_7_SCRATCH_MEM
+	};
 	int r;
 	switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {
@@ -2218,7 +2229,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
 		adev->dm.dmub_fw->data +
 		le32_to_cpu(hdr->header.ucode_array_offset_bytes) +
 		PSP_HEADER_BYTES;
-	region_params.is_mailbox_in_inbox = false;
+	region_params.window_memory_type = window_memory_type;
 	status = dmub_srv_calc_region_info(dmub_srv, &region_params, &region_info);
@@ -2246,6 +2257,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
 	memory_params.cpu_fb_addr = adev->dm.dmub_bo_cpu_addr;
 	memory_params.gpu_fb_addr = adev->dm.dmub_bo_gpu_addr;
 	memory_params.region_info = &region_info;
+	memory_params.window_memory_type = window_memory_type;
 	adev->dm.dmub_fb_info = kzalloc(sizeof(*adev->dm.dmub_fb_info), GFP_KERNEL);
@@ -4399,6 +4411,7 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
 	enum dc_connection_type new_connection_type = dc_connection_none;
 	const struct dc_plane_cap *plane;
 	bool psr_feature_enabled = false;
+	bool replay_feature_enabled = false;
 	int max_overlay = dm->dc->caps.max_slave_planes;
 	dm->display_indexes_num = dm->dc->caps.max_streams;
@@ -4510,6 +4523,23 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
 		}
 	}
+	/* Determine whether to enable Replay support by default. */
+	if (!(amdgpu_dc_debug_mask & DC_DISABLE_REPLAY)) {
+		switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {
+		case IP_VERSION(3, 1, 4):
+		case IP_VERSION(3, 1, 5):
+		case IP_VERSION(3, 1, 6):
+		case IP_VERSION(3, 2, 0):
+		case IP_VERSION(3, 2, 1):
+		case IP_VERSION(3, 5, 0):
+			replay_feature_enabled = true;
+			break;
+		default:
+			replay_feature_enabled = amdgpu_dc_feature_mask & DC_REPLAY_MASK;
+			break;
+		}
+	}
+
 	/* loops over all connectors on the board */
 	for (i = 0; i < link_cnt; i++) {
 		struct dc_link *link = NULL;
@@ -4578,6 +4608,11 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
 			amdgpu_dm_update_connector_after_detect(aconnector);
 			setup_backlight_device(dm, aconnector);
+			/* Disable PSR if Replay can be enabled */
+			if (replay_feature_enabled)
+				if (amdgpu_dm_set_replay_caps(link, aconnector))
+					psr_feature_enabled = false;
+
 			if (psr_feature_enabled)
 				amdgpu_dm_set_psr_caps(link);
@@ -6402,10 +6437,81 @@ int amdgpu_dm_connector_atomic_get_property(struct drm_connector *connector,
 	return ret;
 }
+/**
+ * DOC: panel power savings
+ *
+ * The display manager allows you to set your desired **panel power savings**
+ * level (between 0-4, with 0 representing off), e.g. using the following::
+ *
+ *   # echo 3 > /sys/class/drm/card0-eDP-1/amdgpu/panel_power_savings
+ *
+ * Modifying this value can have implications on color accuracy, so tread
+ * carefully.
+ */
+
+static ssize_t panel_power_savings_show(struct device *device,
+					struct device_attribute *attr,
+					char *buf)
+{
+	struct drm_connector *connector = dev_get_drvdata(device);
+	struct drm_device *dev = connector->dev;
+	u8 val;
+
+	drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
+	val = to_dm_connector_state(connector->state)->abm_level ==
+		ABM_LEVEL_IMMEDIATE_DISABLE ? 0 :
+		to_dm_connector_state(connector->state)->abm_level;
+	drm_modeset_unlock(&dev->mode_config.connection_mutex);
+
+	return sysfs_emit(buf, "%u\n", val);
+}
+
+static ssize_t panel_power_savings_store(struct device *device,
+					 struct device_attribute *attr,
+					 const char *buf, size_t count)
+{
+	struct drm_connector *connector = dev_get_drvdata(device);
+	struct drm_device *dev = connector->dev;
+	long val;
+	int ret;
+
+	ret = kstrtol(buf, 0, &val);
+
+	if (ret)
+		return ret;
+
+	if (val < 0 || val > 4)
+		return -EINVAL;
+
+	drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
+	to_dm_connector_state(connector->state)->abm_level = val ?:
+		ABM_LEVEL_IMMEDIATE_DISABLE;
+	drm_modeset_unlock(&dev->mode_config.connection_mutex);
+
+	drm_kms_helper_hotplug_event(dev);
+
+	return count;
+}
+
+static DEVICE_ATTR_RW(panel_power_savings);
+
+static struct attribute *amdgpu_attrs[] = {
+	&dev_attr_panel_power_savings.attr,
+	NULL
+};
+
+static const struct attribute_group amdgpu_group = {
+	.name = "amdgpu",
+	.attrs = amdgpu_attrs
+};
+
 static void amdgpu_dm_connector_unregister(struct drm_connector *connector)
 {
 	struct amdgpu_dm_connector *amdgpu_dm_connector = to_amdgpu_dm_connector(connector);
+	if (connector->connector_type == DRM_MODE_CONNECTOR_eDP)
+		sysfs_remove_group(&connector->kdev->kobj, &amdgpu_group);
+
 	drm_dp_aux_unregister(&amdgpu_dm_connector->dm_dp_aux.aux);
 }
@@ -6507,6 +6613,13 @@ amdgpu_dm_connector_late_register(struct drm_connector *connector)
 		to_amdgpu_dm_connector(connector);
 	int r;
+	if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
+		r = sysfs_create_group(&connector->kdev->kobj,
+				       &amdgpu_group);
+		if (r)
+			return r;
+	}
+
 	amdgpu_dm_register_backlight_device(amdgpu_dm_connector);
 	if ((connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort) ||
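The DOC comment and sysfs attribute above expose the ABM level as a per-eDP-connector "panel_power_savings" file (0 disables ABM, 1-4 select increasingly aggressive levels at some cost to color accuracy). A minimal userspace sketch of driving it follows; the card0-eDP-1 path is an assumption and depends on which eDP connector is present on the system.

```c
/*
 * Hedged sketch: set the panel_power_savings level documented above from
 * userspace. The connector path is an assumption for illustration only.
 */
#include <stdio.h>

int main(void)
{
	const char *path =
		"/sys/class/drm/card0-eDP-1/amdgpu/panel_power_savings";
	FILE *f = fopen(path, "w");

	if (!f)
		return 1;

	/* 0 disables ABM; 1-4 trade color accuracy for panel power. */
	fprintf(f, "%d\n", 3);
	fclose(f);
	return 0;
}
```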
@@ -8526,10 +8639,22 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
 			dm_update_pflip_irq_state(drm_to_adev(dev), acrtc_attach);
-		if ((acrtc_state->update_type > UPDATE_TYPE_FAST) &&
-		    acrtc_state->stream->link->psr_settings.psr_version != DC_PSR_VERSION_UNSUPPORTED &&
-		    !acrtc_state->stream->link->psr_settings.psr_feature_enabled)
-			amdgpu_dm_link_setup_psr(acrtc_state->stream);
+		if (acrtc_state->update_type > UPDATE_TYPE_FAST) {
+			if (acrtc_state->stream->link->replay_settings.config.replay_supported &&
+			    !acrtc_state->stream->link->replay_settings.replay_feature_enabled) {
+				struct amdgpu_dm_connector *aconn =
+					(struct amdgpu_dm_connector *)acrtc_state->stream->dm_stream_context;
+				amdgpu_dm_link_setup_replay(acrtc_state->stream->link, aconn);
+			} else if (acrtc_state->stream->link->psr_settings.psr_version != DC_PSR_VERSION_UNSUPPORTED &&
+				   !acrtc_state->stream->link->psr_settings.psr_feature_enabled) {
+
+				struct amdgpu_dm_connector *aconn = (struct amdgpu_dm_connector *)
+					acrtc_state->stream->dm_stream_context;
+
+				if (!aconn->disallow_edp_enter_psr)
+					amdgpu_dm_link_setup_psr(acrtc_state->stream);
+			}
+		}
 		/* Decrement skip count when PSR is enabled and we're doing fast updates. */
 		if (acrtc_state->update_type == UPDATE_TYPE_FAST &&
@@ -8556,6 +8681,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
 			    !amdgpu_dm_crc_window_is_activated(acrtc_state->base.crtc) &&
 #endif
 			    !acrtc_state->stream->link->psr_settings.psr_allow_active &&
+			    !aconn->disallow_edp_enter_psr &&
 			    (timestamp_ns - acrtc_state->stream->link->psr_settings.psr_dirty_rects_change_timestamp_ns) > 500000000)
@@ -8818,11 +8944,12 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
 		}
 	} /* for_each_crtc_in_state() */
-	/* if there mode set or reset, disable eDP PSR */
+	/* if there mode set or reset, disable eDP PSR, Replay */
 	if (mode_set_reset_required) {
 		if (dm->vblank_control_workqueue)
 			flush_workqueue(dm->vblank_control_workqueue);
+		amdgpu_dm_replay_disable_all(dm);
 		amdgpu_dm_psr_disable_all(dm);
 	}
@@ -10731,11 +10858,13 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
 			goto fail;
 		}
-		ret = compute_mst_dsc_configs_for_state(state, dm_state->context, vars);
-		if (ret) {
-			DRM_DEBUG_DRIVER("compute_mst_dsc_configs_for_state() failed\n");
-			ret = -EINVAL;
-			goto fail;
+		if (dc_resource_is_dsc_encoding_supported(dc)) {
+			ret = compute_mst_dsc_configs_for_state(state, dm_state->context, vars);
+			if (ret) {
+				DRM_DEBUG_DRIVER("compute_mst_dsc_configs_for_state() failed\n");
+				ret = -EINVAL;
+				goto fail;
+			}
 		}
 		ret = dm_update_mst_vcpi_slots_for_dsc(state, dm_state->context, vars);
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index 9c1871b866cc..09519b7abf67 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -693,6 +693,7 @@ struct amdgpu_dm_connector {
 	struct drm_display_mode freesync_vid_base;
 	int psr_skip_count;
+	bool disallow_edp_enter_psr;
 	/* Record progress status of mst*/
 	uint8_t mst_status;
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
index 6e715ef3a556..e23a0a276e33 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
@@ -29,6 +29,7 @@
 #include "dc.h"
 #include "amdgpu.h"
 #include "amdgpu_dm_psr.h"
+#include "amdgpu_dm_replay.h"
 #include "amdgpu_dm_crtc.h"
 #include "amdgpu_dm_plane.h"
 #include "amdgpu_dm_trace.h"
@@ -95,6 +96,61 @@ bool amdgpu_dm_crtc_vrr_active(struct dm_crtc_state *dm_state)
 	       dm_state->freesync_config.state == VRR_STATE_ACTIVE_FIXED;
 }
+/**
+ * amdgpu_dm_crtc_set_panel_sr_feature() - Manage panel self-refresh features.
+ *
+ * @vblank_work: is a pointer to a struct vblank_control_work object.
+ * @vblank_enabled: indicates whether the DRM vblank counter is currently
+ *                  enabled (true) or disabled (false).
+ * @allow_sr_entry: represents whether entry into the self-refresh mode is
+ *                  allowed (true) or not allowed (false).
+ *
+ * The DRM vblank counter enable/disable action is used as the trigger to enable
+ * or disable various panel self-refresh features:
+ *
+ * Panel Replay and PSR SU
+ * - Enable when:
+ *   - vblank counter is disabled
+ *   - entry is allowed: usermode demonstrates an adequate number of fast
+ *     commits)
+ *   - CRC capture window isn't active
+ * - Keep enabled even when vblank counter gets enabled
+ *
+ * PSR1
+ * - Enable condition same as above
+ * - Disable when vblank counter is enabled
+ */
+static void amdgpu_dm_crtc_set_panel_sr_feature(
+	struct vblank_control_work *vblank_work,
+	bool vblank_enabled, bool allow_sr_entry)
+{
+	struct dc_link *link = vblank_work->stream->link;
+	bool is_sr_active = (link->replay_settings.replay_allow_active ||
+			     link->psr_settings.psr_allow_active);
+	bool is_crc_window_active = false;
+
+#ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
+	is_crc_window_active =
+		amdgpu_dm_crc_window_is_activated(&vblank_work->acrtc->base);
+#endif
+
+	if (link->replay_settings.replay_feature_enabled &&
+	    allow_sr_entry && !is_sr_active && !is_crc_window_active) {
+		amdgpu_dm_replay_enable(vblank_work->stream, true);
+	} else if (vblank_enabled) {
+		if (link->psr_settings.psr_version < DC_PSR_VERSION_SU_1 && is_sr_active)
+			amdgpu_dm_psr_disable(vblank_work->stream);
+	} else if (link->psr_settings.psr_feature_enabled &&
+		   allow_sr_entry && !is_sr_active && !is_crc_window_active) {
+
+		struct amdgpu_dm_connector *aconn =
+			(struct amdgpu_dm_connector *) vblank_work->stream->dm_stream_context;
+
+		if (!aconn->disallow_edp_enter_psr)
+			amdgpu_dm_psr_enable(vblank_work->stream);
+	}
+}
+
 static void amdgpu_dm_crtc_vblank_control_worker(struct work_struct *work)
 {
 	struct vblank_control_work *vblank_work =
@@ -123,18 +179,10 @@ static void amdgpu_dm_crtc_vblank_control_worker(struct work_struct *work)
 	 * fill_dc_dirty_rects().
 	 */
 	if (vblank_work->stream && vblank_work->stream->link) {
-		if (vblank_work->enable) {
-			if (vblank_work->stream->link->psr_settings.psr_version < DC_PSR_VERSION_SU_1 &&
-			    vblank_work->stream->link->psr_settings.psr_allow_active)
-				amdgpu_dm_psr_disable(vblank_work->stream);
-		} else if (vblank_work->stream->link->psr_settings.psr_feature_enabled &&
-			   !vblank_work->stream->link->psr_settings.psr_allow_active &&
-#ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
-			   !amdgpu_dm_crc_window_is_activated(&vblank_work->acrtc->base) &&
-#endif
-			   vblank_work->acrtc->dm_irq_params.allow_psr_entry) {
-			amdgpu_dm_psr_enable(vblank_work->stream);
-		}
+		amdgpu_dm_crtc_set_panel_sr_feature(
+			vblank_work, vblank_work->enable,
+			vblank_work->acrtc->dm_irq_params.allow_psr_entry ||
+			vblank_work->stream->link->replay_settings.replay_feature_enabled);
 	}
 	mutex_unlock(&dm->dc_lock);
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
index 68a846323912..eee4945653e2 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
@@ -1483,7 +1483,7 @@ static ssize_t dp_dsc_clock_en_read(struct file *f, char __user *buf,
 	const uint32_t rd_buf_size = 10;
 	struct pipe_ctx *pipe_ctx;
 	ssize_t result = 0;
-	int i, r, str_len = 30;
+	int i, r, str_len = 10;
 	rd_buf = kcalloc(rd_buf_size, sizeof(char), GFP_KERNEL);
@@ -2971,6 +2971,53 @@ static int allow_edp_hotplug_detection_set(void *data, u64 val)
 	return 0;
 }
+/* check if kernel disallow eDP enter psr state
+ * cat /sys/kernel/debug/dri/0/eDP-X/disallow_edp_enter_psr
+ * 0: allow edp enter psr; 1: disallow
+ */
+static int disallow_edp_enter_psr_get(void *data, u64 *val)
+{
+	struct amdgpu_dm_connector *aconnector = data;
+
+	*val = (u64) aconnector->disallow_edp_enter_psr;
+	return 0;
+}
+
+/* set kernel disallow eDP enter psr state
+ * echo 0x0 /sys/kernel/debug/dri/0/eDP-X/disallow_edp_enter_psr
+ * 0: allow edp enter psr; 1: disallow
+ *
+ * usage: test app read crc from PSR eDP rx.
+ *
+ * during kernel boot up, kernel write dpcd 0x170 = 5.
+ * this notify eDP rx psr enable and let rx check crc.
+ * rx fw will start checking crc for rx internal logic.
+ * crc read count within dpcd 0x246 is not updated and
+ * value is 0. when eDP tx driver wants to read rx crc
+ * from dpcd 0x246, 0x270, read count 0 lead tx driver
+ * timeout.
+ *
+ * to avoid this, we add this debugfs to let test app to disbable
+ * rx crc checking for rx internal logic. then test app can read
+ * non-zero crc read count.
+ *
+ * expected app sequence is as below:
+ * 1. disable eDP PHY and notify eDP rx with dpcd 0x600 = 2.
+ * 2. echo 0x1 /sys/kernel/debug/dri/0/eDP-X/disallow_edp_enter_psr
+ * 3. enable eDP PHY and notify eDP rx with dpcd 0x600 = 1 but
+ *    without dpcd 0x170 = 5.
+ * 4. read crc from rx dpcd 0x270, 0x246, etc.
+ * 5. echo 0x0 /sys/kernel/debug/dri/0/eDP-X/disallow_edp_enter_psr.
+ *    this will let eDP back to normal with psr setup dpcd 0x170 = 5.
+ */
+static int disallow_edp_enter_psr_set(void *data, u64 val)
+{
+	struct amdgpu_dm_connector *aconnector = data;
+
+	aconnector->disallow_edp_enter_psr = val ? true : false;
+	return 0;
+}
+
 static int dmub_trace_mask_set(void *data, u64 val)
 {
 	struct amdgpu_device *adev = data;
@@ -3092,6 +3139,10 @@ DEFINE_DEBUGFS_ATTRIBUTE(allow_edp_hotplug_detection_fops,
 			allow_edp_hotplug_detection_get,
 			allow_edp_hotplug_detection_set, "%llu\n");
+DEFINE_DEBUGFS_ATTRIBUTE(disallow_edp_enter_psr_fops,
+			disallow_edp_enter_psr_get,
+			disallow_edp_enter_psr_set, "%llu\n");
+
 DEFINE_SHOW_ATTRIBUTE(current_backlight);
 DEFINE_SHOW_ATTRIBUTE(target_backlight);
@@ -3265,6 +3316,8 @@ void connector_debugfs_init(struct amdgpu_dm_connector *connector)
 					&edp_ilr_debugfs_fops);
 		debugfs_create_file("allow_edp_hotplug_detection", 0644, dir, connector,
 					&allow_edp_hotplug_detection_fops);
+		debugfs_create_file("disallow_edp_enter_psr", 0644, dir, connector,
+					&disallow_edp_enter_psr_fops);
 	}
 	for (i = 0; i < ARRAY_SIZE(connector_debugfs_entries); i++) {
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.c
index 5ce542b1f860..738a58eebba7 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.c
@@ -60,21 +60,26 @@ static bool link_supports_replay(struct dc_link *link, struct amdgpu_dm_connecto
 	if (!as_caps->dp_adap_sync_caps.bits.ADAPTIVE_SYNC_SDP_SUPPORT)
 		return false;
+	// Sink shall populate line deviation information
+	if (dpcd_caps->pr_info.pixel_deviation_per_line == 0 ||
+	    dpcd_caps->pr_info.max_deviation_line == 0)
+		return false;
+
 	return true;
 }
 /*
- * amdgpu_dm_setup_replay() - setup replay configuration
+ * amdgpu_dm_set_replay_caps() - setup Replay capabilities
  * @link: link
  * @aconnector: aconnector
  *
 */
-bool amdgpu_dm_setup_replay(struct dc_link *link, struct amdgpu_dm_connector *aconnector)
+bool amdgpu_dm_set_replay_caps(struct dc_link *link, struct amdgpu_dm_connector *aconnector)
 {
-	struct replay_config pr_config;
+	struct replay_config pr_config = { 0 };
 	union replay_debug_flags *debug_flags = NULL;
-	// For eDP, if Replay is supported, return true to skip checks
+	// If Replay is already set to support, return true to skip checks
 	if (link->replay_settings.config.replay_supported)
 		return true;
@@ -87,27 +92,50 @@ bool amdgpu_dm_setup_replay(struct dc_link *link, struct amdgpu_dm_connector *ac
 	if (!link_supports_replay(link, aconnector))
 		return false;
-	// Mark Replay is supported in link and update related attributes
+	// Mark Replay is supported in pr_config
 	pr_config.replay_supported = true;
-	pr_config.replay_power_opt_supported = 0;
-	pr_config.replay_enable_option |= pr_enable_option_static_screen;
-	pr_config.replay_timing_sync_supported = aconnector->max_vfreq >= 2 * aconnector->min_vfreq;
-
-	if (!pr_config.replay_timing_sync_supported)
-		pr_config.replay_enable_option &= ~pr_enable_option_general_ui;
 	debug_flags = (union replay_debug_flags *)&pr_config.debug_flags;
 	debug_flags->u32All = 0;
 	debug_flags->bitfields.visual_confirm =
 		link->ctx->dc->debug.visual_confirm == VISUAL_CONFIRM_REPLAY;
-	link->replay_settings.replay_feature_enabled = true;
-
 	init_replay_config(link, &pr_config);
 	return true;
 }
+
+/*
+ * amdgpu_dm_link_setup_replay() - configure replay link
+ * @link: link
+ * @aconnector: aconnector
+ *
+ */
+bool amdgpu_dm_link_setup_replay(struct dc_link *link, struct amdgpu_dm_connector *aconnector)
+{
+	struct replay_config *pr_config;
+
+	if (link == NULL || aconnector == NULL)
+		return false;
+
+	pr_config = &link->replay_settings.config;
+
+	if (!pr_config->replay_supported)
+		return false;
+
+	pr_config->replay_power_opt_supported = 0x11;
+	pr_config->replay_smu_opt_supported = false;
+	pr_config->replay_enable_option |= pr_enable_option_static_screen;
+	pr_config->replay_support_fast_resync_in_ultra_sleep_mode = aconnector->max_vfreq >= 2 * aconnector->min_vfreq;
+	pr_config->replay_timing_sync_supported = false;
+
+	if (!pr_config->replay_timing_sync_supported)
+		pr_config->replay_enable_option &= ~pr_enable_option_general_ui;
+
+	link->replay_settings.replay_feature_enabled = true;
+
+	return true;
+}
 /*
  * amdgpu_dm_replay_enable() - enable replay f/w
@@ -117,51 +145,23 @@ bool amdgpu_dm_setup_replay(struct dc_link *link, struct amdgpu_dm_connector *ac
 */
 bool amdgpu_dm_replay_enable(struct dc_stream_state *stream, bool wait)
 {
-	uint64_t state;
-	unsigned int retry_count;
 	bool replay_active = true;
-	const unsigned int max_retry = 1000;
-	bool force_static = true;
 	struct dc_link *link = NULL;
-
 	if (stream == NULL)
 		return false;
 	link = stream->link;
-	if (link == NULL)
-		return false;
-
-	link->dc->link_srv->edp_setup_replay(link, stream);
-
-	link->dc->link_srv->edp_set_replay_allow_active(link, NULL, false, false, NULL);
-
-	link->dc->link_srv->edp_set_replay_allow_active(link, &replay_active, false, true, NULL);
-
-	if (wait == true) {
-
-		for (retry_count = 0; retry_count <= max_retry; retry_count++) {
-			dc_link_get_replay_state(link, &state);
-			if (replay_active) {
-				if (state != REPLAY_STATE_0 &&
-					(!force_static || state == REPLAY_STATE_3))
-					break;
-			} else {
-				if (state == REPLAY_STATE_0)
-					break;
-			}
-			udelay(500);
-		}
-
-		/* assert if max retry hit */
-		if (retry_count >= max_retry)
-			ASSERT(0);
-	} else {
-		/* To-do: Add trace log */
+	if (link) {
+		link->dc->link_srv->edp_setup_replay(link, stream);
+		link->dc->link_srv->edp_set_coasting_vtotal(link, stream->timing.v_total);
+		DRM_DEBUG_DRIVER("Enabling replay...\n");
+		link->dc->link_srv->edp_set_replay_allow_active(link, &replay_active, wait, false, NULL);
+		return true;
 	}
-	return true;
+	return false;
 }
 /*
@@ -172,12 +172,31 @@ bool amdgpu_dm_replay_enable(struct dc_stream_state *stream, bool wait)
 */
 bool amdgpu_dm_replay_disable(struct dc_stream_state *stream)
 {
+	bool replay_active = false;
+	struct dc_link *link = NULL;
-	if (stream->link) {
+	if (stream == NULL)
+		return false;
+
+	link = stream->link;
+
+	if (link) {
 		DRM_DEBUG_DRIVER("Disabling replay...\n");
-		stream->link->dc->link_srv->edp_set_replay_allow_active(stream->link, NULL, false, false, NULL);
+		link->dc->link_srv->edp_set_replay_allow_active(stream->link, &replay_active, true, false, NULL);
 		return true;
 	}
 	return false;
 }
+
+/*
+ * amdgpu_dm_replay_disable_all() - disable replay f/w
+ * if replay is enabled on any stream
+ *
+ * Return: true if success
+ */
+bool amdgpu_dm_replay_disable_all(struct amdgpu_display_manager *dm)
+{
+	DRM_DEBUG_DRIVER("Disabling replay if replay is enabled on any stream\n");
+	return dc_set_replay_allow_active(dm->dc, false);
+}
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.h
index 01cba3cd6246..f0d30eb47312 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.h
@@ -40,7 +40,9 @@ enum replay_enable_option {
 bool amdgpu_dm_replay_enable(struct dc_stream_state *stream, bool enable);
-bool amdgpu_dm_setup_replay(struct dc_link *link, struct amdgpu_dm_connector *aconnector);
+bool amdgpu_dm_set_replay_caps(struct dc_link *link, struct amdgpu_dm_connector *aconnector);
+bool amdgpu_dm_link_setup_replay(struct dc_link *link, struct amdgpu_dm_connector *aconnector);
 bool amdgpu_dm_replay_disable(struct dc_stream_state *stream);
+bool amdgpu_dm_replay_disable_all(struct amdgpu_display_manager *dm);
 #endif /* AMDGPU_DM_AMDGPU_DM_REPLAY_H_ */
diff --git a/drivers/gpu/drm/amd/display/dc/basics/conversion.c b/drivers/gpu/drm/amd/display/dc/basics/conversion.c
index 1090d235086a..bd1f60ecaba4 100644
--- a/drivers/gpu/drm/amd/display/dc/basics/conversion.c
+++ b/drivers/gpu/drm/amd/display/dc/basics/conversion.c
@@ -101,6 +101,40 @@ void convert_float_matrix(
 	}
 }
+static struct fixed31_32 int_frac_to_fixed_point(uint16_t arg,
+						 uint8_t integer_bits,
+						 uint8_t fractional_bits)
+{
+	struct fixed31_32 result;
+	uint16_t sign_mask = 1 << (fractional_bits + integer_bits);
+	uint16_t value_mask = sign_mask - 1;
+
+	result.value = (long long)(arg & value_mask) <<
+		       (FIXED31_32_BITS_PER_FRACTIONAL_PART - fractional_bits);
+
+	if (arg & sign_mask)
+		result = dc_fixpt_neg(result);
+
+	return result;
+}
+
+/**
+ * convert_hw_matrix - converts HW values into fixed31_32 matrix.
+ * @matrix: fixed point 31.32 matrix
+ * @reg: array of register values
+ * @buffer_size: size of the array of register values
+ *
+ * Converts HW register spec defined format S2D13 into a fixed-point 31.32
+ * matrix.
+ */
+void convert_hw_matrix(struct fixed31_32 *matrix,
+		       uint16_t *reg,
+		       uint32_t buffer_size)
+{
+	for (int i = 0; i < buffer_size; ++i)
+		matrix[i] = int_frac_to_fixed_point(reg[i], 2, 13);
+}
+
 static uint32_t find_gcd(uint32_t a, uint32_t b)
 {
 	uint32_t remainder;
diff --git a/drivers/gpu/drm/amd/display/dc/basics/conversion.h b/drivers/gpu/drm/amd/display/dc/basics/conversion.h
index 81da4e6f7a1a..a433cef78496 100644
--- a/drivers/gpu/drm/amd/display/dc/basics/conversion.h
+++ b/drivers/gpu/drm/amd/display/dc/basics/conversion.h
@@ -41,6 +41,10 @@ void convert_float_matrix(
 void reduce_fraction(uint32_t num, uint32_t den,
 		     uint32_t *out_num, uint32_t *out_den);
+void convert_hw_matrix(struct fixed31_32 *matrix,
+		       uint16_t *reg,
+		       uint32_t buffer_size);
+
 static inline unsigned int log_2(unsigned int num)
 {
 	return ilog2(num);
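int_frac_to_fixed_point() and convert_hw_matrix() above decode S2D13 register words (1 sign bit, 2 integer bits, 13 fractional bits) into fixed31_32 matrix entries. The standalone sketch below applies the same sign-magnitude interpretation but produces a double, purely to make the bit layout concrete; it is illustrative and not the kernel helper.

```c
/*
 * Hedged sketch: interpret an S2D13 gamut/CSC register word the same way
 * int_frac_to_fixed_point() above does, but into a double for illustration.
 * Not kernel code; integer_bits = 2, fractional_bits = 13.
 */
#include <stdint.h>
#include <stdio.h>

static double s2d13_to_double(uint16_t reg)
{
	const unsigned int integer_bits = 2, fractional_bits = 13;
	uint16_t sign_mask = 1u << (fractional_bits + integer_bits); /* bit 15 */
	uint16_t value_mask = sign_mask - 1;                         /* low 15 bits */
	double val = (double)(reg & value_mask) / (double)(1 << fractional_bits);

	return (reg & sign_mask) ? -val : val;   /* sign-magnitude, like dc_fixpt_neg() */
}

int main(void)
{
	/* 0x2000 encodes 1.0 (1 << 13); 0xA000 encodes -1.0; 0x1000 encodes 0.5 */
	printf("%f %f %f\n", s2d13_to_double(0x2000),
	       s2d13_to_double(0xA000), s2d13_to_double(0x1000));
	return 0;
}
```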
diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table.c b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
index 818a529cacc3..86f9198e7501 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
@@ -37,7 +37,7 @@
 #define EXEC_BIOS_CMD_TABLE(command, params)\
 	(amdgpu_atom_execute_table(((struct amdgpu_device *)bp->base.ctx->driver_context)->mode_info.atom_context, \
 		GetIndexIntoMasterTable(COMMAND, command), \
-		(uint32_t *)&params) == 0)
+		(uint32_t *)&params, sizeof(params)) == 0)
 #define BIOS_CMD_TABLE_REVISION(command, frev, crev)\
 	amdgpu_atom_parse_cmd_header(((struct amdgpu_device *)bp->base.ctx->driver_context)->mode_info.atom_context, \
diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
index 293a919d605d..cbae1be7b009 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
@@ -49,7 +49,7 @@
 #define EXEC_BIOS_CMD_TABLE(fname, params)\
 	(amdgpu_atom_execute_table(((struct amdgpu_device *)bp->base.ctx->driver_context)->mode_info.atom_context, \
 		GET_INDEX_INTO_MASTER_TABLE(command, fname), \
-		(uint32_t *)&params) == 0)
+		(uint32_t *)&params, sizeof(params)) == 0)
 #define BIOS_CMD_TABLE_REVISION(fname, frev, crev)\
 	amdgpu_atom_parse_cmd_header(((struct amdgpu_device *)bp->base.ctx->driver_context)->mode_info.atom_context, \
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
index 0c6a4ab72b1d..e3e1940198a9 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
@@ -707,9 +707,7 @@ void rn_clk_mgr_construct(
 	int is_green_sardine = 0;
 	struct clk_log_info log_info = {0};
-#if defined(CONFIG_DRM_AMD_DC_FP)
 	is_green_sardine = ASICREV_IS_GREEN_SARDINE(ctx->asic_id.hw_internal_rev);
-#endif
 	clk_mgr->base.ctx = ctx;
 	clk_mgr->base.funcs = &dcn21_funcs;
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c
index 8c9d45e5b13b..d72acbb049b1 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c
@@ -185,10 +185,6 @@ int rn_vbios_smu_set_hard_min_dcfclk(struct clk_mgr_internal *clk_mgr, int reque
 			VBIOSSMC_MSG_SetHardMinDcfclkByFreq,
 			khz_to_mhz_ceil(requested_dcfclk_khz));
-#ifdef DBG
-	smu_print("actual_dcfclk_set_mhz %d is set to : %d\n", actual_dcfclk_set_mhz, actual_dcfclk_set_mhz * 1000);
-#endif
-
 	return actual_dcfclk_set_mhz * 1000;
 }
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/dcn301_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/dcn301_smu.c
index e4f96b6fd79d..19e5b3be9275 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/dcn301_smu.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/dcn301_smu.c
@@ -180,10 +180,6 @@ int dcn301_smu_set_hard_min_dcfclk(struct clk_mgr_internal *clk_mgr, int request
 			VBIOSSMC_MSG_SetHardMinDcfclkByFreq,
 			khz_to_mhz_ceil(requested_dcfclk_khz));
-#ifdef DBG
-	smu_print("actual_dcfclk_set_mhz %d is set to : %d\n", actual_dcfclk_set_mhz, actual_dcfclk_set_mhz * 1000);
-#endif
-
 	return actual_dcfclk_set_mhz * 1000;
 }
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
index 32279c5db724..6904e95113c1 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_smu.c
@@ -202,10 +202,6 @@ int dcn31_smu_set_hard_min_dcfclk(struct clk_mgr_internal *clk_mgr, int requeste
 			VBIOSSMC_MSG_SetHardMinDcfclkByFreq,
 			khz_to_mhz_ceil(requested_dcfclk_khz));
-#ifdef DBG
-	smu_print("actual_dcfclk_set_mhz %d is set to : %d\n", actual_dcfclk_set_mhz, actual_dcfclk_set_mhz * 1000);
-#endif
-
 	return actual_dcfclk_set_mhz * 1000;
 }
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c
index 07baa10a8647..c4af406146b7 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_smu.c
@@ -220,12 +220,6 @@ int dcn314_smu_set_hard_min_dcfclk(struct clk_mgr_internal *clk_mgr, int request
 			VBIOSSMC_MSG_SetHardMinDcfclkByFreq,
 			khz_to_mhz_ceil(requested_dcfclk_khz));
-#ifdef DBG
-	smu_print("actual_dcfclk_set_mhz %d is set to : %d\n",
-		  actual_dcfclk_set_mhz,
-		  actual_dcfclk_set_mhz * 1000);
-#endif
-
 	return actual_dcfclk_set_mhz * 1000;
 }
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c
index 1042cf1a3ab0..879f1494c4cd 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c
@@ -215,10 +215,6 @@ int dcn315_smu_set_hard_min_dcfclk(struct clk_mgr_internal *clk_mgr, int request
 			VBIOSSMC_MSG_SetHardMinDcfclkByFreq,
 			khz_to_mhz_ceil(requested_dcfclk_khz));
-#ifdef DBG
-	smu_print("actual_dcfclk_set_mhz %d is set to : %d\n", actual_dcfclk_set_mhz, actual_dcfclk_set_mhz * 1000);
-#endif
-
 	return actual_dcfclk_set_mhz * 1000;
 }
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_smu.c
index 3ed19197a755..8b82092b91cd 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_smu.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_smu.c
@@ -189,10 +189,6 @@ int dcn316_smu_set_hard_min_dcfclk(struct clk_mgr_internal *clk_mgr, int request
 			VBIOSSMC_MSG_SetHardMinDcfclkByFreq,
 			khz_to_mhz_ceil(requested_dcfclk_khz));
-#ifdef DBG
-	smu_print("actual_dcfclk_set_mhz %d is set to : %d\n", actual_dcfclk_set_mhz, actual_dcfclk_set_mhz * 1000);
-#endif
-
 	return actual_dcfclk_set_mhz * 1000;
 }
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
index aadd07bc68c5..e64e45e4c833 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
@@ -387,7 +387,15 @@ static void dcn32_update_clocks_update_dentist(
 	uint32_t temp_dispclk_khz = (DENTIST_DIVIDER_RANGE_SCALE_FACTOR * clk_mgr->base.dentist_vco_freq_khz) / temp_disp_divider;
 	if (clk_mgr->smu_present)
-		dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DISPCLK, khz_to_mhz_ceil(temp_dispclk_khz));
+		/*
+		 * SMU uses discrete dispclk presets. We applied
+		 * the same formula to increase our dppclk_khz
+		 * to the next matching discrete value. By
+		 * contract, we should use the preset dispclk
+		 * floored in Mhz to describe the intended clock.
+		 */
+		dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DISPCLK,
+					       khz_to_mhz_floor(temp_dispclk_khz));
 	if (dc->debug.override_dispclk_programming) {
 		REG_GET(DENTIST_DISPCLK_CNTL,
@@ -426,7 +434,15 @@ static void dcn32_update_clocks_update_dentist(
 	/* do requested DISPCLK updates*/
 	if (clk_mgr->smu_present)
-		dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DISPCLK, khz_to_mhz_ceil(clk_mgr->base.clks.dispclk_khz));
+		/*
+		 * SMU uses discrete dispclk presets. We applied
+		 * the same formula to increase our dppclk_khz
+		 * to the next matching discrete value. By
+		 * contract, we should use the preset dispclk
+		 * floored in Mhz to describe the intended clock.
+		 */
+		dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DISPCLK,
+					       khz_to_mhz_floor(clk_mgr->base.clks.dispclk_khz));
 	if (dc->debug.override_dispclk_programming) {
 		REG_GET(DENTIST_DISPCLK_CNTL,
@@ -493,6 +509,8 @@ static void dcn32_auto_dpm_test_log(
 		}
 	}
+	msleep(5);
+
 	mall_ss_size_bytes = context->bw_ctx.bw.dcn.mall_ss_size_bytes;
 	dispclk_khz_reg = REG_READ(CLK1_CLK0_CURRENT_CNT); // DISPCLK
@@ -734,7 +752,15 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 		clk_mgr_base->clks.dppclk_khz = new_clocks->dppclk_khz;
 		if (clk_mgr->smu_present && !dpp_clock_lowered)
-			dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DPPCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dppclk_khz));
+			/*
+			 * SMU uses discrete dppclk presets. We applied
+			 * the same formula to increase our dppclk_khz
+			 * to the next matching discrete value. By
+			 * contract, we should use the preset dppclk
+			 * floored in Mhz to describe the intended clock.
+			 */
+			dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DPPCLK,
+						       khz_to_mhz_floor(clk_mgr_base->clks.dppclk_khz));
 		update_dppclk = true;
 	}
@@ -765,7 +791,15 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 			dcn32_update_clocks_update_dpp_dto(clk_mgr, context, safe_to_lower);
 			dcn32_update_clocks_update_dentist(clk_mgr, context);
 			if (clk_mgr->smu_present)
-				dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DPPCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dppclk_khz));
+				/*
+				 * SMU uses discrete dppclk presets. We applied
+				 * the same formula to increase our dppclk_khz
+				 * to the next matching discrete value. By
+				 * contract, we should use the preset dppclk
+				 * floored in Mhz to describe the intended clock.
+				 */
+				dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DPPCLK,
+							       khz_to_mhz_floor(clk_mgr_base->clks.dppclk_khz));
 		} else {
 			/* if clock is being raised, increase refclk before lowering DTO */
 			if (update_dppclk || update_dispclk)
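The comments added above justify switching the SMU hard-min requests from khz_to_mhz_ceil() to khz_to_mhz_floor(): the dispclk/dppclk value has already been rounded up to a discrete SMU preset, so flooring the kHz value back to MHz names that preset instead of overshooting to the next one. The sketch below illustrates the difference; the two helper definitions are written out here as assumptions about their behavior, not copies of the DC headers.

```c
/*
 * Hedged sketch: the ceil vs floor distinction referenced in the comments
 * above. The helper bodies are assumptions for illustration only.
 */
#include <stdio.h>

static unsigned int khz_to_mhz_ceil(unsigned int khz)  { return (khz + 999) / 1000; }
static unsigned int khz_to_mhz_floor(unsigned int khz) { return khz / 1000; }

int main(void)
{
	/* A dispclk already snapped to a 600 MHz SMU preset may be carried
	 * around as e.g. 600240 kHz after divider math. */
	unsigned int dispclk_khz = 600240;

	printf("ceil: %u MHz, floor: %u MHz\n",
	       khz_to_mhz_ceil(dispclk_khz),   /* 601 -> would overshoot the preset */
	       khz_to_mhz_floor(dispclk_khz)); /* 600 -> matches the intended preset */
	return 0;
}
```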
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
index a34c258c19dc..c76352a817de 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
@@ -36,8 +36,7 @@
 #define DALSMC_MSG_SetCabForUclkPstate 0x12
 #define DALSMC_Result_OK 0x1
-void
-dcn32_smu_send_fclk_pstate_message(struct clk_mgr_internal *clk_mgr, bool enable);
+void dcn32_smu_send_fclk_pstate_message(struct clk_mgr_internal *clk_mgr, bool enable);
 void dcn32_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr);
 void dcn32_smu_set_pme_workaround(struct clk_mgr_internal *clk_mgr);
 void dcn32_smu_send_cab_for_uclk_message(struct clk_mgr_internal *clk_mgr, unsigned int num_ways);
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
index 14cec1c7b718..06edca50a8fa 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
@@ -384,19 +384,6 @@ static void dcn35_enable_pme_wa(struct clk_mgr *clk_mgr_base)
 	dcn35_smu_enable_pme_wa(clk_mgr);
 }
-void dcn35_init_clocks(struct clk_mgr *clk_mgr)
-{
-	uint32_t ref_dtbclk = clk_mgr->clks.ref_dtbclk_khz;
-
-	memset(&(clk_mgr->clks), 0, sizeof(struct dc_clocks));
-
-	// Assumption is that boot state always supports pstate
-	clk_mgr->clks.ref_dtbclk_khz = ref_dtbclk;	// restore ref_dtbclk
-	clk_mgr->clks.p_state_change_support = true;
-	clk_mgr->clks.prev_p_state_change_support = true;
-	clk_mgr->clks.pwr_state = DCN_PWR_STATE_UNKNOWN;
-	clk_mgr->clks.zstate_support = DCN_ZSTATE_SUPPORT_UNKNOWN;
-}
 bool dcn35_are_clock_states_equal(struct dc_clocks *a,
 		struct dc_clocks *b)
@@ -422,6 +409,23 @@ static void dcn35_dump_clk_registers(struct clk_state_registers_and_bypass *regs
 {
 }
+static void init_clk_states(struct clk_mgr *clk_mgr)
+{
+	uint32_t ref_dtbclk = clk_mgr->clks.ref_dtbclk_khz;
+	memset(&(clk_mgr->clks), 0, sizeof(struct dc_clocks));
+
+	clk_mgr->clks.dtbclk_en = true;
+	clk_mgr->clks.ref_dtbclk_khz = ref_dtbclk;	// restore ref_dtbclk
+	clk_mgr->clks.p_state_change_support = true;
+	clk_mgr->clks.prev_p_state_change_support = true;
+	clk_mgr->clks.pwr_state = DCN_PWR_STATE_UNKNOWN;
+	clk_mgr->clks.zstate_support = DCN_ZSTATE_SUPPORT_UNKNOWN;
+}
+
+void dcn35_init_clocks(struct clk_mgr *clk_mgr)
+{
+	init_clk_states(clk_mgr);
+}
 static struct clk_bw_params dcn35_bw_params = {
 	.vram_type = Ddr4MemType,
 	.num_channels = 1,
@@ -826,7 +830,7 @@ static void dcn35_set_low_power_state(struct clk_mgr *clk_mgr_base)
 	}
 }
-static void dcn35_set_idle_state(struct clk_mgr *clk_mgr_base, bool allow_idle)
+static void dcn35_set_ips_idle_state(struct clk_mgr *clk_mgr_base, bool allow_idle)
 {
 	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
 	struct dc *dc = clk_mgr_base->ctx->dc;
@@ -874,7 +878,7 @@ static bool dcn35_is_ips_supported(struct clk_mgr *clk_mgr_base)
 	return ips_supported;
 }
-static uint32_t dcn35_get_idle_state(struct clk_mgr *clk_mgr_base)
+static uint32_t dcn35_get_ips_idle_state(struct clk_mgr *clk_mgr_base)
 {
 	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
@@ -883,7 +887,7 @@ static uint32_t dcn35_get_idle_state(struct clk_mgr *clk_mgr_base)
 static void dcn35_init_clocks_fpga(struct clk_mgr *clk_mgr)
 {
-	dcn35_init_clocks(clk_mgr);
+	init_clk_states(clk_mgr);
 	/* TODO: Implement the functions and remove the ifndef guard */
 }
@@ -968,8 +972,8 @@ static struct clk_mgr_funcs dcn35_funcs = {
 	.set_low_power_state = dcn35_set_low_power_state,
 	.exit_low_power_state = dcn35_exit_low_power_state,
 	.is_ips_supported = dcn35_is_ips_supported,
-	.set_idle_state = dcn35_set_idle_state,
-	.get_idle_state = dcn35_get_idle_state
+	.set_idle_state = dcn35_set_ips_idle_state,
+	.get_idle_state = dcn35_get_ips_idle_state
 };
 struct clk_mgr_funcs dcn35_fpga_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_smu.c
index 6d4a1ffab5ed..a07f7e685d28 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_smu.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_smu.c
@@ -447,6 +447,9 @@ void dcn35_smu_set_dtbclk(struct clk_mgr_internal *clk_mgr, bool enable)
 void dcn35_vbios_smu_enable_48mhz_tmdp_refclk_pwrdwn(struct clk_mgr_internal *clk_mgr, bool enable)
 {
+	if (!clk_mgr->smu_present)
+		return;
+
 	dcn35_smu_send_msg_with_param(
 		clk_mgr,
 		VBIOSSMC_MSG_EnableTmdp48MHzRefclkPwrDown,
@@ -458,6 +461,9 @@ int dcn35_smu_exit_low_power_state(struct clk_mgr_internal *clk_mgr)
 {
 	int retv;
+	if (!clk_mgr->smu_present)
+		return 0;
+
 	retv = dcn35_smu_send_msg_with_param(
 		clk_mgr,
 		VBIOSSMC_MSG_DispPsrExit,
@@ -470,6 +476,9 @@ int dcn35_smu_get_ips_supported(struct clk_mgr_internal *clk_mgr)
 {
 	int retv;
+	if (!clk_mgr->smu_present)
+		return 0;
+
 	retv = dcn35_smu_send_msg_with_param(
 		clk_mgr,
 		VBIOSSMC_MSG_QueryIPS2Support,
@@ -481,6 +490,9 @@ int dcn35_smu_get_ips_supported(struct clk_mgr_internal *clk_mgr)
 void dcn35_smu_write_ips_scratch(struct clk_mgr_internal *clk_mgr, uint32_t param)
 {
+	if (!clk_mgr->smu_present)
+		return;
+
 	REG_WRITE(MP1_SMN_C2PMSG_71, param);
 	//smu_print("%s: write_ips_scratch = %x\n", __func__, param);
 }
@@ -489,6 +501,9 @@ uint32_t dcn35_smu_read_ips_scratch(struct clk_mgr_internal *clk_mgr)
 {
 	uint32_t retv;
+	if (!clk_mgr->smu_present)
+		return 0;
+
 	retv = REG_READ(MP1_SMN_C2PMSG_71);
 	//smu_print("%s: dcn35_smu_read_ips_scratch = %x\n", __func__, retv);
 	return retv;
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index aa7c02ba948e..72512903f88f 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -414,6 +414,8 @@ bool dc_stream_adjust_vmin_vmax(struct dc *dc,
 	if (dc->optimized_required || dc->wm_optimized_required)
 		return false;
+	dc_exit_ips_for_hw_access(dc);
+
 	stream->adjust.v_total_max = adjust->v_total_max;
 	stream->adjust.v_total_mid = adjust->v_total_mid;
 	stream->adjust.v_total_mid_frame_num = adjust->v_total_mid_frame_num;
@@ -454,6 +456,8 @@ bool dc_stream_get_last_used_drr_vtotal(struct dc *dc,
 	int i = 0;
+	dc_exit_ips_for_hw_access(dc);
+
 	for (i = 0; i < MAX_PIPES; i++) {
 		struct pipe_ctx *pipe = &dc->current_state->res_ctx.pipe_ctx[i];
@@ -484,6 +488,8 @@ bool dc_stream_get_crtc_position(struct dc *dc,
 	bool ret = false;
 	struct crtc_position position;
+	dc_exit_ips_for_hw_access(dc);
+
 	for (i = 0; i < MAX_PIPES; i++) {
 		struct pipe_ctx *pipe = &dc->current_state->res_ctx.pipe_ctx[i];
@@ -603,6 +609,8 @@ bool dc_stream_configure_crc(struct dc *dc, struct dc_stream_state *stream,
 	if (pipe == NULL)
 		return false;
+	dc_exit_ips_for_hw_access(dc);
+
 	/* By default, capture the full frame */
 	param.windowa_x_start = 0;
 	param.windowa_y_start = 0;
@@ -662,6 +670,8 @@ bool dc_stream_get_crc(struct dc *dc, struct dc_stream_state *stream,
 	struct pipe_ctx *pipe;
 	struct timing_generator *tg;
+	dc_exit_ips_for_hw_access(dc);
+
 	for (i = 0; i < MAX_PIPES; i++) {
 		pipe = &dc->current_state->res_ctx.pipe_ctx[i];
 		if (pipe->stream == stream)
@@ -686,6 +696,8 @@ void dc_stream_set_dyn_expansion(struct dc *dc, struct dc_stream_state *stream,
 	int i;
 	struct pipe_ctx *pipe_ctx;
+	dc_exit_ips_for_hw_access(dc);
+
 	for (i = 0; i < MAX_PIPES; i++) {
 		if (dc->current_state->res_ctx.pipe_ctx[i].stream == stream) {
@@ -721,6 +733,8 @@ void dc_stream_set_dither_option(struct dc_stream_state *stream,
 	if (option > DITHER_OPTION_MAX)
 		return;
+	dc_exit_ips_for_hw_access(stream->ctx->dc);
+
 	stream->dither_option = option;
 	memset(&params, 0, sizeof(params));
@@ -745,6 +759,8 @@ bool dc_stream_set_gamut_remap(struct dc *dc, const struct dc_stream_state *stre
 	bool ret = false;
 	struct pipe_ctx *pipes;
+	dc_exit_ips_for_hw_access(dc);
+
 	for (i = 0; i < MAX_PIPES; i++) {
 		if (dc->current_state->res_ctx.pipe_ctx[i].stream == stream) {
 			pipes = &dc->current_state->res_ctx.pipe_ctx[i];
@@ -762,6 +778,8 @@ bool dc_stream_program_csc_matrix(struct dc *dc, struct dc_stream_state *stream)
 	bool ret = false;
 	struct pipe_ctx *pipes;
+	dc_exit_ips_for_hw_access(dc);
+
 	for (i = 0; i < MAX_PIPES; i++) {
 		if (dc->current_state->res_ctx.pipe_ctx[i].stream == stream) {
@@ -788,6 +806,8 @@ void dc_stream_set_static_screen_params(struct dc *dc,
 	struct pipe_ctx *pipes_affected[MAX_PIPES];
 	int num_pipes_affected = 0;
+	dc_exit_ips_for_hw_access(dc);
+
 	for (i = 0; i < num_streams; i++) {
 		struct dc_stream_state *stream = streams[i];
@@ -1766,6 +1786,8 @@ void dc_enable_stereo(
 	int i, j;
 	struct pipe_ctx *pipe;
+	dc_exit_ips_for_hw_access(dc);
+
 	for (i = 0; i < MAX_PIPES; i++) {
 		if (context != NULL) {
 			pipe = &context->res_ctx.pipe_ctx[i];
@@ -1785,6 +1807,8 @@ void dc_enable_stereo(
 void dc_trigger_sync(struct dc *dc, struct dc_state *context)
 {
 	if (context->stream_count > 1 && !dc->debug.disable_timing_sync) {
+		dc_exit_ips_for_hw_access(dc);
+
 		enable_timing_multisync(dc, context);
 		program_timing_sync(dc, context);
 	}
@@ -2041,6 +2065,8 @@ enum dc_status dc_commit_streams(struct dc *dc,
 	if (!streams_changed(dc, streams, stream_count))
 		return res;
+	dc_exit_ips_for_hw_access(dc);
+
 	DC_LOG_DC("%s: %d streams\n", __func__, stream_count);
 	for (i = 0; i < stream_count; i++) {
@@ -3067,6 +3093,10 @@ static bool update_planes_and_stream_state(struct dc *dc,
 			if (otg_master && otg_master->stream->test_pattern.type != DP_TEST_PATTERN_VIDEO_MODE)
 				resource_build_test_pattern_params(&context->res_ctx, otg_master);
+
+			if (otg_master && (otg_master->stream->timing.pixel_encoding == PIXEL_ENCODING_YCBCR422 ||
+					otg_master->stream->timing.pixel_encoding == PIXEL_ENCODING_YCBCR420))
+				resource_build_subsampling_params(&context->res_ctx, otg_master);
 		}
 	}
@@ -3376,6 +3406,8 @@ static void commit_planes_for_stream_fast(struct dc *dc,
 	int i, j;
 	struct pipe_ctx *top_pipe_to_program = NULL;
 	struct dc_stream_status *stream_status = NULL;
+	dc_exit_ips_for_hw_access(dc);
+
 	dc_z10_restore(dc);
 	top_pipe_to_program = resource_get_otg_master_for_stream(
@@ -3503,10 +3535,23 @@ static void commit_planes_for_stream(struct dc *dc,
 	// dc->current_state anymore, so we have to cache it before we apply
 	// the new SubVP context
 	subvp_prev_use = false;
+	dc_exit_ips_for_hw_access(dc);
+
 	dc_z10_restore(dc);
 	if (update_type == UPDATE_TYPE_FULL)
 		wait_for_outstanding_hw_updates(dc, context);
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+
+		if (pipe->stream && pipe->plane_state) {
+			set_p_state_switch_method(dc, context, pipe);
+
+			if (dc->debug.visual_confirm)
+				dc_update_visual_confirm_color(dc, context, pipe);
+		}
+	}
+
 	if (update_type == UPDATE_TYPE_FULL) {
 		dc_allow_idle_optimizations(dc, false);
@@ -3541,17 +3586,6 @@ static void commit_planes_for_stream(struct dc *dc,
 		}
 	}
-	for (i = 0; i < dc->res_pool->pipe_count; i++) {
-		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
-
-		if (pipe->stream && pipe->plane_state) {
-			set_p_state_switch_method(dc, context, pipe);
-
-			if (dc->debug.visual_confirm)
-				dc_update_visual_confirm_color(dc, context, pipe);
-		}
-	}
-
 	if (stream->test_pattern.type != DP_TEST_PATTERN_VIDEO_MODE) {
 		struct pipe_ctx *mpcc_pipe;
 		struct pipe_ctx *odm_pipe;
@@ -3817,7 +3851,9 @@ static void commit_planes_for_stream(struct dc *dc,
 		 * programming has completed (we turn on phantom OTG in order
 		 * to complete the plane disable for phantom pipes).
 		 */
-		dc->hwss.apply_ctx_to_hw(dc, context);
+
+		if (dc->hwss.disable_phantom_streams)
+			dc->hwss.disable_phantom_streams(dc, context);
 	}
 	if (update_type != UPDATE_TYPE_FAST)
@@ -4382,6 +4418,8 @@ bool dc_update_planes_and_stream(struct dc *dc,
 	bool is_plane_addition = 0;
 	bool is_fast_update_only;
+	dc_exit_ips_for_hw_access(dc);
+
 	populate_fast_updates(fast_update, srf_updates, surface_count, stream_update);
 	is_fast_update_only = fast_update_only(dc, fast_update, srf_updates,
 			surface_count, stream_update, stream);
@@ -4502,6 +4540,8 @@ void dc_commit_updates_for_stream(struct dc *dc,
 	int i, j;
 	struct dc_fast_update fast_update[MAX_SURFACES] = {0};
+	dc_exit_ips_for_hw_access(dc);
+
 	populate_fast_updates(fast_update, srf_updates, surface_count, stream_update);
 	stream_status = dc_stream_get_status(stream);
 	context = dc->current_state;
@@ -4686,6 +4726,8 @@ void dc_set_power_state(
 	case DC_ACPI_CM_POWER_STATE_D0:
 		dc_state_construct(dc, dc->current_state);
+		dc_exit_ips_for_hw_access(dc);
+
 		dc_z10_restore(dc);
 		dc->hwss.init_hw(dc);
@@ -4827,6 +4869,12 @@ void dc_allow_idle_optimizations(struct dc *dc, bool allow)
 	dc->idle_optimizations_allowed = allow;
 }
+void dc_exit_ips_for_hw_access(struct dc *dc)
+{
+	if (dc->caps.ips_support)
+		dc_allow_idle_optimizations(dc, false);
+}
+
 bool dc_dmub_is_ips_idle_state(struct dc *dc)
 {
 	uint32_t idle_state = 0;
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index 9fbdb09697fd..96ea283bd169 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -822,6 +822,16 @@ static struct rect calculate_odm_slice_in_timing_active(struct pipe_ctx *pipe_ct
 		stream->timing.v_border_bottom +
 		stream->timing.v_border_top;
+	/* Recout for ODM slices after the first slice need one extra left edge pixel
+	 * for 3-tap chroma subsampling.
+	 */
+	if (odm_slice_idx > 0 &&
+	    (pipe_ctx->stream->timing.pixel_encoding == PIXEL_ENCODING_YCBCR422 ||
+	     pipe_ctx->stream->timing.pixel_encoding == PIXEL_ENCODING_YCBCR420)) {
+		odm_rec.x -= 1;
+		odm_rec.width += 1;
+	}
+
 	return odm_rec;
 }
@@ -1438,6 +1448,7 @@ void resource_build_test_pattern_params(struct resource_context *res_ctx,
 	enum controller_dp_test_pattern controller_test_pattern;
 	enum controller_dp_color_space controller_color_space;
 	enum dc_color_depth color_depth = otg_master->stream->timing.display_color_depth;
+	enum dc_pixel_encoding pixel_encoding = otg_master->stream->timing.pixel_encoding;
 	int h_active = otg_master->stream->timing.h_addressable +
 		otg_master->stream->timing.h_border_left +
 		otg_master->stream->timing.h_border_right;
@@ -1469,10 +1480,36 @@ void resource_build_test_pattern_params(struct resource_context *res_ctx,
 		else
 			params->width = last_odm_slice_width;
+		/* Extra left edge pixel is required for 3-tap chroma subsampling. */
+		if (i != 0 && (pixel_encoding == PIXEL_ENCODING_YCBCR422 ||
+			       pixel_encoding == PIXEL_ENCODING_YCBCR420)) {
+			params->offset -= 1;
+			params->width += 1;
+		}
+
 		offset += odm_slice_width;
 	}
 }
+void resource_build_subsampling_params(struct resource_context *res_ctx,
+				       struct pipe_ctx *otg_master)
+{
+	struct pipe_ctx *opp_heads[MAX_PIPES];
+	int odm_cnt = 1;
+	int i;
+
+	odm_cnt = resource_get_opp_heads_for_otg_master(otg_master, res_ctx, opp_heads);
+
+	/* For ODM slices after the first slice, extra left edge pixel is required
+	 * for 3-tap chroma subsampling.
+	 */
+	if (otg_master->stream->timing.pixel_encoding == PIXEL_ENCODING_YCBCR422 ||
+	    otg_master->stream->timing.pixel_encoding == PIXEL_ENCODING_YCBCR420) {
+		for (i = 0; i < odm_cnt; i++)
+			opp_heads[i]->stream_res.left_edge_extra_pixel = (i == 0) ? false : true;
+	}
+}
+
 bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
 {
 	const struct dc_plane_state *plane_state = pipe_ctx->plane_state;
@@ -1834,23 +1871,6 @@ int resource_find_any_free_pipe(struct resource_context *new_res_ctx,
 bool resource_is_pipe_type(const struct pipe_ctx *pipe_ctx, enum pipe_type type)
 {
-#ifdef DBG
-	if (pipe_ctx->stream == NULL) {
-		/* a free pipe with dangling states */
-		ASSERT(!pipe_ctx->plane_state);
-		ASSERT(!pipe_ctx->prev_odm_pipe);
-		ASSERT(!pipe_ctx->next_odm_pipe);
-		ASSERT(!pipe_ctx->top_pipe);
-		ASSERT(!pipe_ctx->bottom_pipe);
-	} else if (pipe_ctx->top_pipe) {
-		/* a secondary DPP pipe must be signed to a plane */
-		ASSERT(pipe_ctx->plane_state)
-	}
-	/* Add more checks here to prevent corrupted pipe ctx. It is very hard
-	 * to debug this issue afterwards because we can't pinpoint the code
-	 * location causing inconsistent pipe context states.
-	 */
-#endif
 	switch (type) {
 	case OTG_MASTER:
 		return !pipe_ctx->prev_odm_pipe &&
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_state.c b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
index 88c6436b28b6..180ac47868c2 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_state.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
@@ -291,11 +294,14 @@ void dc_state_destruct(struct dc_state *state)
 		dc_stream_release(state->phantom_streams[i]);
 		state->phantom_streams[i] = NULL;
 	}
+	state->phantom_stream_count = 0;
 	for (i = 0; i < state->phantom_plane_count; i++) {
 		dc_plane_state_release(state->phantom_planes[i]);
 		state->phantom_planes[i] = NULL;
 	}
+	state->phantom_plane_count = 0;
+
 	state->stream_mask = 0;
 	memset(&state->res_ctx, 0, sizeof(state->res_ctx));
 	memset(&state->pp_display_cfg, 0, sizeof(state->pp_display_cfg));
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
index 54670e0b1518..51a970fcb5d0 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
@@ -423,6 +423,8 @@ bool dc_stream_add_writeback(struct dc *dc,
 		return false;
 	}
+	dc_exit_ips_for_hw_access(dc);
+
 	wb_info->dwb_params.out_transfer_func = stream->out_transfer_func;
 	dwb = dc->res_pool->dwbc[wb_info->dwb_pipe_inst];
@@ -493,6 +495,8 @@ bool dc_stream_fc_disable_writeback(struct dc *dc,
 		return false;
 	}
+	dc_exit_ips_for_hw_access(dc);
+
 	if (dwb->funcs->set_fc_enable)
 		dwb->funcs->set_fc_enable(dwb, DWB_FRAME_CAPTURE_DISABLE);
@@ -542,6 +546,8 @@ bool dc_stream_remove_writeback(struct dc *dc,
 		return false;
 	}
+	dc_exit_ips_for_hw_access(dc);
+
 	/* disable writeback */
 	if (dc->hwss.disable_writeback) {
 		struct dwbc *dwb = dc->res_pool->dwbc[dwb_pipe_inst];
@@ -557,6 +563,8 @@ bool dc_stream_warmup_writeback(struct dc *dc,
 		int num_dwb,
 		struct dc_writeback_info *wb_info)
 {
+	dc_exit_ips_for_hw_access(dc);
+
 	if (dc->hwss.mmhubbub_warmup)
 		return dc->hwss.mmhubbub_warmup(dc, num_dwb, wb_info);
 	else
@@ -569,6 +577,8 @@ uint32_t dc_stream_get_vblank_counter(const struct dc_stream_state *stream)
 	struct resource_context *res_ctx =
 		&dc->current_state->res_ctx;
+	dc_exit_ips_for_hw_access(dc);
+
 	for (i = 0; i < MAX_PIPES; i++) {
 		struct timing_generator *tg = res_ctx->pipe_ctx[i].stream_res.tg;
@@ -597,6 +607,8 @@ bool dc_stream_send_dp_sdp(const struct dc_stream_state *stream,
 	dc = stream->ctx->dc;
 	res_ctx = &dc->current_state->res_ctx;
+	dc_exit_ips_for_hw_access(dc);
+
 	for (i = 0; i < MAX_PIPES; i++) {
 		struct pipe_ctx *pipe_ctx = &res_ctx->pipe_ctx[i];
@@ -628,6 +640,8 @@ bool dc_stream_get_scanoutpos(const struct dc_stream_state *stream,
 	struct resource_context *res_ctx =
 		&dc->current_state->res_ctx;
+	dc_exit_ips_for_hw_access(dc);
+
 	for (i = 0; i < MAX_PIPES; i++) {
 		struct timing_generator *tg = res_ctx->pipe_ctx[i].stream_res.tg;
@@ -664,6 +678,8 @@ bool dc_stream_dmdata_status_done(struct dc *dc, struct dc_stream_state *stream)
 	if (i == MAX_PIPES)
 		return true;
+	dc_exit_ips_for_hw_access(dc);
+
 	return dc->hwss.dmdata_status_done(pipe);
 }
@@ -698,6 +714,8 @@ bool dc_stream_set_dynamic_metadata(struct dc *dc,
 	pipe_ctx->stream->dmdata_address = attr->address;
+	dc_exit_ips_for_hw_access(dc);
+
 	dc->hwss.program_dmdata_engine(pipe_ctx);
 	if (hubp->funcs->dmdata_set_attributes != NULL &&
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_surface.c b/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
index 19a2c7140ae8..19140fb65787 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
@@ -161,6 +161,8 @@ const struct dc_plane_status *dc_plane_get_status(
 		break;
 	}
+	dc_exit_ips_for_hw_access(dc);
+
 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
 		struct pipe_ctx *pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[i];
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index c9317ea0258e..c789cc2e216d 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -51,7 +51,7 @@ struct aux_payload;
 struct set_config_cmd_payload;
 struct dmub_notification;
-#define DC_VER "3.2.266"
+#define DC_VER "3.2.271"
 #define MAX_SURFACES 3
 #define MAX_PLANES 6
@@ -429,12 +429,12 @@ struct dc_config {
 	bool force_bios_enable_lttpr;
 	uint8_t force_bios_fixed_vs;
 	int sdpif_request_limit_words_per_umc;
-	bool use_old_fixed_vs_sequence;
 	bool dc_mode_clk_limit_support;
 	bool EnableMinDispClkODM;
 	bool enable_auto_dpm_test_logs;
 	unsigned int disable_ips;
 	unsigned int disable_ips_in_vpb;
+	bool usb4_bw_alloc_support;
 };
 enum visual_confirm {
@@ -987,9 +987,11 @@ struct dc_debug_options {
 	bool psp_disabled_wa;
 	unsigned int ips2_eval_delay_us;
 	unsigned int ips2_entry_delay_us;
+	bool disable_dmub_reallow_idle;
 	bool disable_timeout;
 	bool disable_extblankadj;
 	unsigned int static_screen_wait_frames;
+	bool force_chroma_subsampling_1tap;
 };
 struct gpu_info_soc_bounding_box_v1_0;
@@ -1068,6 +1070,7 @@ struct dc {
 	} scratch;
 	struct dml2_configuration_options dml2_options;
+	enum dc_acpi_cm_power_state power_state;
 };
 enum frame_buffer_mode {
@@ -2219,11 +2222,9 @@ struct dc_sink_dsc_caps {
 	// 'true' if these are virtual DPCD's DSC caps (immediately upstream of sink in MST topology),
 	// 'false' if they are sink's DSC caps
 	bool is_virtual_dpcd_dsc;
-#if defined(CONFIG_DRM_AMD_DC_FP)
 	// 'true' if MST topology supports DSC passthrough for sink
 	// 'false' if MST topology does not support DSC passthrough
 	bool is_dsc_passthrough_supported;
-#endif
 	struct dsc_dec_dpcd_caps dsc_dec_caps;
 };
@@ -2325,6 +2326,7 @@ bool dc_is_plane_eligible_for_idle_optimizations(struct dc *dc, struct dc_plane_
 		struct dc_cursor_attributes *cursor_attr);
 void dc_allow_idle_optimizations(struct dc *dc, bool allow);
+void dc_exit_ips_for_hw_access(struct dc *dc);
 bool dc_dmub_is_ips_idle_state(struct dc *dc);
 /* set min and max memory clock to lowest and highest DPM level, respectively */
diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
index 2b79a0e5638e..a1477906fe4f 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
@@ -74,7 +74,10 @@ void dc_dmub_srv_wait_idle(struct dc_dmub_srv *dc_dmub_srv)
 	struct dc_context *dc_ctx = dc_dmub_srv->ctx;
 	enum dmub_status status;
-	status = dmub_srv_wait_for_idle(dmub, 100000);
+	do {
+		status = dmub_srv_wait_for_idle(dmub, 100000);
+	} while (dc_dmub_srv->ctx->dc->debug.disable_timeout && status != DMUB_STATUS_OK);
+
 	if (status != DMUB_STATUS_OK) {
 		DC_ERROR("Error waiting for DMUB idle: status=%d\n", status);
 		dc_dmub_srv_log_diagnostic_data(dc_dmub_srv);
@@ -145,7 +148,9 @@ bool dc_dmub_srv_cmd_list_queue_execute(struct dc_dmub_srv *dc_dmub_srv,
 			if (status == DMUB_STATUS_POWER_STATE_D3)
 				return false;
-			dmub_srv_wait_for_idle(dmub, 100000);
+			do {
+				status = dmub_srv_wait_for_idle(dmub, 100000);
+			} while (dc_dmub_srv->ctx->dc->debug.disable_timeout && status != DMUB_STATUS_OK);
 			/* Requeue the command. */
 			status = dmub_srv_cmd_queue(dmub, &cmd_list[i]);
@@ -186,7 +191,9 @@ bool dc_dmub_srv_wait_for_idle(struct dc_dmub_srv *dc_dmub_srv,
 	// Wait for DMUB to process command
 	if (wait_type != DM_DMUB_WAIT_TYPE_NO_WAIT) {
-		status = dmub_srv_wait_for_idle(dmub, 100000);
+		do {
+			status = dmub_srv_wait_for_idle(dmub, 100000);
+		} while (dc_dmub_srv->ctx->dc->debug.disable_timeout && status != DMUB_STATUS_OK);
 		if (status != DMUB_STATUS_OK) {
 			DC_LOG_DEBUG("No reply for DMUB command: status=%d\n", status);
@@ -780,21 +787,22 @@ static void populate_subvp_cmd_pipe_info(struct dc *dc,
 	} else if (subvp_pipe->next_odm_pipe) {
 		pipe_data->pipe_config.subvp_data.main_split_pipe_index = subvp_pipe->next_odm_pipe->pipe_idx;
 	} else {
-		pipe_data->pipe_config.subvp_data.main_split_pipe_index = 0;
+		pipe_data->pipe_config.subvp_data.main_split_pipe_index = 0xF;
 	}
 	// Find phantom pipe index based on phantom stream
 	for (j = 0; j < dc->res_pool->pipe_count; j++) {
 		struct pipe_ctx *phantom_pipe = &context->res_ctx.pipe_ctx[j];
-		if (phantom_pipe->stream == dc_state_get_paired_subvp_stream(context, subvp_pipe->stream)) {
+		if (resource_is_pipe_type(phantom_pipe, OTG_MASTER) &&
+		    phantom_pipe->stream == dc_state_get_paired_subvp_stream(context, subvp_pipe->stream)) {
 			pipe_data->pipe_config.subvp_data.phantom_pipe_index = phantom_pipe->stream_res.tg->inst;
 			if (phantom_pipe->bottom_pipe) {
 				pipe_data->pipe_config.subvp_data.phantom_split_pipe_index = phantom_pipe->bottom_pipe->plane_res.hubp->inst;
 			} else if (phantom_pipe->next_odm_pipe) {
 				pipe_data->pipe_config.subvp_data.phantom_split_pipe_index = phantom_pipe->next_odm_pipe->plane_res.hubp->inst;
 			} else {
-				pipe_data->pipe_config.subvp_data.phantom_split_pipe_index = 0;
+				pipe_data->pipe_config.subvp_data.phantom_split_pipe_index = 0xF;
 			}
 			break;
 		}
@@ -1195,6 +1203,9 @@ static void dc_dmub_srv_notify_idle(const struct dc *dc, bool allow_idle)
 	if (dc->debug.dmcub_emulation)
 		return;
+	if (!dc->ctx->dmub_srv || !dc->ctx->dmub_srv->dmub)
+		return;
+
 	memset(&cmd, 0, sizeof(cmd));
 	cmd.idle_opt_notify_idle.header.type = DMUB_CMD__IDLE_OPT;
 	cmd.idle_opt_notify_idle.header.sub_type = DMUB_CMD__IDLE_OPT_DCN_NOTIFY_IDLE;
@@ -1205,13 +1216,15 @@ static void dc_dmub_srv_notify_idle(const struct dc *dc, bool allow_idle)
 	cmd.idle_opt_notify_idle.cntl_data.driver_idle = allow_idle;
 	if (allow_idle) {
+		dc_dmub_srv_wait_idle(dc->ctx->dmub_srv);
+
 		if (dc->hwss.set_idle_state)
 			dc->hwss.set_idle_state(dc, true);
 	}
 	/* NOTE: This does
not use the "wake" interface since this is part of the wake path. */ /* We also do not perform a wait since DMCUB could enter idle after the notification. */ - dm_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT); + dm_execute_dmub_cmd(dc->ctx, &cmd, allow_idle ? DM_DMUB_WAIT_TYPE_NO_WAIT : DM_DMUB_WAIT_TYPE_WAIT); } static void dc_dmub_srv_exit_low_power_state(const struct dc *dc) @@ -1361,7 +1374,7 @@ bool dc_wake_and_execute_dmub_cmd_list(const struct dc_context *ctx, unsigned in else result = dm_execute_dmub_cmd(ctx, cmd, wait_type); - if (result && reallow_idle) + if (result && reallow_idle && !ctx->dc->debug.disable_dmub_reallow_idle) dc_dmub_srv_apply_idle_power_optimizations(ctx->dc, true); return result; @@ -1410,7 +1423,7 @@ bool dc_wake_and_execute_gpint(const struct dc_context *ctx, enum dmub_gpint_com result = dc_dmub_execute_gpint(ctx, command_code, param, response, wait_type); - if (result && reallow_idle) + if (result && reallow_idle && !ctx->dc->debug.disable_dmub_reallow_idle) dc_dmub_srv_apply_idle_power_optimizations(ctx->dc, true); return result; diff --git a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h index 811474f4419b..aae2f3a2660d 100644 --- a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h +++ b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h @@ -827,9 +827,7 @@ struct dc_dsc_config { uint32_t version_minor; /* DSC minor version. Full version is formed as 1.version_minor. */ bool ycbcr422_simple; /* Tell DSC engine to convert YCbCr 4:2:2 to 'YCbCr 4:2:2 simple'. */ int32_t rc_buffer_size; /* DSC RC buffer block size in bytes */ -#if defined(CONFIG_DRM_AMD_DC_FP) bool is_frl; /* indicate if DSC is applied based on HDMI FRL sink's capability */ -#endif bool is_dp; /* indicate if DSC is applied based on DP's capability */ uint32_t mst_pbn; /* pbn of display on dsc mst hub */ const struct dc_dsc_rc_params_override *rc_params_ovrd; /* DM owned memory. If not NULL, apply custom dsc rc params */ @@ -942,6 +940,7 @@ struct dc_crtc_timing { uint32_t hdmi_vic; uint32_t rid; uint32_t fr_index; + uint32_t frl_uncompressed_video_bandwidth_in_kbps; enum dc_timing_3d_format timing_3d_format; enum dc_color_depth display_color_depth; enum dc_pixel_encoding pixel_encoding; diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_audio.c b/drivers/gpu/drm/amd/display/dc/dce/dce_audio.c index f0458b8f00af..12f3c35b3a34 100644 --- a/drivers/gpu/drm/amd/display/dc/dce/dce_audio.c +++ b/drivers/gpu/drm/amd/display/dc/dce/dce_audio.c @@ -239,27 +239,294 @@ static void check_audio_bandwidth_hdmi( } } } +static struct fixed31_32 get_link_symbol_clk_freq_mhz(enum dc_link_rate link_rate) +{ + switch (link_rate) { + case LINK_RATE_LOW: + return dc_fixpt_from_int(162); /* 162 MHz */ + case LINK_RATE_HIGH: + return dc_fixpt_from_int(270); /* 270 MHz */ + case LINK_RATE_HIGH2: + return dc_fixpt_from_int(540); /* 540 MHz */ + case LINK_RATE_HIGH3: + return dc_fixpt_from_int(810); /* 810 MHz */ + case LINK_RATE_UHBR10: + return dc_fixpt_from_fraction(3125, 10); /* 312.5 MHz */ + case LINK_RATE_UHBR13_5: + return dc_fixpt_from_fraction(421875, 1000); /* 421.875 MHz */ + case LINK_RATE_UHBR20: + return dc_fixpt_from_int(625); /* 625 MHz */ + default: + /* Unexpected case, this requires debug if encountered. 
*/ + ASSERT(0); + return dc_fixpt_from_int(0); + } +} + +struct dp_audio_layout_config { + uint8_t layouts_per_sample_denom; + uint8_t symbols_per_layout; + uint8_t max_layouts_per_audio_sdp; +}; + +static void get_audio_layout_config( + uint32_t channel_count, + enum dp_link_encoding encoding, + struct dp_audio_layout_config *output) +{ + /* Assuming L-PCM audio. Current implementation uses max 1 layout per SDP, + * with each layout being the same size (8ch layout). + */ + if (encoding == DP_8b_10b_ENCODING) { + if (channel_count == 2) { + output->layouts_per_sample_denom = 4; + output->symbols_per_layout = 40; + output->max_layouts_per_audio_sdp = 1; + } else if (channel_count == 8 || channel_count == 6) { + output->layouts_per_sample_denom = 1; + output->symbols_per_layout = 40; + output->max_layouts_per_audio_sdp = 1; + } + } else if (encoding == DP_128b_132b_ENCODING) { + if (channel_count == 2) { + output->layouts_per_sample_denom = 4; + output->symbols_per_layout = 10; + output->max_layouts_per_audio_sdp = 1; + } else if (channel_count == 8 || channel_count == 6) { + output->layouts_per_sample_denom = 1; + output->symbols_per_layout = 10; + output->max_layouts_per_audio_sdp = 1; + } + } +} -/*For DP SST, calculate if specified sample rates can fit into a given timing */ -static void check_audio_bandwidth_dpsst( +static uint32_t get_av_stream_map_lane_count( + enum dp_link_encoding encoding, + enum dc_lane_count lane_count, + bool is_mst) +{ + uint32_t av_stream_map_lane_count = 0; + + if (encoding == DP_8b_10b_ENCODING) { + if (!is_mst) + av_stream_map_lane_count = lane_count; + else + av_stream_map_lane_count = 4; + } else if (encoding == DP_128b_132b_ENCODING) { + av_stream_map_lane_count = 4; + } + + ASSERT(av_stream_map_lane_count != 0); + + return av_stream_map_lane_count; +} + +static uint32_t get_audio_sdp_overhead( + enum dp_link_encoding encoding, + enum dc_lane_count lane_count, + bool is_mst) +{ + uint32_t audio_sdp_overhead = 0; + + if (encoding == DP_8b_10b_ENCODING) { + if (is_mst) + audio_sdp_overhead = 16; /* 4 * 2 + 8 */ + else + audio_sdp_overhead = lane_count * 2 + 8; + } else if (encoding == DP_128b_132b_ENCODING) { + audio_sdp_overhead = 10; /* 4 x 2.5 */ + } + + ASSERT(audio_sdp_overhead != 0); + + return audio_sdp_overhead; +} + +static uint32_t calculate_required_audio_bw_in_symbols( const struct audio_crtc_info *crtc_info, + const struct dp_audio_layout_config *layout_config, uint32_t channel_count, - union audio_sample_rates *sample_rates) + uint32_t sample_rate_hz, + uint32_t av_stream_map_lane_count, + uint32_t audio_sdp_overhead) +{ + /* DP spec recommends between 1.05 to 1.1 safety margin to prevent sample under-run */ + struct fixed31_32 audio_sdp_margin = dc_fixpt_from_fraction(110, 100); + struct fixed31_32 horizontal_line_freq_khz = dc_fixpt_from_fraction( + crtc_info->requested_pixel_clock_100Hz, crtc_info->h_total * 10); + struct fixed31_32 samples_per_line; + struct fixed31_32 layouts_per_line; + struct fixed31_32 symbols_per_sdp_max_layout; + struct fixed31_32 remainder; + uint32_t num_sdp_with_max_layouts; + uint32_t required_symbols_per_hblank; + + samples_per_line = dc_fixpt_from_fraction(sample_rate_hz, 1000); + samples_per_line = dc_fixpt_div(samples_per_line, horizontal_line_freq_khz); + layouts_per_line = dc_fixpt_div_int(samples_per_line, layout_config->layouts_per_sample_denom); + + num_sdp_with_max_layouts = dc_fixpt_floor( + dc_fixpt_div_int(layouts_per_line, layout_config->max_layouts_per_audio_sdp)); + symbols_per_sdp_max_layout = 
dc_fixpt_from_int( + layout_config->max_layouts_per_audio_sdp * layout_config->symbols_per_layout); + symbols_per_sdp_max_layout = dc_fixpt_add_int(symbols_per_sdp_max_layout, audio_sdp_overhead); + symbols_per_sdp_max_layout = dc_fixpt_mul(symbols_per_sdp_max_layout, audio_sdp_margin); + required_symbols_per_hblank = num_sdp_with_max_layouts; + required_symbols_per_hblank *= ((dc_fixpt_ceil(symbols_per_sdp_max_layout) + av_stream_map_lane_count) / + av_stream_map_lane_count) * av_stream_map_lane_count; + + if (num_sdp_with_max_layouts != dc_fixpt_ceil( + dc_fixpt_div_int(layouts_per_line, layout_config->max_layouts_per_audio_sdp))) { + remainder = dc_fixpt_sub_int(layouts_per_line, + num_sdp_with_max_layouts * layout_config->max_layouts_per_audio_sdp); + remainder = dc_fixpt_mul_int(remainder, layout_config->symbols_per_layout); + remainder = dc_fixpt_add_int(remainder, audio_sdp_overhead); + remainder = dc_fixpt_mul(remainder, audio_sdp_margin); + required_symbols_per_hblank += ((dc_fixpt_ceil(remainder) + av_stream_map_lane_count) / + av_stream_map_lane_count) * av_stream_map_lane_count; + } + + return required_symbols_per_hblank; +} + +/* Current calculation only applicable for 8b/10b MST and 128b/132b SST/MST. + */ +static uint32_t calculate_available_hblank_bw_in_symbols( + const struct audio_crtc_info *crtc_info, + const struct audio_dp_link_info *dp_link_info) { - /* do nothing */ + uint64_t hblank = crtc_info->h_total - crtc_info->h_active; + struct fixed31_32 hblank_time_msec = + dc_fixpt_from_fraction(hblank * 10, crtc_info->requested_pixel_clock_100Hz); + struct fixed31_32 lsclkfreq_mhz = + get_link_symbol_clk_freq_mhz(dp_link_info->link_rate); + struct fixed31_32 average_stream_sym_bw_frac; + struct fixed31_32 peak_stream_bw_kbps; + struct fixed31_32 bits_per_pixel; + struct fixed31_32 link_bw_kbps; + struct fixed31_32 available_stream_sym_count; + uint32_t available_hblank_bw = 0; /* in stream symbols */ + + if (crtc_info->dsc_bits_per_pixel) { + bits_per_pixel = dc_fixpt_from_fraction(crtc_info->dsc_bits_per_pixel, 16); + } else { + switch (crtc_info->color_depth) { + case COLOR_DEPTH_666: + bits_per_pixel = dc_fixpt_from_int(6); + break; + case COLOR_DEPTH_888: + bits_per_pixel = dc_fixpt_from_int(8); + break; + case COLOR_DEPTH_101010: + bits_per_pixel = dc_fixpt_from_int(10); + break; + case COLOR_DEPTH_121212: + bits_per_pixel = dc_fixpt_from_int(12); + break; + default: + /* Default to commonly supported color depth. */ + bits_per_pixel = dc_fixpt_from_int(8); + break; + } + + bits_per_pixel = dc_fixpt_mul_int(bits_per_pixel, 3); + + if (crtc_info->pixel_encoding == PIXEL_ENCODING_YCBCR422) { + bits_per_pixel = dc_fixpt_div_int(bits_per_pixel, 3); + bits_per_pixel = dc_fixpt_mul_int(bits_per_pixel, 2); + } else if (crtc_info->pixel_encoding == PIXEL_ENCODING_YCBCR420) { + bits_per_pixel = dc_fixpt_div_int(bits_per_pixel, 2); + } + } + + /* Use simple stream BW calculation because mainlink overhead is + * accounted for separately in the audio BW calculations. 
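For the uncompressed case above, the effective bits per pixel is three times the per-component depth, scaled by 2/3 for YCbCr 4:2:2 and halved for 4:2:0; with DSC the rate comes straight from dsc_bits_per_pixel, which the divide-by-16 treats as a value in 1/16-bpp units. A worked sketch of that selection (plain C with doubles instead of fixed31_32, illustrative only):

/* Effective bits per pixel for the audio bandwidth estimate.
 * dsc_bpp_x16 is the DSC target in 1/16-bpp units, 0 when DSC is off. */
static double effective_bpp(int bits_per_component, int dsc_bpp_x16,
			    int is_ycbcr422, int is_ycbcr420)
{
	double bpp;

	if (dsc_bpp_x16)
		return dsc_bpp_x16 / 16.0;

	bpp = 3.0 * bits_per_component;		/* RGB / YCbCr 4:4:4 */
	if (is_ycbcr422)
		bpp = bpp * 2.0 / 3.0;		/* chroma at half horizontal rate */
	else if (is_ycbcr420)
		bpp = bpp / 2.0;		/* chroma halved both ways */
	return bpp;
}

/* effective_bpp(10, 0, 1, 0) == 20.0; effective_bpp(8, 0, 0, 1) == 12.0 */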
+ */ + peak_stream_bw_kbps = dc_fixpt_from_fraction(crtc_info->requested_pixel_clock_100Hz, 10); + peak_stream_bw_kbps = dc_fixpt_mul(peak_stream_bw_kbps, bits_per_pixel); + link_bw_kbps = dc_fixpt_from_int(dp_link_info->link_bandwidth_kbps); + average_stream_sym_bw_frac = dc_fixpt_div(peak_stream_bw_kbps, link_bw_kbps); + + available_stream_sym_count = dc_fixpt_mul_int(hblank_time_msec, 1000); + available_stream_sym_count = dc_fixpt_mul(available_stream_sym_count, lsclkfreq_mhz); + available_stream_sym_count = dc_fixpt_mul(available_stream_sym_count, average_stream_sym_bw_frac); + available_hblank_bw = dc_fixpt_floor(available_stream_sym_count); + available_hblank_bw *= dp_link_info->lane_count; + available_hblank_bw -= crtc_info->dsc_num_slices * 4; /* EOC overhead */ + + if (available_hblank_bw < dp_link_info->hblank_min_symbol_width) + available_hblank_bw = dp_link_info->hblank_min_symbol_width; + + if (available_hblank_bw < 12) + available_hblank_bw = 0; + else + available_hblank_bw -= 12; /* Main link overhead */ + + return available_hblank_bw; } -/*For DP MST, calculate if specified sample rates can fit into a given timing */ -static void check_audio_bandwidth_dpmst( +static void check_audio_bandwidth_dp( const struct audio_crtc_info *crtc_info, + const struct audio_dp_link_info *dp_link_info, uint32_t channel_count, union audio_sample_rates *sample_rates) { - /* do nothing */ + struct dp_audio_layout_config layout_config = {0}; + uint32_t available_hblank_bw; + uint32_t av_stream_map_lane_count; + uint32_t audio_sdp_overhead; + + /* TODO: Add validation for SST 8b/10 case */ + if (!dp_link_info->is_mst && dp_link_info->encoding == DP_8b_10b_ENCODING) + return; + + available_hblank_bw = calculate_available_hblank_bw_in_symbols( + crtc_info, dp_link_info); + av_stream_map_lane_count = get_av_stream_map_lane_count( + dp_link_info->encoding, dp_link_info->lane_count, dp_link_info->is_mst); + audio_sdp_overhead = get_audio_sdp_overhead( + dp_link_info->encoding, dp_link_info->lane_count, dp_link_info->is_mst); + get_audio_layout_config( + channel_count, dp_link_info->encoding, &layout_config); + + if (layout_config.max_layouts_per_audio_sdp == 0 || + layout_config.symbols_per_layout == 0 || + layout_config.layouts_per_sample_denom == 0) { + return; + } + if (available_hblank_bw < calculate_required_audio_bw_in_symbols( + crtc_info, &layout_config, channel_count, 192000, + av_stream_map_lane_count, audio_sdp_overhead)) + sample_rates->rate.RATE_192 = 0; + if (available_hblank_bw < calculate_required_audio_bw_in_symbols( + crtc_info, &layout_config, channel_count, 176400, + av_stream_map_lane_count, audio_sdp_overhead)) + sample_rates->rate.RATE_176_4 = 0; + if (available_hblank_bw < calculate_required_audio_bw_in_symbols( + crtc_info, &layout_config, channel_count, 96000, + av_stream_map_lane_count, audio_sdp_overhead)) + sample_rates->rate.RATE_96 = 0; + if (available_hblank_bw < calculate_required_audio_bw_in_symbols( + crtc_info, &layout_config, channel_count, 88200, + av_stream_map_lane_count, audio_sdp_overhead)) + sample_rates->rate.RATE_88_2 = 0; + if (available_hblank_bw < calculate_required_audio_bw_in_symbols( + crtc_info, &layout_config, channel_count, 48000, + av_stream_map_lane_count, audio_sdp_overhead)) + sample_rates->rate.RATE_48 = 0; + if (available_hblank_bw < calculate_required_audio_bw_in_symbols( + crtc_info, &layout_config, channel_count, 44100, + av_stream_map_lane_count, audio_sdp_overhead)) + sample_rates->rate.RATE_44_1 = 0; + if (available_hblank_bw < 
calculate_required_audio_bw_in_symbols( + crtc_info, &layout_config, channel_count, 32000, + av_stream_map_lane_count, audio_sdp_overhead)) + sample_rates->rate.RATE_32 = 0; } static void check_audio_bandwidth( const struct audio_crtc_info *crtc_info, + const struct audio_dp_link_info *dp_link_info, uint32_t channel_count, enum signal_type signal, union audio_sample_rates *sample_rates) @@ -271,12 +538,9 @@ static void check_audio_bandwidth( break; case SIGNAL_TYPE_EDP: case SIGNAL_TYPE_DISPLAY_PORT: - check_audio_bandwidth_dpsst( - crtc_info, channel_count, sample_rates); - break; case SIGNAL_TYPE_DISPLAY_PORT_MST: - check_audio_bandwidth_dpmst( - crtc_info, channel_count, sample_rates); + check_audio_bandwidth_dp( + crtc_info, dp_link_info, channel_count, sample_rates); break; default: break; @@ -394,7 +658,8 @@ void dce_aud_az_configure( struct audio *audio, enum signal_type signal, const struct audio_crtc_info *crtc_info, - const struct audio_info *audio_info) + const struct audio_info *audio_info, + const struct audio_dp_link_info *dp_link_info) { struct dce_audio *aud = DCE_AUD(audio); @@ -529,6 +794,7 @@ void dce_aud_az_configure( check_audio_bandwidth( crtc_info, + dp_link_info, channel_count, signal, &sample_rates); @@ -588,6 +854,7 @@ void dce_aud_az_configure( check_audio_bandwidth( crtc_info, + dp_link_info, 8, signal, &sample_rate); diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_audio.h b/drivers/gpu/drm/amd/display/dc/dce/dce_audio.h index dbd2cfed0603..539f881928d1 100644 --- a/drivers/gpu/drm/amd/display/dc/dce/dce_audio.h +++ b/drivers/gpu/drm/amd/display/dc/dce/dce_audio.h @@ -170,7 +170,8 @@ void dce_aud_az_disable(struct audio *audio); void dce_aud_az_configure(struct audio *audio, enum signal_type signal, const struct audio_crtc_info *crtc_info, - const struct audio_info *audio_info); + const struct audio_info *audio_info, + const struct audio_dp_link_info *dp_link_info); void dce_aud_wall_dto_setup(struct audio *audio, enum signal_type signal, diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c index 38e4797e9476..b010814706fe 100644 --- a/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c +++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c @@ -258,7 +258,7 @@ static void dmub_replay_residency(struct dmub_replay *dmub, uint8_t panel_inst, *residency = 0; } -/** +/* * Set REPLAY power optimization flags and coasting vtotal. */ static void dmub_replay_set_power_opt_and_coasting_vtotal(struct dmub_replay *dmub, @@ -280,7 +280,7 @@ static void dmub_replay_set_power_opt_and_coasting_vtotal(struct dmub_replay *dm dc_wake_and_execute_dmub_cmd(dc, &cmd, DM_DMUB_WAIT_TYPE_WAIT); } -/** +/* * send Replay general cmd to DMUB. 
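Each of the checks above applies the same rule: compute the symbols a given sample rate needs during hblank and clear its capability bit when that exceeds the available budget. A table-driven way to express the same idea, purely as an illustration (plain C, hypothetical names, not the DC code):

/* Clear every sample-rate capability whose hblank requirement, in link
 * symbols, exceeds the available budget. */
struct rate_cap {
	unsigned int rate_hz;
	unsigned int *cap_bit;	/* capability flag to clear */
};

static void prune_sample_rates(unsigned int available_symbols,
			       unsigned int (*required_symbols)(unsigned int rate_hz),
			       struct rate_cap *caps, int num_caps)
{
	int i;

	for (i = 0; i < num_caps; i++)
		if (available_symbols < required_symbols(caps[i].rate_hz))
			*caps[i].cap_bit = 0;
}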
*/ static void dmub_replay_send_cmd(struct dmub_replay *dmub, diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c index 3538973bd0c6..b7e57aa27361 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c @@ -62,6 +62,26 @@ void cm_helper_program_color_matrices( } +void cm_helper_read_color_matrices(struct dc_context *ctx, + uint16_t *regval, + const struct color_matrices_reg *reg) +{ + uint32_t cur_csc_reg, regval0, regval1; + unsigned int i = 0; + + for (cur_csc_reg = reg->csc_c11_c12; + cur_csc_reg <= reg->csc_c33_c34; cur_csc_reg++) { + REG_GET_2(cur_csc_reg, + csc_c11, ®val0, + csc_c12, ®val1); + + regval[2 * i] = regval0; + regval[(2 * i) + 1] = regval1; + + i++; + } +} + void cm_helper_program_xfer_func( struct dc_context *ctx, const struct pwl_params *params, diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.h index 0a68b63d6126..decc50b1ac53 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.h +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.h @@ -114,5 +114,7 @@ bool cm_helper_translate_curve_to_degamma_hw_format( const struct dc_transfer_func *output_tf, struct pwl_params *lut_params); - +void cm_helper_read_color_matrices(struct dc_context *ctx, + uint16_t *regval, + const struct color_matrices_reg *reg); #endif diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c index ef52e6b6eccf..4e391fd1d71c 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c @@ -543,7 +543,8 @@ static const struct dpp_funcs dcn10_dpp_funcs = { .dpp_set_hdr_multiplier = dpp1_set_hdr_multiplier, .dpp_program_blnd_lut = NULL, .dpp_program_shaper_lut = NULL, - .dpp_program_3dlut = NULL + .dpp_program_3dlut = NULL, + .dpp_get_gamut_remap = dpp1_cm_get_gamut_remap, }; static struct dpp_caps dcn10_dpp_cap = { diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.h index c9e045666dcc..a039eedc7c24 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.h +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.h @@ -1521,4 +1521,7 @@ void dpp1_construct(struct dcn10_dpp *dpp1, const struct dcn_dpp_registers *tf_regs, const struct dcn_dpp_shift *tf_shift, const struct dcn_dpp_mask *tf_mask); + +void dpp1_cm_get_gamut_remap(struct dpp *dpp_base, + struct dpp_grph_csc_adjustment *adjust); #endif diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c index 904c2d278998..2f994a3a0b9c 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c @@ -98,7 +98,7 @@ static void program_gamut_remap( if (regval == NULL || select == GAMUT_REMAP_BYPASS) { REG_SET(CM_GAMUT_REMAP_CONTROL, 0, - CM_GAMUT_REMAP_MODE, 0); + CM_GAMUT_REMAP_MODE, 0); return; } switch (select) { @@ -181,6 +181,74 @@ void dpp1_cm_set_gamut_remap( } } +static void read_gamut_remap(struct dcn10_dpp *dpp, + uint16_t *regval, + enum gamut_remap_select *select) +{ + struct color_matrices_reg gam_regs; + uint32_t selection; + + REG_GET(CM_GAMUT_REMAP_CONTROL, + CM_GAMUT_REMAP_MODE, &selection); + + *select = selection; + + gam_regs.shifts.csc_c11 = dpp->tf_shift->CM_GAMUT_REMAP_C11; + gam_regs.masks.csc_c11 = 
dpp->tf_mask->CM_GAMUT_REMAP_C11; + gam_regs.shifts.csc_c12 = dpp->tf_shift->CM_GAMUT_REMAP_C12; + gam_regs.masks.csc_c12 = dpp->tf_mask->CM_GAMUT_REMAP_C12; + + if (*select == GAMUT_REMAP_COEFF) { + + gam_regs.csc_c11_c12 = REG(CM_GAMUT_REMAP_C11_C12); + gam_regs.csc_c33_c34 = REG(CM_GAMUT_REMAP_C33_C34); + + cm_helper_read_color_matrices( + dpp->base.ctx, + regval, + &gam_regs); + + } else if (*select == GAMUT_REMAP_COMA_COEFF) { + + gam_regs.csc_c11_c12 = REG(CM_COMA_C11_C12); + gam_regs.csc_c33_c34 = REG(CM_COMA_C33_C34); + + cm_helper_read_color_matrices( + dpp->base.ctx, + regval, + &gam_regs); + + } else if (*select == GAMUT_REMAP_COMB_COEFF) { + + gam_regs.csc_c11_c12 = REG(CM_COMB_C11_C12); + gam_regs.csc_c33_c34 = REG(CM_COMB_C33_C34); + + cm_helper_read_color_matrices( + dpp->base.ctx, + regval, + &gam_regs); + } +} + +void dpp1_cm_get_gamut_remap(struct dpp *dpp_base, + struct dpp_grph_csc_adjustment *adjust) +{ + struct dcn10_dpp *dpp = TO_DCN10_DPP(dpp_base); + uint16_t arr_reg_val[12]; + enum gamut_remap_select select; + + read_gamut_remap(dpp, arr_reg_val, &select); + + if (select == GAMUT_REMAP_BYPASS) { + adjust->gamut_adjust_type = GRAPHICS_GAMUT_ADJUST_TYPE_BYPASS; + return; + } + + adjust->gamut_adjust_type = GRAPHICS_GAMUT_ADJUST_TYPE_SW; + convert_hw_matrix(adjust->temperature_matrix, + arr_reg_val, ARRAY_SIZE(arr_reg_val)); +} + static void dpp1_cm_program_color_matrix( struct dcn10_dpp *dpp, const uint16_t *regval) diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_opp.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_opp.c index 0dec57679269..48a40dcc7050 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_opp.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_opp.c @@ -23,6 +23,7 @@ * */ +#include "core_types.h" #include "dm_services.h" #include "dcn10_opp.h" #include "reg_helper.h" @@ -160,6 +161,9 @@ static void opp1_set_pixel_encoding( struct dcn10_opp *oppn10, const struct clamping_and_pixel_encoding_params *params) { + bool force_chroma_subsampling_1tap = + oppn10->base.ctx->dc->debug.force_chroma_subsampling_1tap; + switch (params->pixel_encoding) { case PIXEL_ENCODING_RGB: @@ -178,6 +182,9 @@ static void opp1_set_pixel_encoding( default: break; } + + if (force_chroma_subsampling_1tap) + REG_UPDATE(FMT_CONTROL, FMT_SUBSAMPLING_MODE, 0); } /** diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp.c index eaa7032f0f1a..1516c0a48726 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp.c +++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp.c @@ -55,21 +55,23 @@ void dpp20_read_state(struct dpp *dpp_base, REG_GET(DPP_CONTROL, DPP_CLOCK_ENABLE, &s->is_enabled); + + // Degamma LUT (RAM) REG_GET(CM_DGAM_CONTROL, - CM_DGAM_LUT_MODE, &s->dgam_lut_mode); - // BGAM has no ROM, and definition is different, can't reuse same dump - //REG_GET(CM_BLNDGAM_CONTROL, - // CM_BLNDGAM_LUT_MODE, &s->rgam_lut_mode); - REG_GET(CM_GAMUT_REMAP_CONTROL, - CM_GAMUT_REMAP_MODE, &s->gamut_remap_mode); - if (s->gamut_remap_mode) { - s->gamut_remap_c11_c12 = REG_READ(CM_GAMUT_REMAP_C11_C12); - s->gamut_remap_c13_c14 = REG_READ(CM_GAMUT_REMAP_C13_C14); - s->gamut_remap_c21_c22 = REG_READ(CM_GAMUT_REMAP_C21_C22); - s->gamut_remap_c23_c24 = REG_READ(CM_GAMUT_REMAP_C23_C24); - s->gamut_remap_c31_c32 = REG_READ(CM_GAMUT_REMAP_C31_C32); - s->gamut_remap_c33_c34 = REG_READ(CM_GAMUT_REMAP_C33_C34); - } + CM_DGAM_LUT_MODE, &s->dgam_lut_mode); + + // Shaper LUT (RAM), 3D LUT (mode, bit-depth, size) + REG_GET(CM_SHAPER_CONTROL, + 
CM_SHAPER_LUT_MODE, &s->shaper_lut_mode); + REG_GET_2(CM_3DLUT_READ_WRITE_CONTROL, + CM_3DLUT_CONFIG_STATUS, &s->lut3d_mode, + CM_3DLUT_30BIT_EN, &s->lut3d_bit_depth); + REG_GET(CM_3DLUT_MODE, + CM_3DLUT_SIZE, &s->lut3d_size); + + // Blend/Out Gamma (RAM) + REG_GET(CM_BLNDGAM_LUT_WRITE_EN_MASK, + CM_BLNDGAM_CONFIG_STATUS, &s->rgam_lut_mode); } void dpp2_power_on_obuf( @@ -393,6 +395,7 @@ static struct dpp_funcs dcn20_dpp_funcs = { .set_optional_cursor_attributes = dpp1_cnv_set_optional_cursor_attributes, .dpp_dppclk_control = dpp1_dppclk_control, .dpp_set_hdr_multiplier = dpp2_set_hdr_multiplier, + .dpp_get_gamut_remap = dpp2_cm_get_gamut_remap, }; static struct dpp_caps dcn20_dpp_cap = { diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp.h b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp.h index e735363d0051..672cde46c4b9 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp.h +++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp.h @@ -775,4 +775,7 @@ bool dpp2_construct(struct dcn20_dpp *dpp2, void dpp2_power_on_obuf( struct dpp *dpp_base, bool power_on); + +void dpp2_cm_get_gamut_remap(struct dpp *dpp_base, + struct dpp_grph_csc_adjustment *adjust); #endif /* __DC_HWSS_DCN20_H__ */ diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp_cm.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp_cm.c index 598caa508d43..58dc69926e8a 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp_cm.c +++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp_cm.c @@ -234,6 +234,61 @@ void dpp2_cm_set_gamut_remap( } } +static void read_gamut_remap(struct dcn20_dpp *dpp, + uint16_t *regval, + enum dcn20_gamut_remap_select *select) +{ + struct color_matrices_reg gam_regs; + uint32_t selection; + + IX_REG_GET(CM_TEST_DEBUG_INDEX, CM_TEST_DEBUG_DATA, + CM_TEST_DEBUG_DATA_STATUS_IDX, + CM_TEST_DEBUG_DATA_GAMUT_REMAP_MODE, &selection); + + *select = selection; + + gam_regs.shifts.csc_c11 = dpp->tf_shift->CM_GAMUT_REMAP_C11; + gam_regs.masks.csc_c11 = dpp->tf_mask->CM_GAMUT_REMAP_C11; + gam_regs.shifts.csc_c12 = dpp->tf_shift->CM_GAMUT_REMAP_C12; + gam_regs.masks.csc_c12 = dpp->tf_mask->CM_GAMUT_REMAP_C12; + + if (*select == DCN2_GAMUT_REMAP_COEF_A) { + gam_regs.csc_c11_c12 = REG(CM_GAMUT_REMAP_C11_C12); + gam_regs.csc_c33_c34 = REG(CM_GAMUT_REMAP_C33_C34); + + cm_helper_read_color_matrices(dpp->base.ctx, + regval, + &gam_regs); + + } else if (*select == DCN2_GAMUT_REMAP_COEF_B) { + gam_regs.csc_c11_c12 = REG(CM_GAMUT_REMAP_B_C11_C12); + gam_regs.csc_c33_c34 = REG(CM_GAMUT_REMAP_B_C33_C34); + + cm_helper_read_color_matrices(dpp->base.ctx, + regval, + &gam_regs); + } +} + +void dpp2_cm_get_gamut_remap(struct dpp *dpp_base, + struct dpp_grph_csc_adjustment *adjust) +{ + struct dcn20_dpp *dpp = TO_DCN20_DPP(dpp_base); + uint16_t arr_reg_val[12]; + enum dcn20_gamut_remap_select select; + + read_gamut_remap(dpp, arr_reg_val, &select); + + if (select == DCN2_GAMUT_REMAP_BYPASS) { + adjust->gamut_adjust_type = GRAPHICS_GAMUT_ADJUST_TYPE_BYPASS; + return; + } + + adjust->gamut_adjust_type = GRAPHICS_GAMUT_ADJUST_TYPE_SW; + convert_hw_matrix(adjust->temperature_matrix, + arr_reg_val, ARRAY_SIZE(arr_reg_val)); +} + void dpp2_program_input_csc( struct dpp *dpp_base, enum dc_color_space color_space, diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c index 5da6e44f284a..16b5ff208d14 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c +++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c @@ -542,8 +542,30 @@ static struct mpcc 
*mpc2_get_mpcc_for_dpp(struct mpc_tree *tree, int dpp_id) return NULL; } +static void mpc2_read_mpcc_state( + struct mpc *mpc, + int mpcc_inst, + struct mpcc_state *s) +{ + struct dcn20_mpc *mpc20 = TO_DCN20_MPC(mpc); + + REG_GET(MPCC_OPP_ID[mpcc_inst], MPCC_OPP_ID, &s->opp_id); + REG_GET(MPCC_TOP_SEL[mpcc_inst], MPCC_TOP_SEL, &s->dpp_id); + REG_GET(MPCC_BOT_SEL[mpcc_inst], MPCC_BOT_SEL, &s->bot_mpcc_id); + REG_GET_4(MPCC_CONTROL[mpcc_inst], MPCC_MODE, &s->mode, + MPCC_ALPHA_BLND_MODE, &s->alpha_mode, + MPCC_ALPHA_MULTIPLIED_MODE, &s->pre_multiplied_alpha, + MPCC_BLND_ACTIVE_OVERLAP_ONLY, &s->overlap_only); + REG_GET_2(MPCC_STATUS[mpcc_inst], MPCC_IDLE, &s->idle, + MPCC_BUSY, &s->busy); + + /* Gamma block state */ + REG_GET(MPCC_OGAM_LUT_RAM_CONTROL[mpcc_inst], + MPCC_OGAM_CONFIG_STATUS, &s->rgam_mode); +} + static const struct mpc_funcs dcn20_mpc_funcs = { - .read_mpcc_state = mpc1_read_mpcc_state, + .read_mpcc_state = mpc2_read_mpcc_state, .insert_plane = mpc1_insert_plane, .remove_mpcc = mpc1_remove_mpcc, .mpc_init = mpc1_mpc_init, diff --git a/drivers/gpu/drm/amd/display/dc/dcn201/dcn201_dpp.c b/drivers/gpu/drm/amd/display/dc/dcn201/dcn201_dpp.c index a7268027a472..f809a7d21033 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn201/dcn201_dpp.c +++ b/drivers/gpu/drm/amd/display/dc/dcn201/dcn201_dpp.c @@ -275,6 +275,7 @@ static struct dpp_funcs dcn201_dpp_funcs = { .set_optional_cursor_attributes = dpp1_cnv_set_optional_cursor_attributes, .dpp_dppclk_control = dpp1_dppclk_control, .dpp_set_hdr_multiplier = dpp2_set_hdr_multiplier, + .dpp_get_gamut_remap = dpp2_cm_get_gamut_remap, }; static struct dpp_caps dcn201_dpp_cap = { diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c index 11f7746f3a65..a3a769aad042 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c +++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c @@ -44,12 +44,45 @@ void dpp30_read_state(struct dpp *dpp_base, struct dcn_dpp_state *s) { struct dcn3_dpp *dpp = TO_DCN30_DPP(dpp_base); + uint32_t gamcor_lut_mode, rgam_lut_mode; REG_GET(DPP_CONTROL, - DPP_CLOCK_ENABLE, &s->is_enabled); + DPP_CLOCK_ENABLE, &s->is_enabled); + + // Pre-degamma (ROM) + REG_GET_2(PRE_DEGAM, + PRE_DEGAM_MODE, &s->pre_dgam_mode, + PRE_DEGAM_SELECT, &s->pre_dgam_select); + + // Gamma Correction (RAM) + REG_GET(CM_GAMCOR_CONTROL, + CM_GAMCOR_MODE_CURRENT, &s->gamcor_mode); + if (s->gamcor_mode) { + REG_GET(CM_GAMCOR_CONTROL, CM_GAMCOR_SELECT_CURRENT, &gamcor_lut_mode); + if (!gamcor_lut_mode) + s->gamcor_mode = LUT_RAM_A; // Otherwise, LUT_RAM_B + } - // TODO: Implement for DCN3 + // Shaper LUT (RAM), 3D LUT (mode, bit-depth, size) + REG_GET(CM_SHAPER_CONTROL, + CM_SHAPER_LUT_MODE, &s->shaper_lut_mode); + REG_GET(CM_3DLUT_MODE, + CM_3DLUT_MODE_CURRENT, &s->lut3d_mode); + REG_GET(CM_3DLUT_READ_WRITE_CONTROL, + CM_3DLUT_30BIT_EN, &s->lut3d_bit_depth); + REG_GET(CM_3DLUT_MODE, + CM_3DLUT_SIZE, &s->lut3d_size); + + // Blend/Out Gamma (RAM) + REG_GET(CM_BLNDGAM_CONTROL, + CM_BLNDGAM_MODE_CURRENT, &s->rgam_lut_mode); + if (s->rgam_lut_mode){ + REG_GET(CM_BLNDGAM_CONTROL, CM_BLNDGAM_SELECT_CURRENT, &rgam_lut_mode); + if (!rgam_lut_mode) + s->rgam_lut_mode = LUT_RAM_A; // Otherwise, LUT_RAM_B + } } + /*program post scaler scs block in dpp CM*/ void dpp3_program_post_csc( struct dpp *dpp_base, @@ -1462,6 +1495,7 @@ static struct dpp_funcs dcn30_dpp_funcs = { .set_optional_cursor_attributes = dpp1_cnv_set_optional_cursor_attributes, .dpp_dppclk_control = dpp1_dppclk_control, 
.dpp_set_hdr_multiplier = dpp3_set_hdr_multiplier, + .dpp_get_gamut_remap = dpp3_cm_get_gamut_remap, }; diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.h b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.h index cea3208e4ab1..2ac8045a87a1 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.h +++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.h @@ -637,4 +637,6 @@ void dpp3_program_cm_dealpha( struct dpp *dpp_base, uint32_t enable, uint32_t additive_blending); +void dpp3_cm_get_gamut_remap(struct dpp *dpp_base, + struct dpp_grph_csc_adjustment *adjust); #endif /* __DC_HWSS_DCN30_H__ */ diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp_cm.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp_cm.c index e43f77c11c00..54ec144f7b81 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp_cm.c +++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp_cm.c @@ -408,3 +408,57 @@ void dpp3_cm_set_gamut_remap( program_gamut_remap(dpp, arr_reg_val, gamut_mode); } } + +static void read_gamut_remap(struct dcn3_dpp *dpp, + uint16_t *regval, + int *select) +{ + struct color_matrices_reg gam_regs; + uint32_t selection; + + //current coefficient set in use + REG_GET(CM_GAMUT_REMAP_CONTROL, CM_GAMUT_REMAP_MODE_CURRENT, &selection); + + *select = selection; + + gam_regs.shifts.csc_c11 = dpp->tf_shift->CM_GAMUT_REMAP_C11; + gam_regs.masks.csc_c11 = dpp->tf_mask->CM_GAMUT_REMAP_C11; + gam_regs.shifts.csc_c12 = dpp->tf_shift->CM_GAMUT_REMAP_C12; + gam_regs.masks.csc_c12 = dpp->tf_mask->CM_GAMUT_REMAP_C12; + + if (*select == GAMUT_REMAP_COEFF) { + gam_regs.csc_c11_c12 = REG(CM_GAMUT_REMAP_C11_C12); + gam_regs.csc_c33_c34 = REG(CM_GAMUT_REMAP_C33_C34); + + cm_helper_read_color_matrices(dpp->base.ctx, + regval, + &gam_regs); + + } else if (*select == GAMUT_REMAP_COMA_COEFF) { + gam_regs.csc_c11_c12 = REG(CM_GAMUT_REMAP_B_C11_C12); + gam_regs.csc_c33_c34 = REG(CM_GAMUT_REMAP_B_C33_C34); + + cm_helper_read_color_matrices(dpp->base.ctx, + regval, + &gam_regs); + } +} + +void dpp3_cm_get_gamut_remap(struct dpp *dpp_base, + struct dpp_grph_csc_adjustment *adjust) +{ + struct dcn3_dpp *dpp = TO_DCN30_DPP(dpp_base); + uint16_t arr_reg_val[12]; + int select; + + read_gamut_remap(dpp, arr_reg_val, &select); + + if (select == GAMUT_REMAP_BYPASS) { + adjust->gamut_adjust_type = GRAPHICS_GAMUT_ADJUST_TYPE_BYPASS; + return; + } + + adjust->gamut_adjust_type = GRAPHICS_GAMUT_ADJUST_TYPE_SW; + convert_hw_matrix(adjust->temperature_matrix, + arr_reg_val, ARRAY_SIZE(arr_reg_val)); +} diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c index d1500b223858..bf3386cd444d 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c +++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c @@ -1129,6 +1129,64 @@ void mpc3_set_gamut_remap( } } +static void read_gamut_remap(struct dcn30_mpc *mpc30, + int mpcc_id, + uint16_t *regval, + uint32_t *select) +{ + struct color_matrices_reg gam_regs; + + //current coefficient set in use + REG_GET(MPCC_GAMUT_REMAP_MODE[mpcc_id], MPCC_GAMUT_REMAP_MODE_CURRENT, select); + + gam_regs.shifts.csc_c11 = mpc30->mpc_shift->MPCC_GAMUT_REMAP_C11_A; + gam_regs.masks.csc_c11 = mpc30->mpc_mask->MPCC_GAMUT_REMAP_C11_A; + gam_regs.shifts.csc_c12 = mpc30->mpc_shift->MPCC_GAMUT_REMAP_C12_A; + gam_regs.masks.csc_c12 = mpc30->mpc_mask->MPCC_GAMUT_REMAP_C12_A; + + if (*select == GAMUT_REMAP_COEFF) { + gam_regs.csc_c11_c12 = REG(MPC_GAMUT_REMAP_C11_C12_A[mpcc_id]); + gam_regs.csc_c33_c34 = REG(MPC_GAMUT_REMAP_C33_C34_A[mpcc_id]); + + 
cm_helper_read_color_matrices( + mpc30->base.ctx, + regval, + &gam_regs); + + } else if (*select == GAMUT_REMAP_COMA_COEFF) { + + gam_regs.csc_c11_c12 = REG(MPC_GAMUT_REMAP_C11_C12_B[mpcc_id]); + gam_regs.csc_c33_c34 = REG(MPC_GAMUT_REMAP_C33_C34_B[mpcc_id]); + + cm_helper_read_color_matrices( + mpc30->base.ctx, + regval, + &gam_regs); + + } + +} + +void mpc3_get_gamut_remap(struct mpc *mpc, + int mpcc_id, + struct mpc_grph_gamut_adjustment *adjust) +{ + struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc); + uint16_t arr_reg_val[12]; + int select; + + read_gamut_remap(mpc30, mpcc_id, arr_reg_val, &select); + + if (select == GAMUT_REMAP_BYPASS) { + adjust->gamut_adjust_type = GRAPHICS_GAMUT_ADJUST_TYPE_BYPASS; + return; + } + + adjust->gamut_adjust_type = GRAPHICS_GAMUT_ADJUST_TYPE_SW; + convert_hw_matrix(adjust->temperature_matrix, + arr_reg_val, ARRAY_SIZE(arr_reg_val)); +} + bool mpc3_program_3dlut( struct mpc *mpc, const struct tetrahedral_params *params, @@ -1382,8 +1440,54 @@ static void mpc3_set_mpc_mem_lp_mode(struct mpc *mpc) } } +static void mpc3_read_mpcc_state( + struct mpc *mpc, + int mpcc_inst, + struct mpcc_state *s) +{ + struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc); + uint32_t rmu_status = 0xf; + + REG_GET(MPCC_OPP_ID[mpcc_inst], MPCC_OPP_ID, &s->opp_id); + REG_GET(MPCC_TOP_SEL[mpcc_inst], MPCC_TOP_SEL, &s->dpp_id); + REG_GET(MPCC_BOT_SEL[mpcc_inst], MPCC_BOT_SEL, &s->bot_mpcc_id); + REG_GET_4(MPCC_CONTROL[mpcc_inst], MPCC_MODE, &s->mode, + MPCC_ALPHA_BLND_MODE, &s->alpha_mode, + MPCC_ALPHA_MULTIPLIED_MODE, &s->pre_multiplied_alpha, + MPCC_BLND_ACTIVE_OVERLAP_ONLY, &s->overlap_only); + REG_GET_2(MPCC_STATUS[mpcc_inst], MPCC_IDLE, &s->idle, + MPCC_BUSY, &s->busy); + + /* Color blocks state */ + REG_GET(MPC_RMU_CONTROL, MPC_RMU0_MUX_STATUS, &rmu_status); + + if (rmu_status == mpcc_inst) { + REG_GET(SHAPER_CONTROL[0], + MPC_RMU_SHAPER_LUT_MODE_CURRENT, &s->shaper_lut_mode); + REG_GET(RMU_3DLUT_MODE[0], + MPC_RMU_3DLUT_MODE_CURRENT, &s->lut3d_mode); + REG_GET(RMU_3DLUT_READ_WRITE_CONTROL[0], + MPC_RMU_3DLUT_30BIT_EN, &s->lut3d_bit_depth); + REG_GET(RMU_3DLUT_MODE[0], + MPC_RMU_3DLUT_SIZE, &s->lut3d_size); + } else { + REG_GET(SHAPER_CONTROL[1], + MPC_RMU_SHAPER_LUT_MODE_CURRENT, &s->shaper_lut_mode); + REG_GET(RMU_3DLUT_MODE[1], + MPC_RMU_3DLUT_MODE_CURRENT, &s->lut3d_mode); + REG_GET(RMU_3DLUT_READ_WRITE_CONTROL[1], + MPC_RMU_3DLUT_30BIT_EN, &s->lut3d_bit_depth); + REG_GET(RMU_3DLUT_MODE[1], + MPC_RMU_3DLUT_SIZE, &s->lut3d_size); + } + + REG_GET_2(MPCC_OGAM_CONTROL[mpcc_inst], + MPCC_OGAM_MODE_CURRENT, &s->rgam_mode, + MPCC_OGAM_SELECT_CURRENT, &s->rgam_lut); +} + static const struct mpc_funcs dcn30_mpc_funcs = { - .read_mpcc_state = mpc1_read_mpcc_state, + .read_mpcc_state = mpc3_read_mpcc_state, .insert_plane = mpc1_insert_plane, .remove_mpcc = mpc1_remove_mpcc, .mpc_init = mpc1_mpc_init, diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h index 5198f2167c7c..9cb96ae95a2f 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h +++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h @@ -1056,6 +1056,10 @@ void mpc3_set_gamut_remap( int mpcc_id, const struct mpc_grph_gamut_adjustment *adjust); +void mpc3_get_gamut_remap(struct mpc *mpc, + int mpcc_id, + struct mpc_grph_gamut_adjustment *adjust); + void mpc3_set_rmu_mux( struct mpc *mpc, int rmu_idx, diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.c index dcf12a0b031c..681e75c6dbaf 100644 --- 
a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.c +++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.c @@ -133,6 +133,7 @@ static struct dpp_funcs dcn32_dpp_funcs = { .set_optional_cursor_attributes = dpp1_cnv_set_optional_cursor_attributes, .dpp_dppclk_control = dpp1_dppclk_control, .dpp_set_hdr_multiplier = dpp3_set_hdr_multiplier, + .dpp_get_gamut_remap = dpp3_cm_get_gamut_remap, }; diff --git a/drivers/gpu/drm/amd/display/dc/dm_cp_psp.h b/drivers/gpu/drm/amd/display/dc/dm_cp_psp.h index 4229369c57f4..f4d3f04ec857 100644 --- a/drivers/gpu/drm/amd/display/dc/dm_cp_psp.h +++ b/drivers/gpu/drm/amd/display/dc/dm_cp_psp.h @@ -26,6 +26,9 @@ #ifndef DM_CP_PSP_IF__H #define DM_CP_PSP_IF__H +/* + * Interface to CPLIB/PSP to enable ASSR + */ struct dc_link; struct cp_psp_stream_config { diff --git a/drivers/gpu/drm/amd/display/dc/dml/Makefile b/drivers/gpu/drm/amd/display/dc/dml/Makefile index 6042a5a6a44f..59ade76ffb18 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/Makefile +++ b/drivers/gpu/drm/amd/display/dc/dml/Makefile @@ -72,11 +72,11 @@ CFLAGS_$(AMDDALPATH)/dc/dml/display_mode_lib.o := $(dml_ccflags) CFLAGS_$(AMDDALPATH)/dc/dml/display_mode_vba.o := $(dml_ccflags) CFLAGS_$(AMDDALPATH)/dc/dml/dcn10/dcn10_fpu.o := $(dml_ccflags) CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/dcn20_fpu.o := $(dml_ccflags) -CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_mode_vba_20.o := $(dml_ccflags) +CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_mode_vba_20.o := $(dml_ccflags) $(frame_warn_flag) CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_rq_dlg_calc_20.o := $(dml_ccflags) -CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_mode_vba_20v2.o := $(dml_ccflags) +CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_mode_vba_20v2.o := $(dml_ccflags) $(frame_warn_flag) CFLAGS_$(AMDDALPATH)/dc/dml/dcn20/display_rq_dlg_calc_20v2.o := $(dml_ccflags) -CFLAGS_$(AMDDALPATH)/dc/dml/dcn21/display_mode_vba_21.o := $(dml_ccflags) +CFLAGS_$(AMDDALPATH)/dc/dml/dcn21/display_mode_vba_21.o := $(dml_ccflags) $(frame_warn_flag) CFLAGS_$(AMDDALPATH)/dc/dml/dcn21/display_rq_dlg_calc_21.o := $(dml_ccflags) CFLAGS_$(AMDDALPATH)/dc/dml/dcn30/display_mode_vba_30.o := $(dml_ccflags) $(frame_warn_flag) CFLAGS_$(AMDDALPATH)/dc/dml/dcn30/display_rq_dlg_calc_30.o := $(dml_ccflags) diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c index 63c48c29ba49..e7f4a2d491cc 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c @@ -4273,7 +4273,7 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l //Calculate Swath, DET Configuration, DCFCLKDeepSleep // - for (i = 0; i < mode_lib->soc.num_states; ++i) { + for (i = start_state; i < mode_lib->soc.num_states; ++i) { for (j = 0; j <= 1; ++j) { for (k = 0; k < v->NumberOfActivePlanes; ++k) { v->RequiredDPPCLKThisState[k] = v->RequiredDPPCLK[i][j][k]; @@ -4576,7 +4576,7 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l //Calculate Return BW - for (i = 0; i < mode_lib->soc.num_states; ++i) { + for (i = start_state; i < mode_lib->soc.num_states; ++i) { for (j = 0; j <= 1; ++j) { for (k = 0; k <= v->NumberOfActivePlanes - 1; k++) { if (v->BlendingAndTiming[k] == k) { @@ -4635,7 +4635,7 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l v->UrgentOutOfOrderReturnPerChannelVMDataOnly); v->FinalDRAMClockChangeLatency = (v->DRAMClockChangeLatencyOverride > 0 ? 
v->DRAMClockChangeLatencyOverride : v->DRAMClockChangeLatency); - for (i = 0; i < mode_lib->soc.num_states; ++i) { + for (i = start_state; i < mode_lib->soc.num_states; ++i) { for (j = 0; j <= 1; ++j) { v->DCFCLKState[i][j] = v->DCFCLKPerState[i]; } @@ -4646,7 +4646,7 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l if (v->ClampMinDCFCLK) { /* Clamp calculated values to actual minimum */ - for (i = 0; i < mode_lib->soc.num_states; ++i) { + for (i = start_state; i < mode_lib->soc.num_states; ++i) { for (j = 0; j <= 1; ++j) { if (v->DCFCLKState[i][j] < mode_lib->soc.min_dcfclk) { v->DCFCLKState[i][j] = mode_lib->soc.min_dcfclk; @@ -4656,7 +4656,7 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l } } - for (i = 0; i < mode_lib->soc.num_states; ++i) { + for (i = start_state; i < mode_lib->soc.num_states; ++i) { for (j = 0; j <= 1; ++j) { v->IdealSDPPortBandwidthPerState[i][j] = dml_min3( v->ReturnBusWidth * v->DCFCLKState[i][j], @@ -4674,7 +4674,7 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l //Re-ordering Buffer Support Check - for (i = 0; i < mode_lib->soc.num_states; ++i) { + for (i = start_state; i < mode_lib->soc.num_states; ++i) { for (j = 0; j <= 1; ++j) { if ((v->ROBBufferSizeInKByte - v->PixelChunkSizeInKByte) * 1024 / v->ReturnBWPerState[i][j] > (v->RoundTripPingLatencyCycles + 32) / v->DCFCLKState[i][j] + ReorderingBytes / v->ReturnBWPerState[i][j]) { @@ -4692,7 +4692,7 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l MaxTotalVActiveRDBandwidth = MaxTotalVActiveRDBandwidth + v->ReadBandwidthLuma[k] + v->ReadBandwidthChroma[k]; } - for (i = 0; i < mode_lib->soc.num_states; ++i) { + for (i = start_state; i < mode_lib->soc.num_states; ++i) { for (j = 0; j <= 1; ++j) { v->MaxTotalVerticalActiveAvailableBandwidth[i][j] = dml_min( v->IdealSDPPortBandwidthPerState[i][j] * v->MaxAveragePercentOfIdealSDPPortBWDisplayCanUseInNormalSystemOperation / 100, @@ -4708,7 +4708,7 @@ void dml30_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l //Prefetch Check - for (i = 0; i < mode_lib->soc.num_states; ++i) { + for (i = start_state; i < mode_lib->soc.num_states; ++i) { for (j = 0; j <= 1; ++j) { int NextPrefetchModeState = MinPrefetchMode; diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn303/dcn303_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn303/dcn303_fpu.c index 3eb3a021ab7d..3f02bb806d42 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn303/dcn303_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn303/dcn303_fpu.c @@ -266,6 +266,17 @@ void dcn303_fpu_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_p optimal_uclk_for_dcfclk_sta_targets[i] = bw_params->clk_table.entries[j].memclk_mhz * 16; break; + } else { + /* condition where (dcfclk_sta_targets[i] >= optimal_dcfclk_for_uclk[j]): + * This is required for dcn303 because it just so happens that the memory + * bandwidth is low enough such that all the optimal DCFCLK for each UCLK + * is lower than the smallest DCFCLK STA target. In this case we need to + * populate the optimal UCLK for each DCFCLK STA target to be the max UCLK. 
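The mapping this comment describes, and which the branch just below implements, boils down to: pick the lowest UCLK whose optimal DCFCLK covers the STA target, and fall back to the highest UCLK when none does. A standalone model of that selection (plain C, hypothetical names):

static void map_sta_targets_to_uclk(const unsigned int *dcfclk_sta_mhz, int num_sta,
				    const unsigned int *optimal_dcfclk_for_uclk_mhz,
				    const unsigned int *uclk_mhz, int num_uclk,
				    unsigned int *optimal_uclk_out)
{
	int i, j;

	for (i = 0; i < num_sta; i++) {
		/* Fallback for the dcn303 case where every optimal DCFCLK is
		 * below the smallest STA target: use the max UCLK. */
		optimal_uclk_out[i] = uclk_mhz[num_uclk - 1];
		for (j = 0; j < num_uclk; j++) {
			if (dcfclk_sta_mhz[i] < optimal_dcfclk_for_uclk_mhz[j]) {
				optimal_uclk_out[i] = uclk_mhz[j];
				break;
			}
		}
	}
}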
+ */ + if (j == num_uclk_states - 1) { + optimal_uclk_for_dcfclk_sta_targets[i] = + bw_params->clk_table.entries[j].memclk_mhz * 16; + } } } } diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c index dd781a20692e..ba76dd4a2ce2 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c @@ -1288,7 +1288,7 @@ static bool update_pipes_with_split_flags(struct dc *dc, struct dc_state *contex return updated; } -static bool should_allow_odm_power_optimization(struct dc *dc, +static bool should_apply_odm_power_optimization(struct dc *dc, struct dc_state *context, struct vba_vars_st *v, int *split, bool *merge) { @@ -1392,9 +1392,12 @@ static void try_odm_power_optimization_and_revalidate( { int i; unsigned int new_vlevel; + unsigned int cur_policy[MAX_PIPES]; - for (i = 0; i < pipe_cnt; i++) + for (i = 0; i < pipe_cnt; i++) { + cur_policy[i] = pipes[i].pipe.dest.odm_combine_policy; pipes[i].pipe.dest.odm_combine_policy = dm_odm_combine_policy_2to1; + } new_vlevel = dml_get_voltage_level(&context->bw_ctx.dml, pipes, pipe_cnt); @@ -1403,6 +1406,9 @@ static void try_odm_power_optimization_and_revalidate( memset(merge, 0, MAX_PIPES * sizeof(bool)); *vlevel = dcn20_validate_apply_pipe_split_flags(dc, context, new_vlevel, split, merge); context->bw_ctx.dml.vba.VoltageLevel = *vlevel; + } else { + for (i = 0; i < pipe_cnt; i++) + pipes[i].pipe.dest.odm_combine_policy = cur_policy[i]; } } @@ -1580,7 +1586,7 @@ static void dcn32_full_validate_bw_helper(struct dc *dc, } } - if (should_allow_odm_power_optimization(dc, context, vba, split, merge)) + if (should_apply_odm_power_optimization(dc, context, vba, split, merge)) try_odm_power_optimization_and_revalidate( dc, context, pipes, split, merge, vlevel, *pipe_cnt); @@ -2209,7 +2215,8 @@ bool dcn32_internal_validate_bw(struct dc *dc, int i; pipe_cnt = dc->res_pool->funcs->populate_dml_pipes(dc, context, pipes, fast_validate); - dcn32_update_dml_pipes_odm_policy_based_on_context(dc, context, pipes); + if (!dc->config.enable_windowed_mpo_odm) + dcn32_update_dml_pipes_odm_policy_based_on_context(dc, context, pipes); /* repopulate_pipes = 1 means the pipes were either split or merged. 
In this case * we have to re-calculate the DET allocation and run through DML once more to diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c index 7ea2bd5374d5..912256006d75 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c @@ -583,9 +583,9 @@ void dcn35_decide_zstate_support(struct dc *dc, struct dc_state *context) plane_count++; } - if (plane_count == 0) { + if (context->stream_count == 0 || plane_count == 0) { support = DCN_ZSTATE_SUPPORT_ALLOW; - } else if (plane_count == 1 && context->stream_count == 1 && context->streams[0]->signal == SIGNAL_TYPE_EDP) { + } else if (context->stream_count == 1 && context->streams[0]->signal == SIGNAL_TYPE_EDP) { struct dc_link *link = context->streams[0]->sink->link; bool is_pwrseq0 = link && link->link_index == 0; bool is_psr1 = link && link->psr_settings.psr_version == DC_PSR_VERSION_1 && !link->panel_config.psr.disable_psr; diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c index 0baf39d64a2d..a52c594e1ba4 100644 --- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c +++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c @@ -141,14 +141,33 @@ static unsigned int find_pipes_assigned_to_plane(struct dml2_context *ctx, { int i; unsigned int num_found = 0; - unsigned int plane_id_assigned_to_pipe; + unsigned int plane_id_assigned_to_pipe = -1; for (i = 0; i < ctx->config.dcn_pipe_count; i++) { - if (state->res_ctx.pipe_ctx[i].plane_state && get_plane_id(ctx, state, state->res_ctx.pipe_ctx[i].plane_state, - state->res_ctx.pipe_ctx[i].stream->stream_id, - ctx->v20.scratch.dml_to_dc_pipe_mapping.dml_pipe_idx_to_plane_index[state->res_ctx.pipe_ctx[i].pipe_idx], &plane_id_assigned_to_pipe)) { - if (plane_id_assigned_to_pipe == plane_id) - pipes[num_found++] = i; + struct pipe_ctx *pipe = &state->res_ctx.pipe_ctx[i]; + + if (!pipe->plane_state || !pipe->stream) + continue; + + get_plane_id(ctx, state, pipe->plane_state, pipe->stream->stream_id, + ctx->v20.scratch.dml_to_dc_pipe_mapping.dml_pipe_idx_to_plane_index[pipe->pipe_idx], + &plane_id_assigned_to_pipe); + if (plane_id_assigned_to_pipe == plane_id && !pipe->prev_odm_pipe + && (!pipe->top_pipe || pipe->top_pipe->plane_state != pipe->plane_state)) { + while (pipe) { + struct pipe_ctx *mpc_pipe = pipe; + + while (mpc_pipe) { + pipes[num_found++] = mpc_pipe->pipe_idx; + mpc_pipe = mpc_pipe->bottom_pipe; + if (!mpc_pipe) + break; + if (mpc_pipe->plane_state != pipe->plane_state) + mpc_pipe = NULL; + } + pipe = pipe->next_odm_pipe; + } + break; } } @@ -566,8 +585,14 @@ static unsigned int find_pipes_assigned_to_stream(struct dml2_context *ctx, stru unsigned int num_found = 0; for (i = 0; i < ctx->config.dcn_pipe_count; i++) { - if (state->res_ctx.pipe_ctx[i].stream && state->res_ctx.pipe_ctx[i].stream->stream_id == stream_id) { - pipes[num_found++] = i; + struct pipe_ctx *pipe = &state->res_ctx.pipe_ctx[i]; + + if (pipe->stream && pipe->stream->stream_id == stream_id && !pipe->top_pipe && !pipe->prev_odm_pipe) { + while (pipe) { + pipes[num_found++] = pipe->pipe_idx; + pipe = pipe->next_odm_pipe; + } + break; } } diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_utils.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_utils.c index 1068b962d1c1..f15d1dbad6a9 100644 --- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_utils.c +++ 
b/drivers/gpu/drm/amd/display/dc/dml2/dml2_utils.c @@ -234,7 +234,7 @@ static bool get_plane_id(struct dml2_context *dml2, const struct dc_state *state if (state->streams[i]->stream_id == stream_id) { for (j = 0; j < state->stream_status[i].plane_count; j++) { if (state->stream_status[i].plane_states[j] == plane && - (!is_plane_duplicate || (is_plane_duplicate && (j == plane_index)))) { + (!is_plane_duplicate || (j == plane_index))) { *plane_id = (i << 16) | j; return true; } diff --git a/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c b/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c index 0df6c55eb326..ac41f9c0a283 100644 --- a/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c +++ b/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c @@ -137,6 +137,11 @@ uint32_t dc_bandwidth_in_kbps_from_timing( if (link_encoding == DC_LINK_ENCODING_DP_128b_132b) kbps = apply_128b_132b_stream_overhead(timing, kbps); + if (link_encoding == DC_LINK_ENCODING_HDMI_FRL && + timing->vic == 0 && timing->hdmi_vic == 0 && + timing->frl_uncompressed_video_bandwidth_in_kbps != 0) + kbps = timing->frl_uncompressed_video_bandwidth_in_kbps; + return kbps; } diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c index 2352428bcea3..9d5df4c0da59 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c @@ -1291,6 +1291,46 @@ static enum audio_dto_source translate_to_dto_source(enum controller_id crtc_id) } } +static void populate_audio_dp_link_info( + const struct pipe_ctx *pipe_ctx, + struct audio_dp_link_info *dp_link_info) +{ + const struct dc_stream_state *stream = pipe_ctx->stream; + const struct dc_link *link = stream->link; + struct fixed31_32 link_bw_kbps; + + dp_link_info->encoding = link->dc->link_srv->dp_get_encoding_format( + &pipe_ctx->link_config.dp_link_settings); + dp_link_info->is_mst = (stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST); + dp_link_info->lane_count = pipe_ctx->link_config.dp_link_settings.lane_count; + dp_link_info->link_rate = pipe_ctx->link_config.dp_link_settings.link_rate; + + link_bw_kbps = dc_fixpt_from_int(dc_link_bandwidth_kbps(link, + &pipe_ctx->link_config.dp_link_settings)); + + /* For audio stream calculations, the video stream should not include FEC or SSC + * in order to get the most pessimistic values. + */ + if (dp_link_info->encoding == DP_8b_10b_ENCODING && + link->dc->link_srv->dp_is_fec_supported(link)) { + link_bw_kbps = dc_fixpt_mul(link_bw_kbps, + dc_fixpt_from_fraction(100, DATA_EFFICIENCY_8b_10b_FEC_EFFICIENCY_x100)); + } else if (dp_link_info->encoding == DP_128b_132b_ENCODING) { + link_bw_kbps = dc_fixpt_mul(link_bw_kbps, + dc_fixpt_from_fraction(10000, 9975)); /* 99.75% SSC overhead*/ + } + + dp_link_info->link_bandwidth_kbps = dc_fixpt_floor(link_bw_kbps); + + /* HW minimum for 128b/132b HBlank is 4 frame symbols. + * TODO: Plumb the actual programmed HBlank min symbol width to here. 
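The two dc_fixpt_from_fraction() factors in populate_audio_dp_link_info() above are small multiplicative corrections: 10000/9975 backs out the 0.25% SSC downspread quoted in the comment, and 100/DATA_EFFICIENCY_8b_10b_FEC_EFFICIENCY_x100 backs out the 8b/10b FEC efficiency. A rough standalone sketch of that arithmetic follows, using plain integer math instead of the fixed31_32 helpers; the ~97% FEC efficiency and the 4-lane HBR3 payload figure are assumed purely for illustration and are not taken from this patch.

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative raw payload bandwidth: 4 lanes of HBR3 after
         * 8b/10b coding, i.e. 4 * 8.1 Gbps * 0.8 = 25,920,000 kbps. */
        long long link_bw_kbps = 25920000LL;

        /* Back out an assumed ~97% 8b/10b FEC efficiency (stand-in for
         * DATA_EFFICIENCY_8b_10b_FEC_EFFICIENCY_x100). */
        long long no_fec_kbps = link_bw_kbps * 100 / 97;

        /* Back out the 0.25% SSC downspread: the 10000/9975 factor. */
        long long no_ssc_kbps = link_bw_kbps * 10000 / 9975;

        printf("raw link bw : %lld kbps\n", link_bw_kbps);
        printf("without FEC : %lld kbps (+%.2f%%)\n", no_fec_kbps,
               100.0 * (no_fec_kbps - link_bw_kbps) / link_bw_kbps);
        printf("without SSC : %lld kbps (+%.2f%%)\n", no_ssc_kbps,
               100.0 * (no_ssc_kbps - link_bw_kbps) / link_bw_kbps);
        return 0;
    }

As the output shows, both adjustments are fractions of a percent; the point of the code above is simply to apply them consistently in fixed point before flooring to an integer kbps value.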
+ */ + if (dp_link_info->encoding == DP_128b_132b_ENCODING) + dp_link_info->hblank_min_symbol_width = 4; + else + dp_link_info->hblank_min_symbol_width = 0; +} + static void build_audio_output( struct dc_state *state, const struct pipe_ctx *pipe_ctx, @@ -1338,6 +1378,15 @@ static void build_audio_output( audio_output->crtc_info.calculated_pixel_clock_100Hz = pipe_ctx->stream_res.pix_clk_params.requested_pix_clk_100hz; + audio_output->crtc_info.pixel_encoding = + stream->timing.pixel_encoding; + + audio_output->crtc_info.dsc_bits_per_pixel = + stream->timing.dsc_cfg.bits_per_pixel; + + audio_output->crtc_info.dsc_num_slices = + stream->timing.dsc_cfg.num_slices_h; + /*for HDMI, audio ACR is with deep color ratio factor*/ if (dc_is_hdmi_tmds_signal(pipe_ctx->stream->signal) && audio_output->crtc_info.requested_pixel_clock_100Hz == @@ -1371,6 +1420,10 @@ static void build_audio_output( audio_output->pll_info.ss_percentage = pipe_ctx->pll_settings.ss_percentage; + + if (dc_is_dp_signal(pipe_ctx->stream->signal)) { + populate_audio_dp_link_info(pipe_ctx, &audio_output->dp_link_info); + } } static void program_scaler(const struct dc *dc, @@ -1476,7 +1529,7 @@ static enum dc_status dce110_enable_stream_timing( return DC_OK; } -static enum dc_status apply_single_controller_ctx_to_hw( +enum dc_status dce110_apply_single_controller_ctx_to_hw( struct pipe_ctx *pipe_ctx, struct dc_state *context, struct dc *dc) @@ -1507,7 +1560,8 @@ static enum dc_status apply_single_controller_ctx_to_hw( pipe_ctx->stream_res.audio, pipe_ctx->stream->signal, &audio_output.crtc_info, - &pipe_ctx->stream->audio_info); + &pipe_ctx->stream->audio_info, + &audio_output.dp_link_info); } /* make sure no pipes syncd to the pipe being enabled */ @@ -2302,7 +2356,7 @@ enum dc_status dce110_apply_ctx_to_hw( if (pipe_ctx->top_pipe || pipe_ctx->prev_odm_pipe) continue; - status = apply_single_controller_ctx_to_hw( + status = dce110_apply_single_controller_ctx_to_hw( pipe_ctx, context, dc); diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.h b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.h index 08028a1779ae..ed3cc3648e8e 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.h +++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.h @@ -39,6 +39,10 @@ enum dc_status dce110_apply_ctx_to_hw( struct dc *dc, struct dc_state *context); +enum dc_status dce110_apply_single_controller_ctx_to_hw( + struct pipe_ctx *pipe_ctx, + struct dc_state *context, + struct dc *dc); void dce110_enable_stream(struct pipe_ctx *pipe_ctx); diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c index 6dd479e8a348..314798400b16 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c @@ -283,33 +283,33 @@ static void dcn10_log_hubp_states(struct dc *dc, void *log_ctx) DTN_INFO("\n"); } -void dcn10_log_hw_state(struct dc *dc, - struct dc_log_buffer_ctx *log_ctx) +static void dcn10_log_color_state(struct dc *dc, + struct dc_log_buffer_ctx *log_ctx) { struct dc_context *dc_ctx = dc->ctx; struct resource_pool *pool = dc->res_pool; int i; - DTN_INFO_BEGIN(); - - dcn10_log_hubbub_state(dc, log_ctx); - - dcn10_log_hubp_states(dc, log_ctx); - - DTN_INFO("DPP: IGAM format IGAM mode DGAM mode RGAM mode" - " GAMUT mode C11 C12 C13 C14 C21 C22 C23 C24 " - "C31 C32 C33 C34\n"); + DTN_INFO("DPP: IGAM format IGAM mode DGAM mode RGAM mode" + " GAMUT adjust " + "C11 C12 C13 C14 " + 
"C21 C22 C23 C24 " + "C31 C32 C33 C34 \n"); for (i = 0; i < pool->pipe_count; i++) { struct dpp *dpp = pool->dpps[i]; struct dcn_dpp_state s = {0}; dpp->funcs->dpp_read_state(dpp, &s); + dpp->funcs->dpp_get_gamut_remap(dpp, &s.gamut_remap); if (!s.is_enabled) continue; - DTN_INFO("[%2d]: %11xh %-11s %-11s %-11s" - "%8x %08xh %08xh %08xh %08xh %08xh %08xh", + DTN_INFO("[%2d]: %11xh %11s %9s %9s" + " %12s " + "%010lld %010lld %010lld %010lld " + "%010lld %010lld %010lld %010lld " + "%010lld %010lld %010lld %010lld", dpp->inst, s.igam_input_format, (s.igam_lut_mode == 0) ? "BypassFixed" : @@ -329,16 +329,42 @@ void dcn10_log_hw_state(struct dc *dc, ((s.rgam_lut_mode == 3) ? "RAM" : ((s.rgam_lut_mode == 4) ? "RAM" : "Unknown")))), - s.gamut_remap_mode, - s.gamut_remap_c11_c12, - s.gamut_remap_c13_c14, - s.gamut_remap_c21_c22, - s.gamut_remap_c23_c24, - s.gamut_remap_c31_c32, - s.gamut_remap_c33_c34); + (s.gamut_remap.gamut_adjust_type == 0) ? "Bypass" : + ((s.gamut_remap.gamut_adjust_type == 1) ? "HW" : + "SW"), + s.gamut_remap.temperature_matrix[0].value, + s.gamut_remap.temperature_matrix[1].value, + s.gamut_remap.temperature_matrix[2].value, + s.gamut_remap.temperature_matrix[3].value, + s.gamut_remap.temperature_matrix[4].value, + s.gamut_remap.temperature_matrix[5].value, + s.gamut_remap.temperature_matrix[6].value, + s.gamut_remap.temperature_matrix[7].value, + s.gamut_remap.temperature_matrix[8].value, + s.gamut_remap.temperature_matrix[9].value, + s.gamut_remap.temperature_matrix[10].value, + s.gamut_remap.temperature_matrix[11].value); DTN_INFO("\n"); } DTN_INFO("\n"); + DTN_INFO("DPP Color Caps: input_lut_shared:%d icsc:%d" + " dgam_ram:%d dgam_rom: srgb:%d,bt2020:%d,gamma2_2:%d,pq:%d,hlg:%d" + " post_csc:%d gamcor:%d dgam_rom_for_yuv:%d 3d_lut:%d" + " blnd_lut:%d oscs:%d\n\n", + dc->caps.color.dpp.input_lut_shared, + dc->caps.color.dpp.icsc, + dc->caps.color.dpp.dgam_ram, + dc->caps.color.dpp.dgam_rom_caps.srgb, + dc->caps.color.dpp.dgam_rom_caps.bt2020, + dc->caps.color.dpp.dgam_rom_caps.gamma2_2, + dc->caps.color.dpp.dgam_rom_caps.pq, + dc->caps.color.dpp.dgam_rom_caps.hlg, + dc->caps.color.dpp.post_csc, + dc->caps.color.dpp.gamma_corr, + dc->caps.color.dpp.dgam_rom_for_yuv, + dc->caps.color.dpp.hw_3d_lut, + dc->caps.color.dpp.ogam_ram, + dc->caps.color.dpp.ocsc); DTN_INFO("MPCC: OPP DPP MPCCBOT MODE ALPHA_MODE PREMULT OVERLAP_ONLY IDLE\n"); for (i = 0; i < pool->pipe_count; i++) { @@ -352,6 +378,30 @@ void dcn10_log_hw_state(struct dc *dc, s.idle); } DTN_INFO("\n"); + DTN_INFO("MPC Color Caps: gamut_remap:%d, 3dlut:%d, ogam_ram:%d, ocsc:%d\n\n", + dc->caps.color.mpc.gamut_remap, + dc->caps.color.mpc.num_3dluts, + dc->caps.color.mpc.ogam_ram, + dc->caps.color.mpc.ocsc); +} + +void dcn10_log_hw_state(struct dc *dc, + struct dc_log_buffer_ctx *log_ctx) +{ + struct dc_context *dc_ctx = dc->ctx; + struct resource_pool *pool = dc->res_pool; + int i; + + DTN_INFO_BEGIN(); + + dcn10_log_hubbub_state(dc, log_ctx); + + dcn10_log_hubp_states(dc, log_ctx); + + if (dc->hwss.log_color_state) + dc->hwss.log_color_state(dc, log_ctx); + else + dcn10_log_color_state(dc, log_ctx); DTN_INFO("OTG: v_bs v_be v_ss v_se vpol vmax vmin vmax_sel vmin_sel h_bs h_be h_ss h_se hpol htot vtot underflow blank_en\n"); @@ -1840,6 +1890,9 @@ bool dcn10_set_output_transfer_func(struct dc *dc, struct pipe_ctx *pipe_ctx, { struct dpp *dpp = pipe_ctx->plane_res.dpp; + if (!stream) + return false; + if (dpp == NULL) return false; @@ -1862,8 +1915,8 @@ bool dcn10_set_output_transfer_func(struct dc *dc, struct pipe_ctx 
*pipe_ctx, } else dpp->funcs->dpp_program_regamma_pwl(dpp, NULL, OPP_REGAMMA_BYPASS); - if (stream != NULL && stream->ctx != NULL && - stream->out_transfer_func != NULL) { + if (stream->ctx && + stream->out_transfer_func) { log_tf(stream->ctx, stream->out_transfer_func, dpp->regamma_params.hw_points_num); diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c index 4853ecac53f9..bc0a21957e33 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c @@ -71,6 +71,112 @@ #define FN(reg_name, field_name) \ hws->shifts->field_name, hws->masks->field_name +void dcn20_log_color_state(struct dc *dc, + struct dc_log_buffer_ctx *log_ctx) +{ + struct dc_context *dc_ctx = dc->ctx; + struct resource_pool *pool = dc->res_pool; + int i; + + DTN_INFO("DPP: DGAM mode SHAPER mode 3DLUT mode 3DLUT bit depth" + " 3DLUT size RGAM mode GAMUT adjust " + "C11 C12 C13 C14 " + "C21 C22 C23 C24 " + "C31 C32 C33 C34 \n"); + + for (i = 0; i < pool->pipe_count; i++) { + struct dpp *dpp = pool->dpps[i]; + struct dcn_dpp_state s = {0}; + + dpp->funcs->dpp_read_state(dpp, &s); + dpp->funcs->dpp_get_gamut_remap(dpp, &s.gamut_remap); + + if (!s.is_enabled) + continue; + + DTN_INFO("[%2d]: %8s %11s %10s %15s %10s %9s %12s " + "%010lld %010lld %010lld %010lld " + "%010lld %010lld %010lld %010lld " + "%010lld %010lld %010lld %010lld", + dpp->inst, + (s.dgam_lut_mode == 0) ? "Bypass" : + ((s.dgam_lut_mode == 1) ? "sRGB" : + ((s.dgam_lut_mode == 2) ? "Ycc" : + ((s.dgam_lut_mode == 3) ? "RAM" : + ((s.dgam_lut_mode == 4) ? "RAM" : + "Unknown")))), + (s.shaper_lut_mode == 1) ? "RAM A" : + ((s.shaper_lut_mode == 2) ? "RAM B" : + "Bypass"), + (s.lut3d_mode == 1) ? "RAM A" : + ((s.lut3d_mode == 2) ? "RAM B" : + "Bypass"), + (s.lut3d_bit_depth <= 0) ? "12-bit" : "10-bit", + (s.lut3d_size == 0) ? "17x17x17" : "9x9x9", + (s.rgam_lut_mode == 1) ? "RAM A" : + ((s.rgam_lut_mode == 1) ? "RAM B" : "Bypass"), + (s.gamut_remap.gamut_adjust_type == 0) ? "Bypass" : + ((s.gamut_remap.gamut_adjust_type == 1) ? 
"HW" : + "SW"), + s.gamut_remap.temperature_matrix[0].value, + s.gamut_remap.temperature_matrix[1].value, + s.gamut_remap.temperature_matrix[2].value, + s.gamut_remap.temperature_matrix[3].value, + s.gamut_remap.temperature_matrix[4].value, + s.gamut_remap.temperature_matrix[5].value, + s.gamut_remap.temperature_matrix[6].value, + s.gamut_remap.temperature_matrix[7].value, + s.gamut_remap.temperature_matrix[8].value, + s.gamut_remap.temperature_matrix[9].value, + s.gamut_remap.temperature_matrix[10].value, + s.gamut_remap.temperature_matrix[11].value); + DTN_INFO("\n"); + } + DTN_INFO("\n"); + DTN_INFO("DPP Color Caps: input_lut_shared:%d icsc:%d" + " dgam_ram:%d dgam_rom: srgb:%d,bt2020:%d,gamma2_2:%d,pq:%d,hlg:%d" + " post_csc:%d gamcor:%d dgam_rom_for_yuv:%d 3d_lut:%d" + " blnd_lut:%d oscs:%d\n\n", + dc->caps.color.dpp.input_lut_shared, + dc->caps.color.dpp.icsc, + dc->caps.color.dpp.dgam_ram, + dc->caps.color.dpp.dgam_rom_caps.srgb, + dc->caps.color.dpp.dgam_rom_caps.bt2020, + dc->caps.color.dpp.dgam_rom_caps.gamma2_2, + dc->caps.color.dpp.dgam_rom_caps.pq, + dc->caps.color.dpp.dgam_rom_caps.hlg, + dc->caps.color.dpp.post_csc, + dc->caps.color.dpp.gamma_corr, + dc->caps.color.dpp.dgam_rom_for_yuv, + dc->caps.color.dpp.hw_3d_lut, + dc->caps.color.dpp.ogam_ram, + dc->caps.color.dpp.ocsc); + + DTN_INFO("MPCC: OPP DPP MPCCBOT MODE ALPHA_MODE PREMULT OVERLAP_ONLY IDLE" + " OGAM mode\n"); + + for (i = 0; i < pool->pipe_count; i++) { + struct mpcc_state s = {0}; + + pool->mpc->funcs->read_mpcc_state(pool->mpc, i, &s); + if (s.opp_id != 0xf) + DTN_INFO("[%2d]: %2xh %2xh %6xh %4d %10d %7d %12d %4d %9s\n", + i, s.opp_id, s.dpp_id, s.bot_mpcc_id, + s.mode, s.alpha_mode, s.pre_multiplied_alpha, s.overlap_only, + s.idle, + (s.rgam_mode == 1) ? "RAM A" : + ((s.rgam_mode == 2) ? "RAM B" : + "Bypass")); + } + DTN_INFO("\n"); + DTN_INFO("MPC Color Caps: gamut_remap:%d, 3dlut:%d, ogam_ram:%d, ocsc:%d\n\n", + dc->caps.color.mpc.gamut_remap, + dc->caps.color.mpc.num_3dluts, + dc->caps.color.mpc.ogam_ram, + dc->caps.color.mpc.ocsc); +} + + static int find_free_gsl_group(const struct dc *dc) { if (dc->res_pool->gsl_groups.gsl_0 == 0) @@ -1467,7 +1573,8 @@ static void dcn20_detect_pipe_changes(struct dc_state *old_state, * makes this assumption at the moment with how hubp reset is matched to * same index mpcc reset. 
*/ - if (old_pipe->stream_res.opp != new_pipe->stream_res.opp) + if (old_pipe->stream_res.opp != new_pipe->stream_res.opp || + old_pipe->stream_res.left_edge_extra_pixel != new_pipe->stream_res.left_edge_extra_pixel) new_pipe->update_flags.bits.opp_changed = 1; if (old_pipe->stream_res.tg != new_pipe->stream_res.tg) new_pipe->update_flags.bits.tg_changed = 1; @@ -1853,6 +1960,10 @@ static void dcn20_program_pipe( pipe_ctx->stream_res.opp, &pipe_ctx->stream->bit_depth_params, &pipe_ctx->stream->clamping); + + pipe_ctx->stream_res.opp->funcs->opp_program_left_edge_extra_pixel( + pipe_ctx->stream_res.opp, + pipe_ctx->stream_res.left_edge_extra_pixel); } /* Set ABM pipe after other pipe configurations done */ @@ -1958,7 +2069,6 @@ void dcn20_program_front_end_for_ctx( && context->res_ctx.pipe_ctx[i].stream) hws->funcs.blank_pixel_data(dc, &context->res_ctx.pipe_ctx[i], true); - /* Disconnect mpcc */ for (i = 0; i < dc->res_pool->pipe_count; i++) if (context->res_ctx.pipe_ctx[i].update_flags.bits.disable @@ -2561,7 +2671,7 @@ void dcn20_setup_vupdate_interrupt(struct dc *dc, struct pipe_ctx *pipe_ctx) tg->funcs->setup_vertical_interrupt2(tg, start_line); } -static void dcn20_reset_back_end_for_pipe( +void dcn20_reset_back_end_for_pipe( struct dc *dc, struct pipe_ctx *pipe_ctx, struct dc_state *context) diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.h b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.h index b94c85340abf..5c874f7b0683 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.h +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.h @@ -28,6 +28,8 @@ #include "hw_sequencer_private.h" +void dcn20_log_color_state(struct dc *dc, + struct dc_log_buffer_ctx *log_ctx); bool dcn20_set_blend_lut( struct pipe_ctx *pipe_ctx, const struct dc_plane_state *plane_state); bool dcn20_set_shaper_3dlut( @@ -84,6 +86,10 @@ enum dc_status dcn20_enable_stream_timing( void dcn20_disable_stream_gating(struct dc *dc, struct pipe_ctx *pipe_ctx); void dcn20_enable_stream_gating(struct dc *dc, struct pipe_ctx *pipe_ctx); void dcn20_setup_vupdate_interrupt(struct dc *dc, struct pipe_ctx *pipe_ctx); +void dcn20_reset_back_end_for_pipe( + struct dc *dc, + struct pipe_ctx *pipe_ctx, + struct dc_state *context); void dcn20_init_blank( struct dc *dc, struct timing_generator *tg); diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c index 8e88dcaf88f5..5c7f380a84f9 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c @@ -206,28 +206,32 @@ void dcn21_set_abm_immediate_disable(struct pipe_ctx *pipe_ctx) void dcn21_set_pipe(struct pipe_ctx *pipe_ctx) { struct abm *abm = pipe_ctx->stream_res.abm; - uint32_t otg_inst = pipe_ctx->stream_res.tg->inst; + struct timing_generator *tg = pipe_ctx->stream_res.tg; struct panel_cntl *panel_cntl = pipe_ctx->stream->link->panel_cntl; struct dmcu *dmcu = pipe_ctx->stream->ctx->dc->res_pool->dmcu; + uint32_t otg_inst; + + if (!abm && !tg && !panel_cntl) + return; + + otg_inst = tg->inst; if (dmcu) { dce110_set_pipe(pipe_ctx); return; } - if (abm && panel_cntl) { - if (abm->funcs && abm->funcs->set_pipe_ex) { - abm->funcs->set_pipe_ex(abm, + if (abm->funcs && abm->funcs->set_pipe_ex) { + abm->funcs->set_pipe_ex(abm, otg_inst, SET_ABM_PIPE_NORMAL, panel_cntl->inst, panel_cntl->pwrseq_inst); - } else { - dmub_abm_set_pipe(abm, otg_inst, - SET_ABM_PIPE_NORMAL, - panel_cntl->inst, - 
panel_cntl->pwrseq_inst); - } + } else { + dmub_abm_set_pipe(abm, otg_inst, + SET_ABM_PIPE_NORMAL, + panel_cntl->inst, + panel_cntl->pwrseq_inst); } } @@ -237,34 +241,35 @@ bool dcn21_set_backlight_level(struct pipe_ctx *pipe_ctx, { struct dc_context *dc = pipe_ctx->stream->ctx; struct abm *abm = pipe_ctx->stream_res.abm; + struct timing_generator *tg = pipe_ctx->stream_res.tg; struct panel_cntl *panel_cntl = pipe_ctx->stream->link->panel_cntl; + uint32_t otg_inst; + + if (!abm && !tg && !panel_cntl) + return false; + + otg_inst = tg->inst; if (dc->dc->res_pool->dmcu) { dce110_set_backlight_level(pipe_ctx, backlight_pwm_u16_16, frame_ramp); return true; } - if (abm != NULL) { - uint32_t otg_inst = pipe_ctx->stream_res.tg->inst; - - if (abm && panel_cntl) { - if (abm->funcs && abm->funcs->set_pipe_ex) { - abm->funcs->set_pipe_ex(abm, - otg_inst, - SET_ABM_PIPE_NORMAL, - panel_cntl->inst, - panel_cntl->pwrseq_inst); - } else { - dmub_abm_set_pipe(abm, - otg_inst, - SET_ABM_PIPE_NORMAL, - panel_cntl->inst, - panel_cntl->pwrseq_inst); - } - } + if (abm->funcs && abm->funcs->set_pipe_ex) { + abm->funcs->set_pipe_ex(abm, + otg_inst, + SET_ABM_PIPE_NORMAL, + panel_cntl->inst, + panel_cntl->pwrseq_inst); + } else { + dmub_abm_set_pipe(abm, + otg_inst, + SET_ABM_PIPE_NORMAL, + panel_cntl->inst, + panel_cntl->pwrseq_inst); } - if (abm && abm->funcs && abm->funcs->set_backlight_level_pwm) + if (abm->funcs && abm->funcs->set_backlight_level_pwm) abm->funcs->set_backlight_level_pwm(abm, backlight_pwm_u16_16, frame_ramp, 0, panel_cntl->inst); else diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c index c34c13e1e0a4..7e6b7f2a6dc9 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c @@ -69,6 +69,155 @@ #define FN(reg_name, field_name) \ hws->shifts->field_name, hws->masks->field_name +void dcn30_log_color_state(struct dc *dc, + struct dc_log_buffer_ctx *log_ctx) +{ + struct dc_context *dc_ctx = dc->ctx; + struct resource_pool *pool = dc->res_pool; + int i; + + DTN_INFO("DPP: DGAM ROM DGAM ROM type DGAM LUT SHAPER mode" + " 3DLUT mode 3DLUT bit depth 3DLUT size RGAM mode" + " GAMUT adjust " + "C11 C12 C13 C14 " + "C21 C22 C23 C24 " + "C31 C32 C33 C34 \n"); + + for (i = 0; i < pool->pipe_count; i++) { + struct dpp *dpp = pool->dpps[i]; + struct dcn_dpp_state s = {0}; + + dpp->funcs->dpp_read_state(dpp, &s); + dpp->funcs->dpp_get_gamut_remap(dpp, &s.gamut_remap); + + if (!s.is_enabled) + continue; + + DTN_INFO("[%2d]: %7x %13s %8s %11s %10s %15s %10s %9s" + " %12s " + "%010lld %010lld %010lld %010lld " + "%010lld %010lld %010lld %010lld " + "%010lld %010lld %010lld %010lld", + dpp->inst, + s.pre_dgam_mode, + (s.pre_dgam_select == 0) ? "sRGB" : + ((s.pre_dgam_select == 1) ? "Gamma 2.2" : + ((s.pre_dgam_select == 2) ? "Gamma 2.4" : + ((s.pre_dgam_select == 3) ? "Gamma 2.6" : + ((s.pre_dgam_select == 4) ? "BT.709" : + ((s.pre_dgam_select == 5) ? "PQ" : + ((s.pre_dgam_select == 6) ? "HLG" : + "Unknown")))))), + (s.gamcor_mode == 0) ? "Bypass" : + ((s.gamcor_mode == 1) ? "RAM A" : + "RAM B"), + (s.shaper_lut_mode == 1) ? "RAM A" : + ((s.shaper_lut_mode == 2) ? "RAM B" : + "Bypass"), + (s.lut3d_mode == 1) ? "RAM A" : + ((s.lut3d_mode == 2) ? "RAM B" : + "Bypass"), + (s.lut3d_bit_depth <= 0) ? "12-bit" : "10-bit", + (s.lut3d_size == 0) ? "17x17x17" : "9x9x9", + (s.rgam_lut_mode == 0) ? "Bypass" : + ((s.rgam_lut_mode == 1) ? 
"RAM A" : + "RAM B"), + (s.gamut_remap.gamut_adjust_type == 0) ? "Bypass" : + ((s.gamut_remap.gamut_adjust_type == 1) ? "HW" : + "SW"), + s.gamut_remap.temperature_matrix[0].value, + s.gamut_remap.temperature_matrix[1].value, + s.gamut_remap.temperature_matrix[2].value, + s.gamut_remap.temperature_matrix[3].value, + s.gamut_remap.temperature_matrix[4].value, + s.gamut_remap.temperature_matrix[5].value, + s.gamut_remap.temperature_matrix[6].value, + s.gamut_remap.temperature_matrix[7].value, + s.gamut_remap.temperature_matrix[8].value, + s.gamut_remap.temperature_matrix[9].value, + s.gamut_remap.temperature_matrix[10].value, + s.gamut_remap.temperature_matrix[11].value); + DTN_INFO("\n"); + } + DTN_INFO("\n"); + DTN_INFO("DPP Color Caps: input_lut_shared:%d icsc:%d" + " dgam_ram:%d dgam_rom: srgb:%d,bt2020:%d,gamma2_2:%d,pq:%d,hlg:%d" + " post_csc:%d gamcor:%d dgam_rom_for_yuv:%d 3d_lut:%d" + " blnd_lut:%d oscs:%d\n\n", + dc->caps.color.dpp.input_lut_shared, + dc->caps.color.dpp.icsc, + dc->caps.color.dpp.dgam_ram, + dc->caps.color.dpp.dgam_rom_caps.srgb, + dc->caps.color.dpp.dgam_rom_caps.bt2020, + dc->caps.color.dpp.dgam_rom_caps.gamma2_2, + dc->caps.color.dpp.dgam_rom_caps.pq, + dc->caps.color.dpp.dgam_rom_caps.hlg, + dc->caps.color.dpp.post_csc, + dc->caps.color.dpp.gamma_corr, + dc->caps.color.dpp.dgam_rom_for_yuv, + dc->caps.color.dpp.hw_3d_lut, + dc->caps.color.dpp.ogam_ram, + dc->caps.color.dpp.ocsc); + + DTN_INFO("MPCC: OPP DPP MPCCBOT MODE ALPHA_MODE PREMULT OVERLAP_ONLY IDLE" + " SHAPER mode 3DLUT mode 3DLUT bit-depth 3DLUT size OGAM mode OGAM LUT" + " GAMUT adjust " + "C11 C12 C13 C14 " + "C21 C22 C23 C24 " + "C31 C32 C33 C34 \n"); + + for (i = 0; i < pool->pipe_count; i++) { + struct mpcc_state s = {0}; + + pool->mpc->funcs->read_mpcc_state(pool->mpc, i, &s); + mpc3_get_gamut_remap(pool->mpc, i, &s.gamut_remap); + + if (s.opp_id != 0xf) + DTN_INFO("[%2d]: %2xh %2xh %6xh %4d %10d %7d %12d %4d %11s %11s %16s %11s %10s %9s" + " %-12s " + "%010lld %010lld %010lld %010lld " + "%010lld %010lld %010lld %010lld " + "%010lld %010lld %010lld %010lld\n", + i, s.opp_id, s.dpp_id, s.bot_mpcc_id, + s.mode, s.alpha_mode, s.pre_multiplied_alpha, s.overlap_only, + s.idle, + (s.shaper_lut_mode == 1) ? "RAM A" : + ((s.shaper_lut_mode == 2) ? "RAM B" : + "Bypass"), + (s.lut3d_mode == 1) ? "RAM A" : + ((s.lut3d_mode == 2) ? "RAM B" : + "Bypass"), + (s.lut3d_bit_depth <= 0) ? "12-bit" : "10-bit", + (s.lut3d_size == 0) ? "17x17x17" : "9x9x9", + (s.rgam_mode == 0) ? "Bypass" : + ((s.rgam_mode == 2) ? "RAM" : + "Unknown"), + (s.rgam_mode == 1) ? "B" : "A", + (s.gamut_remap.gamut_adjust_type == 0) ? "Bypass" : + ((s.gamut_remap.gamut_adjust_type == 1) ? 
"HW" : + "SW"), + s.gamut_remap.temperature_matrix[0].value, + s.gamut_remap.temperature_matrix[1].value, + s.gamut_remap.temperature_matrix[2].value, + s.gamut_remap.temperature_matrix[3].value, + s.gamut_remap.temperature_matrix[4].value, + s.gamut_remap.temperature_matrix[5].value, + s.gamut_remap.temperature_matrix[6].value, + s.gamut_remap.temperature_matrix[7].value, + s.gamut_remap.temperature_matrix[8].value, + s.gamut_remap.temperature_matrix[9].value, + s.gamut_remap.temperature_matrix[10].value, + s.gamut_remap.temperature_matrix[11].value); + + } + DTN_INFO("\n"); + DTN_INFO("MPC Color Caps: gamut_remap:%d, 3dlut:%d, ogam_ram:%d, ocsc:%d\n\n", + dc->caps.color.mpc.gamut_remap, + dc->caps.color.mpc.num_3dluts, + dc->caps.color.mpc.ogam_ram, + dc->caps.color.mpc.ocsc); +} + bool dcn30_set_blend_lut( struct pipe_ctx *pipe_ctx, const struct dc_plane_state *plane_state) { @@ -1015,21 +1164,3 @@ void dcn30_prepare_bandwidth(struct dc *dc, if (!dc->clk_mgr->clks.fw_based_mclk_switching) dc_dmub_srv_p_state_delegate(dc, false, context); } - -void dcn30_set_static_screen_control(struct pipe_ctx **pipe_ctx, - int num_pipes, const struct dc_static_screen_params *params) -{ - unsigned int i; - unsigned int triggers = 0; - - if (params->triggers.surface_update) - triggers |= 0x100; - if (params->triggers.cursor_update) - triggers |= 0x8; - if (params->triggers.force_trigger) - triggers |= 0x1; - - for (i = 0; i < num_pipes; i++) - pipe_ctx[i]->stream_res.tg->funcs->set_static_screen_control(pipe_ctx[i]->stream_res.tg, - triggers, params->num_frames); -} diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.h b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.h index e557e2b98618..638f018a3cb5 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.h +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.h @@ -52,6 +52,9 @@ bool dcn30_mmhubbub_warmup( unsigned int num_dwb, struct dc_writeback_info *wb_info); +void dcn30_log_color_state(struct dc *dc, + struct dc_log_buffer_ctx *log_ctx); + bool dcn30_set_blend_lut(struct pipe_ctx *pipe_ctx, const struct dc_plane_state *plane_state); @@ -90,7 +93,4 @@ void dcn30_set_hubp_blank(const struct dc *dc, void dcn30_prepare_bandwidth(struct dc *dc, struct dc_state *context); -void dcn30_set_static_screen_control(struct pipe_ctx **pipe_ctx, - int num_pipes, const struct dc_static_screen_params *params); - #endif /* __DC_HWSS_DCN30_H__ */ diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_init.c index 9894caedffed..ef913445a795 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_init.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_init.c @@ -64,7 +64,7 @@ static const struct hw_sequencer_funcs dcn30_funcs = { .update_bandwidth = dcn20_update_bandwidth, .set_drr = dcn10_set_drr, .get_position = dcn10_get_position, - .set_static_screen_control = dcn30_set_static_screen_control, + .set_static_screen_control = dcn10_set_static_screen_control, .setup_stereo = dcn10_setup_stereo, .set_avmute = dcn30_set_avmute, .log_hw_state = dcn10_log_hw_state, diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c index 7423880fabb6..a760f0c6fe98 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.c @@ -98,10 +98,8 @@ static void enable_memory_low_power(struct dc *dc) for (i = 0; i < 
dc->res_pool->stream_enc_count; i++) if (dc->res_pool->stream_enc[i]->vpg) dc->res_pool->stream_enc[i]->vpg->funcs->vpg_powerdown(dc->res_pool->stream_enc[i]->vpg); -#if defined(CONFIG_DRM_AMD_DC_FP) for (i = 0; i < dc->res_pool->hpo_dp_stream_enc_count; i++) dc->res_pool->hpo_dp_stream_enc[i]->vpg->funcs->vpg_powerdown(dc->res_pool->hpo_dp_stream_enc[i]->vpg); -#endif } } @@ -617,3 +615,21 @@ void dcn31_setup_hpo_hw_control(const struct dce_hwseq *hws, bool enable) if (hws->ctx->dc->debug.hpo_optimization) REG_UPDATE(HPO_TOP_HW_CONTROL, HPO_IO_EN, !!enable); } + +void dcn31_set_static_screen_control(struct pipe_ctx **pipe_ctx, + int num_pipes, const struct dc_static_screen_params *params) +{ + unsigned int i; + unsigned int triggers = 0; + + if (params->triggers.surface_update) + triggers |= 0x100; + if (params->triggers.cursor_update) + triggers |= 0x8; + if (params->triggers.force_trigger) + triggers |= 0x1; + + for (i = 0; i < num_pipes; i++) + pipe_ctx[i]->stream_res.tg->funcs->set_static_screen_control(pipe_ctx[i]->stream_res.tg, + triggers, params->num_frames); +} diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.h b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.h index edfc01d6ad73..b8bc939da155 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.h +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_hwseq.h @@ -56,4 +56,8 @@ bool dcn31_is_abm_supported(struct dc *dc, void dcn31_init_pipes(struct dc *dc, struct dc_state *context); void dcn31_setup_hpo_hw_control(const struct dce_hwseq *hws, bool enable); +void dcn31_set_static_screen_control(struct pipe_ctx **pipe_ctx, + int num_pipes, const struct dc_static_screen_params *params); + + #endif /* __DC_HWSS_DCN31_H__ */ diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_init.c index 669f524bd064..c06cc2c5da92 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_init.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_init.c @@ -67,7 +67,7 @@ static const struct hw_sequencer_funcs dcn31_funcs = { .update_bandwidth = dcn20_update_bandwidth, .set_drr = dcn10_set_drr, .get_position = dcn10_get_position, - .set_static_screen_control = dcn30_set_static_screen_control, + .set_static_screen_control = dcn31_set_static_screen_control, .setup_stereo = dcn10_setup_stereo, .set_avmute = dcn30_set_avmute, .log_hw_state = dcn10_log_hw_state, diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c index ccb7e317e86a..542ce3b7f9e4 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c @@ -69,7 +69,7 @@ static const struct hw_sequencer_funcs dcn314_funcs = { .update_bandwidth = dcn20_update_bandwidth, .set_drr = dcn10_set_drr, .get_position = dcn10_get_position, - .set_static_screen_control = dcn30_set_static_screen_control, + .set_static_screen_control = dcn31_set_static_screen_control, .setup_stereo = dcn10_setup_stereo, .set_avmute = dcn30_set_avmute, .log_hw_state = dcn10_log_hw_state, diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c index 6c9299c7683d..aa36d7a56ca8 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c @@ -1474,9 +1474,44 @@ void dcn32_update_dsc_pg(struct dc *dc, } } +void dcn32_disable_phantom_streams(struct dc 
*dc, struct dc_state *context) +{ + struct dce_hwseq *hws = dc->hwseq; + int i; + + for (i = dc->res_pool->pipe_count - 1; i >= 0 ; i--) { + struct pipe_ctx *pipe_ctx_old = + &dc->current_state->res_ctx.pipe_ctx[i]; + struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i]; + + if (!pipe_ctx_old->stream) + continue; + + if (dc_state_get_pipe_subvp_type(dc->current_state, pipe_ctx_old) != SUBVP_PHANTOM) + continue; + + if (pipe_ctx_old->top_pipe || pipe_ctx_old->prev_odm_pipe) + continue; + + if (!pipe_ctx->stream || pipe_need_reprogram(pipe_ctx_old, pipe_ctx) || + (pipe_ctx->stream && dc_state_get_pipe_subvp_type(context, pipe_ctx) != SUBVP_PHANTOM)) { + struct clock_source *old_clk = pipe_ctx_old->clock_source; + + if (hws->funcs.reset_back_end_for_pipe) + hws->funcs.reset_back_end_for_pipe(dc, pipe_ctx_old, dc->current_state); + if (hws->funcs.enable_stream_gating) + hws->funcs.enable_stream_gating(dc, pipe_ctx_old); + if (old_clk) + old_clk->funcs->cs_power_down(old_clk); + } + } +} + void dcn32_enable_phantom_streams(struct dc *dc, struct dc_state *context) { unsigned int i; + enum dc_status status = DC_OK; + struct dce_hwseq *hws = dc->hwseq; for (i = 0; i < dc->res_pool->pipe_count; i++) { struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i]; @@ -1497,16 +1532,39 @@ void dcn32_enable_phantom_streams(struct dc *dc, struct dc_state *context) } } for (i = 0; i < dc->res_pool->pipe_count; i++) { - struct pipe_ctx *new_pipe = &context->res_ctx.pipe_ctx[i]; - - if (new_pipe->stream && dc_state_get_pipe_subvp_type(context, new_pipe) == SUBVP_PHANTOM) { - // If old context or new context has phantom pipes, apply - // the phantom timings now. We can't change the phantom - // pipe configuration safely without driver acquiring - // the DMCUB lock first. 
- dc->hwss.apply_ctx_to_hw(dc, context); - break; + struct pipe_ctx *pipe_ctx_old = + &dc->current_state->res_ctx.pipe_ctx[i]; + struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i]; + + if (pipe_ctx->stream == NULL) + continue; + + if (dc_state_get_pipe_subvp_type(context, pipe_ctx) != SUBVP_PHANTOM) + continue; + + if (pipe_ctx->stream == pipe_ctx_old->stream && + pipe_ctx->stream->link->link_state_valid) { + continue; } + + if (pipe_ctx_old->stream && !pipe_need_reprogram(pipe_ctx_old, pipe_ctx)) + continue; + + if (pipe_ctx->top_pipe || pipe_ctx->prev_odm_pipe) + continue; + + if (hws->funcs.apply_single_controller_ctx_to_hw) + status = hws->funcs.apply_single_controller_ctx_to_hw( + pipe_ctx, + context, + dc); + + ASSERT(status == DC_OK); + +#ifdef CONFIG_DRM_AMD_DC_FP + if (hws->funcs.resync_fifo_dccg_dio) + hws->funcs.resync_fifo_dccg_dio(hws, dc, context); +#endif } } diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.h b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.h index cecf7f0f5671..069e20bc87c0 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.h +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.h @@ -111,6 +111,8 @@ void dcn32_update_dsc_pg(struct dc *dc, void dcn32_enable_phantom_streams(struct dc *dc, struct dc_state *context); +void dcn32_disable_phantom_streams(struct dc *dc, struct dc_state *context); + void dcn32_init_blank( struct dc *dc, struct timing_generator *tg); diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_init.c index 427cfc8c24a4..2b073123d3ed 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_init.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_init.c @@ -65,7 +65,7 @@ static const struct hw_sequencer_funcs dcn32_funcs = { .update_bandwidth = dcn20_update_bandwidth, .set_drr = dcn10_set_drr, .get_position = dcn10_get_position, - .set_static_screen_control = dcn30_set_static_screen_control, + .set_static_screen_control = dcn31_set_static_screen_control, .setup_stereo = dcn10_setup_stereo, .set_avmute = dcn30_set_avmute, .log_hw_state = dcn10_log_hw_state, @@ -109,6 +109,7 @@ static const struct hw_sequencer_funcs dcn32_funcs = { .get_dcc_en_bits = dcn10_get_dcc_en_bits, .commit_subvp_config = dcn32_commit_subvp_config, .enable_phantom_streams = dcn32_enable_phantom_streams, + .disable_phantom_streams = dcn32_disable_phantom_streams, .subvp_pipe_control_lock = dcn32_subvp_pipe_control_lock, .update_visual_confirm_color = dcn10_update_visual_confirm_color, .subvp_pipe_control_lock_fast = dcn32_subvp_pipe_control_lock_fast, @@ -159,6 +160,8 @@ static const struct hwseq_private_funcs dcn32_private_funcs = { .set_pixels_per_cycle = dcn32_set_pixels_per_cycle, .resync_fifo_dccg_dio = dcn32_resync_fifo_dccg_dio, .is_dp_dig_pixel_rate_div_policy = dcn32_is_dp_dig_pixel_rate_div_policy, + .apply_single_controller_ctx_to_hw = dce110_apply_single_controller_ctx_to_hw, + .reset_back_end_for_pipe = dcn20_reset_back_end_for_pipe, }; void dcn32_hw_sequencer_init_functions(struct dc *dc) diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c index 8b6c49622f3b..4b92df23ff0d 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c @@ -1342,8 +1342,8 @@ void dcn35_set_drr(struct pipe_ctx **pipe_ctx, { int i = 0; struct drr_params params = {0}; - // DRR set trigger event mapped to OTG_TRIG_A 
(bit 11) for manual control flow - unsigned int event_triggers = 0x800; + // DRR set trigger event mapped to OTG_TRIG_A + unsigned int event_triggers = 0x2;//Bit[1]: OTG_TRIG_A // Note DRR trigger events are generated regardless of whether num frames met. unsigned int num_frames = 2; @@ -1377,3 +1377,20 @@ void dcn35_set_drr(struct pipe_ctx **pipe_ctx, } } } +void dcn35_set_static_screen_control(struct pipe_ctx **pipe_ctx, + int num_pipes, const struct dc_static_screen_params *params) +{ + unsigned int i; + unsigned int triggers = 0; + + if (params->triggers.surface_update) + triggers |= 0x200;/*bit 9 : 10 0000 0000*/ + if (params->triggers.cursor_update) + triggers |= 0x8;/*bit3*/ + if (params->triggers.force_trigger) + triggers |= 0x1; + for (i = 0; i < num_pipes; i++) + pipe_ctx[i]->stream_res.tg->funcs-> + set_static_screen_control(pipe_ctx[i]->stream_res.tg, + triggers, params->num_frames); +} diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.h b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.h index fd66316e33de..c354efa6c1b2 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.h +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.h @@ -90,4 +90,7 @@ uint32_t dcn35_get_idle_state(const struct dc *dc); void dcn35_set_drr(struct pipe_ctx **pipe_ctx, int num_pipes, struct dc_crtc_timing_adjust adjust); +void dcn35_set_static_screen_control(struct pipe_ctx **pipe_ctx, + int num_pipes, const struct dc_static_screen_params *params); + #endif /* __DC_HWSS_DCN35_H__ */ diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c index a630aa77dcec..a93073055e7b 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c @@ -70,7 +70,7 @@ static const struct hw_sequencer_funcs dcn35_funcs = { .update_bandwidth = dcn20_update_bandwidth, .set_drr = dcn35_set_drr, .get_position = dcn10_get_position, - .set_static_screen_control = dcn30_set_static_screen_control, + .set_static_screen_control = dcn35_set_static_screen_control, .setup_stereo = dcn10_setup_stereo, .set_avmute = dcn30_set_avmute, .log_hw_state = dcn10_log_hw_state, diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c index 143d3fc0221c..ab17fa1c64e8 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c @@ -69,7 +69,7 @@ static const struct hw_sequencer_funcs dcn351_funcs = { .update_bandwidth = dcn20_update_bandwidth, .set_drr = dcn10_set_drr, .get_position = dcn10_get_position, - .set_static_screen_control = dcn30_set_static_screen_control, + .set_static_screen_control = dcn35_set_static_screen_control, .setup_stereo = dcn10_setup_stereo, .set_avmute = dcn30_set_avmute, .log_hw_state = dcn10_log_hw_state, diff --git a/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h index a54399383318..f89f205e42a1 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h +++ b/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h @@ -339,6 +339,8 @@ struct hw_sequencer_funcs { /* HW State Logging Related */ void (*log_hw_state)(struct dc *dc, struct dc_log_buffer_ctx *log_ctx); + void (*log_color_state)(struct dc *dc, + struct dc_log_buffer_ctx *log_ctx); void (*get_hw_state)(struct dc *dc, char *pBuf, unsigned int bufSize, unsigned int mask); void 
(*clear_status_bits)(struct dc *dc, unsigned int mask); @@ -379,6 +381,7 @@ struct hw_sequencer_funcs { struct dc_cursor_attributes *cursor_attr); void (*commit_subvp_config)(struct dc *dc, struct dc_state *context); void (*enable_phantom_streams)(struct dc *dc, struct dc_state *context); + void (*disable_phantom_streams)(struct dc *dc, struct dc_state *context); void (*subvp_pipe_control_lock)(struct dc *dc, struct dc_state *context, bool lock, diff --git a/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer_private.h b/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer_private.h index 6137cf09aa54..554cfab5ab24 100644 --- a/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer_private.h +++ b/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer_private.h @@ -155,7 +155,6 @@ struct hwseq_private_funcs { void (*setup_hpo_hw_control)(const struct dce_hwseq *hws, bool enable); void (*enable_plane)(struct dc *dc, struct pipe_ctx *pipe_ctx, struct dc_state *context); -#ifdef CONFIG_DRM_AMD_DC_FP void (*program_mall_pipe_config)(struct dc *dc, struct dc_state *context); void (*update_force_pstate)(struct dc *dc, struct dc_state *context); void (*update_mall_sel)(struct dc *dc, struct dc_state *context); @@ -165,8 +164,14 @@ struct hwseq_private_funcs { void (*set_pixels_per_cycle)(struct pipe_ctx *pipe_ctx); void (*resync_fifo_dccg_dio)(struct dce_hwseq *hws, struct dc *dc, struct dc_state *context); + enum dc_status (*apply_single_controller_ctx_to_hw)( + struct pipe_ctx *pipe_ctx, + struct dc_state *context, + struct dc *dc); bool (*is_dp_dig_pixel_rate_div_policy)(struct pipe_ctx *pipe_ctx); -#endif + void (*reset_back_end_for_pipe)(struct dc *dc, + struct pipe_ctx *pipe_ctx, + struct dc_state *context); }; struct dce_hwseq { diff --git a/drivers/gpu/drm/amd/display/dc/inc/core_types.h b/drivers/gpu/drm/amd/display/dc/inc/core_types.h index 3a6bf77a6873..ebb659c327e0 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/core_types.h +++ b/drivers/gpu/drm/amd/display/dc/inc/core_types.h @@ -333,6 +333,8 @@ struct stream_resource { uint8_t gsl_group; struct test_pattern_params test_pattern_params; + + bool left_edge_extra_pixel; }; struct plane_resource { diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/audio.h b/drivers/gpu/drm/amd/display/dc/inc/hw/audio.h index 6ed1fb8c9300..b6203253111c 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/hw/audio.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw/audio.h @@ -43,7 +43,8 @@ struct audio_funcs { void (*az_configure)(struct audio *audio, enum signal_type signal, const struct audio_crtc_info *crtc_info, - const struct audio_info *audio_info); + const struct audio_info *audio_info, + const struct audio_dp_link_info *dp_link_info); void (*wall_dto_setup)(struct audio *audio, enum signal_type signal, diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h index 6f4c97543c14..f4d4a68c91dc 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h @@ -356,6 +356,7 @@ struct clk_mgr_internal { long long wm_range_table_addr; bool dpm_present; + bool pme_trigger_pending; }; struct clk_mgr_internal_funcs { @@ -393,6 +394,11 @@ static inline int khz_to_mhz_ceil(int khz) return (khz + 999) / 1000; } +static inline int khz_to_mhz_floor(int khz) +{ + return khz / 1000; +} + int clk_mgr_helper_get_active_display_cnt( struct dc *dc, struct dc_state *context); diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h 
b/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h index 901891316dfb..2ae7484d18af 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h @@ -26,6 +26,12 @@ #ifndef __DAL_DCHUBBUB_H__ #define __DAL_DCHUBBUB_H__ +/** + * DOC: overview + * + * There is only one common DCHUBBUB. It contains the common request and return + * blocks for the Data Fabric Interface that are not clock/power gated. + */ enum dcc_control { dcc_control__256_256_xxx, diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h index f4aa76e02518..0f24afbf4388 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dpp.h @@ -27,6 +27,31 @@ #ifndef __DAL_DPP_H__ #define __DAL_DPP_H__ +/** + * DOC: overview + * + * The DPP (Display Pipe and Plane) block is the unified display data + * processing engine in DCN for processing graphic or video data on per DPP + * rectangle base. This rectangle can be a part of SLS (Single Large Surface), + * or a layer to be blended with other DPP, or a rectangle associated with a + * display tile. + * + * It provides various functions including: + * - graphic color keyer + * - graphic cursor compositing + * - graphic or video image source to destination scaling + * - image sharping + * - video format conversion from 4:2:0 or 4:2:2 to 4:4:4 + * - Color Space Conversion + * - Host LUT gamma adjustment + * - Color Gamut Remap + * - brightness and contrast adjustment. + * + * DPP pipe consists of Converter and Cursor (CNVC), Scaler (DSCL), Color + * Management (CM), Output Buffer (OBUF) and Digital Bypass (DPB) module + * connected in a video/graphics pipeline. + */ + #include "transform.h" #include "cursor_reg_cache.h" @@ -141,6 +166,7 @@ struct dcn_dpp_state { uint32_t igam_input_format; uint32_t dgam_lut_mode; uint32_t rgam_lut_mode; + // gamut_remap data for dcn10_get_cm_states() uint32_t gamut_remap_mode; uint32_t gamut_remap_c11_c12; uint32_t gamut_remap_c13_c14; @@ -148,6 +174,16 @@ struct dcn_dpp_state { uint32_t gamut_remap_c23_c24; uint32_t gamut_remap_c31_c32; uint32_t gamut_remap_c33_c34; + // gamut_remap data for dcn*_log_color_state() + struct dpp_grph_csc_adjustment gamut_remap; + uint32_t shaper_lut_mode; + uint32_t lut3d_mode; + uint32_t lut3d_bit_depth; + uint32_t lut3d_size; + uint32_t blnd_lut_mode; + uint32_t pre_dgam_mode; + uint32_t pre_dgam_select; + uint32_t gamcor_mode; }; struct CM_bias_params { @@ -290,6 +326,9 @@ struct dpp_funcs { void (*dpp_cnv_set_alpha_keyer)( struct dpp *dpp_base, struct cnv_color_keyer_params *color_keyer); + + void (*dpp_get_gamut_remap)(struct dpp *dpp_base, + struct dpp_grph_csc_adjustment *adjust); }; diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h index 7f3f9b69e903..72610cd7eae0 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h @@ -26,13 +26,24 @@ #ifndef __DAL_HUBP_H__ #define __DAL_HUBP_H__ +/** + * DOC: overview + * + * Display Controller Hub (DCHUB) is the gateway between the Scalable Data Port + * (SDP) and DCN. This component has multiple features, such as memory + * arbitration, rotation, and cursor manipulation. + * + * There is one HUBP allocated per pipe, which fetches data and converts + * different pixel formats (i.e. ARGB8888, NV12, etc) into linear, interleaved + * and fixed-depth streams of pixel data. 
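The new dpp_get_gamut_remap() hook and the gamut_remap member added to struct dcn_dpp_state above let the log_color_state() helpers print the twelve temperature_matrix coefficients (C11..C34) as raw fixed31_32 values. For orientation, those twelve numbers form a 3x4 matrix: three rows of RGB coefficients plus a column treated here as a per-channel offset. Below is a minimal standalone sketch of how such a matrix acts on one pixel, using plain doubles rather than fixed31_32 and ignoring clamping and range handling; it is an illustration, not the DC color-management programming path.

    #include <stdio.h>

    /* Apply a 3x4 colour matrix: three rows of RGB coefficients plus a
     * per-channel offset column, mirroring the C11..C34 layout printed
     * by the new log helpers (plain doubles instead of fixed31_32). */
    static void apply_gamut_remap(const double m[12], const double in[3],
                                  double out[3])
    {
        int row;

        for (row = 0; row < 3; row++)
            out[row] = m[row * 4 + 0] * in[0] +
                       m[row * 4 + 1] * in[1] +
                       m[row * 4 + 2] * in[2] +
                       m[row * 4 + 3];      /* offset term */
    }

    int main(void)
    {
        /* Identity coefficients with zero offsets, i.e. the equivalent
         * of the "Bypass" adjust type in the log output. */
        const double identity[12] = {
            1.0, 0.0, 0.0, 0.0,
            0.0, 1.0, 0.0, 0.0,
            0.0, 0.0, 1.0, 0.0,
        };
        const double rgb_in[3] = { 0.25, 0.50, 0.75 };
        double rgb_out[3];

        apply_gamut_remap(identity, rgb_in, rgb_out);
        printf("R=%.2f G=%.2f B=%.2f\n", rgb_out[0], rgb_out[1], rgb_out[2]);
        return 0;
    }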
+ */ + #include "mem_input.h" #include "cursor_reg_cache.h" #define OPP_ID_INVALID 0xf #define MAX_TTU 0xffffff - enum cursor_pitch { CURSOR_PITCH_64_PIXELS = 0, CURSOR_PITCH_128_PIXELS, @@ -146,9 +157,7 @@ struct hubp_funcs { void (*set_blank)(struct hubp *hubp, bool blank); void (*set_blank_regs)(struct hubp *hubp, bool blank); -#ifdef CONFIG_DRM_AMD_DC_FP void (*phantom_hubp_post_enable)(struct hubp *hubp); -#endif void (*set_hubp_blank_en)(struct hubp *hubp, bool blank); void (*set_cursor_attributes)( diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h b/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h index 61a2406dcc53..ba9b942ce09f 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h @@ -23,13 +23,28 @@ */ /** - * DOC: mpc-overview + * DOC: overview * - * Multiple Pipe/Plane Combined (MPC) is a component in the hardware pipeline + * Multiple Pipe/Plane Combiner (MPC) is a component in the hardware pipeline * that performs blending of multiple planes, using global and per-pixel alpha. * It also performs post-blending color correction operations according to the * hardware capabilities, such as color transformation matrix and gamma 1D and * 3D LUT. + * + * MPC receives output from all DPP pipes and combines them to multiple outputs + * supporting "M MPC inputs -> N MPC outputs" flexible composition + * architecture. It features: + * + * - Programmable blending structure to allow software controlled blending and + * cascading; + * - Programmable window location of each DPP in active region of display; + * - Combining multiple DPP pipes in one active region when a single DPP pipe + * cannot process very large surface; + * - Combining multiple DPP from different SLS with blending; + * - Stereo formats from single DPP in top-bottom or side-by-side modes; + * - Stereo formats from 2 DPPs; + * - Alpha blending of multiple layers from different DPP pipes; + * - Programmable background color; */ #ifndef __DC_MPCC_H__ @@ -83,34 +98,66 @@ enum mpcc_alpha_blend_mode { /** * struct mpcc_blnd_cfg - MPCC blending configuration - * - * @black_color: background color - * @alpha_mode: alpha blend mode (MPCC_ALPHA_BLND_MODE) - * @pre_multiplied_alpha: whether pixel color values were pre-multiplied by the - * alpha channel (MPCC_ALPHA_MULTIPLIED_MODE) - * @global_gain: used when blend mode considers both pixel alpha and plane - * alpha value and assumes the global alpha value. - * @global_alpha: plane alpha value - * @overlap_only: whether overlapping of different planes is allowed - * @bottom_gain_mode: blend mode for bottom gain setting - * @background_color_bpc: background color for bpc - * @top_gain: top gain setting - * @bottom_inside_gain: blend mode for bottom inside - * @bottom_outside_gain: blend mode for bottom outside */ struct mpcc_blnd_cfg { - struct tg_color black_color; /* background color */ - enum mpcc_alpha_blend_mode alpha_mode; /* alpha blend mode */ - bool pre_multiplied_alpha; /* alpha pre-multiplied mode flag */ + /** + * @black_color: background color. + */ + struct tg_color black_color; + + /** + * @alpha_mode: alpha blend mode (MPCC_ALPHA_BLND_MODE). + */ + enum mpcc_alpha_blend_mode alpha_mode; + + /*** + * @@pre_multiplied_alpha: + * + * Whether pixel color values were pre-multiplied by the alpha channel + * (MPCC_ALPHA_MULTIPLIED_MODE). + */ + bool pre_multiplied_alpha; + + /** + * @global_gain: Used when blend mode considers both pixel alpha and plane. + */ int global_gain; + + /** + * @global_alpha: Plane alpha value. 
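With the mpcc_blnd_cfg fields now documented member by member, a quick refresher on what they parameterize may help: alpha_mode selects between per-pixel and global alpha, pre_multiplied_alpha records whether the plane's colour values already carry the alpha factor, and global_alpha/global_gain scale the plane as a whole. The sketch below is only the textbook source-over math for a single channel, with the per-pixel and plane-wide factors combined by simple multiplication for illustration; it is not the MPCC hardware equation or its fixed-point ranges.

    #include <stdio.h>

    /* Textbook source-over blending for one colour channel, in straight
     * and pre-multiplied form.  This only anchors the mpcc_blnd_cfg
     * terminology; it is not the MPCC hardware equation. */
    static double blend_straight(double fg, double bg, double alpha)
    {
        return alpha * fg + (1.0 - alpha) * bg;
    }

    static double blend_premultiplied(double fg_premul, double bg, double alpha)
    {
        /* The foreground colour already carries the alpha factor. */
        return fg_premul + (1.0 - alpha) * bg;
    }

    int main(void)
    {
        double fg = 0.8, bg = 0.2;
        double pixel_alpha = 0.5;
        /* Plane-wide factor, combined with the per-pixel alpha by
         * multiplication purely for illustration. */
        double global_alpha = 0.75;
        double alpha = pixel_alpha * global_alpha;

        printf("straight alpha      : %.3f\n", blend_straight(fg, bg, alpha));
        printf("pre-multiplied alpha: %.3f\n",
               blend_premultiplied(fg * alpha, bg, alpha));
        return 0;
    }

Both calls print the same blended value, which is the point of the pre-multiplied form: the multiplication by alpha has simply been folded into the source colour ahead of time.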
+ */ int global_alpha; + + /** + * @@overlap_only: Whether overlapping of different planes is allowed. + */ bool overlap_only; /* MPCC top/bottom gain settings */ + + /** + * @bottom_gain_mode: Blend mode for bottom gain setting. + */ int bottom_gain_mode; + + /** + * @background_color_bpc: Background color for bpc. + */ int background_color_bpc; + + /** + * @top_gain: Top gain setting. + */ int top_gain; + + /** + * @bottom_inside_gain: Blend mode for bottom inside. + */ int bottom_inside_gain; + + /** + * @bottom_outside_gain: Blend mode for bottom outside. + */ int bottom_outside_gain; }; @@ -150,34 +197,58 @@ struct mpc_dwb_flow_control { /** * struct mpcc - MPCC connection and blending configuration for a single MPCC instance. - * @mpcc_id: MPCC physical instance - * @dpp_id: DPP input to this MPCC - * @mpcc_bot: pointer to bottom layer MPCC. NULL when not connected. - * @blnd_cfg: the blending configuration for this MPCC - * @sm_cfg: stereo mix setting for this MPCC - * @shared_bottom: if MPCC output to both OPP and DWB endpoints, true. Otherwise, false. * * This struct is used as a node in an MPC tree. */ struct mpcc { - int mpcc_id; /* MPCC physical instance */ - int dpp_id; /* DPP input to this MPCC */ - struct mpcc *mpcc_bot; /* pointer to bottom layer MPCC. NULL when not connected */ - struct mpcc_blnd_cfg blnd_cfg; /* The blending configuration for this MPCC */ - struct mpcc_sm_cfg sm_cfg; /* stereo mix setting for this MPCC */ - bool shared_bottom; /* TRUE if MPCC output to both OPP and DWB endpoints, else FALSE */ + /** + * @mpcc_id: MPCC physical instance. + */ + int mpcc_id; + + /** + * @dpp_id: DPP input to this MPCC + */ + int dpp_id; + + /** + * @mpcc_bot: Pointer to bottom layer MPCC. NULL when not connected. + */ + struct mpcc *mpcc_bot; + + /** + * @blnd_cfg: The blending configuration for this MPCC. + */ + struct mpcc_blnd_cfg blnd_cfg; + + /** + * @sm_cfg: stereo mix setting for this MPCC + */ + struct mpcc_sm_cfg sm_cfg; + + /** + * @shared_bottom: + * + * If MPCC output to both OPP and DWB endpoints, true. Otherwise, false. + */ + bool shared_bottom; }; /** * struct mpc_tree - MPC tree represents all MPCC connections for a pipe. * - * @opp_id: the OPP instance that owns this MPC tree - * @opp_list: the top MPCC layer of the MPC tree that outputs to OPP endpoint * */ struct mpc_tree { - int opp_id; /* The OPP instance that owns this MPC tree */ - struct mpcc *opp_list; /* The top MPCC layer of the MPC tree that outputs to OPP endpoint */ + /** + * @opp_id: The OPP instance that owns this MPC tree. + */ + int opp_id; + + /** + * @opp_list: the top MPCC layer of the MPC tree that outputs to OPP endpoint + */ + struct mpcc *opp_list; }; struct mpc { @@ -199,6 +270,13 @@ struct mpcc_state { uint32_t overlap_only; uint32_t idle; uint32_t busy; + uint32_t shaper_lut_mode; + uint32_t lut3d_mode; + uint32_t lut3d_bit_depth; + uint32_t lut3d_size; + uint32_t rgam_mode; + uint32_t rgam_lut; + struct mpc_grph_gamut_adjustment gamut_remap; }; /** @@ -217,16 +295,20 @@ struct mpc_funcs { * Only used for planes that are part of blending chain for OPP output * * Parameters: - * [in/out] mpc - MPC context. - * [in/out] tree - MPC tree structure that plane will be added to. - * [in] blnd_cfg - MPCC blending configuration for the new blending layer. - * [in] sm_cfg - MPCC stereo mix configuration for the new blending layer. - * stereo mix must disable for the very bottom layer of the tree config. - * [in] insert_above_mpcc - Insert new plane above this MPCC. 
If NULL, insert as bottom plane. - * [in] dpp_id - DPP instance for the plane to be added. - * [in] mpcc_id - The MPCC physical instance to use for blending. - * - * Return: struct mpcc* - MPCC that was added. + * + * - [in/out] mpc - MPC context. + * - [in/out] tree - MPC tree structure that plane will be added to. + * - [in] blnd_cfg - MPCC blending configuration for the new blending layer. + * - [in] sm_cfg - MPCC stereo mix configuration for the new blending layer. + * stereo mix must disable for the very bottom layer of the tree config. + * - [in] insert_above_mpcc - Insert new plane above this MPCC. + * If NULL, insert as bottom plane. + * - [in] dpp_id - DPP instance for the plane to be added. + * - [in] mpcc_id - The MPCC physical instance to use for blending. + * + * Return: + * + * struct mpcc* - MPCC that was added. */ struct mpcc* (*insert_plane)( struct mpc *mpc, @@ -243,11 +325,14 @@ struct mpc_funcs { * Remove a specified MPCC from the MPC tree. * * Parameters: - * [in/out] mpc - MPC context. - * [in/out] tree - MPC tree structure that plane will be removed from. - * [in/out] mpcc - MPCC to be removed from tree. * - * Return: void + * - [in/out] mpc - MPC context. + * - [in/out] tree - MPC tree structure that plane will be removed from. + * - [in/out] mpcc - MPCC to be removed from tree. + * + * Return: + * + * void */ void (*remove_mpcc)( struct mpc *mpc, @@ -260,9 +345,12 @@ struct mpc_funcs { * Reset the MPCC HW status by disconnecting all muxes. * * Parameters: - * [in/out] mpc - MPC context. * - * Return: void + * - [in/out] mpc - MPC context. + * + * Return: + * + * void */ void (*mpc_init)(struct mpc *mpc); void (*mpc_init_single_inst)( @@ -275,11 +363,14 @@ struct mpc_funcs { * Update the blending configuration for a specified MPCC. * * Parameters: - * [in/out] mpc - MPC context. - * [in] blnd_cfg - MPCC blending configuration. - * [in] mpcc_id - The MPCC physical instance. * - * Return: void + * - [in/out] mpc - MPC context. + * - [in] blnd_cfg - MPCC blending configuration. + * - [in] mpcc_id - The MPCC physical instance. + * + * Return: + * + * void */ void (*update_blending)( struct mpc *mpc, @@ -289,15 +380,18 @@ struct mpc_funcs { /** * @cursor_lock: * - * Lock cursor updates for the specified OPP. - * OPP defines the set of MPCC that are locked together for cursor. + * Lock cursor updates for the specified OPP. OPP defines the set of + * MPCC that are locked together for cursor. * * Parameters: - * [in] mpc - MPC context. - * [in] opp_id - The OPP to lock cursor updates on - * [in] lock - lock/unlock the OPP * - * Return: void + * - [in] mpc - MPC context. + * - [in] opp_id - The OPP to lock cursor updates on + * - [in] lock - lock/unlock the OPP + * + * Return: + * + * void */ void (*cursor_lock)( struct mpc *mpc, @@ -307,20 +401,25 @@ struct mpc_funcs { /** * @insert_plane_to_secondary: * - * Add DPP into secondary MPC tree based on specified blending position. - * Only used for planes that are part of blending chain for DWB output + * Add DPP into secondary MPC tree based on specified blending + * position. Only used for planes that are part of blending chain for + * DWB output * * Parameters: - * [in/out] mpc - MPC context. - * [in/out] tree - MPC tree structure that plane will be added to. - * [in] blnd_cfg - MPCC blending configuration for the new blending layer. - * [in] sm_cfg - MPCC stereo mix configuration for the new blending layer. - * stereo mix must disable for the very bottom layer of the tree config. 
- * [in] insert_above_mpcc - Insert new plane above this MPCC. If NULL, insert as bottom plane. - * [in] dpp_id - DPP instance for the plane to be added. - * [in] mpcc_id - The MPCC physical instance to use for blending. - * - * Return: struct mpcc* - MPCC that was added. + * + * - [in/out] mpc - MPC context. + * - [in/out] tree - MPC tree structure that plane will be added to. + * - [in] blnd_cfg - MPCC blending configuration for the new blending layer. + * - [in] sm_cfg - MPCC stereo mix configuration for the new blending layer. + * stereo mix must disable for the very bottom layer of the tree config. + * - [in] insert_above_mpcc - Insert new plane above this MPCC. If + * NULL, insert as bottom plane. + * - [in] dpp_id - DPP instance for the plane to be added. + * - [in] mpcc_id - The MPCC physical instance to use for blending. + * + * Return: + * + * struct mpcc* - MPCC that was added. */ struct mpcc* (*insert_plane_to_secondary)( struct mpc *mpc, @@ -337,10 +436,14 @@ struct mpc_funcs { * Remove a specified DPP from the 'secondary' MPC tree. * * Parameters: - * [in/out] mpc - MPC context. - * [in/out] tree - MPC tree structure that plane will be removed from. - * [in] mpcc - MPCC to be removed from tree. - * Return: void + * + * - [in/out] mpc - MPC context. + * - [in/out] tree - MPC tree structure that plane will be removed from. + * - [in] mpcc - MPCC to be removed from tree. + * + * Return: + * + * void */ void (*remove_mpcc_from_secondary)( struct mpc *mpc, diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/opp.h b/drivers/gpu/drm/amd/display/dc/inc/hw/opp.h index 7617fabbd16e..aee5372e292c 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/hw/opp.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw/opp.h @@ -23,6 +23,22 @@ * */ +/** + * DOC: overview + * + * The Output Plane Processor (OPP) block groups have functions that format + * pixel streams such that they are suitable for display at the display device. + * The key functions contained in the OPP are: + * + * - Adaptive Backlight Modulation (ABM) + * - Formatter (FMT), which provides pixel-by-pixel operations to format the + * incoming pixel stream. + * - Output Buffer, which provides pixel replication and overlapping. + * - Interface between MPC and OPTC. + * - Clock and reset generation. + * - CRC generation. 
+ */ + #ifndef __DAL_OPP_H__ #define __DAL_OPP_H__ diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h b/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h index 9a00a99317b2..d98d72f35be5 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h @@ -182,9 +182,7 @@ struct timing_generator_funcs { bool (*enable_crtc)(struct timing_generator *tg); bool (*disable_crtc)(struct timing_generator *tg); -#ifdef CONFIG_DRM_AMD_DC_FP void (*phantom_crtc_post_enable)(struct timing_generator *tg); -#endif void (*disable_phantom_crtc)(struct timing_generator *tg); bool (*immediate_disable_crtc)(struct timing_generator *tg); bool (*is_counter_moving)(struct timing_generator *tg); diff --git a/drivers/gpu/drm/amd/display/dc/inc/resource.h b/drivers/gpu/drm/amd/display/dc/inc/resource.h index c958ef37b78a..b14d52e52fa2 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/resource.h +++ b/drivers/gpu/drm/amd/display/dc/inc/resource.h @@ -107,6 +107,10 @@ void resource_build_test_pattern_params( struct resource_context *res_ctx, struct pipe_ctx *pipe_ctx); +void resource_build_subsampling_params( + struct resource_context *res_ctx, + struct pipe_ctx *pipe_ctx); + bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx); enum dc_status resource_build_scaling_params_for_context( @@ -427,22 +431,18 @@ struct pipe_ctx *resource_get_primary_dpp_pipe(const struct pipe_ctx *dpp_pipe); int resource_get_mpc_slice_index(const struct pipe_ctx *dpp_pipe); /* - * Get number of MPC "cuts" of the plane associated with the pipe. MPC slice - * count is equal to MPC splits + 1. For example if a plane is cut 3 times, it - * will have 4 pieces of slice. - * return - 0 if pipe is not used for a plane with MPCC combine. otherwise - * the number of MPC "cuts" for the plane. + * Get the number of MPC slices associated with the pipe. + * The function returns 0 if the pipe is not associated with an MPC combine + * pipe topology. */ -int resource_get_mpc_slice_count(const struct pipe_ctx *opp_head); +int resource_get_mpc_slice_count(const struct pipe_ctx *pipe); /* - * Get number of ODM "cuts" of the timing associated with the pipe. ODM slice - * count is equal to ODM splits + 1. For example if a timing is cut 3 times, it - * will have 4 pieces of slice. - * return - 0 if pipe is not used for ODM combine. otherwise - * the number of ODM "cuts" for the timing. + * Get the number of ODM slices associated with the pipe. + * The function returns 0 if the pipe is not associated with an ODM combine + * pipe topology. */ -int resource_get_odm_slice_count(const struct pipe_ctx *otg_master); +int resource_get_odm_slice_count(const struct pipe_ctx *pipe); /* Get the ODM slice index counting from 0 from left most slice */ int resource_get_odm_slice_index(const struct pipe_ctx *opp_head); diff --git a/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_dio.h b/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_dio.h index f4633d3cf9b9..a1f72fe378ee 100644 --- a/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_dio.h +++ b/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_dio.h @@ -22,6 +22,16 @@ * Authors: AMD * */ + +/** + * DOC: overview + * + * Display Input Output (DIO) is the display input and output unit in DCN. It + * includes output encoders to support different display outputs, like + * DisplayPort, HDMI, DVI, and others. It also includes the control + * and status channels for these interfaces. 
+ */ + #ifndef __LINK_HWSS_DIO_H__ #define __LINK_HWSS_DIO_H__ diff --git a/drivers/gpu/drm/amd/display/dc/link/link_detection.c b/drivers/gpu/drm/amd/display/dc/link/link_detection.c index 24153b0df503..b8c4a04dd175 100644 --- a/drivers/gpu/drm/amd/display/dc/link/link_detection.c +++ b/drivers/gpu/drm/amd/display/dc/link/link_detection.c @@ -41,6 +41,7 @@ #include "protocols/link_dp_dpia.h" #include "protocols/link_dp_phy.h" #include "protocols/link_dp_training.h" +#include "protocols/link_dp_dpia_bw.h" #include "accessories/link_dp_trace.h" #include "link_enc_cfg.h" @@ -991,6 +992,23 @@ static bool detect_link_and_local_sink(struct dc_link *link, if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA && link->reported_link_cap.link_rate > LINK_RATE_HIGH3) link->reported_link_cap.link_rate = LINK_RATE_HIGH3; + + /* + * If this is DP over USB4 link then we need to: + * - Enable BW ALLOC support on DPtx if applicable + */ + if (dc->config.usb4_bw_alloc_support) { + if (link_dp_dpia_set_dptx_usb4_bw_alloc_support(link)) { + /* update with non reduced link cap if bw allocation mode is supported */ + if (link->dpia_bw_alloc_config.nrd_max_link_rate && + link->dpia_bw_alloc_config.nrd_max_lane_count) { + link->reported_link_cap.link_rate = + link->dpia_bw_alloc_config.nrd_max_link_rate; + link->reported_link_cap.lane_count = + link->dpia_bw_alloc_config.nrd_max_lane_count; + } + } + } break; } diff --git a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c index 3cbfbf8d107e..a72de44a5747 100644 --- a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c +++ b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c @@ -2197,6 +2197,64 @@ static enum dc_status enable_link( static bool allocate_usb4_bandwidth_for_stream(struct dc_stream_state *stream, int bw) { + struct dc_link *link = stream->sink->link; + int req_bw = bw; + + DC_LOGGER_INIT(link->ctx->logger); + + if (!link->dpia_bw_alloc_config.bw_alloc_enabled) + return false; + + if (stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST) { + int sink_index = 0; + int i = 0; + + for (i = 0; i < link->sink_count; i++) { + if (link->remote_sinks[i] == NULL) + continue; + + if (stream->sink->sink_id != link->remote_sinks[i]->sink_id) + req_bw += link->dpia_bw_alloc_config.remote_sink_req_bw[i]; + else + sink_index = i; + } + + link->dpia_bw_alloc_config.remote_sink_req_bw[sink_index] = bw; + } + + /* get dp overhead for dp tunneling */ + link->dpia_bw_alloc_config.dp_overhead = link_dp_dpia_get_dp_overhead_in_dp_tunneling(link); + req_bw += link->dpia_bw_alloc_config.dp_overhead; + + if (link_dp_dpia_allocate_usb4_bandwidth_for_stream(link, req_bw)) { + if (req_bw <= link->dpia_bw_alloc_config.allocated_bw) { + DC_LOG_DEBUG("%s, Success in allocate bw for link(%d), allocated_bw(%d), dp_overhead(%d)\n", + __func__, link->link_index, link->dpia_bw_alloc_config.allocated_bw, + link->dpia_bw_alloc_config.dp_overhead); + } else { + // Cannot get the required bandwidth. 
+ DC_LOG_ERROR("%s, Failed to allocate bw for link(%d), allocated_bw(%d), dp_overhead(%d)\n", + __func__, link->link_index, link->dpia_bw_alloc_config.allocated_bw, + link->dpia_bw_alloc_config.dp_overhead); + return false; + } + } else { + DC_LOG_DEBUG("%s, usb4 request bw timeout\n", __func__); + return false; + } + + if (stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST) { + int i = 0; + + for (i = 0; i < link->sink_count; i++) { + if (link->remote_sinks[i] == NULL) + continue; + DC_LOG_DEBUG("%s, remote_sink=%s, request_bw=%d\n", __func__, + (const char *)(&link->remote_sinks[i]->edid_caps.display_name[0]), + link->dpia_bw_alloc_config.remote_sink_req_bw[i]); + } + } + return true; } diff --git a/drivers/gpu/drm/amd/display/dc/link/link_validation.c b/drivers/gpu/drm/amd/display/dc/link/link_validation.c index 8fe66c367850..1c038e2a527b 100644 --- a/drivers/gpu/drm/amd/display/dc/link/link_validation.c +++ b/drivers/gpu/drm/amd/display/dc/link/link_validation.c @@ -125,11 +125,9 @@ static bool dp_active_dongle_validate_timing( if (dongle_caps->dp_hdmi_frl_max_link_bw_in_kbps > 0) { // DP to HDMI FRL converter struct dc_crtc_timing outputTiming = *timing; -#if defined(CONFIG_DRM_AMD_DC_FP) if (timing->flags.DSC && !timing->dsc_cfg.is_frl) /* DP input has DSC, HDMI FRL output doesn't have DSC, remove DSC from output timing */ outputTiming.flags.DSC = 0; -#endif if (dc_bandwidth_in_kbps_from_timing(&outputTiming, DC_LINK_ENCODING_HDMI_FRL) > dongle_caps->dp_hdmi_frl_max_link_bw_in_kbps) return false; diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c index 5a0b04518956..e06d3c2d8910 100644 --- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c +++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training.c @@ -1505,10 +1505,7 @@ enum link_training_result dp_perform_link_training( * Non-LT AUX transactions inside training mode. 
*/ if ((link->chip_caps & EXT_DISPLAY_PATH_CAPS__DP_FIXED_VS_EN) && encoding == DP_8b_10b_ENCODING) - if (link->dc->config.use_old_fixed_vs_sequence) - status = dp_perform_fixed_vs_pe_training_sequence_legacy(link, link_res, <_settings); - else - status = dp_perform_fixed_vs_pe_training_sequence(link, link_res, <_settings); + status = dp_perform_fixed_vs_pe_training_sequence(link, link_res, <_settings); else if (encoding == DP_8b_10b_ENCODING) status = dp_perform_8b_10b_link_training(link, link_res, <_settings); else if (encoding == DP_128b_132b_ENCODING) diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.c index 7087cdc9e977..b5cf75975fff 100644 --- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.c +++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.c @@ -186,356 +186,6 @@ static enum link_training_result perform_fixed_vs_pe_nontransparent_training_seq return status; } - -enum link_training_result dp_perform_fixed_vs_pe_training_sequence_legacy( - struct dc_link *link, - const struct link_resource *link_res, - struct link_training_settings *lt_settings) -{ - const uint8_t vendor_lttpr_write_data_reset[4] = {0x1, 0x50, 0x63, 0xFF}; - const uint8_t offset = dp_parse_lttpr_repeater_count( - link->dpcd_caps.lttpr_caps.phy_repeater_cnt); - const uint8_t vendor_lttpr_write_data_intercept_en[4] = {0x1, 0x55, 0x63, 0x0}; - const uint8_t vendor_lttpr_write_data_intercept_dis[4] = {0x1, 0x55, 0x63, 0x68}; - uint32_t pre_disable_intercept_delay_ms = 0; - uint8_t vendor_lttpr_write_data_vs[4] = {0x1, 0x51, 0x63, 0x0}; - uint8_t vendor_lttpr_write_data_pe[4] = {0x1, 0x52, 0x63, 0x0}; - const uint8_t vendor_lttpr_write_data_4lane_1[4] = {0x1, 0x6E, 0xF2, 0x19}; - const uint8_t vendor_lttpr_write_data_4lane_2[4] = {0x1, 0x6B, 0xF2, 0x01}; - const uint8_t vendor_lttpr_write_data_4lane_3[4] = {0x1, 0x6D, 0xF2, 0x18}; - const uint8_t vendor_lttpr_write_data_4lane_4[4] = {0x1, 0x6C, 0xF2, 0x03}; - const uint8_t vendor_lttpr_write_data_4lane_5[4] = {0x1, 0x03, 0xF3, 0x06}; - const uint8_t vendor_lttpr_write_data_dpmf[4] = {0x1, 0x6, 0x70, 0x87}; - enum link_training_result status = LINK_TRAINING_SUCCESS; - uint8_t lane = 0; - union down_spread_ctrl downspread = {0}; - union lane_count_set lane_count_set = {0}; - uint8_t toggle_rate; - uint8_t rate; - - /* Only 8b/10b is supported */ - ASSERT(link_dp_get_encoding_format(<_settings->link_settings) == - DP_8b_10b_ENCODING); - - if (lt_settings->lttpr_mode == LTTPR_MODE_NON_TRANSPARENT) { - status = perform_fixed_vs_pe_nontransparent_training_sequence(link, link_res, lt_settings); - return status; - } - - if (offset != 0xFF) { - if (offset == 2) { - pre_disable_intercept_delay_ms = link->dc->debug.fixed_vs_aux_delay_config_wa; - - /* Certain display and cable configuration require extra delay */ - } else if (offset > 2) { - pre_disable_intercept_delay_ms = link->dc->debug.fixed_vs_aux_delay_config_wa * 2; - } - } - - /* Vendor specific: Reset lane settings */ - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_reset[0], sizeof(vendor_lttpr_write_data_reset)); - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_vs[0], sizeof(vendor_lttpr_write_data_vs)); - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_pe[0], sizeof(vendor_lttpr_write_data_pe)); - - /* Vendor specific: Enable intercept */ - 
link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_intercept_en[0], sizeof(vendor_lttpr_write_data_intercept_en)); - - - /* 1. set link rate, lane count and spread. */ - - downspread.raw = (uint8_t)(lt_settings->link_settings.link_spread); - - lane_count_set.bits.LANE_COUNT_SET = - lt_settings->link_settings.lane_count; - - lane_count_set.bits.ENHANCED_FRAMING = lt_settings->enhanced_framing; - lane_count_set.bits.POST_LT_ADJ_REQ_GRANTED = 0; - - - if (lt_settings->pattern_for_eq < DP_TRAINING_PATTERN_SEQUENCE_4) { - lane_count_set.bits.POST_LT_ADJ_REQ_GRANTED = - link->dpcd_caps.max_ln_count.bits.POST_LT_ADJ_REQ_SUPPORTED; - } - - core_link_write_dpcd(link, DP_DOWNSPREAD_CTRL, - &downspread.raw, sizeof(downspread)); - - core_link_write_dpcd(link, DP_LANE_COUNT_SET, - &lane_count_set.raw, 1); - - rate = get_dpcd_link_rate(<_settings->link_settings); - - /* Vendor specific: Toggle link rate */ - toggle_rate = (rate == 0x6) ? 0xA : 0x6; - - if (link->vendor_specific_lttpr_link_rate_wa == rate || link->vendor_specific_lttpr_link_rate_wa == 0) { - core_link_write_dpcd( - link, - DP_LINK_BW_SET, - &toggle_rate, - 1); - } - - link->vendor_specific_lttpr_link_rate_wa = rate; - - core_link_write_dpcd(link, DP_LINK_BW_SET, &rate, 1); - - DC_LOG_HW_LINK_TRAINING("%s\n %x rate = %x\n %x lane = %x framing = %x\n %x spread = %x\n", - __func__, - DP_LINK_BW_SET, - lt_settings->link_settings.link_rate, - DP_LANE_COUNT_SET, - lt_settings->link_settings.lane_count, - lt_settings->enhanced_framing, - DP_DOWNSPREAD_CTRL, - lt_settings->link_settings.link_spread); - - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_dpmf[0], - sizeof(vendor_lttpr_write_data_dpmf)); - - if (lt_settings->link_settings.lane_count == LANE_COUNT_FOUR) { - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_4lane_1[0], sizeof(vendor_lttpr_write_data_4lane_1)); - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_4lane_2[0], sizeof(vendor_lttpr_write_data_4lane_2)); - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_4lane_3[0], sizeof(vendor_lttpr_write_data_4lane_3)); - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_4lane_4[0], sizeof(vendor_lttpr_write_data_4lane_4)); - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_4lane_5[0], sizeof(vendor_lttpr_write_data_4lane_5)); - } - - /* 2. Perform link training */ - - /* Perform Clock Recovery Sequence */ - if (status == LINK_TRAINING_SUCCESS) { - const uint8_t max_vendor_dpcd_retries = 10; - uint32_t retries_cr; - uint32_t retry_count; - uint32_t wait_time_microsec; - enum dc_lane_count lane_count = lt_settings->link_settings.lane_count; - union lane_status dpcd_lane_status[LANE_COUNT_DP_MAX]; - union lane_align_status_updated dpcd_lane_status_updated; - union lane_adjust dpcd_lane_adjust[LANE_COUNT_DP_MAX] = {0}; - uint8_t i = 0; - - retries_cr = 0; - retry_count = 0; - - memset(&dpcd_lane_status, '\0', sizeof(dpcd_lane_status)); - memset(&dpcd_lane_status_updated, '\0', - sizeof(dpcd_lane_status_updated)); - - while ((retries_cr < LINK_TRAINING_MAX_RETRY_COUNT) && - (retry_count < LINK_TRAINING_MAX_CR_RETRY)) { - - - /* 1. call HWSS to set lane settings */ - dp_set_hw_lane_settings( - link, - link_res, - lt_settings, - 0); - - /* 2. update DPCD of the receiver */ - if (!retry_count) { - /* EPR #361076 - write as a 5-byte burst, - * but only for the 1-st iteration. 
- */ - dpcd_set_lt_pattern_and_lane_settings( - link, - lt_settings, - lt_settings->pattern_for_cr, - 0); - /* Vendor specific: Disable intercept */ - for (i = 0; i < max_vendor_dpcd_retries; i++) { - if (pre_disable_intercept_delay_ms != 0) - msleep(pre_disable_intercept_delay_ms); - if (link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_intercept_dis[0], - sizeof(vendor_lttpr_write_data_intercept_dis))) - break; - - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_intercept_en[0], - sizeof(vendor_lttpr_write_data_intercept_en)); - } - } else { - vendor_lttpr_write_data_vs[3] = 0; - vendor_lttpr_write_data_pe[3] = 0; - - for (lane = 0; lane < lane_count; lane++) { - vendor_lttpr_write_data_vs[3] |= - lt_settings->dpcd_lane_settings[lane].bits.VOLTAGE_SWING_SET << (2 * lane); - vendor_lttpr_write_data_pe[3] |= - lt_settings->dpcd_lane_settings[lane].bits.PRE_EMPHASIS_SET << (2 * lane); - } - - /* Vendor specific: Update VS and PE to DPRX requested value */ - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_vs[0], sizeof(vendor_lttpr_write_data_vs)); - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_pe[0], sizeof(vendor_lttpr_write_data_pe)); - - dpcd_set_lane_settings( - link, - lt_settings, - 0); - } - - /* 3. wait receiver to lock-on*/ - wait_time_microsec = lt_settings->cr_pattern_time; - - dp_wait_for_training_aux_rd_interval( - link, - wait_time_microsec); - - /* 4. Read lane status and requested drive - * settings as set by the sink - */ - dp_get_lane_status_and_lane_adjust( - link, - lt_settings, - dpcd_lane_status, - &dpcd_lane_status_updated, - dpcd_lane_adjust, - 0); - - /* 5. check CR done*/ - if (dp_is_cr_done(lane_count, dpcd_lane_status)) { - status = LINK_TRAINING_SUCCESS; - break; - } - - /* 6. max VS reached*/ - if (dp_is_max_vs_reached(lt_settings)) - break; - - /* 7. same lane settings */ - /* Note: settings are the same for all lanes, - * so comparing first lane is sufficient - */ - if (lt_settings->dpcd_lane_settings[0].bits.VOLTAGE_SWING_SET == - dpcd_lane_adjust[0].bits.VOLTAGE_SWING_LANE) - retries_cr++; - else - retries_cr = 0; - - /* 8. update VS/PE/PC2 in lt_settings*/ - dp_decide_lane_settings(lt_settings, dpcd_lane_adjust, - lt_settings->hw_lane_settings, lt_settings->dpcd_lane_settings); - retry_count++; - } - - if (retry_count >= LINK_TRAINING_MAX_CR_RETRY) { - ASSERT(0); - DC_LOG_ERROR("%s: Link Training Error, could not get CR after %d tries. 
Possibly voltage swing issue", - __func__, - LINK_TRAINING_MAX_CR_RETRY); - - } - - status = dp_get_cr_failure(lane_count, dpcd_lane_status); - } - - /* Perform Channel EQ Sequence */ - if (status == LINK_TRAINING_SUCCESS) { - enum dc_dp_training_pattern tr_pattern; - uint32_t retries_ch_eq; - uint32_t wait_time_microsec; - enum dc_lane_count lane_count = lt_settings->link_settings.lane_count; - union lane_align_status_updated dpcd_lane_status_updated = {0}; - union lane_status dpcd_lane_status[LANE_COUNT_DP_MAX] = {0}; - union lane_adjust dpcd_lane_adjust[LANE_COUNT_DP_MAX] = {0}; - - /* Note: also check that TPS4 is a supported feature*/ - tr_pattern = lt_settings->pattern_for_eq; - - dp_set_hw_training_pattern(link, link_res, tr_pattern, 0); - - status = LINK_TRAINING_EQ_FAIL_EQ; - - for (retries_ch_eq = 0; retries_ch_eq <= LINK_TRAINING_MAX_RETRY_COUNT; - retries_ch_eq++) { - - dp_set_hw_lane_settings(link, link_res, lt_settings, 0); - - vendor_lttpr_write_data_vs[3] = 0; - vendor_lttpr_write_data_pe[3] = 0; - - for (lane = 0; lane < lane_count; lane++) { - vendor_lttpr_write_data_vs[3] |= - lt_settings->dpcd_lane_settings[lane].bits.VOLTAGE_SWING_SET << (2 * lane); - vendor_lttpr_write_data_pe[3] |= - lt_settings->dpcd_lane_settings[lane].bits.PRE_EMPHASIS_SET << (2 * lane); - } - - /* Vendor specific: Update VS and PE to DPRX requested value */ - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_vs[0], sizeof(vendor_lttpr_write_data_vs)); - link_configure_fixed_vs_pe_retimer(link->ddc, - &vendor_lttpr_write_data_pe[0], sizeof(vendor_lttpr_write_data_pe)); - - /* 2. update DPCD*/ - if (!retries_ch_eq) - /* EPR #361076 - write as a 5-byte burst, - * but only for the 1-st iteration - */ - - dpcd_set_lt_pattern_and_lane_settings( - link, - lt_settings, - tr_pattern, 0); - else - dpcd_set_lane_settings(link, lt_settings, 0); - - /* 3. wait for receiver to lock-on*/ - wait_time_microsec = lt_settings->eq_pattern_time; - - dp_wait_for_training_aux_rd_interval( - link, - wait_time_microsec); - - /* 4. Read lane status and requested - * drive settings as set by the sink - */ - dp_get_lane_status_and_lane_adjust( - link, - lt_settings, - dpcd_lane_status, - &dpcd_lane_status_updated, - dpcd_lane_adjust, - 0); - - /* 5. check CR done*/ - if (!dp_is_cr_done(lane_count, dpcd_lane_status)) { - status = LINK_TRAINING_EQ_FAIL_CR; - break; - } - - /* 6. check CHEQ done*/ - if (dp_is_ch_eq_done(lane_count, dpcd_lane_status) && - dp_is_symbol_locked(lane_count, dpcd_lane_status) && - dp_is_interlane_aligned(dpcd_lane_status_updated)) { - status = LINK_TRAINING_SUCCESS; - break; - } - - /* 7. update VS/PE/PC2 in lt_settings*/ - dp_decide_lane_settings(lt_settings, dpcd_lane_adjust, - lt_settings->hw_lane_settings, lt_settings->dpcd_lane_settings); - } - } - - return status; -} - enum link_training_result dp_perform_fixed_vs_pe_training_sequence( struct dc_link *link, const struct link_resource *link_res, @@ -620,18 +270,20 @@ enum link_training_result dp_perform_fixed_vs_pe_training_sequence( rate = get_dpcd_link_rate(<_settings->link_settings); - /* Vendor specific: Toggle link rate */ - toggle_rate = (rate == 0x6) ? 0xA : 0x6; + if (!link->dpcd_caps.lttpr_caps.main_link_channel_coding.bits.DP_128b_132b_SUPPORTED) { + /* Vendor specific: Toggle link rate */ + toggle_rate = (rate == 0x6) ? 
0xA : 0x6; - if (link->vendor_specific_lttpr_link_rate_wa == rate || link->vendor_specific_lttpr_link_rate_wa == 0) { - core_link_write_dpcd( - link, - DP_LINK_BW_SET, - &toggle_rate, - 1); - } + if (link->vendor_specific_lttpr_link_rate_wa == rate || link->vendor_specific_lttpr_link_rate_wa == 0) { + core_link_write_dpcd( + link, + DP_LINK_BW_SET, + &toggle_rate, + 1); + } - link->vendor_specific_lttpr_link_rate_wa = rate; + link->vendor_specific_lttpr_link_rate_wa = rate; + } core_link_write_dpcd(link, DP_LINK_BW_SET, &rate, 1); diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.h b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.h index c0d6ea329504..e61970e27661 100644 --- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.h +++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.h @@ -28,11 +28,6 @@ #define __DC_LINK_DP_FIXED_VS_PE_RETIMER_H__ #include "link_dp_training.h" -enum link_training_result dp_perform_fixed_vs_pe_training_sequence_legacy( - struct dc_link *link, - const struct link_resource *link_res, - struct link_training_settings *lt_settings); - enum link_training_result dp_perform_fixed_vs_pe_training_sequence( struct dc_link *link, const struct link_resource *link_res, diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c index 046d3e205415..443215b96308 100644 --- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c +++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c @@ -287,7 +287,7 @@ bool set_default_brightness_aux(struct dc_link *link) if (link && link->dpcd_sink_ext_caps.bits.oled == 1) { if (!read_default_bl_aux(link, &default_backlight)) default_backlight = 150000; - // if < 1 nits or > 5000, it might be wrong readback + // if > 5000, it might be wrong readback. 0 nits is a valid default value for OLED panel. if (default_backlight < 1000 || default_backlight > 5000000) default_backlight = 150000; diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c index 37a64186f324..ecc477ef8e3b 100644 --- a/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c +++ b/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c @@ -2169,6 +2169,17 @@ void dcn30_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params optimal_uclk_for_dcfclk_sta_targets[i] = bw_params->clk_table.entries[j].memclk_mhz * 16; break; + } else { + /* condition where (dcfclk_sta_targets[i] >= optimal_dcfclk_for_uclk[j]): + * If it just so happens that the memory bandwidth is low enough such that + * all the optimal DCFCLK for each UCLK is lower than the smallest DCFCLK STA + * target, we need to populate the optimal UCLK for each DCFCLK STA target to + * be the max UCLK. 
+ */ + if (j == num_uclk_states - 1) { + optimal_uclk_for_dcfclk_sta_targets[i] = + bw_params->clk_table.entries[j].memclk_mhz * 16; + } } } } diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c index 511ff6b5b985..7538b548c572 100644 --- a/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c +++ b/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c @@ -999,7 +999,7 @@ static struct stream_encoder *dcn301_stream_encoder_create(enum engine_id eng_id vpg = dcn301_vpg_create(ctx, vpg_inst); afmt = dcn301_afmt_create(ctx, afmt_inst); - if (!enc1 || !vpg || !afmt) { + if (!enc1 || !vpg || !afmt || eng_id >= ARRAY_SIZE(stream_enc_regs)) { kfree(enc1); kfree(vpg); kfree(afmt); diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c index 31035fc3d868..04d142f97474 100644 --- a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c +++ b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c @@ -1941,8 +1941,6 @@ static bool dcn31_resource_construct( dc->caps.color.mpc.ogam_rom_caps.hlg = 0; dc->caps.color.mpc.ocsc = 1; - dc->config.use_old_fixed_vs_sequence = true; - /* Use pipe context based otg sync logic */ dc->config.use_pipe_ctx_sync_logic = true; diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c index c4d71e7f18af..6f10052caeef 100644 --- a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c +++ b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c @@ -1829,7 +1829,21 @@ int dcn32_populate_dml_pipes_from_context( dcn32_zero_pipe_dcc_fraction(pipes, pipe_cnt); DC_FP_END(); pipes[pipe_cnt].pipe.dest.vfront_porch = timing->v_front_porch; - pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_dal; + if (dc->config.enable_windowed_mpo_odm && + dc->debug.enable_single_display_2to1_odm_policy) { + switch (resource_get_odm_slice_count(pipe)) { + case 2: + pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_2to1; + break; + case 4: + pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_4to1; + break; + default: + pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_dal; + } + } else { + pipes[pipe_cnt].pipe.dest.odm_combine_policy = dm_odm_combine_policy_dal; + } pipes[pipe_cnt].pipe.src.gpuvm_min_page_size_kbytes = 256; // according to spreadsheet pipes[pipe_cnt].pipe.src.unbounded_req_mode = false; pipes[pipe_cnt].pipe.scale_ratio_depth.lb_depth = dm_lb_19; diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c index 74412e5f03fe..6f832bf278cf 100644 --- a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c +++ b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c @@ -1760,6 +1760,7 @@ static bool dcn321_resource_construct( dc->caps.color.mpc.ocsc = 1; dc->config.dc_mode_clk_limit_support = true; + dc->config.enable_windowed_mpo_odm = false; /* read VBIOS LTTPR caps */ { if (ctx->dc_bios->funcs->get_lttpr_caps) { diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c index 761ec9891875..e534e87cc85b 100644 --- a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c +++ 
b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c @@ -701,7 +701,7 @@ static const struct dc_plane_cap plane_cap = { // 6:1 downscaling ratio: 1000/6 = 166.666 .max_downscale_factor = { - .argb8888 = 167, + .argb8888 = 250, .nv12 = 167, .fp16 = 167 }, @@ -764,6 +764,7 @@ static const struct dc_debug_options debug_defaults_drv = { }, .seamless_boot_odm_combine = DML_FAIL_SOURCE_PIXEL_FORMAT, .enable_z9_disable_interface = true, /* Allow support for the PMFW interface for disable Z9*/ + .minimum_z8_residency_time = 2100, .using_dml2 = true, .support_eDP1_5 = true, .enable_hpo_pg_support = false, @@ -780,8 +781,9 @@ static const struct dc_debug_options debug_defaults_drv = { .disable_z10 = false, .ignore_pg = true, .psp_disabled_wa = true, - .ips2_eval_delay_us = 200, - .ips2_entry_delay_us = 400, + .ips2_eval_delay_us = 1650, + .ips2_entry_delay_us = 800, + .disable_dmub_reallow_idle = true, .static_screen_wait_frames = 2, }; @@ -2130,6 +2132,7 @@ static bool dcn35_resource_construct( dc->dml2_options.dcn_pipe_count = pool->base.pipe_count; dc->dml2_options.use_native_pstate_optimization = true; dc->dml2_options.use_native_soc_bb_construction = true; + dc->dml2_options.minimize_dispclk_using_odm = false; if (dc->config.EnableMinDispClkODM) dc->dml2_options.minimize_dispclk_using_odm = true; dc->dml2_options.enable_windowed_mpo_odm = dc->config.enable_windowed_mpo_odm; diff --git a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h index c78c9224ab60..ae30fe2b6d0d 100644 --- a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h +++ b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h @@ -78,6 +78,12 @@ struct dmub_srv_dcn31_regs; struct dmcub_trace_buf_entry; +/* enum dmub_window_memory_type - memory location type specification for windows */ +enum dmub_window_memory_type { + DMUB_WINDOW_MEMORY_TYPE_FB = 0, + DMUB_WINDOW_MEMORY_TYPE_GART +}; + /* enum dmub_status - return code for dmcub functions */ enum dmub_status { DMUB_STATUS_OK = 0, @@ -203,7 +209,7 @@ struct dmub_srv_region_params { uint32_t vbios_size; const uint8_t *fw_inst_const; const uint8_t *fw_bss_data; - bool is_mailbox_in_inbox; + const enum dmub_window_memory_type *window_memory_type; }; /** @@ -223,7 +229,7 @@ struct dmub_srv_region_params { */ struct dmub_srv_region_info { uint32_t fb_size; - uint32_t inbox_size; + uint32_t gart_size; uint8_t num_regions; struct dmub_region regions[DMUB_WINDOW_TOTAL]; }; @@ -239,9 +245,10 @@ struct dmub_srv_region_info { struct dmub_srv_memory_params { const struct dmub_srv_region_info *region_info; void *cpu_fb_addr; - void *cpu_inbox_addr; + void *cpu_gart_addr; uint64_t gpu_fb_addr; - uint64_t gpu_inbox_addr; + uint64_t gpu_gart_addr; + const enum dmub_window_memory_type *window_memory_type; }; /** @@ -443,7 +450,6 @@ struct dmub_srv_create_params { struct dmub_srv_base_funcs funcs; struct dmub_srv_hw_funcs *hw_funcs; void *user_ctx; - struct dc_context *dc_ctx; enum dmub_asic asic; uint32_t fw_version; bool is_virtual; diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h index e699731ee68e..59b96136871e 100644 --- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h +++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h @@ -26,15 +26,6 @@ #ifndef DMUB_CMD_H #define DMUB_CMD_H -#if defined(_TEST_HARNESS) || defined(FPGA_USB4) -#include "dmub_fw_types.h" -#include "include_legacy/atomfirmware.h" - -#if defined(_TEST_HARNESS) -#include <string.h> -#endif -#else - #include <asm/byteorder.h> #include 
<linux/types.h> #include <linux/string.h> @@ -42,8 +33,6 @@ #include "atomfirmware.h" -#endif // defined(_TEST_HARNESS) || defined(FPGA_USB4) - //<DMUB_TYPES>================================================================== /* Basic type definitions. */ @@ -403,15 +392,16 @@ union replay_debug_flags { /** * 0x400 (bit 10) - * @force_disable_ips1: Force disable IPS1 state + * @enable_ips_visual_confirm: Enable IPS visual confirm when entering IPS + * If we enter IPS2, the Visual confirm bar will change to yellow */ - uint32_t force_disable_ips1 : 1; + uint32_t enable_ips_visual_confirm : 1; /** * 0x800 (bit 11) - * @force_disable_ips2: Force disable IPS2 state + * @enable_ips_residency_profiling: Enable IPS residency profiling */ - uint32_t force_disable_ips2 : 1; + uint32_t enable_ips_residency_profiling : 1; uint32_t reserved : 20; } bitfields; @@ -1270,11 +1260,11 @@ struct dmub_cmd_PLAT_54186_wa { uint32_t DCSURF_PRIMARY_SURFACE_ADDRESS_HIGH_C; /**< reg value */ uint32_t DCSURF_PRIMARY_SURFACE_ADDRESS_C; /**< reg value */ struct { - uint8_t hubp_inst : 4; /**< HUBP instance */ - uint8_t tmz_surface : 1; /**< TMZ enable or disable */ - uint8_t immediate :1; /**< Immediate flip */ - uint8_t vmid : 4; /**< VMID */ - uint8_t grph_stereo : 1; /**< 1 if stereo */ + uint32_t hubp_inst : 4; /**< HUBP instance */ + uint32_t tmz_surface : 1; /**< TMZ enable or disable */ + uint32_t immediate :1; /**< Immediate flip */ + uint32_t vmid : 4; /**< VMID */ + uint32_t grph_stereo : 1; /**< 1 if stereo */ uint32_t reserved : 21; /**< Reserved */ } flip_params; /**< Pageflip parameters */ uint32_t reserved[9]; /**< Reserved bits */ diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c index 2daa1e0c8061..305463b8f110 100644 --- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c @@ -32,8 +32,6 @@ #include "dcn/dcn_3_2_0_offset.h" #include "dcn/dcn_3_2_0_sh_mask.h" -#define DCN_BASE__INST0_SEG2 0x000034C0 - #define BASE_INNER(seg) ctx->dcn_reg_offsets[seg] #define CTX dmub #define REGS dmub->regs_dcn32 diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c index 9ad738805320..569c2a27a042 100644 --- a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c @@ -417,58 +417,44 @@ void dmub_srv_destroy(struct dmub_srv *dmub) dmub_memset(dmub, 0, sizeof(*dmub)); } +static uint32_t dmub_srv_calc_regions_for_memory_type(const struct dmub_srv_region_params *params, + struct dmub_srv_region_info *out, + const uint32_t *window_sizes, + enum dmub_window_memory_type memory_type) +{ + uint32_t i, top = 0; + + for (i = 0; i < DMUB_WINDOW_TOTAL; ++i) { + if (params->window_memory_type[i] == memory_type) { + struct dmub_region *region = &out->regions[i]; + + region->base = dmub_align(top, 256); + region->top = region->base + dmub_align(window_sizes[i], 64); + top = region->top; + } + } + + return dmub_align(top, 4096); +} + enum dmub_status -dmub_srv_calc_region_info(struct dmub_srv *dmub, - const struct dmub_srv_region_params *params, - struct dmub_srv_region_info *out) + dmub_srv_calc_region_info(struct dmub_srv *dmub, + const struct dmub_srv_region_params *params, + struct dmub_srv_region_info *out) { - struct dmub_region *inst = &out->regions[DMUB_WINDOW_0_INST_CONST]; - struct dmub_region *stack = &out->regions[DMUB_WINDOW_1_STACK]; - struct dmub_region *data = &out->regions[DMUB_WINDOW_2_BSS_DATA]; 
- struct dmub_region *bios = &out->regions[DMUB_WINDOW_3_VBIOS]; - struct dmub_region *mail = &out->regions[DMUB_WINDOW_4_MAILBOX]; - struct dmub_region *trace_buff = &out->regions[DMUB_WINDOW_5_TRACEBUFF]; - struct dmub_region *fw_state = &out->regions[DMUB_WINDOW_6_FW_STATE]; - struct dmub_region *scratch_mem = &out->regions[DMUB_WINDOW_7_SCRATCH_MEM]; const struct dmub_fw_meta_info *fw_info; uint32_t fw_state_size = DMUB_FW_STATE_SIZE; uint32_t trace_buffer_size = DMUB_TRACE_BUFFER_SIZE; - uint32_t scratch_mem_size = DMUB_SCRATCH_MEM_SIZE; - uint32_t previous_top = 0; + uint32_t window_sizes[DMUB_WINDOW_TOTAL] = { 0 }; + if (!dmub->sw_init) return DMUB_STATUS_INVALID; memset(out, 0, sizeof(*out)); + memset(window_sizes, 0, sizeof(window_sizes)); out->num_regions = DMUB_NUM_WINDOWS; - inst->base = 0x0; - inst->top = inst->base + params->inst_const_size; - - data->base = dmub_align(inst->top, 256); - data->top = data->base + params->bss_data_size; - - /* - * All cache windows below should be aligned to the size - * of the DMCUB cache line, 64 bytes. - */ - - stack->base = dmub_align(data->top, 256); - stack->top = stack->base + DMUB_STACK_SIZE + DMUB_CONTEXT_SIZE; - - bios->base = dmub_align(stack->top, 256); - bios->top = bios->base + params->vbios_size; - - if (params->is_mailbox_in_inbox) { - mail->base = 0; - mail->top = mail->base + DMUB_MAILBOX_SIZE; - previous_top = bios->top; - } else { - mail->base = dmub_align(bios->top, 256); - mail->top = mail->base + DMUB_MAILBOX_SIZE; - previous_top = mail->top; - } - fw_info = dmub_get_fw_meta_info(params); if (fw_info) { @@ -486,19 +472,20 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub, dmub->fw_version = fw_info->fw_version; } - trace_buff->base = dmub_align(previous_top, 256); - trace_buff->top = trace_buff->base + dmub_align(trace_buffer_size, 64); - - fw_state->base = dmub_align(trace_buff->top, 256); - fw_state->top = fw_state->base + dmub_align(fw_state_size, 64); + window_sizes[DMUB_WINDOW_0_INST_CONST] = params->inst_const_size; + window_sizes[DMUB_WINDOW_1_STACK] = DMUB_STACK_SIZE + DMUB_CONTEXT_SIZE; + window_sizes[DMUB_WINDOW_2_BSS_DATA] = params->bss_data_size; + window_sizes[DMUB_WINDOW_3_VBIOS] = params->vbios_size; + window_sizes[DMUB_WINDOW_4_MAILBOX] = DMUB_MAILBOX_SIZE; + window_sizes[DMUB_WINDOW_5_TRACEBUFF] = trace_buffer_size; + window_sizes[DMUB_WINDOW_6_FW_STATE] = fw_state_size; + window_sizes[DMUB_WINDOW_7_SCRATCH_MEM] = DMUB_SCRATCH_MEM_SIZE; - scratch_mem->base = dmub_align(fw_state->top, 256); - scratch_mem->top = scratch_mem->base + dmub_align(scratch_mem_size, 64); + out->fb_size = + dmub_srv_calc_regions_for_memory_type(params, out, window_sizes, DMUB_WINDOW_MEMORY_TYPE_FB); - out->fb_size = dmub_align(scratch_mem->top, 4096); - - if (params->is_mailbox_in_inbox) - out->inbox_size = dmub_align(mail->top, 4096); + out->gart_size = + dmub_srv_calc_regions_for_memory_type(params, out, window_sizes, DMUB_WINDOW_MEMORY_TYPE_GART); return DMUB_STATUS_OK; } @@ -507,8 +494,6 @@ enum dmub_status dmub_srv_calc_mem_info(struct dmub_srv *dmub, const struct dmub_srv_memory_params *params, struct dmub_srv_fb_info *out) { - uint8_t *cpu_base; - uint64_t gpu_base; uint32_t i; if (!dmub->sw_init) @@ -519,19 +504,16 @@ enum dmub_status dmub_srv_calc_mem_info(struct dmub_srv *dmub, if (params->region_info->num_regions != DMUB_NUM_WINDOWS) return DMUB_STATUS_INVALID; - cpu_base = (uint8_t *)params->cpu_fb_addr; - gpu_base = params->gpu_fb_addr; - for (i = 0; i < DMUB_NUM_WINDOWS; ++i) { const struct dmub_region *reg = 
¶ms->region_info->regions[i]; - out->fb[i].cpu_addr = cpu_base + reg->base; - out->fb[i].gpu_addr = gpu_base + reg->base; - - if (i == DMUB_WINDOW_4_MAILBOX && params->cpu_inbox_addr != 0) { - out->fb[i].cpu_addr = (uint8_t *)params->cpu_inbox_addr + reg->base; - out->fb[i].gpu_addr = params->gpu_inbox_addr + reg->base; + if (params->window_memory_type[i] == DMUB_WINDOW_MEMORY_TYPE_GART) { + out->fb[i].cpu_addr = (uint8_t *)params->cpu_gart_addr + reg->base; + out->fb[i].gpu_addr = params->gpu_gart_addr + reg->base; + } else { + out->fb[i].cpu_addr = (uint8_t *)params->cpu_fb_addr + reg->base; + out->fb[i].gpu_addr = params->gpu_fb_addr + reg->base; } out->fb[i].size = reg->top - reg->base; @@ -809,11 +791,20 @@ enum dmub_status dmub_srv_cmd_execute(struct dmub_srv *dmub) bool dmub_srv_is_hw_pwr_up(struct dmub_srv *dmub) { + union dmub_fw_boot_status status; + if (!dmub->hw_funcs.is_hw_powered_up) return true; - return dmub->hw_funcs.is_hw_powered_up(dmub) && - dmub->hw_funcs.is_hw_init(dmub); + if (!dmub->hw_funcs.is_hw_powered_up(dmub)) + return false; + + if (!dmub->hw_funcs.is_hw_init(dmub)) + return false; + + status = dmub->hw_funcs.get_fw_status(dmub); + + return status.bits.dal_fw && status.bits.mailbox_rdy; } enum dmub_status dmub_srv_wait_for_hw_pwr_up(struct dmub_srv *dmub, diff --git a/drivers/gpu/drm/amd/display/include/audio_types.h b/drivers/gpu/drm/amd/display/include/audio_types.h index 915a031a43cb..e4a26143f14c 100644 --- a/drivers/gpu/drm/amd/display/include/audio_types.h +++ b/drivers/gpu/drm/amd/display/include/audio_types.h @@ -27,11 +27,21 @@ #define __AUDIO_TYPES_H__ #include "signal_types.h" +#include "fixed31_32.h" +#include "dc_dp_types.h" #define AUDIO_INFO_DISPLAY_NAME_SIZE_IN_CHARS 20 #define MAX_HW_AUDIO_INFO_DISPLAY_NAME_SIZE_IN_CHARS 18 #define MULTI_CHANNEL_SPLIT_NO_ASSO_INFO 0xFFFFFFFF +struct audio_dp_link_info { + uint32_t link_bandwidth_kbps; + uint32_t hblank_min_symbol_width; + enum dp_link_encoding encoding; + enum dc_link_rate link_rate; + enum dc_lane_count lane_count; + bool is_mst; +}; struct audio_crtc_info { uint32_t h_total; @@ -42,7 +52,10 @@ struct audio_crtc_info { uint32_t calculated_pixel_clock_100Hz; /* in 100Hz */ uint32_t refresh_rate; enum dc_color_depth color_depth; + enum dc_pixel_encoding pixel_encoding; bool interlaced; + uint32_t dsc_bits_per_pixel; + uint32_t dsc_num_slices; }; struct azalia_clock_info { uint32_t pixel_clock_in_10khz; @@ -95,6 +108,8 @@ struct audio_output { enum signal_type signal; /* video timing */ struct audio_crtc_info crtc_info; + /* DP link info */ + struct audio_dp_link_info dp_link_info; /* PLL for audio */ struct audio_pll_info pll_info; }; |