>-----Original Message-----
>From: Ander Conselvan De Oliveira [mailto:conselv...@gmail.com]
>Sent: Friday, February 12, 2016 10:13 PM
>To: R, Durgadoss <durgados...@intel.com>; intel-gfx@lists.freedesktop.org
>Subject: Re: [Intel-gfx] [PATCHv2 4/4] drm/i915/dp: Enable Upfront link training for typeC DP support on BXT
>
>On Mon, 2016-02-01 at 19:27 +0530, Durgadoss R wrote:
>> To support USB type C alternate DP mode, the display driver needs to
>> know the number of lanes required by the DP panel as well as number
>> of lanes that can be supported by the type-C cable. Sometimes, the
>> type-C cable may limit the bandwidth even if the panel can support
>> more lanes. To address these scenarios, the display driver will
>> start link training with the max lane count, and if that fails, the
>> driver falls back to x2 lanes and repeats this procedure for all
>> bandwidth/lane configurations.
>>
>> * Since link training is done before modeset only the port
>>   (and not pipe/planes) and its associated PLLs are enabled.
>> * On DP hotplug: Directly start link training on the crtc
>>   associated with the DP encoder.
>> * On Connected boot scenarios: When booted with an LFP and a DP,
>>   typically, BIOS brings up DP. In these cases, we disable the
>>   crtc, do upfront link training and then re-enable it back.
>> * All local changes made for upfront link training are reset
>>   to their previous values once it is done; so that the
>>   subsequent modeset is not aware of these changes.
>>
>
>Please always include a changelog when sending a new version of a patch.

Sure, will add it in the next version.

>
>> Signed-off-by: Durgadoss R <durgados...@intel.com>
>> ---
>>  drivers/gpu/drm/i915/intel_ddi.c     | 102 +++++++++++++++++++++++++++++
>>  drivers/gpu/drm/i915/intel_display.c |  38 ++++++++++-
>>  drivers/gpu/drm/i915/intel_dp.c      | 122 ++++++++++++++++++++++++++++++++++-
>>  drivers/gpu/drm/i915/intel_drv.h     |  10 +++
>>  4 files changed, 270 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
>> index 3fb9a03..cc5cad5 100644
>> --- a/drivers/gpu/drm/i915/intel_ddi.c
>> +++ b/drivers/gpu/drm/i915/intel_ddi.c
>> @@ -3217,6 +3217,108 @@ intel_ddi_init_hdmi_connector(struct intel_digital_port *intel_dig_port)
>>      return connector;
>>  }
>>
>> +static void intel_ddi_commit_upfront_pll_config(struct intel_dp *intel_dp,
>> +                                    struct intel_shared_dpll *pll)
>> +{
>> +    struct intel_shared_dpll_config tmp_config;
>> +
>> +    /*
>> +     * The shared_dpll_config is computed in intel_get_shared_dpll().
>> +     * It is committed to 'pll->config' here to be used in
>> +     * intel_enable/disable_shared_dpll functions. Before committing.
>> +     * save the existing config, so that once upfront link training is
>> +     * done, the original value of 'pll->config' can be restored.
>> +     */
>> +    tmp_config = pll->config;
>> +    pll->config = intel_dp->upfront_pll_config;
>> +    intel_dp->upfront_pll_config = tmp_config;
>> +}
>> +
>> +static struct intel_shared_dpll *intel_ddi_select_upfront_pll_config(
>> +                    struct intel_dp *intel_dp, struct intel_crtc *crtc)
>> +{
>> +    if (!intel_ddi_pll_select(crtc, crtc->config))
>> +            return NULL;
>> +
>> +    return intel_crtc_to_shared_dpll(crtc);
>> +}
>> +
>> +static void intel_ddi_clear_upfront_pll_config(struct intel_dp *intel_dp,
>> +                                    struct intel_shared_dpll *pll)
>> +{
>> +    pll->config = intel_dp->upfront_pll_config;
>> +}
>> +
>
>The shared pll interface is really getting in the way here. And BXT plls aren't
>even shared. We are jumping through hoops to get the pll that matches the given
>encoder and to call the code that calculates the dpll_hw_state based on the DP
>link rate. I'd like to get that interface fixed but I reckon it will be tricky
>to find something that works for all the platforms we support. That's the next
>thing on my todo list.
>
>If we have to live with some intermediary solution, I think it would be better
>to (almost) completely bypass the shared dpll stuff. First we would need to
>split bxt_ddi_pll_sel() so that we would have a function that takes the link
>bandwidth and spits out a dpll_hw_state without looking at crtc_state. Then we
>would just take the right pll based on dig_port->port, make sure it is not
>being used and program it with the hw state we get from that new function.
>
>I wrote a patch to do that split. With it, the PLL enable logic in the upfront
>link train logic would look a bit like below. There is some potential for
>problems with the state checker, but it should be fine as long as the old dpll
>hw state is saved and restored when everything is over.

This is similar to what we had in the earlier version, except for the
ddi_pll_select split. I moved it this way because we wanted all the dpll
functionality to stay within the dpll interfaces. But yes, as you said, we
could not get the whole thing much cleaner.
>
>
>/* FIXME: this only works for BXT */
>dpll_id = (enum intel_dpll_id) dig_port->port;
>pll = dev_priv->shared_dplls[dpll_id];
>
>if (WARN_ON(pll->config.crtc_mask) || WARN_ON(pll->active))
>       goto exit;
>
>bxt_ddi_dp_pll_dividers(crtc->config->port_clock, &clk_div);
>if (!bxt_ddi_set_dpll_hw_state(crtc->config->port_clock, &clk_div,
>                              &pll->config.hw_state)) {
>       DRM_ERROR("Could not setup DPLL\n");
>       goto exit;
>}
>
>
>/* Enable PLL followed by port */
>pll->enable(dev_priv, pll);
>encoder->pre_enable(encoder);
>
>
>This solution would remove the need for patches 1 and 3.
>
>What do you think?

This (and your other patch) looks ok to me. I will test it and confirm.

>
>> +int intel_ddi_upfront_link_train(struct intel_dp *intel_dp,
>> +                            struct intel_crtc *crtc)
>> +{
>> +    struct intel_connector *connector = intel_dp->attached_connector;
>> +    struct intel_encoder *encoder = connector->encoder;
>> +    struct intel_shared_dpll *pll = NULL;
>> +    struct drm_crtc *drm_crtc = NULL;
>> +    struct intel_crtc_state *tmp_crtc_config;
>> +    uint8_t link_bw, lane_count;
>> +
>> +    if (!crtc) {
>> +            drm_crtc = intel_get_unused_crtc(&encoder->base);
>> +            if (!drm_crtc) {
>> +                    DRM_ERROR("No crtc for upfront link training\n");
>> +                    return -EINVAL;
>> +            }
>> +            encoder->base.crtc = drm_crtc;
>> +            crtc = to_intel_crtc(drm_crtc);
>> +    }
>> +
>> +    /* Initialize with Max Link rate & lane count supported by panel */
>> +    link_bw =  intel_dp->dpcd[DP_MAX_LINK_RATE];
>> +    lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
>> +
>> +    /* Save the crtc->config */
>> +    tmp_crtc_config = crtc->config;
>> +
>> +    DRM_DEBUG_KMS("Using pipe %c\n", pipe_name(crtc->pipe));
>> +    do {
>> +            crtc->config->port_clock = drm_dp_bw_code_to_link_rate(link_bw);
>> +            crtc->config->lane_count = lane_count;
>> +
>> +            pll = intel_ddi_select_upfront_pll_config(intel_dp, crtc);
>> +            if (!pll) {
>> +                    DRM_ERROR("Could not get shared DPLL\n");
>> +                    goto exit;
>> +            }
>> +
>> +            intel_ddi_commit_upfront_pll_config(intel_dp, pll);
>> +
>> +            /* Enable PLL followed by port */
>> +            intel_enable_shared_dpll(crtc);
>> +            encoder->pre_enable(encoder);
>> +
>> +            /* Check if link training passed; if so update DPCD */
>> +            if (intel_dp->train_set_valid)
>> +                    intel_dp_update_dpcd_params(intel_dp);
>> +
>> +            /* Disable port followed by PLL for next retry/clean up */
>> +            encoder->post_disable(encoder);
>> +            intel_disable_shared_dpll(crtc);
>> +
>> +            intel_ddi_clear_upfront_pll_config(intel_dp, pll);
>> +
>> +    } while (!intel_dp->train_set_valid &&
>> +            intel_dp_get_link_retry_params(intel_dp, &lane_count, &link_bw));
>> +exit:
>> +    /* Reset local associations made */
>> +    if (drm_crtc)
>> +            encoder->base.crtc = NULL;
>> +
>> +    /* Restore crtc config */
>> +    crtc->config = tmp_crtc_config;
>> +
>> +    DRM_DEBUG_KMS("Upfront link train %s: lanes:%d bw:%d\n",
>> +    intel_dp->train_set_valid ? "Passed" : "Failed", lane_count, link_bw);
>> +
>> +    return intel_dp->train_set_valid ? 0 : -1;
>> +}
>> +
>>  void intel_ddi_init(struct drm_device *dev, enum port port)
>>  {
>>      struct drm_i915_private *dev_priv = dev->dev_private;
>> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
>> index af50e61..6a650aa 100644
>> --- a/drivers/gpu/drm/i915/intel_display.c
>> +++ b/drivers/gpu/drm/i915/intel_display.c
>> @@ -4238,16 +4238,45 @@ static void lpt_pch_enable(struct drm_crtc *crtc)
>>      lpt_enable_pch_transcoder(dev_priv, cpu_transcoder);
>>  }
>>
>> +static
>> +void intel_get_new_shared_dpll_config(struct drm_i915_private *dev_priv,
>> +                              struct intel_shared_dpll_config *shared_dpll)
>> +{
>> +    enum intel_dpll_id i;
>> +
>> +    /* Create a new shared dpll config by duplicating pll->config */
>> +    for (i = 0; i < dev_priv->num_shared_dpll; i++) {
>> +            struct intel_shared_dpll *pll = &dev_priv->shared_dplls[i];
>> +            shared_dpll[i] = pll->config;
>> +            shared_dpll[i].crtc_mask = 0;
>> +    }
>> +}
>> +
>>  struct intel_shared_dpll *intel_get_shared_dpll(struct intel_crtc *crtc,
>>                                              struct intel_crtc_state *crtc_state)
>>  {
>>      struct drm_i915_private *dev_priv = crtc->base.dev->dev_private;
>>      struct intel_shared_dpll *pll;
>>      struct intel_shared_dpll_config *shared_dpll;
>> +    struct intel_shared_dpll_config tmp_config[I915_NUM_PLLS];
>>      enum intel_dpll_id i;
>>      int max = dev_priv->num_shared_dpll;
>>
>> -    shared_dpll = intel_atomic_get_shared_dpll_state(crtc_state->base.state);
>> +    /*
>> +     * intel_get_shared_dpll needs a shared_dpll[] to store the computed
>> +     * dpll_config values. For atomic path, it is stored in
>> +     * intel_atomic_state->shared_dpll[], which is later committed to
>> +     * dev_priv->shared_dpll[] in atomic commit. For non-atomic path,
>> +     * (where intel_atomic_state does not exist) the dpll_config values
>> +     * are stored in 'tmp_config[]' and copied to encoder structures
>> +     * for later use.
>> +     */
>> +    if (crtc_state->base.state) {
>> +            shared_dpll = intel_atomic_get_shared_dpll_state(crtc_state->base.state);
>> +    } else {
>> +            intel_get_new_shared_dpll_config(dev_priv, tmp_config);
>> +            shared_dpll = tmp_config;
>> +    }
>>
>>      if (HAS_PCH_IBX(dev_priv->dev)) {
>>              /* Ironlake PCH has a fixed PLL->PCH pipe mapping. */
>> @@ -4277,6 +4306,7 @@ struct intel_shared_dpll *intel_get_shared_dpll(struct intel_crtc *crtc,
>>              pll = &dev_priv->shared_dplls[i];
>>              DRM_DEBUG_KMS("CRTC:%d using pre-allocated %s\n",
>>                      crtc->base.base.id, pll->name);
>> +
>>              WARN_ON(shared_dpll[i].crtc_mask);
>>
>>              goto found;
>> @@ -4325,6 +4355,12 @@ found:
>>
>>      shared_dpll[i].crtc_mask |= 1 << crtc->pipe;
>>
>> +    /*
>> +     * For DP, this shared dpll config is saved to intel_dp,
>> +     * and used by upfront link train logic subsequently.
>> +     */
>> +    intel_dp_update_shared_dpll_config(crtc_state, shared_dpll[i]);
>> +
>>      return pll;
>>  }
>>
>> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
>> index e2bea710..cc7b6f3 100644
>> --- a/drivers/gpu/drm/i915/intel_dp.c
>> +++ b/drivers/gpu/drm/i915/intel_dp.c
>> @@ -1679,6 +1679,55 @@ void intel_dp_set_link_params(struct intel_dp *intel_dp,
>>      intel_dp->lane_count = pipe_config->lane_count;
>>  }
>>
>> +void intel_dp_update_dpcd_params(struct intel_dp *intel_dp)
>> +{
>> +    intel_dp->dpcd[DP_MAX_LANE_COUNT] &= ~DP_MAX_LANE_COUNT_MASK;
>> +    intel_dp->dpcd[DP_MAX_LANE_COUNT] |=
>> +                    intel_dp->lane_count & DP_MAX_LANE_COUNT_MASK;
>> +
>> +    intel_dp->dpcd[DP_MAX_LINK_RATE] =
>> +                    drm_dp_link_rate_to_bw_code(intel_dp->link_rate);
>
>Maybe it would be good to have an explicit field for actual max lane count and
>link rate. That way, reading the DPCD from debugfs will give the actual results
>read from the sink instead of an edited value.

It looks like the 'dpcd_show' interface reads the DPCD every time and prints
it, and does not seem to look at intel_dp->dpcd[].
Or are we talking about two different interfaces?
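
If the idea is to keep intel_dp->dpcd[] holding exactly what the sink
reported, and instead track the upfront/type-C limits separately, I could add
something like the sketch below (the field and function names are placeholders
I just made up, not existing ones):

	/* New fields in struct intel_dp: upper bounds found by upfront link
	 * training, kept separate so intel_dp->dpcd[] stays exactly what the
	 * sink reported.
	 */
	uint8_t max_upfront_lane_count;
	uint8_t max_upfront_link_bw;

	/* Called instead of intel_dp_update_dpcd_params() after a passing
	 * upfront link training attempt.
	 */
	static void intel_dp_update_upfront_params(struct intel_dp *intel_dp)
	{
		intel_dp->max_upfront_lane_count = intel_dp->lane_count;
		intel_dp->max_upfront_link_bw =
				drm_dp_link_rate_to_bw_code(intel_dp->link_rate);
	}

intel_dp_compute_config() could then clamp the lane count/link rate against
these instead of against an edited DPCD copy. Is that roughly what you meant?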

>
>> +}
>> +
>> +bool intel_dp_get_link_retry_params(struct intel_dp *intel_dp,
>> +                    uint8_t *lane_count, uint8_t *link_bw)
>> +{
>> +    /*
>> +     * As per DP1.3 Spec, retry all link rates for a particular
>> +     * lane count value, before reducing number of lanes.
>> +     */
>> +    if (*link_bw == DP_LINK_BW_5_4) {
>> +            *link_bw = DP_LINK_BW_2_7;
>> +    } else if (*link_bw == DP_LINK_BW_2_7) {
>> +            *link_bw = DP_LINK_BW_1_62;
>> +    } else if (*lane_count == 4) {
>> +            *lane_count = 2;
>> +            *link_bw = intel_dp_max_link_bw(intel_dp);
>> +    } else if (*lane_count == 2) {
>> +            *lane_count = 1;
>> +            *link_bw = intel_dp_max_link_bw(intel_dp);
>> +    } else {
>> +            /* Tried all combinations, so exit */
>> +            return false;
>> +    }
>> +
>> +    return true;
>> +}
>> +
>> +void intel_dp_update_shared_dpll_config(struct intel_crtc_state *pipe_config,
>> +                            struct intel_shared_dpll_config config)
>> +{
>> +    struct intel_encoder *encoder;
>> +    struct intel_dp *intel_dp;
>> +
>> +    encoder = intel_ddi_get_crtc_new_encoder(pipe_config);
>> +    if (!encoder || encoder->type != INTEL_OUTPUT_DISPLAYPORT)
>> +            return;
>> +
>> +    intel_dp = enc_to_intel_dp(&encoder->base);
>> +    intel_dp->upfront_pll_config = config;
>> +}
>> +
>>  static void intel_dp_prepare(struct intel_encoder *encoder)
>>  {
>>      struct drm_device *dev = encoder->base.dev;
>> @@ -4601,6 +4650,66 @@ intel_dp_unset_edid(struct intel_dp *intel_dp)
>>      intel_dp->has_audio = false;
>>  }
>>
>> +static bool intel_dp_upfront_link_train(struct drm_connector *connector)
>> +{
>> +    struct intel_dp *intel_dp = intel_attached_dp(connector);
>> +    struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
>> +    struct intel_encoder *intel_encoder = &intel_dig_port->base;
>> +    struct drm_device *dev = intel_encoder->base.dev;
>> +    struct drm_crtc *crtc = intel_dig_port->base.base.crtc;
>> +    struct drm_mode_config *config = &dev->mode_config;
>> +    struct intel_crtc *intel_crtc = crtc ? to_intel_crtc(crtc) : NULL;
>> +    struct drm_modeset_acquire_ctx ctx, *old_ctx = NULL;
>> +    bool need_dpms_on = false;
>> +    int ret;
>> +
>> +    if (!IS_BROXTON(dev))
>> +            return true;
>> +
>> +    /*
>> +     * On hotplug cases, we call _upfront_link_train directly.
>> +     * In connected boot scenarios (boot with {MIPI/eDP} + DP),
>> +     * BIOS typically brings up DP. Hence, we disable the crtc
>> +     * to do _upfront_link_training and then re-enable it back.
>> +     */
>> +    if (crtc && crtc->enabled && intel_crtc->active) {
>> +            old_ctx = crtc->acquire_ctx;
>> +            drm_modeset_acquire_init(&ctx, 0);
>> +retry:
>> +            ret = drm_modeset_lock(&config->connection_mutex, &ctx);
>> +            if (ret == -EDEADLK) {
>> +                    drm_modeset_backoff(&ctx);
>> +                    goto retry;
>> +            }
>
>We need to acquire those locks even if the crtc is disabled, otherwise a
>modeset could sneak past and wreak havoc while we are doing the link training
>and fiddling with the plls.
>
>> +
>> +            crtc->acquire_ctx = &ctx;
>> +            ret = drm_atomic_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
>
>The crtc->acquire_ctx override seems wrong. One thing that slipped by me when I
>looked at the previous version of this patch is that we need to acquire some
>locks even if the crtc is already off. So it is probably better to write a
>separate function that takes an acquire context and uses atomic to set
>crtc_state->active to false and calls drm_atomic_commit().

Ok, I think this is what Ville was also mentioning. I did not notice it either
;-(
I am still learning the scheme of ww_* and ctx based locking, so yes, I will
rework this.
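
Just to confirm I understood the suggestion, would something along the lines
of the sketch below work? Completely untested, the function name is a
placeholder, and the -EDEADLK backoff would still be handled by the caller:

	static int intel_dp_upfront_crtc_disable(struct drm_crtc *crtc,
						 struct drm_modeset_acquire_ctx *ctx)
	{
		struct drm_atomic_state *state;
		struct drm_crtc_state *crtc_state;
		int ret;

		state = drm_atomic_state_alloc(crtc->dev);
		if (!state)
			return -ENOMEM;

		state->acquire_ctx = ctx;

		crtc_state = drm_atomic_get_crtc_state(state, crtc);
		if (IS_ERR(crtc_state)) {
			ret = PTR_ERR(crtc_state);
			goto err;
		}

		/* Turn the crtc off but keep the mode, so it can be brought
		 * back up with another commit once upfront link training is
		 * done.
		 */
		crtc_state->active = false;

		ret = drm_atomic_commit(state);
		if (ret)
			goto err;

		return 0;

	err:
		drm_atomic_state_free(state);
		return ret;
	}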
>
>> +            if (ret) {
>> +                    DRM_ERROR("DPMS off failed:%d\n", ret);
>> +                    goto exit_unlock;
>> +            }
>> +
>> +            need_dpms_on = true;
>> +    }
>> +
>> +    if (HAS_DDI(dev))
>> +            ret = intel_ddi_upfront_link_train(intel_dp, intel_crtc);
>> +            /* Other platforms upfront link train call goes here..*/
>> +
>> +    if (!need_dpms_on)
>> +            return ret;
>> +
>> +    ret = drm_atomic_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
>> +    if (ret)
>> +            DRM_ERROR("DPMS on failed:%d\n", ret);
>> +
>> +exit_unlock:
>> +    drm_modeset_drop_locks(&ctx);
>> +    drm_modeset_acquire_fini(&ctx);
>> +    crtc->acquire_ctx = old_ctx;
>> +    return ret;
>
>The return type of this function is bool. The conversion here will do the wrong
>thing, since ret potentially has the return value of functions that return
>non-zero on failure.

I wanted to make this an 'int' but missed it. I will fix it in the next version.

>
>> +}
>> +
>>  static enum drm_connector_status
>>  intel_dp_detect(struct drm_connector *connector, bool force)
>>  {
>> @@ -4610,7 +4719,8 @@ intel_dp_detect(struct drm_connector *connector, bool force)
>>      struct drm_device *dev = connector->dev;
>>      enum drm_connector_status status;
>>      enum intel_display_power_domain power_domain;
>> -    bool ret;
>> +    bool do_upfront_link_train;
>> +    int ret;
>>      u8 sink_irq_vector;
>>
>>      DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
>> @@ -4684,6 +4794,16 @@ intel_dp_detect(struct drm_connector *connector, bool force)
>>                      DRM_DEBUG_DRIVER("CP or sink specific irq unhandled\n");
>>      }
>>
>> +    /* Do not do upfront link train, if it is a compliance request */
>> +    do_upfront_link_train =
>> +            intel_encoder->type == INTEL_OUTPUT_DISPLAYPORT &&
>> +            intel_dp->compliance_test_type != DP_TEST_LINK_TRAINING;
>> +
>> +    if (do_upfront_link_train) {
>> +            ret = intel_dp_upfront_link_train(connector);
>> +            if (ret)
>> +                    status = connector_status_disconnected;
>> +    }
>
>So if upfront link training fails because there is no unused crtc we'll say
>that the connector is disconnected?

I was not sure what stand we should take here.

Should we distinguish the various causes of upfront failure and then
report the status accordingly? Not just a missing crtc, but also pll selection
and link training failures, etc.
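
For example, assuming intel_ddi_upfront_link_train() is reworked to return
distinct error codes for the different failures, one rough option in
intel_dp_detect() could be:

	ret = intel_dp_upfront_link_train(connector);
	if (ret == -EINVAL) {
		/* No free crtc/pll to train with: keep the status reported
		 * by the sink and retry on a later detect.
		 */
	} else if (ret) {
		/* Training failed on all lane count/bandwidth combinations. */
		status = connector_status_disconnected;
	}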

Please let me know what you think.

Thanks,
Durga

>
>
>Ander
>
>>  out:
>>      intel_display_power_put(to_i915(dev), power_domain);
>>      return status;
>> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
>> index 9fe7c4b..c5ca4ab 100644
>> --- a/drivers/gpu/drm/i915/intel_drv.h
>> +++ b/drivers/gpu/drm/i915/intel_drv.h
>> @@ -812,6 +812,9 @@ struct intel_dp {
>>      unsigned long compliance_test_type;
>>      unsigned long compliance_test_data;
>>      bool compliance_test_active;
>> +
>> +    /* Used to store pll config for upfront link training */
>> +    struct intel_shared_dpll_config upfront_pll_config;
>>  };
>>
>>  struct intel_digital_port {
>> @@ -1031,6 +1034,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
>>                       struct intel_crtc_state *pipe_config);
>>  void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
>>  uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
>> +int intel_ddi_upfront_link_train(struct intel_dp *intel_dp,
>> +                            struct intel_crtc *crtc);
>>
>>  /* intel_frontbuffer.c */
>>  void intel_fb_obj_invalidate(struct drm_i915_gem_object *obj,
>> @@ -1239,6 +1244,9 @@ bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
>>                           struct intel_connector *intel_connector);
>>  void intel_dp_set_link_params(struct intel_dp *intel_dp,
>>                            const struct intel_crtc_state *pipe_config);
>> +void intel_dp_update_dpcd_params(struct intel_dp *intel_dp);
>> +bool intel_dp_get_link_retry_params(struct intel_dp *intel_dp,
>> +                            uint8_t *lane_count, uint8_t *link_bw);
>>  void intel_dp_start_link_train(struct intel_dp *intel_dp);
>>  void intel_dp_stop_link_train(struct intel_dp *intel_dp);
>>  void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
>> @@ -1246,6 +1254,8 @@ void intel_dp_encoder_destroy(struct drm_encoder *encoder);
>>  int intel_dp_sink_crc(struct intel_dp *intel_dp, u8 *crc);
>>  bool intel_dp_compute_config(struct intel_encoder *encoder,
>>                           struct intel_crtc_state *pipe_config);
>> +void intel_dp_update_shared_dpll_config(struct intel_crtc_state *pipe_config,
>> +                            struct intel_shared_dpll_config config);
>>  bool intel_dp_is_edp(struct drm_device *dev, enum port port);
>>  enum irqreturn intel_dp_hpd_pulse(struct intel_digital_port *intel_dig_port,
>>                                bool long_hpd);