RE: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.

2020-03-06 Thread Lin, Wayne
[AMD Public Use]


> -Original Message-
> From: Sean Paul  
> Sent: Friday, March 6, 2020 1:19 AM
> To: Lin, Wayne 
> Cc: dri-devel@lists.freedesktop.org; ly...@redhat.com; Sean Paul 
> ; Maarten Lankhorst 
> ; Maxime Ripard ; 
> David Airlie 
> Subject: Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> 
> On Wed, Feb 19, 2020 at 6:00 AM Lin, Wayne  wrote:
> >
> > [AMD Public Use]
> >
> >
> >
> > > -Original Message-
> > > From: Sean Paul 
> > > Sent: Wednesday, February 19, 2020 1:15 AM
> > > To: Lin, Wayne 
> > > Cc: Sean Paul ; dri-devel@lists.freedesktop.org; 
> > > ly...@redhat.com; Sean Paul ; Maarten 
> > > Lankhorst ; Maxime Ripard 
> > > ; David Airlie 
> > > Subject: Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> > >
> > > On Tue, Feb 18, 2020 at 10:52:06AM -0500, Sean Paul wrote:
> > > > On Mon, Feb 17, 2020 at 07:08:37AM +, Lin, Wayne wrote:
> > > > > [AMD Public Use]
> > > > >
> > > > >
> > > > >
> > > > > > -Original Message-
> > > > > > From: Sean Paul 
> > > > > > Sent: Saturday, February 15, 2020 12:09 AM
> > > > > > To: Lin, Wayne 
> > > > > > Cc: dri-devel@lists.freedesktop.org; ly...@redhat.com; Sean 
> > > > > > Paul ; Maarten Lankhorst 
> > > > > > ; Maxime Ripard 
> > > > > > ; David Airlie 
> > > > > > Subject: Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg 
> > > > > > restriction.
> > > > > >
> > > > > > On Fri, Feb 14, 2020 at 12:58 AM Lin, Wayne 
> > > > > > 
> > > wrote:
> > > > > > >
> > > > > > > [AMD Public Use]
> > > > > > >
> > > > > > > Hi Paul,
> > > > > > >
> > > > > > > Thanks for the mail!
> > > > > > >
> > > > > > > I tried to solve this problem by restricting the driver to 
> > > > > > > sending one msg at a time, due to hub/dock compatibility problems.
> > > > > > > From my experience, some branch devices don't handle 
> > > > > > > interleaved replies well (a dock from HP, I think).
> > > > > >
> > > > > > Hi Wayne,
> > > > > > Hmm, that's interesting, do you have a part number of the 
> > > > > > failing dock so I can test it?
> > > > > >
> > > > > Hi Paul,
> > > > >
> > > > > Sorry, but it's been quite a while and I can't recall the exact 
> > > > > part number.
> > > > > If I remember correctly, when the specific branch device 
> > > > > receives interleaved replies, it just stops replying to any requests.
> > > > >
> > > > > > > As a result of that, correct me if I'm wrong, but I remember 
> > > > > > > most GPU vendors now send just one down request at a time in 
> > > > > > > the Windows environment.
> > > > > > > I would suggest the original solution :)
> > > > > >
> > > > > > I can't really say what happens on the Windows side of the 
> > > > > > world, but I suppose that makes sense if this is a widespread 
> > > > > > issue with docks. I do worry about the performance hit.
> > > > > >
> > > > > > If indeed this is a problem, could we ratelimit per branch 
> > > > > > device instead of globally? Even that would be better than 
> > > > > > serializing everything.
> > > > > >
> > > > > Since the problem was that some branch devices can't handle two 
> > > > > replies simultaneously, I'm afraid we might still encounter the 
> > > > > same problem?
> > > > >
> > > >
> > > > Hi Wayne,
> > > > Thanks for clarifying. I'm a bit hesitant to scrap this idea 
> > > > without solid evidence that this is a common problem. Do you have 
> > > > any hubs around AMD that you could try my patchset with?
> > Hi Paul,
> > Sure! I will see what I have at hand and try your patch set. It might 
> > take me some time but I will update this as soon as possible. :)
> >

Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.

2020-03-05 Thread Sean Paul
On Wed, Feb 19, 2020 at 6:00 AM Lin, Wayne  wrote:
>
> [AMD Public Use]
>
>
>
> > -Original Message-
> > From: Sean Paul 
> > Sent: Wednesday, February 19, 2020 1:15 AM
> > To: Lin, Wayne 
> > Cc: Sean Paul ; dri-devel@lists.freedesktop.org;
> > ly...@redhat.com; Sean Paul ; Maarten Lankhorst
> > ; Maxime Ripard ;
> > David Airlie 
> > Subject: Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> >
> > On Tue, Feb 18, 2020 at 10:52:06AM -0500, Sean Paul wrote:
> > > On Mon, Feb 17, 2020 at 07:08:37AM +, Lin, Wayne wrote:
> > > > [AMD Public Use]
> > > >
> > > >
> > > >
> > > > > -Original Message-
> > > > > From: Sean Paul 
> > > > > Sent: Saturday, February 15, 2020 12:09 AM
> > > > > To: Lin, Wayne 
> > > > > Cc: dri-devel@lists.freedesktop.org; ly...@redhat.com; Sean Paul
> > > > > ; Maarten Lankhorst
> > > > > ; Maxime Ripard
> > > > > ; David Airlie 
> > > > > Subject: Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> > > > >
> > > > > On Fri, Feb 14, 2020 at 12:58 AM Lin, Wayne 
> > wrote:
> > > > > >
> > > > > > [AMD Public Use]
> > > > > >
> > > > > > Hi Paul,
> > > > > >
> > > > > > Thanks for the mail!
> > > > > >
> > > > > > I tried to solve this problem by restricting the driver to
> > > > > > sending one msg at a time, due to hub/dock compatibility problems.
> > > > > > From my experience, some branch devices don't handle
> > > > > > interleaved replies well (a dock from HP, I think).
> > > > >
> > > > > Hi Wayne,
> > > > > Hmm, that's interesting, do you have a part number of the failing
> > > > > dock so I can test it?
> > > > >
> > > > Hi Paul,
> > > >
> > > > Sorry, but it's been quite a while and I can't recall the exact part number.
> > > > If I remember correctly, when the specific branch device receives
> > > > interleaved replies, it just stops replying to any requests.
> > > >
> > > > > As a result of that, correct me if I'm wrong, but I remember most
> > > > > GPU vendors now send just one down request at a time in the
> > > > > Windows environment.
> > > > > I would suggest the original solution :)
> > > > >
> > > > > I can't really say what happens on the Windows side of the world,
> > > > > but I suppose that makes sense if this is a widespread issue with
> > > > > docks. I do worry about the performance hit.
> > > > >
> > > > > If indeed this is a problem, could we ratelimit per branch device
> > > > > instead of globally? Even that would be better than serializing 
> > > > > everything.
> > > > >
> > > > Since the problem was that some branch devices can't handle two
> > > > replies simultaneously, I'm afraid we might still encounter the
> > > > same problem?
> > > >
> > >
> > > Hi Wayne,
> > > Thanks for clarifying. I'm a bit hesitant to scrap this idea without
> > > solid evidence that this is a common problem. Do you have any hubs
> > > around AMD that you could try my patchset with?
> Hi Paul,
> Sure! I will see what I have at hand and try your patch set. It might take
> me some time but I will update this as soon as possible. :)
>

Hi Wayne,
I'm seeing spurious timeouts even with your serialization patch on
mainline. Have you had a chance to test this set? At least on my
machines it seems to be working better.

Sean

> Thanks!
> >
> > Oh, if you can test the set, you'll also need this patch as well :-)
> >
> > diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c
> > b/drivers/gpu/drm/drm_dp_mst_topology.c
> > --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> > +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> > @@ -3814,7 +3814,9 @@ static bool drm_dp_get_one_sb_msg(struct
> > drm_dp_mst_topology_mgr *mgr, bool up,
> > int basereg = up ? DP_SIDEBAND_MSG_UP_REQ_BASE :
> >DP_SIDEBAND_MSG_DOWN_REP_BASE;
> >
> > -   *mstb = NULL;
> > +   if (mstb)
> > +   *mstb = NULL;
> > +
> > *seqno = -1;

RE: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.

2020-02-19 Thread Lin, Wayne
[AMD Public Use]



> -Original Message-
> From: Sean Paul 
> Sent: Wednesday, February 19, 2020 1:15 AM
> To: Lin, Wayne 
> Cc: Sean Paul ; dri-devel@lists.freedesktop.org;
> ly...@redhat.com; Sean Paul ; Maarten Lankhorst
> ; Maxime Ripard ;
> David Airlie 
> Subject: Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> 
> On Tue, Feb 18, 2020 at 10:52:06AM -0500, Sean Paul wrote:
> > On Mon, Feb 17, 2020 at 07:08:37AM +, Lin, Wayne wrote:
> > > [AMD Public Use]
> > >
> > >
> > >
> > > > -Original Message-
> > > > From: Sean Paul 
> > > > Sent: Saturday, February 15, 2020 12:09 AM
> > > > To: Lin, Wayne 
> > > > Cc: dri-devel@lists.freedesktop.org; ly...@redhat.com; Sean Paul
> > > > ; Maarten Lankhorst
> > > > ; Maxime Ripard
> > > > ; David Airlie 
> > > > Subject: Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> > > >
> > > > On Fri, Feb 14, 2020 at 12:58 AM Lin, Wayne 
> wrote:
> > > > >
> > > > > [AMD Public Use]
> > > > >
> > > > > Hi Paul,
> > > > >
> > > > > Thanks for the mail!
> > > > >
> > > > > I tried to solve this problem by restricting the driver to
> > > > > sending one msg at a time, due to hub/dock compatibility problems.
> > > > > From my experience, some branch devices don't handle interleaved
> > > > > replies well (a dock from HP, I think).
> > > >
> > > > Hi Wayne,
> > > > Hmm, that's interesting, do you have a part number of the failing
> > > > dock so I can test it?
> > > >
> > > Hi Paul,
> > >
> > > Sorry, but it's been quite a while and I can't recall the exact part number.
> > > If I remember correctly, when the specific branch device receives
> > > interleaved replies, it just stops replying to any requests.
> > >
> > > > As a result of that, correct me if I'm wrong, but I remember most
> > > > GPU vendors now send just one down request at a time in the
> > > > Windows environment.
> > > > I would suggest the original solution :)
> > > >
> > > > I can't really say what happens on the Windows side of the world,
> > > > but I suppose that makes sense if this is a widespread issue with
> > > > docks. I do worry about the performance hit.
> > > >
> > > > If indeed this is a problem, could we ratelimit per branch device
> > > > instead of globally? Even that would be better than serializing 
> > > > everything.
> > > >
> > > Since the problem was that some branch devices can't handle two
> > > replies simultaneously, I'm afraid we might still encounter the
> > > same problem?
> > >
> >
> > Hi Wayne,
> > Thanks for clarifying. I'm a bit hesitant to scrap this idea without
> > solid evidence that this is a common problem. Do you have any hubs
> > around AMD that you could try my patchset with?
Hi Paul,
Sure! I will see what I have at hand and try your patch set. It might take
me some time but I will update this as soon as possible. :)

Thanks!
> 
> Oh, if you can test the set, you'll also need this patch as well :-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -3814,7 +3814,9 @@ static bool drm_dp_get_one_sb_msg(struct
> drm_dp_mst_topology_mgr *mgr, bool up,
> int basereg = up ? DP_SIDEBAND_MSG_UP_REQ_BASE :
>DP_SIDEBAND_MSG_DOWN_REP_BASE;
> 
> -   *mstb = NULL;
> +   if (mstb)
> +   *mstb = NULL;
> +
> *seqno = -1;
> 
> len = min(mgr->max_dpcd_transaction_bytes, 16);
> 
> 
> > Perhaps we could get some hard data? I'm happy to test things on my
> > end, but I probably shouldn't just start buying a bunch of random HP
> > docks in hopes one of them exhibits the issue :)
> >
> > To add anecdote to anecdote, everything on my desk seems to work with
> > interleaved replies.
> >
> > Since the HDCP spec requires the host to verify each channel's encryption
> > every <2s, we're going to be increasing the amount of sideband traffic
> > a fair amount, so I would like to ensure we're doing everything possible
> > to maximize our sideband bandwidth.

Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.

2020-02-18 Thread Sean Paul
On Tue, Feb 18, 2020 at 10:52:06AM -0500, Sean Paul wrote:
> On Mon, Feb 17, 2020 at 07:08:37AM +, Lin, Wayne wrote:
> > [AMD Public Use]
> > 
> > 
> > 
> > > -Original Message-
> > > From: Sean Paul 
> > > Sent: Saturday, February 15, 2020 12:09 AM
> > > To: Lin, Wayne 
> > > Cc: dri-devel@lists.freedesktop.org; ly...@redhat.com; Sean Paul
> > > ; Maarten Lankhorst
> > > ; Maxime Ripard ;
> > > David Airlie 
> > > Subject: Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> > > 
> > > On Fri, Feb 14, 2020 at 12:58 AM Lin, Wayne  wrote:
> > > >
> > > > [AMD Public Use]
> > > >
> > > > Hi Paul,
> > > >
> > > > Thanks for the mail!
> > > >
> > > > I tried to solve this problem by restricting the driver to sending
> > > > one msg at a time, due to hub/dock compatibility problems.
> > > > From my experience, some branch devices don't handle interleaved
> > > > replies well (a dock from HP, I think).
> > > 
> > > Hi Wayne,
> > > Hmm, that's interesting, do you have a part number of the failing dock so 
> > > I can
> > > test it?
> > > 
> > Hi Paul,
> > 
> > Sorry, but it's been quite a while and I can't recall the exact part number.
> > If I remember correctly, when the specific branch device receives
> > interleaved replies, it just stops replying to any requests.
> > 
> > > > As a result of that, correct me if I'm wrong, but I remember most
> > > > GPU vendors now send just one down request at a time in the
> > > > Windows environment.
> > > > I would suggest the original solution :)
> > > 
> > > I can't really say what happens on the Windows side of the world, but I 
> > > suppose
> > > that makes sense if this is a widespread issue with docks. I do worry 
> > > about the
> > > performance hit.
> > > 
> > > If indeed this is a problem, could we ratelimit per branch device instead 
> > > of
> > > globally? Even that would be better than serializing everything.
> > > 
> > Since the problem was that some branch devices can't handle two replies
> > simultaneously, I'm afraid we might still encounter the same problem?
> >  
> 
> Hi Wayne,
> Thanks for clarifying. I'm a bit hesitant to scrap this idea without solid
> evidence that this is a common problem. Do you have any hubs around AMD that
> you could try my patchset with?

Oh, if you can test the set, you'll also need this patch as well :-)

diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
b/drivers/gpu/drm/drm_dp_mst_topology.c
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -3814,7 +3814,9 @@ static bool drm_dp_get_one_sb_msg(struct 
drm_dp_mst_topology_mgr *mgr, bool up,
int basereg = up ? DP_SIDEBAND_MSG_UP_REQ_BASE :
   DP_SIDEBAND_MSG_DOWN_REP_BASE;

-   *mstb = NULL;
+   if (mstb)
+   *mstb = NULL;
+
*seqno = -1;

len = min(mgr->max_dpcd_transaction_bytes, 16);
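
(Why the guard is needed, roughly: with this set a caller can pass NULL
when it doesn't need the originating branch device back. Sketch only; the
signature is inferred from the hunk above and the real callers are in
patches 1-2 of the series:)

	struct drm_dp_mst_branch *mstb;
	int seqno;

	/* down-reply path: wants the originating branch back */
	if (!drm_dp_get_one_sb_msg(mgr, false, &mstb, &seqno))
		return 0;

	/* hypothetical caller that doesn't need the branch; without the
	 * "if (mstb)" guard above, this would be a NULL deref
	 */
	if (!drm_dp_get_one_sb_msg(mgr, true, NULL, &seqno))
		return 0;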


> Perhaps we could get some hard data? I'm happy
> to test things on my end, but I probably shouldn't just start buying a bunch 
> of
> random HP docks in hopes one of them exhibits the issue :)
> 
> To add anecdote to anecdote, everything on my desk seems to work with
> interleaved replies.
> 
> Since the HDCP spec requires the host to verify each channel's encryption
> every <2s, we're going to be increasing the amount of sideband traffic a
> fair amount, so I would like to ensure we're doing everything possible to
> maximize our sideband bandwidth.
> 
> Sean
> 
> > Thanks!
> > > Sean
> > > 
> > > >
> > > > Thanks!
> > > > > -Original Message-
> > > > > From: Sean Paul 
> > > > > Sent: Friday, February 14, 2020 5:15 AM
> > > > > To: dri-devel@lists.freedesktop.org
> > > > > Cc: ly...@redhat.com; Lin, Wayne ; Sean Paul
> > > > > ; Maarten Lankhorst
> > > > > ; Maxime Ripard
> > > > > ; David Airlie 
> > > > > Subject: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> > > > >
> > > > > From: Sean Paul 
> > > > >
> > > > > Now that we can support multiple simultaneous replies, remove the
> > > > > restrictions placed on sending new tx msgs.

Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.

2020-02-18 Thread Sean Paul
On Mon, Feb 17, 2020 at 07:08:37AM +, Lin, Wayne wrote:
> [AMD Public Use]
> 
> 
> 
> > -Original Message-
> > From: Sean Paul 
> > Sent: Saturday, February 15, 2020 12:09 AM
> > To: Lin, Wayne 
> > Cc: dri-devel@lists.freedesktop.org; ly...@redhat.com; Sean Paul
> > ; Maarten Lankhorst
> > ; Maxime Ripard ;
> > David Airlie 
> > Subject: Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> > 
> > On Fri, Feb 14, 2020 at 12:58 AM Lin, Wayne  wrote:
> > >
> > > [AMD Public Use]
> > >
> > > Hi Paul,
> > >
> > > Thanks for the mail!
> > >
> > > I tried to solve this problem by restricting the driver to sending
> > > one msg at a time, due to hub/dock compatibility problems.
> > > From my experience, some branch devices don't handle interleaved
> > > replies well (a dock from HP, I think).
> > 
> > Hi Wayne,
> > Hmm, that's interesting, do you have a part number of the failing dock so I 
> > can
> > test it?
> > 
> Hi Paul,
> 
> Sorry, but it's been quite a while and I can't recall the exact part number.
> If I remember correctly, when the specific branch device receives
> interleaved replies, it just stops replying to any requests.
> 
> > > As a result of that, correct me if I'm wrong, but I remember most
> > > GPU vendors now send just one down request at a time in the
> > > Windows environment.
> > > I would suggest the original solution :)
> > 
> > I can't really say what happens on the Windows side of the world, but I 
> > suppose
> > that makes sense if this is a widespread issue with docks. I do worry about 
> > the
> > performance hit.
> > 
> > If indeed this is a problem, could we ratelimit per branch device instead of
> > globally? Even that would be better than serializing everything.
> > 
> Since the problem was that some branch devices can't handle two replies
> simultaneously, I'm afraid we might still encounter the same problem?
>  

Hi Wayne,
Thanks for clarifying. I'm a bit hesitant to scrap this idea without solid
evidence that this is a common problem. Do you have any hubs around AMD that
you could try my patchset with? Perhaps we could get some hard data? I'm happy
to test things on my end, but I probably shouldn't just start buying a bunch of
random HP docks in hopes one of them exhibits the issue :)

To add anecdote to anecdote, everything on my desk seems to work with
interleaved replies.

Since the HDCP spec requires the host to verify each channel's encryption
every <2s, we're going to be increasing the amount of sideband traffic a fair
amount, so I would like to ensure we're doing everything possible to maximize
our sideband bandwidth.
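
(Rough numbers, assuming one encryption-status query per stream every 2s:
a 4-stream MST dock adds about 2 sideband request/reply pairs per second
of steady-state traffic on top of normal link management, and it scales
with stream count, so a global one-message-at-a-time gate costs us.)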

Sean

> Thanks!
> > Sean
> > 
> > >
> > > Thanks!
> > > > -Original Message-
> > > > From: Sean Paul 
> > > > Sent: Friday, February 14, 2020 5:15 AM
> > > > To: dri-devel@lists.freedesktop.org
> > > > Cc: ly...@redhat.com; Lin, Wayne ; Sean Paul
> > > > ; Maarten Lankhorst
> > > > ; Maxime Ripard
> > > > ; David Airlie 
> > > > Subject: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> > > >
> > > > From: Sean Paul 
> > > >
> > > > Now that we can support multiple simultaneous replies, remove the
> > > > restrictions placed on sending new tx msgs.
> > > >
> > > > This patch essentially just reverts commit
> > > >   5a64967a2f3b ("drm/dp_mst: Have DP_Tx send one msg at a time")
> > now
> > > > that the problem is solved in a different way.
> > > >
> > > > Cc: Wayne Lin 
> > > > Signed-off-by: Sean Paul 
> > > > ---
> > > >  drivers/gpu/drm/drm_dp_mst_topology.c | 14 ++
> > > >  include/drm/drm_dp_mst_helper.h   |  6 --
> > > >  2 files changed, 2 insertions(+), 18 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c
> > > > b/drivers/gpu/drm/drm_dp_mst_topology.c
> > > > index 7e6a82efdfc02..cbf0bb0ddeb84 100644
> > > > --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> > > > +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> > > > @@ -1203,8 +1203,6 @@ static int drm_dp_mst_wait_tx_reply(struct
> > > > drm_dp_mst_branch *mstb,
> > > >   txmsg->state == DRM_DP_SIDEBAND_TX_SENT) {

RE: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.

2020-02-16 Thread Lin, Wayne
[AMD Public Use]



> -Original Message-
> From: Sean Paul 
> Sent: Saturday, February 15, 2020 12:09 AM
> To: Lin, Wayne 
> Cc: dri-devel@lists.freedesktop.org; ly...@redhat.com; Sean Paul
> ; Maarten Lankhorst
> ; Maxime Ripard ;
> David Airlie 
> Subject: Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> 
> On Fri, Feb 14, 2020 at 12:58 AM Lin, Wayne  wrote:
> >
> > [AMD Public Use]
> >
> > Hi Paul,
> >
> > Thanks for the mail!
> >
> > I tried to solve this problem by restricting the driver to sending one
> > msg at a time, due to hub/dock compatibility problems.
> > From my experience, some branch devices don't handle interleaved
> > replies well (a dock from HP, I think).
> 
> Hi Wayne,
> Hmm, that's interesting, do you have a part number of the failing dock so I 
> can
> test it?
> 
Hi Paul,

Sorry, but it's been quite a while and I can't recall the exact part number.
If I remember correctly, when the specific branch device receives
interleaved replies, it just stops replying to any requests.

> > As a result of that, correct me if I'm wrong, but I remember most GPU
> > vendors now send just one down request at a time in the Windows
> > environment.
> > I would suggest the original solution :)
> 
> I can't really say what happens on the Windows side of the world, but I 
> suppose
> that makes sense if this is a widespread issue with docks. I do worry about 
> the
> performance hit.
> 
> If indeed this is a problem, could we ratelimit per branch device instead of
> globally? Even that would be better than serializing everything.
> 
Since the problem was that some branch devices can't handle two replies
simultaneously, I'm afraid we might still encounter the same problem?
 
Thanks!
> Sean
> 
> >
> > Thanks!
> > > -Original Message-
> > > From: Sean Paul 
> > > Sent: Friday, February 14, 2020 5:15 AM
> > > To: dri-devel@lists.freedesktop.org
> > > Cc: ly...@redhat.com; Lin, Wayne ; Sean Paul
> > > ; Maarten Lankhorst
> > > ; Maxime Ripard
> > > ; David Airlie 
> > > Subject: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> > >
> > > From: Sean Paul 
> > >
> > > Now that we can support multiple simultaneous replies, remove the
> > > restrictions placed on sending new tx msgs.
> > >
> > > This patch essentially just reverts commit
> > >   5a64967a2f3b ("drm/dp_mst: Have DP_Tx send one msg at a time")
> now
> > > that the problem is solved in a different way.
> > >
> > > Cc: Wayne Lin 
> > > Signed-off-by: Sean Paul 
> > > ---
> > >  drivers/gpu/drm/drm_dp_mst_topology.c | 14 ++
> > >  include/drm/drm_dp_mst_helper.h   |  6 --
> > >  2 files changed, 2 insertions(+), 18 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c
> > > b/drivers/gpu/drm/drm_dp_mst_topology.c
> > > index 7e6a82efdfc02..cbf0bb0ddeb84 100644
> > > --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> > > +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> > > @@ -1203,8 +1203,6 @@ static int drm_dp_mst_wait_tx_reply(struct
> > > drm_dp_mst_branch *mstb,
> > >   txmsg->state == DRM_DP_SIDEBAND_TX_SENT) {
> > >   mstb->tx_slots[txmsg->seqno] = NULL;
> > >   }
> > > - mgr->is_waiting_for_dwn_reply = false;
> > > -
> > >   }
> > >  out:
> > >   if (unlikely(ret == -EIO) && drm_debug_enabled(DRM_UT_DP)) {
> > > @@
> > > -1214,7 +1212,6 @@ static int drm_dp_mst_wait_tx_reply(struct
> > > drm_dp_mst_branch *mstb,
> > >   }
> > >   mutex_unlock(&mgr->qlock);
> > >
> > > - drm_dp_mst_kick_tx(mgr);
> > >   return ret;
> > >  }
> > >
> > > @@ -2797,11 +2794,9 @@ static void
> > > process_single_down_tx_qlock(struct
> > > drm_dp_mst_topology_mgr *mgr)
> > >   ret = process_single_tx_qlock(mgr, txmsg, false);
> > >   if (ret == 1) {
> > >   /* txmsg is sent it should be in the slots now */
> > > - mgr->is_waiting_for_dwn_reply = true;
> > >   list_del(&txmsg->next);
> > >   } else if (ret) {
> > >   DRM_DEBUG_KMS("failed to send msg in q %d\n", ret);
> > > - mgr->is_waiting_for_dwn_reply = false;
> >

Re: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.

2020-02-14 Thread Sean Paul
On Fri, Feb 14, 2020 at 12:58 AM Lin, Wayne  wrote:
>
> [AMD Public Use]
>
> Hi Paul,
>
> Thanks for the mail!
>
> I tried to solve this problem by restricting the driver to sending one
> msg at a time, due to hub/dock compatibility problems.
> From my experience, some branch devices don't handle interleaved replies
> well (a dock from HP, I think).

Hi Wayne,
Hmm, that's interesting, do you have a part number of the failing dock
so I can test it?

> As a result of that, correct me if I'm wrong, but I remember most GPU
> vendors now send just one down request at a time in the Windows environment.
> I would suggest the original solution :)

I can't really say what happens on the Windows side of the world, but
I suppose that makes sense if this is a widespread issue with docks. I
do worry about the performance hit.

If indeed this is a problem, could we ratelimit per branch device
instead of globally? Even that would be better than serializing
everything.
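
Something like this rough sketch is what I have in mind (untested, the
per-branch flag is hypothetical; it just moves the gate from the mgr into
drm_dp_mst_branch so each branch can have one message outstanding):

	/* hypothetical per-branch flag replacing the global
	 * mgr->is_waiting_for_dwn_reply
	 */
	struct drm_dp_mst_branch {
		/* ... existing fields ... */
		bool is_waiting_for_dwn_reply;
	};

	/* called with mgr->qlock held */
	static void process_down_tx_qlock(struct drm_dp_mst_topology_mgr *mgr)
	{
		struct drm_dp_sideband_msg_tx *txmsg, *tmp;

		list_for_each_entry_safe(txmsg, tmp, &mgr->tx_msg_downq, next) {
			struct drm_dp_mst_branch *mstb = txmsg->dst;

			/* this branch is still waiting on a reply; try others */
			if (mstb->is_waiting_for_dwn_reply)
				continue;

			if (process_single_tx_qlock(mgr, txmsg, false) == 1) {
				mstb->is_waiting_for_dwn_reply = true;
				list_del(&txmsg->next);
			}
		}
	}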

Sean

>
> Thanks!
> > -Original Message-
> > From: Sean Paul 
> > Sent: Friday, February 14, 2020 5:15 AM
> > To: dri-devel@lists.freedesktop.org
> > Cc: ly...@redhat.com; Lin, Wayne ; Sean Paul
> > ; Maarten Lankhorst
> > ; Maxime Ripard ;
> > David Airlie 
> > Subject: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> >
> > From: Sean Paul 
> >
> > Now that we can support multiple simultaneous replies, remove the
> > restrictions placed on sending new tx msgs.
> >
> > This patch essentially just reverts commit
> >   5a64967a2f3b ("drm/dp_mst: Have DP_Tx send one msg at a time") now
> > that the problem is solved in a different way.
> >
> > Cc: Wayne Lin 
> > Signed-off-by: Sean Paul 
> > ---
> >  drivers/gpu/drm/drm_dp_mst_topology.c | 14 ++
> >  include/drm/drm_dp_mst_helper.h   |  6 --
> >  2 files changed, 2 insertions(+), 18 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c
> > b/drivers/gpu/drm/drm_dp_mst_topology.c
> > index 7e6a82efdfc02..cbf0bb0ddeb84 100644
> > --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> > +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> > @@ -1203,8 +1203,6 @@ static int drm_dp_mst_wait_tx_reply(struct
> > drm_dp_mst_branch *mstb,
> >   txmsg->state == DRM_DP_SIDEBAND_TX_SENT) {
> >   mstb->tx_slots[txmsg->seqno] = NULL;
> >   }
> > - mgr->is_waiting_for_dwn_reply = false;
> > -
> >   }
> >  out:
> >   if (unlikely(ret == -EIO) && drm_debug_enabled(DRM_UT_DP)) { @@
> > -1214,7 +1212,6 @@ static int drm_dp_mst_wait_tx_reply(struct
> > drm_dp_mst_branch *mstb,
> >   }
> >   mutex_unlock(&mgr->qlock);
> >
> > - drm_dp_mst_kick_tx(mgr);
> >   return ret;
> >  }
> >
> > @@ -2797,11 +2794,9 @@ static void process_single_down_tx_qlock(struct
> > drm_dp_mst_topology_mgr *mgr)
> >   ret = process_single_tx_qlock(mgr, txmsg, false);
> >   if (ret == 1) {
> >   /* txmsg is sent it should be in the slots now */
> > - mgr->is_waiting_for_dwn_reply = true;
> >   list_del(&txmsg->next);
> >   } else if (ret) {
> >   DRM_DEBUG_KMS("failed to send msg in q %d\n", ret);
> > - mgr->is_waiting_for_dwn_reply = false;
> >   list_del(&txmsg->next);
> >   if (txmsg->seqno != -1)
> >   txmsg->dst->tx_slots[txmsg->seqno] = NULL; @@ -2841,8
> > +2836,7 @@ static void drm_dp_queue_down_tx(struct
> > drm_dp_mst_topology_mgr *mgr,
> >   drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
> >   }
> >
> > - if (list_is_singular(&mgr->tx_msg_downq) &&
> > - !mgr->is_waiting_for_dwn_reply)
> > + if (list_is_singular(&mgr->tx_msg_downq))
> >   process_single_down_tx_qlock(mgr);
> >   mutex_unlock(&mgr->qlock);
> >  }
> > @@ -3822,7 +3816,6 @@ static int drm_dp_mst_handle_down_rep(struct
> > drm_dp_mst_topology_mgr *mgr)
> >   mutex_lock(&mgr->qlock);
> >   txmsg->state = DRM_DP_SIDEBAND_TX_RX;
> >   mstb->tx_slots[seqno] = NULL;
> > - mgr->is_waiting_for_dwn_reply = false;
> >   mutex_unlock(&mgr->qlock);
> >
> >   wake_up_all(&mgr->tx_waitq);
> > @@ -3830,9 +3823,6 @@ static int drm_dp_mst_handle_down_rep(struct
> > drm_dp_mst_topology_mgr

RE: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.

2020-02-13 Thread Lin, Wayne
[AMD Public Use]

Hi Paul,

Thanks for the mail!

I tried to solve this problem by restricting the driver to sending one msg
at a time, due to hub/dock compatibility problems.
From my experience, some branch devices don't handle interleaved replies
well (a dock from HP, I think).
As a result of that, correct me if I'm wrong, but I remember most GPU
vendors now send just one down request at a time in the Windows environment.
I would suggest the original solution :)
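
(To spell out "interleaved replies", schematically and from memory: with
two down requests in flight, e.g. seqno 0 and seqno 1, the reply chunks
can come back interleaved on the DOWN_REP register,

	send req A (seqno 0), send req B (seqno 1)
	read: A chunk 1 -> B chunk 1 -> A chunk 2/EOM -> B chunk 2/EOM

and a branch device that assumes one transaction at a time can wedge and
stop replying entirely, which matches what we saw.)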

Thanks!
> -Original Message-
> From: Sean Paul 
> Sent: Friday, February 14, 2020 5:15 AM
> To: dri-devel@lists.freedesktop.org
> Cc: ly...@redhat.com; Lin, Wayne ; Sean Paul
> ; Maarten Lankhorst
> ; Maxime Ripard ;
> David Airlie 
> Subject: [PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.
> 
> From: Sean Paul 
> 
> Now that we can support multiple simultaneous replies, remove the
> restrictions placed on sending new tx msgs.
> 
> This patch essentially just reverts commit
>   5a64967a2f3b ("drm/dp_mst: Have DP_Tx send one msg at a time") now
> that the problem is solved in a different way.
> 
> Cc: Wayne Lin 
> Signed-off-by: Sean Paul 
> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 14 ++
>  include/drm/drm_dp_mst_helper.h   |  6 --
>  2 files changed, 2 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c
> b/drivers/gpu/drm/drm_dp_mst_topology.c
> index 7e6a82efdfc02..cbf0bb0ddeb84 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -1203,8 +1203,6 @@ static int drm_dp_mst_wait_tx_reply(struct
> drm_dp_mst_branch *mstb,
>   txmsg->state == DRM_DP_SIDEBAND_TX_SENT) {
>   mstb->tx_slots[txmsg->seqno] = NULL;
>   }
> - mgr->is_waiting_for_dwn_reply = false;
> -
>   }
>  out:
>   if (unlikely(ret == -EIO) && drm_debug_enabled(DRM_UT_DP)) { @@
> -1214,7 +1212,6 @@ static int drm_dp_mst_wait_tx_reply(struct
> drm_dp_mst_branch *mstb,
>   }
>   mutex_unlock(&mgr->qlock);
> 
> - drm_dp_mst_kick_tx(mgr);
>   return ret;
>  }
> 
> @@ -2797,11 +2794,9 @@ static void process_single_down_tx_qlock(struct
> drm_dp_mst_topology_mgr *mgr)
>   ret = process_single_tx_qlock(mgr, txmsg, false);
>   if (ret == 1) {
>   /* txmsg is sent it should be in the slots now */
> - mgr->is_waiting_for_dwn_reply = true;
>   list_del(&txmsg->next);
>   } else if (ret) {
>   DRM_DEBUG_KMS("failed to send msg in q %d\n", ret);
> - mgr->is_waiting_for_dwn_reply = false;
>   list_del(&txmsg->next);
>   if (txmsg->seqno != -1)
>   txmsg->dst->tx_slots[txmsg->seqno] = NULL; @@ -2841,8
> +2836,7 @@ static void drm_dp_queue_down_tx(struct
> drm_dp_mst_topology_mgr *mgr,
>   drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
>   }
> 
> - if (list_is_singular(&mgr->tx_msg_downq) &&
> - !mgr->is_waiting_for_dwn_reply)
> + if (list_is_singular(&mgr->tx_msg_downq))
>   process_single_down_tx_qlock(mgr);
>   mutex_unlock(&mgr->qlock);
>  }
> @@ -3822,7 +3816,6 @@ static int drm_dp_mst_handle_down_rep(struct
> drm_dp_mst_topology_mgr *mgr)
>   mutex_lock(&mgr->qlock);
>   txmsg->state = DRM_DP_SIDEBAND_TX_RX;
>   mstb->tx_slots[seqno] = NULL;
> - mgr->is_waiting_for_dwn_reply = false;
>   mutex_unlock(&mgr->qlock);
> 
>   wake_up_all(&mgr->tx_waitq);
> @@ -3830,9 +3823,6 @@ static int drm_dp_mst_handle_down_rep(struct
> drm_dp_mst_topology_mgr *mgr)
>   return 0;
> 
>  out_clear_reply:
> - mutex_lock(&mgr->qlock);
> - mgr->is_waiting_for_dwn_reply = false;
> - mutex_unlock(&mgr->qlock);
>   if (msg)
>   memset(msg, 0, sizeof(struct drm_dp_sideband_msg_rx));
>  out:
> @@ -4670,7 +4660,7 @@ static void drm_dp_tx_work(struct work_struct
> *work)
>   struct drm_dp_mst_topology_mgr *mgr = container_of(work, struct
> drm_dp_mst_topology_mgr, tx_work);
> 
>   mutex_lock(&mgr->qlock);
> - if (!list_empty(&mgr->tx_msg_downq)
> && !mgr->is_waiting_for_dwn_reply)
> + if (!list_empty(&mgr->tx_msg_downq))
>   process_single_down_tx_qlock(mgr);
>   mutex_unlock(&mgr->qlock);
>  }
> diff --git a/include/drm/drm_dp_mst_helper.h
> b/include/drm/drm_dp_mst_helper.h
> index 7d0341f94ce1b..fcc30e64c8e7e 100644

[PATCH 3/3] drm/dp_mst: Remove single tx msg restriction.

2020-02-13 Thread Sean Paul
From: Sean Paul 

Now that we can support multiple simultaneous replies, remove the
restrictions placed on sending new tx msgs.

This patch essentially just reverts commit
  5a64967a2f3b ("drm/dp_mst: Have DP_Tx send one msg at a time")
now that the problem is solved in a different way.

Cc: Wayne Lin 
Signed-off-by: Sean Paul 
---
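The behavioral core of the change, distilled from the hunks below:

	/* before, with 5a64967a2f3b: only kick the queue when nothing
	 * is in flight
	 */
	if (list_is_singular(&mgr->tx_msg_downq) &&
	    !mgr->is_waiting_for_dwn_reply)
		process_single_down_tx_qlock(mgr);

	/* after this patch: the global in-flight gate is gone */
	if (list_is_singular(&mgr->tx_msg_downq))
		process_single_down_tx_qlock(mgr);
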
 drivers/gpu/drm/drm_dp_mst_topology.c | 14 ++
 include/drm/drm_dp_mst_helper.h   |  6 --
 2 files changed, 2 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c 
b/drivers/gpu/drm/drm_dp_mst_topology.c
index 7e6a82efdfc02..cbf0bb0ddeb84 100644
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -1203,8 +1203,6 @@ static int drm_dp_mst_wait_tx_reply(struct 
drm_dp_mst_branch *mstb,
txmsg->state == DRM_DP_SIDEBAND_TX_SENT) {
mstb->tx_slots[txmsg->seqno] = NULL;
}
-   mgr->is_waiting_for_dwn_reply = false;
-
}
 out:
if (unlikely(ret == -EIO) && drm_debug_enabled(DRM_UT_DP)) {
@@ -1214,7 +1212,6 @@ static int drm_dp_mst_wait_tx_reply(struct 
drm_dp_mst_branch *mstb,
}
mutex_unlock(&mgr->qlock);
 
-   drm_dp_mst_kick_tx(mgr);
return ret;
 }
 
@@ -2797,11 +2794,9 @@ static void process_single_down_tx_qlock(struct 
drm_dp_mst_topology_mgr *mgr)
ret = process_single_tx_qlock(mgr, txmsg, false);
if (ret == 1) {
/* txmsg is sent it should be in the slots now */
-   mgr->is_waiting_for_dwn_reply = true;
list_del(&txmsg->next);
} else if (ret) {
DRM_DEBUG_KMS("failed to send msg in q %d\n", ret);
-   mgr->is_waiting_for_dwn_reply = false;
list_del(&txmsg->next);
if (txmsg->seqno != -1)
txmsg->dst->tx_slots[txmsg->seqno] = NULL;
@@ -2841,8 +2836,7 @@ static void drm_dp_queue_down_tx(struct 
drm_dp_mst_topology_mgr *mgr,
drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
}
 
-   if (list_is_singular(&mgr->tx_msg_downq) &&
-   !mgr->is_waiting_for_dwn_reply)
+   if (list_is_singular(&mgr->tx_msg_downq))
process_single_down_tx_qlock(mgr);
mutex_unlock(&mgr->qlock);
 }
@@ -3822,7 +3816,6 @@ static int drm_dp_mst_handle_down_rep(struct 
drm_dp_mst_topology_mgr *mgr)
mutex_lock(&mgr->qlock);
txmsg->state = DRM_DP_SIDEBAND_TX_RX;
mstb->tx_slots[seqno] = NULL;
-   mgr->is_waiting_for_dwn_reply = false;
mutex_unlock(&mgr->qlock);
 
wake_up_all(&mgr->tx_waitq);
@@ -3830,9 +3823,6 @@ static int drm_dp_mst_handle_down_rep(struct 
drm_dp_mst_topology_mgr *mgr)
return 0;
 
 out_clear_reply:
-   mutex_lock(&mgr->qlock);
-   mgr->is_waiting_for_dwn_reply = false;
-   mutex_unlock(&mgr->qlock);
if (msg)
memset(msg, 0, sizeof(struct drm_dp_sideband_msg_rx));
 out:
@@ -4670,7 +4660,7 @@ static void drm_dp_tx_work(struct work_struct *work)
struct drm_dp_mst_topology_mgr *mgr = container_of(work, struct 
drm_dp_mst_topology_mgr, tx_work);
 
mutex_lock(&mgr->qlock);
-   if (!list_empty(&mgr->tx_msg_downq) && !mgr->is_waiting_for_dwn_reply)
+   if (!list_empty(&mgr->tx_msg_downq))
process_single_down_tx_qlock(mgr);
mutex_unlock(&mgr->qlock);
 }
diff --git a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h
index 7d0341f94ce1b..fcc30e64c8e7e 100644
--- a/include/drm/drm_dp_mst_helper.h
+++ b/include/drm/drm_dp_mst_helper.h
@@ -619,12 +619,6 @@ struct drm_dp_mst_topology_mgr {
 * &drm_dp_sideband_msg_tx.state once they are queued
 */
struct mutex qlock;
-
-   /**
-* @is_waiting_for_dwn_reply: indicate whether is waiting for down reply
-*/
-   bool is_waiting_for_dwn_reply;
-
/**
 * @tx_msg_downq: List of pending down replies.
 */
-- 
Sean Paul, Software Engineer, Google / Chromium OS

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel