Re: slow rx 5600 xt fps

2020-05-24 Thread Javad Karabi
wow, i totally just realized that this is what you meant by talking
about primary gpu, early on in this email chain.
ive come full circle! you were totally right and even knew exactly
what the easiest change was lol.
my bad!

On Sun, May 24, 2020 at 8:03 PM Javad Karabi  wrote:
>
> Michel, ah my bad! thank you. sorry, thought it was mutter
>
> also, one other thing. so i have been messing around with all types of
> xorg configuration blah blah blah, but i just had an epiphany, and it
> works!
>
> so, all i ever needed to do was add Option "PrimaryGpu" "true" to
> /usr/share/X11/xorg.conf.d/10-amdgpu.conf
> with that _one_ change, i dont need any other xorg configs, and when i
> boot without the amdgpu, it should work just fine, and when the amdgpu
> is present it will automatically become the primary due to the
> outputclass matching it!
>
> that PrimaryGpu being added was exactly the thing. im so glad it works now
>
> So, these are my thoughts:
> theres no telling what other graphics cards might be installed, so
> xorg defaults to using whatever linux was booted with as the primary,
> in my case the intel graphics i guess.
>
> now, on a regular desktop, thats totally fine because the graphics
> card has much easier direct access to ram, and with fancy things like
> dma and whatnot, its no problem at all for a graphics card to act as a
> render offload, since the card can simply dma the results into main
> memory or something
>
> but when youve got the graphics card in an eGPU, across a thunderbolt
> connection, it essentially becomes NUMA-like, since that memory access
> has way more latency
>
> so the fact that the debian package isnt saying "PrimaryGpu" "true" i
> guess makes sense, because who knows what you want the primary to be.
>
> but yea, just thought yall might be interested to know that the
> solution for running an egpu in linux is simply to add "PrimaryGpu" to
> the output class that matches your gpu.
> and when you boot without the gpu, the outputclass wont match, so it
> will default to normal behavior
>
> also, lets say you have N number of gpus, each of which may or may not
> be present. from what i understand, you can still enforce a level of
> precedence about picking which one to be primary like this:
>
> "If multiple output devices match an OutputClass section with the
> PrimaryGPU option set, the first one enumerated becomes the primary
> GPU."
>
> so one can simply create a file with N outputclasses, ordered from
> highest to lowest precedence for being the primary gpu, each with
> Option "PrimaryGpu" "true"
>
> i realize this isnt an xorg list, and doesnt have much to do with
> amdgpu, but would love to hear yalls thoughts. theres a lot of
> discussion online in forums and whatnot, and people coming up with all
> kinds of "automatic xorg configuration startup scripts" and stuff to
> manage egpus, but if my hypothesis is correct, this is the cleanest,
> simplest and most elegant solution
>
>
> On Sat, May 23, 2020 at 5:17 AM Michel Dänzer  wrote:
> >
> > On 2020-05-23 12:48 a.m., Javad Karabi wrote:
> > >
> > > also, the whole thing about "monitor updating once every 3 seconds"
> > > when i close the lid is because mutter will go down to 1fps when it
> > > detects that the lid is closed.
> >
> > Xorg's Present extension code ends up doing that (because it has no
> > support for secondary GPUs), not mutter.
> >
> >
> > --
> > Earthling Michel Dänzer   |   https://redhat.com
> > Libre software enthusiast | Mesa and X developer
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Re: slow rx 5600 xt fps

2020-05-24 Thread Javad Karabi
Michel, ah my bad! thank you. sorry, thought it was mutter

also, one other thing. so i have been messing around with all types of
xorg configuration blah blah blah, but i just had an epiphany, and it
works!

so, all i ever needed to do was add Option "PrimaryGpu" "true" to
/usr/share/X11/xorg.conf.d/10-amdgpu.conf
with that _one_ change, i dont need any other xorg configs, and when i
boot without the amdgpu, it should work just fine, and when the amdgpu
is present it will automatically become the primary due to the
outputclass matching it!

that PrimaryGpu being added was exactly the thing. im so glad it works now
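for reference, the whole change boils down to one section. this is roughly what debian's 10-amdgpu.conf looks like (the Identifier/MatchDriver lines may differ slightly on your distro); the only line added is the PrimaryGPU one:

```
Section "OutputClass"
        Identifier "AMDgpu"
        MatchDriver "amdgpu"
        Driver "amdgpu"
        Option "PrimaryGPU" "true"
EndSection
```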

So, these are my thoughts:
theres no telling what other graphics cards might be installed, so
xorg defaults to using whatever linux was booted with as the primary,
in my case the intel graphics i guess.

now, on a regular desktop, thats totally fine because the graphics
card has much easier direct access to ram, and with fancy things like
dma and whatnot, its no problem at all for a graphics card to act as a
render offload, since the card can simply dma the results into main
memory or something

but when youve got the graphics card in an eGPU, across a thunderbolt
connection, it essentially becomes NUMA-like, since that memory access
has way more latency

so the fact that the debian package isnt saying "PrimaryGpu" "true" i
guess makes sense, because who knows what you want the primary to be.

but yea, just thought yall might be interested to know that the
solution for running an egpu in linux is simply to add "PrimaryGpu" to
the output class that matches your gpu.
and when you boot without the gpu, the outputclass wont match, so it
will default to normal behavior

also, lets say you have N number of gpus, each of which may or may not
be present. from what i understand, you can still enforce a level of
precedence about picking which one to be primary like this:

"If multiple output devices match an OutputClass section with the
PrimaryGPU option set, the first one enumerated becomes the primary
GPU."

so one can simply create a file with N outputclasses, ordered from
highest to lowest precedence for being the primary gpu, each with
Option "PrimaryGpu" "true"
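as a sketch, a file like this would do it (the identifiers and the second section are made up for illustration; a section whose device is absent simply never matches, so only present gpus are candidates):

```
Section "OutputClass"
        Identifier "egpu-first-choice"
        MatchDriver "amdgpu"
        Driver "amdgpu"
        Option "PrimaryGPU" "true"
EndSection

Section "OutputClass"
        Identifier "igpu-fallback"
        MatchDriver "i915"
        Driver "modesetting"
        Option "PrimaryGPU" "true"
EndSection
```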

i realize this isnt an xorg list, and doesnt have much to do with
amdgpu, but would love to hear yalls thoughts. theres a lot of
discussion online in forums and whatnot, and people coming up with all
kinds of "automatic xorg configuration startup scripts" and stuff to
manage egpus, but if my hypothesis is correct, this is the cleanest,
simplest and most elegant solution


On Sat, May 23, 2020 at 5:17 AM Michel Dänzer  wrote:
>
> On 2020-05-23 12:48 a.m., Javad Karabi wrote:
> >
> > also, the whole thing about "monitor updating once every 3 seconds"
> > when i close the lid is because mutter will go down to 1fps when it
> > detects that the lid is closed.
>
> Xorg's Present extension code ends up doing that (because it has no
> support for secondary GPUs), not mutter.
>
>
> --
> Earthling Michel Dänzer   |   https://redhat.com
> Libre software enthusiast | Mesa and X developer


Re: slow rx 5600 xt fps

2020-05-22 Thread Javad Karabi
so yea, looks like the compositing wasnt happening on the amdgpu, so
thats why i would only see 300fps for glxgears etc.

also, the whole thing about "monitor updating once every 3 seconds"
when i close the lid is because mutter will go down to 1fps when it
detects that the lid is closed.
i setup the compositor to use the graphics card (by simply using a
custom xorg.conf with the Screen's device section being the amd
device) and now it runs perfectly. ill write up a lil blog post or
something to explain it. will link it in this thread if yall are
curious. but really it boils down to "yall are right"

so the fix for now is simply to use a single xorg.conf which
specifically says to use the device for the X screen
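something like this is all the xorg.conf i mean (the BusID value is a placeholder, grab yours from lspci; the identifiers are arbitrary):

```
Section "Device"
        Identifier "amd-egpu"
        Driver "amdgpu"
        BusID "PCI:10:0:0"
EndSection

Section "Screen"
        Identifier "Screen0"
        Device "amd-egpu"
EndSection
```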

thanks a lot for yalls help

On Thu, May 21, 2020 at 4:21 PM Javad Karabi  wrote:
>
> the files i attached are using the amdgpu ddx
>
> also, one thing to note: i just switched to modesetting but it seems
> it has the same issue.
> i got it working last night, forgot what i changed. but that was one
> of things i changed. but here are the files for when i use the amdgpu
> ddx
>
> On Thu, May 21, 2020 at 2:15 PM Alex Deucher  wrote:
> >
> > Please provide your dmesg output and xorg log.
> >
> > Alex
> >
> > On Thu, May 21, 2020 at 3:03 PM Javad Karabi  wrote:
> > >
> > > Alex,
> > > yea, youre totally right i was overcomplicating it lol
> > > so i was able to get the radeon to run super fast, by doing as you
> > > suggested and blacklisting i915.
> > > (had to use module_blacklist= though because modprobe.blacklist still
> > > allows i915, if a dependency wants to load it)
> > > but with one caveat:
> > > using the amdgpu driver, there was some error saying something about
> > > telling me that i need to add BusID to my device or something.
> > > maybe amdgpu wasnt able to find the card or something, i dont
> > > remember. so i used modesetting instead and it seemed to work.
> > > i will try going back to amdgpu and seeing what that error message was.
> > > i recall you saying that modesetting doesnt have some features that
> > > amdgpu provides.
> > > what are some examples of that?
> > > is the direction that graphics drivers are going, to be simply used as
> > > "modesetting" via xorg?
> > >
> > > On Wed, May 20, 2020 at 10:12 PM Alex Deucher  
> > > wrote:
> > > >
> > > > I think you are overcomplicating things.  Just try and get X running
> > > > on just the AMD GPU on bare metal.  Introducing virtualization is just
> > > > adding more uncertainty.  If you can't configure X to not use the
> > > > integrated GPU, just blacklist the i915 driver (append
> > > > modprobe.blacklist=i915 to the kernel command line in grub) and X
> > > > should come up on the dGPU.
> > > >
> > > > Alex
> > > >
> > > > On Wed, May 20, 2020 at 6:05 PM Javad Karabi  
> > > > wrote:
> > > > >
> > > > > Thanks Alex,
> > > > > Here's my plan:
> > > > >
> > > > > since my laptop's os is pretty customized, e.g. compiling my own 
> > > > > kernel, building latest xorg, latest xorg-driver-amdgpu, etc etc,
> > > > > im going to use the intel iommu and pass through my rx 5600 into a 
> > > > > virtual machine, which will be a 100% stock ubuntu installation.
> > > > > then, inside that vm, i will continue to debug
> > > > >
> > > > > does that sound like it would make sense for testing? for example, 
> > > > > with that scenario, it adds the iommu into the mix, so who knows if 
> > > > > that causes performance issues. but i think its worth a shot, to see 
> > > > > if a stock kernel will handle it better
> > > > >
> > > > > also, quick question:
> > > > > from what i understand, a thunderbolt 3 pci express connection should 
> > > > > handle 8 GT/s x4, however, along the chain of bridges to my device, i 
> > > > > notice that the bridge closest to the graphics card is at 2.5 GT/s 
> > > > > x4, and it also says "downgraded" (this is via the lspci output)
> > > > >
> > > > > now, when i boot into windows, it _also_ says 2.5 GT/s x4, and it 
> > > > > runs extremely well. no issues at all.
> > > > >
> > > > > so my question is: the fact that the bridge is at 2.5 GT/s x4, and 
> > > > > not at it

Re: slow rx 5600 xt fps

2020-05-21 Thread Javad Karabi
Alex,
yea, youre totally right i was overcomplicating it lol
so i was able to get the radeon to run super fast, by doing as you
suggested and blacklisting i915.
(had to use module_blacklist= though because modprobe.blacklist still
allows i915, if a dependency wants to load it)
but with one caveat:
using the amdgpu driver, there was some error saying something about
telling me that i need to add BusID to my device or something.
maybe amdgpu wasnt able to find the card or something, i dont
remember. so i used modesetting instead and it seemed to work.
i will try going back to amdgpu and seeing what that error message was.
i recall you saying that modesetting doesnt have some features that
amdgpu provides.
what are some examples of that?
is the direction that graphics drivers are going, to be simply used as
"modesetting" via xorg?
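for reference, the grub side of that blacklisting looks roughly like this on a debian-style setup (the existing flags here are placeholders, keep whatever you already have):

```
# /etc/default/grub -- then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash module_blacklist=i915"
```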

On Wed, May 20, 2020 at 10:12 PM Alex Deucher  wrote:
>
> I think you are overcomplicating things.  Just try and get X running
> on just the AMD GPU on bare metal.  Introducing virtualization is just
> adding more uncertainty.  If you can't configure X to not use the
> integrated GPU, just blacklist the i915 driver (append
> modprobe.blacklist=i915 to the kernel command line in grub) and X
> should come up on the dGPU.
>
> Alex
>
> On Wed, May 20, 2020 at 6:05 PM Javad Karabi  wrote:
> >
> > Thanks Alex,
> > Here's my plan:
> >
> > since my laptop's os is pretty customized, e.g. compiling my own kernel, 
> > building latest xorg, latest xorg-driver-amdgpu, etc etc,
> > im going to use the intel iommu and pass through my rx 5600 into a virtual 
> > machine, which will be a 100% stock ubuntu installation.
> > then, inside that vm, i will continue to debug
> >
> > does that sound like it would make sense for testing? for example, with 
> > that scenario, it adds the iommu into the mix, so who knows if that causes 
> > performance issues. but i think its worth a shot, to see if a stock kernel 
> > will handle it better
> >
> > also, quick question:
> > from what i understand, a thunderbolt 3 pci express connection should 
> > handle 8 GT/s x4, however, along the chain of bridges to my device, i 
> > notice that the bridge closest to the graphics card is at 2.5 GT/s x4, and 
> > it also says "downgraded" (this is via the lspci output)
> >
> > now, when i boot into windows, it _also_ says 2.5 GT/s x4, and it runs 
> > extremely well. no issues at all.
> >
> > so my question is: the fact that the bridge is at 2.5 GT/s x4, and not at 
> > its theoretical "full link speed" of 8 GT/s x4, do you suppose that _could_ 
> > be an issue?
> > i do not think so, because, like i said, in windows it also reports that 
> > link speed.
> > i would assume that you would want the fastest link speed possible, because 
> > i would assume that of _all_ tb3 pci express devices, a GPU would be the #1 
> > most demanding on the link
> >
> > just curious if you think 2.5 GT/s could be the bottleneck
> >
> > i will pass through the device into a ubuntu vm and let you know how it 
> > goes. thanks
> >
> >
> >
> > On Tue, May 19, 2020 at 9:29 PM Alex Deucher  wrote:
> >>
> >> On Tue, May 19, 2020 at 9:16 PM Javad Karabi  wrote:
> >> >
> >> > thanks for the answers alex.
> >> >
> >> > so, i went ahead and got a displayport cable to see if that changes
> >> > anything. and now, when i run monitor only, and the monitor connected
> >> > to the card, it has no issues like before! so i am thinking that
> >> > somethings up with either the hdmi cable, or some hdmi related setting
> >> > in my system? who knows, but im just gonna roll with only using
> >> > displayport cables now.
> >> > the previous hdmi cable was actually pretty long, because i was
> >> > extending it with an hdmi extension cable, so maybe the signal was
> >> > really bad or something :/
> >> >
> >> > but yea, i guess the only real issue now is maybe something simple
> >> > related to some sysfs entry about enabling some powermode, voltage,
> >> > clock frequency, or something, so that glxgears will give me more than
> >> > 300 fps. but at least now i can use a single monitor configuration with
> >> > the monitor displayported up to the card.
> >> >
> >>
> >> The GPU dynamically adjusts the clocks and voltages based on load.  No
> >> manual configuration is required.
> >>
> >> At this point, we probably need to see your xorg log and dmesg output

Re: slow rx 5600 xt fps

2020-05-20 Thread Javad Karabi
Thanks Alex,
Here's my plan:

since my laptop's os is pretty customized, e.g. compiling my own kernel,
building latest xorg, latest xorg-driver-amdgpu, etc etc,
im going to use the intel iommu and pass through my rx 5600 into a virtual
machine, which will be a 100% stock ubuntu installation.
then, inside that vm, i will continue to debug

does that sound like it would make sense for testing? for example, with
that scenario, it adds the iommu into the mix, so who knows if that causes
performance issues. but i think its worth a shot, to see if a stock kernel
will handle it better

also, quick question:
from what i understand, a thunderbolt 3 pci express connection should
handle 8 GT/s x4, however, along the chain of bridges to my device, i
notice that the bridge closest to the graphics card is at 2.5 GT/s x4, and
it also says "downgraded" (this is via the lspci output)

now, when i boot into windows, it _also_ says 2.5 GT/s x4, and it runs
extremely well. no issues at all.

so my question is: the fact that the bridge is at 2.5 GT/s x4, and not at
its theoretical "full link speed" of 8 GT/s x4, do you suppose that _could_
be an issue?
i do not think so, because, like i said, in windows it also reports that
link speed.
i would assume that you would want the fastest link speed possible, because
i would assume that of _all_ tb3 pci express devices, a GPU would be the #1
most demanding on the link

just curious if you think 2.5 GT/s could be the bottleneck
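heres some back-of-envelope math i did on it (my own numbers, ignoring pcie protocol overhead, so real throughput is a bit lower):

```python
# rough effective pcie bandwidth: line rate minus encoding overhead.
# 2.5 and 5 GT/s links use 8b/10b encoding; 8 GT/s links use 128b/130b.
def pcie_gbps(line_rate_gt: float, lanes: int) -> float:
    """one-direction effective bandwidth in Gbit/s."""
    encoding = 8 / 10 if line_rate_gt <= 5 else 128 / 130
    return line_rate_gt * encoding * lanes

downgraded = pcie_gbps(2.5, 4)  # the "downgraded" link lspci reports
full_speed = pcie_gbps(8.0, 4)  # what tb3 should be able to negotiate

print(f"2.5 GT/s x4: {downgraded:.1f} Gbit/s (~{downgraded / 8:.1f} GB/s)")
print(f"8.0 GT/s x4: {full_speed:.1f} Gbit/s (~{full_speed / 8:.1f} GB/s)")
```

so the downgraded link is roughly 1 GB/s per direction instead of ~4 GB/s, which seems like it could matter if frames are being copied across it every refresh. but since windows runs fine at the same reported speed, maybe it isnt the bottleneck.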

i will pass through the device into a ubuntu vm and let you know how it
goes. thanks



On Tue, May 19, 2020 at 9:29 PM Alex Deucher  wrote:

> On Tue, May 19, 2020 at 9:16 PM Javad Karabi 
> wrote:
> >
> > thanks for the answers alex.
> >
> > so, i went ahead and got a displayport cable to see if that changes
> > anything. and now, when i run monitor only, and the monitor connected
> > to the card, it has no issues like before! so i am thinking that
> > somethings up with either the hdmi cable, or some hdmi related setting
> > in my system? who knows, but im just gonna roll with only using
> > displayport cables now.
> > the previous hdmi cable was actually pretty long, because i was
> > extending it with an hdmi extension cable, so maybe the signal was
> > really bad or something :/
> >
> > but yea, i guess the only real issue now is maybe something simple
> > related to some sysfs entry about enabling some powermode, voltage,
> > clock frequency, or something, so that glxgears will give me more than
> > > 300 fps. but at least now i can use a single monitor configuration with 
> > the monitor displayported up to the card.
> >
>
> The GPU dynamically adjusts the clocks and voltages based on load.  No
> manual configuration is required.
>
> At this point, we probably need to see your xorg log and dmesg output
> to try and figure out exactly what is going on.  I still suspect there
> is some interaction going on with both GPUs and the integrated GPU
> being the primary, so as I mentioned before, you should try and run X
> on just the amdgpu rather than trying to use both of them.
>
> Alex
>
>
> > also, one other thing i think you might be interested in, that was
> > happening before.
> >
> > so, previously, with laptop -tb3-> egpu-hdmi> monitor, there was a
> > funny thing happening which i never could figure out.
> > when i would look at the X logs, i would see that "modesetting" (for
> > the intel integrated graphics) was reporting that MonitorA was used
> > with "eDP-1",  which is correct and what i expected.
> > when i scrolled further down, i then saw that "HDMI-A-1-2" was being
> > used for another MonitorB, which also is what i expected (albeit i
> > have no idea why its saying A-1-2)
> > but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
> > radeon card) was being used for MonitorA, which is the same Monitor
> > that the modesetting driver had claimed to be using with eDP-1!
> >
> > so the point is that amdgpu was "using" Monitor0 with DisplayPort-1-2,
> > although that is what modesetting was using for eDP-1.
> >
> > anyway, thats a little aside, i doubt it was related to the terrible
> > hdmi experience i was getting, since its about display port and stuff,
> > but i thought id let you know about that.
> >
> > if you think that is a possible issue, im more than happy to plug the
> > hdmi setup back in and create an issue on gitlab with the logs and
> > everything
> >
> > On Tue, May 19, 2020 at 4:42 PM Alex Deucher 
> wrote:
> > >
> > > On Tue, May 19, 2020 at 5:22 PM Javad Karabi 
> wrote:
> > > >
> > > >

Re: slow rx 5600 xt fps

2020-05-19 Thread Javad Karabi
John,

yea, totally agree with you.
one other thing i havent mentioned is that, each time, ive also been
testing everything by running dota 2 with graphics settings all the
way up. and the behavior in dota2 has been consistent

its funny: when i run dota 2, it consistently hovers at 40fps, but the
weird thing is that with graphics settings all the way low, or
graphics settings all the way up, it sticks to 40fps. regardless of
vsync on / off.
i didnt mention my testing of dota 2 because i figured that glxgears
would summarize the issue best, but i do understand what you mean by
trying a more demanding test.
ive also been testing with glmark2, and it would only give 300-400fps too

heres an example:

$ vblank_mode=0 DRI_PRIME=1 glmark2
ATTENTION: default value of option vblank_mode overridden by environment.
ATTENTION: option value of option vblank_mode ignored.
===
glmark2 2014.03+git20150611.fa71af2d
===
OpenGL Information
GL_VENDOR: X.Org
GL_RENDERER:   AMD Radeon RX 5600 XT (NAVI10, DRM 3.36.0,
5.6.13-karabijavad, LLVM 9.0.1)
GL_VERSION:4.6 (Compatibility Profile) Mesa 20.0.4
===
[build] use-vbo=false: FPS: 128 FrameTime: 7.812 ms
[build] use-vbo=true: FPS: 129 FrameTime: 7.752 ms



On Tue, May 19, 2020 at 8:20 PM Bridgman, John  wrote:
>
> Suggest you use something more demanding than glxgears as a test - part of 
> the problem is that glxgears runs so fast normally (30x faster than your 
> display) that even a small amount of overhead copying a frame from one place 
> to another makes a huge difference in FPS.
>
> If you use a test program that normally runs at 90 FPS you'll probably find 
> that the "slow" speed is something like 85 FPS, rather than the 6:1 
> difference you see with glxgears.
>
> ________
> From: amd-gfx  on behalf of Javad 
> Karabi 
> Sent: May 19, 2020 9:16 PM
> To: Alex Deucher 
> Cc: amd-gfx list 
> Subject: Re: slow rx 5600 xt fps
>
> thanks for the answers alex.
>
> so, i went ahead and got a displayport cable to see if that changes
> anything. and now, when i run monitor only, and the monitor connected
> to the card, it has no issues like before! so i am thinking that
> somethings up with either the hdmi cable, or some hdmi related setting
> in my system? who knows, but im just gonna roll with only using
> displayport cables now.
> the previous hdmi cable was actually pretty long, because i was
> extending it with an hdmi extension cable, so maybe the signal was
> really bad or something :/
>
> but yea, i guess the only real issue now is maybe something simple
> related to some sysfs entry about enabling some powermode, voltage,
> clock frequency, or something, so that glxgears will give me more than
> 300 fps. but at least now i can use a single monitor configuration with
> the monitor displayported up to the card.
>
> also, one other thing i think you might be interested in, that was
> happening before.
>
> so, previously, with laptop -tb3-> egpu-hdmi> monitor, there was a
> funny thing happening which i never could figure out.
> when i would look at the X logs, i would see that "modesetting" (for
> the intel integrated graphics) was reporting that MonitorA was used
> with "eDP-1",  which is correct and what i expected.
> when i scrolled further down, i then saw that "HDMI-A-1-2" was being
> used for another MonitorB, which also is what i expected (albeit i
> have no idea why its saying A-1-2)
> but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
> radeon card) was being used for MonitorA, which is the same Monitor
> that the modesetting driver had claimed to be using with eDP-1!
>
> so the point is that amdgpu was "using" Monitor0 with DisplayPort-1-2,
> although that is what modesetting was using for eDP-1.
>
> anyway, thats a little aside, i doubt it was related to the terrible
> hdmi experience i was getting, since its about display port and stuff,
> but i thought id let you know about that.
>
> if you think that is a possible issue, im more than happy to plug the
> hdmi setup back in and create an issue on gitlab with the logs and
> everything
>
> On Tue, May 19, 2020 at 4:42 PM Alex Deucher  wrote:
> >
> > On Tue, May 19, 2020 at 5:22 PM Javad Karabi  wrote:
> > >
> > > lol youre quick!
> > >
> > > "Windows has supported peer to peer DMA for years so it already has a
> > > numbers of optimizations that are only now becoming possible on Linux"
> > >
> > >

Re: slow rx 5600 xt fps

2020-05-19 Thread Javad Karabi
s/Monitor0/MonitorA

(the Monitor0 and Monitor1 are actually Monitor4 (for the laptop) and
Monitor0 (for the hdmi output), at least i think those were the numbers.)
they were autogenerated Monitor identifiers by xorg, so i dont
remember the exact numbers, but either way, for some reason the
radeon's DisplayPort-1-2 was "using" the same monitor as modesetting's
eDP1

On Tue, May 19, 2020 at 8:16 PM Javad Karabi  wrote:
>
> thanks for the answers alex.
>
> so, i went ahead and got a displayport cable to see if that changes
> anything. and now, when i run monitor only, and the monitor connected
> to the card, it has no issues like before! so i am thinking that
> somethings up with either the hdmi cable, or some hdmi related setting
> in my system? who knows, but im just gonna roll with only using
> displayport cables now.
> the previous hdmi cable was actually pretty long, because i was
> extending it with an hdmi extension cable, so maybe the signal was
> really bad or something :/
>
> but yea, i guess the only real issue now is maybe something simple
> related to some sysfs entry about enabling some powermode, voltage,
> clock frequency, or something, so that glxgears will give me more than
> 300 fps. but at least now i can use a single monitor configuration with
> the monitor displayported up to the card.
>
> also, one other thing i think you might be interested in, that was
> happening before.
>
> so, previously, with laptop -tb3-> egpu-hdmi> monitor, there was a
> funny thing happening which i never could figure out.
> when i would look at the X logs, i would see that "modesetting" (for
> the intel integrated graphics) was reporting that MonitorA was used
> with "eDP-1",  which is correct and what i expected.
> when i scrolled further down, i then saw that "HDMI-A-1-2" was being
> used for another MonitorB, which also is what i expected (albeit i
> have no idea why its saying A-1-2)
> but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
> radeon card) was being used for MonitorA, which is the same Monitor
> that the modesetting driver had claimed to be using with eDP-1!
>
> so the point is that amdgpu was "using" Monitor0 with DisplayPort-1-2,
> although that is what modesetting was using for eDP-1.
>
> anyway, thats a little aside, i doubt it was related to the terrible
> hdmi experience i was getting, since its about display port and stuff,
> but i thought id let you know about that.
>
> if you think that is a possible issue, im more than happy to plug the
> hdmi setup back in and create an issue on gitlab with the logs and
> everything
>
> On Tue, May 19, 2020 at 4:42 PM Alex Deucher  wrote:
> >
> > On Tue, May 19, 2020 at 5:22 PM Javad Karabi  wrote:
> > >
> > > lol youre quick!
> > >
> > > "Windows has supported peer to peer DMA for years so it already has a
> > > numbers of optimizations that are only now becoming possible on Linux"
> > >
> > > whoa, i figured linux would be ahead of windows when it comes to
> > > things like that. but peer-to-peer dma is something that is only
> > > recently possible on linux, but has been possible on windows? what
> > > changed recently that allows for peer to peer dma in linux?
> > >
> >
> > A few things that made this more complicated on Linux:
> > 1. Linux uses IOMMUs more extensively than windows so you can't just
> > pass around physical bus addresses.
> > 2. Linux supports lots of strange architectures that have a lot of
> > limitations with respect to peer to peer transactions
> >
> > It just took years to get all the necessary bits in place in Linux and
> > make everyone happy.
> >
> > > also, in the context of a game running opengl on some gpu, is the
> > > "peer-to-peer" dma transfer something like: the game draw's to some
> > > memory it has allocated, then a DMA transfer gets that and moves it
> > > into the graphics card output?
> >
> > Peer to peer DMA just lets devices access another devices local memory
> > directly.  So if you have a buffer in vram on one device, you can
> > share that directly with another device rather than having to copy it
> > to system memory first.  For example, if you have two GPUs, you can
> > have one of them copy it's content directly to a buffer in the other
> > GPU's vram rather than having to go through system memory first.
> >
> > >
> > > also, i know it can be super annoying trying to debug an issue like
> > > this, with someone like me who has all types of differences from a
> > >

Re: slow rx 5600 xt fps

2020-05-19 Thread Javad Karabi
thanks for the answers alex.

so, i went ahead and got a displayport cable to see if that changes
anything. and now, when i run monitor only, and the monitor connected
to the card, it has no issues like before! so i am thinking that
somethings up with either the hdmi cable, or some hdmi related setting
in my system? who knows, but im just gonna roll with only using
displayport cables now.
the previous hdmi cable was actually pretty long, because i was
extending it with an hdmi extension cable, so maybe the signal was
really bad or something :/

but yea, i guess the only real issue now is maybe something simple
related to some sysfs entry about enabling some powermode, voltage,
clock frequency, or something, so that glxgears will give me more than
300 fps. but at least now i can use a single monitor configuration with
the monitor displayported up to the card.

also, one other thing i think you might be interested in, that was
happening before.

so, previously, with laptop -tb3-> egpu-hdmi> monitor, there was a
funny thing happening which i never could figure out.
when i would look at the X logs, i would see that "modesetting" (for
the intel integrated graphics) was reporting that MonitorA was used
with "eDP-1",  which is correct and what i expected.
when i scrolled further down, i then saw that "HDMI-A-1-2" was being
used for another MonitorB, which also is what i expected (albeit i
have no idea why its saying A-1-2)
but amdgpu was _also_ saying that DisplayPort-1-2 (a port on the
radeon card) was being used for MonitorA, which is the same Monitor
that the modesetting driver had claimed to be using with eDP-1!

so the point is that amdgpu was "using" Monitor0 with DisplayPort-1-2,
although that is what modesetting was using for eDP-1.

anyway, thats a little aside, i doubt it was related to the terrible
hdmi experience i was getting, since its about display port and stuff,
but i thought id let you know about that.

if you think that is a possible issue, im more than happy to plug the
hdmi setup back in and create an issue on gitlab with the logs and
everything

On Tue, May 19, 2020 at 4:42 PM Alex Deucher  wrote:
>
> On Tue, May 19, 2020 at 5:22 PM Javad Karabi  wrote:
> >
> > lol youre quick!
> >
> > "Windows has supported peer to peer DMA for years so it already has a
> > numbers of optimizations that are only now becoming possible on Linux"
> >
> > whoa, i figured linux would be ahead of windows when it comes to
> > things like that. but peer-to-peer dma is something that is only
> > recently possible on linux, but has been possible on windows? what
> > changed recently that allows for peer to peer dma in linux?
> >
>
> A few things that made this more complicated on Linux:
> 1. Linux uses IOMMUs more extensively than windows so you can't just
> pass around physical bus addresses.
> 2. Linux supports lots of strange architectures that have a lot of
> limitations with respect to peer to peer transactions
>
> It just took years to get all the necessary bits in place in Linux and
> make everyone happy.
>
> > also, in the context of a game running opengl on some gpu, is the
> > "peer-to-peer" dma transfer something like: the game draws to some
> > memory it has allocated, then a DMA transfer gets that and moves it
> > into the graphics card output?
>
> Peer to peer DMA just lets devices access another device's local memory
> directly.  So if you have a buffer in vram on one device, you can
> share that directly with another device rather than having to copy it
> to system memory first.  For example, if you have two GPUs, you can
> have one of them copy its contents directly to a buffer in the other
> GPU's vram rather than having to go through system memory first.
>
> >
> > also, i know it can be super annoying trying to debug an issue like
> > this, with someone like me who has all types of differences from a
> > normal setup (e.g. using it via egpu, using a kernel with custom
> > configs and stuff) so as a token of my appreciation i donated 50$ to
> > the red cross' corona virus outbreak charity thing, on behalf of
> > amd-gfx.
>
> Thanks,
>
> Alex
>
> >
> > On Tue, May 19, 2020 at 4:13 PM Alex Deucher  wrote:
> > >
> > > On Tue, May 19, 2020 at 3:44 PM Javad Karabi  
> > > wrote:
> > > >
> > > > just a couple more questions:
> > > >
> > > > - based on what you are aware of, the technical details such as
> > > > "shared buffers go through system memory", and all that, do you see
> > > > any issues that might exist that i might be missing in my setup? i
> > > > cant imagine this be

Re: slow rx 5600 xt fps

2020-05-19 Thread Javad Karabi
lol youre quick!

"Windows has supported peer to peer DMA for years so it already has a
number of optimizations that are only now becoming possible on Linux"

whoa, i figured linux would be ahead of windows when it comes to
things like that. but peer-to-peer dma is something that is only
recently possible on linux, but has been possible on windows? what
changed recently that allows for peer to peer dma in linux?

also, in the context of a game running opengl on some gpu, is the
"peer-to-peer" dma transfer something like: the game draws to some
memory it has allocated, then a DMA transfer gets that and moves it
into the graphics card output?

also, i know it can be super annoying trying to debug an issue like
this, with someone like me who has all types of differences from a
normal setup (e.g. using it via egpu, using a kernel with custom
configs and stuff) so as a token of my appreciation i donated 50$ to
the red cross' corona virus outbreak charity thing, on behalf of
amd-gfx.
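
btw heres the toy picture i have in my head of the difference (purely an
illustration that counts dma hops, not real driver code):

```shell
# staged path: buffer bounces through system ram, so the thunderbolt link
# gets crossed on every hop. p2p path: one direct device-to-device transfer.
copy_via_sysmem() {
  echo "hop 1: gpu A vram -> system ram"
  echo "hop 2: system ram -> gpu B vram"
}
copy_p2p() {
  echo "hop 1: gpu A vram -> gpu B vram (direct peer-to-peer)"
}
hops_staged=$(copy_via_sysmem | wc -l)
hops_p2p=$(copy_p2p | wc -l)
echo "staged: $hops_staged hops, p2p: $hops_p2p hop"
```

if thats roughly right, it would explain why the egpu path hurts so much more
than a normal pcie slot would.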

On Tue, May 19, 2020 at 4:13 PM Alex Deucher  wrote:
>
> On Tue, May 19, 2020 at 3:44 PM Javad Karabi  wrote:
> >
> > just a couple more questions:
> >
> > - based on what you are aware of, the technical details such as
> > "shared buffers go through system memory", and all that, do you see
> > any issues that might exist that i might be missing in my setup? i
> > cant imagine this being the case because the card works great in
> > windows, unless the windows driver does something different?
> >
>
> Windows has supported peer to peer DMA for years so it already has a
> number of optimizations that are only now becoming possible on Linux.
>
> > - as far as kernel config, is there anything in particular which
> > _should_ or _should not_ be enabled/disabled?
>
> You'll need the GPU drivers for your devices and dma-buf support.
>
> >
> > - does the vendor matter? for instance, this is an xfx card. when it
> > comes to different vendors, are there interface changes that might
> > make one vendor work better for linux than another? i dont really
> > understand the differences in vendors, but i imagine that the vbios
> > differs between vendors, and as such, the linux compatibility would
> > maybe change?
>
> board vendor shouldn't matter.
>
> >
> > - is the pcie bandwidth possibly an issue? the pcie_bw file changes
> > between values like this:
> > 18446683600662707640 18446744071581623085 128
> > and sometimes i see this:
> > 4096 0 128
> > as you can see, the second value seems significantly lower. is that
> > possibly an issue? possibly due to aspm?
>
> pcie_bw is not implemented for navi yet so you are just seeing
> uninitialized data.  This patch set should clear that up.
> https://patchwork.freedesktop.org/patch/366262/
>
> Alex
>
> >
> > On Tue, May 19, 2020 at 2:20 PM Javad Karabi  wrote:
> > >
> > > im using Driver "amdgpu" in my xorg conf
> > >
> > > how does one verify which gpu is the primary? im assuming my intel
> > > card is the primary, since i have not done anything to change that.
> > >
> > > also, if all shared buffers have to go through system memory, then
> > > that means an eGPU amdgpu wont work very well in general right?
> > > because going through system memory for the egpu means going over the
> > > thunderbolt connection
> > >
> > > and what are the shared buffers youre referring to? for example, if an
> > > application is drawing to a buffer, is that an example of a shared
> > > buffer that has to go through system memory? if so, thats fine, right?
> > > because the application's memory is in system memory, so that copy
> > > wouldnt be an issue.
> > >
> > > in general, do you think the "copy buffer across system memory" might
> > > be a hindrance for thunderbolt? im trying to figure out which
> > > directions to go to debug and im totally lost, so maybe i can do some
> > > testing that direction?
> > >
> > > and for what its worth, when i turn the display "off" via the gnome
> > > display settings, its the same issue as when the laptop lid is closed,
> > > so unless the motherboard reads the "closed lid" the same as "display
> > > off", then im not sure if its thermal issues.
> > >
> > > On Tue, May 19, 2020 at 2:14 PM Alex Deucher  
> > > wrote:
> > > >
> > > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi  
> > > > wrote:
> > > > >
> > > > > given this setup:
>

Re: slow rx 5600 xt fps

2020-05-19 Thread Javad Karabi
another tidbit:
when in linux, the gpu's fans _never_ come on.

even when i run 4 instances of glmark2, the fans do not come on :/
i see the temp hitting just below 50 deg c, and i saw some value that
says that 50c was the max?
isnt 50c low for a max gpu temp?
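
fwiw heres how im poking at the limit values now (assuming the usual amdgpu
hwmon sysfs layout -- someone correct me if temp1_crit isnt the right file):

```shell
# print the critical temp for every amdgpu hwmon that exists; values are in
# millidegrees. the glob quietly matches nothing if the paths differ.
count=0
for f in /sys/class/drm/card*/device/hwmon/hwmon*/temp1_crit; do
  [ -e "$f" ] || continue
  echo "$f: $(cat "$f") millidegrees"
  count=$((count + 1))
done
echo "found $count temp1_crit entries"
```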


On Tue, May 19, 2020 at 2:44 PM Javad Karabi  wrote:
>
> just a couple more questions:
>
> - based on what you are aware of, the technical details such as
> "shared buffers go through system memory", and all that, do you see
> any issues that might exist that i might be missing in my setup? i
> cant imagine this being the case because the card works great in
> windows, unless the windows driver does something different?
>
> - as far as kernel config, is there anything in particular which
> _should_ or _should not_ be enabled/disabled?
>
> - does the vendor matter? for instance, this is an xfx card. when it
> comes to different vendors, are there interface changes that might
> make one vendor work better for linux than another? i dont really
> understand the differences in vendors, but i imagine that the vbios
> differs between vendors, and as such, the linux compatibility would
> maybe change?
>
> - is the pcie bandwidth possibly an issue? the pcie_bw file changes
> between values like this:
> 18446683600662707640 18446744071581623085 128
> and sometimes i see this:
> 4096 0 128
> as you can see, the second value seems significantly lower. is that
> possibly an issue? possibly due to aspm?
>
> On Tue, May 19, 2020 at 2:20 PM Javad Karabi  wrote:
> >
> > im using Driver "amdgpu" in my xorg conf
> >
> > how does one verify which gpu is the primary? im assuming my intel
> > card is the primary, since i have not done anything to change that.
> >
> > also, if all shared buffers have to go through system memory, then
> > that means an eGPU amdgpu wont work very well in general right?
> > because going through system memory for the egpu means going over the
> > thunderbolt connection
> >
> > and what are the shared buffers youre referring to? for example, if an
> > application is drawing to a buffer, is that an example of a shared
> > buffer that has to go through system memory? if so, thats fine, right?
> > because the application's memory is in system memory, so that copy
> > wouldnt be an issue.
> >
> > in general, do you think the "copy buffer across system memory" might
> > be a hindrance for thunderbolt? im trying to figure out which
> > directions to go to debug and im totally lost, so maybe i can do some
> > testing that direction?
> >
> > and for what its worth, when i turn the display "off" via the gnome
> > display settings, its the same issue as when the laptop lid is closed,
> > so unless the motherboard reads the "closed lid" the same as "display
> > off", then im not sure if its thermal issues.
> >
> > On Tue, May 19, 2020 at 2:14 PM Alex Deucher  wrote:
> > >
> > > On Tue, May 19, 2020 at 2:59 PM Javad Karabi  
> > > wrote:
> > > >
> > > > given this setup:
> > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> 
> > > > monitor
> > > > DRI_PRIME=1 glxgears gives me ~300fps
> > > >
> > > > given this setup:
> > > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > > > laptop -hdmi-> monitor
> > > >
> > > > glxgears gives me ~1800fps
> > > >
> > > > this doesnt make sense to me because i thought that having the monitor
> > > > plugged directly into the card should give best performance.
> > > >
> > >
> > > Do you have displays connected to both GPUs?  If you are using X, which
> > > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> > > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> > > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> > > Note that the GPU which does the rendering is not necessarily the one
> > > that the displays are attached to.  The render GPU renders to it's
> > > render buffer and then that data may end up being copied other GPUs
> > > for display.  Also, at this point, all shared buffers have to go
> > > through system memory (this will be changing eventually now that we
> > > support device memory via dma-buf), so there is often an extra copy
> > > involved.
> > >
> > > > theres another really weird issue...
> > > >
> > > > given 

Re: slow rx 5600 xt fps

2020-05-19 Thread Javad Karabi
just a couple more questions:

- based on what you are aware of, the technical details such as
"shared buffers go through system memory", and all that, do you see
any issues that might exist that i might be missing in my setup? i
cant imagine this being the case because the card works great in
windows, unless the windows driver does something different?

- as far as kernel config, is there anything in particular which
_should_ or _should not_ be enabled/disabled?

- does the vendor matter? for instance, this is an xfx card. when it
comes to different vendors, are there interface changes that might
make one vendor work better for linux than another? i dont really
understand the differences in vendors, but i imagine that the vbios
differs between vendors, and as such, the linux compatibility would
maybe change?

- is the pcie bandwidth possibly an issue? the pcie_bw file changes
between values like this:
18446683600662707640 18446744071581623085 128
and sometimes i see this:
4096 0 128
as you can see, the second value seems significantly lower. is that
possibly an issue? possibly due to aspm?
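
in case its useful to anyone else, heres the crude filter im using on those
readings (the "sent received max_payload" interpretation is just my reading
of the sysfs docs, so double check it):

```shell
# anything up near 2^64 is an uninitialized counter, which is what navi
# returns before the fix; the 2^40 cutoff is arbitrary but generous.
check_bw() {
  echo "$1" | awk '{ v = ($1 < 2^40 && $2 < 2^40) ? "plausible" : "uninitialized junk"; print v }'
}
verdict1=$(check_bw "18446683600662707640 18446744071581623085 128")
verdict2=$(check_bw "4096 0 128")
echo "reading 1: $verdict1"
echo "reading 2: $verdict2"
```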

On Tue, May 19, 2020 at 2:20 PM Javad Karabi  wrote:
>
> im using Driver "amdgpu" in my xorg conf
>
> how does one verify which gpu is the primary? im assuming my intel
> card is the primary, since i have not done anything to change that.
>
> also, if all shared buffers have to go through system memory, then
> that means an eGPU amdgpu wont work very well in general right?
> because going through system memory for the egpu means going over the
> thunderbolt connection
>
> and what are the shared buffers youre referring to? for example, if an
> application is drawing to a buffer, is that an example of a shared
> buffer that has to go through system memory? if so, thats fine, right?
> because the application's memory is in system memory, so that copy
> wouldnt be an issue.
>
> in general, do you think the "copy buffer across system memory" might
> be a hindrance for thunderbolt? im trying to figure out which
> directions to go to debug and im totally lost, so maybe i can do some
> testing that direction?
>
> and for what its worth, when i turn the display "off" via the gnome
> display settings, its the same issue as when the laptop lid is closed,
> so unless the motherboard reads the "closed lid" the same as "display
> off", then im not sure if its thermal issues.
>
> On Tue, May 19, 2020 at 2:14 PM Alex Deucher  wrote:
> >
> > On Tue, May 19, 2020 at 2:59 PM Javad Karabi  wrote:
> > >
> > > given this setup:
> > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> > > DRI_PRIME=1 glxgears gives me ~300fps
> > >
> > > given this setup:
> > > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > > laptop -hdmi-> monitor
> > >
> > > glxgears gives me ~1800fps
> > >
> > > this doesnt make sense to me because i thought that having the monitor
> > > plugged directly into the card should give best performance.
> > >
> >
> > Do you have displays connected to both GPUs?  If you are using X, which
> > ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> > IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> > yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> > Note that the GPU which does the rendering is not necessarily the one
> > that the displays are attached to.  The render GPU renders to its
> > render buffer and then that data may end up being copied to other GPUs
> > for display.  Also, at this point, all shared buffers have to go
> > through system memory (this will be changing eventually now that we
> > support device memory via dma-buf), so there is often an extra copy
> > involved.
> >
> > > theres another really weird issue...
> > >
> > > given setup 1, where the monitor is plugged in to the card:
> > > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > > can "use it" in a sense
> > >
> > > however, heres the weirdness:
> > > the mouse cursor will move along the monitor perfectly smooth and
> > > fine, but all the other updates to the screen are delayed by about 2
> > > or 3 seconds.
> > > that is to say, its as if the laptop is doing everything (e.g. if i
> > > open a terminal, the terminal will open, but it will take 2 seconds
> > > for me to see it)
> > >
> > > its almost as if all the frames and everything are being drawn, and
> > > the laptop is running fine 

Re: slow rx 5600 xt fps

2020-05-19 Thread Javad Karabi
im using Driver "amdgpu" in my xorg conf

how does one verify which gpu is the primary? im assuming my intel
card is the primary, since i have not done anything to change that.
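
answering part of my own question: afaiu `xrandr --listproviders` lists the
gpus, and Provider 0 is the primary. heres the idea on a made-up sample
output, since i cant paste my real one right now:

```shell
# pull the driver name out of the "Provider 0" line. on a real box youd pipe
# the actual `xrandr --listproviders` output instead of this heredoc sample.
primary=$(cat <<'EOF' | sed -n 's/^Provider 0:.*name://p'
Providers: number : 2
Provider 0: id: 0x46 cap: 0xf crtcs: 3 outputs: 1 associated providers: 1 name:modesetting
Provider 1: id: 0x1b8 cap: 0xd crtcs: 6 outputs: 5 associated providers: 1 name:AMD Radeon RX 5600 XT
EOF
)
echo "primary provider: $primary"
```

so in a sample like that, the intel modesetting provider would be the primary.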

also, if all shared buffers have to go through system memory, then
that means an eGPU amdgpu wont work very well in general right?
because going through system memory for the egpu means going over the
thunderbolt connection

and what are the shared buffers youre referring to? for example, if an
application is drawing to a buffer, is that an example of a shared
buffer that has to go through system memory? if so, thats fine, right?
because the application's memory is in system memory, so that copy
wouldnt be an issue.

in general, do you think the "copy buffer across system memory" might
be a hindrance for thunderbolt? im trying to figure out which
directions to go to debug and im totally lost, so maybe i can do some
testing that direction?

and for what its worth, when i turn the display "off" via the gnome
display settings, its the same issue as when the laptop lid is closed,
so unless the motherboard reads the "closed lid" the same as "display
off", then im not sure if its thermal issues.

On Tue, May 19, 2020 at 2:14 PM Alex Deucher  wrote:
>
> On Tue, May 19, 2020 at 2:59 PM Javad Karabi  wrote:
> >
> > given this setup:
> > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
> > DRI_PRIME=1 glxgears gives me ~300fps
> >
> > given this setup:
> > laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
> > laptop -hdmi-> monitor
> >
> > glxgears gives me ~1800fps
> >
> > this doesnt make sense to me because i thought that having the monitor
> > plugged directly into the card should give best performance.
> >
>
> Do you have displays connected to both GPUs?  If you are using X, which
> ddx are you using?  xf86-video-modesetting or xf86-video-amdgpu?
> IIRC, xf86-video-amdgpu has some optimizations for prime which are not
> yet in xf86-video-modesetting.  Which GPU is set up as the primary?
> Note that the GPU which does the rendering is not necessarily the one
> that the displays are attached to.  The render GPU renders to its
> render buffer and then that data may end up being copied to other GPUs
> for display.  Also, at this point, all shared buffers have to go
> through system memory (this will be changing eventually now that we
> support device memory via dma-buf), so there is often an extra copy
> involved.
>
> > theres another really weird issue...
> >
> > given setup 1, where the monitor is plugged in to the card:
> > when i close the laptop lid, my monitor is "active" and whatnot, and i
> > can "use it" in a sense
> >
> > however, heres the weirdness:
> > the mouse cursor will move along the monitor perfectly smooth and
> > fine, but all the other updates to the screen are delayed by about 2
> > or 3 seconds.
> > that is to say, its as if the laptop is doing everything (e.g. if i
> > open a terminal, the terminal will open, but it will take 2 seconds
> > for me to see it)
> >
> > its almost as if all the frames and everything are being drawn, and
> > the laptop is running fine and everything, but i simply just dont get
> > to see it on the monitor, except for one time every 2 seconds.
> >
> > its hard to articulate, because its so bizarre. its not like, a "low
> > fps" per se, because the cursor is totally smooth. but its that
> > _everything else_ is only updated once every couple seconds.
>
> This might also be related to which GPU is the primary.  It still may
> be the integrated GPU since that is what is attached to the laptop
> panel.  Also the platform and some drivers may do certain things when
> the lid is closed.  E.g., for thermal reasons, the integrated GPU or
> CPU may have a more limited TDP because the laptop cannot cool as
> efficiently.
>
> Alex
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


slow rx 5600 xt fps

2020-05-19 Thread Javad Karabi
given this setup:
laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2 -hdmi-> monitor
DRI_PRIME=1 glxgears gives me ~300fps

given this setup:
laptop -thunderbolt-> razer core x -> xfx rx 5600 xt raw 2
laptop -hdmi-> monitor

glxgears gives me ~1800fps

this doesnt make sense to me because i thought that having the monitor
plugged directly into the card should give best performance.

theres another really weird issue...

given setup 1, where the monitor is plugged in to the card:
when i close the laptop lid, my monitor is "active" and whatnot, and i
can "use it" in a sense

however, heres the weirdness:
the mouse cursor will move along the monitor perfectly smooth and
fine, but all the other updates to the screen are delayed by about 2
or 3 seconds.
that is to say, its as if the laptop is doing everything (e.g. if i
open a terminal, the terminal will open, but it will take 2 seconds
for me to see it)

its almost as if all the frames and everything are being drawn, and
the laptop is running fine and everything, but i simply just dont get
to see it on the monitor, except for one time every 2 seconds.

its hard to articulate, because its so bizarre. its not like, a "low
fps" per se, because the cursor is totally smooth. but its that
_everything else_ is only updated once every couple seconds.


Re: regarding vcn

2020-05-19 Thread Javad Karabi
thanks alex. i guess all the little things that i think are the
problem are really red herrings lol. i keep finding little things
that i think might fix the 5600 issues im having but i guess theyre
unrelated. ill make another post which simply defines the issues im
having

On Tue, May 19, 2020 at 1:48 PM Alex Deucher  wrote:
>
> On Tue, May 19, 2020 at 2:45 PM Javad Karabi  wrote:
> >
> > for a rx 5600 xt graphics card, is VCN supposed to be set to disabled?
> >
> > if i understand correctly, 5600 is navi10, which has vcn
> >
> > but i currently see VCN: disabled
> >
> > $ sudo grep VCN /sys/kernel/debug/dri/1/amdgpu_pm_info
> > VCN: Disabled
>
> amdgpu_pm_info shows power information.  When the VCN block is not in
> use, the driver disables it to save power.  If you read back the
> amdgpu_pm_info while VCN is in use, it will show up as enabled.
>
> Alex


regarding vcn

2020-05-19 Thread Javad Karabi
for a rx 5600 xt graphics card, is VCN supposed to be set to disabled?

if i understand correctly, 5600 is navi10, which has vcn

but i currently see VCN: disabled

$ sudo grep VCN /sys/kernel/debug/dri/1/amdgpu_pm_info
VCN: Disabled


Re: XFX RX 5600 XT Raw II graphics card slow

2020-05-17 Thread Javad Karabi
hmm, actually upon digging deeper, it looks like the latest
linux-firmware doesnt have navi10_mes.bin.
if i understand correctly, the rx 5600 xt is navi10, right?
navi10_mes.bin is one of the firmware files that update-initramfs is
saying is missing.

is there any way i can get my hands on navi10_mes.bin?
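
heres the quick check im doing for whats installed (the file list is typed by
hand here -- on a real system you could generate it from
`modinfo -F firmware amdgpu` i think):

```shell
# compare a hand-written list of navi10 firmware files against whats actually
# present under /lib/firmware, and count the gaps.
missing=0
for fw in amdgpu/navi10_smc.bin amdgpu/navi10_mes.bin; do
  if [ -e "/lib/firmware/$fw" ]; then
    echo "present: $fw"
  else
    echo "missing: $fw"
    missing=$((missing + 1))
  fi
done
echo "$missing missing"
```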


On Sun, May 17, 2020 at 4:02 PM Javad Karabi  wrote:
>
> oh, i also flashed the card bios with the bios provided at:
> https://www.xfxforce.com/gpus/xfx-amd-radeon-tm-rx-5600-xt-6gb-gddr6-raw-ii
>
> if you scroll down a bit and click on downloads, they have a link to a 
> "performance bios". i flashed that, and nothing changed. after the flash, the 
> card still worked great in windows, and still terrible in linux. so i guess 
> that flash didnt change anything.
>
> also, fyi i do have the latest linux-firmware installed (since apparently 
> there was some issue with the firmware for the rx 5600 which was solved in 
> the latest firmware i guess)
>
> $ md5sum /lib/firmware/amdgpu/navi10_smc.bin
> 632de739379e484c0233f6808cba2c7f  /lib/firmware/amdgpu/navi10_smc.bin
>
> On Sun, May 17, 2020 at 3:51 PM Javad Karabi  wrote:
>>
>> Heres my setup:
>>
>> kernel: linux-5.6.13
>> card: XFX RX 5600 XT Raw II  
>> (https://www.bestbuy.com/site/xfx-amd-radeon-rx-5600-xt-raw-ii-pro-6gb-gddr6-pci-express-4-0-graphics-card/6398005.p?skuId=6398005)
>>
>> x1 carbon 7th gen -thunderbolt-> Razer Core X -> rx 5600 xt -> hdmi 
>> connection to my monitor (asus mg248)
>>
>> when i boot into windows, the card works totally fine (installed the radeon 
>> drivers and everything)
>>
>> when im in linux, the card works, my monitor works, radeontop shows the gpu 
>> being used when i run DRI_PRIME=1 glxgears, etc etc, so it seems that the 
>> card is being properly utilized by everything.
>>
>> one interesting detail: when i install the kernel, update-initramfs reports 
>> that there is "possibly missing firmware". i dont see any errors in dmesg 
>> about missing firmware so im assuming thats not a problem?
>>
>> problem is, its very low fps. for example, heres my glxinfo/glxgears output:
>>
>> $ DRI_PRIME=0 glxgears
>> 3148 frames in 5.0 seconds = 628.420 FPS
>> 1950 frames in 5.0 seconds = 389.999 FPS
>> ^C
>> $ DRI_PRIME=1 glxgears
>> 755 frames in 5.0 seconds = 150.698 FPS
>> 662 frames in 5.0 seconds = 132.296 FPS
>> ^C
>> $ DRI_PRIME=0 glxinfo | grep vendor
>> server glx vendor string: SGI
>> client glx vendor string: Mesa Project and SGI
>> OpenGL vendor string: Intel
>> $ DRI_PRIME=1 glxinfo | grep vendor
>> server glx vendor string: SGI
>> client glx vendor string: Mesa Project and SGI
>> OpenGL vendor string: X.Org
>>
>> $ dmesg | egrep -i "amdgpu|radeon"
>> [4.798043] amdgpu: unknown parameter 'si_support' ignored
>> [4.802600] amdgpu: unknown parameter 'cik_support' ignored
>> [4.813305] [drm] amdgpu kernel modesetting enabled.
>> [4.813449] amdgpu :0c:00.0: enabling device ( -> 0003)
>> [5.051950] amdgpu :0c:00.0: VRAM: 6128M 0x0080 - 
>> 0x00817EFF (6128M used)
>> [5.051952] amdgpu :0c:00.0: GART: 512M 0x - 
>> 0x1FFF
>> [5.052081] [drm] amdgpu: 6128M of VRAM memory ready
>> [5.052084] [drm] amdgpu: 6128M of GTT memory ready.
>> [6.125885] amdgpu :0c:00.0: RAS: ras ta ucode is not available
>> [6.131800] amdgpu: [powerplay] use vbios provided pptable
>> [6.131973] amdgpu: [powerplay] smu driver if version = 0x0033, smu 
>> fw if version = 0x0035, smu fw version = 0x002a3200 (42.50.0)
>> [6.131979] amdgpu: [powerplay] SMU driver if version not matched
>> [6.176170] amdgpu: [powerplay] SMU is initialized successfully!
>> [6.298473] amdgpu :0c:00.0: fb0: amdgpudrmfb frame buffer device
>> [6.310927] amdgpu :0c:00.0: ring gfx_0.0.0 uses VM inv eng 0 on hub 0
>> [6.311158] amdgpu :0c:00.0: ring comp_1.0.0 uses VM inv eng 1 on hub 0
>> [6.311401] amdgpu :0c:00.0: ring comp_1.1.0 uses VM inv eng 4 on hub 0
>> [6.311648] amdgpu :0c:00.0: ring comp_1.2.0 uses VM inv eng 5 on hub 0
>> [6.311904] amdgpu :0c:00.0: ring comp_1.3.0 uses VM inv eng 6 on hub 0
>> [6.312133] amdgpu :0c:00.0: ring comp_1.0.1 uses VM inv eng 7 on hub 0
>> [6.312376] amdgpu :0c:00.0: ring comp_1.1.1 uses VM inv eng 8 on hub 0
>> [6.312619] amdgpu :0c:00.0: ring comp_1.2.1 uses VM inv eng 9 on hub 0

Re: XFX RX 5600 XT Raw II graphics card slow

2020-05-17 Thread Javad Karabi
oh, i also flashed the card bios with the bios provided at:
https://www.xfxforce.com/gpus/xfx-amd-radeon-tm-rx-5600-xt-6gb-gddr6-raw-ii

if you scroll down a bit and click on downloads, they have a link to a
"performance bios". i flashed that, and nothing changed. after the flash,
the card still worked great in windows, and still terrible in linux. so i
guess that flash didnt change anything.

also, fyi i do have the latest linux-firmware installed (since apparently
there was some issue with the firmware for the rx 5600 which was solved in
the latest firmware i guess)

$ md5sum /lib/firmware/amdgpu/navi10_smc.bin
632de739379e484c0233f6808cba2c7f  /lib/firmware/amdgpu/navi10_smc.bin

On Sun, May 17, 2020 at 3:51 PM Javad Karabi  wrote:

> Heres my setup:
>
> kernel: linux-5.6.13
> card: XFX RX 5600 XT Raw II  (
> https://www.bestbuy.com/site/xfx-amd-radeon-rx-5600-xt-raw-ii-pro-6gb-gddr6-pci-express-4-0-graphics-card/6398005.p?skuId=6398005
> )
>
> x1 carbon 7th gen -thunderbolt-> Razer Core X -> rx 5600 xt -> hdmi
> connection to my monitor (asus mg248)
>
> when i boot into windows, the card works totally fine (installed the
> radeon drivers and everything)
>
> when im in linux, the card works, my monitor works, radeontop shows the
> gpu being used when i run DRI_PRIME=1 glxgears, etc etc, so it seems that
> the card is being properly utilized by everything.
>
> one interesting detail: when i install the kernel, update-initramfs
> reports that there is "possibly missing firmware". i dont see any errors in
> dmesg about missing firmware so im assuming thats not a problem?
>
> problem is, its very low fps. for example, heres my glxinfo/glxgears
> output:
>
> $ DRI_PRIME=0 glxgears
> 3148 frames in 5.0 seconds = 628.420 FPS
> 1950 frames in 5.0 seconds = 389.999 FPS
> ^C
> $ DRI_PRIME=1 glxgears
> 755 frames in 5.0 seconds = 150.698 FPS
> 662 frames in 5.0 seconds = 132.296 FPS
> ^C
> $ DRI_PRIME=0 glxinfo | grep vendor
> server glx vendor string: SGI
> client glx vendor string: Mesa Project and SGI
> OpenGL vendor string: Intel
> $ DRI_PRIME=1 glxinfo | grep vendor
> server glx vendor string: SGI
> client glx vendor string: Mesa Project and SGI
> OpenGL vendor string: X.Org
>
> $ dmesg | egrep -i "amdgpu|radeon"
> [4.798043] amdgpu: unknown parameter 'si_support' ignored
> [4.802600] amdgpu: unknown parameter 'cik_support' ignored
> [4.813305] [drm] amdgpu kernel modesetting enabled.
> [4.813449] amdgpu :0c:00.0: enabling device ( -> 0003)
> [5.051950] amdgpu :0c:00.0: VRAM: 6128M 0x0080 -
> 0x00817EFF (6128M used)
> [5.051952] amdgpu :0c:00.0: GART: 512M 0x -
> 0x1FFF
> [5.052081] [drm] amdgpu: 6128M of VRAM memory ready
> [5.052084] [drm] amdgpu: 6128M of GTT memory ready.
> [6.125885] amdgpu :0c:00.0: RAS: ras ta ucode is not available
> [6.131800] amdgpu: [powerplay] use vbios provided pptable
> [6.131973] amdgpu: [powerplay] smu driver if version = 0x0033, smu
> fw if version = 0x0035, smu fw version = 0x002a3200 (42.50.0)
> [6.131979] amdgpu: [powerplay] SMU driver if version not matched
> [6.176170] amdgpu: [powerplay] SMU is initialized successfully!
> [6.298473] amdgpu :0c:00.0: fb0: amdgpudrmfb frame buffer device
> [6.310927] amdgpu :0c:00.0: ring gfx_0.0.0 uses VM inv eng 0 on
> hub 0
> [6.311158] amdgpu :0c:00.0: ring comp_1.0.0 uses VM inv eng 1 on
> hub 0
> [6.311401] amdgpu :0c:00.0: ring comp_1.1.0 uses VM inv eng 4 on
> hub 0
> [6.311648] amdgpu :0c:00.0: ring comp_1.2.0 uses VM inv eng 5 on
> hub 0
> [6.311904] amdgpu :0c:00.0: ring comp_1.3.0 uses VM inv eng 6 on
> hub 0
> [6.312133] amdgpu :0c:00.0: ring comp_1.0.1 uses VM inv eng 7 on
> hub 0
> [6.312376] amdgpu :0c:00.0: ring comp_1.1.1 uses VM inv eng 8 on
> hub 0
> [6.312619] amdgpu :0c:00.0: ring comp_1.2.1 uses VM inv eng 9 on
> hub 0
> [6.312863] amdgpu :0c:00.0: ring comp_1.3.1 uses VM inv eng 10 on
> hub 0
> [6.313110] amdgpu :0c:00.0: ring kiq_2.1.0 uses VM inv eng 11 on
> hub 0
> [6.313355] amdgpu :0c:00.0: ring sdma0 uses VM inv eng 12 on hub 0
> [6.313585] amdgpu :0c:00.0: ring sdma1 uses VM inv eng 13 on hub 0
> [6.313821] amdgpu :0c:00.0: ring vcn_dec uses VM inv eng 0 on hub 1
> [6.314059] amdgpu :0c:00.0: ring vcn_enc0 uses VM inv eng 1 on hub
> 1
> [6.314298] amdgpu :0c:00.0: ring vcn_enc1 uses VM inv eng 4 on hub
> 1
> [6.314536] amdgpu :0c:00.0: ring jpeg_dec uses VM inv eng 5 on hub
> 1
> [6.316101] [drm] Initialized amdgpu 

XFX RX 5600 XT Raw II graphics card slow

2020-05-17 Thread Javad Karabi
Heres my setup:

kernel: linux-5.6.13
card: XFX RX 5600 XT Raw II  (
https://www.bestbuy.com/site/xfx-amd-radeon-rx-5600-xt-raw-ii-pro-6gb-gddr6-pci-express-4-0-graphics-card/6398005.p?skuId=6398005
)

x1 carbon 7th gen -thunderbolt-> Razer Core X -> rx 5600 xt -> hdmi
connection to my monitor (asus mg248)

when i boot into windows, the card works totally fine (installed the radeon
drivers and everything)

when im in linux, the card works, my monitor works, radeontop shows the gpu
being used when i run DRI_PRIME=1 glxgears, etc etc, so it seems that the
card is being properly utilized by everything.

one interesting detail: when i install the kernel, update-initramfs reports
that there is "possibly missing firmware". i dont see any errors in dmesg
about missing firmware so im assuming thats not a problem?

problem is, its very low fps. for example, heres my glxinfo/glxgears output:

$ DRI_PRIME=0 glxgears
3148 frames in 5.0 seconds = 628.420 FPS
1950 frames in 5.0 seconds = 389.999 FPS
^C
$ DRI_PRIME=1 glxgears
755 frames in 5.0 seconds = 150.698 FPS
662 frames in 5.0 seconds = 132.296 FPS
^C
$ DRI_PRIME=0 glxinfo | grep vendor
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: Intel
$ DRI_PRIME=1 glxinfo | grep vendor
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: X.Org
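
(for what it's worth, those vendor strings look like a normal PRIME offload
setup: intel serving the X screen, and mesa's amdgpu driver reporting "X.Org".
i haven't dug into how the two gpus are registered as providers yet, but
something like this should show it; guarded so it just prints a note when
there's no X session:)

```shell
# List the GPU providers X knows about. With PRIME render offload you
# should see two providers, with the iGPU as provider 0 (the primary).
if command -v xrandr >/dev/null 2>&1 && [ -n "${DISPLAY:-}" ]; then
    out=$(xrandr --listproviders)
else
    out="no X session available; run this from inside X"
fi
echo "$out"
```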

$ dmesg | egrep -i "amdgpu|radeon"
[4.798043] amdgpu: unknown parameter 'si_support' ignored
[4.802600] amdgpu: unknown parameter 'cik_support' ignored
[4.813305] [drm] amdgpu kernel modesetting enabled.
[4.813449] amdgpu 0000:0c:00.0: enabling device (0000 -> 0003)
[5.051950] amdgpu 0000:0c:00.0: VRAM: 6128M 0x0080 - 0x00817EFF (6128M used)
[5.051952] amdgpu 0000:0c:00.0: GART: 512M 0x - 0x1FFF
[5.052081] [drm] amdgpu: 6128M of VRAM memory ready
[5.052084] [drm] amdgpu: 6128M of GTT memory ready.
[6.125885] amdgpu 0000:0c:00.0: RAS: ras ta ucode is not available
[6.131800] amdgpu: [powerplay] use vbios provided pptable
[6.131973] amdgpu: [powerplay] smu driver if version = 0x0033, smu fw if version = 0x0035, smu fw version = 0x002a3200 (42.50.0)
[6.131979] amdgpu: [powerplay] SMU driver if version not matched
[6.176170] amdgpu: [powerplay] SMU is initialized successfully!
[6.298473] amdgpu 0000:0c:00.0: fb0: amdgpudrmfb frame buffer device
[6.310927] amdgpu 0000:0c:00.0: ring gfx_0.0.0 uses VM inv eng 0 on hub 0
[6.311158] amdgpu 0000:0c:00.0: ring comp_1.0.0 uses VM inv eng 1 on hub 0
[6.311401] amdgpu 0000:0c:00.0: ring comp_1.1.0 uses VM inv eng 4 on hub 0
[6.311648] amdgpu 0000:0c:00.0: ring comp_1.2.0 uses VM inv eng 5 on hub 0
[6.311904] amdgpu 0000:0c:00.0: ring comp_1.3.0 uses VM inv eng 6 on hub 0
[6.312133] amdgpu 0000:0c:00.0: ring comp_1.0.1 uses VM inv eng 7 on hub 0
[6.312376] amdgpu 0000:0c:00.0: ring comp_1.1.1 uses VM inv eng 8 on hub 0
[6.312619] amdgpu 0000:0c:00.0: ring comp_1.2.1 uses VM inv eng 9 on hub 0
[6.312863] amdgpu 0000:0c:00.0: ring comp_1.3.1 uses VM inv eng 10 on hub 0
[6.313110] amdgpu 0000:0c:00.0: ring kiq_2.1.0 uses VM inv eng 11 on hub 0
[6.313355] amdgpu 0000:0c:00.0: ring sdma0 uses VM inv eng 12 on hub 0
[6.313585] amdgpu 0000:0c:00.0: ring sdma1 uses VM inv eng 13 on hub 0
[6.313821] amdgpu 0000:0c:00.0: ring vcn_dec uses VM inv eng 0 on hub 1
[6.314059] amdgpu 0000:0c:00.0: ring vcn_enc0 uses VM inv eng 1 on hub 1
[6.314298] amdgpu 0000:0c:00.0: ring vcn_enc1 uses VM inv eng 4 on hub 1
[6.314536] amdgpu 0000:0c:00.0: ring jpeg_dec uses VM inv eng 5 on hub 1
[6.316101] [drm] Initialized amdgpu 3.36.0 20150101 for 0000:0c:00.0 on minor 1
[   10.797203] snd_hda_intel 0000:0c:00.1: bound 0000:0c:00.0 (ops amdgpu_dm_audio_component_bind_ops [amdgpu])

is this perhaps a power management issue?
i can include my kernel config and X logs etc. if y'all need them.
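
on the power management angle, here's roughly what i'd dump from sysfs.
sketch only: i'm assuming the egpu is card1 under /sys/class/drm, which is a
guess; the right cardN is whichever one points at the 0c:00.0 device:

```shell
# Dump amdgpu power/clock state; the '*' in the pp_dpm_* output marks
# the currently selected level. card1 is an assumed index -- check
# which /sys/class/drm/cardN symlink resolves to 0000:0c:00.0.
dev=/sys/class/drm/card1/device
for f in power_dpm_force_performance_level pp_dpm_sclk pp_dpm_mclk pp_dpm_pcie; do
    if [ -r "$dev/$f" ]; then
        printf '== %s ==\n' "$f"
        cat "$dev/$f"
    else
        echo "$f: not readable here (wrong card index, or not amdgpu?)"
    fi
done
```

writing "high" into power_dpm_force_performance_level as root would be a
quick test of whether clocks are the bottleneck, though across thunderbolt
the pcie link itself seems like the more likely limit.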
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx