Re: [Mesa-dev] Has anyone stressed radeonsi memory?

2017-11-16 Thread Marek Olšák
What is the staging area? Note that radeonsi creates all textures in
VRAM. The driver allocates its own staging copy (in RAM) for each
texture upload and deallocates it after the upload is done. The driver
also doesn't release memory immediately; it keeps it and recycles it
for future allocations, or releases it when it's unused for some time.
This makes staging allocations for texture uploads very cheap. If OGRE
does some of that too, it just adds unnecessary work and memory usage.
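
In other words, the cheap path for an application is to hand the pixels
straight to GL and let the driver do the staging. A minimal sketch of that
(illustrative names, assuming libepoxy for the GL entry points and that the
texture already has storage; this is not Mesa or OGRE code):

    #include <epoxy/gl.h>

    /* Upload texels directly: radeonsi copies them into its own cheap,
     * recycled staging buffer in RAM and schedules the transfer to the
     * VRAM texture, so no app-side staging region is needed. */
    static void upload_texture(GLuint tex, GLsizei w, GLsizei h,
                               const void *pixels)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }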

Marek

On Tue, Nov 14, 2017 at 6:43 PM, Michel Dänzer wrote:
> On 13/11/17 04:39 AM, Matias N. Goldberg wrote:
>>
>> I am on a Radeon RX 560 2GB, using Mesa git-57c8ead0cd (so... not too new,
>> not too old), kernel 4.12.10.
>>
>> I've been getting complaints about out-of-memory crashes in our WIP branch
>> of Ogre 2.2, and I fixed them.
>>
>> I made a stress test that creates 495 textures with very different
>> resolutions (most of them not power-of-two); the total memory for those
>> textures is around 700MB (for some reason radeontop reports all 2GB of my
>> card are used during this stress test).
>> Additionally, 495 cubes (one cube per texture) are rendered to screen to
>> ensure the driver keeps them resident.
>>
>> The problem is, we have different strategies:
>> 1. In one extreme, we can load every texture to a staging region, one at a
>> time, and then copy from the staging region to the final texture.
>> 2. In the other extreme, we load all textures to RAM at once, and use one 
>> giant staging region.
>>
>> Loading everything at once causes a GL_OUT_OF_MEMORY while creating the 
>> staging area of 700MB. Ok... sounds sorta reasonable.
>>
>> But things get interesting when loading using a staging area of 512MB:
>> 1. Loading goes fine.
>> 2. For a time, everything works fine.
>> 3. If I hide all cubes so that they aren't shown anymore:
>>    1. Framerate usually goes way down (not always), like 8 fps or so (it
>>       should be around 1000 fps while empty and around 200 fps while
>>       showing the cubes). How slow it becomes is not consistent.
>>    2. radeontop shows consumption goes down a lot (like half or more).
>>    3. A few seconds later, I almost always get a crash (SIGBUS) while
>>       writing to a UBO that had been persistently mapped (non-coherent)
>>       since the beginning of the application.
>>    4. Running through valgrind, I don't get a crash.
>>    5. There are no errors reported by OpenGL.
>> 4. I don't get a crash if I never hide the cubes.
>>
>> Using a smaller staging area (256MB or lower), everything is always fine.
>>
>> So... is this behavior expected?
>> Am I uncovering a weird bug in how radeonsi/amdgpu-pro handle memory pages?
>
> Are you using the amdgpu kernel driver from an amdgpu-pro release or
> from the upstream Linux kernel? (If you're not sure, provide the dmesg
> output and Xorg log file)
>
> If the latter, can you try a 4.13 or 4.14 kernel and see if that works
> better?
>
>
>> I'd normally update to the latest git, then create a test if the problem
>> persists; but I pulled the latest git and saw that it required me to
>> recompile LLVM as well...
>
> Why, doesn't your distro have LLVM development packages?
>
>
> --
> Earthling Michel Dänzer   |   http://www.amd.com
> Libre software enthusiast | Mesa and X developer


Re: [Mesa-dev] Has anyone stressed radeonsi memory?

2017-11-16 Thread Eric Engestrom
On Wednesday, 2017-11-15 17:39:26 +0100, Kai Wasserbäch wrote:
> Hey Matias,
> Matias N. Goldberg wrote on 15.11.2017 16:51:
> >> Why, doesn't your distro have LLVM development packages?
> > They aren't as up to date. Keeping up to date with everything Mesa needs
> > is exhausting.
> > I started compiling LLVM from source when I needed to test an LLVM patch
> > to fix a GLSL shader compiler bug.
> > 
> > I also compile Mesa from source (rather than using the Oibaf or Padoka
> > PPAs for Ubuntu) because, as a graphics dev, being able to debug inside
> > Mesa has proven to be an invaluable tool.
> 
> you seem to be using Ubuntu or another Debian derivative. In that case you
> can get development snapshot packages from Debian's LLVM maintainer at
> <https://apt.llvm.org/>.
> 
> Cheers,
> Kai
> 
> P.S. @Michel: Maybe it would be helpful to add that URL to the Mesa
> documentation somewhere?

The "requirements" section of docs/llvmpipe.html looks like a good place
for it; care to send a patch? :)


Re: [Mesa-dev] Has anyone stressed radeonsi memory?

2017-11-15 Thread Kai Wasserbäch
Hey Matias,
Matias N. Goldberg wrote on 15.11.2017 16:51:
>> Why, doesn't your distro have LLVM development packages?
> They aren't as up to date. Keeping up to date with everything Mesa needs is
> exhausting.
> I started compiling LLVM from source when I needed to test an LLVM patch to
> fix a GLSL shader compiler bug.
> 
> I also compile Mesa from source (rather than using the Oibaf or Padoka PPAs
> for Ubuntu) because, as a graphics dev, being able to debug inside Mesa has
> proven to be an invaluable tool.

you seem to be using Ubuntu or another Debian derivative. In that case you can
get development snapshot packages from Debian's LLVM maintainer at
<https://apt.llvm.org/>.
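
For Ubuntu that amounts to an apt source along these lines (a sketch assuming
Ubuntu 16.04 "xenial"; adjust the suite name to your release):

    deb https://apt.llvm.org/xenial/ llvm-toolchain-xenial main
    deb-src https://apt.llvm.org/xenial/ llvm-toolchain-xenial main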

Cheers,
Kai

P.S. @Michel: Maybe it would be helpful to add that URL to the Mesa
documentation somewhere?





Re: [Mesa-dev] Has anyone stressed radeonsi memory?

2017-11-15 Thread Matias N. Goldberg
Hi!

> Are you using the amdgpu kernel driver from an amdgpu-pro release or
> from the upstream Linux kernel? (If you're not sure, provide the dmesg
> output and Xorg log file)

> If the latter, can you try a 4.13 or 4.14 kernel and see if that works
> better?


I'm using the upstream Linux kernel (without my distro's patches), with amdgpu
(not -pro).

I could try newer kernels.

> Why, doesn't your distro have LLVM development packages?
They aren't as up to date. Keeping up to date with everything Mesa needs is
exhausting.
I started compiling LLVM from source when I needed to test an LLVM patch to
fix a GLSL shader compiler bug.

I also compile Mesa from source (rather than using the Oibaf or Padoka PPAs
for Ubuntu) because, as a graphics dev, being able to debug inside Mesa has
proven to be an invaluable tool.


So I take it from your questions that the behavior I saw isn't exactly
"normal" or "expected", and that I should dedicate some time to seeing if I
can reproduce it on newer versions and, if possible, to creating a small repro.

Thanks,

Cheers
Matias



Re: [Mesa-dev] Has anyone stressed radeonsi memory?

2017-11-14 Thread Michel Dänzer
On 13/11/17 04:39 AM, Matias N. Goldberg wrote:
> 
> I am on a Radeon RX 560 2GB, using Mesa git-57c8ead0cd (so... not too new,
> not too old), kernel 4.12.10.
> 
> I've been getting complaints about out-of-memory crashes in our WIP branch
> of Ogre 2.2, and I fixed them.
> 
> I made a stress test that creates 495 textures with very different
> resolutions (most of them not power-of-two); the total memory for those
> textures is around 700MB (for some reason radeontop reports all 2GB of my
> card are used during this stress test).
> Additionally, 495 cubes (one cube per texture) are rendered to screen to
> ensure the driver keeps them resident.
> 
> The problem is, we have different strategies:
> 1. In one extreme, we can load every texture to a staging region, one at a
> time, and then copy from the staging region to the final texture.
> 2. In the other extreme, we load all textures to RAM at once, and use one 
> giant staging region.
> 
> Loading everything at once causes a GL_OUT_OF_MEMORY while creating the 
> staging area of 700MB. Ok... sounds sorta reasonable.
> 
> But things get interesting when loading using a staging area of 512MB:
> 1. Loading goes fine.
> 2. For a time, everything works fine.
> 3. If I hide all cubes so that they aren't shown anymore:
>    1. Framerate usually goes way down (not always), like 8 fps or so (it
>       should be around 1000 fps while empty and around 200 fps while
>       showing the cubes). How slow it becomes is not consistent.
>    2. radeontop shows consumption goes down a lot (like half or more).
>    3. A few seconds later, I almost always get a crash (SIGBUS) while
>       writing to a UBO that had been persistently mapped (non-coherent)
>       since the beginning of the application.
>    4. Running through valgrind, I don't get a crash.
>    5. There are no errors reported by OpenGL.
> 4. I don't get a crash if I never hide the cubes.
> 
> Using a smaller staging area (256MB or lower), everything is always fine.
> 
> So... is this behavior expected?
> Am I uncovering a weird bug in how radeonsi/amdgpu-pro handle memory pages?

Are you using the amdgpu kernel driver from an amdgpu-pro release or
from the upstream Linux kernel? (If you're not sure, provide the dmesg
output and Xorg log file)

If the latter, can you try a 4.13 or 4.14 kernel and see if that works
better?


> I'd normally update to the latest git, then create a test if the problem
> persists; but I pulled the latest git and saw that it required me to
> recompile LLVM as well...

Why, doesn't your distro have LLVM development packages?


-- 
Earthling Michel Dänzer   |   http://www.amd.com
Libre software enthusiast | Mesa and X developer


[Mesa-dev] Has anyone stressed radeonsi memory?

2017-11-12 Thread Matias N. Goldberg
Hi!

I am on a Radeon RX 560 2GB, using Mesa git-57c8ead0cd (so... not too new,
not too old), kernel 4.12.10.

I've been getting complaints about out-of-memory crashes in our WIP branch of
Ogre 2.2, and I fixed them.

I made a stress test that creates 495 textures with very different resolutions
(most of them not power-of-two); the total memory for those textures is around
700MB (for some reason radeontop reports all 2GB of my card are used during
this stress test).
Additionally, 495 cubes (one cube per texture) are rendered to screen to
ensure the driver keeps them resident.

The problem is, we have different strategies:
1. In one extreme, we can load every texture to a staging region, one at a
time, and then copy from the staging region to the final texture (roughly the
pattern in the sketch below).
2. In the other extreme, we load all textures to RAM at once, and use one
giant staging region.
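
A simplified sketch of strategy 1 (illustrative names, assuming libepoxy for
the GL entry points; this is not the actual Ogre code): one bounded staging
PBO, reused for every upload.

    #include <epoxy/gl.h>
    #include <string.h>

    static GLuint make_staging_pbo(GLsizeiptr size)
    {
        GLuint pbo;
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW);
        return pbo;
    }

    /* Assumes the staging PBO is still bound to GL_PIXEL_UNPACK_BUFFER
     * and that 'bytes' fits inside it. */
    static void upload_via_staging(GLuint tex, GLsizei w, GLsizei h,
                                   const void *pixels, GLsizeiptr bytes)
    {
        /* Fill the staging region with the texels... */
        void *dst = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, bytes,
                                     GL_MAP_WRITE_BIT |
                                     GL_MAP_INVALIDATE_BUFFER_BIT);
        memcpy(dst, pixels, bytes);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

        /* ...then copy from the staging region to the final texture.
         * With a PBO bound, the last argument is an offset into the
         * buffer rather than a client pointer. */
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);
    }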

Loading everything at once causes a GL_OUT_OF_MEMORY while creating the staging 
area of 700MB. Ok... sounds sorta reasonable.

But things get interesting when loading using a staging area of 512MB:
1. Loading goes fine.
2. For a time, everything works fine.
3. If I hide all cubes so that they aren't shown anymore:
   1. Framerate usually goes way down (not always), like 8 fps or so (it
      should be around 1000 fps while empty and around 200 fps while showing
      the cubes). How slow it becomes is not consistent.
   2. radeontop shows consumption goes down a lot (like half or more).
   3. A few seconds later, I almost always get a crash (SIGBUS) while writing
      to a UBO that had been persistently mapped (non-coherent) since the
      beginning of the application (see the sketch below).
   4. Running through valgrind, I don't get a crash.
   5. There are no errors reported by OpenGL.
4. I don't get a crash if I never hide the cubes.
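
For reference, the UBO that crashes is set up roughly like this (a minimal
sketch with illustrative sizes and names, assuming libepoxy; not the actual
Ogre code):

    #include <epoxy/gl.h>
    #include <string.h>

    static GLuint ubo;
    static void *ubo_ptr;   /* stays mapped for the app's lifetime */

    static void create_persistent_ubo(GLsizeiptr size)
    {
        glGenBuffers(1, &ubo);
        glBindBuffer(GL_UNIFORM_BUFFER, ubo);
        /* Immutable storage that allows a persistent write mapping. */
        glBufferStorage(GL_UNIFORM_BUFFER, size, NULL,
                        GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT);
        /* Non-coherent persistent map: writes must be flushed. */
        ubo_ptr = glMapBufferRange(GL_UNIFORM_BUFFER, 0, size,
                                   GL_MAP_WRITE_BIT |
                                   GL_MAP_PERSISTENT_BIT |
                                   GL_MAP_FLUSH_EXPLICIT_BIT);
    }

    static void write_ubo(GLintptr offset, const void *data,
                          GLsizeiptr bytes)
    {
        /* The SIGBUS fires on a plain CPU write like this memcpy. */
        memcpy((char *)ubo_ptr + offset, data, bytes);
        glBindBuffer(GL_UNIFORM_BUFFER, ubo);
        glFlushMappedBufferRange(GL_UNIFORM_BUFFER, offset, bytes);
    }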

Using a smaller staging area (256MB or lower), everything is always fine.

So... is this behavior expected?
Am I uncovering a weird bug in how radeonsi/amdgpu-pro handle memory pages?

I'd normally update to the latest git, then create a test if the problem
persists; but I pulled the latest git and saw that it required me to recompile
LLVM as well... so this is why I'm asking first, before losing any more time
on this.

From my perspective, if a limit of 256MB works, then I'm happy.
If you tell me this isn't normal, then I'll try to find some time to update
Mesa and try again; and if the problem persists, create a small test.

Cheers
Matias