On Monday, 20 January 2014 at 08:29:32 UTC, Adam Wilson wrote:
> Ok, I see where you are headed now, and it's not an altogether bad idea. It will introduce a layer of abstraction, though, and that layer, however thin, will negatively affect performance.

Actually, a scene graph can be reasonably efficient if you provide caching hints on the nodes, but the underlying engine should be matched to the hardware on mobile devices; OpenGL alone is not sufficient. Keep that in mind if you want to create a shared abstraction layer. (I think you will be better off not using one.)
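To make the trade-off concrete, here is a minimal sketch of what such a shared layer might look like (the names are hypothetical, nothing to do with Aurora's actual API). However thin it is, every draw crosses a virtual dispatch, and the interface tends to sink to the lowest common denominator of the backends:

import std.stdio;

// Hypothetical thin abstraction over per-platform renderers.
interface RenderBackend
{
    // Issue a batch of triangles; interleaved x,y,z positions.
    void drawTriangles(const(float)[] vertices);
    // Flip the back buffer to the screen.
    void present();
}

// One implementation per platform; a mobile tiler would want a
// very different strategy than a discrete desktop GPU.
class OpenGLBackend : RenderBackend
{
    void drawTriangles(const(float)[] vertices)
    {
        writeln("drawing ", vertices.length / 3, " vertices via OpenGL");
    }
    void present() { writeln("swap buffers"); }
}

void main()
{
    RenderBackend r = new OpenGLBackend;
    r.drawTriangles([0f, 0, 0, 1, 0, 0, 0, 1, 0]);
    r.present();
}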

Just to give you an idea of different characteristics (not entirely accurate):

A tiled mobile GPU has:
- slow shaders
- shaders should draw every pixel in a triangle
- slow CPU
- less non-GPU memory (it might share with CPU though)
- possibly fast CPU/GPU connection (if using the same memory)
- slower texturing/memory
- fewer texturing units
- wants few triangles of a particular size, and does all the sorting for you whether you want it or not

A discrete desktop GPU has:
- fast shaders
- shaders can abort drawing some pixels in a triangle
- fast GPU memory
- relatively slow connection to the CPU (PCIe)
- lots of non-GPU memory
- takes any triangle configuration, but you have to do the sorting yourself (see the sketch after these lists)

A CPU-integrated GPU may get:
- much faster draw calls, so you don't have to batch them (e.g. AMD Mantle)
- a shared cache with the CPU (fast CPU/GPU communication?)
- shaders/texturing somewhere between a discrete and a mobile GPU
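
Put together, this means a renderer has to branch on these characteristics. A rough sketch of the sorting point, under assumed capability flags (GpuCaps, Draw, and issue are made-up names for illustration):

import std.algorithm : sort;

// Hypothetical capability flags queried once at startup.
struct GpuCaps
{
    bool tiled;        // tile-based mobile GPU
    bool cheapDiscard; // fragment shaders can abort pixels cheaply
}

// Stand-in for a recorded draw; a real one carries pipeline
// state, a vertex range, and so on.
struct Draw { float depth; }

void issue(in Draw d) { /* record into the command stream */ }

void submitOpaque(in GpuCaps caps, Draw[] draws)
{
    if (!caps.tiled)
    {
        // Discrete desktop GPU: sort front-to-back ourselves so
        // early-Z can reject hidden pixels ("shaders can abort
        // drawing some pixels" above).
        draws.sort!((a, b) => a.depth < b.depth);
    }
    // Tiled GPU: the hardware sorts for us whether we want it
    // or not, so a CPU-side sort is wasted work.
    foreach (d; draws)
        issue(d);
}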

I never claimed that Aurora was going to set any speed records, nor are we trying to. It won't deliver the best gaming performance, but it should allow for reasonable performance in most scenarios.

The Flash scene graph performs quite well, largely because you provide caching hints to the engine (stating whether a drawing is likely to change between frames or not). Cocos2D and most other gaming scene graphs follow the same model, I believe.
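
In that model the hint itself is trivial; the payoff is in the render loop. A sketch of the idea (hypothetical names, not Flash's or Cocos2D's real API): a node flagged as cacheable is rasterized into a texture once and blitted thereafter, until something dirties it.

// Hypothetical scene-graph node with a Flash-style caching hint.
class Texture {}

class Canvas
{
    void blit(Texture t) { /* cheap textured-quad draw */ }
}

class Node
{
    Node[] children;
    bool cacheAsBitmap; // hint: subtree unlikely to change per frame
    bool dirty = true;  // set whenever the subtree does change
    Texture cached;

    void render(Canvas canvas)
    {
        if (cacheAsBitmap)
        {
            if (dirty)
            {
                cached = rasterizeSubtree(); // full redraw, once
                dirty = false;
            }
            canvas.blit(cached); // cheap path on every later frame
            return;
        }
        drawSelf(canvas); // uncached: redrawn every frame
        foreach (c; children)
            c.render(canvas);
    }

    Texture rasterizeSubtree() { return new Texture; }
    void drawSelf(Canvas canvas) { /* node-specific drawing */ }
}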
