On Tuesday, 7 January 2014 at 20:06:47 UTC, Adam Wilson wrote:
> Right, but Mike Parker has experience doing this, his opinion

(I don't know Mike, but it doesn't matter; I never care about technical opinions anyway. I care about arguments, so name dropping has zero effect on me. Even Carmack has opinions that are wrong.)

> counts for quite a bit. His biggest point however is that the high-level API should be completely independent of the low-level API's.

That's not possible. The GPU pipeline defines a design space. For 2D graphics that space consists of texture atlases, shaders, and techniques for obtaining "context coherency" and reducing the cost of overdraw. If you stay in that design space and do it well, you get great speed and can afford less efficient higher-level structures, which makes for a framework that is easier to use. The more low-level headroom you have, the more high-level freedom you get.
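
To make that concrete, here is a rough sketch in D of the batching side of that design space. The names bindAtlas, pushQuad and flushBatch are hypothetical stand-ins for the actual GPU calls, not any existing API; the point is only that sorting sprites by atlas page collapses a frame into a handful of draw calls:

struct Sprite { uint atlasPage; float x, y, w, h; float u0, v0, u1, v1; }

void bindAtlas(uint page) { /* bind the texture for this atlas page (GPU call elided) */ }
void pushQuad(Sprite s)   { /* append two triangles to a CPU-side vertex buffer */ }
void flushBatch()         { /* upload the buffer and issue one draw call (no-op if empty) */ }

void drawFrame(Sprite[] sprites)
{
    import std.algorithm : sort;
    // Sort by atlas page so texture binds (expensive state changes) are minimized.
    sprites.sort!((a, b) => a.atlasPage < b.atlasPage);

    uint currentPage = uint.max;
    foreach (s; sprites)
    {
        if (s.atlasPage != currentPage)
        {
            flushBatch();            // close the previous batch
            bindAtlas(s.atlasPage);  // one texture bind per atlas page
            currentPage = s.atlasPage;
        }
        pushQuad(s);                 // accumulate geometry instead of drawing immediately
    }
    flushBatch();
}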

The more speed you waste at the lower levels, the more constrained and annoying the high-level API becomes to use, because you have to take care to avoid low-level bottlenecks.

Which is a good argument for retained mode in high-level frameworks, at the cost of latency.
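
To illustrate what I mean by retained mode, a minimal sketch (uploadVertices and drawCached are made-up names; the GPU side is elided): the framework keeps a persistent description of the scene, re-uploads only what changed, and otherwise re-submits cached buffers, so you pay latency when something mutates instead of rebuilding everything every frame:

struct RetainedShape
{
    float[] vertices;   // CPU-side description, set through the high-level API
    bool dirty = true;  // set whenever the user mutates the shape
    uint vbo;           // handle of the cached GPU buffer (management elided)
}

void uploadVertices(ref RetainedShape s)   { /* glBufferData-style upload, elided */ }
void drawCached(const ref RetainedShape s) { /* one draw call from the cached buffer */ }

void render(RetainedShape[] scene)
{
    foreach (ref shape; scene)
    {
        if (shape.dirty)
        {
            uploadVertices(shape);  // re-upload only what changed
            shape.dirty = false;
        }
        drawCached(shape);          // everything else reuses last frame's buffers
    }
}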

> The high-level API describes what the user wants and it's up to the graphics API implementor to get it right.

That is the scene-graph approach: Cocos2D, HTML, SVG, VRML, Open Inventor, etc.
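
Roughly, the user builds a tree of nodes with local transforms and the implementation walks it; a minimal, purely illustrative sketch (not modeled on any of those libraries):

class Node
{
    float offsetX = 0, offsetY = 0;  // position relative to the parent node
    Node[] children;

    void draw(float parentX, float parentY)
    {
        immutable x = parentX + offsetX;
        immutable y = parentY + offsetY;
        drawSelf(x, y);              // leaf types override this (sprite, text, path, ...)
        foreach (c; children)
            c.draw(x, y);
    }

    void drawSelf(float x, float y) { }  // the base node draws nothing itself
}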

> to the trouble of making OpenGL render 2D shapes in 3D space, I've done that before, it's not easy. One of the more difficult problems is converting 2D pixels into the Cartesian coordinates system while accounting for DPI. It's doable, but it's more a

Well, I'm not sure why DPI is a problem, but managing dynamic atlases (organizing multiple images on a single texture) in an optimal and transparent manner does require infrastructure. Sure.
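
As a rough idea of what that bookkeeping amounts to, here is a naive shelf packer (illustrative only; a real dynamic atlas also handles padding, eviction and multiple texture pages):

struct AtlasRegion { int x, y, w, h; }

struct ShelfAtlas
{
    int width, height;  // size of the backing texture
    int shelfY = 0;     // top of the current row
    int shelfX = 0;     // next free x in the current row
    int shelfH = 0;     // height of the tallest image placed in the current row

    // Returns where to copy an incoming image on the texture.
    AtlasRegion insert(int w, int h)
    {
        if (w > width || h > height)
            throw new Exception("image larger than the atlas texture");
        if (shelfX + w > width)  // current row is full: open a new shelf below it
        {
            shelfY += shelfH;
            shelfX = 0;
            shelfH = 0;
        }
        if (shelfY + h > height)
            throw new Exception("atlas full; a real allocator would grow or evict");
        auto region = AtlasRegion(shelfX, shelfY, w, h);
        shelfX += w;
        if (h > shelfH) shelfH = h;
        return region;
    }
}

Usage would be along the lines of: auto atlas = ShelfAtlas(2048, 2048); auto slot = atlas.insert(64, 64); then copy the image into that region and use its coordinates as UVs.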
