Re: [whatwg] Challenging canvas.supportsContext

2013-09-05 Thread Benoit Jacob
(I want to be clear that the long delay hinders my ability to continue this
conversation. I'm just one regular Mozilla developer --- I'm not supposed
to be spending a lot of time discussing the canvas standard, and right now,
I can't really afford to. Back when I started this thread in June, I did
have some time to invest in a long conversation on this. Right now I don't.)

Some partial inline responses below.

2013/9/3 Ian Hickson i...@hixie.ch


 The long and short of this is that I renamed supportsContext() to
 probablySupportsContext(). It's already implemented in WebKit


And that's the real cost of having accepted supportsContext too early in
the HTML spec.


 Fundamentally, it addresses a need that none of the other proposals
 addressed: how to know whether or not you can expect to be able to do 3D.
 It's not 100% reliable, but then neither would actually attempting to
 create a context, because creating a context is so expensive on some
 platforms that some UAs are going to move to doing it lazily


The only conformant way to do lazy context creation would be to have
getContext return lost contexts, but given that only a tiny minority of
real-world code cares about that concept, that's not going to be feasible
in the foreseeable future. Maybe in a few years, optimistically.




 On Wed, 19 Jun 2013, Benoit Jacob wrote:
 
  I'd like to question the usefulness of canvas.supportsContext. I tried to
  think of an actual application use case for it, and couldn't find one.

 The use case is libraries like Modernizr that want to do feature detection
 up-front, but don't want a high performance hit on startup.


  However, that only shifts the question to: what is the reason for them
  to expose such APIs? In the end, I claim that the only thing that we
  should recognize as a reason to add a feature to the HTML spec, is
  *application* use cases.

 Oh well the use case for knowing whether or not 3D is supported on a
 particular device is straight-forward: you want to know which set of
 assets and logic to download and run.


Application developers want things. But what they want is not necessarily a
good idea, because it may not reflect how things really work. More below.




  So let's look at the naive application usage pattern for supportsContext:
 
    if (canvas.supportsContext('webgl')) {
      context = canvas.getContext('webgl');
    }
 
  The problem is that the same can be achieved with just the getContext
  call, and checking whether it succeeded.

 Suppose you have an app that has a 3D feature, but it's not immediately
 used upon startup. For example, a preview window that is displayed on
 request. You want to preload all the code to run the preview window, but
 you need to load different code based on whether the device can do 3D or
 not. So the use case is more:

if (canvas.supportsContext('webgl'))
  load3DCode();
else
  load2DCode();

// 3D code:
function run() {
  context = canvas.getContext('webgl');
  // ...
}


If application developers call probablySupportsContext and it returns
true, they start downloading the WebGL assets, but then getContext('webgl')
fails, their application's startup experience will be worse. That will
pressure browser developers to optimize the accuracy of
probablySupportsContext, which is going to be hard and unrewarding.

So my best hope is that application developers don't use
probablySupportsContext.

Instead, they should do their actual getContext call --- the one creating
the context that they will actually want to use --- right at the beginning
of their application startup, and download the right assets based on the
outcome of that getContext.

The downside of course is that asset downloads become gated on getContext
returning. But in practice that's not too bad:
 - Only the first getContext in a browser session can be really slow (say
100 ms); subsequent ones tend to take less than 5 ms --- not that much
compared to the time to download big assets.
 - If any assets are shared between the two code paths, they can be
downloaded first while getContext is running.
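
The recommendation above can be sketched as follows. chooseAssets and the
mode names are hypothetical, and the getContext-like function is passed in
rather than taken from a real canvas, so the decision logic stands on its
own:

```javascript
// Sketch of the recommended startup pattern: make the one real
// getContext call first, keep the resulting context for actual use,
// and gate asset selection on the outcome.
function chooseAssets(getContext) {
  var gl = getContext('webgl'); // the context we will actually render with
  if (gl) {
    return { mode: '3d', context: gl };
  }
  // WebGL unavailable: download the 2D assets instead.
  return { mode: '2d', context: getContext('2d') };
}
```

The context returned here is the one the application keeps; no throw-away
context is ever created.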



 You don't want to pay the cost of creating a throw-away 3D context on
 startup just to know which scripts to load. It defeats the whole point of
 not loading all the code up-front.


I'm not talking about having any throw-away 3D contexts just for testing. I
understand that Modernizr has an API that would force it to be implemented
in that way. In my view, that makes it a bad API. There should be only one
context, the one that we actually want to use.




  Outside of exceptional cases (out of memory...), the slow path in
  getContext is the *success* case, and again, in that case a real
  application would want to actually *use* that context.

 Not necessarily, as noted above. The canvas you're going to draw to
 might not even exist at the point you need to know if it's 3D or not.


Precisely, that's my point: don't cater to the pathological
use-a-separate-throwaway-context use case.

Re: [whatwg] Challenging canvas.supportsContext

2013-08-02 Thread Benoit Jacob
2013/7/31 Ian Hickson i...@hixie.ch

 On Wed, 31 Jul 2013, Benoit Jacob wrote:
 
  Ping --- I thought that there was sufficient agreement in this thread,
  around the fact that supportsContext, as currently spec'd and currently
  implementable, is a feature without a valid use case, that removing it
  from the spec is the right thing to do at this point.

 It's on my list of e-mail to get to. Sorry about the delay. I'm currently
 about six months behind on feedback that isn't blocked on other things.

 In this particular case, I should note that we have at least one
 implementation, and we have no alternative solution to the problem for
 which the feature was added. So it's not as clear cut as it might seem.


As discussed in this thread, the alternative solution is to call getContext
and check if it succeeded; I was arguing that if one wants something
reliable and tightly spec'd, there is no alternative to doing the same
amount of work; and I was also arguing against the notion that something
more loosely spec'd (as supportsContext currently is) was useful to have.

Benoit





 (Note that decisions for WHATWG specs aren't made based on consensus. Even
 if everyone agrees on something, if there's one more compelling argument
 to the contrary, that's the one that's going to win.)

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: [whatwg] Challenging canvas.supportsContext

2013-07-31 Thread Benoit Jacob
Ping --- I thought that there was sufficient agreement in this thread,
around the fact that supportsContext, as currently spec'd and currently
implementable, is a feature without a valid use case, that removing it from
the spec is the right thing to do at this point.

Benoit


2013/7/18 Benoit Jacob jacob.benoi...@gmail.com

 The thread seems to have settled down.

 I still believe that supportsContext, in its current form, should be
 removed from the HTML spec, because as currently spec'd it could be
 implemented as just returning whether WebGLRenderingContext is defined. I
 also still believe that it will be exceedingly hard to spec supportsContext
 in a way that makes it useful as opposed to just calling getContext.

 Emails in this thread have conveyed the idea that it is already useful to
 return whether WebGL is at least not blacklisted, regardless of whether
 actual context creation would succeed. That, however, is impossible to
 specify, and depends too much on details of how some current browsers and
 platforms work:
  - driver blacklists will hopefully be a thing of the past, eventually.
  - on low-end mobile devices, the main cause of WebGL context creation
 failure is not blacklists, but plain OpenGL context creation failures, or
 non-conformant OpenGL behavior, or OOM'ing right after context creation.
 For these reasons, justifying supportsContext by driver blacklisting seems
 like encoding short-term contingencies into the HTML spec, which we
 shouldn't do. Besides, even if we wanted to do that, there would remain the
 problem that that's impossible to spec in a precise and testable way.

 For these reasons, I still think that supportsContext should be removed
 from the spec.

 Benoit



 2013/6/19 Benoit Jacob jacob.benoi...@gmail.com

 Dear list,

 I'd like to question the usefulness of canvas.supportsContext. I tried to
 think of an actual application use case for it, and couldn't find one. It
 also doesn't seem like any valid application use case was given on this
 list when this topic was discussed around September 2012.

 The closest thing that I could find being discussed, was use cases by JS
 frameworks or libraries that already expose similar feature-detection APIs.
 However, that only shifts the question to: what is the reason for them to
 expose such APIs? In the end, I claim that the only thing that we should
 recognize as a reason to add a feature to the HTML spec, is *application*
 use cases.

 So let's look at the naive application usage pattern for supportsContext:

   if (canvas.supportsContext('webgl')) {
     context = canvas.getContext('webgl');
   }

 The problem is that the same can be achieved with just the getContext
 call, and checking whether it succeeded.

 In other words, I'm saying that no matter what JS libraries/frameworks
 may offer for feature detection, in the end, applications don't want to
 just *detect* features --- applications want to *use* features. So
 they'll just pair supportsContext calls with getContext calls, making the
 supportsContext calls useless.

 There is also the argument that supportsContext can be much cheaper than
 a getContext, given that it only has to guarantee that getContext must fail
 if supportsContext returned false. But this argument is overlooking that in
 the typical failure case, which is failure due to system/driver
 blacklisting, getContext returns just as fast as supportsContext --- as
 they both just check the blacklist and return. Outside of exceptional cases
 (out of memory...), the slow path in getContext is the *success* case,
 and again, in that case a real application would want to actually *use*
 that context.

 Keep in mind that supportsContext can't guarantee that if it returns
 true, then a subsequent getContext will succeed. The spec doesn't require
 it to, either. So if the existence of supportsContext misleads application
 developers into no longer checking for getContext failures, then we'll just
 have rendered canvas-using applications a little bit more fragile. Another
 problem with supportsContext is that it's untestable, at least when it
 returns true; it is spec-compliant to just implement it as returning
 whether the JS interface for the required canvas context exists, which is
 quite useless. Given such deep problems, I think that the usefulness bar
 for accepting supportsContext into the spec should be quite high.

 So, is there an application use case that actually benefits from
 supportsContext?

 Cheers,
 Benoit





Re: [whatwg] Challenging canvas.supportsContext

2013-07-18 Thread Benoit Jacob
The thread seems to have settled down.

I still believe that supportsContext, in its current form, should be
removed from the HTML spec, because as currently spec'd it could be
implemented as just returning whether WebGLRenderingContext is defined. I
also still believe that it will be exceedingly hard to spec supportsContext
in a way that makes it useful as opposed to just calling getContext.

Emails in this thread have conveyed the idea that it is already useful to
return whether WebGL is at least not blacklisted, regardless of whether
actual context creation would succeed. That, however, is impossible to
specify, and depends too much on details of how some current browsers and
platforms work:
 - driver blacklists will hopefully be a thing of the past, eventually.
 - on low-end mobile devices, the main cause of WebGL context creation
failure is not blacklists, but plain OpenGL context creation failures, or
non-conformant OpenGL behavior, or OOM'ing right after context creation.
For these reasons, justifying supportsContext by driver blacklisting seems
like encoding short-term contingencies into the HTML spec, which we
shouldn't do. Besides, even if we wanted to do that, there would remain the
problem that that's impossible to spec in a precise and testable way.

For these reasons, I still think that supportsContext should be removed
from the spec.

Benoit



2013/6/19 Benoit Jacob jacob.benoi...@gmail.com

 Dear list,

 I'd like to question the usefulness of canvas.supportsContext. I tried to
 think of an actual application use case for it, and couldn't find one. It
 also doesn't seem like any valid application use case was given on this
 list when this topic was discussed around September 2012.

 The closest thing that I could find being discussed, was use cases by JS
 frameworks or libraries that already expose similar feature-detection APIs.
 However, that only shifts the question to: what is the reason for them to
 expose such APIs? In the end, I claim that the only thing that we should
 recognize as a reason to add a feature to the HTML spec, is *application*
 use cases.

 So let's look at the naive application usage pattern for supportsContext:

   if (canvas.supportsContext('webgl')) {
     context = canvas.getContext('webgl');
   }

 The problem is that the same can be achieved with just the getContext
 call, and checking whether it succeeded.

 In other words, I'm saying that no matter what JS libraries/frameworks may
 offer for feature detection, in the end, applications don't want to just
 *detect* features --- applications want to *use* features. So they'll just
 pair supportsContext calls with getContext calls, making the
 supportsContext calls useless.

 There is also the argument that supportsContext can be much cheaper than a
 getContext, given that it only has to guarantee that getContext must fail
 if supportsContext returned false. But this argument is overlooking that in
 the typical failure case, which is failure due to system/driver
 blacklisting, getContext returns just as fast as supportsContext --- as
 they both just check the blacklist and return. Outside of exceptional cases
 (out of memory...), the slow path in getContext is the *success* case,
 and again, in that case a real application would want to actually *use*
 that context.

 Keep in mind that supportsContext can't guarantee that if it returns true,
 then a subsequent getContext will succeed. The spec doesn't require it to,
 either. So if the existence of supportsContext misleads application
 developers into no longer checking for getContext failures, then we'll just
 have rendered canvas-using applications a little bit more fragile. Another
 problem with supportsContext is that it's untestable, at least when it
 returns true; it is spec-compliant to just implement it as returning
 whether the JS interface for the required canvas context exists, which is
 quite useless. Given such deep problems, I think that the usefulness bar
 for accepting supportsContext into the spec should be quite high.

 So, is there an application use case that actually benefits from
 supportsContext?

 Cheers,
 Benoit




Re: [whatwg] Adding 2D Canvas features (Was: Grouping in canvas 2d)

2013-06-28 Thread Benoit Jacob
Has the Canvas 2D community considered a WebGL-like model to the
introduction of additional features?

By the WebGL model, I mean:
- Define a core feature set that's conservative and limited, but
consistently implemented by all browsers. Enforce consistent support for it
by an exhaustive conformance test suite.
- Let any additional feature be an extension. Using the feature requires
first explicitly calling context.getExtension('featurename'). This helps
ensure that Web pages don't accidentally rely on that feature, and
instead, make a conscious portability-vs-features trade-off.
- Consider, similar to WebGL, further tightening this process by letting
any new feature start at draft status and wait a bit before being
approved. Typically, the difference that this makes is that draft
extensions would be exposed by browsers only behind a flag (a browser
option that's not enabled by default). Prerequisites for approval would
include: having been implemented and stable for some time, and being
covered by a conformance test.
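
A minimal sketch of what that opt-in model could look like, applied to a
hypothetical canvas context object. Every name here (the context shape,
the extension strings, the draft flag) is invented for illustration; this
is not a real browser API:

```javascript
// Opt-in extension model: approved extensions are always reachable via
// explicit getExtension() calls; draft extensions are additionally gated
// behind a flag, mirroring the process described above.
function makeContext(draftExtensionsEnabled) {
  var approved = {
    'CANVAS_layers': { beginLayer: function () {}, endLayer: function () {} }
  };
  var draft = {
    'CANVAS_hit_regions_draft': {}
  };
  return {
    getExtension: function (name) {
      if (Object.prototype.hasOwnProperty.call(approved, name)) {
        return approved[name];
      }
      if (draftExtensionsEnabled &&
          Object.prototype.hasOwnProperty.call(draft, name)) {
        return draft[name];
      }
      return null; // unknown or not enabled: page takes the portable path
    }
  };
}
```

A page would call getExtension up front and branch on the result, making
the portability-vs-features trade-off explicit rather than accidental.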

If there is any doubt that such a model can be useful in practice, WebGL
has been very successful so far at combining fast feature iteration with
tight cross-browser compatibility, using a similar model.

See this document:
  http://www.khronos.org/registry/webgl/extensions/
... except it needs updating to reflect the recent move away from vendor
prefixes to flags, by some major browsers.

Benoit


2013/6/28 Tom Wiltzius wiltz...@chromium.org

 I agree there isn't a risk of these unrelated additional features not
 matching their hypothetical specs.

 However, my concern is Ian's comment that he'd prefer not to add additional
 features to the spec -- some of the ones being actively developed in
 Chromium aren't added yet, and I'd hate for them to not get added even if
 we have consensus on their behavior and early implementations.

 To quote Ian's initial message:

 I think before we add more features, it's important that we figure out
 which browsers want to implement which features, and that we start with
 the highest priority ones.

 ... so I'm trying to provide context about what Chromium is currently
 implementing, and hence what might be higher priority to spec.


 On Fri, Jun 28, 2013 at 12:48 PM, Rik Cabanier caban...@gmail.com wrote:

  As long as those features don't build upon other unimplemented features,
  there should be no risk.
  However, accessibility (=hit regions) is a must and should be tackled as
  soon as possible.
 
  Rik
 
 
   On Fri, Jun 28, 2013 at 12:30 PM, Tom Wiltzius wiltz...@chromium.org wrote:
 
  The only major Canvas2D features being actively developed in Chromium
  right now are:
 
   - having a canvas context alpha attribute
   - text decoration
   - compositing and blending
   - canvas access in workers
 
  (easily referenced from http://www.chromestatus.com/features)
 
   It is concerning to me that the list of other unimplemented features
   that aren't being worked on could block the standardization of the
   above (all of which have been discussed on this list at one point, but
   not all of which are in the spec yet).
 
  How can we help reconcile this discrepancy?
 
 
   On Fri, Jun 14, 2013 at 11:54 AM, Rik Cabanier caban...@gmail.com wrote:
 
   I agree that hit regions should be high on the priority list. They've
   been in the spec for a while and are absolutely required for
   accessibility. I will try to follow up on this feature with the
   browsers. We recently added a basic Path object to WebKit and I know
   that Mozilla is looking at the path object.
  
   At this point, I wouldn't ask to add begin/endLayer to the spec.
   Instead, we will talk to the browser vendors and work on implementing
   the feature. Just putting it in the spec is not enough to get an
   implementation...
 
  On Fri, Jun 14, 2013 at 10:42 AM, Ian Hickson i...@hixie.ch wrote:
 
    On Fri, 14 Jun 2013, Brian Salomon wrote:
    
     As an implementor, we would prefer the layer approach. This would
     have lower overhead in Chromium/Skia. We can make better decisions
     about caching and deferred rendering. It also seems like a really
     handy API for devs, especially the ability to inherit the graphics
     state. Would the spec have anything to say about
     beginLayer()/endLayer() balancing, especially with respect to RAF?
  
    I have no objection to adding this to the spec, but right now the spec
    has a bunch of features that aren't implemented, and there's a long
    list of other features people want that aren't yet specced. I'm very
    hesitant to get the spec further and further away from
    implementations.
  
    For example, here are some of the bug numbers for canvas feature
    requests:
   
    11399   canvas Locking individual color channels (e.g. drawing to
            alpha only)
    21835   canvas Path object should have a way to add paths keeping
            only the union given a fill rule
    21939   canvas Path objects would 

Re: [whatwg] Challenging canvas.supportsContext

2013-06-24 Thread Benoit Jacob
2013/6/21 Benoit Jacob jacob.benoi...@gmail.com

 Any other application use cases?


Anyone?

I believe that if no unquestionable application use case can be given, then
this function should be removed from the spec.

Benoit


Re: [whatwg] Challenging canvas.supportsContext

2013-06-21 Thread Benoit Jacob
Did any email in this thread provide a valid application use case? I can't
see many application use cases being mentioned at all, and most of the
ones that were have been rebutted, as far as I can see.

The most serious remaining one, that I can see, is the Chrome Web Store.
However:
 - it is a case where reliability is particularly important, which goes in
favor of an actual getContext;
 - this kind of application inherently requires browser-specific APIs
anyway (at least for the time being --- we can revisit when it's no longer
the case), so having a standard API for this particular feature is less
valuable than elsewhere.

Any other application use cases?

Keep in mind that this is all in sharp contrast with every single other
canvas / 2d / webgl API feature that I can think of, for which application
use cases are abundant.

Benoit


2013/6/19 Benoit Jacob jacob.benoi...@gmail.com

 Dear list,

 I'd like to question the usefulness of canvas.supportsContext. I tried to
 think of an actual application use case for it, and couldn't find one. It
 also doesn't seem like any valid application use case was given on this
 list when this topic was discussed around September 2012.

 The closest thing that I could find being discussed, was use cases by JS
 frameworks or libraries that already expose similar feature-detection APIs.
 However, that only shifts the question to: what is the reason for them to
 expose such APIs? In the end, I claim that the only thing that we should
 recognize as a reason to add a feature to the HTML spec, is *application*
 use cases.

 So let's look at the naive application usage pattern for supportsContext:

   if (canvas.supportsContext('webgl')) {
     context = canvas.getContext('webgl');
   }

 The problem is that the same can be achieved with just the getContext
 call, and checking whether it succeeded.

 In other words, I'm saying that no matter what JS libraries/frameworks may
 offer for feature detection, in the end, applications don't want to just
 *detect* features --- applications want to *use* features. So they'll just
 pair supportsContext calls with getContext calls, making the
 supportsContext calls useless.

 There is also the argument that supportsContext can be much cheaper than a
 getContext, given that it only has to guarantee that getContext must fail
 if supportsContext returned false. But this argument is overlooking that in
 the typical failure case, which is failure due to system/driver
 blacklisting, getContext returns just as fast as supportsContext --- as
 they both just check the blacklist and return. Outside of exceptional cases
 (out of memory...), the slow path in getContext is the *success* case,
 and again, in that case a real application would want to actually *use*
 that context.

 Keep in mind that supportsContext can't guarantee that if it returns true,
 then a subsequent getContext will succeed. The spec doesn't require it to,
 either. So if the existence of supportsContext misleads application
 developers into no longer checking for getContext failures, then we'll just
 have rendered canvas-using applications a little bit more fragile. Another
 problem with supportsContext is that it's untestable, at least when it
 returns true; it is spec-compliant to just implement it as returning
 whether the JS interface for the required canvas context exists, which is
 quite useless. Given such deep problems, I think that the usefulness bar
 for accepting supportsContext into the spec should be quite high.

 So, is there an application use case that actually benefits from
 supportsContext?

 Cheers,
 Benoit




[whatwg] Challenging canvas.supportsContext

2013-06-19 Thread Benoit Jacob
Dear list,

I'd like to question the usefulness of canvas.supportsContext. I tried to
think of an actual application use case for it, and couldn't find one. It
also doesn't seem like any valid application use case was given on this
list when this topic was discussed around September 2012.

The closest thing that I could find being discussed, was use cases by JS
frameworks or libraries that already expose similar feature-detection APIs.
However, that only shifts the question to: what is the reason for them to
expose such APIs? In the end, I claim that the only thing that we should
recognize as a reason to add a feature to the HTML spec, is *application*
use cases.

So let's look at the naive application usage pattern for supportsContext:

  if (canvas.supportsContext('webgl')) {
    context = canvas.getContext('webgl');
  }

The problem is that the same can be achieved with just the getContext call,
and checking whether it succeeded.

In other words, I'm saying that no matter what JS libraries/frameworks may
offer for feature detection, in the end, applications don't want to just
*detect* features --- applications want to *use* features. So they'll just
pair supportsContext calls with getContext calls, making the
supportsContext calls useless.

There is also the argument that supportsContext can be much cheaper than a
getContext, given that it only has to guarantee that getContext must fail
if supportsContext returned false. But this argument is overlooking that in
the typical failure case, which is failure due to system/driver
blacklisting, getContext returns just as fast as supportsContext --- as
they both just check the blacklist and return. Outside of exceptional cases
(out of memory...), the slow path in getContext is the *success* case, and
again, in that case a real application would want to actually *use* that
context.

Keep in mind that supportsContext can't guarantee that if it returns true,
then a subsequent getContext will succeed. The spec doesn't require it to,
either. So if the existence of supportsContext misleads application
developers into no longer checking for getContext failures, then we'll just
have rendered canvas-using applications a little bit more fragile. Another
problem with supportsContext is that it's untestable, at least when it
returns true; it is spec-compliant to just implement it as returning
whether the JS interface for the required canvas context exists, which is
quite useless. Given such deep problems, I think that the usefulness bar
for accepting supportsContext into the spec should be quite high.
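
The defensive pattern implied above is to treat getContext's return value
as authoritative, whatever supportsContext said. A sketch (initRenderer is
a hypothetical name; the canvas is any object with a getContext method):

```javascript
// Robust initialization: a null return from getContext('webgl') is the
// real signal that WebGL is unavailable, regardless of what
// supportsContext may have reported earlier.
function initRenderer(canvas) {
  var gl = canvas.getContext('webgl');
  if (!gl) {
    return null; // caller falls back to a non-WebGL rendering path
  }
  return gl;
}
```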

So, is there an application use case that actually benefits from
supportsContext?

Cheers,
Benoit


Re: [whatwg] Challenging canvas.supportsContext

2013-06-19 Thread Benoit Jacob
2013/6/19 Tab Atkins Jr. jackalm...@gmail.com

 On Wed, Jun 19, 2013 at 11:17 AM, Benoit Jacob jacob.benoi...@gmail.com
 wrote:
  So let's look at the naive application usage pattern for supportsContext:
 
  if (canvas.supportsContext('webgl')) {
    context = canvas.getContext('webgl');
  }
 
  The problem is that the same can be achieved with just the getContext
 call,
  and checking whether it succeeded.

 The problem that supportsContext() solves, and which was brought up
 repeatedly during the drive to add this, is that spinning up contexts
 can be expensive.


I tried to address this very argument in my above email; maybe I wasn't
clear enough.

1. If supportsContext() fails, then getContext() fails just as fast. In
that case, supportsContext isn't any faster.
2. If supportsContext succeeds, then the application is going to want to
proceed with calling getContext, so nothing is achieved by supportsContext
being cheaper than getContext.

If you disagree with that argument, then I would like, again, to hear about
what would be an application use case that actually benefits from
supportsContext.

Benoit


Re: [whatwg] Challenging canvas.supportsContext

2013-06-19 Thread Benoit Jacob
2013/6/19 Boris Zbarsky bzbar...@mit.edu

 On 6/19/13 3:34 PM, Tab Atkins Jr. wrote:

 This is missing the point.  You don't want to wait until it's actually
 time to create the context.  Unless you torture your code flow, by the
 time you're creating a context you should already know that the
 context is supported.  The knowledge of which context to use is most
 useful well before that, when you're first entering the app.


 But supportsContext doesn't give any guarantee that the getContext will
 succeed.


  Plus, it doesn't matter how late you do the detection - if you do a
 straight *detection* at all rather than an initialization (that is, if
 you throw away the context you've just created for testing)


 OK, but why are we making that assumption?  I guess if people insist on
 doing that, then we do in fact need something that will basically try to
 guess whether getContext might succeed.


  Like @supports, the supportsContext() method can be easy and reliable
 with a very simple definition for supports - it returns true if
 calling getContext() with the same arguments would return a context
 rather than erroring, and false otherwise.


 Just so we're clear, this is _not_ what supportsContext is specified to
 do.  As specced, it will return false if you know for a fact that
 getContext would return null.  It will return true if you think that
 getContext might not return null.  This means that a true return doesn't
 really mean squat about what getContext will do.

 And the reason for that is that you can't tell whether getContext will
 return null until you try to do it, given how getContext is specced.


Yes, it seems that supportsContext being under-specified allows for
confusion: it is given different, mutually incompatible meanings in
different emails in this thread:

From Tab's 1st email:

 *The problem that supportsContext() solves, and which was brought up*
 * repeatedly during the drive to add this, is that spinning up contexts*
 * can be expensive.*


From Tab's 2nd email:

 *Like @supports, the supportsContext() method can be easy and reliable*
 * with a very simple definition for supports - it returns true if*
 * calling getContext() with the same arguments would return a context*
 * rather than erroring, and false otherwise.*


The incompatibility is that the second quote's requirement can only be met
if supportsContext('webgl') actually creates an OpenGL context --- which is
incompatible with the first quote, which requires supportsContext to be
significantly quicker than getContext, which can only be achieved by not
actually creating an OpenGL context.

(Replace "OpenGL context" by "Direct3D device" or whichever concept applies
to the operating system at hand.)

Benoit





 -Boris



Re: [whatwg] Challenging canvas.supportsContext

2013-06-19 Thread Benoit Jacob
2013/6/19 Kenneth Russell k...@google.com

 In my experience, in Chromium, creation of the underlying OpenGL
 context for a WebGLRenderingContext almost never fails in isolation.
 Instead, more general failures happen such as the GPU process failing
 to boot, or creation of all OpenGL contexts (including the
 compositor's) failing. These failures would be detected before the app
 calls supportsContext('webgl'). For this reason I believe
 supportsContext's answer can be highly accurate in almost every
 situation.


In Mozilla code, we fail WebGL context creation if any GL error occurs
during initialization, where we have to make a number of specific OpenGL
calls (e.g. querying many constants, enabling point sprites...). So it is
perfectly possible for WebGL specifically to fail on a given device where
OpenGL compositing works, without being specifically blacklisted, and
currently we can rely on these automatic checks rather than having to
curate a blacklist for these problems, which is very nice.
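The init-time check described can be sketched roughly as follows; initializeWebGLContext and the mock-friendly shape are my own illustration, not Mozilla's actual code:

```javascript
// Rough sketch of init-time validation: make the initial GL queries and
// refuse to hand out a WebGL context if any of them raised a GL error.
// `gl` can be a real WebGLRenderingContext or, for testing, a mock.
function initializeWebGLContext(gl) {
  // Representative init-time calls; the real list is longer (querying
  // many constants, enabling point sprites, etc.).
  gl.getParameter(gl.MAX_TEXTURE_SIZE);
  gl.getParameter(gl.MAX_VERTEX_ATTRIBS);
  // Any error during initialization fails context creation outright,
  // without needing a blacklist entry for the device.
  if (gl.getError() !== 0 /* gl.NO_ERROR */) return null;
  return gl;
}
```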

Another way in which WebGL specifically can fail, is that some driver bugs
cause errors that we may want to ignore in our compositor code but not in
WebGL. To give an example, on many Vivante GPU drivers, which are very
common in Chinese mobile devices, the first eglMakeCurrent call on a newly
created context can return false without any actual EGL error [1]. A
browser may want to ignore this error in its compositor to be able to run
nonetheless on such devices, but without WebGL support.

To summarize, blacklisting is not the only reason why WebGL specifically
may fail, and this is particularly concrete on low-end mobile devices.

Checking whether any WebGL context creation succeeded in the current
browser session is a very useful data point indeed, but doesn't help with
the _first_ context creation (which seems particularly relevant for the
Chrome Web Store use case mentioned earlier).

Benoit

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=771774#c2


Re: [whatwg] Hardware accelerated canvas

2012-09-05 Thread Benoit Jacob
- Original Message -
 On Tue, Sep 4, 2012 at 10:15 AM, Boris Zbarsky bzbar...@mit.edu
 wrote:
  So now our list is:
 
  1)  Have a way for pages to opt in to software rendering.
  2)  Opt canvases in to software rendering via some sort of heuristic
  (e.g. software by default until there has been drawing to it for
  several event loop iterations, or whatever).
  3)  Have a way for pages to opt in to having snapshots taken.
  4)  Auto-snapshot based on some heuristics.
  5)  Save command stream.
  6)  Have a way for pages to explicitly snapshot a canvas.
  7)  Require opt in for hardware accelerated rendering.
  8)  Authors use toDataURL() when they want their data to stick around.
  9)  Context lost event that lets authors regenerate the canvas.
  10) Do nothing, assume users will hit reload if their canvas goes
  blank.
 
 11) Default to best-effort (current behavior), but allow opting in to
 getting notifications about lost context, in which case the browser
 would not need to do various tricks in order to attempt to save the
 current state.
 
 I.e. basically 4, but with the ability for the page to opt in to 9.

Keep in mind that snapshotting as in 4 will cause a large memory usage increase 
for large canvases, and will cause animation choppiness on certain pages on 
systems where readback is expensive. So the heuristics have to be specified in 
a precise manner, or else browser vendors could well decide that the best 
heuristic is "never".

Benoit

 
 It sounds like no browsers do any such tricks right now, so
 effectively the opt-in would be to just be notified. But possibly
 browsers might feel the need to do various snap-shot heuristics on
 mobile as they start to hardware accelerate there.
 
 / Jonas
 


Re: [whatwg] Hardware accelerated canvas

2012-09-03 Thread Benoit Jacob


- Original Message -
 On Mon, 03 Sep 2012 00:14:49 +0200, Benoit Jacob bja...@mozilla.com
 wrote:
 
  - Original Message -
  On Sun, 2 Sep 2012, Erik Möller wrote:
  
   As we hardware accelerate the rendering of canvas, not just with the
   webgl context, we have to figure out how to best handle the fact that
   GPUs lose the rendering context for various reasons. Reasons for
   losing the context differ from platform to platform but range from
   going into power-save mode, to internal driver errors and the famous
   long running shader protection.
   A lost context means all resources uploaded to the GPU will be gone
   and have to be recreated. For canvas it is not impossible, though IMO
   prohibitively expensive, to try to automatically restore a lost
   context and guarantee the same behaviour as in software.
   The two options I can think of would be to:
   a) read back the framebuffer after each draw call.
   b) read back the framebuffer before the first draw call of a frame
   and build a display list of all other draw operations.
  
   Neither seems like a particularly good option if we're looking to
   actually improve on canvas performance. Especially on mobile where
   read-back performance is very poor.
  
   The WebGL solution is to fire an event and let the js-implementation
   deal with recovering after a lost context
   http://www.khronos.org/registry/webgl/specs/latest/#5.15.2
  
   My preferred option would be to make a generic context lost event
   for canvas, but I'm interested to hear what people have to say about
   this.
 
  Realistically, there are too many pages that have 2D canvases that are
  drawn to once and never updated for any solution other than don't lose
  the data to be adopted. How exactly this is implemented is a quality
  of implementation issue.
 
  With all the current graphics hardware, this means don't use a
  GL/D3D
  surface to implement the 2d canvas drawing buffer storage, which
  implies: don't hardware-accelerate 2d canvases.
 
  If we agree that 2d canvas acceleration is worth it despite the
  possibility of context loss, then Erik's proposal is really the
  only
  thing to do, as far as current hardware is concerned.
 
  Erik's proposal doesn't worsen the problem in anyway --- it
  acknowledges
  a problem that already exists and offers to Web content a way to
  recover
  from it.
 
  Hardware-accelerated 2d contexts are no different from
  hardware-accelerated WebGL contexts, and WebGL's solution has been
  debated at length already and is known to be the only thing to do
  on
  current hardware. Notice that similar solutions preexist in the
  system
  APIs underlying any hardware-accelerated canvas context: Direct3D's
  lost
  devices, EGL's lost contexts, OpenGL's ARB_robustness context loss
  statuses.
 
  Benoit
 
 
  --
  Ian Hickson
  http://ln.hixie.ch/
  Things that are impossible just take longer.
 
 I agree with Benoit, this is already an existing problem, I'm just
 pointing the spotlight at it. If we want to take advantage of hardware
 acceleration on canvas this is an issue we will have to deal with.
 
 I don't particularly like this idea, but for the sake of having all the
 options on the table I'll mention it. We could default to the old
 behaviour and have an opt-in for hardware accelerated canvas, in which
 case you would have to respond to said context lost event.

Two objections against this:

1. Remember this adage from high-performance computing which applies here as 
well: The fast drives out the slow even if the fast is wrong. Browsers want 
to have good performance on Canvas games, demos and benchmarks. Users want good 
performance too. GL/D3D helps a lot there, at the cost of a rather rare -- and 
practically untestable -- problem with context loss. So browsers are going to 
use GL/D3D, period. On the desktop, most browsers already do. It seems 
impossible for the spec to require not using GL/D3D and get obeyed.

2. This would effectively force browsers to ship an implementation that does 
not rely on GL/D3D. For browsers that do have a GL/D3D based canvas 
implementation and target platforms where GL/D3D availability can be taken for 
granted (typically on mobile devices), it is reasonable to expect that in the 
foreseeable future they might want to get rid of their non-GL/D3D canvas impl.

Benoit


 That would allow the existing content to keep working as it is without
 changes. It would be more work for vendors, but it's up to every vendor
 to decide how to best solve it, either by doing it in software or using
 the expensive read-back alternative in hardware.
 
 Like I said, not my favourite option, but I agree it's bad to break the
 web.
 
 --
 Erik Möller
 Core Gfx Lead
 Opera Software
 twitter.com

Re: [whatwg] Hardware accelerated canvas

2012-09-03 Thread Benoit Jacob


- Original Message -
 What is really meant here by Canvas GPU acceleration?

This means use GL/D3D to implement the 2D canvas drawing primitives; but what 
really matters here, is that this requires using a GL/D3D texture/surface as 
the primary storage for the 2D canvas drawing buffer.

Because of the way that current GPUs work, this entails that the canvas drawing 
buffer is a /discardable/ resource. Erik's proposal is about dealing with this 
dire reality.

Again, accelerated canvases have been widely used for a year and a half now. 
It's not realistic to expect the world to go back to non-accelerated by default 
now.

Benoit


Re: [whatwg] Hardware accelerated canvas

2012-09-02 Thread Benoit Jacob


- Original Message -
 On Sun, 2 Sep 2012, Erik Möller wrote:
 
  As we hardware accelerate the rendering of canvas, not just with the
  webgl context, we have to figure out how to best handle the fact that
  GPUs lose the rendering context for various reasons. Reasons for
  losing the context differ from platform to platform but range from
  going into power-save mode, to internal driver errors and the famous
  long running shader protection.
  A lost context means all resources uploaded to the GPU will be gone
  and have to be recreated. For canvas it is not impossible, though IMO
  prohibitively expensive, to try to automatically restore a lost
  context and guarantee the same behaviour as in software.
  The two options I can think of would be to:
  a) read back the framebuffer after each draw call.
  b) read back the framebuffer before the first draw call of a frame
  and build a display list of all other draw operations.
  
  Neither seems like a particularly good option if we're looking to
  actually improve on canvas performance. Especially on mobile where
  read-back performance is very poor.
  
  The WebGL solution is to fire an event and let the js-implementation
  deal with recovering after a lost context
  http://www.khronos.org/registry/webgl/specs/latest/#5.15.2
  
  My preferred option would be to make a generic context lost event for
  canvas, but I'm interested to hear what people have to say about this.
 
 Realistically, there are too many pages that have 2D canvases that are
 drawn to once and never updated for any solution other than don't lose
 the data to be adopted. How exactly this is implemented is a quality of
 implementation issue.

With all the current graphics hardware, this means don't use a GL/D3D surface 
to implement the 2d canvas drawing buffer storage, which implies: don't 
hardware-accelerate 2d canvases.

If we agree that 2d canvas acceleration is worth it despite the possibility of 
context loss, then Erik's proposal is really the only thing to do, as far as 
current hardware is concerned.

Erik's proposal doesn't worsen the problem in any way --- it acknowledges a 
problem that already exists and offers to Web content a way to recover from it.

Hardware-accelerated 2d contexts are no different from hardware-accelerated 
WebGL contexts, and WebGL's solution has been debated at length already and is 
known to be the only thing to do on current hardware. Notice that similar 
solutions preexist in the system APIs underlying any hardware-accelerated 
canvas context: Direct3D's lost devices, EGL's lost contexts, OpenGL's 
ARB_robustness context loss statuses.
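For comparison, the recovery pattern WebGL already specifies (and that a generic canvas context-lost event would mirror) looks roughly like this; installContextLossHandlers is a hypothetical helper, and any EventTarget works in place of a real canvas element:

```javascript
// Sketch of the WebGL lost/restored-context recovery pattern. `target`
// would normally be a <canvas> element; `redraw` recreates GPU resources
// and repaints once the context comes back.
function installContextLossHandlers(target, redraw) {
  var lost = false;
  target.addEventListener("webglcontextlost", function (e) {
    e.preventDefault(); // signal that we intend to handle restoration
    lost = true;        // stop issuing draw calls until restored
  });
  target.addEventListener("webglcontextrestored", function () {
    lost = false;
    redraw();
  });
  return function isLost() { return lost; };
}
```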

Benoit

 
 --
 Ian Hickson
 http://ln.hixie.ch/
 Things that are impossible just take longer.


Re: [whatwg] Endianness of typed arrays

2012-04-27 Thread Benoit Jacob


- Original Message -
 Sorry for joining the conversation late.
 
 On Mar 28, 2012, at 1:39 PM, Kenneth Russell wrote:
 
  On Wed, Mar 28, 2012 at 12:34 PM, Benoit Jacob bja...@mozilla.com
  wrote:
  
   1. In webgl.bufferData implementation, don't call glBufferData,
   instead just cache the buffer data.
  
   2. In webgl.vertexAttribPointer, record the attributes structure
   (their types, how they use buffer data). Do not convert/upload
   buffers yet.
  
   3. In the first WebGL draw call (like webgl.drawArrays) since the
   last bufferData/vertexAttribPointer call, do the conversion of
   buffers and the glBufferData calls. Use some heuristics to drop the
   buffer data cache, as most WebGL apps will not have a use for it
   anymore.
  
  It would never be possible to drop the CPU side buffer data cache. A
  subsequent draw call may set up the vertex attribute pointers
  differently for the same buffer object, which would necessitate going
  back through the buffer's data and generating new, appropriately
  byte-swapped data for the GPU.

I wanted to reply to the above that while indeed in theory it's never possible 
to drop these caches, in practice some heuristics might work well enough that 
it doesn't matter for real content; and in the worst case where we effectively 
can't ever drop caches, we have a +10% or +20% memory usage increase for 
typical WebGL applications (as buffers typically aren't the majority of a WebGL 
application's memory usage), which is bad but still better than not running 
WebGL at all.

 
 That's true. But there are other plausible approaches. There's
 GL_PACK_SWAP_BYTES:
 
 http://www.opengl.org/sdk/docs/man/xhtml/glPixelStore.xml

This seems specific to desktop OpenGL and doesn't seem to exist in the core 
OpenGL ES 2.0 specification. Maybe as an extension?

 
 Or code generation: translate the shaders to do the byte-swapping
 explicitly in GLSL. For floats you should be able to cast back and
 forth to ints via intBitsToFloat/floatBitsToInt.

Interesting; one would have to measure the performance impact of this. What 
makes me hopeful is that this should only slow down vertex shaders, not 
fragment shaders (which are the performance critical shaders in most 
applications). So this could well be the most practical solution so far. WebGL 
implementations must have a shader compiler anyways, for multiple reasons 
(validation, shading language differences, working around driver bugs).

This only applies to ARRAY_BUFFER, not ELEMENT_ARRAY_BUFFER buffers, but for 
the latter, we already have to keep a permanent CPU-side copy anyway for 
validation purposes, so the approach of swapping bytes in the implementation of 
drawElements should work well and not have any major downside (since having to 
keep a CPU-side copy was the main downside).
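The drawElements-time byte swap mentioned above can be sketched for 16-bit index data; swapBytes16 is an illustrative name, not an actual implementation detail of any browser:

```javascript
// Produce a copy of a buffer with each 16-bit element byte-swapped, as a
// big-endian implementation of drawElements might do before uploading
// index data kept in the CPU-side copy.
function swapBytes16(srcBuffer) {
  var src = new DataView(srcBuffer);
  var dst = new DataView(new ArrayBuffer(srcBuffer.byteLength));
  for (var i = 0; i < srcBuffer.byteLength; i += 2) {
    // Read little-endian, write big-endian: i.e. swap the two bytes.
    dst.setUint16(i, src.getUint16(i, true), false);
  }
  return dst.buffer;
}
```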

Cheers,
Benoit

 
 But these days more and more big-endian systems have support for
 little-endian mode, which is probably the simplest approach. And
 honestly, there just don't seem to be WebGL-enabled user agents on
 big-endian systems. We've left a specification hole in a place
 that's easy to trip over, only out of concern for hypothetical
 systems -- in an era when little-endian has clearly won.
 
 If the web isn't already de facto little-endian -- and I believe my
 colleagues have seen evidence that sites are beginning to depend on
 it -- then typed arrays force developers to test on big-endian
 systems to make sure their code is portable, when it's quite likely
 they don't have any big-endian systems to test on. That's a tax on
 developers they may not be willing or able to pay. I should know, I
 am one! :)
 
 https://github.com/dherman/float.js/blob/master/float.js
 
 In a hilariously ironic twist of fate, I recently noticed that the
 endianness-testing logic originally had a stupid bug that made
 LITTLE_ENDIAN always true. It's now fixed, but I didn't detect the
 bug because I didn't have a big-endian JS engine to test on.
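(The endianness probe under discussion fits in a few lines of typed-array code; isLittleEndian here is an illustrative helper, not float.js itself:)

```javascript
// Write a known 32-bit value and inspect its first byte: on a
// little-endian engine the low-order byte 0x0d comes first in memory.
function isLittleEndian() {
  var buf = new ArrayBuffer(4);
  new Uint32Array(buf)[0] = 0x0a0b0c0d;
  return new Uint8Array(buf)[0] === 0x0d;
}
```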
 
  Our emails certainly crossed, but please refer to my other email.
  WebGL applications that assemble vertex data for the GPU using
  typed
  arrays will already work correctly on big-endian architectures.
  This
  was a key consideration when these APIs were being designed. The
  problems occur when binary data is loaded via XHR and uploaded to
  WebGL directly. DataView is supposed to be used in such cases to
  load
  the binary data, because the endianness of the file format must
  necessarily be known.
 
 I'm afraid this is wishful thinking. API's have more than a fixed set
 of use cases. The beautiful thing about platforms is that people
 invent new uses the designers didn't think of. Typed arrays are
 simple, powerful, and general-purpose, and people will use them for
 all kinds of purposes. Take my float explorer:
 
 http://dherman.github.com/float.js/
 
 There's no XHR and no WebGL involved in that code. (And I didn't
 invent that to make a point here -- I wrote it months ago when I

[whatwg] Endianness of typed arrays

2012-03-28 Thread Benoit Jacob
Before I joined this mailing list, Boris Zbarsky wrote:
 C)  Try to guess based on where the array buffer came from and have 
 different behavior for different array buffers.  With enough luck (or 
 good enough heuristics), would make at least some WebGL work, while also 
 making non-WebGL things loaded over XHR work.

FWIW, here is a way to do this that will always work and won't rely on luck. 
The key idea is that by the time one draws stuff, all the information about how 
vertex attributes use buffer data must be known.

1. In webgl.bufferData implementation, don't call glBufferData, instead just 
cache the buffer data. 

2. In webgl.vertexAttribPointer, record the attributes structure (their types, 
how they use buffer data). Do not convert/upload buffers yet.

3. In the first WebGL draw call (like webgl.drawArrays) since the last 
bufferData/vertexAttribPointer call, do the conversion of buffers and the 
glBufferData calls. Use some heuristics to drop the buffer data cache, as most 
WebGL apps will not have a use for it anymore.
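The three steps above can be sketched with the GL calls mocked out so the scheme runs anywhere; makeDeferredUploader, uploadFn and convertFn are hypothetical names for illustration:

```javascript
// Sketch of the deferred-upload scheme: cache buffer data at bufferData
// time (step 1), record attribute structure (step 2), and only
// convert/upload at the first draw call, then drop the cache (step 3).
function makeDeferredUploader(uploadFn, convertFn) {
  var cache = [];   // step 1: cached buffer data, not yet uploaded
  var attribs = []; // step 2: recorded attribute structure
  return {
    bufferData: function (data) { cache.push(data); },
    vertexAttribPointer: function (desc) { attribs.push(desc); },
    drawArrays: function () {
      // Step 3: attribute usage is now known, so convert (e.g. byte-swap
      // per attribute type) and upload.
      cache.forEach(function (data) { uploadFn(convertFn(data, attribs)); });
      cache = []; // heuristic: drop the cache after the first draw
    }
  };
}
```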

 In practice, if forced to implement a UA on a big-endian system today, I 
 would likely pick option (C)  I wouldn't classify that as a victory 
 for standardization, but I'm also not sure what we can do at this point 
 to fix the brokenness.

I agree that seems to be the only way to support universal webgl content on 
big-endian UAs. It's not great due to the memory overhead, but at least it 
shouldn't incur a significant performance overhead, and it typically only 
incurs a temporary memory overhead as we should be able to drop the buffer data 
caches quickly in most cases. Also, buffers are typically 10x smaller than 
textures, so the memory overhead would typically be ~ 10% in corner cases where 
we couldn't drop the caches.

In conclusion: WebGL is not the worst here, there is a pretty reasonable avenue 
for big-endian UAs to implement it in a way that allows running the same 
unmodified content as little-endian UAs.

Benoit