Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-07-17 Thread Ian Hickson
On Tue, 20 Mar 2012, Edward O'Connor wrote:
> 
> Unfortunately, lots of canvas content (especially content which calls 
> {create,get,put}ImageData methods) assumes that the canvas's backing 
> store pixels correspond 1:1 to CSS pixels, even though the spec has been 
> written to allow for the backing store to be at a different scale 
> factor.

I've fixed the text so that those methods now always return 96dpi data.


> I'd like to propose the addition of a backingStorePixelRatio property to 
> the 2D context object. Just as window.devicePixelRatio expresses the 
> ratio of device pixels to CSS pixels, ctx.backingStorePixelRatio would 
> express the ratio of backing store pixels to CSS pixels. This allows 
> developers to easily branch to handle different backing store scale 
> factors.

I've added window.screen.canvasResolution, which returns the resolution 
being used for 2D canvases created during the current task.
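For illustration, author code branching on such a ratio might look like the 
following sketch; the property names are the ones proposed in this thread, 
ctx is the usual 2D context variable, and the /96 conversion is an assumption 
rather than spec text:

  // Sketch only: backingStorePixelRatio and screen.canvasResolution are the
  // names proposed in this thread; the /96 conversion is an assumption.
  var ratio = ctx.backingStorePixelRatio ||
              (window.screen.canvasResolution ? window.screen.canvasResolution / 96 : 1);
  if (ratio !== 1) {
    // Backing store pixels are not 1:1 with CSS pixels; scale pixel loops accordingly.
  }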


> Additionally, I think the existing {create,get,put}ImageData API needs 
> to be defined to be in terms of CSS pixels, since that's what existing 
> content assumes.

Done.


> I propose the addition of a new set of methods for working directly with 
> backing store image data. (New methods are easier to feature detect than 
> adding optional arguments to the existing methods.) At the moment I'm 
> calling these {create,get,put}ImageDataHD, but I'm not wedded to the 
> names. (Nor do I want to bikeshed them.)

Done.

I've also added toDataURLHD and toBlobHD.
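For illustration, code using the HD variants would read its bounds from the 
returned ImageData rather than assuming a 1:1 mapping to CSS pixels. A minimal 
sketch using the names above (canvas and ctx are the usual element and context 
variables; argument conventions follow the proposal, not final spec text):

  var img = ctx.getImageDataHD(0, 0, canvas.width, canvas.height);
  for (var y = 0; y < img.height; y++) {      // backing-store bounds, not CSS pixels
    for (var x = 0; x < img.width; x++) {
      var i = (y * img.width + x) * 4;
      img.data[i + 3] = 255;                  // e.g. force full alpha
    }
  }
  ctx.putImageDataHD(img, 0, 0);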


On Tue, 20 Mar 2012, James Robinson wrote:
>
> If we are adding new APIs for manipulating the backing directly, can we 
> make them asynchronous? This would allow for many optimization 
> opportunities that are currently difficult or impossible.

I haven't done this, because it would make the API rather weird. But I am 
happy to do it if people think the API weirdness is a cost worth paying.

Note that technically getImageData() doesn't have to block -- it's array 
access on ImageData that has to block. It would be possible to implement 
getImageData() in such a way that the ImageData object is lazily filled. 
You'd end up blocking later if the author really needed the data, but it's 
possible to write code that doesn't block (though you wouldn't necessarily 
know how long to wait, I guess).
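To make that distinction concrete, a sketch of what a lazily filled 
implementation would mean for author code (w, h and doUnrelatedWork are 
placeholders):

  var img = ctx.getImageData(0, 0, w, h);  // may return immediately, pixels not yet read back
  doUnrelatedWork();                       // no blocking so far
  var first = img.data[0];                 // a lazy implementation would block here, not above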


On Tue, 20 Mar 2012, Boris Zbarsky wrote:
> On 3/20/12 6:36 PM, Glenn Maynard wrote:
> > The drawing calls that happen after would need to be buffered (or 
> > otherwise flush the queue, akin to calling glFinish), so the 
> > operations still happen in order.
> 
> The former seems like it could get pretty expensive and the latter would 
> negate the benefits of making it async, imo.

Having the operations not occur in order would make the API quite 
difficult to use, so if that's not an option, I don't think it's worth it.


On Wed, 21 Mar 2012, Maciej Stachowiak wrote:
> On Mar 20, 2012, at 12:00 PM, James Robinson wrote:
> > 
> > If we are adding new APIs for manipulating the backing directly, can 
> > we make them asynchronous? This would allow for many optimization 
> > opportunities that are currently difficult or impossible.
> 
> Neat idea to offer async backing store access. I'm not sure that we 
> should tie this to backing store access at true backing store resolution 
> vs at CSS pixel nominal resolution, because it will significantly raise 
> the barrier to authors recoding their existing apps to take full 
> advantage of higher resolutions. With Ted's proposal, all they would 
> have to do is use the HD versions of calls and change their loops to 
> read the bounds from the ImageData object instead of assuming. If we 
> also forced the new calls to be async, then more extensive changes would 
> be required.
> 
> I hear you on the benefits of async calls, but I think it would be 
> better to sell authors on their benefits separately.

I think it depends how strong the benefits are. In this particular case, I 
tend to agree that the benefits aren't really worth tying them together, 
and possibly not worth providing the async model as a separate API at all.

Maybe we could have an attribute on ImageData that says whether an array 
index read would have to block on getting the data or whether it's ready, 
maybe coupled with an event that says when it's ready?
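As a sketch of that shape (the attribute and event names are placeholders; 
nothing here is specified):

  var img = ctx.getImageData(0, 0, w, h);
  if (img.ready) {                         // hypothetical attribute
    process(img.data);                     // no blocking read needed
  } else {
    img.onready = function () {            // hypothetical event handler
      process(img.data);
    };
  }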

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-23 Thread Charles Pritchard

On 4/23/2012 6:50 PM, Glenn Maynard wrote:

On Mon, Apr 23, 2012 at 12:43 PM, Darin Fisher  wrote:


That said, I've come around to being OK with getImageDataHD.  As I wrote
recently, this is because it is possible to implement that in a
non-blocking fashion.  It can just queue up a readback.  It only becomes
necessary to block the calling thread when a pixel is dereferenced.  This
affords developers with an opportunity to instead pass the ImageData off to
a web worker before dereferencing.  Hence, the main thread should not jank
up.  This of course requires developers to be very smart about what they
are doing, and for browsers to be smart too.


It's reasonable to expect users to use async APIs in the main thread;
that's just a part of the platform.  It's not reasonable to expect people
to fire up a worker and transfer the buffer to the worker to prevent the
blocking from happening in the main thread.  That's a particularly hackish
workaround, not a replacement for an async API.



Looks like Maciej wants this one in ASAP as a synchronous method.

Devs are still going to jank up their main thread when working with 
getImageDataHD.

As a couple of people here have stated -- there's a lot more data with an HD layer.

Processing filters on the main thread has always been a UI blocker.

Here's a +1 to allowing 
worker.postMessage(document.getCSSCanvasContext('2d','layer','1','1')) 
in web workers.

It's completely non-standard but lets us all off the hook.

-Charles


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-23 Thread Glenn Maynard
On Mon, Apr 23, 2012 at 12:43 PM, Darin Fisher  wrote:

> That said, I've come around to being OK with getImageDataHD.  As I wrote
> recently, this is because it is possible to implement that in a
> non-blocking fashion.  It can just queue up a readback.  It only becomes
> necessary to block the calling thread when a pixel is dereferenced.  This
> affords developers with an opportunity to instead pass the ImageData off to
> a web worker before dereferencing.  Hence, the main thread should not jank
> up.  This of course requires developers to be very smart about what they
> are doing, and for browsers to be smart too.
>

It's reasonable to expect users to use async APIs in the main thread;
that's just a part of the platform.  It's not reasonable to expect people
to fire up a worker and transfer the buffer to the worker to prevent the
blocking from happening in the main thread.  That's a particularly hackish
workaround, not a replacement for an async API.

-- 
Glenn Maynard


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-23 Thread Darin Fisher
On Sun, Apr 22, 2012 at 6:03 PM, Maciej Stachowiak  wrote:

>
> On Apr 20, 2012, at 6:53 AM, Glenn Maynard wrote:
>
> On Thu, Apr 19, 2012 at 11:28 PM, Maciej Stachowiak  wrote:
>
>> You could also address this by adding a way to be notified when the
>> contents of an ImageData are available without blocking. That would work
>> with both vanilla getImageData and the proposed getImageDataHD. It would
>> also give the author the alternative of just blocking (e.g. if they know
>> the buffer is small) or of sending the data off to a worker for processing.
>>
>
> This would result in people writing poor code, based on incorrect
> assumptions.  It doesn't matter how big the buffer is; all that matters is
> how long the drawing calls before the getImageData take.  For example, if
> multiple canvases are being drawn to (eg. on other pages running in the
> same thread), they may share a single drawing queue.
>
> Any time you retrieve image data synchronously, and it happens to require
> a draw flush, you freeze the UI for all pages sharing that thread.  Why is
> that okay for people to do?  We should know better by now than to expose
> APIs that encourage people to block the UI thread, after spending so much
> time trying to fix that mistake in early APIs.
>
> (This should expose a synchronous API in workers if and when Canvas makes
> it there, of course, just like all other APIs.)
>
>
> All JavaScript that runs on the main thread has the potential to "freeze
> the UI for all pages sharing that thread". One can imagine models that
> avoid this by design - for example, running all JavaScript on one or more
> threads separate from the UI thread. But from where we are today, it's not
> practical to apply such a solution. It's also not practical to make every
> API asynchronous - it's just too hard to code that way.
>
> In light of this, we need some sort of rule for what types of APIs should
> only be offered in asynchronous form on the main thread. Among the major
> browser vendors, there seems to be a consensus that this should at least
> include APIs that do any network or disk I/O. Network and disk are slow
> enough and unpredictable enough that an author could never correctly judge
> that it's safe to do synchronous I/O.
>
> Some feel that a call that reads from the GPU may also be in this category
> of "intrinsically too slow/unpredictable". However, we are talking about
> operations with a much lower upper bound on their execution time. We're
> also talking about an operation that has existed in its synchronous form
> (getImageData) for several years, and we don't have evidence of the types
> of severe problems that, for instance, synchronous XHR has been known to
> cause. Indeed, the amount of trouble caused is low enough that no one has
> yet proposed or implemented an async version of this API.
>

The point is not about whether the jank introduced by GPU readbacks is
emergency level.  The point is that it can be costly, and it can interfere
greatly with having an interactive main thread.  If you assume a goal of 60
FPS, then smallish jank can be killer.  It is common for new GL programmers
to call glGetError too often, for example, and that can kill the performance
of the app.  Of course this is nowhere near as bad as synchronous XHR.  It
doesn't have to be at that level to be a problem.  I think it is fair to
focus on 60 FPS as a goal, in other words.

That said, I've come around to being OK with getImageDataHD.  As I wrote
recently, this is because it is possible to implement that in a
non-blocking fashion.  It can just queue up a readback.  It only becomes
necessary to block the calling thread when a pixel is dereferenced.  This
affords developers with an opportunity to instead pass the ImageData off to
a web worker before dereferencing.  Hence, the main thread should not jank
up.  This of course requires developers to be very smart about what they
are doing, and for browsers to be smart too.
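Concretely, the handoff being described might look like the following sketch, 
assuming an ImageData can be posted to a worker at all (which is its own open 
question); worker is an already-created Worker and the HD names follow the 
proposal above:

  var img = ctx.getImageDataHD(0, 0, canvas.width, canvas.height);
  worker.postMessage(img);                 // main thread never dereferences img.data
  worker.onmessage = function (e) {
    ctx.putImageDataHD(e.data, 0, 0);      // paint the processed pixels back
  };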

I'm still sad that getImageData{HD} makes it easy for bad code in one web
page to screw over other web pages.  The argument that this is easy to do
anyway with long-running script is a cop-out.  We should guide developers
to do the right thing in this cooperatively multi-tasking system.

-Darin



>
> If adding an async version has not been an emergency so far, then I don't
> think it is critical enough to block adding scaled backing store support.
> Nor am I convinced that we need to deprecate or phase out the synchronous
> version. Perhaps future evidence will change the picture, but that's how it
> looks to me so far.
>
> Regards,
> Maciej
>
>


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-23 Thread Darin Fisher
On Tue, Apr 17, 2012 at 9:12 PM, Boris Zbarsky  wrote:

> On 4/17/12 6:32 PM, Darin Fisher wrote:
>
>> In Chrome at least, getImageData() doesn't actually block to fetch pixels.
>>  The thread is only blocked when the first dereference of the pixel buffer
>> occurs.
>>
>
> How does that interact with paints that happen after the getImageData
> call?  Or is the point that you send off an async request for a pixel
> snapshot but don't block on it returning until someone tries to reach into
> the pixel buffer?
>
>
To answer your second question:  Yes.

I think the implication for the first question is that you would get back a
snapshot of what the pixel data should have been when you called
getImageData.

-Darin


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-23 Thread Maciej Stachowiak

On Apr 22, 2012, at 7:10 PM, Glenn Maynard wrote:

> On Sun, Apr 22, 2012 at 8:03 PM, Maciej Stachowiak  wrote:
> All JavaScript that runs on the main thread has the potential to "freeze the 
> UI for all pages sharing that thread".
> 
> APIs on the main thread are designed to allow developers to avoid doing just 
> that.  If the *only* way to do something has that potential, then it's a bug 
> in the API.
> 
> Some feel that a call that reads from the GPU may also be in this category of 
> "intrinsically too slow/unpredictable". However, we are talking about 
> operations with a much lower upper bound on their execution time.
> 
> If the reasonable upper bound is high enough to cause visible UI degradation, 
> and an asynchronous API can prevent that, then it needs an asynchronous API.
> 
> If adding an async version has not been an emergency so far, then I don't 
> think it is critical enough to block adding scaled backing store support.
> 
> I hope we don't need an emergency to fix problems.  Nobody's proposing 
> blocking anything, just providing a better API.  This doesn't impose any 
> requirements on implementations who don't need it; it just makes it possible 
> for those who do.  Those who don't can always block and queue the callback to 
> happen as soon as the script returns to the event loop--doing it better is 
> just QoI.

For the record, I don't object to adding an async version of getImageData, or 
some alternate means of getting async behavior. I would, however, object to:

- Removing the existing synchronous getImageData (too much compatibility impact 
at this point)
- Forcing getImageDataHD to only offer an async variant, despite synchronous 
getImageData existing probably indefinitely

At least the latter has been advocated previously on this thread. It's not 
clear to me what you are personally advocating, so I cannot tell if I disagree 
with you.

Regards,
Maciej



Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-22 Thread Glenn Maynard
On Sun, Apr 22, 2012 at 8:03 PM, Maciej Stachowiak  wrote:

> All JavaScript that runs on the main thread has the potential to "freeze
> the UI for all pages sharing that thread".
>

APIs on the main thread are designed to allow developers to avoid doing
just that.  If the *only* way to do something has that potential, then it's
a bug in the API.

Some feel that a call that reads from the GPU may also be in this category
> of "intrinsically too slow/unpredictable". However, we are talking about
> operations with a much lower upper bound on their execution time.
>

If the reasonable upper bound is high enough to cause visible UI
degradation, and an asynchronous API can prevent that, then it needs an
asynchronous API.

If adding an async version has not been an emergency so far, then I don't
> think it is critical enough to block adding scaled backing store support.
>

I hope we don't need an emergency to fix problems.  Nobody's proposing
blocking anything, just providing a better API.  This doesn't impose any
requirements on implementations who don't need it; it just makes it
possible for those who do.  Those who don't can always block and queue the
callback to happen as soon as the script returns to the event loop--doing
it better is just QoI.

-- 
Glenn Maynard


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-22 Thread Maciej Stachowiak

On Apr 20, 2012, at 6:53 AM, Glenn Maynard wrote:

> On Thu, Apr 19, 2012 at 11:28 PM, Maciej Stachowiak  wrote:
> You could also address this by adding a way to be notified when the contents 
> of an ImageData are available without blocking. That would work with both 
> vanilla getImageData and the proposed getImageDataHD. It would also give the 
> author the alternative of just blocking (e.g. if they know the buffer is 
> small) or of sending the data off to a worker for processing.
> 
> This would result in people writing poor code, based on incorrect 
> assumptions.  It doesn't matter how big the buffer is; all that matters is 
> how long the drawing calls before the getImageData take.  For example, if 
> multiple canvases are being drawn to (eg. on other pages running in the same 
> thread), they may share a single drawing queue.
> 
> Any time you retrieve image data synchronously, and it happens to require a 
> draw flush, you freeze the UI for all pages sharing that thread.  Why is that 
> okay for people to do?  We should know better by now than to expose APIs that 
> encourage people to block the UI thread, after spending so much time trying 
> to fix that mistake in early APIs.
> 
> (This should expose a synchronous API in workers if and when Canvas makes it 
> there, of course, just like all other APIs.)

All JavaScript that runs on the main thread has the potential to "freeze the UI 
for all pages sharing that thread". One can imagine models that avoid this by 
design - for example, running all JavaScript on one or more threads separate 
from the UI thread. But from where we are today, it's not practical to apply 
such a solution. It's also not practical to make every API asynchronous - it's 
just too hard to code that way.

In light of this, we need some sort of rule for what types of APIs should only 
be offered in asynchronous form on the main thread. Among the major browser 
vendors, there seems to be a consensus that this should at least include APIs 
that do any network or disk I/O. Network and disk are slow enough and 
unpredictable enough that an author could never correctly judge that it's safe 
to do synchronous I/O.

Some feel that a call that reads from the GPU may also be in this category of 
"intrinsically too slow/unpredictable". However, we are talking about 
operations with a much lower upper bound on their execution time. We're also 
talking about an operation that has existed in its synchronous form 
(getImageData) for several years, and we don't have evidence of the types of 
severe problems that, for instance, synchronous XHR has been known to cause. 
Indeed, the amount of trouble caused is low enough that no one has yet proposed 
or implemented an async version of this API.

If adding an async version has not been an emergency so far, then I don't think 
it is critical enough to block adding scaled backing store support. Nor am I 
convinced that we need to deprecate or phase out the synchronous version. 
Perhaps future evidence will change the picture, but that's how it looks to me 
so far.

Regards,
Maciej



Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-20 Thread Glenn Maynard
On Thu, Apr 19, 2012 at 11:28 PM, Maciej Stachowiak  wrote:

> You could also address this by adding a way to be notified when the
> contents of an ImageData are available without blocking. That would work
> with both vanilla getImageData and the proposed getImageDataHD. It would
> also give the author the alternative of just blocking (e.g. if they know
> the buffer is small) or of sending the data off to a worker for processing.
>

This would result in people writing poor code, based on incorrect
assumptions.  It doesn't matter how big the buffer is; all that matters is
how long the drawing calls before the getImageData take.  For example, if
multiple canvases are being drawn to (eg. on other pages running in the
same thread), they may share a single drawing queue.

Any time you retrieve image data synchronously, and it happens to require a
draw flush, you freeze the UI for all pages sharing that thread.  Why is
that okay for people to do?  We should know better by now than to expose
APIs that encourage people to block the UI thread, after spending so much
time trying to fix that mistake in early APIs.

(This should expose a synchronous API in workers if and when Canvas makes
it there, of course, just like all other APIs.)

-- 
Glenn Maynard


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-19 Thread Maciej Stachowiak

On Apr 17, 2012, at 3:32 PM, Darin Fisher wrote:
> 
> ^^^ This got me thinking...
> 
> In Chrome at least, getImageData() doesn't actually block to fetch pixels.  
> The thread is only blocked when the first dereference of the pixel buffer 
> occurs.  I believe this is done so that a getImageData() followed by 
> putImageData() call will not need to block the calling thread.
> 
> The above suggests that making getImageData() asynchronous would not actually 
> provide any benefit for cases where the page does not dereference the pixel 
> buffer.  Another use case where this comes up is passing the ImageData to a 
> web worker.  If the web worker is the first to dereference the ImageData, 
> then only the web worker thread should block.
> 
> I think this becomes an argument for keeping getImageData() as is.  It 
> assumes that ImageData is just a handle, and we could find another way to 
> discourage dereferencing the pixel buffer on the UI thread.
> 
> Hmm...

You could also address this by adding a way to be notified when the contents of 
an ImageData are available without blocking. That would work with both vanilla 
getImageData and the proposed getImageDataHD. It would also give the author the 
alternative of just blocking (e.g. if they know the buffer is small) or of 
sending the data off to a worker for processing.

Regards,
Maciej


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-17 Thread Glenn Maynard
On Tue, Apr 17, 2012 at 5:32 PM, Darin Fisher  wrote:

> In Chrome at least, getImageData() doesn't actually block to fetch pixels.
>  The thread is only blocked when the first dereference of the pixel buffer
> occurs.  I believe this is done so that a getImageData() followed by
> putImageData() call will not need to block the calling thread.
>

This isn't good enough.  It gives no way for developers to ensure that they
don't access the image data until doing so won't cause a synchronous flush.

-- 
Glenn Maynard


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-17 Thread Boris Zbarsky

On 4/17/12 6:32 PM, Darin Fisher wrote:

In Chrome at least, getImageData() doesn't actually block to fetch pixels.
  The thread is only blocked when the first dereference of the pixel buffer
occurs.


How does that interact with paints that happen after the getImageData 
call?  Or is the point that you send off an async request for a pixel 
snapshot but don't block on it returning until someone tries to reach 
into the pixel buffer?


-Boris


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-17 Thread Oliver Hunt

On Apr 17, 2012, at 3:32 PM, Darin Fisher  wrote:

> 
> 
> On Mon, Apr 16, 2012 at 4:05 PM, Darin Fisher  wrote:
> 
> 
> On Mon, Apr 16, 2012 at 2:57 PM, Oliver Hunt  wrote:
> 
> On Apr 16, 2012, at 2:34 PM, Darin Fisher  wrote:
> 
> > On Mon, Apr 16, 2012 at 1:39 PM, Oliver Hunt  wrote:
> >
> >>
> >> On Apr 16, 2012, at 1:12 PM, Darin Fisher  wrote:
> >>
> >> Glenn summarizes my concerns exactly.  Deferred rendering is indeed the
> >> more precise issue.
> >>
> >> On Mon, Apr 16, 2012 at 12:18 PM, Oliver Hunt  wrote:
> >>
> >>> Could someone construct a demonstration of where the read back of the
> >>> imagedata takes longer than a runloop cycle?
> >>>
> >>
> >> I bet this would be fairly easy to demonstrate.
> >>
> >>
> >> Then by all means do :D
> >>
> >
> >
> > Here's an example.
> >
> > Take http://ie.microsoft.com/testdrive/Performance/FishIETank/, and apply
> > the following diff (changing the draw function):
> >
> > BEGIN DIFF
> > --- fishie.htm.orig 2012-04-16 14:23:29.224864338 -0700
> > +++ fishie.htm  2012-04-16 14:21:38.115489276 -0700
> > @@ -177,10 +177,17 @@
> > // Draw each fish
> > for (var fishie in fish) {
> > fish[fishie].swim();
> > }
> >
> > +
> > +if (window.read_back) {
> > +var data = ctx.getImageData(0, 0, WIDTH, HEIGHT).data;
> > +var x = data[0];  // force readback
> > +}
> > +
> > +
> >//draw fpsometer with the current number of fish
> > fpsMeter.Draw(fish.length);
> > }
> >
> > function Fish() {
> > END DIFF
> >
> > Running on a Mac Pro, with Chrome 19 (WebKit @r111385), with 1000 fish, I
> > get 60 FPS.  Setting read_back to true (using dev tools), drops it down to
> > 30 FPS.
> >
> > Using about:tracing (a tool built into Chrome), I can see that the read
> > pixels call is taking ~15 milliseconds to complete.  The implied GL flush
> > takes ~11 milliseconds.
> >
> > The page was sized to 1400 x 1000 pixels.
> 
> How does that compare to going through the runloop -- how long does it take 
> to get from that point to a timeout being called if you do var start = new 
> Date; setTimeout(function() {console.log(new Date - start);}, 0);
> ?
> 
> The answer is ~0 milliseconds.  I know this because without the getImageData 
> call, the frame rate is 60 FPS.  The page calls the draw() function from an 
> interval timer that has a period of 16.7 milliseconds.  The trace indicates 
> that nearly all of that budget is used up prior to the getImageData() call 
> that I inserted.
> 
>  
> 
> This also ignores the possibility that in requesting the data, I probably 
> also want to do some processing on the data, so for the sake of simplicity 
> how long does it take to subsequently iterate through every pixel and set it 
> to 0?
> 
> That adds about 44 milliseconds.  I would hope that developers would either 
> perform this work in chunks or pass ImageData.data off to a web worker for 
> processing.
> 
> ^^^ This got me thinking...
> 
> In Chrome at least, getImageData() doesn't actually block to fetch pixels.  
> The thread is only blocked when the first dereference of the pixel buffer 
> occurs.  I believe this is done so that a getImageData() followed by 
> putImageData() call will not need to block the calling thread.
> 
> The above suggests that making getImageData() asynchronous would not actually 
> provide any benefit for cases where the page does not dereference the pixel 
> buffer.  Another use case where this comes up is passing the ImageData to a 
> web worker.  If the web worker is the first to dereference the ImageData, 
> then only the web worker thread should block.
> 
> I think this becomes an argument for keeping getImageData() as is.  It 
> assumes that ImageData is just a handle, and we could find another way to 
> discourage dereferencing the pixel buffer on the UI thread.
> 
> Hmm...

A long time ago Dmitry and I tried to get canvas to be available on a worker 
thread, and then through some bizarre set of events that desire morphed into 
the image scaling API, which was then discarded due to being too weird.

It does occur to me though that it could be interesting to allow a canvas 
context to be transferred to a worker.  Think about this for a moment:  It 
would allow arbitrarily expensive rendering to occur in the worker, and then 
you just need to have some flush style API that would allow the worker to 
indicate that the content of the canvas was ready to render -- essentially this 
would be a join() on the UI thread, but the rendering would never block the 
UI.

Alas, when I think about it, I think it may require double-buffering the canvas, 
but it could provide a substantial performance boost, with minimal 
developer-side complexity.
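A purely hypothetical shape for that idea (no such API exists; both the 
transfer and the flush-style call are invented names used only to illustrate 
the paragraph above):

  // Hypothetical: transfer the context to a worker, render there, then signal.
  worker.postMessage(canvas.getContext('2d'));   // invented transfer semantics
  // ...in the worker:
  //   expensiveRendering(ctx);
  //   ctx.commit();   // invented flush-style call: tells the UI thread the frame is ready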

--Oliver

> 
> -Darin
> 
> 
>  
>  
> 
> Remember the goal of making this asynchronous is to improve performance, so 
> the 11ms of drawing does have to occur at some point, you're just hoping that 
> by making things asynchronous you can mask that.  But I doubt you would see an 
> actual improvement in wall clock performance.

Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-17 Thread Darin Fisher
On Mon, Apr 16, 2012 at 4:05 PM, Darin Fisher  wrote:

>
>
> On Mon, Apr 16, 2012 at 2:57 PM, Oliver Hunt  wrote:
>
>>
>> On Apr 16, 2012, at 2:34 PM, Darin Fisher  wrote:
>>
>> > On Mon, Apr 16, 2012 at 1:39 PM, Oliver Hunt  wrote:
>> >
>> >>
>> >> On Apr 16, 2012, at 1:12 PM, Darin Fisher  wrote:
>> >>
>> >> Glenn summarizes my concerns exactly.  Deferred rendering is indeed the
>> >> more precise issue.
>> >>
>> >> On Mon, Apr 16, 2012 at 12:18 PM, Oliver Hunt 
>> wrote:
>> >>
>> >>> Could someone construct a demonstration of where the read back of the
>> >>> imagedata takes longer than a runloop cycle?
>> >>>
>> >>
>> >> I bet this would be fairly easy to demonstrate.
>> >>
>> >>
>> >> Then by all means do :D
>> >>
>> >
>> >
>> > Here's an example.
>> >
>> > Take http://ie.microsoft.com/testdrive/Performance/FishIETank/, and
>> apply
>> > the following diff (changing the draw function):
>> >
>> > BEGIN DIFF
>> > --- fishie.htm.orig 2012-04-16 14:23:29.224864338 -0700
>> > +++ fishie.htm  2012-04-16 14:21:38.115489276 -0700
>> > @@ -177,10 +177,17 @@
>> > // Draw each fish
>> > for (var fishie in fish) {
>> > fish[fishie].swim();
>> > }
>> >
>> > +
>> > +if (window.read_back) {
>> > +var data = ctx.getImageData(0, 0, WIDTH, HEIGHT).data;
>> > +var x = data[0];  // force readback
>> > +}
>> > +
>> > +
>> >//draw fpsometer with the current number of fish
>> > fpsMeter.Draw(fish.length);
>> > }
>> >
>> > function Fish() {
>> > END DIFF
>> >
>> > Running on a Mac Pro, with Chrome 19 (WebKit @r111385), with 1000 fish,
>> I
>> > get 60 FPS.  Setting read_back to true (using dev tools), drops it down
>> to
>> > 30 FPS.
>> >
>> > Using about:tracing (a tool built into Chrome), I can see that the read
>> > pixels call is taking ~15 milliseconds to complete.  The implied GL
>> flush
>> > takes ~11 milliseconds.
>> >
>> > The page was sized to 1400 x 1000 pixels.
>>
>> How does that compare to going through the runloop -- how long does it
>> take to get from that point to a timeout being called if you do var start =
>> new Date; setTimeout(function() {console.log(new Date - start);}, 0);
>> ?
>>
>
> The answer is ~0 milliseconds.  I know this because without the
> getImageData call, the frame rate is 60 FPS.  The page calls the draw()
> function from an interval timer that has a period of 16.7 milliseconds.
>  The trace indicates that nearly all of that budget is used up prior to the
> getImageData() call that I inserted.
>
>
>
>>
>> This also ignores the possibility that in requesting the data, I
>> probably also want to do some processing on the data, so for the sake of
>> simplicity how long does it take to subsequently iterate through every
>> pixel and set it to 0?
>>
>
> That adds about 44 milliseconds.  I would hope that developers would
> either perform this work in chunks or pass ImageData.data off to a web
> worker for processing.
>

^^^ This got me thinking...

In Chrome at least, getImageData() doesn't actually block to fetch pixels.
 The thread is only blocked when the first dereference of the pixel buffer
occurs.  I believe this is done so that a getImageData() followed by
putImageData() call will not need to block the calling thread.

The above suggests that making getImageData() asynchronous would not
actually provide any benefit for cases where the page does not dereference
the pixel buffer.  Another use case where this comes up is passing the
ImageData to a web worker.  If the web worker is the first to dereference
the ImageData, then only the web worker thread should block.

I think this becomes an argument for keeping getImageData() as is.  It
assumes that ImageData is just a handle, and we could find another way to
discourage dereferencing the pixel buffer on the UI thread.

Hmm...

-Darin




>
>
>>
>> Remember the goal of making this asynchronous is to improve performance,
>> so the 11ms of drawing does have to occur at some point, you're just hoping
>> that by making things asynchronous you can mask that.  But I doubt you
>> would see an actual improvement in wall clock performance.
>>
>
> The 11 ms of drawing occurs on a background thread.  Yes, that latency
> exists, but it doesn't have to block the main thread.
>
> Let me reiterate the point I made before.  There can be multiple web pages
> sharing the same main thread.  (Even in Chrome this can be true!)  Blocking
> one web page has the effect of blocking all web pages that share the same
> main thread.
>
> It is not nice for one web page to jank up the browser's main thread and
> as a result make other web pages unresponsive.
>
>
>
>>
>> I also realised something else that I had not previously considered -- if
>> you're doing bitblit based sprite movement the complexity goes way up if
>> this is asynchronous.
>
>
> I don't follow.  Can you clarify?
>
> Thanks,
> -Darin
>


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
On Mon, Apr 16, 2012 at 2:57 PM, Oliver Hunt  wrote:

>
> On Apr 16, 2012, at 2:34 PM, Darin Fisher  wrote:
>
> > On Mon, Apr 16, 2012 at 1:39 PM, Oliver Hunt  wrote:
> >
> >>
> >> On Apr 16, 2012, at 1:12 PM, Darin Fisher  wrote:
> >>
> >> Glenn summarizes my concerns exactly.  Deferred rendering is indeed the
> >> more precise issue.
> >>
> >> On Mon, Apr 16, 2012 at 12:18 PM, Oliver Hunt  wrote:
> >>
> >>> Could someone construct a demonstration of where the read back of the
> >>> imagedata takes longer than a runloop cycle?
> >>>
> >>
> >> I bet this would be fairly easy to demonstrate.
> >>
> >>
> >> Then by all means do :D
> >>
> >
> >
> > Here's an example.
> >
> > Take http://ie.microsoft.com/testdrive/Performance/FishIETank/, and
> apply
> > the following diff (changing the draw function):
> >
> > BEGIN DIFF
> > --- fishie.htm.orig 2012-04-16 14:23:29.224864338 -0700
> > +++ fishie.htm  2012-04-16 14:21:38.115489276 -0700
> > @@ -177,10 +177,17 @@
> > // Draw each fish
> > for (var fishie in fish) {
> > fish[fishie].swim();
> > }
> >
> > +
> > +if (window.read_back) {
> > +var data = ctx.getImageData(0, 0, WIDTH, HEIGHT).data;
> > +var x = data[0];  // force readback
> > +}
> > +
> > +
> >//draw fpsometer with the current number of fish
> > fpsMeter.Draw(fish.length);
> > }
> >
> > function Fish() {
> > END DIFF
> >
> > Running on a Mac Pro, with Chrome 19 (WebKit @r111385), with 1000 fish, I
> > get 60 FPS.  Setting read_back to true (using dev tools), drops it down
> to
> > 30 FPS.
> >
> > Using about:tracing (a tool built into Chrome), I can see that the read
> > pixels call is taking ~15 milliseconds to complete.  The implied GL flush
> > takes ~11 milliseconds.
> >
> > The page was sized to 1400 x 1000 pixels.
>
> How does that compare to going through the runloop -- how long does it
> take to get from that point to a timeout being called if you do var start =
> new Date; setTimeout(function() {console.log(new Date - start);}, 0);
> ?
>

The answer is ~0 milliseconds.  I know this because without the
getImageData call, the frame rate is 60 FPS.  The page calls the draw()
function from an interval timer that has a period of 16.7 milliseconds.
 The trace indicates that nearly all of that budget is used up prior to the
getImageData() call that I inserted.



>
> This also ignores the possibility that in requesting the data, I probably
> also want to do some processing on the data, so for the sake of simplicity
> how long does it take to subsequently iterate through every pixel and set
> it to 0?
>

That adds about 44 milliseconds.  I would hope that developers would either
perform this work in chunks or pass ImageData.data off to a web worker for
processing.


>
> Remember the goal of making this asynchronous is to improve performance,
> so the 11ms of drawing does have to occur at some point, you're just hoping
> that by making things asynchronous you can mask that.  But I doubt you
> would see an actual improvement in wall clock performance.
>

The 11 ms of drawing occurs on a background thread.  Yes, that latency
exists, but it doesn't have to block the main thread.

Let me reiterate the point I made before.  There can be multiple web pages
sharing the same main thread.  (Even in Chrome this can be true!)  Blocking
one web page has the effect of blocking all web pages that share the same
main thread.

It is not nice for one web page to jank up the browser's main thread and as
a result make other web pages unresponsive.



>
> I also realised something else that I had not previously considered -- if
> you're doing bitblit based sprite movement the complexity goes way up if
> this is asynchronous.


I don't follow.  Can you clarify?

Thanks,
-Darin


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Glenn Maynard
On Mon, Apr 16, 2012 at 2:18 PM, Oliver Hunt  wrote:

> Could someone construct a demonstration of where the read back of the
> imagedata takes longer than a runloop cycle?
>

"Runloop" doesn't mean anything to me (nor to Google [1], nor to the HTML
specification).  If you're talking about WebKit-specific limitations,
please explain what you're talking about (most of us aren't WebKit
developers).

If you make an asynchronous call, the call should execute as soon as
possible after returning to the event loop; if there are no other jobs on
that task queue or other drawing operations pending, then it should happen
with a near-zero delay.  Where this doesn't happen in practice, it's
something that should be fixed. (That would cause problems with many other
async APIs.  For example, if you perform an asynchronous File API read, and
that read has an additional 2ms delay, then sequentially reading a file 64k
at a time would cap out at 32MB/sec.  It should only artificially delay
event queue tasks if it's actually necessary for UI responsiveness.)
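(Spelling out that figure: 64 KB per read divided by 2 ms of added delay per 
read is 32 MB/s, regardless of how fast the underlying storage actually is.)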


[1] https://www.google.com/#sclient=psy-ab&hl=en&q=site:w3.org+%22runloop%22



On Mon, Apr 16, 2012 at 3:45 PM, Oliver Hunt  wrote:

> The IO case has a best case of hundreds of milliseconds, whereas that is
> likely to be close to the worst case on the graphics side.
>

(Actually, the best case should be almost instantaneous, if you're using
XHR to read from an object URL that points to a Blob stored or cached in
RAM, or for network requests that can be served out of network cache.
 You're correct in the more common cases, of course, though I'd say the
best case for network requests is in the tens of milliseconds, not
hundreds.)


On Mon, Apr 16, 2012 at 4:06 PM, Maciej Stachowiak  wrote:

> Would the async version still require a flush and immediate readback if
> you do any drawing after the get call but before the data is returned?
>

So long as the implementation handles all drawing calls asynchronously, no.
 The later drawing operations will simply be queued to happen after the
completion of the readback.

If the implementation can do some things async and some not, then it may
still have to block.  That's just QoI, of course: this is meant to allow
implementations to queue as much as possible, not to require that they do.

--
Glenn Maynard


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Oliver Hunt

On Apr 16, 2012, at 2:34 PM, Darin Fisher  wrote:

> On Mon, Apr 16, 2012 at 1:39 PM, Oliver Hunt  wrote:
> 
>> 
>> On Apr 16, 2012, at 1:12 PM, Darin Fisher  wrote:
>> 
>> Glenn summarizes my concerns exactly.  Deferred rendering is indeed the
>> more precise issue.
>> 
>> On Mon, Apr 16, 2012 at 12:18 PM, Oliver Hunt  wrote:
>> 
>>> Could someone construct a demonstration of where the read back of the
>>> imagedata takes longer than a runloop cycle?
>>> 
>> 
>> I bet this would be fairly easy to demonstrate.
>> 
>> 
>> Then by all means do :D
>> 
> 
> 
> Here's an example.
> 
> Take http://ie.microsoft.com/testdrive/Performance/FishIETank/, and apply
> the following diff (changing the draw function):
> 
> BEGIN DIFF
> --- fishie.htm.orig 2012-04-16 14:23:29.224864338 -0700
> +++ fishie.htm  2012-04-16 14:21:38.115489276 -0700
> @@ -177,10 +177,17 @@
> // Draw each fish
> for (var fishie in fish) {
> fish[fishie].swim();
> }
> 
> +
> +if (window.read_back) {
> +var data = ctx.getImageData(0, 0, WIDTH, HEIGHT).data;
> +var x = data[0];  // force readback
> +}
> +
> +
>//draw fpsometer with the current number of fish
> fpsMeter.Draw(fish.length);
> }
> 
> function Fish() {
> END DIFF
> 
> Running on a Mac Pro, with Chrome 19 (WebKit @r111385), with 1000 fish, I
> get 60 FPS.  Setting read_back to true (using dev tools), drops it down to
> 30 FPS.
> 
> Using about:tracing (a tool built into Chrome), I can see that the read
> pixels call is taking ~15 milliseconds to complete.  The implied GL flush
> takes ~11 milliseconds.
> 
> The page was sized to 1400 x 1000 pixels.

How does that compare to going through the runloop -- how long does it take to 
get from that point to a timeout being called if you do var start = new Date; 
setTimeout(function() {console.log(new Date - start);}, 0);
?

This also ignores the possibility that in requesting the data, I probably also 
want to do some processing on the data, so for the sake of simplicity how long 
does it take to subsequently iterate through every pixel and set it to 0?

Remember the goal of making this asynchronous is to improve performance, so the 
11ms of drawing does have to occur at some point, you're just hoping that by 
making things asynchronous you can mask that.  But I doubt you would see an 
actual improvement in wall clock performance.

I also realised something else that I had not previously considered -- if 
you're doing bitblit based sprite movement the complexity goes way up if this 
is asynchronous.

--Oliver



Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
On Mon, Apr 16, 2012 at 2:06 PM, Maciej Stachowiak  wrote:

>
> On Apr 16, 2012, at 12:10 PM, Glenn Maynard wrote:
>
> On Mon, Apr 16, 2012 at 1:59 PM, Oliver Hunt  wrote:
>>
>> I don't understand why adding a runloop cycle to any read seems like
>> something that would introduce a much more noticeable delay than a memcopy.
>>
>
> The use case is deferred rendering.  Canvas drawing calls don't need to
> complete synchronously (before the drawing call returns); they can be
> queued, so API calls return immediately and the actual draws can happen in
> a thread or on the GPU.  This is exactly like OpenGL's pipelining model
> (and might well be implemented using it, on some platforms).
>
> The problem is that if you have a bunch of that work pipelined, and you
> perform a synchronous readback, you have to flush the queue.  In OpenGL
> terms, you have to call glFinish().  That might take long enough to cause a
> visible UI hitch.  By making the readback asynchronous, you can defer the
> actual operation until the operations before it have been completed, so you
> avoid any such blocking in the UI thread.
>
>
>>  I also don't understand what makes reading from the GPU so expensive
>> that adding a runloop cycle is necessary for good perf, but it's
>> unnecessary for a write.
>>
>
> It has nothing to do with how expensive the GPU read is, and everything to
> do with the need to flush the pipeline.  Writes don't need to do this; they
> simply queue, like any other drawing operation.
>
>
> Would the async version still require a flush and immediate readback if
> you do any drawing after the get call but before the data is returned?
>
>
I think it would not need to.  It would just return a snapshot of the state
of the canvas up to the point where the asyncGetImageData call was made.
 This makes sense if you consider both draw calls and asyncGetImageData
calls being put on the same work queue (without any change in their
respective order).
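A sketch of that ordering (asyncGetImageData is the hypothetical name used in 
this thread, not a specified API; w and h are placeholders):

  ctx.fillRect(0, 0, 100, 100);                        // queued: draw #1
  ctx.asyncGetImageData(0, 0, w, h, function (img) {   // queued between the two draws
    // img is a snapshot reflecting draw #1 only
  });
  ctx.fillRect(100, 0, 100, 100);                      // queued: draw #2, no flush forced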

-Darin


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
On Mon, Apr 16, 2012 at 1:39 PM, Oliver Hunt  wrote:

>
> On Apr 16, 2012, at 1:12 PM, Darin Fisher  wrote:
>
> Glenn summarizes my concerns exactly.  Deferred rendering is indeed the
> more precise issue.
>
> On Mon, Apr 16, 2012 at 12:18 PM, Oliver Hunt  wrote:
>
>> Could someone construct a demonstration of where the read back of the
>> imagedata takes longer than a runloop cycle?
>>
>
> I bet this would be fairly easy to demonstrate.
>
>
> Then by all means do :D
>


Here's an example.

Take http://ie.microsoft.com/testdrive/Performance/FishIETank/, and apply
the following diff (changing the draw function):

BEGIN DIFF
--- fishie.htm.orig 2012-04-16 14:23:29.224864338 -0700
+++ fishie.htm  2012-04-16 14:21:38.115489276 -0700
@@ -177,10 +177,17 @@
 // Draw each fish
 for (var fishie in fish) {
 fish[fishie].swim();
 }

+
+if (window.read_back) {
+var data = ctx.getImageData(0, 0, WIDTH, HEIGHT).data;
+var x = data[0];  // force readback
+}
+
+
//draw fpsometer with the current number of fish
 fpsMeter.Draw(fish.length);
 }

 function Fish() {
END DIFF

Running on a Mac Pro, with Chrome 19 (WebKit @r111385), with 1000 fish, I
get 60 FPS.  Setting read_back to true (using dev tools), drops it down to
30 FPS.

Using about:tracing (a tool built into Chrome), I can see that the read
pixels call is taking ~15 milliseconds to complete.  The implied GL flush
takes ~11 milliseconds.

The page was sized to 1400 x 1000 pixels.
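For clarity, read_back is just a global toggled by hand from the DevTools 
console:

  window.read_back = true;   // switches the draw loop above onto the getImageData + dereference path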

-Darin



>
>
>
>> You're asking for significant additional complexity for content authors,
>> with a regression in general case performance, it would be good to see if
>> it's possible to create an example, even if it's not something any sensible
>> author would do, where there is a performance improvement.
>>
>> Remember, the application is only marginally better when it's not
>> painting due to waiting for a runloop cycle than it is when blocked waiting
>> on a graphics flush.
>>
>
> You can do a lot of other things during this time.  For example, you can
> prepare the next animation frame.  You can run JavaScript garbage
> collection.
>
> Also, it is common for a browser thread to handle animations for multiple
> windows.  If you have animations going in both windows, it would be nice
> for those animations to update in parallel instead of being serialized.
>
>
> None of which changes the fact that your actual developer now needs more
> complicated code, and has slower performance.  If I'm doing purely
> imagedata based code then there isn't anything to defer, and so all you're
> doing is adding runloop latency.  The other examples you give don't really
> apply either.
>
> Most imagedata-based code I've seen is not GC heavy, if you're performing
> animations using css animations, etc then I believe that the browser is
> already able to hoist them onto another thread.  If you have animations in
> multiple windows then chrome doesn't have a problem because those windows
> are a separate process, and if you're not, then all you're doing is
> allowing one runloop of work (which may or may not be enough to get a paint
> done) before you start processing your ImageData.  I'm really not sure what
> it is that you're doing with your ImageData such that it takes so much less
> time than the canvas work, but it seems remarkable that there's some
> operation you can perform in JS over all the data returned that takes less
> time that the latency introduced by an async API.
>
> --Oliver
>
>
> -Darin
>
>
>
>>
>> Also, if the argument is wrt deferred rendering rather than GPU copyback,
>> can we drop GPU related arguments from this thread?
>>
>> --Oliver
>>
>> On Apr 16, 2012, at 12:10 PM, Glenn Maynard  wrote:
>>
>> On Mon, Apr 16, 2012 at 1:59 PM, Oliver Hunt  wrote:
>>>
>>> I don't understand why adding a runloop cycle to any read seems like
>>> something that would introduce a much more noticeable delay than a memcopy.
>>>
>>
>> The use case is deferred rendering.  Canvas drawing calls don't need to
>> complete synchronously (before the drawing call returns); they can be
>> queued, so API calls return immediately and the actual draws can happen in
>> a thread or on the GPU.  This is exactly like OpenGL's pipelining model
>> (and might well be implemented using it, on some platforms).
>>
>> The problem is that if you have a bunch of that work pipelined, and you
>> perform a synchronous readback, you have to flush the queue.  In OpenGL
>> terms, you have to call glFinish().  That might take long enough to cause a
>> visible UI hitch.  By making the readback asynchronous, you can defer the
>> actual operation until the operations before it have been completed, so you
>> avoid any such blocking in the UI thread.
>>
>>
>>>  I also don't understand what makes reading from the GPU so expensive
>>> that adding a runloop cycle is necessary for good perf, but it's unnecessary
>>> for a write.

Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Maciej Stachowiak

On Apr 16, 2012, at 12:10 PM, Glenn Maynard wrote:

> On Mon, Apr 16, 2012 at 1:59 PM, Oliver Hunt  wrote: 
> I don't understand why adding a runloop cycle to any read seems like 
> something that would introduce a much more noticeable delay than a memcopy.
> 
> The use case is deferred rendering.  Canvas drawing calls don't need to 
> complete synchronously (before the drawing call returns); they can be queued, 
> so API calls return immediately and the actual draws can happen in a thread 
> or on the GPU.  This is exactly like OpenGL's pipelining model (and might 
> well be implemented using it, on some platforms).
> 
> The problem is that if you have a bunch of that work pipelined, and you 
> perform a synchronous readback, you have to flush the queue.  In OpenGL 
> terms, you have to call glFinish().  That might take long enough to cause a 
> visible UI hitch.  By making the readback asynchronous, you can defer the 
> actual operation until the operations before it have been completed, so you 
> avoid any such blocking in the UI thread.
>  
>  I also don't understand what makes reading from the GPU so expensive that 
> adding a runloop cycle is necessary for good perf, but it's unnecessary for a 
> write.
> 
> It has nothing to do with how expensive the GPU read is, and everything to do 
> with the need to flush the pipeline.  Writes don't need to do this; they 
> simply queue, like any other drawing operation.

Would the async version still require a flush and immediate readback if you do 
any drawing after the get call but before the data is returned?

Regards,
Maciej




Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Oliver Hunt

On Apr 16, 2012, at 2:00 PM, Darin Fisher  wrote:
> I have learned that it is not commonly accepted that reading ImageData can be 
> slow.  I had assumed otherwise.

Yes, it's possible to make reading image data slow, but I can make _anything_ 
slow.  I could make postMessage slow even though it's ostensibly asynchronous 
simply by triggering a copy of a large enough object.

The problem I have is that there hasn't been any demonstration that making the 
data read asynchronous will save substantial time vs. a) the typical operations 
that will subsequently be performed on the retrieved data or b) that it would 
generally take longer than a runloop cycle.

Knowingly adding complexity without any good metrics that show getImageData{HD} 
is sufficiently expensive to warrant that complexity seems like the wrong path 
to take.

> 
> -Darin



Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
On Mon, Apr 16, 2012 at 1:45 PM, Oliver Hunt  wrote:

>
> On Apr 16, 2012, at 11:07 AM, Darin Fisher  wrote:
> >
> > See synchronous XMLHttpRequest.  I'm sure every browser vendor wishes
> that
> > didn't exist.  Note how we recently withdrew support for synchronous
> > ArrayBuffer access on XHR?  We did this precisely to discourage use of
> > synchronous mode XHR. Doing so actually broke some existing web pages.
>  The
> > pain was deemed worth it.
>
> Yes, but the reason for this is very simple: synchronous IO can take a
> literally interminable amount of time, in which nothing else can happen.
>  We're talking about something entirely client side, that is theoretically
> going to be done sufficiently quickly to update a frame.
>
> The IO case has a best case of hundreds of milliseconds, whereas that is
> likely to be close to the worst case on the graphics side.
>
>
Sorry, I did not make my point clear.  I did not intend to equate network
delays to graphics delays, as they are obviously not on the same order of
magnitude.  Let me try again.

We decided that we didn't like synchronous XHR.  We decided to withhold new
features from synchronous XHR.  I believe we did so in part to discourage
use of synchronous XHR and encourage use of asynchronous XHR.

I was suggesting that we have an opportunity to apply a similar approach to
canvas ImageData.

I have learned that it is not commonly accepted that reading ImageData can
be slow.  I had assumed otherwise.

-Darin


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Oliver Hunt

On Apr 16, 2012, at 11:07 AM, Darin Fisher  wrote:
> 
> See synchronous XMLHttpRequest.  I'm sure every browser vendor wishes that
> didn't exist.  Note how we recently withdrew support for synchronous
> ArrayBuffer access on XHR?  We did this precisely to discourage use of
> synchronous mode XHR. Doing so actually broke some existing web pages.  The
> pain was deemed worth it.

Yes, but the reason for this is very simple: synchronous IO can take a 
literally interminable amount of time, in which nothing else can happen.  We're 
talking about something entirely client side, that is theoretically going to be 
done sufficiently quickly to update a frame.

The IO case has a best case of hundreds of milliseconds, whereas that is likely 
to be close to the worst case on the graphics side.

--Oliver


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Tim Streater
On 16 Apr 2012 at 19:07, Darin Fisher  wrote: 

> Aren't we missing an opportunity here?  By giving web developers this easy
> migration path, you're also giving up the opportunity to encourage them to
> use a better API.  Asynchronous APIs are harder to use, and that's why we
> need to encourage their adoption.  If you just give people a synchronous
> version that accomplishes the same thing, then they will just use that,
> even if doing so causes their app to perform poorly.
>
> See synchronous XMLHttpRequest.  I'm sure every browser vendor wishes that
> didn't exist.  Note how we recently withdrew support for synchronous
> ArrayBuffer access on XHR?  We did this precisely to discourage use of
> synchronous mode XHR. Doing so actually broke some existing web pages.  The
> pain was deemed worth it.

In my app I have about 90 async XMLHttpRequest calls. I have one synchronous 
one that I'd really like to keep as it facilitates a clean tidy up if the user 
ignores my Exit button and quits by closing the window. When the app closes I 
need to run a script in order to shut down a local apache instance amongst 
other things. I hope I'm not going to find this to be a problem if synchronous 
mode XMLHttpRequest is removed from Safari at some future point. My code looks 
like this:


function quitbyClose ()
 {
 // User clicked window close button. Must make a synchronous ajax call to tidy up.
 // We have to ensure SESE with no "return" being executed due to the odd way that
 // onbeforeunload operates, in two cases:
 // 1) Where Safari re-opens myapp after user quits Safari and user has already restarted myapp
 // 2) Where user might try to start this file by hand

 var request, data;

 if (portnum > 0)
  {
  closeWindows ();   // Close any popups

  data = "datarootpath=" + encodeURIComponent (datarootpath) + "&debugfl=" + debugfl;

  request = new XMLHttpRequest ();
  request.open ("POST", "http://localhost:" + portnum + "/bin/myapp-terminate.php", false);
  request.setRequestHeader ("Content-Type", "application/x-www-form-urlencoded; charset=utf-8");
  request.onreadystatechange = function () { if (request.readyState != 4) return false; };
  request.send (data);
  }
 }


--
Cheers  --  Tim


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Oliver Hunt

On Apr 16, 2012, at 1:12 PM, Darin Fisher  wrote:

> Glenn summarizes my concerns exactly.  Deferred rendering is indeed the more 
> precise issue.
> 
> On Mon, Apr 16, 2012 at 12:18 PM, Oliver Hunt  wrote:
> Could someone construct a demonstration of where the read back of the 
> imagedata takes longer than a runloop cycle?
> 
> I bet this would be fairly easy to demonstrate.

Then by all means do :D

> 
> 
> You're asking for significant additional complexity for content authors, with 
> a regression in general case performance, it would be good to see if it's 
> possible to create an example, even if it's not something any sensible author 
> would do, where their is a performance improvement.
> 
> Remember, the application is only marginally better when it's not painting 
> due to waiting for a runloop cycle than it is when blocked waiting on a 
> graphics flush.
> 
> You can do a lot of other things during this time.  For example, you can 
> prepare the next animation frame.  You can run JavaScript garbage collection.
> 
> Also, it is common for a browser thread to handle animations for multiple 
> windows.  If you have animations going in both windows, it would be nice for 
> those animations to update in parallel instead of being serialized.

None of which changes the fact that your actual developer now needs more 
complicated code, and has slower performance.  If I'm doing purely imagedata-based 
code then there isn't anything to defer, so all you're doing is adding runloop 
latency.  The other examples you give don't really apply either.

Most imagedata-based code I've seen is not GC heavy, and if you're performing 
animations using CSS animations, etc., then I believe the browser is already 
able to hoist them onto another thread.  If you have animations in multiple 
windows then Chrome doesn't have a problem, because those windows are separate 
processes; and if you're not, then all you're doing is allowing one runloop of 
work (which may or may not be enough to get a paint done) before you start 
processing your ImageData.  I'm really not sure what it is that you're doing 
with your ImageData such that it takes so much less time than the canvas work, 
but it seems remarkable that there's some operation you can perform in JS over 
all the data returned that takes less time than the latency introduced by an 
async API.

--Oliver

> 
> -Darin
> 
>  
> 
> Also, if the argument is wrt deferred rendering rather than GPU copyback, can 
> we drop GPU related arguments from this thread?
> 
> --Oliver
> 
> On Apr 16, 2012, at 12:10 PM, Glenn Maynard  wrote:
> 
>> On Mon, Apr 16, 2012 at 1:59 PM, Oliver Hunt  wrote: 
>> I don't understand why adding a runloop cycle to any read seems like 
>> something that would introduce a much more noticable delay than a memcopy.
>> 
>> The use case is deferred rendering.  Canvas drawing calls don't need to 
>> complete synchronously (before the drawing call returns); they can be 
>> queued, so API calls return immediately and the actual draws can happen in a 
>> thread or on the GPU.  This is exactly like OpenGL's pipelining model (and 
>> might well be implemented using it, on some platforms).
>> 
>> The problem is that if you have a bunch of that work pipelined, and you 
>> perform a synchronous readback, you have to flush the queue.  In OpenGL 
>> terms, you have to call glFinish().  That might take long enough to cause a 
>> visible UI hitch.  By making the readback asynchronous, you can defer the 
>> actual operation until the operations before it have been completed, so you 
>> avoid any such blocking in the UI thread.
>>  
>>  I also don't understand what makes reading from the GPU so expensive that 
>> adding a runloop cycle is necessary for good perf, but it's unnecessary for 
>> a write.
>> 
>> It has nothing to do with how expensive the GPU read is, and everything to 
>> do with the need to flush the pipeline.  Writes don't need to do this; they 
>> simply queue, like any other drawing operation.
>> 
>> -- 
>> Glenn Maynard
>> 
>> 
> 
> 



Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
Glenn summarizes my concerns exactly.  Deferred rendering is indeed the
more precise issue.

On Mon, Apr 16, 2012 at 12:18 PM, Oliver Hunt  wrote:

> Could someone construct a demonstration of where the read back of the
> imagedata takes longer than a runloop cycle?
>

I bet this would be fairly easy to demonstrate.


> You're asking for significant additional complexity for content authors,
> with a regression in general case performance, it would be good to see if
> it's possible to create an example, even if it's not something any sensible
> author would do, where their is a performance improvement.
>
> Remember, the application is only marginally better when it's not painting
> due to waiting for a runloop cycle than it is when blocked waiting on a
> graphics flush.
>

You can do a lot of other things during this time.  For example, you can
prepare the next animation frame.  You can run JavaScript garbage
collection.

Also, it is common for a browser thread to handle animations for multiple
windows.  If you have animations going in both windows, it would be nice
for those animations to update in parallel instead of being serialized.

-Darin



>
> Also, if the argument is wrt deferred rendering rather than GPU copyback,
> can we drop GPU related arguments from this thread?
>
> --Oliver
>
> On Apr 16, 2012, at 12:10 PM, Glenn Maynard  wrote:
>
> On Mon, Apr 16, 2012 at 1:59 PM, Oliver Hunt  wrote:
>>
>> I don't understand why adding a runloop cycle to any read seems like
>> something that would introduce a much more noticable delay than a memcopy.
>>
>
> The use case is deferred rendering.  Canvas drawing calls don't need to
> complete synchronously (before the drawing call returns); they can be
> queued, so API calls return immediately and the actual draws can happen in
> a thread or on the GPU.  This is exactly like OpenGL's pipelining model
> (and might well be implemented using it, on some platforms).
>
> The problem is that if you have a bunch of that work pipelined, and you
> perform a synchronous readback, you have to flush the queue.  In OpenGL
> terms, you have to call glFinish().  That might take long enough to cause a
> visible UI hitch.  By making the readback asynchronous, you can defer the
> actual operation until the operations before it have been completed, so you
> avoid any such blocking in the UI thread.
>
>
>>  I also don't understand what makes reading from the GPU so expensive
>> that adding a runloop cycle is necessary for good perf, but it's
>> unnecessary for a write.
>>
>
> It has nothing to do with how expensive the GPU read is, and everything to
> do with the need to flush the pipeline.  Writes don't need to do this; they
> simply queue, like any other drawing operation.
>
> --
> Glenn Maynard
>
>
>
>


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Oliver Hunt
Could someone construct a demonstration of where the read back of the imagedata 
takes longer than a runloop cycle?

You're asking for significant additional complexity for content authors, with a 
regression in general-case performance; it would be good to see if it's 
possible to create an example, even if it's not something any sensible author 
would do, where there is a performance improvement.

Remember, the application is only marginally better when it's not painting due 
to waiting for a runloop cycle than it is when blocked waiting on a graphics 
flush.

Also, if the argument is wrt deferred rendering rather than GPU copyback, can 
we drop GPU related arguments from this thread?

--Oliver

On Apr 16, 2012, at 12:10 PM, Glenn Maynard  wrote:

> On Mon, Apr 16, 2012 at 1:59 PM, Oliver Hunt  wrote: 
> I don't understand why adding a runloop cycle to any read seems like 
> something that would introduce a much more noticable delay than a memcopy.
> 
> The use case is deferred rendering.  Canvas drawing calls don't need to 
> complete synchronously (before the drawing call returns); they can be queued, 
> so API calls return immediately and the actual draws can happen in a thread 
> or on the GPU.  This is exactly like OpenGL's pipelining model (and might 
> well be implemented using it, on some platforms).
> 
> The problem is that if you have a bunch of that work pipelined, and you 
> perform a synchronous readback, you have to flush the queue.  In OpenGL 
> terms, you have to call glFinish().  That might take long enough to cause a 
> visible UI hitch.  By making the readback asynchronous, you can defer the 
> actual operation until the operations before it have been completed, so you 
> avoid any such blocking in the UI thread.
>  
>  I also don't understand what makes reading from the GPU so expensive that 
> adding a runloop cycle is necessary for good perf, but it's unnecessary for a 
> write.
> 
> It has nothing to do with how expensive the GPU read is, and everything to do 
> with the need to flush the pipeline.  Writes don't need to do this; they 
> simply queue, like any other drawing operation.
> 
> -- 
> Glenn Maynard
> 
> 



Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Glenn Maynard
On Mon, Apr 16, 2012 at 1:59 PM, Oliver Hunt  wrote:
>
> I don't understand why adding a runloop cycle to any read seems like
> something that would introduce a much more noticable delay than a memcopy.
>

The use case is deferred rendering.  Canvas drawing calls don't need to
complete synchronously (before the drawing call returns); they can be
queued, so API calls return immediately and the actual draws can happen in
a thread or on the GPU.  This is exactly like OpenGL's pipelining model
(and might well be implemented using it, on some platforms).

The problem is that if you have a bunch of that work pipelined, and you
perform a synchronous readback, you have to flush the queue.  In OpenGL
terms, you have to call glFinish().  That might take long enough to cause a
visible UI hitch.  By making the readback asynchronous, you can defer the
actual operation until the operations before it have been completed, so you
avoid any such blocking in the UI thread.


>  I also don't understand what makes reading from the GPU so expensive that
> adding a runloop cycle is necessary for good perf, but it's unnecessary for
> a write.
>

It has nothing to do with how expensive the GPU read is, and everything to
do with the need to flush the pipeline.  Writes don't need to do this; they
simply queue, like any other drawing operation.
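
For concreteness, a minimal sketch of the two shapes being contrasted here, assuming a 
hypothetical callback-taking variant of the proposed getImageDataHD (nothing like it is 
specified; the extra argument and the processPixels helper are illustrative only):

var ctx = canvas.getContext ('2d');
ctx.drawImage (img, 0, 0);        // queued; the actual draw may happen later, off the UI thread

// Synchronous readback: the queued work must finish first (the glFinish case),
// so the UI thread can stall here.
var pixels = ctx.getImageData (0, 0, canvas.width, canvas.height);

// Hypothetical asynchronous readback (name and signature illustrative only):
// the queue drains off the UI thread, and the callback fires once pixels are ready.
ctx.getImageDataHD (0, 0, canvas.width, canvas.height, function (imageData) {
  processPixels (imageData.data);  // processPixels is a placeholder
});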

-- 
Glenn Maynard


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Oliver Hunt

On Apr 16, 2012, at 11:38 AM, Darin Fisher  wrote:

> On Mon, Apr 16, 2012 at 11:17 AM, Oliver Hunt  wrote:
> 
> On Apr 16, 2012, at 11:07 AM, Darin Fisher  wrote:
> 
> >
> > Carrots and Sticks.
> >
> > Aren't we missing an opportunity here?  By giving web developers this easy
> > migration path, you're also giving up the opportunity to encourage them to
> > use a better API.  Asynchronous APIs are harder to use, and that's why we
> > need to encourage their adoption.  If you just give people a synchronous
> > version that accomplishes the same thing, then they will just use that,
> > even if doing so causes their app to perform poorly.
> >
> > See synchronous XMLHttpRequest.  I'm sure every browser vendor wishes that
> > didn't exist.  Note how we recently withdrew support for synchronous
> > ArrayBuffer access on XHR?  We did this precisely to discourage use of
> > synchronous mode XHR. Doing so actually broke some existing web pages.  The
> > pain was deemed worth it.
> >
> > GPU readback of a HD buffer is going to suck.  Any use of this new API is
> > going to suck.
> >
> > -Darin
> >
> 
> Any use of imagedata i've seen assumes that they can avoid intermediate 
> states in the canvas ever being visible, if you make reading and writing the 
> data asynchronous you break that invariant and suddenly makes things much 
> harder for the user.
> 
> I agree with Charles Pritchard that it is only the reading of pixel data that 
> should be asynchronous.
> 
> I think developers could learn to cope with this new design just as they do 
> with other asynchronous facets of the platform.
> 
>  
> 
> The reason we don't want IO synchronous is because IO can take a potentially 
> unbound amount of time, if you're on a platform that makes a memcpy take 
> similarly unbound time, i recommend that you work around it.
> 
> Of course, GPU readbacks do not compare to network IO.  However, if the goal 
> is to achieve smooth animations, then it is important that the main thread 
> not hitch for multiple animation frames.  GPU readbacks are irregular in 
> duration and can sometimes be quite expensive if the GPU pipeline is heavily 
> burdened.
> 
>  
> 
> Anyway, the sensible approach to imagedata + hardware backed canvas is to 
> revert to a software backed canvas, as once someone has used imagedata once, 
> they're likely to do it again (and again, and again) so it is probably a win 
> to just do everything in software at that point.  Presumably you could 
> through in heuristics to determine whether or not it's worth going back to 
> the GPU at some point, but many of the common image data use cases will have 
> awful perf if you try to keep them on the GPU 100% of the time.
> 
> I don't think it is OK if at application startup (or animation startup) there 
> is a big UI glitch as the system determines that it should not GPU-back a 
> canvas.  We have the opportunity now to design an API that does not have that 
> bug.
> 
> Why don't you want to take advantage of this opportunity?

We can already do imagedata-based access on a GPU-backed canvas in WebKit 
without ill effects, simply by pulling the canvas off GPU memory.   I don't 
understand why adding a runloop cycle to any read seems like something that 
would introduce a much more noticeable delay than a memcpy.  I also don't 
understand what makes reading from the GPU so expensive that adding a runloop 
cycle is necessary for good perf, but it's unnecessary for a write.  This feels 
like an argument along the lines of "we hate synchronous APIs, but they make 
sense for graphics.  Let's try and make at least part of this asynchronous to 
satisfy that particular desire."

Moving data to and from the GPU may be expensive, but I doubt it holds a candle 
to the cost of waiting for a full runloop cycle, unless you're doing something 
really inefficient in your backing store management.  The fact is that 
ImageData is a pixel manipulation API, and any such API is not conducive to 
good performance on the GPU.

--Oliver

> 
> -Darin
> 
> 
> 
>  

Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
On Mon, Apr 16, 2012 at 11:17 AM, Oliver Hunt  wrote:

>
> On Apr 16, 2012, at 11:07 AM, Darin Fisher  wrote:
>
> >
> > Carrots and Sticks.
> >
> > Aren't we missing an opportunity here?  By giving web developers this
> easy
> > migration path, you're also giving up the opportunity to encourage them
> to
> > use a better API.  Asynchronous APIs are harder to use, and that's why we
> > need to encourage their adoption.  If you just give people a synchronous
> > version that accomplishes the same thing, then they will just use that,
> > even if doing so causes their app to perform poorly.
> >
> > See synchronous XMLHttpRequest.  I'm sure every browser vendor wishes
> that
> > didn't exist.  Note how we recently withdrew support for synchronous
> > ArrayBuffer access on XHR?  We did this precisely to discourage use of
> > synchronous mode XHR. Doing so actually broke some existing web pages.
>  The
> > pain was deemed worth it.
> >
> > GPU readback of a HD buffer is going to suck.  Any use of this new API is
> > going to suck.
> >
> > -Darin
> >
>
> Any use of imagedata i've seen assumes that they can avoid intermediate
> states in the canvas ever being visible, if you make reading and writing
> the data asynchronous you break that invariant and suddenly makes things
> much harder for the user.
>

I agree with Charles Pritchard that it is only the reading of pixel data
that should be asynchronous.

I think developers could learn to cope with this new design just as they do
with other asynchronous facets of the platform.



>
> The reason we don't want IO synchronous is because IO can take a
> potentially unbound amount of time, if you're on a platform that makes a
> memcpy take similarly unbound time, i recommend that you work around it.
>

Of course, GPU readbacks do not compare to network IO.  However, if the
goal is to achieve smooth animations, then it is important that the main
thread not hitch for multiple animation frames.  GPU readbacks are
irregular in duration and can sometimes be quite expensive if the GPU
pipeline is heavily burdened.



>
> Anyway, the sensible approach to imagedata + hardware backed canvas is to
> revert to a software backed canvas, as once someone has used imagedata
> once, they're likely to do it again (and again, and again) so it is
> probably a win to just do everything in software at that point.  Presumably
> you could through in heuristics to determine whether or not it's worth
> going back to the GPU at some point, but many of the common image data use
> cases will have awful perf if you try to keep them on the GPU 100% of the
> time.
>

I don't think it is OK if at application startup (or animation startup)
there is a big UI glitch as the system determines that it should not
GPU-back a canvas.  We have the opportunity now to design an API that does
not have that bug.

Why don't you want to take advantage of this opportunity?

-Darin





> >
> >
> >>
> >>
> >>>
> >>> - James
> >>> On Mar 20, 2012 10:29 AM, "Edward O'Connor" 
> >> wrote:
> >>>
>  Hi,
> 
>  Unfortunately, lots of  content (especially content which
> calls
>  {create,get,put}ImageData methods) assumes that the 's backing
>  store pixels correspond 1:1 to CSS pixels, even though the spec has
> been
>  written to allow for the backing store to be at a different scale
>  factor.
> 
>  Especially problematic is that developers have to round trip image
> data
>  through a  in order to detect that a different scale factor is
>  being used.
> 
>  I'd like to propose the addition of a backingStorePixelRatio property
> to
>  the 2D context object. Just as window.devicePixelRatio expresses the
>  ratio of device pixels to CSS pixels, ctx.backingStorePixelRatio would
>  express the ratio of backing store pixels to CSS pixels. This allows
>  developers to easily branch to handle different backing store scale
>  factors.
> 
>  Additionally, I think the existing {create,get,put}ImageData API needs
>  to be defined to be in terms of CSS pixels, since that's what existing
>  content assumes. I propose the addition of a new set of methods for
>  working directly with backing store image data. (New methods are
> easier
>  to feature detect than adding optional arguments to the existing
>  methods.) At the moment I'm calling these {create,get,put}ImageDataHD,
>  but I'm not wedded to the names. (Nor do I want to bikeshed them.)
> 
> 
>  Thanks for your consideration,
>  Ted
> 
> >>
> >>
>
>


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Charles Pritchard

On 4/16/2012 11:17 AM, Oliver Hunt wrote:

Anyway, the sensible approach to imagedata + hardware backed canvas is to 
revert to a software backed canvas, as once someone has used imagedata once, 
they're likely to do it again (and again, and again) so it is probably a win to 
just do everything in software at that point.  Presumably you could through in 
heuristics to determine whether or not it's worth going back to the GPU at some 
point, but many of the common image data use cases will have awful perf if you 
try to keep them on the GPU 100% of the time.


The RiverTrail and W16 projects suggest that we'll have a landscape with 
multiple cores, eventually, to work on image data chunks.
Simple instructions can be transformed; simple filters such as Color 
Matrix filters could be compiled to work on the GPU.


Not saying it'll happen, but there are proof of concept projects out there.

-Charles


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Charles Pritchard

On 4/16/2012 11:07 AM, Darin Fisher wrote:

On Wed, Mar 21, 2012 at 8:29 PM, Maciej Stachowiak  wrote:


On Mar 20, 2012, at 12:00 PM, James Robinson wrote:


If we are adding new APIs for manipulating the backing directly, can we
make them asynchronous? This would allow for many optimization
opportunities that are currently difficult or impossible.

I hear you on the benefits of async calls, but I think it would be better
to sell authors on their benefits separately.


Aren't we missing an opportunity here?  By giving web developers this easy
migration path, you're also giving up the opportunity to encourage them to
use a better API.  Asynchronous APIs are harder to use, and that's why we
need to encourage their adoption.  If you just give people a synchronous

...

GPU readback of a HD buffer is going to suck.  Any use of this new API is
going to suck.



The vibe I got from the discussion over at WHATWG is that developers and 
vendors would like to see an async getImageDataHD.

"put" isn't so much of an issue, but "get" is.

As developers, we're going to be using async more-and-more with Canvas; 
from the toBlob semantic to postMessage transfer semantics.
As a Canvas developer, I'd be inclined to use the HD buffer even if it's 
the same backing size as the standard buffer, if it supported async 
semantics.


...

Separately, I'm hoping to see this issue sorted out:
https://lists.webkit.org/pipermail/webkit-dev/2011-April/016428.html

IE9+:   magicNumber = window.screen.deviceXDPI / window.screen.logicalXDPI;
WebKit: magicNumber = (window.outerWidth / window.innerWidth) * (window.devicePixelRatio || 1);  // 'with a small margin of error'


Canvas developers today need to use that nasty webkit hack to get their 
magic number. It's not fun, and I wish we could move off it.
IE9 on the desktop works appropriately. I can use browser zoom and 
re-render with a crisp/sharp Canvas bitmap using the window.screen 
extensions.
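
For concreteness, a small sketch of the detection Charles describes, combining the two 
expressions quoted above (the helper name is illustrative; screen.deviceXDPI and 
screen.logicalXDPI are the IE9 properties, the other branch is the WebKit hack):

// "Magic number" detection, per the two formulas above.
function cssPixelRatio () {
  if (window.screen && window.screen.deviceXDPI && window.screen.logicalXDPI) {
    return window.screen.deviceXDPI / window.screen.logicalXDPI;           // IE9+
  }
  // WebKit: approximate, "with a small margin of error".
  return (window.outerWidth / window.innerWidth) * (window.devicePixelRatio || 1);
}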


-Charles


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Oliver Hunt

On Apr 16, 2012, at 11:07 AM, Darin Fisher  wrote:

> 
> Carrots and Sticks.
> 
> Aren't we missing an opportunity here?  By giving web developers this easy
> migration path, you're also giving up the opportunity to encourage them to
> use a better API.  Asynchronous APIs are harder to use, and that's why we
> need to encourage their adoption.  If you just give people a synchronous
> version that accomplishes the same thing, then they will just use that,
> even if doing so causes their app to perform poorly.
> 
> See synchronous XMLHttpRequest.  I'm sure every browser vendor wishes that
> didn't exist.  Note how we recently withdrew support for synchronous
> ArrayBuffer access on XHR?  We did this precisely to discourage use of
> synchronous mode XHR. Doing so actually broke some existing web pages.  The
> pain was deemed worth it.
> 
> GPU readback of a HD buffer is going to suck.  Any use of this new API is
> going to suck.
> 
> -Darin
> 

Any use of imagedata I've seen assumes that the author can avoid intermediate states 
in the canvas ever being visible; if you make reading and writing the data 
asynchronous, you break that invariant and suddenly make things much harder for 
the user.

The reason we don't want synchronous IO is that IO can take a potentially 
unbounded amount of time; if you're on a platform that makes a memcpy take 
similarly unbounded time, I recommend that you work around it.

Anyway, the sensible approach to imagedata + a hardware-backed canvas is to 
revert to a software-backed canvas, as once someone has used imagedata once, 
they're likely to do it again (and again, and again), so it is probably a win to 
just do everything in software at that point.  Presumably you could throw in 
heuristics to determine whether or not it's worth going back to the GPU at some 
point, but many of the common image data use cases will have awful perf if you 
try to keep them on the GPU 100% of the time.

--Oliver

> 
> 
>> 
>> 
>>> 
>>> - James
>>> On Mar 20, 2012 10:29 AM, "Edward O'Connor" 
>> wrote:
>>> 
 Hi,
 
 Unfortunately, lots of  content (especially content which calls
 {create,get,put}ImageData methods) assumes that the 's backing
 store pixels correspond 1:1 to CSS pixels, even though the spec has been
 written to allow for the backing store to be at a different scale
 factor.
 
 Especially problematic is that developers have to round trip image data
 through a  in order to detect that a different scale factor is
 being used.
 
 I'd like to propose the addition of a backingStorePixelRatio property to
 the 2D context object. Just as window.devicePixelRatio expresses the
 ratio of device pixels to CSS pixels, ctx.backingStorePixelRatio would
 express the ratio of backing store pixels to CSS pixels. This allows
 developers to easily branch to handle different backing store scale
 factors.
 
 Additionally, I think the existing {create,get,put}ImageData API needs
 to be defined to be in terms of CSS pixels, since that's what existing
 content assumes. I propose the addition of a new set of methods for
 working directly with backing store image data. (New methods are easier
 to feature detect than adding optional arguments to the existing
 methods.) At the moment I'm calling these {create,get,put}ImageDataHD,
 but I'm not wedded to the names. (Nor do I want to bikeshed them.)
 
 
 Thanks for your consideration,
 Ted
 
>> 
>> 



Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-04-16 Thread Darin Fisher
On Wed, Mar 21, 2012 at 8:29 PM, Maciej Stachowiak  wrote:

>
> On Mar 20, 2012, at 12:00 PM, James Robinson wrote:
>
> > If we are adding new APIs for manipulating the backing directly, can we
> > make them asynchronous? This would allow for many optimization
> > opportunities that are currently difficult or impossible.
>
> Neat idea to offer async backing store access. I'm not sure that we should
> tie this to backing store access at true backing store resolution vs at CSS
> pixel nominal resolution, because it will significantly raise the barrier
> to authors recoding their existing apps to take full advantage of higher
> resolutions. With Ted's proposal, all they would have to do is use the HD
> versions of calls and change their loops to read the bounds from the
> ImageData object instead of assuming. If we also forced the new calls to be
> async, then more extensive changes would be required.
>
> I hear you on the benefits of async calls, but I think it would be better
> to sell authors on their benefits separately.
>
> Cheers,
> Maciej
>


Carrots and Sticks.

Aren't we missing an opportunity here?  By giving web developers this easy
migration path, you're also giving up the opportunity to encourage them to
use a better API.  Asynchronous APIs are harder to use, and that's why we
need to encourage their adoption.  If you just give people a synchronous
version that accomplishes the same thing, then they will just use that,
even if doing so causes their app to perform poorly.

See synchronous XMLHttpRequest.  I'm sure every browser vendor wishes that
didn't exist.  Note how we recently withdrew support for synchronous
ArrayBuffer access on XHR?  We did this precisely to discourage use of
synchronous mode XHR. Doing so actually broke some existing web pages.  The
pain was deemed worth it.

GPU readback of a HD buffer is going to suck.  Any use of this new API is
going to suck.

-Darin



>
>
> >
> > - James
> > On Mar 20, 2012 10:29 AM, "Edward O'Connor" 
> wrote:
> >
> >> Hi,
> >>
> >> Unfortunately, lots of  content (especially content which calls
> >> {create,get,put}ImageData methods) assumes that the 's backing
> >> store pixels correspond 1:1 to CSS pixels, even though the spec has been
> >> written to allow for the backing store to be at a different scale
> >> factor.
> >>
> >> Especially problematic is that developers have to round trip image data
> >> through a  in order to detect that a different scale factor is
> >> being used.
> >>
> >> I'd like to propose the addition of a backingStorePixelRatio property to
> >> the 2D context object. Just as window.devicePixelRatio expresses the
> >> ratio of device pixels to CSS pixels, ctx.backingStorePixelRatio would
> >> express the ratio of backing store pixels to CSS pixels. This allows
> >> developers to easily branch to handle different backing store scale
> >> factors.
> >>
> >> Additionally, I think the existing {create,get,put}ImageData API needs
> >> to be defined to be in terms of CSS pixels, since that's what existing
> >> content assumes. I propose the addition of a new set of methods for
> >> working directly with backing store image data. (New methods are easier
> >> to feature detect than adding optional arguments to the existing
> >> methods.) At the moment I'm calling these {create,get,put}ImageDataHD,
> >> but I'm not wedded to the names. (Nor do I want to bikeshed them.)
> >>
> >>
> >> Thanks for your consideration,
> >> Ted
> >>
>
>


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-21 Thread Charles Pritchard
On Mar 21, 2012, at 8:58 PM, Maciej Stachowiak  wrote:

>> You'll really have three items now to add up.
>> 
>> devicePixelRatio * backingStorePixelRatio * logicalPixelRatio.
>> 
>> Is that middle item really necessary?
>> I wasn't able to get anyone to budge on changing window.devicePixelRatio on 
>> the desktop. It's fixed at 1.
> 
> I was unable to decipher what IE's logical{X,Y}DPI does and how it differs 
> from device{X,Y}DPI and for that matter system{X,Y}DPI. But I don't believe 
> any of those things relate to the canvas backing store, however, so I don't 
> see how they eliminate the need for backingStoreRatio.


When you zoom out or in on a page, the ratio changes. So if I check that value 
after a resize event I know to change the units on my canvas elements if I want 
them to not be blurry (when zoomed in) or if I want to not do excess work (when 
zoomed out).

What is the benefit of drawing to an oversized ("high res") backing store? 
Seems like on a device where zoom is very common (yes, you, iPhone), it could 
make for a little nicer experience. On desktop, though, we just repaint on 
resize, and zoom transitions are scaled through the GPU anyway.

Seems like a lot of extra work for the phone though. And we can do it as 
authors by just using CSS width = .5* width;

I agree with your assessment, both features are necessary to bring it in.

I'd still like someone over in WebKit to pick up the issue that's existed since 
the introduction of Canvas in WebKit: exposing the current pixel ratio so we 
can redraw our Canvas at the appropriate ratio when browser zoom (zoom in or 
out) is in use.

Currently I do outerWidth/innerWidth to estimate.

-Charles

Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-21 Thread Maciej Stachowiak

On Mar 21, 2012, at 8:31 PM, Charles Pritchard wrote:

> On 3/21/2012 8:21 PM, Maciej Stachowiak wrote:
>> On Mar 20, 2012, at 3:22 PM, Charles Pritchard wrote:
>> 
>>> On Mar 20, 2012, at 3:05 PM, Edward O'Connor  wrote:
>>> 
 Charles Pritchard wrote:
 
>> But now run through this logic when the  is making a high res
>> backing store automatically: by doing the clever thing, you're now
>> quadrupling the size of the canvas, and you're paying an exorbitant
>> storage cost for doing so.
> Which (a): never happens
 Sorry, what never happens?
>>> The backing store itself is never set by 2x in the implementation. Not in 
>>> any public implementations I've seen. It's always 1:1 with height and width 
>>> units.
>> We're considering the possibility of scaling the backing store in future 
>> releases (which we can't really discuss in detail). We have experimented 
>> with it in WebKit, and we believe it's not viable to ship a production 
>> browser with backing store scaling without the sorts of API changes that Ted 
>> proposed because of how much content breaks.
> 
> The change being the addition of a "backingStorePixelRatio" or the change 
> being the addition of a second set of "HD" items?

We think both those changes are required to handle all cases gracefully.

> 
> I get what you're saying about HD; if the user requests a non-HD, it'd return 
> a typical 1:1 backing store, which most sites expect.
> Still, it seems a bit weird.
> 
> Why not use the method that already exists of managing the CSS and 
> devicePixelRatio? If an author is using new methods,
> they're certainly able to use the old ones.

I'm not sure what you mean by that. As I mentioned, backingStorePixelRatio is 
in general not equal to devicePixelRatio. It's true that you might be able to 
infer the backing store scale by creating a canvas solely for testing, but that 
is needlessly awkward.

> 
> 
>> An automatically scaled backing store is better for authors, because for the 
>> case where they are not doing any direct pixel manipulation, they get higher 
>> quality visual results with no code changes on devices that scale CSS 
>> pixels. But to offer it, we need to take care of the compatibility issues, 
>> and also provide a path for authors who have gone the extra mile to 
>> hand-scale 1x backing stores on 2x devices. In other words, all the 
>> following cases need to work:
>> 
>> devicePixelRatio is 1; backingStorePixelRatio is 1.
>> devicePixelRatio is 2; backingStorePixelRatio is 1.
>> devicePixelRatio is 2; backingStorePixelRatio is 2.
>> 
>> Maybe even other possibilities. In other words, we don't want to force 
>> either the assumption that backingStorePixelRatio is always 1, or that it is 
>> always is equal to devicePixelRatio. We believe that in time, neither is a 
>> safe assumption.
>> 
> 
> Well if they --need-- to work, better to add the value sooner than later.
> 
> My concern is that you've also got window.screen.logicalXPixelRatio on the 
> desktop.
> 
> You'll really have three items now to add up.
> 
> devicePixelRatio * backingStorePixelRatio * logicalPixelRatio.
> 
> Is that middle item really necessary?
> I wasn't able to get anyone to budge on changing window.devicePixelRatio on 
> the desktop. It's fixed at 1.

I was unable to decipher what IE's logical{X,Y}DPI does and how it differs from 
device{X,Y}DPI and for that matter system{X,Y}DPI. But I don't believe any of 
those things relate to the canvas backing store, however, so I don't see how 
they eliminate the need for backingStoreRatio.

Regards,
Maciej


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-21 Thread Charles Pritchard

On 3/21/2012 8:21 PM, Maciej Stachowiak wrote:

On Mar 20, 2012, at 3:22 PM, Charles Pritchard wrote:


On Mar 20, 2012, at 3:05 PM, Edward O'Connor  wrote:


Charles Pritchard wrote:


But now run through this logic when the  is making a high res
backing store automatically: by doing the clever thing, you're now
quadrupling the size of the canvas, and you're paying an exorbitant
storage cost for doing so.

Which (a): never happens

Sorry, what never happens?

The backing store itself is never set by 2x in the implementation. Not in any 
public implementations I've seen. It's always 1:1 with height and width units.

We're considering the possibility of scaling the backing store in future 
releases (which we can't really discuss in detail). We have experimented with 
it in WebKit, and we believe it's not viable to ship a production browser with 
backing store scaling without the sorts of API changes that Ted proposed 
because of how much content breaks.


The change being the addition of a "backingStorePixelRatio" or the 
change being the addition of a second set of "HD" items?


I get what you're saying about HD; if the user requests a non-HD, it'd 
return a typical 1:1 backing store, which most sites expect.

Still, it seems a bit weird.

Why not use the method that already exists of managing the CSS and 
devicePixelRatio? If an author is using new methods,

they're certainly able to use the old ones.



An automatically scaled backing store is better for authors, because for the 
case where they are not doing any direct pixel manipulation, they get higher 
quality visual results with no code changes on devices that scale CSS pixels. 
But to offer it, we need to take care of the compatibility issues, and also 
provide a path for authors who have gone the extra mile to hand-scale 1x 
backing stores on 2x devices. In other words, all the following cases need to 
work:

devicePixelRatio is 1; backingStorePixelRatio is 1.
devicePixelRatio is 2; backingStorePixelRatio is 1.
devicePixelRatio is 2; backingStorePixelRatio is 2.

Maybe even other possibilities. In other words, we don't want to force either 
the assumption that backingStorePixelRatio is always 1, or that it is always is 
equal to devicePixelRatio. We believe that in time, neither is a safe 
assumption.



Well if they --need-- to work, better to add the value sooner than later.

My concern is that you've also got window.screen.logicalXPixelRatio on 
the desktop.


You'll really have three items now to add up.

devicePixelRatio * backingStorePixelRatio * logicalPixelRatio.

Is that middle item really necessary?
I wasn't able to get anyone to budge on changing window.devicePixelRatio 
on the desktop. It's fixed at 1.


-Charles




Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-21 Thread Maciej Stachowiak

On Mar 20, 2012, at 12:00 PM, James Robinson wrote:

> If we are adding new APIs for manipulating the backing directly, can we
> make them asynchronous? This would allow for many optimization
> opportunities that are currently difficult or impossible.

Neat idea to offer async backing store access. I'm not sure that we should tie 
this to backing store access at true backing store resolution vs at CSS pixel 
nominal resolution, because it will significantly raise the barrier to authors 
recoding their existing apps to take full advantage of higher resolutions. With 
Ted's proposal, all they would have to do is use the HD versions of calls and 
change their loops to read the bounds from the ImageData object instead of 
assuming. If we also forced the new calls to be async, then more extensive 
changes would be required.
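
For concreteness, the kind of loop change being described, assuming the 
getImageDataHD/putImageDataHD names proposed earlier in the thread (still hypothetical here):

// Loop bounds come from the returned ImageData rather than from the canvas's
// CSS-pixel size, so the same code works whatever the backing store ratio is.
var hd = ctx.getImageDataHD (0, 0, canvas.width, canvas.height);   // proposed method
for (var y = 0; y < hd.height; y++) {
  for (var x = 0; x < hd.width; x++) {
    var i = (y * hd.width + x) * 4;
    hd.data[i]     = 255 - hd.data[i];       // invert red, as a trivial example
    hd.data[i + 1] = 255 - hd.data[i + 1];   // invert green
    hd.data[i + 2] = 255 - hd.data[i + 2];   // invert blue
  }
}
ctx.putImageDataHD (hd, 0, 0);               // proposed method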

I hear you on the benefits of async calls, but I think it would be better to 
sell authors on their benefits separately.

Cheers,
Maciej


> 
> - James
> On Mar 20, 2012 10:29 AM, "Edward O'Connor"  wrote:
> 
>> Hi,
>> 
>> Unfortunately, lots of  content (especially content which calls
>> {create,get,put}ImageData methods) assumes that the 's backing
>> store pixels correspond 1:1 to CSS pixels, even though the spec has been
>> written to allow for the backing store to be at a different scale
>> factor.
>> 
>> Especially problematic is that developers have to round trip image data
>> through a  in order to detect that a different scale factor is
>> being used.
>> 
>> I'd like to propose the addition of a backingStorePixelRatio property to
>> the 2D context object. Just as window.devicePixelRatio expresses the
>> ratio of device pixels to CSS pixels, ctx.backingStorePixelRatio would
>> express the ratio of backing store pixels to CSS pixels. This allows
>> developers to easily branch to handle different backing store scale
>> factors.
>> 
>> Additionally, I think the existing {create,get,put}ImageData API needs
>> to be defined to be in terms of CSS pixels, since that's what existing
>> content assumes. I propose the addition of a new set of methods for
>> working directly with backing store image data. (New methods are easier
>> to feature detect than adding optional arguments to the existing
>> methods.) At the moment I'm calling these {create,get,put}ImageDataHD,
>> but I'm not wedded to the names. (Nor do I want to bikeshed them.)
>> 
>> 
>> Thanks for your consideration,
>> Ted
>> 



Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-21 Thread Maciej Stachowiak

On Mar 20, 2012, at 3:22 PM, Charles Pritchard wrote:

> On Mar 20, 2012, at 3:05 PM, Edward O'Connor  wrote:
> 
>> Charles Pritchard wrote:
>> 
 But now run through this logic when the  is making a high res
 backing store automatically: by doing the clever thing, you're now
 quadrupling the size of the canvas, and you're paying an exorbitant
 storage cost for doing so.
>>> 
>>> Which (a): never happens
>> 
>> Sorry, what never happens?
> 
> The backing store itself is never set by 2x in the implementation. Not in any 
> public implementations I've seen. It's always 1:1 with height and width units.

We're considering the possibility of scaling the backing store in future 
releases (which we can't really discuss in detail). We have experimented with 
it in WebKit, and we believe it's not viable to ship a production browser with 
backing store scaling without the sorts of API changes that Ted proposed 
because of how much content breaks.

An automatically scaled backing store is better for authors, because for the 
case where they are not doing any direct pixel manipulation, they get higher 
quality visual results with no code changes on devices that scale CSS pixels. 
But to offer it, we need to take care of the compatibility issues, and also 
provide a path for authors who have gone the extra mile to hand-scale 1x 
backing stores on 2x devices. In other words, all the following cases need to 
work:

devicePixelRatio is 1; backingStorePixelRatio is 1.
devicePixelRatio is 2; backingStorePixelRatio is 1.
devicePixelRatio is 2; backingStorePixelRatio is 2.

Maybe even other possibilities. In other words, we don't want to force either 
the assumption that backingStorePixelRatio is always 1, or that it is always is 
equal to devicePixelRatio. We believe that in time, neither is a safe 
assumption.


Regards,
Maciej



Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread Boris Zbarsky

On 3/20/12 7:04 PM, Glenn Maynard wrote:

If you have previous drawing commands buffered, and you want to avoid extra
copies, then putImageData has to block until the buffered drawing commands
complete.


Yes, but if you're drawing to a GPU directly you want to make the copy 
up front, imo; otherwise you have to wait for the full GPU latency 
before you can return even if there are no other drawing commands in the 
pipeline, which is painful



The question is whether you'd need to make a copy *synchronously*, before
putImageData returns.


If you want to do the image data put async in any way (and that includes 
any sort of direct-to-GPU setup, I'm told) then you need either sync 
copy or copy-on-write as far as I can tell.


-Boris


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread Glenn Maynard
On Tue, Mar 20, 2012 at 5:41 PM, Boris Zbarsky  wrote:

> On 3/20/12 6:36 PM, Glenn Maynard wrote:
>
>> The drawing calls that happen after would need to be buffered (or
>> otherwise flush the queue, akin to calling glFinish), so the operations
>> still happen in order.
>>
>
> The former seems like it could get pretty expensive and the latter would
> negate the benefits of making it async, imo.


The latter just means that implementations aren't *required* to actually
buffer drawing operations.

It sounds like implementations are already doing the former, or want to,
from what James said.  It's not inherently expensive, especially if the
input parameters to the drawing call are lightweight, which most canvas
calls are.  OpenGL has always buffered commands like this.  By buffering
the calls, you can push the actual drawing off to a thread and avoid
blocking the UI thread.

I don't see why it needs to block at all.  At least in Gecko the
> putImageData basically just becomes a drawing command itself; you send it
> over to the graphics card and forget about it.


If you have previous drawing commands buffered, and you want to avoid extra
copies, then putImageData has to block until the buffered drawing commands
complete.

Avoiding that extra copy may not be worth the complexity, though.

 what happens if the argument passed to putImageData is modified before
>> it's written?
>>
>
> You have to copy it, yes.  Which you may have to do anyway, because
> imagedata is not premultiplied and for most drawing you want premultiplied
> data.


The question is whether you'd need to make a copy *synchronously*, before
putImageData returns.  Manipulating the data you put into the image doesn't
have to happen until the actual blit occurs (and the two may happen in the
same pass).

-- 
Glenn Maynard


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread Charles Pritchard

On 3/20/12 3:36 PM, Glenn Maynard wrote:

On Tue, Mar 20, 2012 at 2:08 PM, Boris Zbarsky  wrote:


That would indeed be very nice.  The question is what happens if drawing
happens after the getImageData call...  Or for that matter after the
putImageData call (though I suspect there's less need for putImageData to
be async).


The drawing calls that happen after would need to be buffered (or otherwise
flush the queue, akin to calling glFinish), so the operations still happen
in order.


Webkit did land buffered drawing operations.

When working with Flash (way back when) as a Canvas polyfill, buffered 
drawing made a huge difference. I doubt it has much of a performance 
impact now, except when rendering is done on some high latency pipeline 
(such as, perhaps, the GPU).


The frustrating item here, the area where there may be a clear 
optimization or win, is video/webcam.


We have to do: drawImage(video).getImageData() for each frame [of 
interest]. That one would be nice to have optimized.
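
A sketch of that per-frame pattern; analyzeFrame is a placeholder, and the canvas is 
assumed to already match the video's dimensions:

// Per-frame readback: draw the current video frame, then read it back.
// This readback is the step that would benefit from an optimized or async path.
// (In 2012-era browsers, requestAnimationFrame may need a vendor prefix.)
function grabFrames (video, ctx) {
  function step () {
    ctx.drawImage (video, 0, 0);
    var frame = ctx.getImageData (0, 0, video.videoWidth, video.videoHeight);
    analyzeFrame (frame);              // placeholder for the actual processing
    requestAnimationFrame (step);
  }
  requestAnimationFrame (step);
}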


We can't use RoC's media stream processing API (workers) because it's 
for output, not input. We don't need the canvas to keep a copy of the 
image in buffer after the getImageData call.


Beyond that case though, I doubt there's much to be done here

-Charles


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread Boris Zbarsky

On 3/20/12 6:36 PM, Glenn Maynard wrote:

The drawing calls that happen after would need to be buffered (or
otherwise flush the queue, akin to calling glFinish), so the operations
still happen in order.


The former seems like it could get pretty expensive and the latter would 
negate the benefits of making it async, imo.



putImageData being async makes sense, too, for the same reason: it
avoids having to flush drawing commands earlier in the queue, which
helps keep putImageData from blocking.


I don't see why it needs to block at all.  At least in Gecko the 
putImageData basically just becomes a drawing command itself; you send 
it over to the graphics card and forget about it.



what happens if the argument passed to putImageData is modified before
it's written?


You have to copy it, yes.  Which you may have to do anyway, because 
imagedata is not premultiplied and for most drawing you want 
premultiplied data.



You'd either need a mechanism to detect changes, so you
can make a copy (eg. a copy-on-write mechanism for ArrayBuffer--though
that sort of sounds useful in its own right), or to just say that any
changes to made to the buffer before the async operation completes will
be reflected in the copy.


That seems unfortunately racy.  Also unnecessary, imo.

-Boris


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread Glenn Maynard
On Tue, Mar 20, 2012 at 2:08 PM, Boris Zbarsky  wrote:

> That would indeed be very nice.  The question is what happens if drawing
> happens after the getImageData call...  Or for that matter after the
> putImageData call (though I suspect there's less need for putImageData to
> be async).
>

The drawing calls that happen after would need to be buffered (or otherwise
flush the queue, akin to calling glFinish), so the operations still happen
in order.

putImageData being async makes sense, too, for the same reason: it avoids
having to flush drawing commands earlier in the queue, which helps keep
putImageData from blocking.  It's a bit trickier, though: what happens if
the argument passed to putImageData is modified before it's written?  You'd
either need a mechanism to detect changes, so you can make a copy (eg. a
copy-on-write mechanism for ArrayBuffer--though that sort of sounds useful
in its own right), or to just say that any changes to made to the buffer
before the async operation completes will be reflected in the copy.

-- 
Glenn Maynard


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread Charles Pritchard
On Mar 20, 2012, at 3:05 PM, Edward O'Connor  wrote:

> Charles Pritchard wrote:
> 
>>> But now run through this logic when the <canvas> is making a high res
>>> backing store automatically: by doing the clever thing, you're now
>>> quadrupling the size of the canvas, and you're paying an exorbitant
>>> storage cost for doing so.
>> 
>> Which (a): never happens
> 
> Sorry, what never happens?

The backing store itself is never set by 2x in the implementation. Not in any 
public implementations I've seen. It's always 1:1 with height and width units.


> Developers commonly double the size of their
> <canvas>es (and scale them down with CSS) to support both the iPhone 3GS
> and iPhone 4. Which means such code would use 4 times as much memory as
> intended when <canvas> uses such a backing store.

It would do that, but it doesn't, because none of the implementations use a 
larger backing store.

And yes, in preparing for some future break in implementations, it may be wise 
for authors to run a check on getImageData. I don't, I've not seen it done, but 
it may be prudent.


> 
>> and (b) can be detected via 1x1 pixel canvas.
> 
> Having to round-trip image data through a <canvas> in order to detect
> its backing store size is one of the problems I'm trying to solve here.

It's a 1:1 fetch, it doesn't take any time to do getImageData(0,0,1,1);

I don't mean to be pushing back on this issue. I'm more focused on the fact 
that other obvious issues have not been addressed by vendors, and now we're 
examining an issue that does not yet exist.


> 
>>> You really only want to do the "make it twice as big and then scale
>>> it down with CSS" trick when backing store pixels are 1:1 to CSS
>>> pixels.
>> 
>> I do "tricks" to support browser zoom. They are increments; .5,.7,
>> 1.1, 1.2, 1.3, etc.
> 
> Huh? I'm not sure what you mean by "browser zoom," nor do I know what it
> has to do with my proposed additions to the <canvas> 2D Context API.

I know, and it'd be swell if we could sit down some time and I could walk you 
through the Canvas API and WCAG.

In the meantime, perhaps you could go over to the link I sent (A canvas button) 
on Jumis.com.

By browser zoom, I mean on a desktop, when I use CTRL + to change the pixel 
ratio on the page.

I've posted links to MS docs, and to a live example, on this thread.



-Charles

Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread Edward O'Connor
Charles Pritchard wrote:

>> But now run through this logic when the <canvas> is making a high res
>> backing store automatically: by doing the clever thing, you're now
>> quadrupling the size of the canvas, and you're paying an exorbitant
>> storage cost for doing so.
>
> Which (a): never happens

Sorry, what never happens? Developers commonly double the size of their
<canvas>es (and scale them down with CSS) to support both the iPhone 3GS
and iPhone 4. Which means such code would use 4 times as much memory as
intended when <canvas> uses such a backing store.

> and (b) can be detected via 1x1 pixel canvas.

Having to round-trip image data through a <canvas> in order to detect
its backing store size is one of the problems I'm trying to solve here.
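
For reference, the round trip being complained about would look roughly like this, under 
the assumption that a scaled implementation reports backing-store pixels in the returned 
ImageData (an assumption, not something any shipping implementation promises):

// Throwaway-canvas probe of the backing store scale.
function detectBackingStoreRatio () {
  var c = document.createElement ('canvas');
  c.width = 1;                       // 1 CSS pixel
  c.height = 1;
  var data = c.getContext ('2d').getImageData (0, 0, 1, 1);
  return data.width;                 // 2 would indicate a 2x backing store, if exposed this way
}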

>> You really only want to do the "make it twice as big and then scale
>> it down with CSS" trick when backing store pixels are 1:1 to CSS
>> pixels.
>
> I do "tricks" to support browser zoom. They are increments; .5,.7,
> 1.1, 1.2, 1.3, etc.

Huh? I'm not sure what you mean by "browser zoom," nor do I know what it
has to do with my proposed additions to the <canvas> 2D Context API.


Ted


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread Charles Pritchard
On Mar 20, 2012, at 1:42 PM, Edward O'Connor  wrote:

> But now run through this logic when the <canvas> is making a high res
> backing store automatically: by doing the clever thing, you're now
> quadrupling the size of the canvas, and you're paying an exorbitant
> storage cost for doing so.

Which (a): never happens and (b) can be detected via 1x1 pixel canvas.


> You really only want to do the "make it twice as big and then scale it
> down with CSS" trick when backing store pixels are 1:1 to CSS pixels.

I do "tricks" to support browser zoom. They are increments; .5,.7, 1.1, 1.2, 
1.3, etc. 


-Charles

Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread Edward O'Connor
Tab wrote:

> So, I support adding an alternate API that explicitly returns a
> high-res store. If people fuck *that* up, then we're just screwed.

Yup.

> I'm not as sure about the backingStorePixelRatio bit. What's the
> use-case for it? Why do devs need to detect this, and what will they
> do different in the multiple code paths?

Suppose you're a clever developer who basically does something like this
to handle both an iPhone 3GS and an iPhone 4:

if (window.devicePixelRatio == 1) {
 // create a 100x100 canvas
} else if (window.devicePixelRatio == 2) {
 // create a 200x200 canvas and scale it down to 100x100 with CSS
} // etc.

But now run through this logic when the <canvas> is making a high res
backing store automatically: by doing the clever thing, you're now
quadrupling the size of the canvas, and you're paying an exorbitant
storage cost for doing so.

You really only want to do the "make it twice as big and then scale it
down with CSS" trick when backing store pixels are 1:1 to CSS pixels.
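
A sketch of how that trick would be guarded once something like the proposed 
ctx.backingStorePixelRatio exists (the property is the proposal under discussion, not a 
shipping API; the 100x100 size follows the example above):

var dpr = window.devicePixelRatio || 1;
var ctx = canvas.getContext ('2d');
var bsr = ctx.backingStorePixelRatio || 1;   // proposed property; hypothetical today

if (dpr > 1 && bsr === 1) {
  // Backing store really is 1:1 with CSS pixels: do the manual 2x trick.
  canvas.width  = 100 * dpr;
  canvas.height = 100 * dpr;
  canvas.style.width  = '100px';
  canvas.style.height = '100px';
  ctx.scale (dpr, dpr);
} else {
  // The browser already scales the backing store (or no scaling is needed),
  // so a 100x100 canvas is enough and memory isn't quadrupled.
  canvas.width  = 100;
  canvas.height = 100;
}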


Ted


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread Charles Pritchard

On 3/20/2012 12:08 PM, Boris Zbarsky wrote:

> On 3/20/12 3:00 PM, James Robinson wrote:
>> If we are adding new APIs for manipulating the backing directly, can we
>> make them asynchronous? This would allow for many optimization
>> opportunities that are currently difficult or impossible.
>
> You mean like not blocking the world on the readback?
>
> That would indeed be very nice.  The question is what happens if
> drawing happens after the getImageData call...  Or for that matter
> after the putImageData call (though I suspect there's less need for
> putImageData to be async).


I recommend we complete and use RoC's media processing API, in addition 
to the CSS shaders proposal:

http://www.w3.org/TR/streamproc/
https://dvcs.w3.org/hg/FXTF/raw-file/tip/custom/index.html

This would allow async post-processing via workers, with less worry about 
putImageData semantics.


If we're looking at async getImageData purely for recognition work, I think 
the current postMessage transfer semantics have sped things up enough.
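
A minimal sketch of that transfer-based pattern, assuming canvas and ctx
refer to an existing element and its 2D context; the worker file name and
message shape are hypothetical:

// Hand the pixel buffer to a worker without copying it. Note that the
// sender's view of the buffer is neutered once it has been transferred.
var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
var worker = new Worker('recognize-worker.js'); // hypothetical worker
var buffer = imageData.data.buffer;
worker.postMessage({ width: imageData.width,
                     height: imageData.height,
                     pixels: buffer }, [buffer]); // transfer, don't clone
worker.onmessage = function (e) {
  console.log('recognition result:', e.data);
};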


getImageData and a subsequent draw call are always going to need to grab 
more memory; making them async isn't going to change that.


-Charles




Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread Charles Pritchard

On 3/20/2012 10:53 AM, Tab Atkins Jr. wrote:

> Given that the modern iPhones (and I suspect the iPad 3, though I
> haven't tested it yet) aren't exposing their high-res backing stores
> (they give back ImageData with CSS px resolution), it seems likely
> that the original goal of get/putImageData to seamlessly adapt has
> failed.  So, I support adding an alternate API that explicitly returns
> a high-res store.  If people fuck *that* up, then we're just screwed.


IE exposes CSS px resolution via window.screen; mobile browsers expose it 
via devicePixelRatio; WebKit exposes it accidentally through innerHeight 
and outerHeight; and Mozilla exposes it through CSS media queries on 
device-pixel-ratio.
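
A rough sketch of a detection helper along those lines; the specific probes
(deviceXDPI/logicalXDPI, the -moz- media feature, the candidate ratio list)
are illustrative, not exhaustive:

function getPixelRatio() {
  // IE: ratio of device DPI to logical DPI on window.screen.
  if (window.screen && window.screen.deviceXDPI && window.screen.logicalXDPI) {
    return window.screen.deviceXDPI / window.screen.logicalXDPI;
  }
  // Mobile browsers and WebKit.
  if (window.devicePixelRatio) {
    return window.devicePixelRatio;
  }
  // Gecko: probe the vendor-prefixed device-pixel-ratio media feature.
  if (window.matchMedia) {
    var ratios = [3, 2, 1.5, 1.3, 1.2, 1.1];
    for (var i = 0; i < ratios.length; i++) {
      var q = '(min--moz-device-pixel-ratio: ' + ratios[i] + ')';
      if (window.matchMedia(q).matches) {
        return ratios[i];
      }
    }
  }
  return 1;
}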

I tried to get this mess fixed, but I got a lot of push back from Mozilla.

WebKit developers agreed that the MS solution would be OK:
http://msdn.microsoft.com/en-us/library/ms535868(v=vs.85).aspx 



Anne agreed to add it to CSSOM back when he was editing it, once there 
was a second implementation.


I've got two hacks to get pixel resolution over here:
http://www.jumis.com/cme-button.html#abc6

I didn't add the Mozilla one yet.


I strongly suggest we just fix the problem by updating window.screen so 
that we developers can manually manage the CSS width vs. the independent 
width. I've been doing this forever, and it works fine [I update on 
resize events if the res has changed]:
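
The snippet referred to here didn't survive the archive; what follows is a
minimal sketch of the resize-listener approach described, assuming a
getPixelRatio() helper like the one sketched above, an existing canvas and
ctx meant to stay 100 CSS px square, and a hypothetical redraw() repaint:

var lastRatio = getPixelRatio();

window.addEventListener('resize', function () {
  var ratio = getPixelRatio();
  if (ratio === lastRatio) return; // only react when the resolution changed
  lastRatio = ratio;

  // Keep the CSS size fixed; resize the backing store to match.
  canvas.width = 100 * ratio;
  canvas.height = 100 * ratio;
  canvas.style.width = '100px';
  canvas.style.height = '100px';
  ctx.setTransform(ratio, 0, 0, ratio, 0, 0);
  redraw(); // hypothetical application repaint
}, false);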



We already went through this discussion on WHATWG. I didn't like how it 
went back then. Now that we're revisiting it, maybe we can just follow MS.


On desktop, the res changes with browser zoom.

-Charles


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread Boris Zbarsky

On 3/20/12 3:00 PM, James Robinson wrote:

> If we are adding new APIs for manipulating the backing directly, can we
> make them asynchronous? This would allow for many optimization
> opportunities that are currently difficult or impossible.


You mean like not blocking the world on the readback?

That would indeed be very nice.  The question is what happens if drawing 
happens after the getImageData call...  Or for that matter after the 
putImageData call (though I suspect there's less need for putImageData 
to be async).


-Boris


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread James Robinson
If we are adding new APIs for manipulating the backing directly, can we
make them asynchronous? This would allow for many optimization
opportunities that are currently difficult or impossible.

- James
On Mar 20, 2012 10:29 AM, "Edward O'Connor"  wrote:

> Hi,
>
> Unfortunately, lots of <canvas> content (especially content which calls
> {create,get,put}ImageData methods) assumes that the <canvas>'s backing
> store pixels correspond 1:1 to CSS pixels, even though the spec has been
> written to allow for the backing store to be at a different scale
> factor.
>
> Especially problematic is that developers have to round trip image data
> through a <canvas> in order to detect that a different scale factor is
> being used.
>
> I'd like to propose the addition of a backingStorePixelRatio property to
> the 2D context object. Just as window.devicePixelRatio expresses the
> ratio of device pixels to CSS pixels, ctx.backingStorePixelRatio would
> express the ratio of backing store pixels to CSS pixels. This allows
> developers to easily branch to handle different backing store scale
> factors.
>
> Additionally, I think the existing {create,get,put}ImageData API needs
> to be defined to be in terms of CSS pixels, since that's what existing
> content assumes. I propose the addition of a new set of methods for
> working directly with backing store image data. (New methods are easier
> to feature detect than adding optional arguments to the existing
> methods.) At the moment I'm calling these {create,get,put}ImageDataHD,
> but I'm not wedded to the names. (Nor do I want to bikeshed them.)
>
>
> Thanks for your consideration,
> Ted
>


Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-03-20 Thread Tab Atkins Jr.
On Tue, Mar 20, 2012 at 10:29 AM, Edward O'Connor  wrote:
> Unfortunately, lots of <canvas> content (especially content which calls
> {create,get,put}ImageData methods) assumes that the <canvas>'s backing
> store pixels correspond 1:1 to CSS pixels, even though the spec has been
> written to allow for the backing store to be at a different scale
> factor.
>
> Especially problematic is that developers have to round trip image data
> through a <canvas> in order to detect that a different scale factor is
> being used.
>
> I'd like to propose the addition of a backingStorePixelRatio property to
> the 2D context object. Just as window.devicePixelRatio expresses the
> ratio of device pixels to CSS pixels, ctx.backingStorePixelRatio would
> express the ratio of backing store pixels to CSS pixels. This allows
> developers to easily branch to handle different backing store scale
> factors.
>
> Additionally, I think the existing {create,get,put}ImageData API needs
> to be defined to be in terms of CSS pixels, since that's what existing
> content assumes. I propose the addition of a new set of methods for
> working directly with backing store image data. (New methods are easier
> to feature detect than adding optional arguments to the existing
> methods.) At the moment I'm calling these {create,get,put}ImageDataHD,
> but I'm not wedded to the names. (Nor do I want to bikeshed them.)

Given that the modern iPhones (and I suspect the iPad 3, though I
haven't tested it yet) aren't exposing their high-res backing stores
(they give back ImageData with CSS px resolution), it seems likely
that the original goal of get/putImageData to seamlessly adapt has
failed.  So, I support adding an alternate API that explicitly returns
a high-res store.  If people fuck *that* up, then we're just screwed.
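
Illustratively, the kind of feature-detection branch the new methods would
allow, using the names proposed in this thread; process() is hypothetical:

var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');

if (typeof ctx.getImageDataHD === 'function') {
  // Backing-store-resolution path (proposed API).
  process(ctx.getImageDataHD(0, 0, canvas.width, canvas.height));
} else {
  // CSS-pixel path that existing content assumes.
  process(ctx.getImageData(0, 0, canvas.width, canvas.height));
}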

I'm not as sure about the backingStorePixelRatio bit.  What's the
use-case for it?  Why do devs need to detect this, and what will they
do different in the multiple code paths?

~TJ