[whatwg] Why does CanvasRenderingContext2D.drawImage not draw a video's poster?

2012-07-17 Thread Mark Callow
The spec. for CanvasRenderingContext2D.drawImage says draw nothing when
a video element's readyState is HAVE_NOTHING or HAVE_METADATA. I was
wondering why this was chosen vs. drawing the poster. A search in the
list archive didn't turn up any discussion or explanation.

Regards

-Mark

-- 

NOTE: This electronic mail message may contain confidential and
privileged information from HI Corporation. If you are not the intended
recipient, any disclosure, photocopying, distribution or use of the
contents of the received information is prohibited. If you have received
this e-mail in error, please notify the sender immediately and
permanently delete this message and all related copies.



Re: [whatwg] [canvas] request for {create, get, put}ImageDataHD and ctx.backingStorePixelRatio

2012-07-17 Thread Ian Hickson
On Tue, 20 Mar 2012, Edward O'Connor wrote:
 
 Unfortunately, lots of canvas content (especially content which calls 
 {create,get,put}ImageData methods) assumes that the canvas's backing 
 store pixels correspond 1:1 to CSS pixels, even though the spec has been 
 written to allow for the backing store to be at a different scale 
 factor.

I've fixed the text so that those methods now always return 96dpi data.


 I'd like to propose the addition of a backingStorePixelRatio property to 
 the 2D context object. Just as window.devicePixelRatio expresses the 
 ratio of device pixels to CSS pixels, ctx.backingStorePixelRatio would 
 express the ratio of backing store pixels to CSS pixels. This allows 
 developers to easily branch to handle different backing store scale 
 factors.

I've added window.screen.canvasResolution which returns the resolution 
that is being used for 2D canvases created during the current task.


 Additionally, I think the existing {create,get,put}ImageData API needs 
 to be defined to be in terms of CSS pixels, since that's what existing 
 content assumes.

Done.


 I propose the addition of a new set of methods for working directly with 
 backing store image data. (New methods are easier to feature detect than 
 adding optional arguments to the existing methods.) At the moment I'm 
 calling these {create,get,put}ImageDataHD, but I'm not wedded to the 
 names. (Nor do I want to bikeshed them.)

Done.

I've also added toDataURLHD and toBlobHD.
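The branching Ted describes can be sketched as follows. This is a hypothetical helper assuming the HD method names and a backingStorePixelRatio-style property as proposed in the thread; the vendor-prefixed fallback is purely illustrative, not a confirmed API:

```javascript
// Hypothetical feature detection for the HD pixel-access proposal.
// Falls back to the CSS-pixel methods when the HD variants are absent.
function getBackingRatio(ctx) {
  // backingStorePixelRatio and its prefixed form are assumptions from
  // the proposal, not guaranteed API.
  return ctx.backingStorePixelRatio ||
         ctx.webkitBackingStorePixelRatio ||
         1;
}

function readPixels(ctx, x, y, w, h) {
  // Prefer the HD variant when present; new methods are easy to detect.
  return typeof ctx.getImageDataHD === 'function'
      ? ctx.getImageDataHD(x, y, w, h)
      : ctx.getImageData(x, y, w, h);
}
```

A caller can then loop over the returned ImageData's own width/height rather than assuming a 1:1 backing store.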


On Tue, 20 Mar 2012, James Robinson wrote:

 If we are adding new APIs for manipulating the backing directly, can we 
 make them asynchronous? This would allow for many optimization 
 opportunities that are currently difficult or impossible.

I haven't done this, because it would make the API rather weird. But I am 
happy to do it if people think the API weirdness is a cost worth paying.

Note that technically getImageData() doesn't have to block -- it's array 
access on ImageData that has to block. It would be possible to implement 
getImageData() in such a way that the ImageData object is lazily filled. 
You'd end up blocking later if the author really needed the data, but it's 
possible to write code that doesn't block (though you wouldn't necessarily 
know how long to wait, I guess).


On Tue, 20 Mar 2012, Boris Zbarsky wrote:
 On 3/20/12 6:36 PM, Glenn Maynard wrote:
  The drawing calls that happen after would need to be buffered (or 
  otherwise flush the queue, akin to calling glFinish), so the 
  operations still happen in order.
 
 The former seems like it could get pretty expensive and the latter would 
 negate the benefits of making it async, imo.

Having the operations not occur in order would make the API quite 
difficult to use, so if that's not an option, I don't think it's worth it.


On Wed, 21 Mar 2012, Maciej Stachowiak wrote:
 On Mar 20, 2012, at 12:00 PM, James Robinson wrote:
  
  If we are adding new APIs for manipulating the backing directly, can 
  we make them asynchronous? This would allow for many optimization 
  opportunities that are currently difficult or impossible.
 
 Neat idea to offer async backing store access. I'm not sure that we 
 should tie this to backing store access at true backing store resolution 
 vs at CSS pixel nominal resolution, because it will significantly raise 
 the barrier to authors recoding their existing apps to take full 
 advantage of higher resolutions. With Ted's proposal, all they would 
 have to do is use the HD versions of calls and change their loops to 
 read the bounds from the ImageData object instead of assuming. If we 
 also forced the new calls to be async, then more extensive changes would 
 be required.
 
 I hear you on the benefits of async calls, but I think it would be 
 better to sell authors on their benefits separately.

I think it depends how strong the benefits are. In this particular case, I 
tend to agree that the benefits aren't really worth tying them together, 
and possibly not worth providing the async model as a separate API at all.

Maybe we could have an attribute on ImageData that says whether an array 
index read would have to block on getting the data or whether it's ready, 
maybe coupled with an event that says when it's ready?
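From the author's side, that attribute-plus-event idea might look roughly like this. Everything here (the `ready` flag and the `onready` handler) is hypothetical, sketching the shape of the suggestion rather than any shipped API:

```javascript
// Hypothetical usage of a lazily-filled ImageData with a readiness flag:
// run the callback immediately if the pixels are available, otherwise
// wait for the (hypothetical) ready event, avoiding a blocking read.
function withPixels(imageData, callback) {
  if (imageData.ready) {
    // Data is already available; indexing into .data won't block.
    callback(imageData.data);
  } else {
    // Defer until the pixels have been filled in.
    imageData.onready = function () { callback(imageData.data); };
  }
}
```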

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Why does CanvasRenderingContext2D.drawImage not draw a video's poster?

2012-07-17 Thread Silvia Pfeiffer
I think this is simply an idea that hasn't been raised before. I like
it. Though even then sometimes there may be nothing when there is no
explicit poster and preload is set to none.

Regards,
Silvia.

On Tue, Jul 17, 2012 at 9:58 AM, Mark Callow callow_m...@hicorp.co.jp wrote:
 The spec. for CanvasRenderingContext2D.drawImage says draw nothing when
 a video element's readyState is HAVE_NOTHING or HAVE_METADATA. I was
 wondering why this was chosen vs. drawing the poster. A search in the
 list archive didn't turn up any discussion or explanation.

 Regards

 -Mark




[whatwg] Canvas hit region feedback

2012-07-17 Thread Ian Hickson
On Thu, 5 Jul 2012, Edward O'Connor wrote:
 
 Currently, there are only two ways to invoke the "clear regions that 
 cover the pixels" algorithm: by calling either addHitRegion() or 
 clearRect(). Authors should be able to explicitly remove a hit region as 
 well, with a removeHitRegion(id) method.
 
 Consider a region of a canvas which the author would like to toggle 
 between clickable and non-clickable states without drawing. Maybe 
 they're indicating clickability by drawing a different outline around 
 the region without actually redrawing the region itself, or perhaps 
 there is no visible indication that the region's clickability is 
 changing. Such an author should be able to straightforwardly achieve 
 this without redrawing the region (as clearRect would require) and 
 without installing a dummy hit region (as addHitRegion would require).

Done.
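The toggling use case then reads roughly like this; a sketch assuming the addHitRegion()/removeHitRegion() signatures discussed in the thread:

```javascript
// Toggle a region's clickability without redrawing its pixels. In the
// drafted API, addHitRegion() ties the region to the context's current
// default path; the options bag carries the region id.
function setClickable(ctx, id, clickable) {
  if (clickable) {
    ctx.addHitRegion({ id: id });
  } else {
    ctx.removeHitRegion(id);
  }
}
```

No dummy region and no clearRect() call is needed; only the region's presence in the hit region list changes.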


On Thu, 5 Jul 2012, Charles Pritchard wrote:

 There's also just removing the element from the DOM. Yes, I'd like a 
 removeHitRegion(Element) feature; though I can skate by with the empty 
 addHitRegion method.

I don't follow this proposal.


 I've not seen a response from you regarding the issues that Richard and 
 Steve have brought up around the lightweight nodes feature-proposal. It 
 seems relevant to the method signature of removeHitRegion.

I checked the list but didn't see any recent relevant e-mails from anyone 
named Richard or Steve. If they filed bugs, I hope to get to those soon; 
I've been focusing on e-mail for a while to get the e-mail pile under 
control after having neglected it for too long.


Re: using backing bitmaps for hit testing:

On Fri, 6 Jul 2012, Rik Cabanier wrote:

 Yeah, this is the standard way of doing hit-testing. However, one 
 important use case is that this can be done with nested canvas elements. 
 Most (if not all) games will use off-screen canvas elements to draw 
 elements which can then be reused.

 The programmer will create hit-test canvas elements which are then 
 composited similarly to the off-screen canvases.

 It seems that the additions that Ian made do not cover this use case 
 unless there's a way to extract the hit regions from a canvas and then 
 apply/remove them (with a possible matrix manipulation) to/from another 
 canvas.

That's an interesting idea. I haven't added this yet, but it seems like 
something we should definitely keep in mind; if it turns out that the hit 
region API is popular, it would definitely be a logical next step.


On Sat, 7 Jul 2012, Dean Jackson wrote:
 
 We're aware of this technique, but it has a couple of obvious issues:
 
 1. It requires us to create a duplicate canvas, possibly using many MB 
 of RAM. It's generally going to be less memory to keep a list of 
 geometric regions. And performance won't be terrible if you implement 
 some spatial hashing, etc.
 
 2. It doesn't allow for sub pixel testing. In your algorithm above, only 
 one region can be at a pixel (which also means it isn't our standard 
 drawing code with anti-aliasing). Consider a zoomed canvas, where we 
 might want more accurate hit testing.

Certainly implementations are welcome to use a hit region list with fine 
paths, rather than pixels, so long as the effect is equivalent.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


[whatwg] Input modes: Please help me with some research!

2012-07-17 Thread Ian Hickson

One of the features that I'm looking at specifying (again) is a mechanism 
for authors to help user agents pick the most appropriate input mode. For 
some cases this is easy; for example, user agents can know that an input 
type=number field should have a numeric keyboard. However, in some other 
cases it's not at all obvious; e.g. you want a numeric keyboard for credit 
card fields, which are type=text.

To do this properly, I need to have a list of all the possible keyboards 
we should expose. Two things would be helpful to that end:

 - Screenshots of keyboards

 - Details of APIs in existing platforms that control input modes.

I've added some screenshots of keyboards from Android to this wiki page:

   http://wiki.whatwg.org/wiki/Text_input_keyboard_mode_control

If anyone can help out by adding more screenshots from other platforms, 
especially for non-English input languages, or by providing links to 
documentation for input mode APIs on operating systems that support them, 
that would be great.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] seamless iframes and event propagation

2012-07-17 Thread Ojan Vafai
On Mon, Jul 16, 2012 at 9:24 AM, Dimitri Glazkov dglaz...@chromium.org wrote:

 On Sat, Jul 14, 2012 at 4:45 AM, Olli Pettay olli.pet...@helsinki.fi
 wrote:
 
  On 07/14/2012 12:38 AM, Ojan Vafai wrote:
 
   It's been pointed out to me that what I'm asking for is essentially the
   same retargeting as we do for shadow DOMs in web components, where the
   iframe is the shadow host and the document is the shadow root. This
   covers all the details of what properties need to be updated when
   crossing the document boundary. The only addition on top of that is
   that we need to convert the coordinate space of mouse events
   appropriately when we cross the boundary.
 
 
 
   What, you'd propagate mouse events to the parent doc but update the
   coordinate-related values when passing the doc boundary... that is
   odd. Something in the original target document may keep a reference
   to the event and then suddenly, during event dispatch, the coordinate
   values would change.

 We should probably recreate an event object at each seamless frame
 boundary.


As I look at the event retargeting spec more closely, I think it's dealing
with a different set of design constraints, and the seamless iframe
retargeting could do something much simpler.

Here's how I picture it working:
1. Create an event in the outer document with the seamless iframe as its
target, with mouse coordinates in the outer document's coordinate space,
and begin the capture phase.
2. When the capture phase is done, if stopPropagation was not called, fire
a new event in the inner document as normal.
3. Execute target phase in the outer document.
4. Execute the bubble phase in the outer document.

If preventDefault is called on either the outer document's event or the
inner document's event, then the default action of the event is prevented.
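The coordinate conversion at the boundary (step 2) amounts to subtracting the iframe's position; a minimal sketch of that arithmetic, with hypothetical names:

```javascript
// Translate an outer-document client coordinate into the seamless
// iframe's coordinate space, given the iframe's bounding rect (e.g.
// from getBoundingClientRect()).
function toInnerCoords(frameRect, clientX, clientY) {
  return {
    x: clientX - frameRect.left,
    y: clientY - frameRect.top
  };
}
```

The new event fired in the inner document would carry these translated values, so a reference held to the outer event never mutates.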

It's not clear to me if any events should be exempt from this. For example,
should focuses/blurs that are entirely contained within the seamless iframe
fire in the outer document? My intuition is no, but I could easily be
swayed either way.

Ojan


Re: [whatwg] seamless iframes and event propagation

2012-07-17 Thread Erik Arvidsson
On Tue, Jul 17, 2012 at 4:28 PM, Ojan Vafai o...@chromium.org wrote:
 It's not clear to me if any events should be exempt from this. For example,
 should focuses/blurs that are entirely contained within the seamless iframe
 fire in the outer document? My intuition is no, but I could easily be
 swayed either way.

mouseover/out etc. should not fire in the outer document if the mouse
is just moving inside the iframe. In other words there must never be a
case where target and relatedTarget are the same.

-- 
erik


[whatwg] Archive API - proposal

2012-07-17 Thread Andrea Marchesini
Hi All,

I would like to propose a new JavaScript/web API that provides the ability
to read the contents of an archive file through DOMFile objects. I started
working on this API because it was requested at a Mozilla games meeting by
game developers, who often use ZIP files as a storage system.

What I'm describing is a read-only and asynchronous API built on top of FileAPI 
( http://dev.w3.org/2006/webapi/FileAPI/ ).

Here is a draft written in WebIDL:

interface ArchiveRequest : DOMRequest
{
  // this is the ArchiveReader:
  readonly attribute ArchiveReader reader;
};

[Constructor(Blob blob)]
interface ArchiveReader
{
  // every method is asynchronous

  // The ArchiveRequest.result is an array of strings (the filenames)
  ArchiveRequest getFilenames();

  // The ArchiveRequest.result is a DOMFile
  // (http://dev.w3.org/2006/webapi/FileAPI/#dfn-file)
  ArchiveRequest getFile(DOMString filename);
};

Here is an example of how to use it:

function startRead() {
  // Starting from an <input type="file" id="file">:
  var file = document.getElementById('file').files[0];

  if (file.type != 'application/zip') {
    alert('This archive format is not supported');
    return;
  }

  // The ArchiveReader object works with Blob objects:
  var archiveReader = new ArchiveReader(file);

  // Any request is asynchronous:
  var handler = archiveReader.getFilenames();
  handler.onsuccess = getFilenamesSuccess;
  handler.onerror = errorHandler;

  // Multiple requests can run at the same time:
  var handler2 = archiveReader.getFile('levels/1.txt');
  handler2.onsuccess = getFileSuccess;
  handler2.onerror = errorHandler;
}

// The getFilenames handler receives a list of DOMString:
function getFilenamesSuccess() {
  for (var i = 0; i < this.result.length; ++i) {
    /* this.reader is the ArchiveReader:
    var handle = this.reader.getFile(this.result[i]);
    handle.onsuccess = ...
    */
  }
}

// The getFile handler receives a File/Blob object (and it can be used
// with FileReader):
function getFileSuccess() {
  var reader = new FileReader();
  reader.onload = function(event) {
    // alert(event.target.result);
  };
  reader.readAsText(this.result);
}

function errorHandler() {
  // ...
}
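Since every call returns a request object with onsuccess/onerror handlers and a result, the handler pattern above can be wrapped in a Promise. This wrapper is an illustration layered on top of the proposed API, not part of it:

```javascript
// Wrap a DOMRequest-style object (onsuccess/onerror plus result/error)
// in a Promise for easier chaining of archive operations.
function asPromise(request) {
  return new Promise(function (resolve, reject) {
    request.onsuccess = function () { resolve(request.result); };
    request.onerror = function () { reject(request.error); };
  });
}
```

With it, a caller could write something like `asPromise(archiveReader.getFilenames()).then(...)` instead of assigning handlers by hand.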

I would like to receive feedback about this. In particular:
. Do you think it can be useful?
. Do you see any limitation, any feature missing?

Regards,
AM


Re: [whatwg] seamless iframes and event propagation

2012-07-17 Thread Dimitri Glazkov
An interesting quirk here is whether the full list of event ancestors
should be computed ahead of time (per
http://www.w3.org/TR/dom/#dispatching-events). If yes, then it's still just
like retargeting, but with issuing a new event object at the iframe
boundary. If no, then two separate dispatches will work as you describe.

:DG


Re: [whatwg] Why does CanvasRenderingContext2D.drawImage not draw a video's poster?

2012-07-17 Thread Charles Pritchard
On Jul 17, 2012, at 9:04 PM, Mark Callow callow_m...@hicorp.co.jp wrote:

 On 18/07/2012 00:17, Silvia Pfeiffer wrote:
  I think this is simply an idea that hasn't been raised before. I like
  it. Though even then sometimes there may be nothing when there is no
  explicit poster and preload is set to none.
 The language gives me the impression that drawing nothing was a
 deliberate choice, in particular because later on it says:


We don't have events based on poster, so we don't know whether or not it's been 
loaded. Poster is meant for the video implementation. We use other events to 
know if video is playing.

So as a coder, I can just do an attribute check to see if poster exists, then 
load it into an image tag. It's a normal part of working with Canvas. We always 
follow onload events.
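The decision Charles describes can be reduced to a small pure function, mirroring the spec rule quoted at the top of the thread (draw nothing below HAVE_CURRENT_DATA) with a poster fallback. The names here are illustrative:

```javascript
// Decide what a drawImage-based renderer should use for a video:
// the current frame once enough data is available, else a separately
// loaded poster Image when one is declared, else nothing.
var HAVE_CURRENT_DATA = 2; // readyState threshold; 0/1 draw nothing

function chooseSource(readyState, posterUrl) {
  if (readyState >= HAVE_CURRENT_DATA) return 'video';
  if (posterUrl) return 'poster';
  return 'none';
}
```

In the 'poster' case the page would load `video.poster` into an Image and drawImage that once its onload fires, exactly the attribute-check workaround described above.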


-Charles