Re: [whatwg] Memory management problem of video elements

2014-08-20 Thread Philip Jägenstedt
On Tue, Aug 19, 2014 at 3:54 PM, duanyao duan...@ustc.edu wrote:
 On 2014-08-19 20:23, Philip Jägenstedt wrote:

 On Tue, Aug 19, 2014 at 11:56 AM, duanyao duan...@ustc.edu wrote:

 If the media element object keeps track of its current playing url and
 current position (this requires little memory), and the media file is
 seekable, then the media is always resumable. UA can drop any other
 associated memory of the media element, and users will not notice any
 difference except a small delay when they resume playing.

 That small delay is a problem, at least when it comes to audio
 elements used for sound effects. For video elements, there's the
 additional problem that getting back to the same state will require
 decoding video from the previous keyframe, which could take several
 seconds of CPU time.

 Of course, anything is better than crashing, but tearing down a media
 pipeline and recreating it in the exact same state is quite difficult,
 which is probably why nobody has tried it, AFAIK.

 UA can pre-create the media pipeline according to some hints, e.g. that the
 video element is becoming visible, so that the delay may be minimized.

 There is a load() method on media elements; can it be extended to instruct
 the UA to recreate the media pipeline? Then script could reduce the delay
 if it knows the media is about to be played.

load() resets all state and starts resource selection anew, so without
a way of detecting when a media element has destroyed its media
pipeline to save memory, calling load() can in the worst case increase
the time until play.

 Audios usually eat much less memory, so UAs may have a different strategy
 for them.

 Many native media players can save the playing position on exit and resume
 playback from that position on the next run.
 Most users are satisfied with such a feature. Is recovering to the exact same
 state important to some web applications?

I don't know what is required for site compat, but ideally destroying
and recreating a pipeline should get you back to the exact same
currentTime and continue playback at the correct video frame and audio
sample. It could be done.

 I'm not familiar with game programming. Are sound effects small audio files
 that are usually played as a whole? Then it should be safe to recreate the
 pipeline.

There's also a trick called audio sprites where you put all sound
effects into a single file with some silence in between and then seek
to the appropriate offset.
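For readers unfamiliar with the trick, the audio-sprite technique can be
sketched roughly as follows; the sprite map, offsets, and function names here
are made up for illustration, not taken from any particular library:

```javascript
// Hypothetical sprite map: offsets (in seconds) into one combined audio file
// that contains every sound effect with silence in between.
const SPRITES = {
  jump:  { start: 0.0, end: 0.4 },
  coin:  { start: 1.0, end: 1.3 },
  crash: { start: 2.0, end: 2.9 },
};

// Pure helper: look up a sprite's time range, or null if unknown.
function spriteRange(name) {
  return SPRITES[name] || null;
}

// Browser-only usage sketch: seek to the sprite's offset, play, and
// pause again once playback passes the sprite's end time.
function playSprite(audio, name) {
  const range = spriteRange(name);
  if (!range) return;
  audio.currentTime = range.start;
  audio.play();
  const onTime = () => {
    if (audio.currentTime >= range.end) {
      audio.pause();
      audio.removeEventListener('timeupdate', onTime);
    }
  };
  audio.addEventListener('timeupdate', onTime);
}
```

Because all effects live in one file, a single decoded buffer and one media
pipeline serve every effect, which is also why seeking latency matters here.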

Philip


Re: [whatwg] Proposal: Wake Lock API

2014-08-20 Thread Jonas Sicking
On Tue, Aug 19, 2014 at 1:29 PM, Marcos Caceres w...@marcosc.com wrote:
 interface WakeLock : EventTarget {
   Promise<void> request();
   Promise<void> release();
   attribute EventHandler onlost;
 };

What are the use cases for onlost?

Though I don't really mind exposing this state. My experience is that
if any sane implementation strategy will need to keep some specific
state, and that state affects the behavior of the API, then eventually
someone will come up with a use case for exposing it. And that
exposing it is really easy anyway.

However I think what we'd need is something like

  readonly attribute boolean held;
  attribute EventHandler onheldchange;

FWIW, the web platform sorely needs a construct for readonly state
variable + event whenever the state changes. I.e. some form of
observable which remembers the last produced value. I had hoped the
Streams would get us closer to that, but the current definition seems
to be too different.
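The construct described above — a read-only state variable whose observers
immediately receive the last produced value and are then notified on every
change — might be sketched in script like this (class and method names are my
own invention, not any proposed API):

```javascript
// Hypothetical "last-value observable": holds a current value, replays it
// to each new subscriber, and notifies all subscribers on every change.
class StateVariable {
  constructor(initial) {
    this.value = initial;
    this.listeners = new Set();
  }
  subscribe(fn) {
    this.listeners.add(fn);
    fn(this.value); // replay the last value immediately
    return () => this.listeners.delete(fn); // unsubscribe function
  }
  set(next) {
    if (next === this.value) return; // no spurious notifications
    this.value = next;
    for (const fn of this.listeners) fn(next);
  }
}
```

A `held` flag on WakeLock could be modeled this way: subscribing late still
tells you the current state, which a bare event handler cannot do.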

/ Jonas


Re: [whatwg] Proposal: Wake Lock API

2014-08-20 Thread Anne van Kesteren
On Wed, Aug 20, 2014 at 10:29 AM, Jonas Sicking jo...@sicking.cc wrote:
 FWIW, the web platform sorely needs a construct for readonly state
 variable + event whenever the state changes. I.e. some form of
 observable which remembers the last produced value. I had hoped the
 Streams would get us closer to that, but the current definition seems
 to be too different.

Isn't that Object.observe() with custom records produced by the
specific object you are defining for the property you want to enable
this for (in this case held, it seems like)?


-- 
http://annevankesteren.nl/


Re: [whatwg] Proposal: Wake Lock API

2014-08-20 Thread Jonas Sicking
On Wed, Aug 20, 2014 at 1:33 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Wed, Aug 20, 2014 at 10:29 AM, Jonas Sicking jo...@sicking.cc wrote:
 FWIW, the web platform sorely needs a construct for readonly state
 variable + event whenever the state changes. I.e. some form of
 observable which remembers the last produced value. I had hoped the
 Streams would get us closer to that, but the current definition seems
 to be too different.

 Isn't that Object.observe() with custom records produced by the
 specific object you are defining for the property you want to enable
 this for (in this case held, it seems like)?

That's a good question. It'd be awesome if Object.observe() solved
this problem for us.

One thing that I'd worry about is that it'll be hard for authors to
know which properties are observable, and which ones aren't. But maybe
that's something we can live with.

/ Jonas


Re: [whatwg] Proposal: Wake Lock API

2014-08-20 Thread Olli Pettay

On 08/20/2014 11:33 AM, Anne van Kesteren wrote:

On Wed, Aug 20, 2014 at 10:29 AM, Jonas Sicking jo...@sicking.cc wrote:

FWIW, the web platform sorely needs a construct for readonly state
variable + event whenever the state changes. I.e. some form of
observable which remembers the last produced value. I had hoped the
Streams would get us closer to that, but the current definition seems
to be too different.


Isn't that Object.observe() with custom records produced by the
specific object you are defining for the property you want to enable
this for (in this case held, it seems like)?




Object.observe() + some custom records sounds like a rather inconsistent
API. Why would one getter on some prototype be handled differently from
other getters?
(And it is not too clear how one would implement that.)





Re: [whatwg] Memory management problem of video elements

2014-08-20 Thread duanyao

On 2014-08-20 15:52, Philip Jägenstedt wrote:

On Tue, Aug 19, 2014 at 3:54 PM, duanyao duan...@ustc.edu wrote:

On 2014-08-19 20:23, Philip Jägenstedt wrote:


On Tue, Aug 19, 2014 at 11:56 AM, duanyao duan...@ustc.edu wrote:

If the media element object keeps track of its current playing url and
current position (this requires little memory), and the media file is
seekable, then the media is always resumable. UA can drop any other
associated memory of the media element, and users will not notice any
difference except a small delay when they resume playing.

That small delay is a problem, at least when it comes to audio
elements used for sound effects. For video elements, there's the
additional problem that getting back to the same state will require
decoding video from the previous keyframe, which could take several
seconds of CPU time.

Of course, anything is better than crashing, but tearing down a media
pipeline and recreating it in the exact same state is quite difficult,
which is probably why nobody has tried it, AFAIK.

UA can pre-create the media pipeline according to some hints, e.g. that the
video element is becoming visible, so that the delay may be minimized.

There is a load() method on media elements; can it be extended to instruct
the UA to recreate the media pipeline? Then script could reduce the delay
if it knows the media is about to be played.

load() resets all state and starts resource selection anew, so without
a way of detecting when a media element has destroyed its media
pipeline to save memory, calling load() can in the worst case increase
the time until play.
I meant we could add an optional parameter to load() to support a soft
reload, e.g. load(boolean soft), which doesn't reset state or re-run
resource selection.

Maybe it is better to reuse the pause() method to request that the UA
recreate the media pipeline. If a media element is in memory-saving state,
it must be in the paused state as well, so invoking pause() should not have
undesired side effects.

Anyway, it seems the spec needs to introduce a new state of media element:
the memory-saving state. In low-memory conditions, the UA can select some
low-priority media elements and turn them into memory-saving state.


Suggested priorities for videos are:
(1) recently (re)started, playing, and visible videos
(2) previously (re)started, playing, and visible videos
(3) paused and visible videos; playing and invisible videos
(4) paused and invisible videos

Priorities for audios are to be considered.

Memory-saving state implies paused state.

If memory becomes sufficient, or a media element's priority is about to
change, the UA can restore some of them to the normal paused state
(previously playing media doesn't automatically resume playback).

If the pause() method is invoked on a media element in memory-saving state,
the UA must restore it to the normal paused state.
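The four priority tiers proposed above could be sketched as a ranking
function; the state flags and tier numbers below are my reading of the list,
not anything from the spec:

```javascript
// Lower rank = higher priority = reclaimed last. Tiers follow the proposal:
// (1) recently (re)started, playing, and visible videos
// (2) previously (re)started, playing, and visible videos
// (3) paused and visible videos; playing and invisible videos
// (4) paused and invisible videos
function reclaimRank(v) {
  if (v.playing && v.visible) return v.recentlyStarted ? 1 : 2;
  if (v.playing || v.visible) return 3; // paused+visible or playing+invisible
  return 4;                             // paused and invisible
}

// Order candidates for memory reclamation: lowest-priority (highest rank) first.
function reclaimOrder(videos) {
  return [...videos].sort((a, b) => reclaimRank(b) - reclaimRank(a));
}
```

A UA under memory pressure would then walk the ordered list, moving elements
into memory-saving state until enough memory is freed.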



Audios usually eat much less memory, so UAs may have a different strategy
for them.

Many native media players can save the playing position on exit and resume
playback from that position on the next run.
Most users are satisfied with such a feature. Is recovering to the exact same
state important to some web applications?

I don't know what is required for site compat, but ideally destroying
and recreating a pipeline should get you back to the exact same
currentTime and continue playback at the correct video frame and audio
sample. It could be done.


I'm not familiar with game programming. Are sound effects small audio files
that are usually played as a whole? Then it should be safe to recreate the
pipeline.

There's also a trick called audio sprites where you put all sound
effects into a single file with some silence in between and then seek
to the appropriate offset.
I think if the UA can get and set the currentTime property accurately, it
should be able to recreate the pipeline with the same accuracy. What are
the main factors limiting the accuracy?
However, a UA using priorities to manage media memory is unlikely to
reclaim an in-use audio-sprite element's memory.


Philip





Re: [whatwg] Memory management problem of video elements

2014-08-20 Thread Philip Jägenstedt
On Wed, Aug 20, 2014 at 12:04 PM, duanyao duan...@ustc.edu wrote:
 On 2014-08-20 15:52, Philip Jägenstedt wrote:

 On Tue, Aug 19, 2014 at 3:54 PM, duanyao duan...@ustc.edu wrote:

 I'm not familiar with game programming. Are sound effects small audio
 files that are usually played as a whole? Then it should be safe to
 recreate the pipeline.

 There's also a trick called audio sprites where you put all sound
 effects into a single file with some silence in between and then seek
 to the appropriate offset.

 I think if UA can get and set currentTime property accurately, it should be
 able to recreate the pipeline
 with the same accuracy. What are the main factors limiting the accuracy?

I don't know, but would guess that not all media frameworks can seek
to an exact audio sample but only to the beginning of a video frame or
an audio frame, in which case currentTime would be slightly off. One
could just lie about currentTime until playback continues, though.

Philip


Re: [whatwg] Memory management problem of video elements

2014-08-20 Thread duanyao

On 2014-08-20 19:26, Philip Jägenstedt wrote:

On Wed, Aug 20, 2014 at 12:04 PM, duanyao duan...@ustc.edu wrote:

On 2014-08-20 15:52, Philip Jägenstedt wrote:


On Tue, Aug 19, 2014 at 3:54 PM, duanyao duan...@ustc.edu wrote:

I'm not familiar with game programming. Are sound effects small audio
files that are usually played as a whole? Then it should be safe to
recreate the pipeline.

There's also a trick called audio sprites where you put all sound
effects into a single file with some silence in between and then seek
to the appropriate offset.

I think if UA can get and set currentTime property accurately, it should be
able to recreate the pipeline
with the same accuracy. What are the main factors limiting the accuracy?

I don't know, but would guess that not all media frameworks can seek
to an exact audio sample but only to the beginning of a video frame or
an audio frame, in which case currentTime would be slightly off. One
could just lie about currentTime until playback continues, though.
Such a limitation also affects seeking, not only the memory-saving feature,
and the spec allows for quality-of-implementation differences there, so I
think this is acceptable.
Additionally, since a media element in memory-saving state must be paused,
I think users won't care about a small error in the resume position.




Philip





Re: [whatwg] Proposal: Wake Lock API

2014-08-20 Thread Marcos Caceres



On Wednesday, August 20, 2014 at 5:15 AM, Olli Pettay wrote:

 On 08/20/2014 11:33 AM, Anne van Kesteren wrote:
  On Wed, Aug 20, 2014 at 10:29 AM, Jonas Sicking jo...@sicking.cc 
  (mailto:jo...@sicking.cc) wrote:
   FWIW, the web platform sorely needs a construct for readonly state
   variable + event whenever the state changes. I.e. some form of
   observable which remembers the last produced value. I had hoped the
   Streams would get us closer to that, but the current definition seems
   to be too different.
  
  
  
  Isn't that Object.observe() with custom records produced by the
  specific object you are defining for the property you want to enable
  this for (in this case held, it seems like)?
 
 
 
 Object.observe() + some custom records sounds like a rather inconsistent
 API. Why would one getter on some prototype be handled differently from
 other getters?
 (And it is not too clear how one would implement that.)


Agree - a custom thing would not be great. And of course Object.observe would
work really nicely, but I've been told a bunch of times by various people that
we can't use Object.observe on DOM APIs (this *really* sucks).

Getting a bit off topic, but it would be nice if we had a DOM Observer (kinda
like a mutation observer) that returned the same records as Object.observe
and could be used with the attributes of WebIDL-defined objects. It would make
designing, and using, these APIs much simpler.


Re: [whatwg] Proposal: Wake Lock API

2014-08-20 Thread Anne van Kesteren
On Wed, Aug 20, 2014 at 5:09 PM, Marcos Caceres w...@marcosc.com wrote:
 And of course Object.observe would work really nicely, but I've been told a 
 bunch of times by various people that we can't use Object.observe on DOM APIs 
 (this *really* sucks).

Not by default, but we can make it work as I said. We wouldn't do it
for innerHTML, but we might for input.value or some such.


-- 
http://annevankesteren.nl/


Re: [whatwg] HTML differences from HTML4 document updated

2014-08-20 Thread Simon Pieters
On Tue, 07 May 2013 16:37:21 +0200, Gordon P. Hemsley  
gphems...@gmail.com wrote:



Simon,

I think it would be good to consider the target audiences, of which
there are probably many:

You have the audience who is worried that HTML5 is some grand
departure from the HTML 4.01 they (think they) know and love. For
them, you'll want to describe what exactly has been removed and why,
instilling the idea of a separation between semantic and
presentational markup.

Then you have the audience that is excited to see what they can do now
with HTML5 that they couldn't do with HTML 4.01. For them, you'd list
the new elements and attributes and such.

Then you probably have some other incidentals such as things that were
removed or changed just because they were never implemented or people
never used them. These probably don't fall into either of the two
categories above.

But you also have another issue to consider: For this document, the
difference between the W3C's concept of specification snapshots and
WHATWG's concept of a living standard is not trivial. For the former,
you can have snapshot documents detailing the differences between each
snapshot specification; for the latter, you need a living document
that is anchored by a fixed point at one end (HTML 4.01).

This raises the question of the purpose of this document: Is it to
simplify the transition from HTML 4.01 to HTML5+? Or is it to act as
an HTML changelog from here on out? Because I think attempting to do
both within a single document will become unwieldy as time goes on.


Thanks. I've tried to make it a bit more focused by having one document  
that compares WHATWG HTML to HTML4 and a separate document that compares  
W3C HTML5 to HTML4, dropped W3C HTML 5.1 (covered by  
http://www.w3.org/html/landscape/ ) and dropped the Changes (covered by  
http://platform.html5.org/history/ ).


https://github.com/whatwg/html-differences/commit/a34fa020d2e2c17bb84fe963dc3f8de2250c31c4
https://github.com/whatwg/html-differences/commit/06499f22bcfd5f72ac1e7b3f3f3e4863e2db9c0b

--
Simon Pieters
Opera Software


Re: [whatwg] Memory management problem of video elements

2014-08-20 Thread Ian Hickson
On Wed, 20 Aug 2014, Philip Jägenstedt wrote:

 I don't know, but would guess that not all media frameworks can seek to 
 an exact audio sample but only to the beginning of a video frame or an 
 audio frame, in which case currentTime would be slightly off. One could 
 just lie about currentTime until playback continues, though.

Note that setting currentTime is required to be precise, even if that 
means actually playing the content in silence for a while to get to the 
precise point. To seek fast, we have a separate fastSeek() method.
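From script, the split between the sample-accurate currentTime setter and the
approximate fastSeek() method might be used like this; the wrapper function is
just a sketch, and the feature test matters because not every browser
implements fastSeek():

```javascript
// Seek a media element: precise by default (the spec requires the
// currentTime setter to be exact), approximate via fastSeek() when speed
// matters more than accuracy and the method is available.
function seekTo(media, time, { fast = false } = {}) {
  if (fast && typeof media.fastSeek === 'function') {
    media.fastSeek(time);     // may land on a nearby keyframe instead
  } else {
    media.currentTime = time; // exact position, possibly slower
  }
}
```

A scrubbing UI would pass `fast: true` while the user drags, then do one
precise seek when the drag ends.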

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Proposal: Wake Lock API

2014-08-20 Thread Marcos Caceres



On Wednesday, August 20, 2014 at 11:14 AM, Anne van Kesteren wrote:

 On Wed, Aug 20, 2014 at 5:09 PM, Marcos Caceres w...@marcosc.com 
 (mailto:w...@marcosc.com) wrote:
  And of course Object.observe would work really nicely, but I've been told a 
  bunch of times by various people that we can't use Object.observe on DOM 
  APIs (this *really* sucks).
 
 
 
 Not by default, but we can make it work as I said. We wouldn't do it
 for innerHTML, but we might for input.value or some such.
 
Ok, if we get [Observable] in WebIDL, then the API basically becomes something 
neat like:

```
partial interface Document {
  [Observable] attribute boolean keepScreenOn;
};
```

So nice. 


Re: [whatwg] `brand-color` meta extension

2014-08-20 Thread Mark Callow
On 2014/06/26 12:58, Marcos Caceres wrote:
 I would be in favor of this. It would be good to support the legacy content
 as its use on the Web is significant. A search I did back in Oct 2013 found
 these proprietary tags appeared on something like 1% of pages in Alexa's top
 78K pages.
1%! Significant? Hardly. Typo?

Regards

-Mark

-- 

NOTE: This electronic mail message may contain confidential and
privileged information from HI Corporation. If you are not the intended
recipient, any disclosure, photocopying, distribution or use of the
contents of the received information is prohibited. If you have received
this e-mail in error, please notify the sender immediately and
permanently delete this message and all related copies.



Re: [whatwg] `brand-color` meta extension

2014-08-20 Thread Tab Atkins Jr.
On Wed, Aug 20, 2014 at 12:39 PM, Mark Callow
callow.m...@artspark.co.jp wrote:
 On 2014/06/26 12:58, Marcos Caceres wrote:
 I would be in favor of this. It would be good to support the legacy content
 as its use on the Web is significant. A search I did back in Oct 2013 found
 these proprietary tags appeared on something like 1% of pages in Alexa's top
 78K pages.
 1%! Significant? Hardly. Typo?

The web corpus is somewhere north of a trillion pages.  1% of that is
still 10 billion+.

Even for things that aren't evenly distributed, and so occur mostly on
newer content, 1% is a large fraction, which people are likely to run
into on a roughly daily basis.

Chrome, for example, only starts considering whether a feature can be
removed when its usage is under 0.01% (we usually prefer it to be less
than 0.003% or so).

~TJ