Re: [whatwg] Codecs for audio and video

2009-07-08 Thread Philip Jagenstedt
On Tue, 07 Jul 2009 22:45:41 +0200, Charles Pritchard ch...@jumis.com  
wrote:



On 7/7/09 1:10 PM, Philip Jagenstedt wrote:
On Tue, 07 Jul 2009 17:52:29 +0200, Charles Pritchard ch...@jumis.com  
wrote:



Philip Jagenstedt wrote:
For all of the simpler use cases you can already generate sounds
yourself with a data URI. For example, this is 2 samples of silence:
data:audio/wav;base64,UklGRigAAABXQVZFZm10IBABAAEARKwAAIhYAQACABAAZGF0YQQA.
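A data URI like the one above can be built at runtime. A sketch follows; Node's Buffer is used here purely so the snippet is runnable standalone (in a browser one would assemble the bytes with a DataView and btoa()), and the header layout is the canonical 44-byte RIFF/WAVE form:

```javascript
// Build a minimal 16-bit mono PCM WAV file in memory and wrap it in a
// data: URI playable by an audio element.
function makeWavDataUri(samples, sampleRate) {
  const dataSize = samples.length * 2;        // 16-bit samples
  const buf = Buffer.alloc(44 + dataSize);    // 44-byte canonical header
  buf.write('RIFF', 0);
  buf.writeUInt32LE(36 + dataSize, 4);        // remaining file size
  buf.write('WAVE', 8);
  buf.write('fmt ', 12);
  buf.writeUInt32LE(16, 16);                  // fmt chunk size
  buf.writeUInt16LE(1, 20);                   // audio format: PCM
  buf.writeUInt16LE(1, 22);                   // channels: mono
  buf.writeUInt32LE(sampleRate, 24);
  buf.writeUInt32LE(sampleRate * 2, 28);      // byte rate
  buf.writeUInt16LE(2, 32);                   // block align
  buf.writeUInt16LE(16, 34);                  // bits per sample
  buf.write('data', 36);
  buf.writeUInt32LE(dataSize, 40);
  samples.forEach((s, i) => buf.writeInt16LE(s, 44 + i * 2));
  return 'data:audio/wav;base64,' + buf.toString('base64');
}

// Two samples of silence, as in the URI quoted above:
const uri = makeWavDataUri([0, 0], 44100);
// In a browser: new Audio(uri).play();
```
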
Yes you can use this method, and with the current audio tag and  
autobuffer, it may work to some degree.


It does not produce smooth transitions.

At some point, a Blob / Stream API could make things like this easier.
If the idea is to write a Vorbis decoder in JavaScript that would be  
quite cool in a way, but for vendors already implementing Vorbis it  
wouldn't really add anything. A pure JS-implementation of any modern  
audio codec would probably be a ridiculous amount of code and slow, so  
I doubt it would be that useful in practice.


Well I'd like to disagree, and reiterate my prior arguments.  Vorbis  
decoders have been written in ActionScript and in Java.
They are not ridiculous, in size, nor in CPU usage. They can play audio  
streams, smoothly, and the file size is completely
tolerable. And the idea is codec neutrality, a Vorbis decoder is just  
one example.


OK, I won't make any assumptions of the size/speed of such an  
implementation until I see one.


For some use cases you could use 2 audio elements in tandem, mixing new  
sound to a new data URI when the first is nearing the end (although  
sync can't be guaranteed with the current API). But yes, there are  
things which can only be done by a streaming API integrating into the  
underlying media framework.

Yes, the current API is inadequate. data: encoding is insufficient.
Here's the list of proposed features, right out of a comment block in the
spec:

These features can be implemented without a spec, using canvas,
a raw data buffer, and ECMAScript.

A few of these features may need hardware level support, or a fast  
computer.

The audio tag would be invisible, and the canvas tag would
provide the user interface.
Your use cases probably fall under audio filters and synthesis. I  
expect that attention will turn to gradually more complex use cases  
when the basic API we have now is implemented and stable cross-browser  
and cross-platform.
Yes, some of these use cases qualify as filters, some qualify as  
synthesis.
I'm proposing that simple filters and synthesis can be accomplished with
modern ECMAScript virtual machines and a raw data buffer. My use cases
are scoped to current capabilities.


Apart from those use cases, I'm proposing that a raw data buffer will
allow for codec neutrality.

There are dozens of minor audio codecs, some simpler than others, some  
low bitrate,
that could be programmed in ECMAScript and would run just fine with  
modern ECMAScript VMs.


Transcoding lossy data is a sub-optimal solution. Allowing for arbitrary  
audio
codecs is a worthwhile endeavor. ECMAScript can detect if playback is  
too slow.


Additionally, in some cases, the programmer could work-around broken  
codec implementations.
It's forward-looking, it allows real backward compatibility and  
interoperability across browsers.


canvas allows for arbitrary, programmable video; audio should allow
for programmable audio. Then we can be codec neutral in our media
elements.


While stressing that I don't think this should go into the spec until  
there's a proof-of-concept implementation that does useful stuff, is the  
idea to set audio.src=new MySynthesizer() and play()? (MySynthesizer would  
need to implement some standard interface.) You also have the question of  
push vs pull, i.e. does the audio source request data from the synthesizer  
when needed or does the synthesizer need to run a loop pushing audio data?
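For what it's worth, the pull model might look something like the following minimal sketch. The interface shape, the class name, and the pullSamples method are hypothetical illustrations, not from any spec:

```javascript
// A hypothetical pull-style source: the audio sink asks the synthesizer
// for samples whenever its internal buffer runs low.
class SineSynthesizer {
  constructor(freq, sampleRate) {
    this.freq = freq;
    this.sampleRate = sampleRate;
    this.phase = 0;
  }
  // Called by the sink: produce `n` samples in the range [-1, 1].
  pullSamples(n) {
    const out = new Float32Array(n);
    const step = 2 * Math.PI * this.freq / this.sampleRate;
    for (let i = 0; i < n; i++) {
      out[i] = Math.sin(this.phase);
      this.phase += step;
    }
    return out;
  }
}

// A sink would then drive the source, e.g.:
const synth = new SineSynthesizer(440, 44100);
const block = synth.pullSamples(128); // sink requests 128 samples
```

The push alternative would invert this: the script runs its own loop and writes blocks into the element's buffer, which raises the timing questions mentioned above.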


--
Philip Jägenstedt
Core Developer
Opera Software


Re: [whatwg] Limit on number of parallel Workers.

2009-07-08 Thread Eduard Pascual
On Wed, Jul 8, 2009 at 1:59 AM, Ian Hicksoni...@hixie.ch wrote:

 I include below, for the record, a set of e-mails on the topic of settings
 limits on Workers to avoid DOS attacks.

 As with other such topics, the HTML5 spec allows more or less any
 arbitrary behaviour in the face of hardware limitations. There are a
 variety of different implementation strategies, and these will vary
 based on the target hardware. How to handle a million new workers will be
 different on a system with a million cores and little memory than a system
 with one core but terabytes of memory, or a system with 100 slow cores vs
 a system with 10 fast cores.

 I have therefore not added any text to the spec on the matter. Please let
 me know if you think there should really be something in the spec on this.


Shouldn't a per-user setting be the sanest approach for the worker
limit? For example, it would make sense for me to want a low
limit (say, 10 or so workers) on my laptop's browser, but no
restriction (or a much higher one, like some thousand workers) on my
workstation.
Ian's point is key here: what's an appropriate limit for workers
depends almost entirely on hardware resources (and probably also on
implementation efficiency and other secondary aspects), and there is a
*huge* variety of hardware configurations that act as web clients, so
it's just impossible to hardcode a limit in the spec that works
properly for more than a minority. At most, I would suggest a note
like this in the spec: "User agents SHOULD provide the user a way to
limit the number of workers running at a time"; the emphasis is on
SHOULD rather than MUST, and also on the fact that the final
choice is for users to make. Then it'd be up to each implementor to
decide on default, out-of-the-box limits for their browser (it would
make sense, for example, if Chromium had a lower default limit than
Firefox, since Chromium's workers are more expensive).
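As an aside, whatever limit the UA imposes, a page can already self-limit with a small pool on top of the Worker API. A sketch of that scheduling logic; createWorker is injected so nothing here depends on a particular Worker implementation, and the names are illustrative:

```javascript
// A page-side pool that caps concurrent workers at maxWorkers and queues
// the rest, the kind of per-deployment knob discussed above.
class WorkerPool {
  constructor(maxWorkers, createWorker) {
    this.max = maxWorkers;
    this.create = createWorker;
    this.active = 0;
    this.queue = [];
  }
  submit(task) {
    if (this.active < this.max) this.run(task);
    else this.queue.push(task);          // over the cap: wait for a slot
  }
  run(task) {
    this.active++;
    const worker = this.create();        // e.g. () => new Worker('job.js')
    task(worker, () => {                 // task calls done() when finished
      this.active--;
      if (this.queue.length) this.run(this.queue.shift());
    });
  }
}
```
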

Just my two cents.

Regards,
Eduard Pascual


Re: [whatwg] Codecs for audio and video

2009-07-08 Thread Charles Pritchard

On 7/8/09 2:20 AM, Philip Jagenstedt wrote:
On Tue, 07 Jul 2009 22:45:41 +0200, Charles Pritchard 
ch...@jumis.com wrote:

At some point, a Blob / Stream API could make things like this easier.
If the idea is to write a Vorbis decoder in JavaScript that would be 
quite cool in a way, but for vendors already implementing Vorbis it 
wouldn't really add anything. A pure JS-implementation of any modern 
audio codec would probably be a ridiculous amount of code and slow, 
so I doubt it would be that useful in practice.


Well I'd like to disagree, and reiterate my prior arguments.  Vorbis 
decoders have been written in ActionScript and in Java.
They are not ridiculous, in size, nor in CPU usage. They can play 
audio streams, smoothly, and the file size is completely
tolerable. And the idea is codec neutrality, a Vorbis decoder is just 
one example.


OK, I won't make any assumptions of the size/speed of such an 
implementation until I see one.
Well,  again, there exist implementations running on Sun/Oracle's Java 
VM and the Flash VM.
These two use byte-code packaging, so the file size is under 100k;
deflated ECMAScript source would also weigh under 100k.

Transcoding lossy data is a sub-optimal solution. Allowing for 
arbitrary audio
codecs is a worthwhile endeavor. ECMAScript can detect if playback is 
too slow.

I want to point this out again.

While there is some struggle to define a standard codec (so we might be 
spared the burden
of so very many encoders), there is a very large supply of 
already-encoded media in the wild.


I've recently worked on a project that required a
difficult-to-obtain codec.
Open-source descriptions were available, and had it been an option, I
certainly would have paid to have the codec written in ECMAScript and
delivered with the media files.


In that particular case, paying someone to write a decoder for one 
particular, minority codec,
would have been cheaper, and more correct, than paying for the 
transcoding of 60 gigs of low bit-rate audio.


Most media formats are lossy, making their current format, whatever the 
encumbrance, the best solution.


Additionally, in some cases, the programmer could work-around broken 
codec implementations.
It's forward-looking, it allows real backward compatibility and 
interoperability across browsers.


canvas allows for arbitrary, programmable video, audio should allow
for programmable audio. Then, we can be codec neutral in our media 
elements.


While stressing that I don't think this should go into the spec until 
there's a proof-of-concept implementation that does useful stuff, is 
the idea to set audio.src=new MySynthesizer() and play()? 
(MySynthesizer would need to implement some standard interface.) You 
also have the question of push vs pull, i.e. does the audio source 
request data from the synthesizer when needed or does the synthesizer 
need to run a loop pushing audio data?


Well, we really need to define what "useful stuff" is, you know, to set
that bar.


There are two use cases that I think are important: a codec
implementation (let's use Vorbis), and an accessibility implementation,
working with a canvas element.

I don't know what would qualify for accessibility. A topographical map,
which makes a lower or higher pitched hum based on elevation
(surrounding the pointer), is one example.

On that same line of thinking, a hum of varying intensity signaling
proximity to a clickable element (we're still talking about canvas)
might be useful. If there is no sound in the right channel, there are
no elements to be clicked on to the right of the pointer. If the sound
is low, the element is rather far away.

Site developers still need to put in the work. With a buffered audio
API, they'll at least have the option to do so.
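That proximity cue boils down to a small mapping from pointer position and clickable-element positions to per-channel gains. A sketch; the linear falloff, the maxDistance cutoff, and the proximityGains helper are all illustrative choices, not part of any proposal:

```javascript
// Map pointer x and the x positions of clickable canvas regions to
// left/right channel gains: louder when nearer, silent past maxDistance.
function proximityGains(pointerX, clickableXs, maxDistance) {
  let left = 0, right = 0;
  for (const x of clickableXs) {
    const d = Math.abs(x - pointerX);
    if (d > maxDistance) continue;        // too far away to signal
    const gain = 1 - d / maxDistance;     // linear falloff with distance
    if (x >= pointerX) right = Math.max(right, gain);
    else left = Math.max(left, gain);
  }
  return { left, right };                 // silence means nothing that side
}
```

The resulting gains would then scale the hum samples written into each channel of the raw audio buffer.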

Can we come to an agreement as to what would constitute a reasonable 
proof of concept?
This is meant to allow canvas to be more accessible to the visually 
impaired.


Obviously, audio src tags could be used in many cases with canvas,
so our test case should be one where audio src would be insufficient.

Both of these use cases can be accomplished with a raw audio buffer.
They do not need native channel mixing, nor toDataURL support.

In the long term, I think those two options would be nice, but in the
short term they would just cause delays in adoption.

As Robert has said, there are much more important things to work on
( https://bugzilla.mozilla.org/show_bug.cgi?id=490705 ).


I think at this point, the model should play buffered bytes as they are 
made available (if the buffer has anything, start playing it).


I believe the buffered attribute can be used by the ECMAScript loop to 
detect
how much data is buffered, and whether it should continue decoding or 
take other actions.
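That decode loop could be as simple as the following sketch; decodeStep, the state object, and the one-second target are illustrative stand-ins for whatever a real media API and codec-in-ECMAScript would expose:

```javascript
// One iteration of the decode loop described above: keep decoding while
// the buffered-but-unplayed audio is below a target, otherwise yield so
// the page can take other actions.
function decodeStep(state, targetSeconds, decodeChunk) {
  // Seconds buffered ahead of the playback position.
  const ahead = state.bufferedEnd - state.currentTime;
  if (ahead >= targetSeconds) return false;  // enough queued; do other work
  state.bufferedEnd += decodeChunk();        // seconds of audio produced
  return true;                               // decoded; call again soon
}

// Driven from a timer, e.g.:
// setInterval(() => decodeStep(state, 1.0, decodeNextVorbisChunk), 50);
```
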


The buffered audio API should be handled by the media API in a way 
similar to streaming Web radio.


There should be an origin-clean flag, for future use. One might 
theoretically
add audio 

Re: [whatwg] Limit on number of parallel Workers.

2009-07-08 Thread Michael Nordman
 This type of UA-specific setting is something best left outside the spec
entirely.
Yup


On Wed, Jul 8, 2009 at 10:08 AM, Drew Wilson atwil...@google.com wrote:

 I think Ian's decision to add no language to the spec is the correct one.
 To be clear, we were never asking for Ian to put a limit in the spec -
 rather, given the de facto existence of limits on some platforms, we wanted
 to discuss how those platforms should behave to ensure that they were still
 compliant with the specification.
 Per previous discussions, some implementations have little or no overhead
 per worker (e.g. Firefox which uses a static thread pool to service worker
 tasks). On those platforms, it makes no sense to allow the user to specify a
 maximum number of workers, so having language in the spec saying that UAs
 SHOULD do so is inappropriate.

 This type of UA-specific setting is something best left outside the spec
 entirely.

 -atw


 On Wed, Jul 8, 2009 at 3:41 AM, Eduard Pascual herenva...@gmail.comwrote:

 On Wed, Jul 8, 2009 at 1:59 AM, Ian Hicksoni...@hixie.ch wrote:
 
  I include below, for the record, a set of e-mails on the topic of
 settings
  limits on Workers to avoid DOS attacks.
 
  As with other such topics, the HTML5 spec allows more or less any
  arbitrary behaviour in the face of hardware limitations. There are a
  variety of different implementation strategies, and these will vary
  based on the target hardware. How to handle a million new workers will
 be
  different on a system with a million cores and little memory than a
 system
  with one core but terabytes of memory, or a system with 100 slow cores
 vs
  a system with 10 fast cores.
 
  I have therefore not added any text to the spec on the matter. Please
 let
  me know if you think there should really be something in the spec on
 this.
 

 Shouldn't a per-user setting be the sanest approach for the worker
 limit? For example, it would make sense for me to want a low
 limit (say, 10 or so workers) on my laptop's browser, but no
 restriction (or a much higher one, like some thousand workers) on my
 workstation.
 Ian's point is key here: what's an appropriate limit for workers
 depends almost entirely on hardware resources (and probably also on
 implementation efficiency and other secondary aspects), and there is a
 *huge* variety of hardware configurations that act as web clients, so
 it's just impossible to hardcode a limit in the spec that works
 properly for more than a minority. At most, I would suggest a note
 like this in the spec: "User agents SHOULD provide the user a way to
 limit the number of workers running at a time"; the emphasis is on
 SHOULD rather than MUST, and also on the fact that the final
 choice is for users to make. Then it'd be up to each implementor to
 decide on default, out-of-the-box limits for their browser (it would
 make sense, for example, if Chromium had a lower default limit than
 Firefox, since Chromium's workers are more expensive).

 Just my two cents.

 Regards,
 Eduard Pascual





Re: [whatwg] A Selector-based metadata proposal (was: Annotating structured data that HTML has no semantics for)

2009-07-08 Thread Ian Hickson
On Wed, 10 Jun 2009, Eduard Pascual wrote:
 
  I think this is a level of indirection too far -- when something is a
  heading, it should _be_ a heading, it shouldn't be labeled opaquely
  with a transformation sheet elsewhere defining that it maps to the
  heading semantic.

 That doesn't make much sense. When something is a heading, it *is* a
 heading. What do you mean by "should be a heading"?

I mean that a conforming implementation should intrinsically know that the 
content is a heading, without having to do further processing to discover 
this.

For example, with this CSS and HTML:

   h1 { color: blue; }

   <h1>Introduction</h1>

...the HTML processor knows, regardless of what else is going on, that the
word "Introduction" is part of a heading. It only knows that the word
should be blue after applying processing rules for CSS.

I think by and large the same should hold for more elaborate semantics.


(I didn't really agree with your other responses regarding my criticisms 
of your proposal either, but I don't have anything except my opinions to 
go on as far as those go, so I can't argue my case usefully there.)


  I think CRDF has a bright future in doing the kind of thing GRDDL does,

 I'm not sure about what GRDDL does: I just took a look through the spec, 
 and it seems to me that it's just an overcomplication of what XSLT can 
 already do; so I'm not sure if I should take that statement as a good or 
 a bad thing.

A good thing.

GRDDL is a way to take an HTML page and infer RDF information from that 
page despite the page, e.g. by implementing Microformats using XSLT. So 
for example, GRDDL can be used to extract hCard data from an HTML page and 
turn it into RDF data.


  It's an interesting way of converting, say, Microformats to RDF.

 The ability to convert Microformats to RDF was intended (although not 
 fully achieved: some bad content would be treated differently between 
 CRDF and Microformats); and in the same way CRDF also provides the 
 ability to define de-centralized Microformats.org-like vocabularies (I'm 
 not sure if referring to these as microformats would still be 
 appropriate).

I think this is a particularly useful feature; I would encourage you to 
continue to develop this idea as a separate language, and see if there is 
a market for it.


Cheers,
-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Need UNIFIED UI for web browser plugin security settings.

2009-07-08 Thread Jonas Sicking
On Tue, Jul 7, 2009 at 9:14 PM, Bijubijumaill...@gmail.com wrote:
 On Tue, Jul 7, 2009 at 11:10 PM, Nils Dagsson
 Moskoppnils-dagsson-mosk...@dieweltistgarnichtso.net wrote:
 So in browsers, we need a UNIFIED UI for plugin security settings.

 Actually, do we ? I don't think so. Like with UI for audio and
 video, implementors can very well compete on this.

 I mean a UNIFIED security settings UI for different plugins running in
 the same browser.
 I don't think the ball is in the browser makers' court.
 They need all plugin vendors to use the facility they provide.

Indeed, but plugin APIs aren't something that whatwg has generally been
specifying. I believe that the list at [1] is the appropriate place to
bring this up.

[1] https://mail.mozilla.org/listinfo/plugin-futures

/ Jonas


[whatwg] setting location.hash property and browser reload

2009-07-08 Thread Peter Michaux
When setting the location.hash property, some older browsers reloaded
the entire page. The newest versions of the major browsers do not
reload the page when setting location.hash. This seems to be de
facto standard behavior now. Sites like Yahoo! Maps depend on the page
not reloading when setting location.hash for a good user experience.

I'm not aware of any standard guaranteeing there is no page reload
when setting location.hash. Will HTML5 make such a guarantee? I cannot
find anything in the spec.

Thanks,
Peter


Re: [whatwg] setting location.hash property and browser reload

2009-07-08 Thread Ian Hickson
On Wed, 8 Jul 2009, Peter Michaux wrote:

 When setting the location.hash property, some older browsers reloaded
 the entire page. The newest versions of the major browsers do not
 reload the page when setting location.hash. This seems to be de
 facto standard behavior now. Sites like Yahoo! Maps depend on the page
 not reloading when setting location.hash for a good user experience.
 
 I'm not aware of any standard guaranteeing there is no page reload when 
 setting location.hash. Will HTML5 make such a guarantee? I cannot find 
 anything in the spec.

HTML5 requires that there not be a reload. Setting location.hash 
eventually (if you follow the admittedly convoluted definitions) is 
equivalent to running the navigation algorithm:

   http://www.whatwg.org/specs/web-apps/current-work/#navigate

...which, in step 4, just scrolls and aborts the algorithm without 
actually changing the active Document object.
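The effect of that step can be illustrated with a small predicate written in the spirit of the algorithm (not quoting it): a navigation whose target differs from the current address only in the fragment stays in the same Document. The helper name is hypothetical:

```javascript
// Rough illustration of the same-document check behind fragment
// navigation: same origin, path, and query means no reload, just a
// scroll to the new fragment.
function isSameDocumentNavigation(currentUrl, targetUrl) {
  const cur = new URL(currentUrl);
  const tgt = new URL(targetUrl);
  return cur.origin === tgt.origin &&
         cur.pathname === tgt.pathname &&
         cur.search === tgt.search;
}
```
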

HTH,
-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'