Re: [whatwg] html5 designers

2011-03-22 Thread Régis Kuckaertz
Gustavo,

Read html5doctor.com, by the author of Introducing HTML5 among others.

—Régis

On Mon, Mar 21, 2011 at 6:33 PM, Gustavo Duenas 
gdue...@leftandrightsolutions.com wrote:

 Does anyone know a good email list for html5 designers? I'm rather lost
 here; this one is for programmers.

 gustavo



Re: [whatwg] Peer-to-peer use case (was Peer-to-peer communication, video conferencing, device, and related topics)

2011-03-22 Thread Stefan Håkansson LK
Some feedback below. (Stuff where I agree and there is no question has been
left out.)

 
 On Mon, 31 Jan 2011, Stefan Håkansson LK wrote this use case:
 

We've since produced an updated use case doc: 
http://www.ietf.org/id/draft-holmberg-rtcweb-ucreqs-01.txt

...

  The web author developing the application has decided to display a
  self-view as well as the video from the remote side in rather small
  windows, but the user can change the display size during the session.
  The application also supports if a participant (for a longer or shorter
  time) would like to stop sending audio (but keep video) or video (keep
  audio) to the other peer (mute).
...
 
 All of this except selectively muting audio vs video is currently 
 possible in the proposed API.
 
 The simplest way to make selective muting possible too would be to change
 how the pause/resume thing works in GeneratedStream, so that instead of
 pause() and resume(), we have individual controls for audio and video.
 Something like:
 
void muteAudio();
void resumeAudio();
readonly attribute boolean audioMuted;
void muteVideo();
void resumeVideo();
readonly attribute boolean videoMuted;
 
 Alternatively, we could just have mutable attributes:
 
attribute boolean audioEnabled;
attribute boolean videoEnabled;
 
 Any opinions on this?
We're looking into this and will produce a more elaborate input related to this.
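
For illustration, a minimal sketch of how the attribute-based alternative could
be driven from script (assuming a GeneratedStream named "local" with the
mutable audioEnabled/videoEnabled attributes sketched above; none of these
names are final):

  // Selectively mute audio while keeping video, or vice versa, per the
  // use case above. "local" is assumed to be a GeneratedStream.
  function setMute(local, muteAudio, muteVideo) {
    local.audioEnabled = !muteAudio;
    local.videoEnabled = !muteVideo;
  }

  // e.g. stop sending audio but keep sending video:
  // setMute(local, true, false);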

...

  !The web application must be able to   !If the video is going to be displayed !
  !define the media format to be used for!in a large window, use higher bit-    !
  !the streams sent to a peer.           !rate/resolution. Should media settings!
  !                                      !be allowed to be changed during a     !
  !                                      !session (at e.g. window resize)?      !
 
 Shouldn't this be automatic and renegotiated dynamically via SDP 
 offer/answer?
Yes, this should be (re)negotiated via SDP, but what is unclear is how the SDP 
is populated based on the application's preferences.

...

  !Streams being transmitted must be     !Do not starve other traffic (e.g. on  !
  !subject to rate control               !ADSL link)                            !
 
 Not sure whether this requires anything special. Could you elaborate?
What I am after is that the RTP/UDP streams sent from one UA to the other must 
have some rate adaptation implemented. HTTP uses TCP transport, and TCP reduces 
the send rate when a packet does not arrive (so that flows share the available 
throughput in a fair way when there is a bottleneck). For UDP there is no such 
mechanism, so unless something is added in the RTP implementation it could 
starve other traffic. I don't think it should be visible in the API though; it 
is a requirement on the implementation in the UA.
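
To illustrate the kind of mechanism meant (a toy sketch only, not part of any
spec or existing implementation): an AIMD-style adjustment driven by loss
feedback, so that the media flow backs off under congestion roughly the way
TCP does.

  // lossFraction would come from RTCP receiver reports (or similar).
  function adaptRate(currentKbps, lossFraction) {
    var MIN_KBPS = 64, MAX_KBPS = 2000;
    if (lossFraction > 0.02) {
      return Math.max(MIN_KBPS, currentKbps * 0.85); // multiplicative decrease
    }
    return Math.min(MAX_KBPS, currentKbps + 32);     // additive increase
  }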

...

 
  !Synchronization between audio and video!                                      !
  !must be supported                      !                                      !
 
 If there's one stream, that's automatic, no?
One audiovisual stream is actually transmitted as two RTP streams (one audio, 
one video). And synchronization at playout is not automatic; it is something 
you do based on RTP timestamps and RTCP stuff. But again, this is a requirement 
on the implementation in the UA, not on the API.
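
As an illustration of what "based on RTP timestamps and RTCP stuff" involves (a
minimal sketch; the field names are assumptions): each RTCP Sender Report ties
a stream's RTP timestamp to an NTP wall-clock time, and mapping both streams
onto that common clock is what lets the player schedule audio and video for the
same playout instant.

  // sr = { ntpSeconds, rtpTimestamp, clockRate } from the latest Sender
  // Report (clockRate e.g. 90000 for video, 48000 for audio).
  function rtpToWallClock(rtpTimestamp, sr) {
    // Unsigned 32-bit difference handles RTP timestamp wrap-around.
    var delta = ((rtpTimestamp - sr.rtpTimestamp) >>> 0) / sr.clockRate;
    return sr.ntpSeconds + delta;
  }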

...

  !The web application must be made aware !To be able to inform user and take    !
  !of when streams from a peer are no     !action (one of the peers still has    !
  !longer received                        !connection with the server)           !
  -----------------------------------------------------------------------------------
  !The browser must detect when no streams!                                       !
  !are received from a peer               !                                       !
 
 These aren't really yet supported in the API, but I intend for us to add
 this kind of thing at the same time as we add similar metrics to video
 and audio. To do this, though, it would really help to have a better
 idea what the requirements are. What information should be available?
 Packets received per second (and sent, maybe) seems like an obvious one,
 but what other information can we collect?
I think more studies are required to answer this one.
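
For what it's worth, a purely hypothetical sketch of the sort of per-stream
statistics object being discussed (none of these names exist in the spec):

  var exampleStats = {
    packetsReceivedPerSecond: 48,
    packetsSentPerSecond: 50,
    bytesReceivedPerSecond: 62000,
    packetLossFraction: 0.01,  // e.g. derived from RTCP receiver reports
    roundTripTimeMs: 80,
    jitterMs: 12
  };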

//Stefan

Re: [whatwg] Peer-to-peer communication, video conferencing, device, and related topics

2011-03-22 Thread Harald Alvestrand

Up front statement, orthogonal to the details of the specification:

I've discussed this interface somewhat with Ian before in private, and 
don't agree with his approach on several points - both technical and 
organizational.


I also don't believe that quick iteration and rapid prototyping is best 
served by putting this spec inside the HTML5 specification, and have 
therefore been working on an independent specification document. 
Unfortunately my skills at writing HTML-type specs are nowhere near 
Ian's, so it's taken much more time than desirable to get the proposal 
I'm writing up into a shape where I dare show it in public without 
feeling embarrassed (weakening my own argument somewhat). Still, I'm 
hoping to have it available in a matter of days.


I also don't believe that having all the discussions related to HTML5 on 
a single mailing list is an optimal approach, and will therefore be 
suggesting another mailing list for the public discussion of that 
specification. I haven't figured out which one yet.


Now on to details

On 03/18/11 05:45, Ian Hickson wrote:

When replying to this e-mail please only quote the parts to which you are
responding, and adjust the subject line accordingly.

This e-mail is a reply to about a year's worth of feedback collected on
the topics of peer-to-peer communication, video conferencing, the <device>
element, and related topics. This feedback was used to update the spec
recently, greatly expanding on the placeholder that had previously
sketched a proposal for how these features should work. (This e-mail does
not include replies to most of the feedback received after the change to
the spec. I'll be replying to the bulk of this more recent feedback in a
separate e-mail soonish.)

Here is a high-level overview of the changes; for specific rationales,
please see the detailed responses to the e-mails below.

  * <device> has been replaced with a Geolocation-style API for requesting
user access to local media devices (such as cameras).

I have no issue with these.

  * locally-generated streams can be paused and resumed.
I believe this property should be moved up to the stream level (which 
I prefer to call StreamSource, because I think we also need an 
interface named StreamSink).


I also believe that the recording interface should be removed from this 
part of the specification; there should be no requirement that all 
streams be recordable.


The streams should be regarded as a control surface, not as a data 
channel; in many cases, the question of "what is the format of the stream 
at this point" is literally unanswerable; it may be represented as 
hardware states, memory buffers, byte streams, or something completely 
different. Recording any of these requires much more specification than 
just "record here".

  * the ConnectionPeer interface has been replaced with a PeerConnection
interface that interacts directly with ICE and its dependencies.
I disagree with a number of aspects of this interface. In particular, I 
believe the relationship between SDP and ICE is fundamentally misstated; 
it is possible, and often desirable, to use ICE without using SDP; there 
are other ways of encoding the information we need to pass.


In the RTCWEB IETF effort, the idea of mandating use of SDP is being 
pushed back on.


I also believe the configuration string format is too simplistic and 
contains errors; at the very least, we need a keyword:value format 
(JSON?) so that we can extend the configuration string without breaking 
existing scripts, and the STUN/TURN strings are incompletely defined 
(you can't specify that you're using TURN over TCP, for instance).
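
For example, a keyword:value configuration along these lines (a hypothetical
shape only, meant to show how TURN-over-TCP and future options could be added
without breaking existing scripts):

  var config = {
    stun: [ { host: "stun.example.net", port: 3478 } ],
    turn: [ { host: "turn.example.net", port: 3478, transport: "tcp" } ]
    // unknown future keys could simply be ignored by older implementations
  };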

  * PeerConnection has been streamlined (compared to ConnectionPeer), e.g.
there is no longer a feature for direct file transfer or for reliable
text messaging.
This is good. There was no backing specification for the corresponding 
wire formats.

  * the wire format for the unreliable data channel has been specified.
I agree that before this functionality is implementable, we need a 
specification for its format. However, I don't believe the current 
specification is reasonable; it has complexities (such as masking) that 
don't correspond to a known threat model (given the permission-to-send 
model of ICE, the idea of cross-channel attacks using an ICE channel is 
irrelevant).

  * the spec has been brought up to date with recent developments in other
Web specs such as File API and WebIDL.

Good.



[whatwg] Video and Audio Tracks API

2011-03-22 Thread Lachlan Hunt

Hi,
  This is regarding the recently added audioTracks and videoTracks APIs 
to the HTMLMediaElement.


The design of these APIs seems to be done a little strangely, in that 
dealing with each track is done by passing an index to each method on 
the TrackList interfaces, rather than treating the audioTracks and 
videoTracks as collections of individual audio/video track objects. 
This design is inconsistent with the design of the TextTrack interface, 
and seems sub-optimal.


The use of ExclusiveTrackList for videoTracks also seems rather 
limiting. What about cases where the second video track is a 
sign-language track, or some other video overlay?  This is a use case 
that you seem to be trying to address with the mediaGroup feature, even 
though the example given actually includes all tracks in the same file. 
The example from the spec is:


<video src="movie.vid#track=Video&amp;track=English" autoplay controls
 mediagroup="movie"></video>

<video src="movie.vid#track=sign" autoplay mediagroup="movie"></video>

Normally, sign language tracks I've seen broadcast on TV programs 
display the sign language interpreter in a small box in the bottom corner.


Other use cases include PiP features, such as director commentary or 
storyboards as available on some Blu-ray and DVDs [1].  So in cases 
where both tracks are included in the same file, having the ability to 
selectively enable multiple video tracks would seem easier to do than 
synchronising separate video files.


There are also use cases for controlling the volume of individual tracks, 
which are not addressed by the current spec design.


I believe the design would work better like this:

---

interface HTMLMediaElement : HTMLElement {
  ...
  readonly attribute AudioTrack[] audioTracks;
  readonly attribute VideoTrack[] videoTracks;
}

interface MediaTrack {
  readonly attribute DOMString label;
  readonly attribute DOMString language;

   attribute boolean enabled;
}

interface AudioTrack : MediaTrack {
   attribute double volume;
   attribute boolean muted;
  // Other potential future APIs include bass, treble, channels, etc.
}

interface VideoTrack : MediaTrack {
  // ...
}

---

This proposal replaces TrackList.getName(index) with 
MediaTrack[index].label, and .getLanguage(index) with .language, which 
is more consistent with the design of the TextTrack interface.  The 
isEnabled(), enable(), and disable() functions have also been 
replaced with a single mutable boolean .enabled property.
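
A short usage sketch of the proposed interfaces (illustrative only, since the
proposal above is not in the spec; the track labels are assumptions):

  var video = document.querySelector("video");
  // Enable a second, sign-language video overlay track.
  for (var i = 0; i < video.videoTracks.length; i++) {
    if (video.videoTracks[i].label == "sign") {
      video.videoTracks[i].enabled = true;
    }
  }
  // Duck only the commentary audio track, using the per-track volume.
  for (var j = 0; j < video.audioTracks.length; j++) {
    if (video.audioTracks[j].label == "commentary") {
      video.audioTracks[j].volume = 0.5;
    }
  }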


[1] http://en.wikipedia.org/wiki/Picture-in-picture

--
Lachlan Hunt - Opera Software
http://lachy.id.au/
http://www.opera.com/


Re: [whatwg] Proposal for @label attribute associated with kind=metadata TimedTextTracks

2011-03-22 Thread Eric Winkelman
On Monday, March 21, 2011 11:17 AM, Tab Atkins Jr. 
[mailto:jackalm...@gmail.com] wrote:

  Use Case:
 
  Many video streams contain in-band metadata for application signaling,
 and other uses.  By using this metadata, a web page can synchronize an
 application with the delivered video, or provide other synchronized services.
 
  An example of this type of metadata is EISS (
 http://www.cablelabs.com/specifications/OC-SP-ETV-AM1.0-I06-110128.pdf
 ) which is used to control applications that are synchronized with a 
 television
 broadcast.
 
  In general, a media stream can be expected to carry several types of
 metadata and the types of metadata may vary in time.
 
  Problem:
 
  For in-band metadata tracks, there is neither a standard way to represent
 the type of metadata in the HTMLTrackElement interface nor is there a
 standard way to represent multiple different types of metadata tracks.
 
  Proposal:
 
  For TimedTextTracks with kind=metadata the @label attribute should
 contain a MIME type for the metadata, and a track should only contain Cues
 created from metadata of that MIME type.
 
  This implies that streams with multiple types of metadata require the
 creation of multiple metadata track objects, one for each MIME type.
 
 
  I don't understand. Are you saying that right now all tracks that are
  of kind=metadata are made available through a single TextTrack? Cause
  I don't think that's the case.
 
  Or are you worried about text track files that contain more than one
  type of metadata? If the latter, then how is the browser to know how
  to separate out the individual cues from a single track into
  multiple tracks?
 
  Can you clarify?
 
 I'm also somewhat confused.  The OP mentions in-band metadata, but then
 proposes adding something to out-of-band track kind=metadata
 elements.

I'm not proposing adding anything to out-of-band track kind=metadata 
elements.  In-band metadata tracks are added to the DOM by the media player, 
and have the same @label attribute that out-of-band tracks do.  I'm suggesting 
a use for that @label attribute that solves a problem I've encountered using 
metadata tracks.
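
To make that concrete, a sketch of how a page might pick out an in-band track
by a MIME-typed @label (illustrative only; "application/x-eiss" is just an
assumed type string, and handleEissCue stands in for application code):

  function handleEissCue(text) { /* application-specific parsing */ }

  var video = document.querySelector("video");
  for (var i = 0; i < video.textTracks.length; i++) {
    var track = video.textTracks[i];
    // Only attach to the metadata track whose label names a format we know.
    if (track.kind == "metadata" && track.label == "application/x-eiss") {
      track.oncuechange = function () {
        for (var j = 0; j < this.activeCues.length; j++) {
          handleEissCue(this.activeCues[j].text);
        }
      };
    }
  }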

 I'm not familiar enough with in-band metadata tracks to know if it would be
 useful to expose additional information about them, but for out-of-band
 tracks I suspect that any information you may need is application-specific,
 and thus can be served with a data-* attribute.

I agree, there are a number of solutions for out-of-band metadata tracks, but  
my concern is specifically in-band metadata tracks.  

If an in-band kind=metadata track appears, what kind of information does it 
contain?  Can you tell by looking at the DOM?  Can you tell by looking at the 
cue's text?

Eric


Re: [whatwg] Proposal for @label attribute associated with kind=metadata TimedTextTracks

2011-03-22 Thread Tab Atkins Jr.
On Tue, Mar 22, 2011 at 9:40 AM, Eric Winkelman
e.winkel...@cablelabs.com wrote:
 On Monday, March 21, 2011 11:17 AM, Tab Atkins Jr. 
 [mailto:jackalm...@gmail.com] wrote:
 I'm also somewhat confused.  The OP mentions in-band metadata, but then
 proposes adding something to out-of-band track kind=metadata
 elements.

 I'm not proposing adding anything to out-of-band track kind=metadata 
 elements.  In-band metadata tracks are added to the DOM by the media player, 
 and have the same @label attribute that out-of-band tracks do.  I'm 
 suggesting a use for that @label attribute that solves a problem I've 
 encountered using metadata tracks.

Ah, now I understand.  You're referring to this as a @label
attribute.  Attributes only exist on elements, which is why I thought
you had suddenly switched to talking about out-of-band track
elements.  The term you want is "property", referring to properties on
the JavaScript objects. ^_^


 I'm not familiar enough with in-band metadata tracks to know if it would be
 useful to expose additional information about them, but for out-of-band
 tracks I suspect that any information you may need is application-specific,
 and thus can be served with a data-* attribute.

 I agree, there are a number of solutions for out-of-band metadata tracks, but 
  my concern is specifically in-band metadata tracks.

 If an in-band kind=metadata track appears, what kind of information does it 
 contain?  Can you tell by looking at the DOM?  Can you tell by looking at the 
 cue's text?

What kind of data *is* carried by in-band metadata tracks?

~TJ


Re: [whatwg] Ongoing work on an editing commands (execCommand()) specification

2011-03-22 Thread Ehsan Akhgari
- Original Message -
 From: Robert O'Callahan rob...@ocallahan.org
 To: Ehsan Akhgari eh...@mozilla.com
 Cc: Aryeh Gregor simetrical+...@gmail.com, whatwg 
 whatwg@lists.whatwg.org, Ryosuke Niwa rn...@webkit.org,
 Ehsan Akhgari ehsan.akhg...@gmail.com, Hallvord R. M. Steen 
 hallv...@opera.com
 Sent: Monday, March 21, 2011 11:48:41 PM
 Subject: Re: [whatwg] Ongoing work on an editing commands (execCommand()) 
 specification
 On Tue, Mar 22, 2011 at 12:55 PM, Ehsan Akhgari  eh...@mozilla.com 
 wrote:
 
 
 
 You're proposing to remove something from Gecko and Webkit which has
 been supported for many years (about 8 years for Gecko). We do not
 have the ability to make sure that nobody is relying on this in any of
 the billions of available web sites. Unless you have a very strong
 argument on why we should remove support for an API as old as this
 one, I'm not sure that we're going to do that, and I think that Webkit
 might have similar constraints as well. So far, the argument that
 you've proposed is extrapolating the assumption that this API doesn't
 have any users from three implementations which use the editing APIs.
 I'm afraid you should have a _much_ larger sample if you want to draw
 this conclusion.
 
 I would personally very much like to get one of the two modes killed
 in favor of the other, since that means an easier spec to implement,
 less code to maintain, and easier life for me. But I think we should
 carefully think about what this would mean for potential users who 
 are using the CSS mode in their applications.
 
 
 We can deprecate the CSS mode and leave it unspecified, without
 removing it from Webkit and Gecko. That won't hurt interop since
 anyone using it is probably UA-sniffing already.
 
 If sometime in the future we decide that a CSS mode is worth having,
 then someone can start writing a spec for it then.

Yes, that would make sense, I think.

--
Ehsan Akhgari
eh...@mozilla.com
http://ehsanakhgari.org/


Re: [whatwg] Ongoing work on an editing commands (execCommand()) specification

2011-03-22 Thread Ryosuke Niwa
On Mon, Mar 21, 2011 at 8:48 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 We can deprecate the CSS mode and leave it unspecified, without removing it
 from Webkit and Gecko. That won't hurt interop since anyone using it is
 probably UA-sniffing already.

 If sometime in the future we decide that a CSS mode is worth having, then
 someone can start writing a spec for it then.


I support this idea.

- Ryosuke


Re: [whatwg] Proposal for @label attribute associated with kind=metadata TimedTextTracks

2011-03-22 Thread Eric Winkelman
On Tuesday, March 22, 2011 10:47 AM, Tab Atkins Jr. wrote:

 Ah, now I understand.  You're referring to this as a @label attribute.
 Attributes only exist on elements, which is why I thought you had suddenly
 switched to talking about out-of-band track elements.  The term you want
 is "property", referring to properties on the JavaScript objects. ^_^

Ah, yes, now I understand the confusion.  Within the whatwg specs, the word 
"attribute" is generally used, and I was trying to be consistent.  

  If an in-band kind=metadata track appears, what kind of information does
 it contain?  Can you tell by looking at the DOM?  Can you tell by looking at 
 the
 cue's text?
 
 What kind of data *is* carried by in-band metadata tracks?

I'm familiar with metadata tracks containing: content advisories, ad insertion 
triggers, and application signaling; each with their own format.  There is a 
lot of activity in this area, so the types/formats of metadata tracks are likely 
to increase for a while.

- Eric


Re: [whatwg] Ongoing work on an editing commands (execCommand()) specification

2011-03-22 Thread Ryosuke Niwa
On Thu, Mar 17, 2011 at 3:31 PM, Aryeh Gregor simetrical+...@gmail.comwrote:

 I just rewrote the spec, and it's now both shorter and produces better
 results.  For a quick view of the results, as compared to the browser
 you're currently using, you can look here:

 http://aryeh.name/spec/editcommands/autoimplementation.html


Thanks for the rewrite.  New results look much more promising.


 * In one case, WebKit normalizes markup more aggressively than the
 spec does, so it winds up being shorter and still correct, but only
 because the spec ignored ancestors beyond what it had to modify; I'm
 ambivalent about this one


One thing we might want to consider is to merge elements when forcing style
or pushing down style.  For example, if we had <b>hello </b>world and we
bolded "world", I'd expect to get <b>hello world</b> instead of
<b>hello </b><b>world</b>.  While it's not that much of an improvement in this
very simple case, the effect is obvious when it's applied on more complicated
markup.
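
A rough sketch of the kind of merge step meant here (illustrative only, not the
spec's algorithm): after applying a style, adjacent elements with the same tag
can be coalesced.

  // Merge an element into its previous sibling when both are, e.g., <b>.
  function mergeWithPreviousSibling(el) {
    var prev = el.previousSibling;
    if (prev && prev.nodeType == 1 && prev.tagName == el.tagName) {
      while (el.firstChild) {
        prev.appendChild(el.firstChild);
      }
      el.parentNode.removeChild(el);
    }
  }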

I hope this addresses many of Ryosuke's objections to my previous algorithm.


Yes, it addresses most of my current concerns except StyleWithCSS.  I think
we should just obsolete StyleWithCSS and leave it unspecified so that we can
keep it backward compatible.

- Ryosuke


Re: [whatwg] Ongoing work on an editing commands (execCommand()) specification

2011-03-22 Thread Ehsan Akhgari
 One thing we might want to consider is to merge elements when forcing
 style or pushing down style. For example, if we had <b>hello </b>world
 and we bolded "world", I'd expect to get <b>hello world</b> instead of
 <b>hello </b><b>world</b>. While it's not that much of an improvement in
 this very simple case, the effect is obvious when it's applied on more
 complicated markup.

This makes quite a bit of sense.

--
Ehsan Akhgari
eh...@mozilla.com
http://ehsanakhgari.org/


Re: [whatwg] Ongoing work on an editing commands (execCommand()) specification

2011-03-22 Thread Robert O'Callahan
On Wed, Mar 23, 2011 at 12:51 PM, Aryeh Gregor simetrical+...@gmail.comwrote:

 On Mon, Mar 21, 2011 at 11:48 PM, Robert O'Callahan
 rob...@ocallahan.org wrote:
  We can deprecate the CSS mode and leave it unspecified, without removing
 it
  from Webkit and Gecko. That won't hurt interop since anyone using it is
  probably UA-sniffing already.
 
  If sometime in the future we decide that a CSS mode is worth having,
 then
  someone can start writing a spec for it then.

 That seems silly, since it's very simple to spec and implement.  I'll
 just add it to the spec.


So it has valid use cases after all?

I'll spec it instead.  I'm generally not happy with just leaving
 things unspecced if browsers aren't willing to drop support.


I think this is unwise. Given that some browsers are unwilling to drop
support for almost anything, that would mean we need to spec a superset of
every experimental feature those engines add, at least those that are
unprefixed, even if they're barely used on the Web. It's especially
problematic when the same feature is implemented differently in different
browsers. Then you end up speccing a feature for the sake of interop, but
whatever you spec can't give you interop.

IMHO we should spec features if and only if there are use-cases (not
reasonably covered by existing features), or if needed for interop with
existing content.

Rob
-- 
Now the Bereans were of more noble character than the Thessalonians, for
they received the message with great eagerness and examined the Scriptures
every day to see if what Paul said was true. [Acts 17:11]