Re: [whatwg] Recording interface (Re: Peer-to-peer communication, video conferencing, and related topics (2))

2011-04-09 Thread James Salsman
Sorry for the top posting, but I would like to reiterate my considered
opinion that Speex be supported for recording. It is the standard format
available for recording from Adobe Flash; it is low-bandwidth, open
source and unencumbered, efficient, and it delivers high quality for its
bitrate.
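
For illustration only (the function below is made up and not part of any
draft), the practical requirement is just that a page can ask whether a
recording format such as Speex is available, much as canPlayType() already
works on the playback side:

  // Hypothetical sketch: canRecordType() is not specified anywhere; it simply
  // mirrors HTMLMediaElement.canPlayType() on the recording side.
  declare function canRecordType(mimeType: string): '' | 'maybe' | 'probably';

  const format = canRecordType('audio/speex') !== '' ? 'audio/speex'
               : canRecordType('audio/ogg; codecs=vorbis') !== '' ? 'audio/ogg; codecs=vorbis'
               : null;   // no supported low-bandwidth voice format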


[whatwg] Recording interface (Re: Peer-to-peer communication, video conferencing, and related topics (2))

2011-03-29 Thread Stefan Håkansson LK


> -Original Message-
> From: whatwg-boun...@lists.whatwg.org
> [mailto:whatwg-boun...@lists.whatwg.org] On Behalf Of
> whatwg-requ...@lists.whatwg.org
> Sent: 29 March 2011 20:33
> To: whatwg@lists.whatwg.org
> Subject: whatwg Digest, Vol 84, Issue 69
> >> > I also believe that the recording interface should be removed from
> >> > this part of the specification; there should be no requirement
> >> > that all streams be recordable.
> > Recording of streams is needed for some use cases unrelated to video
> > conferencing, such as recording messages.
> Having a recording function is needed in multiple use cases;
> I think we all agree on that.
> This is mostly a matter of style, which I'm happy to defer on.
> >> > The streams should be regarded as a control surface, not as a data
> >> > channel; in many cases, the question of "what is the format of the
> >> > stream at this point" is literally unanswerable; it may be represented
> >> > as hardware states, memory buffers, byte streams, or something
> >> > completely different.
> > Agreed.
> >
> >> > Recording any of these requires much more specification than just
> >> > "record here".
> > Could you elaborate on what else needs specifying?
> One thing I remember from an API design talk I viewed:
> "An ability to record to a file means that the file format is
> part of your API."
>
> For instance, for audio recording, it's likely that you want
> control over whether the resulting file is in Ogg Vorbis
> format or in MP3 format; for video, it's likely that you may
> want to specify that it will be stored using the VP8 video
> codec, the Vorbis audio codec and the Matroska container
> format. These desires have to be communicated to the
> underlying audio/video engine, so that the proper transforms
> can be inserted into the processing stream, and I think they
> have to be communicated across this interface; since the
> output of these operations is a blob without any inherent
> type information, the caller has to already know which format
> the media is in.
This is absolutely correct, and it is not only about codecs or
container formats. You may also need to supply information such as the
audio sampling rate, the video frame rate and the video resolution.
There was input on this already last November:
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2010-November/029069.html
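
To illustrate (the names below are invented; nothing like this is specified
today), those settings would have to travel across the same interface as the
container/codec choice, for example as one options dictionary:

  // Hypothetical sketch: RecordingOptions and recordStream() are invented
  // names; they only show where capture parameters would have to be expressed.
  interface RecordingOptions {
    type: string;               // container/codecs, e.g. 'video/x-matroska; codecs="vp8, vorbis"'
    audioSampleRate?: number;   // Hz
    videoFrameRate?: number;    // frames per second
    videoWidth?: number;        // pixels
    videoHeight?: number;       // pixels
  }
  declare function recordStream(
    stream: MediaStream,
    options: RecordingOptions
  ): { stop(): Promise<Blob> };

  declare const selfViewStream: MediaStream;   // e.g. obtained from the camera

  const recorder = recordStream(selfViewStream, {
    type: 'video/x-matroska; codecs="vp8, vorbis"',
    audioSampleRate: 44100,
    videoFrameRate: 30,
    videoWidth: 640,
    videoHeight: 480,
  });

Whether such parameters belong in the recording call or on the stream itself
is of course part of what needs specifying.
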
>
> Clearer?
> --
>
> Message: 2
> Date: Tue, 29 Mar 2011 15:27:58 +0200
> From: "Wilhelm Joys Andersen" 
> To: whatwg@lists.whatwg.org
> Subject: [whatwg] <details>, <summary> and styling
> Message-ID: 
> Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes
>
> Hi,
>
> I'm currently writing tests in preparation for Opera's implementation
> of <details> and <summary>. In relation to this, I have a few questions
> about issues that, as far as I can tell, are currently undefined in the
> specification.
>
> The spec says:
>
>"If there is no child summary element [of the details element], the
>user agent should provide its own legend (e.g. "Details")." [1]
>
> How exactly should this legend be provided? Should the user agent add
> an implied <summary> element to the DOM, similar to <tbody>, a
> pseudo-element, or a magic non-element behaving differently from both
> of the above? In the current WebKit implementation[2], the UA-provided
> legend behaves inconsistently with an author-provided <summary>
> in the following ways:
>
>   * Although it can be styled with rules applying to <summary>, it does
>     not respond to :hover or :first-child.
>
>   * With regards to text selection, it behaves more like an <input
>     type='submit'> than a user-provided <summary>. Text within this
>     implied element may only be selected _together_ with the text
>     preceding and following it.
>
>   * A different mouse cursor is used.
>
> This indicates that it is slightly more magic than I would prefer. I
> believe a closer resemblance to an ordinary element would be more
> convenient for authors - a ::summary pseudo element with "Details" as
> its content() might be the cleanest approach, although that would
> require a few more bytes in the author's stylesheet to cater to both
> author- and UA-defined summaries:
>
>    summary, ::summary {
>      color: green;
>    }
>
> Furthermore, the rendering spec says:
>
>"The first container is expected to contain at least one line box,
>and that line box is expected to contain a disclosure
> widget (typically
>a triangle), horizontally positioned within the left padding of the
>details element." [3]
>
> For user agents aiming to support the suggested default rendering, how
> should the disclosure widget be embedded? Ideally, graphical browsers
> should all do this in a similar manner, and in a way that allows authors
> to style these elements to the same extent as any other element.
>
> There are several options:
>
>   * A ::marker pseudo element[4].
>   * A default, non-repeating background image positioned within
>     the recommended 40 pixe

[whatwg] Recording interface (Re: Peer-to-peer communication, video conferencing, and related topics (2))

2011-03-29 Thread Harald Alvestrand

> >  I also believe that the recording interface should be removed from this
> >  part of the specification; there should be no requirement that all
> >  streams be recordable.

> Recording of streams is needed for some use cases unrelated to video
> conferencing, such as recording messages.

Having a recording function is needed in multiple use cases; I think we
all agree on that.

This is mostly a matter of style, which I'm happy to defer on.

> >  The streams should be regarded as a control surface, not as a data channel;
> >  in many cases, the question of "what is the format of the stream at this point"
> >  is literally unanswerable; it may be represented as hardware states, memory
> >  buffers, byte streams, or something completely different.

> Agreed.



> >  Recording any of these requires much more specification than just
> >  "record here".

> Could you elaborate on what else needs specifying?

One thing I remember from an API design talk I viewed:
"An ability to record to a file means that the file format is part of 
your API."


For instance, for audio recording, it's likely that you want control 
over whether the resulting file is in Ogg Vorbis format or in MP3 
format; for video, it's likely that you may want to specify that it will 
be stored using the VP8 video codec, the Vorbis audio codec and the 
Matroska container format. These desires have to be communicated to the 
underlying audio/video engine,  so that the proper transforms can be 
inserted into the processing stream, and I think they have to be 
communicated across this interface; since the output of these operations 
is a blob without any inherent type information, the caller has to 
already know which format the media is in.
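
As a sketch only (recordStream() below is an invented name, not a proposal
for concrete IDL), the format request has to cross the interface on the way
in, and the caller has to keep track of it on the way out, because the
resulting blob will not say what it is:

  // Hypothetical sketch: the chosen container/codecs are passed in, and the
  // caller pairs the returned (type-less) blob with the type it asked for.
  declare function recordStream(
    stream: MediaStream,
    options: { type: string }   // e.g. 'audio/ogg; codecs=vorbis', 'audio/mpeg',
                                // or 'video/x-matroska; codecs="vp8, vorbis"'
  ): { stop(): Promise<Blob> };

  declare const conferenceStream: MediaStream;   // obtained elsewhere

  const requestedType = 'video/x-matroska; codecs="vp8, vorbis"';
  const recorder = recordStream(conferenceStream, { type: requestedType });
  // ... some time later ...
  recorder.stop().then((data) => {
    // The blob itself carries no type information, so attach the type the
    // caller already knows before storing or uploading it.
    const typedBlob = new Blob([data], { type: requestedType });
    return typedBlob;
  });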


Clearer?