Re: [whatwg] Peer-to-peer use case (was Peer-to-peer communication, video conferencing, device, and related topics)

2011-03-25 Thread Per-Erik Brodin

On 2011-03-22 11:01, Stefan Håkansson LK wrote:

On 2011-03-18 05:45, Ian Hickson wrote:


All of this except selectively muting audio vs video is currently
possible in the proposed API.

The simplest way to make selective muting possible too would be to change
how the pause/resume thing works in GeneratedStream, so that instead of
pause() and resume(), we have individual controls for audio and video.
Something like:

void muteAudio();
void resumeAudio();
readonly attribute boolean audioMuted;
void muteVideo();
void resumeVideo();
readonly attribute boolean videoMuted;

Alternatively, we could just have mutable attributes:

attribute boolean audioEnabled;
attribute boolean videoEnabled;

Any opinions on this?

We're looking into this and will produce a more elaborate input related to this.
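
For comparison, at the call site the two alternatives would read roughly as
follows (a sketch only; neither variant is in the spec yet):

   // Method style:
   stream.muteAudio();           // stop sending audio
   stream.resumeAudio();         // start sending again
   // Attribute style:
   stream.audioEnabled = false;  // stop sending audio
   stream.audioEnabled = true;   // start sending again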



Basically we would like to be able to address the Stream components 
individually, and not limit a Stream to at most one audio and one video 
component. That way we could activate/deactivate components individually 
and also split out components and combine components from different 
Stream objects into a new Stream object.
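
As a rough sketch of what we have in mind (all names below are illustrative 
only, not a concrete proposal):

   // Hypothetical: a Stream exposes arrays of its components, and each
   // component can be enabled/disabled on its own.
   localStream.audioComponents[0].enabled = false; // mute the mic only

   // Splitting components out into a new Stream:
   var videoOnly = new Stream(localStream.videoComponents);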


One good use case is the multi-party video conference where you would 
like to record the audio from all participants using a StreamRecorder. 
This would be done by taking the audio component from the local 
GeneratedStream and combining it with the audio components from the 
remote streams to form a new Stream object which can then be recorded.
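
In terms of the hypothetical component API sketched above, that could look 
like:

   // Collect the audio components from the local GeneratedStream and
   // from all remote streams into one new Stream, and record that.
   var components = localStream.audioComponents.concat(
       remote1.audioComponents, remote2.audioComponents);
   var mix = new Stream(components);
   var recorder = mix.record(); // record() per the current draft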


This could also be a way to handle multiple cameras, such as the front 
and back cameras of mobile devices, which were mentioned in another 
thread. When playing a Stream containing several video components, the 
first active component (if any) would be shown. Active audio components 
would be mixed.
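
Switching between a front and a back camera could then amount to something 
like this (again with illustrative names only):

   // Toggle which of two video components is active; the first active
   // component is the one that gets displayed.
   function switchCamera(stream) {
     var front = stream.videoComponents[0];
     var back = stream.videoComponents[1];
     front.enabled = !front.enabled;
     back.enabled = !front.enabled;
   }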


//Per-Erik




Re: [whatwg] Peer-to-peer use case (was Peer-to-peer communication, video conferencing, device, and related topics)

2011-03-22 Thread Stefan Håkansson LK
Some feedback below. (Stuff where I agree and there is no question has been 
left out.)

 
 On Mon, 31 Jan 2011, Stefan Håkansson LK wrote this use case:
 

We've since produced an updated use case doc: 
http://www.ietf.org/id/draft-holmberg-rtcweb-ucreqs-01.txt

...

  The web author developing the application has decided to display a
  self-view as well as the video from the remote side in rather small
  windows, but the user can change the display size during the session.
  The application also supports the case where a participant (for a
  longer or shorter time) would like to stop sending audio (but keep
  video) or video (but keep audio) to the other peer (mute).
...
 
 All of this except selectively muting audio vs video is currently 
 possible in the proposed API.
 
 The simplest way to make selective muting possible too would be to change
 how the pause/resume thing works in GeneratedStream, so that instead of
 pause() and resume(), we have individual controls for audio and video.
 Something like:
 
void muteAudio();
void resumeAudio();
readonly attribute boolean audioMuted;
void muteVideo();
void resumeVideo();
readonly attribute boolean videoMuted;
 
 Alternatively, we could just have mutable attributes:
 
attribute boolean audioEnabled;
attribute boolean videoEnabled;
 
 Any opinions on this?
We're looking into this and will produce a more elaborate input related to this.

...

   !The web application must be able to   !If the video is going to be       !
   !define the media format to be used for!displayed in a large window, use  !
   !the streams sent to a peer.           !higher bit-rate/resolution. Should!
   !                                      !media settings be allowed to be   !
   !                                      !changed during a session (at e.g. !
   !                                      !window resize)?                   !
 
 Shouldn't this be automatic and renegotiated dynamically via SDP 
 offer/answer?
Yes, this should be (re)negotiated via SDP, but what is unclear is how the SDP 
is populated based on the application's preferences.
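
For example, a simple bandwidth preference from the application would 
presumably surface as a "b=" line on the media description in the offer, 
something like (sketch only; codec and port are arbitrary):

   m=video 49170 RTP/AVP 96
   b=AS:512
   a=rtpmap:96 VP8/90000

but it is unclear through which API the application would express the 
preference that leads to that line.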

...

   !Streams being transmitted must be!Do not starve other traffic (e.g. on!
   !subject to rate control          !ADSL link)                           !
 
 Not sure whether this requires anything special. Could you elaborate?
What I am after is that the RTP/UDP streams sent from one UA to the other must 
have some rate adaptation implemented. HTTP uses TCP transport, and TCP reduces 
the send rate when a packet does not arrive (so that flows share the available 
throughput in a fair way when there is a bottleneck). For UDP there is no such 
mechanism, so unless something is added in the RTP implementation it could 
starve other traffic. I don't think it should be visible in the API, though; it 
is a requirement on the implementation in the UA.
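
One standardized approach is TFRC (RFC 5348), where the sender estimates the 
loss event rate p and round-trip time R and caps its sending rate at roughly 
what a TCP flow would get under the same conditions; in simplified form

   X ~ s / (R * sqrt(2p/3))

for segment size s, so the media flow backs off under loss much as TCP does.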

...

 
   !Synchronization between audio and video!                                 !
   !must be supported                      !                                 !
 
 If there's one stream, that's automatic, no?
One audiovisual stream is actually transmitted as two RTP streams (one audio, 
one video). And synchronization at playout is not automatic; it is something 
you do based on RTP timestamps and RTCP stuff. But again, this is a requirement 
on the implementation in the UA, not on the API.
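
To illustrate what the UA does (a sketch of the mechanism, not of any API): 
each RTCP Sender Report pairs an NTP wallclock timestamp with an RTP 
timestamp, which lets the receiver map both RTP streams onto a common clock:

   // Map an RTP timestamp to sender wallclock time, using the
   // (NTP, RTP) pair from the last RTCP Sender Report on that stream.
   // (32-bit RTP timestamp wraparound is ignored for brevity.)
   function rtpToWallclock(rtpTs, sr, clockRate) {
     var elapsedSeconds = (rtpTs - sr.rtpTimestamp) / clockRate;
     return sr.ntpTime + elapsedSeconds;
   }

Audio and video samples whose wallclock times match are then scheduled to 
play out together.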

...

   !The web application must be made aware !To be able to inform user and take!
   !of when streams from a peer are no     !action (one of the peers still has!
   !longer received                        !connection with the server)       !
   !----------------------------------------!-----------------------------------!
   !The browser must detect when no streams!                                   !
   !are received from a peer               !                                   !
 
 These aren't really yet supported in the API, but I intend for us to add
 this kind of thing at the same time as we add similar metrics to video
 and audio. To do this, though, it would really help to have a better
 idea what the requirements are. What information should be available?
 Packets received per second (and sent, maybe) seems like an obvious
 one, but what other information can we collect?
I think more studies are required to answer this one.
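
As a starting point, RTCP receiver reports (RFC 3550) already carry 
per-source reception statistics that look like natural candidates. Purely as 
an illustration (the names are hypothetical):

   // Illustrative snapshot of per-stream metrics derivable from RTCP:
   var stats = {
     packetsReceived: 12345,
     fractionLost: 0.02,        // loss since the last report
     cumulativePacketsLost: 117,
     interarrivalJitter: 9,     // in RTP timestamp units
     roundTripTimeMs: 80        // derived from the LSR/DLSR fields
   };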

//Stefan