On 03/15/2011 10:58 PM, Robert O'Callahan wrote:
Instead of creating a new state-signalling and control API for streams, what
about the alternative approach of letting <video> and <audio> use sensors as
sources, and a way to connect the output of <video> and <audio> to encoders?
Then we'd get all the existing state machinery for free. We'd also get
sensor input for audio processing (e.g. Mozilla or Chrome's audio APIs),
in-page video preview, using <canvas> to take snapshots, and more...

Rob

Ah, yes, this would be good.


-Olli
