Hi all,

Just noticed that Emil (main developer of Jitsi) is going to give a lecture at the upcoming FOSDEM. Keynote:
<snip>
About a year ago the Jitsi project developers started work on support for video conference calls. We had had audio conferencing for a while at that point, and we were using it regularly in our dev meetings. Video seemed like a logical next step, so we rolled up our sleeves and got to work.

The first choice we needed to make was how to handle video distribution. The approach we had been using for audio was for one of the participating Jitsi instances to mix all flows. That's easy to do for audio streams, and any recent machine can easily mix a call with six or more participants. Video, however, was a different story. Mixing video into composite images is an extremely expensive affair, and one could never achieve it in real time with today's desktop or laptop computers.

We had to choose between an approach where the conference organizer would simply switch to the active speaker, and a solution where a central node would relay all streams to all participants, while every participant keeps sending a single stream. We finally went for the latter, which also seems to be the approach taken by Skype and Google for their respective conferencing services.

We started by implementing all of this in Jitsi, but along the way we also decided to make the RTP relaying part a separate server-side component. This is how Jitsi Videobridge was born: an XMPP server component that focus agents can control via dedicated XMPP IQs.
</snip>

After reading this, my first reaction was: great! My second was: bummer, it's just for XMPP. Third: isn't this something that should/could have been done in Asterisk?

Hans.

--
_____________________________________________________________________
-- Bandwidth and Colocation Provided by http://www.api-digital.com --
-- asterisk-video mailing list
To UNSUBSCRIBE or update options visit:
   http://lists.digium.com/mailman/listinfo/asterisk-video
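For anyone who hasn't seen this pattern before: the relay design Emil describes (each participant sends one stream, a central node forwards it to everyone else, and nothing gets mixed or re-encoded on the server) can be sketched in a few lines. This is only an illustrative toy, assuming an in-memory bridge with per-participant inboxes; the class and method names are made up and are not from Jitsi Videobridge.

```python
# Toy sketch of the relay (rather than mixing) approach described above.
# Each participant uploads a single stream; the bridge forwards every
# incoming packet to all OTHER participants, so the server never has to
# decode or composite video. Names here are illustrative, not Jitsi's.

class RelayBridge:
    def __init__(self):
        # participant name -> list of (sender, packet) it has received
        self.inboxes = {}

    def join(self, name):
        self.inboxes[name] = []

    def relay(self, sender, packet):
        # Forward to everyone except the sender: O(n) copies per packet,
        # but zero transcoding work on the bridge.
        for name, inbox in self.inboxes.items():
            if name != sender:
                inbox.append((sender, packet))

bridge = RelayBridge()
for p in ("alice", "bob", "carol"):
    bridge.join(p)
bridge.relay("alice", "frame-1")
```

The contrast with the audio path is that a mixer would instead decode all inbound streams, sum (or composite) them, and re-encode one outbound stream per participant, which is cheap for audio but prohibitively expensive for video.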
