Hi everyone, I managed to enable H.264 encoding/decoding in Ekiga (in fact, in Opal) at higher resolutions. It is running stably at a resolution of 960x720 px, at least. The main issue I am facing at the moment concerns hardware support, since there are a few constraints that have to be overcome in order to obtain a good HD videoconferencing experience:
1) HD camera: According to my research, it is still extremely difficult (impossible?) to find webcams that offer a high resolution (at least 960x720) combined with an acceptable frame rate (24 fps or above). The camera I am currently using, which is among the best webcams I could find on the market (Logitech Sphere), offers only 15 fps at 960x720 and becomes almost unusable at higher resolutions. A few solutions came to my mind to overcome this capturing constraint:

a) Wait for better cameras that are about to enter the market, like the Tandberg PrecisionHD (30 fps @ 720p):
http://www.tandberg.com/our_story/microsoft-precisionhd-camera.jsp
Another, considerably more expensive camera is the Sony XCD-SX90CR 1394b SXGA, which is already available.

b) Use an HDMI capture board to grab the raw video output of an HD camcorder. The only HDMI capture board I could find is the BlackMagic Design Intensity. Unfortunately, no Linux drivers are available for this card at the moment...

2) CPU: H.264 is a codec that offers good compression rates while producing good-quality output. The downside is that H.264 encoding requires a lot of CPU time, which becomes a critical bottleneck when encoding at high resolutions. Options to reduce the CPU load would be:

a) Performing the decoding, or even (part of) the encoding, on the GPU. NVIDIA GeForce graphics cards that support the PureVideo HD and CUDA technologies could be used for this purpose. NVIDIA already provides an API to support this under Linux, VDPAU (Video Decode and Presentation API for Unix):
http://www.nabble.com/New-Video-Decode-and-Presentation-API-td20506048.html

b) Using video codecs that are less CPU-hungry than H.264. I noticed, for example, that Theora, which produces slightly poorer video output than H.264, consumes considerably less CPU.
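For what it's worth, the H.264-vs-Theora CPU comparison above can be reproduced with a quick GStreamer test. This is only a sketch: it assumes a GStreamer 0.10 installation with the v4l2src, x264enc and theoraenc elements available, and the device path and caps are examples that need adjusting to your own camera.

```shell
# Encode from the webcam at 960x720 with H.264 and discard the output;
# watch the process in top/htop to gauge encoder CPU load.
# (Sketch only: assumes GStreamer 0.10 with the x264 and theora plugins;
# /dev/video0 and the caps below are placeholders for your camera.)
gst-launch-0.10 v4l2src device=/dev/video0 \
    ! video/x-raw-yuv,width=960,height=720,framerate=15/1 \
    ! ffmpegcolorspace ! x264enc ! fakesink

# Same pipeline with Theora instead of H.264, for comparison:
gst-launch-0.10 v4l2src device=/dev/video0 \
    ! video/x-raw-yuv,width=960,height=720,framerate=15/1 \
    ! ffmpegcolorspace ! theoraenc ! fakesink
```

Running each pipeline for a minute and comparing the CPU percentage in top gives a rough but practical measure of the difference.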
I have also searched for low-latency codecs and came across this one: Dirac Pro (http://www.bbc.co.uk/rd/projects/dirac/diracpro.shtml). It seems that Dirac has already been added to the ffmpeg and GStreamer frameworks.

c) Encoding the video stream in-camera. Nowadays, HDV or AVCHD camcorders can output Full-HD MPEG-2 or H.264 streams that can be accessed via the FireWire port. The main issue would be accessing, or simply forwarding, the MPEG-2 transport stream from Ekiga in the case of an HDV camcorder. Audio/video synchronization might also be an issue, since I want to use a CELT-encoded audio stream.

I tend to favor the last-mentioned solution path, but would be glad to hear other people's opinions or input.

Thomas

@Eugen: Yes, I will for sure contribute the changes I made back to the Ekiga/Opal community as soon as I feel that they are mature.

Eugen Dedu wrote:
> Hi,
>
> I see that nobody answered :o( Please repost your message upstream, to
> the ptlib/opal mailing list (opalvoip-u...@lists.sourceforge.net). Send
> the modifications you have done for Theora there as well.
>
> Cheers,
> Eugen

--
View this message in context: http://www.nabble.com/High-resolutions-with-H.264-tp22863659p23175784.html
Sent from the Ekiga Dev mailing list archive at Nabble.com.

_______________________________________________
Ekiga-devel-list mailing list
Ekiga-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/ekiga-devel-list