> However, the audio source still gets called much more often than the
> "fDurationInMicroseconds" parameter should allow.
Remember, You Have Complete Source Code.
I have told you *repeatedly* what you need to look at to figure out
why your problem is happening. Why do you keep ignoring my advice??
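
To spell out the mechanism once more: the pacing comes from the
"fDurationInMicroseconds" member that your "FramedSource" subclass is
expected to set on each delivery.  "FramedSource::getNextFrame()" resets
it to 0 before calling your "doGetNextFrame()", and a duration of 0
tells the downstream sink to request the next frame immediately.  A
minimal sketch - the class and helper names here are made up; only the
"f..." members and "afterGetting()" come from the library:

#include "FramedSource.hh"
#include "GroupsockHelper.hh" // for gettimeofday()

class MyPCMAudioSource: public FramedSource {
public:
  MyPCMAudioSource(UsageEnvironment& env,
                   unsigned samplesPerFrame, unsigned samplingFrequency)
    : FramedSource(env),
      fSamplesPerFrame(samplesPerFrame),
      fSamplingFrequency(samplingFrequency) {}

private:
  virtual void doGetNextFrame() {
    // Copy one frame of audio into the buffer supplied by the sink:
    fFrameSize = readFrameFromDevice(fTo, fMaxSize); // made-up helper
    gettimeofday(&fPresentationTime, NULL);

    // The crucial line: how much media time this frame represents.
    // Left at 0, the sink asks for the next frame right away:
    fDurationInMicroseconds
      = (fSamplesPerFrame*1000000)/fSamplingFrequency;

    FramedSource::afterGetting(this); // deliver to the downstream object
  }

  unsigned readFrameFromDevice(unsigned char* to, unsigned maxSize);
  unsigned fSamplesPerFrame, fSamplingFrequency;
};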
> Two more questions:
>
> 1. If I encode discrete video frames using H.264 (say, using ffmpeg),
> should I feed them directly to "H264VideoRTPSink", or should I use a
> "H264VideoStreamFramer" in between?
No, you must use (your own subclass of) "H264VideoStreamFramer" in
between your H.264 video source object and your "H264VideoRTPSink".
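
To make that concrete, a wiring sketch - "env", "rtpGroupsock",
"afterPlaying", and "MyH264EncoderSource" are placeholders you'd supply
yourself.  (Current versions of the library also ship
"H264VideoStreamDiscreteFramer", a ready-made "H264VideoStreamFramer"
subclass for sources that deliver one NAL unit at a time; note that each
delivered NAL unit must *not* begin with a 0x00000001 start code.)

FramedSource* encoderSource = MyH264EncoderSource::createNew(*env);

// The framer sits between the encoder and the RTP sink:
H264VideoStreamDiscreteFramer* framer
  = H264VideoStreamDiscreteFramer::createNew(*env, encoderSource);

RTPSink* videoSink
  = H264VideoRTPSink::createNew(*env, &rtpGroupsock,
                                96 /* dynamic RTP payload type */);

videoSink->startPlaying(*framer, afterPlaying, NULL);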
> 2. If I use a "proprietary" discrete video frame encoder and decoder,
> what is the best way to use the live libraries to stream and receive?
> I am planning to use a derived "FileSink" class at the client end to
> receive and decode the elementary stream, but I am not sure about the
> server side.
If by "proprietary" you mean a proprietary codec (rather than just a
proprietary implementation of a public codec), then the answer is
that you can't - unless a RTP payload format has been defined for
this codec.
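
If a payload format has been defined - including a private one that you
implement at both ends - the generic "SimpleRTPSink" can packetize
opaque frames under a dynamic payload type.  A sketch; "env",
"rtpGroupsock", "myEncoderSource", "afterPlaying", and the "X-MY-CODEC"
name are all placeholders:

RTPSink* sink
  = SimpleRTPSink::createNew(*env, &rtpGroupsock,
      96,            // dynamic RTP payload type (96-127)
      90000,         // RTP timestamp frequency; 90 kHz is usual for video
      "video",       // SDP media type
      "X-MY-CODEC"); // payload format name, for the SDP "a=rtpmap:" line

sink->startPlaying(*myEncoderSource, afterPlaying, NULL);

The receiving end would then use a matching "SimpleRTPSource" to feed
your derived "FileSink".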
--
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/