Hi,

My advice is to buy a more premium capture card for what you are trying to achieve.
Clocks drift over time, and that will be the next issue you run into.

A premium capture card should give you NV12 or YV12 video and raw PCM audio directly, ready for encoding.
Timestamps are provided by the hardware, already in sync, and will not drift.

Try a Datapath VisionLC capture card instead of paying for a developer. Its SDK has an example for saving video files to disk.


On 20/08/2022 21:55, Reynolds Kosloskey wrote:
On 19 Aug 2022 19:22, Reynolds Kosloskey <[email protected]> wrote:

    Hello all!

    I am working on a hobby project that does the following:

      * captures video frames from a capture card
      * captures audio from same capture card
      * decodes audio (video is already raw frame data)
      * encodes that video and audio into a new file

    My problems:

      * Resulting audio and video are heavily out of sync
      * Resulting file is not playable if recording for longer than X
        seconds

    I am willing to pay for assistance in finding out why these
    issues happen, and to help educate me on the fixes in the process.

    If anyone is interested, please reach out to me with rate
    requirements.

On 8/20/2022 2:59 AM, [email protected] wrote:
Hi, is the AV out of sync by a constant or does it drift further over time?

The A/V starts out of sync; I do not know whether it drifts further over time, because so far I can only get short lengths of video to encode into playable files.

What information do you have on your timestamps?
Coming up with a proper methodology for setting timestamps is probably one of the keys to a solution.  It would be easier to see how the timestamps are being handled by looking at the code, which I have attached.
What format is the audio on the wire for you to have to decode it?

The audio on the wire is EAC3.  I do not technically have to decode it; the reason I do is that I did not know how to package the raw stream into the new file.  So I decode it so that I then have raw audio frames to encode.
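For reference, my understanding is that packaging the raw stream without decoding would look roughly like this (only a sketch, not something I have working; the 48 kHz sample rate and 5.1 layout are guesses, and it assumes the capture side delivers one complete EAC3 frame per AVPacket):

#include <libavformat/avformat.h>
#include <libavutil/channel_layout.h>
#include <libavutil/mathematics.h>

/* Declare an EAC3 audio stream that will receive the captured frames as-is. */
static AVStream *add_eac3_stream(AVFormatContext *oc)
{
    AVStream *ast = avformat_new_stream(oc, NULL);
    ast->codecpar->codec_type  = AVMEDIA_TYPE_AUDIO;
    ast->codecpar->codec_id    = AV_CODEC_ID_EAC3;
    ast->codecpar->sample_rate = 48000;                      /* assumption */
    av_channel_layout_default(&ast->codecpar->ch_layout, 6); /* assumption: 5.1 */
    return ast;
}

/* For every complete EAC3 frame taken off the wire (after avformat_write_header). */
static int write_eac3_frame(AVFormatContext *oc, AVStream *ast,
                            AVPacket *pkt, int64_t capture_ms)
{
    pkt->stream_index = ast->index;
    pkt->pts = pkt->dts = av_rescale_q(capture_ms, av_make_q(1, 1000),
                                       ast->time_base);
    return av_interleaved_write_frame(oc, pkt);
}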

---

The program basics:

A Magewell capture card receives audio and video via an HDMI port.  The Magewell SDK exposes both: the video arrives as raw frames, and the audio arrives as a raw stream.

The video presents the easier problem.  I use a time_base of 1/1000 of a second (milliseconds).  I convert each frame to YUV, then encode it with a pts equal to the number of milliseconds since the start of encoding.
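In rough outline, the video path does something like this (a simplified sketch of the idea only, not a literal excerpt from the attachment; oc, vst, venc, yuv and start_ms are placeholder names):

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/time.h>

static void encode_and_mux_video(AVFormatContext *oc, AVStream *vst,
                                 AVCodecContext *venc, AVFrame *yuv,
                                 int64_t start_ms)
{
    /* pts in venc->time_base = 1/1000: milliseconds since the start of encoding */
    yuv->pts = av_gettime() / 1000 - start_ms;

    if (avcodec_send_frame(venc, yuv) < 0)
        return;

    AVPacket *pkt = av_packet_alloc();
    while (avcodec_receive_packet(venc, pkt) >= 0) {
        /* pts/dts are still in venc->time_base; convert them to the stream's
         * time_base before muxing, otherwise players see the wrong timing */
        av_packet_rescale_ts(pkt, venc->time_base, vst->time_base);
        pkt->stream_index = vst->index;
        av_interleaved_write_frame(oc, pkt);
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
}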

With the audio, I feed data to the decoder until it "locks" onto a codec.  Once a codec is detected, I use it to set the output codec's channel count.  I then use *av_packet_rescale_ts* to adjust the timestamps.
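Roughly, the encoder setup and hand-off look like this (again a simplified sketch rather than an excerpt; the AAC output codec here is only a placeholder, and it assumes FFmpeg 5.1+ with the ch_layout API):

#include <libavcodec/avcodec.h>

/* Build the output encoder from whatever the decoder (adec) locked onto. */
static AVCodecContext *make_audio_encoder(const AVCodecContext *adec)
{
    const AVCodec *enc = avcodec_find_encoder(AV_CODEC_ID_AAC); /* placeholder codec */
    if (!enc)
        return NULL;

    AVCodecContext *aenc = avcodec_alloc_context3(enc);
    aenc->sample_rate = adec->sample_rate;               /* follow the input    */
    aenc->sample_fmt  = enc->sample_fmts[0];             /* a format it accepts */
    av_channel_layout_copy(&aenc->ch_layout, &adec->ch_layout);
    aenc->time_base   = av_make_q(1, aenc->sample_rate); /* 1 tick = 1 sample   */

    if (avcodec_open2(aenc, enc, NULL) < 0) {
        avcodec_free_context(&aenc);
        return NULL;
    }
    return aenc;
}

Since the encoder time_base here is 1/sample_rate, one common approach is to derive frame->pts from a running sample count (advancing by nb_samples per frame) rather than a wall clock, and then rescale the resulting packets into the output stream's time_base with av_packet_rescale_ts before writing.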

Because the audio decoder pulls new audio data through callbacks, rather than me feeding it data directly as it becomes available, I use a circular (ring) buffer to hold the data when it arrives, and peel data off that buffer in the decoder callbacks.
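In simplified form, the hand-off is something like this (a sketch using FFmpeg's own AVFifo from 5.1+ rather than my hand-rolled buffer; on_audio_data and read_audio are placeholder names):

#include <pthread.h>
#include <libavutil/fifo.h>
#include <libavutil/macros.h>

static AVFifo *audio_fifo;  /* created with av_fifo_alloc2(4096, 1, AV_FIFO_FLAG_AUTO_GROW) */
static pthread_mutex_t fifo_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  fifo_cond = PTHREAD_COND_INITIALIZER;

/* Capture thread: push raw EAC3 bytes into the buffer as the card delivers them. */
static void on_audio_data(const uint8_t *data, size_t len)
{
    pthread_mutex_lock(&fifo_lock);
    av_fifo_write(audio_fifo, data, len);
    pthread_cond_signal(&fifo_cond);
    pthread_mutex_unlock(&fifo_lock);
}

/* Decoder-side callback (e.g. an AVIOContext read_packet): peel bytes off the
 * buffer, waiting if nothing has arrived yet. */
static int read_audio(void *opaque, uint8_t *buf, int buf_size)
{
    pthread_mutex_lock(&fifo_lock);
    while (av_fifo_can_read(audio_fifo) == 0)
        pthread_cond_wait(&fifo_cond, &fifo_lock);
    int n = (int)FFMIN((size_t)buf_size, av_fifo_can_read(audio_fifo));
    av_fifo_read(audio_fifo, buf, n);
    pthread_mutex_unlock(&fifo_lock);
    return n;
}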

Without the Magewell hardware and SDK you cannot actually debug and test this, but I appreciate any feedback (I am not strong on C++), and as I said earlier, I am happy to pay for assistance with this.

*---source code attached---*

--Reynolds


Best Regards,
Richard Lince,
Founder & Managing Director

bluebox.video <https://www.bluebox.video>
+44(0)7841665146

_______________________________________________
Libav-user mailing list
[email protected]
https://ffmpeg.org/mailman/listinfo/libav-user

To unsubscribe, visit link above, or email
[email protected] with subject "unsubscribe".
