Paul Davis <[EMAIL PROTECTED]> wrote [Tue, 23 Jul 2002 19:58:16 -0400]:

|the most fundamental problem with SGI's approach to audio+video is
|that it's not based on the desirability of achieving low latency for
|realtime processing and/or monitoring. it's centered on the playback of
|existing, edited, ready-to-view material. the whole section at the end
|of that page about "pre-queuing" the data makes this point very clear.
|
|SGI tried to solve the problem of the Unix read/write API mismatch
|with realtime streaming media by adding timestamps to data. CoreAudio
|solves it by getting rid of read/write, and acknowledging the inherent
|time-basis of the whole thing, but for some reason keeps timestamps
|around without using them in many cases. JACK follows CoreAudio's
|lead, but gets rid of the timestamps.

My concern is audio+video synchronization:
  Currently, I'm using the audio clock (snd_pcm_status_get_tstamp under
  ALSA, or ioctl(fd, SNDCTL_DSP_GETOPTR, &current_ptr) under OSS) to
  provide a timestamp at the start of the video processing, as sketched
  below.
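
  (For concreteness, a minimal sketch of the ALSA variant; the helper
  name is mine, 'pcm' is an already-opened playback handle, and error
  handling is abbreviated:)

    #include <alsa/asoundlib.h>

    /* fetch the driver's timestamp for the current stream position */
    static int get_audio_tstamp (snd_pcm_t *pcm, snd_timestamp_t *ts)
    {
        snd_pcm_status_t *status;
        snd_pcm_status_alloca (&status);         /* on-stack status object */
        if (snd_pcm_status (pcm, status) < 0)    /* query the driver */
            return -1;
        snd_pcm_status_get_tstamp (status, ts);  /* ts is a struct timeval */
        return 0;
    }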

  With JACK's API not providing a timestamp, I cannot know whether any
  extra buffering/latency is added after the callback buffer has been
  processed (which would be the case with rather braindead hardware or
  drivers).
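
  (The closest substitute I can see is asking JACK what latency a port
  declares. A sketch, assuming jack_port_get_total_latency reports the
  accumulated latency along the chain; anything a braindead driver
  hides beyond what it declares would still be invisible, which is
  exactly my worry:)

    #include <jack/jack.h>

    /* frames between process() filling a buffer on this port and the
       driver claiming that buffer has actually been played */
    jack_nframes_t playback_delay (jack_client_t *client, jack_port_t *out)
    {
        return jack_port_get_total_latency (client, out);
    }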

  ...  It also requires me to keep a local clock that I would
  empirically correct by one buffer_size, as set by the
  jack_set_buffer_size_callback, in the hope that it corresponds to the
  actual delay between processing and the time the sound will be heard.
  This is critical for audio/video synchronization, whatever latency
  the system is aiming for.
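
  (Something like the following is what I have in mind. A sketch only:
  the one-buffer_size correction is the empirical part, and all names
  are mine; the callbacks would be registered with
  jack_set_buffer_size_callback, jack_set_sample_rate_callback and
  jack_set_process_callback:)

    #include <jack/jack.h>

    static jack_nframes_t frames_done = 0; /* frames handed to JACK so far */
    static jack_nframes_t bufsize     = 0; /* kept current by bufsize_cb() */
    static jack_nframes_t srate       = 0; /* kept current by srate_cb()   */

    static int bufsize_cb (jack_nframes_t n, void *arg)
    {
        bufsize = n;
        return 0;
    }

    static int srate_cb (jack_nframes_t n, void *arg)
    {
        srate = n;
        return 0;
    }

    static int process (jack_nframes_t nframes, void *arg)
    {
        /* estimated time (seconds since the client started) at which
           the start of this buffer will be heard: frames already
           queued, plus the empirical one-buffer correction */
        double audible = (double)(frames_done + bufsize) / (double)srate;
        frames_done += nframes;

        /* ... hand 'audible' to the video thread as its timestamp ... */
        return 0;
    }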

  Am I at least right in my assumption?

