This is just an academic question. Let's say you are doing something that processes audio and/or video data, either for display and use or to burn it to a CD or file, and the overall process will run for several hours. Assume it is a single-CPU box, without preemption or low-latency patches.
If the app is reniced to a low priority of 10, versus a high priority of -1, but system load is such that the actual CPU time *on average* remains the same, how will the timing and output quality differ? My guess is that under the higher priority, the contiguous stretches of CPU time spent working on the data would be more consistent, while under the lower priority the lengths of the gaps spent away from processing the data would be more variable. People often talk about gaps in audio, maximum latency, average latency, and so on; it is these latency characteristics and their effect on buffer sizes, underruns, and overruns that I am curious about. Conjecture is fine, I'm just looking for any insight someone here might have.
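To make the conjecture concrete, here is a rough toy model (my own sketch, all numbers made up, not a real scheduler): a playback buffer drains at a constant rate and refills only while the app holds the CPU. Both scenarios below give the app the same *average* CPU share, about 10 ms out of every 40 ms, but one of them has occasional long stretches off the CPU.

import random

BUFFER_MS = 100.0     # playback buffer holds 100 ms of audio
BURST_MS = 10.0       # length of each contiguous CPU burst the app receives
FILL_RATE = 12.0      # ms of audio produced per ms of CPU while running
CYCLES = 10_000       # number of burst/gap cycles to simulate

def run(gap_source, seed=0):
    """Count buffer underruns: the buffer refills during CPU bursts and
    only drains while the app is off the CPU (the gap)."""
    rng = random.Random(seed)
    buf = BUFFER_MS
    underruns = 0
    for _ in range(CYCLES):
        # App has the CPU: it produces audio faster than playback consumes it.
        buf = min(BUFFER_MS, buf + BURST_MS * (FILL_RATE - 1.0))
        # App is off the CPU: the buffer only drains.
        buf -= gap_source(rng)
        if buf < 0:
            underruns += 1
            buf = BUFFER_MS   # pretend playback restarts with a full buffer
    return underruns

# Both gap distributions average roughly 30 ms, so the average CPU share is the
# same in both cases; only the variability of the off-CPU gaps differs.
steady = lambda rng: 30.0                                   # regular, short gaps
bursty = lambda rng: 250.0 if rng.random() < 0.1 else 5.6   # rare, very long gaps

print("underruns with steady 30 ms gaps:     ", run(steady))
print("underruns with occasional 250 ms gaps:", run(bursty))

In this toy model the steady case never underruns, while the bursty case glitches on nearly every long gap, even though both get the same average CPU time. That is the effect I suspect the nice level would have in practice, but it is only a guess.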
D. Stimits, [EMAIL PROTECTED]