On Tue, May 6, 2008 at 2:03 PM, Brian Willoughby <[EMAIL PROTECTED]> wrote:
> Frederick,
>
> This is great news!  Thanks for your effort.
>
> Your proof-of-concept raises a few questions for me:
>
> 1) I know that the ratio of uncompressed to compressed data is
> unpredictable, but I never really considered whether the input block size
> or the output block size is constant.  I'm assuming that if you're
> breaking the uncompressed input file into multiple pieces, then the
> uncompressed block size must be constant, while the output block size
> becomes variable.  I suppose the FLAC format allows for some variation in
> block size anyway.  My question (finally) is: Are there any anomalies at
> the points in the stream where you concatenate the compressed blocks from
> the multiple threads?  Do you end up with partial blocks in order to fit
> things together?
Yes, the input block size is constant (for regular WAVs, at least), whereas
the output block size is not.  There are no anomalies in the concatenation,
as I make sure the input file is divided up on block boundaries.

> 2) How hard would it be to enable md5 calculation?

Not hard, I don't think.  Since MD5 is not really parallelizable, I think
it suffices to spawn a separate thread to calculate the MD5 of the whole
file at the same time as the encode, then wait for that thread too and fire
off a separate write request for the metadata.

> 3) Do you accelerate decode as well as encode?  I'm thinking that the
> variable block size would require each thread to scan its block for the
> start of a new block header, and also continue processing past the end of
> its region to grab any partial block skipped by the adjacent thread.
> Scanning the entire file for block offsets would probably expend a
> significant amount of time, so it might be better for each thread to scan
> within its own region.  There's probably no way to weave the output
> together, though, until all threads have completed - at least the ones
> processing earlier parts of the time line.

I looked into it briefly, but haven't had time to get it working.  The
tricky part is dividing up the input file equally, given the variable block
size and the (potential) lack of a SEEKTABLE block.  The best solution
might just be to scan the input FLAC file and push (frame number, byte
offset, byte size) tuples onto a queue which the decoding threads would pop
off, but that's a little more complicated.  I'm still mulling it over. :)

> I agree that it would be useful to have a parallel file- or block-based
> API.  It would be more effort to maintain, but there is a gain for the
> added work, now that multiple processors is more common, even on laptops.
> The flac-mt command-line also seems like a good idea.

Cool!  I'll start working on something then.
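For what it's worth, here is a rough Python sketch of the scheme described
above: split the raw samples on fixed block boundaries so each worker
encodes only whole frames, run the workers in a pool, and compute the
whole-file MD5 in its own thread alongside the encode.  The names
(split_on_block_boundaries, parallel_encode, the caller-supplied
encode_chunk) are illustrative stand-ins, not the actual flac-mt code,
which would call into libFLAC's frame encoder instead.

```python
import hashlib
import threading
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 4096  # samples per frame; the encoder's fixed input block size


def split_on_block_boundaries(data, n_chunks, block_size=BLOCK_SIZE):
    """Divide the input into roughly equal chunks, each starting on a
    block boundary, so every worker encodes only whole frames."""
    total_blocks = -(-len(data) // block_size)        # ceiling division
    per_chunk = max(1, -(-total_blocks // n_chunks))  # blocks per worker
    step = per_chunk * block_size
    return [data[i:i + step] for i in range(0, len(data), step)]


def parallel_encode(data, encode_chunk, n_threads=4):
    """Encode chunks in a thread pool while a separate thread computes the
    whole-file MD5 (MD5 is sequential, so it gets one thread to itself)."""
    digest = {}
    md5_thread = threading.Thread(
        target=lambda: digest.setdefault("md5", hashlib.md5(data).hexdigest()))
    md5_thread.start()
    chunks = split_on_block_boundaries(data, n_threads)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        encoded = list(pool.map(encode_chunk, chunks))  # order is preserved
    md5_thread.join()
    # Concatenating in order is safe because every chunk boundary is also a
    # frame boundary; the MD5 goes into the metadata in a separate write.
    return b"".join(encoded), digest["md5"]
```

Since pool.map returns results in submission order, the compressed chunks
can simply be concatenated even though their sizes vary.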
-- 
Frederick Akalin
http://www.akalin.cx

_______________________________________________
Flac-dev mailing list
[email protected]
http://lists.xiph.org/mailman/listinfo/flac-dev
