Hi,

Joerg Schilling wrote:
> Guess why I recommend to use more than 128MB for the star FIFO
> in order to keep the tape streaming.
>
> With current I/O speed, you need current RAM sizes for buffering.
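To put a number on the quoted recommendation, here is a back-of-envelope sketch (the drive speeds are the ones I found by googling, see below; the 128 MB figure is from the quote) of how many real-time seconds of supply shortage a FIFO of a given size can bridge before the drive falls out of streaming:

```python
def shortage_budget(fifo_mb, drive_mb_per_s):
    """Seconds the drive can keep writing from the FIFO alone,
    assuming the producer delivers nothing at all meanwhile."""
    return fifo_mb / drive_mb_per_s

# Speeds assumed from the figures googled below (DLT ~36 MB/s, LTO ~80 MB/s)
for drive, speed in [("DLT", 36), ("LTO", 80)]:
    print(f"128 MB FIFO on {drive} ({speed} MB/s): "
          f"{shortage_budget(128, speed):.1f} s of slack")
```

So a 128 MB FIFO buys only about 1.6 seconds on an 80 MB/s LTO drive, which is exactly the "few realtime seconds of shortage" discussed below.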
Googling for contemporary speeds ... HP ... 36 MB/s DLT ... 80 MB/s LTO ... well, I'd need a new computer first.

How come the time granularity of the backup processing chain does not get finer as the systems get faster? Since our FIFOs mainly have an averaging effect, a finer granularity would avoid the need to make them larger. But we clearly do have to enlarge them: we still have to prepare for a few realtime seconds of shortage. So something negative must be growing along with our faster systems. More processes causing disturbance? Larger amounts of disk data and therefore larger disturbances? Compression ratios have stayed more or less the same: 2:1 as a rule of thumb.

On another matter: in http://lists.debian.org/cdwrite/2004/cdwrite/2006/01/msg00057.html I ask whether the current behavior of cdrecord with padsize= and multiple tracks will be upheld, or whether it will be changed to comply with the man page. A short answer would be welcome.

Have a nice day :)

Thomas