On Mon 27 May 2002 22:28, Cameron Simpson wrote:
> On 17:42 27 May 2002, daniel <[EMAIL PROTECTED]> wrote:
> | say i have a whole bunch of files i wanna tarball weekly (~2gb). i can't
> | back that up to a cd, so the first thought is to have tar compress the
> | whole lot into 650mb chunks. can this be done so that each chunk is a
> | useable fragment? or do i have to reassemble all the pieces in order to
> | extract some data from the combined tarball?
>
> With tar alone, possibly you can use the -L option (see the manual
> entry). This may require some cooperation on your part. And I don't
> know if each component is self-contained - it may just be cut up on
> block boundaries, with no respect for file boundaries.
>
> With a small wrapper script, sure.
> Basically you want something shaped like this:
>
>     find ..... list everything to backup ... \
>         | the-script
>
> where the-script in turn pipes off into a tar which reads names
> from stdin, lstat()s them to get their size, checks how much bigger
> the tar file would be (1 x 512 byte block, plus file size rounded up
> to a 512 byte multiple) and if it won't fit, terminates the current
> tar (closes the pipe) and opens a fresh one, with a dialogue to get
> you to change tapes or rename files or enter a new filename or
> something.
>
> Shouldn't be too hard.
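Something like this minimal sketch might do it (untested; instead of
feeding one long-running tar through a pipe, it batches names into a
list file and runs one tar per chunk, which is simpler and gives the
same result; the 650 MB limit, the chunk.N.tar names, and the reliance
on GNU tar's -T/--no-recursion and GNU stat are all my own assumptions):

    #!/bin/sh
    # split-tar: read file names on stdin and pack them into tar
    # archives of at most LIMIT bytes each, using the accounting
    # Cameron describes: one 512-byte header block per file, plus
    # the data rounded up to a 512-byte multiple.
    # Assumes GNU tar (-T, --no-recursion) and GNU stat(1);
    # stat without -L reports the link itself, like lstat().

    LIMIT=$((650 * 1024 * 1024))    # 650 MB per chunk
    n=0 total=0
    list=`mktemp /tmp/split-tar.XXXXXX` || exit 1

    # close off the current chunk: tar up the accumulated list
    flush() {
        [ -s "$list" ] || return 0
        n=$((n + 1))
        # --no-recursion: the find upstream already lists every file
        tar -cf chunk.$n.tar --no-recursion -T "$list"
        : > "$list"
        total=0
    }

    while IFS= read -r f; do
        size=`stat -c %s "$f"` || continue
        # one header block plus data rounded up to 512 bytes
        need=$((512 + (size + 511) / 512 * 512))
        # won't fit in the current chunk? start a fresh one
        # (a single file bigger than LIMIT still gets its own,
        # oversized chunk)
        if [ $((total + need)) -gt "$LIMIT" ] && [ -s "$list" ]; then
            flush
        fi
        printf '%s\n' "$f" >> "$list"
        total=$((total + need))
    done

    flush
    rm -f "$list"

Fed from a find like Cameron's, e.g.

    find /home -xdev \( -type f -o -type l \) | sh split-tar

each chunk.N.tar should then be an ordinary archive you can extract
on its own.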
This would work well as long as there aren't big files in the backup
set: when a large file doesn't fit, the current chunk gets closed off
early, so some chunks could come out well under 650 MB.

Regards... :-)

--
Why use just any relational database, when you can use PostgreSQL?
-----------------------------------------------------------------
Martín Marqués                  | [EMAIL PROTECTED]
Programmer, Administrator, DBA  | Centro de Telematica
                                  Universidad Nacional del Litoral
-----------------------------------------------------------------