Tim is correct: the gzip data stream allows concatenation of
compressed members. You can break the input stream into a bunch
of blocks [A, B, C, etc.], compress them separately into
[A.gz, B.gz, C.gz, etc.], append those together, and when the
result is uncompressed you get the original input stream back.
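As a concrete illustration (a minimal sketch; the file names here are arbitrary), two independently compressed members concatenate into a stream that gunzip accepts as one:

```shell
# Compress two blocks separately, concatenate the members,
# and decompress the result as a single stream.
printf 'hello ' > A
printf 'world\n' > B
gzip -c A > A.gz
gzip -c B > B.gz
cat A.gz B.gz > AB.gz
gunzip -c AB.gz    # prints "hello world"
```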
On Tue, Oct 09, 2012 at 09:54:03PM -0700, Tim Kientzle wrote:
>
> On Oct 8, 2012, at 3:21 AM, Wojciech Puchar wrote:
Not necessarily. If I understand correctly what Tim means, he's talking
about an in-memory compression of several blocks by several separate
threads, and then - after all the threads have compressed their
respective blocks - writing out the result to the output file in order.
but gzip format is a single stream. IMHO the dictionary is not reset
every X kilobytes.
On Mon, Oct 08, 2012 at 08:38:33AM +0200, Wojciech Puchar wrote:
gzip cannot - it is single stream.
gunzip commutes with cat, so gzip
compression can be multi-threaded
by compressing separate blocks and
concatenating the result.
right. but resulting file format must be different.
_______________________________________________
freebsd-hackers@freebsd.org mailing list
I would be willing to work on a SMP version of tar (initially just gzip or
something).
I don't have the best experience in compression, and how to multi-thread it,
but I think I would be able to learn and help out.
gzip cannot - it is single stream.
bzip2 - no idea
grzip (from ports)
> Someone might want to ask if parallelizing tar is even possible.
Answer: Yes. Here's a simple parallel version of tar:
find . | cpio -o -H ustar | gzip > outfile.tgz
There are definitely other approaches.
Tim
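One other such approach (a sketch, not from the thread): pipe the tar stream through a parallel compressor such as pigz from ports; plain gzip is used as a fallback below so the pipeline still runs where pigz is not installed.

```shell
# Archive with tar, compress the stream with a parallel gzip
# implementation when one is available.
GZ=$(command -v pigz || command -v gzip)
mkdir -p tree && echo hello > tree/a.txt
tar -cf - tree | "$GZ" > outfile.tgz
gunzip -c outfile.tgz | tar -tf -    # lists the archived paths
```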
On 10/02/2012 03:06 AM, Adrian Chadd wrote:
> .. please keep in mind that embedded platforms (a) don't necessarily
> benefit from it, and (b) have a very small footprint. Bloating out the
> compression/archival tools for the sake of possible SMP support will
> make me very, very sad.
On Oct 2, 2012, at 12:36 AM, Yamagi Burmeister wrote:
On Mon, Oct 01, 2012 at 10:16:53PM -0700, Tim Kientzle wrote:
> * Implement within libarchive directly. This would benefit tar and
> a handful of other programs that use libarchive, but may not be
> worth the complexity.
The complexity shouldn't actually be that bad.
Don't worry. I'm well known to over-optimize for both size and speed. I
have an old Pentium 3 800MHz single core that I can use to simulate an
embedded device (well, a decently powered one), to verify that I'm not
killing the single-core performance (I could add CPU capability
detection to help).
On Mon, 1 Oct 2012 22:16:53 -0700
Tim Kientzle wrote:
> There are a few different parallel command-line compressors and decompressors
> in ports; experiment a lot (with large files being read from and/or written
> to disk) and see what the real effect is.
On Oct 1, 2012, at 9:51 AM, Brandon Falk wrote:
I would be willing to work on a SMP version of tar (initially just gzip
or something).
I don't have the best experience in compression, and how to multi-thread
it, but I think I would be able to learn and help out.
Note: I would like to make this for *BSD under the BSD license.