How well does the tool deal with *really big* archives?

Say, a 100 MB archive with over 5000 files (each usually around 30 KB), all
laid out in a single directory rather than a hierarchy.  As you can imagine,
dumping all those files into the local filesystem introduces some
'interesting' performance complications, so I'd rather avoid extracting them
directly.  I'm fine with decompressing the .tar.gz (or .bz2) part of it
first, even though that balloons the 100 MB archive to about 1 GB
uncompressed.  It's the resulting avalanche of individual files, potentially
with names not legal for NTFS, that presents the biggest problem.

I want to open the archives, generate a hash of each file, and record other
filesystem metadata such as size and creation/modification dates.  Can I use
this class to get at that data, or do I have to dump it to disk first?

I'm in the process of trying the SharpZipLib classes but would welcome
advice on any 'gotchas' ahead of time....
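
For reference, here's the sort of in-memory loop I'm hoping will work.
It's just a sketch, assuming SharpZipLib's GZipInputStream/TarInputStream
streaming classes; the hash choice and the one-argument TarInputStream
constructor are my guesses from the docs:

    using System;
    using System.IO;
    using System.Security.Cryptography;
    using ICSharpCode.SharpZipLib.GZip;
    using ICSharpCode.SharpZipLib.Tar;

    class ArchiveHasher
    {
        static void Main(string[] args)
        {
            // args[0] is the path to the .tar.gz archive; for .bz2,
            // BZip2InputStream (ICSharpCode.SharpZipLib.BZip2) would
            // replace GZipInputStream.
            using (Stream file = File.OpenRead(args[0]))
            using (Stream gz = new GZipInputStream(file))
            using (TarInputStream tar = new TarInputStream(gz))
            {
                MD5 md5 = MD5.Create();
                TarEntry entry;
                while ((entry = tar.GetNextEntry()) != null)
                {
                    if (entry.IsDirectory)
                        continue;

                    // TarInputStream bounds Read() to the current entry,
                    // so ComputeHash stops at the end of this file's data
                    // and never touches the local filesystem.
                    byte[] hash = md5.ComputeHash(tar);

                    Console.WriteLine("{0}\t{1}\t{2}\t{3}",
                        entry.Name,
                        entry.Size,
                        entry.ModTime,
                        BitConverter.ToString(hash));
                }
            }
        }
    }

One wrinkle I already see: tar headers only record a modification time, so
the 'creation date' part may not be recoverable without extracting.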

-Bill Kearney


----- Original Message -----

> Hrm... Looks like no attachments are allowed. Here it is inline. You
> should be able to manually extract each file individually so you can
> tweak the filename. I use the ExtractContents method here because it
> works for my tars...
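
Sketching what that per-entry approach might look like, assuming
SharpZipLib's TarInputStream and its CopyEntryContents method; SanitizeName
here is a hypothetical helper, not part of the library:

    using System;
    using System.IO;
    using ICSharpCode.SharpZipLib.GZip;
    using ICSharpCode.SharpZipLib.Tar;

    class PerEntryExtract
    {
        // Hypothetical helper: replace characters NTFS won't accept.
        static string SanitizeName(string name)
        {
            foreach (char c in Path.GetInvalidFileNameChars())
                name = name.Replace(c, '_');
            return name;
        }

        static void Main(string[] args)
        {
            using (Stream file = File.OpenRead(args[0]))
            using (Stream gz = new GZipInputStream(file))
            using (TarInputStream tar = new TarInputStream(gz))
            {
                TarEntry entry;
                while ((entry = tar.GetNextEntry()) != null)
                {
                    if (entry.IsDirectory)
                        continue;
                    string target =
                        SanitizeName(Path.GetFileName(entry.Name));
                    using (FileStream outFile = File.Create(target))
                    {
                        // Copies exactly the current entry's bytes.
                        tar.CopyEntryContents(outFile);
                    }
                }
            }
        }
    }

Tweaking the names this way sidesteps the NTFS-illegal filenames, though
extracting still hits the single-directory performance problem.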


