On Fri, Aug 13, 2010 at 9:03 AM, Andrew Farnsworth <farn...@gmail.com> wrote:
> Being a dev server I can control hits on it so I let the copy get to >25%
> and then hit it... it still died and started over.  25% of 700Mb is over
> 100Mb.  If it doesn't hit restartable by then a developer needs to be
> introduced to the cluebat.
>
> Andy
>

I didn't think of this, but I bet robocopy sees that the log file has
changed since the copy started, because new data keeps getting appended
to it.  I don't know whether it does a checksum or just checks the
timestamp, but that would explain the behavior.  It sees that the source
file is different and assumes that resuming would corrupt the copy.

If this is the case, you might be back to figuring out how to loop
through the file a few KB at a time.
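A rough sketch of that loop idea, assuming a GNU userland and made-up
paths, could be as simple as:

```shell
# Hypothetical sketch of the "grab it a chunk at a time" idea.
# SRC is the growing log, DST is the local partial copy; both paths
# are placeholders.
SRC=/mnt/server/app.log
DST=./app.log

# How many bytes do we already have locally? (0 if no copy yet.)
have=$(stat -c %s "$DST" 2>/dev/null || echo 0)

# tail -c +N starts output at byte N (1-based), so this appends only
# the bytes past the ones we already copied.
tail -c +"$((have + 1))" "$SRC" >> "$DST"
```

Run that from cron every few minutes and the copy catches up
incrementally instead of starting over each time.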

I don't think this is the answer in your environment, but I'll throw it
out there anyway:

Wget has a resumable download option where it grabs the target file
starting at the point where the local copy leaves off.  I'm pretty
sure it ignores timestamps and ETags and only goes by file size.
If you could create a vhost restricted to localhost access, use the
log folder as the web root, and get the DeLorean up to 88 mph, you
might be able to get your log.
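The fetch side of that would be a one-liner; the URL and file name
below are assumptions (the vhost setup itself is left out):

```shell
# Hypothetical: the localhost-only vhost serves the log directory at
# http://localhost/, and app.log is a made-up file name.
# -c / --continue makes wget request only the bytes past the end of the
# existing local file, so an interrupted pull resumes instead of restarting.
wget -c http://localhost/app.log
```

Since it resumes purely by size, appending on the server side is fine,
but anything that rewrites earlier bytes of the log would silently
corrupt the local copy.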

-- 
Don Delp
618.616.2993
http://nesman.net/

-- 
You received this message because you are subscribed to the Google Groups 
"NLUG" group.
To post to this group, send email to nlug-talk@googlegroups.com
To unsubscribe from this group, send email to 
nlug-talk+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/nlug-talk?hl=en