Thanks, Jean-Baptiste. This is very interesting, partly because it is similar
in approach to an idea I have for splitting the data replication task across
multiple processes and hosts. Since I am working with Lustre on very large
(petascale) file systems, I want to be able to exploit Lustre's lfs find and
changelog features, but I could use fpart to do the actual work. One could
also use a job scheduler to keep track of jobs that fail.
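Roughly, I am imagining something like the sketch below. It is only a minimal
illustration: the paths, chunk size, and destination host are made up, the
flat loop stands in for whatever the job scheduler would do, and I am going
from a quick read of the fpart README.

    #!/bin/sh
    # Build the work list from Lustre's own metadata rather than a
    # generic filesystem crawl; an incremental run could consume
    # "lfs changelog" output instead of re-scanning everything.
    lfs find /lustre/project -type f > /tmp/filelist

    # Partition the list with fpart: at most 10000 files per chunk
    # (-f), chunks written as /tmp/part.0, /tmp/part.1, ... (-o).
    fpart -i /tmp/filelist -f 10000 -o /tmp/part

    # One rsync stream per chunk. Record failed chunks so they can be
    # re-queued; a real scheduler would also throttle concurrency.
    for p in /tmp/part.*; do
        rsync -a --files-from="$p" / backup01:/archive \
            || echo "$p" >> /tmp/failed &
    done
    wait

The appeal is that each chunk becomes an independent, retryable unit, so a
failed transfer can be resubmitted on its own instead of restarting the whole
job.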

Malcolm.

--

Malcolm Cowe, Solutions Architect
High Performance Data Division 
+61 408 573 001



> -----Original Message-----
> From: Jean-Baptiste Denis [mailto:[email protected]]
> Sent: Saturday, July 12, 2014 6:59 PM
> To: [email protected]
> Subject: Re: [robinhood-support] Project Web Site?
> 
> On 07/12/2014 12:51 AM, Cowe, Malcolm J wrote:
> > BTW, I was wanting to look at the backup purpose and use it to create a
> > parallel data mover
> 
> You might be interested in the fpart program
> (http://sourceforge.net/projects/fpart/), which can be used for moving
> data around. See the "migrating data" section in the README. A colleague
> of mine wrote it to help us move hundreds of terabytes (80% of the files
> were < 128k...) using multiple rsync streams between proprietary NAS
> appliances.
> 
> Jean-Baptiste

