Hi Jan,

We are syncing ACLs, groups, owners and timestamps as well :)
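
For reference, a minimal sketch of the kind of invocation that carries that
metadata along with the data (the paths are placeholders, not our real mounts):

    #!/usr/bin/env python3
    # Sketch: a single rsync call that preserves ACLs, xattrs, owners,
    # groups and timestamps. SRC/DST are hypothetical mount points.
    import subprocess

    SRC = "/mnt/isilon/export/"   # hypothetical NFS mount of the Isilon
    DST = "/gpfs/fs1/target/"     # hypothetical Scale filesystem path

    subprocess.run(
        [
            "rsync",
            "-aAX",           # -a: owner/group/perms/times; -A: ACLs; -X: xattrs
            "--numeric-ids",  # avoid uid/gid remapping between the two systems
            SRC,
            DST,
        ],
        check=True,
    )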

/Andi Christiansen

>     On 11/17/2020 1:07 PM Jan-Frode Myklebust <janfr...@tanso.net> wrote:
> 
> 
>     Nice to see it working well!
> 
>     But what about ACLs? Does your rsync pull in all the needed metadata, or
>     do you also need to sync the ACLs separately? Any plans for how to solve
>     that?
> 
>     On Tue, Nov 17, 2020 at 12:52 PM Andi Christiansen 
> <a...@christiansen.xxx> wrote:
> 
> >         Hi all,
> > 
> >         thanks for all the information, there were some interesting things
> >         among it..
> > 
> >         I kept on going with rsync and ended up making a file with all top
> >         level user directories and splitting them into chunks of 347 per
> >         rsync session (42000-ish folders in total). Yesterday we had only
> >         14 sessions with 3000 folders each, and that was too much work for
> >         one rsync session..
> > 
> >         I divided them out among all the GPFS nodes so each fetches an
> >         area of its own, actually running that 3 times on each node, and
> >         that has now boosted the bandwidth usage from 3Gbit to around
> >         16Gbit in total..
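
(An inline aside: a minimal sketch of that chunking scheme. The list file
name, the mount paths and the rsync flags are assumptions, not the exact
commands we run.)

    #!/usr/bin/env python3
    # Sketch: split the ~42000 top-level directories into chunks of 347
    # and run a few parallel rsync sessions over them on this node.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    CHUNK_SIZE = 347        # directories per rsync session
    SESSIONS_PER_NODE = 3   # parallel sessions on this node
    SRC_ROOT = "/mnt/isilon/export"  # hypothetical NFS mount of the Isilon
    DST_ROOT = "/gpfs/fs1/target"    # hypothetical Scale target

    with open("toplevel_dirs.txt") as f:  # one directory name per line
        dirs = [line.strip() for line in f if line.strip()]

    chunks = [dirs[i:i + CHUNK_SIZE] for i in range(0, len(dirs), CHUNK_SIZE)]

    def sync_chunk(chunk):
        # the /./ marker plus --relative recreates each directory under DST_ROOT
        srcs = [f"{SRC_ROOT}/./{d}" for d in chunk]
        subprocess.run(["rsync", "-aAX", "--relative", *srcs, DST_ROOT],
                       check=True)

    # each node runs this over its own share of the chunks
    with ThreadPoolExecutor(max_workers=SESSIONS_PER_NODE) as pool:
        list(pool.map(sync_chunk, chunks))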
> > 
> >         All nodes have been seen doing work above 7Gbit individually,
> >         which is actually close to what I was expecting without any
> >         modifications to the NFS server or TCP tuning..
> > 
> >         CPU is around 30-50% on each server, and mostly below or around
> >         30%, so it seems like they could have handled a bit more sessions..
> > 
> >         Small files are really a killer, but with the 96+ sessions we have
> >         now it's not often that all sessions are handling small files at
> >         the same time, so we average about 10-12Gbit of bandwidth usage.
> > 
> >         Thanks all! I'll keep you in mind if for some reason we see it
> >         slowing down again, but for now I think we will try to see if it
> >         will go the last mile with a bit more sessions on each node :)
> > 
> >         Best Regards
> >         Andi Christiansen
> > 
> >         > On 11/17/2020 9:57 AM Uwe Falke <uwefa...@de.ibm.com> wrote:
> >         >
> >         > 
> >         > Hi, Andi, sorry, I took your 20Gbit as a sign of 2x10Gbps bonds,
> >         > but it is spread over two nodes, so no bonding. Still, I'd expect
> >         > that opening several TCP connections in parallel per
> >         > source-target pair (like with several rsyncs per source node)
> >         > would bear an advantage (and I still think NFS doesn't do that,
> >         > but I could be wrong).
> >         > If more nodes have access to the Isilon data, they could also
> >         > participate (and they don't need NFS exports for that).
> >         >
> >         > Mit freundlichen Grüßen / Kind regards
> >         >
> >         > Dr. Uwe Falke
> >         > IT Specialist
> >         > Hybrid Cloud Infrastructure / Technology Consulting &
> >         > Implementation Services
> >         > +49 175 575 2877 Mobile
> >         > Rathausstr. 7, 09111 Chemnitz, Germany
> >         > uwefa...@de.ibm.com
> >         >
> >         > IBM Services
> >         >
> >         > IBM Data Privacy Statement
> >         >
> >         > IBM Deutschland Business & Technology Services GmbH
> >         > Geschäftsführung: Sven Schooss, Stefan Hierl
> >         > Sitz der Gesellschaft: Ehningen
> >         > Registergericht: Amtsgericht Stuttgart, HRB 17122
> >         >
> >         >
> >         >
> >         > From:   Uwe Falke/Germany/IBM
> >         > To:     gpfsug main discussion list
> >         > <gpfsug-discuss@spectrumscale.org>
> >         > Date:   17/11/2020 09:50
> >         > Subject:        Re: [EXTERNAL] [gpfsug-discuss]
> >         > Migrate/synchronize data from Isilon to Scale over NFS?
> >         >
> >         >
> >         > Hi Andi,
> >         >
> >         > what about leaving NFS completely out and using rsync (multiple
> >         > rsyncs in parallel, of course) directly between your source and
> >         > target servers? I am not sure how many TCP connections are opened
> >         > in parallel between client and server (supposing it is NFSv4);
> >         > using a 2x bonded interface well requires at least two. That,
> >         > combined with the DB approach suggested by Jonathan to control
> >         > the activity of the rsync streams, would be my best guess.
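
(An aside: a minimal sketch of such a queue-controlled scheme, here using
SQLite as the shared table since Jonathan's original suggestion isn't shown
in this thread; the paths and the table layout are assumptions.)

    #!/usr/bin/env python3
    # Sketch: workers on several nodes claim directories from a shared
    # SQLite table so no two rsync streams duplicate work. Assumes a
    # pre-filled table: CREATE TABLE queue(dir TEXT PRIMARY KEY,
    # state TEXT DEFAULT 'todo').
    import sqlite3
    import subprocess

    DB = "/gpfs/fs1/rsync_queue.db"  # hypothetical shared location
    SRC_ROOT = "/mnt/isilon/export"  # hypothetical source mount
    DST_ROOT = "/gpfs/fs1/target"    # hypothetical Scale target

    conn = sqlite3.connect(DB, timeout=60)
    while True:
        with conn:  # SQLite serializes writers, so each claim is atomic
            row = conn.execute(
                "UPDATE queue SET state='busy' WHERE dir=("
                " SELECT dir FROM queue WHERE state='todo' LIMIT 1"
                ") RETURNING dir"  # RETURNING needs SQLite 3.35+
            ).fetchone()
        if row is None:
            break  # nothing left to claim
        d = row[0]
        subprocess.run(
            ["rsync", "-aAX", f"{SRC_ROOT}/{d}/", f"{DST_ROOT}/{d}/"],
            check=True,
        )
        with conn:
            conn.execute("UPDATE queue SET state='done' WHERE dir=?", (d,))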
> >         > If you have many small files, the overhead might still kill you.
> >         > Tarring them up into larger aggregates for transfer would help a
> >         > lot, but then you must be sure they won't change, or you need to
> >         > implement your own version control for that class of files.
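
(A small sketch of that tar aggregation: stream one archive per directory
through ssh instead of per-file round trips. Host and path names are
placeholders, and GNU tar's --acls/--xattrs flags are assumed to be
available on both ends.)

    #!/usr/bin/env python3
    # Sketch: pack one small-file directory into a tar stream and unpack
    # it on the destination node, keeping owners, times, ACLs and xattrs.
    import subprocess

    SRC_DIR = "/mnt/isilon/export/userX"  # hypothetical small-file directory
    DST_HOST = "scale-node1"              # hypothetical Scale node
    DST_DIR = "/gpfs/fs1/target/userX"

    tar = subprocess.Popen(
        ["tar", "-C", SRC_DIR, "--acls", "--xattrs", "-cf", "-", "."],
        stdout=subprocess.PIPE,
    )
    subprocess.run(
        ["ssh", DST_HOST,
         f"mkdir -p {DST_DIR} && tar -C {DST_DIR} --acls --xattrs -xpf -"],
        stdin=tar.stdout,
        check=True,
    )
    tar.stdout.close()
    if tar.wait() != 0:
        raise RuntimeError("source-side tar failed")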
> >         >
> >         > Mit freundlichen Grüßen / Kind regards
> >         >
> >         > Dr. Uwe Falke
> >         > IT Specialist
> >         > Hybrid Cloud Infrastructure / Technology Consulting &
> >         > Implementation Services
> >         > +49 175 575 2877 Mobile
> >         > Rathausstr. 7, 09111 Chemnitz, Germany
> >         > uwefa...@de.ibm.com
> >         >
> >         > IBM Services
> >         >
> >         > IBM Data Privacy Statement
> >         >
> >         > IBM Deutschland Business & Technology Services GmbH
> >         > Geschäftsführung: Sven Schooss, Stefan Hierl
> >         > Sitz der Gesellschaft: Ehningen
> >         > Registergericht: Amtsgericht Stuttgart, HRB 17122
> >         >
> >         >
> >         >
> >         >
> >         > From:   Andi Christiansen <a...@christiansen.xxx>
> >         > To:     "gpfsug-discuss@spectrumscale.org 
> > mailto:gpfsug-discuss@spectrumscale.org "
> >         > <gpfsug-discuss@spectrumscale.org 
> > mailto:gpfsug-discuss@spectrumscale.org >
> >         > Date:   16/11/2020 20:44
> >         > Subject:        [EXTERNAL] [gpfsug-discuss] Migrate/synchronize
> >         > data from Isilon to Scale over NFS?
> >         > Sent by:        gpfsug-discuss-boun...@spectrumscale.org
> >         >
> >         >
> >         >
> >         > Hi all,
> >         >
> >         > I have got a case where a customer wants 700TB migrated from
> >         > Isilon to Scale, and the only way for him is exporting the same
> >         > directory over NFS from two different nodes...
> >         >
> >         > As of now we are using multiple rsync processes on different
> >         > parts of the folders within the main directory. This is really
> >         > slow and will take forever.. right now 14 rsync processes are
> >         > spread across 3 nodes, fetching from 2..
> >         >
> >         > Does anyone know of a way to speed it up? Right now we see from
> >         > 1Gbit to 3Gbit if we are lucky (total bandwidth), and there is a
> >         > total of 30Gbit from the Scale nodes and 20Gbit from the Isilon,
> >         > so we should be able to reach just under 20Gbit...
> >         >
> >         >
> >         > If anyone has any ideas, they are welcome!
> >         >
> >         >
> >         > Thanks in advance
> >         > Andi Christiansen

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
