Hello,

We had very bad performance with both rbh scans and changelog processing,
even though our Robinhood daemon runs on a bare-metal server
with a lot of RAM, multiple CPUs and fast SSDs.

Thanks to Sebastien Piechurski we discovered that the CPU assignment
of the processes was the main reason for the bad performance.

To fix it:
- Lustre was pinned to the CPU in charge of the HBA
- rbh and mysqld were pinned to the same CPU (see the sketch below)
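
A minimal sketch of that kind of pinning, assuming the HBA sits on NUMA node 0 and robinhood/mysqld go to node 1 (the PCI address, node numbers and CPU ranges are placeholders, check your own topology with lscpu and the numa_node files in sysfs):

    # which NUMA node owns the HBA? (replace the PCI address with yours)
    cat /sys/bus/pci/devices/0000:5e:00.0/numa_node

    # pin the already-running mysqld and robinhood processes to the CPUs of node 1
    # (assumes a single pid each)
    taskset -pc 24-47 $(pidof mysqld)
    taskset -pc 24-47 $(pidof robinhood)

    # or start robinhood bound to node 1 for both CPUs and memory
    numactl --cpunodebind=1 --membind=1 robinhood --scan --once

(Binding the Lustre client threads themselves is typically done through the libcfs CPU partition settings, e.g. the cpu_pattern module parameter, rather than with taskset.)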

Maybe I am off topic, sorry if so...

Regards
Hervé



----- Original Message -----
From: "Iannetti, Gabriele" <ianne...@gsi.de>
To: "Kumar, Amit"
Cc: "lustre-discuss"
Sent: Wednesday, December 9, 2020 10:49:08
Subject: Re: [lustre-discuss] Robinhood scan time

Hi Amit,

We also faced very slow full scan performance before.

As Aurélien mentioned, it is essential to investigate the processing
stages reported in the Robinhood logs.

In our setup the GET_FID stage was the bottleneck, since that stage more often
showed a relatively low total number of entries processed.
So increasing nb_threads_scan helped (see the config snippet below).
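
For reference, if I remember the configuration layout correctly, the scan thread count is set in the FS_Scan block of the Robinhood configuration, something like this (the value is only an example, not a recommendation):

    FS_Scan {
        # number of threads used for the file system scan
        nb_threads_scan = 4;
    }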

Of course, other stages, e.g. DB_APPLY, with a relatively low total number of
entries processed can indicate a bottleneck on the database side.
So keep in mind that there are multiple layers to take into
consideration for performance tuning.

For running multiple file system scan tests, you could consider doing a partial
scan (with the same test data) with Robinhood instead of scanning the whole file
system, which would take much more time.
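
For example, something along these lines, if I remember the options correctly (the config file and directory are placeholders):

    robinhood -f /etc/robinhood.d/myfs.conf --scan=/myfs/test_subdir --once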

I would like to share a diagram with you, where you can see a comparison of
nb_threads_scan = 64 vs. 2.
64 was the maximum we have tested so far. In the production system the number
is set to 48, since more is not always better: as far as I can remember, we hit
main memory issues beyond that.

Best regards
Gabriele



________________________________________
From: lustre-discuss  on behalf of Degremont, Aurelien 
Sent: Tuesday, December 8, 2020 10:39
To: Kumar, Amit; Stephane Thiell
Cc: lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] Robinhood scan time

There could be lots of differences between these two systems.
- What is the backend FS type? (ZFS or LDiskfs)
- How many MDTs do you have?
- Are 2 threads enough to max out your scan throughput? Stephane said he used 4
and 8 of them.
- What is the workload running on the MDT at the same time? Is it already
overloaded by your users' jobs?

Robinhood also dumps its pipeline stats regularly in its logs. You can
spot which step of the pipeline is slowing you down.
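
For example, something like this (the log file path is a placeholder; the lines are tagged with "STATS", as in the excerpts quoted below):

    grep 'STATS' /var/log/robinhood/robinhood.log | tail -n 60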

Aurélien

On 07/12/2020 20:59, "Kumar, Amit" wrote:


    Hi Stephane & Aurélien

    Here are the stats that I see in my logs:

    Below is the best and worst avg. speed I noted in the log, with nb_threads_scan=2:
    2020/11/03 16:51:04 [4850/3] STATS |      avg. speed  (effective):    618.32 entries/sec (3.23 ms/entry/thread)
    2020/11/25 18:06:10 [4850/3] STATS |      avg. speed  (effective):    187.93 entries/sec (10.62 ms/entry/thread)

    Finally the full scan results are below:
    2020/11/25 17:13:41 [4850/4] FS_Scan | Full scan of /scratch completed, 369729104 entries found (123 errors). Duration = 1964257.21s
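
    For reference, that works out to 369729104 entries / 1964257 s ≈ 188 entries/sec on average, i.e. about 16 million entries/day, which is consistent with the ~23 days I mentioned for the full scan (1964257 s ≈ 22.7 days).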

    Stephane, now I wonder what could have caused the poor scanning performance. When I kicked off my initial scan during LAD with the same number of threads (2), the scan, together with some user jobs over the following days, generated 150-200 million file open/close operations and as a result filled up my changelog sooner than I expected. I had to cancel that first initial scan to bring the situation under control. After I cleared the changelog, I asked Robinhood to perform a new full scan. I am not sure whether this cancel and restart could have caused delays, with additional database lookups for the ~200 million entries that had already been scanned by then? The other thing you point out is that you have RAID-10 SSDs; on our end I have RAID-5 with 3.6TB of SSDs, which probably explains the slowness?

    I wasn't sure of the impact of the scan, hence I chose only 2 threads. I am guessing I could bump that up to 4 next time to see if it benefits my scan times.

    Thank you,
    Amit

    -----Original Message-----
    From: Stephane Thiell 
    Sent: Monday, December 7, 2020 11:43 AM
    To: Degremont, Aurelien 
    Cc: Kumar, Amit ; Russell Dekema ; lustre-discuss@lists.lustre.org
    Subject: Re: [lustre-discuss] Robinhood scan time

    Hi Amit,

    Your number is very low indeed.

    At our site, we're seeing ~100 million files/day during a Robinhood scan with nb_threads_scan = 4, on hardware using Intel-based CPUs:

    2020/11/16 07:29:46 [126653/2] STATS |      avg. speed  (effective):    1207.06 entries/sec (3.31 ms/entry/thread)

    2020/11/16 07:31:44 [126653/29] FS_Scan | Full scan of /oak completed, 1508197871 entries found (65 errors). Duration = 1249490.23s

    In that case, our Lustre MDS and Robinhood server are all running on 2 x CPU E5-2643 v3 @ 3.40GHz.
    The Robinhood server has 768GB of RAM and 7TB of SSDs in RAID-10 for the DB.

    On another filesystem, using AMD Naples-based CPUs and a dedicated Robinhood DB hosted on a different server with AMD Rome CPUs, we’re seeing a rate of 266M/day during a Robinhood scan with nb_threads_scan = 8:

    2020/09/20 21:43:46 [25731/4] FS_Scan | Full scan of /fir completed, 877905438 entries found (744 errors). Duration = 284564.88s


    Best,

    Stephane

    > On Dec 7, 2020, at 4:49 AM, Degremont, Aurelien  wrote:
    >
    > Hi Amit,
    >
    > Thanks for this data point, that's interesting.
    > Robinhood prints a scan summary in its logfile at the end of the scan. It would be nice if you could copy/paste it, for further reference.
    >
    > Aurélien
    >
    > On 04/12/2020 23:39, "lustre-discuss on behalf of Kumar, Amit" wrote:
    >
    >
    >    Dual Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz;
    >    256GB RAM
    >    System x3650 M5
    >    Storage for MDT is from NetApp EF560.
    >
    >    Best regards,
    >    Amit
    >
    >    -----Original Message-----
    >    From: Russell Dekema 
    >    Sent: Friday, December 4, 2020 4:27 PM
    >    To: Kumar, Amit 
    >    Cc: lustre-discuss@lists.lustre.org
    >    Subject: Re: [lustre-discuss] Robinhood scan time
    >
    >    Greetings,
    >
    >    What kind of hardware are you running on your metadata array?
    >
    >    Cheers,
    >    Rusty Dekema
    >
    >    On Fri, Dec 4, 2020 at 5:12 PM Kumar, Amit  wrote:
    >>
    >> Hi All,
    >>
    >>
    >>
    >> During LAD’20, Andreas asked if I could share the Robinhood scan time for the 369 million files we have. So here it is. It took ~23 days for me to complete the initial scan of all 369 million files, on a dedicated Robinhood server that has 384GB RAM. I had it set up with all the database and client tweaks mentioned in the Robinhood documentation. I only used 2 threads for this scan. Hope this reference helps.
    >>
    >>
    >>
    >> Thank you,
    >>
    >> Amit
    >>
    >>
    >>

