On 3/24/17 12:17 PM, IBM Spectrum Scale wrote:
Hi Bryan,

Making sure Malahal's reply was received by the user group.

>> Then we noticed that the CES host had 5.4 million files open

This is technically not possible with Ganesha alone. A single process can 
normally only open about 1 million files on a RHEL distribution (the default 
per-process limit). Either we have leaks in the kernel or some other 
processes are contributing to this.

Ganesha does keep NFSv3 files open for performance. It doesn't have good 
logic to close them after a period of inactivity, but it does close them when 
the count approaches the maximum number of open files, which is configurable.

How do you set the max open files for Ganesha?
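For reference, in stock nfs-ganesha 2.x the open-FD ceiling is derived from 
the process's RLIMIT_NOFILE and tuned in the CACHEINODE block of 
/etc/ganesha/ganesha.conf. A hedged sketch only (parameter names are from 
stock Ganesha and may differ by release; on CES this file is managed by 
Spectrum Scale, so check with IBM before hand-editing):

CACHEINODE {
    Entries_HWMark    = 1000000;  # max cached inode/file entries
    FD_HWMark_Percent = 90;       # begin aggressively closing FDs here
    FD_LWMark_Percent = 50;       # reap open FDs back down to this level
    FD_Limit_Percent  = 90;       # hard cap as a percentage of RLIMIT_NOFILE
}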

PS: kNFS does open, read/write, and then close, with no caching in older 
versions. A feature to cache open files for NFSv3 was added in more recent code.

Regards, The Spectrum Scale (GPFS) team

------------------------------------------------------------------------------------------------------------------
If you feel that your question can benefit other users of Spectrum Scale 
(GPFS), then please post it to the public IBM developerWorks Forum at 
https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.

If your query concerns a potential software error in Spectrum Scale (GPFS) and 
you have an IBM software maintenance contract, please contact 1-800-237-5511 in 
the United States or your local IBM Service Center in other countries.

The forum is informally monitored as time permits and should not be used for 
priority messages to the Spectrum Scale (GPFS) team.



From:        Bryan Banister <bbanis...@jumptrading.com>
To:          gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:        03/23/2017 08:27 AM
Subject:     Re: [gpfsug-discuss] CES node slow to respond
Sent by:     gpfsug-discuss-boun...@spectrumscale.org
________________________________



Anybody from IBM willing/able to give us some explanation of why Ganesha is 
holding so many files open?  Is this expected/needed/etc.?

Or do we have to open a PMR to get some kind of explanation?
-B

-----Original Message-----
From: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Matt Weil
Sent: Thursday, March 23, 2017 10:24 AM
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] CES node slow to respond

FYI all

We also ran into this after bumping maxFilesToCache.

Mar 22 13:02:37 ces1 ntpd[1191]: ./../lib/isc/unix/ifiter_ioctl.c:348: unexpected error:
Mar 22 13:02:37 ces1 ntpd[1191]: making interface scan socket: Too many open files in system

The fix:

sysctl -w fs.file-max=1000000
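Note that sysctl -w only changes the running kernel; a quick sketch of making 
it persistent and watching the system-wide handle count (plain RHEL, nothing 
CES-specific assumed):

# persist across reboots
echo 'fs.file-max = 1000000' > /etc/sysctl.d/90-filemax.conf
sysctl -p /etc/sysctl.d/90-filemax.conf

# allocated, free, and maximum file handles, system-wide
cat /proc/sys/fs/file-nr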


On 3/22/17 12:01 PM, Bryan Banister wrote:
> We had a similar issue and were also instructed by IBM Support to increase 
> maxFilesToCache to an insane value... Basically, when the file cache gets 
> full, the host will spend all of its cycles looking for a file to evict 
> every time a new file is opened...  baaaah.
>
> Not sure why Ganesha has to keep so many files open... I can't believe our 
> NFS clients actually keep that many open.  cNFS never needed this.
> -Bryan
>
> -----Original Message-----
> From: gpfsug-discuss-boun...@spectrumscale.org 
> [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Matt Weil
> Sent: Wednesday, March 22, 2017 11:43 AM
> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Subject: [gpfsug-discuss] CES node slow to respond
>
> All,
>
> We had an incident yesterday where one of our CES nodes slowed to a
> crawl.  GPFS waiters showed prefetch threads going after inodes, and
> iohist also showed lots of inode fetching.  Then we noticed that the CES
> host had 5.4 million files open.
>
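> A rough way to check counts like this (standard Linux interfaces; the 
> daemon name ganesha.nfsd is an assumption, adjust for your install):
>
> cat /proc/sys/fs/file-nr                    # allocated/free/max, system-wide
> ls /proc/$(pidof ganesha.nfsd)/fd | wc -l   # FDs held by Ganesha itself
>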
> The change I made was to set maxStatCache=DEFAULT, because this is Linux,
> and to set maxFilesToCache=10000000 (it was 500000).  Then I restarted GPFS.
>
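> Sketched as commands (assuming standard mmchconfig/mmlsconfig syntax and 
> the built-in cesNodes node class; verify against your release, since 
> maxFilesToCache only takes effect after GPFS restarts on the node):
>
> mmchconfig maxStatCache=DEFAULT,maxFilesToCache=10000000 -N cesNodes
> mmlsconfig maxFilesToCache maxStatCache        # confirm the new values
> mmshutdown -N cesNodes && mmstartup -N cesNodes
>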
> Is there something else we should change as well?
>
> Thanks
>
> Matt
>
>


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
