All,

This turned out to be processes copying data from GPFS to local /tmp.
Once system memory was full, they started blocking while the dirty data
was being flushed to disk.  That flush was taking long enough for the
leases to expire.
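To keep that flush from stalling writers long enough to matter, one mitigation we are looking at is capping dirty memory with the kernel writeback sysctls (a sketch only; the byte values are illustrative, not tuned for our nodes):

```shell
# Illustrative values; tune for your RAM size and disk speed (run as root).
# Start background writeback once 256 MB of dirty pages accumulate...
sysctl -w vm.dirty_background_bytes=$((256 * 1024 * 1024))
# ...and throttle writers at 1 GB dirty, instead of the default
# percentage-of-RAM thresholds, which can be enormous on big-memory nodes.
sysctl -w vm.dirty_bytes=$((1024 * 1024 * 1024))
```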

Matt


On 3/2/16 2:24 PM, Simon Thompson (Research Computing - IT Services) wrote:
> Vaguely related: we used to see the out-of-memory killer regularly go for 
> mmfsd, which would in turn kill user processes and pbs_mom, which ran from GPFS.
>
> We modified the GPFS init script to set the OOM score for mmfsd to help 
> prevent this. (We also modified it to wait for IB to come up; we need to 
> revisit that now, I guess, as there is systemd support in 4.2.0.1, so we should 
> be able to set a .wants there.)
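The init-script change is roughly this (a sketch; the mmfsd process name and the -1000 score are as we use them, and -1000 fully exempts a process from the OOM killer, so pick a less drastic negative value if you only want to deprioritize it):

```shell
# set_oom_score_adj PID SCORE — write SCORE into /proc/PID/oom_score_adj.
# SCORE ranges from -1000 (never OOM-kill) to 1000 (kill first);
# lowering a score requires root.
set_oom_score_adj() {
    echo "$2" > "/proc/$1/oom_score_adj"
}

# In the init script, after mmfsd is up:
#   set_oom_score_adj "$(pidof mmfsd)" -1000
```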
>
> Simon
> ________________________________________
> From: gpfsug-discuss-boun...@spectrumscale.org 
> [gpfsug-discuss-boun...@spectrumscale.org] on behalf of Bryan Banister 
> [bbanis...@jumptrading.com]
> Sent: 02 March 2016 20:17
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] cpu shielding
>
> I would agree with Vic that in most cases the issues are with the underlying 
> network communication.  We are using cgroups mainly to protect against 
> runaway processes that attempt to consume all memory on the system.
> -Bryan
>
> -----Original Message-----
> From: gpfsug-discuss-boun...@spectrumscale.org 
> [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of 
> viccorn...@gmail.com
> Sent: Wednesday, March 02, 2016 2:15 PM
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] cpu shielding
>
> Hi,
>
> How sure are you that it is cpu scheduling that is your problem?
>
> Are you using IB or Ethernet?
>
> I have seen problems that look like yours in the past with single-network 
> Ethernet setups.
>
> Regards,
>
> Vic
>
> Sent from my iPhone
>
>> On 2 Mar 2016, at 20:54, Matt Weil <mw...@genome.wustl.edu> wrote:
>>
>> Can you share anything more?
>> We are tying all system-related items to cpu0; GPFS is on cpu1, and the
>> rest are used for the LSF scheduler.  With that setup we still see
>> evictions.
>>
>> Thanks
>> Matt
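For the curious, the pinning is done roughly like this (a sketch; the cpu numbers are as described above, mmfsd is the GPFS daemon name, and taskset comes from util-linux):

```shell
# pin_to_cpus CPULIST PID... — restrict each PID to the given cpu list.
pin_to_cpus() {
    cpus=$1; shift
    for pid in "$@"; do
        taskset -pc "$cpus" "$pid"   # -p: change affinity of a running pid
    done
}

# e.g. pin mmfsd to cpu1:
#   pin_to_cpus 1 $(pidof mmfsd)
```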
>>
>>> On 3/2/16 1:49 PM, Bryan Banister wrote:
>>> We use cgroups to isolate user applications into a separate cgroup, which 
>>> provides some headroom of CPU and memory resources for the rest of the 
>>> system services, including GPFS and its required components such as SSH, etc.
>>> -B
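A minimal sketch of that kind of carve-out (cgroup v1, as on RHEL 6/7 at the time; the group name "users", the cpu range, and the memory limit are all illustrative):

```shell
# Confine user jobs to cpus 2-15 and cap their memory, leaving cpus 0-1
# and the remaining RAM for GPFS and other system services (run as root).
mkdir -p /sys/fs/cgroup/cpuset/users /sys/fs/cgroup/memory/users
echo 2-15 > /sys/fs/cgroup/cpuset/users/cpuset.cpus
echo 0    > /sys/fs/cgroup/cpuset/users/cpuset.mems   # allowed NUMA node(s)
echo $((200 * 1024**3)) > /sys/fs/cgroup/memory/users/memory.limit_in_bytes
# The batch system (or a PAM module) then places each job in the group:
#   echo "$JOBPID" > /sys/fs/cgroup/cpuset/users/tasks
```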
>>>
>>> -----Original Message-----
>>> From: gpfsug-discuss-boun...@spectrumscale.org
>>> [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Matt
>>> Weil
>>> Sent: Wednesday, March 02, 2016 1:47 PM
>>> To: gpfsug main discussion list
>>> Subject: [gpfsug-discuss] cpu shielding
>>>
>>> All,
>>>
>>> We are seeing issues on our GPFS clients where mmfsd is not able to respond 
>>> in time to renew its lease. Once that happens, the file system is unmounted. 
>>> We are experimenting with cgroups to tie mmfsd and other processes to 
>>> specified CPUs.  Any recommendations out there on how to shield GPFS from 
>>> other processes?
>>>
>>> Our system design has all PCI going through the first socket, and there 
>>> seems to be some contention there, as the RAID controller with the SSDs and 
>>> the NICs are on that same bus.
>>>
>>> Thanks
>>>
>>> Matt
>>>
>>>
>>> ____
>>> This email message is a private communication. The information transmitted, 
>>> including attachments, is intended only for the person or entity to which 
>>> it is addressed and may contain confidential, privileged, and/or 
>>> proprietary material. Any review, duplication, retransmission, 
>>> distribution, or other use of, or taking of any action in reliance upon, 
>>> this information by persons or entities other than the intended recipient 
>>> is unauthorized by the sender and is prohibited. If you have received this 
>>> message in error, please contact the sender immediately by return email and 
>>> delete the original message from all computer systems. Thank you.
>>> _______________________________________________
>>> gpfsug-discuss mailing list
>>> gpfsug-discuss at spectrumscale.org
>>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>>>
>>> ________________________________
>>>
>>> Note: This email is for the confidential use of the named addressee(s) only 
>>> and may contain proprietary, confidential or privileged information. If you 
>>> are not the intended recipient, you are hereby notified that any review, 
>>> dissemination or copying of this email is strictly prohibited, and to 
>>> please notify the sender immediately and destroy this email and any 
>>> attachments. Email transmission cannot be guaranteed to be secure or 
>>> error-free. The Company, therefore, does not make any guarantees as to the 
>>> completeness or accuracy of this email or any attachments. This email is 
>>> for informational purposes only and does not constitute a recommendation, 
>>> offer, request or solicitation of any kind to buy, sell, subscribe, redeem 
>>> or perform any type of transaction of a financial product.
>>


