mPct to ensure you have a reasonable and understood limit for
>how much memory mmfsd will use.
>
>Best,
>Chris
>
>On 12/1/20, 1:32 PM, "gpfsug-discuss-boun...@spectrumscale.org on behalf of
>Renata Maria Dart" <ren...@slac.stanford.edu> wrote:
>
Hi, some of our gpfs clients will get stale file handles for gpfs
mounts and it seems to be related to memory depletion. Even after the
memory is freed though, gpfs will continue to be unavailable and df will
hang. I have read about setting vm.min_free_kbytes as a possible fix
for this, but wasn't
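For background on what vm.min_free_kbytes controls: the Linux kernel derives its default value from the amount of low memory, roughly sqrt(lowmem_kbytes * 16), clamped to a fixed range, and raising it makes the kernel start reclaiming memory earlier. A minimal Python sketch of that heuristic follows (the function name and the 262144 upper clamp are my assumptions for illustration; check your kernel's documentation before changing the sysctl):

```python
import math

def default_min_free_kbytes(lowmem_kbytes: int) -> int:
    """Approximate the kernel's default vm.min_free_kbytes:
    sqrt(lowmem_kbytes * 16), clamped to a sane range."""
    val = int(math.sqrt(lowmem_kbytes * 16))
    return max(128, min(val, 262144))

# e.g. on a host with 16 GiB of low memory:
print(default_min_free_kbytes(16 * 1024 * 1024))  # 16384
```

On a memory-starved client the idea is to set the sysctl above this default so the kernel keeps a larger reserve free for critical allocations.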
Hi Frederick, ours is a physics research lab with a mix of new
experiments and ongoing research. While some users embrace and desire
the latest that tech has to offer and are actively writing code to
take advantage of it, we also have users running older code on data
from older experiments which
Thanks very much for your response Carl, this is the information I was looking
for.
Renata
On Thu, 20 Feb 2020, Carl Zetie - ca...@us.ibm.com wrote:
>To reiterate what's been said on this thread, and to reaffirm the official IBM
>position:
>
>
> * Scale 4.2 reaches EOS in September 2020,
Hi, I understand gpfs 4.2.3 is end of support this coming September. The
support page
https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#linux__rhelkerntable
indicates that gpfs version 5.0 will not run on rhel6 and is unsupported.
1. Is there extended support
>Please contact
>1-800-237-5511 in the United States or your local IBM Service Center in
>other countries.
>
>The forum is informally monitored as time permits and should not be used
>for priority messages to the Spectrum Scale (GPFS) team.
>
>
>
>From: gpfsug-discuss-boun...@spectrumscale.org
>[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Renata Maria Dart
>[ren...@slac.stanford.edu]
>Sent: 27 June 2018 19:09
>To: gpfsug-discuss@spectrumscale.org
>Subject: [gpfsug-discuss] gpfs client cluster, lost quorum, ccr issues
Hi, any gpfs commands fail with:
[root@ocio-gpu01 ~]# mmlsmgr
get file failed: Not enough CCR quorum nodes available (err 809)
gpfsClusterInit: Unexpected error from ccr fget mmsdrfs. Return code: 158
mmlsmgr: Command failed. Examine previous error messages to determine cause.
The two "working"
Hi, we have a client cluster of 4 nodes with 3 quorum nodes. One of the
quorum nodes is no longer in service and the other was reinstalled with
a newer OS, both without informing the gpfs admins. Gpfs is still
"working" on the two remaining nodes, that is, they continue to have access
to the