rsize and wsize are set to 1M; however, some current kernel levels
(RHEL 7) cut it down into 256K pieces. This is solved with 7.3 (I
think/hope).
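If you want to verify what a client actually negotiated, a quick check
(assuming a Linux NFS client; nothing here is GPFS-specific) is:

  # Show the rsize/wsize the kernel actually negotiated for each NFS mount
  nfsstat -m
  # or read the mount options straight from the kernel mount table:
  grep ' nfs' /proc/mounts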
Mit freundlichen Grüßen / Kind regards

Olaf Weiser
EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage
I'm sorry for your trouble, but those 4 steps you got from IBM support do
not seem correct. IBM support might not always realize that it's an ESS,
and not plain GPFS... If you take down an ESS I/O node without moving its RG
to the other node using "--servers othernode,thisnode", or by using
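For reference, moving the RG first would presumably be done with
mmchrecoverygroup; a sketch, where rgL and the node names are placeholders:

  # Make the other I/O node the primary server for the recovery group
  # before taking this one down (rgL, othernode, thisnode are examples)
  mmchrecoverygroup rgL --servers othernode,thisnode
  # ...do the maintenance, then reverse the order to move it back:
  mmchrecoverygroup rgL --servers thisnode,othernode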
We’re currently deploying LROC in many of our compute nodes; results so far
have been excellent. We’re putting in 240 GB SSDs, because we have mostly
small files. As far as I know, the number of inodes and directories in LROC
is not limited, except by the size of the cache disk.
As many as possible, and both. Look at these: the nodes have
maxFilesToCache 128000 and maxStatCache 4. Do these affect what sits on the
LROC as well? Are those too small? 1 million seemed excessive.
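For what it's worth, both of those are ordinary mmchconfig tunables, so
checking and changing them might look like this (a sketch; the node class
name and the new values are placeholder examples, not recommendations):

  # Show the current settings
  mmlsconfig maxFilesToCache maxStatCache
  # Raise them on the compute nodes only; maxFilesToCache changes
  # typically take effect after GPFS is restarted on those nodes
  mmchconfig maxFilesToCache=256000,maxStatCache=40000 -N computeNodeClass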
On 12/20/16 11:03 AM, Sven Oehme wrote:
> how many files do you want to cache?
> and do you only want to cache
All,
What is your favorite method for stopping a user process from eating up all
the system memory and saving 1 GB (or more) for the GPFS / system
processes? We have always kicked around the idea of cgroups but never
moved on it.
The problem: A user launches a job which uses all the memory on the node.
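One commonly mentioned approach (a sketch only, assuming a systemd-based
distro such as RHEL 7; the 63G figure is a placeholder for "node RAM minus
the GPFS/system reservation") is to cap all user sessions through the
systemd user.slice, which uses cgroups underneath:

  # Cap the aggregate memory of all user login sessions,
  # leaving headroom for GPFS and system processes
  systemctl set-property user.slice MemoryLimit=63G
  # on cgroup v2 / newer systemd the property is MemoryMax= instead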
Hi Brian,
If I’m not mistaken, once you run the mmlsdisk command on one client, any
other client running it will produce exactly the same output. Therefore,
what we do is run it once, output that to a file, and propagate that file
to any node that needs it.
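In script form that might look like the following (a sketch; the filesystem
name gpfs0 and the node list file are placeholders):

  # Run mmlsdisk once and cache the output...
  mmlsdisk gpfs0 > /var/tmp/mmlsdisk.gpfs0.out
  # ...then push the cached copy to the nodes that want it,
  # instead of having every node run the command itself
  for n in $(cat /var/tmp/gpfs-nodes.list); do
      scp /var/tmp/mmlsdisk.gpfs0.out ${n}:/var/tmp/
  done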
HTHAL…
Kevin
On Dec 20, 2016, at
All,
Does the mmlsdisk command generate a lot of admin traffic or take up a lot
of GPFS resources?
In our case, we have it in some of our monitoring routines that run on all
nodes. It is kind of nice info to have, but I am wondering if hitting the
filesystem with a bunch of mmlsdisk commands is