than directIO. If you need the database to grow all the time, I would avoid
using direct IO and use a larger GPFS pagepool to allow it to cache data.
Otherwise, using directIO is the better solution.
Jim Doherty
On Monday, June 7, 2021, 11:03:26 AM EDT, Wally Dietrich
wrote:
Hi. Is there
maintenance (mmfsck, mmrestripefs), make sure they have enough pagepool, as a
small pagepool could impact the performance of these operations.
Jim Doherty
On Monday, June 7, 2021, 08:55:49 AM EDT, Leonardo Sala
wrote:
Hello,
we have multiple bare-metal GPFS clusters with InfiniBand f
You will need to do this with chown() from the C library (you could do this
from Perl or Python). If you try to change this from a shell script, you will
be invoking the Linux chown command, which has a lot more overhead. I had a
customer attempt this using the shell and it ended up taking
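Something along these lines in Python is what I mean (just a sketch, untested;
the directory and uid/gid values are placeholders):

#!/usr/bin/env python3
# Sketch: change ownership of a directory tree with the C library chown()
# via os.chown(), instead of forking /bin/chown for every file.
# The path, uid, and gid below are placeholders.
import os

root = "/gpfs/fs1/project"   # hypothetical starting directory
uid, gid = 1234, 5678        # hypothetical new owner and group

os.chown(root, uid, gid)
for dirpath, dirnames, filenames in os.walk(root):
    for name in dirnames + filenames:
        # follow_symlinks=False behaves like lchown(), so symlink targets
        # outside the tree are left alone
        os.chown(os.path.join(dirpath, name), uid, gid, follow_symlinks=False)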
What is the minimum release level of the Spectrum Scale 5.0.4 cluster? Is it
4.2.3.X?
Jim Doherty
On Thursday, May 28, 2020, 6:31:21 PM EDT, Prasad Surampudi
wrote:
We have two Scale clusters, Cluster-A running Scale 4.2.3 on RHEL 6/7
and Cluster-B running Spectrum
application.
There have been some memory leaks fixed in Ganesha that will be in 4.2.3 PTF15,
which is available on Fix Central.
Jim Doherty
On Thursday, May 9, 2019, 1:25:03 PM EDT, Sven Oehme
wrote:
Unfortunately it's more complicated :)
The consumption here is an estimate based on 512-byte inodes
For any process with a large number of threads, the VMM size has become an
imaginary number ever since the glibc change to allocate a heap per thread.
I look at /proc/$pid/status to find the memory used by a process: RSS + swap +
kernel page tables.
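For example, a quick Python sketch of that calculation (assuming the VmRSS,
VmSwap, and VmPTE fields in /proc/<pid>/status, all reported in kB):

#!/usr/bin/env python3
# Sketch: sum the resident, swapped, and kernel page table memory of a
# process from /proc/<pid>/status (VmRSS, VmSwap, VmPTE, all in kB).
import sys

def proc_memory_kb(pid):
    fields = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, rest = line.partition(":")
            fields[key] = rest.strip()
    total_kb = 0
    for key in ("VmRSS", "VmSwap", "VmPTE"):
        if key in fields:
            total_kb += int(fields[key].split()[0])   # value looks like "1234 kB"
    return total_kb

if __name__ == "__main__":
    pid = sys.argv[1] if len(sys.argv) > 1 else "self"
    print(f"{pid}: {proc_memory_kb(pid)} kB")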
Jim
On Wednesday, March 6, 2019, 4:25:48
Are all of the slow IOs from the same NSD volumes?
You could run an mmtrace, take an internaldump, and open a ticket with the
Spectrum Scale queue. You may want to limit the run to just your NSD servers
rather than all nodes as in my example. Or one of the tools we use to
review a
At a guess, with no data: if the application is opening more files than
can fit in the maxFilesToCache (MFTC) objects, GPFS will expand the MFTC to
support the open files, but it will also scan to try to free any unused
objects. If you can identify the user job that is causing this y
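One rough way to spot a job holding lots of files open is to count open file
descriptors per process from /proc (just a sketch; reading other users'
/proc/<pid>/fd generally requires root):

#!/usr/bin/env python3
# Rough sketch: list the processes holding the most open file descriptors
# by counting entries under /proc/<pid>/fd. Reading other users' fd
# directories generally requires root.
import os

counts = []
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        nfds = len(os.listdir(f"/proc/{pid}/fd"))
        with open(f"/proc/{pid}/comm") as f:
            comm = f.read().strip()
        counts.append((nfds, pid, comm))
    except (PermissionError, FileNotFoundError):
        continue   # process exited or is not readable

for nfds, pid, comm in sorted(counts, reverse=True)[:20]:
    print(f"{nfds:8d}  pid {pid:>8}  {comm}")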
It could mean a shortage of nsd server threads or a congested network.
Jim
On Thursday, October 4, 2018, 3:55:10 PM EDT, Buterbaugh, Kevin L
wrote:
Hi All,
What does it mean if I have a few dozen very long I/Os (50-75 seconds) on a
gateway as reported by "mmdiag --iohist" an
The data is also shown in an internaldump as part of the mmfsadm dump tscomm
data; the RTO & RTT times are listed in microseconds. So the RTO here in my
example is 18.5 seconds (see below). You can get the same information from
the Linux networking command ss -i. The normal setting
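If it helps, here is a rough Python sketch that pulls the rto and rtt values
out of ss output (it assumes the iproute2 ss -tin format, where each socket
line is followed by an indented info line and the values are in milliseconds):

#!/usr/bin/env python3
# Rough sketch: run "ss -tin" and print the rto and rtt that the kernel
# reports for each established TCP connection. Assumes the iproute2 ss
# output format, where each socket line is followed by an indented info
# line containing "rto:<ms>" and "rtt:<avg>/<mdev>" in milliseconds.
import re
import subprocess

out = subprocess.run(["ss", "-tin"], capture_output=True, text=True).stdout
lines = out.splitlines()
for sock, info in zip(lines[1::2], lines[2::2]):
    cols = sock.split()
    rto = re.search(r"\brto:([\d.]+)", info)
    rtt = re.search(r"\brtt:([\d.]+)/[\d.]+", info)
    if rto and rtt and len(cols) >= 5:
        print(f"{cols[3]} -> {cols[4]}  rto={rto.group(1)} ms  rtt={rtt.group(1)} ms")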
You may want to get an mmtrace, but I suspect that the disk IOs are slow.
The iohist shows the time from when the IO was started until it finished.
Of course, if you have disk IOs taking 10x too long, then other IOs are going
to queue up behind them. If there are more IOs th
/dev/mapper/ devices in the `mmlsnsd -X` output, but
due to the nsddevices file configuration they are all configured with the
'generic' Devtype.
-Bryan
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Jim Doherty
Sent: Wednesday, January
Run `mmlsnsd -X`. I suspect you will see that GPFS is using one of the
/dev/sd* "generic" paths to the LUN, not the /dev/mapper/ path. In our case
the device is set up as dmm:
[root@service5 ~]# mmlsnsd -X
Disk name NSD volume ID Device Devtype Node name
2099520 bytes in use
17500049370 hard limit on memory usage
16778240 bytes committed to regions
1 number of regions
4 allocations
0 frees
0 allocation failures
On Mon, 2017-07-24 at 13:11 +, Jim Doherty wrote:
There are 3 places that the G
There are 3 places that the GPFS mmfsd uses memory: the pagepool plus 2 shared
memory segments. To see the memory utilization of the shared memory segments,
run the command `mmfsadm dump malloc`. The statistics for memory pool id 2
are where the maxFilesToCache/maxStatCache objects are and th
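As a rough illustration only, a short Python sketch that pulls the "bytes in
use" and "hard limit on memory usage" counters out of that dump (it assumes
the counters appear as "<number> <description>" lines like the excerpt above,
that mmfsadm is in /usr/lpp/mmfs/bin, and that it runs as root):

#!/usr/bin/env python3
# Rough sketch: run "mmfsadm dump malloc" and echo the "bytes in use" and
# "hard limit on memory usage" counters it prints. Assumes the counters
# appear as "<number> <description>" lines as in the excerpt above, that
# mmfsadm is in /usr/lpp/mmfs/bin, and that this runs as root.
import re
import subprocess

out = subprocess.run(["/usr/lpp/mmfs/bin/mmfsadm", "dump", "malloc"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    m = re.match(r"\s*(\d+)\s+(bytes in use|hard limit on memory usage)\b", line)
    if m:
        print(f"{m.group(2)}: {int(m.group(1)):,}")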