+1 from me.
Someone did a building block install for us and named a couple of I/O nodes with
an initial uppercase letter (unlike all the other Unix hostnames in our
environment, which are all lowercase). For a while it just bothered us, and we
complained occasionally, only to hear that it was not easy to change. Over two
We tune VM-related sysctl values on our GPFS clients.
These are the values we use for HPC nodes with 256GB+ of memory:
vm.min_free_kbytes = 2097152
vm.dirty_bytes = 3435973836
vm.dirty_background_bytes = 1717986918
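A minimal sketch of making these persistent, assuming a standard /etc/sysctl.d layout (the filename below is our own choice):

  # /etc/sysctl.d/90-gpfs-client.conf
  vm.min_free_kbytes = 2097152            # keep ~2GB free for atomic allocations
  vm.dirty_bytes = 3435973836             # ~3.2GB hard cap on dirty pagecache
  vm.dirty_background_bytes = 1717986918  # ~1.6GB background writeback threshold

  # apply without a reboot:
  sysctl --system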
The vm.dirty parameters are there to prevent NFS from buffering huge amounts of
writes and then pushing
Could you talk about upcoming work to address excessive prefetch when reading
small fractions of many large files?
Some bioinformatics workloads have a client node reading relatively small
regions of multiple 50GB+ files. We've seen this trigger excessive prefetch
bandwidth (especially on 16MB
We run SKLM for tape encryption for Spectrum Archive; there is no encryption in
the GPFS filesystem on disk pools.
We see no grep hits for “not trust” in our last few sklm_audit.log files.
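The check itself is just a grep over the audit logs; a sketch, run from wherever SKLM writes them (directory hypothetical):

  cd /path/to/sklm/audit/logs           # hypothetical location of the SKLM audit logs
  grep -l "not trust" sklm_audit.log*   # no output means no trust failures logged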
Best,
Chris
From: on behalf of "Wahl, Edward"
Reply-To: gpfsug main discussion list
Date: Tuesday, September 8,
As Alex mentioned, there are tools that will keep filesystem metadata in a
database and provide query tools.
NYGC uses Starfish and we’ve had a good experience with it. At first the only
feature we used was “sfdu”, which is a quick replacement for recursive du. Using
this we can script CSV reports
We’ve had good luck moving from older Mellanox 1710 Ethernet switches to newer
Arista Ethernet switches.
Our core is a pair of Arista 7508s, primarily with 100G cards.
Leaf switches are Arista 7280QR for racks with 40Gb-connected servers and
7280SR for racks with 10Gb-connected servers.
Uplinks
If you have two clusters that are hard to merge, but you are facing the need to
provide capacity for more writes, another option to consider would be to set up
a filesystem on the GL2 with an AFM relationship to the filesystem on the NetApp
GPFS cluster for accessing older data, and point clients to
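A rough sketch of that setup, assuming the GPFS-protocol AFM target and hypothetical filesystem and fileset names:

  # read-only AFM fileset on the new GL2 filesystem (newfs), caching from the
  # old filesystem (remote mount of oldfs via mmremotefs assumed):
  mmcrfileset newfs olddata --inode-space new \
      -p afmMode=ro -p afmTarget=gpfs:///oldfs/olddata
  mmlinkfileset newfs olddata -J /newfs/olddata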
On our recent ESS systems we do not see /etc/tuned/scale/tuned.conf (or
script.sh) owned by any package (rpm -qif …).
I’ve attached what we have on our ESS 5.3.3 systems.
Best,
Chris
From: on behalf of "Wahl, Edward"
Reply-To: gpfsug main discussion list
Date: Monday, September 16, 2019 at
> to restart the NSDs. Otherwise I think your plan is
> sound.
>
> Regards,
> Alex
>
>
> On Mon, Jun 17, 2019 at 9:24 AM Christopher Black
> wrote:
>
> > Our network team sometimes needs to take down sections of our network
Our network team sometimes needs to take down sections of our network for
maintenance. Our systems have dual paths through pairs of switches, but often
the maintenance will take down one of the two paths, leaving all our NSD
servers with half bandwidth.
Some of our systems are transmitting at a
We've done it both ways. You will get better performance and fewer challenges
ensuring processes and memory don't step on each other if the AFM gateway node
is not also doing NSD server work. However, using an NSD server that mounts two
filesystems (one via mmremotefs from another cluster) did
I was under the impression that AFM could not move data between filesystems in
the same cluster without going through NFS, but perhaps that is outdated. We’ve
only used it in the past to move data between clusters. Could someone with more
experience with AFM within a cluster comment? Our goal is to
the original purpose of
the thread.
Best,
Chris
On 3/29/19, 1:30 PM, "gpfsug-discuss-boun...@spectrumscale.org on behalf of
Matt Cowan" wrote:
On Fri, 29 Mar 2019, Christopher Black wrote:
...
> Main reasoning of the new cluster for us is to be able to make
I suggest option A.
We are facing a similar transition and are going with a new cluster, then
migrating existing data from the 4.x cluster to the 5.x cluster. An extra
wrinkle for us is that we are going to join some of the old hardware to the new
cluster once it is freed from serving current data.
Main
I don’t have a solution, just similar experience with mmputacl vs setfacl.
IMO, needing to dump and reapply full ACLs rather than just specifying what is
to be added is one of a few reasons mmputacl is inferior to setfacl. We do all
our extended ACL manipulation with setfacl from a GPFS native
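To illustrate the difference, a sketch with hypothetical paths and users:

  # setfacl: add a single entry in place
  setfacl -m u:alice:rwx /gpfs/fs1/projects/shared

  # mmputacl: dump the full ACL, edit it, then reapply the whole thing
  mmgetacl -o /tmp/shared.acl /gpfs/fs1/projects/shared
  # ... edit /tmp/shared.acl to add the entry ...
  mmputacl -i /tmp/shared.acl /gpfs/fs1/projects/shared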
Thanks for the quick and detailed reply! I had read the manual and was aware of
the warnings about -d (mentioned in my PS).
On systems with high churn (lots of temporary files, lots of big and small
deletes along with many new files), I’ve previously used estimates of snapshot
size as a useful
We have some large filesets (PB+) and filesystems where I would like to monitor
delete rates and estimate how much space we will get back as snapshots expire.
We only keep 3-4 daily snapshots on this filesystem due to churn.
I’ve tried to query the sizes of snapshots using the following command:
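For illustration, a query along these lines, assuming mmlssnapshot with the -d data-usage option (filesystem name hypothetical):

  mmlssnapshot fs1 -d   # -d reports storage used per snapshot; it is the option with the warnings discussed above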
We use realmd and some automation for sssd configs to get Linux hosts to have
local login and ssh tied to AD accounts; however, we do not apply these configs
on our protocol nodes.
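For reference, a minimal sketch of the realmd side, with a hypothetical domain and group:

  realm join --user=joinadmin ad.example.com   # enroll the host and write a baseline sssd config
  realm permit -g linux-users                  # restrict logins to one AD group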
From: on behalf of Christof Schmitt
Reply-To: gpfsug main discussion list
Date: Wednesday, January 9, 2019 at
Other tools and approaches that we've found helpful:
msrsync: handles parallelizing rsync within a directory tree and can greatly
speed up transfers on a single node with both filesystems mounted, especially
when dealing with many small files (see the sketch below)
Globus/GridFTP: set up one or more endpoints on each side,
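The msrsync sketch mentioned above (paths and process count hypothetical):

  msrsync -p 8 /gpfs/oldfs/project/ /gpfs/newfs/project/   # -p 8 runs eight parallel rsync processes over buckets of files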
I can confirm GPFS 5.0.1.1 works with CentOS 7.5 for us (kernel package version
3.10.0-862.el7.x86_64).
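A quick sketch of how we might sanity-check a kernel against a Scale release, assuming standard paths:

  uname -r                       # running kernel, e.g. 3.10.0-862.el7.x86_64
  /usr/lpp/mmfs/bin/mmbuildgpl   # rebuild the GPL portability layer against it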
Best,
Chris
From: on behalf of Felipe Knop
Reply-To: gpfsug main discussion list
Date: Friday, September 7, 2018 at 6:08 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss]