I frequently change quorum on the fly on both our 4.x and 5.0 clusters during
upgrades/maintenance.
Do you have sanity in the CCR to start with? (mmccr query, lsnodes, etc.)
Anything useful in the logs, or if you drop debug on it? ('export DEBUG=1' and
then re-run the command)
Ed Wahl
OSC
-Original Message-
Re: [gpfsug-discuss] Handling bad file names in policies?
Why not just configure a file placement policy using a nonexistent pool or a
bad encryption key to prevent files with non-printable characters from even
being created in the first place?
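A rough sketch of that placement-policy idea, entirely untested and with the pool name made up on purpose: the rule routes any file whose name contains something outside printable ASCII to a pool that doesn't exist, so the create fails up front. Whether REGEX and this character class behave this way at your Scale level needs checking against the policy-language docs.

```sql
/* Hypothetical placement policy: 'no_such_pool' deliberately does not
   exist, so creating a file with a non-printable character in its name
   fails immediately. */
RULE 'block_bad_names' SET POOL 'no_such_pool'
    WHERE NOT REGEX(NAME, '^[ -~]*$')
RULE 'default' SET POOL 'system'
```

Installed with mmchpolicy, this would reject the bad names at create time rather than cleaning them up after the fact.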
Alec
On Fri, Oct 8, 2021, 11:49 AM Wahl, Edward
<gpfsug-discuss-boun...@spectrumscale.org>
To: gpfsug-discuss@spectrumscale.org
CC:
Subject: [EXTERNAL] Re: [gpfsug-discuss] Handling bad file names in policies?
Date: Tue, 5 Oct 2021 01:29
Sent: Monday, October 4, 2021 7:29 PM
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] Handling bad file names in policies?
On 04/10/2021 23:23, Wahl, Edward wrote:
I know I've run into this before way back, but my notes on how I solved this
aren't getting the job done in Scale 5.0.5.8 and my notes are from 3.5. 😉
Anyone know a way to get a LIST policy to properly feed bad filenames into the
output or an external script?
When I say bad I mean things like c
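One approach, sketched and untested: hand every candidate to an external LIST script, and use the policy ESCAPE clause so special characters in pathnames arrive percent-encoded instead of breaking the line-oriented file lists. The script path is hypothetical; it would decode the names and filter for the bad ones.

```sql
/* Hypothetical external-list rule: ESCAPE '%' makes mmapplypolicy
   percent-encode special characters in emitted pathnames, so names
   containing newlines etc. survive the list format intact. */
RULE EXTERNAL LIST 'badnames'
    EXEC '/usr/local/bin/handle_badnames.sh' ESCAPE '%'
RULE 'all_files' LIST 'badnames'
```

The decoding burden then lands on the script rather than on the policy engine.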
Does 'rinv | grep -i serial' work on the x86?
Ed Wahl
OSC
From: gpfsug-discuss-boun...@spectrumscale.org
on behalf of Hannappel, Juergen
Sent: Thursday, September 2, 2021 11:31 AM
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Serial number of
>-E- Link: ib2s5/U1/P6<-->node152/U1/P1 - Unexpected actual link speed 10
This looks like a bad cable (or port). Try re-seating the cable on both
ends, or replace it, to get back to full link speed.
Re-run ibdiagnet to confirm, or use something like 'ibportstate' to check it.
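Something like this can pull the flagged links out of the ibdiagnet log in one go. The default log path is an assumption (recent versions write under /var/tmp/ibdiagnet2/); pass your own path as the argument.

```shell
# find_bad_links: list the links ibdiagnet flagged with an unexpected
# speed or width, so you know which cables to re-seat or replace.
find_bad_links() {
    grep -E 'Unexpected actual link (speed|width)' \
        "${1:-/var/tmp/ibdiagnet2/ibdiagnet2.log}" 2>/dev/null |
        sed -n 's/.*Link: \([^ ]*\).*/\1/p' | sort -u
}
```

Feeding the result back to 'ibportstate' then lets you confirm each port individually.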
Ed Wahl
OSC
Curious if this was ever fixed, or if someone has an APAR #? I'm still running
into it on 5.0.5.6.
Ed Wahl
OSC
-Original Message-
From: gpfsug-discuss-boun...@spectrumscale.org
On Behalf Of Stef Coene
Sent: Thursday, July 16, 2020 9:47 AM
To: gpfsug-discuss@spectrumscale.org
Subject:
restripefs/mmrestripefile commands.
Fred
__
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
sto...@us.ibm.com
----- Original message -----
From: "Wahl, Edward"
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list
Cc
Replying to a 3-year-old message I sent, hoping that in the last couple of
years Scale has added some ILM extensions to the policy engine that I have
missed, or somehow didn't notice?
Just ran into a file with an 'unbalanced' flag and I REALLY don't want to have
to mmlsattr everything. AG
Ran into something a good while back and I'm curious how many others this
affects. If folks with encryption enabled could run a quick word count on
their SKLM server and reply with a rough count, I'd appreciate it.
I've gone round and round with IBM SKLM support over the last year on this and
We also went with independent filesets, for both backup (and quota) reasons,
several years ago, and have stuck with this across to 5.x. However, we still
maintain a small number of dependent filesets for administrative use. Being
able to mmbackup on many filesets at once can increase your
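For illustration, a sketch of driving mmbackup over several independent filesets concurrently. Everything site-specific here is an assumption (fileset names, filesystem name gpfs0, concurrency of 3), and it echoes the commands by default; set RUN to empty to actually execute.

```shell
# Sketch: back up several independent filesets in parallel by pointing
# mmbackup at each fileset junction with --scope inodespace.
# RUN=echo previews the commands; RUN= (empty) would run them for real.
RUN=${RUN:-echo}
printf '%s\n' projects scratch home |
    xargs -I{} -P 3 sh -c \
        "$RUN mmbackup /gpfs0/{} --scope inodespace -t incremental"
```

Tuning -P against your TSM server's session limits is the main knob.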
I saw something EXACTLY like this way back in the 3.x days, when I had a
backend storage unit with a flaky main-memory issue and some enclosures
constantly flapping between controllers for ownership. Some NSDs were
affected, some were not. I can imagine this could still happen in 4.x a
Disregarding all the other reasons not to run it on the NSDs, many years of
rsync on GPFS have shown us it is ALWAYS faster from clients with reasonable
networks and no other overhead.
Ed
-Original Message-
From: gpfsug-discuss-boun...@spectrumscale.org
On Behalf Of Giovanni Bracco
Se
Interesting. We just deployed an ESS here and are running into a very similar
problem with the GUI refresh, it appears. Takes my ppc64le's about 45 seconds
to run rinv when they are idle.
I had just opened a support case on this last evening. We're on ESS 5.3.4 as
well. I will wait to see w
What package provides this /usr/lib/tuned/ file?
Ed
From: gpfsug-discuss-boun...@spectrumscale.org
on behalf of Olaf Weiser
Sent: Monday, September 16, 2019 3:12 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Ganesha all IPv6 sockets - ist
I recall looking at this a year or two back. Ganesha is either v4 and v6 both
(i.e. the encapsulation you see), OR IPv4 ONLY (i.e. /etc/modprobe.d/ipv6.conf
with disable=1)
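For the record, the IPv4-only variant is the stock module option on RHEL-family boxes; verify against your distro's docs, and note a reboot (or module reload) is needed for it to take effect:

```
# /etc/modprobe.d/ipv6.conf
# Load the ipv6 module in disabled state; Ganesha then binds IPv4-only.
options ipv6 disable=1
```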
Ed
From: gpfsug-discuss-boun...@spectrumscale.org
on behalf of Billich Heinrich Rainer
I'm assuming that was a run in the foreground and not using QoS?
Our timings sound roughly similar for a foreground run under 4.2.3.x: 1 hour
and ~2 hours for 100 million and 300 million files, respectively. Also, I'm
assuming actual file counts, not inode counts!
Background is, of course, all over the place
We use NHC here (Node Health Check) from LBNL and our SS clients are almost all
using NFS root. We have a check where we look for access to a couple of
dotfiles (we have multiple SS file systems) and will mark a node offline if the
checks fail.
Many things can contribute to the failure of a si
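The check itself is nothing fancy. A stripped-down sketch of the idea (canary paths and the 5-second timeout are assumptions; the real NHC check also handles marking the node offline):

```shell
# check_ss_mounts: verify a canary dotfile is readable on each Scale
# file system, with a timeout so a hung mount can't wedge the checker.
check_ss_mounts() {
    for canary in "$@"; do
        timeout 5 test -r "$canary" || { echo "FAIL $canary"; return 1; }
    done
    echo OK
}
```

NHC would call this per node and take the node out of the scheduler on a FAIL.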
This is rather dependent on the SS version.
What used to happen before 4.2.2.* is that a client would be unable to mount
the filesystem in question and would give an error in mmfs.log.latest for
an SGPanic. In 4.2.2.*, it appears it will now mount the file system and then
give errors on fi
Hey Jason, if you want to get me lsscsi output, I can probably whip up a
multipath.conf block for your customer, or talk to them on the phone if you
like.
Ed
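In the meantime, a hypothetical starting point for an EMC VNX/CLARiiON-class array (vendor string "DGC"); every value here should be checked against the array's configured failover mode (ALUA assumed) before use:

```
devices {
    device {
        vendor                 "DGC"
        product                ".*"
        product_blacklist      "LUNZ"
        path_grouping_policy   group_by_prio
        path_checker           emc_clariion
        hardware_handler       "1 alua"
        prio                   alua
        failback               immediate
        no_path_retry          60
    }
}
```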
- Reply message -
From: "Jason Bennett"
To: "gpfsug main discussion list"
Subject: [gpfsug-discuss] multipath.conf for EMC V-
This is a great idea. However, there are quite a few other things to consider:
- Max file count? If you need, say, a couple of billion files, this will
affect things.
- Wish to store small files in the system pool in late-model SS/GPFS?
- Encryption? No data will be stored in the system pool so la
Along the same vein, I've patched rsync to maintain source atimes in Linux for
large transitions such as this. Along with the standard "patches" mod for
destination atimes, it is quite useful. Works in 3.0.8 and 3.0.9; I've not
yet ported it to 3.1.x
https://www.osc.edu/sites/osc.edu/files/sta
First off, let me recommend vsftpd. We've used that in a few single
point-to-point cases with excellent results.
Next, I'm going to agree with Johnathan here: any hacker that gains a
foothold on an FTP server will probably not have the knowledge to take
advantage of the IB, however there