While the apostrophe is evil, it's not the problem:
[root@it-gti-02 test1]# mkdir "it/stu'pid name"
[root@it-gti-02 test1]# mmchattr -i yes it/stu\'pid\ name
[root@it-gti-02 test1]# mmchattr -i no it/stu\'pid\ name
> From: "Paul Ward"
> To: "gpfsug main discussion list"
> Sent: Wednesday, 23
Hi,
I use a Python script via a cron job: it checks how many snapshots exist,
removes those that exceed a configurable limit, then creates a new one.
Deployed via Puppet it's much less hassle than clicking around in a GUI.
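The rotation logic can be sketched like this (a minimal sketch; the function name and the (name, creation_time) pair format are my own, not the actual script):

```python
def snapshots_to_delete(snapshots, limit):
    """Given (name, creation_time) pairs, return the names to delete so
    that at most limit - 1 snapshots remain before a new one is taken."""
    by_age = sorted(snapshots, key=lambda s: s[1])  # oldest first
    excess = len(by_age) - (limit - 1)
    return [name for name, _ in by_age[:max(excess, 0)]]
```

In the real script the returned names would presumably be fed to mmdelsnapshot before mmcrsnapshot creates the new snapshot.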
> From: "Kidger, Daniel"
> To: "gpfsug main discussion list"
> Sent:
Hi,
I noticed that when I read directory entries with the usual readdir()
function, for FIFOs I get 0 in the d_type field, i.e. DT_UNKNOWN, while if
I try that on a different file system, e.g. ext4, I get the expected DT_FIFO.
Is this a bug or an expected feature?
--
Dr. Jürgen Hannappel
Hi,
just got notified that 5.1.2.2 is out.
What are the changes relative to 5.1.2.1?
https://www.ibm.com/docs/en/spectrum-scale/5.1.2?topic=summary-changes
does not specify that.
--
Dr. Jürgen Hannappel DESY/IT Tel.: +49 40 8998-4616
Hi,
I just noticed that today a new ESS release (6.1.2.1) appeared on Fix Central.
What I can't find is a list of changes relative to 6.1.2.0, and anyway
finding the change list is always a PITA.
Does anyone know what changed?
--
Dr. Jürgen Hannappel DESY/IT Tel.: +49 40 8998-4616
Hi,
on an ESS node with a POWER CPU I can get the serial number from
/proc/device-tree/system-id
which is very useful sometimes.
On nodes with x86 architecture (Lenovo GSS or IBM ESS 3xxx) there
is no such pseudo-file. Is there a simple way to get at the serial number?
--
Dr. Jürgen Hannappel
Hi,
mmrepquota, without the --block-size parameter, reports the size in units of
1 KiB, so (if no ill-advised copy-paste editing confuses us) we are not talking
about 400 GiB but 400 KiB.
With just 863 files (from the inode part of the repquota output) and therefore
about 0.5 KiB/file on average that
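For the record, the arithmetic (assuming the default 1 KiB block units):

```python
def blocks_to_bytes(blocks, block_size_kib=1):
    """mmrepquota without --block-size reports usage in 1 KiB units."""
    return blocks * block_size_kib * 1024

usage = blocks_to_bytes(400)       # 409600 bytes, i.e. 400 KiB -- not 400 GiB
per_file_kib = 400 / 863           # ~0.46 KiB per file on average
```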
Hi,
when unlinking filesets, that sometimes fails because some files on the
fileset are still open.
Is there a way to find which files are open, and from which node?
Without running an mmdsh -N all lsof on several (big) remote clusters, that
is.
--
Dr. Jürgen Hannappel DESY/IT Tel.:
Hi,
in a program, after reading a file, I did a gpfs_fcntl() with
GPFS_CLEAR_FILE_CACHE to get rid of the now unused pages in the file cache.
That works fine, but if the file system is read-only (in a remote cluster) this
fails with a message that the file system is read only.
Is that expected?
Hi,
I have a CES node exporting some filesystems via SMB and Ganesha in a standard
CES setup.
Now I want to mount an NFS share from a different, non-CES server on this CES
node.
This did not work:
mount -o -fstype=nfs4,minorversion=1,rw,rsize=65536,wsize=65536
some.other.server:/some/path /mnt/
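That option string looks like autofs map syntax (-fstype=...) rather than mount(8) syntax; the direct equivalent would presumably be (untested, same placeholder server and path):

```shell
mount -t nfs4 -o minorversion=1,rw,rsize=65536,wsize=65536 \
    some.other.server:/some/path /mnt/
```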
Hello,
in the bulletin https://www.ibm.com/support/pages/node/6323241 it's mentioned
"IBM Spectrum Scale, shipped with Openstack keystone, is exposed to
vulnerabilities as detailed below."
I am not aware of any OpenStack components in our standard Scale deployments,
so how am I to read this
... just for the record:
man mmchnode | grep force | wc -l
0
In the man page the --force option is not mentioned at all.
The same is true for mmdelnode:
man mmdelnode | grep force | wc -l
0
But there the error output gives a hint that it's there:
mmdelnode: If the affected nodes are
Thanks!
That helped. With the --force I could change roles, expel the node, and have
the "cluster" now up on the remaining node.
> From: "Jan-Frode Myklebust"
> To: "gpfsug main discussion list"
> Sent: Tuesday, 18 August, 2020 15:45:33
> Subject: Re: [gpfsug-discuss] Tiny cluster quorum
Hi,
on a tiny GPFS cluster with just two nodes one node died (really dead, cannot
be switched on any more), and now I cannot remove it from the cluster anymore.
[root@exflonc42 ~]# mmdelnode -N exflonc41
mmdelnode: Unable to obtain the GPFS configuration file lock.
mmdelnode: GPFS was unable to
Hi,
an example in Python 2.7 (for Python 3 you need to add e.g.
errors='ignore'
to the parameters of the Popen call to get the proper kind of
text stream as output).
import subprocess
import csv
# completing the truncated call: 'somefs' is a placeholder file system
# name; -Y asks for machine-readable, colon-delimited output
mmlsfileset = subprocess.Popen(['/usr/lpp/mmfs/bin/mmlsfileset',
                                'somefs', '-Y'],
                               stdout=subprocess.PIPE)
for row in csv.reader(mmlsfileset.stdout, delimiter=':'):
    print(row)
Hi,
I tried to do gpfs_fcntl() with the GPFS_ACCESS_RANGE hint,
with the isWrite field set to 0 and start = 0, length = size of the file,
to indicate that I want to read the entire file now,
but on a read-only remote file system the gpfs_fcntl() call returns -1
and sets errno to "Read-only file
Hi,
a gpfs.mount target should be automatically created at boot by the
systemd-fstab-generator from the fstab entry, so there's no need for hackery
like ismountet.txt...
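For reference, a GPFS fstab line of the kind the generator consumes looks roughly like this (device name and mount point are placeholders; mmfsd normally maintains the entry itself):

```
gpfs01  /gpfs/gpfs01  gpfs  rw,mtime,atime,dev=gpfs01,noauto  0 0
```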
- Original Message -
> From: "Jonathan Buzzard"
> To: gpfsug-discuss@spectrumscale.org
> Sent: Tuesday, 28 April, 2020