dite in
aerogel/Nick Foster's sample
It’s the “Nick Foster's sample” folder I want to delete, but it says it is
immutable and I can’t disable that.
I suspect it’s the apostrophe confusing things.
Kindest regards,
Paul
Paul Ward
TS Infrastructure Architect
Natural History Museum
T: 02079426450
E: p.w...@nhm.ac.uk
From: gpfsug-discuss-boun...@spectrumscale.org
On Behalf Of IBM Spectrum Scale
Sent: 21 February 2022 16:12
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] immutable folder
Hi Paul,
Have you tried mmunlinkfileset first?
Regards, The Spectrum Scale (GPFS) team
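For illustration only: if the folder is actually a fileset junction, the unlink-then-delete sequence might look like the sketch below. The device name gpfs01 and the fileset name are assumptions; double quotes keep the embedded apostrophe from confusing the shell.
  /usr/lpp/mmfs/bin/mmunlinkfileset gpfs01 "Nick Foster's sample"   # detach the junction from the namespace
  /usr/lpp/mmfs/bin/mmdelfileset gpfs01 "Nick Foster's sample" -f   # then remove the fileset and its contents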
From: gpfsug-discuss-boun...@spectrumscale.org
On Behalf Of IBM Spectrum Scale
Sent: 19 January 2022 15:09
To: gpfsug main discussion list
Cc: gpfsug-discuss-boun...@spectrumscale.org
Subject: Re: [gpfsug-discuss] mmbackup file selections
This is to set environment
Should that be run on all backup nodes, or all nodes?
Kindest regards,
Paul
Paul Ward
TS Infrastructure Architect
Natural History Museum
T: 02079426450
E: p.w...@nhm.ac.uk
From: gpfsug-discuss-boun...@spectrumscale.org
On Behal
Hi Paul,
If you run mmbackup with "DEBUGmmbackup=2", it keeps all working files even
after successful backup. They are available at MMBACKUP_RECORD_ROOT
(default is FSroot or FilesetRoot directory).
In the .mmbackupCfg directory, there are 3 directories:
updatedFiles : contains a file that
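For illustration, a run that preserves those working files might look like the following sketch (the file system path /gpfs/fs1 and the TSM server name TSM1 are placeholders):
  DEBUGmmbackup=2 /usr/lpp/mmfs/bin/mmbackup /gpfs/fs1 --tsm-servers TSM1 -t incremental   # keep .mmbackupCfg after success
  ls /gpfs/fs1/.mmbackupCfg/updatedFiles/                                                  # inspect the candidate lists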
This is expected. GPFS readdir only supports these d_type values: DT_REG, DT_DIR, and DT_LNK. All other types will be returned as DT_UNKNOWN.
Regards, The Spectrum Scale (GPFS) team
Hi Steve,
Can you please look into the below query from Hannappel.
Regards, The Spectrum Scale (GPFS) team
Mark,
GPFS does not support renaming an existing snapshot.
Regards, The Spectrum Scale (GPFS) team
Forwarding for Christof. Please see below.
Regards, The Spectrum Scale (GPFS) team
Jonathan,
CVE-2021-33909 and kernel 3.10.0-1160.36.2.el7.x86_64 were published on July 20, 2021.
GPFS has not been tested on this RHEL kernel yet per our FAQ
https://www.ibm.com/docs/en/spectrum-scale/5.1.1?topic=spectrum-scale-faq.
For both IBM Spectrum Scale 5.1.1.2 and IBM Spectrum Scale 5.0.5.8
I refresh task(s) failed: WATCHFOLDER"
>
> It also says
> "Failure reason: Command mmwatch all functional
--list-clustered-status
> failed"
>
> Running mmwatch manually gives:
> mmwatch: The Clustered Watch Folder function is only available in the
My suggestion for this question is that it should be directed to your IBM
sales team and not the Spectrum Scale support team. My reading of the
information you provided is that your processor counts as 2 cores. As for
the PVU value my guess is that at a minimum it is 50 but again that should
Hi Wally,
I don't see a dedicated document for DB2 in the Scale documentation set; however, database workloads usually do direct I/O, so the sections of the Scale documentation on direct I/O should be good to review.
Here I have a list of tunings for direct I/O for your reference.
http
Hi Billich,
>Or maybe illplaced files use larger inodes? Looks like for each used inode
we increased by about 4k: 400M inodes, 1.6T increase in size
Basically, a migration policy run with -I defer simply marks the files as illPlaced, which does not cause metadata extension for such files
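As an illustrative sketch (device gpfs01 and policy file migrate.pol are placeholders), the deferred run only marks files and a later restripe moves the data:
  /usr/lpp/mmfs/bin/mmapplypolicy gpfs01 -P migrate.pol -I defer   # mark files illPlaced, move nothing yet
  /usr/lpp/mmfs/bin/mmrestripefs gpfs01 -p                         # later: migrate ill-placed data to the correct pool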
Hi,
The data and metadata replications are 2 on both source and destination
filesystems, so from:
$ mmrepquota -j srcfilesys | grep fileset
srcfileset FILESET 800 800 800 0 none | 863 0 0 0 none
$ mmrepquota -j dstfilesys | grep files
Hi Ratan,
Can you please look into this GUI issue.
Regards, The Spectrum Scale (GPFS) team
Prasad,
This is unexpected. Please open a PMR so that data can be collected and
looked at.
Regards, The Spectrum Scale (GPFS) team
Bob, could you please provide the version of ESS/Scale you have installed?
Also, could you please provide information about the exact GUI screen you
are using that is not providing the data?
Regards, The Spectrum Scale (GPFS) team
Hi Billich,
I think the problem is that you are specifying --choice-algorithm fast and
as per documentation "The fast choice method does not completely sort the
candidates by weight."
To sort the list you can try specifying --choice-algorithm exact which is
also the default.
Regards, The Spectrum Scale (GPFS) team
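For example (device and policy file names are placeholders):
  /usr/lpp/mmfs/bin/mmapplypolicy gpfs01 -P list.pol --choice-algorithm exact   # candidates fully sorted by weight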
It seems like a defect. Could you please open a help case and if possible
provide a sample program and the steps you took to create the problem?
Also, please provide the version of Scale you are using where you see this
behavior. This should result in a defect being opened against GPFS which
Hi Brian,
Can you please answer the below S3 API related query. Or would you know who
would be the right person to forward this to.
Regards, The Spectrum Scale (GPFS) team
Hi Diane,
Can you help Simon with the query below? Or would you know who would be the best person to contact here?
Regards, The Spectrum Scale (GPFS) team
Hi Eric,
Please help me to understand your question. You have Spectrum Archive and
Spectrum Scale in your system, and both of them are connected to IBM SKLM
for encryption. Now you are getting lots of error/warning messages in the SKLM log, and you want to understand which component, Scale or Archive, makes
I think a better metaphor is that the bridge we just crossed has collapsed
and as long as we do not need to cross it again our journey should reach
its intended destination :-) As I understand the intent of this message
is to alert the user (and our support teams) that the directory from which
This has been fixed in Spectrum Scale 4.2.3.20, 5.0.4.2, and 5.0.5.0.
Regards, The Spectrum Scale (GPFS) team
Hi Jan-Frode,
Do you have a specific question on this, or was this sent just to inform others?
Regards, The Spectrum Scale (GPFS) team
I now better understand the functionality you were aiming to achieve. You
want anything in systemd that is dependent on GPFS file systems being
mounted to block until they are mounted. Currently we do not offer any
such feature though as Carl Zetie noted there is an RFE for such
functionality
Option 'suspend' is the same as 'empty' if the cluster is updated to Scale 4.1.1. The 'empty' option was introduced in 4.1.1 to support disk deletion in a faster way; the 'suspend' option was not removed, out of consideration for previous users.
> And really what I currently want to do is suspend a set of
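A minimal sketch of suspending a set of disks (device and NSD names are placeholders):
  /usr/lpp/mmfs/bin/mmchdisk gpfs01 suspend -d "nsd10;nsd11;nsd12"   # no new data is allocated on these disks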
With regard to your question:
"The question is: could we safely use the -o ro option for all
clients
even if this option is not cited in the official (v. 4 release 2.0)
documentation?"
The answer is yes, because the '-o ro' option comes from the OS
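For example (device and node names are placeholders):
  /usr/lpp/mmfs/bin/mmmount gpfs01 -o ro -N client01,client02   # read-only mount on the listed client nodes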
Hi Yeep,
"Hello" and thanks for reaching out to the team. We will keep an eye
out for any future specific questions as you evaluate things further.
Regards, The Spectrum Scale (GPFS) team
The third option is to specify the flavor of regex desired.
Right now, if specified, it must be one of these: 'x', 'b', 'f', 'ix', 'ib'
'x' extended regular expressions - the default - as implemented
by regcomp and regexec library functions
wi
Hi Jaime,
When I copy & paste your command to try, this is what I got.
/usr/lpp/mmfs/bin/mmbackup /gpfs/fs1/home -N tapenode3-ib --tsm-servers TAPENODE3,TAPENODE4 -s /dev/shm --tsm-errorlog $tmpDir/home-tsm-errorlog --scope inodespace -v -a 8 -L 2
Regards, The Spectrum Scale (GPFS) team
> prohibit installing two versions/releases for the same (non-kernel)
> package name. But that’s not the case for everyone.)
>
> -Paul
>
> From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of IBM Spectrum Scale
> Sent: Wednesday, January 1
>> I don't see any yum options which match rpm's '--force' option.
Actually, you do not need to use the --force option since efix RPMs have an incremental efix number in the rpm name.
The efix package provides update RPMs to be installed on top of the corresponding PTF GA version. When you install 5.0.4.1 efix9, i
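For example (the package file name below is hypothetical; the efix carries a higher release number than the GA PTF, so yum treats it as an ordinary update):
  yum install ./gpfs.base-5.0.4-1.9.x86_64.rpm   # efix9 on top of 5.0.4-1, no --force needed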
Under the FS mount directory (or the $MMBACKUP_RECORD_ROOT directory if you set it), mmbackup creates the following file, which contains all backup candidate files.
.mmbackupCfg/updatedFiles/.list*
By default, mmbackup deletes the file upon successful backup completion but keeps all temporary files until the next mmback
s properly decompressed.
Kind regards
Dr. Alexander Wolf-Reber
IBM Spectrum Scale Release Lead Architect
Hi Diane,
Can you please help customer with the below issue. Or else can you point me
to the right folks who can help here.
Regards, The Spectrum Scale (GPFS) team
run mmfsck or try to delete another fileset to see if that could trigger
cleanup. Thanks.
Regards, The Spectrum Scale (GPFS) team
You can try if restarting GPFS daemon would help. Thanks.
Regards, The Spectrum Scale (GPFS) team
As far as I know there is no way to change nsd names once they have been
created, and yes mmvdisk automatically generates the nsd names.
Regards, The Spectrum Scale (GPFS) team
Right for the example from Ryan(and according to the thread name, you know
that it is writing to a file or directory), but for other cases, it may
take more steps to figure out what access to which file is causing the
long waiters(i.e., when mmap is being used on some nodes, or token revoke
pen
The short answer is there is no easy way to determine what file/directory
a waiter may be related. Generally, it is not necessary to know the
file/directory since a properly sized/configured cluster should not have
long waiters occurring, unless there is some type of failure in the
cluster. I
Kristy, there is no equivalent to the -e option in the quota API. If your
application receives negative quota values it is suggested that you use
the mmlsquota command with the -e option to obtain the most recent quota
usage information, or run the mmcheckquota command. Using either the -e
opt
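For example (device and fileset names are placeholders):
  /usr/lpp/mmfs/bin/mmlsquota -j fileset1 -e gpfs01   # -e collects the most recent usage from all nodes
  /usr/lpp/mmfs/bin/mmcheckquota gpfs01               # or rebuild the quota usage information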
Damir, Joseph,
> Is this something to pay attention to, and what does this waiter mean?
This waiter means GPFS fails to reconnect broken verbs connection, which
can cause performance degradation.
> I have seen these on our cluster after the IB network goes down (GPFS
still runs over ethernet) a
case if it is useful. One customer workload crashed
every time, though it took almost a full day to get to that point so you
can imagine the time wasted.
> On Aug 21, 2019, at 1:20 PM, IBM Spectrum Scale
wrote:
>
> To my knowledge there has been no notification sent regarding this
| Office of Advanced Research Computing - MSB
C630, Newark
As was noted this problem is fixed in the Spectrum Scale 5.0.3 release
stream. Regarding the version number format of 5.0.2.0/1 I assume that it
is meant to convey version 5.0.2 efix 1.
Regards, The Spectrum Scale (GPFS) team
Since Spectrum Scale 5.0.3.3 has not yet been released I think the
reference to it in the notice was incorrect. It should have referred to
version 5.0.3.2 as it does in other statements. Thanks for noting the
discrepancy. I will alert the appropriate folks so this can be fixed.
Regards, The Spectrum Scale (GPFS) team
Bob, like most questions of this type I think the answer depends on a
number of variables. Generally we do not recommend running the
mmcheckquota command during the peak usage of your Spectrum Scale system.
As I think you know the command will increase the IO to the NSDs that hold
metadata and
From my understanding, your interpretation is correct.
Regards, The Spectrum Scale (GPFS) team
land
Telephone: +41 56 310 46 67
E-Mail: marc.cau...@psi.ch
From: gpfsug-discuss-boun...@spectrumscale.org
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of IBM Spectrum Scale
[sc...@us.ibm.com]
Sent: Thursday, April 18, 2019 5:54 PM
To: gpfsug main discussion list
Cc: gpfsug-discuss-boun
We can try to provide some guidance on what you are seeing but generally
to do true analysis of performance issues customers should contact IBM lab
based services (LBS). We need some additional information to understand
what is happening.
On which node did you collect the waiters and what comm
I think it would be wise to first set the failure group on the existing
NSDs to a valid value and not use -1. I would also suggest you not use
consecutive numbers like 1 and 2 but something with some distance between
them, for example 10 and 20, or 100 and 200.
Regards, The Spectrum Scale (GPFS) team
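A sketch using a stanza file (device, NSD names, and failure group numbers are placeholders):
  echo "%nsd: nsd=nsd_a failureGroup=10" >  /tmp/fg.stanza
  echo "%nsd: nsd=nsd_b failureGroup=20" >> /tmp/fg.stanza
  /usr/lpp/mmfs/bin/mmchdisk gpfs01 change -F /tmp/fg.stanza   # assign valid failure groups to the existing NSDs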
This is a known issue. The workaround is to use --force-nsd-mismatch
option. Just make sure that the failure group is different from those used
by the vdisk NSDs
Regards, The Spectrum Scale (GPFS) team
David,
Thanks for reporting this download issue. The mail is forwarded
Regards, The Spectrum Scale (GPFS) team
Hi Kevin,
The I/O history shown by the command mmdiag --iohist actually depends on the node from which you run the command.
If you run it on an NSD server node, then it will show the time taken to complete/serve the read or write I/O operation sent from the client node.
And if
@Frank, Can you please help with the below query.
Regards, The Spectrum Scale (GPFS) team
Hello Henrik,
What you are seeing has to do with whether UAC (User Access Control) is
enabled/disabled on Windows.
On Windows 7 and 2012R2 etc, my guess is that you have disabled UAC (since
that is what GPFS required in the past). When UAC is disabled, the default
owner of a local file/dir cre
Alvise,
Could you send us the output of the following commands from both server
nodes.
mmfsadm dump nspdclient > /tmp/dump_nspdclient.
mmfsadm dump pdisk > /tmp/dump_pdisk.
Regards, The Spectrum Scale (GPFS) team
um nodes. We moved our Windows nodes into the storage cluster which
was entirely Linux and that solved it. If this is not an option, perhaps
adding some Linux nodes to your remote cluster as quorum nodes would help.
-Roger
Hello Renar,
A few things to try:
Make sure IPv6 is disabled. On each Windows node, run "mmcmi host <hostname>", with <hostname> being itself and each and every node in the cluster. Make sure mmcmi prints a valid IPv4 address.
To eliminate DNS issues, try adding IPv4 entries for each cluster node in
"c:\windows\sy
Hello,
Unfortunately, to allow bidirectional passwordless ssh between
Linux/Windows (for sole purpose of mm* commands), the literal username
'root' is a requirement. Here are a few variations.
1. Use domain account 'root', where 'root' belongs to "Domain Admins"
group. This is the easiest 1-s
Hi Mathias,
Can you help with below query.
Regards, The Spectrum Scale (GPFS) team
This means that the files having the below inode numbers 38422 and 281057
are orphan files (i.e. files not referenced by any directory/folder) and
they will be moved to the lost+found folder of the fileset owning these
files by mmfsck repair.
Regards, The Spectrum Scale (GPFS) team
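An illustrative offline repair (device name is a placeholder; a full repair requires the file system to be unmounted):
  /usr/lpp/mmfs/bin/mmumount gpfs01 -a   # unmount on all nodes first
  /usr/lpp/mmfs/bin/mmfsck gpfs01 -y     # repair; orphaned inodes are reconnected under lost+found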
Note, RHEL 7.6 is not yet a supported platform for Spectrum Scale so you
may want to use RHEL 7.5 or wait for RHEL 7.6 to be supported.
Using "generic" for the device type should be the proper option here.
Regards, The Spectrum Scale (GPFS) team
AFAIK GSS/DSS are handled by Lenovo not IBM so you would need to contact
them for release plans. I do not know which version of GPFS was included
in GSS 3.3a but I can tell you that GPFS 3.5 is out of service and GPFS
4.1.x will be end of service in April 2019.
Regards, The Spectrum Scale (GP
Hi Aaron,
The header dump shows all zeroes were received for the header. So no valid
magic, version, originator, etc. The "512 more bytes" would have been the
meat after the header. Very unexpected hence the shutdown.
Logs around that event involving the machines noted in that trace would be
req
Hi Aaron,
I just searched the core GPFS source code. I didn't find TCP_QUICKACK
being used explicitly.
Regards, The Spectrum Scale (GPFS) team
In released GPFS, we only support one subblocks-per-fullblock value in a file system. As Sven mentioned, the subblocks-per-fullblock is derived from the smallest block size of the metadata and data pools; the smallest block size decides the subblocks-per-fullblock and the subblock size of all pools.
There
Hi,
as there are more often similar questions rising, we just put an article
about the topic on the Spectrum Scale Wiki
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20
(GPFS)/page/Downsampling%2C%20Upsampling%20and%20Aggregation%20of%20t
Only nodes in the home cluster will participate as token managers.
Note that "mmdiag --tokenmgr" lists all potential token manager nodes, but
there will be additional information for the nodes that are currently
appointed.
--tokenmgr
Displays information about token management. For each mounte
There is a fix in 4.2.3.9 efix3 that corrects a condition where GPFS was
failing a revalidate call and that was causing kNFS to generate EBADHANDLE.
Without more information on your case (traces), I cannot say for sure that
this will resolve your issue, but it is available for you to try.
Regards
errno 521 is EBADHANDLE (a Linux NFS error); it is not from Spectrum Scale.
/* Defined for the NFSv3 protocol */
#define EBADHANDLE      521     /* Illegal NFS file handle */
Regards, The Spectrum Scale (GPFS) team
> 1) How is file deletion handled?
This depends on whether there's a snapshot and whether COW is needed. If COW is not needed or there's no snapshot at all, then the file deletion is handled like that of a non-compressed file (don't decompress the data blocks; simply discard the data blocks, then delete the
Hi
Please check the I/O type before examining the IP address in the output of mmdiag --iohist. For "lcl" (local) I/O, the IP address is not applicable, so we don't show it. Please check whether this is your case.
=== mmdiag: iohist ===
I/O history:
I/O start time  RW  Buf type  disk:sectorNum
Hi,
mmchfs Device -o syncnfs is the correct way of setting syncnfs so that it applies to the file system both on the home and the remote cluster.
On 4.2.3+, syncnfs is the default option on Linux, which means GPFS will implement the syncnfs behavior regardless of what the mount command says
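For example (device name is a placeholder):
  /usr/lpp/mmfs/bin/mmchfs gpfs01 -o syncnfs   # make syncnfs a permanent mount option
  /usr/lpp/mmfs/bin/mmlsfs gpfs01              # verify via the "Additional mount options" line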
Hi Damir,
Since many GPFS management commands became unresponsive and you are running ESS, a mailing list may not be a good way to track this kind of issue.
Could you please raise a ticket to ESS/SpectrumScale to get help from IBM
Service team?
Regards, The Spectrum Scale (GPFS) team
Just to follow up on the question about where to learn why an NSD is marked down: you should see a message in the GPFS log, /var/adm/ras/mmfs.log.*
Regards, The Spectrum Scale (GPFS) team
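For example (run on the NSD servers; the exact message text varies by release):
  grep -i nsd /var/adm/ras/mmfs.log.latest   # look for the event that marked the NSD down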
ugh the nodes in question are legacy with only 1GB connections (and
40GB to the back of the storage.
We're currently running 4.2.3-8
Peter Childs
Research Storage
ITS Research and Teaching Support
Queen Mary, University of London
What is in the dump that indicates the metanode is moving around? Could
you please provide an example of what you are seeing?
You noted that the access is all read only, is the file opened for read
only or for read and write?
What makes you state that this particular file is interfering with t
The only additional piece of information I would add is that you can see
what the maximum NSD size is defined for a pool by looking at the output
of mmdf.
Fred
Regards, The Spectrum Scale (GPFS) team
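For example (device name is a placeholder):
  /usr/lpp/mmfs/bin/mmdf gpfs01   # each pool section reports the maximum disk size allowed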
Hello,
Can you provide the Windows OS and GPFS versions. Does the mmmount hang
indefinitely or for a finite time (like 30 seconds or so)? Do you see any
GPFS waiters during the mmmount hang?
Regards, The Spectrum Scale (GPFS) team
Hi Renata,
You may want to reduce the set of quorum nodes. If your version supports
the --force option, you can run
mmchnode --noquorum -N <node> --force
It is a good idea to configure tiebreaker disks in a cluster that has only
2 quorum nodes.
Regards, The Spectrum Scale (GPFS) team
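A minimal sketch (node and NSD names are placeholders):
  /usr/lpp/mmfs/bin/mmchnode --noquorum -N node3 --force          # demote the unreachable quorum node
  /usr/lpp/mmfs/bin/mmchconfig tiebreakerDisks="nsd1;nsd2;nsd3"   # protect the remaining two quorum nodes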
If you didn't run mmchconfig release=LATEST and didn't change the fs
version, then you can downgrade either or both of them. Thanks.
Regards, The Spectrum Scale (GPFS) team
Hi Kuei-Yu,
Should we update the document as requested below?
Thanks.
Regards, The Spectrum Scale (GPFS) team
Here is the link to our GPFS FAQ which list details on supported versions.
https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#linux
Search for "Table 30. IBM Spectrum Scale for Linux RedHat kernel support"
and it lists the details that you are looking fo
Hi Alexander, Markus,
Can you please try to answer the below query.
Or else forward this to the right folks.
Regards, The Spectrum Scale (GPFS) team
reason why we changed the Recovery Point Objective (RPO) snapshot interval from 15 to 720 minutes in version 5.0.0 of IBM Spectrum Scale AFM-DR?
- Can we use additional Independent Peer-snapshots to reduce the RPO
interval (720 minutes) of IBM Spectrum Scale AFM-DR?
- In addition to the above questi
This means that the stripe group descriptor on the disk dcs3800u31b_lun7
is corrupted.
As we maintain copies of the stripe group descriptor on other disks as
well we can copy the good descriptor from one of those disks to this one.
Please open a PMR and work with IBM support to get this fixed.
/usr/lpp/mmfs/bin/mmcommon notifyOverload will not cause tracing to be started. One can verify that by using the underlying command being called, as shown in the following example, with /tmp/n containing the node names (one per line) that will get the notification and the IP address being the file system
Anyone know if IBM has an official statement and/or perhaps a FAQ
document about the Spectre/Meltdown impact on GPFS?
Thank you
From: on behalf of IBM Spectrum Scale
Reply-To: gpfsug main discussion list
Date: Thursday, January 4, 2018 at 20:36
To: gpfsug main discussion list
Subject: Re: [gpfs
Hi John,
For all Flashes, alerts and bulletins for IBM Spectrum Scale, please check
this link:
https://www.ibm.com/support/home/search-results/1060/system_storage/storage_software/software_defined_storage/ibm_spectrum_scale?filter=DC.Type_avl:CT792,CT555,CT755&sortby=-dcdate_sortrange&am
AFAIK you can increase the pagepool size dynamically but you cannot shrink
it dynamically. To shrink it you must restart the GPFS daemon. Also,
could you please provide the actual pmap commands you executed?
Regards, The Spectrum Scale (GPFS) team
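For example (size is a placeholder):
  /usr/lpp/mmfs/bin/mmchconfig pagepool=16G -i   # -i applies the increase immediately and persists it; shrinking needs a daemon restart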
are read
from those files?
I thought LROC only keeps that block of data that is prefetched from the
disk, and will not prefetch the whole file if a stub of data is read.
Please do let me know if I understood it wrong.
I do not think AFM is intended to solve the problem you are trying to
solve. If I understand your scenario correctly you state that you are
placing metadata on NL-SAS storage. If that is true that would not be
wise especially if you are going to do many metadata operations. I
suspect your pe
Looking at the mmfind.README it indicates that it only supports the format
you used with the semi-colon. Did you capture any output of the problem?
Regards, The Spectrum Scale (GPFS) team
As I think you understand we can only provide general guidance as regards
your questions. If you want a detailed examination of your requirements
and a proposal for a solution you will need to engage the appropriate IBM
services team.
My personal recommendation is to use as few file systems as